\section{Introduction}
Let $(X,\omega)$ be a compact symplectic manifold, and let $\mu: X \to {\liek}^*$ be a moment map for a Hamiltonian action of a compact Lie group $K$
on $(X, \omega)$. Then the symplectic quotient (or Marsden--Weinstein reduction of $X$ at 0 \cite{MW}) is given by $X \senv K = \mu^{-1}(0)/K$ with its induced symplectic structure.
Let us fix a $K$-invariant inner product
on the Lie algebra ${\mathfrak k}$ of $K$;
then the associated normsquare $|\!|\mu|\!|^2$ of $\mu$ can be considered
as a Morse function on $X$. This is not in general a Morse function in
the classical sense, nor even a Morse-Bott function, since
the connected components of its set of critical points are
not in general submanifolds of $X$. Nonetheless, given a suitably compatible $K$-invariant Riemannian metric on $X$, there is
a Morse stratification $\{ S_{\beta} : \beta \in \mathcal{B}\}$
of $X$ induced by
$|\!|\mu|\!|^2$ such that each stratum $S_{\beta}$ is a $K$-invariant
locally closed symplectic submanifold of $X$ \cite{K}. Here the stratum to which $x \in X$ belongs is determined by
the limit set of its path of steepest descent for $|\!|\mu|\!|^2$, and the index $\beta$ is the intersection with a positive Weyl chamber ${\mathfrak t}_+$ for $K$ of the co-adjoint orbit which is the image under $\mu$ of the corresponding critical set.
We can attempt to construct symplectic quotients for the restrictions of the Hamiltonian $K$-action to the strata $S_\beta$. However the usual construction, given by $(S_\beta \cap \mu^{-1}(0))/K$, is empty if $\beta \neq 0$. When $K=T$ is abelian we can deal with this problem by shifting the moment map by a suitable constant; a natural choice is to replace $\mu^{-1}(0)$ here with $\mu^{-1}( (1 + \epsilon)\beta) $ for $0 < \epsilon <\!< 1$. However when $\beta$ is not central $\mu^{-1}( (1 + \epsilon)\beta) $ will not in general be $K$-invariant, so we must modify the construction.
We will see in this paper that this can be done by recalling that there are natural identifications
$$S_\beta \cong K \times_{K\beta} (Y_\beta \cap S_\beta)$$
where $Y_\beta$ is the locally closed submanifold given for the Morse--Bott function $\mu_\beta(x) = \mu(x).\beta$ by
$$Y_\beta = \{ y \in X : \mbox{the downward trajectory of $y$ for $\text{grad}(\mu_\beta)$ has a limit point $x$ with } \mu_\beta(x) = |\!|\beta|\!|^2 \} ,
$$
and the stabiliser $K_\beta$ of $\beta$ under the adjoint action of $K$ acts diagonally on the product of $K$ with the open subset $Y_\beta \cap S_\beta$ of $Y_\beta$. For sufficiently small $\epsilon > 0$ we will see that $Y_\beta \cap \mu^{-1}((1 + \epsilon)\beta) \subseteq S_\beta$ is compact and
$$ S_\beta \, \senv K := (Y_\beta \cap \mu^{-1}((1 + \epsilon)\beta))/K_\beta $$
has an induced symplectic structure with which it can be regarded as a symplectic quotient for the $K$-action on the stratum $S_\beta$.
The motivation for this construction comes from the relationship between symplectic quotients and geometric invariant theory (GIT).
Suppose that $X \subseteq {\mathbb P } ^n$ is a nonsingular complex projective variety, that $\omega$ is the restriction to $X$ of the Fubini-Study K\"{a}hler form on the complex projective space $ {\mathbb P } ^n$ and that $K$ acts linearly on $X$ via a unitary representation $\rho: K \to U(n+1)$. Then the open stratum $S_0$ coincides with the semistable locus $X^{ss}$ in the sense of Mumford's GIT for the induced linear action on $X$ of the complexification $G=K_{\mathbb C }$ of $K$, and the inclusion $\mu^{-1}(0) \to X^{ss}$ composed with the quotient map from $X^{ss}$ to the GIT quotient $X/\!/G$ induces an identification of the symplectic quotient $X \senv K = \mu^{-1}(0)/K$ with $X/\!/G$. (For this reason, even in the non-algebraic case, we will refer to the strata $S_\beta$ for $\beta \neq 0$ as the unstable strata). The unstable strata $S_\beta$
are $G$-invariant locally closed subvarieties of $X$ and have descriptions of the form $$S_\beta = KY_\beta^{ss} = GY_\beta^{ss} \cong G \times_{P_\beta} Y_\beta^{ss} \cong K \times_{K_\beta} Y_\beta^{ss}$$
where $P_\beta$ is a parabolic subgroup of $G$, and $Y_\beta^{ss} = Y_\beta \cap S_\beta$ has an inductive description involving semistability for the action of a Levi subgroup $L_\beta$ of $P_\beta$, after twisting the linearisation by a suitable rational character of $P_\beta$.
Since such characters do not in general extend to $G$, in order to construct GIT quotients of the unstable strata $S_\beta$ it is natural to consider quotients of the subvarieties $Y_\beta$ by the action of the non-reductive groups $P_\beta$.
Recent results have extended classical GIT to suitable non-reductive linear algebraic group actions on projective varieties \cite{BDHK,BDHK2}, and the modified symplectic quotient construction for unstable strata just described is suggested by these advances, together with links between non-reductive GIT and the symplectic implosion construction of Guillemin, Jeffrey and Sjamaar \cite{GJS,implone}. In the algebraic setting the modified symplectic quotient construction coincides with a non-reductive GIT quotient construction for the $P_\beta$ action on $Y_\beta$ with an appropriately twisted linearisation.
In their fundamental paper \cite{AB} Atiyah and Bott observed that the Yang--Mills functional over a compact Riemann surface $\Sigma$ plays the role of $|\!|\mu |\!|^2$ in an infinite-dimensional analogue of this picture (modulo a constant which depends on the addition of a central constant to the moment map). Here the corresponding analogue of the GIT or symplectic quotient is a moduli space of semistable holomorphic bundles of fixed rank and degree over $\Sigma$, and the stratification $\{ S_\beta : \beta \in \mathcal{B} \}$
is by the Harder--Narasimhan type of a holomorphic bundle.
The primary motivation for considering the Yang--Mills functional in \cite{AB} (and $|\!|\mu |\!|^2$ in the finite-dimensional setting explored in \cite{K}) as a Morse function was to study the cohomology (at least the Betti numbers) of the symplectic quotient. This was done by relating the equivariant cohomology of the compact symplectic manifold $X$, or its infinite-dimensional analogue in the Yang--Mills case, to the equivariant cohomology of the strata, and by describing the equivariant cohomology of the unstable strata inductively in terms of semistable strata for symplectic submanifolds of $X$ acted on by compact subgroups of $K$ (or subgroups of its infinite-dimensional analogue, the relevant gauge group). Later work \cite{Witten,JK,JK2} showed how related ideas could be used to study intersection pairings on $X \senv K$ and the ring structure of its cohomology. In a future paper \cite{BK} we will show how to extend these results to symplectic quotients of unstable strata and other non-reductive GIT quotients.
The layout of this paper is as follows. In \S2 we review the Morse stratification for the normsquare of a moment map on a compact symplectic manifold with a Hamiltonian action of a compact group. In \S3 and \S4 we summarise the relevant results from non-reductive GIT and symplectic implosion. Finally \S5 describes the construction of symplectic quotients of unstable strata for compact Hamiltonian actions on compact symplectic manifolds, with the main results summarised in Theorem \ref{mainresult}, and \S6 considers the infinite-dimensional Yang--Mills analogue.
\section{Normsquares of moment maps and their Morse stratifications}
Suppose that a compact
Lie group $K$ with Lie algebra ${{\mathfrak k}}$ acts smoothly
on a symplectic manifold
$X$ and preserves the symplectic form $\omega$.
Any $a\in {{\mathfrak k}}$ determines
a vector field $x\mapsto a_x$ on $X$ defined by
the infinitesimal action of $a$.
A moment map for the action of $K$ on $X$ is a smooth $K$-equivariant map
$\mu :X\rightarrow {{\mathfrak k}}^{\ast}$
which satisfies
$$d\mu(x)(\xi).a=\omega_x(\xi,a_x)$$
for all $x\in X$, $\xi\in T_xX$ and $a\in {{\mathfrak k}}$. Equivalently,
if $\mu_a:X \to {{\mathbb R }}$ denotes the component
of $\mu$ along
$a\in {{\mathfrak k}}$ defined for all $x\in X$ by the pairing
$\mu_a(x)=\mu(x).a$
between $\mu(x) \in {{\mathfrak k}}^{\ast}$ and
$a \in {{\mathfrak k}}$, then $\mu_a$ is a Hamiltonian function
for the vector field on $X$ induced by
$a$.
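For instance, in the standard example where $K = S^1$ acts by rotation about the vertical axis on the sphere $X = S^2 \subseteq {\mathbb R }^3$ with its area form $\omega = d\theta \wedge dz$ in cylindrical coordinates, the generator of ${\mathfrak k} \cong {\mathbb R }$ induces the vector field $\partial/\partial \theta$, and with the sign convention above the corresponding Hamiltonian function is the height function $\mu_a(x,y,z) = -z$ up to an additive constant (the sign depending on the orientation conventions chosen). The critical points of $|\!|\mu|\!|^2$ are then the two poles, where it attains its maximum, together with the equator $\mu^{-1}(0)$, where it attains its minimum.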
If the stabiliser $K_{\zeta}$ of $\zeta\in {{\mathfrak k}}^{\ast}$ under the adjoint action of $K$
acts with only finite stabilisers on $\mu^{-1}(\zeta)$, then $\mu^{-1}(\zeta)$ is
a submanifold of $X$ and the symplectic form $\omega$ induces a
symplectic structure on the orbifold $\mu^{-1}(\zeta)/K_{\zeta}$ which is the Marsden--Weinstein
reduction at $\zeta$ of the action
of $K$ on $X$. The symplectic quotient $X \senv K$ is the Marsden--Weinstein reduction $\mu^{-1}(0)/K$ at 0.
The reduction
$\mu^{-1}(\zeta)/K_{\zeta}$ also inherits a symplectic structure
when the action of $K_{\zeta}$
on $\mu^{-1}(\zeta)$ has positive-dimensional stabilisers, but in this case it is likely
to have more serious singularities.
\begin{rem} \label{remalgsit}
Let
$X$ be a nonsingular complex projective variety
embedded in complex projective space $ {\mathbb P } ^n$, and let $G=K_{\mathbb C }$
be a reductive complex Lie group with maximal compact subgroup $K$ acting on $X$ via a
representation $\rho:G\rightarrow GL(n+1;{\mathbb C })$.
By choosing coordinates on $ {\mathbb P } ^n$ appropriately we can assume that $\rho$ maps
$K$
into the unitary group $U(n+1)$. Then the Fubini-Study form $\omega$ on $ {\mathbb P } ^n$ restricts to
a $K$-invariant K\"{a}hler form on $X$, and there is a moment map
$\mu :X\rightarrow {{\mathfrak k}}^{\ast}$ defined (up to multiplication by a
constant scalar factor depending on the convention chosen for
the normalisation of the Fubini-Study form) by
\begin{equation} \mu(x).a = \frac{\overline{\hat{x}}^{t}\rho_{\ast}(a)\hat{x}}
{2\pi i|\!|\hat{x}|\!|^2} \label{mmap} \end{equation}
for all $a\in {{\mathfrak k}}$, where $\hat{x}\in {{\mathbb C }}^{n+1}-\{0\}$ is a representative
vector for $x\in {\mathbb P } ^n$ and the representation $\rho:K \to U(n+1)$ induces
$\rho_*: {\mathfrak k} \to {{\mathfrak u}}(n+1)$ and dually $\rho^*:{{\mathfrak u}}(n+1)^* \to {\liek}^*$.
In this situation the symplectic quotient
$X \senv K = \mu^{-1}(0)/K$ coincides with
the GIT quotient $X/\!/G$ in algebraic geometry described in \S3 below \cite{K}. Moreover $x \in X$ lies in the semistable locus $X^{ss}$ if and only if the closure of its orbit $Gx$ meets $\mu^{-1}(0)$, and $x$ lies in the stable locus if and only if its orbit $Gx$ meets the open subset $\mu^{-1}(0)_{\text{reg}}$ of $\mu^{-1}(0)$ where $d\mu$ is surjective. The inclusion $\mu^{-1}(0) \to X^{ss}$ composed with the quotient map $X^{ss} \to X/\!/G$ is $K$-invariant and induces a bijection $\mu^{-1}(0)/K \to X/\!/G$ which can be used to identify the symplectic quotient $X\senv K = \mu^{-1}(0)/K$ with the GIT quotient $X/\!/G$.
When $X$ is K\"ahler but not necessarily algebraic then $\mu^{-1}(0)/K$ inherits a K\"ahler structure (at least away from its singularities) by identifying $\mu^{-1}(0)_{\text{reg}}/K$ with the quotient by $G$ of the open subset $G\mu^{-1}(0)_{\text{reg}}$ of $X$.
\end{rem}
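As a simple illustration of (\ref{mmap}), consider the following standard example.
\begin{ex}
Let $K = U(1)$ act on $X = {\mathbb P } ^1$ via the representation $\rho(t) = \mbox{diag}(1,t)$, so that $t \cdot [x_0:x_1] = [x_0 : tx_1]$. For $a = i\alpha \in {{\mathfrak u}}(1)$ with $\alpha \in {\mathbb R }$ we have $\rho_*(a) = \mbox{diag}(0,i\alpha)$, and (\ref{mmap}) gives
$$\mu([x_0:x_1]).a = \frac{\alpha |x_1|^2}{2\pi(|x_0|^2 + |x_1|^2)}.$$
Thus $\mu^{-1}(0) = \{[1:0]\}$, while the semistable locus for the complexified action of $G = {\mathbb C }^*$ is $X^{ss} = \{[x_0:x_1] : x_0 \neq 0\}$: the closure of the $G$-orbit of any such point contains the fixed point $[1:0]$, and the GIT quotient $X/\!/G = \mu^{-1}(0)/K$ is a single point.
\end{ex}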
Let us fix a maximal torus $T$ of $K$ and an inner product on the Lie algebra
${\mathfrak k}$ which is invariant under the adjoint action of $K$, and which we will use to identify ${\liek}^*$ with ${\mathfrak k}$. We will assume that the inner product is chosen so that its restriction to the Lie algebra ${\mathfrak t}$ of $T$ takes rational values on the lattice given by the derivatives at the identity of homomorphisms $S^1 \to T$. Then we can consider the associated normsquare $|\!|\mu|\!|^2$ of $\mu$
as a Morse function on $X$; it is not a Morse function in the classical sense, nor even a Morse--Bott function, but it is shown in \cite{K} that $|\!|\mu|\!|^2$ is a \lq minimally degenerate' Morse function.
More precisely, the set of critical points for $f=|\!|\mu|\!|^2$ is a finite disjoint union of closed subsets $\{C_\beta : \beta \in \mathcal B\}$ along each of which $f$ is \emph{minimally degenerate} in the following sense.
\begin{defn} A locally closed submanifold $\Sigma_\beta$ containing $C_\beta$ with orientable normal bundle in $X$
is a \emph{minimising submanifold} for $f=|\!|\mu|\!|^2$ if
\begin{enumerate} \item the restriction of $f$ to $\Sigma_\beta$ achieves its minimum value exactly on $C_\beta$, and
\item the tangent space to $\Sigma_\beta$ at any point $x \in C_\beta$ is maximal among subspaces of $T_x X$ on which the Hessian $H_x(f)$ is non-negative.
\end{enumerate}
If a minimising submanifold $\Sigma_\beta$ exists, then $f$ is called minimally degenerate along $C_\beta$.
\end{defn}
In \cite{K} it is shown that as a consequence $|\!|\mu|\!|^2$ induces a smooth stratification $\{S_\beta : \beta \in \mathcal B\}$ of $X$ such that, for a suitable choice of Riemannian metric (which can be taken to be the K\"ahler metric if $(X,\omega)$ is K\"ahler), $x \in X$ lies in the stratum $S_\beta$ if its path of steepest descent for $|\!|\mu|\!|^2$ has a limit point in the critical subset $C_\beta$. The stratum $S_\beta$ then coincides with $\Sigma_\beta$ near $C_\beta$.
\begin{rem} \label{almostcx}
Here we choose a $K$-invariant Riemannian metric which is compatible with the symplectic structure in the sense that
$X$ has a $K$-invariant almost-complex structure such that if $\xi \in T_x X$ then $i\xi$ is the dual with respect to the metric of the linear form $\zeta \rightarrow \omega_x (\zeta, \xi)$ on $T_x X$. This implies that
$$\text{grad} \mu_\beta (x) = i \beta_x$$ for all $x \in X$, where $\mu_\beta(x) = \mu(x).\beta$.
\end{rem}
It is shown in \cite{K} that in the situation of Remark \ref{remalgsit} the open stratum $S_0$ of this stratification $\{S_\beta : \beta \in \mathcal B\}$ coincides with the semistable locus $X^{{\text{ss}}}$, and that each stratum $S_\beta$ has the form
$$S_\beta \cong G \times_{P_\beta} Y_\beta^{{\text{ss}}}$$
where $Y^{{\text{ss}}}_\beta$ is a locally closed nonsingular subvariety of $X$ and $P_\beta$ is a parabolic subgroup of $G$ (\cite{K} Theorem 6.18). Moreover, there is a linear action of a Levi subgroup $L_\beta$ of $P_\beta$ on a nonsingular closed subvariety
$Z_\beta$ of $X$ such that $Y^{{\text{ss}}}_\beta$ retracts equivariantly onto the subset $Z^{{\text{ss}}}_\beta$
of semistable points for this action.
It is also shown in \cite{K} that $|\!|\mu|\!|^2$ is equivariantly perfect, in the sense that its equivariant Morse inequalities are in fact equalities, and that this leads to an {inductive procedure} for calculating $\dim H^j_G (X^{{\text{ss}}}; \mathbb Q)$, which in good cases gives {the Betti numbers of the quotient variety} $X/\!/G$.
When $X$ is merely a compact symplectic manifold acted on by a compact group $K$, the function $|\!|\mu|\!|^2$ still induces a smooth stratification of $X$ and is $K$-equivariantly perfect, providing, in the good case when 0 is a regular value of $\mu$, a formula for the Betti numbers of the symplectic quotient $\mu^{-1}(0)/K$ in terms of the $K$-equivariant cohomology of the critical subsets $C_\beta$. Indeed the same is true when $|\!|\mu|\!|^2$ is replaced with any convex function of $\mu$ (cf. \cite{AB} \S\S 8, 12).
The set $\mathcal B$ indexing the critical subsets $C_\beta$ and the stratification $\{S_\beta : \beta \in \mathcal B\}$ can be identified with a finite set of orbits of the adjoint representation of $K$ on its Lie algebra ${\mathfrak k}$ (which is identified with its dual using the fixed invariant inner product). Each orbit in $\mathcal B$ is the image under the moment map $\mu : X \to {\mathfrak k}^* \cong {\mathfrak k}$ of the critical subset which it indexes. If a choice is made of a positive Weyl chamber ${\mathfrak t}_+$ in the Lie algebra of some maximal torus $T$ of $K$, then each adjoint
orbit intersects ${\mathfrak t}_+$ in a unique point, so ${\mathcal B}$ can also be identified with a finite set of points
in ${\mathfrak t}_+$. In the situation of Remark \ref{remalgsit} a point of ${\mathfrak t}_+$ lies in ${\mathcal B}$ if and only if it is the closest point to the origin of the convex hull of a nonempty subset of the weights of the unitary representation of $K$ which defines its action on $X \subseteq {\mathbb P } ^n$. The same is true more generally if we interpret \lq weight' here as the image under the $T$-moment map of a connected component of the fixed point set $X^T$.
When ${\mathcal B}$ is identified with a finite set of points
in ${\mathfrak t}_+$, for $\beta \in {\mathcal B}$ the submanifold $Z_\beta$ of $X$ is the union of those components of the fixed point set of the subtorus $T_\beta$ of $K$ generated by $\beta$ on which the moment map for $T_\beta$ given by composing $\mu$ with the restriction map from ${\liek}^*$ to ${\mathfrak t}^*_\beta$ takes the value $\beta$. Then
$$C_\beta = K(Z_\beta \cap \mu^{-1}(\beta)) \cong K \times_{K_\beta} (Z_\beta \cap \mu^{-1}(\beta)) $$
where the subgroup $K_\beta$ is the stabiliser of $\beta$ under the adjoint action of $K$ on its Lie algebra, and in the K\"ahler case, the complexification $L_\beta$ of $K_\beta$ is a Levi subgroup of the parabolic subgroup $P_\beta$ of $G=K_{\mathbb C }$.
Since the moment map is $K$-equivariant the image of $Z_\beta$ under $\mu$ is contained in the Lie algebra of $K_\beta$, and thus $\mu |_{Z_\beta}$ can be regarded as a moment map for the action of $K_\beta$ on $Z_\beta$. As moment maps are only determined up to the addition of
a central constant, $\mu |_{Z_\beta} - \beta$ is also a moment map for the
action of $K_\beta$ on $Z_{\beta}$.
\begin{rem} In the situation of Remark \ref{remalgsit} this change of moment map corresponds to a modification of
the linearisation of the action of $K_\beta$ on $Z_{\beta}$, and we define
$Z_{\beta}^{ss}$ to be the set of semistable points of $Z_{\beta}$ with respect
to this modified linear action. Equivalently, $Z_{\beta}^{ss}$ is the stratum
labelled by 0 for the Morse stratification of the function $|\!|\mu - \beta|\!|^2$ on
$Z_{\beta}$. Then
$$Y_{\beta}^{ss} = p_{\beta}^{-1}(Z_{\beta}^{ss}) $$
where $Y_\beta$ and $p_{\beta}:Y_{\beta} \to Z_{\beta}$ are
given by
$ \label{pb} p_{\beta}(x) = \lim_{t \to \infty} \exp (-it\beta) x$ and
$$Y_\beta = \{ y \in X \, | \, p_\beta(y) \in Z_\beta \}.$$
If $B$ is the Borel
subgroup of $G$ associated to the choice of positive Weyl chamber ${\mathfrak t}_+$ and if
$P_{\beta}$ is the parabolic subgroup $B K_\beta$, then $Y_{\beta}$ and
$Y_{\beta}^{ss}$ are $P_{\beta}$-invariant and we have
$ S_{\beta} = K Y_\beta^{ss} \cong K \times_{K_\beta} Y_\beta^{ss} \cong G \times_{P_{\beta}} Y_{\beta}^{ss}. $
Moreover when $X$ is nonsingular $Y_{\beta}$ is a nonsingular subvariety of $X$ and $p_{\beta}:Y_{\beta}
\to Z_{\beta}$ is a locally trivial fibration whose fibre is isomorphic to
${\mathbb C }^{m_{\beta}}$ for some $m_{\beta} \geq 0$.
An element $g$ of $G$ lies in the parabolic subgroup $P_{\beta}$ if and only if
$\exp(-it\beta) g \exp(it\beta)$ tends to a limit in $G$ as $t \to \infty$, and this limit defines
a surjection $q_{\beta}: P_{\beta} \to L_\beta$ such that
$$ p_\beta( gy) = q_\beta (g) p_\beta(y)$$ for each $g \in P_\beta$ and $y \in Y_\beta$.
Since $G=KB$ and $B \subseteq P_{\beta}$
we have $G\overline{Y_{\beta}} = K \overline{Y_{\beta}}$, which is compact, and hence
\begin{equation} \label{closure} \overline{S_{\beta}} \subseteq G\overline{Y_{\beta}}
\subseteq S_{\beta} \cup \bigcup_{|\!| \gamma |\!| > |\!| \beta |\!|} S_{\gamma}. \end{equation}
\end{rem}
\section{Non-reductive geometric invariant theory}
\subsection{GIT for reductive groups}
In Mumford's classical Geometric Invariant Theory we choose a linearisation of an action of a reductive group $G$ on a complex projective variety $X$; this
is given by an ample line bundle $L$ on $X$ and a lift of the action to $L$.
When $L$ is very ample, so that $X$ can be embedded in a projective space $ {\mathbb P } ^n$ such that $L$ is the restriction of the hyperplane line bundle $\mathcal{O}(1)$, the action is given by a
representation $\rho:G \to GL(n+1)$ and $ {\hat{\mathcal{O}}}_L(X) = \bigoplus_{k= 0}^{\infty} H^0(X,L^{\otimes k})$ is ${\mathbb C }[x_0,\ldots,x_n]/\mathcal{I}_X$ where $\mathcal{I}_X$ is the ideal generated by the homogeneous polynomials which vanish on $X$.
$$\begin{array}{ccccl}
(X,L) & \leadsto
& {\hat{\mathcal{O}}}_L(X)&=& \bigoplus_{k= 0}^{\infty} H^0(X,L^{\otimes k})\\
| &&&& \\
| & & \bigcup \!| & &\\
\downarrow & & & & \\
X/\!/G & {\reflectbox{\ensuremath{\leadsto}}}
& {\hat{\mathcal{O}}}_L(X)^G & & \mbox{ algebra of invariants. }
\end{array}$$
Since $G$ is reductive, the algebra of $G$-invariants ${\hat{\mathcal{O}}}_L(X)^G$ is {finitely generated} as a graded algebra and so defines a projective variety
$X/\!/G = \mbox{Proj}({\hat{\mathcal{O}}}_L(X)^G)$. The inclusion of ${\hat{\mathcal{O}}}_L(X)^G$ in ${\hat{\mathcal{O}}}_L(X)$ determines a rational map $X \dashrightarrow X/\!/G$ which fits into a diagram
$$\begin{array}{rcccl}
& X & \dashrightarrow & X/\!/G & \mbox{ projective variety}\\
& \bigcup & & || & \\
\mbox{semistable} & X^{ss} & \stackrel{\mbox{onto}}{\longrightarrow} & X/\!/G & \\
& \bigcup & & \bigcup & \mbox{open}\\
\mbox{stable} & X^s & \longrightarrow & X^s/G &
\end{array} $$
where $X^s$ and $X^{ss}$ are open subvarieties of $X$, the GIT quotient ${X}/\!/G$ is a categorical quotient for the action of $G$ on $X^{ss}$ via the $G$-invariant surjective morphism $\phi_G: X^{ss} \to X/\!/G$, and
$$\phi_G(x) = \phi_G( y) \Leftrightarrow \overline{Gx} \cap \overline{Gy} \cap X^{ss} \neq \emptyset.$$
\begin{rmk} \label{rem2}
A complex Lie group $G$ is reductive if and only if it is the complexification $G=K_{\mathbb C }$ of a maximal compact subgroup $K$,
and then $X/\!/G = \mu^{-1}(0)/K$ for a suitable moment map $\mu$ for the action of $K$ (see Remark \ref{remalgsit} above). Indeed, recall that
in this situation the semistable locus $X^{ss}$ coincides with the open stratum $S_0$, while $x \in X$ lies in $S_0 = X^{ss}$ if and only if the closure of its orbit $Gx$ meets $\mu^{-1}(0)$, and $x$ lies in the stable locus if and only if its orbit $Gx$ meets the open subset $\mu^{-1}(0)_{\text{reg}}$ of $\mu^{-1}(0)$ where $d\mu$ is surjective. Then the inclusion $\mu^{-1}(0) \to X^{ss}$ composed with the quotient map $X^{ss} \to X/\!/G$ induces an identification of the symplectic quotient $X\senv K = \mu^{-1}(0)/K$ with the GIT quotient $X/\!/G$.
When $X$ is K\"ahler but not necessarily algebraic then
we can define an equivalence relation $\sim$ on the open stratum $S_0$ by $x \sim y$ if and only if
$\overline{Gx} \cap \overline{Gy} \cap X^{ss} \neq \emptyset$; if $x$ lies in the open subset $G \mu^{-1}(0)_{\text{reg}}$ of $S_0$ then $x \sim y$ if and only if $y \in Gx$. Then the inclusion $\mu^{-1}(0) \to S_0$ induces an identification of the symplectic quotient $X\senv K = \mu^{-1}(0)/K$ with the topological quotient $S_0/\sim$.
Thus $\mu^{-1}(0)/K$ inherits a stratified K\"ahler structure, with the complex structure induced from $S_0$ and the K\"ahler form given by the symplectic form on $\mu^{-1}(0)/K$ \cite{Huck}.
\end{rmk}
The subsets $X^{ss}$ and $X^s$ of $X$ for a linear action of a reductive group $G$ with respect to an ample linearisation $\mathcal{L}$ are characterised by the Hilbert--Mumford criteria
\cite[Chapter 2]{GIT}, \cite{New}:
\begin{propn}
\label{sss} (i) A point $x \in X$ is semistable (respectively
stable) for the action of $G$ on $X$ if and only if for every
$g\in G$ the point $gx$ is semistable (respectively
stable) for the action of a fixed maximal torus $T$ of $G$.
\noindent (ii) A point $x \in X$ with homogeneous coordinates $[x_0:\ldots:x_n]$
in some coordinate system on $ {\mathbb P } ^n$
is semistable (respectively stable) for the action of a maximal
torus $T$ of $G$ acting diagonally on $ {\mathbb P } ^n$ with
weights $\alpha_0, \ldots, \alpha_n$ if and only if the convex hull
$$\conv \{\alpha_i :x_i \neq 0\}$$
contains $0$ (respectively contains $0$ in its interior).
\end{propn}
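The following standard example illustrates the criterion of Proposition \ref{sss}(ii).
\begin{ex}
Let $T = {\mathbb C }^*$ act on $ {\mathbb P } ^2$ with weights $-1, 0, 1$, so that $t \cdot [x_0:x_1:x_2] = [t^{-1}x_0 : x_1 : tx_2]$. A point $x$ is semistable if and only if $0$ lies in the convex hull of the weights of its nonzero coordinates, that is, if and only if either $x_1 \neq 0$ or both $x_0 \neq 0$ and $x_2 \neq 0$; thus $X^{ss} = {\mathbb P } ^2 \setminus \{[1:0:0], [0:0:1]\}$. The point $x$ is stable if and only if $0$ lies in the interior of this convex hull, which requires both weights $\pm 1$ to appear, so $X^s = \{x : x_0 \neq 0 \mbox{ and } x_2 \neq 0\}$. In particular $[0:1:0]$ is semistable but not stable.
\end{ex}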
The projective GIT quotient $X/\!/G$ contains as an open subset the geometric quotient $X^s/G$ of the stable set $X^s$. When $X$ is nonsingular the singularities of $X^s/G$ are very mild, since the stabilisers of stable points are finite subgroups of $G$. If $X^{ss} \neq X^s \neq \emptyset$ the singularities of $X/\!/G$ are typically more severe, but $X/\!/G$ has a \lq partial desingularisation' $\tilde{X}/\!/G$ which (if $X$ is irreducible and $X^s \neq \emptyset$) is also a projective completion of $X^s/G$ and is itself a geometric quotient
$$\tilde{X}/\!/G = \tilde{X}^{ss}/G$$
by $G$ of an open subset $\tilde{X}^{ss} = \tilde{X}^s$ of a $G$-equivariant blow-up $\tilde{X}$ of $X$ \cite{K2}.
$\tilde{X}^{ss}$ is obtained from ${X}^{ss}$ by successively blowing up along the subvarieties of semistable points stabilised by reductive subgroups of $G$ of maximal dimension and then removing the unstable points in the resulting blow-up.
Thus for irreducible $X$ we have\\
i) when $X^{ss} = X^s \neq \emptyset$ the GIT quotient $X/\!/G = X^s/G$ is a projective variety which is a geometric quotient of the open subvariety $X^s$ of $X$;\\
ii) when $X^{ss} \neq X^s \neq \emptyset$ then the GIT quotient $X/\!/G$ is a projective completion of the geometric quotient $X^s/G$, and $X^s/G$ has another projective completion $\tilde{X}/\!/G = \tilde{X}^{s}/G$ which is a \lq partial desingularisation' of $X/\!/G$ in the sense just described.
\begin{rem} The GIT quotient $X/\!/G$ has an ample line bundle which pulls back to a positive tensor power on $X^{ss}$ of the line bundle $L$ defining the linearisation $\mathcal{L}$.
Note that when we replace the linearisation $\mathcal{L}$ for the action of $G$ on $X$ by any positive tensor power of itself, the stable and semistable loci $X^s$ and $X^{ss}$ and the GIT quotient $X/\!/G$ are unchanged. From a symplectic viewpoint the symplectic form and moment map (and the induced symplectic form on the symplectic quotient) are multiplied by a positive integer, but $\mu^{-1}(0)$ is unchanged. In particular this means that it makes sense to multiply a linearisation by a rational character $\chi/m$ of $G$, where $\chi:G \to {\mathbb C }^*$ is a character and $m$ is a positive integer: from a GIT perspective we can interpret the result as multiplying the induced linearisation on $L^{\otimes m}$ by the character $\chi$. From a symplectic viewpoint we are adding a central constant to the moment map.
\end{rem}
\begin{ex}
\label{subsec:unstable strata}
As we have seen, associated to the $G$-action on $X$ with linearisation $\mathcal{L}$ and an invariant inner product on the Lie algebra of a maximal compact subgroup $K$ of $G$, there is a stratification (the Morse stratification for $|\!|\mu |\!|^2$)
$$ X = \bigsqcup_{\beta \in \mathcal{B}} S_\beta$$ of $X$ by locally closed subvarieties $S_\beta$,
indexed by a partially ordered finite subset $\mathcal{B}$ of a positive Weyl chamber for the reductive group $G$, such that
(i) $S_0 = X^{ss}$,
\noindent and for each $\beta \in \mathcal{B}$
(ii) the closure of $S_\beta$ is contained in $\bigcup_{\gamma \geqslant \beta} S_\gamma$, and
(iii) $S_\beta = KY_\beta^{ss} = GY_\beta^{ss} \cong G \times_{P_\beta} Y_\beta^{ss} \cong K \times_{K_\beta} Y_\beta^{ss}$
\noindent where
$P_\beta$ is a parabolic subgroup of $G$ acting on a projective subvariety $\overline{Y}_\beta$ of $X$ with an open subset $Y_\beta^{ss}$ which is determined by the action of the Levi subgroup $L_\beta$ of $P_\beta$ with respect to a suitably twisted linearisation \cite{Hess,K}.
Here the linearisation $\mathcal{L}$ is restricted to the action of the parabolic subgroup $P_\beta$ over $\overline{Y}_\beta$, and then twisted by the rational character $\beta$ of $P_\beta$.
To construct a quotient by $G$ of (an open subset of) an unstable stratum $S_\beta$, we can study the linear action on $\overline{Y}_\beta$ of the parabolic subgroup $P_\beta$. In order to have a non-empty quotient we must modify the linearisation $\mathcal{L}$, and it is natural to do this by twisting it by a rational character; such a character may not extend to a character of $G$, which is why it makes sense to consider the action on $\overline{Y_\beta}$ of the non-reductive group $P_\beta$. Twisting by $\beta$ (or subtracting $\beta$ from the moment map for the maximal compact subgroup $K_\beta$ of $P_\beta$) gives a categorical quotient $Z_\beta/\!/L_\beta \cong (Z_\beta \cap \mu^{-1}(\beta))/K_\beta \cong C_\beta/K$ for the action of $P_\beta$ on $Y_\beta^{ss}$, or equivalently for the action of $G$ on $S_\beta$ (cf. Remark \ref{pb}), but in general this is far from being a geometric quotient. To have any hope of obtaining a non-empty open subset of $S_\beta$ with a geometric quotient (when $\beta \neq 0$) one can instead try twisting the action of $P_\beta$ on $\overline{Y_\beta}$ by $(1 + \epsilon)\beta$ where $0 < \epsilon <\!< 1$, or by another perturbation of $\beta$ whose restriction to $T_\beta$ is of this form.
\end{ex}
\subsection{GIT for non-reductive groups}
Motivated by Example \ref{subsec:unstable strata}, let us consider a complex projective variety $X$ acted on linearly (with ample linearisation $\mathcal{L}$) by a linear algebraic group $H$ which is not necessarily reductive. Then $H = U \rtimes R$ is the semi-direct product of its unipotent radical $U$ by a reductive subgroup $R$; here $R$ is a Levi subgroup of $H$ and is unique up to conjugation by $H$.
An immediate difficulty arises when trying to extend classical GIT to non-reductive linear algebraic groups $H$; this is that in general we cannot define a projective {variety} $X/\!/H = \mbox{Proj}({\hat{\mathcal{O}}}_L(X)^H)$
because ${\hat{\mathcal{O}}}_L(X)^H$ is not necessarily
finitely generated as a graded algebra. However in \cite{BDHK,DK} it is shown that given an $H$-action on $X$ with linearisation $\mathcal{L}$ as above, $X$ has open subvarieties $X^s$
(\lq stable points') and $X^{ss}$ (\lq semistable points') with a geometric quotient $X^s \to X^s/H$ and an \lq enveloping quotient' $X^{ss} \to X{/ / \kern-0.64em{e}} H$, with
a diagram
$$\begin{array}{rcccl}
& X & & & \\
& \bigcup & & & \\
\mbox{semistable} & X^{ss} & \longrightarrow & X{/ / \kern-0.64em{e}} H & \\
& \bigcup & & \bigcup & \mbox{open}\\
\mbox{stable} & X^s & \longrightarrow & X^s/H &
\end{array} $$
where {\em if} ${\hat{\mathcal{O}}}_L(X)^H$ is finitely generated then $X{/ / \kern-0.64em{e}} H = \mbox{Proj}({\hat{\mathcal{O}}}_L(X)^H)$ as in the reductive case.
However $X{/ / \kern-0.64em{e}} H$ is not always a projective variety; moreover (even when ${\hat{\mathcal{O}}}_L(X)^H$ is finitely generated and so $X{/ / \kern-0.64em{e}} H = \mbox{Proj}({\hat{\mathcal{O}}}_L(X)^H)$ is a projective variety) the $H$-invariant morphism
$X^{ss} \to X{/ / \kern-0.64em{e}} H $ is {not necessarily a categorical quotient}, and its image is not in general a subvariety of $X {/ / \kern-0.64em{e}} H$, only a constructible subset. A final problem is that there are in general no obvious analogues in this situation of the Hilbert--Mumford criteria for (semi)stability.
However non-reductive GIT is better behaved when the unipotent radical $U$ of $H = U \rtimes R$ is \lq internally graded' in the sense that its Levi subgroup $R \cong H/U$ has a central one-parameter subgroup $\lambda: {\mathbb C }^* \to R$ whose adjoint action on the Lie algebra of $U$ has only strictly positive weights.
It is shown in \cite{BDHK2, BDHK3,BK15} that,
provided that we are willing to twist the
linearisation for a linear action of $H$ on a projective variety $X$ by an appropriate (rational) character, many of the good properties of Mumford's GIT hold. Many non-reductive linear algebraic group actions arising in algebraic geometry are actions of linear algebraic groups with internally graded unipotent radicals: in particular, any parabolic subgroup of a reductive group has this form, as does the automorphism group of any complete simplicial toric variety \cite{cox}, and the group of $k$-jets of germs of biholomorphisms of $({\mathbb C }^p,0)$ for any positive integers $k$ and $p$ \cite{BK15}.
\begin{ex}
The automorphism group of the weighted projective plane $ {\mathbb P } (1,1,2)$
with weights 1,1 and 2 is
$\mbox{Aut}( {\mathbb P } (1,1,2)) \cong R \ltimes U$
where $R \cong GL(2)$ acting on the two-dimensional weight space with weight 1 is reductive, and
$U \cong ({\mathbb C }^+)^3$ is unipotent
with elements given by $(x,y,z) \mapsto (x,y,z+\lambda x^2 + \mu xy + \nu y^2)$
for $(\lambda,\mu,\nu) \in {\mathbb C }^3$.
\end{ex}
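To make the grading explicit in this example (with one choice of sign convention), write $u_{(a,b,c)}$ for the element of $U$ given by $(x,y,z) \mapsto (x,y,z+ax^2+bxy+cy^2)$, and let $\lambda(t) = t^{-1}{\rm I}_2$ be the central one-parameter subgroup of $R \cong GL(2)$ acting by $(x,y,z) \mapsto (t^{-1}x,t^{-1}y,z)$. Then
$$ \lambda(t) \, u_{(a,b,c)} \, \lambda(t)^{-1} = u_{(t^2 a,\, t^2 b,\, t^2 c)}, $$
so the adjoint action of $\lambda({\mathbb C }^*)$ on ${\rm Lie}(U) \cong {\mathbb C }^3$ has all its weights equal to 2; in particular the weights are strictly positive and the unipotent radical $U$ is internally graded.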
\begin{definition} \label{defn0.1} Let us call a unipotent linear algebraic group $U$ {\em graded unipotent}
if there is a homomorphism $\lambda:{\mathbb C }^* \to Aut(U)$ with the weights of the
${\mathbb C }^*$ action on $Lie(U)$ all {strictly positive}.
For such a homomorphism $\lambda$ let
$$\hat{U} = U \rtimes {\mathbb C }^* = \{(u,t):u \in U, t \in {\mathbb C }^*\}$$
be the associated semi-direct product of $U$ and ${\mathbb C }^*$ with multiplication $(u,t)\cdot(u',t') = (u(\lambda(t)(u')),tt')$. We will say that a linear algebraic group $H=U \rtimes R$ has {\em internally graded unipotent radical} $U$ if the centre $Z(R)$ of $R$ has a one-parameter subgroup $\lambda: {\mathbb C }^* \to Z(R)$ whose adjoint action grades $U$.
When $L$ is very ample, and so induces an embedding of $X$ in a projective space $ {\mathbb P } ^n$, we can choose coordinates on $ {\mathbb P } ^n$ such that the action of ${\mathbb C }^*$ on $X$ is diagonal, given by
$$ t \mapsto \left( \begin{array}{cccc} t^{r_0} & 0 & \ldots & 0\\
0 & t^{r_1} & \ldots & 0\\
& & \ldots & \\
0 & 0 & \ldots & t^{r_n} \end{array} \right) $$
where $r_0 \leq r_1 \leq \cdots \leq r_n$. The {\em lowest bounded chamber} for this linear ${\mathbb C }^*$-action is the closed interval $[r_0,r_j]$ where $r_0 = \cdots = r_{j-1} < r_j \leq \cdots \leq r_n$, with interior the open interval $(r_0,r_j)$, unless the action of ${\mathbb C }^*$ on $X$ is trivial; when the action is trivial so that $r_0 = r_1 = \cdots = r_n$ we will say that $[r_0,r_0]$ is the lowest bounded chamber and it is its own interior. Note that in the situation above, if ${\mathbb C }^*$ acts trivially then so does $U$.
\end{definition}
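As a toy illustration, suppose that ${\mathbb C }^*$ acts diagonally on $X = {\mathbb P } ^2$ with weights $(r_0,r_1,r_2) = (0,0,1)$. Then $r_0 = r_1 < r_2$, so $j=2$ and the lowest bounded chamber is $[0,1]$ with interior $(0,1)$. Twisting the linearisation by the rational character $t \mapsto t^c$ with $0 < c < 1$ shifts the weights to $(-c,-c,1-c)$, so that $0$ then lies in the interior of the lowest bounded chamber, as required in Theorem \ref{firstthm} below.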
\begin{thm}[\cite{BDHK2, BDHK3}] \label{firstthm} Let $H=U\rtimes R$ be a linear algebraic group with internally graded unipotent radical $U$ acting linearly on a projective
variety $X$ with linearisation $\mathcal{L}$ on a very ample line bundle $L$. Suppose also that semistability coincides with stability for the unipotent radical $U$, in the sense that
$$ x \in Z_{{\rm min}} \Rightarrow {\rm Stab}_U(x) = \{ e \} $$
where $Z_{{\rm min}} $ is the union of those connected components of the fixed point set $X^{{\mathbb C }^*}$ where ${\mathbb C }^*$ acts on the fibres of $L^*$ with minimum weight. Then the linearisation for the action of $\hat{U}$ on $X$ can be twisted by a rational character of $\hat{U}$ so that 0 lies in the interior of the lowest bounded chamber for the linear ${\mathbb C }^*$ action on $X$ and\\
(i) the algebras ${\hat{\mathcal{O}}}_{\mathcal{L}^{\otimes c}}(X)^{\hat{U}} = \oplus_{m=0}^\infty H^0(X,L^{\otimes cm})^{\hat{U}}$ of $\hat{U}$-invariants and
${\hat{\mathcal{O}}}_{\mathcal{L}^{\otimes c}}(X)^{H} = \oplus_{m=0}^\infty H^0(X,L^{\otimes cm})^{H}$ of $H$-invariants
are {finitely generated} for any sufficiently divisible integer $c > 0$,
so that the enveloping quotients $X{/ / \kern-0.64em{e}} \hat{U} = \mbox{Proj}({\hat{\mathcal{O}}}_{\mathcal{L}^{\otimes c}}(X)^{\hat{U}})$ and $X{/ / \kern-0.64em{e}} H = (X{/ / \kern-0.64em{e}} \hat{U})/\!/(R/\lambda({\mathbb C }^*))$
are projective varieties;\\
(ii) $X^{ss,\hat{U}} = X^{s,\hat{U}} $; moreover $X^{ss,H}$ and $X^{s,\hat{U}} $ have Hilbert--Mumford descriptions, and $X{/ / \kern-0.64em{e}} \hat{U} = X^{s,\hat{U}} / \hat{U}$ is a geometric quotient of $X^{s,\hat{U}} $ by $\hat{U}$.
Moreover, even when the condition that semistability should coincide with stability for the unipotent radical fails,
there is \\
(iii) a projective variety, containing the geometric quotient $X^{s,\hat{U}} /\hat{U}$ as an open subset, which
is a geometric quotient $\tilde{X}^{ss, \hat{U}}/\hat{U}$ by $\hat{U}$ of an open subset $\tilde{X}^{ss, \hat{U}}$ of a $\hat{U}$-equivariant blow-up $\tilde{X}$ of $X$, and \\
(iv) an induced linear action of $R/\lambda({\mathbb C }^*)$ on $\tilde{X}^{ss, \hat{U}}/\hat{U}$ whose reductive GIT quotient is a projective variety which contains the geometric quotient $X^{s,H}/H$ as an open subset.
\end{thm}
\section{Symplectic implosion}
The non-reductive
GIT quotients described in \S 3 can be studied using symplectic techniques closely
related to the \lq symplectic implosion' construction of Guillemin, Jeffrey and
Sjamaar \cite{GJS,implone}. In this paper this link will be described for the special case of the unstable strata for the moment map normsquare; in \cite{BK} we will explore more general situations.
For the original construction \cite{GJS} we suppose that $U$ is a maximal unipotent subgroup of a complex reductive group
$G$ acting linearly (with respect to an ample line bundle $L$) on a complex projective variety
$X$, and we assume that the linear action of $U$ on $X$
extends to a linear action of $G$. Then
the algebra of invariants $\bigoplus_{k \geq 0} H^{0}(X,L^{\otimes k})^U$ is a finitely generated graded algebra and the
enveloping quotient $X\env U$ is the associated projective variety
${\rm Proj}(\bigoplus_{k \geq 0} H^{0}(X,L^{\otimes k})^U)$ \cite{Grosshans2}.
It is shown in \cite{GJS} that if $K$ is a maximal compact subgroup of $G$, and $X$ is given a suitable $K$-invariant K\"{a}hler
form, then $X\env U$
can be identified with the \lq symplectic implosion' or \lq imploded cross-section' $X_{{\rm impl}}$ of $X$ by $K$. In this section we will recall this construction and its generalisation \cite{implone} to the situation when $U$ is the unipotent radical
of any parabolic subgroup $P$ of $G$.
As before let $(X,\omega)$ be a symplectic manifold on which a compact connected Lie group $K$
acts with a moment map $\mu:X \to {\liek}^*$ where ${\mathfrak k}$ is the Lie algebra of $K$, and fix an invariant inner product on ${\mathfrak k}$, using it to identify ${\liek}^*$
with ${\mathfrak k}$. Let $T$ be a maximal torus of $K$ with Lie algebra ${\mathfrak t} \subseteq {\mathfrak k}$
and Weyl group $W = N_K(T)/T$, and let ${\mathfrak t}^*_+ \cong {\mathfrak t}^*/W \cong {\liek}^*/{\rm Ad}^*(K)$
be a positive Weyl chamber in ${\liek}^*$. The {\em imploded cross-section} \cite{GJS}
of $X$ is then
$$ X_{{\rm impl}} = \mu^{-1}({\mathfrak t}^*_+)/\approx $$
where $x \approx y$ if and only if $\mu(x) = \mu(y) = \zeta \in {\mathfrak t}^*_+$ and
$x = ky$ for some element $k$ of the commutator subgroup $[K_\zeta,K_\zeta]$ of the stabiliser $K_\zeta$ of $\zeta$ under the co-adjoint action of
$K$. If $\Sigma$ is the
set of faces of ${\mathfrak t}^*_+$ then $X_{{\rm impl}}$ is the disjoint union $$ X_{{\rm impl}} \ = \ \coprod_{\sigma \in \Sigma} \frac{\mu^{-1}(\sigma)}{[K_\sigma,K_\sigma]}
\ = \ \mu^{-1}(({\mathfrak t}^*_+)^\circ) \ \ \ \sqcup
\coprod_{\begin{array}{c}\sigma \in \Sigma\\ \sigma \neq ({\mathfrak t}^*_+)^\circ \end{array}
} \frac{\mu^{-1}(\sigma)}{[K_\sigma,K_\sigma]} $$
where $K_\sigma = K_\zeta$ for any $\zeta \in \sigma$. We give $X_{{\rm impl}}$
the quotient topology induced from $\mu^{-1}({\mathfrak t}^*_+)$, and it
inherits a stratified symplectic structure, where the strata are the locally
closed subsets $\mu^{-1}(\sigma)/[K_\sigma,K_\sigma]$. Each such stratum is the symplectic
reduction by the action of $[K_\sigma,K_\sigma]$ of a locally closed symplectic
submanifold
$$X_\sigma = K_\sigma \mu^{-1}( \bigcup_{\tau \in \Sigma, \bar{\tau} \supseteq \sigma} \tau)$$
of $X$; locally, near every point $y \in X_{{\rm impl}}$, the imploded cross-section can be identified symplectically with the product of the
stratum containing $y$ and a normal cone \cite{GJS}. The induced action of $T$ on
$X_{{\rm impl}}$ preserves this stratified symplectic structure and has a moment map
$$\mu_{\text{impl}}:X_{{\rm impl}} \to {\mathfrak t}^*_+ \subseteq {\mathfrak t}^*$$
induced by the restriction of $\mu$ to $\mu^{-1}({\mathfrak t}^*_+)$.
If $\zeta \in {\mathfrak t}^*_+$ the symplectic reduction of $X_{{\rm impl}}$ at
$\zeta$ for the action of $T$ is the symplectic reduction of $X$ at $\zeta$ for the action
of $K$:
$$ \frac{\mu_{\text{impl}}^{-1}(\zeta)}{T} = \frac{\mu^{-1}(\zeta)}{T.[K_\zeta,K_\zeta]} = \frac{\mu^{-1}(\zeta)}{K_\zeta},
$$
since the stabiliser $K_\zeta$ is connected with maximal torus $T$, so that $K_\zeta = T\,[K_\zeta,K_\zeta]$.
The universal imploded cross-section (or universal symplectic implosion) is the imploded cross-section
$$ (T^*K)_{{\rm impl}} = K \times {\mathfrak t}^*_+ / \approx $$
of the cotangent bundle $T^*K \cong K \times {\liek}^*$ with respect to the $K$-action given by the
right action of $K$ on itself, with an induced action of $K \times T$ from the left action
of $K$ on itself and the right action of $T$ on $K$. Any other implosion
$X_{{\rm impl}}$ can be constructed as the symplectic quotient of the product $X \times (T^*K)_{{\rm impl}}$
by the diagonal action of $K$ \cite{GJS}.
The universal symplectic implosion $(T^*K)_{{\rm impl}}$ is always a complex affine variety and its symplectic structure is
given by a K\"{a}hler form. As in \cite{GJS} we can assume for simplicity that $K$ is semisimple and simply connected; for general compact connected
$K$ one can reduce to this case by considering the product $\tilde{K}$ of the centre of $K$ and the universal cover of its commutator subgroup $[K,K]$, and expressing $K$ as $\tilde{K}/\Upsilon$, where $\Upsilon$ is a finite
central subgroup of $\tilde{K}$. When $B$ is a Borel subgroup of the complexification $G=K_c$ of $K$ with $G = KB$ and $K \cap B = T$, and $\umax \leq B$
is the unipotent radical of $B$ (and hence a maximal unipotent subgroup of $G$), then
$\umax$ is a Grosshans subgroup of $G$ \cite{Grosshans}. This means that the quasi-affine variety $G/\umax$
can be embedded as an open subset of an affine variety in such a way that its
complement has complex codimension at least two, and so the algebra
of invariants $\mathcal{O}(G)^\umax$ is
finitely generated. By
\cite{GJS} Proposition 6.8 there is a natural $K \times T$-equivariant identification
$$(T^*K)_{{\rm impl}} \cong {\rm Spec}(\mathcal{O}(G)^\umax)$$
of the canonical affine completion ${\rm Spec}(\mathcal{O}(G)^\umax)$
of $G/\umax$ with $(T^*K)_{{\rm impl}}$. It follows that if $X$ is a complex projective
variety on which $G$ acts linearly with respect to an ample
line bundle $L$, and $\omega$ is an associated $K$-invariant K\"{a}hler form on
$X$, then the symplectic quotient $X_{{\rm impl}}$ of $X \times (T^*K)_{{\rm impl}}$ by $K$
can be identified with the non-reductive GIT quotient
$$ X/\!/\umax = {\rm Proj}(\hat{\mathcal{O}}_L(X)^\umax) \cong (X \times {\rm Spec} (\mathcal{O}(G)^\umax ))/\!/G
\cong X_{{\rm impl}}. $$
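The simplest example is instructive here. Take $K = SU(2)$, $G = SL(2,{\mathbb C })$ and $\umax$ the subgroup of strictly upper triangular unipotent matrices, acting on $G$ by right multiplication. A regular function on $G$ is $\umax$-invariant if and only if it depends only on the first column of the matrix, so
$$ \mathcal{O}(G)^\umax \cong {\mathbb C }[a,c] \ \ \mbox{ and } \ \ (T^*SU(2))_{{\rm impl}} \cong {\rm Spec}({\mathbb C }[a,c]) = {\mathbb C }^2, $$
with $G/\umax \cong {\mathbb C }^2 \setminus \{0\}$ embedded as the complement of the origin, whose complement has codimension two as the Grosshans condition requires. This agrees with the symplectic description $K \times {\mathfrak t}^*_+/\approx$: here ${\mathfrak t}^*_+ \cong [0,\infty)$, and collapsing the copy of $SU(2) \cong S^3$ over the vertex by $[K_0,K_0] = SU(2)$ gives the cone over $S^3$, which is ${\mathbb C }^2$.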
Suppose now that $U$ is the unipotent radical of a parabolic subgroup $P$ of the complex reductive
group $G$ with Lie algebra $\liep$. By replacing $P$ with a suitable conjugate in $G$,
we can assume that $P$ contains the Borel subgroup $B$ of $G$ and $U \leq \umax$. Then $P =
U {L^{(P)}} \cong U \rtimes {L^{(P)}}$, where
the Levi subgroup ${L^{(P)}}$ of $P$ contains the complex maximal torus $T_c$ of $G$,
and we can assume in addition that ${L^{(P)}}$ is the complexification of its intersection
$${K^{(P)}} = {L^{(P)}} \cap K = P \cap K$$
with $K$. There is a subset $S_P$ of the set $S$ of simple roots such that $P$ is the
unique parabolic subgroup of $G$ containing $B$ with the property that if $\alpha \in S$ then the root space $\lieg_{-\alpha} \subseteq \liep$ if and only if $\alpha \in S_P$.
The Lie algebra of $L^{(P)}$ is generated by the root spaces $\lieg_\alpha$ and $\lieg_{-\alpha}$
for $\alpha \in S_P$ together with the Lie algebra ${\mathfrak t}_c = {\mathfrak t} \otimes_{\mathbb R } {\mathbb C }$ of the complexification
$T_c$ of $T$, and the Lie algebra of $U$ is
$$ {\mathfrak u} = \bigoplus_{\alpha \in R^+: \lieg_\alpha \not\subseteq {\rm Lie}(L^{(P)})} \lieg_\alpha $$
where $R^+$ is the set of positive roots for $G$. The Lie algebra of $P$ is
$$ \liep = {\mathfrak t}_c \oplus \bigoplus_{\alpha \in R(S_P)} \lieg_\alpha $$
where $R(S_P)$ is the union of $R^+$ with the set of all roots which
can be written as sums of negatives of the simple roots in $S_P$.
We can decompose ${\mathfrak k}^{(P)} = {\rm Lie} K^{(P)}$ and
${\mathfrak t}$ as
$${\mathfrak k}^{(P)} = [{\mathfrak k}^{(P)},{\mathfrak k}^{(P)}] \oplus \liez^{(P)} \ \ \mbox{ and } \ \ {\mathfrak t} = {\mathfrak t}^{(P)} \oplus
\liez^{(P)}$$
where $[{\mathfrak k}^{(P)},{\mathfrak k}^{(P)}]$ is the Lie algebra of the semisimple part $Q^{(P)} = [K^{(P)},K^{(P)}]$ of $K^{(P)}$, while ${\mathfrak t}^{(P)}$ is the Lie
algebra of the maximal torus $T^{(P)} = T \cap [K^{(P)},K^{(P)}]$ of $Q^{(P)}$, and $\liez^{(P)}$ is
the Lie algebra of the centre $Z(K^{(P)})$ of $K^{(P)}$.
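For a concrete illustration of these decompositions, take $G = SL(3,{\mathbb C })$, $K = SU(3)$, and let $P$ be the standard parabolic of block upper triangular matrices with blocks of sizes $(2,1)$, so that $S_P = \{\alpha_1\}$ where $\alpha_1, \alpha_2$ are the simple roots. Then
$$ {\mathfrak u} = \lieg_{\alpha_2} \oplus \lieg_{\alpha_1 + \alpha_2}, \ \ \ \liep = {\mathfrak t}_c \oplus \lieg_{\alpha_1} \oplus \lieg_{-\alpha_1} \oplus \lieg_{\alpha_2} \oplus \lieg_{\alpha_1+\alpha_2}, $$
the Levi subgroup is $L^{(P)} \cong GL(2,{\mathbb C })$, and $K^{(P)} = P \cap K = S(U(2) \times U(1))$ with $Q^{(P)} = [K^{(P)},K^{(P)}] \cong SU(2)$. In this case ${\mathfrak t}^{(P)}$ is spanned by ${\rm diag}(i,-i,0)$ and $\liez^{(P)}$ by ${\rm diag}(i,i,-2i)$, giving the decomposition ${\mathfrak t} = {\mathfrak t}^{(P)} \oplus \liez^{(P)}$.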
When $U = \umax$ the Iwasawa decomposition
$G = K \ \exp(i{\mathfrak t}) \ \umax$
allows us to identify $G/\umax$ with $K\exp(i{\mathfrak t})$. More generally
there is a decomposition
\begin{equation} \label{Iwas} G = K \times_{K^{(P)}} P = K \times_{K^{(P)}} \ L^{(P)} U = K \times_{K^{(P)}} \ K^{(P)} \exp(i {\mathfrak k}^{(P)}) U
= K \exp(i{\mathfrak k}^{(P)}) U \end{equation}
giving an identification of $G/U$ with $K \exp(i {\mathfrak k}^{(P)})$.
$U$ is a Grosshans subgroup of $G$ \cite{Grosshans2}, and so the algebra of invariants $\mathcal{O}(G)^U$ is
finitely generated and $G/U$ has a canonical affine completion
$$ G/U \subseteq \overline{G/U}^{{\rm a}} = {\rm Spec}(\mathcal{O}(G)^U) $$
where the complement of the open subset $G/U$ of the affine variety $\overline{G/U}^{{\rm a}}$
has complex codimension at least two. Therefore if $G$ acts linearly on a complex projective
variety $X$ with linearisation $\mathcal{L}$, then the algebra of invariants
$$\hat{\mathcal{O}}_{\mathcal{L}}(X)^U \cong (\hat{\mathcal{O}}_{\mathcal{L}}(X) \otimes \mathcal{O}(G)^U)^G$$
is finitely generated, and the associated projective variety
$X/\!/U = {\rm Proj}(\hat{\mathcal{O}}_{\mathcal{L}}(X)^U)$
is isomorphic to the GIT quotient $ (\overline{G/U}^{{\rm a}} \times X)/\!/G$.
It is shown in \cite{implone} that, just as in the case when $U=\umax$, there is a $K$-invariant K\"{a}hler
form on $\overline{G/U}^{{\rm a}}$ which gives us an identification of $X\env U$ with a symplectic quotient of
$\overline{G/U}^{{\rm a}} \times X$ by $K$, and thus a symplectic description of $X\env U$ generalising
the symplectic implosion construction of \cite{GJS}.
To describe this generalised universal symplectic implosion,
let $\Lambda = \ker(\exp |_{{\mathfrak t}})$ be the exponential lattice in ${\mathfrak t}$, and let
$\Lambda^* = {\rm Hom}_{{\mathbb Z }}(\Lambda,{\mathbb Z })$ be the weight lattice in ${\mathfrak t}^*$, so that
$\Lambda^*_+ = \Lambda^* \cap {\mathfrak t}^*_+$ is the monoid of dominant weights. For $\lambda \in \Lambda^*_+$
let $V_{\lambda}$ be the irreducible $G$-module with highest weight $\lambda$, and let
$\Pi = \{\varpi_1, \ldots, \varpi_r \}$
be the set of fundamental weights, forming a ${\mathbb Z }$-basis of $\Lambda^*$ and a minimal set of generators
for $\Lambda^*_+$.
Recall that
there is an isomorphism of
$G \times G$-modules
\begin{equation} \label{thisiso3}
\mathcal{O}(G) \cong \bigoplus_{\lambda \in \Lambda^*_+} V_{\lambda} \otimes V_\lambda^*
\cong \bigoplus_{\lambda \in \Lambda^*_+} V_{\lambda} \otimes V_{\iota \lambda} \end{equation}
which restricts to an isomorphism of $G \times T_c$-modules
\begin{equation} \label{thisiso2}
\mathcal{O}(G)^\umax \cong \bigoplus_{\lambda \in \Lambda^*_+} V_\lambda^{(T)} \otimes V_{\lambda}^*
\cong \bigoplus_{\lambda \in \Lambda^*_+} V_{\lambda}^* \end{equation}
where $V^{(T)}_\lambda$ is the irreducible $T_c$-module with highest weight $\lambda$.
The graded algebra $\mathcal{O}(G)^\umax$ is generated
by its finite-dimensional vector subspace
$\bigoplus_{\varpi \in \Pi} V_{\varpi}^*$, which
gives us a closed
$G \times T_c$-equivariant embedding of $\overline{G/U}_{{\rm max}}^{{\rm a}} = {\rm Spec}(\mathcal{O}(G)^\umax)$
into the affine space $\bigoplus_{\varpi \in \Pi} V_{\varpi}$. It is shown in \cite{GJS} that $(T^*K)_{{\rm impl}}$ can be identified with the image of this embedding, equipped with the restriction of
a flat $K$-invariant K\"{a}hler
structure on $\bigoplus_{\varpi \in \Pi} V_{\varpi}$.
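In the simplest case this embedding fills the whole ambient space: for $G = SL(2,{\mathbb C })$ we have $\Pi = \{\varpi\}$ with $V_\varpi \cong {\mathbb C }^2$ the standard representation, the algebra $\mathcal{O}(G)^\umax$ is the polynomial algebra on the entries of the first column of the matrix and is generated by its subspace $V_\varpi^*$ of linear forms, and the embedding identifies $\overline{G/\umax}_{{\rm max}}^{{\rm a}} = {\rm Spec}(\mathcal{O}(G)^\umax)$ with all of $V_\varpi = {\mathbb C }^2$, recovering the identification of $(T^*SU(2))_{{\rm impl}}$ with ${\mathbb C }^2$.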
To extend this construction to $\overline{G/U}^{{\rm a}}$ when $U$ is the unipotent radical
of a parabolic subgroup $P$ as above, it is observed in \cite{implone} that
$\mathcal{O}(G)^U$ is generated by the smallest (finite-dimensional)
${K^{(P)}}$-invariant subspace of $\mathcal{O}(G)$
which contains
$\bigoplus_{\varpi \in \Pi} V_{\varpi}^* \cong \bigoplus_{\varpi \in \Pi}
V_{\varpi}^{(T)} \otimes V_{\varpi}^*.$ Here ${K^{(P)}}$ acts on $\mathcal{O}(G)$ via left multiplication on $G$.
Let $E^{{(P)}}$ be the dual of this smallest such ${K^{(P)}}$-invariant subspace $(E^{{(P)}})^*$
of $\mathcal{O}(G)$;
then $(E^{{(P)}})^*$ is fixed pointwise by $U$, and its inclusion in $\mathcal{O}(G)^U \subseteq \mathcal{O}(G)$ induces a closed $ {L^{(P)}}
\times G$-equivariant
embedding of $\overline{G/U}^{{\rm a}} = {\rm Spec}(\mathcal{O}(G)^U)$ into the affine space $E^{{(P)}}$.
Then
$({E^{(P)}})^*$ decomposes under the action of $K \times {K^{(P)}}$ as a direct sum of irreducible
$K \times {K^{(P)}}$-modules
$$({E^{(P)}})^* = \bigoplus_{\varpi \in \Pi} (V_{\varpi}^{({{P}})})^*$$
where $(V_{\varpi}^{({P})})^*$ is the smallest $K \times {K^{(P)}}$-invariant
subspace of $\mathcal{O}(G)$ containing
$V_{\varpi}^*$.
Moreover
$(V_{\varpi}^{({{P}})})^* \cong V_{\varpi}^{{K^{(P)}}} \otimes V_{\varpi}^* $
where $V_{\varpi}^{{K^{(P)}}}$ is the irreducible $K^{(P)}$-module with highest
weight $\varpi$,
so
$${E^{(P)}} = \bigoplus_{\varpi \in \Pi} V_{\varpi}^{({{P}})}
= \bigoplus_{\varpi \in \Pi} (V_{ \varpi}^{K^{(P)}})^* \otimes
V_{\varpi} .$$
If
$v^{(P)}_{\varpi}$ is the vector in $V_{ \varpi}^{({{P}})} \cong
(V_{\varpi}^{K^{(P)}})^* \otimes V_{\varpi}$ representing the inclusion of
$V_{\varpi}^{K^{{(P)}}}$ in $V_{\varpi}$ then
the embedding of $G/U \subseteq \overline{G/U}^{{\rm a}}$ in $E^{(P)}$ induced by the inclusion of $(E^{(P)})^*$ in $\mathcal{O}(G)^U$
takes the identity coset $U$ to $\sum_{\varpi \in \Pi} v_{\varpi}^{(P)}$.
Let
$$V_{\varpi}^{{K^{(P)}}} = \bigoplus_{\lambda \in \Lambda_{\varpi}^*} V_{\varpi,\lambda}^{{K^{(P)}}}$$
be the decomposition of $V_{\varpi}^{{K^{(P)}}}$ into weight spaces with weights
$\lambda \in {\mathfrak t}^*$ under the action of the maximal torus $T$ of $K^{(P)}$. Then
$V_{\varpi}^{{{(P)}}}$ decomposes as a $K \times T$-module into a sum of
irreducible $K \times T$-modules
$$V_{\varpi}^{{{(P)}}} \cong \bigoplus_{\lambda} V_{\varpi} \otimes (V_{\varpi,\lambda}^{{K^{(P)}}})^* $$
and
$v_{\varpi}^{(P)} = \sum_{\lambda} v_{\varpi,\lambda}^{(P)}$
where $v_{\varpi,\lambda}^{(P)} \in V_{\varpi} \otimes (V_{\varpi,\lambda}^{{K^{(P)}}})^*$
represents the inclusion of $V_{\varpi,\lambda}^{{K^{(P)}}}$ in $V_{\varpi}$. In particular
$v_{\varpi,\varpi}^{(P)}$ is a highest weight vector for the action of
$K \times K^{(P)}$ on $V^{(P)}_{\varpi}$.
From the
decomposition $G = K \exp(i{\mathfrak k}^{(P)}) U$ (\ref{Iwas}) and the
compactness of $K$ it follows that the closure $\overline{G/U}^{{\rm a}}$ of the
$G$-orbit of $\sum_{\varpi \in \Pi} v_{\varpi}^{(P)}$ in $E^{(P)}$ is given by the
$K$-sweep
$$\overline{G/U}^{{\rm a}} = K (\overline{\exp (i {\mathfrak k}^{(P)}) \sum_{\varpi \in \Pi} v^{(P)}_{\varpi}})$$
of the closure in $E^{(P)}$ of the $\exp (i {\mathfrak k}^{(P)})$-orbit of $\sum_{\varpi \in \Pi} v^{(P)}_{\varpi}$.
Similarly the closure
of the $L^{(P)}$-orbit of $\sum_{\varpi \in \Pi} v^{(P)}_{\varpi}$
is given by
$K^{(P)} (\overline{\exp (i {\mathfrak k}^{(P)}) \sum_{\varpi \in \Pi} v^{(P)}_{\varpi}})$.
There is a unique $K \times {K^{(P)}}$-invariant Hermitian inner product on ${E^{(P)}} = \bigoplus_{\varpi \in \Pi}
V_{\varpi}^{(P)}$
satisfying $|\!| v_{\varpi,\varpi}^{(P)}|\!| = 1$ for each $\varpi \in \Pi$, which is obtained from
$K$-invariant Hermitian inner products on the irreducible $K$-modules $V_{\varpi}$
and their restrictions to
$K^{(P)}$-invariant Hermitian inner products on the irreducible $K^{(P)}$-modules $V_{\varpi}^{(P)}$.
This gives ${E^{(P)}}$ a flat
K\"{a}hler structure which is $K \times {K^{(P)}}$-invariant.
If we identify $(V_{ \varpi}^{K^{(P)}})^* \otimes
V_{\varpi}^{K^{(P)}}$ with ${\rm End}(V_{ \varpi}^{K^{(P)}})$ equipped with the
Hermitian structure
$\langle A,B \rangle = {\rm Trace}(AB^*)$
in the standard way, then $v^{(P)}_{\varpi}$ is identified with the identity map
in ${\rm End}(V_{ \varpi}^{K^{(P)}})$.
\begin{defn} \label{defncone} Let $\liets_{(P)+} $ be the cone in ${\mathfrak t}^*$ given by
$$\liets_{(P)+}
= \bigcup_{w \in W^{(P)}} {\rm Ad}^*(w) {\mathfrak t}^*_+$$
where $W^{(P)}$ is the Weyl group of $Q^{(P)}=[K^{(P)},K^{(P)}]$ (which is a subgroup of
the Weyl group $W$ of $K$).
\end{defn}
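For instance (as a sketch in the case $K = SU(3)$, with $P$ the parabolic of block upper triangular matrices with blocks of sizes $(2,1)$, so that $Q^{(P)} \cong SU(2)$), the group $W^{(P)} \cong {\mathbb Z }/2$ is generated by the reflection $s_{\alpha_1}$ in the simple root $\alpha_1$, and
$$ \liets_{(P)+} = {\mathfrak t}^*_+ \cup {\rm Ad}^*(s_{\alpha_1}) {\mathfrak t}^*_+ = \{ \zeta \in {\mathfrak t}^* : \zeta \cdot \alpha_2 \geq 0 \mbox{ and } \zeta \cdot (\alpha_1 + \alpha_2) \geq 0 \} $$
is the union of two adjacent Weyl chambers in ${\mathfrak t}^*$.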
It is shown in \cite{implone} that the restriction of the moment map $\mu^{E^{(P)}}_T$ for the action of $T$ on $E^{(P)}$ to the closure $\overline{\exp(i{\mathfrak t}) \sum_{\varpi \in \Pi} v^{(P)}_{\varpi}}$
of the $\exp(i{\mathfrak t})$-orbit in $E^{(P)}$ of $ \sum_{\varpi \in \Pi} v^{(P)}_{\varpi}$
is a homeomorphism onto the cone
$\liets_{(P)+} $
in ${\mathfrak t}^*$. Its inverse
provides a continuous injection
\begin{equation} \label{calffP} \mathcal{F}^{(P)} : \liets_{(P)+} \to \overline{G/U}^{{\rm a}} \subseteq E^{(P)} \end{equation}
such that $\mu_T^{E^{(P)}} \circ \mathcal{F}^{(P)}$ is the identity on $\liets_{(P)+} $.
Moreover $\overline{\exp(i{\mathfrak t}) \sum_{\varpi \in \Pi} v_\varpi^{(P)}}$
is the union of finitely many $\exp(i{\mathfrak t})$-orbits, each of the form $$
\mathcal{F}^{(P)}(\sigma) = \exp(i{\mathfrak t}) \sum_{\varpi \in \Pi, \lambda \in \Lambda^*_\varpi \cap \bar{\sigma}} v^{(P)}_{\varpi,\lambda}$$
where $\sigma$
is an open face of $\liets_{(P)+} $. Furthermore
the restriction of the
$K^{(P)}$-moment map $\mu^{E^{(P)}}:E^{(P)} \to ({\mathfrak k}^{(P)})^*$ to the closure
of the $\exp(i{\mathfrak k}^{(P)})$-orbit in $E^{(P)}$ of $ \sum_{\varpi \in \Pi} v^{(P)}_{\varpi}$
is a homeomorphism from $\overline{\exp(i{\mathfrak k}^{(P)}) \sum_{\varpi \in \Pi} v^{(P)}_{\varpi}}$
onto the closed subset
$${\mathfrak k}^{(P)*}_+ = {\rm Ad}^*(K^{(P)})\ \liets_{(P)+} $$
of ${\mathfrak k}^{(P)*}$, and $\overline{\exp(i{\mathfrak k}^{(P)}) \sum_{\varpi \in \Pi} v_\varpi^{(P)}}$
is the union of finitely many $\exp(i{\mathfrak k}^{(P)})$-orbits which
correspond under this homeomorphism to the open faces of ${\mathfrak k}^{(P)*}_{+}$.
The inverse of $\mu^{E^{(P)}}:\overline{\exp(i{\mathfrak k}^{(P)}) \sum_{\varpi \in \Pi} v^{(P)}_{\varpi}} \to {\mathfrak k}^{(P)*}_+$ gives us a continuous
$K^{(P)}$-equivariant map
$$ \mathcal{F}^{(P)} : {\mathfrak k}_{+}^{(P)*} \to \overline{G/U}^{{\rm a}} \subseteq E^{(P)} $$ extending (\ref{calffP})
such that $\mu^{E^{(P)}} \circ \mathcal{F}^{(P)}$ is the identity on ${\mathfrak k}^{(P)*}_{+}$. This in turn extends to a continuous
$K \times K^{(P)}$-equivariant surjection
$$\mathcal{F}^{(P)} : K \times {\mathfrak k}_{+}^{(P)*} \to \overline{G/U}^{{\rm a}} . $$
\begin{defn} \label{dffn}
If $ \zeta \in {\mathfrak k}^{(P)*}_+ = {\rm Ad}^*({K^{(P)}})\liets_{(P)+} = {\rm Ad}^*({K^{(P)}}){\mathfrak t}^*_{+}$
let $\zeta = {\rm Ad}^*(k)\xi$ with $k \in K^{(P)}$ and $\xi \in {\mathfrak t}^*_+$, and let $\sigma_0$ be the open face of
${\mathfrak t}^*_+$ containing $\xi$. Let $\sigma_0(P)$ be the open face of ${\mathfrak t}^*_+$ whose closure is
$$\overline{\sigma_0(P)}
= \{ \zeta \in {\mathfrak t}^*: \zeta \cdot \alpha = 0 \mbox{ for all }\alpha \in R_{\sigma_0} \setminus R^{(P)} \}$$
where $R$ and $R^{(P)}$ are the sets of roots of $K$ and $K^{(P)}$, and
$ R_{\sigma_0}
= \{ \alpha \in R: \zeta \cdot \alpha = 0 \mbox{ for all }\zeta \in \sigma_0 \},$
so that $\sigma_0(P)$ is an open subset of the open face containing $\sigma_0$ of the
cone $\liets_{(P)+}$.
Finally let $K_\zeta(P) = k K_\xi(P) k^{-1}$ where $K_\xi(P) = K_{\sigma_0(P)}$ is the stabiliser
under the adjoint action of $K$ of any element of $\sigma_0(P)$.
\end{defn}
\begin{rem}
If $\xi$ lies in the interior of $\liets_{(P)+}$ then $K_\zeta(P)$ is conjugate to $T$ and
$[K_\zeta(P),K_\zeta(P)]$ is trivial.
\end{rem}
This leads to the following definition given in \cite{implone}
of the ${K^{(P)}}$-imploded cross-section (or generalised symplectic implosion) $X_{{\rm KimplK^{(P)}}}$.
\begin{defn} \label{impq}
Let $(X,\omega)$ be a symplectic manifold with a Hamiltonian action of $K$
with moment map $\mu:X \to {\liek}^*$.
Let
$$ {\mathfrak k}^{(P)*}_+ = {\rm Ad}^*({K^{(P)}})\liets_{(P)+} = {\rm Ad}^*({K^{(P)}}){\mathfrak t}^*_{+}
= {\rm Ad}^*({Q^{(P)}}){\mathfrak t}^*_{+}
\subseteq {{\mathfrak k}^{(P)*}} \label{lieqsplus} $$
be the sweep of ${\mathfrak t}^*_{+}$ under the co-adjoint action of ${K^{(P)}}$ on ${\liek}^*$, and let $\Sigma^{(P)}$ be the set of open faces of ${\mathfrak k}^{(P)*}_{+}$.
If $\zeta \in {\mathfrak k}^{(P)*}$ let $K_\zeta(P) $ be as in Definition \ref{dffn}.
The {\em ${K^{(P)}}$-imploded cross-section} (or generalised symplectic implosion)
of $X$ is
$$ X_{{\rm KimplK^{(P)}}} = \mu^{-1}({\mathfrak k}^{(P)*}_+)/\approx_{K^{(P)}} $$
where $x \approx_{K^{(P)}} y$ if and only if
$\mu(x) = \mu(y) = \zeta \in
{\mathfrak k}^{(P)*}_+$
and
$x = \kappa y$ for some $\kappa \in
[K_\zeta(P),K_\zeta(P)]$.
The {\em universal ${K^{(P)}}$-imploded cross-section} (or universal generalised symplectic implosion for $K^{(P)} \subseteq K$) is the ${K^{(P)}}$-imploded cross-section
$$ (T^*K)_{{\rm KimplK^{(P)}}} = K \times {\mathfrak k}^{(P)*}_+ / \approx_{K^{(P)}} $$
for the cotangent bundle $T^*K \cong K \times {\liek}^*$ with respect to the $K$-action induced from the
right action of $K$ on itself.
\end{defn}
The map
$\mathcal{F}^{(P)} : K \times {\mathfrak k}_{+}^{(P)*} \to \overline{G/U}^{{\rm a}} $
induces a
$K \times K^{(P)}$-equivariant homeomorphism
\begin{equation} \label{imp1}
(T^*K)_{{\rm KimplK^{(P)}}} = K \times {\mathfrak k}^{(P)*}_+ / \approx_{K^{(P)}}
\to
\overline{G/U}^{{\rm a}} \subseteq {E^{(P)}}. \end{equation}
Moreover under this
identification of $ K \times {\mathfrak k}^{(P)*}_+ / \approx_{K^{(P)}} $ with
$\overline{G/U}^{{\rm a}} \subseteq {E^{(P)}}$, the moment map
for the action of $K \times K^{(P)}$ on $E^{(P)}$ is induced by
the map $
(K \times {\mathfrak k}^{(P)*}_+)/\approx_{K^{(P)}} \to
{\liek}^* \times {\mathfrak k}^{(P)*}$
given by
$$(k,\zeta) \mapsto ({\rm Ad}^*(k)(\zeta),\zeta).$$
$\overline{G/U}^{\rm a}$ has an induced $K \times K^{(P)}$-invariant K\"{a}hler
structure as a complex subvariety of $E^{(P)}$; it is stratified by its (finitely many) $G$-orbits, and the $K \times K^{(P)}$-invariant K\"{a}hler structure
on $E^{(P)}$ restricts to a $K \times K^{(P)}$-invariant symplectic structure on
each stratum. Under the homeomorphism
$ (T^*K)_{{\rm KimplK^{(P)}}} \to
\overline{G/U}^{{\rm a}} $ of (\ref{imp1}) these strata correspond to the
locally closed subsets
$$
\frac{K \times {\rm Ad}^*(K^{(P)})\sigma}{\approx_{K^{(P)}}} \cong K^{(P)} \times_{K_\sigma \cap K^{(P)}}
\left( \frac{K \times \sigma}{\approx_{K^{(P)}}} \right)
\cong K^{(P)} \times_{K_\sigma \cap K^{(P)}}
\left( \frac{K \times \sigma}{[K_{\sigma(P)},K_{\sigma(P)}]} \right)$$
of $(T^*K)_{{\rm KimplK^{(P)}}}$ where $\sigma \in \Sigma
$ runs over the open faces of ${\mathfrak t}^*_+$.
By construction, when $K$ acts on a symplectic manifold $X$ with moment map $\mu:X \to {\liek}^*$, then
the symplectic quotient of $\overline{G/U}^{{\rm a}} \times X = (T^*K)_{{\rm KimplK^{(P)}}} \times X$
by the diagonal action of $K$ can be identified via $\mathcal{F}^{(P)}$ with $X_{{\rm KimplK^{(P)}}}$
(and in particular if $X$ is a projective variety with a linear action of
the complexification $G$ of $K$, then $X_{{\rm KimplK^{(P)}}}$ can be identified with
the GIT quotient of $\overline{G/U}^{{\rm a}} \times X$
by the diagonal action of $G$).
Thus $X_{{\rm KimplK^{(P)}}}$ inherits a
stratified $K^{(P)}$-invariant
symplectic structure
$$ X_{{\rm KimplK^{(P)}}} = \bigsqcup_{\sigma \in \Sigma} \frac{\mu^{-1}({\rm Ad}^*(K^{(P)})\sigma)}{\approx_{K^{(P)}}}$$
\begin{equation} = \mu^{-1}(({\mathfrak k}^{(P)*}_+)^\circ) \sqcup
\bigsqcup_{\begin{array}{c}\sigma \in \Sigma\\ \sigma \neq ({\mathfrak t}^*_+)^\circ \end{array}
}
K^{(P)} \times_{K_\sigma \cap K^{(P)}}
\left( \frac{\mu^{-1} (\sigma) }{
[K_{\sigma(P)},K_{\sigma(P)}]} \right) \end{equation}
with strata indexed by the set $\Sigma$ of open faces of ${\mathfrak t}^*_+$,
which are locally
closed symplectic submanifolds of $X_{{\rm KimplK^{(P)}}}$.
The induced action of ${K^{(P)}}$ on
$X_{{\rm KimplK^{(P)}}}$ preserves this symplectic structure and has a moment map
$$\mu_{X_{{\rm KimplK^{(P)}}}}:X_{{\rm KimplK^{(P)}}} \to {\mathfrak k}^{(P)*}_+ \subseteq {{\mathfrak k}^{(P)*}}$$
inherited from the restriction of $\mu$ to $\mu^{-1}({\mathfrak k}^{(P)*}_+)$.
\begin{rem} If ${K^{(P)}}=T$ and $\zeta \in {\mathfrak k}^{(P)*}_+$ then
$K_\zeta(P)=K_\zeta$, and
so $X_{{\rm KimplT}}$ is the standard
imploded cross-section $X_{{\rm impl}}$ of \cite{GJS}. On the other hand if ${K^{(P)}}=K$ then
$K_\zeta(P)$ is conjugate to $T$ and $[K_\zeta(P),K_\zeta(P)]$ is trivial
for all $\zeta \in {\mathfrak k}^{(P)*}_+$,
so $X_{{\rm KimplK}} = X$; in particular $(T^*K)_{{\rm KimplK}} = T^*K$.
\end{rem}
\begin{rem} \label{helpfulrem}
When $K$ acts holomorphically on a K\"ahler manifold $X$ with moment map $\mu:X \to {\liek}^*$, then the action of $K$ extends to a holomorphic action of its complexification $G=K_{\mathbb C }$. Since the generalised symplectic implosion $X_{{\rm KimplK^{(P)}}}$ is the symplectic quotient of $\overline{G/U}^{{\rm a}} \times X = (T^*K)_{{\rm KimplK^{(P)}}} \times X$
by the diagonal action of $K$, it has an induced K\"ahler structure (cf. Remarks \ref{remalgsit}, \ref{rem2}). The open subset $ \mu^{-1}(({\mathfrak k}^{(P)*}_+)^\circ)$ of $X_{{\rm KimplK^{(P)}}}$ corresponds to the open subset $(G/U) \times X$
of $\overline{G/U}^{{\rm a}} \times X$, and provides a $K^{(P)}$-invariant slice for the action of $U$ on the open subset $ U\mu^{-1}(({\mathfrak k}^{(P)*}_+)^\circ)$ of $X$.
Thus if $Y$ is a $P$-invariant complex submanifold of $X$ which meets $\mu^{-1}(({\mathfrak k}^{(P)*}_+)^\circ)$, then $Y \cap \mu^{-1}(({\mathfrak k}^{(P)*}_+)^\circ)$ is a $K^{(P)}$-invariant slice for the action of $U$ on the open subset $ U(Y \cap \mu^{-1}(({\mathfrak k}^{(P)*}_+)^\circ))$ of $Y$, and its closure in $X_{{\rm KimplK^{(P)}}}$, which is the image of $ Y \cap \mu^{-1}({\mathfrak k}^{(P)*}_+)$ in $X_{{\rm KimplK^{(P)}}}$, has an induced K\"ahler structure. However the singularities of this closure on the image of the boundary of $ Y \cap \mu^{-1}({\mathfrak k}^{(P)*}_+)$ are likely to be more serious and harder to describe than in the case when $Y$ is $G$-invariant (or equivalently $K$-invariant).
In the general case when $K$ acts on a symplectic manifold $(X,\omega)$ with moment map $\mu:X \to {\liek}^*$, then we can choose a $K$-invariant almost complex structure which is compatible with $\omega$ as at Remark \ref{almostcx}. If $Y$ is a $K^{(P)}$-invariant almost complex submanifold of $X$ which is invariant under the induced infinitesimal action of $U$, then just as in the case above the image of $ Y \cap \mu^{-1}({\mathfrak k}^{(P)*}_+)$ in $X_{{\rm KimplK^{(P)}}}$ has an induced $K^{(P)}$-invariant symplectic structure, and almost complex structure, such that it can be regarded as an almost-K\"ahler quotient of $Y$ by the infinitesimal action of $U$.
There is an induced Hamiltonian action of
$K^{(P)}$ (or any subgroup of $K^{(P)}$) with moment map $\mu_{\text{Yimpl}}$
induced by the restriction of $\mu$ to $Y \cap \mu^{-1}({\mathfrak k}^{(P)*}_+)$, and we can shift this moment map by any constant in the Lie algebra $\liez^{(P)}$ of the centre $Z(K^{(P)})$ of $K^{(P)}$.
It follows from the definition of $\liets_{(P)+}$ that for a generic choice of $\eta$ in $\liez^{(P)}$ we have $K_\eta = K^{(P)}$ and $\eta \in ({\mathfrak k}^{(P)*}_+)^\circ$, so the symplectic quotient by $K^{(P)}$ is given by
$$ \mu_{\text{Yimpl}}^{-1}(\eta)/K^{(P)} = (Y \cap \mu^{-1}(\eta))/K^{(P)}.$$
When $Y$ is $K$-invariant this simply recovers for us the symplectic reduction of $Y$ at $\eta$ by the action of $K$, but the viewpoint from symplectic implosion allows us to extend this construction to include submanifolds $Y$ which are not $K$-invariant (cf. \cite{GS}).
\end{rem}
\section{Symplectic quotients of unstable strata}
As before let $X$ be a compact symplectic manifold with a Hamiltonian action of a compact group $K$ with moment map $\mu:X \to {\liek}^*$, and choose a compatible $K$-invariant almost complex structure and Riemannian metric as at Remark \ref{almostcx}. Fix an invariant inner product on ${\mathfrak k}$ with associated norm.
Let $\{ S_\beta : \beta \in \mathcal{B}\}$ be the Morse stratification for the function $|\!|\mu|\!|^2$. Recall that
the set $\mathcal B$ indexing the critical subsets $C_\beta$ for $|\!|\mu|\!|^2$ and the stratification $\{S_\beta : \beta \in \mathcal B\}$ can be identified with a finite subset of a positive Weyl chamber $\frak t_+$ in the Lie algebra of a maximal torus $T$ of $K$, where a point of $\frak t_+$ lies in ${\mathcal B}$ if it is the closest point to the origin of the convex hull of a nonempty set of the weights for the Hamiltonian action of $K$, and we interpret \lq weight' as the image under the $T$-moment map of a connected component of the fixed point set $X^T$. Then for $\beta \in {\mathcal B}$ the submanifold $Z_\beta$ of $X$ is the union of those components of the fixed point set of the subtorus $T_\beta$ of $K$ generated by $\beta$ on which the moment map for $T_\beta$ given by composing $\mu$ with the restriction map from ${\liek}^*$ to ${\mathfrak t}^*_\beta$ takes the value $\beta$, and
$$C_\beta = K(Z_\beta \cap \mu^{-1}(\beta)) \cong K \times_{K_\beta} (Z_\beta \cap \mu^{-1}(\beta)) $$
where the subgroup $K_\beta$ is the stabiliser of $\beta$ under the adjoint action of $K$ on its Lie algebra.
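The index $\beta$ attached to a nonempty set of weights is the closest point to the origin of their convex hull. As a purely numerical illustration (a hypothetical helper in pure Python, not part of the construction above), this closest point can be approximated by the Frank--Wolfe method:

```python
def closest_point_to_origin(weights, iters=2000):
    """Approximate the closest point to the origin of the convex hull
    of `weights` (a list of tuples in R^n) by the Frank-Wolfe method.

    When the weights are the T-moment-map images of components of the
    fixed point set, the minimiser plays the role of the index beta."""
    dim = len(weights[0])
    x = list(weights[0])                       # start at a vertex of the hull
    for k in range(iters):
        # gradient of |x|^2 is 2x; move towards the vertex minimising <2x, w>
        s = min(weights, key=lambda w: sum(2 * x[i] * w[i] for i in range(dim)))
        gamma = 2.0 / (k + 2)                  # standard Frank-Wolfe step size
        x = [x[i] + gamma * (s[i] - x[i]) for i in range(dim)]
    return tuple(x)

# The hull of (2,0) and (0,2) is a segment whose closest point to 0 is (1,1).
beta = closest_point_to_origin([(2.0, 0.0), (0.0, 2.0)])
```

The iterate remains in the convex hull throughout, so the output is always a legitimate candidate for $\beta$; the convergence rate is the usual $O(1/k)$ of Frank--Wolfe.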
Recall that $\mu |_{Z_\beta}$ can be regarded as a moment map for the action of $K_\beta$ on $Z_\beta$, and so can $\mu |_{Z_\beta} - \beta$ since $\beta$ is central in $K_\beta$. We can define
$Z_{\beta}^{ss}$ to be the stratum
labelled by 0 for the Morse stratification of the normsquare $|\!|\mu - \beta|\!|^2$ of the moment map
$\mu |_{Z_\beta} - \beta$ on
$Z_{\beta}$.
For $x \in X$ we let $ p_{\beta}(x)$ be the limit $\lim_{t \to \infty} \exp (-it\beta) x$ of the downward trajectory from $x$ for the Morse--Bott function $\mu_\beta = \mu.\beta$ on $X$,
and define
$$Y_{\beta}^{ss} = p_{\beta}^{-1}(Z_{\beta}^{ss}) $$
where $Y_\beta$ with $p_{\beta}:Y_{\beta} \to Z_{\beta}$ is
given by
$$Y_\beta = \{ y \in X \, | \, p_\beta(y) \in Z_\beta \}$$
(cf. Remark \ref{pb}).
Then $Y_{\beta}$ and
$Y_{\beta}^{ss}$ are $K_{\beta}$-invariant almost-complex submanifolds of $X$ and we have
$$ S_{\beta} = K Y_\beta^{ss} \cong K \times_{K_\beta} Y_\beta^{ss} . $$
Moreover $p_{\beta}:Y_{\beta}
\to Z_{\beta}$ is a locally trivial fibration whose fibre is isomorphic to
${\mathbb C }^{m}$ for some $m \geq 0$ depending on $\beta$ (and possibly also on the connected component of $Z_\beta$ over which the fibre lies).
The locally closed almost-complex submanifold $Y_\beta$ of $X$ is invariant under the action of the maximal torus $T$ of $K$, and hence so is its closure $\overline{Y_\beta}$. Therefore by a result of Atiyah \cite{A} (see also \cite{B,K}) the image of $\overline{Y_\beta}$ under the $T$-moment map $\mu_T$ given by composing $\mu$ with restriction ${\liek}^* \to {\mathfrak t}^*$ is a convex polytope in ${\mathfrak t}^*$; indeed it is the convex hull of the (finitely many) images of the $T$-fixed points in $\overline{Y_\beta}$. Thus $\mu_T(\overline{Y_\beta})$ is contained in the half-space in ${\mathfrak t}^*$ consisting of those $\eta \in {\mathfrak t}^*$ satisfying $\eta.\beta \geqslant |\!| \beta|\!|^2$; since by assumption $C_\beta = K(Z_\beta \cap \mu^{-1}(\beta))$ is non-empty, $\beta$ is in fact the closest point to 0 in ${\mathfrak t}^* \cong {\mathfrak t}$ of this convex hull, and a point $y \in X$ lies in $Y_\beta$ if and only if $\beta$ is the closest point to 0 of the image under $\mu_T$ of its trajectory under the gradient flow of $\mu_\beta$.
Recall that $g \in G=K_{\mathbb C }$ lies in the parabolic subgroup $P_{\beta}$ if and only if
$\exp(-it\beta) g \exp(it\beta)$ tends to a limit in $G$ as $t \to \infty$, and this limit defines
a surjective homomorphism $q_{\beta}: P_{\beta} \to L_\beta$
whose kernel is the unipotent radical $U_\beta$ of $P_\beta$. The chosen almost-K\"ahler structure on $X$ is $K$-invariant, and so by the definition of a moment map the gradient flow of $\mu_\beta$ is given by the vector field $x \mapsto i\beta_x$ where $x \mapsto \beta_x$ is the infinitesimal action of $\beta \in {\mathfrak t}$ on $X$. Thus $Y_\beta$ is invariant under the infinitesimal action of $P_\beta$ on $X$.
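As a toy illustration of this limit criterion (for $G = GL_2({\mathbb C })$ and a diagonal one-parameter subgroup; the identification of $\exp(-it\beta)$ with a real diagonal group $e^{t\beta}$ is our own sign convention and may differ from the text's):

```python
import math

def conjugate(g, beta, t):
    """Entrywise form of e^{-t beta} g e^{t beta} for diagonal beta:
    the (i, j) entry of g is scaled by e^{t (beta_j - beta_i)}."""
    n = len(g)
    return [[g[i][j] * math.exp(t * (beta[j] - beta[i])) for j in range(n)]
            for i in range(n)]

def in_parabolic(g, beta, t_large=50.0, bound=1e6):
    """g lies in P_beta iff its conjugates stay bounded as t -> infinity,
    i.e. g_{ij} = 0 whenever beta_j > beta_i."""
    return all(abs(x) < bound
               for row in conjugate(g, beta, t_large) for x in row)

beta = [1.0, 0.0]
upper = [[2.0, 3.0], [0.0, 5.0]]   # upper triangular: lies in P_beta
lower = [[2.0, 0.0], [3.0, 5.0]]   # nonzero lower entry: does not
```

For `upper` the conjugates converge to the diagonal part $\mathrm{diag}(2,5)$, which is the image $q_\beta(g)$ in the Levi factor $L_\beta$; for `lower` the $(2,1)$ entry blows up as $t \to \infty$.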
This means that we can apply the symplectic implosion construction associated to the unipotent radical $U_\beta$ of $P_\beta$ to $\overline{Y_\beta}$ as in Remark \ref{helpfulrem}, and take a symplectic quotient of the result by the induced Hamiltonian action of the maximal compact subgroup $K^{(P_\beta)} = K_\beta$ of $P_\beta$. As discussed in Remark \ref{helpfulrem}, we can shift the moment map for this induced Hamiltonian action by any constant in the Lie algebra $\liez^{(P_\beta)}$ of the centre $Z(K^{(P_\beta)})$ of $K^{(P_\beta)}$,
and for a generic choice of $\eta$ in $\liez^{(P_\beta)}$ we have $K_\eta = K^{(P_\beta)}$ and the symplectic quotient by $K^{(P_\beta)} = K_\beta$ is given by
$ (\overline{Y_\beta} \cap \mu^{-1}(\eta))/K_\beta.$
By definition if $\eta = \beta$ or any nonzero scalar multiple of $\beta$ then $K_\eta = K_\beta=K^{(P_\beta)} $ and this symplectic quotient is $ (\overline{Y_\beta} \cap \mu^{-1}(\eta))/K_\beta.$ It follows from the description of $Y_\beta$ above that if $\eta$ is a generic element of $\liez^{(P_\beta)}$ and $\eta.\beta < |\!| \beta|\!|^2$ (or if we have equality and $\eta \neq \beta$) then $ (\overline{Y_\beta} \cap \mu^{-1}(\eta))/K_\beta$ is empty, while if $\eta = \beta$ then the symplectic quotient $ (\overline{Y_\beta} \cap \mu^{-1}(\eta))/K_\beta = ({Z_\beta} \cap \mu^{-1}(\beta))/K_\beta = C_\beta/K$ collapses the stratum onto its critical subset. On the other hand if $\eta$ is a sufficiently small perturbation of $\beta$ in $\liez^{(P_\beta)}$ then
$ \overline{Y_\beta} \cap \mu^{-1}(\eta) \subseteq {Y_\beta}$ so
$$ (\overline{Y_\beta} \cap \mu^{-1}(\eta))/K_\beta = ( {Y_\beta} \cap \mu^{-1}(\eta))/K_\beta.$$
It is therefore natural to choose $\eta$ to be
$(1 + \epsilon)\beta$ for some sufficiently small $\epsilon > 0$ and define
\begin{equation} \label{defnsqus}
S_\beta \senv_\epsilon K := ( {Y_\beta} \cap \mu^{-1}((1 + \epsilon)\beta))/K_\beta.
\end{equation}
This has a stratified symplectic structure and
it follows from the theory of variation of symplectic quotients \cite{DH,GSt,LS} (cf. \cite{dh98,Thaddeus}) that $S_\beta \senv_\epsilon K$ is independent of $\epsilon$ up to diffeomorphism for $0 < \epsilon <\!< 1$ and the induced symplectic structure varies in a predictable fashion with $\epsilon$; we can also use this theory to study the variation if $\eta$ is chosen to be a different perturbation of $\beta$.
We have thus proved our main result.
\begin{thm} \label{mainresult}
Let $X$ be a compact symplectic manifold with a Hamiltonian action of a compact group $K$ with moment map $\mu:X \to {\liek}^*$. Choose a compatible $K$-invariant almost complex structure and Riemannian metric on $X$, and an invariant inner product on ${\mathfrak k}$ with associated norm.
Let $\{ S_\beta : \beta \in \mathcal{B}\}$ be the Morse stratification for the function $|\!|\mu|\!|^2$. If $\beta \in \mathcal{B} \setminus \{0\} $ and $0< \epsilon <\!< 1$ then
$$
S_\beta \senv_\epsilon K = ( {Y_\beta} \cap \mu^{-1}((1 + \epsilon)\beta))/K_\beta$$
is a compact stratified symplectic space which can be interpreted as a symplectic quotient for the action of $K$ on the stratum $S_\beta$.
When $X \subseteq {\mathbb P } _n$ is a complex projective variety equipped with the Fubini--Study K\"ahler form and a linear action of $K$ defined by a unitary representation $K \to U(n+1)$, and $\epsilon$ is rational, then $
S_\beta \senv_\epsilon K$ can be identified with a quotient of $S_\beta$ by $G=K_{\mathbb C }$ obtained from non-reductive GIT applied to the action of the parabolic subgroup $P_\beta$ on $Y_\beta$, with the linearisation twisted by the rational character $(1+\epsilon)\beta$ of $P_\beta$.
\end{thm}
\section {The Yang--Mills functional over a compact Riemann surface}
Atiyah and Bott observed \cite{AB} that the Yang--Mills functional over a compact Riemann surface $\Sigma$ plays the role of $|\!|\mu |\!|^2$ for an infinite-dimensional Hamiltonian action. Here the symplectic quotient can be identified with a moduli space of semistable holomorphic bundles of fixed rank and degree over $\Sigma$, and the stratification $\{ S_\beta : \beta \in \mathcal{B} \}$
is by the Harder--Narasimhan type of a holomorphic bundle.
Let $\Sigma$ be a compact Riemann surface of genus $g \geqslant 2$, and let $\mathcal{E}$ be a fixed $C^{\infty}$ complex hermitian vector bundle of rank $n$ and degree $d$ over $\Sigma$. Let $\mathcal{C}$ be the space of all holomorphic structures on $\mathcal{E}$. Since $\Sigma$ has complex dimension one there are no integrality conditions to be satisfied, so $\mathcal{C}$ can be identified with the space of unitary connections on $\mathcal{E}$, which is an infinite-dimensional complex affine space with a flat K\"ahler structure.
Let $\mathcal{G}_{\mathbb C }$ denote the group of all $C^{\infty}$ complex automorphisms of $\mathcal{E}$. We can regard $\mathcal{G}_{\mathbb C }$ as the complexification of the gauge group $\mathcal{G}$ consisting of $C^{\infty}$ unitary automorphisms of $\mathcal{E}$. The natural action of $\mathcal{G}_{\mathbb C }$ on $\mathcal{C}$ preserves its complex structure and the action of the gauge group $\mathcal{G}$ preserves the K\"ahler structure and is Hamiltonian with a moment map given by the curvature of a connection. The central subgroup ${\mathbb C }^*$ of $\mathcal{G}_{\mathbb C }$ given by scalar multiplication on $\mathcal{E}$ acts trivially on $\mathcal{C}$, so the moment map in the direction of the corresponding central $S^1$ in $\mathcal{G}$ is constant; it is essentially given by the ratio $d/n$. The Yang--Mills functional on $\mathcal{C}$ takes a connection to the normsquare of its curvature and hence plays the role of $|\!|\mu|\!|^2$ for the action of the gauge group on $\mathcal{C}$, except that it is more natural to choose the moment map $\mu$ so that $\mu^{-1}(0)$ is nonempty by adding a suitable central constant to the curvature. This means that the Yang--Mills functional differs from $|\!|\mu|\!|^2$ by a constant, so their Morse stratifications will coincide.
Atiyah and Bott \cite{AB} identified the symplectic quotient $\mu^{-1}(0)/\mathcal{G}$ of $\mathcal{C}$ by the gauge group with the
moduli space $\mathcal{M}(n,d)$ of semistable holomorphic vector bundles of rank $n$ and degree $d$ on $\Sigma$ (modulo S-equivalence).
Recall that a holomorphic vector bundle $E$ over $\Sigma$ is semistable (respectively stable) if every holomorphic subbundle $D$ of $E$ satisfies
$\text{slope} (D) \leq \text{slope} (E)$ (respectively $\text{slope} (D) < \text{slope} (E)$),
where $\text{slope} (D) = \text{deg}(D)/\text{rank}(D)$
(and thus semistable bundles of coprime rank and degree are stable). Any semistable vector bundle $E$ has a Jordan--H\"older filtration by sub-bundles of the same slope as $E$ whose successive subquotients are stable; its associated graded bundle is the direct sum of these successive subquotients (which is independent of the choice of Jordan--H\"older filtration), and two semistable bundles of rank $n$ and degree $d$ are S-equivalent if their associated graded bundles are isomorphic.
A holomorphic bundle $E$ over $\Sigma$ of rank $n$ and degree $d$ has a canonical Harder--Narasimhan filtration
$$0 = E_0 \subset E_1 \subset \cdots \subset E_{s-1} \subset E_s = E$$
such that $\text{slope} (E_{j-1}) > \text{slope} (E_{j})$ and $E_j/E_{j-1}$ is semistable for $1 \leq j \leq s$. The Harder--Narasimhan type of $E$ is then given by the data provided by the ranks and degrees of the successive subquotients $E_j/E_{j-1}$; in \cite{AB} this is encoded in the vector
$$\lambda(E) = (d_1/n_1, \ldots, d_1/n_1, d_2/n_2, \ldots, d_s/n_s)$$
in which $d_j/n_j$ occurs $n_j$ times.
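As a small illustration (a hypothetical helper, not taken from \cite{AB}), the vector $\lambda(E)$ can be assembled from the ranks and degrees of the Harder--Narasimhan subquotients:

```python
from fractions import Fraction

def hn_type(ranks, degrees):
    """Return the Harder-Narasimhan type vector lambda(E), where
    ranks[j] = n_{j+1} and degrees[j] = d_{j+1} for the subquotients
    E_{j+1}/E_j; the entry d_j/n_j occurs n_j times."""
    slopes = [Fraction(d, n) for d, n in zip(degrees, ranks)]
    # slopes of successive subquotients must strictly decrease
    assert all(s1 > s2 for s1, s2 in zip(slopes, slopes[1:])), \
        "not a Harder-Narasimhan type"
    lam = []
    for n, s in zip(ranks, slopes):
        lam.extend([s] * n)
    return lam

# rank 3, degree 5, with subquotients of (rank, degree) = (1, 3) and (2, 2):
lam = hn_type([1, 2], [3, 2])      # entries (3, 1, 1)
```

Exact rational arithmetic avoids spurious equalities between slopes; `sum(lam)` recovers the total degree $d = 5$.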
The main aim of \cite{AB} is to study the cohomology of the moduli space $\mathcal{M}(n,d)$ by showing that the Yang--Mills functional is equivariantly perfect as a Morse function. Because of the analytical difficulties created by working in infinite-dimensions and the singularities in the critical locus for the Yang--Mills functional, Morse theory is not applied directly to the Yang--Mills functional in \cite{AB} but instead the stratification is defined directly in terms of Harder--Narasimhan types; however the analytical difficulties were later overcome \cite{Dask}. Let $\Lambda$ denote the set of all Harder--Narasimhan types, and for any Harder--Narasimhan type $\lambda$, let $\mathcal{C}_\lambda$ denote the subset of $\mathcal{C}$ consisting of holomorphic structures on $\mathcal{E}$ with Harder--Narasimhan type $\lambda$. Atiyah and Bott showed that $\{ \mathcal{C}_\lambda : \lambda \in \Lambda \}$ is a $\mathcal{G}$-equivariantly perfect stratification of $\mathcal{C}$. They conjectured that it coincides with the Morse stratification for the Yang--Mills functional, which was later proved by Daskalopoulos \cite{Dask}.
The moduli space $\mathcal{M}(n,d)$ can also be constructed as finite-dimensional symplectic or GIT quotients, and the inductive formulas of \cite{AB} for its Betti numbers can be rederived via these \lq finite-dimensional approximations' to the Yang--Mills picture \cite{ADK,Karxiv}. In \cite{hok12} it is shown that the moduli spaces $\mathcal{M}(n,d)$ (and more generally moduli spaces of sheaves over any fixed nonsingular projective scheme) can be constructed as GIT quotients for actions of complex reductive groups on finite-dimensional complex varieties such that, for any given Harder--Narasimhan type $\lambda$ for bundles of rank $n$ and degree $d$, there is a choice of GIT construction of $\mathcal{M}(n,d)$ for which the bundles of Harder--Narasimhan type $\lambda$ appear as a stratum in the associated stratification $\{S_\beta:\beta \in \mathcal{B}\}$. The results of non-reductive GIT described in \S 3 can then be used to construct moduli spaces of holomorphic bundles of fixed Harder--Narasimhan type \cite{behjk16, bhk17}.
Alternatively we can attempt to use the infinite-dimensional Yang--Mills construction of the moduli space $\mathcal{M}(n,d)$ as a symplectic quotient of $\mathcal{C}$ by the gauge group and the methods of this paper to find an analogous symplectic construction of moduli spaces of holomorphic bundles of fixed Harder--Narasimhan type. Ignoring the analytical difficulties associated to working with infinite-dimensional spaces and groups, we might proceed as follows.
Let $\lambda(E) = (d_1/n_1, \ldots, d_1/n_1, d_2/n_2, \ldots, d_s/n_s)$ be a Harder--Narasimhan type and fix a $C^{\infty}$ filtration
\begin{equation}
\label{filt}
0 = \mathcal{E}_0 \subset \mathcal{E}_1 \subset \cdots \subset \mathcal{E}_{s-1} \subset \mathcal{E}_s = \mathcal{E} \end{equation}
of the $C^{\infty}$ bundle $\mathcal{E}$ with $\text{deg}(\mathcal{E}_j/\mathcal{E}_{j-1}) = d_j$ and
$\text{rank}(\mathcal{E}_j/\mathcal{E}_{j-1}) = n_j$ for $1 \leq j \leq s$. Define $\mathcal{Y}_\lambda$ to be the subset of $\mathcal{C}$ consisting of those holomorphic structures (or equivalently unitary connections) on $\mathcal{E}$ which are compatible with this filtration, in the sense that the subbundles $\mathcal{E}_j$ are all holomorphic subbundles, and define $\mathcal{Y}_\lambda^{ss}$ to consist of those holomorphic structures for which in addition the induced holomorphic structures on the subquotients $\mathcal{E}_j/ \mathcal{E}_{j-1}$ are semistable, so that the holomorphic structure on $\mathcal{E}$ lies in $\mathcal{C}_\lambda$. Let $\mathcal{P}_\lambda$ be the subgroup of $\mathcal{G}_{\mathbb C }$ consisting of the complex $C^{\infty}$-automorphisms of $\mathcal{E}$ which preserve the filtration (\ref{filt}) and let $\mathcal{U}_\lambda$ be the kernel of its induced action on the direct sum of the successive subquotients $\mathcal{E}_j/\mathcal{E}_{j-1}$. There is a $C^{\infty}$ decomposition of $\mathcal{E}$ as the orthogonal direct sum of the successive subquotients $\mathcal{E}_j/\mathcal{E}_{j-1}$; let
$\mathcal{L}_\lambda$ be the subgroup of $\mathcal{P}_\lambda$ preserving this direct sum decomposition and let $\mathcal{K}_\lambda$ be its intersection with the gauge group $\mathcal{G}$. Finally let $\mathcal{Z}_\lambda$ be the subset of $\mathcal{Y}_\lambda$ consisting of holomorphic structures for which this orthogonal direct sum decomposition of $\mathcal{E}$ is a holomorphic decomposition, and let $\mathcal{Z}_\lambda^{ss} = \mathcal{Z}_\lambda \cap \mathcal{Y}_\lambda^{ss}$.
Then $\mathcal{Y}_\lambda$, $\mathcal{Y}_\lambda^{ss}$, $\mathcal{Z}_\lambda$, $\mathcal{Z}_\lambda^{ss}$ and $\mathcal{P}_\lambda$, $\mathcal{U}_\lambda$, $\mathcal{L}_\lambda$, $\mathcal{K}_\lambda$ play the roles for the Hamiltonian action of the gauge group $\mathcal{G}$ on $\mathcal{C}$, and on its stratum $\mathcal{C}_\lambda$, which $Y_\beta$, $Y_\beta^{ss}$, $Z_\beta$, $Z_\beta^{ss}$ and $P_\beta$, $U_\beta$, $L_\beta$ and $K_\beta$ play in the finite-dimensional setting for the Hamiltonian action of the compact group $K$ on the compact symplectic (or K\"ahler) manifold $X$, and its stratum $S_\beta$. Note however that it is really $\lambda - (d/n, \ldots, d/n)$, not $\lambda$ itself, which plays the role of $\beta$, since the central circle subgroup in the gauge group acts trivially, so
$$(1 + \epsilon) \lambda - \epsilon (d/n, \ldots, d/n)$$
plays the role of $(1 + \epsilon)\beta$.
Thus by analogy with the finite-dimensional situation we expect the stratum $\mathcal{C}_\lambda$ to have a symplectic quotient
$$ \mathcal{C}_\lambda \senv_\epsilon \mathcal{G} = ( \mathcal{Y}_\lambda \cap \text{curv}^{-1}((1 + \epsilon)\lambda - \epsilon (d/n, \ldots ,d/n)))/\mathcal{K}_\lambda$$
for $0 < \epsilon <\!< 1$, where $\text{curv}$ assigns to a holomorphic structure, or equivalently a unitary connection, on $\mathcal{E}$ its curvature, appropriately normalised, and $\mathcal{Y}_\lambda$ and $\mathcal{K}_\lambda$ are defined as above. At least away from its singularities we expect this symplectic quotient to be identifiable with a suitable moduli space of holomorphic bundles of Harder--Narasimhan type $\lambda$.
\subsection{Discussions}\label{adaptive}
\subsubsection{Why does DMP work better?}\label{why}
As discussed in~\cite{long2018understanding}, membership inference attacks exploit the \emph{unique influence} of a member record on the target model.
To prevent this, DMP trains an unprotected model, $\theta_\mathsf{up}$, on the private training data, $D_\mathsf{tr}$, and uses the predictions of $\theta_\mathsf{up}$ on reference data, $X_\mathsf{ref}$, to train its final protected model, $\theta_\mathsf{p}$.
Therefore, DMP reduces the influence of $D_\mathsf{tr}$ by hindering the direct access of $\theta_\mathsf{p}$ to $D_\mathsf{tr}$. We also show membership privacy can be further strengthened by selecting $X_\mathsf{ref}$ with low entropy predictions.
Note that low-entropy samples in $X_\mathsf{ref}$ are ``easy-to-classify'' samples whose predictions are not affected by small changes in $D_\mathsf{tr}$; therefore, $\theta_\mathsf{p}$ trained on these predictions preserves the membership privacy of $D_\mathsf{tr}$.
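The entropy-based selection of $X_\mathsf{ref}$ can be sketched as follows; this is a minimal pure-Python illustration, and the helper names and the entropy threshold are our own rather than taken from the paper:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax of theta_up's logits."""
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy (in nats) of a prediction vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_reference(candidates, predict_logits, threshold, T=1.0):
    """Keep candidate samples whose soft predictions have low entropy;
    these 'easy-to-classify' samples leak little about D_tr."""
    kept = []
    for x in candidates:
        probs = softmax(predict_logits(x), T)
        if entropy(probs) <= threshold:
            kept.append((x, probs))          # sample with its soft label
    return kept
```

A confidently classified sample with logits $(8,0,0)$ has near-zero entropy and is kept, while near-uniform logits give entropy close to $\log 3$ and are discarded.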
The main goal of DMP is to improve privacy-utility tradeoffs of ML models, i.e., increase the prediction accuracy of a model for a given level of membership privacy.
Both regularization and differential privacy mechanisms can achieve any level of membership privacy. But, with the same membership privacy, DMP models have significantly better prediction accuracies than conventionally regularized or DP models.
This is because \emph{DMP incorporates utility in its objective}: More specifically, it forces the predictions of $\theta_\mathsf{p}$ and $\theta_\mathsf{up}$ on non-member data, $X_\mathsf{ref}$, to be the same, using soft labels based loss function in~\eqref{kld_datum}, and effectively transfers the prediction accuracy of $\theta_\mathsf{up}$ to $\theta_\mathsf{p}$.
On the contrary, DP mechanisms~\cite{abadi2016deep,papernot2017semi} add noise calibrated to satisfy DP guarantees and do not incorporate utility constraints, which makes training highly accurate DP models challenging.
On the other hand, the loss of adversarial regularization puts different weights on two terms: first, the classification loss computed by directly accessing $D_\mathsf{tr}$, and second, the accuracy of the membership inference attack to be mitigated. Because of this direct access to $D_\mathsf{tr}$, the weight on the second term needs to be high to reduce overfitting, as proposed in the original work~\cite{nasr2018machine}. The goal of the second term is to train the final model to fool the attack model, so it does not help to improve the prediction accuracy of the final model; the higher weight on the second term therefore significantly harms the accuracy of the final model.
In self-distillation~\cite{hinton2014distilling,papernot2016distillation}, soft predictions of a model on its own training data are used to improve generalization of the model without compromising the model accuracy.
DMP uses predictions on reference data, disjoint from the private training data, to train its final model. This achieves high prediction accuracy due to the high-quality predictions, while providing strong membership privacy.
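The per-datum soft-label loss that transfers the predictions of $\theta_\mathsf{up}$ to $\theta_\mathsf{p}$ can be sketched as a standard KL-divergence distillation loss; the exact normalisation of \eqref{kld_datum} may differ from this generic illustration:

```python
import math

def kl_divergence(teacher_probs, student_probs, eps=1e-12):
    """KL(teacher || student): a generic per-datum distillation loss.

    Minimising this over X_ref forces theta_p's predictions to match
    theta_up's soft labels without theta_p ever touching D_tr."""
    return sum(p * math.log((p + eps) / (q + eps))
               for p, q in zip(teacher_probs, student_probs))
```

The loss vanishes exactly when the student reproduces the teacher's soft label and is positive otherwise; the overall training objective is its average over $X_\mathsf{ref}$.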
\subsubsection{Synthetic reference data for DMP}\label{adaptive:synth}
An important requirement of DMP is access to some reference data for distillation.
First, as mentioned before, DMP uses \emph{unlabeled} reference data;
most data owners have access to huge amounts of unlabeled data, but labeled data is scarce due to the expensive process of manual labeling. In our setting, the data owners will not even release their unlabeled reference data, which may be sensitive.
Second, even if a DMP trainer does not have access to enough unlabeled reference data, she can generate \emph{synthetic} (unlabeled) reference data through different means such as generative adversarial networks.
Specifically, we synthesized reference data for CIFAR-10 using the DC-GAN architecture \cite{radford2015unsupervised}.
With 25,000 \textbf{real} CIFAR-10 samples as its reference data, DMP achieves a ($E_\mathsf{gen}$, $A_\mathsf{te}$, $A_\mathsf{bb}$) of (3.1, 65.0, 50.6) (Table \ref{table:performance_comparison}).
On the other hand, using
25,000 \textbf{synthetic} CIFAR-10 samples generated using DC-GAN, DMP achieves a ($E_\mathsf{gen}$, $A_\mathsf{te}$, $A_\mathsf{bb}$) of (3.5, 56.8, 51.3); these results are close to those when non-synthetic CIFAR-10 reference data is used.
As the distribution of synthetic reference data does not exactly match that of real data, the corresponding DMP models have slightly poorer generalization and test accuracies; this can be improved with better generative models.
The larger the synthetic reference data, the better the final DMP models: For 12500, 25000, and 37500 amounts of synthetic reference data, we obtain a ($E_\mathsf{gen}$, $A_\mathsf{te}$, $A_\mathsf{bb}$) of (2.1, 53.0, 50.3), (3.5, 56.8, 51.3), and (5.0, 57.5, 52.1), respectively.
This shows that DMP does not necessarily need real reference data, and that it still outperforms existing defenses even with synthetic reference data.
Recent work has also investigated generating categorical data~\cite{gal2015latent,camino2018generating} using more classical methods.
\subsubsection{Adaptive attacks on DMP}\label{adaptive:attack}
An adversary may try to adapt her attack against DMP, by leveraging DMP's particular reference data selection, which is based on the entropy of predictions of $\theta_\mathsf{up}$.
However, as discussed in our threat model (Section \ref{prelim:threat}), we assume the reference data is \textbf{not publicly available} due to its possibly sensitive nature.
Hence, the adversary first needs to identify the reference data, e.g., by mounting a membership inference attack on $\theta_\mathsf{p}$;
however, as shown in Section~\ref{exp:ref_risk}, such an attack is highly infeasible since our reference data samples do not have hard labels (but soft labels distilled from the original training data).
Hence, the adversary cannot obtain the reference dataset to adapt her attack.
Next, we show that even if the adversary can obtain the reference data accurately, e.g., through out of band channels, adaptive attacks cannot be effective.
In DMP, reference samples have low entropy predictions of $\theta_\mathsf{up}$, but members of training data of $\theta_\mathsf{up}$ also have low entropies, due to overfitting.
Therefore, training data members and low entropy reference samples could be similar, i.e., close in feature space.
The adversary can exploit this possibility by measuring distances of given target sample to the reference data and inferring that the target sample is a member if it is close to the reference data with low entropy predictions.
However, we show that proximity of samples in feature space does not correlate with closeness of their entropies: a low distance between a target sample and reference samples with low-entropy predictions does not imply that the target sample also has low entropy or is a member, and vice-versa.
In other words, \emph{closeness of samples in the feature space cannot be exploited to strengthen membership inference.}
Figure~\ref{fig:aa_dist} demonstrates this for the Purchase-100 dataset; since this dataset has binary features, we use the Hamming distance.
Each point corresponds to a target member or non-member sample, and reference samples on the x-axis are arranged in increasing order of entropy from left to right.
The x-axis shows the reference sample closest to a given target sample and the y-axis shows the distance between these two samples in the feature space.
As members and non-members are not separated along either of the axes, there is no correlation between the closeness in feature space and the closeness of entropies.
Hence, adaptive attacks exploiting distances between samples in feature space will not gain any membership information.
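The distance computation underlying this hypothetical adaptive attack can be sketched as follows (Purchase-100 features are binary, hence the Hamming distance; the helper names are our own):

```python
def hamming(x, y):
    """Hamming distance between two equal-length binary feature vectors."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def nearest_reference(target, reference_samples):
    """Return (index, distance) of the reference sample closest to `target`.

    The adaptive adversary would guess 'member' when this distance is
    small; as discussed above, the distance carries no membership signal."""
    dists = [hamming(target, r) for r in reference_samples]
    i = min(range(len(dists)), key=dists.__getitem__)
    return i, dists[i]
```

Figure~\ref{fig:aa_dist} plots, for each target sample, the output of exactly this kind of nearest-reference computation, and no separation between members and non-members emerges.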
\begin{figure}[h]
\centering
\includegraphics[scale=0.32]{./figures/adaptive_dist_based.pdf}
\caption{Closeness of reference and target samples in feature space does not correlate with closeness of entropy, which renders distance-based adaptive attacks against DMP ineffective. Experiment performed on the Purchase-100 dataset.}
\label{fig:aa_dist}
\end{figure}
\section{Missing Discussion Details}
In the last section of the main paper, we provide various insights into our DMP defense based on our extensive evaluation. We provide the missing details of those discussions below.
\subsection{Hyperparameter selection in DMP}\label{exp:param_variation}
\subsubsection{The temperature of the softmax layer.}\label{exp:param_variation:temp}
The softmax temperature, $T$, of the unprotected model, $\theta_\textsf{up}$, plays an important role in the amount of knowledge transferred from the unprotected to protected model in DMP.
Our results in Table~\ref{table:acc_vs_temp} confirm our analytical understanding of the use of the softmax temperature: increasing the temperature for AlexNet trained on the CIFAR100 dataset reduces the classification accuracy of the final protected model, $\theta_\textsf{p}$, but also strengthens its resistance to membership inference.
Therefore, the softmax temperature $T$ should be chosen depending on the desired privacy-utility tradeoff.
Table \ref{table:exp_setup} shows the temperatures used in our experiments.
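The effect of $T$ on the transferred soft labels can be illustrated directly: raising $T$ flattens the predictions of $\theta_\textsf{up}$ and increases their entropy, so less knowledge (and less membership signal) is transferred. A minimal sketch, with illustrative logits of our own choosing:

```python
import math

def soft_labels(logits, T):
    """Temperature-scaled softmax: the soft labels emitted by theta_up."""
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy (in nats) of a prediction vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

logits = [6.0, 2.0, 1.0, 1.0]
# entropy grows monotonically with the temperature
ents = [entropy(soft_labels(logits, T)) for T in (1.0, 2.0, 4.0, 8.0)]
```

This mirrors Table~\ref{table:acc_vs_temp}: higher temperatures yield flatter soft labels, weaker knowledge transfer, lower accuracy of $\theta_\textsf{p}$, and stronger membership privacy.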
\begin{table}[h]
\fontsize{8.5}{10}\selectfont{}
\setlength{\extrarowheight}{0cm}
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\cline{1-5}
\multirow{2}{*}{Defense} & Softmax & Training & Test & Attack \\
& $T$ & Accuracy & Accuracy & Accuracy \\ \hline
No defense & n/a & 100 & 36.8 & 91.3 \\ \hline
\multirow{4}{*}{ DMP }& 2 & 46.6 & 37.3 & 57.4 \\
& 4 & 42.2 & 35.7 & 55.6 \\
& 6 & 36.4 & 32.8 & 52.5 \\
& 8 & 12.1 & 12.3 & 51.7 \\ \hline
\end{tabular}
\end{center}
\vspace{-1em}
\caption{Effect of the softmax temperature on DMP: for a fixed $X_\textsf{ref}$, increasing the temperature of the softmax layer of $\theta_\textsf{up}$ reduces $\mathcal{R}$ in \eqref{eq:obj_ratio}, which strengthens the membership privacy.}
\label{table:acc_vs_temp}
\end{table}
\begin{table}[h]
\fontsize{8.5}{9}\selectfont{}
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
{Combination} &\multirow{2}{*}{Dataset}& \multirow{2}{*}{Architecture} & \multirow{2}{*}{$|\theta|$} & \multirow{2}{*}{$T$} \\
acronym& & & & \\ \hline
P-FC & Purchase & Fully Connected & 1.32M & 1.0 \\ \hline
T-FC & Texas & Fully Connected & 1.32M & 1.0 \\ \hline
C100-A & \multirow{3}{*}{CIFAR100} & AlexNet & 2.47M & 4.0 \\
C100-D12 & & DenseNet12 & 0.77M & 4.0 \\
C100-D19 & & DenseNet19 & 25.6M & 1.0 \\ \hline
C10-A & CIFAR10 & AlexNet & 2.47M & 1.0 \\ \hline
\end{tabular}
\end{center}
\vspace{-1em}
\caption{Temperature of the softmax layers for the different combinations of dataset and network architecture used to produce the results in Table 3 of the main paper.}
\label{table:exp_setup}
\end{table}
\subsubsection{The size of reference data.}\label{exp:param_variation:publen}
In DMP, the more the reference data, the looser the bound on $\mathcal{R}$ in \eqref{eq:obj_ratio}, and therefore, the weaker the membership resistance of the corresponding $\theta_\textsf{p}$.
To validate this, we quantify the classification accuracy and the membership inference risk of $\theta_\textsf{p}$ with increasing amounts of $X_\textsf{ref}$.
We use Purchase-100 data and vary $|X_\textsf{ref}|$ as shown in Figure \ref{fig:cls_acc_vs_publen}; we fix the softmax $T$ of $\theta_\textsf{up}$ at 1.0.
The $\theta_\textsf{up}$ used here has training accuracy, test accuracy, and membership inference risk of 99.9\%, 77.0\%, and 77.1\%, respectively.
Initially, the test accuracy of $\theta_\textsf{p}$ increases with $|X_\textsf{ref}|$ due to the useful knowledge transferred.
But once the test accuracy of $\theta_\textsf{p}$ approaches that of $\theta_\textsf{up}$, the predictions of $\theta_\textsf{up}$ essentially insert noise into the training data of $\theta_\textsf{p}$, so the gain from increasing the size of the reference data slows down.
Although this noise limits further gains in the test performance of $\theta_\textsf{p}$, it also prevents $\theta_\textsf{p}$ from learning more about $D_\textsf{tr}$ and thus limits further inference risk.
This is shown by the train accuracy and membership inference risk curves in Figure \ref{fig:cls_acc_vs_publen}.
Therefore, the size of the reference data should be selected based on the desired tradeoffs for the final model.
\begin{figure}
\centering
\resizebox{8cm}{5cm}{\input{new_tex_figures/cls_acc_vs_publen.tex}}
\vspace*{-.8em}
\caption{Increasing the reference data size, $|X_\textsf{ref}|$, increases the accuracy of $\theta_\textsf{p}$, but also increases $\mathcal{R}$ in \eqref{eq:obj_ratio}, which increases the membership inference risk posed by $\theta_\textsf{p}$.}
\label{fig:cls_acc_vs_publen}
\vspace*{-.5em}
\end{figure}
\subsection{Privacy risk to reference data ($X_\mathsf{ref}$)}\label{exp:ref_risk}
\begin{table}
\fontsize{8.5}{9}\selectfont{}
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
{Dataset} & Test & Reference data & \multirow{2}{*}{$A_\textsf{wb}$} & \multirow{2}{*}{$A_\textsf{bb}$} \\
\& model & acc. ($A_\textsf{test}$) & acc. ($A_\textsf{ref}$) & & \\ \hline
P-FC & 74.1 & 80.8 & 53.1 & 52.6 \\ \cline{1-5}
T-FC & 48.6 & 52.0 & 52.2 & 52.0 \\ \cline{1-5}
C100-A & 35.7 & 35.9 & 50.9 & 50.5 \\ \cline{2-5}
C100-D12 & 63.1 & 65.1 & 53.0 & 52.2 \\ \cline{2-5}
C10-A & 65.0 & 66.7 & 53.9 & 52.7 \\ \cline{1-5}
\end{tabular}
\end{center}
\vspace*{-1.0em}
\caption{DMP does not pose membership inference risk to the possibly sensitive reference data. $A_\mathsf{ref}$ and $A_\mathsf{test}$ are accuracies of protected model, $\theta_\mathsf{p}$, on $X_\mathsf{ref}$ and $D_\mathsf{test}$, respectively.}
\label{table:ref_risk}
\end{table}
The reference data used in DMP can be of sensitive nature.
For instance, for Texas-100, the reference data are unlabeled, sensitive patient records, and are therefore at risk of a privacy breach.
However, we quantitatively show that \textbf{DMP does not pose membership inference risk to its reference data}.
The results are given in Table \ref{table:ref_risk}.
We note that, for any combination of model and dataset, the membership inference risk to the reference data due to DMP is close to 50\%, which is a random guess.
The intuition here is as follows.
$\theta_\mathsf{p}$ is trained on the noisy soft-labels that $\theta_\mathsf{up}$ produces on $X_\mathsf{ref}$; hence, compared to an arbitrary test sample, $X_\mathsf{ref}$ does not have the unique influence on $\theta_\mathsf{p}$ that membership inference attacks exploit~\cite{long2018understanding,shokri2017membership,salem2019ml,nasr2019comprehensive}.
For Purchase-100 and Texas-100, the accuracy of $\theta_\mathsf{p}$ on $X_\mathsf{ref}$ is much higher than on $D_\mathsf{test}$, because for these datasets, $X_\mathsf{ref}$ contains easy-to-classify samples.
\section{Fine tuning the DMP defense\\(Missing details)}\label{analysis:ref_choice}
We propose a fine tuning technique to select/generate appropriate reference data, $X_\mathsf{ref}$, and achieve the desired privacy-utility tradeoffs using our distillation for membership privacy (DMP) defense.
The technique depends on the result given in Proposition~\ref{prop:entropy}; we provide a detailed proof of the results below.
\paragraphb{Detailed proof of Proposition~\ref{prop:entropy}. }
\paragraphe{Deriving the objective for desired $X_\textsf{ref}$. }
Consider two training datasets $D_\textsf{tr}$ and $D'_\textsf{tr}$ such that $D'_\textsf{tr}\leftarrow D_\textsf{tr}-z$, and a reference dataset $X_\textsf{ref}$.
Then, the log of the ratio of the posterior probabilities of learning the exact same parameters $\theta_\textsf{p}$ using DMP is given by \eqref{eq:obj_ratio}.
Observe that, $\mathcal{R}$ is an extension of \eqref{eq:prob_ratio} to the setting of DMP, where $\theta_\mathsf{p}$ is trained via the knowledge transferred using $(X_\textsf{ref},\theta^{X_\textsf{ref}}_\textsf{up})$, instead of directly training on $D_\textsf{tr}$.
\cite{sablayrolles2019white} argue that reducing this ratio improves membership privacy.
Hence, we want to obtain an $X_\mathsf{ref}$ that reduces the ratio $\mathcal{R}$ when $D_\textsf{tr}$, $D'_\textsf{tr}$, and $\theta_\mathsf{p}$ are kept constant.
We note that, although similar in appearance to differential privacy, $\mathcal{R}$ is defined only for the given private dataset, $D_\textsf{tr}$.
\begin{equation}\label{eq:obj_ratio}
\mathcal{R}=\Big|\text{log}\ \frac{\text{Pr}(\theta_\textsf{p}|D_\textsf{tr},X_\textsf{ref})}{\text{Pr}(\theta_\textsf{p}|D'_\textsf{tr},X_\textsf{ref})}\Big|
\end{equation}
Next, we modify $\mathcal{R}$ as:
\begin{align}
\label{eq:obj_ratio1}
& \mathcal{R}= \Big|-\frac{1}{T}\sum_{\mathbf{x}\in X_\textsf{ref}} \big(\mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}((\mathbf{x},\theta^{\mathbf{x}}_\textsf{up});\theta_\textsf{p}) - \mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}((\mathbf{x},\theta'^{\mathbf{x}}_\textsf{up});\theta_\textsf{p})\big)\Big| \\
\label{eq:obj_ratio2}
&\leq \frac{1}{T} \sum_{\mathbf{x}\in X_\textsf{ref}}\Big| \mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}(\theta^{\mathbf{x}}_\textsf{up}\Vert\theta^{\mathbf{x}}_\textsf{p}) - \mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}(\theta'^{\mathbf{x}}_\textsf{up}\Vert\theta^{\mathbf{x}}_\textsf{p})\Big|
\end{align}
\noindent where $\theta_\textsf{up}$ and $\theta'_\textsf{up}$ are trained on $D_\textsf{tr}$ and $D'_\textsf{tr}$, respectively.
Note that, \eqref{eq:obj_ratio1} holds due to the assumption in \eqref{eq:post_assumption} and the KL-divergence loss used to train $\theta_\mathsf{p}$ in DMP.
\eqref{eq:obj_ratio2} follows from \eqref{eq:obj_ratio1} because $|a+b|\leq|a|+|b|$.
Therefore, minimizing \eqref{eq:obj_ratio2} implies minimizing \eqref{eq:obj_ratio}.
Thus, to improve membership privacy due to $\theta_\mathsf{p}$, $X_\mathsf{ref}$ is obtained by solving~\eqref{eq:ref_obj}.
\begin{align}
\label{eq:ref_obj}
X^*_\textsf{ref}=\underset{X_\textsf{ref}\in X}{\text{argmin}}\Big(\frac{1}{T}\sum_{\mathbf{x}\in X_\textsf{ref}} \big|\mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}&(\theta^{\mathbf{x}}_\textsf{up}\Vert\theta^{\mathbf{x}}_\textsf{p}) -\mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}(\theta'^{\mathbf{x}}_\textsf{up}\Vert\theta^{\mathbf{x}}_\textsf{p})\big|\Big)
\end{align}
The objective of \eqref{eq:ref_obj} is minimized when $\theta^{\mathbf{x}}_\mathsf{up} = \theta'^{\mathbf{x}}_\mathsf{up}\ \ \forall\mathbf{x}\in X_\mathsf{ref}$ and is very intuitive: It implies that, $z$ (i.e., $D_\mathsf{tr}-D'_\mathsf{tr}$) enjoys stronger membership privacy when the reference data, $X_\mathsf{ref}$, are such that \emph{the distributions of outputs of $\theta_\mathsf{up}$ and $\theta'_\mathsf{up}$ on $X_\mathsf{ref}$ are not affected by the presence of $z$ in $D_\mathsf{tr}$}.
\\
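To make the criterion concrete, the following pure-Python sketch (a toy illustration of \eqref{eq:obj_ratio2}, not our experimental pipeline; all function names and logits are illustrative) evaluates the temperature-scaled KL-difference bound for a candidate reference set, given the logits of $\theta_\textsf{up}$, $\theta'_\textsf{up}$, and $\theta_\textsf{p}$:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax of a logit vector.
    m = max(logits)
    exps = [math.exp((l - m) / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    # KL divergence KL(p || q) of two probability vectors.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def kl_gap_bound(up_logits, up_prime_logits, p_logits, T=4.0):
    # The bound of eq:obj_ratio2: (1/T) * sum over X_ref of
    # |KL(theta_up(x) || theta_p(x)) - KL(theta'_up(x) || theta_p(x))|.
    total = 0.0
    for lu, lup, lp in zip(up_logits, up_prime_logits, p_logits):
        total += abs(kl(softmax(lu, T), softmax(lp, T)) -
                     kl(softmax(lup, T), softmax(lp, T)))
    return total / T
```

When $\theta_\textsf{up}$ and $\theta'_\textsf{up}$ agree on every $\mathbf{x}\in X_\textsf{ref}$, the bound is zero, which is exactly the condition under which \eqref{eq:ref_obj} is minimized; raising the temperature $T$ also shrinks the bound, consistent with Figure \ref{fig:training}.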
\paragraphe{Simplifying the objective. }
Next, we simplify \eqref{eq:ref_obj} by replacing $\mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}$ with the closely related cross-entropy loss $\mathcal{L}_{\scaleto{\textsf{CE}}{4pt}}$.
The simplified objective is given by \eqref{eq:ref_obj2}.
\begin{align}\label{eq:ref_obj2}
X^*_\textsf{ref}=\underset{X_\textsf{ref}\in X}{\text{argmin}}\sum_{\substack{z'=(\mathbf{x},y)\\ \in (X_\textsf{ref},Y_\textsf{ref})}} \frac{1}{T} \big|\mathcal{L}_{\scaleto{\textsf{CE}}{4pt}}(z';\theta'_\textsf{up}) - \mathcal{L}_{\scaleto{\textsf{CE}}{4pt}}(z';\theta_\textsf{up})\big|
\end{align}
where $\mathcal{L}_{\scaleto{\textsf{CE}}{4pt}}$ is cross-entropy loss and $z'$ is not the same as $z\leftarrow D_\mathsf{tr}-D'_\mathsf{tr}$.
For clarity of presentation, here onward, we denote $\mathcal{L}_{\scaleto{\textsf{CE}}{4pt}}$ by $\mathcal{L}$.
Next, we assume that the ground truth labels $Y_\textsf{ref}$ of $X_\textsf{ref}$ are available. Note that $X_\textsf{ref}$ is an unlabeled dataset, but \emph{only to empirically demonstrate the validity of the simplification of \eqref{eq:ref_obj} to \eqref{eq:ref_obj2}, we assume that the ground truth labels of $X_\textsf{ref}$ are available}.
We validate the simplification in Figure \ref{fig:kl_to_ce}: for any given reference sample, the lower the difference between cross-entropy losses, $\Delta\mathcal{L}$, the lower the corresponding difference between KL-divergence losses; and vice-versa.
Note that, to select/generate a reference sample, we do not need the exact difference between cross-entropy or KL-divergence losses for the sample, but only the difference for the sample relative to the other samples.
Hence, although the difference between cross-entropy losses is not exactly the same as the difference between KL-divergence losses, their strong positive correlation is sufficient to make the reduction \eqref{eq:ref_obj} $\rightarrow$ \eqref{eq:ref_obj2} useful for our task.
\\
\vspace{.5em}
\paragraphe{Deriving the final objective to select/generate $X_\mathsf{ref}$. }
Next, to avoid repetitive training, we simplify the term for each sample in \eqref{eq:ref_obj2} using the results of~\cite{koh2017understanding}.
More specifically, they propose a linear approximation to the difference in cross-entropy losses of a pair of models trained with and without a specific sample in their training data.
We note that this is the exact setting of our problem.
If $\theta$ and $\theta_{-z}$ are two models trained with and without a member $z$, then the difference in cross-entropy losses of the two models on some test sample $z_\mathsf{test}=(\mathbf{x}_\mathsf{test},y_\mathsf{test})$ is approximated as:
\begin{equation}\label{koh_result}
|\mathcal{L}(z_\mathsf{test},\theta_{-z}) - \mathcal{L}(z_\mathsf{test},\theta)| \simeq |\nabla_{\theta}\mathcal{L}(z_\mathsf{test},\theta)H^{-1}_{\theta}\nabla_{\theta}\mathcal{L}(z,\theta)|
\end{equation}
\noindent where $H_\theta$ is the Hessian matrix, defined as $H_\theta=\frac{1}{n}\sum_{z\in D_\textsf{tr}}\nabla^2_\theta\mathcal{L}(z,\theta)$.
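The approximation in \eqref{koh_result} can be sketched on a toy model where $H_\theta$ is small enough to invert in closed form; for deep networks, \cite{koh2017understanding} estimate the inverse-Hessian-vector product stochastically instead. The sketch below (our own toy illustration: a two-parameter least-squares model with per-sample loss $\frac{1}{2}(\theta^\top\mathbf{x}-y)^2$; all names are illustrative) scores how much removing a training point $z$ would change the loss on a reference point $z'$:

```python
def solve2(A, b):
    # Solve the 2x2 linear system A v = b by Cramer's rule.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def fit(data, lam=1e-3):
    # Closed-form least-squares fit of y ~ theta . x (tiny ridge for invertibility).
    A = [[lam, 0.0], [0.0, lam]]
    b = [0.0, 0.0]
    for x, y in data:
        for i in range(2):
            b[i] += x[i] * y
            for j in range(2):
                A[i][j] += x[i] * x[j]
    return solve2(A, b)

def grad(theta, x, y):
    # Gradient of the per-sample loss 0.5 * (theta . x - y)^2.
    r = theta[0] * x[0] + theta[1] * x[1] - y
    return [r * x[0], r * x[1]]

def influence(data, z, z_ref):
    # |grad L(z_ref, theta)^T H^{-1} grad L(z, theta)|: the linear approximation
    # (up to a 1/n factor) of the loss change on z_ref when z is removed.
    theta = fit(data)
    n = len(data)
    H = [[1e-3, 0.0], [0.0, 1e-3]]  # Hessian of the average loss: (1/n) sum x x^T
    for x, _ in data:
        for i in range(2):
            for j in range(2):
                H[i][j] += x[i] * x[j] / n
    hinv_g = solve2(H, grad(theta, *z))
    g_ref = grad(theta, *z_ref)
    return abs(g_ref[0] * hinv_g[0] + g_ref[1] * hinv_g[1]) / n
```

An outlier has a large residual, hence a large gradient, and so receives a much larger influence score than an inlier; this is the quantity the derivation below minimizes over candidate reference samples.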
Substituting \eqref{koh_result} in \eqref{eq:ref_obj2} simplifies the objective in \eqref{eq:ref_obj} to:
\begin{align}
\label{eq:sim_ref_obj}
X^*_\textsf{ref}=\underset{X_\textsf{ref}\in X}{\text{argmin}}\sum_{\substack{z'=(\mathbf{x},y)\\ \in (X_\textsf{ref},Y_\textsf{ref})}} \frac{1}{T}|\nabla_{\theta}\mathcal{L}(z',\theta_\textsf{up})H^{-1}_{\theta}\nabla_{\theta}\mathcal{L}(z,\theta_\textsf{up})|
\end{align}
Note that, for a given member $z$, $H^{-1}_{\theta}\nabla_{\theta}\mathcal{L}(z,\theta)$ in \eqref{eq:sim_ref_obj} remains constant, and the minimization reduces to minimizing the gradient $\nabla_{\theta}\mathcal{L}(z',\theta_\textsf{up})$.
The lower the loss $\mathcal{L}(z',\theta_\textsf{up})$, the smaller the gradient $\nabla_{\theta}\mathcal{L}(z',\theta_\textsf{up})$.
Therefore objective \eqref{eq:sim_ref_obj} further simplifies as:
\begin{align}
\label{eq:sim_ref_obj1}
X^*_\textsf{ref}=\underset{X_\textsf{ref}\in X}{\text{argmin}}\ \frac{1}{T}\sum_{\substack{z'=(\mathbf{x}',y)\\ \in (X_\textsf{ref},Y_\textsf{ref})}} \mathcal{L}_{\scaleto{\textsf{CE}}{4pt}}(z',\theta_\textsf{up})
\end{align}
\begin{figure}[t]
\centering
\includegraphics[height=7.5cm,width=7.5cm,trim=3cm 3cm 3cm 3cm]{figures/purchase_surface.pdf}
\caption{Empirical validation of simplification of \eqref{eq:ref_obj} to \eqref{eq:ref_obj2}: Increase in $\Delta\mathcal{L}_{\scaleto{\textsf{CE}}{4pt}}$ increases $\Delta\mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}$, and that of \eqref{eq:ref_obj} to \eqref{eq:final_ref_obj}: Increase in $\mathcal{H}(\theta_\textsf{up}(z))$ increases $\Delta\mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}$.}
\label{fig:kl_to_ce}
\vspace*{-1em}
\end{figure}
Note that, in practice, it is not possible to solve the objective in \eqref{eq:sim_ref_obj1} as it is, because we cannot compute the loss without the ground truth labels of $X_\textsf{ref}$; recall that $X_\textsf{ref}$ is \emph{unlabeled}.
However, as the loss involved here is the cross-entropy loss, minimizing the loss is equivalent to minimizing the entropy of prediction $\theta_\textsf{up}(\mathbf{x}')$.
This gives us the final objective as:
\begin{align}
\label{eq:final_ref_obj}
X^*_\textsf{ref}=\underset{X_\textsf{ref}\in X}{\text{argmin}}\ \frac{1}{T}\sum_{\mathbf{x}'\in X_\textsf{ref}} \mathcal{H}(\theta_\textsf{up}(\mathbf{x}'))
\end{align}
where, $\mathcal{H}(\mathbf{v})\triangleq \sum_{i}-\mathbf{v}_i\text{log}(\mathbf{v}_i)$ is the entropy of $\mathbf{v}$.
This provides the result of Proposition~\ref{prop:entropy}.
Proposition~\ref{prop:entropy} states that, \emph{using the reference data with low entropy predictions of $\theta_\textsf{up}$ strengthens the membership resistance of $\theta_\textsf{p}$, and vice versa.}
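As an illustration of Proposition~\ref{prop:entropy} in code, the following sketch (a toy illustration; function names are ours) ranks candidate samples by the entropy of $\theta_\textsf{up}$'s prediction and keeps the lowest-entropy ones, as prescribed by \eqref{eq:final_ref_obj}:

```python
import math

def entropy(v):
    # H(v) = -sum_i v_i log(v_i), the entropy of a prediction vector.
    return -sum(p * math.log(p) for p in v if p > 0)

def select_reference(candidates, predictions, k):
    # Keep the k samples whose theta_up predictions have the lowest entropy.
    ranked = sorted(range(len(candidates)), key=lambda i: entropy(predictions[i]))
    return [candidates[i] for i in ranked[:k]]
```

Note that the ranking only needs the \emph{relative} entropies of the candidates, matching the observation above that relative, not exact, loss differences suffice.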
In Figure~\ref{fig:kl_to_ce}, we empirically validate the reductions \eqref{eq:ref_obj} $\rightarrow$ \eqref{eq:sim_ref_obj1}$\rightarrow$ \eqref{eq:final_ref_obj}.
Specifically, we show that, for a given $\theta_\mathsf{up}$, the lower the cross-entropy loss of reference data sample, the lower the entropy of prediction of $\theta_\mathsf{up}$ on the sample, i.e., \eqref{eq:sim_ref_obj1}$\rightarrow$ \eqref{eq:final_ref_obj}.
Then, we show that, the difference between cross-entropy losses of two models $\theta_\textsf{up}$ and $\theta'_\textsf{up}$, trained on neighboring datasets, on a sample increases with the increase in cross-entropy loss of their prediction on the sample, i.e., \eqref{eq:ref_obj2} $\rightarrow$ \eqref{eq:sim_ref_obj1}.
This, in combination with the reduction \eqref{eq:ref_obj} $\rightarrow$ \eqref{eq:ref_obj2} demonstrated in Figure \ref{fig:kl_to_ce}, completes the validation of \eqref{eq:ref_obj} $\rightarrow$ \eqref{eq:sim_ref_obj1}.
Figure~\ref{fig:hypothesis_eval_1} validates our hypothesis.
\section{Missing Details of Experimental Setup}\label{exp_setup}
\subsection{Computing environment}\label{setup:computing}
We will make our code and all the relevant datasets (all the datasets used are already available online) publicly available upon acceptance of the submission.
We perform all of our experiments using PyTorch 1.2 framework on TitanX GPU of 12GB memory.
All the experimental results in the paper are averages over three runs of the corresponding experimental setting.
\subsection{Target model architectures}\label{setup:architecutres}
Unlike conventional distillation \cite{hinton2014distilling}, DMP uses the same architecture for the unprotected and protected models.
Needless to say, using a lower-capacity architecture for the protected model will further improve privacy protection at the cost of reducing utility (prediction accuracy).
The details of the architectures for all the datasets are given in Table \ref{table:exp_setup}.
For Purchase-100 and Texas-100, the fully connected network has hidden layers of sizes \{1024, 512, 256, 128\}.
For CIFAR-100, we choose two DenseNet models to assess the efficacy of DMP for two models with equivalent performance, but significantly different capacities.
In Table \ref{table:exp_setup}, DenseNet12 corresponds to DenseNet-BC (L=100, k=12) and DenseNet19 corresponds to DenseNet-BC (L=190, k=40).
For the comparison with PATE using CIFAR-10, we use the generator and discriminator architectures used in \cite{salimans2016improved}.
\section{Membership inference against highly susceptible classes}\label{appendix:worse_members}
In this section, we elaborate on the membership inference resistance that the DMP and other defenses provide to the CIFAR-10 classes with different susceptibility to membership inference.
Specifically, we measure the membership inference resistance of different classes by plotting ROC curves, and show that DMP-trained models not only provide on-average privacy, but also protect the classes that are highly susceptible to membership inference when no defense is used.
We perform the same measurements for the DP-SGD and adversarial regularization defenses to show that, for models with equivalent generalization error, the disparity of susceptibility to membership inference across CIFAR-10 classes is similar to that of our DMP defense.
\begin{figure}
\centering
\begin{tabular}{cc}
\hspace{-2em}
\subfloat{\input{new_tex_figures/cifar10_roc_baseline_redacted}
}
\hspace{-1.5em}
&
\subfloat{\input{new_tex_figures/cifar10_roc_dmp_redacted}
}
\\
\hspace{-2em}
\subfloat{\input{new_tex_figures/cifar10_roc_dp_redacted}
}
\hspace{-1.5em}
&
\subfloat{\input{new_tex_figures/cifar10_roc_advtune_redacted}
}
\end{tabular}
\caption{ROC curves of membership inference against individual CIFAR-10 classes for models trained without defense, with DMP, with DP-SGD, and with adversarial regularization.}
\label{fig:disparity}
\end{figure}
\section{Detailed comparison with PATE}\label{appendix:pate_details}
In this section, we detail the experimental comparison between PATE \cite{papernot2018scalable,papernot2017semi} and our DMP defense for CIFAR10 classification task.
The motivation of this comparison is to show that the DMP-trained models achieve significantly better tradeoffs between membership privacy (i.e., resistance to membership inference attacks) and classification accuracy than the PATE-trained models.
PATE relies on semi-supervised learning that uses a large unlabeled dataset.
PATE computes the labels of a subset of the unlabeled data using an ensemble of teachers.
Each of the teachers is trained on a disjoint set of the private training dataset; all sets have the same size.
Semi-supervised learning involves an unstable game between a generator $G$ and a discriminator $D$. Hence, the architectures of $G$ and $D$ should be compatible for effective learning.
Therefore, instead of AlexNet, which we use in the rest of our CIFAR10 experiments, we use the pair of discriminator and generator architectures proposed in \cite{salimans2016improved} due to its state-of-the-art classification performance.
Finally, PATE uses its discriminator as the classification model.
For both PATE and DMP, we use the same 25,000 samples of CIFAR10 as the private training data and the remaining 25,000 samples as the unlabeled reference data.
The accuracy of the discriminator trained on the entire private training data is 97.65\% and 79.6\% on training and test data, respectively.
We use the 25,000 \emph{training} data to train three ensembles of sizes 5, 10 and 25 teachers.
Each teacher in every ensemble has disjoint, equal-sized training data.
The accuracy, \emph{without adding any noise to labels}, of the corresponding ensembles on the 25,000 \emph{reference} samples is 64.92\%, 60.1\% and 54.52\%, respectively.
We use the confident-GNMax (GNMax) aggregation scheme to add DP noise to the aggregate of the votes (i.e., hard labels) of the teachers on the unlabeled reference data.
GNMax labels samples based on the remaining privacy budget; hence, it may not label all the reference data samples.
GNMax aggregation scheme is similar to the sparse vector technique \cite{dwork2014algorithmic} and outputs a label only if the noisy version of the votes count of the label crosses a noisy version of a fixed threshold.
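A minimal sketch of this aggregation logic (our own simplification of confident-GNMax; the threshold and noise scales below are placeholders, and the privacy accounting of \cite{papernot2018scalable} is omitted):

```python
import random

def confident_gnmax(votes, threshold, sigma1, sigma2, rng=random):
    # votes[j] = number of teachers voting for class j on one sample.
    # Step 1: answer only if the noisy max vote count clears a noisy threshold.
    if max(votes) + rng.gauss(0.0, sigma1) < threshold:
        return None  # abstain; the sample stays unlabeled
    # Step 2: release the argmax of the Gaussian-noised vote histogram.
    noisy = [v + rng.gauss(0.0, sigma2) for v in votes]
    return max(range(len(noisy)), key=lambda j: noisy[j])
```

Only samples on which many teachers agree get labeled, which is why lower $\epsilon$ (more noise and a higher threshold) leaves more of the reference data unlabeled in Table \ref{table:pate_comparison}.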
Table \ref{table:pate_comparison} details the accuracy of the GNMax aggregation for different number of teachers and privacy levels $(\epsilon,\delta)$.
We use $\delta$ of $10^{-4}$ as the order of the size of the reference data is $10^{4}$ \cite{papernot2018scalable}.
Note that, the DMP-trained discriminator has training, test, and attack accuracies of 77.98\%, 76.79\%, and 50.8\%, respectively.
Table \ref{table:pate_comparison} shows results for PATE with teacher ensembles of different sizes:
At low $\epsilon$ values, GNMax cannot provide many labels, and therefore, PATE suffers significant accuracy degradations.
While at high $\epsilon$ values ($>$1000), GNMax performs better, but it still does not outperform DMP.
The reason for this is as follows:
At very high $\epsilon$'s, PATE reduces to knowledge-transfer-based semi-supervised learning, while DMP is knowledge-transfer-based supervised learning.
DMP does not divide its training data among teachers, and therefore, the predictions of the unprotected model used in DMP to train the protected model are more useful in terms of both quality and quantity.
Therefore, DMP-trained models have significantly higher accuracy than PATE-trained models for similar membership inference risk, i.e., DMP achieves significantly better membership privacy-model utility tradeoffs.
\section{Statistical Indistinguishability due to DMP}\label{appendix:dmp_stats}
\begin{figure*}
\vspace*{-2em}
\centering
\resizebox{17cm}{2.5cm}{
\begin{tabular}{ccc}
\hspace{-1.5em}
\subfloat{\input{new_tex_figures/bw_cifar100_dense12_training_2.0}
}
&
\hspace{-2em}
\subfloat{\input{new_tex_figures/bw_cifar100_dense12_training_4.0}
}
&
\hspace{-2em}
\subfloat{\input{new_tex_figures/bw_cifar100_dense12_training_6.0}
}
\end{tabular}
}
\vspace{-1em}
\caption{Impact of softmax temperature on training of $\theta_\textsf{p}$:
Increase in the temperature of softmax layer of $\theta_\textsf{up}$ reduces $\Delta\mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}$ in \eqref{eq:obj_ratio2}, and hence, the ratio $\mathcal{R}$ in \eqref{eq:obj_ratio}. This improves the membership privacy and generalization of $\theta_\textsf{p}$.}
\label{fig:training}
\vspace*{-2em}
\end{figure*}
\begin{figure*}
\centering
\resizebox{16cm}{4.5cm}{
\begin{tabular}{ccc}
\hspace{-3em}
\subfloat{\input{tex_figures/cifar100_alexnet_up_grads}
}
&
\hspace{-1em}
\subfloat{\input{tex_figures/cifar100_densenet_up_grads}
}
&
\hspace{-1.5em}
\subfloat{\input{tex_figures/purchase_up_grads}
}
\\[-.7ex]
\hspace{-3em}
\subfloat{\input{tex_figures/cifar100_alexnet_dfp_grads}
}
&
\hspace{-1em}
\subfloat{\input{tex_figures/cifar100_densenet_dfp_grads}
}
&
\hspace{-1.5em}
\subfloat{\input{tex_figures/purchase_dfp_grads}
}
\end{tabular}}
\vspace{-1em}
\caption{
Distributions of gradient norms of members and non-members of private training data.
(\emph{Upper row}): Unlike the distribution of non-members, that of the members of the unprotected model, $\theta_\textsf{up}$, is skewed towards 0 as $\theta_\textsf{up}$ memorizes the members.
(\emph{Lower row}): The distributions of gradient norms for members and non-members for the protected model, $\theta_\textsf{p}$, of DMP are almost indistinguishable.
}
\label{fig:gradients}
\vspace*{-2em}
\end{figure*}
\begin{figure*}
\centering
\resizebox{17cm}{2.5cm}
{\begin{tabular}{ccc}
\hspace{-2.5em}
\subfloat{\input{new_tex_figures/bw_gen_err_purchase_fc}
}
&
\hspace{-1.5em}
\subfloat{\input{new_tex_figures/bw_gen_err_cifar100_alexnet}
}
&
\hspace{-1.5em}
\subfloat{\input{new_tex_figures/bw_gen_err_cifar100_dense12}
}
\end{tabular}}
\vspace{-1em}
\caption{The empirical CDF of the generalization error of models trained with DMP, adversarial regularization (AdvReg), and without defense. The y-axis is the fraction of classes that have generalization error less than the values on x-axis. The generalization error reduction due to DMP is much larger ($10\times$ for CIFAR100 and $2\times$ for Purchase) than due to AdvReg.
The low generalization error improves membership privacy due to DMP.
}
\label{fig:generalization_error}
\vspace*{-2em}
\end{figure*}
\begin{table*}
\fontsize{8.5}{9}\selectfont{}
\begin{center}
\setlength{\extrarowheight}{0.01cm}
\hspace{-2em}
\begin{tabular}{ |c|c|c|c|c|c|c|c|c| }
\hline
\multicolumn{3}{|c|}{{Experimental setup}} & \multicolumn{6}{c|}{{Near-equal $A_\mathsf{test}$ as DMP}} \\ \hline
Dataset & Model & Regularization & $E_\text{gen}$ & $A_\text{test}$ & $A_\textsf{wb}$ & $A_\textsf{bb}$ & $A_\textsf{bl}$ & $A_\textsf{nn}$ \\ \hline \hline
\multirow{4}{*}{Purchase} & \multirow{4}{*}{FC} & WD & 21.7 & 78.1 & 69.7 & 70.1 & 60.9 & 55.6 \\
& & WD + DR & 22.1 & 77.4 & 77.1 & 76.8 & 61.5 & 60.0 \\
& & WD + LS & 21.1 & 78.4 & 76.5 & 76.8 & 60.6 & 56.4 \\
& & WD + CP & 22.9 & 76.9 & 70.1 & 70.5 & 61.5 & 58.5 \\ \hline\hline
\multirow{4}{*}{Texas} & \multirow{4}{*}{FC} & WD & 49.0 & 50.4 & 84.1 & 82.1 & 74.5 & 56.2 \\
& & WD + DR & 41.1 & 52.1 & 82.1 & 81.2 & 70.6 & 60.2 \\
& & WD + LS & 50.9 & 49.1 & 86.0 & 85.7 & 75.5 & 56.9 \\
& & WD + CP & 45.5 & 54.2 & 90.4 & 90.2 & 72.8 & 65.6 \\ \hline\hline
\multirow{4}{*} {CIFAR100} & \multirow{4}{*} {DenseNet12} & WD & 31.0 & 67.8 & 72.9 & 72.9 & 65.5 & N/A \\
& & WD + DR & 31.0 & 68.2 & 73.7 & 73.6 & 65.5 & N/A \\
& & WD + LS & 31.6 & 68.0 & 70.3 & 70.1 & 65.8 & N/A \\
& & WD + CP & 31.1 & 67.5 & 74.3 & 74.7 & 65.6 & N/A \\ \hline\hline
\multirow{4}{*} {CIFAR10} & \multirow{4}{*} {AlexNet} & WD & 31.0 & 68.9 & 73.2 & 73.3 & 65.5 & N/A \\
& & WD + DR & 30.6 & 69.4 & 73.8 & 73.4 & 65.3 & N/A \\
& & WD + LS & 29.9 & 69.9 & 74.8 & 75.0 & 65.5 & N/A \\
& & WD + CP & 29.9 & 70.0 & 70.6 & 71.1 & 65.5 & N/A \\ \hline
\end{tabular}
\end{center}
\vspace*{-1em}
\caption{Generalization error ($E_\textsf{gen}$), test accuracy ($A_\textsf{test}$), and various MIA risks (evaluated using MIAs from Section~\ref{setup:attacks}) of models trained using state-of-the-art regularization techniques. Here we provide MIA risks for regularized models whose accuracy is close to that of DMP-trained models. We note that, for the same test accuracy, DMP-trained models provide significantly higher resistance to MIAs.}
\label{table:regularization_comparison_eq_acc}
\end{table*}
In this section, we show the indistinguishability of the statistics of different features of the target models trained with and without defenses, on the members and non-members of their training data.
Such indistinguishability is necessary to hinder membership inference attacks (MIAs)~\cite{shokri2017membership}.
\paragraphb{Effect of softmax temperature. }
Figure \ref{fig:training} shows the effect of the softmax temperature, $T$, of the \emph{unprotected model}, $\theta_\mathsf{up}$, on the training and test accuracies of the protected model, $\theta_\textsf{p}$.
As expected, we observe in Figure \ref{fig:training} that with the increase in the softmax temperature of $\theta_\textsf{up}$, the generalization error of $\theta_\textsf{p}$ decreases.
From left to right, the generalization errors of $\theta_\textsf{p}$ when the softmax temperatures of $\theta_\textsf{up}$ are set at 2, 4, and 6 are 4.7\% (66.3, 61.6), 3.6\% (66.7, 63.1), and 0.8\% (55.7, 54.9), respectively; parentheses show the corresponding training and test accuracies, respectively.
We keep the temperature of softmax layer in $\theta_\textsf{p}$ constant at 4.0.
This reduction in generalization error improves membership privacy.
\paragraphb{Indistinguishability of gradient norms. }
To assess the efficacy of DMP against the stronger whitebox MIAs (Nasr et al.~\shortcite{nasr2019comprehensive}), we study the gradients of the loss of the predictions of the unprotected and protected models on members and non-members of the private training data, $D_\textsf{tr}$.
Figure \ref{fig:gradients} shows the fraction of members and non-members given on y-axes that fall in a particular range of gradient norm values given on x-axes.
Gradients are computed with respect to the parameters of the given model.
We note that the distribution of the gradient norms of the unprotected model (upper row) is heavily skewed to the left for the members, i.e., towards lower gradient norm values, unlike that for the non-members.
This is because $\theta_\textsf{up}$ memorizes $D_\textsf{tr}$, so its loss, and the gradient of the loss, on the members are very small compared to the non-members.
However, for the protected model both members and non-members are evenly distributed across a large range of gradient norm values.
This implies that \emph{DMP significantly reduces the unintended memorization of $D_\textsf{tr}$ in the model parameters}.
Hence, DMP significantly reduces (by 27.6\%) the MIA risk to the large-capacity DenseNet19.
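The member/non-member gap in Figure \ref{fig:gradients} can be reproduced in miniature: for a softmax classifier with cross-entropy loss, the gradient with respect to the logits is $p - \mathrm{onehot}(y)$, so a confidently correct (i.e., memorized) sample yields a near-zero norm. A toy sketch (ours; the example logits are made up):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def grad_norm(logits, label):
    # ||p - onehot(label)||_2: norm of the cross-entropy gradient w.r.t. the logits.
    # Memorized members are predicted confidently and correctly, so this norm
    # is skewed towards 0 for members of the unprotected model's training data.
    p = softmax(logits)
    g = [pi - (1.0 if i == label else 0.0) for i, pi in enumerate(p)]
    return math.sqrt(sum(gi * gi for gi in g))
```

The whitebox attack of \cite{nasr2019comprehensive} exploits exactly this gap; DMP closes it because $\theta_\textsf{p}$ is never trained directly on $D_\textsf{tr}$.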
\paragraphb{Indistinguishability of train and test accuracies. }
In Figure \ref{fig:generalization_error}, we show the cumulative fraction of classes on the y-axis for which the generalization error of the target models is less than the corresponding value on the x-axis; the closer the curve to the line $x=0$, the lower the generalization error.
Figure \ref{fig:generalization_error} implies that the models trained using DMP have significantly lower generalization error than those trained using adversarial regularization or without any defense.
We observe that, with the no defense case as the baseline, \emph{the generalization error reduction using DMP is more than twice that using adversarial regularization}.
DMP reduces the error by half for Purchase and by $10\times$ for CIFAR100.
\section{Missing experimental details}\label{missing_exp}
\subsubsection{Best tradeoffs due to adversarial regularization}\label{missing_exp:adv}
Table~\ref{table:advreg_best_tradeoffs} gives the results for best tradeoffs due to adversarial regularization that we obtain by tuning its $\lambda$ parameter~\cite{nasr2018machine}.
\begin{table}
\fontsize{8.5}{9}\selectfont{}
\begin{center}
\setlength{\extrarowheight}{0.01cm}
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
{Dataset} & \multicolumn{6}{c|}{Adversarial regularization} \\ \cline{2-7}
\& model & $E_\textsf{gen}$ & $A_\textsf{test}$ & $A_\textsf{wb}$ & $A_\textsf{bb}$ & $A_\textsf{bl}$ & $A_\textsf{nn}$ \\ \hline
P-FC & 22.4 & 68.1 & 62.3 & 61.9 & 61.4 & 51.4 \\ \cline{1-7}
T-FC & 15.5 & 45.3 & 66.8 & 66.3 & 57.8 & 51.2 \\ \cline{1-7}
C100-A & 50.9 & 31.6 & 79.3 & 78.3 & 75.5 & N/A \\ \cline{2-7}
C100-D12 & 19.4 & 58.4 & 61.9 & 61.7 & 59.7 & N/A \\ \cline{2-7}
C100-D19 & 30.8 & 53.7 & 69.5 & 68.7 & 65.4 & N/A \\ \cline{1-7}
C10-A & 29.8 & 62.6 & 65.2 & 65.0 & 64.9 & N/A \\ \cline{1-7}
\end{tabular}
\end{center}
\vspace*{-1em}
\caption{Best tradeoffs between test accuracy ($A_\mathsf{test}$) and membership inference risks (evaluated using MIAs from Section~\ref{setup:attacks}) due to adversarial regularization. DMP significantly improves the tradeoffs over the adversarial regularization (results for DMP are in Table~\ref{table:performance_comparison}).
}
\label{table:advreg_best_tradeoffs}
\vspace*{-1em}
\end{table}
\subsubsection{Best tradeoffs due to other regularizations}\label{missing_exp:other}
We see from the `Near-equal $A_\textsf{test}$' columns in Table~\ref{table:regularization_comparison_eq_acc} that all regularization techniques improve the classification performance over the corresponding accuracies of baseline models from the Table 2 of main paper.
However, they reduce overfitting negligibly: the maximum reduction in $E_\text{gen}$ due to the regularizations is 1.8\% for Purchase, 10.2\% for Texas, 3.8\% for CIFAR100, and 2.6\% for CIFAR10.
This is because these techniques aim to produce models that generalize better to test data,
but they do not necessarily reduce the memorization of the private training data by the models.
Consequently, these techniques fail to reduce the membership inference risk: the maximum reduction in $A_\textsf{wb}$ due to the regularizations is 7\% for Purchase, 1.9\% for Texas, 1.9\% for CIFAR100, and 6.8\% for CIFAR10.
Note that, the confidence penalty and the label smoothing techniques reduce the inference risk, but not the generalization error.
This is because the corresponding models have smoother output distributions, which makes members and non-members harder to distinguish than for models trained without any defense.
\section{Conclusions}
We proposed Distillation for Membership Privacy (DMP), a knowledge distillation based defense against membership inference attacks that significantly improves the membership privacy-model utility tradeoffs compared to state-of-the-art defenses. We provided a novel criterion to generate/select reference data in DMP and achieve the desired tradeoffs. Our extensive evaluation demonstrated the state-of-the-art privacy-utility tradeoffs of DMP.
\section{Experimental Setup}\label{exp_setup}
\subsection{Datasets and target model architectures}
We use four datasets and corresponding model architectures that are consistent with the previous works~\cite{shokri2017membership,nasr2019comprehensive,nasr2018machine,salem2019ml}.
\paragraphb{Purchase}~\cite{purchase} is a 100-class classification task with 197,324 binary feature vectors of length 600; each dimension corresponds to a product and its value states whether the corresponding customer purchased the product; the label represents the shopping habit of the customer.
\paragraphb{Texas}~\cite{texas} is a dataset of patient records. It is a 100-class classification task with 67,300 binary feature vectors of length 6,170; each dimension corresponds to a symptom and its value states whether the corresponding patient has the symptom; the label represents the treatment given to the patient.
For Purchase and Texas we use fully connected (FC) networks.
\paragraphb{CIFAR10 and CIFAR100}~\cite{krizhevsky2009learning} are popular image classification datasets; each contains 50k color images of size $32\times 32$. We use Alexnet, DenseNet-12 (with 0.77M parameters), and DenseNet-19 (with 25.6M parameters) models for CIFAR100, and Alexnet for CIFAR10.
Following previous works, \emph{we measure the test accuracy of the target models as their utility}.
\paragraphb{Sizes of dataset splits. }
The dataset splits are given in Table~\ref{tab:data_sizes}. For the Purchase and Texas tasks, we use $D_\mathsf{tr}$ of size 10k and \emph{select} $X_\mathsf{ref}$ of size 10k from the remaining data using our entropy-based criterion. For the CIFAR datasets, we use $D_\mathsf{tr}$ of size 25k and, due to the small sizes of these datasets, use the entire remaining 25k samples as $X_\mathsf{ref}$.
The `Attack training' (described shortly) column shows the MIA adversary's knowledge of members and non-members of $D_\mathsf{tr}$. Following all the previous works, we assume that the adversary knows 50\% of $D_\mathsf{tr}$.
Further experimental details are provided in Appendix.
\begin{table}[h]
\fontsize{8.5}{9}\selectfont{}
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
\multirow{2}{*}{Dataset}& \multicolumn{2}{c|}{DMP training} & \multicolumn{2}{c|}{Attack training} \\ \cline{2-5}
& $|D_\textsf{tr}|$ & $|X_\textsf{ref}|$ & $|D|$ & $|D'|$ \\ \hline
Purchase (P) & 10000 & 10000 & 5000 & 5000 \\
Texas (T) & 10000 & 10000 & 5000 & 5000 \\
CIFAR100 (C100) & 25000 & 25000 & 12500 & 8000 \\
CIFAR10 (C10)& 25000 & 25000 & 12500 & 8000 \\ \hline
\end{tabular}
\vspace*{-.6em}
\caption{All the dataset splits are disjoint. $D$ and $D'$ are the members and non-members of $D_\textsf{tr}$ known to the MIA adversary.}
\label{tab:data_sizes}
\end{center}
\vspace*{-2.5em}
\end{table}
\vspace{-.5em}
\subsection{Membership inference attacks}\label{setup:attacks}
We briefly review the four MIAs we use for evaluations. Following previous works, \emph{we use the accuracy of MIAs on target models as a measure of their membership privacy}.
\paragraphb{Bounded loss (BL) attack}~\cite{yeom2018privacy} decides membership using a threshold on the target model's loss on the target sample. When 0-1 loss is used, the attack accuracy is simply the difference in training and test accuracy of target model. We denote BL attack accuracy by $A_\mathsf{bl}$.
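The BL decision rule can be sketched as a simple threshold test; the following is an illustrative rendering (function names and the toy numbers are ours, not from the attack's original code):

```python
import numpy as np

def bl_attack(losses, tau):
    """Bounded-loss rule: guess 'member' when the loss falls below tau."""
    return np.asarray(losses) < tau

def bl_attack_accuracy(member_losses, nonmember_losses, tau):
    """Balanced attack accuracy over equal-sized member/non-member sets."""
    tp = np.mean(bl_attack(member_losses, tau))       # members guessed member
    tn = np.mean(~bl_attack(nonmember_losses, tau))   # non-members rejected
    return 0.5 * (tp + tn)

# With the 0-1 loss, the rule reduces to "member iff classified correctly",
# so the balanced attack accuracy equals 1/2 + (A_train - A_test)/2: the
# advantage over random guessing is half the generalization gap.
member_losses = np.array([0., 0., 0., 0., 1.])      # 0-1 losses: A_train = 0.8
nonmember_losses = np.array([0., 0., 1., 1., 1.])   # 0-1 losses: A_test = 0.4
acc = bl_attack_accuracy(member_losses, nonmember_losses, tau=0.5)
```

In this toy example the balanced accuracy comes out to $0.5 + (0.8-0.4)/2 = 0.7$, illustrating why the attack's strength tracks the generalization gap.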
\paragraphb{NN attack}~\cite{salem2019ml} uses a \emph{shadow dataset} $d_s$ drawn from the same distribution as $D_\mathsf{tr}$. The attacker splits $d_s$ into $d'_s$ and $d''_s$, trains a \emph{shadow model} $\theta_s$ on $d'_s$, computes the predictions of $\theta_s$ on $d'_s$ and $d''_s$, labels the predictions on $d'_s$ as members and those on $d''_s$ as non-members, and trains a binary attack model on these predictions. We denote NN attack accuracy by $A_\textsf{nn}$. Due to the small sizes of the CIFAR datasets, no shadow dataset is available for them, hence we omit the NN attack evaluation for CIFAR datasets.
\paragraphb{NSH attacks}~\cite{nasr2019comprehensive} are similar to NN attacks. They concatenate various whitebox (e.g., model gradients) and/or blackbox (e.g., model loss, predictions) features of target model, while NN attack uses only the target model predictions. We denote whitebox and blackbox NSH attack accuracies by $A_\textsf{wb}$ and $A_\textsf{bb}$, respectively. For NN and NSH attacks, we use the same attack models as the original works.
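To make the shadow-model pipeline concrete, here is a heavily simplified sketch: the binary attack model is reduced to a learned threshold on the shadow model's prediction confidence (the actual NN attack trains a small neural network on the full prediction vector); all names and numbers are illustrative:

```python
import numpy as np

def fit_threshold(member_conf, nonmember_conf):
    """Pick the confidence threshold that best separates the shadow model's
    prediction confidences on its members (d'_s) vs non-members (d''_s)."""
    cands = np.unique(np.concatenate([member_conf, nonmember_conf]))
    best_t, best_acc = cands[0], 0.0
    for t in cands:
        acc = 0.5 * (np.mean(member_conf >= t) + np.mean(nonmember_conf < t))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return float(best_t)

def attack(target_conf, threshold):
    """Guess 'member' when the target model is at least as confident."""
    return np.asarray(target_conf) >= threshold

member_conf = np.array([0.99, 0.97, 0.95])     # shadow model on d'_s
nonmember_conf = np.array([0.60, 0.55, 0.90])  # shadow model on d''_s
t = fit_threshold(member_conf, nonmember_conf)
```

The learned threshold is then applied to the target model's confidences on the samples whose membership is in question.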
\section{Experiments}\label{exp}
\subsection{Comparison with regularization techniques} \label{exp:regularizations}
Regularization improves the generalization of ML models and, hence, reduces MIA risk~\cite{shokri2017membership}.
We therefore compare DMP with five regularization defenses, including the state-of-the-art MIA defense, adversarial regularization~\cite{nasr2018machine}.
In all tables, $E_\textsf{gen}$ is generalization error, i.e., ($A_\textsf{train}-A_\textsf{test}$), where $A_\textsf{train}$ and $A_\textsf{test}$ are train and test accuracies of the target model, respectively.
$A^+_\mathsf{test}$ gives the \% increase in $A_\mathsf{test}$ due to DMP over the other regularizers.
$A_\textsf{wb}$, $A_\textsf{bb}$, $A_\textsf{bl}$, $A_\textsf{nn}$ are accuracies of various attacks discussed in the previous section.
Table~\ref{table:baseline} shows accuracies of models trained without any defense; CIFAR models have lower than state-of-the-art accuracies due to smaller training datasets.
\subsubsection{Comparison with adversarial regularization (AdvReg).}
Table \ref{table:performance_comparison} compares $A_\mathsf{test}$ of DMP and AdvReg models, for similar MIA accuracies (i.e., membership privacy). As expected, these models also have similar $E_\mathsf{gen}$'s.
However, $A_\mathsf{test}$ of DMP models is significantly higher than that of AdvReg models; the $A^+_\mathsf{test}$ column shows the \% increase in $A_\mathsf{test}$ due to DMP over AdvReg:
accuracy improvements due to DMP over AdvReg are close to 100\% for CIFAR100, and 20\% to 45\% for the other datasets.
AdvReg uses the accuracy of an MIA model as a regularizer and trains its target models to fool that MIA model.
However, AdvReg allows its target models to directly access $D_\mathsf{tr}$. Hence, to effectively fool the MIA model, it puts a relatively large weight on the regularization loss term, which reduces the impact of the main-task loss and lowers the accuracy of AdvReg models.
DMP uses appropriate reference data to transfer the knowledge of $D_\mathsf{tr}$ to its target models without allowing them direct access.
Hence, DMP significantly outperforms AdvReg in terms of privacy-utility tradeoffs.
\begin{table}
\fontsize{8}{9}\selectfont{}
\begin{center}
\setlength{\extrarowheight}{0.03cm}
\hspace{-2em}
\begin{tabular}{ |c|c|c|c|c|c|c| }
\hline
\multicolumn{7}{|c|}{{Purchase + FC} (DMP's $A_\textsf{test}$ = 74.1)} \\ \hline
{Regularizer} & {$E_\textsf{gen}$} & {$A_\textsf{test}$} & {$A^{+}_\textsf{test}$} & $A_\textsf{wb}$ & $A_\textsf{bb}$ & $A_\textsf{bl}$ \\ \hline
WD & 10.3 & 42.5 & +\textbf{74.4}\% & 54.9 & 55.4 & 55.2 \\ \hline
WD + DR & 9.1 & 42.1 & +\textbf{76.0}\% & 56.4 & 56.8 & 54.6 \\ \hline
WD + LS & 12.3 & 42.0 & +\textbf{76.4}\% & 57.2 & 57.0 & 56.2 \\ \hline
\hline
\multicolumn{7}{|c|}{{Texas + FC} (DMP's $A_\textsf{test}$ = 48.6)} \\ \hline
{Regularizer} & {$E_\textsf{gen}$} & {$A_\textsf{test}$} & {$A^{+}_\textsf{test}$} & $A_\textsf{wb}$ & $A_\textsf{bb}$ & $A_\textsf{bl}$ \\ \hline
WD & 5.0 & 22.5 & +\textbf{116}\% & 58.3 & 57.7 & 52.5 \\ \hline
WD + DR & 6.1 & 14.2 & +\textbf{242}\% & 63.1 & 62.6 & 53.1 \\ \hline
WD + LS & 8.3 & 37.3 & +\textbf{30}\% & 61.7 & 61.0 & 54.2 \\ \hline
\hline
\multicolumn{7}{|c|}{{CIFAR100 + DenseNet-12} (DMP's $A_\textsf{test}$ = 63.1)} \\ \hline
{Regularizer} & {$E_\textsf{gen}$} & {$A_\textsf{test}$} & {$A^{+}_\textsf{test}$} & $A_\textsf{wb}$ & $A_\textsf{bb}$ & $A_\textsf{bl}$ \\ \hline
WD & 4.0 & 26.3 & +\textbf{140}\% & 49.9 & 49.7 & 52.0 \\ \hline
WD + DR & 3.7 & 32.3 & +\textbf{95.4}\% & 51.2 & 51.0 & 51.9 \\ \hline
WD + LS & 2.7 & 13.0 & +\textbf{385}\% & 51.0 & 51.4 & 51.4 \\ \hline
\hline
\multicolumn{7}{|c|}{{CIFAR10 + Alexnet} (DMP's $A_\textsf{test}$ = 65.0)} \\ \hline
{Regularizer} & {$E_\textsf{gen}$} & {$A_\textsf{test}$} & {$A^{+}_\textsf{test}$} & $A_\textsf{wb}$ & $A_\textsf{bb}$ & $A_\textsf{bl}$ \\ \hline
WD & 4.1 & 45.9 & +\textbf{41.6}\% & 52.4 & 52.5 & 52.1 \\ \hline
WD + DR & 3.2 & 44.7 & +\textbf{45.4}\% & 51.9 & 51.7 & 51.6 \\ \hline
WD + LS & 4.8 & 53.2 & +\textbf{22.2}\% & 53.8 & 53.0 & 52.4 \\ \hline
\end{tabular}
\vspace*{-.6em}
\caption{Evaluating three state-of-the-art regularizers, with similar, low MIA risks (high membership privacy) as DMP.
$A^{+}_\textsf{test}$ shows \emph{the \% increase} in $A_\textsf{test}$ due to DMP over the corresponding regularizers.
}
\label{table:regularization_comparison}
\end{center}
\vspace*{-2em}
\end{table}
\subsubsection{Comparison with other regularizers.}
Next, we compare DMP with four state-of-the-art regularizers: weight decay (WD), dropout \cite{srivastava2014dropout} (DR), label smoothing \cite{szegedy2016rethinking} (LS), and confidence penalty \cite{pereyra2017regularizing} (CP). Due to the poor MIA resistance of CP, we defer its results to Appendix.
Table~\ref{table:regularization_comparison} shows the results when the MIA risk of the regularized models is close to that of the DMP models (Table~\ref{table:performance_comparison}).
We note that, in all cases, $A_\mathsf{test}$ of DMP is significantly higher (up to a 385\% increase, as the $A^+_\mathsf{test}$ column shows) than that of the other regularizers.
This is because these regularizers aim to improve the test accuracy of target models but are not designed to reduce MIA risk. Thus, to reduce MIA risk, these regularization techniques must add large, suboptimal noise during training, which hurts the utility of the resulting models.
\begin{table}[h]
\fontsize{8.5}{9}\selectfont{}
\setlength{\extrarowheight}{0cm}
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
\multirow{2}{*}{Defense} & Privacy & \multirow{2}{*}{$E_\mathsf{gen}$} & \multirow{2}{*}{$A_\mathsf{test}$} & \multirow{2}{*}{$A_\mathsf{wb}$} \\
& budget $(\epsilon)$ & & & \\ \hline
No defense & -- & 32.5 & 67.5 & 77.9 \\ \hline
DMP & -- & \cellcolor[gray]{0.8}3.10 & \cellcolor[gray]{0.8}65.0 & \cellcolor[gray]{0.8}51.3 \\ \hline
\multirow{4}{*}{ DP-SGD } & 198.5 & \cellcolor[gray]{0.8}3.60 & \cellcolor[gray]{0.8}52.2 & \cellcolor[gray]{0.8}51.7 \\
& 50.2 & 1.30 & 36.9 & 50.2\\
& 12.5 & 0.30 & 31.7 & 50.0 \\
& 6.8 & -1.60 & 29.4 & 49.9 \\
\hline
\end{tabular}
\vspace{-.6em}
\caption{DP-SGD versus DMP for CIFAR10 and Alexnet. For a low MIA risk of $\sim51.3$\%, DMP achieves 24.5\% higher $A_\mathsf{test}$ than DP-SGD (a 12.8\% absolute increase in $A_\mathsf{test}$).}
\label{table:dp_sgd}
\end{center}
\vspace*{-1.2em}
\end{table}
\subsection{Comparison with differentially private defenses}\label{exp:dp}
\subsubsection{Comparison with DP-SGD.}
Following the methodology of~\cite{jayaraman2019evaluating}, we compare DMP and DP-SGD~\cite{abadi2016deep} using the empirically observed tradeoffs between membership privacy (MIA resistance) and $A_\mathsf{test}$ of models.
We use only CIFAR10 for these experiments, as DP-SGD achieves prohibitively low accuracies on difficult tasks such as Texas and CIFAR100. We evaluate MIA risk using the whitebox NSH attack.
Table \ref{table:dp_sgd} shows the results for Alexnet trained on CIFAR10 using DMP and DP-SGD with different privacy budgets $\epsilon$; a negative $E_\mathsf{gen}$ means that $A_\mathsf{train}$ is lower than $A_\mathsf{test}$.
DP-SGD incurs a significant (35\%) loss in $A_\mathsf{test}$ at a low $\epsilon$ (12.5) to provide strong membership privacy.
At higher $\epsilon$, $A_\mathsf{test}$ of DP-SGD increases, but at the cost of a very high generalization error, which facilitates stronger MIAs.
Note that further increasing the privacy budget $\epsilon$ does not improve the tradeoff of DP-SGD.
More importantly, {for low MIA risk of $\sim$ 51.3\%, DMP models have 12.8\% higher $A_\mathsf{test}$ (i.e., 24.5\% improvement) than DP-SGD models}, which shows the superior tradeoffs due to DMP.
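For reference, one DP-SGD update can be sketched as follows: a simplified rendering of the usual recipe of per-example gradient clipping, averaging, and Gaussian noise calibrated to the clipping norm (parameter names and the toy gradients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(params, per_example_grads, lr, clip, sigma):
    """One DP-SGD update: clip each per-example gradient to L2 norm `clip`,
    average the clipped gradients, and add Gaussian noise whose standard
    deviation is calibrated to the clipping norm and batch size."""
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip / len(clipped), size=params.shape)
    return params - lr * (mean_grad + noise)

params = np.zeros(2)
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]  # L2 norms 5.0 and 0.5
new_params = dp_sgd_step(params, grads, lr=1.0, clip=1.0, sigma=0.0)
```

Larger noise multipliers (smaller $\epsilon$) perturb every update more heavily, which is the mechanism behind the accuracy losses in Table \ref{table:dp_sgd}.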
\subsubsection{Comparison with PATE.} PATE~\cite{papernot2017semi}, a semi-supervised learning technique, requires a compatible generator-discriminator pair to achieve acceptable performance. Hence, we use the CIFAR10 dataset and, instead of Alexnet, use the generator-discriminator pair from~\cite{salimans2016improved}, which has state-of-the-art performance.
PATE trains a set of teachers, computes hard labels of each teacher on some $X_\mathsf{ref}$, aggregates the labels for each $\mathbf{x}\in X_\mathsf{ref}$ using majority voting, adds DP noise to the aggregate, and finally trains its target model on the noisy aggregate.
We train ensembles of 5, 10, and 25 teachers using $D_\mathsf{tr}$ of size 25k.
We use the optimized confident-GNMax (GNMax) aggregation scheme of \cite{papernot2018scalable} to label $X_\mathsf{ref}$.
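The aggregation step can be sketched as a simplified noisy-max (the actual confident-GNMax additionally applies a noisy confidence check before answering a query; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_aggregate(teacher_labels, num_classes, sigma):
    """Histogram the teachers' hard labels, add Gaussian noise to each
    vote count, and return the argmax as the privatized aggregate label."""
    votes = np.bincount(np.asarray(teacher_labels), minlength=num_classes)
    votes = votes.astype(float) + rng.normal(0.0, sigma, size=num_classes)
    return int(np.argmax(votes))

# Five teachers vote on one query; with sigma=0 this is plain majority vote.
label = noisy_aggregate([1, 1, 2, 1, 0], num_classes=3, sigma=0.0)
```

The privacy cost grows with the number of answered queries, which is why Table \ref{table:pate} reports the number of queries answered alongside $\epsilon$.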
We present a subset of results in Table \ref{table:pate} and defer comprehensive comparison to Appendix.
At low $\epsilon$'s ($<$10), GNMax hardly produces any labels, hence, the final target model has very low $A_\mathsf{test}$, but at higher $\epsilon$'s ($>$1000), PATE target model has acceptable $A_\mathsf{test}$.
However, PATE cannot achieve performance close to that of DMP, as it divides $D_\mathsf{tr}$ among its teachers. Such teachers have significantly lower accuracies, and their ensemble cannot achieve accuracy close to that of DMP's unprotected model, which is trained on the entire $D_\mathsf{tr}$. Hence, the quality of the knowledge transferred in DMP is always higher than that in PATE.
\begin{table}[h]
\fontsize{8.5}{9}\selectfont{}
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
\centering
{\# of} & Queries & Privacy & \multicolumn{2}{c|}{Target model} & \multirow{2}{*}{$A_\mathsf{wb}$} \\
Teachers & answered & budget $(\epsilon)$ & $E_\mathsf{gen}$ & $A_\mathsf{test}$ & \\ \hline
\multirow{2}{*}{5} & 49 & 195.9 & 31.4 & 33.9 & 49.1 \\
& 1163 & 11684 &65.4 & 68.1 &49.0 \\ \hline
\multirow{2}{*}{10} & 23 & 42.9 &39.1 & 38.3 & 50.1\\
& 1527 & 6535 & 63.9& 65.2 & 49.8 \\ \hline
\multirow{2}{*}{25} & 108 & 183.5 & 53.8& 55.7 & 49.0 \\
& 4933 & 1794.1 & 57.8& 60.3 & 48.6\\ \hline
\end{tabular}
\vspace*{-0.6em}
\caption{
Comparing PATE with DMP.
DMP has $E_\mathsf{gen}$, $A_\mathsf{test}$, and $A_\mathsf{wb}$ of 1.19\%, 76.79\%, and 50.8\%, respectively.
PATE has low accuracy even at high privacy budgets, as it divides data among teachers and produces low accuracy ensembles.
}
\label{table:pate}
\end{center}
\vspace*{-2.5em}
\end{table}
\subsection{Discussions}
Below, we provide further key insights into the DMP defense and defer their detailed discussion to Appendix.
\subsubsection{Hyperparameter selection in DMP. } \emph{Increasing} the temperature of the softmax layer of the unprotected model $\theta_\mathsf{up}$, used to transfer its knowledge, can further reduce the membership leakage of $D_\mathsf{tr}$. This is because, at higher softmax temperatures, the predictions of $\theta_\mathsf{up}$ approach a uniform distribution over the classes and contain little useful information for MIAs.
Similarly, reducing the size of $X_\mathsf{ref}$ reduces MIA risk due to DMP, but comes at the cost of reduction in $A_\mathsf{test}$.
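The temperature effect is easy to see directly: dividing the logits by a temperature $T>1$ flattens the softmax output toward uniform, raising its entropy (a generic sketch, not the paper's code):

```python
import numpy as np

def softmax_T(logits, T=1.0):
    """Temperature-scaled softmax; larger T gives a flatter distribution."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    """Shannon entropy of a discrete distribution."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

logits = [8.0, 2.0, 1.0]
h_low = entropy(softmax_T(logits, T=1.0))    # peaked prediction
h_high = entropy(softmax_T(logits, T=20.0))  # close to uniform
```

As $T$ grows, the prediction's entropy approaches $\log(\text{\#classes})$, i.e., the soft labels carry less membership signal but also less task signal.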
\subsubsection{Privacy risk to reference data ($X_\mathsf{ref}$). }
We evaluate the privacy risk to $X_\mathsf{ref}$, as it can be of a sensitive nature, e.g., in the case of the Texas medical records dataset.
Our results in appendix show that given the final DMP model, $\theta_\mathsf{p}$, and a target sample, MIA adversary (who mounts BL, NN, or NSH attacks) cannot decide if the sample belonged to $X_\mathsf{ref}$ with sufficient confidence.
This is expected, because DMP trains $\theta_\mathsf{p}$ on noisy soft labels of $X_\mathsf{ref}$, which do not contain the ground-truth labels of $X_\mathsf{ref}$ or other sensitive information that MIAs need to succeed~\cite{yeom2018privacy}.
We provide detailed results in Appendix.
\subsubsection{DMP with synthetic reference data ($X_\mathsf{ref}$). }
Following previous works~\cite{papernot2018scalable,papernot2017semi}, including the state-of-the-art MIA defense AdvReg~\cite{nasr2018machine}, we assume availability of $X_\mathsf{ref}$.
However, in privacy sensitive domains such as patient medical records, $X_\mathsf{ref}$ may not be available.
Hence, we show that the assumption can be relaxed by using $X_\mathsf{ref}$ {synthesized} from private $D_\mathsf{tr}$ to train DMP models.
For CIFAR10, we use DC-GAN to generate synthetic $X_\mathsf{ref}$ of sizes 12.5k, 25k, and 37.5k from $D_\mathsf{tr}$ of size 25k. We then train three DMP models and evaluate their MIA risk using whitebox NSH attack. We note that for 12.5k, 25k, and 37.5k synthetic $X_\mathsf{ref}$ samples, ($E_\mathsf{gen}$, $A_\mathsf{test}$, $A_\mathsf{wb}$) of DMP are (2.1, 53.0, 50.3), (3.5, 56.8, 51.3), and (5.0, 57.5, 52.1), respectively. Note that, \emph{DMP outperforms existing defenses even with synthetic $X_\mathsf{ref}$} (Tables~\ref{table:performance_comparison},~\ref{table:regularization_comparison}).
\subsubsection{Adaptive attack on DMP. }
In DMP, the reference data, $X_\mathsf{ref}$, is selected such that the predictions of DMP's unprotected model $\theta_\mathsf{up}$ on $X_\mathsf{ref}$ have low entropies. Due to memorization, predictions of $\theta_\mathsf{up}$ on $D_\mathsf{tr}$ also have low entropies. Hence, an adaptive adversary may exploit this peculiar $X_\mathsf{ref}$ selection in DMP. Based on this intuition, we investigate the possibility of an adaptive MIA, which labels a target sample as a member if the sample is close to some $X_\mathsf{ref}$ datum in feature space. However, such attack has accuracy close to random guess. This is because, we observe that the proximity of two samples in feature space has no correlation with the entropy of predictions of given $\theta_\mathsf{up}$ on those samples, which is the selection criterion of DMP. We leave further investigation of adaptive attacks on DMP to future work.
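The adaptive attack we investigated amounts to a nearest-neighbor rule in feature space; a minimal sketch (the feature extractor and radius are free parameters of the attack, and all names are illustrative):

```python
import numpy as np

def adaptive_mia(target_feat, ref_feats, radius):
    """Guess 'member' iff the target sample lies within L2 distance
    `radius` of some reference sample in feature space."""
    dists = np.linalg.norm(np.asarray(ref_feats) - np.asarray(target_feat),
                           axis=1)
    return bool(dists.min() <= radius)

ref_feats = np.array([[0.0, 0.0], [10.0, 10.0]])   # toy X_ref features
guess_near = adaptive_mia([0.5, 0.0], ref_feats, radius=1.0)
guess_far = adaptive_mia([5.0, 5.0], ref_feats, radius=1.0)
```

As discussed above, this rule performs near random guessing in our experiments, since feature-space proximity does not correlate with the entropy-based selection criterion.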
\section{Introduction}\label{introduction}
The remarkable performance of machine learning (ML) in solving many classification tasks has facilitated its adoption in various domains ranging from recommendation systems to critical health-care management.
Many ML-as-a-Service platforms (e.g., Google API, Amazon AWS) enable novice data owners to train ML models and release the models either as a blackbox prediction API or as model parameters that can be accessed in whitebox fashion.
ML models are often trained on data with sensitive user information such as clinical records and personal photos. Hence, ML models trained using sensitive data can leak private information about their data owners.
This has been demonstrated through various inference attacks~\cite{fredrikson2015model,hitaj2017deep,carlini2018secret}, most notably the \emph{membership inference attack} (MIA)~\cite{shokri2017membership}, which is the focus of our work.
An MIA adversary with a blackbox or whitebox access to a target model aims to determine if a given target sample belonged to the private training data of the target model or not.
MIAs are able to distinguish the members from non-members by \emph{learning} the behavior of the target model on member versus non-member inputs.
They use different features of the target model for this classification, e.g., model predictions \cite{shokri2017membership}, model loss, and gradients of the model parameters for given input~\cite{nasr2019comprehensive}.
MIAs are particularly more effective against deep neural networks~\cite{shokri2017membership,salem2019ml}, because, with their large capacities, such models can better memorize their training data.
Recent work has investigated several defenses against membership inference attacks.
In order to provide worst-case privacy guarantees, \emph{Differential Privacy} (DP) based defenses add very large amounts of noise to the learning objective or model outputs~\cite{papernot2017semi,chaudhuri2011differentially}. This results in models with unacceptable tradeoffs between privacy and utility~\cite{jayaraman2019evaluating}, questioning their use in practice.
Sablayrolles et al.~\cite{sablayrolles2019white} showed that membership privacy is a weaker notion of privacy than DP and that it improves with the generalization of ML models.
Similarly, Nasr et al.~\cite{nasr2018machine} proposed \emph{adversarial regularization} targeted to defeat MIAs by improving the target model's generalization.
However, as we demonstrate, the adversarial regularization and other state-of-the-art regularizations, including label smoothing~\cite{szegedy2016rethinking} and dropout~\cite{srivastava2014dropout}, fail to provide acceptable membership privacy-utility tradeoffs (simply called `tradeoffs' here onward).
Memguard~\cite{jia2019memguard}, a blackbox defense, improves model utility, but it cannot protect the model from whitebox MIAs or even from simple threshold-based MIAs~\cite{yeom2018privacy}.
In summary, \emph{existing defenses against MIAs offer poor tradeoffs between model utility and membership privacy}.
To this end, our work proposes a defense against MIAs that significantly improves the tradeoffs compared to prior defenses.
That is, for a given degree of membership privacy (i.e., MIA resistance), our defense produces models with significantly higher classification performance than prior defenses.
Our defense, called \emph{\textbf{D}istillation for \textbf{M}embership \textbf{P}rivacy} (DMP), leverages \emph{knowledge distillation}~\cite{hinton2014distilling}, which transfers the knowledge of large models to smaller models, and is primarily used for model compression.
Intuitively, DMP protects membership privacy by preventing the resulting models from directly accessing the private training data.
The first \emph{pre-distillation} phase of DMP trains an \emph{unprotected} model on the private training data without any privacy protection.
Next, in \emph{distillation} phase, DMP selects/generates reference data and transfers the knowledge of the unprotected model into predictions of the reference data.
In the final \emph{post-distillation} phase, DMP trains a \emph{protected} model on the reference data labeled in the previous phase.
Unlike conventional distillation, we use the same architectures for the unprotected and protected models.
Similar to adversarial regularization and PATE, DMP assumes access to possibly sensitive, ``unlabeled'' \emph{reference data} drawn from the same distribution as the ``labeled'' private training data, and uses this reference data to train its final models; the reference data is not publicly available.
This is a highly realistic assumption as typical model generating entities (e.g., banks) possess huge amounts of ``unlabeled'' data (but limited labeled data due to the expensive labeling process).
Furthermore, we show that this assumption can be relaxed by synthesizing reference data using generator networks~\cite{micaelli2019zero}.
While some prior work~\cite{papernot2017semi} combined distillation and DP to protect data privacy, our work is \emph{the first} to study the promise of knowledge distillation as the sole technique to train membership privacy-preserving models.
Our key contributions are summarized below:
\vspace*{-.2em}
\begin{itemize}
\item[-] We propose a defense against MIAs, called \emph{\textbf{D}istillation for \textbf{M}embership \textbf{P}rivacy} (DMP).
\item[-] Given an unprotected model trained on private training data and a reference sample, we provide a novel result: the lower the entropy of the model's prediction on the reference sample, the lower the sensitive membership information in that prediction. We use this result to select/generate appropriate reference data so as to improve the membership privacy due to DMP.
\item[-] We perform an extensive evaluation of DMP that shows its state-of-the-art tradeoffs between membership privacy and model accuracy. For instance, at a fixed high degree of membership privacy, DMP achieves 30\% to 140\% higher classification accuracy than state-of-the-art defenses across various classification tasks.
\end{itemize}
\section{Our Proposed Defense: DMP}\label{dmp}
Now, we present our defense \emph{Distillation For Membership Privacy (DMP)},
which is motivated by the poor membership privacy-utility tradeoffs provided by existing MIA defenses (\S~\ref{related}).
First, we give an intuition behind DMP and detail the DMP training. Finally, to achieve the desired tradeoffs, we give a criterion to tune the selection or generation (e.g., using GANs) of reference data used in DMP.
\paragraphb{Notations. }\label{dmp:notations}
$D_\textsf{tr}$ is the \emph{private} training dataset.
An ML model trained on $D_\textsf{tr}$ without any privacy protection is called the \emph{unprotected} model, denoted by $\theta_\textsf{up}$.
An ML model is called a \emph{protected} model, denoted by $\theta_\textsf{p}$, if it protects $D_\textsf{tr}$ from MIAs.
For knowledge transfer, DMP uses an \emph{unlabeled and possibly private reference dataset} which is \emph{disjoint} from $D_\textsf{tr}$; as the reference data is unlabeled, we denote it by $X_\mathsf{ref}$.
We denote the soft label of $\theta$ on $\mathbf{x}$, i.e., $\theta(\mathbf{x})$, by $\theta^\mathbf{x}$.
\paragraphb{Main intuition of DMP. }\label{dmp:intuition}
\cite{sablayrolles2019white} show that $\theta$ trained on a sample $z$ (short for $(\mathbf{x},y)$) provides $(\epsilon,\delta)$ membership privacy to $z$ if the expected loss of the models not trained on $z$ is $\epsilon$-close to the loss of $\theta$ on $z$, with probability at least $1-\delta$.
They assume the posterior distribution of the parameters trained on a given dataset $D=\{z_1,..,z_n\}$ to be (for a temperature parameter $T$):
\begin{equation}\label{eq:post_assumption}
\mathbb{P}(\theta|z_1,...,z_n)\propto \text{exp}\Big(-\frac{1}{T}\sum^n_{i=1} \ell(\theta,z_i)\Big)
\end{equation}
Consider a neighboring dataset $D'=\{z_1,..,z'_j,..,z_n\}$ of $D$, which is obtained by modifying at most one sample of $D$~\cite{ding2018detecting}.
\cite{sablayrolles2019white} show that, to provide membership privacy to $z_j$, the log of the ratio of probabilities of obtaining the same $\theta$ from $D$ and $D'$ should be bounded, i.e., \eqref{eq:prob_ratio} should be bounded.
\begin{align}\label{eq:prob_ratio}
\Big|\text{log}\ \frac{\mathbb{P}(\theta|D)}{\mathbb{P}(\theta|D')}\Big| = \frac{1}{T}\big|\ell(\theta,z_j)-\ell(\theta,z'_j)\big|
\end{align}
\eqref{eq:prob_ratio} implies that, if $\theta$ was indeed trained on $z_j$, then to provide membership privacy to $z_j$, the loss of $\theta$ on $z_j$ should be the same as its loss on any non-member sample $z'_j$.
\emph{DMP is a strong meta-regularization} technique built on this intuition: it protects its target models against membership inference attacks that exploit the gap between the target model's losses on members and non-members, by reducing this gap.
DMP achieves this via knowledge transfer, which restricts the direct access of $\theta_\textsf{p}$ to the private $D_\mathsf{tr}$ and thereby significantly reduces the membership information leaked to $\theta_\textsf{p}$.
However, unlike existing knowledge-transfer approaches, DMP uses an entropy-based criterion to select/generate $X_\mathsf{ref}$: the soft labels of the unprotected model $\theta_\mathsf{up}$ on $X_\mathsf{ref}$ should have low entropy, and $X_\mathsf{ref}$ should lie far from the decision boundaries of $\theta_\mathsf{up}$, i.e., far from $D_\mathsf{tr}$, in the input feature space.
\emph{Intuitively, such samples are easy to classify and none of the members of $D_\textsf{tr}$ significantly affects their predictions, and therefore, these predictions do not leak membership information of any particular member.}
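This selection criterion amounts to ranking candidate reference samples by the entropy of $\theta_\mathsf{up}$'s predictions and keeping the lowest-entropy ones; a sketch with illustrative names and toy predictions:

```python
import numpy as np

def prediction_entropy(probs):
    """Shannon entropy of each row of softmax predictions."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_reference(candidate_ids, probs, k):
    """Keep the k candidates whose predictions have the lowest entropy."""
    order = np.argsort(prediction_entropy(probs))
    return [candidate_ids[i] for i in order[:k]]

probs = np.array([[0.98, 0.01, 0.01],   # confident: low entropy
                  [0.34, 0.33, 0.33],   # near-uniform: high entropy
                  [0.80, 0.10, 0.10]])
chosen = select_reference(['a', 'b', 'c'], probs, k=2)
```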
\begin{figure}
\centering
\includegraphics[scale=.8]{figures/dmp.pdf}
\caption{\emph{\textbf{D}istillation for \textbf{M}embership \textbf{P}rivacy} (DMP) defense. (1) In \emph{pre-distillation} phase, DMP trains an unprotected model $\theta_\mathsf{up}$ on the private training data without any privacy protection. (2.1) In \emph{distillation} phase, DMP uses $\theta_\mathsf{up}$ to select/generate appropriate reference data $X_\mathsf{ref}$ that minimizes membership privacy leakage. (2.2) Then, DMP transfers the knowledge of $\theta_\mathsf{up}$ by computing predictions of $\theta_\mathsf{up}$ on $X_\mathsf{ref}$, denoted by $\theta^{X_\mathsf{ref}}_\mathsf{up}$. (3) In \emph{post-distillation} phase, DMP trains the final protected model $\theta_\mathsf{p}$ on $(X_\mathsf{ref},\theta^{X_\mathsf{ref}}_\mathsf{up})$.}
\label{fig:dml_blocks}
\vspace*{-1em}
\end{figure}
\paragraphb{Details of the DMP technique.}\label{dmp:description}
We now detail the three phases of our DMP defense depicted in Figure~\ref{fig:dml_blocks}.
In the \emph{pre-distillation phase} (step (1) in Figure~\ref{fig:dml_blocks}), DMP trains $\theta_\textsf{up}$ on the private training data $D_\textsf{tr}$ using a standard optimizer, e.g., Adam.
Such an unprotected $\theta_\textsf{up}$ is highly susceptible to MIAs due to its large generalization error, i.e., the difference between its train and test accuracies~\cite{shokri2017membership,yeom2018privacy}.
Next, in the \emph{distillation phase} (step (2.1) in Figure~\ref{fig:dml_blocks}), DMP obtains the $X_\textsf{ref}$ required to transfer the knowledge of $\theta_\textsf{up}$ into $\theta_\textsf{p}$. Note that $X_\textsf{ref}$ is \emph{unlabeled} and cannot be used directly for supervised learning.
Then, we compute soft labels of $X_\textsf{ref}$, i.e., $\theta^{X_\mathsf{ref}}_\mathsf{up}=\theta_\textsf{up}(X_\textsf{ref})$ (step (2.2) in Figure~\ref{fig:dml_blocks}).
There are two key factors of the distillation phase that allow us to tune DMP and achieve the desired privacy-utility tradeoffs.
First, the lower the entropy of the predictions $\theta^{X_\mathsf{ref}}_\mathsf{up}$, the lower the membership leakage through $X_\mathsf{ref}$, and vice-versa. Such low-entropy predictions are characteristic of the members of $D_\textsf{tr}$; however, non-members with low-entropy predictions can also be obtained (or generated using GANs~\cite{micaelli2019zero}) due to the large input feature space.
Second, using higher softmax temperatures to compute $\theta^{X_\mathsf{ref}}_\mathsf{up}$ reduces membership leakage, but may reduce accuracy of the final model, and vice-versa.
Finally, in the \emph{post-distillation phase} (step (3) in Figure~\ref{fig:dml_blocks}), DMP trains a protected model $\theta_\textsf{p}$ on $(X_\textsf{ref},\theta^{X_\mathsf{ref}}_\mathsf{up})$ using the Kullback-Leibler divergence loss defined in~\eqref{kld_datum}.
In~\eqref{kld_datum}, $\overline{\mathbf{y}}$ is the target soft label.
The final $\theta_\textsf{p}$ is obtained by solving~\eqref{kld_emprical_risk_min}.
\vspace{-.5em}
\begin{align}
\label{kld_datum}
\mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}(\mathbf{x},\overline{\mathbf{y}})&= \sum^{\mathbf{c}-1}_{i=0}\overline{\mathbf{y}}_i\ \text{log}\Big(\frac{\overline{\mathbf{y}}_i}{\theta_\mathsf{p}(\mathbf{x})_i} \Big)\\
\label{kld_emprical_risk_min}
\theta_\textsf{p}=\underset{\theta}{\text{argmin}}&\ \frac{1}{|X_\textsf{ref}|}\sum_{(\mathbf{x},\overline{\mathbf{y}})\in(X_{\mathsf{ref}},\theta^{X_\mathsf{ref}}_\mathsf{up})} \mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}(\mathbf{x},\overline{\mathbf{y}})
\end{align}
Due to the KL-divergence loss in~\eqref{kld_emprical_risk_min}, the resulting model $\theta_\textsf{p}$ closely matches the behavior of $\theta_\textsf{up}$ on $X_\mathsf{ref}$.
Furthermore, since $X_\mathsf{ref}$ is representative non-member (i.e., test) data, we expect the test accuracies of $\theta_\textsf{p}$ and $\theta_\textsf{up}$ to be close, so the final DMP models do not suffer significant accuracy reductions~\cite{ba2014deep,romero2014fitnets}.
\\
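The per-sample loss in \eqref{kld_datum} is the standard KL divergence between the teacher's soft label and the student's prediction; a minimal numerical sketch (names are ours):

```python
import numpy as np

def kl_loss(y_soft, p_model):
    """KL(y_soft || p_model) for one sample: the distillation loss between
    the teacher's soft label and the protected model's prediction."""
    y = np.clip(np.asarray(y_soft, dtype=float), 1e-12, 1.0)
    p = np.clip(np.asarray(p_model, dtype=float), 1e-12, 1.0)
    return float(np.sum(y * np.log(y / p)))

teacher = [0.7, 0.2, 0.1]
loss_match = kl_loss(teacher, teacher)        # zero when imitation is perfect
loss_off = kl_loss(teacher, [0.1, 0.2, 0.7])  # positive otherwise
```

The loss is zero exactly when $\theta_\mathsf{p}$ reproduces the teacher's soft label, which is why minimizing \eqref{kld_emprical_risk_min} transfers $\theta_\mathsf{up}$'s behavior on $X_\mathsf{ref}$.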
\paragraphb{Fine-tuning the DMP defense.}\label{dmp:tune}
As mentioned before, the appropriate choice of reference data $X_\mathsf{ref}$ is important to achieve the desired privacy-utility tradeoffs in DMP.
In this section, we show that selecting $X_\mathsf{ref}$ on which the unprotected model $\theta_\mathsf{up}$ makes low-entropy predictions strengthens membership privacy, and we derive an entropy-based criterion to select/generate $X_\mathsf{ref}$.
\begin{proposition}\label{prop:entropy}
Consider $\theta_\mathsf{up}$ trained on a private $D_\mathsf{tr}$. Then, the membership leakage about $D_\mathsf{tr}$ through predictions $\theta_\mathsf{up}(X_\mathsf{ref})$ can be reduced by selecting/generating $X_\mathsf{ref}$ that are far from $D_\mathsf{tr}$ in input feature space with respect to some $L_p$ distance and whose predictions, $\theta_\mathsf{up}(X_\mathsf{ref})$, have low entropies.
\end{proposition}
\paragraphe{Sketch of proof of Proposition~\ref{prop:entropy}.}
Due to space limitations, we defer the detailed proof to Appendix and provide its sketch here.
Consider two training datasets $D_\textsf{tr}$ and $D'_\textsf{tr}$ such that $D'_\textsf{tr}\leftarrow D_\textsf{tr}-z$, and a reference set $X_\textsf{ref}$.
Then, the log of the ratio of the posterior probabilities of learning the exact same parameters $\theta_\textsf{p}$ using DMP is given by \eqref{eq:obj_ratio}.
Observe that, $\mathcal{R}$ is an extension of \eqref{eq:prob_ratio} to the setting of DMP, where $\theta_\mathsf{p}$ is trained via the knowledge transferred using $(X_\textsf{ref},\theta^{X_\textsf{ref}}_\textsf{up})$, instead of directly training on $D_\textsf{tr}$.
\cite{sablayrolles2019white} argue that reducing this ratio improves membership privacy.
Hence, we want to obtain $X_\mathsf{ref}$ that reduces $\mathcal{R}$ when $D_\textsf{tr}$, $D'_\textsf{tr}$, and $\theta_\textsf{p}$ are kept constant.
We note that, although similar in appearance to differential privacy, $\mathcal{R}$ is defined only for the given private dataset, $D_\textsf{tr}$.
\begin{equation}\label{eq:obj_ratio}
\mathcal{R}=\Big|\text{log}\ \Big({\text{Pr}(\theta_\textsf{p}|D_\textsf{tr},X_\textsf{ref})}/{\text{Pr}(\theta_\textsf{p}|D'_\textsf{tr},X_\textsf{ref})}\Big)\Big|
\end{equation}
Next, we modify $\mathcal{R}$ as:
\begin{align}
\label{eq:obj_ratio1}
& \mathcal{R}= \Big|-\frac{1}{T}\sum_{\mathbf{x}\in X_\textsf{ref}} \mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}((\mathbf{x},\theta^{\mathbf{x}}_\textsf{up});\theta_\textsf{p}) - \mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}((\mathbf{x},\theta'^{\mathbf{x}}_\textsf{up});\theta_\textsf{p})\Big| \\
\label{eq:obj_ratio2}
&\leq \frac{1}{T} \sum_{\mathbf{x}\in X_\textsf{ref}}\Big| \mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}(\theta^{\mathbf{x}}_\textsf{up}\Vert\theta^{\mathbf{x}}_\textsf{p}) - \mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}(\theta'^{\mathbf{x}}_\textsf{up}\Vert\theta^{\mathbf{x}}_\textsf{p})\Big|
\end{align}
\noindent where $\theta_\textsf{up}$ and $\theta'_\textsf{up}$ are trained on $D_\textsf{tr}$ and $D'_\textsf{tr}$, respectively.
Note that, \eqref{eq:obj_ratio1} holds due to the assumption in \eqref{eq:post_assumption} and the KL-divergence loss used to train $\theta_\mathsf{p}$ in DMP.
\eqref{eq:obj_ratio2} follows from \eqref{eq:obj_ratio1} because $|a+b|\leq|a|+|b|$.
Therefore, minimizing \eqref{eq:obj_ratio2} implies minimizing \eqref{eq:obj_ratio}.
Thus, to improve membership privacy due to $\theta_\mathsf{p}$, $X_\mathsf{ref}$ is obtained by solving~\eqref{eq:ref_obj}.
\begin{align}
\label{eq:ref_obj}
X^*_\textsf{ref}=\underset{X_\textsf{ref}\in X}{\text{argmin}}\Big(\frac{1}{T}\sum_{\mathbf{x}\in X_\textsf{ref}} \big|\mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}&(\theta^{\mathbf{x}}_\textsf{up}\Vert\theta^{\mathbf{x}}_\textsf{p}) -\mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}(\theta'^{\mathbf{x}}_\textsf{up}\Vert\theta^{\mathbf{x}}_\textsf{p})\big|\Big)
\end{align}
The objective of \eqref{eq:ref_obj} is minimized when $\theta^{\mathbf{x}}_\mathsf{up} = \theta'^{\mathbf{x}}_\mathsf{up}\ \ \forall\mathbf{x}\in X_\mathsf{ref}$ and is very intuitive: It implies that, $z$ (i.e., $D_\mathsf{tr}-D'_\mathsf{tr}$) enjoys stronger membership privacy when the reference data, $X_\mathsf{ref}$, are such that \emph{the distributions of outputs of $\theta_\mathsf{up}$ and $\theta'_\mathsf{up}$ on $X_\mathsf{ref}$ are not affected by the presence of $z$ in $D_\mathsf{tr}$}.
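The objective of \eqref{eq:ref_obj} can be evaluated directly from the outputs of the two teachers and the student, as in the following sketch. The function and variable names are illustrative, the models are represented only by their logits on $X_\mathsf{ref}$, and the temperature $T$ is passed explicitly; this is an assumption-laden illustration, not the paper's implementation.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kl_div(p, q, eps=1e-12):
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def ref_objective(up_logits, up_prime_logits, p_logits, T=1.0):
    # Objective of (eq:ref_obj): mean absolute difference between the KL
    # divergences of theta_up and theta'_up from theta_p over X_ref,
    # scaled by the temperature T as in the paper's bound.
    total = 0.0
    for t, t2, s in zip(up_logits, up_prime_logits, p_logits):
        total += abs(kl_div(softmax(t), softmax(s))
                     - kl_div(softmax(t2), softmax(s)))
    return total / T
```

The objective vanishes exactly when the two unprotected models agree on every reference sample, matching the intuition below \eqref{eq:ref_obj}.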
Next, we simplify \eqref{eq:ref_obj} by replacing $\mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}$ with the closely related cross-entropy loss $\mathcal{L}_{\scaleto{\textsf{CE}}{4pt}}$.
This simplification can be easily validated using $X_\mathsf{ref}$ whose ground truth labels are known.
Specifically, we randomly sample $D_\mathsf{tr}$ and $X_\mathsf{ref}$ from Purchase100 dataset, and compute $\theta_\mathsf{up}$ and $\theta_\mathsf{p}$ using DMP.
Next, for some $z\in D_\mathsf{tr}$, we train $\theta'_\mathsf{up}$ on $D'_\mathsf{tr}$.
Then, for each $\mathbf{x}\in X_\mathsf{ref}$, we compute $\Delta\mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}$ as in~\eqref{eq:ref_obj} and use the available ground truth label of $\mathbf{x}$ to compute $\Delta\mathcal{L}_{\scaleto{\textsf{CE}}{4pt}}$.
Finally, we show that $\Delta\mathcal{L}_{\scaleto{\textsf{KL}}{4pt}}$ and $\Delta\mathcal{L}_{\scaleto{\textsf{CE}}{4pt}}$ are strongly correlated for all $z\in D_\mathsf{tr}$.
Next, we use the linear approximation given by~\cite{koh2017understanding} for the difference in $\mathcal{L}_{\scaleto{\textsf{CE}}{4pt}}$ of a pair of models trained with and without a sample to simplify \eqref{eq:ref_obj}.
Then the result of Proposition~\ref{prop:entropy} follows after a few simple mathematical manipulations.
\paragraphe{Empirical verification of Proposition~\ref{prop:entropy}.}
We randomly pick $D_\mathsf{tr}$ of size 10k from the Purchase100 data and train $\theta_\mathsf{up}$. Then, we sort the rest of the Purchase100 data based on the entropy of the predictions of $\theta_\mathsf{up}$ on the data. We form the first $X_\mathsf{ref}$ using the 10k data with the lowest entropies, the second $X_\mathsf{ref}$ using the following 10k data, and so on. Finally, we train multiple protected models, $\theta_\mathsf{p}$'s, using each of the $X_\mathsf{ref}$'s.
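The ranking-and-bucketing procedure above can be sketched as follows; \texttt{entropy} and \texttt{build\_reference\_buckets} are illustrative names, and the model's predictive distribution is supplied as a callable rather than an actual trained network.

```python
import math

def entropy(p, eps=1e-12):
    # Shannon entropy of a predictive distribution
    return -sum(pi * math.log(pi + eps) for pi in p)

def build_reference_buckets(candidates, predict, bucket_size):
    # Sort candidate reference samples by the entropy of the unprotected
    # model's prediction (lowest first, per Proposition 1) and split them
    # into consecutive candidate X_ref buckets.
    ranked = sorted(candidates, key=lambda x: entropy(predict(x)))
    return [ranked[i:i + bucket_size]
            for i in range(0, len(ranked), bucket_size)]
```

With `bucket_size = 10000` this reproduces the grouping used in the experiment: the first bucket holds the lowest-entropy candidates, each later bucket a higher-entropy slice.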
Figure \ref{fig:hypothesis_eval_1} (left) shows the increase in the MIA risk and Figure \ref{fig:hypothesis_eval_1} (right) shows the increase in the classification performance of $\theta_\textsf{p}$ with the increase in average entropy of the $X_\mathsf{ref}$ used.
This tradeoff is because, although the higher entropy predictions contain more useful information \cite{nayak2019zero,hinton2014distilling} and lead to high accuracy of $\theta_\mathsf{p}$, they also contain higher membership information about $D_\mathsf{tr}$ and lead to higher MIA risk.
\begin{figure}[t!]
\centering
\hspace*{-2em}
\begin{tabular}{cc}
\subfloat{\input{new_tex_figures/purchase_at_test_accs_vs_entropy_redacted}}
\hspace*{-.5em}
\subfloat{\input{new_tex_figures/purchase_tr_te_acc}}
\end{tabular}
\vspace*{-1em}
\caption{The lower the entropy of predictions of unprotected model on $X_\mathsf{ref}$, the higher the membership privacy.}
\label{fig:hypothesis_eval_1}
\vspace*{-1.75em}
\end{figure}
\section{Related Work}\label{related}
\paragraphb{Membership inference attacks.}\label{related:meminf}
\cite{shokri2017membership} introduced membership inference attacks (MIAs).
Given a target model trained on private training data and a target sample, the MIA adversary aims to infer whether the target sample is a member of the private training data.
\cite{shokri2017membership} proposed to train a neural network to distinguish the features of the target model on members and non-members. They assumed partial access to the private training data.
\cite{salem2019ml} relaxed these assumptions and showed the transferability of MIAs across datasets.
These works relied on the blackbox features of target models, e.g., model predictions, to mount MIAs.
\cite{nasr2019comprehensive} proposed to use whitebox features of target models, e.g., model gradients, along with the blackbox features, to further enhance the MIA accuracy.
The above works used the generalization gap (i.e., the difference between train and test accuracies) of target models to mount strong MIAs.
The more recent MIA literature focuses on deriving features that can better distinguish the behavior of target models on members and non-members~\cite{leino2019stolen,song2020systematic}.
\paragraphb{Defenses against membership inference attacks. }\label{related:defenses}
MIAs exploit differences in behaviors of target models on members and non-members.
Regularization techniques, including dropout and label smoothing, reduce the difference in terms of accuracies of the target model on members and non-members, and mitigate MIAs to some extent~\cite{shokri2017membership}.
\cite{nasr2018machine} proposed adversarial regularization (AdvReg) tailored to defeat MIAs. AdvReg simultaneously trains the target and attack models in a game theoretic manner, and regularizes the target model using the accuracy of the attack model.
The final target models that use the above regularization defenses can be deployed in a whitebox manner, i.e., like DMP, they are \emph{whitebox defenses}. Hence, we thoroughly compare our DMP defense with all these regularization techniques.
However, as shown in~\cite{song2020systematic} and seen from the original work~\cite{nasr2018machine}, AdvReg is not an effective defense, because it either fails to mitigate MIA or incurs large drops in model utility (classification accuracy).
Jia et al.~\shortcite{jia2019memguard} proposed MemGuard, a blackbox defense that adds noise to the output of the target model such that the noisy output is both accurate and fools the given MIA attack model.
However, MemGuard does not defend against the simplest of threshold based attacks~\cite{yeom2018privacy,sablayrolles2019white}. We omit MemGuard and other blackbox defenses, e.g., top-k predictions~\cite{shokri2017membership}, from evaluations.
Differential privacy based defenses such as DP-SGD~\cite{abadi2016deep} and PATE~\cite{papernot2017semi} are whitebox defenses and provide strong theoretical membership privacy guarantees.
However, as~\cite{jayaraman2019evaluating} show\textemdash and we confirm in our work\textemdash target models trained using DP-SGD and PATE have prohibitively low classification accuracies rendering them unusable.
\section{Preliminaries}\label{preliminaries}
\paragraphb{Knowledge distillation.}\label{prelim:distil}
\cite{bucilua2006model} and \cite{ba2014deep} proposed knowledge distillation, which uses the outputs of a large teacher model to train a smaller student model, in order to \emph{compress} large models to smaller models.
The outputs used for distillation can vary, e.g.,
\cite{hinton2014distilling} use class probabilities generated by the teacher as the outputs, while \cite{romero2014fitnets} use the intermediate activations along with class probabilities of the teacher.
It is well established that \emph{knowledge distillation produces students with accuracies similar to their teachers}~\cite{crowley2018moonshine,zagoruyko2016paying}. This also allows DMP to produce highly accurate target models.
Note that, although we use the term ``distillation'', DMP uses teacher and student models of the same sizes, because DMP is not concerned with the size of the resulting model.
\paragraphb{Membership inference attacks. }
Below we give the threat model and MIA methodology that we consider in this work.
\paragraphb{\em{Threat model.} }
The primary \emph{goal} of the adversary is to infer the membership of a target sample $(\textbf{x},y)$ in the private training data $D_\mathsf{tr}$ of a target model $\theta$.
Our DMP defense uses private, unlabeled reference data $X_\mathsf{ref}$ for knowledge transfer, which itself could be privacy sensitive, hence, we consider a secondary goal to infer membership of a target sample in $X_\mathsf{ref}$.
Following previous works, we assume a strong adversary with \emph{knowledge} of: the target model parameters (the strongest whitebox case), half of the members of $D_\mathsf{tr}$, and an equal number of non-members. Similarly, to assess the MIA risk to $X_\mathsf{ref}$, we assume that the adversary has half of the members of $X_\mathsf{ref}$ and an equal number of non-members. Note that the assumptions on the partial availability of the private $D_\mathsf{tr}$ and the private $X_\mathsf{ref}$ facilitate the assessment of defenses under a very strong adversary.
The adversary can compute various whitebox and blackbox features of the target model and train an attack model. The adversary \emph{cannot poison} $X_\mathsf{ref}$ as it is not publicly available.
\paragraphb{\em {Methodology.} }
Consider a target model $\theta$ and a sample $(\textbf{x},y)$.
MIAs exploit the differences in the behavior of $\theta$ on members and non-members of the private $D_\mathsf{tr}$.
Therefore, MIAs train a binary attack model to classify target samples into members and non-members.
Such attack models can be neural networks~\cite{shokri2017membership,salem2019ml} or simple thresholding functions whose threshold is tuned for maximum attack performance~\cite{yeom2018privacy,sablayrolles2019white,song2020systematic}.
The adversary computes various features of $\theta$ for given $(\mathbf{x},y)$, e.g., prediction $\theta(\mathbf{x},y)$, $\theta$'s loss on $(\mathbf{x},y)$, and the gradients of the loss.
The adversary combines these features to form $F(\mathbf{x},y,\theta)$.
The attack model $h$ takes $F(\mathbf{x},y,\theta)$ as its input and outputs the probability that $(\mathbf{x},y)$ is a member of $D_\mathsf{tr}$.
Let $\text{Pr}_{D_\mathsf{tr}}$ and $\text{Pr}_{\text{\textbackslash} {D_\mathsf{tr}}}$ be the conditional probabilities of the members and non-members of ${D_\mathsf{tr}}$, respectively.
Hence, the expected gain of the attack model for the above setting is given by:
\begin{align}\label{exp_gain}
G^{\theta}(h)&=\underset{\substack{(\mathbf{x},y)\\ \sim \text{Pr}_{D_\mathsf{tr}}}}{\mathbb{E}} [\text{log}(h(F))]+\underset{\substack{(\mathbf{x},y)\\ \sim \text{Pr}_{\text{\textbackslash} D_\mathsf{tr}}}}{\mathbb{E}} [\text{log}(1-h(F))]
\end{align}
In practice, the adversary knows only finite sets of members $D^A$ and non-members $D'^A$ required to train $h$, and hence computes the above gain empirically as:
\begin{align}\label{emp_gain}
G^{\theta}_{D^A, D'^A}(h)= \sum_{\substack{(\mathbf{x},y)\\ \in D^A}} \frac{\text{log}(h(F))}{|D^A|} + \sum_{\substack{(\mathbf{x},y)\\ \in D'^A}} \frac{\text{log}(1-h(F))}{|D'^A|}
\end{align}
Finally, the adversary solves for $h^*$ that maximizes~\eqref{emp_gain}.
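The empirical gain of \eqref{emp_gain} can be computed directly from the attack model's membership scores, as in the following sketch; \texttt{empirical\_gain} is an illustrative name, and the score lists stand in for $h(F)$ evaluated on $D^A$ and $D'^A$.

```python
import math

def empirical_gain(h_members, h_nonmembers, eps=1e-12):
    # Empirical MIA gain of eq. (emp_gain): h_* hold the attack model's
    # membership probabilities on the known members D^A and non-members D'^A.
    g_in = sum(math.log(h + eps) for h in h_members) / len(h_members)
    g_out = sum(math.log(1.0 - h + eps) for h in h_nonmembers) / len(h_nonmembers)
    return g_in + g_out
```

A perfect attacker ($h=1$ on members, $h=0$ on non-members) attains a gain of $0$, the maximum of \eqref{emp_gain}, while a random-guessing attacker ($h=0.5$ everywhere) attains $2\log(0.5)$.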
\section{Related Work}\label{related_work}
Privacy preserving machine learning is an active area of research.
Multiple inference attacks against ML models are proposed in literature, e.g., input inference~\cite{fredrikson2015model}, blackbox and whitebox membership inference~\cite{shokri2017membership,nasr2019comprehensive,salem2018ml,leino2019stolen}, attribute inference~\cite{carlini2018secret}, parameter inference~\cite{tramer2016stealing,wang2018stealing}, training data embedding attacks~\cite{song2017machine}, and side-channel attacks~\cite{wei2018know}.
In this paper, we focus on the membership inference attacks for adversaries with blackbox and whitebox access to the model.
Several defenses have been proposed against membership inference attacks~\cite{abadi2016deep,papernot2018scalable,nasr2018machine,papernot2017semi}.
Unfortunately, the existing defenses do not provide acceptable tradeoffs between privacy and utility, i.e., they hurt the model's classification performance significantly to provide membership privacy.
Defenses based on differential privacy (DP) \cite{abadi2016deep,papernot2018scalable,papernot2017semi,hamm2016learning,pathak2010multiparty} provide rigorous membership privacy guarantees, but as demonstrated by Jayaraman et al. \cite{jayaraman2019evaluating}, the resulting models are of no practical use.
Furthermore, as shown in \cite{jayaraman2019evaluating,leino2019stolen}, and as we confirm in our work, with relaxed privacy budgets DP defenses are also susceptible to membership inference.
The primary reason for the susceptibility is the high generalization error of such models, which is sufficient for membership inference~\cite{long2018understanding,rahman2018membership,shokri2017membership,nasr2019comprehensive,leino2019stolen}.
Adversarial regularization~\cite{nasr2018machine} is a recent defense that is tailored to membership inference attacks, and claims to improve the tradeoffs.
However, as shown in Section~\ref{exp:regularizations}, the adversarial regularization defense fails to provide acceptable tradeoffs when evaluated against state-of-the-art membership inference attacks.
Knowledge distillation has been used in
several privacy defenses \cite{papernot2017semi,hamm2016learning,pathak2010multiparty,nissim2007smooth,bassily2018model,wang2019private}, which perform distillation using a noisy aggregate of predictions of models of multiple data holders.
In particular, PATE~\cite{papernot2017semi,papernot2018scalable} combines knowledge distillation and DP~\cite{abadi2016deep}.
In PATE, an input is labeled by an ensemble of \emph{teacher models}, and the final \emph{student model} is trained using the noisy aggregates of all labels.
We perform a comprehensive comparison of DMP and PATE in Section \ref{exp:dp} to show that DMP provides better tradeoffs between membership privacy and accuracy.
DP defenses add large amounts of noise to provide privacy to \emph{any data} with the underlying distribution, and in this process incur high accuracy losses~\cite{papernot2018scalable}.
However, due to its targeted goal of providing membership privacy, the DMP defense adds no explicit noise; instead, it uses a novel knowledge transfer via easy-to-classify samples whose predictions are not affected by the presence of any particular member in the private training data.
Regularization alone is shown to be ineffective against membership inference attacks \cite{long2018understanding,nasr2019comprehensive,leino2019stolen}.
Long et al. \cite{long2018understanding} proposed a membership inference attack against well-generalized models that identifies the vulnerable \emph{outliers} in the sensitive training data of the model, whose membership can be inferred.
In DMP, such outliers can be protected by setting high softmax temperatures or selecting samples with low entropy predictions (Section \ref{dmp:analysis}), but at the cost of utility degradation.
This is similar to previous defenses: in DP-SGD, the privacy budget is reduced, and in adversarial regularization, a high regularization factor is set to protect the outliers; in practice, at relaxed privacy budgets or low regularization factors, these defenses also pose a membership inference risk to such outliers \cite{jayaraman2019evaluating,rahman2018membership}.
However, we note that the primary objective of our DMP defense is to produce models with superior tradeoffs, i.e., achieve superior classification performance for a given degree of membership privacy.
We demonstrated the effectiveness of DMP in Section \ref{exp} in producing such models with state-of-the-art classification accuracy for a given membership privacy.
\section{Introduction}
Adaptive modulation has been successfully deployed in wireless communication systems providing link adaptation \cite{c1}. Using adaptive modulation, the transmission rate is adapted based on the channel conditions, which are estimated at the receiver's side and made available at the transmitter through a feedback channel. When adaptive modulation is implemented in conjunction with power control at the physical layer, a variable rate variable power (VRVP) modulation is considered \cite{c2}. Two alternative schemes of VRVP have been proposed in the literature, known as continuous rate and discrete rate. The latter is more practical from an implementation point of view.
Cognitive radio (CR) has been recently proposed for enhancing spectrum utilization of licensed wireless systems when certain conditions apply \cite{c3}. The knowledge of the channel state is very important for both types of CR networks (CRNs), known as opportunistic spectrum access (OSA) and spectrum sharing (SS) \cite{c4}. Hence, the incorporation of adaptive modulation in CRNs is possible. Recently, a few investigations of the performance of adaptive modulation in CRNs have been accomplished. More specifically, ~\cite{c5} and ~\cite{c6} investigate adaptive modulation in SS CRNs, while ~\cite{c7} and ~\cite{c8} present a performance analysis of adaptive modulation in OSA CRNs. However, none of these works has assumed a multi-user CRN in fading channels. \let\thefootnote\relax\footnote{This research work is supported by the Qatar National Research Fund (QNRF) under National Research Priorities Program (NPRP) Grant NPRP 09-1168-2-455.}
In this paper, we analyze and evaluate the performance of adaptive modulation in multi-user cognitive fading environment. In particular, we analyze the spectral efficiency of CRNs that employ continuous and discrete rate types of adaptive modulation operating over Nakagami-$m$ channels assuming additionally multiple secondary users (SUs). We assume multi-user diversity (MUD) using opportunistic selection of the SU with the best signal-to-noise-ratio (SNR). Finally, we provide and discuss the results of our analysis.
The rest of this paper is organized as follows. Section II describes the multi-user cognitive radio network model. Section III provides the performance analysis of adaptive modulation operating over multi-user cognitive radio fading channels. In Section IV, we present and discuss the obtained numerical results, and in Section V we provide a summary of this work.
\section{System Model} \label{system}
We assume a cognitive radio network with one secondary user transmitter (SU-Tx) and multiple secondary user receivers (SU-Rxs) denoted with $i\in \{1,\ldots,L\}$, where each user $i$ is served through an opportunistic or spectrum sharing access strategy \cite{c3}. We assume that the primary network (PN) consists of one primary user transmitter (PU-Tx) and one primary user receiver (PU-Rx). Fading channels are assumed for all links. The channel gain between the SU-Tx and the $i$-th SU-Rx is denoted as $g_{s,i}$, and its additive white Gaussian noise (AWGN) is denoted as $n_{s,i}$. The average transmit power over the fading channel is $\bar{P}$, the AWGN has power density $N_0/2$, and the received bandwidth is $B$. An SU-Rx can have access to a channel if and only if a predefined maximum level on the instantaneous transmit power $P$ is achieved. This level is determined from the channel state information (CSI), which represents the minimum received SNR $\gamma_{s,i}$, equal to $g_{s,i}\bar{P}/(N_0B)$ for a channel gain $g_{s,i}$ and a unit of bandwidth $B$.
The transmit power $P$ is controlled based on the SNR $\gamma$ using power control, and thereby we denote it as $P(\gamma)$ ~\cite{c9}. The SU-Tx uses an MUD selection strategy to select transmission to the SU-Rx with the best received SNR ~\cite{c10}. The channel estimate, i.e. $\gamma_{s,i}$, is also available at the SU-Tx side via a feedback channel. We assume that the CSI is perfectly available at the receivers, i.e. PU-Rx and SU-Rxs, and that the feedback channel does not induce any delays on the CSI's transmission. Moreover, a set of $M$-ary Quadrature Amplitude Modulations (M-QAMs) is considered and their selection relies on the estimated CSI. In the considered system model, the SU-Tx first determines the user who can access the channel through the MUD, and then it selects the transmission rate $R = log_2(M)$ via the selection of the appropriate $M$-ary modulation from the signal set according to the estimated CSI.
Finally, we make the following assumptions for the considered system: a) the transmission of each symbol is accomplished with a symbol period $T_s = 1/B$ using ideal raised cosine pulses; and b) the fading channel is varying slowly in time, i.e. the receiver is able to sense and track the channel fluctuations and thus it corresponds to a block flat fading channel model with an average received SNR, $\bar{\gamma}$ ~\cite{c11}.
\section{Performance Analysis} \label{analysis}
Assuming VRVP adaptive modulation, since power control is used in both SS and OSA CRNs, we analyze both the continuous and discrete rate cases, denoted as CR and DR respectively. We first derive the channel capacity achieved over fading channels and then the spectral efficiency assuming CR and DR adaptive modulation schemes. As mentioned above, the SU-Tx employs MUD to select the SU-Rx, and therefore the received SNR of the selected SU-Rx, $\gamma_{s,max}$, is obtained as follows ~\cite{c10}:
\begin{eqnarray} \label{eq1}
\gamma_{s,max} = \max_{1\leq i \leq L} \ \gamma_{s,i}
\end{eqnarray}
with probability density function (PDF) obtained as follows:
\begin{eqnarray} \label{eq2}
f_{\gamma_{s,max}}(x) = L f_{\gamma_{s,i}}(x)F_{\gamma_{s,i}}(x)^{L-1}
\end{eqnarray}
where $f_{\gamma_{s,i}}(x)$ and $F_{\gamma_{s,i}}(x)$ are the PDF and the cumulative distribution function (CDF) of the received SNR $\gamma_{s,i}$ at the $i-th$ SU-Rx respectively.
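Equations \eqref{eq1} and \eqref{eq2} can be checked with a small Monte-Carlo sketch: the CDF of the selected SNR equals $F(x)^L$, which for i.i.d. unit-mean exponential per-user SNRs (Rayleigh fading) is $(1-e^{-x})^L$. The function names and the unit-mean normalization are assumptions of this sketch.

```python
import math
import random

def best_snr(snrs):
    # Opportunistic MUD selection of eq. (1): the largest received SNR.
    return max(snrs)

def max_cdf_estimate(L, x, n=100000, seed=3):
    # Empirical CDF of the selected SNR for i.i.d. unit-mean exponential
    # per-user SNRs; by eq. (2) it should approach (1 - exp(-x))**L.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if best_snr([rng.expovariate(1.0) for _ in range(L)]) <= x)
    return hits / n
```

Differentiating $F(x)^L$ recovers exactly the order-statistic density $L f(x) F(x)^{L-1}$ of \eqref{eq2}.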
The overall average achievable capacity at the secondary system (i.e. SU-Tx to SU-Rx) is obtained as follows:
\begin{eqnarray} \label{eq3}
C_{s} &=& \int_0^{\infty} { B log_2(1+x)f_{\gamma_{s,max}}(x) dx} .
\end{eqnarray}
\subsection{Channel Capacity} \label{CR}
The average channel capacity of a fading channel $\bar{C}$ (in bits per second) is given by ~\cite{c9}
\begin{eqnarray} \label{eq4}
\bar{C} = \max_{P(\gamma)} \left\lbrace \int_{0}^{\infty} B log_2 \left( 1+ \gamma \frac{P(\gamma)}{\bar{P}} \right) f(\gamma) \,d \gamma \right\rbrace
\end{eqnarray}
where the instantaneous transmit power $P(\gamma)$ chosen relative to $\gamma$ is subject to the following power constraints:
\begin{eqnarray} \label{eq5}
\int_{0}^{\infty} P(\gamma) f(\gamma) \,d \gamma \leq \bar{P}
\end{eqnarray}
\begin{eqnarray} \label{eq6}
\int_{0}^{\infty} P(\gamma_{sp}) f(\gamma_{sp}) \,d \gamma_{sp} \leq \bar{Q}
\end{eqnarray}
where \eqref{eq5} represents the well-known transmit power constraint applied to OSA systems, and \eqref{eq6} represents the additional interference power constraint applied to SS systems ~\cite{c4}.
\subsubsection{Transmit Power Constraint} \label{constr1}
We consider the case of the average transmit power constraint, in which the fading distribution depends only on secondary link and the optimal power allocation of the SU-Tx is obtained as follows \cite{c4}:
\begin{eqnarray} \label{eq7}
\frac{P(\gamma_s)}{\bar{P}} = \left[ \frac{1}{\gamma_{0,s}} - \frac{1}{\gamma_s} \right] ,& if & \gamma_s > \gamma_{0,s}
\end{eqnarray}
where $\gamma_{0,s}$ is the optimal cut-off level of the received SNR at the SU-Rx, which can be calculated by the substitution of \eqref{eq7} into \eqref{eq5} with equality for maximizing the capacity in \eqref{eq4}.
Considering MUD in conjunction with the average transmit power constraint, the capacity is obtained as follows:
\begin{eqnarray} \label{eq8}
\bar{C} = \int_{\gamma_{0,s}}^{\infty} B log_2(\frac{\gamma_{s,max}}{\gamma_{0,s}}) f(\gamma_{s,max}) \,d \gamma_{s,max} .
\end{eqnarray}
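A numerical sketch of \eqref{eq7}--\eqref{eq8} for a single user ($L=1$) over a unit-mean Rayleigh (exponential) SNR follows: bisection on the average-power constraint yields the cut-off $\gamma_{0,s}$, and the capacity per unit bandwidth follows by integration. The trapezoidal integrator, the truncation of the integrals at 60, and the function names are assumptions of this sketch.

```python
import math

def integral(fn, a, b, n=5000):
    # simple trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (fn(a) + fn(b))
    for i in range(1, n):
        s += fn(a + i * h)
    return s * h

def pdf(g):
    # unit-mean exponential SNR (Rayleigh fading, single user)
    return math.exp(-g)

def power_used(g0, upper=60.0):
    # left-hand side of the average-power constraint (eq. 5) under the
    # water-filling allocation of eq. (7)
    return integral(lambda g: (1.0 / g0 - 1.0 / g) * pdf(g), g0, upper)

def cutoff_snr(lo=1e-3, hi=1.0):
    # bisection: power_used decreases in g0 and equals 1 at the optimum
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if power_used(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def capacity_per_hz(g0, upper=60.0):
    # eq. (8) for a single user, in bits/s/Hz
    return integral(lambda g: math.log2(g / g0) * pdf(g), g0, upper)
```

For unit average SNR this gives a cut-off near $\gamma_{0,s}\approx 0.4$; with MUD one would replace \texttt{pdf} by the order-statistic density of \eqref{eq2}.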
\subsubsection{Interference Power Constraint} \label{constr2}
We consider now the case of the average interference power constraint, in which the fading distribution depends on both secondary and interference links and the optimal power allocation of the SU-Tx is obtained as follows \cite{c4}:
\begin{eqnarray} \label{eq9}
\frac{P(\gamma_{sp})}{\bar{P}} = \left[ \frac{1}{\gamma_{0,sp}} - \frac{1}{\gamma_{sp}} \right], & if & \gamma_{sp} > \gamma_{0,sp}
\end{eqnarray}
where $\gamma_{0,sp}$ is the optimal cut-off level of the received SNR at the SU-Rx considering the interference power constraint and thereby the $\gamma_{sp}$ is equal to \cite{c4}:
\begin{eqnarray} \label{eq10}
\gamma_{sp} = \frac{g_{s,i}\bar{P}}{g_p N_0 B} .
\end{eqnarray}
Considering again MUD, now in conjunction with the average interference power constraint, the capacity is obtained as follows:
\begin{eqnarray} \label{eq11}
\bar{C} = \int_{\gamma_{0,sp}}^{\infty} B log_2(\frac{\gamma_{sp,max}}{\gamma_{0,sp}}) f(\gamma_{sp,max}) \,d \gamma_{sp,max} .
\end{eqnarray}
Notably, the PDFs $f(\gamma_{s,max})$ and $f(\gamma_{sp,max})$ will be obtained for the Rayleigh and Nakagami-$m$ distributions using the analysis provided in Section IV, whereby the PDF and CDF are first obtained for a single user and then combined via \eqref{eq2} for the user with the best SNR.
\subsection{Spectral Efficiency in Continuous Rate Adaptive Modulation} \label{Se}
\subsubsection{Transmit Power Constraint}
The power allocation that maximizes the spectral efficiency under the transmit power constraint, i.e. assuming \eqref{eq7} and the adaptive modulation in ~\cite{c2}, is given as follows:
\begin{eqnarray}\label{eq12}
\frac{P(\gamma_s)}{\bar{P}}=
\begin{cases}
\ \frac{1}{\gamma_{0,s}}-\frac{1}{\gamma_s K}, \gamma_s \geq \frac{\gamma_{0,s}}{K} \\
\ 0 , \gamma_s < \frac{\gamma_{0,s}}{K}\\
\end{cases}
\end{eqnarray}
where $K$ is an effective power loss that maintains the target bit-error-rate (BER) value and is equal to:
\begin{eqnarray} \label{eq13}
K = \frac{-1.5}{ln(5BER)} .
\end{eqnarray}
Combining the equations above, the spectral efficiency for the continuous rate adaptive modulation is maximized by integrating above a cut-off SNR level denoted as $\gamma_{0,s,K}=\gamma_{0,s}/K$, as follows \cite{c2}:
\begin{eqnarray} \label{eq14}
\langle S_e \rangle_{CR} = \int_{\gamma_{0,s,K}}^{\infty} log_2(\frac{\gamma_{s,max}}{\gamma_{0,s,K}})f(\gamma_{s,max})d\gamma_{s,max} .
\end{eqnarray}
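The continuous-rate spectral efficiency of \eqref{eq12}--\eqref{eq14} can be sketched for a single user over a unit-mean Rayleigh SNR: $K$ follows from the target BER via \eqref{eq13}, and the cut-off is again found by bisection on the power constraint. The function names, trapezoidal integrator, and truncation point are assumptions of this sketch.

```python
import math

def integral(fn, a, b, n=5000):
    h = (b - a) / n
    s = 0.5 * (fn(a) + fn(b))
    for i in range(1, n):
        s += fn(a + i * h)
    return s * h

def K_of_ber(ber):
    # effective power loss of eq. (13) maintaining the target BER
    return -1.5 / math.log(5.0 * ber)

def power_used(g0, K, upper=60.0):
    # average-power constraint under the allocation of eq. (12);
    # transmission occurs only above the cut-off g0/K
    return integral(lambda g: (1.0 / g0 - 1.0 / (g * K)) * math.exp(-g),
                    g0 / K, upper)

def cr_spectral_efficiency(ber, lo=1e-3, hi=1.0):
    K = K_of_ber(ber)
    for _ in range(60):  # bisection for the cut-off g0
        mid = 0.5 * (lo + hi)
        if power_used(mid, K) > 1.0:
            lo = mid
        else:
            hi = mid
    g0 = 0.5 * (lo + hi)
    g0K = g0 / K  # cut-off of eq. (14)
    return integral(lambda g: math.log2(g / g0K) * math.exp(-g), g0K, 60.0)
```

As expected, tightening the target BER (smaller $K$) raises the effective cut-off and lowers the achievable spectral efficiency relative to the unconstrained capacity.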
\subsubsection{Interference Power Constraint}
In the same way as above and taking into account \eqref{eq9}, we have the following:
\begin{eqnarray}\label{eq15}
\frac{P(\gamma_{sp})}{\bar{P}}=
\begin{cases}
\ \frac{1}{\gamma_{0,sp}}-\frac{1}{\gamma_{sp} K}, \gamma_{sp} \geq \frac{\gamma_{0,sp}}{K} \\
\ 0 , \gamma_{sp} < \frac{\gamma_{0,sp}}{K} . \\
\end{cases}
\end{eqnarray}
Replacing the index $s$ with the index $sp$ in \eqref{eq14} and taking into account \eqref{eq15}, we can find $S_e$ for the link with received SNR $\gamma_{sp}$. Again, the $f(\gamma_{sp,max})$ is obtained using \eqref{eq2} and the analysis provided in Section IV.
\subsection{Spectral Efficiency in Discrete Rate Adaptive Modulation}
We now consider a DR MQAM with a constellation set of size $N$ with $M_0 = 0$, $M_1 = 2$, and $M_j = 2^{2(j-1)}$ for $j = 2,\ldots,N$. At each symbol time, the system transmits with a constellation from the set $\{M_j :\ j = 0,1,\ldots,N\}$ ~\cite{c2}. The choice of a constellation depends on $\gamma$, i.e. the SNR over that symbol time, while the $M_0$ constellation corresponds to no data transmission. The spectral efficiency is now defined as the sum of the data rates of the constellations, each multiplied by the probability that the corresponding constellation is selected, and is thus given as follows:
\begin{eqnarray} \label{eq16}
\langle S_e \rangle_{DR} = \Sigma_{j=1}^{N}log_2(M_j)\,\text{Pr}(\gamma_{s,j} \leq \gamma_s < \gamma_{s,j+1})
\end{eqnarray}
subject to the following power constraint:
\begin{eqnarray} \label{eq17}
\Sigma_{j=1}^{N} \int_{\gamma_{s,j}}^{\gamma_{s,j+1}} \frac{P_j(\gamma_s)}{\bar{P}} f(\gamma_s)d\gamma_s = 1
\end{eqnarray}
where $P_j(\gamma_s)/\bar{P}$ is the optimal power allocation obtained from \eqref{eq7} for each constellation $M_j$ with a fixed BER as follows:
\begin{eqnarray}\label{eq18}
\frac{P_j(\gamma_s)}{\bar{P}}=
\begin{cases}
\ (M_j-1)\frac{1}{\gamma_{s,K}} -\frac{1}{\gamma_s K}, M_j \leq \frac{\gamma_s}{\gamma_{s,K}^*} \leq M_{j+1}\\
\ 0 , M_j=0\\
\end{cases}
\end{eqnarray}
where $\gamma_{s,K}^*$ is a parameter that is later optimized to maximize the spectral efficiency by defining the optimal constellation size for each $\gamma_s$. The analysis for the interference power constraint is obtained as above by replacing $\gamma_s$ with $\gamma_{sp}$, taking into account \eqref{eq9} and \eqref{eq10}.
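The discrete-rate region selection can be sketched as follows; \texttt{pick\_constellation} is an illustrative name, and $\gamma_{s,K}^*$ is taken here as a given parameter rather than optimized.

```python
def pick_constellation(gamma, gamma_star, N=4):
    # Region selection for DR MQAM: choose the largest M_j in the set
    # {0, 2, 4, 16, 64, ...} with M_j <= gamma/gamma_star (M_0 = 0 means
    # no transmission); the corresponding rate is log2(M_j) bits/symbol.
    M = [0, 2] + [2 ** (2 * (j - 1)) for j in range(2, N + 1)]
    ratio = gamma / gamma_star
    chosen = 0
    for m in M:
        if m <= ratio:
            chosen = m
    return chosen
```

For instance, a ratio $\gamma_s/\gamma_{s,K}^*$ between 2 and 4 maps to 4-QAM's predecessor $M_1=2$ (1 bit/symbol), while a ratio below 2 maps to $M_0=0$, i.e. no transmission.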
\section{Fading Distributions in Multi-User Environments} \label{Fadings}
\subsection{Rayleigh Distribution} \label{Rayl2}
\subsubsection{Opportunistic Spectrum Access}
We assume that the channel gains $g_{s,i}$ and $g_{p}$ are independent and identically distributed (i.i.d.) Rayleigh random variables $\forall i$. In OSA systems, only the transmit power constraint is applied, and thus \eqref{eq7} depends on the channel gains of the secondary links, i.e. $g_{s,i}$; the PDF of the received SNR is obtained as follows \cite{c9}:
\begin{eqnarray} \label{eq22}
f(x) &=& e^{-x} .
\end{eqnarray}
and the CDF of the PDF in \eqref{eq22} is obtained as follows:
\begin{eqnarray} \label{eq23}
F(x) = 1-e^{-x} .
\end{eqnarray}
Substituting \eqref{eq22} and \eqref{eq23} into \eqref{eq2}, we can derive the PDF $f_{\gamma_{s,max}}(x)$ of the maximum received SNR, and thus the capacity and spectral efficiency for the CR and DR adaptive modulation schemes derived above.
\subsubsection{Spectrum Sharing}
We assume that the channel gains $g_{s,i}$ and $g_{p}$ are i.i.d. Rayleigh random variables $\forall i$. For notational brevity, we will denote the term $g_{s,max}/g_{p}$ as $g_s/g_p$. We will substitute $X=g_s/g_p$ so that the PDF of the received SNR at the SU-Tx is obtained as follows:
\begin{eqnarray} \label{eq24}
\nonumber
f(x) &=& \int_{0}^{\infty} \ z e^{-x z} e^{-z} \,d z\\
&=& -\frac{e^{-(1+x)z}(1+z+x z)}{(1+x)^2}\Big|_0^\infty = \frac{1}{(1+x)^2}
\end{eqnarray}
which is identical to the expression presented in \cite{c14}. The CDF corresponding to the PDF in \eqref{eq24} is obtained as follows:
\begin{eqnarray} \label{eq25}
F(x) = 1 - \frac{1}{1+x} .
\end{eqnarray}
Substituting \eqref{eq24} and \eqref{eq25} into \eqref{eq2}, we can derive the PDF $f_{\gamma_{s,max}}(x)$ of the maximum received SNR and thus the capacity and spectral efficiency for CR and DR adaptive modulations derived above.
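As an independent sanity check of \eqref{eq24} and \eqref{eq25} (our own illustration), one can simulate the ratio $X=g_s/g_p$ of two i.i.d. unit-mean exponential power gains and compare the empirical CDF with $1-1/(1+x)$:

```python
import random

def empirical_ratio_cdf(x, trials=50000, rng=None):
    """Empirical P(g_s / g_p <= x) for independent unit-mean exponentials."""
    rng = rng or random.Random(7)
    hits = 0
    for _ in range(trials):
        gs = rng.expovariate(1.0)
        gp = rng.expovariate(1.0)
        if gs / gp <= x:
            hits += 1
    return hits / trials

def ratio_cdf(x):
    """Closed-form CDF F(x) = 1 - 1/(1+x) from (25)."""
    return 1.0 - 1.0 / (1.0 + x)
```

For example, at $x=1$ both give $1/2$: the ratio of two i.i.d. exponentials is as likely to be below $1$ as above it.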
\subsection{Nakagami$-m$ Distribution} \label{Nakag}
\subsubsection{Opportunistic Spectrum Access}
We now assume that the channel gains $g_{s,i}$ and $g_{p}$ are i.i.d. Nakagami$-m$ random variables $\forall i$, so that the following Nakagami$-m$ distribution applies:
\begin{eqnarray} \label{eq26}
f(x) = \frac{m^m x^{m-1}}{\Gamma(m)}e^{-m x}, & & x\geq 0
\end{eqnarray}
and the CDF corresponding to the PDF in \eqref{eq26} is obtained, for integer $m$, as follows:
\begin{eqnarray} \label{eq27}
F(x) = 1-e^{-m x}\sum_{k=0}^{m-1} \frac{(m x)^k}{k!} .
\end{eqnarray}
\subsubsection{Spectrum Sharing}
We now assume that the channel gains $g_{s,i}$ and $g_{p}$ are i.i.d. Nakagami$-m$ random variables $\forall i$ and thus follow the Nakagami$-m$ distribution, which for a specific channel gain $Z=z$ reads:
\begin{eqnarray} \label{eq28}
f(z) = \frac{m^m z^{m-1}}{\Gamma(m)}e^{-m z}, & & z\geq 0
\end{eqnarray}
where $m$ represents the shape factor under which the ratio of the line-of-sight (LoS) to the multi-path component is realized \cite{c15}. Assuming that both channel gains $g_{s,i}$ and $g_{p}$ have instantaneously the same fading fluctuations i.e. $m_s=m_p=m$, the PDF of the term $X=g_s/g_p$ is obtained as follows:
\begin{eqnarray} \label{eq29}
f(x) = \frac{x^{m-1}}{B(m,m)(x+1)^{2m}}, & & x \geq 0 .
\end{eqnarray}
After some mathematical manipulation, the CDF corresponding to the PDF in \eqref{eq29} is obtained as follows:
\begin{eqnarray} \label{eq30}
F_{g_s/g_p}(x) = \frac{1}{B(m,m)} \frac{x^m}{m} {_2}F_1(m,2m;1+m;-x)
\end{eqnarray}
where ${_2}F_1(a,b;c;y)$ is the Gauss hypergeometric function \cite{c16}. Substituting \eqref{eq29} and \eqref{eq30} into \eqref{eq2}, we can derive the PDF of the received SNR $\gamma_{s,max}$ of the selected SU-Rx for the Nakagami-$m$ case.
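The expressions \eqref{eq29} and \eqref{eq30} can be cross-checked numerically. The following Python sketch (our own, standard library only) evaluates \eqref{eq30} via a truncated hypergeometric series, which converges for $|x|<1$, and compares it against direct numerical integration of \eqref{eq29}:

```python
import math

def beta(a, b):
    """Beta function via the gamma function."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def hyp2f1(a, b, c, z, terms=400):
    """Truncated Gauss hypergeometric series; valid for |z| < 1."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

def ratio_pdf(x, m):
    """PDF of g_s/g_p for i.i.d. Nakagami-m power gains, eq. (29)."""
    return x ** (m - 1) / (beta(m, m) * (x + 1) ** (2 * m))

def ratio_cdf(x, m):
    """Closed-form CDF, eq. (30), via the series (requires x < 1 here)."""
    return x ** m / (beta(m, m) * m) * hyp2f1(m, 2 * m, 1 + m, -x)

def ratio_cdf_numeric(x, m, steps=100000):
    """Midpoint-rule integration of the PDF from 0 to x."""
    h = x / steps
    return sum(ratio_pdf((i + 0.5) * h, m) * h for i in range(steps))
```

For $m=1$ the CDF reduces to $x/(1+x)$, recovering the Rayleigh case above; for $m=2$ the series value matches the numerical integral of the PDF.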
\section{Numerical Results}
In the following figures, we depict the capacity and spectral efficiency for the continuous and discrete rate cases in OSA and SS CRNs, respectively. More specifically, Fig.~1 depicts the capacity and spectral efficiency for continuous and discrete rate in $bits/Hz$ versus the average transmit power $P_{av}$ in the secondary link for different numbers of secondary users (i.e., SU-Rxs), namely $N_s=1$, $N_s=5$ and $N_s=15$. In this figure, there is no average interference power constraint; in other words, we use the transmit power constraint applied in OSA CRNs. Thereby, the performance of adaptive modulation in OSA CRNs is depicted with multiple SU-Rxs. For the discrete rate case, we assume 5 regions of M-QAM with $M\in\{0,4,8,16,64\}$. As the number of secondary users $N_s$ increases, the capacity and spectral efficiency increase as well. We notice that a small increase in the number of SU-Rxs, i.e., $N_s=5$, already yields a large performance enhancement in $bits/Hz$ of more than one and a half times, whereas a further increase to $N_s=15$ yields a smaller additional gain, indicating that the increase in capacity and spectral efficiency exhibits a saturation behavior.
Fig.~2 depicts the capacity and spectral efficiency over the average interference power $Q_{av}$ at the link between the SU-Tx and PU-Rx. The average transmit power is taken to be $P_{av}=20dB$. Thereby, the performance of adaptive modulation in SS CRNs is depicted for multiple SU-Rxs over the interference channel. We observe that the performance increase is higher than the one over the average transmit power $P_{av}$ depicted in Fig.~1, whether $N_s=5$ or $N_s=15$. This is due to the fact that as the number of SU-Rxs increases, the possibility of finding an SU-Rx with sufficient SNR increases, and thus the rate achieved over the interference link increases as the constraint is relaxed, i.e., as $Q_{av}$ increases.
\begin{figure}[ht]
\includegraphics[width=9cm,height=7cm]{Fig1x.eps}
\caption{Channel capacity ($-o$) and spectral efficiency of continuous ($-s$) and discrete rate ($-d$) adaptive modulation vs. the average transmit power $P_{av}$ for different number of secondary users $N_s$ as depicted.}
\label{fig:cap-pav}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=9cm,height=7cm]{Fig2x.eps}
\caption{Channel capacity ($-o$) and spectral efficiency of continuous ($-s$) and discrete rate ($-d$) adaptive modulation vs. the average interference power $Q_{av}$ for different number of secondary users $N_s$ as depicted.}
\label{fig:cap-qav}
\end{figure}
Fig.~3 depicts the spectral efficiency in continuous and discrete rate versus the number of secondary users (SU-Rxs) for the Nakagami-$m$ fading coefficients $m = 1$, corresponding to Rayleigh fading, and $m = 2$, corresponding to a Ricean factor equal to 2.4312. In addition, we assume interference power constraints of $Q_{av}=-10dB$, $Q_{av}=0dB$ and $Q_{av}=10dB$ as well as a transmit power of $P_{av}=10dB$. Thereby, we depict the performance of adaptive modulation in SS CRNs versus the number of secondary users, i.e., SU-Rxs. The impact of $m$ is more evident for high interference power constraints, e.g., $Q_{av}=10dB$, where the degradation from $m=1$ to $m=2$ can be more than $2Bps/Hz$ for a large number of SU-Rxs, e.g., $N_s=15$. On the other hand, the impact is negligible for low average interference power constraints, e.g., $Q_{av}=-10dB$, where the fading environment, i.e., a change in $m$, does not decrease the performance significantly. For a more comprehensive view of Nakagami-$m$ channels, we depict in Fig.~4 the spectral efficiency vs. the average interference power $Q_{av}$ for an average transmit power $P_{av}=10dB$ (and thereby the case of an SS CRN) and different numbers of secondary users, $N_s=5$ and $N_s=15$, with $m = 1$ (Rayleigh) and $m = 2$ (Ricean) for the Nakagami-$m$ distribution. Obviously, under Rayleigh conditions the system achieves better performance, and the gain is more evident when the number of secondary users $N_s$ increases.
\begin{figure}[ht]
\includegraphics[width=9cm,height=8cm]{Fig3x.eps}
\caption{Capacity and spectral efficiency vs. number of SU-Rxs $N_s$ with $m = 1$ (Rayleigh) and $m = 2$ (Ricean) for the Nakagami-$m$ distribution.}
\label{fig:se-ns}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=9cm,height=8cm]{Fig4x.eps}
\caption{Spectral efficiency vs. the average interference power $Q_{av}$ for average transmit power $P_{av}=10dB$, different number of secondary users $N_s$ and with $m = 1$ (Rayleigh) and $m = 2$ (Ricean) for the Nakagami-$m$ distribution.}
\label{fig:se-qav}
\end{figure}
\section{Summary}
In this work, we have analyzed adaptive modulation in multi-user cognitive radio fading environments. In particular, we have analyzed the performance of adaptive modulation in cognitive radio networks with multiple secondary users assuming multi-user diversity as a transmission selection strategy. Both opportunistic spectrum access and spectrum sharing cognitive radio systems are considered using constraints on the transmit and interference power, respectively. The derived fading distributions model both Rayleigh and Nakagami-$m$ channels. Finally, the spectral efficiency gain is depicted in a multiple secondary user environment.
\section{Introduction}
The study of distance-$j$ ovoids in generalized polygons was started by Thas,
who investigated the existence of distance-$2$ ovoids in generalized quadrangles and distance-$3$ ovoids in generalized hexagons (which are simply known as ovoids) \cite{Thas1981}.
The existence of distance-$j$ ovoids is related to the existence of particular perfect codes \cite{Cameron1976},
the separability of particular groups \cite{Cameron2008}, and various other topics.
The focus of this work is on distance-$2$ ovoids in the \emph{dual split Cayley hexagon} $\h(q)^D$.
While for the \emph{split Cayley hexagon} $\h(q)$ itself the existence of distance-$2$ ovoids is already known for $q=2,3,4$ \cite{DeWispelaere2004, DeWispelaere-VanMaldeghem2005, DeWispelaere2008} and the ovoids completely classified \cite[Sec.~18.3]{Pech2009}, for $\h(q)^D$ only the non-existence for $q=2$ \cite{Frohardt1994} and the existence for $q=3$ \cite{DeWispelaere2004} is known.
Note that we have $\h(q)$ isomorphic to $\h(q)^D$ if and only if $q$ is a power of $3$ \cite[Cor.~3.5.7]{vanMaldeghem1998}.
Here we present a computer-based proof for the next open case, $\h(4)^D$.
\begin{theorem}\label{thm:non_ex_dualsplitcayley}
The dual split Cayley hexagon $\h(q)^D$ does not possess a distance-$2$ ovoid for $q \in \{2, 4\}$.
\end{theorem}
The proof uses a combination of various algorithmic ideas, mostly Knuth's dancing links algorithm \cite{Knuth2000}, Linton's smallest image set algorithm \cite{Linton2004} and integer linear programming.
We note that Theorem \ref{thm:non_ex_dualsplitcayley} has been used in \cite{Bishnoi2016} to prove that there does not exist any semi-finite generalized hexagon containing $\h(4)^D$ as a full subgeometry.
In fact, non-existence of distance-$2$ ovoids in any given finite generalized hexagon implies that every generalized hexagon containing the given hexagon as a full subgeometry is finite \cite[Cor.~3.7]{Bishnoi2016}.
It was shown in \cite{Offer2005} that a distance-$3$ ovoid in a generalized octagon of order $(s, t)$ can only exist if $s = 2t$.
This implies the non-existence of distance-$3$ ovoids of the dual Ree-Tits octagon $\go(q^2, q)$ for all $q > 2$.
Computationally, we show that the last remaining case, $\go(4, 2)$, does not possess a distance-$3$ ovoid.
A different computation to verify this fact was already done by Brouwer \cite{Brouwer2011}.
Brouwer's result is mentioned as a remark in a liber amicorum in Dutch, which leaves out some details of the techniques used, and his remark does not connect it to the result by Offer and van Maldeghem; so it seems worthwhile to restate their combined results as follows.
\begin{theorem}[Brouwer, Offer, van Maldeghem]\label{thm:non_ex_octagon}
The dual Ree-Tits octagon $\go(q^2, q)$ does not possess a distance-$3$ ovoid for any odd power $q$ of $2$.
\end{theorem}
\section{Preliminaries}
\subsection{Generalized Polygons}
A \emph{point-line geometry} is a triple $(\mathcal{P}, \mathcal{L}, \sfI)$, $\mathcal{P}$ and $\mathcal{L}$ disjoint,
$\sfI \subseteq \mathcal{P} \times \mathcal{L}$.
The elements of $\mathcal{P}$ are called \emph{points}, the elements of $\mathcal{L}$ are called \emph{lines},
the relation $\sfI$ is called \emph{incidence relation}.
The \textit{point-line dual }of the geometry $(\mathcal{P}, \mathcal{L}, \sfI)$ is the geometry $(\mathcal{P}^D, \mathcal{L}^D, \sfI^D)$ where $\mathcal{P}^D = \mathcal{L}, \mathcal{L}^D = \mathcal{P}$ and $(\ell, x) \in \sfI^D$ iff $(x, \ell) \in \sfI$.
An \textit{automorphism} of a point-line geometry $(\mathcal{P}, \mathcal{L}, \sfI)$ is a bijective map $f : \mathcal{P} \cup \mathcal{L} \rightarrow \mathcal{P} \cup \mathcal{L}$ such that $f(\mathcal{P}) = \mathcal{P}$, $f(\mathcal{L}) = \mathcal{L}$ and $(x, \ell) \in \sfI$ if and only if $(f(x), f(\ell)) \in \sfI$.
The \emph{incidence graph} of a point-line geometry $(\mathcal{P}, \mathcal{L}, \sfI)$ has $\mathcal{P} \cup \mathcal{L}$ as
its vertices and two vertices are adjacent if and only if they are incident.
We denote the distance function in this graph by $\delta(\cdot, \cdot)$.
The \emph{point graph} of a point-line geometry $(\mathcal{P}, \mathcal{L}, \sfI)$ has $\mathcal{P}$ as
its vertices and two vertices are adjacent if they have distance $2$ in the incidence graph, i.e., they lie on a common line.
We usually denote the point graph by $\Gamma$ and denote its distance function by $\mathrm{d}(\cdot, \cdot)$.
A point-line geometry is \textit{connected} if its incidence graph, or equivalently its point graph, is connected.
For a point $x$ and a line $\ell$ we define $\mathrm{d}(x, \ell) := \min \{\mathrm{d}(x, y) : y ~\sfI~\ell\}$.
Similarly for two lines $\ell_1, \ell_2$ we define $\mathrm{d}(\ell_1, \ell_2) = \min\{\mathrm{d}(x,y) : x ~\sfI~\ell_1, y~\sfI~\ell_2\}.$
The set of points at distance at most $i$ from a point $x$ in the point graph will be denoted by $\Gamma_{\leq i}(x)$ and the set of points at distance at most $i$ from a line $\ell$ will be denoted by $\Gamma_{\leq i}(\ell)$.
The following lemma relates the distance function $\delta$ to the distance function $\mathrm{d}$. We leave its proof to the reader.
\begin{lemma}
\label{lem:dist_delta}
Let $(\mathcal{P}, \mathcal{L}, \sfI)$ be a connected point-line geometry, let $\delta(\cdot, \cdot)$ denote the distance function in its incidence graph, and let $\mathrm{d}(\cdot, \cdot)$ denote the distance function in its point graph.
Let $x, y \in \mathcal{P}$ and $\ell, \ell' \in \mathcal{L}$ with $\ell \neq \ell'$.
Then we have $\delta(x, y) = 2\mathrm{d}(x, y)$, $\delta(x, \ell) = 2\mathrm{d}(x, \ell) + 1$ and $\delta(\ell, \ell') = 2\mathrm{d}(\ell, \ell') + 2$.
\end{lemma}
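Lemma \ref{lem:dist_delta} can also be verified mechanically on a small example. The following Python sketch (an illustration of ours) builds the incidence and point graphs of the Fano plane $\mathrm{PG}(2,2)$, viewed as a point-line geometry, and checks the first two identities by breadth-first search:

```python
from collections import deque

# Fano plane PG(2, 2): 7 points, 7 lines of 3 points each.
POINTS = range(7)
LINES = [frozenset(l) for l in
         [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
          {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]]

def bfs_dist(adj, src):
    """Distances from src in a graph given as an adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

# Incidence graph: vertices ('p', i) and ('l', j), edges = incidences.
inc = {('p', p): [] for p in POINTS}
inc.update({('l', j): [] for j in range(len(LINES))})
for j, line in enumerate(LINES):
    for p in line:
        inc[('p', p)].append(('l', j))
        inc[('l', j)].append(('p', p))

# Point graph: two points are adjacent iff some line contains both.
pt = {p: [q for q in POINTS if q != p and
          any(p in l and q in l for l in LINES)] for p in POINTS}

def check_lemma():
    """Verify delta(x,y) = 2 d(x,y) and delta(x,l) = 2 d(x,l) + 1."""
    for x in POINTS:
        delta = bfs_dist(inc, ('p', x))
        d = bfs_dist(pt, x)
        for y in POINTS:
            assert delta[('p', y)] == 2 * d[y]
        for j, line in enumerate(LINES):
            assert delta[('l', j)] == 2 * min(d[y] for y in line) + 1
    return True
```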
A \emph{generalized $n$-gon} ($n \geq 2$) of order $(s, t)$ is a point-line geometry $(\mathcal{P}, \mathcal{L}, \sfI)$, $\mathcal{P}$ non-empty, such that
\begin{enumerate}[(a)]
\item each $\ell \in \mathcal{L}$ is incident with $s+1$ elements of $\mathcal{P}$,
\item each $x \in \mathcal{P}$ is incident with $t+1$ elements of $\mathcal{L}$,
\item the incidence graph has diameter $n$ and the maximum possible girth, $2n$.
\end{enumerate}
By a famous result of Feit and Higman \cite{Feit-Higman}, generalized $n$-gons of order $(s, t)$ with $s, t > 1$ (the \textit{thick} case) exist only for $n \in \{ 2, 3, 4, 6, 8 \}$.
For $n = 2$ we have a geometry $(\mathcal{P}, \mathcal{L}, \sfI)$ where $\sfI = \mathcal{P} \times \mathcal{L}$ and for $n = 3$ we have a finite projective plane.
Generalized $n$-gons for $n = 4$, $6$ and $8$ are referred to as generalized \textit{quadrangles}, \textit{hexagons} and \textit{octagons}, respectively.
By an easy counting, the number of points in a generalized hexagon of order $(s, t)$ is $(1 + s)(1 + st + s^2 t^2)$ and the number of points in a generalized octagon of order $(s, t)$ is $(1 + s)(1 + st + s^2t^2 + s^3t^3)$.
From the axioms of a generalized polygon it follows that the point-line dual of a generalized polygon of order $(s, t)$ is a generalized polygon of order $(t, s)$.
For $n = 2d$, axiom (c) in the definition of generalized $n$-gons can be replaced by the following set of axioms on the point graph of the geometry \cite[Sec. 1.9.4]{DeBruyn2006_book}:
\begin{enumerate}[(1)]
\item For every line $\ell$ and every point $x$ there exists a unique point $x'$ on $\ell$ such that $\mathrm{d}(x, y) = \mathrm{d}(x, x') + 1$ for all $y \neq x'$ on $\ell$.
\item For every two points $x$, $y$ with $\mathrm{d}(x, y) = i < d$ there exists a unique neighbour of $y$ in the point graph which is at distance $i - 1$ from $x$.
\end{enumerate}
We denote the \emph{Desarguesian projective plane} over $\mathbb{F}_q$ by $\text{PG}(2, q)$.
Then $\h(q, 1)$ denotes the generalized hexagon of order $(q, 1)$ whose points are the incident point-line pairs of $\text{PG}(2, q)$, lines are the points and lines of $\text{PG}(2, q)$, and incidence is reverse containment.
Let $\ell$ be a $2$-dimensional subspace of $\mathbb{F}_q^n$, where $q$ a prime power and $n \geq 2$.
Let $x = (x_1, \dots, x_n), y = (y_1, \dots, y_n)$ be a basis of $\ell$. Then the Grassmann coordinates of $\ell$
are $(x_iy_j - x_jy_i)_{1 \leq i < j \leq n}$. Notice that the Grassmann coordinates are
independent of the choice of $x$ and $y$, up to scalar multiplication.
The \emph{dual split Cayley hexagon} $\h(q)^D$ is a generalized hexagon of order $(q, q)$ and can be defined as follows \cite[Chap. 2]{Tits1959, vanMaldeghem1998}.
Define the quadratic form $Q: \mathbb{F}_q^7 \rightarrow \mathbb{F}_q$ with $Q(x) = x_0x_4 + x_1x_5 + x_2x_6 - x_3^2$.
\begin{enumerate}[(a)]
\item The \textit{lines} of $\h(q)^D$ are all $1$-dimensional subspaces of $\mathbb{F}_q^7$ which vanish on $Q$.
\item The \textit{points} of $\h(q)^D$ are all $2$-dimensional subspaces of $\mathbb{F}_q^7$, which vanish on $Q$
and whose Grassmann coordinates satisfy $p_{12} = p_{34}$, $p_{54} = p_{32}$, $p_{20} = p_{35}$,
$p_{65} = p_{30}$, $p_{01} = p_{36}$ and $p_{46} = p_{31}$.
\item Incidence is reverse containment.
\end{enumerate}
Let $q = p^r$, where $p$ is a prime and $r$ is a positive integer.
Then the automorphism group of $\h(q, 1)$ is isomorphic to $\mathrm{P\Gamma L}_3(q) \rtimes C_2$ and thus it has size $2r(q^3 - 1)(q^3 - q)(q^3 - q^2)/(q - 1)$.
The automorphism group of $\h(q)$ is isomorphic to $\mathrm{G}_2(q) \rtimes \mathrm{Aut}(\mathbb{F}_q)$ and thus it has size $rq^6(q^6 - 1)(q^2 - 1)$.
The following is a well known result on the relationship between these generalized hexagons.
\begin{lemma}[{\cite[Cor. 1.8.6]{vanMaldeghem1998}}]
\label{lem:subhexagon}
The dual split Cayley hexagon $\h(q)^D$ contains a subhexagon $\mathcal{H}$ of order $(q, 1)$ isomorphic to $\h(q, 1)$.
Moreover, for every pair of lines $\ell_1, \ell_2 \in \h(q)^D$ which are at distance $6$ from each other in the incidence graph there is a unique $\h(q, 1)$-subhexagon of $\h(q)^D$ which contains both $\ell_1$ and $\ell_2$.
\end{lemma}
\begin{cor}
\label{cor:subhexagon}
The number of subhexagons of $\h(q)^D$ that are isomorphic to $\h(q, 1)$ is equal to $q^3(1+q)(q^2 - q + 1)/2$.
\end{cor}
\begin{proof}
Let $\delta(\cdot, \cdot)$ denote the distance function in the incidence graph of $\h(q)^D$. Double count the triples $(\ell_1, \ell_2, \mathcal{H})$ where $\ell_1, \ell_2$ are two lines of $\h(q)^D$ with $\delta(\ell_1, \ell_2) = 6$ and $\mathcal{H}$ is a subhexagon isomorphic to $\h(q, 1)$ that contains both $\ell_1$ and $\ell_2$.
There are in total $(1+q)(1 + q^2 + q^4)$ lines in $\h(q)^D$ and $q^5$ lines at distance $6$ from a fixed line.
Therefore, there are $q^5(1 + q)(1+q^2 + q^4)$ such triples.
There are in total $2(1+q+q^2)$ lines in $\h(q, 1)$ and $q^2$ lines at distance $6$ from a fixed line.
Thus, if $k$ is the total number of subhexagons isomorphic to $\h(q, 1)$, then we have $kq^2(2 + 2q + 2q^2) = q^5(1 + q)(1 + q^2 + q^4)$, which gives us $k = q^3(1+q)(q^2 - q + 1)/2$.
\end{proof}
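The double count in the proof is easy to verify numerically; the following Python snippet (illustration only) checks the identity for several prime powers:

```python
def num_h_q1_subhexagons(q):
    """k = q^3 (1+q) (q^2 - q + 1) / 2 from the corollary above."""
    k, rem = divmod(q ** 3 * (1 + q) * (q * q - q + 1), 2)
    assert rem == 0  # the count is always an integer
    return k

def check_double_count(q):
    """k * q^2 * 2(1 + q + q^2) must equal q^5 (1+q)(1 + q^2 + q^4)."""
    k = num_h_q1_subhexagons(q)
    lhs = k * q ** 2 * 2 * (1 + q + q ** 2)
    rhs = q ** 5 * (1 + q) * (1 + q ** 2 + q ** 4)
    return lhs == rhs
```

For $q = 4$ this gives $k = 2080$ subhexagons isomorphic to $\h(4,1)$.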
The \emph{dual Ree-Tits octagon} $\go(q^2, q)$, $q$ an odd power of $2$,
is a generalized octagon of order $(q^2, q)$ and its definition can be seen in \cite{Tits1983} or \cite{Coolsaet2005}.
\subsection{Ovoids and Associated Algorithms}
The usual definition of a distance-$j$ ovoid, $j \geq 1$, of a generalized polygon is the following \cite{Offer2005}.
\begin{definition}
Let $\mathcal{S} = (\mathcal{P}, \mathcal{L}, \sfI)$ be a generalized $2d$-gon and let $2 \leq j \leq d$.
\begin{enumerate}[(a)]
\item A \emph{partial distance-$j$ ovoid} of $\mathcal{S}$ is a set of points $\mathcal{O}$ such that all elements of $\mathcal{O}$ have distance at least $2j$ (in the incidence graph) from each other.
\item A \emph{distance-$j$ ovoid} of $\mathcal{S}$ is a partial distance-$j$ ovoid $\mathcal{O}$ such that every element of $\mathcal{P} \cup \mathcal{L}$ has distance at most $j$ from at least one element of $\mathcal{O}$.
\end{enumerate}
\end{definition}
As a consequence of Lemma \ref{lem:dist_delta} we have the following equivalent definition \cite[Sec.~3.5]{DeBruyn2005_valuations} in terms of the point-graph which we will use in this paper.
\begin{definition}
Let $\mathcal{S} = (\mathcal{P}, \mathcal{L}, \sfI)$ be a generalized polygon and let $\mathrm{d}(\cdot, \cdot)$ denote the distance function in the point graph of $\mathcal{S}$.
\begin{enumerate}[(a)]
\item A \emph{partial distance-$j$ ovoid} of $\mathcal{S}$ is a set of points $\mathcal{O}$ such that for every two distinct points $x$ and $y$ we have $\mathrm{d}(x, y) \geq j$.
\item A \emph{distance-$j$ ovoid} of $\mathcal{S}$ is a partial distance-$j$ ovoid $\mathcal{O}$ such that (1) for every point $a$ of $\mathcal{S}$ there exists a point $x$ of $\mathcal{O}$ such that $\mathrm{d}(a, x) \leq j/2$; (2) for every line $\ell$ of $\mathcal{S}$ there exists a point $x \in \mathcal{O}$ such that $\mathrm{d}(\ell, x) \leq (j - 1)/2$.
\end{enumerate}
\end{definition}
\begin{lemma}\footnote{One direction of this lemma is proved in \cite[Lem. 2]{DeBruyn2006_ovoids}.}
\label{lem:exact_cover}
Let $\mathcal{S} = (\mathcal{P}, \mathcal{L}, \mathsf{I})$ be a generalized $2d$-gon.
For any $i \in \{0, \dots, d\}$ and an element $a \in \mathcal{P} \cup \mathcal{L}$, let $\Gamma_{\leq i}(a)$ denote the set of points at distance at most $i$ from $a$ in the point graph of $\mathcal{S}$.
Let $\mathcal{O}$ be a set of points and $j \in \{2, \dots, d\}$.
Then
\begin{enumerate}[$(1)$]
\item for $j$ even, $\mathcal{O}$ is a distance-$j$ ovoid if and only if for all $\ell \in \mathcal{L}$ we have $|\Gamma_{\leq (j - 2)/2}(\ell) \cap \mathcal{O}| = 1$.
\item for $j$ odd, $\mathcal{O}$ is a distance-$j$ ovoid if and only if for all $x \in \mathcal{P}$ we have $|\Gamma_{\leq (j - 1)/2}(x) \cap \mathcal{O}| = 1$.
\end{enumerate}
\end{lemma}
\begin{proof}
We only prove the first case, when $j$ is even, and note that the second part has a similar proof.
Say $\mathcal{O}$ is a distance-$j$ ovoid and let $\ell \in \mathcal{L}$.
Then by the definition of distance-$j$ ovoids there exists a point $x$ in $\mathcal{O}$ such that $\mathrm{d}(x, \ell) \leq (j - 1)/2$, but since $j$ is even and distances are integral we have $\mathrm{d}(x, \ell) \leq (j - 2)/2$.
Say there was another point $y \neq x$ in $\mathcal{O}$ with $\mathrm{d}(y, \ell) \leq (j - 2)/2$.
Then $\mathrm{d}(x, y) \leq \mathrm{d}(x, \ell) + \mathrm{d}(y, \ell) + 1 = j - 1$ which is a contradiction.
Now say $\mathcal{O}$ is a set of points such that for every line $\ell$ we have $|\Gamma_{\leq (j - 2)/2}(\ell) \cap \mathcal{O}| = 1$.
Let $x, y$ be two distinct points in $\mathcal{O}$.
If $\mathrm{d}(x, y) \leq j - 1$, then there exists a line $\ell$ on a shortest path joining $x$ to $y$ such that $\mathrm{d}(x, \ell) \leq (j -2)/2$ and $\mathrm{d}(y, \ell) \leq (j - 2)/2$, which is not possible.
Now let $x$ be an arbitrary point of $\mathcal{S}$.
Let $\ell$ be any line through $x$, and let $y$ be the unique point in $\mathcal{O}$ such that $\mathrm{d}(\ell, y) \leq (j - 2)/2$.
Then $\mathrm{d}(x, y) \leq 1 + \mathrm{d}(\ell, y) = j/2$.
Let $\ell$ be an arbitrary line of $\mathcal{S}$, then by the assumption on $\mathcal{O}$ there exists a point in $\mathcal{O}$ at distance at most $(j - 2)/2 \leq (j - 1)/2$ from $\ell$.
Therefore, $\mathcal{O}$ is a distance-$j$ ovoid.
\end{proof}
The \textit{exact cover} problem in a hypergraph $(V, E)$ asks for the existence of a subset $S$ of $E$ such that for every vertex $v$ there exists a unique edge $e$ in $S$ which contains $v$.
The dual of this problem is the \textit{exact hitting set} problem where we need to find a subset $O$ of $V$ such that for every edge $e$ there is a unique vertex $v$ in $O$ which is contained in $e$.
Lemma \ref{lem:exact_cover} makes it clear that the existence of a distance-$j$ ovoid in a generalized $2d$-gon $\mathcal{S}$ is equivalent to existence of an \textit{exact hitting set} in a hypergraph derived from the point graph of $\mathcal{S}$.
For $j$ even the edges of this hypergraph are the subsets $\Gamma_{\leq (j-2)/2}(\ell)$ of $\mathcal{P}$ where $\ell$ is a line, and for $j$ odd the edges of this hypergraph are the subsets $\Gamma_{\leq (j - 1)/2}(x)$ where $x$ is a point.
This makes it possible to use Knuth's \emph{dancing links algorithm for exact covers} \cite{Knuth2000} to find all distance-$j$ ovoids.
Note that the exact cover problem is NP-hard.
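For illustration, a minimal backtracking solver for the exact hitting set problem can be written as follows (our own toy stand-in; dancing links realizes the same search far more efficiently):

```python
def exact_hitting_sets(vertices, edges):
    """Yield all subsets O of vertices meeting each edge in exactly one vertex.

    edges: a list of sets of vertices.  Plain backtracking over the vertices,
    pruning as soon as taking a vertex would hit some edge twice.
    """
    vertices = list(vertices)

    def extend(i, chosen, hits):
        if i == len(vertices):
            if all(h == 1 for h in hits):
                yield frozenset(chosen)
            return
        v = vertices[i]
        # Option 1: leave v out.
        yield from extend(i + 1, chosen, hits)
        # Option 2: take v, unless an edge through v is already hit.
        touched = [j for j, e in enumerate(edges) if v in e]
        if all(hits[j] == 0 for j in touched):
            for j in touched:
                hits[j] += 1
            chosen.append(v)
            yield from extend(i + 1, chosen, hits)
            chosen.pop()
            for j in touched:
                hits[j] -= 1

    yield from extend(0, [], [0] * len(edges))
```

With points as vertices and lines as edges, the solver confirms, for example, that the Fano plane has no exact hitting set: any two points lie on a common line, so no set of points can meet every line exactly once.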
A second technique which is available for the exact cover problems is the use of integer linear programming solvers.
We will use it in the following way.
Let $\mathcal{S} = (\mathcal{P}, \mathcal{L}, \sfI)$ be a generalized $2d$-gon.
Let $\mathcal{O}'$ be a possibly empty set of points which forms a partial distance-$j$ ovoid, i.e., every pair of points in $\mathcal{O}'$ are at distance at least $j$ in the point graph.
Let $H = (V, E)$ be the hypergraph as defined above, with $V = \mathcal{P}$ and
\[E =
\begin{cases}
\{\Gamma_{\leq (j - 2)/2}(\ell) : \ell \in \mathcal{L}\} \text{ if } j \text { is even}\\
\{\Gamma_{\leq (j - 1)/2}(p) : p \in \mathcal{P}\} \text{ if } j \text{ is odd}.
\end{cases}
\]
For each $p \in \mathcal{P}$ let $X_{p} \in \{ 0, 1\}$ be a binary variable.
Then the equations
\begin{align}
X_p = 1 && \text{ for all } p \in \mathcal{O}' \notag\\
\sum_{p \in e} X_p = 1 && \text{ for all } e \in E \label{eq:MIP_for_ovoid}
\end{align}
have an integer solution if and only if $\mathcal{S}$ possesses a distance-$j$ ovoid that contains $\mathcal{O}'$.
Similarly, the equations
\begin{align}
X_p = 1 && \text{ for all } p \in \mathcal{O}' \notag\\
\sum_{p \in e} X_p \leq 1 && \text{ for all } e \in E \label{eq:MIP_for_partil_ovoid}
\end{align}
have an integer solution if and only if $\mathcal{S}$ possesses a partial distance-$j$ ovoid that contains $\mathcal{O}'$.
Any of these formulations can be directly used to prove Theorem \ref{thm:non_ex_dualsplitcayley} for $q = 2$ and Theorem \ref{thm:non_ex_octagon}.
We have verified this using Gurobi\footnote{The running time was about one day with Gurobi Optimizer version 6.5.0 build v6.5.0rc1 (linux64) with an Intel Core i5-3550 CPU @ 3.30GHz processor}.
As noted before, non-existence of distance-$3$ ovoids in $\go(q^2, q)$ for $q > 2$ is covered in \cite{Offer2005}
and the case $q=2$ was already mentioned in \cite{Brouwer2011}.
\section{Distance-$2$ Ovoids in $\h(4)^D$}
\begin{lemma}
\label{lem:dual_split_Cayley}
Let $\mathcal{H}$ be a hexagon of order $(s, t)$.
Let $\mathcal{H}'$ be a subhexagon of order $(s, t')$ of $\mathcal{H}$ and
let $\mathcal{O}$ be a distance-$2$ ovoid of $\mathcal{H}$. Then
$\mathcal{H}' \cap \mathcal{O}$ is a distance-$2$ ovoid of $\mathcal{H}'$ and
\begin{align*}
|\mathcal{O} \cap \mathcal{H}'| = s^2t'^2 + st' + 1.
\end{align*}
\end{lemma}
\begin{proof}
By Lemma \ref{lem:exact_cover}, $\mathcal{O}$ is a distance-$2$ ovoid if and only if it meets every line in a unique point.
If each line of $\mathcal{H}$ meets $\mathcal{O}$ in exactly one point, then the same is true for $\mathcal{H}'$.
Moreover, the number of points in a generalized hexagon of order $(s, t)$ is $(1 + s)(1 + st + s^2t^2)$, and thus by double counting, the number of points in a distance-$2$ ovoid is $(1 + st + s^2t^2)$.
Therefore, we have $|\mathcal{O} \cap \mathcal{H}'| = s^2t'^2 + st' + 1$.
\end{proof}
While both Knuth's dancing links algorithm and integer programming solvers fail to directly determine the existence of distance-$2$ ovoids in $\h(4)^D$, which has $1365$ points and $1365$ lines, in view of Lemmas \ref{lem:subhexagon} and \ref{lem:dual_split_Cayley} we can use the following idea:
\emph{first classify all distance-$2$ ovoids in $\h(4, 1)$ up to isomorphism under the action of the stabilizer of $\h(4, 1)$, and then see if any of these ovoids can be extended to a distance-$2$ ovoid of $\h(4)^D$}.
\bigskip \noindent
We note that the stabilizer of a subgeometry of $\h(4)^D$ which is isomorphic to $\h(4, 1)$, under the action of the automorphism group of $\h(4)^D$ is in fact isomorphic to the automorphism group of $\h(4, 1)$.
As the point graph of $\h(q, 1)^D$ corresponds to the incidence graph of the projective plane $\text{PG}(2, q)$, a distance-$2$ ovoid in $\h(q, 1)$ corresponds to a perfect matching of the incidence graph of $\text{PG}(2, q)$.
It is folklore that the number of perfect matchings in a balanced bipartite graph equals the permanent of the biadjacency matrix of that graph (see for example \cite{Plummer2015}).
It is easy to verify the following by calculating the corresponding permanent.
\begin{lemma}[{\cite[A000794]{OEIS}}]\label{lem:permanent_h41}
The number of perfect matchings in the incidence graph of $\mathrm{PG}(2, 4)$ is $18534400$.
\end{lemma}
Notice that a perfect matching is an exact cover, and so we can use Knuth's dancing links algorithm to enumerate all perfect matchings in a bipartite graph.
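For the smallest case, $\mathrm{PG}(2, 2)$, the count can be verified by brute force (our own check; the incidence graph of the Fano plane is the Heawood graph, which is well known to have $24$ perfect matchings):

```python
from itertools import permutations

# Lines of the Fano plane PG(2, 2); points are 0..6.
FANO_LINES = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
              {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

def count_perfect_matchings(lines):
    """Permanent of the biadjacency matrix, computed as the number of
    bijections point -> line with each point lying on its assigned line."""
    n = len(lines)
    return sum(all(p in lines[sigma[p]] for p in range(n))
               for sigma in permutations(range(n)))
```

For $\mathrm{PG}(2, 4)$ the same brute force is hopeless ($10!$ is fine, but here one needs the permanent of a $21 \times 21$ matrix), which is where Ryser-type permanent formulas or dancing links come in.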
\begin{prop}\label{prop:class_h4d_ovoids}
Let $G$ be the automorphism group of $\h(4)^D$.
Let $\mathcal{H}$ be a subhexagon of $\h(4)^D$ isomorphic to $\h(4, 1)$.
Then there are exactly $350$ non-isomorphic distance-$2$ ovoids in $\mathcal{H}$ with respect to $G_{\mathcal{H}}$, the stabilizer of $\mathcal{H}$ under the action of $G$.
\end{prop}
We used a computer to prove Proposition \ref{prop:class_h4d_ovoids}.
The following algorithm was able to classify all $350$ in a few minutes at the time of writing.\footnote{Running time: 28m37.576s with Sage Version 6.4.1 with an Intel Core i5-2400 CPU @ 3.10 GHz processor. We have to point out that Knuth's dancing links algorithm is partially randomized, so the running times might vary for many reasons.
A different model of the hexagon with the same hardware and the same Sage version has an average running time of circa 120 minutes. The same model with a different Sage version on a slower processor has an average running time of circa 15 minutes.}
We rely on Linton's algorithm \texttt{SmallestImageSet(H, S)}, which returns the lexicographically
smallest element in the orbit of a set \texttt{S} under the action of a group \texttt{H} \cite{Linton2004}.
\label{alg:class_ovoids_h_4_1}
\begin{algorithmic}
\STATE Let $i$ be an iterator on all distance-$2$ ovoids of $\h(4, 1)$.
\STATE $b \leftarrow 18534400$
\STATE $L \leftarrow \{ \}$
\WHILE{$b > 0$}
\STATE $m \leftarrow i.\text{next}$
\STATE $m \leftarrow $ \texttt{SmallestImageSet($G_{\mathcal{H}}$, $m$)}
\IF{$m \notin L$}
\STATE $L \leftarrow L \cup \{ m \}$
\STATE $s \leftarrow $ the orbit length of $m$ under $G_{\mathcal{H}}$
\STATE $b \leftarrow b - s$
\ENDIF
\ENDWHILE
\end{algorithmic}
After running the algorithm, $L$ contains all distance-$2$ ovoids of $\h(4, 1)$.
We used the implementation of dancing links in Sage \cite{sage}\footnote{\url{http://www.sagenb.org/src/combinat/matrices/dlxcpp.py}} for the iterator and the implementation of \texttt{SmallestImageSet} in the GRAPE \cite{grape} package of GAP \cite{GAP4} to find the representatives of these $350$ isomorphism classes of distance-$2$ ovoids. We provide a more explicit description of these $350$ distance-$2$ ovoids at the end of this section.
We provide a list of all non-isomorphic 350 distance-$2$ ovoids and our full code online.\footnote{\url{http://math.ihringer.org/data.php}}
For each distance-$2$ ovoid $\mathcal{O}'$ of $\mathcal{H}$ we can define an integer linear program (ILP)
as in \eqref{eq:MIP_for_ovoid}. ILP solvers then easily show that these equations are infeasible for all of the $350$ cases.%
\footnote{We verified this with CPLEX (several versions), Gurobi Optimizer (several versions) and the constraint solver Minion. The 350 ILPs in 350 files in the LP format
took 540.3 seconds with Gurobi Optimizer version 6.5.0 build v6.5.0rc1 (linux64) with an Intel Core i5-3550 CPU @ 3.30GHz processor. Minion's running times were similar.}
This proves Theorem \ref{thm:non_ex_dualsplitcayley}.
\begin{rem}
For the next open case, $\h(5)^D$, our algorithmic approach fails for several reasons:
\begin{enumerate}[(a)]
\item The incidence graph of $\text{PG}(2, 5)$ has $4598378639550$ perfect matchings while the automorphism group of $\text{PG}(2, 5)$ has size $744000$.
So a classification of all non-isomorphic distance-$2$ ovoids of $\h(5, 1)$ seems to be out of reach.
\item Even for one given distance-$2$ ovoid of $\h(5, 1)$, the corresponding integer linear program takes too long to solve with state-of-the-art ILP solvers.
\end{enumerate}
\end{rem}
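To make the scale of the enumeration task concrete, the following sketch (a toy illustration, not our SAGE/DLX pipeline) brute-forces all perfect matchings of the incidence graph of $\text{PG}(2, 2)$, the Fano plane, by assigning a distinct incident point to every line.

```python
from itertools import permutations

# Lines of the Fano plane PG(2, 2) as sets of point labels 1..7.
FANO_LINES = [
    {1, 2, 3}, {1, 4, 5}, {1, 6, 7},
    {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6},
]

def perfect_matchings(lines):
    """Count assignments of a distinct incident point to every line,
    i.e. perfect matchings of the bipartite point-line incidence graph."""
    n = len(lines)
    count = 0
    for points in permutations(range(1, n + 1)):
        if all(points[i] in lines[i] for i in range(n)):
            count += 1
    return count

print(perfect_matchings(FANO_LINES))  # 24
```

Already $\text{PG}(2, 4)$ is far beyond such naive enumeration, which is why Dancing Links combined with isomorph rejection via \texttt{SmallestImageSet} is needed.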
One can use the same methods to obtain bounds on partial distance-$2$ ovoids.
\begin{lemma}\label{lem:bnd_without_full_subhex}
Let $\mathcal{O}$ be a partial distance-$2$ ovoid of $\h(q)^D$. Suppose that no subhexagon $\mathcal{H}$ of $\h(q)^D$
isomorphic to $\h(q, 1)$ contains $q^2+q+1$ points of $\mathcal{O}$. Then $|\mathcal{O}| \leq (q^2-q+1) (q^2+q)$.
\end{lemma}
\begin{proof}
Let $\mathcal{P}$ be the set of points of $\h(q)^D$.
We double count the pairs $(p, \mathcal{H})$, where
$\mathcal{H}$ is a subhexagon of $\h(q)^D$ isomorphic to $\h(q, 1)$ and $p \in \mathcal{O} \cap \mathcal{H}$.
From a counting argument similar to the one in the proof of Corollary \ref{cor:subhexagon}, we see that each point is contained in $(1+q)q^3/2$ subhexagons isomorphic to $\h(q, 1)$, which tells us that there are $|\mathcal{O}|(1+q)q^3/2$ such pairs.
Again by Corollary \ref{cor:subhexagon}, there are $q^3(1+q)(q^2 - q + 1)/2$ subhexagons of $\h(q)^D$ which are isomorphic to $\h(q, 1)$.
Under the condition $|\mathcal{O} \cap \mathcal{H}| \leq q^2+q$ this yields
$|\mathcal{O}| \leq (q^2-q+1) \cdot (q^2+q)$.
\end{proof}
For $q = 2$, Lemma \ref{lem:bnd_without_full_subhex} gives us $|\mathcal{O}| \leq 18$ and for $q = 4$ it gives us $|\mathcal{O}| \leq 260$ under the given assumptions.
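The double count in the proof of Lemma \ref{lem:bnd_without_full_subhex} is easy to verify mechanically; the sketch below recomputes the bound from the two counts used in the proof.

```python
def bound_without_full_subhex(q):
    """Upper bound |O| <= (q^2 - q + 1)(q^2 + q) obtained by double
    counting pairs (p, H) with p in O and H a subhexagon iso. to h(q, 1)."""
    subhex_per_point = (1 + q) * q**3 // 2                 # subhexagons through a fixed point
    total_subhex = q**3 * (1 + q) * (q**2 - q + 1) // 2    # subhexagons iso. to h(q, 1)
    max_points_per_subhex = q**2 + q                       # hypothesis: no full subhexagon
    # |O| * subhex_per_point <= total_subhex * max_points_per_subhex
    return total_subhex * max_points_per_subhex // subhex_per_point

print(bound_without_full_subhex(2))  # 18
print(bound_without_full_subhex(4))  # 260
```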
To prove that the bounds given by Lemma \ref{lem:bnd_without_full_subhex} hold \textit{for all} partial distance-$2$ ovoids of $\h(q)^D$, $q \in \{2, 4\}$, we can use the following computational approach.
If the ILP defined in \eqref{eq:MIP_for_partil_ovoid} does not have a solution larger than some integer $b \geq (q^2-q+1)(q^2+q)$ for all of the $350$ non-isomorphic distance-$2$ ovoids of $\h(q, 1)$, then we obtain $b$ as an upper bound on the size of a partial distance-$2$ ovoid.
We are able to obtain the following results using this approach.
\begin{lemma}
A partial distance-$2$ ovoid $\mathcal{O}$ of $\h(q)^D$ satisfies the following:
\begin{enumerate}[$(a)$]
\item $|\mathcal{O}| \leq 19$ for $q=2$.
\item $|\mathcal{O}| \leq 265$ for $q=4$.
\end{enumerate}
\end{lemma}
In fact, one can easily construct a partial distance-$2$ ovoid of size $19$ in $\h(2)^D$ using a computer.
So the bound for $\h(2)^D$ is sharp.
With Lemma \ref{lem:bnd_without_full_subhex} the bound we obtain for $\h(4)^D$ is $q^4+q=260$.
We suspect that
this is the true bound, but testing one of the $350$ distance-$2$ ovoids takes about $2$ days with our methods,
so we end up with an unreasonable running time of $2$ years.\footnote{Verifying the bound $265$ takes about
one week with our methods, which is more reasonable.}
We conclude this work by giving a more explicit description of the $350$ distance-$2$ ovoids of $\h(4, 1)$, i.e.\ of the corresponding perfect matchings of the incidence graph of $\text{PG}(2, 4)$.
We list the structure descriptions of the stabilizers of these ovoids as given by GAP, the lengths of the point orbits in $\h(4, 1)$,
and the lengths of the line orbits in $\h(4, 1)$.
\begin{center}
\begin{tabular}{lllll}
Stabilizer Size & Number & Structure & Point Orbit Lengths & Line Orbit Lengths\\ \midrule
126 & 1 & $(C_3 \times{} (C_7 : C_3)) : C_2$ & $42^1 21^1 14^2 7^2$ & $14^3$\\
84 & 4 & $S_3 \times{} D_{14}$ & $28^1 14^4 7^3$ & $28^1 14^1$\\
54 & 1 & $((C_3 \times{} C_3) : C_3) : C_2$ & $18^4 9^1 3^8$ & $18^1 6^4$\\
42 & 4 & $C_3 \times{} D_{14}$ & $14^3 7^9$ & $14^3$\\
36 & 2 & $S_3 \times{} S_3$ & $12^3 6^{10} 3^1 2^2 1^2$ & $12^1 6^4 2^3$\\
18 & 14 & $C_3 \times{} S_3$ & $6^{13} 3^7 2^2 1^2$ & $6^6 2^3$\\
18 & 2 & $C_3 \times{} S_3$ & $6^{16} 3^1 2^2 1^2$ & $6^6 2^3$\\
12 & 14 & $D_{12}$ & $4^{19} 2^{13} 1^3$ & $4^7 2^7$\\
9 & 3 & $C_3 \times{} C_3$ & $3^{33} 1^6$ & $3^{12} 1^6$\\
6 & 2 & $S_3$ & $2^{42} 1^{21}$ & $2^{14} 1^{14}$\\
6 & 43 & $S_3$ & $2^{50} 1^5$ & $2^{21}$\\
6 & 121 & $C_6$ & $2^{48} 1^9$ & $2^{21}$\\
3 & 139 & $C_3$ & $1^{105}$ & $1^{42}$
\end{tabular}
\end{center}
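The table can be sanity-checked mechanically: in each row the point orbit lengths must sum to the $105$ points of $\h(4, 1)$ and the line orbit lengths to its $42$ lines, every orbit length must divide the stabilizer order (orbit-stabilizer theorem), and the class counts must sum to $350$. A sketch of this check:

```python
# Each row: (stabilizer order, number of classes,
#            point orbit lengths, line orbit lengths), orbits as {length: multiplicity}.
TABLE = [
    (126, 1, {42: 1, 21: 1, 14: 2, 7: 2}, {14: 3}),
    (84, 4, {28: 1, 14: 4, 7: 3}, {28: 1, 14: 1}),
    (54, 1, {18: 4, 9: 1, 3: 8}, {18: 1, 6: 4}),
    (42, 4, {14: 3, 7: 9}, {14: 3}),
    (36, 2, {12: 3, 6: 10, 3: 1, 2: 2, 1: 2}, {12: 1, 6: 4, 2: 3}),
    (18, 14, {6: 13, 3: 7, 2: 2, 1: 2}, {6: 6, 2: 3}),
    (18, 2, {6: 16, 3: 1, 2: 2, 1: 2}, {6: 6, 2: 3}),
    (12, 14, {4: 19, 2: 13, 1: 3}, {4: 7, 2: 7}),
    (9, 3, {3: 33, 1: 6}, {3: 12, 1: 6}),
    (6, 2, {2: 42, 1: 21}, {2: 14, 1: 14}),
    (6, 43, {2: 50, 1: 5}, {2: 21}),
    (6, 121, {2: 48, 1: 9}, {2: 21}),
    (3, 139, {1: 105}, {1: 42}),
]

assert sum(num for _, num, _, _ in TABLE) == 350
for order, _, pts, lns in TABLE:
    assert sum(l * m for l, m in pts.items()) == 105   # points of h(4, 1)
    assert sum(l * m for l, m in lns.items()) == 42    # lines of h(4, 1)
    assert all(order % l == 0 for l in list(pts) + list(lns))  # orbit-stabilizer
print("table consistent")
```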
\section{Conclusion}
As the case $\h(5)^D$ is computationally out of reach,
the next goal should be to replace the computational parts of our proof for $\h(4)^D$ with algebraic arguments.
The investigation of the structure of the $350$ distance-$2$ ovoids of $\h(4, 1)$ shows that it might not be feasible to
describe these distance-$2$ ovoids explicitly.
Maybe the specific structure of a distance-$2$ ovoid of $\h(q, 1)$ is far less important than the
fact that all subhexagons of $\h(q)^D$ isomorphic to $\h(q, 1)$ meet a distance-$2$ ovoid in exactly $q^2+q+1$ points.
\bibliographystyle{plain}
\section{Introduction}
It has been a long-standing goal to determine
scattering amplitudes in quantum field theory from knowledge
of their analytic structure coupled with other basic physical
and mathematical input.
In planar $\mathcal{N}=4$ super-Yang--Mills theory (which
we refer to as SYM theory), the current state of the art for
carrying out explicit computations of multi-loop amplitudes
is a bootstrap program that relies fundamentally
on assumptions about the location of branch points of certain
amplitudes.
The aim of the research program initiated
in~\cite{Dennen:2015bet,Dennen:2016mdk} for MHV amplitudes
and generalized to non-MHV amplitudes in~\cite{Prlina:2017azl}
(to which this paper should be considered a sequel)
is to provide an \emph{a priori} derivation
of the set of branch points for any given amplitude.
For sufficiently simple amplitudes in SYM theory\footnote{General
amplitudes lie outside the class of generalized
polylogarithm functions that have well-defined
symbols, see for
example~\cite{CaronHuot:2012ab,Nandan:2013ip,Bourjaily:2017bsb}
for a discussion of this in the context of SYM theory.}
this information can go a long way by leading to natural
guesses for the \emph{symbol alphabets}~\cite{Goncharov:2010jf}
of various amplitudes.
The possibility to do so exists because of the simple fact
pointed out in~\cite{Maldacena:2015iua}
that the locus in the space of external data
$\Conf_n(\mathbb{P}^3)$ where
the symbol letters of a given amplitude vanish
should be the same as the locus where the corresponding
Landau equations~\cite{Landau:1959fi,ELOP}
admit solutions. A slight refinement of this statement,
to account for the fact that amplitudes in general have algebraic
branch cuts in addition to logarithmic cuts, was discussed
in Sec.~7 of~\cite{Prlina:2017azl}.
The hexagon bootstrap program, which
has succeeded in computing
all six-point amplitudes through
five loops~\cite{Dixon:2011pw,Dixon:2011nj,Dixon:2013eka,Dixon:2015iva,Caron-Huot:2016owq}, relies on the hypothesis
that these amplitudes can have branch points only at
nine specific loci in the space of external data
$\Conf_6(\mathbb{P}^3)$. Similarly the heptagon
bootstrap~\cite{Drummond:2014ffa},
which has revealed
the symbols of the seven-point four-loop
MHV and three-loop NMHV amplitudes~\cite{Dixon:2016nkn},
assumes 42 particular branch points.
Ultimately we may hope for an all-loop proof of these hypotheses
about six- and seven-point amplitudes, but in this paper
we focus on the less ambitious goal of deriving the singularity
loci for all two-loop NMHV amplitudes in SYM theory.
The result, summarized in~\secRef{symbol-alphabets}, leads to
a natural conjecture for the symbol alphabets of these amplitudes
which we hope may be employed in the near future by bootstrappers
eager to study this class of amplitudes.
The rest of this paper is organized as follows.
In~\secRef{classification}
we develop a procedure for constructing certain boundaries
of two-loop amplituhedra by ``merging'' one-loop configurations
of the type classified in the prequel~\cite{Prlina:2017azl}.
In~\secRef{presentation} we organize the results according to
helicity and codimensionality (the number of on-shell conditions
satisfied by each configuration) and discuss some subtleties
about overconstrained configurations that
require resolution.
Section~\ref{sec:on-shell-diagrams} discusses the connection between
branches of solutions to on-shell conditions and on-shell diagrams,
which provides a useful cross-check of our classification.
In~\secRef{nmhv-landau-analysis} we discuss the analysis of
the Landau equations for configurations relevant for NMHV
amplitudes and, in Eqns.~(\ref{eqn:twoloopalphabet})
and~(\ref{eqn:nmhvsymbolalphabets}), we present a conjecture for
the symbol alphabets of all two-loop NMHV amplitudes.
\section{Classification of Two-Loop Boundaries}
\label{sec:classification}
In this section we classify certain boundaries of two-loop amplituhedra.
This analysis builds heavily on Sections~3--5
of~\cite{Prlina:2017azl}, and in particular we show how to recycle
the one-loop boundaries classified there by ``merging'' pairs of
one-loop boundaries into two-loop boundaries.
We find that two different formulations
of the amplituhedron --- the original
formulation in terms of $C$ and $D$
matrices~\cite{Arkani-Hamed:2013jha},
and the reformulation in terms of sign flips~\cite{Arkani-Hamed:2017vfh} ---
play two complementary roles, exactly as in~\cite{Prlina:2017azl}.
Specifically, the former is useful for establishing the existence
of boundaries by constructing explicit $C$ and $D$ matrix representatives,
while the latter is useful for establishing the non-existence of any
other boundaries.
Before proceeding let us dispense with some important details that
would otherwise overcomplicate our exposition.
There is a parity symmetry between $A_{n,\textrm{k},L}$,
the $n$-point, $\nkmhv{k}$, $L$-loop amplitude in SYM theory,
and its parity conjugate $A_{n,n-\textrm{k}-4,L}$.
For fixed $n$, amplitudes become increasingly complicated as $\textrm{k}$ is increased
from zero, but after $\textrm{k} \sim n/2$ they must begin to decrease
in complexity until the upper bound $\textrm{k} = n-4$.
In what follows we will often make use of lower bounds on $\textrm{k}$,
or on constructions that increment $\textrm{k}$ by 1.
In making these arguments, we always have in mind
that $\textrm{k}$ is sufficiently small compared to $n$.
In other words, unless otherwise stated,
we are always working in the ``low-$\textrm{k}$'' regime, to use
the terminology of~\cite{Prlina:2017azl}. At the very end of our
analysis, once we have all of the desired results in this
regime, we appeal to parity symmetry in order
to translate low-$\textrm{k}$ results into high-$\textrm{k}$ results.
However the details of matching these two regimes
near the midpoint $\textrm{k} \sim n/2$
can be quite intricate, even more so at two loops than
in the one-loop analysis of~\cite{Prlina:2017azl}.
\subsection{Identifying the Relevant Boundaries}
\label{sec:identifying}
In general,
a configuration $(Y, \mathcal{L}^{(1)}, \mathcal{L}^{(2)})$ lies on a
boundary of a two-loop amplituhedron if
at least one item on the following menu is satisfied:
\begin{enumerate}
\renewcommand*\labelenumi{(\theenumi)}
\item $Y$ is such that some
four-brackets of the form $\langle a\,a{+}1\,b\,b{+}1\rangle$
vanish,
\item
$\mathcal{L}^{(1)}$ satisfies some
on-shell conditions
$\langle \mathcal{L}^{(1)}\,a_1\,a_1{+}1\rangle = \cdots =
\langle \mathcal{L}^{(1)}\,a_{d_1}\,a_{d_1}{+}1\rangle = 0$,
\item
$\mathcal{L}^{(2)}$ satisfies some
on-shell conditions
$\langle \mathcal{L}^{(2)}\,b_1\,b_1{+}1\rangle = \cdots =
\langle \mathcal{L}^{(2)}\,b_{d_2}\,b_{d_2}{+}1\rangle = 0$,
\item
or $\langle \mathcal{L}^{(1)}\,\mathcal{L}^{(2)}\rangle = 0$.
\end{enumerate}
Above and through the remainder of the paper, we always take
$\langle ABCD \rangle \equiv [Y ABCD]$ --- what we call projected
four-brackets following~\cite{Arkani-Hamed:2017vfh}.
For the purpose of finding Landau singularities
we are always interested only in loop momenta $(\mathcal{L}^{(1)},
\mathcal{L}^{(2)})$ that exist for generic projected external
data, i.e., for generic $Y$, so we disregard
possibility (1) in all that follows. Next, we note that for configurations
which do not satisfy (4),
the Landau equations decouple into two separate sets
of equations on the two individual loop momenta, so there
can be no new Landau singularities beyond those already found
at one loop.
Therefore in all that follows we only consider boundaries on which
$\langle \mathcal{L}^{(1)}\,\mathcal{L}^{(2)}\rangle = 0$.
The Landau equations similarly degenerate if either $d_1$ or
$d_2$ (defined in the preceding paragraph) is zero, so we are only interested in configurations
with $d_1 d_2 > 0$.
The above considerations motivate us to define
an $\mathcal{L}$-\emph{boundary} of a two-loop amplituhedron
as a configuration $(Y, \mathcal{L}^{(1)}, \mathcal{L}^{(2)})$ for which
$Y$ is such that the projected external data are generic,
$\langle \mathcal{L}^{(1)}\,\mathcal{L}^{(2)}\rangle = 0$, and
each $\mathcal{L}$ satisfies at least one on-shell condition
of the form $\langle \mathcal{L}\,a\,a{+}1\rangle = 0$.
In particular, these conditions imply that
both $(Y, \mathcal{L}^{(1)})$ and $(Y,\mathcal{L}^{(2)})$ must lie
on boundaries
of some one-loop amplituhedra; each of these must therefore
be one of the 19 branches tabulated
in Tab.~1 of~\cite{Prlina:2017azl}.
\subsection{Merging One-Loop Boundaries}
\label{sec:merging}
The preceding analysis suggests that
the boundaries of two-loop amplituhedra can be
understood by merging various one-loop boundaries.
Let us now see how this works in detail.
Suppose that $(Y^{(1)}, \mathcal{L}^{(1)})$ and $(Y^{(2)}, \mathcal{L}^{(2)})$
lie on boundaries of $\mathcal{A}_{n,\textrm{k}_1,1}$
and $\mathcal{A}_{n,\textrm{k}_2,1}$, respectively.
Then they can be represented as
$Y^{(\alpha)} = C^{(\alpha)} \mathcal{Z}$ and
$\mathcal{L}^{(\alpha)} = D^{(\alpha)} \mathcal{Z}$,
where for each $\alpha \in \{1, 2\}$, the matrices
$C^{(\alpha)}$,
$\left( \begin{smallmatrix} D^{(\alpha)} \\ C^{(\alpha)}
\end{smallmatrix} \right)$,
and $D^{(\alpha)}$ (as shown in~\cite{Prlina:2017azl}), are
all non-negative.
In order to streamline the argument we initially consider
$\textrm{k}_1$ and $\textrm{k}_2$ to be the smallest values of helicity for which
boundaries of the desired class exist, and we take each pair
$(C^{(\alpha)}, D^{(\alpha)})$ to have the form of one of the 19 branches
shown in Secs.~4.2 through 4.4 of~\cite{Prlina:2017azl}.
We will show that such a pair of valid one-loop boundary
configurations can be uplifted into a valid two-loop
boundary configuration $(C, D^{(1)}, D^{(2)})$ satisfying
$\langle \mathcal{L}^{(1)} \, \mathcal{L}^{(2)} \rangle = 0$ by
constructing an appropriate matrix $C$ from $C^{(1)}$ and $C^{(2)}$.
The process of merging two boundaries depends
on whether the two loop momenta $\mathcal{L}^{(1)}$,
$\mathcal{L}^{(2)}$ each pass through some
common external point $Z_i$. If they do, then
we say that they \emph{manifestly intersect} and the
condition that $\langle \mathcal{L}^{(1)}\,\mathcal{L}^{(2)}\rangle = 0$
is automatically satisfied.
In this case we can simply stack the two individual $C$-matrices on top of each
other in order to form
\begin{align}
C = \left( \begin{matrix}
C^{(1)} \\
C^{(2)} \end{matrix}\right).
\label{eqn:mergedC}
\end{align}
If, on the other hand, the two loop momenta do not manifestly
intersect, then we can still ensure that
$\langle \mathcal{L}^{(1)}\,\mathcal{L}^{(2)}\rangle = [ (C \mathcal{Z})\,
\mathcal{L}^{(1)} \, \mathcal{L}^{(2)}] = 0$ by adding
one additional suitably crafted row to $C$. Specifically,
if $A^{(\alpha)}$, $B^{(\alpha)}$ are any
four points in $\mathbb{P}^n$
such that $\mathcal{L}^{(\alpha)} = (A^{(\alpha)} \mathcal{Z},
B^{(\alpha)} \mathcal{Z})$,
then adding a row to $C$ that is any linear combination of
these four points will guarantee
that $\langle \mathcal{L}^{(1)}\,\mathcal{L}^{(2)}\rangle = 0$.
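This is simply the statement that a determinant with a row that is a linear combination of other rows vanishes. A numerical sketch (with random data and, for concreteness, $\textrm{k}=1$; the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 1                                  # n external points; vectors live in k+4 dims
Z = rng.standard_normal((n, k + 4))          # random "external data"

# rows in P^n defining L1 = (A1 Z, B1 Z) and L2 = (A2 Z, B2 Z)
A1, B1, A2, B2 = rng.standard_normal((4, n))
coeffs = rng.standard_normal(4)
C_row = coeffs @ np.vstack([A1, B1, A2, B2])   # extra row of C in their span

# projected bracket <L1 L2> = det of [C Z; A1 Z; B1 Z; A2 Z; B2 Z], here 5 x 5
M = np.vstack([C_row @ Z, A1 @ Z, B1 @ Z, A2 @ Z, B2 @ Z])
print(abs(np.linalg.det(M)) < 1e-9)  # True: the bracket vanishes identically
```

Exactly one such extra row suffices, which is why non-manifest intersection costs one extra unit of helicity in the merged configuration.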
In this manner we have constructed a candidate
for a configuration on the boundary of $\mathcal{A}_{n,\textrm{k},2}$
with $\textrm{k} = \textrm{k}_1 + \textrm{k}_2$ in the case of manifest intersection,
or $\textrm{k} = \textrm{k}_1 + \textrm{k}_2 + 1$ otherwise.
It remains to verify that this configuration is \emph{valid},
which means that $C$ can be chosen so that it and the matrices
$\left( \begin{smallmatrix} D^{(1)} \\ C\end{smallmatrix} \right)$,
$\left( \begin{smallmatrix} D^{(2)}\\ C \end{smallmatrix} \right)$,
and
$
\left( \begin{smallmatrix}
D^{(1)}
\\
D^{(2)}
\\
C
\end{smallmatrix}\right)
$
are all non-negative.
\subsection{Planarity from Positivity}
Let us begin by analyzing the non-negativity of the
$C$-matrix shown in~\eqnRef{mergedC}.
The nonzero columns of each $C^{(\alpha)}$ (which may be read off
from Secs.~4.3 and 4.4 of~\cite{Prlina:2017azl})
are grouped into clusters corresponding to the sets of contiguous indices
appearing in the on-shell conditions satisfied by the corresponding
$\mathcal{L}^{(\alpha)}$. For example,
for a boundary on which the three-mass
triangle on-shell conditions $\langle \mathcal{L}\, i\, i{+}1\rangle
= \langle \mathcal{L}\, j\, j{+}1\rangle =
\langle \mathcal{L}\, k\, k{+}1\rangle = 0$ are satisfied,
the $C$-matrix is zero
except in six columns grouped into three clusters
$\{i, i{+}1\}$, $\{j, j{+}1\}$ and $\{k, k{+}1\}$.
When we stack two $C$-matrices together, the result can be one of two different
cases depending on whether or not the clusters of $C^{(1)}$
are cyclically adjacent compared to the clusters of $C^{(2)}$.
If so, then the stacked $C$-matrix has the schematic form
\begin{equation}
C = \left( \begin{matrix}
C^{(1)} \\
C^{(2)} \end{matrix}\right)=
\left(
\begin{array}{*{11}c}
\cdots & 0 & \star & 0 & \star & 0 & 0 & 0 & 0 & 0 & \cdots
\cr
\cdots & 0 & 0 & 0 & 0 & 0 & \star & 0 & \star & 0 & \cdots
\end{array}
\right) \
\begin{matrix}
\} \ \textrm{k}_1 \ \textrm{rows} \\
\} \ \textrm{k}_2 \ \textrm{rows}
\end{matrix}
\label{eqn:planar}
\end{equation}
which we call \emph{planar}; otherwise it is of the form
\begin{align}
C = \left( \begin{matrix}
C^{(1)} \\
C^{(2)} \end{matrix}\right)=
\left( \begin{array}{*{11}c}
\cdots & 0 & \star & 0 & 0 & 0 & \star & 0 & 0 & 0 & \cdots
\cr
\cdots & 0 & 0 & 0 & \star & 0 & 0 & 0 & \star & 0 & \cdots
\end{array}
\right)
\begin{matrix}
\} \ \textrm{k}_1 \ \textrm{rows\,\hphantom{.}} \\
\} \ \textrm{k}_2 \ \textrm{rows\,.}
\end{matrix}
\label{eqn:nonplanar}
\end{align}
which we call \emph{non-planar}.
In Eqns.~(\ref{eqn:planar}) and~(\ref{eqn:nonplanar}) each $\star$ is
shorthand for one or more contiguous columns (i.e.,
clusters) of non-zero entries,
and we suppress displaying columns shared by the two $C$-matrices,
which are not relevant to our argument.
Also as indicated the top (bottom) row is shorthand for
$\textrm{k}_1$ ($\textrm{k}_2$) rows.
Given that our starting point is a pair of matrices
$C^{(1)}$, $C^{(2)}$ that are each non-negative, it is clear that the resulting
stacked $C$-matrix
has a chance to be non-negative (for certain values of its parameters) only for planar configurations;
the minors of~\eqnRef{nonplanar} manifestly have non-definite signs.
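The obstruction is visible already for $\textrm{k}_1 = \textrm{k}_2 = 1$: stacking two one-row matrices whose supports do not interleave can have all $2\times2$ minors non-negative, while interleaved supports force a negative minor. A minimal sketch:

```python
from itertools import combinations

def minors_2x2(r1, r2):
    """All ordered 2x2 minors of the 2-row matrix [r1; r2]."""
    return [r1[i] * r2[j] - r1[j] * r2[i]
            for i, j in combinations(range(len(r1)), 2)]

# planar: the support (cluster) of row 1 precedes the support of row 2
planar = ([0, 1, 2, 0, 0, 0, 0, 0],
          [0, 0, 0, 0, 1, 3, 0, 0])
# non-planar: the two supports interleave
nonplanar = ([0, 1, 0, 0, 2, 0, 0, 0],
             [0, 0, 0, 1, 0, 0, 3, 0])

print(all(m >= 0 for m in minors_2x2(*planar)))    # True
print(any(m < 0 for m in minors_2x2(*nonplanar)))  # True
```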
In cases when $\mathcal{L}^{(1)}$ and $\mathcal{L}^{(2)}$ do not
manifestly intersect we need to add an additional row to $C$
as described in the previous section.
This additional row can be considered part of either $C^{(1)}$ or $C^{(2)}$.
Since the coefficients in this row can be arbitrary and still
preserve $\langle \mathcal{L}^{(1)}\,\mathcal{L}^{(2)}\rangle=0$,
the coefficients can always be chosen such that the enlarged
$C$-matrix is non-negative.
The conclusion that only planar $C$'s can be made positive
still holds.
The nomenclature of `planar' and `non-planar' clusters is appropriate
in light of the fact that the locations of the clusters precisely correspond
to the sets of indices appearing in on-shell conditions listed in points (2)
and (3) at the beginning of \secRef{identifying}.
In a configuration like~\eqnRef{planar} there exist $a$, $b$
such that all of the on-shell conditions satisfied by
$\mathcal{L}^{(1)}$ lie in the range $\{a, a{+}1, \ldots, b, b{+}1\}$
while all of the on-shell conditions satisfied by $\mathcal{L}^{(2)}$
lie in the range $\{b, b{+}1, \ldots, a, a{+}1\}$ (as usual, all indices
are always understood mod $n$). Consequently, the two-loop
Landau diagram depicting the merged sets of on-shell
conditions (together with the propagator
$\langle \mathcal{L}^{(1)}\,\mathcal{L}^{(2)}\rangle$
shared between the two loops) is planar.
By the same argument, a nonplanar configuration
such as~\eqnRef{nonplanar}
is necessarily associated to
a nonplanar Landau diagram.
Now let us consider the non-negative matrices
$\left( \begin{smallmatrix} D^{(\alpha)} \\ C^{(\alpha)}\end{smallmatrix} \right)$
for the two individual initial boundary configurations ($\alpha=1$ or $2$).
We require that
these matrices stay non-negative when $C^{(\alpha)}$ is replaced by $C$.
By the argument given in Sec.~4.7 of~\cite{Prlina:2017azl},
this will be the case if the rows added to $C^{(\alpha)}$ have
nonzero entries only in the gaps between clusters of $C^{(\alpha)}$.
But this is just another way to phrase the planarity condition described
above, so again we see that planarity is enforced, this
time by requiring non-negativity of
$\left( \begin{smallmatrix} D^{(\alpha)} \\ C\end{smallmatrix} \right)$.
The final step in establishing the validity
of the configuration $(C, D^{(1)}, D^{(2)})$
is checking that
the matrix
$\left( \begin{smallmatrix} D^{(1)} \\
D^{(2)} \\
C\end{smallmatrix} \right)$
is non-negative.
In the parameterization we have chosen, all of the maximal minors
of this matrix actually vanish. If the two loops manifestly
intersect this can be checked by looking
at the form of the $(C, D)$ matrices tabulated
in~\cite{Prlina:2017azl}. If they do not manifestly intersect the analysis
is even easier, since in such cases we have included in $C$ a row
that is some linear combination of the four rows of $D^{(1)}, D^{(2)}$.
The argument as presented appears to fail
if either of the individual one-loop
boundaries is MHV, in which case there is no $C$ matrix.
However, for MHV boundaries it can
be seen from the expressions tabulated in
Sec.~4.2 of~\cite{Prlina:2017azl} that the $D$-matrix serves
the same role as the $C$-matrix played in the above argument.
For example, if $\textrm{k}_1 = 0$ so that $C^{(1)}$ is empty,
then $C = C^{(2)}$
so the requirement that
$\left( \begin{smallmatrix} D^{(1)} \\ C\end{smallmatrix} \right) =
\left( \begin{smallmatrix} D^{(1)} \\ C^{(2)}\end{smallmatrix} \right)$
must be non-negative requires that the clusters of $D^{(1)}$ be cyclically
adjacent compared to the clusters of $C^{(2)}$.
If both $\textrm{k}_1$ and $\textrm{k}_2$ are zero then $C$ is empty
and the same conclusion
follows from consideration
of the matrix
$
\left( \begin{smallmatrix}
D^{(1)}
\\
D^{(2)}
\end{smallmatrix}\right)
$.
Therefore, in all cases, the various non-negativity conditions
imply that the Landau diagram must be planar.
This emergent planarity was discussed
in context of MHV amplitudes in~\cite{Arkani-Hamed:2013kca}.
In conclusion, we have established that a boundary of
$\mathcal{A}_{n,\textrm{k},2}$ can be constructed by ``merging'' a
boundary of $\mathcal{A}_{n,\textrm{k}_1,1}$
with a boundary of $\mathcal{A}_{n,\textrm{k}_2,1}$,
with $\textrm{k} - \textrm{k}_1 - \textrm{k}_2 = 0$ or $1$ depending
on whether $\mathcal{L}^{(1)}$ and $\mathcal{L}^{(2)}$ manifestly intersect.
So far we have considered $\textrm{k}_1$ and $\textrm{k}_2$ to saturate the
lower bounds shown in Tab.~1 of~\cite{Prlina:2017azl}, but once a valid
configuration $(C, D^{(1)}, D^{(2)})$ has been constructed as described in this
section, it can be lifted to higher values of $\textrm{k}$
by growing the $C$-matrix according to a suitably modified version
of the argument given in Sec.~4.7 of that reference.
\subsection{Establishing the Lower Bound on Helicity}
We have shown that it is possible to merge two one-loop boundaries
with (minimal) helicities $\textrm{k}_1$ and $\textrm{k}_2$ in order to generate
two-loop boundaries with helicities
$\textrm{k} \ge \textrm{k}_1 + \textrm{k}_2$.
The merging algorithm we have described cannot generate
boundaries with $\textrm{k}$ below this lower bound.
In this section we prove that we have not overlooked any
potential two-loop boundaries. To do so, we
use the formulation of
amplituhedra in terms of sign flips~\cite{Arkani-Hamed:2017vfh}
(reviewed also in Sec.~2.2 of~\cite{Prlina:2017azl}) in order
to prove the lower bound.
The proof is essentially a loop-level version of the factorization
argument presented in Sec.~6 of~\cite{Arkani-Hamed:2017vfh} for
tree-level amplituhedra.
Let $(\mathcal{L}^{(1)}, \mathcal{L}^{(2)})$ be some configuration
of loop momenta on some codimension $d_1 + d_2 + 1$ boundary
of $\mathcal{A}_{n,\textrm{k},2}$, satisfying the on-shell
conditions
\begin{align}
\label{eqn:l1cuts}
\langle \mathcal{L}^{(1)}\,a_1\,a_1{+}1\rangle = \cdots
= \langle \mathcal{L}^{(1)}\,a_{d_1}\,a_{d_1}{+}1\rangle &= 0\,, \\
\langle \mathcal{L}^{(2)}\,b_1\,b_1{+}1\rangle = \cdots
= \langle \mathcal{L}^{(2)}\,b_{d_2}\,b_{d_2}{+}1\rangle =
\langle \mathcal{L}^{(1)}\,\mathcal{L}^{(2)}\rangle&= 0\,,
\label{eqn:l2cuts}
\end{align}
with the sets of indices $\{ a_1, \ldots, a_{d_1}\}$ and $\{b_1,\ldots, b_{d_2}\}$
cyclically ordered and
with $1 \le d_1, d_2 \le 4$ as detailed in~\cite{Prlina:2017azl}.
Planarity requires that all of the $b$'s fall inside
an interval between two consecutive $a$'s; specifically,
there exists some $j$ such that $a_j \le b_i \le a_{j+1}$
for all $i$. Once we have identified this value of $j$, let us backtrack
and consider factorization
(as described in~\cite{Arkani-Hamed:2017vfh})
on the boundary
$\langle \mathcal{L}^{(1)}\,a_j\,a_j{+}1\rangle =
\langle \mathcal{L}^{(1)}\,a_{j+1}\,a_{j+1}{+}1\rangle = 0$.
Then $\mathcal{L}^{(1)}$ passes through some point $A$ on the line
$(a_j\, a_{j}{+}1)$ and some point $B$ on the line
$(a_{j+1}\, a_{j+1}{+}1)$. With $\mathcal{L}^{(1)} = (A\,B)$
we consider the sets of momentum twistors
\begin{align}
V &= \{ A, Z_{a_j+1}, \ldots, Z_{a_{j+1}}, B\}\,,\\
W &= \{ B, Z_{a_{j+1}+1},\ldots, Z_{a_j}, A\}\,.
\end{align}
Thinking of $V$ and $W$ separately as ``(projected) external data''
for sub-amplituhedra describing
two smaller sets of scattering
particles\footnote{We put ``(projected) external data'' in quotation marks when it is (projected)
external data only for a sub-amplituhedron, not for the full amplituhedron.},
it follows using arguments analogous to those
in Sec.~6 of~\cite{Arkani-Hamed:2017vfh} that they lie in the principal
domain for helicities $\textrm{k}_V$ and $\textrm{k}_W$ satisfying
$\textrm{k}_V + \textrm{k}_W = \textrm{k}$ where $\textrm{k}$ is the original helicity
sector of the (projected) external data $\{Z_i\}$.
Under the assumption that the two-loop configuration
$(Y,\mathcal{L}^{(1)},\mathcal{L}^{(2)})$ is
a boundary of $\mathcal{A}_{n,\textrm{k},2}(Z)$, we prove below the following statements:
\begin{itemize}
\item if $\mathcal{L}^{(1)}$ is a solution to
the on-shell conditions~(\ref{eqn:l1cuts})
with minimum helicity $\textrm{k}_1$,
then $\textrm{k}_V \geq \textrm{k}_1$,
and similarly,
\item if $\mathcal{L}^{(2)}$ is a solution to
the on-shell conditions~(\ref{eqn:l2cuts})
with minimum helicity $\textrm{k}_2$,
then $\textrm{k}_W \geq \textrm{k}_2$.
\end{itemize}
Once we show this, it follows immediately that
the two-loop configuration $(\mathcal{L}^{(1)},
\mathcal{L}^{(2)})$ cannot be a valid boundary unless
\begin{equation}
\label{eqn:k-inequality}
\textrm{k} = \textrm{k}_V + \textrm{k}_W \geq \textrm{k}_1 + \textrm{k}_2\,.
\end{equation}
\paragraph{Proof.}
The minimum values of helicity $\textrm{k}_{\rm min}$ for which sets of
one-loop on-shell conditions admit solutions inside
the closure of $\mathcal{A}_{n,\textrm{k},1}$ were
derived in Sec.~4 of~\cite{Prlina:2017azl}.
In that analysis, the fact that a set of on-shell conditions does not have
valid solutions of a certain type for $\textrm{k} < \textrm{k}_{\rm min}$ followed
from the fact that the non-negativity constraints on the $C$ and
$\left( \begin{smallmatrix} D \\ C \end{smallmatrix} \right)$
matrices required certain sequences of (projected) four-brackets
to contain at least $\textrm{k}_{\rm min}$ sign flips.
In analyzing the constraints on the solution $\mathcal{L}^{(1)}$
to~\eqnRef{l1cuts}, the relevant sequences of four-brackets
are of the form $\langle \alpha\, \beta\, \gamma\, \bullet \rangle$
where $\alpha$, $\beta$ and $\gamma$ are functions
of the momentum twistors belonging
to the set $S = \{ Z_{a_1}, Z_{a_1+1}, \cdots, Z_{a_{d_1}},
Z_{a_{d_1}+1}\}$ only, and the required sign flips occur
between adjacent entries in $S$.
Note that there are two points ($Z_{a_j}$ and
$Z_{a_{j+1}{+}1}$) in $S$ that lie outside $V$,
the ``(projected) external data'' for one of the sub-amplituhedra
under consideration.
However, because $A$ lies on the line $(a_j\,a_j{+}1)$
and $B$ lies on the line $(a_{j+1}\,a_{j+1}{+}1)$, we clearly
have $(a_j\, a_j{+}1) = (a_j\,A)$ and
similarly $(a_{j+1}\,a_{j+1}{+}1) =
(a_{j+1}\,B)$ so we can choose to express $\alpha$, $\beta$ and $\gamma$
in terms of momentum twistors belonging to
\begin{align}
S' = \{ Z_{a_1}, Z_{a_1+1}, \ldots,
Z_{a_j}, A, B, Z_{a_{j+1}+1}, \ldots, Z_{a_{d_1}},
Z_{a_{d_1}+1} \} \subset V\,.
\end{align}
Therefore the abovementioned sequences can all be expressed
in terms of the ``(projected) external data'' associated to the $V$
sub-amplituhedron. Since there are $\textrm{k}_1$ sign flips in $S'$, it
must be the case that $\textrm{k}_V \ge \textrm{k}_1$.
It follows similarly that $\textrm{k}_W \ge \textrm{k}_2$. $\blacksquare$
In~\eqnRef{k-inequality} we derived an inequality
$\textrm{k} \ge \textrm{k}_1+\textrm{k}_2$, and at the end of~\secRef{merging}
we explained that two-loop configurations have support
starting from $\textrm{k} = \textrm{k}_1 + \textrm{k}_2$ or
$\textrm{k} = \textrm{k}_1 + \textrm{k}_2+1$.
In~\secRef{merging}
we effectively defined $\textrm{k}_1$ and $\textrm{k}_2$
as the minimum helicities for configurations of loop momenta
satisfying sets of disjoint on-shell conditions, not
including the shared propagator.
However, in this section the definition of $\textrm{k}_2$ (only) now
includes the shared propagator (cf.~\eqnRef{l2cuts}).
Effectively, this means that
the $\textrm{k}_2$ here is the same as in~\secRef{merging} only
for manifest intersection, but one greater than the latter in the
case of non-manifest intersection.
\section{Presentation of the Results}
\label{sec:presentation}
It is now a straightforward exercise to explicitly enumerate
all possible pairs of one-loop boundaries, using those listed
in Tab.~1 of~\cite{Prlina:2017azl}, and to determine the minimum
value of $\textrm{k}$ such that the merged configuration is a valid
boundary of $\mathcal{A}_{n,\textrm{k},2}$.
The resulting set is too large to display in a single figure
of the type of Fig.~1 of~\cite{Prlina:2017azl} (which is a summary
of the analogous results at one loop), so we focus first on
the maximal codimension boundaries. Each involves
a total of $d=8$ on-shell conditions:
the shared condition $\langle \mathcal{L}^{(1)}\, \mathcal{L}^{(2)}\rangle = 0$
together with seven conditions on the two loop momenta
($d_1 + d_2 = 7$, in the notation of~\secRef{identifying}).
\begin{figure}
\centering
\includegraphics[width=5.6in]{./figures/graph_flow_two_loop.pdf}
\caption[Graph flow.]{%
The twistor diagrams depicting the 14 distinct
maximal codimension boundaries of two-loop $\nkmhv{\textrm{k}}$ amplituhedra.
See the text for more details.
}
\label{fig:seed-amplituhedron-diagrams}
\end{figure}
We find a total of 14 topologically distinct maximal codimension configurations at two loops, which are summarized
in~\figRef{seed-amplituhedron-diagrams}. The figure emphasizes
the fact that all 14 varieties of
$\mathcal{L}$-boundaries can be obtained by some sequence of
helicity-increasing
operations $\ko{}$ (defined in Sec.~5.2
of~\cite{Prlina:2017azl}) acting on just two primitive diagrams,
one at MHV level and one at NMHV level.
The entirety of this figure should be thought of as the two-loop
$d=8$ analog of the one-loop $d=4$ column of Fig.~2 of that reference.
In the figure, an arrow labeled by $i$ indicates that the diagram
at the end of the arrow can be obtained by acting with
$\ko{i}$ on the diagram at the beginning of the arrow.
An arrow carries two labels if the result of acting with two
different instances of $\ko{}$ gives topologically equivalent diagrams,
in which case only the diagram corresponding to the first label on
the arrow is shown.
Note that for each diagram, the minimal value of $\textrm{k}$ precisely
matches the number of non-MHV intersections.
\subsection{Resolutions}
\label{sec:resolutions}
In each of the 14 twistor diagrams shown
in~\figRef{seed-amplituhedron-diagrams}, the configuration
manifestly exhibits a total of
$2 n_{\textrm{filled}} + n_{\textrm{empty}} = 8$ on-shell conditions, where
$n_{\textrm{filled}}$ is the number of filled nodes and $n_{\textrm{empty}}$ is the number
of empty nodes (including, in each diagram, the node at the intersection
of the two loop momenta).
However, on certain sufficiently high codimension boundaries, additional on-shell conditions can be implied
by the others and are therefore ``accidentally''
satisfied. This phenomenon occurs for the four
twistor diagrams in~\figRef{seed-amplituhedron-diagrams} that have
been drawn with
a filled node at the point $Z_i$ and two empty nodes
in close proximity
(grouped in a faint gray circle in~\figRef{seed-amplituhedron-diagrams}),
representing the four on-shell conditions
\begin{align}
\label{eqn:resolution}
\langle \mathcal{L}^{(1)}\,i{-}1\,i\rangle =
\langle \mathcal{L}^{(1)}\,i\,i{+}1\rangle =
\langle \mathcal{L}^{(2)}\,i\,i{+}1\rangle =
\langle \mathcal{L}^{(1)}\,\mathcal{L}^{(2)}\rangle = 0\,.
\end{align}
The first three conditions are satisfied by
$\mathcal{L}^{(1)} = (Z_i, A)$ and
$\mathcal{L}^{(2)} = (\alpha Z_i + (1 - \alpha) Z_{i+1}, B)$
for any points $A$, $B$.
Then, for generic $A$ and $B$, the fourth condition
in~\eqnRef{resolution} implies that $\alpha = 1$, so the line
$\mathcal{L}^{(2)}$ is forced to pass through the point $Z_i$.
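Explicitly, by linearity of the four-bracket,
\begin{align}
\langle \mathcal{L}^{(1)}\,\mathcal{L}^{(2)}\rangle
= \langle Z_i\, A\, \left(\alpha Z_i + (1 - \alpha) Z_{i+1}\right) B\rangle
= (1 - \alpha)\, \langle Z_i\, A\, Z_{i+1}\, B\rangle\,,
\nonumber
\end{align}
and the final bracket is nonzero for generic $A$ and $B$.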
Therefore, configurations of this type satisfy the additional
on-shell condition $\langle \mathcal{L}^{(2)}\,i{-}1\,i\rangle = 0$.
This phenomenon reflects the fact that in general, the
on-shell conditions satisfied by a given configuration are
not independent: some of them may be implied by the others.
In~\cite{Dennen:2016mdk} it was found that solving the Landau equations
for boundaries of this type was rather subtle, and required first
identifying a suitable minimal subset of independent on-shell conditions,
a process
called \emph{resolution}.
It was suggested that a resolution must satisfy two
criteria: (1) the chosen subset of on-shell conditions must imply
the full set of conditions satisfied for generic (projected)
external data, and (2) the Landau diagram corresponding to the subset
must be planar.
The example considered above describes a configuration
that satisfies five on-shell conditions, the four shown
in~\eqnRef{resolution} and also $\langle \mathcal{L}^{(2)}\,i{-}1\,i\rangle
= 0$. There are four possible resolutions that satisfy criterion
(1): we can simply omit any one of the conditions
except for $\langle \mathcal{L}^{(1)}\,\mathcal{L}^{(2)}\rangle = 0$.
However not all four choices will satisfy criterion (2), depending
on the points $A$ and $B$. For the four configurations
appearing in~\figRef{seed-amplituhedron-diagrams} that require
resolution, there are in each case precisely two valid resolutions:
we can omit either $\langle \mathcal{L}^{(2)}\,i{-}1\,i\rangle = 0$
(as was done in~\eqnRef{resolution}), or we can omit
$\langle \mathcal{L}^{(1)}\,i\,i{+}1\rangle = 0$.
In~\figRef{seed-amplituhedron-diagrams} we have chosen to always
draw a resolved configuration in the four cases where it is necessary.
However, in order to avoid clutter we do not draw
both resolutions unless they give rise to inequivalent diagrams.
There are at least three reasons for preferring the resolved configurations.
First, the action of the three graph operators $\ko{}$, $\uo{}$ and
$\ro{}$ is harder to discern on an unresolved configuration.
Second, the need for resolution is an accident that occurs
only when both loop momenta lie in the low-$\textrm{k}$ branch of solutions
to their respective on-shell conditions (or, by parity symmetry,
when they both lie in the high-$\textrm{k}$ branch). If one of them
lies in the low-$\textrm{k}$ branch and the other lies in the high-$\textrm{k}$
branch, then for generic (projected) external data only the
resolved configuration(s) exist; the ``extra'' on-shell condition
would place restrictions on the external data.
Finally, when we turn our attention to finding Landau
singularities in~\secRef{nmhv-landau-analysis},
we will always want to work
with resolved diagrams since these give us the independent sets
of on-shell conditions for which we will need to solve the
Landau equations~\cite{Dennen:2016mdk}.
\subsection{Relaxations}
All lower-codimension $\mathcal{L}$-boundaries
are relaxations: they can be generated
by releasing one or more of the seven
on-shell conditions (excepting $\langle \mathcal{L}^{(1)}\,
\mathcal{L}^{(2)}\rangle = 0$, which we always preserve)
satisfied on the maximal boundaries.
Boundaries of this type can be generated by acting on the
twistor diagrams
in~\figRef{seed-amplituhedron-diagrams}
with sequences of the graph operators $\uo{}$ and $\ro{}$.
In this way one could imagine uplifting the figure
to a three-dimensional generalization
of Fig.~2 of~\cite{Prlina:2017azl}, with the top layer
being a copy of~\figRef{seed-amplituhedron-diagrams}
showing the maximal codimension boundaries ($d=8$), the next layer
showing those with $d=7$, etc.
One novelty compared to the one-loop analysis of~\cite{Prlina:2017azl}
is that starting at two loops the relaxation of a boundary
is not necessarily still a boundary --- this will only be the case
if the Landau diagram of the relaxation continues to be planar.
Rather than attempting to draw the aforementioned web of
interconnected boundaries in a single figure,
we summarize our results in terms of the corresponding Landau diagrams in
Tabs.~\ref{tab:mhv-results}--\ref{tab:n4mhv-results} grouped
according to the minimum helicity for which the configuration is
valid, i.e.~the minimum $\textrm{k}$ for which $\mathcal{A}_{n, \textrm{k}, 2}$ has
boundaries of the type shown in the corresponding twistor diagram.
Because the maximal codimension singularities have
$d_1 + d_2 = 7$, the corresponding Landau diagrams always have
the topology of a planar pentagon-box.
\begin{table}
\centering
\begin{tabular}
{
>{\centering\arraybackslash} m{0.035\textwidth}
>{\centering\arraybackslash} m{0.275\textwidth}
>{\centering\arraybackslash} m{0.6\textwidth}
}
& Twistor Diagram & Landau Diagram \\
\hline \hline
(a)
&
\includegraphics{./figures/mts_n0mhv_a.pdf}
&
\includegraphics{./figures/ld_n0mhv_a.pdf}
\\[-9pt]
\end{tabular}
\caption[Results for $\nkmhv{k \ge 0}$]{%
The twistor and Landau diagram describing a type
(the unique type, for $\textrm{k} = 0$)
of resolved maximal
codimension boundary of $\nkmhv{\textrm{k} \ge 0}$ amplituhedra.
}
\label{tab:mhv-results}
\end{table}
As mentioned above,
the lower codimension singularities can be obtained by acting
on the twistor diagrams with sequences of
$\uo{}$ and $\ro{}$ operators. As discussed in Sec.~5.2
of~\cite{Prlina:2017azl}, at one loop these operators generate relaxations
that respectively preserve or increase, but can never
decrease, the minimum helicity
for which a configuration is valid.
There is however a subtlety with the $\uo{}$ operator at two loops.
Recall that $\uo{i,\mp}$ is the ``unpinning'' operator which acts on a
loop momentum $\mathcal{L}$
passing through some point $Z_i$ by relaxing
the on-shell condition
$\langle \mathcal{L}\, i\, i{\pm}1\rangle = 0$.
This can have the effect of turning what was a manifest intersection
between the two loop momenta into a non-manifest intersection,
which requires increasing the minimum helicity by 1.
\begin{table}
\centering
\begin{tabular}
{
>{\centering\arraybackslash} m{0.035\textwidth}
>{\centering\arraybackslash} m{0.275\textwidth}
>{\centering\arraybackslash} m{0.6\textwidth}
}
& Twistor Diagram & Landau Diagram \\
\hline \hline
(a)
&
\includegraphics{./figures/mts_n1mhv_a.pdf}
&
\includegraphics{./figures/ld_n1mhv_a.pdf}
\\[-9pt]
(b)
&
\includegraphics{./figures/mts_n1mhv_c.pdf}
&
\includegraphics{./figures/ld_n1mhv_c1.pdf}
\\[-9pt]
(c)
&
\includegraphics{./figures/mts_n1mhv_b.pdf}
&
\includegraphics{./figures/ld_n1mhv_b.pdf}
\\[-9pt]
(d)
&
\includegraphics{./figures/mts_n1mhv_d.pdf}
&
\includegraphics{./figures/ld_n1mhv_d.pdf}
\\[-9pt]
\end{tabular}
\caption[Results for $\nkmhv{k \ge 1}$]{%
The twistor and Landau diagrams describing types of (resolved,
in (a) and (c)) maximal
codimension boundaries of $\nkmhv{\textrm{k} \ge 1}$ amplituhedra.
}
\label{tab:nmhv-results}
\end{table}
\begin{table}
\centering
\begin{tabular}
{
>{\centering\arraybackslash} m{0.035\textwidth}
>{\centering\arraybackslash} m{0.275\textwidth}
>{\centering\arraybackslash} m{0.6\textwidth}
}
& Twistor Diagram & Landau Diagram \\
\hline \hline
(a)
&
\includegraphics{./figures/mts_n2mhv_b.pdf}
&
\includegraphics{./figures/ld_n2mhv_b.pdf}
\\[-9pt]
(b)
&
\includegraphics{./figures/mts_n2mhv_a.pdf}
&
\includegraphics{./figures/ld_n2mhv_a.pdf}
\\[-9pt]
(c)
&
\includegraphics{./figures/mts_n2mhv_c.pdf}
&
\includegraphics{./figures/ld_n2mhv_c1.pdf}
\\[-9pt]
(d)
&
\includegraphics{./figures/mts_n2mhv_d.pdf}
&
\includegraphics{./figures/ld_n2mhv_d1.pdf}
\\[-9pt]
(e)
&
\includegraphics{./figures/mts_n2mhv_e.pdf}
&
\includegraphics{./figures/ld_n2mhv_e.pdf}
\\[-9pt]
\end{tabular}
\caption[Results for $\nkmhv{k \ge 2}$]{%
The twistor and Landau diagrams describing types of (resolved,
in (b)) maximal
codimension boundaries of $\nkmhv{\textrm{k} \ge 2}$ amplituhedra.
}
\label{tab:n2mhv-results}
\end{table}
In the tables we have introduced a new graphical notation in order
to account for this phenomenon: a propagator with a black dot
denotes an on-shell condition that cannot be relaxed without
increasing the minimum helicity for which the configuration is valid.
(We also always draw a black dot on the $\langle
\mathcal{L}^{(1)}\,\mathcal{L}^{(2)}\rangle$ propagator,
as a reminder that we never want to relax it.)
Consider for example the twistor diagram
in~\tabRef{mhv-results}(a). The two loop momenta manifestly
intersect at the point $Z_i$ as explained in the previous section,
but this will no longer be the case if we act on this twistor
diagram with $\uo{i,-}$. Instead, the configuration would
become NMHV rather than MHV (in fact, it would become a
relaxation of~\tabRef{nmhv-results}(d), up to relabeling).
For this reason we draw a black dot on the $(i\,i{+}1)$
propagator on the pentagon in the Landau diagram
of~\tabRef{mhv-results}(a).
\begin{table}
\centering
\begin{tabular}
{
>{\centering\arraybackslash} m{0.05\textwidth}
>{\centering\arraybackslash} m{0.3\textwidth}
>{\centering\arraybackslash} m{0.35\textwidth}
}
& Twistor Diagram & Landau Diagram\\
\hline \hline
(a)
&
\includegraphics{./figures/mts_n3mhv_a.pdf}
&
\includegraphics{./figures/ld_n3mhv_a.pdf}
\\[-9pt]
(b)
&
\includegraphics{./figures/mts_n3mhv_c.pdf}
&
\includegraphics{./figures/ld_n3mhv_c.pdf}
\\[-9pt]
(c)
&
\includegraphics{./figures/mts_n3mhv_b.pdf}
&
\includegraphics{./figures/ld_n3mhv_b.pdf}
\\[-9pt]
\end{tabular}
\caption[Results for $\nkmhv{k \ge 3}$]{%
The twistor and Landau diagrams describing types of
maximal
codimension boundaries of $\nkmhv{\textrm{k} \ge 3}$ amplituhedra.
}
\label{tab:n3mhv-results}
\end{table}
\subsection{Closing Comments}
\label{sec:closing}
In summary,
to get the full list of Landau diagrams at
helicity $\textrm{k} = 0, 1, 2, 3, 4$, one must therefore
consider all of the Landau diagrams in
Tables~\ref{tab:mhv-results} through~\ref{tab:n4mhv-results},
respectively, together with the diagrams generated therefrom by collapsing
any subset of undotted propagators.
In~\figRef{seed-amplituhedron-diagrams} and in the tables
we have chosen to always
draw the loop momentum satisfying $d_1 = 4$
in blue and the one satisfying $d_2 = 3$ in red, but of course
the amplituhedron is symmetric under the exchange of any $\mathcal{L}$'s
so
in each case
both assignments \color{blue} $\mathcal{L}^{(1)}$\color{black},
\color{red} $\mathcal{L}^{(2)}$ \color{black} and
\color{blue} $\mathcal{L}^{(2)}$\color{black},
\color{red} $\mathcal{L}^{(1)}$\color{black}
describe valid boundaries.
The Landau diagrams in Tables~\ref{tab:mhv-results}--\ref{tab:n4mhv-results}
are always drawn with the understanding that all indicated labels
are cyclically ordered: $i < i' < j < j' < j'' < k < k' < k'' < i$ (mod $n$).
However, the ordering of intersections along the red or blue
loop momentum lines carries no significance. Therefore,
as described in Sec.~5.1 of~\cite{Prlina:2017azl}, there is a second
type of ambiguity between the two classes of diagrams.
For example, the twistor diagram in Tab.~\ref{tab:nmhv-results}(a)
is agnostic about the cyclic ordering of $i$, $k$, and $k'$;
the two independent choices lead to the Landau diagram shown in the table
or to its mirror image.
In all of the tables we use primes (and, when necessary, also
double primes) to indicate pairs (or triplets)
of nodes that can be exchanged, as far as the
twistor diagram is concerned.
Sometimes, as
in the example
Tab.~\ref{tab:nmhv-results}(a) just considered, an exchange generates
a Landau diagram of the same topology, but in other cases it can generate
a new topology. For example, exchanging $k$ and $k'$
in the twistor diagram of Tab.~\ref{tab:nmhv-results}(b) generates
the new Landau diagram
\begin{align}
\begin{gathered}
\includegraphics{./figures/ld_n1mhv_c2.pdf}
\end{gathered}\,,
\nonumber
\end{align}
where it is to be understood that $i < j < k' < k < i$.
\begin{table}
\centering
\begin{tabular}
{
>{\centering\arraybackslash} m{0.05\textwidth}
>{\centering\arraybackslash} m{0.3\textwidth}
>{\centering\arraybackslash} m{0.35\textwidth}
}
& Twistor Diagram & Landau Diagram \\
\hline \hline
(a)
&
\includegraphics{./figures/mts_n4mhv_a.pdf}
&
\includegraphics{./figures/ld_n4mhv_a.pdf}
\\[-9pt]
\end{tabular}
\caption[Results for $\nkmhv{k \ge 4}$]{%
The twistor and Landau diagram describing a type of
maximal codimension boundary of $\nkmhv{\textrm{k} \ge 4}$ amplituhedra.
}
\label{tab:n4mhv-results}
\end{table}
Let us also note
that although when interpreted literally as configurations of
intersecting lines in $\mathbb{P}^3$ most twistor diagrams only
depict the low-$\textrm{k}$ branch of solutions to a given set of
on-shell conditions, it is clear that additional, higher-$\textrm{k}$
boundaries can be generated by replacing one or both of the
$\mathcal{L}$'s with their parity conjugates. The twistor
diagrams appearing in~\figRef{seed-amplituhedron-diagrams}
and in the five tables can therefore
each be thought of as representing four different types of boundaries
corresponding to the same Landau diagram.
Finally, we detail, in Appendix~\ref{sec:twistor-to-landau}, how a partial edge-to-node duality
maps between the twistor diagrams on the left
and the Landau diagrams on the right of these tables, when the two diagrams are treated as graphs.
On the one hand, it is not surprising that there exists some map between
these two classes of graphs, since both are designed to encode the same information.
On the other hand, it is intriguing that there is a straightforward map
between a generic Landau diagram and the minimum-helicity solution
to the on-shell conditions of said diagram
in the very particular choice of loop momentum twistor coordinates.
This observation is also reminiscent of the map from Feynman integrals to their duals
that aided in exploring the dual conformal invariance of SYM theory
amplitudes~\cite{Drummond:2006rz,Alday:2007hr,Drummond:2008vq}
but here, enticingly, this partial edge-to-node map is well-defined even on nonplanar graphs.
\section{The Connection With On-Shell Diagrams}
\label{sec:on-shell-diagrams}
So far, we have seen that to each boundary of an amplituhedron one
can associate a Landau diagram
which encodes information about the singularities of the associated
amplitude.
In this section we explore the connection between Landau diagrams
and a class of closely related diagrams that also encode information about
an amplitude's mathematical structure: the on-shell
diagrams of~\cite{ArkaniHamed:2012nw}.
We explain and demonstrate in several examples that
for a given amplitude, the information content of
certain on-shell diagrams matches the combined
information content in the amplituhedron and Landau diagrams.
Except possibly for cases of the type discussed in the paragraph
following \eqnRef{toreject}, we expect our arguments
to also hold for amplitudes at higher loop order
and higher helicity.
One reason to shift focus to on-shell diagrams is that
anything that can be formulated in terms of
the on-shell diagrams discussed here potentially generalizes
to more general quantum field theories including less supersymmetric
theories as well as the full, non-planar super-Yang--Mills theory.
The major difference is that in the planar theory,
the relevant Landau diagrams can, in principle, be read
off from the boundaries of $\mathcal{A}_{n,\textrm{k},L}$ for
arbitrary $n$, $\textrm{k}$, and $L$, while in the non-planar
sector there is currently no known source for this list of diagrams.
Nevertheless, assuming one has a way to generate a representation for
a given non-planar amplitude in terms of Feynman integrals,
all of the techniques discussed in this section
apply equally well to those non-planar integrals.
Putting that ambitious motivation aside, in the rest of this section
we stick to planar SYM theory and show in several examples
that a given Landau diagram encodes a singularity of
an $\nkmhv{\textrm{k}}$ amplitude only if the diagram can be decorated
in such a way that it becomes an on-shell diagram
associated with an $\nkmhv{\textrm{k}}$ amplitude.
We begin with a brief review of on-shell diagrams.
\subsection{On-Shell Diagrams}
\label{sec:on-shell-diagrams-review}
An~\emph{on-shell diagram}, as introduced in~\cite{ArkaniHamed:2012nw},
is a connected trivalent graph
with each node having one of two distinct decorations,
traditionally denoted by
coloring them black or white.
In the application to scattering amplitudes,
each edge of the diagram represents an on-shell condition
(just like in a Landau diagram) and
each black (white) node corresponds to a three-point MHV
($\overline{\rm MHV}$)
tree-level superamplitude.
A straightforward generalization allows nodes of higher degree
which represent higher-point tree-level superamplitudes.
These we depict by a shaded node.
We refer the reader to~\cite{ArkaniHamed:2012nw} for details,
recalling here only a few basic facts.
A \emph{tree-level superamplitude} of
\emph{Grassmann weight} $\kappa$ is a rational function
of (projected) external data that is a homogeneous polynomial
of degree $4 \kappa$ in certain Grassmann variables (the fermionic
partners of the momentum twistors $Z_i$).
Three-point MHV and $\overline{\rm MHV}$ amplitudes respectively
have $\kappa = 2$ and $\kappa= 1$ while for $n > 3$ an $n$-point
amplitude with helicity $\textrm{k}$ has $\kappa = \textrm{k}+2$.
To each on-shell
diagram there is an associated differential form that is obtained
by first multiplying together
the tree-level superamplitudes represented by each of the diagram's
nodes, and then sewing them together according to a set of simple
rules
that involve integrating over four Grassmann variables for each
internal edge (propagator) in the diagram.
Such forms are the residues of the amplitude's
integrand at specific loci in loop momentum space.
Consider an on-shell diagram $\delta$. Let $\iota$ be the number
of internal edges of $\delta$, and for each node $\nu$ let $\kappa_\nu$
be the Grassmann weight of the tree-level superamplitude at $\nu$.
As a result of the rules just reviewed,
the total Grassmann weight of $\delta$ is
\begin{align}
\label{eqn:total-grassmann-weight}
\kappa_\delta = \sum_{\nu} \kappa_\nu - \iota \,,
\end{align}
and the total helicity is $\textrm{k}_\delta = \kappa_\delta-2$.
To assign a~\emph{coloring} to a Landau diagram depicting some
set of on-shell conditions means to assign to each trivalent node
in the diagram either a white or black coloring, and to
assign to each node $\nu$ of degree $n > 3$ some helicity
$\textrm{k}_\nu = \kappa_\nu - 2 \in \{0, \ldots, n - 4\}$.
Since $\iota$ is fixed by the propagator structure of the diagram,
and each $\kappa_\nu$ is positive,
it is clear from~\eqnRef{total-grassmann-weight}
that the minimal Grassmann weight of a given Landau diagram
results from coloring all trivalent nodes white and from
assigning all nodes of higher degree to be MHV ($\kappa_\nu = 2$).
In this way we see that
the Grassmann weight of an arbitrary coloring of a given Landau diagram
is bounded below
by
\begin{align}
\label{eqn:minimum-grassmann}
\kappa \ge \kappa_{\textrm{min}} = n_{\textrm{tri}} + 2 n_{\textrm{high}} - \iota \,,
\end{align}
where $n_{\textrm{tri}}$ is the number of trivalent nodes and
$n_{\textrm{high}}$ is the number of nodes of degree higher than three.
This implies a minimal helicity
sector $k_{\textrm{min}} = \kappa_{\textrm{min}} - 2$ for which the
Landau diagram can be relevant.
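As a minimal sketch (the function names below are our own, purely illustrative), this counting is simple enough to automate and can be checked against the one-loop boxes analyzed later in this section: the two-mass easy box (2 trivalent nodes, 2 higher-degree nodes, 4 propagators) and the three-mass box (1 trivalent node, 3 higher-degree nodes, 4 propagators).

```python
# Minimal sketch of the Grassmann-weight counting; function names are ours.
def kappa_total(node_weights, internal_edges):
    """Total Grassmann weight of a coloring: the sum of the node weights
    minus the number of internal edges (propagators)."""
    return sum(node_weights) - internal_edges

def kappa_min(n_tri, n_high, internal_edges):
    """Lower bound on the Grassmann weight: all trivalent nodes white
    (weight 1) and all higher-degree nodes MHV (weight 2)."""
    return kappa_total([1] * n_tri + [2] * n_high, internal_edges)

# Two-mass easy box: 2 trivalent nodes, 2 higher-degree nodes, 4 propagators.
assert kappa_min(2, 2, 4) == 2   # kappa = 2  ->  MHV coloring, k >= 0
# Three-mass box: 1 trivalent node, 3 higher-degree nodes, 4 propagators.
assert kappa_min(1, 3, 4) == 3   # kappa = 3  ->  NMHV coloring, k >= 1
```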
If a diagram has $n_{\textrm{tri}}$ trivalent nodes,
there are $2^{n_{\textrm{tri}}}$ colorings of the trivalent nodes, but
in general some of these may lead to on-shell diagrams that
evaluate to zero.
In practice we count the number of permissible colorings of a diagram
by solving the on-shell
conditions implied by the diagram
and mapping each resulting solution to a specific coloring
(see~\cite{ArkaniHamed:2012nw}).
As discussed in~\cite{ArkaniHamed:2010gh}, solving a
set of on-shell conditions in momentum twistor space
amounts to solving a Schubert problem.
At one loop these problems have in general two solutions,
while for an $L$-loop Landau diagram we would in general
expect $2^L$ branches of solutions.
Given a solution to a Schubert problem in momentum twistor space,
it is straightforward\footnote{We thank J.~Bourjaily
for explaining this point to us.}
to check if a given trivalent node is MHV or $\overline{\rm MHV}$
by considering
the rank of the three momentum-twistor lines at the node.
For an MHV node, the three twistors have full rank,
while for an $\overline{\rm MHV}$ node the rank is less than full.
This process is illustrated explicitly in several examples
in the following section.
In summary, we have reviewed that a given Landau diagram
encodes a set of on-shell conditions, and the various branches
of solutions
to those conditions
correspond in general to different minimum helicity sectors.
The permissible colorings of a Landau diagram
are in one-to-one correspondence with those branches,
and the Grassmann weight $\kappa$ of each
such
Landau-turned-on-shell diagram is related to the minimum helicity sector
$\textrm{k}$
of the corresponding solution via $\kappa=\textrm{k}+2$.
This observation provides an alternative way to phrase
the Landau-equation-based
algorithm we employ to identify singularities of amplitudes,
compared for example to the way it is phrased
in the conclusion of~\cite{Dennen:2016mdk}
or in Sec.~2.5 of~\cite{Prlina:2017azl}.
For one thing, it means we can
identify a singularity of a Landau diagram
as a singularity of $\nkmhv{\textrm{k}}$ amplitudes only if the
diagram admits a coloring with total helicity $\textrm{k}$ (equivalently,
Grassmann weight $\textrm{k}+2$).
More specifically, when first solving the on-shell conditions
(a subset of the Landau equations) for a given
Landau diagram, each solution directly indicates,
via the test reviewed in the previous paragraph, the helicity
sector for which the singularity associated to that solution is relevant.
This step in the on-shell diagram approach is the analog of
identifying, in the amplituhedron approach, the values of $\textrm{k}$
for which the momentum twistor
solution lies on the boundary of the $\nkmhv{\textrm{k}}$ amplituhedron.
In the amplituhedron-based approach, there is potential for
confusion because solving the Kirchhoff conditions (the remaining
Landau equations) can lead to solutions for loop momenta
that lie outside the $\nkmhv{\textrm{k}}$ amplituhedron. The on-shell
diagram approach bypasses this confusion because the Kirchhoff
conditions only further localize a loop momentum solution whose helicity
sector has already been identified.
\subsection{Examples at One and Two Loops}
\label{sec:on-shell-diagram-examples}
We now consider several examples in order to emphasize the following point:
\begin{align}
\begin{array}{l}
\textrm{A
Landau diagram contributes singularities to an $\nkmhv{\textrm{k}}$ amplitude
}
\\
\textrm{only
if the diagram permits a coloring with total Grassmann weight $\textrm{k}+2$.
}
\end{array}
\end{align}
For each of our examples, we also list the values of the loop momenta
corresponding to the colorings of the correct Grassmann weight. For the
one-loop examples the same information can be read off from Tab.~1
of~\cite{Prlina:2017azl}. We will show how the on-shell diagram and
amplituhedron-based methods work in tandem to quickly identify the
helicity sector
for which a given solution to the set of on-shell conditions is relevant.
\subsubsection*{One-loop Two-mass Easy Box}
The on-shell conditions
\begin{align}
\langle \mathcal{L}\,i{-}1\,i\rangle =
\langle \mathcal{L}\,i\,i{+}1\rangle =
\langle \mathcal{L}\,j{-}1\,j\rangle =
\langle \mathcal{L}\,j\,j{+}1\rangle = 0
\end{align}
admit two solutions, called branches (12) and
(13) in~\cite{Prlina:2017azl}.
In~\tabRef{two-mass-easy-colorings} we pair the momentum
twistor representation of each solution
with the associated on-shell diagram, i.e.~colored Landau diagram.
Having this information accessible will prove useful when considering
two loops.
In~\tabRef{two-mass-easy-colorings}(a) the minimum Grassmann weight is
computed according to~\eqnRef{minimum-grassmann} and found to be
\begin{align}
\kappa_{\textrm{min}} =
\underbrace{1+1}_{\textrm{white}} + \underbrace{2 + 2}_{\textrm{higher}}-4
= 2
\end{align}
so that it is an MHV ($\textrm{k}=2-2=0$) coloring.
In~\tabRef{two-mass-easy-colorings}(b) the minimum Grassmann weight is
\begin{align}
\kappa_{\textrm{min}} =
\underbrace{2+2}_{\textrm{black}} + \underbrace{2 + 2}_{\textrm{higher}}-4
= 4
\end{align}
so that it is an $\nkmhv{2}$ ($\textrm{k}=4-2=2$) coloring.
Let us now show how to compute
the appropriate node colorings directly from
the momentum twistor solutions in~\tabRef{two-mass-easy-colorings}.
Consider the trivalent node where external label $i$ connects to the loop.
The three lines in momentum twistor space defining the trivalent node are
$(i{-}1\,i)$, $(i\,i{+}1)$, and $\mathcal{L}_*$,
where $\mathcal{L}_*$ is either $(i\,j)$
or $\overline{i} \cap \overline{j}$.
Taking first $\mathcal{L}_{*} = (i\,j)$,
we seek the dimension of the space spanned by the three momentum-twistor
lines.
One way to compute this is to ask for the rank of the matrix:
\begin{align}
\textrm{rank}
\left(
\begin{array}{c}
(i{-}1\,i) \\
(i\,i{+}1) \\
(i\,j)
\end{array}
\right)
=
\textrm{rank}
\bordermatrix{
& i{-}1 & i & i{+}1 & j \cr
& 1 & 0 & 0 & 0 \cr
& 0 & 1 & 0 & 0 \cr
& 0 & 1 & 0 & 0 \cr
& 0 & 0 & 1 & 0 \cr
& 0 & 1 & 0 & 0 \cr
& 0 & 0 & 0 & 1 \cr
}
=
4
\end{align}
which has maximal rank. So the node is MHV, and colored white.
In contrast, consider the other solution $\mathcal{L}_{*} =
\overline{i} \cap \overline{j}$.
The analogous matrix is then
\begin{align}
\textrm{rank}
\left(
\begin{array}{c}
(i{-}1\,i) \\
(i\,i{+}1) \\
\overline{i} \cap \overline{j}
\end{array}
\right)
=
\textrm{rank}
\bordermatrix{
& i{-}1 & i & i{+}1 \cr
& 1 & 0 & 0 \cr
& 0 & 1 & 0 \cr
& 0 & 1 & 0 \cr
& 0 & 0 & 1 \cr
& \langle i\,\overline{j} \rangle & - \langle i{-}1\, \overline{j}
\rangle & 0 \cr
& 0 & \langle i{+}1 \overline{j} \rangle &
- \langle i \, \overline{j} \rangle \cr
}
=
3
\end{align}
which does not have maximal rank.
Thus the second solution is encoded in an $\overline{\rm MHV}$ node
at $i$, colored black.
The colorings of the node at $j$ can be computed analogously.
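Both rank computations can also be checked mechanically. The sketch below builds the two matrices just displayed and confirms that they have ranks 4 and 3 respectively; for the second matrix, generic nonzero numbers (our own arbitrary choices) stand in for the symbolic four-brackets.

```python
import numpy as np

# Solution L_* = (i j): the three lines (i-1 i), (i i+1), (i j) written in
# the basis {Z_{i-1}, Z_i, Z_{i+1}, Z_j}, one momentum twistor per row.
M_mhv = np.array([
    [1, 0, 0, 0],   # Z_{i-1}
    [0, 1, 0, 0],   # Z_i
    [0, 1, 0, 0],   # Z_i
    [0, 0, 1, 0],   # Z_{i+1}
    [0, 1, 0, 0],   # Z_i
    [0, 0, 0, 1],   # Z_j
])
assert np.linalg.matrix_rank(M_mhv) == 4     # full rank: MHV (white) node

# Solution L_* = ibar ∩ jbar: all six twistors lie in the 3-space ibar, so
# only three basis directions appear.  Generic nonzero numbers stand in for
# the brackets a = <i jbar>, b = <i-1 jbar>, c = <i+1 jbar>.
a, b, c = 2.0, 3.0, 5.0
M_mhvbar = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 1],
    [a, -b, 0],
    [0, c, -a],
])
assert np.linalg.matrix_rank(M_mhvbar) == 3  # rank 3 < 4: MHVbar (black) node
```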
\begin{table}
\centering
\begin{tabular}
{
>{\centering\arraybackslash} m{0.05\textwidth}
>{\centering\arraybackslash} m{0.2\textwidth}
>{$}c<{$}
>{$}c<{$}
}
& Coloring & \nkmhv{\textrm{k}} & \textrm{Twistor Solution}
\\
\hline \hline
(a)
&
\includegraphics{./figures/osd_two_mass_easy_box_mhv}
&
\textrm{k} \ge 0
&
\mathcal{L} = (i\,j)
\\
(b)
&
\includegraphics{./figures/osd_two_mass_easy_box_n2mhv}
&
\textrm{k} \ge 2
&
\mathcal{L} = \overline{i} \cap \overline{j}
\\
\end{tabular}
\caption[Two colorings of the two-mass easy box.]{%
Two colorings of the two-mass easy box.
Row (a) shows the MHV coloring and momentum twistor solution to the
on-shell conditions,
and row (b) shows the same for the $\nkmhv{2}$ solution.
}
\label{tab:two-mass-easy-colorings}
\end{table}
\begin{table}
\centering
\begin{tabular}
{
>{\centering\arraybackslash} m{0.05\textwidth}
>{\centering\arraybackslash} m{0.2\textwidth}
>{$}c<{$}
>{$}c<{$}
}
& Coloring & \nkmhv{\textrm{k}} & \textrm{Twistor Solution}
\\
\hline \hline
(a)
&
\includegraphics{./figures/osd_three_mass_box_nmhv}
&
\textrm{k} \ge 1
&
\mathcal{L} = (i\,j\,j{+}1) \cap (i\,k\,k{+}1)
\\
(b)
&
\includegraphics{./figures/osd_three_mass_box_n2mhv}
&
\textrm{k} \ge 2
&
\begin{array}{c}
\mathcal{L}= (A\,B) \\
A = (j\,j{+}1) \cap \overline{i} \\
B = (k\,k{+}1) \cap \overline{i}
\end{array}
\\
\end{tabular}
\caption[Two colorings of the three-mass box.]{%
Two colorings of the three-mass box.
Row (a) shows the NMHV coloring and
momentum twistor solution to the on-shell conditions,
and row (b) shows the same for the $\nkmhv{2}$ solution.
}
\label{tab:three-mass-box-colorings}
\end{table}
\subsubsection*{One-loop Three-mass Box}
We perform the same exercise for the three-mass box on-shell
conditions
\begin{align}
\langle \mathcal{L}\, i{-}1\,i\rangle =
\langle \mathcal{L}\,i\,i{+}1\rangle =
\langle \mathcal{L}\,j\,j{+}1\rangle =
\langle \mathcal{L}\,k\,k{+}1\rangle = 0\,.
\end{align}
The
solutions
of the on-shell conditions
are matched to the two on-shell diagram colorings
in~\tabRef{three-mass-box-colorings},
and the corresponding
minimum Grassmann weights are computed using~\eqnRef{minimum-grassmann}.
The colorings are also directly calculable from the momentum twistor
solutions as in the previous two-mass easy box example.
The three-mass box is worth pointing out because in this
case neither coloring is MHV, in contrast to the previous
example.
\subsubsection*{Two-loop Pentagon-box}
We can recycle our knowledge of one-loop solutions to determine the helicity
sectors to which a given two-loop Landau diagram contributes its singularities.
We consider the pentagon-box of~\tabRef{nmhv-results}(a) as an exemplar.
We solve the pentagon-box on-shell conditions as follows.
We first solve the subsystem of four propagators that depend
on only $\mathcal{L}^{(2)}$:
\begin{align}
\label{eqn:three-mass-color-cut}
\langle \mathcal{L}^{(2)} \, i{-}1 \, i \rangle =
\langle \mathcal{L}^{(2)} \, i \, i{+}1 \rangle =
\langle \mathcal{L}^{(2)} \, k \, k{+}1 \rangle =
\langle \mathcal{L}^{(2)} \, k' \, k'{+}1 \rangle = 0
\end{align}
using either of the two three-mass box solutions
shown in~\tabRef{three-mass-box-colorings}, after an appropriate
exchange of the external labels in order to
match to~\eqnRef{three-mass-color-cut}.
This means there are two branches of colorings: one where the
trivalent node at $i$ is white,
and one where it is black. The two corresponding
solutions $\mathcal{L}^{(2)}_*$ are shown in the first row
of~\tabRef{pentagon-box-colorings}.
For each choice of $\mathcal{L}^{(2)}_*$ we then solve the
remaining four on-shell conditions
\begin{align}
\label{eqn:two-mass-easy-color-cut}
\langle \mathcal{L}^{(1)} \, i \, i{+}1 \rangle =
\langle \mathcal{L}^{(1)} \, j{-}1 \, j \rangle =
\langle \mathcal{L}^{(1)} \, j \, j{+}1 \rangle =
\langle \mathcal{L}^{(1)}\, \mathcal{L}^{(2)}_* \rangle = 0\,.
\end{align}
These four conditions
constitute a two-mass easy box problem, so we can
utilize~\tabRef{two-mass-easy-colorings}
to identify
the two solutions $\mathcal{L}_*^{(1)}$, which color the trivalent
nodes of the box either both white or both black.
These two solutions
are tabulated in the first column of~\tabRef{pentagon-box-colorings}.
Altogether the table shows a grid containing a total of four
distinct solutions, and the four associated distinct colorings.
From this analysis we conclude that only the solution
\begin{align}
\mathcal{L}^{(2)}_{*,1} =
(i\,k\,k{+}1) \cap (i\,k'\,k'{+}1)\,, \ \mathcal{L}^{(1)}_{*,1}=
(j\,i\,i{+}1) \cap (j\,\mathcal{L}_{*,1}^{(2)}) = (i \, j)
\label{eqn:nmhvsolution}
\end{align}
shown in the top left of~\tabRef{pentagon-box-colorings}
is relevant to the NMHV sector.
This means that
when we turn in the following section to the problem of finding
singularities of NMHV amplitudes by solving the Landau equations,
we can disregard the other three solutions.
Were we to attempt an amplituhedron-based answer to this same question,
we would find that the other
solutions to the on-shell conditions do not lie on a boundary
of $\mathcal{A}_{n,1,2}$.
\begin{table}
\centering
\begin{tabular}
{
>{$}c<{$}
|>{\centering\arraybackslash} m{0.33\textwidth}
|>{\centering\arraybackslash} m{0.33\textwidth}
}
\mathcal{L}^{(1)}_* \backslash \mathcal{L}^{(2)}_*
& $(i\,k\,k{+}1) \cap (i\,k'\,k'{+}1) $ &
$
((k\,k{+}1) \cap \overline{i}\, (k'\,k'{+}1) \cap \overline{i})
$
\\ \hline
(i\,j)
&
\includegraphics{./figures/osd_penta_box_www}
&
\includegraphics{./figures/osd_penta_box_bww}
\\[-12pt]
& $\textrm{k} \ge 1$ & $\textrm{k} \ge 2 $
\\ \hline
\bar{i} \cap \bar{j}
&
\includegraphics{./figures/osd_penta_box_wbb}
&
\includegraphics{./figures/osd_penta_box_bbb}
\\[-12pt]
& $\textrm{k} \ge 3$ & $ \textrm{k} \ge 4 $ \\
\end{tabular}
\caption[Four colorings of the two-loop pentagon-box.]{%
All permissible colorings of the
trivalent nodes of the
two-loop pentagon-box Landau diagram from~\tabRef{nmhv-results}(a).
The first row shows the two possible solutions
to the three-mass
on-shell conditions
(\eqnRef{three-mass-color-cut}) satisfied by
$\mathcal{L}^{(2)}$, the loop momentum in the pentagon.
The first column shows the two possible solutions
to the two-mass easy on-shell
conditions (\eqnRef{two-mass-easy-color-cut}) satisfied
by $\mathcal{L}^{(1)}$, the loop momentum in the box.
The cell at the intersection of a row and a column is the
colored Landau diagram
that results from the two solutions.
Also indicated in each cell is the minimum helicity sector
of the colored Landau diagram, which is achieved only if the gray
nodes are taken to be MHV.
}
\label{tab:pentagon-box-colorings}
\end{table}
\subsection*{General Two-Loop Pentagon-Boxes}
By using the same simple counting arguments applied
to the results in Tables~\ref{tab:mhv-results}--\ref{tab:n4mhv-results}, it
is a straightforward
exercise to show that
\begin{itemize}
\item the set of Landau diagrams corresponding to the maximal codimension boundaries of $\mathcal{A}_{n,\textrm{k},2}$ and
\item the set of on-shell diagrams of pentagon-box
topology that admit an $\nkmhv{\textrm{k}}$ coloring
\end{itemize}
are the same.
Specifically, the second set may be constructed by starting
with a pentagon-box diagram with no external edges or coloring,
then placing all possible combinations of massive and massless edges on
nodes of the diagram in all possible ways,
and finally enumerating all colorings of the resulting Landau diagrams to
identify the minimum possible value of $\textrm{k}$.
\section{Landau Singularities of Two-Loop NMHV Amplitudes}
\label{sec:nmhv-landau-analysis}
Finally we come to step 2 of the algorithm summarized
in Sec.~2.5 of~\cite{Prlina:2017azl}:
in order to determine the locations of Landau singularities of
the two-loop $\nkmhv{\textrm{k}}$ amplitude in SYM theory, we must identify,
for each $\mathcal{L}$-boundary of $\mathcal{A}_{n,\textrm{k},2}$
tabulated in~\secRef{presentation},
the codimension-one loci (if there are any) in $\Conf_n(\mathbb{P}^3)$
on which the corresponding Landau equations admit nontrivial solutions.
The ultimate aim of this project has been to derive (or at least to
conjecture)
symbol alphabets for two-loop amplitudes.
However, as discussed in Sec.~7 of~\cite{Prlina:2017azl},
guessing a symbol alphabet from a list of singularity loci
can require a nontrivial extrapolation.
At one loop the extrapolation is straightforward for all Landau diagrams
except the four-mass box.
\begin{figure}
\centering
\begin{tabular}
{
>{\centering\arraybackslash} m{0.3\textwidth}
>{\centering\arraybackslash} m{0.03\textwidth}
>{\centering\arraybackslash} m{0.3\textwidth}
}
\includegraphics{./figures/ld_four_mass_bubble_box}
&
$\sim$
&
\includegraphics{./figures/ld_four_mass_box}
\end{tabular}
\caption[Four-mass bubble-box is a four-mass box.]{%
The Landau equations of a Landau diagram containing a bubble are
identical to the equations of a Landau diagram with one propagator
of the bubble removed. The two-loop four-mass bubble-box on the left is the only
Landau diagram with a four-mass box contributing to the branch points of the
NMHV amplitude. It has the same well-known branch points as the one-loop four-mass box on the right.
}
\label{fig:bubble-box}
\end{figure}
At two loops, four-mass box subdiagrams
become prevalent starting at $\textrm{k} = 2$, where they appear in the maximal
codimension Landau diagram shown
in~\tabRef{nmhv-results}(e), as well as in many of the relaxations of
the other Landau diagrams in~\tabRef{nmhv-results}.
At $\textrm{k}=1$
there is a single four-mass bubble-box Landau diagram, \figRef{bubble-box},
relevant to two-loop NMHV amplitudes.
As shown in the Appendix of~\cite{Dennen:2016mdk}, Landau diagrams
containing bubble subdiagrams are
equivalent to the same diagram with one of the propagators of the bubble removed.
So we expect the one-loop four-mass box singularity to reappear
as a singularity of the two-loop NMHV amplitude.
Though we are only guaranteed from this analysis that the singularities
match, we can throw caution to the wind and conjecture that
the same symbol entries that appear in the one-loop four-mass box integral
appear also in two-loop NMHV amplitudes.
Of note here: the four-mass box has support starting at $\textrm{k}=2$,
so there is shared singularity structure between the two-loop $\nkmhv{}$ and one-loop $\nkmhv{2}$
amplitudes.
Having dealt with this single caveat,
we restrict our analysis to the remaining NMHV singularities, where
we may hope that our approach allows us to
read off symbol alphabets directly from lists of singularity loci.
\subsection{Computational Approaches}
Sec.~2.4 of~\cite{Prlina:2017azl} reviews the Landau equations
and Sec.~6 of that reference details the process of solving them
in several one-loop examples.
Beyond one loop, one approach for seeking
solutions is to perform the analysis ``one loop at a time'', by considering each
one-loop subdiagram and writing down the constraints on the values of other loop and external momenta
imposed by the on-shell and Kirchhoff conditions of the subdiagram.
After taking the union of those constraints, one may conclude that a solution
exists for generic external data,
or that a solution exists only when the
external data satisfy some set of equations.
Solutions of the former type were associated with
the infrared singularities of an amplitude in~\cite{Dennen:2015bet},
and solutions of the latter type indicate branch points of the amplitude
when they live on codimension-one loci in $\Conf_n(\mathbb{P}^3)$.
Here we recall a few basic facts about this loop-by-loop approach,
which has been
carried out for several cases in~\cite{Dennen:2015bet,
Dennen:2016mdk}.
First, as mentioned above, one edge of a bubble subdiagram
can always be removed without affecting
Landau analysis.
Second,
as shown in the Appendix of~\cite{Dennen:2016mdk},
a generic triangle subdiagram
has seven different branches of solutions that should be considered separately.
All of the solutions demand that
the squared sum of momenta on ``external'' edges attached to at
least one of the triangle's corners vanish, and the seven
branches of solutions are classified
according to the number of null corners%
\footnote{These corners can be read off as the factors of the Landau
singularity locus, for example in the rightmost column of Tab.~1,
branch (9), of~\cite{Prlina:2017azl}.}.
There are three branches of ``codimension-one'' solutions
(any one of the three corners vanishing),
three branches of ``codimension-two'' solutions (any two of the three corners vanishing),
and one branch of ``codimension-three'' solutions (all three corners vanishing).
In a Landau diagram analysis, it will often be the case that one of a triangle's corners
is null by fiat; in this case, the solution space will be reduced.
For example, a ``two-mass'' triangle subdiagram has only one codimension-one
solution. In the examples we detail in \secRef{two-loop-sample},
all triangle subdiagrams we describe are of this two-mass variety.
Finally, the Kirchhoff conditions associated to a box subdiagram constitute
four homogeneous equations on four Feynman parameters, so the
existence of nontrivial solutions requires the vanishing
of a certain four-by-four determinant called the \emph{Kirchhoff constraint}
for the box.
The Kirchhoff constraints for the four different cases
of box diagrams are summarized in Eqns.~(2.7)
through~(2.11) of~\cite{Dennen:2015bet}.
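The determinant criterion just stated is easy to illustrate numerically. The following sketch (a hypothetical illustration, not code from the cited references) treats the four homogeneous Kirchhoff conditions as a linear system $Q\alpha = 0$ on the four Feynman parameters and checks, via the smallest singular value, that a nontrivial solution exists exactly when the four-by-four determinant vanishes:

```python
import numpy as np

def has_nontrivial_solution(Q, tol=1e-10):
    """Check whether Q @ alpha = 0 admits a nonzero solution alpha,
    i.e. whether the 4x4 Kirchhoff determinant vanishes."""
    # The smallest singular value being ~0 signals a nontrivial nullspace.
    return np.linalg.svd(Q, compute_uv=False)[-1] < tol

rng = np.random.default_rng(0)

# Generic 4x4 system: det(Q) != 0, so only the trivial solution exists.
Q_generic = rng.standard_normal((4, 4))

# Degenerate system: force rank 3 by making the last row a linear
# combination of the others, so det(Q) = 0.
Q_degenerate = Q_generic.copy()
Q_degenerate[3] = Q_degenerate[0] + 2 * Q_degenerate[1] - Q_degenerate[2]

print(has_nontrivial_solution(Q_generic))     # False for generic data
print(has_nontrivial_solution(Q_degenerate))  # True when the determinant vanishes
```

In the Landau analysis the entries of $Q$ are four-brackets built from external and on-shell loop momenta, so $\det Q = 0$ becomes a constraint on the external data.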
It is worth noting one detail regarding the ``one loop at a time'' approach.
Because the method starts by enumerating the constraints imposed by the
existence of nontrivial solutions to the Landau equations of each subdiagram, it will miss
the solutions which set all Feynman parameters corresponding to some one-loop subdiagram to zero.
However, Landau singularities obtained this way will always be those already present at lower loop
order. So the ``one loop at a time'' approach neglects no novel branch points.
We comment on a specific example of this phenomenon in the next section.
Let us also describe a conceptually simpler but computationally less effective alternative approach
which we have used as a cross-check on our results.
For a given branch of solutions to a set of on-shell conditions, or
equivalently, for a given on-shell diagram, one can reduce the Landau
equations ``all at once'' to see whether they impose codimension-one
constraints on the external data.
This approach is of course usually feasible only with the aid of
a computer algebra system such as Mathematica.
It also lends itself well to numerical experimentation:
one can probe the presence or absence of a putative
singularity at some locus $a=0$ by generating random numeric
values for the external data except for one free parameter $z$,
and then reducing the Landau equations to see if the existence
of nontrivial solutions forces $z$ to take a value that sets
$a = 0$.
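A minimal sketch of this probing strategy, with hypothetical data (real momentum twistors rather than the actual kinematics, and a single bracket standing in for the reduced Landau conditions): the four-bracket $\langle abcd\rangle$ is a $4\times 4$ determinant, so if one twistor depends linearly on a free parameter $z$, the condition that a target bracket vanish pins $z$ to a single value, which can then be tested against the putative locus $a=0$.

```python
import numpy as np

def four_bracket(Za, Zb, Zc, Zd):
    """Momentum-twistor four-bracket <a b c d> as a 4x4 determinant."""
    return np.linalg.det(np.array([Za, Zb, Zc, Zd]))

rng = np.random.default_rng(1)

# Random numeric external data, except a one-parameter family Z_d(z).
Za, Zb, Zc, Zd0, Ze = (rng.standard_normal(4) for _ in range(5))
Z_d = lambda z: Zd0 + z * Ze

# <a b c d(z)> is linear in z, so demanding <a b c d(z)> = 0
# pins z to a single value z_star.
z_star = -four_bracket(Za, Zb, Zc, Zd0) / four_bracket(Za, Zb, Zc, Ze)

# At z = z_star the putative singularity locus is hit.
print(abs(four_bracket(Za, Zb, Zc, Z_d(z_star))))  # ~ 0
```

In practice the reduction of the full set of Landau equations is done symbolically (e.g. in Mathematica), and the numeric probe simply checks whether the surviving constraint forces $z$ onto the conjectured locus.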
Before proceeding to the examples and results,
let us address the question:
how do we confirm that we have detected all singularities?
Starting from the maximal codimension boundaries of the NMHV
amplituhedron
shown in~\tabRef{nmhv-results}, we determine all corresponding Landau diagrams
keeping in mind the ambiguity mentioned in~\secRef{closing}.
From there it is straightforward to produce all possible relaxed Landau
diagrams, from which we compute the
singularities using the ``one loop at a time'' approach outlined above.
Once we have a list of potential singularities, we turn to the
``all at once'' numerical probing. Doing so we directly confirm
on a diagram-by-diagram basis not only
that the set of singularities is correct, but also that there
are no additional singularities. We have performed these steps
to confirm the NMHV singularities presented in \secRef{symbol-alphabets}.
We will focus only on Landau diagrams that have minimally-NMHV coloring,
as defined in \secRef{on-shell-diagrams-review}, or equivalently,
diagrams that come from a boundary of a two-loop NMHV amplituhedron.
A priori, we cannot dismiss the possibility that a minimally-MHV diagram
may have
novel singularities coming from an NMHV branch of solutions, but
we have explicitly checked that this does not occur in the two-loop NMHV
amplitudes we consider here.
We will demonstrate our ``one loop at a time'' approach to solving
Landau equations on an example in the next section, and then proceed to
list the full set of singularities in~\secRef{symbol-alphabets}.
\subsection{A Sample Two-Loop Diagram}
\label{sec:two-loop-sample}
We now turn to the Landau analysis of the boundaries
displayed in~\tabRef{nmhv-results}.
The analysis is very similar to that of the many examples that have been
considered in~\cite{Dennen:2015bet,Dennen:2016mdk}, to which
we refer the reader for additional details.
Therefore we only carry out the analysis in detail
for the case of~\tabRef{nmhv-results}(a), and summarize all
of the results
in the following section.
At maximal codimension
the on-shell conditions
encapsulated in the Landau diagram of~\tabRef{nmhv-results}(a)
are shown in Eqns.~(\ref{eqn:three-mass-color-cut})
and~(\ref{eqn:two-mass-easy-color-cut}). These have a total of
four discrete solutions, as summarized in~\tabRef{pentagon-box-colorings},
but the only one relevant at NMHV order is the one displayed
in~\eqnRef{nmhvsolution}.
The Landau equations (specifically, the Kirchhoff constraint for
the box subdiagram
defined by~\eqnRef{two-mass-easy-color-cut}) admit a solution
only if~\cite{Dennen:2015bet}
\begin{align}
\langle j(j{-}1\,j{+}1)(i\,i{+}1)\, \mathcal{L}^{(2)}_*\rangle = 0\,.
\end{align}
Substituting in the lower-helicity solution $\mathcal{L}^{(2)}_{*,1}$
and simplifying turns the constraint into
\begin{align}
\langle i\, \overline{j}\rangle
\langle i \, (i{+}1 \, j) \, (k \, k{+}1) \, (k' \, k'{+}1) \rangle = 0\,.
\label{eqn:toreject}
\end{align}
Now we must address a subtlety of the result~(\ref{eqn:toreject})
that is analogous to the one encountered for the maximal
codimension MHV configuration under Eq.~(3.29)
of~\cite{Dennen:2016mdk}.
As in that case,
the eight-propagator Landau diagram under consideration here,
shown in~\tabRef{nmhv-results}(a), corresponds to a resolution
of a configuration that actually satisfies nine
on-shell conditions, as reviewed in~\secRef{resolutions}.
It was proposed in~\cite{Dennen:2016mdk} that we should trust
the resulting Landau analysis only to the extent that the eight on-shell
conditions imply the ninth for generic external data.
Let us note that if we put
$\langle \mathcal{L}^{(1)}\,\mathcal{L}^{(2)}\rangle=0$
aside for a moment, the NMHV solution to the seven other on-shell
conditions is
\begin{align}
\mathcal{L}^{(2)} = (i\,k\,k{+}1)\cap (i\,k'\,k'{+}1)\,, \qquad
\mathcal{L}^{(1)} = (\alpha Z_i + (1 - \alpha) Z_{i+1}, Z_j)\,,
\end{align}
from which we find
\begin{align}
\langle \mathcal{L}^{(1)}\,\mathcal{L}^{(2)}\rangle
= (1 - \alpha) \langle i(i{+}1\,j)(k\,k{+}1)(k'\,k'{+}1)\rangle\,.
\end{align}
Therefore the conclusion that $\alpha = 1$, and hence
that the ninth condition
$\langle \mathcal{L}^{(1)}\,i{-}1\,i\rangle = 0$
is also satisfied,
actually only follows if
$\langle i(i{+}1\,j)(k\,k{+}1)(k'\,k'{+}1)\rangle \ne 0$.
This observation casts doubt on
whether the second factor on the left-hand side of~\eqnRef{toreject} is a valid singularity.
However, note that from the on-shell diagram point of view there is no apparent reason why this singularity
should be excluded, since the diagram can be assigned a valid NMHV coloring
as shown in~\tabRef{pentagon-box-colorings}.
Absent a rigorous argument resolving the matter, we remain agnostic about the status
of this singularity.
It is easy to see that another
solution to the Landau equations with
$\mathcal{L}^{(1)} = (i\,j)$ and $\mathcal{L}^{(2)} = (i\,k\,k{+}1)
\cap (i\,k'\,k'{+}1)$ exists if the four Feynman parameters
associated to the box subdiagram are set to zero. In this case
the box completely decouples and the pentagon subdiagram reduces to
a three-mass box, so this branch exists if the external data
satisfy the corresponding Kirchhoff constraint
\begin{align}
\label{eqn:threemasskirkhoff}
\langle i(i{-}1\,i{+}1)(k\,k{+}1)(k'\,k'{+}1)\rangle = 0\,.
\end{align}
This illustrates the point highlighted in the previous
section that the ``one loop at a time'' approach can miss certain
solutions to the Landau equations associated entirely with
one-loop subdiagrams.
As mentioned, we are only seeking
new singularities, whereas~\eqnRef{threemasskirkhoff}
is already known from one loop.
\bigskip
\noindent
Next we move on to codimension seven.
There are four inequivalent relaxations, which we
now discuss in turn.
These relaxations result from collapsing any of the undotted
propagators of~\tabRef{nmhv-results}(a).
We list only the minimally-NMHV diagrams; see~\figRef{single-relaxations}.
\paragraph{Relaxing $\langle \mathcal{L}^{(2)}\,i{-}1\,i\rangle = 0$}
leads to a double-box Landau diagram, \figRef{single-relaxations}(a).
There are two Kirchhoff constraints (one per box), one of which is
easier to determine than the other.
The easier-to-find Kirchhoff constraint comes from the box formed of the $\mathcal{L}^{(1)}$-dependent
propagators (including the shared propagator). It reads
\begin{equation}
\label{eqn:3akirkhoff1}
\langle j \, (j{-}1 \, j{+}1) \, (i \, i{+}1) \, \mathcal{L}^{(2)}_{*} \rangle = 0 \,,
\end{equation}
where we write $\mathcal{L}^{(2)}_{*}$ to emphasize that the loop momentum is
on-shell when all Landau equations are satisfied.
The second Kirchhoff constraint is easiest to find after solving the three $\mathcal{L}^{(1)}$-dependent
on-shell conditions via $\mathcal{L}^{(1)}_{*,1} = (Z_j,B)$, with
$B = \alpha Z_i + (1-\alpha) Z_{i+1}$.
Using this form of $\mathcal{L}^{(1)}$ in the
$\mathcal{L}^{(2)}$-dependent propagators (including the shared one)
results in
\begin{equation}
\langle \mathcal{L}^{(2)} \, i \, B \rangle =
\langle \mathcal{L}^{(2)} \, k \, k{+}1 \rangle =
\langle \mathcal{L}^{(2)} \, k' \, k'{+}1 \rangle =
\langle \mathcal{L}^{(2)} \, j \, B \rangle = 0\,,
\end{equation}
which are now effectively the propagators of a three-mass box.
The second Kirchhoff constraint is therefore
\begin{equation}
\label{eqn:3akirkhoff2}
\langle B \, (i \, j) \, (k \, k{+}1) \, (k' \, k'{+}1) \rangle = 0 \,.
\end{equation}
Solving the remaining on-shell and Kirchhoff constraints
(recall that the three $\mathcal{L}^{(1)}$-dependent conditions were solved already)
fixes
\begin{align}
\mathcal{L}^{(2)}_{*,1} &= (A \, k \, k{+}1) \cap (A \, k' \, k'{+}1) \,, \qquad A = (i \, i{+}1) \cap \bar{j} \,, \\
B &= (i \, i{+}1) \cap (j \, \mathcal{L}^{(2)}_{*,1}) \,.
\end{align}
This constraint on $B$ turns~\eqnRef{3akirkhoff2} into a codimension-one constraint on the external data:
\begin{equation}
\langle A \, (i \, j) \, (k \, k{+}1) \, (k' \, k'{+}1) \rangle = 0 \,, \quad A = (i \, i{+}1) \cap \bar{j} \,,
\end{equation}
which is a new, genuinely two-loop, singularity.
\paragraph{Relaxing $\langle \mathcal{L}^{(1)}\,j{-}1\,j\rangle = 0$}
leads to a pentagon-triangle Landau diagram, \figRef{single-relaxations}(b).
There is a single codimension-one branch for the triangle subdiagram
since there is an on-shell line at one of its corners.
This branch leads to Landau equations with a solution locus
that is a Kirchhoff constraint of three-mass box type:
\begin{align}
\label{eqn:threeboxsingularity}
\langle i \, (i{-}1 \, i{+}1) \, (k\, k{+}1) \, (k'\, k'{+}1) \rangle = 0\,.
\end{align}
We do not focus on these already familiar singularities.
Following any codimension-two branch of the triangle subdiagram leads
to Landau singularities that exist only on
codimension-two loci in the space of external data, which are
not of interest to us.
Following the single codimension-three branch for the triangle leads to
a branch of solutions to the Landau equations that exists only if
\begin{align}
\label{eqn:pentasingularity}
\langle i \, (j\, j{+}1) \, (k\, k{+}1) \, (k'\, k'{+}1) \rangle = 0\,,
\end{align}
which is a new type of singularity.
\paragraph{Relaxing $\langle \mathcal{L}^{(1)}\,i\,i{+}1\rangle = 0$}
leads to a pentagon-triangle Landau diagram, \figRef{single-relaxations}(c).
There is again a single codimension-one branch for the triangle subdiagram
leading to an effective decoupling of the two loop momenta
and an overall Landau constraint of the same
form (up to relabeling) as~\eqnRef{threeboxsingularity}.
Following the codimension-two branches for the triangle subdiagram
uncovers constraints of codimension higher than one
on the external data, which cannot sensibly be associated with
branch points.
Following the codimension-three branch for the triangle subdiagram leads to the
same Landau singularity as in~\eqnRef{pentasingularity} (up to relabeling).
\begin{figure}
\centering
\begin{tabular}
{
>{\centering\arraybackslash} m{0.35\textwidth}
>{\centering\arraybackslash} m{0.28\textwidth}
>{\centering\arraybackslash} m{0.28\textwidth}
}
(a) & (b) & (c)
\\[-6pt]
\includegraphics{./figures/ld_n1mhv_r_iminus1_i.pdf}
&
\includegraphics{./figures/ld_n1mhv_r_jminus1_j.pdf}
&
\includegraphics{./figures/ld_n1mhv_r_l1_i_iplus1.pdf}
\\
$\langle \mathcal{L}^{(2)}\,i{-}1\,i\rangle \ne 0$
&
$\langle \mathcal{L}^{(1)}\,j{-}1\,j\rangle \ne 0$
&
$\langle \mathcal{L}^{(1)}\,i\,i{+}1\rangle \ne 0$
\end{tabular}
\caption[Single relaxations]{%
These are the unique single-relaxations of~\tabRef{nmhv-results}(a) that result in $\nkmhv{}$ Landau diagrams.
The computation of the associated Landau singularities is discussed in the text.
}
\label{fig:single-relaxations}
\end{figure}
\bigskip
\noindent
At codimension six there are three inequivalent relaxations,
shown in~\figRef{double-relaxations},
that do not reduce the Landau diagram to an MHV one.
Collapsing any of the undotted propagators of a box subdiagram in~\figRef{double-relaxations}
results in a minimally-MHV Landau diagram, as one of the external labels would
necessarily drop out.
Any additional relaxations of a propagator
in a triangle subdiagram of~\figRef{double-relaxations} will yield a bubble
subdiagram,
which cannot yield a new singularity as we have already emphasized.
\paragraph{Relaxing both $\langle \mathcal{L}^{(1)} \,i \, i{+}1 \rangle$ = $\langle \mathcal{L}^{(2)} \, i{-}1 \, i \rangle = 0$}
leads to a box-triangle Landau diagram, \figRef{double-relaxations}(a).
The single codimension-one branch of the triangle
leads to the effective decoupling of the two loops
and results in Landau singularities at Mandelstam-type loci:
\begin{align}
\langle i \, i{+}1 \, k \, k{+}1 \rangle \langle i \, i{+}1 \, k' \, k'{+}1 \rangle \langle k \, k{+}1 \, k' \, k'{+}1 \rangle = 0\,.
\end{align}
The same Landau singularities are obtained by following the codimension-two branches for the triangle.
Following the codimension-three branch for the triangle leads to the constraint
\begin{align}
\langle j \, (i \, i{+}1) \, (k \, k{+}1) \, (k' \, k'{+}1) \rangle = 0\,.
\end{align}
\paragraph{Relaxing both $\langle \mathcal{L}^{(2)} \, i{-}1 \, i \rangle = \langle \mathcal{L}^{(1)} \, j{-}1 \, j \rangle = 0$}
leads to a box-triangle Landau diagram, \figRef{double-relaxations}(b).
All branches of the triangle subdiagram
result in bubble-type singularities, $\langle a \, a{+}1 \, b \, b{+}1 \rangle$, or
higher codimension constraints.
\paragraph{Relaxing both $\langle \mathcal{L}^{(1)} \, i \, i{+}1 \rangle = \langle \mathcal{L}^{(1)} \, j{-}1 \, j \rangle = 0$}
leads to a pentagon-bubble Landau diagram, \figRef{double-relaxations}(c),
as discussed above and displayed in~\figRef{bubble-box}, with a singularity
on the locus
\begin{align}
\langle i \, (j \, j{+}1) \, (k \, k{+}1) \, (k' \, k'{+}1) \rangle = 0\,.
\end{align}
\begin{figure}
\centering
\begin{tabular}
{
>{\centering\arraybackslash} m{0.32\textwidth}
>{\centering\arraybackslash} m{0.32\textwidth}
>{\centering\arraybackslash} m{0.225\textwidth}
}
\\
(a) & (b) & (c)
\\[-6pt]
\includegraphics{./figures/ld_n1mhv_r_l1_i_ip1_l2_im1_i.pdf}
&
\includegraphics{./figures/ld_n1mhv_r_im1_i_jm1_j.pdf}
&
\includegraphics{./figures/ld_n1mhv_r_l1_i_ip1_jm1_j.pdf}
\\
$\langle \mathcal{L}^{(2)}\,i{-}1\,i\rangle \ne 0$
$\langle \mathcal{L}^{(1)}\,i\,i{+}1\rangle \ne 0$
&
$\langle \mathcal{L}^{(2)} \, i{-}1 \, i \rangle \ne 0$
$ \langle \mathcal{L}^{(1)} \, j{-}1 \, j \rangle \ne 0$
&
$\langle \mathcal{L}^{(1)}\,i\,i{+}1\rangle \ne 0$
$\langle \mathcal{L}^{(1)}\,j{-}1\,j\rangle \ne 0$
\\
\end{tabular}
\caption[Double relaxations]{%
These are the unique minimally NMHV relaxations of
the diagrams~\figRef{single-relaxations}.
As such, these are also
double-relaxations of~\tabRef{nmhv-results}(a).
Computing the associated singularities is discussed in the text.
Any further relaxations of triangles yield bubble subdiagrams.
In (c), relaxing either of $\langle \mathcal{L}^{(2)}\,i\,i{\pm}1\rangle= 0$ yields the four-mass bubble-box of~\figRef{bubble-box}.
}
\label{fig:double-relaxations}
\end{figure}
\paragraph{Relaxing both $\langle \mathcal{L}^{(1)} \, j{-}1 \, j \rangle = \langle \mathcal{L}^{(1)} \, j \, j{+}1 \rangle = 0$}
is displayed in \figRef{last-double-relaxation}.
This case is interesting because it emphasizes the interplay between on-shell diagrams and the amplituhedron.
From the on-shell diagram perspective, this diagram naively has a minimally MHV coloring,
\figRef{last-double-relaxation}(b).
However the graph moves that preserve on-shell functions
(particularly the ``collapse and re-expand'' and ``bubble deletion'' of Sec.~2.6 of \cite{ArkaniHamed:2012nw})
permit redrawing the coloring as a three-mass box on-shell diagram \figRef{last-double-relaxation}(c),
colored in its minimal helicity manner, $\textrm{k} \ge 1$.
Since the graph moves preserve the on-shell function,
the original on-shell diagram must also be minimally NMHV.
It is straightforward to check that the momentum twistor solution
corresponding to this minimal coloring \figRef{last-double-relaxation}(a)
is in fact a boundary of an NMHV amplituhedron, not an MHV one,
and so the on-shell diagram and amplituhedron perspectives align.
For the two-loop amplitude, this diagram does not contribute new
possible branch points, but this phenomenon is something to keep
in mind for future studies.
\begin{figure}
\centering
\begin{tabular}
{
>{\centering\arraybackslash} m{0.3\textwidth}
>{\centering\arraybackslash} m{0.3\textwidth}
>{\centering\arraybackslash} m{0.3\textwidth}
}
\\
(a) & (b) & (c)
\\[-6pt]
\includegraphics{./figures/ld_n1mhv_r_secretly.pdf}
&
\includegraphics{./figures/ld_n1mhv_r_secretly_ww.pdf}
&
\includegraphics{./figures/ld_n1mhv_r_secretly_tmb.pdf}
\\
$\langle \mathcal{L}^{(1)}\,j{-}1\,j\rangle \ne 0$
$\langle \mathcal{L}^{(1)}\,j\,j{+}1\rangle \ne 0$
&
Minimal Coloring
&
After Graph Moves
\\
\end{tabular}
\caption[Last NMHV double relaxation.]{%
The Landau diagram (a) appears to have a minimally MHV coloring (b).
Yet the corresponding on-shell function is related by the on-shell
diagram moves of \cite{ArkaniHamed:2012nw} to one in the NMHV helicity sector (c).
}
\label{fig:last-double-relaxation}
\end{figure}
\bigskip
\noindent
There are no new NMHV triple relaxations,
but we revisit a case discussed earlier to show how it naturally
arises in this organizational scheme.
\paragraph{Relaxing all of
$\langle \mathcal{L}^{(2)} \, i{-}1 \, i \rangle =
\langle \mathcal{L}^{(1)} \, i \, i{+}1 \rangle =
\langle \mathcal{L}^{(1)} \, j{-}1 \, j \rangle = 0$}
leads to the bubble-box Landau diagram discussed above and displayed in~\figRef{bubble-box}.
As mentioned above, this does not contribute a new two-loop singularity
but it does indicate that two-loop NMHV amplitudes inherit
the four-mass box singularity that appears at one loop only
starting at $\textrm{k}=2$.
Our analysis indicates this is a fairly common phenomenon:
Landau diagrams for an $L$-loop $\nkmhv{\textrm{k}}$ amplitude
that contain bubble or triangle subdiagrams will often
contain singularities that also contribute
to $(L-1)$-loop $\nkmhv{\textrm{k}+1}$ amplitudes.
\subsection{Two-Loop NMHV Symbol Alphabets}
\label{sec:symbol-alphabets}
The full set of loci in the external kinematic space
$\Conf_n(\mathbb{P}^3)$ where two-loop NMHV amplitudes have
Landau singularities is obtained by carrying out the analysis
of the previous section for all Landau diagrams appearing
in Tabs.~\ref{tab:mhv-results} and~\ref{tab:nmhv-results},
together with all of their (still NMHV) relaxations.
Among the set of singularities generated in this way are the two-loop MHV singularities
that arise from the configuration shown in~\tabRef{mhv-results},
which live on the loci
\begin{align} \begin{split}
\langle a \, a{+}1\, b\, c \rangle = 0\,, \\
\langle a \, a{+}1\, \overline{b} \cap \overline{c} \rangle = 0\,,
\label{eqn:twoloopalphabet}
\end{split} \end{align}
for arbitrary indices $a, b, c$.
The set of brackets appearing on the left-hand sides
of~\eqnRef{twoloopalphabet} correspond exactly to the set
of symbol letters of two-loop MHV amplitudes originally
found in~\cite{CaronHuot:2011ky}.
For the NMHV configurations shown in~\tabRef{nmhv-results}
we find additional singularities that live on loci of the form\footnote{Out
of caution we have included on the first line the singularities of
the type shown in~\eqnRef{toreject}, but we remind the reader of the
discussion following that equation; for $n=8$ it happens that the first line is
necessarily a particular case of the second and/or fourth, so there is no
controversy.}
\begin{align}
\begin{split}
\langle i\,(i{\pm}1\,\ell)(j\,j{+}1)(k\,k{+}1)\rangle &= 0\,,\\
\langle j \, (j{-}1\, j{+}1) \, (j'\, j'{+}1) \, (i\, \ell) \rangle & =0 \,, \\
\langle i\,(j\,j{+}1)(k\,k{+}1)(\ell\,\ell{+}1)\rangle & = 0 \,, \\
\langle i\,i{+}1\, \overline{j} \cap (k \, k' \, k'{+}1) \rangle &= 0 \,, \\
\langle \overline{i} \cap (i\,i'\, i'{+}1) \, \cap \, \overline{j} \cap (j\,j'\,j'{+}1) \rangle &= 0 \,, \\
\llangle (i\,i{+}1) \cap \overline{j};(i\,j)(k\,k{+}1)(\ell\,\ell{+}1)\rrangle & = 0 \,,
\end{split}
\label{eqn:nmhvsymbolalphabets}
\end{align}
using notation explained in Appendix~\ref{sec:notation}.
The indices are restricted (as a consequence of planarity)
to have the cyclic
ordering $\ell \le \{ i, i'\} \le \{ j, j' \} \le \{k, k'\} \le \ell$
(or the reflection of this, with all $\le$'s replaced by $\ge$'s)
where the curly bracket notation means that the relative ordering of an index
with its primed partner is not fixed (tracing back to the
ambiguity discussed in~\secRef{closing}).
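For readers without the notation appendix at hand, the composite bracket $\langle a\,(b\,c)(d\,e)(f\,g)\rangle$ appearing above has a standard expansion in ordinary four-brackets, $\langle a(bc)(de)(fg)\rangle = \langle abde\rangle\langle acfg\rangle - \langle abfg\rangle\langle acde\rangle$ (we assume this common convention; overall signs may differ from the appendix). A quick numerical sanity check of two of its defining properties, antisymmetry under exchange of the last two lines and vanishing when they coincide:

```python
import numpy as np

def br4(a, b, c, d):
    """Four-bracket <a b c d> of momentum twistors as a determinant."""
    return np.linalg.det(np.array([a, b, c, d]))

def br_a_bc_de_fg(a, b, c, d, e, f, g):
    """<a (b c)(d e)(f g)> expanded in four-brackets
    (convention assumed; signs may differ from the appendix)."""
    return br4(a, b, d, e) * br4(a, c, f, g) - br4(a, b, f, g) * br4(a, c, d, e)

rng = np.random.default_rng(2)
a, b, c, d, e, f, g = (rng.standard_normal(4) for _ in range(7))

# Antisymmetry under exchanging the lines (d e) <-> (f g), and
# vanishing when the two lines coincide:
lhs = br_a_bc_de_fg(a, b, c, d, e, f, g)
swapped = br_a_bc_de_fg(a, b, c, f, g, d, e)
print(abs(lhs + swapped))                        # ~ 0
print(abs(br_a_bc_de_fg(a, b, c, d, e, d, e)))  # ~ 0
```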
In addition to singularities of the type listed
in~\eqnRef{nmhvsymbolalphabets}, two-loop NMHV amplitudes
also have four-mass box singularities as discussed in the
beginning of~\secRef{nmhv-landau-analysis} and illustrated
in~\figRef{bubble-box}.
Although guessing symbol letters from knowledge of singularity
loci is in general nontrivial (see Sec.~7
of~\cite{Prlina:2017azl}),
we conjecture that
the quantities appearing on the left-hand sides
of Eqns.~(\ref{eqn:twoloopalphabet}) and~(\ref{eqn:nmhvsymbolalphabets}),
together with appropriate symbol letters of four-mass box type (see
the example in the following section),
constitute the symbol alphabet of two-loop NMHV amplitudes in
SYM theory.
It is to be understood that all degenerations
of the indicated forms are meant to be included as well, for example
taking $j=j'-1$ in the first line. For certain values of
some indices the expressions can degenerate into symbol letters
(or products of symbol letters) that already
appear in~\eqnRef{twoloopalphabet}, or
elsewhere in~\eqnRef{nmhvsymbolalphabets}, but other degenerate cases are
valid, new NMHV letters.
It is interesting to note that for arbitrary $n$ the conjectural set of
symbol letters in~\eqnRef{nmhvsymbolalphabets}
is not closed under parity, unlike the
two in~\eqnRef{twoloopalphabet} which are parity conjugates of each
other\footnote{More precisely, the parity conjugate of the first
quantity in~\eqnRef{twoloopalphabet}
is $\langle a{-}1\,a\,a{+}1\,a{+}2\rangle$ times the second;
they become exactly parity conjugate in a gauge where the
momentum twistors are scaled so that all four-brackets of
four adjacent indices are set to 1.}.
We know of no a priori reason why the symbol alphabet for a given
amplitude in SYM theory should be closed under parity;
in principle,
the parity symmetry of the theory requires
only that the symbol alphabet of $\nkmhv{\textrm{k}}$
amplitudes be the parity conjugate of the symbol alphabet
of $\nkmhv{n-\textrm{k}-4}$ amplitudes.
The absence of parity symmetry is a simple consequence of the fact that
different branches of solutions to the Landau equations give non-zero
support to amplitudes in different helicity sectors (or,
equivalently, overlap boundaries of amplituhedra in different helicity sectors).
From this point of view it appears to be an accident that the
two-loop MHV symbol alphabet is closed under parity; we guess
that this will continue to hold at arbitrary loop order.
It is also an interesting consistency check that for $n < 8$ the symbol
letters in~\eqnRef{nmhvsymbolalphabets} necessarily degenerate
into letters of the type already present at MHV order.
This is consistent with all results available to date from
the hexagon and heptagon amplitude bootstrap programs, which are
based on the hypothesis that the symbol alphabet for all amplitudes
with $n < 8$ is given by~\eqnRef{twoloopalphabet} to all loop orders.
Genuinely new NMHV letters begin to appear only starting at $n=8$,
to which we now turn our attention.
\subsection{Eight-point Example}
For the sake of illustration let us conclude by explicitly
enumerating our conjecture for the two-loop NMHV symbol
alphabet for the case $n=8$. First let us recall that
the corresponding MHV symbol alphabet~\cite{CaronHuot:2011ky}
comprises 116 letters:
\begin{itemize}
\item 68 four-brackets of the form
$\langle a\, a{+}1\, b\, c\rangle$ (there are altogether
$\binom{8}{4}=70$ four-brackets
of the more general form
$\langle a\, b\, c\, d\rangle$, but at $n=8$
both
$\langle 1\, 3\, 5\, 7\rangle$ and
$\langle 2\, 4\, 6\, 8\rangle$ are excluded by the requirement
that at least one pair of indices must be adjacent),
\item 8 cyclic images
of $\langle 1\,2\,\overline{4} \cap \overline{6}\rangle$,
\item
and 40 degenerate cases of $\langle a\, a{+}1\, \overline{b} \cap
\overline{c}\rangle$ consisting of
8 cyclic images each of
$\langle 1\,(2\,3)(4\,5)(7\,8)\rangle$,
$\langle 1\,(2\,3)(5\,6)(7\,8)\rangle$,
$\langle 1\,(2\,8)(3\,4)(5\,6)\rangle$,
$\langle 1\,(2\,8)(3\,4)(6\,7)\rangle$,
as well as
$\langle 1\,(2\,8)(4\,5)(6\,7)\rangle$.
\end{itemize}
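The counting of 68 four-brackets can be cross-checked by a short enumeration (our own check, not part of the original text): among the 70 four-element subsets of $\{1,\ldots,8\}$, exactly two lack a cyclically adjacent pair of indices.

```python
from itertools import combinations

n = 8

def has_adjacent_pair(indices):
    # cyclic adjacency mod n, with n and 1 counted as adjacent
    s = set(indices)
    return any((a % n) + 1 in s for a in s)

all_brackets = list(combinations(range(1, n + 1), 4))
kept = [b for b in all_brackets if has_adjacent_pair(b)]
dropped = [b for b in all_brackets if not has_adjacent_pair(b)]

print(len(all_brackets), len(kept))  # 70 68
print(dropped)                       # [(1, 3, 5, 7), (2, 4, 6, 8)]
```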
Referring the reader again to Appendix~\ref{sec:notation} for
details on our notation,
we conjecture that an additional 88 letters appear
in the symbol alphabet of the two-loop $n=8$ NMHV amplitude\footnote{The
$116+88=204$ symbol letters of this amplitude can
be assembled into
$204-8=196$ dual conformally invariant
cross-ratios in many different ways. We
cannot \emph{a priori} rule out the possibility that the symbol
of this amplitude might be expressible in terms of an even smaller
set of carefully chosen multiplicatively independent
cross-ratios, though this type of reduction is not possible
in any known six- or seven-point examples.}
\begin{itemize}
\item 48 degenerate cases consisting of 16 dihedral images
each of
$\langle 1\,(2\,3)(4\,5)(6\,7) \rangle$,
$\langle 1\,(2\,3)(4\,5)(6\,8) \rangle$,
as well as
$\langle 1\,(2\,8)(3\,4)(5\,7) \rangle$,
\item
8 cyclic images of
$\langle \overline{2} \cap (2\,4\,5) \cap \overline{8} \cap (8\,5\,6)\rangle$
(this set is closed under reflections, so adding all dihedral
images would be overcounting),
\item the 8 distinct dihedral images of
$\langle \overline{2} \cap (2\,4\,5) \cap \overline{6} \cap (6\,8\,1)\rangle$
(which is distinct from its reflection but comes back to itself
after cycling the indices by four),
\item 16 dihedral images of
$\llangle (1\,2)\cap\overline{4};(1\,4)(5\,6)(7\,8)\rrangle$,
\item and finally 8
four-mass box-type letters.
\end{itemize}
The last of these were displayed in Eq.~(7.1)
of~\cite{Prlina:2017azl} and take the form
\begin{align}
f_{i\ell}f_{jk} \pm( f_{ik} f_{j\ell}- f_{ij}
f_{k\ell}) \pm \sqrt{(f_{ij} f_{k\ell} - f_{ik} f_{j\ell} + f_{i\ell} f_{jk})^2
-4 f_{ij} f_{jk} f_{k\ell} f_{i\ell}}\,,
\end{align}
where $f_{ij} \equiv \langle i\, i{+}1\, j\, j{+}1\rangle$
and the signs may be chosen independently.
For $n=8$ there are two inequivalent
choices $\{i,j,k,\ell\} = \{1,3,5,7\}$ or
$\{2,4,6,8\}$, for a total of eight possible symbol letters of this type.
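As a side observation (our own sympy sketch, with $f_{ab}$ here standing for $f_{i_a i_b}$ in one of the two index sets), the four sign choices pair up multiplicatively: the two letters differing only in the sign of the square root multiply to a monomial in the $f_{ij}$'s.

```python
import sympy as sp

# fab stands for f_{i_a i_b} with (i_1,...,i_4) one of the sets {1,3,5,7} or {2,4,6,8}
f12, f13, f14, f23, f24, f34 = sp.symbols('f12 f13 f14 f23 f24 f34')

# discriminant under the square root of the four-mass box letter
Delta = (f12*f34 - f13*f24 + f14*f23)**2 - 4*f12*f23*f34*f14
letters = [f14*f23 + s1*(f13*f24 - f12*f34) + s2*sp.sqrt(Delta)
           for s1 in (1, -1) for s2 in (1, -1)]
assert len(letters) == 4   # times the two index sets -> 8 letters at n = 8

# flipping the sign of the square root pairs the letters multiplicatively:
assert sp.expand(letters[0]*letters[1] - 4*f13*f24*f14*f23) == 0
assert sp.expand(letters[2]*letters[3] - 4*f12*f23*f34*f14) == 0
```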
\section{Conclusion}
The symbol alphabets for all two-loop MHV amplitudes in SYM theory
were first found in~\cite{CaronHuot:2011ky}.
In~\cite{Dixon:2011nj,CaronHuot:2011kk}
it was found that two-loop NMHV amplitudes have the same
symbol alphabets as the corresponding MHV amplitudes
for $n=6, 7$, which is now believed to be true to all loop orders.
However, the question of whether two-loop NMHV amplitudes
for $n>7$ have the same symbol alphabets as their MHV
cousins has remained open.
In this paper we find that the former have branch points
(of the type shown in~\eqnRef{nmhvsymbolalphabets})
not shared by the latter, answering this question in the negative.
Our conjectures for the two-loop NMHV symbol alphabets
are formulated
in terms of quantities analogous to the
cluster $\mathcal{A}$-coordinates of~\cite{Golden:2013xva},
although it is simple to confirm that at least some of them are not
cluster coordinates of the $\Gr(4,n)$ cluster algebra (it is possible
that none of them are, but some of them are more difficult to check).
For the purpose of carrying out the amplitude bootstrap, it is
however more convenient to assemble these letters into
dual conformally invariant cross-ratios.
In the literature considerable effort (see
for example~\cite{Golden:2013lha,Golden:2014xqa,Golden:2014pua,Harrington:2015bdt,Drummond:2017ssj})
has gone into divining deep mathematical structure of
amplitudes hidden in the particular kinds of cross-ratios
that might appear, especially when they can be taken to
cluster $\mathcal{X}$-coordinates (or Fock-Goncharov coordinates)
of the type reviewed in~\cite{Golden:2013xva}.
However, we see no hint in the Landau analysis
or inherent to the twistor or on-shell diagrams employed
in this paper that suggests any preferred way of
building such cross-ratios.
It is inherent in the approach taken here
following~\cite{Dennen:2016mdk,Prlina:2017azl} (as well as in the
amplitude bootstrap program itself) that we eschew
knowledge of or interest in explicit representations of
amplitudes in terms of local Feynman integrals.
However, as mentioned in the conclusion of~\cite{Prlina:2017azl},
the procedure of identifying relevant boundaries of amplituhedra
and then solving the Landau equations associated to each one
as if it literally represented some Feynman integral
is suggestive that this approach might be thought of as naturally
generating integrand expansions
around the highest codimension amplituhedron boundaries\footnote{We are
grateful to N.~Arkani-Hamed for extensive discussions on this point.}.
This approach might lead to a resolution of the controversy
regarding the status of Landau singularities
of the type~\eqnRef{toreject}
obtained from maximal codimension boundaries.
This analysis is, however, beyond the scope
of our paper and we remain agnostic about the status of this branch point in anticipation of
empirical data.
If this singularity is shown to be spurious, this would be an interesting result not easily explainable using on-shell diagram techniques, and it would signal that boundaries of amplituhedra contain more information waiting to be explored.
These observations highlight a point that we
have emphasized several times in this
paper and the prequel~\cite{Prlina:2017azl}.
Namely, several threads in this tapestry, including
the connection to on-shell diagrams
reviewed in~\secRef{on-shell-diagrams}
and the simple relation between twistor diagrams
and Landau diagrams in Appendix~\ref{sec:twistor-to-landau},
do not inherently rely on planarity.
This hints at the tantalizing possibility that
some of our toolbox may be useful for studying non-planar
amplitudes about which much less is known
(see~\cite{Arkani-Hamed:2014via,Bern:2014kca,Bern:2015ple}).
One of the stronger hints
--- the relationship between on-shell diagrams and Landau diagrams ---
also aids in corroborating results.
A vanishing on-shell diagram indicates a
location where the analytic structure of an amplitude is trivial;
that is exactly the same information encoded by the boundaries of the amplituhedron.
The simple connection between the results tabulated in~\secRef{presentation}
and those obtained via the
on-shell diagram approach provides an important cross-check
supporting the validity of our analysis, as well as giving
additional corroboration to the definition of amplituhedra.
\acknowledgments
We have benefited greatly from
very stimulating discussions with N.~Arkani-Hamed, L.~Dixon
and J.~Bourjaily, and from
collaboration with A.~Volovich in the early stages of this work.
This work was supported in part by: the US Department of Energy under
contract DE-SC0010010 Task A,
Simons Investigator Award \#376208 of
A.~Volovich (JS),
the Simons Fellowship Program in Theoretical Physics (MS),
the National Science Foundation under Grant No. NSF PHY-1125915 (JS),
and the Munich Institute for Astro- and Particle
Physics (MIAPP) of the DFG cluster of excellence ``Origin and Structure
of the Universe'' (JS).
MS is also grateful to the CERN theory group for hospitality
and support during the course of this work.
\section{Introduction}
Recent years have witnessed enormous progress in unravelling hidden structures and especially relations for scattering amplitudes in QFT (c.f.~\cite{Elvang:2013cua}).
For example, scattering amplitudes of gluons, gravitons, and Goldstone particles {\it etc.} are closely related to each other via a web of deep relations; perhaps the most famous ones are the double-copy relations (see the review~\cite{Bern:2019prr} and references therein), originally discovered as the field-theory limit~\cite{Bjerrum-Bohr:2009ulz} of tree-level Kawai-Lewellen-Tye (KLT) relations in string theory~\cite{Kawai:1985xq} and extended to loop level by Bern-Carrasco-Johansson (BCJ) via color-kinematics duality in QFT~\cite{Bern:2008qj,Bern:2010ue}. Such tree-level double-copy relations have been extended to a large class of theories including EFTs, {\it e.g.} using Cachazo-He-Yuan (CHY) formulae~\cite{CHY1,CHY2,CHY3,CHY5,CHY4}, where they are naturally interpreted as a ``direct product'' operation for amplitudes in various theories. Another natural operation can be called a ``direct sum'', which produces mixed amplitudes of {\it e.g.} gravitons and gluons in Einstein-Yang-Mills theory (EYM) or those in Yang-Mills-scalar theory (YMS). More recently, a different kind of amplitude relations~\cite{Dong:2021qai}, dubbed ``universal expansions'', has been shown using the uniqueness theorem of~\cite{Arkani-Hamed:2016rak,Rodina:2016jyz,Rodina:2016mbk} to hold in all these theories. Originally discovered using CHY formulas~\cite{Lam:2016tlk,Fu:2017uzt,Du:2017kpo}, such relations expand {\it e.g.} a gravity (gluon) amplitude into a linear combination of EYM (YMS) amplitudes, and in a sense they interpolate between the two operations above.
On the other hand, as the simplest off-shell extension of amplitudes, form factors have also attracted increasing attention recently, since numerous structures have been uncovered especially at multi-loop level in ${\cal N}=4$ SYM theory ({\it c.f.}~\cite{Brandhuber:2012vm,Brandhuber:2014ica,Loebbert:2015ova,Loebbert:2016xkw,Brandhuber:2017bkg,Brandhuber:2018xzk,Sever:2020jjx,Dixon:2020bbt,Dixon:2021tdw,Dixon:2022rse,Sever:2021nsq,Sever:2021xga,Boels:2012ew,Yang:2016ear,Lin:2020dyj, Lin:2021kht,Lin:2021qol,Lin:2021lqo}).
Despite all these developments, it is fair to say that much less is known even for tree-level form factors in gauge theories in general dimensions. For example, while all-multiplicity BCJ expressions are known for tree amplitudes in any dimension~\cite{Fu:2017uzt,Teng:2017tbo,Du:2017kpo, Edison:2020ehu, He:2021lro, Cheung:2021zvb}, no such results or even evidence for CK duality has been found for $n$-point Yang-Mills form factors with {\it e.g.} ${\rm tr}(F^2)$ operator. To our best knowledge, such form factors in general dimension are still computed mainly using Feynman diagrams, which quickly get out of control. Relatedly, mainly due to the off-shellness of the operator in form factors, it is much more difficult to apply string-inspired methods such as CHY formulas to form factors (see~\cite{He:2016jdg,Brandhuber:2016xue,He:2016dol} for such a twistor-string formula in four dimensions). It is thus natural to ask if any of the above structures for gluon amplitudes can be found in form factors, and if one could connect form factors to amplitudes in some way.
In this paper, we make a first step in answering these questions by proposing a natural decomposition of length-two form factors into scattering amplitudes in Yang-Mills-scalar (YMS) theory~\cite{Chiodaroli:2014xia, CHY4}, which in turn provide an efficient way for computing form factors explicitly. Our main results can be divided into two parts. First, in sec.~\ref{sec:F2decomp} we present a nice decomposition of Yang-Mills form factors with operator ${\rm tr}(F^2)$ (which carries off-shell momentum $q$)
\begin{equation}
F_n^{\operatorname{tr}(F^2)}(1,\ldots,n)=\int {\rm d}^{D}x \ e^{iqx} \langle 0|\operatorname{tr}(F^2)|1^{g},\ldots,n^{g}\rangle\,,
\end{equation}
and it is very similar to the expansion of a Yang-Mills amplitude into a linear combination of YMS ones. More precisely, we show that $F_n^{{\rm tr}(F^2)}$ can be written as a linear combination of $F_n^{{\rm tr}(\phi^2)}$ with $r$ scalars and $n{-}r$ gluons for $r=2, 3, \cdots, n$, and the coefficients are given by Lorentz traces of the $r$ linearized field strengths.
Moreover, in sec.~\ref{sec:phi2decomp} we propose that these YMS form factors $F_n^{{\rm tr}(\phi^2)}$ in turn can be decomposed into a sum of $(n{+}1)$-point YMS amplitudes with one additional (off-shell) scalar leg. Note that the latter has $r{+}1$ bi-adjoint scalars
, and in principle such an expansion is not too surprising since their Feynman diagrams take similar form. To illustrate this point, let us consider the simplest ${\rm tr}(\phi^2)$ form factor, $F^{\rm tr(\phi^2)}_n (1^\phi, 2^g, \cdots, (n{-}1)^g, n^\phi)$ where the two scalars are chosen to be adjacent in the ordering. All contributing Feynman diagrams have a scalar line connecting $1$ and $n$, and it is obvious that the operator, or ``$q$ leg'', must be inserted via a $\phi^3$ vertex along the line. These Feynman diagrams are identical to all those for the color-ordered amplitude with $n{-}2$ gluons coupled to (adjacent) scalars, $1, n, n{+}1{=}q$, $A^{\rm YMS} (1^\phi, 2^g, \cdots, (n{-}1)^g, n^\phi, (n{+}1)^\phi)$. More precisely, since $q^2\neq 0$ for form factors but the YMS amplitudes are defined for massless momenta (in any dimension), the equality holds on the support of momentum conservation with the following prescription for the off-shell leg $q\to -\sum_{i=1}^n p_i$:
\begin{equation} \label{eq:prototype}
F_n^{{\rm tr}(\phi^2)} (1^\phi,\cdots, n^\phi)=A_{n{+}1}^{\rm YMS} (1^\phi, \cdots, n^\phi, q^{\phi})|_{q\to -\sum_{i=1}^n p_i}\,,
\end{equation}
where we have suppressed the gluons, and it is crucial to express both sides in terms of the on-shell momenta $p_1, \cdots, p_n$ only. Among other things, the identification with YMS amplitudes explains why such special form factors can be arranged to satisfy color-kinematics duality and double copy to interesting quantities for gravity~\cite{Lin:2021pne}. We will see that \eqref{eq:prototype} is a prototype for the new relations which express a general $n$-point form factor in terms of $(n{+}1)$-point amplitudes, and throughout the paper such identities are always understood with this prescription.
In sec.~\ref{sec:expand and CHY}, by combining these two types of expansions, we obtain $F_n^{{\rm tr}(F^2)}$ as a linear combination of $(n{+}1)$-point YMS amplitudes, with coefficients given in terms of traces of field strengths. As a consequence, we have CHY formulae for all these form factors via those for YMS amplitudes, which reveal even more unexpected simplicity and give the first example of a worldsheet formula for form factors in general dimension. Such formulae also provide a new method for explicitly computing all-multiplicity form factors (for both the ${\rm tr}(\phi^2)$ and ${\rm tr}(F^2)$ operators), which is much more efficient than Feynman diagrams.
\section{Decomposition of the ${\rm tr}(F^2)$ form factor into ${\rm tr}(\phi^2)$ ones} \label{sec:F2decomp}
Yang-Mills-scalar amplitudes and Yang-Mills amplitudes are closely related: physically, by ``pulling out'' polarizations, Yang-Mills amplitudes can be reduced to Yang-Mills-scalar amplitudes. The same turns out to be true for form factors. In this section, after first reviewing the amplitude relations, we establish a parallel relation between pure Yang-Mills form factors and Yang-Mills-scalar form factors.
Before writing down the exact relation, we review the definitions of the theory and of the amplitudes/form factors that we are concerned with.
We are considering the Yang-Mills-scalar theory with the Lagrangian \cite{Chiodaroli:2014xia}~\footnote{This theory can be obtained by coupling bi-adjoint $\phi^3$ with the dimensional reduction of Yang-Mills theory; in the literature the latter is sometimes denoted as YMS theory and the former generalized YMS theory, but we will only use YMS for the theory with coupling to bi-adjoint $\phi^3$.}:
\begin{equation}\label{eq:ymsL}
\begin{aligned}
\mathcal{L}^{\mathrm{YMS}{+}\phi^3}=&-\frac{1}{2} \operatorname{tr}_{\mathrm{C}}\left(D_{\mu} \Phi^{I} D^{\mu} \Phi^{I}\right)-\frac{1}{4} \operatorname{tr}_{\mathrm{C}}\left(F_{\mu \nu} F^{\mu \nu}\right)-\frac{g^{2}}{4} \operatorname{tr}_{\mathrm{C}}\left(\left[\Phi^{I}, \Phi^{J}\right]^{2}\right)\, \\
&-\frac{\lambda}{3 !} f_{I, J, K} f_{\tilde{I}, \tilde{J}, \tilde{K}} \Phi^{I, \tilde{I}} \Phi^{J, \tilde{J}} \Phi^{K, \tilde{K}} \,,
\end{aligned}
\end{equation}
where $\Phi^{I}=\sum_{\tilde{I}}\Phi^{I}_{\tilde{I}}T^{\tilde{I}}$ is the scalar field with flavor index $I$ and (a sum over) color index $\tilde{I}$, and $F^{\mu\nu}=F^{\mu\nu}_{\tilde{I}}T^{\tilde{I}}$ is the Yang-Mills field strength. Also, we denote the color and flavor traces by $\operatorname{tr}_{\rm C}$ and $\operatorname{tr}_{\rm FL}$ respectively.
Moreover, we are mostly interested in the single-trace double ordered amplitudes
\begin{equation}
A^{\rm YMS}(i_1\ldots i_r|1\ldots n):=\langle 0| 1^{g}\ldots i_1^{\phi} \ldots,j^{g} \ldots i_r^{\phi} \ldots n^{g}\rangle \Big|_{\operatorname{tr}_{\rm C}(T^{1}\cdots T^{n}),\operatorname{tr}_{\rm FL}(Y^{i_1}\cdots Y^{i_r})}\,,
\end{equation}
and similarly the single-trace double ordered form factors
\begin{align}
F^{\operatorname{tr}(\phi^2)}(i_1 \ldots i_r&|1 \ldots n)\\
\nonumber &=\int d^{D}x\ e^{\mathrm{i}q\cdot x} \langle 0| \mathcal{O}_{\Phi} | 1^{g} \ldots i_1^{\phi} \ldots j^{g}\ldots i_r^{\phi} \ldots n^{g}\rangle \Big|_{\operatorname{tr}_{\rm C}(T^{1}\cdots T^{n}) \operatorname{tr}_{\rm FL}(Y^{i_1}\cdots Y^{i_r})}\,.
\end{align}
Here the operator is defined as $\mathcal{O}_{\Phi}=\operatorname{tr}(\phi^2):=\sum_{I,\tilde{I}} \Phi^{I,\tilde{I}}\Phi_{I,\tilde{I}}$.
\subsection{Review of the decomposition for amplitudes}
The expansion of Yang-Mills amplitudes in terms of Yang-Mills-scalar amplitudes was originally proposed in \cite{Lam:2016tlk,Fu:2017uzt,Du:2017kpo} by studying the CHY representations of Yang-Mills and Yang-Mills-scalar amplitudes, and later categorized as a special case of a class of universal expansions \cite{Dong:2021qai}. The explicit formula is
\begin{equation}\label{eq:YMdecomp}
\hskip -3pt
A^{ \rm YM}(1,\ldots, n)=\sum_{r=0}^{n-2}\sum_{i_1<\ldots<i_r}\sum_{\sigma\in S_r} \mathrm{W}^{\rm f}_{1n}(\sigma(i_1, \ldots, i_r)) A^{ \rm YMS}(1,\sigma({i_1},\ldots,{i_r}), n|1,2\ldots, n)\,,
\end{equation}
where the ``open trace'' of the linearized field strength ${\rm f}_{i}^{\mu\nu}=p_i^{\mu}\epsilon_{i}^{\nu}-p_i^{\nu}\epsilon_{i}^{\mu}$ is defined as
\begin{equation}
\mathrm{W}^{\rm f}_{1n}(j_1,\ldots,j_r):= \epsilon_{1,\mu_1} (\mathrm{f}_{j_1})^{\mu_1}_{\mu_2}\cdots (\mathrm{f}_{j_r})^{\mu_r}_{\mu_{r+1}} \epsilon_n^{\mu_{r+1}}\,.
\end{equation}
To understand \eqref{eq:YMdecomp}, one should first notice that both particles $1$ and $n$ are special. For particles $2,\ldots,(n-1)$, gauge invariance is manifest in \eqref{eq:YMdecomp}. The gauge invariance of particle $1$ (or $n$), however, is not obvious. In fact, according to \cite{Arkani-Hamed:2016rak}, if we express Yang-Mills amplitudes in a local form\footnote{In this paper the terminology ``local form'' refers only to an expansion in cubic diagrams. In the original argument, the authors of \cite{Arkani-Hamed:2016rak} show that one can start from a more general ansatz.}, as is the case in \eqref{eq:YMdecomp}, it is only possible to manifest the gauge invariance of $(n-2)$ particles.
Furthermore, as proven in \cite{Dong:2021qai}, requiring the gauge invariance of $(n-2)$ particles already produces the $\mathrm{W}_{1n}^{\rm f}$ factor, yet one cannot conclude that the gauge-invariant block accompanying $\mathrm{W}_{1n}^{\rm f}$ is indeed a YMS amplitude. The next step is to use the gauge invariance of one more particle, say the first, which uniquely fixes the Yang-Mills amplitude as well as its expansion \eqref{eq:YMdecomp}. Note that to make this gauge invariance manifest, we have to combine all terms of a local expression such as \eqref{eq:YMdecomp} and reach a ``non-local'' form with more poles in the denominator than any cubic diagram. For instance, for the four-gluon amplitude we have (see \cite{Green:1987sp}; we use the notation of \cite{Bern:2017tuc})
\begin{equation}
A_4^{ \rm YM}=\frac{T_8}{st}, \text{ with } T_{8}=\frac{1}{2}\left(4\operatorname{tr}^{\rm f}(1,2,3,4)-\operatorname{tr}^{\rm f}(1,2)\operatorname{tr}^{\rm f}(3,4)\right)+ \text{cyclic}(2,3,4),
\end{equation}
which trivializes the gauge invariance of all polarizations. However, there are two Mandelstam invariants in the denominator, rather than the single one appearing in four-point cubic diagrams.
It is also helpful to understand the expansion \eqref{eq:YMdecomp} from the perspective of transmuted operators~\cite{Cheung:2017ems}. By taking the following (ordered) ``open-trace'' derivatives $\widetilde{\mathcal{D}}_{(i_1,i_2,\ldots, i_{r{-}1},i_r)}$
\begin{equation}
\widetilde{\mathcal{D}}_{(i_1,i_2,\ldots, i_{r{-}1},i_r)}:= \partial_{({\color{red}\bm \epsilon_{i_{1}}}\cdot p_{i_2})}\partial_{(\epsilon_{i_2}\cdot p_{i_3})}\cdots \partial_{(\epsilon_{i_{r{-}1}}\cdot {\color{red}\bm \epsilon_{i_{r}}})}
\end{equation}
with the property
\begin{equation}
\begin{aligned}
&\widetilde{\mathcal{D}}_{(1,i_1, \ldots, i_r,n)} \mathrm{W}_{1n}(j_1, \ldots, j_r)=1 \Leftrightarrow (i_1, \ldots, i_r)=(j_1, \ldots, j_r)\,,\\
&\widetilde{\mathcal{D}}_{(1,i_1, \ldots, i_r,n)} \mathrm{W}_{1n}(j_1, \ldots, j_m)=0 \quad \text{otherwise}\,,
\end{aligned}
\end{equation}
one can extract the YMS amplitude from the YM one as
\begin{equation}\label{eq:YMderivative}
\widetilde{\mathcal{D}}_{(1,\sigma({i_1},\ldots,{i_r}),n)}A^{\rm YM}(1\ldots n)=A^{ \rm YMS}(1,\sigma({i_1},\ldots,{i_r}), n|1,2,\ldots, n)\,.
\end{equation}
This relation\footnote{Strictly speaking, we need to choose a kinematic basis where $p_n$ is eliminated by momentum conservation for this relation to hold.} will also be useful when discussing the factorizations of form factor expansions in sec.~\ref{ssec:trF2proof}.
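The extraction property above can be checked symbolically; the following is a small sympy sketch (our own illustration, treating dot products as independent commuting symbols and implementing the chain contraction $v\cdot \mathrm{f}_j = (v\cdot p_j)\,\epsilon_j - (v\cdot\epsilon_j)\,p_j$):

```python
import sympy as sp

def dot(a, b):
    # dot products as independent symbols with a canonical ordering
    return sp.Symbol('d_{}_{}'.format(*sorted([a, b])))

def W(left, chain, right):
    # left . f_{j1} ... f_{jr} . right, using v.f_j = (v.p_j) e_j - (v.e_j) p_j
    if not chain:
        return dot(left, right)
    j, rest = chain[0], chain[1:]
    return dot(left, 'p%d' % j) * W('e%d' % j, rest, right) \
         - dot(left, 'e%d' % j) * W('p%d' % j, rest, right)

def D(expr, seq):
    # ordered derivatives for seq = (1, i1, ..., ir, n):
    # d/d(e_1.p_{i1}) d/d(e_{i1}.p_{i2}) ... d/d(e_{ir}.e_n)
    for a, b in zip(seq[:-2], seq[1:-1]):
        expr = sp.diff(expr, dot('e%d' % a, 'p%d' % b))
    return sp.diff(expr, dot('e%d' % seq[-2], 'e%d' % seq[-1]))

# n = 5 example: the derivative picks out exactly the matching ordered trace
assert D(W('e1', [2, 3], 'e5'), (1, 2, 3, 5)) == 1   # matching ordered set
assert D(W('e1', [3, 2], 'e5'), (1, 2, 3, 5)) == 0   # wrong ordering
assert D(W('e1', [2, 4], 'e5'), (1, 2, 3, 5)) == 0   # wrong set
```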
Below we will see that for form factors, a decomposition similar to \eqref{eq:YMdecomp} holds but manifests gauge invariance for all particles. Let us start from some simple examples.
\subsection{Three- and Four-point example for form factors}
Now we study the simple three- and four-point form factors to see the pattern for the form factor expansion.
\subsubsection{Three-point example}
Let us start from the simplest three-point form factor $F_{3}^{\operatorname{tr}(F^2)}$: one can reorganize the expression in the following form\footnote{Note that $\operatorname{tr}^{\rm f}(1,2)=2 (p_1\cdot \epsilon_2)(p_2\cdot \epsilon_1)-2(p_1\cdot p_2)( \epsilon_1\cdot \epsilon_2)$ has an overall factor 2 and $\operatorname{tr}^{\rm f}(1,2,3)=(p_1\cdot \epsilon_2)(p_2\cdot \epsilon_3)( p_3\cdot \epsilon_1)+\cdots$ does not.}
\begin{equation}\label{eq:3pttrF2}
\begin{aligned}
F^{\operatorname{tr}(F^2)}_3(1^g,2^g,3^g)=&\operatorname{tr}^{\rm f}(1,2) \left(\frac{p_1\cdot \epsilon_3 }{s_{13}}+\frac{-p_2\cdot \epsilon_3}{s_{23}}\right)+\text{cyclic}(1,2,3)\\
&+2 \operatorname{tr}^{\rm f}(1,2,3)\left(\frac{1}{s_{12}}+\frac{1}{s_{13}}+\frac{1}{s_{23}}\right)\,.
\end{aligned}
\end{equation}
The key feature in \eqref{eq:3pttrF2}, similar to the amplitude case, is to strip off part of the polarizations and leave gauge-invariant blocks containing the remaining ones. These gauge-invariant blocks turn out to be scalar form factors:
\begin{equation} \label{eq:3pttrphi2}
\begin{aligned}
&F^{\operatorname{tr}(\phi^2)}_3(1^\phi,2^\phi|1^\phi,2^\phi,3^g)=\left(\frac{p_1\cdot \epsilon_3 }{s_{13}}+\frac{-p_2\cdot \epsilon_3}{s_{23}}\right)\\
&F^{\operatorname{tr}(\phi^2)}_3(1^\phi,2^\phi,3^\phi|1^\phi,2^\phi,3^\phi)=\left(\frac{1}{s_{12}}+\frac{1}{s_{13}}+\frac{1}{s_{23}}\right)
\end{aligned}
\end{equation}
so that
\begin{equation}\label{eq:3pttrF2b}
\begin{aligned}
F^{\operatorname{tr}(F^2)}_3(1^g,2^g,3^g)=&\operatorname{tr}^{\rm f}(1,2) F^{\operatorname{tr}(\phi^2)}_3(1^\phi,2^\phi|1^\phi,2^\phi,3^g)+\text{cyclic}(1,2,3)\\
&+ 2\operatorname{tr}^{\rm f}(1,2,3)F_3^{\operatorname{tr}(\phi^2)}(1^\phi,2^\phi,3^\phi|1^\phi,2^\phi,3^\phi)\,.
\end{aligned}
\end{equation}
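The manifest gauge invariance of \eqref{eq:3pttrF2} for all three gluons can also be verified numerically. The following numpy sketch (our own check, assuming a mostly-minus metric and the convention $s_{ij}=2p_i\cdot p_j$) shifts each polarization by a multiple of its own momentum:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])       # mostly-minus Minkowski metric
mdot = lambda a, b: a @ eta @ b

def massless(rng):
    v = rng.normal(size=3)
    return np.concatenate(([np.linalg.norm(v)], v))   # null momentum

def trf(*legs):
    # closed trace tr^f(i_1,...,i_r) with f_i^{mu nu} = p^mu e^nu - p^nu e^mu
    m = np.eye(4)
    for p, e in legs:
        m = m @ (np.outer(p, e) - np.outer(e, p)) @ eta
    return np.trace(m)

def F3(legs):
    # the explicit three-point formula for F_3^{tr(F^2)}
    (p1, e1), (p2, e2), (p3, e3) = legs
    s = lambda a, b: 2 * mdot(a, b)
    out = 0.0
    for (pa, ea), (pb, eb), (pc, ec) in [((p1,e1),(p2,e2),(p3,e3)),
                                         ((p2,e2),(p3,e3),(p1,e1)),
                                         ((p3,e3),(p1,e1),(p2,e2))]:
        out += trf((pa,ea),(pb,eb)) * (mdot(pa,ec)/s(pa,pc) - mdot(pb,ec)/s(pb,pc))
    return out + 2*trf((p1,e1),(p2,e2),(p3,e3)) * (
        1/s(p1,p2) + 1/s(p1,p3) + 1/s(p2,p3))

ps = [massless(rng) for _ in range(3)]
es = [rng.normal(size=4) for _ in range(3)]
base = F3(list(zip(ps, es)))
for i in range(3):                            # shift e_i -> e_i + 0.7 p_i
    shifted = [e + 0.7*p if j == i else e for j, (p, e) in enumerate(zip(ps, es))]
    assert np.isclose(F3(list(zip(ps, shifted))), base)
```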
From the three-point example, we see that the basic differences between the form factor decomposition \eqref{eq:3pttrF2b} and the amplitude expansion \eqref{eq:YMdecomp} are: (1) instead of the ``open traces'' in \eqref{eq:YMdecomp}, here we have ``closed traces'' defined as
\begin{equation}
\operatorname{tr}^{\rm f}(i_1,\ldots, i_r)=({\rm f}_{i_1})^{\mu_1}_{\mu_2}({\rm f}_{i_2})^{\mu_2}_{\mu_3}\cdots ({\rm f}_{i_r})^{\mu_r}_{\mu_1}\,;
\end{equation}
(2) \eqref{eq:3pttrF2b} is manifestly gauge invariant for all particles, in contrast to \eqref{eq:YMdecomp}, which manifests gauge invariance for only $(n-2)$ of them; (3) crossing symmetry is trivial for \eqref{eq:3pttrF2b}.
\subsubsection{Four-point example}
Then we turn to the four-point example. Given the three properties observed in the three-point example, we expect an expansion of the form
\begin{align}\label{eq:4pttrF22}
F_4^{\operatorname{tr}(F^2)}(1,2,3,4)&
=
\lambda_2 \sum_{i_1<i_2}\operatorname{tr}^{\rm f}(i_1,i_2) F_4^{\operatorname{tr}(\phi^2)}(i_1,i_2|1,2,3,4) \nonumber\\
&+\lambda_3\sum_{i_1<i_2<i_3}\operatorname{tr}^{\rm f}(i_1,i_2,i_3) F_4^{\operatorname{tr}(\phi^2)}(i_1,i_2,i_3|1,2,3,4) \\
&+\lambda_4\sum_{\sigma\in S_{4}/(\mathbb{Z}_4\times S_2)}\operatorname{tr}^{\rm f}(\sigma(1,2,3,4)) F_4^{\operatorname{tr}(\phi^2)}(\sigma(1,2,3,4)|1,2,3,4) \nonumber \,,
\end{align}
where the last sum runs over permutations of the four particles modulo reflections and cyclic permutations. Explicit calculation shows that \eqref{eq:4pttrF22} is indeed correct if $\lambda_2=1,\lambda_3=\lambda_4=2$.
It is helpful to list explicitly some examples of the Feynman diagrams of the double-ordered blocks appearing in \eqref{eq:4pttrF22}
\begin{equation}\label{eq:4ptYMSFFblocks}
\begin{aligned}
\includegraphics[width=0.85\linewidth]{fig/FeynDiags.eps}
\end{aligned}
\end{equation}
where we introduce the conventions employed in this paper for Feynman diagrams: thin black straight lines for massless scalars, wiggly lines for gluons, and double blue lines for $q$; also, drawing multiple double blue $q$-legs in a single diagram means an implicit summation.
From these concrete four-point examples, we observe that the Feynman diagrams of the double-ordered form factors $F_n^{\operatorname{tr}(\phi^2)}(\alpha|1, \ldots, n)$ can be obtained from those of the double-ordered amplitudes (in the Yang-Mills-scalar theory) $A^{ \rm YMS}(\alpha|1, \ldots, n)$ by inserting the $q$-leg at appropriate positions.
A more compact way to organize \eqref{eq:4pttrF22} is as follows
\begin{equation}\label{eq:4pttrF23}
F_4^{\operatorname{tr}(F^2)}(1,2,3,4)=\sum_{r=2}^{4}\sum_{i_1<\cdots<i_r}\sum_{\sigma\in S_{r}/\mathbb{Z}_r}\operatorname{tr}^{\rm f}(\sigma(i_1, \ldots, i_r)) F_4^{\operatorname{tr}(\phi^2)}(\sigma(i_1, \ldots, i_r)|1,2,3,4)\,,
\end{equation}
in which we have used the property
\begin{equation}
\operatorname{tr}^{\rm f}(\alpha)=(-1)^{|\alpha|} \operatorname{tr}^{\rm f}(\alpha^{-1}),\quad F^{\operatorname{tr}(\phi^2)}(\alpha|\beta)=(-1)^{|\alpha|}F^{\operatorname{tr}(\phi^2)}(\alpha^{-1}|\beta)\,.
\end{equation}
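The first of these reversal properties follows from the antisymmetry of the linearized field strengths; a quick numerical check of $\operatorname{tr}^{\rm f}(\alpha)=(-1)^{|\alpha|} \operatorname{tr}^{\rm f}(\alpha^{-1})$ (our own sketch, assuming a mostly-minus metric; the analogous property of the form factors is not checked here) is:

```python
import numpy as np

rng = np.random.default_rng(1)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def f_mixed(p, e):
    # f^mu_nu = (p^mu e^rho - e^mu p^rho) eta_{rho nu}, eta-antisymmetric
    return (np.outer(p, e) - np.outer(e, p)) @ eta

fs = [f_mixed(rng.normal(size=4), rng.normal(size=4)) for _ in range(5)]

def trf(chain):
    # closed trace tr^f of a chain of field strengths
    m = np.eye(4)
    for x in chain:
        m = m @ x
    return np.trace(m)

for r in (2, 3, 4, 5):
    assert np.isclose(trf(fs[:r]), (-1)**r * trf(fs[:r][::-1]))
```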
It is interesting to notice that a ``flavor-kinematics'' duality exists, in the sense that in the last sum of \eqref{eq:4pttrF23}, in which $i_1,\ldots, i_r$ are scalars, the ``closed trace'' of polarizations can be obtained by a simple substitution in the flavor trace
\begin{equation}
\begin{aligned}
&\sum_{\sigma\in S_{r}/\mathbb{Z}_r}\operatorname{tr}^{\rm f}(\sigma(i_1, \ldots, i_r)) F_4^{\operatorname{tr}(\phi^2)}(\sigma(i_1, \ldots, i_r)|1,2,3,4) \\
=& \sum_{\sigma\in S_{r}/\mathbb{Z}_r}\operatorname{tr}_{\rm FL}(\sigma(i_1, \ldots, i_r))\big|_{T^{I_i}\rightarrow \mathrm{f}^{i}} F_4^{\operatorname{tr}(\phi^2)}(\sigma(i_1, \ldots, i_r)|1,2,3,4)\\
=&\sum_{\sigma\in S_{r}/\mathbb{Z}_r} \big(\mathbf{F}_4^{\operatorname{tr}(\phi^2)}(1,2,3,4) \text{ where } i_1, \ldots, i_r \text{ are scalars}\big)\big|_{T^{I_i}\rightarrow \mathrm{f}^{i}}
\end{aligned}
\end{equation}
where in the last line $\mathbf{F}_4^{\operatorname{tr}(\phi^2)}$ is the color-ordered form factor rather than the double-ordered one. As a result, we can reorganize the expansion according to scalar ``skeleton'' diagrams, which are defined by deleting the external gluons of cubic diagrams. For the four-point case mentioned above, one gets
\begin{equation}
\begin{aligned}
\mathbf{F}_4^{\operatorname{tr}(\phi^2)}(1^{\phi},2^{\phi},3^{\phi},4^{g})\big|_{T^{I_i}\rightarrow \mathrm{f}^{i}}&\hskip -5pt =f^{123}\big|_{T^{I_i}\rightarrow \mathrm{f}^{i}} {F}_{4}^{\operatorname{tr}(\phi^2)}\Big(\begin{aligned}
\includegraphics[height=0.075\linewidth]{fig/cubic3pt.eps}
\end{aligned}\hskip -3pt \Big)=2 \operatorname{tr}^{\rm f}(1,2,3){F}_{4}^{\operatorname{tr}(\phi^2)}
\Big(\begin{aligned}
\includegraphics[height=0.075\linewidth]{fig/cubic3pt.eps}
\end{aligned}\hskip -3pt \Big)
\end{aligned}
\end{equation}
where the scalar ``skeleton'' is nothing but a three-point vertex,
and
\begin{equation}
\begin{aligned}
\mathbf{F}_4^{\operatorname{tr}(\phi^2)}&(1^{\phi},2^{\phi},3^{\phi},4^{\phi})\big|_{T^{I_i}\rightarrow \mathrm{f}^{i}}\\
= & f^{12 \rm x}f^{\rm x 34} \big|_{T^{I_i}\rightarrow \mathrm{f}^{i}}
F_4^{\operatorname{tr}(\phi^2)}\Big(
\begin{aligned}
\includegraphics[height=0.072\linewidth]{fig/t4pt.eps}
\end{aligned}
\Big) + f^{41 \rm x}f^{\rm x 23} \big|_{T^{I_i}\rightarrow \mathrm{f}^{i}} F_4^{\operatorname{tr}(\phi^2)}\Big(
\begin{aligned}
\includegraphics[width=0.078\linewidth]{fig/s4pt.eps}
\end{aligned}
\Big)\\
= & 2\operatorname{tr}^{\rm f}(1,[2,3],4) F_4^{\operatorname{tr}(\phi^2)}\Big(
\begin{aligned}
\includegraphics[width=0.078\linewidth]{fig/s4pt.eps}
\end{aligned}
\Big) + 2\operatorname{tr}^{\rm f}(1,2,[3,4])F_4^{\operatorname{tr}(\phi^2)}\Big(
\begin{aligned}
\includegraphics[height=0.072\linewidth]{fig/t4pt.eps}
\end{aligned}
\Big)\,,
\end{aligned}
\end{equation}
where $[\,,\,]$ denotes the commutator.
The explicit Feynman diagrams for each ``skeleton'' are given by
\begin{equation}
\begin{aligned}
&{F}_{4}^{\operatorname{tr}(\phi^2)}
\Big(\begin{aligned}
\includegraphics[height=0.075\linewidth]{fig/cubic3pt.eps}
\end{aligned}\Big)=\begin{aligned}
\includegraphics[height=0.085\linewidth]{fig/FeynDiagsb.eps}
\end{aligned} ,\\
&{F}_{4}^{\operatorname{tr}(\phi^2)}
\Big(\begin{aligned}
\includegraphics[width=0.078\linewidth]{fig/s4pt.eps}
\end{aligned}\Big)=\hskip -8pt\begin{aligned}
\includegraphics[width=0.16\linewidth]{fig/s4q.eps}
\end{aligned}, \quad {F}_{4}^{\operatorname{tr}(\phi^2)}
\Big(\begin{aligned}
\includegraphics[height=0.072\linewidth]{fig/t4pt.eps}
\end{aligned}\Big)=\hskip -10pt \begin{aligned}
\includegraphics[height=0.11\linewidth]{fig/t4q.eps}
\end{aligned}.
\end{aligned}
\end{equation}
Such a ``skeleton'' diagram expansion can be generalized to higher points and also has a close relation to color-kinematics duality and the double copy, as will be discussed in other works. Below we continue to focus on the decomposition similar to \eqref{eq:4pttrF23}.
\subsection{The general $n$-point case}\label{ssec:trF2proof}
Following \eqref{eq:4pttrF23}, the $n$-point generalization is straightforward\footnote{Alternatively, the last sum can be replaced by $\sum_{\sigma\in S_{r}/(\mathbb{Z}_r\times \mathbb{Z}_2)}\operatorname{tr}^{\rm f}(\sigma(i_1,\ldots,i_r))(1+\delta_{2 r})$ $ F^{\operatorname{tr}(\phi^2)}(\sigma(i_1,\ldots,i_r)|1,\ldots,n)$. The two-particle part is special because the flavor factor associated with $F^{\operatorname{tr}(\phi^2)}(a,b|1 \ldots n)$ is $\delta^{ab}=\operatorname{tr}(T^{I_a}T^{I_b})$, not a product of structure constants.}
\begin{equation}\label{eq:npttrF2}
F_n^{\operatorname{tr}(F^2)}(1,\ldots,n)=\sum_{r=2}^{n}\sum_{i_1<\cdots<i_r}\sum_{\sigma\in S_{r}/\mathbb{Z}_r}\operatorname{tr}^{\rm f}(\sigma(i_1, \ldots, i_r)) F_n^{\operatorname{tr}(\phi^2)}(\sigma(i_1, \ldots, i_r)|1, \ldots, n)\,.
\end{equation}
It is straightforward to define a transmuted operator as the ``closed-trace'' derivative $\mathcal{D}_{\sigma(i_1,i_2,\ldots,i_r)}$
\begin{equation}
\mathcal{D}_{(i_1,i_2,\ldots,i_r)}:= \partial_{({\color{red}\bm \epsilon_{i_{1}}}\cdot p_{i_2})}\partial_{(\epsilon_{i_2}\cdot p_{i_3})}\cdots \partial_{(\epsilon_{i_r}\cdot {\color{red}\bm p_{i_{1}}})}\,,
\end{equation}
so that
\begin{equation}
\begin{aligned}
&\mathcal{D}_{(i_1, \ldots, i_r)} \operatorname{tr}^{\rm f}(j_1, \ldots, j_r)=1 \Leftrightarrow \exists \kappa \in \mathbb{Z}_{r} \text{ so that } \kappa(i_1, \ldots, i_r)=(j_1, \ldots, j_r)\,,\\
&\mathcal{D}_{(i_1,\ldots, i_r)} \operatorname{tr}^{\rm f}(j_1,\ldots, j_m)=0 \text{ otherwise. }
\end{aligned}
\end{equation}
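The cyclic-matching rule above is purely combinatorial. As a minimal sketch (hypothetical helper name; the trace arguments are treated simply as tuples of labels, and the reflection sign discussed in the footnote is not included), it can be coded as:

```python
def trace_derivative(deriv, trace):
    """Action of the closed-trace derivative D_{deriv} on tr^f(trace):
    returns 1 iff the two cyclic words have the same length and are
    related by a rotation (an element of Z_r), and 0 otherwise."""
    if len(deriv) != len(trace):
        return 0
    r = len(trace)
    deriv = tuple(deriv)
    # try all r cyclic rotations of the trace word
    return int(any(tuple(trace[k:]) + tuple(trace[:k]) == deriv
                   for k in range(r)))
```

For instance, $\mathcal{D}_{(1,2,3)}$ hits $\operatorname{tr}^{\rm f}(2,3,1)$ but annihilates $\operatorname{tr}^{\rm f}(1,3,2)$, which is the anti-cyclic ordering.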
Parallel to \eqref{eq:YMderivative}, we also have\footnote{We can formally distinguish a ``closed-trace'' derivative on $\alpha$ and $\alpha^{\scriptscriptstyle \rm T}$, since $\mathcal{D}_{\alpha}=(-1)^{|\alpha|}\mathcal{D}_{\alpha}^{\scriptscriptstyle \rm T}$. Combining both $\mathcal{D}_{\alpha}$ and $\mathcal{D}_{\alpha}^{\scriptscriptstyle \rm T}$ together (without distinguishing them) only gives the $(1+\delta_{2r})$ factor in the previous footnote.}
\begin{equation}
\mathcal{D}_{(i_1, \ldots, i_r)}F_n^{\operatorname{tr}(F^2)}(1, \ldots, n)=F_n^{\operatorname{tr}(\phi^2)}(i_1, \ldots, i_r|1, \ldots, n)\,.
\end{equation}
We now prove this decomposition formula by examining its factorization properties.
The two kinds of building blocks in \eqref{eq:npttrF2} are $F_n^{\operatorname{tr}(\phi^2)}(i_1, \ldots, i_r|1, \ldots, n)$ and $\operatorname{tr}^{\rm f}(i_1, \ldots, i_r)$, and we give their factorization properties below.
First, we focus on the form factors.
For convenience, we consider the residue on the $s_{1,\ldots, m}$ pole, on which the double-ordered form factor factorizes as a double-ordered amplitude times a lower-point form factor:
\begin{equation}\label{eq:DOFFfact1}
\begin{aligned}
\text{Res}[F_n^{\operatorname{tr}(\phi^2)}&(i_1, \ldots, i_r|1, \ldots, n)]_{s_{1,\ldots, m}= 0}\\
&=A^{ \rm YMS}(i_{1}, \ldots, i_{r'}, I^{+}|1, \ldots, m,I^{+})F_{n{-}m{+}1}^{\operatorname{tr}(\phi^2)}(I^{-},i_{r^{\prime}+1}, \ldots, i_r|I^{-},m{+}1, \ldots, n)\,,
\end{aligned}
\end{equation}
where we have assumed that the exchanged particle is a scalar, denoted by $I^+,I^-$. If it is a gluon (denoted by $I^{+}(\epsilon),I^{-}(\bar{\epsilon})$), things are simpler, as the amplitude must be composed of consecutive gluons and the scalar skeleton is not changed by the factorization\footnote{The reason why the scalar structure is preserved is that we take only the single-trace flavor structure $\operatorname{tr}_{\rm FL}(Y^{i_1}\ldots Y^{i_r})$, so the scalar ``skeleton'' must be connected. Thus, if the exchanged particle is a gluon, all the scalars must be on one side of the factorization.}
\begin{equation}\label{eq:DOFFfact2}
\begin{aligned}
\text{Res}[F_n^{\operatorname{tr}(\phi^2)}(i_1,&\ldots,i_r|1,\ldots,n)]_{s_{1,\ldots, m}= 0}\\
&=\sum_{\rm \epsilon\in \epsilon_{\pm}}A^{ \rm YM}(1,\ldots,m,I^{+}(\epsilon))F_{n{-}m{+}1}^{\operatorname{tr}(\phi^2)}(i_1,\ldots,i_r|I^{-}(\Bar{\epsilon}),\ldots,n)\,.
\end{aligned}
\end{equation}
Second, a ``factorization'' related to $\operatorname{tr}^{\rm f}(i_1, \ldots, i_r)$ is crucial. In particular, we ``factorize'' the ``closed-trace'' derivative $\mathcal{D}_{(i_1,\ldots,i_r)}$ into an ``open-trace'' derivative and a ``closed-trace'' derivative, acting respectively on the two blocks
\begin{equation}\label{eq:splitderivative}
\mathcal{D}_{(i_1,\ldots,i_r)}\cong \widetilde{\mathcal{D}}_{(i_1,\ldots,i_{r^{\prime}},I^{+}(\epsilon))}{\scriptstyle \circ} {\mathcal{D}}_{(I^{-}(\bar{\epsilon}),i_{r^{\prime}+1},\ldots,i_r)}\,,
\end{equation}
where ${\scriptstyle \circ}$ implies a summation over helicities, and the $\cong$ symbol means that the two sides of \eqref{eq:splitderivative} coincide when acting on amplitudes, as will be explained in Appendix~\ref{app:proof dec trF2}.
To show the consistency of \eqref{eq:npttrF2}, we consider the commutation of taking derivatives and taking residues on physical poles, a technique originally proposed in \cite{Cheung:2017ems}.
More precisely, we show the following two approaches are equivalent:
\begin{enumerate}
\item Taking derivatives before residues
\begin{equation}\label{eq:cutderivative1}
\text{Expression 1: } \text{Res}\left[\mathcal{D}_{\sigma(i_1,i_2,\ldots,i_r)}F_n^{\operatorname{tr}(F^2)}(1,\ldots,n)\right]_{s_{1,\ldots,m}= 0}
\end{equation}
\item Taking residues before derivatives
\begin{align}\label{eq:cutderivative2}
\text{Expression 2: }&\mathcal{D}_{\sigma(i_1,i_2,\ldots,i_r)}\left(\text{Res}\big[F_n^{\operatorname{tr}(F^2)}(1,\ldots,n)\big]_{s_{1,\ldots,m}= 0}\right)\\
=&\mathcal{D}_{\sigma(i_1,i_2,\ldots,i_r)}\left(A^{ \rm YM}(1,\ldots,m,I^{+}(\epsilon))\circ F_{n{-}m{+}1}^{\operatorname{tr}(F^2)}(I^{-}(\bar{\epsilon}),m{+}1,\ldots,n)\right)\,. \nonumber
\end{align}
\end{enumerate}
Since the form factor is a rational function of Lorentz products of polarization vectors and momenta, it is reasonable to expect that Expression 1 equals Expression 2. Now we explain why this identity holds if we have the decomposition \eqref{eq:npttrF2}.
We start from Expression 1, plugging in \eqref{eq:npttrF2} and performing a direct calculation:
\begin{equation}
\text{Res}\left[\mathcal{D}_{\sigma(i_1,i_2,\ldots,i_r)}F_n^{\operatorname{tr}(F^2)}(1,\ldots,n)\right]_{s_{1,\ldots,m}= 0}= \text{Res}\left[F_n^{\operatorname{tr}(\phi^2)}(\sigma(i_1,\ldots,i_r)|1,\ldots,n)\right]_{s_{1,\ldots,m}=0}\,,
\end{equation}
giving
\begin{equation}\label{eq:cutderivative12}
A^{ \rm YMS}(i_{1},\ldots,i_{r'},I^{+}|1,\ldots,m,I^{+})F_{n{-}m{+}1}^{\operatorname{tr}(\phi^2)}(I^{-},i_{r^{\prime}+1},\ldots,i_r|I^{-},m{+}1,\ldots,n)\,,
\end{equation}
where we have used the factorization of double-ordered form factors \eqref{eq:DOFFfact1}. The case where the exchanged particle $I$ is a gluon proceeds in the same way.
Next we calculate Expression 2.
Given \eqref{eq:splitderivative}, Expression 2 in \eqref{eq:cutderivative2} becomes
\begin{equation}
\left(\widetilde{\mathcal{D}}_{(i_1,\ldots,i_{r'},I^{+}(\epsilon))}A^{ \rm YM}(1,\ldots,m,I^{+}(\epsilon))\right)\left({\mathcal{D}}_{(I^{-}(\bar{\epsilon}),i_{r^{\prime}+1},\ldots,i_r)}F_{n{-}m{+}1}^{\operatorname{tr}(F^2)}(I^{-}(\bar{\epsilon}),m{+}1,\ldots,n)\right)\,.
\end{equation}
Plugging in the (lower-point) decomposition formulas for both the amplitude and the form factor, one finally reaches
\begin{align}\label{eq:cutderivative13}
&\widetilde{\mathcal{D}}_{(i_1,\ldots,i_{r'},I^{+}(\epsilon))}A^{ \rm YM}(1,\ldots,m,I^{+}(\epsilon))= A^{ \rm YMS}(i_{1},\ldots,i_{r'},I^{+}|1,\ldots,m,I^{+}) \\
\nonumber &{\mathcal{D}}_{(I^{-}(\bar{\epsilon}),i_{r^{\prime}+1},\ldots,i_r)}F_{n{-}m{+}1}^{\operatorname{tr}(F^2)}(I^{-}(\bar{\epsilon}),m{+}1,\ldots,n)= F_{n{-}m{+}1}^{\operatorname{tr}(\phi^2)}(I^{-},i_{r^{\prime}+1},\ldots,i_r|I^{-},m{+}1,\ldots,n)\,,
\end{align}
of which the product is exactly \eqref{eq:cutderivative12}.
To summarize, we have shown that, assuming the decomposition \eqref{eq:npttrF2}, Expressions 1 and 2 in \eqref{eq:cutderivative1} and \eqref{eq:cutderivative2} are equivalent, which is the most important consistency check of \eqref{eq:npttrF2}. Furthermore, by induction, one can deduce the higher-point decomposition formula from lower-point cases, at least up to special contributions that vanish under all the derivatives $\mathcal{D}_{(i_1,\ldots,i_r)}$; gauge invariance and momentum power counting, however, forbid such contributions. In conclusion, we have a proof of \eqref{eq:npttrF2}, based on a similar decomposition for Yang-Mills amplitudes \eqref{eq:YMdecomp} and unitarity, \emph{i.e.} factorizations of form factors\footnote{It is sufficient to consider only the factorizations because there are no $n$-particle gauge-invariant contact terms with the correct mass dimension.}.
\section{Expanding ${\rm tr}(\phi^2)$ form factors into YMS amplitudes}\label{sec:phi2decomp}
In this section, we present a general expansion of ${\rm tr}(\phi^2)$ form factors with $r$ scalars and $n{-}r$ gluons into YMS amplitudes with $r{+}1$ scalars (including $q$). As we will show, the pure-scalar case ($n=r$) plays an important role and turns out to be the most difficult step in establishing the expansion for the general case, since the inclusion of gluons becomes relatively simple once this is done.
\subsection{Expansion for pure scalar cases}\label{ssec:scalarskeleton}
Let us begin by considering the pure-scalar $r$-point ${\rm tr}(\phi^2)$ form factor, which we denote as $F_r^{{\rm tr}(\phi^2)}(\alpha|\beta)$ with two orderings $\alpha, \beta$. We will present an algorithm for its expansion into a linear combination of $(r{+}1)$-point bi-adjoint $\phi^3$ amplitudes. The existence of such an expansion per se is not surprising, since we can always express trivalent scalar diagrams as linear combinations of bi-adjoint $\phi^3$ amplitudes. However, the key point of our construction is to maintain both orderings for the $(r{+}1)$-point amplitude, with the $q$ leg inserted into the $\alpha$ and $\beta$ orderings.
\subsubsection{The $\alpha=\beta$ cases}
The special case $F_r^{{\rm tr}(\phi^2)}(\alpha|\alpha)$, say $\alpha=(1,2, \ldots, r)$, turns out to be particularly simple but illuminating. We find that
\begin{equation} \label{eq pure scalar}
\begin{split}
&F_r^{{\rm tr}(\phi^2)}(1,2,\ldots,r| 1,2,\ldots,r) \\
=&\sum_{1\leq a \leq b <r } A(1,2,\ldots,a,q,a{+}1,\ldots,r| 1,2,\ldots,b,q,b{+}1,\ldots,r)\,,
\end{split}
\end{equation}
where here and throughout we use $A(\alpha|\beta)$ to denote a double-ordered YMS ($\phi^3$) amplitude. Note that the RHS of \eqref{eq pure scalar} has $1+2+\cdots+(r{-}1)=r(r{-}1)/2$ terms. Let us spell out the simplest examples for $r=3,4$:
\begin{equation} \label{eq pure scalar 3pt}
\begin{split}
F_3^{{\rm tr}(\phi^2)}(1,2,3| 1,2,3) = & A(1,q,2,3| 1,q,2,3)+A(1,q,2,3| 1,2,q,3)\\
&+ A(1,2,q,3| 1,2,q,3)\,,
\end{split}
\end{equation}
and
\begin{align} \label{eq pure scalar 4pt}
&F_4^{{\rm tr}(\phi^2)}(1,2,3,4| 1,2,3,4) = \\
& A(1,q,2,3,4| 1,q,2,3,4)+A(1,q,2,3,4| 1,2,q,3,4)+A(1,q,2,3,4| 1,2,3,q,4) \nonumber \\
&+ A(1,2,q,3,4| 1,2,q,3,4)+A(1,2,q,3,4| 1,2,3,q,4) \nonumber \\
&+ A(1,2,3,q,4| 1,2,3,q,4). \nonumber
\end{align}
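The double sum in \eqref{eq pure scalar} is simple to enumerate explicitly. The following minimal Python sketch (hypothetical helper name, with the operator leg represented by the string \texttt{'q'}) generates the ordered pairs $(\alpha|\beta)$ appearing on the RHS:

```python
def pure_scalar_terms(r):
    """Ordered pairs (alpha|beta) on the RHS of the alpha=beta expansion:
    q is inserted after position a in the first ordering and after
    position b in the second, with 1 <= a <= b < r."""
    base = list(range(1, r + 1))
    terms = []
    for a in range(1, r):
        for b in range(a, r):
            alpha = tuple(base[:a] + ['q'] + base[a:])
            beta = tuple(base[:b] + ['q'] + base[b:])
            terms.append((alpha, beta))
    return terms
```

For $r=3$ this reproduces the three terms of \eqref{eq pure scalar 3pt}, and the list always has length $r(r{-}1)/2$.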
By definition, the LHS of \eqref{eq pure scalar} satisfies the following cyclic and reflection symmetries, which the RHS must reproduce for consistency:
\begin{enumerate}
\item Cyclicity: the form factors are defined to satisfy
$$F_r^{{\rm tr}(\phi^2)}(1,2,\ldots,r| 1,2,\ldots,r)=F_r^{{\rm tr}(\phi^2)}(r,1,2,\ldots,r{-}1| r,1,2,\ldots,r{-}1)\,.$$ The U(1) decoupling relation for the double-ordered amplitudes ensures that the RHS of \eqref{eq pure scalar} enjoys the same cyclicity.
\item Reflection:
the ordered form factor satisfies the reflection symmetry $$F_r^{{\rm tr}(\phi^2)}(1,2,\ldots,r| 1,2,\ldots,r)= (-1)^r F_r^{{\rm tr}(\phi^2)}(1,2,\ldots,r| r,r{-}1,\ldots,1)\,.$$ If we adopt the following definition, analogous to \eqref{eq pure scalar}, when dealing with $F(\alpha|\alpha^{-1})$:
\begin{equation}\label{eq pure scalar2}
\begin{split}
&F_r^{{\rm tr}(\phi^2)}(1,2,\ldots,r| r,r{-}1,\ldots,1) \\
=&-\sum_{1\leq a \leq b <r } A(1,2,\ldots,a,q,a{+}1,\ldots,r| (1,2,\ldots,b,q,b{+}1,\ldots,r)^{-1}),
\end{split}
\end{equation}
then the RHS of \eqref{eq pure scalar} also acquires the factor $(-1)^{r}$ under reflection, due to the reflection property of the $(r{+}1)$-point amplitudes. The minus sign in \eqref{eq pure scalar2} will also play an important role below. We also mention another reflection symmetry, which is trivial due to U(1) decoupling, as
$$F_r^{{\rm tr}(\phi^2)}(r,r{-}1,\ldots,1| r,r{-}1,\ldots,1)=F_r^{{\rm tr}(\phi^2)}(1,2,\ldots,r| 1,2,\ldots,r)$$
with
\begin{equation}
\begin{split}
&F_r^{{\rm tr}(\phi^2)}(r,r{-}1,\ldots,1| r,r{-}1,\ldots,1) \\
=&\sum_{1\leq b \leq a <r } A(r,r{-}1,\ldots,a{+}1,q,a,\ldots,1| r,r{-}1,\ldots,b{+}1,q,b,\ldots,1)\,.
\end{split}
\end{equation}
\end{enumerate}
Moreover, the special double-sum structure is naturally consistent with factorization.
Given the cyclic invariance of \eqref{eq pure scalar}, it suffices to consider the pole $s_{1,\ldots, m}$. Within the expansion \eqref{eq pure scalar}, only the terms with $a\geq m$ in the summation contribute, and one gets
\begin{equation}\label{eq:purescalarproof}
\begin{split}
&{\rm Res}[F_r^{{\rm tr}(\phi^2)}(1,2,\ldots,r| 1,2,\ldots,r)]_{s_{1,\ldots, m}} \\
=&\sum_{m \leq a \leq b <r } {\rm Res}[A(1,2,\ldots,a,q,a{+}1,\ldots,r| 1,2,\ldots,b,q,b{+}1,\ldots,r)]_{s_{1,\ldots, m}}\\
=&\ A(1,\ldots,m,I^{+}|1,\ldots,m,I^{+})\ \times \\
& \qquad \sum_{m \leq a \leq b <r }A(I^{-},\ldots,a,q,a{+}1,\ldots,r| I^{-},\ldots,b,q,b{+}1,\ldots,r)\\
=&\ A(1,\ldots,m,I^{+}|1,\ldots,m,I^{+}) F_{r-m{+}1}^{{\rm tr}(\phi^2)}(I^{-},\ldots,r| I^{-},\ldots,r)\,.
\end{split}
\end{equation}
Finally, we remark that the special double-sum structure in \eqref{eq pure scalar} serves as a building block for the expansions of the general cases.
\subsubsection{The $\alpha\neq\beta$ cases: examples}
Next we consider the expansion for $\alpha\neq \beta$, focusing on some concrete examples before moving to the most general cases.
To begin with, we consider the case where $\alpha$ and $\beta$ differ only by a {\it partial reflection}, {\it i.e.} for some $1<t<r$ we reverse the sequence $I_2:=(t{+}1, \ldots, r)$ while keeping $I_1:=(1, \ldots, t)$. It turns out the result is remarkably simple:
\begin{equation} \label{eq onereflection}
\begin{split}
&F_r^{{\rm tr}(\phi^2)}(1,2,\ldots,t,r,r{-}1,\ldots,t{+}1| 1,2,\ldots,t,t{+}1,\ldots,r ) \\
=& -\sum_{t{+}1\leq b \leq a <r } A(I_1,r,r{-}1,\ldots,a{+}1,q,a,\ldots,t{+}1|
I_1, t{+}1, \ldots, b, q, b{+}1, \ldots, r) \\
& +\sum_{1\leq a \leq b \leq t } A(1,2,\ldots,a,q,a{+}1,\ldots,t,I_2^{-1} |1,2,\ldots,b,q,b{+}1,\ldots,t,I_2 )\,.
\end{split}
\end{equation}
where in the first line we insert $q$ in the sequence $I_2$ in a way similar to \eqref{eq pure scalar}, and in the second line\footnote{There are boundary cases in the second sum: for $a=t$ we define $(a,q,a{+}1) := (t,q,I_2^{-1})$, and similarly for $b=t$.} we insert $q$ as if we were dealing with a $(t{+}1)$-point form factor $\widetilde{F}_{t{+}1}^{{\rm tr}(\phi^2)}(1,2,\ldots,t,I_2^{-1}| 1,2,\ldots,t,I_2)$. In particular, we add a tilde to emphasize that it is an effective $(t{+}1)$-point object.
As an illustrative example, for $r=4$ with a reflection on $I_2=(3,4)$, we write
\begin{align}\label{eq onereflection 4pta}
&F_4^{{\rm tr}(\phi^2)}(1,2,4,3 | 1,2,3,4) \nonumber \\
=& -A(1,2,4,q,3 | 1,2,3,q,4) \\
&+A(1,q,2,4,3 | 1,q,2,3,4) +A(1,q,2,4,3 | 1,2,q,3,4) +A(1,2,q,4,3 | 1,2,q,3,4). \nonumber
\end{align}
\begin{figure}[htbp!]
\centering
\begin{equation*}
\begin{aligned}
\begin{aligned}\includegraphics[width=0.15\linewidth]{fig/one_reflection1.eps}\end{aligned}= \begin{aligned}\includegraphics[width=0.15\linewidth]{fig/one_reflection2.eps}\end{aligned} +\begin{aligned}\includegraphics[width=0.15\linewidth]{fig/one_reflection3.eps}\end{aligned} \left( \begin{aligned}\includegraphics[width=0.15\linewidth]{fig/one_reflection4.eps}\end{aligned}\right)
\end{aligned}
\end{equation*}
\caption{Feynman diagrams for $F_4^{{\rm tr}(\phi^2)}(1,2,4,3 | 1,2,3,4)$ and its expansion.}\label{fig example onereflection}
\end{figure}
As shown in Figure \ref{fig example onereflection}, the first line of the above expansion computes the two graphs where $q$ is inserted on the $3$- and $4$-legs respectively, and the second line gives the effective 3-point form factor $\widetilde{F}_3^{{\rm tr}(\phi^2)}(1,2,(4,3)|1,2,(3,4))$, in which the subgroup $(3,4)$ is regarded as a single particle.
This can be made precise by noticing that $\widetilde{F}_3^{{\rm tr}(\phi^2)}(1,2,(4,3)$ $|1,2,(3,4))\propto s_{3,4}^{-1}$. Further taking the residue on the $s_{3,4}$ pole shows that only the $\widetilde{F}_3^{{\rm tr}(\phi^2)}(1,2,(4,3)|1,2,(3,4))$ part contributes:
{
\begin{align}
&\text{Res}[F_4^{{\rm tr}(\phi^2)}(1,2,4,3 | 1,2,3,4)]_{s_{3,4}=0} \nonumber \\
= & \text{Res}[\widetilde{F}_3^{{\rm tr}(\phi^2)}(1,2,(4,3)|1,2,(3,4))]_{s_{3,4}=0} \\
=&\left(A(1,q,2,I^{+}|1,q,2,I^{+}){+}A(1,q,2,I^{+}|1,2,q,I^{+}){+}A(1,2,q,I^{+}|1,2,q,I^{+})\right)A(I^{-},4,3|I^{-},3,4)\nonumber \\
=&\ F_3^{{\rm tr}(\phi^2)}(1,2,I^{+}|1,2,I^{+}) A(I^{-},4,3|I^{-},3,4), \nonumber
\end{align}}
where the intermediate particles are denoted by $I^+,I^-$ respectively, and the fact that $\text{Res}[A(1,2,4,q,3 | 1,2,3,q,4)]_{s_{3,4}}{=}0$ is also used.
Note that there exists an alternative representation
\begin{align}\label{eq onereflection 4pt}
&F_4^{{\rm tr}(\phi^2)}(1,2,4,3 | 1,2,3,4) \nonumber \\
=& A(1,q,2,4,3 | 1,q,2,3,4) \\
&- A(1,2,q,4,3 | 3,4,q,1,2) -A(1,2,q,4,3 | 3,q,4,1,2)-A(1,2,4,q,3 | 3,q,4,1,2)\,, \nonumber
\end{align}
where the first term on the RHS computes the contribution with $q$ inserted in the subgroup $(1,2)$ and the bottom line expands $\widetilde{F}_3^{{\rm tr}(\phi^2)}((1,2),4,3|3,4,(1,2))$. \eqref{eq onereflection 4pta} is equivalent to \eqref{eq onereflection 4pt}, so one can choose either of them freely, and this will also be the case in the more general discussions later.
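The bookkeeping in \eqref{eq onereflection} can be automated along the same lines. The sketch below (hypothetical helper; the signs of the two sums are carried explicitly, and the boundary cases of the second sum are realized by placing $q$ just before the reflected block) lists the signed ordered pairs:

```python
def partial_reflection_terms(r, t):
    """Signed pairs (sign, alpha, beta) in the expansion of
    F(1..t, r..t+1 | 1..r): the first sum inserts q inside the reversed
    block I2^{-1} with a minus sign; the second sum treats I2 as a single
    block, as in the effective (t+1)-point form factor."""
    I1 = list(range(1, t + 1))
    I2 = list(range(t + 1, r + 1))
    rev = I2[::-1]                      # I2^{-1} = (r, r-1, ..., t+1)
    terms = []
    for a in range(t + 1, r):           # first sum: t+1 <= b <= a < r
        for b in range(t + 1, a + 1):
            i = rev.index(a)            # q sits between a+1 and a in rev
            alpha = tuple(I1 + rev[:i] + ['q'] + rev[i:])
            beta = tuple(I1 + I2[:b - t] + ['q'] + I2[b - t:])
            terms.append((-1, alpha, beta))
    for a in range(1, t + 1):           # second sum: 1 <= a <= b <= t
        for b in range(a, t + 1):
            alpha = tuple(I1[:a] + ['q'] + I1[a:] + rev)
            beta = tuple(I1[:b] + ['q'] + I1[b:] + I2)
            terms.append((+1, alpha, beta))
    return terms
```

For $r=4$, $t=2$ this reproduces the four terms of \eqref{eq onereflection 4pta}, including the relative minus sign of the first term.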
Now we present another example at 8 points, $F_8^{{\rm tr}(\phi^2)}(8,1,2,6,7,3,4,5 | 1,2,\ldots,8 )$. Importantly, we introduce an object which captures the main features of the Feynman diagrams for $A(\alpha | \beta )$, {\it i.e.} the {\it mutual partial triangulation} for a given ordered pair $(\alpha|\beta)$ \cite{CHY3, Arkani-Hamed:2017mur}.
We emphasize that the mutual partial triangulation is of particular importance in the discussion below.
The procedure to draw a mutual partial triangulation is simply given by the following steps:
\begin{enumerate}
\item Draw $n$ points on the boundary of a disk ordered cyclically by $\alpha$.
\item Draw a closed path of line segments connecting the points in order $\beta$. These line segments enclose a set of polygons, forming a polygon expansion.
\item The internal vertices of the decomposition, {\it i.e.} the intersections of the aforementioned line segments, correspond to cuts of cubic diagrams.
\item The cuts are translated to diagonals of the $\alpha$-ordered $n$-gon, forming a mutual partial triangulation.
\end{enumerate}
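The diagonals can also be obtained without drawing. Assuming the standard bi-adjoint rule that the physical poles of $A(\alpha|\beta)$ correspond to label subsets that are cyclically consecutive in both orderings (identified with their complements by momentum conservation), a short script (hypothetical helper names) lists them:

```python
def cyclic_windows(order, size):
    """All cyclically consecutive label subsets of the given size."""
    n = len(order)
    return {frozenset(order[(i + j) % n] for j in range(size))
            for i in range(n)}

def common_poles(alpha, beta):
    """Subsets consecutive in both orderings, deduplicated against their
    complements; by the assumed rule these are the physical poles."""
    n, full = len(alpha), frozenset(alpha)
    poles = set()
    for size in range(2, n - 1):
        poles |= cyclic_windows(alpha, size) & cyclic_windows(beta, size)
    return {min(s, full - s, key=lambda t: (len(t), sorted(t)))
            for s in poles}

poles = common_poles((1, 2, 3, 4, 5, 6, 7, 8), (8, 1, 2, 6, 7, 3, 4, 5))
```

For this pair one finds seven common poles; $s_{8,1,2}$, $s_{3,4,5}$ and $s_{6,7}$ are among them (the remaining four, such as $s_{1,2}$ and $s_{3,4}$, are sub-poles appearing only in some of the cubic diagrams, inside the subgroups cut out by the diagonals).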
For our purpose, it is equivalent to draw a mutual partial triangulation for $(\beta|\alpha)$; a step-by-step construction of the triangulation for $(1,2,\ldots,8| 8,1,2,6,7,3,4,5)$ is presented in Figure \ref{1711triangulation}, where the original picture is from \cite{Arkani-Hamed:2017mur} and the first three pictures were originally given in \cite{CHY3}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\linewidth]{1711triangulation.png}
\caption{Step by step procedure to draw the mutual partial triangulation for $(1,2,3,4,5,6,7,8| 8,1,2,6,7,3,4,5)$.}\label{1711triangulation}
\end{figure}
Now we further label the edges by the nearest nodes in their anticlockwise direction to represent the corresponding particles in Figure~\ref{8pt Feynman and tri}(left). Note that there are 3 diagonals cutting out 3 parts $(8,1,2),(3,4,5),(6,7)$ from the polygon, and each diagonal indicates an overall pole of the amplitude, namely $s_{1,2,8},s_{3,4,5},s_{6,7}$ in this example. Accordingly, we sketch the Feynman diagrams for $A(8,1,2,6,7,3,4,5|1,2,3,4,5,6,7,8)$ in Figure~\ref{8pt Feynman and tri}(right).
\begin{figure}[htbp!]
\centering
\subfloat{
\begin{minipage}[c]{0.4\linewidth}
\centering
\begin{tikzpicture}
\node[regular polygon,
draw,minimum size=3cm,
regular polygon sides = 8,rotate=360/16] (p) at (0,0) {};
\node[above] at (p.side 1) {2}; \node[left] at (p.side 2) {1};
\node[left] at (p.side 3) {8}; \node[below] at (p.side 4) {7};
\node[below] at (p.side 5) {6}; \node[right] at (p.side 6) {5};
\node[right] at (p.side 7) {4}; \node[above] at (p.side 8) {3};
\draw (p.corner 4) -- (p.corner 1); \draw (p.corner 1) -- (p.corner 6);
\draw (p.corner 6) -- (p.corner 4);
\end{tikzpicture}
\end{minipage}
}
\subfloat{
\begin{minipage}[c]{0.4\linewidth}
\centering
\includegraphics[width=0.7\linewidth]{fig/8pt_Feynman_and_tri.eps}
\end{minipage}
}
\caption{Mutual partial triangulation (left) for $(1,2,3,4,5,6,7,8| 8,1,2,6,7,3,4,5)$ and the corresponding Feynman diagrams (right).} \label{8pt Feynman and tri}
\end{figure}
\begin{figure}[htbp!]
\centering
\begin{equation*}
\begin{aligned}
\begin{aligned}
\includegraphics[width=0.24\linewidth]{fig/8pt_example1.eps}
\end{aligned}
=&\begin{aligned} \includegraphics[width=0.24\linewidth]{fig/8pt_example2.eps}
\end{aligned}+ \begin{aligned}
\includegraphics[width=0.24\linewidth]{fig/8pt_example3.eps}
\end{aligned}\\
=& \ldots + {\color[rgb]{0.5,0,0.5}\left(\begin{aligned}
\includegraphics[width=0.24\linewidth]{fig/8pt_example4.eps}
\end{aligned}+
\begin{aligned}
\includegraphics[width=0.24\linewidth]{fig/8pt_example5.eps}
\end{aligned} \right)}\\
=& \ldots\ldots + {\color[rgb]{1,0.5,0}\left(\begin{aligned}
\includegraphics[width=0.24\linewidth]{fig/8pt_example6a.eps}
\end{aligned} +
\begin{aligned}
\includegraphics[width=0.24\linewidth]{fig/8pt_example6.eps}
\end{aligned} \right)}
\end{aligned}
\end{equation*}
\centering
\subfloat{
\begin{minipage}[c]{0.24\linewidth}
\centering
\begin{tikzpicture}
\node[regular polygon,
draw,minimum size=2.5cm,
regular polygon sides = 8,rotate=360/16] (p) at (0,0) {};
\node[above] at (p.side 1) {2}; \node[left] at (p.side 2) {1};
\node[left] at (p.side 3) {8}; \node[below] at (p.side 4) {7};
\node[below] at (p.side 5) {6};
\node[right] at (p.side 6) {5};
\node[right] at (p.side 7) {4};
\node[above] at (p.side 8) {3};
\draw (p.corner 4) -- (p.corner 1); \draw (p.corner 1) -- (p.corner 6);
\draw (p.corner 6) -- (p.corner 4);
\end{tikzpicture}
\end{minipage}
}
\subfloat{
\begin{minipage}[c]{0.24\linewidth}
\centering
\begin{tikzpicture}
\node[regular polygon,
draw,minimum size=2.5cm,
regular polygon sides = 8,rotate=360/16] (p) at (0,0) {};
\draw[white,line width=2pt] (p.corner 7) -- (p.corner 8);
\draw[white,line width=2pt] (p.corner 6) -- (p.corner 7);
\draw[white,line width=2pt] (p.corner 1) -- (p.corner 8);
\node[above] at (p.side 1) {2}; \node[left] at (p.side 2) {1};
\node[left] at (p.side 3) {8}; \node[below] at (p.side 4) {7};
\node[below] at (p.side 5) {6};
\draw (p.corner 4) -- (p.corner 1); \draw (p.corner 1) -- (p.corner 6) node[midway, right] {(345)} ;
\draw (p.corner 6) -- (p.corner 4);
\end{tikzpicture}
\end{minipage}
}
\subfloat{
\begin{minipage}[c]{0.24\linewidth}
\centering
\begin{tikzpicture}
\node[regular polygon,
draw,minimum size=2.5cm,
regular polygon sides = 8,rotate=360/16] (p) at (0,0) {};
\draw[white,line width=2pt] (p.corner 7) -- (p.corner 8);
\draw[white,line width=2pt] (p.corner 6) -- (p.corner 7);
\draw[white,line width=2pt] (p.corner 1) -- (p.corner 8);
\draw[white,line width=2pt] (p.corner 4) -- (p.corner 5);
\draw[white,line width=2pt] (p.corner 5) -- (p.corner 6);
\node[above] at (p.side 1) {2}; \node[left] at (p.side 2) {1};
\node[left] at (p.side 3) {8};
\draw (p.corner 4) -- (p.corner 1); \draw (p.corner 1) -- (p.corner 6) node[midway, right] {(345)} ;
\draw (p.corner 6) -- (p.corner 4) node[midway, below] {(67)};
\end{tikzpicture}
\end{minipage}
}
\subfloat{
\begin{minipage}[c]{0.24\linewidth}
\centering
\begin{tikzpicture}
\node[regular polygon,
draw,minimum size=2.5cm,
regular polygon sides = 8,rotate=360/16] (p) at (0,4) {};
\draw[white,line width=2pt] (p.corner 7) -- (p.corner 8);
\draw[white,line width=2pt] (p.corner 6) -- (p.corner 7);
\draw[white,line width=2pt] (p.corner 1) -- (p.corner 8);
\draw[white,line width=2pt] (p.corner 4) -- (p.corner 5);
\draw[white,line width=2pt] (p.corner 5) -- (p.corner 6);
\draw[white,line width=2pt] (p.corner 1) -- (p.corner 2);
\draw[white,line width=2pt] (p.corner 2) -- (p.corner 3);
\draw[white,line width=2pt] (p.corner 3) -- (p.corner 4);
\draw (p.corner 4) -- (p.corner 1) node[midway, left] {(812)}; \draw (p.corner 1) -- (p.corner 6) node[midway, right] {(345)} ;
\draw (p.corner 6) -- (p.corner 4) node[midway, below] {(67)};
\end{tikzpicture}
\end{minipage}
}
\caption{Feynman diagrams computed in each step of the expansion for $F_8^{{\rm tr}(\phi^2)}(8,1,2,6,7,3,4,5 | 1,2,3,4,5,6,7,8 )$ (top), and the reduction of the mutual partial triangulation (bottom).}
\label{fig:8pt example1}
\end{figure}
Based on the analysis above, we know that there are 4 cubic diagrams for $A(8,1,2,6,7,3,4,5|1,2,3,4,5,6,7,8)$, and the scalar form factor $F(8,1,2,6, 7, 3, 4,5|$ $1,2,3,4,5,6,7,8)$ is obtained by inserting the $q$-leg on all possible scalar lines (each 8-point cubic diagram has $8$ external plus $5$ internal, {\it i.e.} $13$, lines), leading to a sum of $4\times 13$ diagrams.
To write down an expansion of such a form factor, similar to the previous example, we first spell out the contribution from diagrams with $q$ inserted inside the subgroup $(3,4,5)$, so that the remaining contributions resemble an effective 6-point form factor $\widetilde{F}_6^{{\rm tr}(\phi^2)}$:
{
\begin{align}
&(3,4,5) \text{ contribution } =A(8,1,2,6,7,3,q,4,5 | 1,2,3,q,4,5,6,7,8 ) + \nonumber \\
& A(8,1,2,6,7,3,q,4,5 | 1,2,3,4,q,5,6,7,8 )+A(8,1,2,6,7,3,4,q,5 | 1,2,3,4,q,5,6,7,8 )\,, \nonumber\\
& F_8^{{\rm tr}(\phi^2)}(8,1,2,6,7,3,4,5 | 1,2,3,4,5,6,7,8 ) - (3,4,5) \text{ contribution } \\
& \hskip 5cm = \widetilde{F}_6^{{\rm tr}(\phi^2)}(8,1,2,6,7,(3,4,5) | 1,2,(3,4,5),6,7,8 ). \nonumber
\end{align}}
Note that the operation of subtracting the $(3,4,5)$ contribution diagrammatically means erasing the edges $3,4,5$ in the mutual partial triangulation, see the second polygon in Figure~\ref{fig:8pt example1}(bottom). This gives us a mutual partial triangulation for a 6-point form factor regarding the subgroup $(3,4,5)$ as a single particle.
Importantly, such an effective ``6-point'' form factor is proportional to $s_{3,4,5}^{-1}$.
Furthermore, we have the following property \textbf{(without taking residues)}
\begin{equation}\label{eq:f6subgroup}
\begin{aligned}
\widetilde{F}_6^{{\rm tr}(\phi^2)}&(8,1,2,6,7,(3,4,5) | 1,2,(3,4,5),6,7,8 )\\
&=\frac{1}{s_{3,4,5}}F_6^{{\rm tr}(\phi^2)}(8,1,2,6,7,I^{+} | 1,2,I^{+},6,7,8)A(3,4,5,I^{-}|3,4,5,I^{-})\,.
\end{aligned}
\end{equation}
To make the expressions in \eqref{eq:f6subgroup} precise, in $A(3,4,5,I^{-}|3,4,5,I^{-})$ we use momentum conservation to eliminate $I^{-}$ so that $A(3,4,5,I^{-}|3,4,5,I^{-})=1/s_{3,4}+1/s_{4,5}$, and in $F_6^{{\rm tr}(\phi^2)}(8,1,2,6,7,I^{+}| 1,2,I^{+},6,7,8)$ we replace $s_{I^{+}\ldots}$ with $s_{3,4,5,\ldots}$.
Next, the contributions from diagrams with $q$ inserted inside the subgroups $(1,2,8)$ and $(6,7)$ can be given in a similar way, leaving an effective 3-point form factor $\widetilde{F}_{3}^{\operatorname{tr}(\phi^2)}((8,1,2),(6,7),(3,4,5) | (3,4,5),(6,7),(8,1,2) )$. Again, it is proportional to $s^{-1}_{3,4,5}s^{-1}_{6,7}s^{-1}_{8,1,2}$, guaranteeing that the subgroups $(3,4,5),(6,7),(1,2,8)$ can all be treated as single particles.
The Feynman diagrams sketching the successive reduction procedure are shown in Figure~\ref{fig:8pt example1}(top).
For the final step of our expansion, one should be careful since it is slightly different from a ``real'' 3-point form factor; the contribution is given by:
\begin{equation}
\begin{split}
\widetilde{F}_{3}^{\operatorname{tr}(\phi^2)}&( (8,1,2),(6,7),(3,4,5) | (3,4,5),(6,7),(8,1,2) )=\\
&-A( (8,1,2),q,(6,7),(3,4,5) | (3,4,5),(6,7) \shuffle q ,(8,1,2) ) \\
&-A( (8,1,2),(6,7),q,(3,4,5) | (3,4,5),q,(6,7) ,(8,1,2) ).
\end{split}
\end{equation}
{\it i.e.} $q$ needs to run inside $(6,7)$ in the second ordering. This finalizes the expansion of the 8-point form factor.
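The shuffle insertion used above is elementary. As a minimal sketch (hypothetical helper name), inserting the single leg $q$ into an ordered block amounts to:

```python
def shuffle_in(word, letter):
    """Shuffle product of an ordered word with a single letter:
    all ways of inserting the letter while preserving the word's order."""
    word = tuple(word)
    return [word[:i] + (letter,) + word[i:] for i in range(len(word) + 1)]
```

For the block $(6,7)$ this gives the three placements $(q,6,7)$, $(6,q,7)$ and $(6,7,q)$, of which the middle one realizes $q$ running inside $(6,7)$ in the second ordering.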
A step-by-step reduction of the mutual partial triangulation corresponding to the construction above is given in Figure \ref{fig:8pt example1}(bottom).
\begin{figure}[htbp!]
\centering
\begin{equation*}
\begin{aligned}
\begin{aligned}
\includegraphics[width=0.24\linewidth]{fig/8pt_example1.eps}
\end{aligned}
=&\begin{aligned} \includegraphics[width=0.24\linewidth]{fig/8pt_example2.eps}
\end{aligned}+ \begin{aligned}
\includegraphics[width=0.24\linewidth]{fig/8pt_example3.eps}
\end{aligned}\\
=& \ldots +{\color[rgb]{0.5,0,0.5} \left(\begin{aligned}
\includegraphics[width=0.24\linewidth]{fig/8pt_example4.eps}
\end{aligned}+
\begin{aligned}
\includegraphics[width=0.24\linewidth]{fig/8pt_example5.eps}
\end{aligned}\right)} \\
=& \ldots\ldots + {\color[rgb]{1,0.5,0}\left(\begin{aligned}
\includegraphics[width=0.24\linewidth]{fig/8pt_example7a.eps}
\end{aligned} + \begin{aligned}
\includegraphics[width=0.19\linewidth]{fig/8pt_example7.eps}
\end{aligned}\right)}
\end{aligned}
\end{equation*}
\centering
\subfloat{
\begin{minipage}[c]{0.24\linewidth}
\centering
\begin{tikzpicture}
\node[regular polygon,
draw,minimum size=2.5cm,
regular polygon sides = 8,rotate=360/16] (p) at (0,0) {};
\node[above] at (p.side 1) {2}; \node[left] at (p.side 2) {1};
\node[left] at (p.side 3) {8}; \node[below] at (p.side 4) {7};
\node[below] at (p.side 5) {6};
\node[right] at (p.side 6) {5};
\node[right] at (p.side 7) {4};
\node[above] at (p.side 8) {3};
\draw (p.corner 4) -- (p.corner 1); \draw (p.corner 1) -- (p.corner 6);
\draw (p.corner 6) -- (p.corner 4);
\end{tikzpicture}
\end{minipage}
}
\subfloat{
\begin{minipage}[c]{0.24\linewidth}
\centering
\begin{tikzpicture}
\node[regular polygon,
draw,minimum size=2.5cm,
regular polygon sides = 8,rotate=360/16] (p) at (0,0) {};
\draw[white,line width=2pt] (p.corner 7) -- (p.corner 8);
\draw[white,line width=2pt] (p.corner 6) -- (p.corner 7);
\draw[white,line width=2pt] (p.corner 1) -- (p.corner 8);
\node[above] at (p.side 1) {2}; \node[left] at (p.side 2) {1};
\node[left] at (p.side 3) {8}; \node[below] at (p.side 4) {7};
\node[below] at (p.side 5) {6};
\draw (p.corner 4) -- (p.corner 1); \draw (p.corner 1) -- (p.corner 6) node[midway, right] {(345)} ;
\draw (p.corner 6) -- (p.corner 4);
\end{tikzpicture}
\end{minipage}
}
\subfloat{
\begin{minipage}[c]{0.24\linewidth}
\centering
\begin{tikzpicture}
\node[regular polygon,
draw,minimum size=2.5cm,
regular polygon sides = 8,rotate=360/16] (p) at (0,0) {};
\draw[white,line width=2pt] (p.corner 7) -- (p.corner 8);
\draw[white,line width=2pt] (p.corner 6) -- (p.corner 7);
\draw[white,line width=2pt] (p.corner 1) -- (p.corner 8);
\draw[white,line width=2pt] (p.corner 4) -- (p.corner 5);
\draw[white,line width=2pt] (p.corner 5) -- (p.corner 6);
\node[above] at (p.side 1) {2}; \node[left] at (p.side 2) {1};
\node[left] at (p.side 3) {8};
\draw (p.corner 4) -- (p.corner 1); \draw (p.corner 1) -- (p.corner 6) node[midway, right] {(345)} ;
\draw (p.corner 6) -- (p.corner 4) node[midway, below] {(67)};
\end{tikzpicture}
\end{minipage}
}
\subfloat{
\begin{minipage}[c]{0.24\linewidth}
\centering
\begin{tikzpicture}
\node[regular polygon,
draw,minimum size=2.5cm,
regular polygon sides = 8,rotate=360/16] (q) at (0,0) {};
\draw[white,line width=2pt] (q.corner 7) -- (q.corner 8);
\draw[white,line width=2pt] (q.corner 6) -- (q.corner 7);
\draw[white,line width=2pt] (q.corner 1) -- (q.corner 8);
\draw[white,line width=2pt] (q.corner 4) -- (q.corner 5);
\draw[white,line width=2pt] (q.corner 5) -- (q.corner 6);
\node[above] at (q.side 1) {2}; \node[left] at (q.side 2) {1};
\node[left] at (q.side 3) {8};
\draw (q.corner 4) -- (q.corner 1) node[midway, right] {(34567)};
\end{tikzpicture}
\end{minipage}
}
\caption{Feynman diagrams computed at each step of an alternative expansion for $F_8^{{\rm tr}(\phi^2)}(8,1,2,6,7,3,4,5 | 1,2,3,4,5,6,7,8 )$ (top), and the reduction of the mutual partial triangulation (bottom).}
\label{fig:8pt example2}
\end{figure}
In addition, we would like to emphasize that there are several ways of performing the reductions, and they are all equivalent in the end. For instance, here is an alternative construction: compute the contributions from diagrams with $q$ inserted in the subgroups $(3,4,5)$, $(6,7)$, $((3,4,5),(6,7))$ successively, ending up with an effective 4-point form factor $\widetilde{F}_4^{{\rm tr}(\phi^2)}(8,1,2,(6,7,3,4,5)|8,1,2,(3,4,5,6,7))$ reading
{\small
\begin{align}
& \widetilde{F}_4^{{\rm tr}(\phi^2)}(8,1,2,(6,7,3,4,5)|8,1,2,(3,4,5,6,7)) = A(8,q,1,2,(6,7,3,4,5)|8,q,1,2,(3,4,5,6,7)) \nonumber \\
&+A(8,q,1,2,(6,7,3,4,5)|8,1,q,2,(3,4,5,6,7))+A(8,q,1,2,(6,7,3,4,5)|8,1,2,q,(3,4,5,6,7))\nonumber \\
&+ A(8,1,q,2,(6,7,3,4,5)|8,1,q,2,(3,4,5,6,7)) +A(8,1,q,2,(6,7,3,4,5)|8,1,2,q,(3,4,5,6,7))\nonumber \\
&+ A(8,1,2,q,(6,7,3,4,5)|8,1,2,q,(3,4,5,6,7))
\end{align}
}
See Figure \ref{fig:8pt example2} for illustrative Feynman diagrams and the mutual partial triangulation.
\subsubsection{The $\alpha \neq \beta$ cases: a general algorithm} \label{ssec:expand scalar general}
We now present the construction for a pure-scalar ${\rm tr}(\phi^2)$ form factor with any given ordered pair $(\alpha| \beta)$. As illustrated by the examples given above, the algorithm can be summarized as follows:
\textbf{Step (1)} Draw the mutual partial triangulation corresponding to $(\alpha|\beta)$.
\textbf{Step (2)} First consider a set of consecutive edges $I_1=(\alpha_c,\alpha_{c+1},\ldots,\alpha_{d})$ cut out by a single diagonal, see Figure \ref{fig decom of general cases}(a). There are only two possibilities relevant to such a mutual partial triangulation: either $I_1$ or $I_1^{-1}$ must be a sub-ordering in $\beta$.
To obtain the contribution from diagrams with $q$ inserted in $I_1$, namely the $I_1$ contribution, we write down the following special double sum over $\phi^3$ amplitudes with $q$ inserted in the sequence $I_1$, similar to \eqref{eq pure scalar}:
{
\begin{equation*}
\begin{aligned}
&I_1 \text{ contribution}=\\
&\sum_{c\leq a <d } A(\alpha_c,\ldots,\alpha_a,q,\alpha_{a{+}1},\ldots,\alpha_d,\ldots| \alpha_c,\ldots,\alpha_{a},(\alpha_{a{+}1},\ldots,\alpha_{d-1})\shuffle q,\alpha_d,\ldots).
\end{aligned}
\end{equation*}}
Or if $I_1$ and $I_1^{-1}$ appear in $\alpha$ and $\beta$ respectively, we write
{
\begin{equation*}
\begin{aligned}
&I_1 \text{ contribution}=\\
-&\sum_{c\leq a <d } A(\alpha_c,\ldots,\alpha_a,q,\alpha_{a{+}1},\ldots,\alpha_d,\ldots| (\alpha_c,\ldots,\alpha_{a},(\alpha_{a{+}1},\ldots,\alpha_{d-1})\shuffle q,\alpha_d)^{-1},\ldots).
\end{aligned}
\end{equation*}}
In short, a reversed ordering of the subgroup gives an extra minus sign.
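Throughout these sums, the shuffle product $\shuffle$ generates all interleavings of two sequences that preserve their internal orderings. As a concrete aid, here is a minimal Python sketch (the function name and example labels are ours, for illustration only):

```python
from itertools import combinations

def shuffle(left, right):
    """All interleavings of `left` and `right` preserving each internal
    order, i.e. the terms of the shuffle product left ⧢ right."""
    n, m = len(left), len(right)
    out = []
    for slots in combinations(range(n + m), n):
        taken = set(slots)  # positions occupied by elements of `left`
        l_it, r_it = iter(left), iter(right)
        out.append(tuple(next(l_it) if k in taken else next(r_it)
                         for k in range(n + m)))
    return out

# e.g. shuffling the leg q into a two-gluon set:
print(shuffle(('3g', '4g'), ('q',)))
# -> [('3g', '4g', 'q'), ('3g', 'q', '4g'), ('q', '3g', '4g')]
```

The number of terms is $\binom{n+m}{n}$, so shuffling the single leg $q$ into a gluon set $\mathcal{G}$ produces $|\mathcal{G}|{+}1$ terms.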
After subtracting the $I_1$ contribution given above, one can now regard the subgroup $(\alpha_c,\alpha_{c+1},\ldots,\alpha_{d})$ as a single particle, which means
\begin{equation*}
F_r^{\operatorname{tr}(\phi^2)}(\alpha|\beta)-I_1 \text{ contribution } \propto s_{I_1}^{-1}\,.
\end{equation*}
Such a difference can be defined as an effective lower-point form factor
$F^{\operatorname{tr}(\phi^2)}_{r^{\prime}}(\alpha|\beta)$ with $r^{\prime}=r{-}|I_1|{+}1$ and $I_1$ treated as a single particle in $\alpha,\beta$.
On the mutual partial triangulation diagram, this means we now discard the edges $\alpha_c,\ldots,\alpha_d$ and replace them with the diagonal cutting them out. Thus, we obtain a lower-point mutual partial triangulation diagram with fewer diagonals, see Figure~\ref{fig decom of general cases}(b).
\textbf{Step (3)} Repeat Step (2) to reduce the number of diagonals in the mutual partial triangulation, until we reach a polygon with no diagonal lines. Within this procedure, one should be particularly careful about inserting $q$ into the subgroups. For example, consider the following effective form factor
$$\widetilde{F}_{s}^{\operatorname{tr}(\phi^2)}(I_1,I_2,I_{3},\kappa|I_1^{\prime},I_2^{\prime},I_{3}^{\prime},\rho)\propto (s_{I_1}s_{I_2}s_{I_3})^{-1},$$
where $I_{i}$ and $I_{i}^{\prime}$ for $i=1,2,3$ are subgroups containing the same set of particles, and $\kappa\neq\rho$ are different sub-orderings of the other particles. In the mutual partial triangulation diagram, the $I_i$ are edges and there exists a diagonal separating $I_1$, $I_2$, $I_3$ from the rest of the polygon. Cutting out such a quadrilateral, composed of $I_1,I_2,I_3$ and the diagonal, gives the following contribution
\begin{equation*}
\begin{aligned}
(I_1,I_2,I_3) \text{ contribution} = & A(I_1,q,I_2,I_{3},\kappa|I_1^{\prime},I_2^{\prime}\shuffle q,I_{3}^{\prime},\rho)\\
&+ A(I_1,I_2,q,I_{3},\kappa|I_1^{\prime},I_2^{\prime}, q,I_{3}^{\prime},\rho)\,.
\end{aligned}
\end{equation*}
The subtlety here is that the $q$-leg has to be shuffled into subgroups in the right ordering, while $q$ should only be placed between subgroups in the left ordering. As is also the case with Step (2), subtracting the $(I_1,I_2,I_3)$ contribution from $\widetilde{F}_{s}^{\operatorname{tr}(\phi^2)}$ gives the following $s^{\prime}{=}(s{-}|I_1|{-}|I_2|{-}|I_3|{+}3)$-point effective form factor
\begin{equation*}
\begin{aligned}
\widetilde{F}^{\operatorname{tr}(\phi^2)}_{s^{\prime}}&((I_1,I_2,I_3),\kappa|(I_1^{\prime},I_2^{\prime},I_3^{\prime}),\rho)\\
&=\widetilde{F}_{s}^{\operatorname{tr}(\phi^2)}(I_1,I_2,I_{3},\kappa|I_1^{\prime},I_2^{\prime},I_{3}^{\prime},\rho)-(I_1,I_2,I_3) \text{ contribution},
\end{aligned}
\end{equation*}
which is proportional to $s_{I_1 I_2 I_3}^{-1}\times (s_{I_1}s_{I_2}s_{I_3})^{-1}$.
\textbf{Step (4)}
Finally, a polygon with no diagonals is obtained, whose edges are denoted by $I_1,I_2,\ldots,I_t$ as in Figure \ref{fig decom of general cases}(c). Then the final contribution is given by
\begin{equation*}
\sum_{1\leq a <t } A(I_1,\ldots,I_a,q,I_{a{+}1},\ldots,I_t| I_1^\prime,\ldots,I_a^\prime,(I_{a{+}1}^\prime,\ldots,I_{t{-}1}^\prime) \shuffle q,I_t^\prime),
\end{equation*}
or the reversed version with a minus sign
\begin{equation*}
-\sum_{1\leq a <t } A(I_1,\ldots,I_a,q,I_{a{+}1},\ldots,I_t| \big(I_1^\prime,\ldots,I_a^\prime,(I_{a{+}1}^\prime,\ldots,I_{t{-}1}^\prime) \shuffle q,I_t^\prime\big)^{-1})\,.
\end{equation*}
Still, in the second ordering the sequence $I_a^\prime$ contains the same particles as $I_a$, but possibly in a different order.
\
In the end, we emphasize again that there are other expansions ending up with a different polygon, and all possible expansions are equivalent.
In practice we usually specify one of the constructions as our convention in sec.~\ref{sec:expand and CHY}, by requiring the final contribution to correspond to the last polygon (without any diagonals) that contains the last edge $i_r$.
\begin{figure}[htbp!]
\centering
\subfloat[]{
\begin{minipage}[c]{0.3\linewidth}
\centering
\begin{tikzpicture}
\coordinate (p) at (0,0);
\draw (p) circle[radius=1.5cm];
\draw (100:1.5cm) -- (200:1.5cm); \draw (260:1.5cm) -- (200:1.5cm);
\draw (100:1.5cm) -- (260:1.5cm);
\draw (30:1.5cm) -- (300:1.5cm);
\node[right] at (0:0.1cm) {...};
\node[right] at (0:1.5cm) {$\alpha_c$};
\node[right] at (345:1.5cm) {$\alpha_{c+1}$};
\node[right] at (330:1.5cm) {$\ldots$};
\node[right] at (315:1.5cm) {$\alpha_{d}$};
\filldraw (5:1.5cm) circle (.05)
(355:1.5cm) circle (.05)
(345:1.5cm) circle (.05)
(315:1.5cm) circle (.05) (325:1.5cm) circle (.05);
\node[left] at (195:1.5cm) {$\alpha_1$};
\node[left] at (180:1.5cm) {$\alpha_2$};
\node[left] at (165:1.5cm) {$\ldots$};
\filldraw (200:1.5cm) circle (.05) (190:1.5cm) circle (.05) (180:1.5cm) circle (.05);
\end{tikzpicture}
\end{minipage}
}
\subfloat[]{
\begin{minipage}[c]{0.3\linewidth}
\centering
\begin{tikzpicture}
\coordinate (p) at (0,0);
\draw (100:1.5cm) -- (200:1.5cm); \draw (260:1.5cm) -- (200:1.5cm);
\draw (100:1.5cm) -- (260:1.5cm);
\node[right] at (0:0.1cm) {...};
\draw (30:1.5cm) arc(30:300:1.5cm) ;
\draw (30:1.5cm) -- (300:1.5cm);
\end{tikzpicture}
\end{minipage}
}
\subfloat[]{
\begin{minipage}[]{0.3\linewidth}
\centering
\begin{tikzpicture}
\coordinate (p) at (0,0);
\draw (100:1.5cm) --node[right] {$I_t$} (200:1.5cm) ;
\draw (100:1.5cm) arc(100:200:1.5cm) ;
\node[left] at (195:1.5cm) {$I_1$};
\node[left] at (180:1.5cm) {$I_2$};
\node[left] at (165:1.5cm) {$\ldots$};
\draw[white] (210:1.5cm) arc(210:360:1.5cm) ;
\filldraw (200:1.5cm) circle (.05) (190:1.5cm) circle (.05) (180:1.5cm) circle (.05);
\end{tikzpicture}
\end{minipage}
}
\caption{The reduction of the mutual partial triangulation for the general expansion algorithm. (c) presents a special choice of the final block (covered in Step (4) of the algorithm), namely $\widetilde{F}_r^{{\rm tr}(\phi^2)}(I_1,I_2,\ldots,I_t|I_1^\prime,I_2^\prime,\ldots,I_t^\prime)$ with $I_1{=}\alpha_1,I_2{=}\alpha_2,\ldots,I_{t{-}1}{=}\alpha_{t{-}1}$ and $I_t{=}(\alpha\backslash (\alpha_1,\alpha_2,\ldots,\alpha_{t{-}1}))$.} \label{fig decom of general cases}
\end{figure}
\subsection{From pure scalar cases to ${\rm tr}(\phi^2)$ form factors with gluons}\label{ssec:withgluons}
Now we move on to the expansion of ${\rm tr}(\phi^2)$ form factors containing an arbitrary number of gluons. As mentioned before, such an expansion is closely related to its scalar skeleton, {\it i.e.} the expansion of a pure-scalar form factor with the same scalar double orderings. We give an ``inserting rule" which transforms the pure-scalar result into one with gluons, thus providing a complete expansion for the ${\rm tr}(\phi^2)$ form factors appearing on the RHS of \eqref{eq:npttrF2}.
\subsubsection{Rules for inserting gluons}
Let us denote such a general form factor as
$F_n^{{\rm tr}(\phi^2)}(\alpha(i_1,i_2,\ldots,i_r)|i_1,\mathcal{G}_{i_1},i_2,\mathcal{G}_{i_2}, \ldots,i_r, \mathcal{G}_{i_r})$, where we label the scalars by $i_1<i_2<\cdots<i_r$ and the gluons between the scalar pair $i_a,i_{a{+}1}$ as $\mathcal{G}_{i_a}$. Recall the expansion of its scalar skeleton:
\begin{equation}
\begin{split}
&F_r^{{\rm tr}(\phi^2)}(\alpha(i_1,i_2,\ldots,i_r)| i_1,i_2,\ldots,i_r) \\
=& \sum_{a,b} A(\alpha_{1},\alpha_{2},\ldots,\alpha_{a},q,\alpha_{a{+}1},\ldots,\alpha_{r} | i_1,i_2,\ldots,i_b,q,i_{b{+}1},\ldots,i_r)\,,
\end{split}
\end{equation}
with certain $a,b$ determined in the last subsection.
We emphasize again that it is crucial to have both orderings of the $r$ scalars in each $\phi^3$ amplitude the same as those in the form factor. Given such a skeleton result, we propose that the expansion of the form factor with gluons is given by a surprisingly simple rule: literally place every gluon set into the second ordering, and sum over the shuffle product $ \mathcal{G}_{i_b} \shuffle q$ if $q$ is inserted between $i_b,i_{b{+}1}$:
\begin{equation}
\begin{aligned}
& F_n^{{\rm tr}(\phi^2)}(\alpha(i_1,i_2,\ldots,i_r)|i_1,\mathcal{G}_{i_1},i_2,\mathcal{G}_{i_2} \ldots,i_r, \mathcal{G}_{i_r}) \\
=& \sum_{a,b} A(\alpha_{1},\alpha_{2},\ldots,\alpha_{a},q,\alpha_{a{+}1},\ldots,\alpha_{r} | i_1,\mathcal{G}_{i_1},i_2 \ldots i_b,\mathcal{G}_{i_b} \shuffle q,i_{b{+}1},\ldots,i_r, \mathcal{G}_{i_r}).
\end{aligned}
\end{equation}
For the simplest skeleton case, $\alpha(i_1,i_2,\ldots,i_r)=(i_1,i_2,\ldots,i_r)$, the result with gluons is remarkably simple:
\begin{equation} \label{eq noreflection gluon}
\begin{aligned}
&F_n^{{\rm tr}(\phi^2)}(i_1,i_2,\ldots,i_r| i_1,\mathcal{G}_{i_1},i_2,\mathcal{G}_{i_2},\ldots,i_r,\mathcal{G}_{i_r}) \\
=&\sum_{1\leq a \leq b <r} A(i_1,i_2,\ldots,i_a,q,i_{a{+}1},\ldots,i_r| i_1,\mathcal{G}_{i_1},i_2,\mathcal{G}_{i_2},\ldots,i_b,\mathcal{G}_{i_b}\shuffle q,i_{b{+}1},\ldots,i_r,\mathcal{G}_{i_r}),
\end{aligned}
\end{equation}
where $q$ is shuffled with all gluons in any gluon set ${\cal G}_{i_b}$ into which it needs to be inserted in the skeleton case. Let us give an explicit example with 4 scalars:
\begin{equation}
\begin{split}
&F_8^{{\rm tr}(\phi^2)}(1,2,5,7| 1,2,3^g,4^g,5,6^g,7,8^g) \\
=& A(1,q,2,5,7| 1,q,2,3^g,4^g,5,6^g,7,8^g)+A(1,q,2,5,7| 1,2,(3^g,4^g) \shuffle q,5,6^g,7,8^g) \\
&+A(1,q,2,5,7| 1,2,3^g,4^g,5,6^g\shuffle q,7,8^g)\\
&+A(1,2,q,5,7| 1,2,(3^g,4^g)\shuffle q,5,6^g,7,8^g)+A(1,2,q,5,7| 1,2,3^g,4^g,5,6^g\shuffle q,7,8^g) \\
&+A(1,2,5,q,7| 1,2,3^g,4^g,5,6^g\shuffle q,7,8^g).
\end{split}
\end{equation}
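The six amplitudes above correspond precisely to the insertion slots $1\le a\le b<r$ with $r=4$. A small sketch enumerating these slots (the helper function is hypothetical, for illustration only):

```python
def skeleton_insertions(r):
    """Slots (a, b) for inserting q in the alpha = beta skeleton expansion:
    q sits after the a-th scalar in the first ordering and is shuffled with
    the gluon set following the b-th scalar in the second, 1 <= a <= b < r."""
    return [(a, b) for a in range(1, r) for b in range(a, r)]

print(skeleton_insertions(4))
# -> [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]
```

These are the six terms $A(\ldots)$ listed in the 4-scalar example; in general there are $r(r{-}1)/2$ such amplitudes.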
In general for $\alpha\neq \beta$ the expansion becomes more complicated, but we emphasize that it is only as complicated as the skeleton case; once that is given, we simply shuffle $q$ with the gluons in any set where it should be inserted! We give another example with $r=4$, obtained from the skeleton expansion \eqref{eq onereflection 4pt}:
\begin{equation}
\begin{split}
&F_8^{{\rm tr}(\phi^2)}(1,2,7,5| 1,2,3^g,4^g,5,6^g,7,8^g) \\
=& A(1,q,2,7,5 | 1,q,2,3^g,4^g,5,6^g,7,8^g) \\
&- A(1,2,q,7,5 | 1,2,3^g,4^g,5,6^g,7,8^g \shuffle q) -A(1,2,q,7,5 | 1,2,3^g,4^g,5,6^g \shuffle q,7,8^g) \\
&- A(1,2,7,q,5 | 1,2,(3^g,4^g) \shuffle q,5,6^g\shuffle q,7,8^g).
\end{split}
\end{equation}
We will outline a proof of this rule by checking factorizations of the expansion, but before doing so we remark that this shows how close ${\rm tr}(\phi^2)$ form factors are to amplitudes with an additional leg: once we relate all the scalar diagrams contributing to the skeleton case, we simply attach gluons according to their orderings, with $q$ shuffled with them, on both sides of the expansion.
\subsubsection{Consistency checks by factorizations}
Here let us first provide a consistency check for the $\alpha(i_1,i_2,\ldots,i_r)=(i_1,i_2,\ldots,i_r)$ cases, \eqref{eq noreflection gluon}, by analysing the factorizations on both sides; we leave the checks for general cases to Appendix~\ref{app:proof general gluon}. Since we can choose $(i_1,\mathcal{G}_{i_1},i_2,\mathcal{G}_{i_2},\ldots,i_r,\mathcal{G}_{i_r})$ to be cyclically equivalent to $(1,2,\ldots,n)$, any possible factorization channel is governed by a planar variable $s_{c,c+1,\ldots,d} \rightarrow 0$. There are two possibilities for the type of particle corresponding to the cut propagator.
For a gluon propagator, it is important to notice that the original form factor must factorize into a pure YM amplitude times a lower-point form factor, hence $\{c,c+1,\ldots,d\}$ must be a subset of $\mathcal{G}_{i_k}$ for a certain $k\in \{1,2,\ldots,r\}$. For example, suppose $\{c,c+1,\ldots,d\}=\{\mathcal{G}_{i_1}^\prime\} \subset \{ \mathcal{G}_{i_1} \}$; then the LHS factorizes into:
\begin{equation}\label{eq:YMSexpanfact}
A^{\mathrm{YM}}(c,c+1,\ldots,d,I^+(\epsilon)) \frac{1}{s_{c,c+1,\ldots,d}} F_{n+c-d+2}^{{\rm tr}(\phi^2)} (i_1,i_2,\ldots,i_r| i_1,\Bar{\mathcal{G}}_{i_1}^\prime,i_2,\ldots,i_r,\mathcal{G}_{i_r}),
\end{equation}
where $\bar{\mathcal{G}}_{i_1}^\prime := (i_1+1,i_1+2,\ldots,c-1,I^-(\bar{\epsilon}),d+1,\ldots,i_2-1)$ with the intermediate gluon represented by $I^+(\epsilon),I^-(\bar{\epsilon})$. What is crucial here is that the scalar skeleton is unaffected by the factorization. In this sense, applying the expansion formula again to the lower-point form factor in \eqref{eq:YMSexpanfact} gives a sum which trivially matches the factorization of the amplitudes on the RHS of \eqref{eq noreflection gluon} term by term, since the amplitudes with $q$ inserted inside $\mathcal{G}_{i_1}^\prime$ do not contribute.
For a scalar propagator, the original form factor now factorizes into a YMS amplitude times a lower-point form factor.
It is easy to confirm that the behaviour of both sides is equal if $\{i_1,i_r\} \not\subset \{c,c+1,\ldots,d\}$, similar to the computation in \eqref{eq:purescalarproof}. Otherwise the analysis is more involved, and we give a more detailed description here.
For simplicity of notation, we assume $(c,c+1,\ldots,d)=(\mathcal{G}_{i_{k}}^\prime,\ldots,i_r, \mathcal{G}_{i_{r}},i_1, \mathcal{G}_{i_{1}}^\prime)$, where $\mathcal{G}_{i_{k}}^\prime, \mathcal{G}_{i_{1}}^\prime$ are ordered subsets of $\mathcal{G}_{i_{k}},\mathcal{G}_{i_{1}}$ and are adjacent to $i_{k+1},i_1$ respectively. Hence factorizing the $n$-point form factor on the LHS of \eqref{eq noreflection gluon} gives
\begin{equation}\label{eq:noreflectiongluon2}
\begin{split}
&A(i_{k+1},\ldots,i_r,i_1,I^+|\mathcal{G}_{i_{k}}^\prime,\ldots,i_r, \mathcal{G}_{i_{r}},i_1, \mathcal{G}_{i_{1}}^\prime,I^+) \frac{1}{s_{c,c+1,\ldots,d}} \times \\
&\qquad F_{n+c-d+2}^{{\rm tr}(\phi^2)}(I^-,i_2,\ldots,i_k|I^-,\bar{\mathcal{G}}_{i_1}^\prime,i_2,\mathcal{G}_{i_2},\ldots,i_k,\bar{\mathcal{G}}_{i_k}^\prime),
\end{split}
\end{equation}
where the intermediate scalar is represented by $I^+,I^-$, and $\bar{\mathcal{G}}_{i_1}^\prime := \mathcal{G}_{i_{1}} \backslash \mathcal{G}_{i_{1}}^\prime$, and similarly for $\bar{\mathcal{G}}_{i_k}^\prime$.
Then we inspect the RHS of \eqref{eq noreflection gluon}. Among all the amplitudes being summed, we claim that the following contributions are special
\begin{equation}\label{eq:noreflectiongluon4}
\sum_{1 \leq a < k } A(i_1,i_2,\ldots,i_a,q,i_{a{+}1},\ldots,i_k,\ldots,i_r|i_1,\mathcal{G}_{i_1},i_2,\mathcal{G}_{i_2},\ldots,i_k,\mathcal{G}_{i_k}^\prime,\Bar{\mathcal{G}}_{i_k}^\prime \shuffle q,i_{k+1},\ldots,i_r,\mathcal{G}_{i_r}),
\end{equation}
because $q$ gets shuffled with $\Bar{\mathcal{G}}_{i_k}^\prime$. To see why this is special, we further write down its factorization on the $s_{c,c{+}1,\ldots,d}$ pole:
\begin{equation}\label{eq:noreflectiongluon3}
\begin{split}
&\hskip -10pt A(i_{k+1},\ldots,i_r,i_1,I^+|\mathcal{G}_{i_{k}}^\prime,\ldots,i_r, \mathcal{G}_{i_{r}},i_1, \mathcal{G}_{i_{1}}^\prime,I^+) \frac{1}{s_{c,c+1,\ldots,d}} \times\\
& \Big( A(I^-,q,i_2,i_3,\ldots,i_k|I^-,\Bar{\mathcal{G}}_{i_1}^\prime,i_2,\mathcal{G}_{i_2},\ldots,i_k,\Bar{\mathcal{G}}_{i_k}^\prime \shuffle q) \ + \\
& \ \ \sum_{2\leq a<k} A(I^-,i_2,\ldots,i_a,q,i_{a{+}1},\ldots,i_k|I^-,\Bar{\mathcal{G}}_{i_1}^\prime,i_2,\mathcal{G}_{i_2},\ldots,i_k,\Bar{\mathcal{G}}_{i_k}^\prime \shuffle q) \Big).
\end{split}
\end{equation}
Compared with \eqref{eq:noreflectiongluon2}, we observe that in that equation $q$ does not meet $\Bar{\mathcal{G}}_{i_k}^{\prime}$ if we expand the $F_{n{+}c{-}d{+}2}^{\operatorname{tr}(\phi^2)}$ therein, while \eqref{eq:noreflectiongluon3} contains $\Bar{\mathcal{G}}_{i_k}^{\prime}\shuffle q$ in the second ordering of the amplitudes. However, such a mismatch is in fact not a problem because of the U(1) decoupling relation
\begin{equation*} A(I^-,q,i_2,i_3,\ldots,i_k|\gamma) + \sum_{2\leq a<k} A(I^-,i_2,\ldots,i_a,q,i_{a{+}1},\ldots,i_k|\gamma )=0\,,
\end{equation*}
which is valid for any $\gamma$ and we pick $\gamma := (I^-,\Bar{\mathcal{G}}_{i_1}^\prime,i_2,\mathcal{G}_{i_2},\ldots,i_k,\Bar{\mathcal{G}}_{i_k}^\prime \shuffle q)$ here.
Given that the contributions from \eqref{eq:noreflectiongluon4} vanish, the remaining part of the RHS of \eqref{eq noreflection gluon} is then correctly equal to the LHS on this factorization channel, after performing the expansion for the lower-point form factor. We also comment that the subtlety for the $\{i_1,i_r\} \subset \{c,c+1,\ldots,d\} $ cases considered above actually illustrates the cyclicity of \eqref{eq noreflection gluon}.
\section{A complete expansion for ${\rm tr}(F^2)$ form factors and CHY formulae} \label{sec:expand and CHY}
In this section, we show that by combining results above, we obtain an expansion of $n$-point $\mathrm{tr}(F^2)$ form factor into $(n+1)$-point YMS amplitudes with $r{+}1$ scalars, with coefficients given by traces of $r$ field strengths. Such an expansion reads
\begin{equation}\label{eq:general}
F_n^{{\rm tr}(F^2)}=\sum_{i_1< \cdots< i_r, r=2}^n \sum_{\alpha \in S_{r}/\mathbb{Z}_r} \mathrm{tr}^\mathrm{f}(\alpha_1,\alpha_2,\ldots,\alpha_r)\sum_{\pi \in \alpha \shuffle q } {\rm sgn}_\pi~A(\pi | 1, \cdots, q \shuffle {\cal G}_i, \cdots, n)
\end{equation}
where the first two summations come from the decomposition into ${\rm tr}(\phi^2)$ form factors with $r$ scalars, $1\leq i_1< i_2 <\cdots < i_r\leq n$, in the same ordering $\alpha$ ($\alpha_a:=\alpha(i_a)$) as the trace; the remaining part denotes the expansion of such form factors into $(n{+}1)$-point YMS amplitudes: we first sum over ordering $\pi$ for $r{+}1$ scalars with $q$ inserted into $\alpha$, and for each $\pi$ (in addition to a possible sign) we need to implicitly sum over the second ordering of all $n{+}1$ particles, with $1,2, \cdots, n$ and $q$ shuffled with gluons in some sets ${\cal G}_{i_1}, \cdots, {\cal G}_{i_r}$. The details of these two sums are given in sec. \ref{ssec:expand scalar general}.
Since these YMS amplitudes have all been computed (see \cite{Cheung:2021zvb,Edison:2020ehu,He:2021lro} for closed-form expressions and automatized codes), our expansion thus provides an algorithm for computing $n$-point form factors explicitly. Here we summarize the number of amplitudes involved in our expression for the $\mathrm{tr}(F^2)$ form factor in Table~\ref{tab:counting}. Although the number of YMS amplitudes grows rapidly with $n$, the majority of them are bi-adjoint scalar amplitudes or ones with very few gluons, while the most complicated ones ($r=2$) are relatively rare.
\begin{table}[htbp!]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\diagbox{$n$}{$r$} & 2 & 3 & 4 & 5 & 6 & 7 & total \\ \hline
3 & 4 & 3 & & & & & 7 \\ \hline
4 & 10 & 15 & 15 & & & &40 \\ \hline
5 & 20 & 45 & 91 & 77 & & & 233 \\ \hline
6 & 35 & 105 & 321 & 545 &408 & & 1414 \\ \hline
7 & 56 & 210 & 861 & 2198 &3293 &2210 &8828 \\
\hline
\end{tabular}
\caption{The number of $(r{+}1)$-scalar YMS amplitudes appearing in the decomposition of $F_n^{{\rm tr}(F^2)}(1,2,\ldots,n)$.}
\label{tab:counting}
\end{table}
\subsection{CHY formulae for form factors}
Another interesting consequence of \eqref{eq:general} is a CHY formula for
the ${\rm tr}(F^2)$ form factor itself (as well as all ${\rm tr}(\phi^2)$ form factors as intermediate steps). This simply arises from the CHY formulas for YMS amplitudes, which, we recall, for $r{+}1$ scalars in ordering $\pi$ and all $n{+}1$ particles in ordering $\beta(1,\cdots, n{+}1)$ read
\begin{equation}
A_{n{+}1}(\pi|\beta)=\int d\mu_{n{+}1}~{\rm PT}_{n{+}1} (\beta)~{\rm PT}_{r{+}1} (\pi) {\rm Pf}\Psi_{n{-}r}
\end{equation}
where the ingredients, such as the measure, the Parke-Taylor factors (of length $n{+}1$ and $r{+}1$), and the Pfaffian (of remaining $n{-}r$ gluons) are reviewed in Appendix~\ref{app:CHY} for completeness.
The integrand for this general CHY formula takes the nice form
\begin{equation}
{\rm PT}(1,\cdots, n)\sum_{i_1< \cdots< i_r, r=2}^n {\rm Pf} \Psi_{n{-}r}~\sum_{\alpha \in S_{r}/\mathbb{Z}_r} \mathrm{tr}^\mathrm{f}(\alpha_1,\alpha_2,\ldots,\alpha_r)\sum_{\pi \in \alpha \shuffle q } {\rm sgn}_\pi~{\rm PT}(\pi) \times {\cal S}_{\pi}
\end{equation}
where we have factored out an overall ${\rm PT}(1,\cdots,n)$ and defined the ``inverse soft factor" ${\cal S}_{\pi}$ for inserting $q$ into gluon sets as indicated by the ordering $\pi$,
\begin{equation}\label{ISF}
{\cal S}_{\pi}:=\sum_{a \in I(\pi)} \Big(\frac 1 {\sigma_{i_a, q}}- \frac 1 {\sigma_{i_{a{+}1}, q}}\Big)\,,
\end{equation}
where $\sigma_{i_a,q} := \sigma_{i_a} - \sigma_q$. Note that when $q$ is shuffled with a gluon set ${\cal G}_{i_1}$ between scalars $i_1$ and $i_2$, the overall effect is nothing but the factor involving the two end points: $\sum_{j=i_1}^{i_2{-}1} \frac 1 {\sigma_{j,q}}- \frac 1{\sigma_{j{+}1, q}}=\frac 1 {\sigma_{i_1, q}}- \frac 1 {\sigma_{i_2, q}}$; ${\cal S}_\pi$ is a sum of such factors for those gluon sets with $q$ inserted (as indicated by $I(\pi) \subset \{1, \cdots, r\}$). Note that if we sum over all gluon sets $I(\pi)=\{1, \cdots, r\}$, it vanishes because of the U(1) identity.
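Both the telescoping property of \eqref{ISF} and the U(1) vanishing noted above are easy to verify numerically. A quick sketch (the puncture values $\sigma$ below are arbitrary generic numbers chosen by us, not taken from the text):

```python
# Generic punctures for a numerical check (arbitrary distinct values).
sigma = {1: 0.3, 2: 1.1, 3: -0.7, 4: 2.4, 5: -1.9, 6: 0.8, 'q': 5.0}

def inv(a):
    """1 / sigma_{a,q} with sigma_{a,q} = sigma_a - sigma_q."""
    return 1.0 / (sigma[a] - sigma['q'])

# Shuffling q through the gluon set between scalars i1 and i2 telescopes
# to the two end points: sum_j (1/sigma_{j,q} - 1/sigma_{j+1,q}).
i1, i2 = 2, 6
lhs = sum(inv(j) - inv(j + 1) for j in range(i1, i2))
assert abs(lhs - (inv(i1) - inv(i2))) < 1e-12

# Summing the end-point factors over ALL gluon sets (cyclically in the
# scalars) gives zero -- the U(1) identity mentioned above.
scalars = [1, 3, 5]
cyc = sum(inv(a) - inv(b) for a, b in zip(scalars, scalars[1:] + scalars[:1]))
assert abs(cyc) < 1e-12
```

The second check is just the cyclic telescoping of $\sum_a (1/\sigma_{i_a,q} - 1/\sigma_{i_{a+1},q})$ with $i_{r+1}\equiv i_1$.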
For example, for $r=2$, $\alpha=(i, j)$ with $i<j$, there is only one $\pi=(i, q, j)$, and the CHY integrand is nothing but
\begin{equation}
\mathrm{PT}(1,2,\ldots,n) \operatorname{Pf} \Psi_{n{-}2} (\overline{i,j}) \ \mathrm{tr}^\mathrm{f}(i,j) \mathrm{PT}(i,q,j) \Big(\frac{1}{\sigma_{i,q}}-\frac{1}{\sigma_{j,q}}\Big).
\end{equation}
For $r=3$ with $\alpha=(i,j,k)$ where $i<j<k$, we sum over two possible orderings $\pi=(i, q, j,k)$ and $(i, j, q, k)$, and the corresponding soft factors are simply
\begin{equation}
S_{i,q,j,k}=\Big(\frac{1}{\sigma_{i,q}}-\frac{1}{\sigma_{j,q}}\Big)+ \Big(\frac{1}{\sigma_{j,q}}-\frac{1}{\sigma_{k,q}}\Big)=\Big(\frac{1}{\sigma_{i,q}}-\frac{1}{\sigma_{k,q}}\Big), \quad S_{i,j,q,k}= \Big(\frac{1}{\sigma_{j,q}}-\frac{1}{\sigma_{k,q}}\Big).
\end{equation}
Let us finally give some more examples for $r=4$. For $\alpha=(i,j,k,l)$ with $i<j<k<l$, the factors are
\begin{equation}
S_{i,q,j,k,l}= \Big(\frac{1}{\sigma_{i,q}}-\frac{1}{\sigma_{l,q}}\Big), \quad S_{i,j,q,k,l}= \Big(\frac{1}{\sigma_{j,q}}-\frac{1}{\sigma_{l,q}}\Big), \quad S_{i,j,k,q,l}= \Big(\frac{1}{\sigma_{k,q}}-\frac{1}{\sigma_{l,q}}\Big)\,.
\end{equation}
For $\alpha=(i,j,l,k)$ the soft factors become
\begin{align}
&S_{i,q,j,l,k}= \Big(\frac{1}{\sigma_{i,q}}-\frac{1}{\sigma_{j,q}}\Big), \quad S_{i,j,q,l,k}=\Big(\frac{1}{\sigma_{k,q}}-\frac{1}{\sigma_{l,q}}\Big) +\Big(\frac{1}{\sigma_{l,q}}-\frac{1}{\sigma_{i,q}}\Big)=\Big(\frac{1}{\sigma_{k,q}}-\frac{1}{\sigma_{i,q}}\Big) \nonumber \\
& S_{i,j,l,q,k}= \Big(\frac{1}{\sigma_{k,q}}-\frac{1}{\sigma_{l,q}}\Big).
\end{align}
Note that for this case, the ${\rm sgn}_\pi$'s are ${\rm sgn}_{i,q,j,l,k}=1$ and ${\rm sgn}_{i,j,q,l,k}={\rm sgn}_{i,j,l,q,k}=-1$. In fact, for $\alpha=(i_1,i_2,\ldots,i_r)$ with $i_1<i_2<\ldots<i_r$ which corresponds to the $\alpha=\beta$ case of the scalar skeleton, the result is remarkably simple: there are $(r{-}1)$ possible orderings $\pi=(i_1,i_2,\ldots,i_a,q,i_{a{+}1},\ldots,i_r)$ for $a=1,2,\ldots,r{-}1$, each accompanied with ${\rm sgn}_\pi=+1$, and the inverse soft factors are
\begin{equation}\label{specialISF}
S_{i_1,i_2,\ldots,i_a,q,i_{a{+}1},\ldots,i_r}=\frac{1}{\sigma_{i_a,q}}-\frac{1}{\sigma_{i_r,q}} .
\end{equation}
We remark that, as for any $(n{+}1)$-point amplitude in this paper, our CHY formula needs to be evaluated with the prescription $q\to - \sum_{i=1}^n p_i$. This can be directly realized if we use the partial SL$(2,\mathbb{C})$ gauge fixing $\sigma_q \to \infty$, which eliminates $q$ from the formula.
We have recorded the expansion of ${\rm tr}(F^2)$ form factors into YMS amplitudes up to $n=9$ in the auxiliary {\sc Mathematica} file. We have also provided the explicit result up to $n=7$ where the YMS amplitudes are evaluated by the package in~\cite{He:2021lro} based on CHY formula. As a simple illustration, we give explicit results for $\mathrm{tr}(F^2)$ form factor for $n=3,4$. For example, combining \eqref{eq:3pttrphi2} and \eqref{eq:3pttrF2b}, one gets
\begin{equation}
F_3^{{\rm tr}(F^2)}(1,2,3)=\frac{1}{s_{1,2}}\left(-p_1\cdot \epsilon_2 \mathrm{tr}^\mathrm{f}(1,3) +\mathrm{tr}^\mathrm{f}(2,3) p_2\cdot \epsilon _1+2 \mathrm{tr}^\mathrm{f}(1,2,3)\right)+\mathrm{cyclic}(1,2,3).
\end{equation}
The $n=4$ result involves $10$ graphs organized into cyclic orbits of length $4$ and $2$:
\begin{equation}
\begin{aligned}
&F_4^{{\rm tr}(F^2)}(1,2,3,4) \\
=& \frac{1}{s_{2,3} s_{2,3,4}}( \mathrm{tr}^\mathrm{f}(1,2)( p_2\cdot \epsilon _3 p_2\cdot \epsilon _4+ p_2\cdot \epsilon _3 p_3\cdot \epsilon _4)-\mathrm{tr}^\mathrm{f}(1,3)( p_2\cdot \epsilon _4 p_3\cdot \epsilon _2+ p_3\cdot \epsilon _2 p_3\cdot \epsilon _4)\\
& +\mathrm{tr}^\mathrm{f}(1,4)( -p_2\cdot \epsilon _3 p_4\cdot \epsilon _2+ \epsilon _2\cdot f_3\cdot p_4)-2 \mathrm{tr}^\mathrm{f}(1,2,3)( p_2\cdot \epsilon _4+ p_3\cdot \epsilon _4)-2 \mathrm{tr}^\mathrm{f}(1,2,4) p_2\cdot \epsilon _3\\
&+2 \mathrm{tr}^\mathrm{f}(1,3,4) p_3\cdot \epsilon _2+2 \mathrm{tr}^\mathrm{f}(1,2,3,4)-2 \mathrm{tr}^\mathrm{f}(1,3,2,4))\\
&+ \frac{1}{s_{3,4} s_{2,3,4}}(\mathrm{tr}^\mathrm{f}(1,2) (p_2\cdot \epsilon _3 p_3\cdot \epsilon _4- \epsilon _3\cdot f_4\cdot p_2)-\mathrm{tr}^\mathrm{f}(1,3) (p_3\cdot \epsilon _2 p_3\cdot \epsilon _4+ \epsilon _2\cdot f_4\cdot p_3)\\
&+\mathrm{tr}^\mathrm{f}(1,4) (p_4\cdot \epsilon _2 p_4\cdot \epsilon _3+ \epsilon _2\cdot f_3\cdot p_4)-2 \mathrm{tr}^\mathrm{f}(1,2,3) p_3\cdot \epsilon _4+2 \mathrm{tr}^\mathrm{f}(1,2,4) p_4\cdot \epsilon _3\\
&+2 \mathrm{tr}^\mathrm{f}(1,3,4) (p_3\cdot \epsilon _2+ p_4\cdot \epsilon _2)+2 \mathrm{tr}^\mathrm{f}(1,2,3,4)-2 \mathrm{tr}^\mathrm{f}(1,2,4,3))\\
&+\frac{1}{2 s_{1,2} s_{3,4}}(\mathrm{tr}^\mathrm{f}(1,3) p_1\cdot \epsilon _2 p_3\cdot \epsilon _4-\mathrm{tr}^\mathrm{f}(1,4) p_1\cdot \epsilon _2 p_4\cdot \epsilon _3-\mathrm{tr}^\mathrm{f}(2,3) p_2\cdot \epsilon _1 p_3\cdot \epsilon _4\\
&+\mathrm{tr}^\mathrm{f}(2,4) p_2\cdot \epsilon _1 p_4\cdot \epsilon _3-2 \mathrm{tr}^\mathrm{f}(1,2,3) p_3\cdot \epsilon _4+2 \mathrm{tr}^\mathrm{f}(1,2,4) p_4\cdot \epsilon _3-2 \mathrm{tr}^\mathrm{f}(1,3,4) p_1\cdot \epsilon _2\\
&+2 \mathrm{tr}^\mathrm{f}(2,3,4) p_2\cdot \epsilon _1+2 \mathrm{tr}^\mathrm{f}(1,2,3,4)-2 \mathrm{tr}^\mathrm{f}(1,2,4,3)) + \mathrm{cyclic}(1,2,3,4).
\end{aligned}
\end{equation}
\section{Conclusion and Outlook}
In this paper we have presented two types of new relations for tree-level form factors and scattering amplitudes in Yang-Mills-scalar theory. Not only do we have a decomposition of ${\rm tr}(F^2)$ form factors into ${\rm tr}(\phi^2)$ ones similar to the so-called universal expansion for amplitudes~\cite{Dong:2021qai}, but these $n$-point form factors can also be further expanded into $(n{+}1)$-point YMS amplitudes with an additional scalar leg and unity coefficients. As we have seen, such new relations provide an efficient method for computing form factors in terms of YMS amplitudes, which had been computed mainly using Feynman diagrams. Moreover, combined with CHY formulae and even closed-form expressions for YMS amplitudes, we have obtained such formulae for all-multiplicity form factors in general dimension.
There are numerous open questions raised by our preliminary investigations. We have only considered form factors with length-two operators, and it would be highly desirable to see if such relations, especially the first type, exist for form factors with other operators, such as ${\rm tr}(F^3)$. It would also be interesting to understand in general why such universal expansions exist, perhaps from a certain ``uniqueness" theorem for form factors similar to that for amplitudes \cite{Arkani-Hamed:2016rak,Rodina:2016jyz,Rodina:2016mbk}. Moreover, already the simplest expansion into amplitudes for the ${\rm tr}(\phi^2)$ form factor with two adjacent scalars, \eqref{eq:prototype}, provides an explanation for the ``double-copy" relations found in~\cite{Lin:2021pne}, and an important question is whether these new relations, which connect form factors to amplitudes, will be useful for studying the double copy of $F^{{\rm tr}(\phi^2)}$ with more than two scalars and even of $F^{{\rm tr}(F^2)}$.
The existence of CHY formulae for form factors is also very suggestive, and we expect to find more simplifications, along the lines of \eqref{ISF} and \eqref{specialISF}, as well as hidden structures in such formulae. It would be interesting to see if they can be derived from certain correlators in ambitwistor or conventional string theories~\cite{Mason:2013sva, Berkovits:2013xba}. In addition, it would be interesting to understand the aforementioned form-factor double copy from the viewpoint of the CHY formalism, which should be particularly suitable for studying such double-copy relations.
Last but not least, given the success at tree level, it is natural to ask if similar relations can be found for form factors at loop level, {\it e.g.} if loop integrands of $F^{{\rm tr}(F^2)}$ can be expanded into those of $F^{{\rm tr}(\phi^2)}$, and even related to integrands for amplitudes. Such explorations may allow us to extend even more fascinating structures found for multi-loop amplitudes into form factors.
\begin{acknowledgments}
We would like to thank Gang Chen, Congkao Wen, Yong Zhang, Mao Zeng and especially Gang Yang for helpful discussions. The research of S. H. is supported in part by the Key Research Program of CAS, Grant No. XDPB15 and National Natural Science Foundation of China, Grants No. 11935013, No. 11947301, No. 12047502, and No. 12047503. G.L. is supported in part by the National Natural Science Foundation of
China, Grant No.~11935013. G.L. thanks the Higgs Centre for Theoretical Physics at the University of Edinburgh for the Visiting Researcher Scheme. G.L. also thanks Queen Mary University of London for hospitality during the final stage of this work.
\end{acknowledgments}
\newpage
\section{Introduction}
Let $f : X \rightarrow S$ be a surjective morphism of finite type between connected locally Noetherian normal schemes, $\overline{\eta}$ a geometric generic point of $S$, and $\ast$ a geometric point of $X\times_{S}\overline{\eta}$.
Suppose that the scheme $X\times_{S}\overline{\eta}$ is connected.
Consider the following sequence of \'etale fundamental groups:
\begin{equation}
\pi_{1}(X\times_{S}\overline{\eta},\ast) \rightarrow \pi_{1}(X,\ast) \rightarrow \pi_{1}(S,\ast)\rightarrow 1
\label{introexact}
\end{equation}
In \cite{SGA1}, the following proposition is proved:
\begin{prop}(\cite{SGA1} Exp.X Corollaire 1.4)
Suppose that $f$ is proper and flat with geometrically reduced fibers.
Moreover, suppose that $f_{\ast}O_{X}=O_{S}$.
Then the sequence (\ref{introexact}) is exact.
\label{SGAhom}
\end{prop}
Note that the scheme $S$ is not assumed to be normal in \cite{SGA1}.
This proposition has been improved by Hoshi \cite{Ho} and Mitsui \cite{Mit} (cf.\,Propositions \ref{Hoshi-exact} and \ref{Mitsui-exact}).
They discussed the case where the morphism $f$ has geometrically reduced fibers.
In the present paper, we discuss homotopy exact sequences without this assumption.
Our main result is as follows (see Theorem \ref{suff} for weaker hypotheses):
\begin{thm}
Suppose that the following conditions are satisfied:
\begin{itemize}
\item The morphism $f$ is flat or the scheme $S$ is regular.
\item Let $s$ be a point of $S$ whose local ring is of dimension $1$.
Write $\xi_{1}, \ldots, \xi_{n}$ for the generic points of the scheme $f^{-1}(s)$, $e_{i}$ for the multiplicity of $\xi_{i}$, and $k(\xi_{i})$ (resp.\,$k(s)$) for the residue field of $\xi_{i}$ (resp.\,$s$).
Then $\mathrm{gcd}\,(e_{1}, \ldots, e_{n})=1$ and the algebraic closure of $k(s)$ in $k(\xi_{i})$ is separable over $k(s)$ for some $i$.
\end{itemize}
Then the sequence (\ref{introexact}) is exact.
\label{introthm}
\end{thm}
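Here the multiplicity $e_{i}$ of $\xi_{i}$ means the length of the local ring $O_{f^{-1}(s),\xi_{i}}$.
For instance, if $f^{-1}(s)=\mathrm{Spec}\,k[x,y]/(x^{2}y^{3})$ for a field $k$, the generic points are defined by $x=0$ and $y=0$, with multiplicities $2$ and $3$ respectively; since $\mathrm{gcd}\,(2,3)=1$, such a non-reduced fiber is allowed by the second condition.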
We cannot drop any of the assumptions of Theorem \ref{suff} (cf.\,Section \ref{necsection}, Example \ref{curve(F)}, and Remark \ref{curve(F)rem}).
For instance, we have the following two propositions (see Section \ref{necsection} for more general settings):
\begin{prop} (cf.\,Corollary \ref{fundcor} and Example \ref{neceexam}.2)
Suppose that the scheme $S$ is the spectrum of a semi-local Dedekind domain which contains $\mathbb{Q}$, and that the scheme $X$ is regular.
Then the sequence (\ref{introexact}) is exact if and only if the greatest common divisor of the multiplicities of the irreducible components of each closed fiber of $f$ is $1$.
\label{introneceexam}
\end{prop}
\begin{prop} (cf.\,Proposition \ref{geomexact})
Suppose that the scheme $S$ is a smooth curve over a field $k$ of characteristic $0$, and the scheme $X$ is regular.
Moreover, suppose that the scheme $S$ is not proper rational (cf.\,Definition \ref{curvedfn}).
Then the sequence (\ref{introexact}) is exact if and only if the greatest common divisor of the multiplicities of the irreducible components of each closed fiber of $f$ is $1$.
\label{introgeomexact}
\end{prop}
We apply the above results to the case where $f: X \rightarrow S$ is a morphism from a regular variety to a hyperbolic curve (cf.\,Definition \ref{curvedfn}).
In particular, we prove that a certain morphism is characterized by the property that the kernel of the induced homomorphism between \'etale fundamental groups is topologically finitely generated
(see Theorem \ref{curve criterion} for more details):
\begin{thm} (cf.\,Theorem \ref{curve criterion})
Suppose that $S$ is a hyperbolic curve over a field of characteristic $0$, and that the scheme $X$ is regular.
The following three conditions are equivalent.
\begin{enumerate}
\item The greatest common divisor of the multiplicities of the irreducible components of each closed fiber of $f$ is $1$.
\item The sequence (\ref{introexact}) is exact.
\item The group $\mathrm{Ker}(\pi_{1}(X, \ast)\rightarrow \pi_{1}(S, \ast))$ is topologically finitely generated.
\end{enumerate}
\label{intro curve criterion}
\end{thm}
Note that condition 1 is stated only in the language of schemes and that condition 3 is stated only in the language of groups.
Such a statement is natural in the framework of anabelian geometry (cf.\,\cite{tama1}, \cite{Moch1}, \cite{Ho}).
In anabelian geometry, we attempt to recover information about varieties from their \'etale fundamental groups.
In this sense, Theorem \ref{intro curve criterion} may be regarded as a group-theoretic characterization of a morphism as in condition 1.\\
Let us explain the proofs of homotopy exact sequences.
Let $X'\rightarrow X$ be an \'etale covering space whose pull-back $X'\times_{S}\overline{\eta}\rightarrow X\times_{S}\overline{\eta}$ has a section.
To show that the sequence (\ref{introexact}) is exact, we need to construct an \'etale covering $S' \rightarrow S$ such that the pull-back $X\times_{S}S'$ is isomorphic to $X'$ over $X$.
In \cite{SGA1}, the Stein factorization of the morphism $X'\rightarrow S$ plays the role of $S'$.
Since $f$ is not assumed to be proper in \cite{Ho}, the normalization of $S$ in the function field of $X'$ plays the role of $S'$ there.
(In the present paper, we need to use the normalization of $S$ in the separable closure of the function field of $S$ in the function field of $X'$.)
In \cite{Ho} and \cite{Mit}, they replace $X$ by another scheme over $S$ which is faithfully flat with geometrically normal fibers to show that the morphism $S'\rightarrow S$ is \'etale.
In our situation, we cannot find such a good scheme.
If the scheme $S$ is regular, it suffices to show that the morphism $S'\rightarrow S$ is \'etale over an open subscheme of $S$ whose complement is of codimension $\geq2$ by Zariski-Nagata purity.
If the scheme $S$ is not regular, we need to assume that the morphism $f$ is flat (cf.\,Example \ref{norreg}.1).
In this case, we use Serre's criterion for normality to compare the morphism $X' \rightarrow X$ and the morphism $S' \rightarrow S$.\\
The content of each section is as follows:
In Section \ref{suffsec}, we give the proof of Theorem \ref{introthm}.
In Section \ref{Lemdede}, we discuss properties of Dedekind schemes to have many tame extensions.
In Section \ref{necsection}, we give the proofs of Proposition \ref{introneceexam} and Proposition \ref{introgeomexact}.
In Section \ref{curves}, we give the proof of Theorem \ref{curve criterion}.
In Section \ref{app}, we discuss the property (F).
In Section \ref{app2}, we discuss the homotopy exact sequence for geometrically connected (not necessarily generic) fibers.
{\it Acknowledgements:} The author would like to thank Yuichiro Hoshi for some helpful discussions.
Also, the author would like to thank Takeshi Tsuji for useful advice.
This work was supported by the Research Institute for Mathematical Sciences, a Joint Usage/Research Center located in Kyoto University.
\section{Sufficient conditions}
In this section, we give the proof of Theorem \ref{introthm} in a generalized setting.
Let $f: X \rightarrow S$ be a surjective morphism essentially of finite type between connected locally Noetherian normal separated schemes.
We write $K(X)$ (resp.\,$K(S)$) for the function field of $X$ (resp.\,$S$).
Take a geometric generic point $\overline{\eta}$ of $S$ and write $X_{\overline{\eta}}$ for the scheme $X\times_{S}\overline{\eta}$.
Suppose that $X_{\overline{\eta}}$ is connected (and hence irreducible).
Take a geometric point $\overline{x}$ of $X_{\overline{\eta}}$.
Then we obtain the following sequence of \'etale fundamental groups:
\begin{equation}
\pi_{1}(X_{\overline{\eta}}, \overline{x})\rightarrow \pi_{1}(X, \overline{x}) \rightarrow \pi_{1}(S, \overline{x}) \rightarrow 1.
\label{exac}
\end{equation}
\begin{rem}
\begin{enumerate}
\item
Let $S' \rightarrow S$ be a finite \'etale morphism through which the morphism $\overline{\eta} \rightarrow S$ factors.
The sequence (\ref{exac}) is exact if and only if the sequence
$$\pi_{1}(X_{\overline{\eta}}, \overline{x})\rightarrow \pi_{1}(X\times_{S}S', \overline{x}) \rightarrow \pi_{1}(S', \overline{x}) \rightarrow 1$$
is exact.
\item Since $f$ is generically geometrically connected, the homomorphism\\
$\pi_{1}(X, \overline{x}) \rightarrow \pi_{1}(S, \overline{x})$ is surjective by \cite{Ho}, Lemma 1.6.
\item
The composite homomorphism $\pi_{1}(X_{\overline{\eta}}, \overline{x})\rightarrow \pi_{1}(X, \overline{x})\rightarrow \pi_{1}(S, \overline{x})$ is trivial.
\item
Thus, the sequence (\ref{exac}) is exact if and only if
$$\mathrm{Im}(\pi_{1}(X_{\overline{\eta}}, \overline{x})\rightarrow \pi_{1}(X, \overline{x}))\supset \mathrm{Ker}(\pi_{1}(X, \overline{x}) \rightarrow \pi_{1}(S, \overline{x}) ).$$
\end{enumerate}
\label{twoexact}
\end{rem}
First, we recall sufficient conditions given by Hoshi and Mitsui which generalize Proposition \ref{SGAhom}.
\begin{prop}(\cite{Ho} Proposition 1.10)
Suppose that there exist a connected locally Noetherian normal separated scheme $Y$ and a morphism $p: Y \rightarrow X$.
Moreover, suppose that the following conditions are satisfied:
\begin{itemize}
\item The morphism $p$ is dominant and induces an outer surjection\\ $\pi_{1}(Y) \rightarrow \pi_{1}(X)$.
\item The morphism $f$ is generically geometrically integral.
\item The composite morphism $f\circ p$ is of finite type, faithfully flat, geometrically normal, and generically geometrically connected.
\end{itemize}
Then the sequence (\ref{exac}) is exact.
\label{Hoshi-exact}
\end{prop}
\begin{prop}(\cite{Mit} Theorem 4.22)
Suppose that $f$ is flat and geometrically reduced.
Moreover, suppose that the sheaf $O_{S}$ is integrally closed in the sheaf $f_{\ast}O_{X}$.
Then the sequence (\ref{exac}) is exact.
\label{Mitsui-exact}
\end{prop}
Since the schemes $X$ and $S$ are normal, these schemes enjoy the following properties:
\begin{lem}
Let $U$ be a connected locally Noetherian normal scheme.
Write $K(U)$ for the function field of $U$.
\begin{enumerate}
\item Let $\ast$ be a geometric point of $\mathrm{Spec}\, K(U)$.
Then the homomorphism $\pi_{1}(\mathrm{Spec}\, K(U), \ast) \rightarrow \pi_{1}(U, \ast)$ is surjective.
\item Let $V \rightarrow U$ be a connected \'etale covering space.
Write $K(V)$ for the function field of $V$.
Let $L$ be an intermediate field of the extension $K(U)\subset K(V)$.
Then the normalization $W$ of $U$ in $L$ is an \'etale covering space of $U$.
\end{enumerate}
\label{interfield}
\end{lem}
\begin{proof}
Assertion 1 is well-known, and assertion 2 follows from assertion 1.
\end{proof}
We rephrase the exactness of the sequence (\ref{exac}) in terms of \'etale covering spaces of $X$.
\begin{prop}
The following four conditions are equivalent.
\begin{enumerate}
\item $\mathrm{Ker}(\pi_{1}(X, \overline{x}) \rightarrow \pi_{1}(S, \overline{x})) \subset \mathrm{Im}(\pi_{1}(X_{\overline{\eta}}, \overline{x})\rightarrow \pi_{1}(X, \overline{x})).$
\item Let $C$ be a connected $\pi_{1}(X, \overline{x})$-set.
Suppose that there exists a $\pi_{1}(X_{\overline{\eta}}, \overline{x})$-orbit of $C$ which is trivial.
Then there exist a connected $\pi_{1}(S, \overline{x})$-set $D$ and a $\pi_{1}(X, \overline{x})$-equivariant isomorphism between $D$ and $C$.
\item Let $X'$ be a connected \'etale covering space of $X$.
Suppose that the \'etale covering space $X_{\overline{\eta}}\times_{X}X'\rightarrow X_{\overline{\eta}}$ has a section.
Then there exist an \'etale covering space $S' \rightarrow S$ and an $X$-isomorphism between $X\times_{S}S'$ and $X'$.
\item Let $X'$ be a connected \'etale covering space of $X$.
Write $K_{X'/S}$ for the separable closure of $K(S)$ in the function field of $X'$.
The normalization $N_{X'/S}$ of $S$ in $K_{X'/S}$ is \'etale over $S$.
\end{enumerate}
\label{essential}
\end{prop}
\begin{proof}
Since the homomorphism $\pi_{1}(X, \overline{x}) \rightarrow \pi_{1}(S, \overline{x})$ is surjective, condition 1 is equivalent to condition 2.
The equivalence of 2 and 3 is clear.
We prove the equivalence of 3 and 4.
Let $X'$ be a connected \'etale covering space of $X$.
Write $K(X')$ for the function field of $X'$, $K_{X'/S}$ for the separable closure of $K(S)$ in the function field of $X'$, and $N_{X'/S}\rightarrow S$ for the normalization of $S$ in $K_{X'/S}$.
First, we prove the implication $3 \Rightarrow 4$.
Write $K(XN)$ for the composite field $K(X)K_{X'/S}$ in $K(X')$.
The normalization $X_{N}$ of $X$ in $K(XN)$ is an \'etale covering space of $X$ by Lemma \ref{interfield}.
Moreover, since the \'etale covering space $X_{\overline{\eta}}\times_{X}X_{N}$ has a section, there exist a finite \'etale covering space $S' \rightarrow S$ and an $X$-isomorphism $X_{N} \cong X\times_{S}S'$ by condition 3.
Therefore $N_{X'/S}$ is isomorphic to $S'$ over $S$ and thus condition 4 holds.
Next, we prove the implication $4 \Rightarrow 3$.
Suppose that the morphism\\
$X_{\overline{\eta}}\times_{X}X' \rightarrow X_{\overline{\eta}}$ has a section.
By condition 4, $N_{X'/S}$ is an \'etale covering space of $S$.
It suffices to show that the induced morphism $\phi: X' \rightarrow X\times_{S}N_{X'/S}$ is an isomorphism.
Since $X\times_{S}N_{X'/S}$ is \'etale over $X$ and connected, the morphism $\phi$ is finite \'etale surjective.
The number of connected components of $X_{\overline{\eta}}\times_{S}N_{X'/S}$ coincides with the covering degree of $N_{X'/S}$ over $S$.
On the other hand, the number of connected components of $X_{\overline{\eta}}\times_{X}X'=\overline{\eta}\times_{S}X'$ coincides with the extension degree $[K_{X'/S}:K(S)]$.
Therefore, there is a bijection between the set of connected components of $X_{\overline{\eta}}\times_{X}X'$ and that of $X_{\overline{\eta}}\times_{S}N_{X'/S}$.
Since the covering $X_{\overline{\eta}}\times_{X}X' \rightarrow X_{\overline{\eta}}$ has a section by assumption, the covering degree of $X'$ over $X\times_{S}N_{X'/S}$ is $1$.
Thus, condition 3 holds.
\end{proof}
Recall that we do not assume that the scheme $S$ is regular.
Since we cannot use the Zariski-Nagata purity theorem in this case, we prove the following technical lemma, which is needed later.
\begin{lem}
Let $S'$ be an integral scheme and $S' \rightarrow S$ a quasi-finite dominant morphism.
\begin{enumerate}
\item Suppose that $f$ is flat, and that the extension between the function fields of $S'$ and $S$ is separable.
Then the scheme $X\times_{S}S'$ is integral.
\item Moreover, suppose that the scheme $S'$ is normal, and that the morphism $S' \rightarrow S$ is \'etale over each point of $S$ whose local ring is of dimension $1$.
Then the scheme $X\times_{S}S'$ is normal.
\end{enumerate}
\label{nonZar}
\end{lem}
\begin{proof}
Write $K(S')$ for the function field of $S'$.
Since $f$ is generically geometrically connected, the scheme $X\times_{S}\mathrm{Spec}\, K(S')$ is integral.
Therefore, assertion 1 follows from flatness of $f$.
By Serre's criterion for normality, it suffices to show that the scheme $X\times_{S}S'$ satisfies ($R_{1}$) and ($S_{2}$) to prove assertion 2.
The local ring of $X\times_{S}S'$ at any point lying over a point of $S'$ whose local ring is of dimension $\leq1$ is normal, by the assumption on the morphism $S' \rightarrow S$.
Since $f$ is flat, the image of any point of the scheme $X\times_{S}S'$ whose local ring is of dimension $1$ is a point of $S'$ whose local ring is of dimension $\leq1$.
Therefore, $X\times_{S}S'$ satisfies ($R_{1}$).
Since $f$ is flat, any point of the scheme $X\times_{S}S'$ over a point of $S'$ whose local ring is of dimension $\geq2$ is of depth $\geq2$.
Therefore, the scheme $X\times_{S}S'$ satisfies ($S_{2}$).
\end{proof}
\begin{prop}
Suppose that $f$ is flat or $S$ is regular.
Then the four conditions in Proposition \ref{essential} are equivalent to the following condition:
\begin{enumerate}
\setcounter{enumi}{4}
\item Let $X'$ be a connected \'etale covering space of $X$.
Write $K_{X'/S}$ for the separable closure of $K(S)$ in the function field of $X'$.
The normalization $N_{X'/S}$ of $S$ in $K_{X'/S}$ is \'etale over each point of $S$ whose local ring is of dimension $1$.
\end{enumerate}
\label{essprop}
\end{prop}
\begin{proof}
The implication $4\Rightarrow 5$ is clear.
We prove the implication $5\Rightarrow 4$.
By condition 5, the morphism $N_{X'/S} \rightarrow S$ is \'etale over each point of $S$ whose local ring is of dimension $\leq 1$.
If $S$ is regular, the morphism $N_{X'/S} \rightarrow S$ is \'etale by Zariski-Nagata purity (cf.\,Proposition \ref{ZN}).
Hence we may assume that $f$ is flat.
The scheme $X\times_{S}N_{X'/S}$ is normal by Lemma \ref{nonZar}, and therefore the morphism $X\times_{S}N_{X'/S} \rightarrow X$ is \'etale by Lemma \ref{interfield}.
Since $f$ is faithfully flat, the morphism $N_{X'/S} \rightarrow S$ is also \'etale.
This completes the proof of Proposition \ref{essprop}.
\end{proof}
\begin{dfn}
Let $\iota_{i} : k \hookrightarrow K_{i} (1\leq i \leq n)$ be inclusions of fields.
We say that the inclusions $\{ \iota_{i} \}$ satisfy the property (F) if the following condition is satisfied:\\
(F): For any algebraic separable extension $L_{i}$ of $K_{i}$ ($1\leq i \leq n$) and any subfield $l$ of the product ring $\underset{1\leq i \leq n}{\prod}L_{i}$ which is algebraic over the diagonal subfield $k$ defined by $\iota_{i}$, the extension $k \subset l$ is separable.
\label{(F)}
\end{dfn}
\begin{rem}
\begin{enumerate}
\item If $K_{i}$ is geometrically reduced over $k$ for some $i$, the inclusions $\{ \iota_{i} \}$ satisfy the property (F).
More generally, if $k$ is purely inseparably closed in $K_{i}$ (i.e., $k^{p^{-\infty}}\cap K_{i}=k$) for some $i$, the inclusions $\{ \iota_{i} \}$ satisfy the property (F).
\item We discuss the property (F) in Section \ref{app}.
\end{enumerate}
\label{(F)geom}
\end{rem}
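\begin{exam}
The property (F) may fail when no $K_{i}$ is geometrically reduced over $k$: take $n=1$, $k=\mathbb{F}_{p}(u)$, $K_{1}=k(u^{1/p})$, and $L_{1}=K_{1}$.
Then the subfield $l=K_{1}$ of $L_{1}$ is algebraic over $k$, but the extension $k \subset l$ is purely inseparable of degree $p$, so the inclusion $\iota_{1}$ does not satisfy the property (F).
\end{exam}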
\begin{dfn}
We say that $f$ satisfies the property (R) if the following condition is satisfied.\\
(R): Let $s$ be a point of $S$ whose local ring is of dimension $1$.
Write $\xi_{1}, \ldots, \xi_{n}$ for the generic points of the scheme $f^{-1}(s)$, $e_{i}$ for the multiplicity of $\xi_{i}$, and $k(\xi_{i})$ (resp.\,$k(s)$) for the residue field of $\xi_{i}$ (resp.\,$s$).
Then $\mathrm{gcd}\,(e_{1}, \ldots, e_{n})=1$ and the inclusions $k(s) \hookrightarrow k(\xi_{i})$ satisfy the property (F).
\end{dfn}
We now prove the main theorem of the present paper (cf.\,Theorem \ref{introthm}).
\begin{thm}
Suppose that $f$ satisfies the property (R), and that one of the following conditions is satisfied.
\begin{multicols}{2}
\begin{itemize}
\item The morphism $f$ is flat.
\item The scheme $S$ is regular.
\end{itemize}
\end{multicols}
Then the sequence (\ref{exac}) is exact.
\label{suff}
\end{thm}
\begin{proof}
By Remark \ref{twoexact}, it suffices to show that
$$\mathrm{Ker}(\pi_{1}(X, \overline{x}) \rightarrow \pi_{1}(S, \overline{x})) \subset \mathrm{Im}(\pi_{1}(X_{\overline{\eta}}, \overline{x})\rightarrow \pi_{1}(X, \overline{x})).$$
Let $X' \rightarrow X$ be a finite \'etale covering space.
Write $K_{X'/S}$ for the separable closure of $K(S)$ in the function field of $X'$.
By Proposition \ref{essprop}, it suffices to show that the normalization $N_{X'/S}$ of $S$ in $K_{X'/S}$ is finite \'etale over $S$ at each point of $N_{X'/S}$ whose local ring is of dimension $1$.
Let $n$ be such a point of $N_{X'/S}$.
Write $s$ for the image of $n$ in $S$.
It suffices to show that the extension of discrete valuation rings $O_{S,s} \subset O_{N_{X'/S},n}$ is unramified.
This follows from the property (R) in the hypothesis of Theorem \ref{suff}.
\end{proof}
\begin{rem}
If the morphism $f$ is not flat and the scheme $S$ is not regular, Theorem \ref{suff} does not hold in general (cf.\,Example \ref{norreg}.1).
\end{rem}
\label{suffsec}
\section{Lemmas for Dedekind schemes}
In this section, we discuss some properties of Dedekind schemes which we use in Section \ref{necsection}.
\label{Lemdede}
\subsection{A fundamental lemma}
We prove a lemma on Dedekind schemes which is needed later.
\begin{lem}
Let $R$ be a strict henselian discrete valuation ring.
Write $K$ for the field of fractions of $R$.
Let $K'$ be a finite tamely ramified extension field of $K$.
Write $R'$ for the normalization of $R$ in $K'$ and $e'$ for the ramification index of this extension $K\subset K'$.
Let $A$ be a discrete valuation ring which dominates $R$ with ramification index $e$.
Suppose that the field of fractions $L$ of $A$ is geometrically connected over $K$ and that $e$ is divisible by $e'$.
Then the normalization $A'$ of $A\otimes_{R}R'$ (cf.\,Lemma \ref{nonZar}.1) is \'etale over $A$.
\label{essnec}
\end{lem}
\begin{proof}
Let $\widetilde{A}$ be a strict henselization of $A$.
Then $\widetilde{A}\otimes_{A}A'$ is the normalization of $\widetilde{A}\otimes_{R}R'$ (in its total ring of fractions).
Therefore, it suffices to show that $\widetilde{A}\otimes_{R}R'$ is the product ring of $e'$ copies of $\widetilde{A}$.
Let $\varpi$ (resp.\,$\varpi '$; $\varpi_{\widetilde{A}}$) be a uniformizer of $R$ (resp.\,$R'$; $\widetilde{A}$).
There exists a unit $u'$ (resp.\,$u_{\widetilde{A}}$) of $R'$ (resp.\,$\widetilde{A}$) such that $\varpi =u'(\varpi ')^{e'}$ (resp.\,$\varpi =u_{\widetilde{A}}(\varpi_{\widetilde{A}})^{e}$).
Since there exist a unit $v'$ (resp.\,$v_{\widetilde{A}}$) of $R'$ (resp.\,$\widetilde{A}$) which satisfies that $(v')^{e'}=u'$ (resp.\,$v_{\widetilde{A}}^{e'}=u_{\widetilde{A}}$), we may assume that $(\varpi ')^{e'}=\varpi$.
Thus, $R'$ is isomorphic to $R[T]/(T^{e'}-\varpi)$ and $\widetilde{A}\otimes_{R}R'$ is isomorphic to $\widetilde{A}[T]/(\underset{1\leq i \leq e'}{\prod}(T-\zeta_{e'}^{i}v_{\widetilde{A}}(\varpi_{\widetilde{A}})^{\frac{e}{e'}}))$.
Here, $\zeta_{e'}$ is a primitive $e'$-th root of unity in $\widetilde{A}$.
Therefore, $\widetilde{A}\otimes_{A}A'$ is isomorphic to $\underset{1\leq i \leq e'}{\prod}\widetilde{A}[T]/(T-\zeta_{e'}^{i}v_{\widetilde{A}}(\varpi_{\widetilde{A}})^{\frac{e}{e'}})$.
This completes the proof of Lemma \ref{essnec}.
\end{proof}
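\begin{exam}
For instance, take $R=\mathbb{C}[[t]]$ (so that $K=\mathbb{C}((t))$ and $R$ is strict henselian), $K'=K(t^{1/e'})$, and $L=K(x)$, and let $A$ be the valuation ring of the monomial valuation $v$ on $L$ determined by $v(t)=e$ and $v(x)=1$.
Then $A$ dominates $R$ with ramification index $e$, the residue field of $A$ is $\mathbb{C}(u)$ for the residue $u$ of $tx^{-e}$, and $L$ is geometrically connected over $K$.
In the composite field $LK'$, the element $t^{1/e'}x^{-e/e'}$ has value $0$ and its residue $w$ satisfies $w^{e'}=u$; hence the extension of residue fields $\mathbb{C}(u)\subset \mathbb{C}(w)$ is separable of degree $e'=[LK':L]$, and the normalization $A'$ of $A\otimes_{R}R'$ is indeed \'etale over $A$.
\end{exam}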
\subsection{Examples of Dedekind schemes}
We discuss whether a given Dedekind scheme admits a convenient tame covering (cf.\,Definition \ref{(T)dfn}).
\begin{dfn}
\begin{enumerate}
\item Let $S$ be a scheme.
We shall say that $S$ is a Dedekind scheme if $S$ is connected, locally Noetherian, normal, and of dimension $1$.
\item Let $S$ be a Dedekind scheme.
We say that $S$ has the property (T) if, for each closed point $s\in S$ and each prime number $l$ different from the characteristic of the residue field of $s$, there exist a normal scheme $S'$ and a finite dominant morphism $S'\rightarrow S$ which satisfy the following conditions:
\begin{itemize}
\item The morphism $S' \rightarrow S$ is finite Galois \'etale over $S\setminus\{s\}$.
\item The ramification index of $s$ is $l$.
\end{itemize}
\end{enumerate}
\label{(T)dfn}
\end{dfn}
\begin{rem}
The conditions on $S'$ are equivalent to the following conditions:
\begin{itemize}
\item The ramification indices are divisible by $l$ for all closed points of the scheme $S'$ over the point $s$.
\item There exists a closed point of $S'$ over $s$ whose ramification index is $l$.
\item The morphism $S' \rightarrow S$ is finite \'etale over $S\setminus\{s\}$.
\end{itemize}
\label{weak}
\end{rem}
\begin{lem}
Let $R$ be a semi-local Dedekind domain (hence a principal ideal domain).
Then the scheme $S=\mathrm{Spec}\, R$ satisfies the property (T).
\label{semi}
\end{lem}
\begin{proof}
Write $K$ for the field of fractions of $R$, $\mathfrak{m}_{i} \,(1\leq i \leq n)$ for the maximal ideals of $R$, and $p_{i}\, (1\leq i \leq n)$ for the characteristic of $R/\mathfrak{m}_{i}$.
Let $l$ be a prime number different from $p_{1}$.
By the Chinese remainder theorem, we can choose elements $a$ and $b$ of $R$ which satisfy the following conditions:
\begin{multicols}{2}
\begin{itemize}
\item $\begin{cases} a \in \mathfrak{m}_{i} \quad ( l\notin \mathfrak{m}_{i}) \\
a \equiv 1\, \mathrm{mod}\, \mathfrak{m}_{i} \quad( l\in \mathfrak{m}_{i})\end{cases}$
\item $\begin{cases} b \in \mathfrak{m}_{1}\setminus \mathfrak{m}_{1}^{2} \\
b \equiv 1\, \mathrm{mod}\, \mathfrak{m}_{i} \quad(\mathfrak{m}_{i}\neq\mathfrak{m}_{1})\end{cases}.$
\end{itemize}
\end{multicols}
Then the extension of $K$ defined by the polynomial $T^{l}-aT-b$ satisfies the condition in Remark \ref{weak}.
Therefore, the Dedekind scheme $S$ satisfies the property (T).
\end{proof}
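\begin{rem}
One can check the conditions in Remark \ref{weak} for the polynomial $g(T)=T^{l}-aT-b$ directly.
Since $l\notin \mathfrak{m}_{1}$, we have $a\in\mathfrak{m}_{1}$ and $b\in\mathfrak{m}_{1}\setminus\mathfrak{m}_{1}^{2}$, so $g$ is Eisenstein at $\mathfrak{m}_{1}$; hence $g$ is irreducible over $K$ and the closed point over $\mathfrak{m}_{1}$ is totally (tamely) ramified with ramification index $l$.
If $\mathfrak{m}_{i}\neq\mathfrak{m}_{1}$ and $l\notin\mathfrak{m}_{i}$, the reduction of $g$ modulo $\mathfrak{m}_{i}$ is $T^{l}-1$, which is separable since $l$ is invertible in $R/\mathfrak{m}_{i}$; if $l\in\mathfrak{m}_{i}$, the reduction is $T^{l}-T-1$, whose derivative is $lT^{l-1}-1\equiv -1$, hence again separable.
Therefore, the normalization of $R$ in this extension is \'etale away from $\mathfrak{m}_{1}$.
\end{rem}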
\begin{dfn}
Let $k$ be a field and $C$ a scheme over $k$.
\begin{enumerate}
\item We say that $C$ is a smooth curve over $k$ if the structure morphism $C \rightarrow \mathrm{Spec}\, k$ is smooth of relative dimension $1$ and geometrically connected.
Let $C$ be a smooth curve over $k$ and $\overline{k}$ be the algebraic closure of $k$.
Write $\overline{C}$ for the smooth compactification of $C$ over $k$, $g_{C}$ for the genus of $\overline{C}$, and $r_{C}$ for the number of closed points of the scheme $(\overline{C}\setminus C)\times_{\mathrm{Spec}\, k}\mathrm{Spec}\, \overline{k}$.
\item We say that $C$ is proper rational if $g_{C}=r_{C}=0$.
\item We say that $C$ is a hyperbolic curve if $2g_{C}+r_{C}-2>0$ and the reduced closed subscheme $\overline{C}\setminus C$ of $\overline{C}$ is finite \'etale over $\mathrm{Spec}\, k$.
\end{enumerate}
\label{curvedfn}
\end{dfn}
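\begin{exam}
For instance, a proper rational curve is a proper smooth curve of genus $0$, e.g.\,$\mathbb{P}^{1}_{k}$.
Typical examples of hyperbolic curves are proper smooth curves of genus $\geq 2$, smooth affine curves of genus $\geq 1$ whose points at infinity are \'etale over $\mathrm{Spec}\,k$, and the complement of $3$ or more $k$-rational points in $\mathbb{P}^{1}_{k}$.
\end{exam}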
\begin{lem}
Let $k$ be a field and $S$ a smooth curve over $k$.
Suppose that $S$ is not proper rational.
Then the Dedekind scheme $S$ satisfies the property (T).
\label{nonproper}
\end{lem}
\begin{proof}
We may assume that the field $k$ is algebraically closed.
Take $s\in S$ and $l$ as in Definition \ref{(T)dfn}.
If $S$ is not proper, choose a point $s'$ of $\overline{S}\setminus S$, where $\overline{S}$ is the smooth compactification of $S$ over $k$.
Then there exists a finite dominant morphism $\overline{S'} \rightarrow \overline{S}$ from a proper smooth curve $\overline{S'}$ over $k$ which is a $\mathbb{Z}/l\mathbb{Z}$-covering, \'etale over $\overline{S}\setminus\{s,s'\}$ and totally (tamely) ramified over $s$ and $s'$.
Therefore, $S$ satisfies the property (T).
If $S$ is proper, the genus of $S$ is not $0$.
Thus, there exists a nontrivial finite Galois \'etale covering $S' \rightarrow S$.
Choose two closed points $s_{1}$ and $s_{2}$ of $S'$ over $s$.
Then there exists a $\mathbb{Z}/l\mathbb{Z}$-Galois covering $S'' \rightarrow S'$ which is finite \'etale over $S'\setminus\{s_{1}, s_{2}\}$ and totally ramified over $s_{1}$ and $s_{2}$.
By Remark \ref{weak}, $S$ satisfies the property (T).
\end{proof}
\section{Necessary conditions}
In this section, we show that the conditions in Theorem \ref{suff} are necessary for exactness in some cases.
Let $f, X, S, \overline{\eta}, X_{\overline{\eta}}$, and $\overline{x}$ be as in Section \ref{suffsec}.
In this section, we suppose that the morphism $f$ is generically geometrically reduced (cf.\,Remark \ref{geomred}.1), and that the scheme $X$ is regular (cf.\,Example \ref{norreg}.2).
Moreover, suppose that the morphism $f$ is flat.
Since $f$ is faithfully flat, it follows that $S$ is also regular.
\begin{rem}
\begin{enumerate}
\item Suppose that $S$ is a smooth curve over an algebraically closed field $k$ of positive characteristic and write $F: S \rightarrow S$ for the absolute Frobenius morphism of $S$.
Then the composite morphism $F\circ f$ never satisfies the conditions in Theorem \ref{suff}, even if $f$ does (and hence we have a homotopy exact sequence associated to $f$).
\item Since $f$ is formally smooth at the generic point of $S$, there exists a dense open subset of $S$ such that $f$ is formally smooth there.
\end{enumerate}
\label{geomred}
\end{rem}
We can use the Zariski-Nagata purity theorem since we assume that $X$ is regular.
\begin{prop} (Zariski-Nagata purity)
Let $U$ be a connected regular scheme, $V$ a connected normal scheme, and $\phi: V \rightarrow U$ a quasi-finite dominant morphism.
Then the ramification locus of $\phi$ is a closed subset of $V$ of pure codimension $1$.
\label{ZN}
\end{prop}
\begin{proof}
\cite{SGA1} Exp.X.Th\'eor\`eme de puret\'e 3.1.
\end{proof}
\begin{thm}
Suppose that there exist a connected normal scheme $S'$ and a finite dominant morphism $S'\rightarrow S$ which satisfy the following conditions:\\
\begin{itemize}
\item The morphism $S'\rightarrow S$ is \'etale over the generic point of $S$ (cf.\,Proposition \ref{ZN}).
\item Let $s' $ be a point of $S'$ whose local ring is of dimension $1$.
Write $s$ for the image of $s'$ in $S$.
Then the extension of discrete valuation rings $O_{S,s}\subset O_{S',s'}$ is at most tamely ramified with ramification index $e_{s'}$.
\item Let $\xi$ be a generic point of the scheme $f^{-1}(s)$ and write $e$ for the multiplicity of $\xi$.
Then $e$ is divisible by $e_{s'}$.
\end{itemize}
Then the normalization $X'$ of the scheme $X\times_{S}S'$ in its function field is \'etale over $X$.
Moreover, if the morphism $S' \rightarrow S$ is not \'etale, the sequence (\ref{exac}) is not exact.
\label{essnece}
\end{thm}
\begin{proof}
Note that the scheme $X\times_{S}S'$ is integral by Lemma \ref{nonZar}.1.
By Zariski-Nagata purity, it suffices to show that the morphism $X' \rightarrow X$ is \'etale over each generic point $\xi$ of $f^{-1}(s)$, for each point $s$ of $S$ whose local ring is of dimension $1$.
Therefore, we may assume that $S$ is the spectrum of the discrete valuation ring $O_{S,s}$, where $O_{S,s}$ is the local ring at $s$.
Write $O_{S,s}^{\mathrm{sh}}$ for the strict henselization of $O_{S,s}$.
By pulling back all schemes by the morphism $\mathrm{Spec}\, O_{S,s}^{\mathrm{sh}} \rightarrow \mathrm{Spec}\, O_{S,s}$ and using Lemma \ref{nonZar}, we may assume that $S$ is the spectrum of a strict henselian discrete valuation ring.
Moreover, we may assume that $X$ is the spectrum of the discrete valuation ring $O_{X,\xi}$, where $O_{X,\xi}$ is the local ring at a generic point $\xi$ of $f^{-1}(s)$.
Therefore, Theorem \ref{essnece} follows from Lemma \ref{essnec} and Proposition \ref{essential}.
\end{proof}
\begin{rem}
Suppose that the scheme $S$ is quasi-compact (hence Noetherian), and that the morphism $f$ is of finite type.
Then the set $\{s\in S: \dim O_{S,s}=1, e_{s}\neq 1\}$ is finite by Remark \ref{geomred}.2, where $e_{s}$ denotes the greatest common divisor of the multiplicities of the generic points of $f^{-1}(s)$.
\end{rem}
\begin{cor}
Suppose that the following conditions are satisfied:
\begin{itemize}
\item $S$ is a Dedekind scheme and satisfies the property (T) (cf.\,Definition \ref{(T)dfn}, Lemma \ref{semi}, Lemma \ref{nonproper}).
\item Let $s$ be a closed point of $S$ and $\xi_{1}, \ldots, \xi_{n}$ the generic points of the scheme $f^{-1}(s)$.
Write $e_{i}$ for the multiplicity of $\xi_{i}$, $k(\xi_{i})$ (resp.\,$k(s)$) for the residue field of $\xi_{i}$ (resp.\,$s$), and $p(s)$ for the characteristic of the field $k(s)$.
Then $e_{s}:=\mathrm{gcd}\,(e_{1}, \ldots, e_{n})$ is not divisible by $p(s)$ and the inclusions $k(s) \hookrightarrow k(\xi_{i})$ satisfy the property (F).
\end{itemize}
Then the sequence (\ref{exac}) is exact if and only if $e_{s}=1$ for each closed point $s$ of $S$.
\label{fundcor}
\end{cor}
\begin{proof}
Corollary \ref{fundcor} follows from Theorem \ref{suff} and Theorem \ref{essnece}.
\end{proof}
\begin{exam} (cf.\,Proposition \ref{introneceexam})
We discuss the conditions of Corollary \ref{fundcor}.
\begin{enumerate}
\item Suppose that $S$ is the spectrum of a discrete valuation ring with perfect residual field of characteristic $p$.
Then the properties (T) and (F) are automatically satisfied (cf.\,Lemma \ref{semi} and Remark \ref{(F)geom}).
Therefore, we only need to suppose that $e_{s}$ is not divisible by $p$ to apply Corollary \ref{fundcor}.
\item Suppose that $S$ is the spectrum of a semi-local Dedekind domain which contains $\mathbb{Q}$.
Then all the conditions of Corollary \ref{fundcor} are automatically satisfied (cf.\,Lemma \ref{semi} and Remark \ref{(F)geom}).
\end{enumerate}
\label{neceexam}
\end{exam}
\begin{prop} (cf.\,Proposition \ref{introgeomexact})
Suppose that $S$ is a smooth curve over a field $k$ of characteristic $0$.
Moreover, suppose that $S$ is not a proper rational curve.
Then the sequence (\ref{exac}) is exact if and only if the greatest common divisor of the multiplicities of the irreducible components of each closed fiber of $f$ is $1$.
\label{geomexact}
\end{prop}
\begin{proof}
Proposition \ref{geomexact} follows from Lemma \ref{nonproper} and Corollary \ref{fundcor}.
\end{proof}
\begin{exam}
Let $k$ be an algebraically closed field, $C'$ a smooth curve over $k$, and $\sigma$ an automorphism of $C'$ over $k$ of prime order $l\,(>2)$.
Write $C'\rightarrow C$ for the quotient morphism of $C'$ by $\mathbb{Z}/l\mathbb{Z}=\langle\sigma \rangle$, $c'_{i}\,(1\leq i \leq n)$ for the ramification points of $C'$, and $c_{i}\,(1\leq i \leq n)$ for the image of $c'_{i}$ in $C$.
Write $B'$ for the scheme obtained by blowing-up at each point $(c'_{i}, c'_{j})\,(1\leq i, j \leq n)$ of $C'\times_{\mathrm{Spec}\, k}C'$.
The automorphism $(\sigma^{2}, \sigma)$ of $C'\times_{\mathrm{Spec}\, k}C'$ induces an automorphism of $B'$.
Then the induced automorphism of the scheme $B'$ has exactly $2n^{2}$ fixed points.
Write $Y'$ for the open subscheme of $B'$ whose complement is the set of the fixed points and $Y\rightarrow B \rightarrow Z$ for the quotient morphisms of the morphisms $Y' \rightarrow B' \rightarrow C'\times_{\mathrm{Spec}\, k}C'$ by $\mathbb{Z}/l\mathbb{Z}=\langle (\sigma^{2}, \sigma) \rangle$.
Since $\{(c'_{i}, c'_{j}); 1\leq i, j \leq n\}$ is the ramified locus of the morphism $C'\times_{\mathrm{Spec}\, k}C' \rightarrow Z$, the scheme $Z$ is not regular but normal by Proposition \ref{ZN}.
On the other hand, the morphism $Y' \rightarrow Y$ is \'etale.
\begin{enumerate}
\item
We show that Theorem \ref{suff} does not hold in general if the morphism $f$ is not flat and the scheme $S$ is not regular.
Consider the case where $f$ is the morphism $Y\rightarrow Z$.
Since the dimensions of the fibers of $f$ are $0$ or $1$, $f$ is not flat.
It holds that $X_{\overline{\eta}}=\overline{\eta}$.
Moreover, the \'etale covering space $Y' \rightarrow Y$ is not induced by an \'etale covering space of $Z$.
Therefore, the sequence (\ref{exac}) is not exact for this morphism.
\item
We show that Proposition \ref{geomexact} does not hold in general if $X$ is not regular.
The second projection $C'\times_{\mathrm{Spec}\, k}C' \rightarrow C'$ is a $\mathbb{Z}/l\mathbb{Z}$-equivariant morphism.
Consider the case where $f$ is the morphism $Z\rightarrow C$.
The fiber over the point $c_{i}\,(1\leq i \leq n)$ is irreducible and the multiplicity of its generic point is $l$.
To see that the sequence (\ref{exac}) is exact, it suffices to show that condition 4 in Proposition \ref{essential} is satisfied.
Let $X'$ be a connected \'etale covering of $X(=Z)$.
This covering corresponds to a connected \'etale covering of $Y'$ which does not induce an extension of the residual field of the generic point of the fiber of the image of each point $(c'_{i},c'_{j})\,(1\leq i, j \leq n)$ of $C'\times_{\mathrm{Spec}\, k}C'$ in $Z$.
Therefore, the normalization of $C$ in the function field of $X'$ is \'etale over $C$.
\end{enumerate}
\label{norreg}
\end{exam}
\label{necsection}
\section{An application to morphisms to curves}
In this section, we apply Proposition \ref{geomexact} to morphisms from smooth varieties to smooth curves over a field of characteristic $0$.
\begin{dfn} (\cite{Ho2}\,Definition 2.5)
We shall write
$$\mathbb{P}_{\not\exists \twoheadrightarrow \infty}$$
for the property of a profinite group defined as follows: A profinite group $G$ has the
property $\mathbb{P}_{\not\exists \twoheadrightarrow \infty}$ if, for an arbitrary open subgroup $H\subset G$ of $G$, there exists a prime number $l_{H}$ such that there is no quotient of $H$ which is free pro-$l_{H}$ and not topologically finitely generated.
\end{dfn}
Let $k$ be a field of characteristic $0$, $S$ a smooth curve over $k$, $X$ a normal scheme of finite type and geometrically connected over $k$, and $f$ a dominant morphism from $X$ to $S$ over $k$.
Write $N_{X/S}$ for the normalization of $S$ in the algebraic closure of the function field of $S$ in the function field of $X$, and $S'$ for the maximal \'etale subextension of the finite dominant morphism $N_{X/S} \rightarrow S$.
Thus, we have a natural factorization
$f: X\rightarrow N_{X/S}\rightarrow S' \rightarrow S$.
Let $\overline{\eta}$ be a geometric generic point of $S'$.
Write $X_{\overline{\eta}}$ for the scheme $X\times_{S'}\overline{\eta}$.
Take a geometric point $\overline{x}$ of $X_{\overline{\eta}}$.
\begin{thm} (cf.\,Theorem \ref{intro curve criterion})
Consider the following conditions.
\begin{enumerate}
\item The morphism $f': X \rightarrow S'$ is surjective and the scheme $X_{\overline{\eta}}$ is connected. Moreover, the greatest common divisor of the multiplicities of the irreducible components of each closed fiber of $f'$ is $1$.
\item The scheme $X_{\overline{\eta}}$ is connected and the sequence of \'etale fundamental groups
\begin{equation}
\pi_{1}(X_{\overline{\eta}}, \overline{x})\rightarrow \pi_{1}(X, \overline{x}) \rightarrow \pi_{1}(S', \overline{x}) \rightarrow 1
\label{exact}
\end{equation}
is exact.
\item The group $\mathrm{Ker}(\pi_{1}(X, \overline{x})\rightarrow \pi_{1}(S, \overline{x}))$ has the property $\mathbb{P}_{\not\exists \twoheadrightarrow \infty}$.
\end{enumerate}
Then it holds that $1\Rightarrow 2 \Rightarrow 3$.
If the scheme $S$ is neither a proper rational curve nor an affine line, it holds that $3 \Rightarrow 2$.
If the scheme $S$ is not proper rational and $X$ is regular, it holds that $2 \Rightarrow 1$.
\label{curve criterion}
\end{thm}
\begin{proof}
The implications between conditions $2$ and $3$ are results of Hoshi (cf.\,\cite{Ho2} Theorem 2.8).
We show the rest of Theorem \ref{curve criterion}.
Suppose that condition 2 is satisfied, and the scheme $S$ is not proper rational.
By Theorem \ref{suff} and Proposition \ref{geomexact}, it suffices to show that the morphism $f'$ is surjective.
Since $N_{X/S}=S'$, the scheme $S'$ satisfies the property (T) by Lemma \ref{nonproper}.
Therefore, if there were a point in the complement of the image of $f'$ in $S'$, there would exist a finite \'etale covering of $X$ for which condition 4 in Proposition \ref{essential} does not hold.
But this contradicts the assumption that the sequence (\ref{exact}) is exact.
\end{proof}
\begin{rem}
By \cite{Ho2} Remark 2.5.1, a topologically finitely generated profinite group satisfies the property $\mathbb{P}_{\not\exists \twoheadrightarrow \infty}$.
Suppose that $S$ is a hyperbolic curve over $k$ and that $X$ is regular.
Since the profinite group $\pi_{1}(X_{\overline{\eta}}, \overline{x})$ is topologically finitely generated, the conditions in Theorem \ref{curve criterion} hold if and only if the group $\mathrm{Ker}(\pi_{1}(X, \overline{x})\rightarrow \pi_{1}(S, \overline{x}))$ is topologically finitely generated.
\end{rem}
\begin{rem}
If we drop the assumption that the scheme $X$ is regular, the implication $2\Rightarrow1$ does not hold (cf.\,Example \ref{norreg}.2).
\end{rem}
\label{curves}
\section{Appendix 1: The property (F)}
In this section, we discuss the property (F) (cf.\,Definition \ref{(F)}).
\label{app}
\subsection{Examples}
If we drop the property (F), Theorem \ref{suff} does not hold.
\begin{exam}
Let $K$ be a strict henselian discrete valuation field with imperfect residual field $k$ of characteristic $p>0$.
Write $O_{K}$ for the valuation ring of $K$.
Let $\mathfrak{C} \rightarrow \mathrm{Spec}\, O_{K}$ be a proper smooth morphism of relative dimension $1$ with geometrically connected fibers.
Suppose that the $p$-rank of its special fiber is positive.
There exists a $\mathbb{Z}/p\mathbb{Z}$-Galois \'etale covering space $\mathfrak{X}\rightarrow\mathfrak{C}$ by \cite{tama1} Lemma (5.5).
Choose a generator $\sigma$ of the Galois group $\mathbb{Z}/p\mathbb{Z}\, ( \subset \mathrm{Aut}(\mathfrak{X}))$.
Let $K'$ be a $\mathbb{Z}/p\mathbb{Z}$-Galois extension of $K$ whose residual extension is purely inseparable of degree $p$.
Write $O_{K'}$ for the valuation ring of $K'$.
Choose a generator $\tau$ of the Galois group $\mathbb{Z}/p\mathbb{Z}\, ( \subset \mathrm{Aut}(\mathrm{Spec}\, K'))$ and consider a $\mathbb{Z}/p\mathbb{Z}$-action on the scheme $\mathfrak{X}\times_{\mathrm{Spec}\, O_{K}}\mathrm{Spec}\, O_{K'}$ induced by the automorphism $\sigma\times\tau$.
Then the second projection $\mathfrak{X}\times_{\mathrm{Spec}\, O_{K}}\mathrm{Spec}\, O_{K'}\rightarrow \mathrm{Spec}\, O_{K'}$ is a $\mathbb{Z}/p\mathbb{Z}$-equivariant morphism.
\begin{equation}
\xymatrix{
\mathfrak{X}\times_{\mathrm{Spec}\, O_{K}}\mathrm{Spec}\, O_{K'} \ar[d] \ar[r] & \mathrm{Spec}\, O_{K'} \ar[d]
&\mathfrak{X}\times_{\mathrm{Spec}\, O_{K}}\mathrm{Spec}\, O_{K'} \ar[d] \ar[r] & \mathrm{Spec}\, O_{K'} \ar[d] \\
\mathfrak{X} \ar[r] & \mathrm{Spec}\, O_{K}
& \mathfrak{Z} \ar[r] & \mathrm{Spec}\, O_{K}
}
\label{(F)bad}
\end{equation}
Write $\mathfrak{Z}$ for the quotient scheme $(\mathfrak{X}\times_{\mathrm{Spec}\, O_{K}}\mathrm{Spec}\, O_{K'})/\langle\sigma\times\tau\rangle$.
$\mathfrak{Z}$ is a scheme over $\mathrm{Spec}\, O_{K}$ and its special fiber is isomorphic to $\mathfrak{C}\times_{\mathrm{Spec}\, O_{K}} \mathrm{Spec}\, k$ over $k$.
Therefore, the scheme $\mathfrak{Z}$ is regular and the induced morphism $\mathfrak{X}\times_{\mathrm{Spec}\, O_{K}}\mathrm{Spec}\, O_{K'} \rightarrow \mathfrak{Z}$ is finite \'etale.
Note that the left square is Cartesian and the right square is not Cartesian in the diagram (\ref{(F)bad}).
The normalization of $\mathrm{Spec}\, O_{K}$ in the function field of $\mathfrak{X}\times_{\mathrm{Spec}\, O_{K}}\mathrm{Spec}\, O_{K'}$ coincides with $\mathrm{Spec}\, O_{K'}$.
Therefore, the sequence of \'etale fundamental groups induced by the morphism $\mathfrak{Z} \rightarrow \mathrm{Spec}\, O_{K}$ is not exact by Proposition \ref{essential}.
Note that the special fiber of the morphism $\mathfrak{Z} \rightarrow \mathrm{Spec}\, O_{K}$ is integral, and thus the gcd of the multiplicities of the irreducible components of the special fiber is $1$.
\label{curve(F)}
\end{exam}
\begin{rem}
\begin{enumerate}
\item We do not need to assume that $\mathfrak{C}$ is of relative dimension $1$ over $\mathrm{Spec}\, O_{K}$ in the argument given in Example \ref{curve(F)}.
\item If we replace the condition on the residual extension of $K'$ by the condition that the ramification index of the extension $K \subset K'$ is $p$, the multiplicity of the special fiber is $p$.
Therefore, we need to suppose that the gcd in the property (R) is not divisible by $p$.
\end{enumerate}
\label{curve(F)rem}
\end{rem}
\subsection{Generalities on (F)}
We discuss generalities on (F) without proofs.
Let $k$ be a field and $\iota_{i}: k \hookrightarrow K_{i}\,(1\leq i \leq m)$ inclusions of fields.
\begin{prop}
Write $k_{i}$ for the algebraic closure of $k$ in $K_{i}$ and $\iota_{i}' : k \hookrightarrow k_{i}$ for the inclusion induced by $\iota_{i}$.
Moreover, write $k_{i}^{\mathrm{sep}}$ for the (absolute) separable closure of $k_{i}$ and $\iota_{i}^{\mathrm{sep}} : k \hookrightarrow k_{i}^{\mathrm{sep}}$ for the inclusion induced by $\iota_{i}$.
The following are equivalent.
\begin{enumerate}
\item The inclusions $\{\iota_{i}\}$ satisfy the property (F).
\item The inclusions $\{ \iota_{i}'\}$ satisfy the property (F).
\item The inclusions $\{ \iota_{i}^{\mathrm{sep}}\}$ satisfy the property (F).
\end{enumerate}
\label{alg}
\end{prop}
\begin{dfn}
We say that the inclusions $\{ \iota_{i} \}$ satisfy the property (F') if the following condition is satisfied:
For any subfield $l$ of the product ring $\underset{1\leq i \leq m}{\prod}K_{i}$ which is algebraic over the diagonal subfield $k$ defined by the $\iota_{i}$, the extension $k \subset l$ is separable.
\label{(F')}
\end{dfn}
\begin{exam}
The property (F) implies the property (F'), but the property (F') does not imply the property (F).
Consider the inclusions\\
$\mathbb{F}_{p}(X^{p}+Y^{p},X^{p}Y^{p})\hookrightarrow \mathbb{F}_{p}(X,Y^{p})$ and $\mathbb{F}_{p}(X^{p}+Y^{p},X^{p}Y^{p})\hookrightarrow \mathbb{F}_{p}(X+Y, XY)$.
Then the field extension $\mathbb{F}_{p}(X+Y, XY) \subset \mathbb{F}_{p}(X,Y)$ is separable and the field $\mathbb{F}_{p}(X,Y)$ contains the field $\mathbb{F}_{p}(X,Y^{p})$ which is inseparable over the field\\
$\mathbb{F}_{p}(X^{p}+Y^{p},X^{p}Y^{p})$.
\end{exam}
\begin{note}
Let $k \subset K'$ be an extension of fields.
Write $k'$ for the algebraic closure of $k$ in $K'$ and $k'_{n}$ for the normal closure of $k'$ over $k$ (i.e., the minimal normal extension field of $k$ which contains $k'$).
We write $k'_{p}$ for the field $k'_{n}\cap k^{p^{-\infty}}$.
\label{defp}
\end{note}
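To illustrate this notation, consider the following example (ours, not from the original). Take $k=\mathbb{F}_{p}(t)$ and $K'=k(t^{1/p})$, so that $k'=K'$. Since $k'$ is purely inseparable over $k$, it is normal over $k$, whence
$$k'_{n}=k(t^{1/p}),\qquad k'_{p}=k'_{n}\cap k^{p^{-\infty}}=k(t^{1/p}),$$
because $t^{1/p}$ lies in the perfect closure $k^{p^{-\infty}}$.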
\begin{lem}
Let $k, K', k',$ and $k'_{p}$ be as in Notation\,\ref{defp}.
Write $k'_{s}$ for the separable closure of $k'$, $k'_{s,n}$ the normal closure of $k'_{s}$ over $k$, and $k'_{s,p}$ for the field $k'_{s,n}\cap k^{p^{-\infty}}.$
\begin{enumerate}
\item $k'_{p}=k'_{s,p}$.
\item Write $k_{s}$ for the separable closure of $k$ in $k'_{s}$.
Then $k_{s}$ and $k'_{s,p}$ are linearly disjoint and $k'_{s,n}=k_{s}k'_{s,p}$.
\end{enumerate}
\end{lem}
\begin{rem}
We obtain fields $k_{i,p} \,(1\leq i \leq m)$ from the inclusions $\{ \iota_{i} \}$.
Note that the intersection of the fields $k_{i} \,(1\leq i \leq m)$ is not well defined (cf.\,Lemma \ref{alg}), whereas the intersection of the fields $k_{i,p} \,(1\leq i \leq m)$ is, since they are all contained in $k^{p^{-\infty}}$.
\end{rem}
\begin{lem}
The inclusions $\{\iota_{i}\}$ satisfy the property (F) if the intersection of the fields $k_{i,p} \,(1\leq i \leq m)$ coincides with $k$.
\end{lem}
\begin{prop}
Suppose that $[k : k^{p}] \leq p$ (i.e., the imperfection degree of $k$ is $\leq 1$).
\begin{enumerate}
\item Any algebraic extension of $k$ is a linearly disjoint compositum of a separable algebraic extension of $k$ and a purely inseparable extension of $k$.
\item The property (F) is satisfied if and only if the algebraic closure of $k$ in each $k_{i}$ is separable over $k$.
\end{enumerate}
\end{prop}
\begin{exam}
We give some examples of fields $k$ such that $[k : k^{p}] \leq p$.
\begin{enumerate}
\item A perfect field.
\item An extension field of a perfect field of transcendence degree $1$.
\item A field of Laurent series over a perfect field.
\end{enumerate}
\end{exam}
\section{Appendix 2: Geometrically connected fibers}
In this section, we discuss homotopy exact sequences for a geometric (not necessarily generic) point of $S$.
Let $X, S, f$, and $\overline{s}$ be as in Section \ref{suffsec}.
Consider a geometric (not necessarily generic) point $\overline{\eta'}$ of $S$.
Write $\widetilde{S}_{\overline{\eta'}}$ for the strict localization of $S$ at $\overline{\eta'}$ and fix an $S$-morphism $\overline{\eta} \rightarrow \widetilde{S}_{\overline{\eta'}}$.
\begin{rem}
\begin{enumerate}
\item If the sequence (\ref{exac}) is exact, the sequence
\begin{equation}
\pi_{1}(X\times_{S}\widetilde{S}_{\overline{\eta'}}, \overline{x})\rightarrow \pi_{1}(X, \overline{x}) \rightarrow \pi_{1}(S, \overline{x}) \rightarrow 1
\label{exachen}
\end{equation}
is exact.
\item If the condition of Theorem \ref{suff} is satisfied for each point $s\in S$ which does not specialize the image of $\overline{\eta'}$, then the sequence (\ref{exachen}) is exact.
\item Therefore, the homomorphism $\pi_{1}(X\times_{S}\overline{\eta}, \overline{x}) \rightarrow \pi_{1}(X\times_{S}\widetilde{S}_{\overline{\eta'}}, \overline{x})$ is not surjective in general.
\end{enumerate}
\label{rem1}
\end{rem}
\begin{rem}
\begin{enumerate}
\item Suppose that the morphism $X\times_{S}\widetilde{S}_{\overline{\eta'}} \rightarrow \widetilde{S}_{\overline{\eta'}}$ is proper and flat.
Note that $\widetilde{S}_{\overline{\eta'}}=\mathrm{Spec}\, (f\times \mathrm{id}_{\widetilde{S}_{\overline{\eta'}}})_{\ast}O_{X\times_{S}\widetilde{S}_{\overline{\eta'}}} \rightarrow S$ is a universal homeomorphism and therefore that the scheme $X\times_{S}\overline{\eta'}$ is connected.
Take a geometric point $\overline{x'}$ of $X\times_{S}\overline{\eta'}$.
Then the homomorphism $\iota: \pi_{1}(X\times_{S}\overline{\eta'}, \overline{x'}) \rightarrow\pi_{1}(X\times_{S}\widetilde{S}_{\overline{\eta'}}, \overline{x'})$ is an isomorphism.
\item In our case, $\iota$ is neither surjective nor injective in general.
\end{enumerate}
\label{rem2}
\end{rem}
\begin{cor}
Suppose that $f$ is proper and flat.
Then the sequence
\begin{equation}
\pi_{1}(X\times_{S}\overline{\eta'}, \overline{x'})\rightarrow \pi_{1}(X, \overline{x'}) \rightarrow \pi_{1}(S, \overline{x'}) \rightarrow 1
\label{exaccl}
\end{equation}
is exact if the following condition is satisfied:
\begin{itemize}
\item Let $s \in S$ be a point whose local ring is of dimension $1$.
Suppose that the point $s$ does not specialize the image of $\overline{\eta'}$.
Let $\xi_{1}, \ldots, \xi_{n}$ be the generic points of the scheme $f^{-1}(s)$.
Write $e_{i}$ for the multiplicity of $\xi_{i}$ and $k(\xi_{i})$ (resp.\,$k(s)$) for the residual field of $\xi_{i}$ (resp.\,$s$).
Then $\mathrm{gcd}\,(e_{1}, \ldots, e_{n})=1$ and the inclusions $k(s) \hookrightarrow k(\xi_{i})$ satisfy the property (F).
\end{itemize}
\label{appcor}
\end{cor}
\begin{proof}
Corollary \ref{appcor} follows from Remark \ref{rem1}.2 and Remark \ref{rem2}.1.
\end{proof}
By Remark \ref{rem1}.3 and Remark \ref{rem2}.2, we need an ad hoc condition to make the sequence (\ref{exaccl}) exact.
\begin{prop} (cf.\,\cite{Ho} Proposition 1.10)
Suppose that the following conditions are satisfied.
\begin{itemize}
\item The morphism $f$ is flat or the scheme $S$ is regular.
\item $f$ satisfies the property (R).
\item For any connected finite \'etale covering $X' \rightarrow X$ and any $S$-morphism $\overline{\eta'} \rightarrow N_{X'/S}$, the scheme $\overline{\eta'}\times_{N_{X'/S}}X'$ is connected.
(Here, we write $N_{X'/S}$ for the normalization of $S$ in the algebraic separable closure $K_{X'/S}$ of the function field of $S$ in the function field of $X'$.)
\end{itemize}
Then the sequence (\ref{exaccl}) is exact.
\end{prop}
\begin{proof}
It suffices to show that the implication $4 \Rightarrow 3$ in Proposition \ref{essential} still holds if we replace $\overline{\eta}$ by $\overline{\eta'}$.
Since the number of the connected components of the scheme $X_{\overline{\eta'}}\times_{X}X' =\overline{\eta'}\times_{S}X'=(\overline{\eta'}\times_{S}N_{X'/S})\times_{N_{X'/S}}X'$ coincides with the covering degree of $N_{X'/S} \rightarrow S$ by the third condition, the assertion follows.
\end{proof}
\label{app2}
Quantum-enhanced metrology is a quantum protocol feasible with current technology, in contrast to quantum communication and quantum computation, which require perfect security and a large-scale architecture, respectively. It can outperform any classical strategy by using entangled or squeezed states as the input signal \cite{giovannetti2004}, owing to their nonclassical behavior.
For example, the $N00N$ state exhibits $N$ oscillations in a single run whereas the coherent state exhibits a single oscillation in the same run \cite{dowling2008}.
In comparison to the coherent state, which saturates the number-phase uncertainty relation, the squeezed state reduces the uncertainty of the phase while enlarging that of the number (intensity) \cite{dowling2008}. As the standard quantum state one uses the coherent state, a minimum-uncertainty state that imitates the oscillatory behavior of a classical harmonic oscillator.
We focus on two-mode input states for quantum phase estimation in lossy interferometry. In the lossless scenario, the $N00N$ state \cite{dowling2008} and
$N00N$-type states \cite{Gerry01,Joo11, Zhang13, LLNK15, Knott2016, LLLN16} attain the Heisenberg limit (HL), the fundamental limit imposed by quantum mechanics, with a precision scaling of $1/N$.
Entanglement thus provides a $\sqrt{N}$ enhancement in precision, whereas the coherent state exhibits a scaling of $1/\sqrt{N}$, i.e.,
the shot-noise limit (SNL) or the standard quantum limit (SQL). Here $N$ is the input photon number, which can be replaced by the mean photon number of the input state.
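As a quick numerical illustration of these two scalings (our sketch, not part of the original analysis): for a pure state under the phase shift $e^{i\phi\hat{n}}$, the quantum Fisher information is $F_Q=4\,\mathrm{Var}(\hat{n})$, giving $F_Q=4|\alpha|^2=4\bar{n}$ for a coherent state (SNL) and $F_Q=N^2$ for a $N00N$ state (HL):

```python
import math
import numpy as np

def qfi_pure(probs, n_values):
    # QFI of a pure state under exp(i*phi*n): F_Q = 4 * Var(n)
    n = np.asarray(n_values, dtype=float)
    p = np.asarray(probs, dtype=float)
    mean = (p * n).sum()
    return 4.0 * ((p * n**2).sum() - mean**2)

# Coherent state |alpha>: Poissonian photon statistics, Var(n) = |alpha|^2
alpha, dim = 2.0, 60
p_coh = np.array([math.exp(-alpha**2) * alpha**(2 * k) / math.factorial(k)
                  for k in range(dim)])
F_coh = qfi_pure(p_coh, np.arange(dim))   # ~ 4*|alpha|^2 = 16

# N00N state (|N,0> + |0,N>)/sqrt(2): n_a is N or 0, each with probability 1/2
N = 4
F_noon = qfi_pure([0.5, 0.5], [N, 0])     # N^2 = 16
```

The QFI grows linearly in the mean photon number for the coherent state but quadratically in $N$ for the $N00N$ state, which is the $\sqrt{N}$ precision enhancement quoted above.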
In the lossy scenario, however, the $N00N$ state and $N00N$-type states are fragile against photon loss, so a class of path-entangled photon states \cite{Huver08}, Holland--Burnett states \cite{HB, Datta}, and general pure states with definite photon number $N$ \cite{Dorner, Rafal} have been studied in order to achieve resilience to photon loss.
Although a pure state with definite photon number $N=2$ was optimized experimentally and shown to be among the best states for lossy quantum-enhanced metrology \cite{Kac}, no specific relation between the two single-mode components of the pure state was identified. Here we raise a question: is there any specific relation between the two components of a two-mode state in lossy quantum-enhanced metrology? As such a relation, we consider the optimal distance between the two components of a two-mode state, which varies with photon loss in the interferometer.
The idea is to prepare an optimal input state against photon loss from the outset, in order to avoid additional operations in the measurement stage.
The optimal input state preserves quantum advantage in the presence of photon loss, while beating the standard interferometric limit (SIL) of a coherent state. In the presence of photon loss, the boundary of the coherent state becomes readjusted as the SIL \cite{Rafal}.
Here we inject a coherent input state ($|\sqrt{2}\alpha\rangle_a |0\rangle_b$) into an interferometer. After a $50:50$ beam splitter, the input coherent state is transformed into a separable coherent state ($|\alpha\rangle_a |\alpha\rangle_b$), which is compared with our optimal entangled coherent state in the presence of photon loss. Thus, in our paper, the separable coherent state plays the role of the SQL in the absence of photon loss and of the SIL in its presence.
In general, a symmetric two-mode entangled state can be represented by $|\psi_1\rangle_a|\psi_2\rangle_b+|\psi_2\rangle_a|\psi_1\rangle_b$,
where $|\psi_{1(2)} \rangle$ is an arbitrary single-mode state. It is required to specify them in order to observe the distance between the two components $\psi_1$ and $\psi_2$. In continuous variable systems, coherent states are more feasible than the other states, e.g., the squeezed vacuum state and squeezed coherent state.
Using the coherent states, we propose optimal entangled coherent states $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$ for quantum phase estimation in the presence of photon loss.
The entangled coherent state, which is implementable in a laboratory \cite{cat}, is a good candidate to control the optimal distance between the two components in the two-mode state.
In Fig. 1, we simply put the photon-loss process after the phase-shifting process, due to the commutativity between the two processes \cite{Rafal,Oh17}.
We investigate the optimal distance between $\alpha$ and $\beta$ for quantum phase estimation at different photon loss rates $R$. The optimal distance is obtained at an economical point, which represents the maximum information that we can extract per unit input energy.
Using the ratio of the quantum Fisher information (QFI) to the input mean photon number, we show how the optimal distance between $\alpha$ and $\beta$ varies with the photon loss rate in the interferometer. The result is also explained with the degree of entanglement (DOE) of the optimal entangled coherent state.
In the constraint of the input mean photon number, the optimal entangled coherent state is compared with a separable coherent state in the presence of photon loss.
Furthermore, we derive the optimal measurement for the quantum Fisher information.
This paper is organized as follows. We propose a generation scheme of an entangled coherent state and analyze its degree of entanglement.
Then we investigate the optimal distance between the coherent-state components $\alpha$ and $\beta$ as a function of the photon loss rate, and compare the optimal entangled coherent state with a separable coherent state. The result is also analyzed via the entanglement of the output state.
Next, we discuss the corresponding optimal measurement. Finally, we summarize our results and discuss some issues.
\section{State generation and degree of entanglement}
\begin{figure}
\centerline{\scalebox{0.35}{\includegraphics[angle=0]{scheme}}}
\vspace{-1.6in}
\caption{Entangled coherent state for quantum phase estimation in the presence of photon loss. $R$ is a photon loss rate.
$\phi$ represents a phase shifter, $\exp(i\phi\hat{n})$. The input entangled coherent state $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$ is prepared after a $50:50$ beam splitter.}
\label{fig:fig1}
\end{figure}
An input entangled coherent state can be produced by impinging a coherent state and an even cat state into a $50:50$ beam splitter which takes
the transformation of $\hat{a}^{\dag}\rightarrow \frac{1}{\sqrt{2}}(\hat{a}^{\dag}+\hat{b}^{\dag})$ and
$\hat{b}^{\dag}\rightarrow \frac{1}{\sqrt{2}}(\hat{b}^{\dag}-\hat{a}^{\dag})$.
In Fig. 1, we inject a coherent state $|\frac{\alpha+\beta}{\sqrt{2}}\rangle_a$ and an even cat state
$|\frac{(\alpha-\beta)}{\sqrt{2}}\rangle_b+|-\frac{(\alpha-\beta)}{\sqrt{2}}\rangle_b$ into the $50:50$ beam splitter. Then, we produce an entangled coherent state $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$,
where a single-mode coherent state is given by $|\alpha\rangle=e^{-|\alpha|^2/2}\sum^{\infty}_{n=0}\frac{\alpha^n}{\sqrt{n!}}|n\rangle$.
Assuming that $\alpha$ and $\beta$ are real variables, we can transform the entangled coherent state into
\begin{eqnarray}
&&\frac{1}{\sqrt{N_T}}(|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b)\\
&&=\frac{1}{2\sqrt{N_T}}(\langle A_+|A_+\rangle|\bar{A}_+\rangle_a|\bar{A}_+\rangle_b-\langle A_-|A_-\rangle|\bar{A}_-\rangle_a|\bar{A}_-\rangle_b),\nonumber
\end{eqnarray}
where $|A_{\pm}\rangle=|\alpha\rangle \pm |\beta\rangle$, $|\bar{A}_{\pm}\rangle=\frac{1}{\sqrt{\langle A_{\pm}|A_{\pm}\rangle}}|A_{\pm}\rangle$,
$\langle A_-|A_+\rangle=0$, and $N_T=2(1+\exp[-(\beta-\alpha)^2])$.
Using the von Neumann entropy \cite{ent}, which is computed as $\mathrm{DOE}\equiv-Tr[\rho_a\log_2\rho_a]=-Tr[\rho_b\log_2\rho_b]=-\sum_i\lambda_i\log_2\lambda_i$ ($\lambda_i$ denotes the eigenvalues of $\rho_a$ or $\rho_b$) for a pure bipartite state,
we derive the DOE as
\begin{eqnarray}
\text{DOE}=-\sum_{k=+,-}\frac{\langle A_k|A_k\rangle^2}{4N_T}\log_2[\frac{\langle A_k|A_k\rangle^2}{4N_T}],
\end{eqnarray}
where $\langle A_{\pm}|A_{\pm}\rangle=2(1\pm \exp[-(\beta-\alpha)^2/2])$.
In Fig. 2, we show that the degree of entanglement increases with $|\alpha-\beta|$. Since we assumed that $\alpha$ and $\beta$ are real values, we can take
$|\alpha-\beta|$ as the distance between the two components $\alpha$ and $\beta$. This implies that the two different states
$|2\alpha\rangle_a|0\rangle_b+|0\rangle_a|2\alpha\rangle_b$ and $|\alpha\rangle_a|-\alpha\rangle_b+|-\alpha\rangle_a|\alpha\rangle_b$ have the same degree of entanglement, their normalization factors being taken into account.
\begin{figure}
\centerline{\scalebox{0.6}{\includegraphics[angle=0]{Entanglement}}}
\vspace{0in}
\caption{Degree of entanglement for the entangled coherent state $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$,
as a function of $|\alpha-\beta|$. }
\label{fig:fig2}
\end{figure}
Experimentally, in superconducting circuit-QED systems, an even cat state $|\frac{(\alpha-\beta)}{\sqrt{2}}\rangle+|-\frac{(\alpha-\beta)}{\sqrt{2}}\rangle$ was prepared with fidelity above $99\%$, and an entangled coherent state $|\alpha\rangle_a|\alpha\rangle_b+|-\alpha\rangle_a|-\alpha\rangle_b$ was prepared with $81\%$ fidelity at $\alpha=1.92$ \cite{cat}. It is expected that the $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$ state can also be prepared with fidelity above $80\%$ in superconducting circuit QED.
\section{Optimal distance between the two components ($\alpha$ and $\beta$) of an entangled coherent state}
Using the entangled coherent state $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$, we study quantum phase estimation in lossy interferometry. The precision can be quantified by means of the quantum Fisher information, the classical Fisher information (CFI) maximized over all positive-operator valued measures, which is the optimal measure of how well small changes in a parameter can be detected.
From an information-theoretic point of view, the inverse of the QFI determines the ultimate precision limit of quantum phase estimation \cite{BK}.
This implies that the larger the QFI, the better the attainable phase precision.
The QFI is defined as $F_Q=Tr[\rho_{\phi}\hat{L}_{\phi}^2]$, where $\rho_{\phi}$ contains the phase information of $\phi$ and $\hat{L}_{\phi}$ is the symmetric logarithmic derivative (SLD) operator \cite{BK}.
Using an output state $\rho_{\phi}=\sum_n \lambda_n|\lambda_n\rangle\langle\lambda_n|$, we can obtain the QFI by a formula of $F_Q=4\sum_n\lambda_n f_n-\sum_{n\neq m}\frac{8\lambda_n\lambda_m}{\lambda_n + \lambda_m}|\langle \lambda^{'}_n|\lambda_m\rangle|^2$, where $f_n=\langle\lambda^{'}_n| \lambda^{'}_n\rangle-|\langle\lambda^{'}_n|\lambda_n\rangle|^2$ and
$| \lambda^{'}_n\rangle=\frac{\partial |\lambda_n\rangle}{\partial \phi}$.
\begin{figure}
\centerline{\scalebox{0.6}{\includegraphics[angle=0]{lossless}}}
\vspace{0in}
\caption{Economical point of quantum phase estimation using the state $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$ without photon loss.
Given a value of $\alpha$, we find the optimal value of $\beta$ achieving the maximum value of $F_Q/\langle \hat{n}_a\rangle$, the quantum Fisher information divided by the input mean photon number in mode $a$. For $\alpha <0.4$, $\beta_{opt}=-\alpha$, which is not shown here.
The optimal value of $\beta$ is numerically obtained under the constraint of $-\alpha \leq \beta < \alpha$.
}
\label{fig:fig3}
\end{figure}
Here we are interested in quantum phase estimation with a cost-effective use of the input entangled coherent state.
That is, we want to acquire as much information about the phase parameter as possible per unit input energy.
For a measure of the cost-effectiveness, we define an economical point as
\begin{equation}
\text{Eco}(R, \alpha)\equiv \max_{\beta} \frac{F_Q}{\langle \hat{n}_a\rangle},
\end{equation}
where $F_Q$ is the QFI of the output state in Fig. 1 and $\langle \hat{n}_a\rangle$ is the input mean photon number in mode $a$ after the $50:50$ beam splitter. Since the total input mean photon number satisfies $\langle \hat{n}_a+\hat{n}_b\rangle=2\langle\hat{n}_a\rangle$, it is enough for us to consider the energy of either mode.
Given a value of $\alpha$, we find the optimal value of $\beta$ in the range of $-\alpha\leq \beta< \alpha$ to maximize $F_Q/\langle \hat{n}_a\rangle$.
\subsection{Equal losses in both arms}
After experiencing a phase shifting operation and photon loss, the output state is given by
\begin{eqnarray}
\rho_{\phi}&=&\frac{1}{2[1+e^{-(\beta-\alpha)^2}]}\nonumber\\
&\times&\bigg[ |\sqrt{T}\alpha e^{i\phi}\rangle_a\langle \sqrt{T}\alpha e^{i\phi}|\otimes |\sqrt{T}\beta \rangle_b\langle \sqrt{T}\beta |\nonumber\\
&&+ |\sqrt{T}\beta e^{i\phi}\rangle_a\langle \sqrt{T}\beta e^{i\phi}|\otimes |\sqrt{T}\alpha \rangle_b\langle \sqrt{T}\alpha | \nonumber\\
&& +e^{-R(\beta-\alpha)^2}\bigg( |\sqrt{T}\alpha e^{i\phi}\rangle_a\langle \sqrt{T}\beta e^{i\phi}|\otimes |\sqrt{T}\beta \rangle_b\langle \sqrt{T}\alpha|\nonumber\\
&&+ |\sqrt{T}\beta e^{i\phi}\rangle_a\langle \sqrt{T}\alpha e^{i\phi}|\otimes |\sqrt{T}\alpha \rangle_b\langle \sqrt{T}\beta|\bigg) \bigg],
\end{eqnarray}
where $T=1-R$. The corresponding quantum Fisher information is derived as
\begin{eqnarray}
F_Q&=&4\bigg[ \sum_{k=+,-}\lambda_k(\langle \lambda^{'}_k|\lambda^{'}_k\rangle -|\langle \lambda^{'}_k|\lambda_k\rangle|^2)\nonumber\\
&&-\frac{2\lambda_+\lambda_-}{\lambda_+ +\lambda_-}(|\langle \lambda^{'}_+|\lambda_-\rangle|^2+|\langle \lambda^{'}_-|\lambda_+\rangle|^2) \bigg],
\end{eqnarray}
where
\begin{eqnarray}
\langle \lambda^{'}_{\pm}|\lambda^{'}_{\pm}\rangle&=&\frac{T}{2[1\pm e^{-T(\beta-\alpha)^2}]}
\bigg[ \alpha^2(T\alpha^2+1)+\beta^2(T\beta^2+1)\nonumber\\
&&\pm 2\alpha\beta(T\alpha\beta+1)e^{-T(\beta-\alpha)^2}\bigg],\nonumber\\
\langle \lambda^{'}_{\pm}|\lambda_{\pm}\rangle&=&\frac{-iT}{2[1\pm e^{-T(\beta-\alpha)^2}]}
\bigg[ \alpha^2+\beta^2\pm 2\alpha\beta e^{-T(\beta-\alpha)^2}\bigg],\nonumber\\
\langle \lambda^{'}_{+}|\lambda_{-}\rangle&=& \langle \lambda^{'}_{-}|\lambda_{+}\rangle=
\frac{-iT(\alpha^2-\beta^2)}{2\sqrt{1-e^{-2T(\beta-\alpha)^2}}},\nonumber\\
\lambda_{\pm}&=&\frac{(1\pm e^{-R(\beta-\alpha)^2})}{2(1+e^{-(\beta-\alpha)^2})}(1\pm e^{-T(\beta-\alpha)^2}).
\end{eqnarray}
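As a quick numerical consistency check (a sketch, not part of the derivation), the eigenvalues $\lambda_\pm$ above must sum to one for any $\alpha$, $\beta$, and photon loss rate $R$, since the rank-two state $\rho_\phi$ has unit trace and $T=1-R$:

```python
import numpy as np

def lam(alpha, beta, R):
    # Eigenvalues lambda_{+-} of the lossy output state, with T = 1 - R
    T = 1.0 - R
    d2 = (beta - alpha) ** 2
    pref = 1.0 / (2.0 * (1.0 + np.exp(-d2)))
    lam_p = pref * (1.0 + np.exp(-R * d2)) * (1.0 + np.exp(-T * d2))
    lam_m = pref * (1.0 - np.exp(-R * d2)) * (1.0 - np.exp(-T * d2))
    return lam_p, lam_m
```

Since $e^{-Rd^2}e^{-Td^2}=e^{-d^2}$ for $d=\beta-\alpha$, the sum $\lambda_+ + \lambda_-$ telescopes to one, confirming a proper spectral decomposition.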
\begin{figure}
\centerline{\scalebox{0.31}{\includegraphics[angle=0]{lossy}}}
\vspace{-0.3in}
\caption{(Photon loss in both arms) Economical point of quantum phase estimation using the state $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$ with photon loss rate $R$.
Given a value of $\alpha$, we find the optimal value of $\beta$ achieving the maximum value of $F_Q/\langle \hat{n}_a\rangle$, the quantum Fisher information over the input mean photon number in mode $a$, depending on the photon loss rate $R$. (a) $\alpha=0.6$, (b) $\alpha=1$, (c) $\alpha=1.8$, and (d) $\alpha=3$.
We consider the constraint of $-\alpha \leq \beta < \alpha$. }
\label{fig:fig4}
\end{figure}
In a lossless condition $(R=0)$, given a value of $\alpha$, the optimal value of $\beta$ is negative at $\alpha<1.8$, but it increases with $\alpha$ and approaches zero at $\alpha\geq 1.8$, as shown in Fig. 3. At $\alpha <0.4$, the optimal value of $\beta$ is $-\alpha$, which is not shown in Fig. 3.
For small values of $\alpha$, the optimal entangled coherent state is close to the state $|\alpha\rangle_a|-\alpha\rangle_b+|-\alpha\rangle_a|\alpha\rangle_b$. For large values of $\alpha$, the optimal entangled coherent state becomes the state $|\alpha\rangle_a|0\rangle_b+|0\rangle_a|\alpha\rangle_b$.
We note that the former state is close to the SQL scaling, whereas the latter state approaches the HL scaling. With increasing $\alpha$, the ultimate precision limit of the optimal entangled coherent state moves from the SQL toward the HL, while the optimal distance $|\alpha-\beta|$ varies from $|2\alpha|$ to $|\alpha|$.
In a lossy condition $(R\neq 0)$, given a value of $\alpha$, the optimal value of $\beta$ increases with the photon loss rate $R$. This indicates that the optimal distance
$|\alpha-\beta|$ decreases with the photon loss rate. In Fig. 4, we show that the optimal distance gets smaller (larger) with an increasing (decreasing) photon loss rate. This means that we initially need to prepare a less (more) entangled coherent state under an increasing (decreasing) photon loss rate. At $\alpha=0.6$, the optimal value of $\beta$ increases approximately from $-0.30$ to $0.21$ with the photon loss rate $R$.
At $\alpha=1$, it moves from $-0.16$ to $0.39$. At $\alpha=1.8$, it starts to move from $0$ to $0.92$. Then, at $\alpha=3$, it shifts from $0$ to $1.84$.
We note that, in Figs. 4 (a)-(c), the optimal value of $\beta$ can approach $\alpha$ for $R>0.8$, so that the optimal entangled coherent state approaches the separable state $|\alpha\rangle_a|\alpha\rangle_b$. The same applies to the case of Fig. 4 (d) for $R>0.6$.
\begin{figure}
\centerline{\scalebox{0.31}{\includegraphics[angle=0]{comparison}}}
\vspace{-0.3in}
\caption{(Photon loss in both arms) Comparison between the state $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$ and a separable coherent state $|\alpha\rangle_a|\alpha\rangle_b$ for the quantum Fisher information with $N_{\text{av}}=\langle\hat{n}_a\rangle$, under the constraint of the input mean photon number. $R$ is the photon loss rate.
Black dashed line ($|\alpha\rangle_a|\alpha\rangle_b$), red dotted curve ($|\alpha\rangle_a|0\rangle_b+|0\rangle_a|\alpha\rangle_b$),
and blue solid curve ($|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$).
(a) $\beta=-0.2\alpha$, (b) $\beta=0.3\alpha$, (c) $\beta=0.5\alpha$, and (d) $\beta=0.7\alpha$.
}
\label{fig:fig5}
\end{figure}
In Fig. 5, the results of the entangled coherent state $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$ can be compared with those of a separable coherent state $|\alpha\rangle_a|\alpha\rangle_b$. Given a mean photon number $|\alpha|^2$ for the coherent state, the quantum Fisher information is derived as $4(1-R)|\alpha|^2$ in the presence of photon loss rate $R$. Under the constraint of the input mean photon number, the entangled coherent state outperforms the coherent state in terms of the QFI. In a lossless condition ($R=0$), the entangled coherent state with $\beta\sim -\alpha$ exhibits higher QFI than the coherent state at small mean photon numbers, whereas the entangled coherent state with $\beta=0$ exhibits higher QFI than the coherent state at large mean photon numbers. In a lossy condition ($R\neq 0$), the entangled coherent state with $\beta=\gamma \alpha$ shows higher QFI than the coherent state for different photon loss rates $R$, where the ratio $\gamma$ increases from $-1$ to values below $1$ with increasing $R$.
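The benchmark value $4(1-R)|\alpha|^2$ quoted above can be cross-checked numerically (a sketch using the standard fact that a coherent state stays coherent under loss, $|\alpha\rangle\mapsto|\sqrt{T}\alpha\rangle$, so the phase-shifted probe is pure and $F_Q=4(\langle\psi'|\psi'\rangle-|\langle\psi'|\psi\rangle|^2)$):

```python
import numpy as np
from math import factorial

def coherent(beta, dim=30):
    # Truncated Fock expansion |beta> = e^{-|beta|^2/2} sum_n beta^n / sqrt(n!) |n>
    n = np.arange(dim)
    fact = np.array([float(factorial(k)) for k in n])
    return np.exp(-abs(beta) ** 2 / 2) * beta ** n / np.sqrt(fact)

def pure_qfi(amp, phi, dim=30, h=1e-6):
    # QFI of the pure state |amp * e^{i phi}> via central finite differences
    psi = coherent(amp * np.exp(1j * phi), dim)
    dpsi = (coherent(amp * np.exp(1j * (phi + h)), dim)
            - coherent(amp * np.exp(1j * (phi - h)), dim)) / (2 * h)
    return 4 * (np.vdot(dpsi, dpsi).real - abs(np.vdot(dpsi, psi)) ** 2)
```

For a lossy coherent probe of amplitude $\sqrt{T}\alpha$ this reproduces $4T|\alpha|^2=4(1-R)|\alpha|^2$.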
\subsection{Losses in one arm}
We can consider the same scenario under photon loss in only one arm of the interferometer. Assuming photon loss only in mode $a$ of Fig. 1, we derive the output state as
\begin{eqnarray}
\rho_{\phi}&=&\frac{1}{2[1+e^{-(\beta-\alpha)^2}]}
\bigg[ |\sqrt{T}\alpha e^{i\phi}\rangle_a\langle \sqrt{T}\alpha e^{i\phi}|\otimes |\beta \rangle_b\langle \beta |\nonumber\\
&&+ |\sqrt{T}\beta e^{i\phi}\rangle_a\langle \sqrt{T}\beta e^{i\phi}|\otimes |\alpha \rangle_b\langle \alpha | \nonumber\\
&& +e^{-R(\beta-\alpha)^2}\bigg( |\sqrt{T}\alpha e^{i\phi}\rangle_a\langle \sqrt{T}\beta e^{i\phi}|\otimes |\beta \rangle_b\langle \alpha|\nonumber\\
&&+ |\sqrt{T}\beta e^{i\phi}\rangle_a\langle \sqrt{T}\alpha e^{i\phi}|\otimes |\alpha \rangle_b\langle \beta|\bigg) \bigg],
\end{eqnarray}
where $T=1-R$. The QFI of the state in Eq. (5) is calculated with
\begin{eqnarray}
\langle \lambda^{'}_{\pm}|\lambda^{'}_{\pm}\rangle
&=&\frac{T}{2[1\pm e^{-\frac{1}{2}(1+T)(\beta-\alpha)^2}]}
\bigg[ \alpha^2(T\alpha^2+1) +\beta^2\nonumber\\
&\times&(T\beta^2+1)\pm 2\alpha\beta(T\alpha\beta+1)e^{-\frac{1}{2}(1+T)(\beta-\alpha)^2}\bigg],\nonumber\\
\langle \lambda^{'}_{\pm}|\lambda_{\pm}\rangle&=&\frac{-iT\bigg[ \alpha^2+\beta^2\pm 2\alpha\beta e^{-\frac{1}{2}(1+T)(\beta-\alpha)^2}\bigg]}{2[1\pm e^{-\frac{1}{2}(1+T)(\beta-\alpha)^2}]}
,\nonumber\\
\langle \lambda^{'}_{+}|\lambda_{-}\rangle&=& \langle \lambda^{'}_{-}|\lambda_{+}\rangle=
\frac{iT(\alpha^2-\beta^2)}{2\sqrt{1-e^{-(1+T)(\beta-\alpha)^2}}},\nonumber\\
\lambda_{\pm}&=&\frac{(1\pm e^{-\frac{R}{2}(\beta-\alpha)^2})}{2(1+e^{-(\beta-\alpha)^2})}(1\pm e^{-\frac{1}{2}(1+T)(\beta-\alpha)^2}).
\end{eqnarray}
This case exhibits behavior similar to Fig. 4. Given a value of $\alpha$, the optimal value of $\beta$ increases with the photon loss rate $R$, such that the optimal distance $|\alpha-\beta|$ decreases with the photon loss rate, as shown in Fig. 6. In terms of the QFI, Fig. 7 shows that the entangled coherent state outperforms the coherent state as $R$ increases.
\begin{figure}
\centerline{\scalebox{0.31}{\includegraphics[angle=0]{lossy1}}}
\vspace{-0.3in}
\caption{(Photon loss in one arm) Economical point of quantum phase estimation using the state $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$ with photon loss rate $R$.
Given a value of $\alpha$, we find the optimal value of $\beta$ achieving the maximum value of $F_Q/\langle \hat{n}_a\rangle$, the quantum Fisher information over the input mean photon number in mode $a$, depending on the photon loss rate $R$. (a) $\alpha=0.6$, (b) $\alpha=1$, (c) $\alpha=1.8$, and (d) $\alpha=3$.
We consider the constraint of $-\alpha \leq \beta < \alpha$. }
\label{fig:fig6}
\end{figure}
\begin{figure}
\centerline{\scalebox{0.31}{\includegraphics[angle=0]{comparison1}}}
\vspace{-0.25in}
\caption{(Photon loss in one arm) Comparison between the state $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$ and a separable coherent state $|\alpha\rangle_a|\alpha\rangle_b$ for the quantum Fisher information with $N_{\text{av}}=\langle\hat{n}_a\rangle$, under the constraint of the input mean photon number. $R$ is the photon loss rate.
Black dashed line ($|\alpha\rangle_a|\alpha\rangle_b$), red dotted curve ($|\alpha\rangle_a|0\rangle_b+|0\rangle_a|\alpha\rangle_b$),
and blue solid curve ($|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$).
(a) $\beta=-0.1\alpha$, (b) $\beta=0.2\alpha$, (c) $\beta=0.4\alpha$, and (d) $\beta=0.5\alpha$.
}
\label{fig:fig7}
\end{figure}
\section{Output state entanglement}
In quantum metrology, the best strategy is to achieve the highest QFI possible. In the absence of photon loss, the higher the degree of entanglement, the higher the QFI. In the presence of photon loss, however, a highly entangled state can lose its entanglement faster than a weakly entangled state as the photon loss rate $R$ increases. This is explained by analyzing the entanglement of the output optimal entangled coherent states. As a measure of entanglement in the output state, we consider the negativity, defined as the absolute value of the sum of the negative eigenvalues of the partially transposed state \cite{VW02},
$E_N(\rho)\equiv\frac{||\rho^{T_a}||_1-1}{2}=|\sum_i\mu_i|$, where $||\rho^{T_a}||_1$ is the trace norm and $\mu_i$ are the negative eigenvalues of the partially transposed state $\rho^{T_a}$.
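For reference, the negativity can be evaluated directly from this definition for any bipartite density matrix. The sketch below is a generic textbook construction (not specific to our output states); it recovers the two-qubit maximum $E_N=1/2$ for a Bell state:

```python
import numpy as np

def negativity(rho, da, db):
    # E_N = (||rho^{T_a}||_1 - 1)/2 = |sum of negative eigenvalues of rho^{T_a}|
    r = rho.reshape(da, db, da, db)
    rho_ta = r.transpose(2, 1, 0, 3).reshape(da * db, da * db)  # transpose subsystem a
    evals = np.linalg.eigvalsh(rho_ta)
    return float(-evals[evals < 0].sum())

# Bell state (|00> + |11>)/sqrt(2) has negativity 1/2
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_bell = np.outer(bell, bell)
```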
\subsection{Equal losses in both arms}
The negativity of the output state is given by
\begin{eqnarray}
E_N(\rho_{out})=\frac{e^{(1-R)(\beta-\alpha)^2}-1}{2(1+e^{(\beta-\alpha)^2})}.
\end{eqnarray}
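Two limits of this closed form are worth noting (a numerical sketch of the expression as given): the output entanglement vanishes at full loss, $R=1$, and approaches the two-qubit maximum $1/2$ at $R=0$ for widely separated components, $|\beta-\alpha|\gg 1$:

```python
import numpy as np

def neg_both_arms(alpha, beta, R):
    # Negativity of the output state with equal photon losses in both arms
    d2 = (beta - alpha) ** 2
    return (np.exp((1.0 - R) * d2) - 1.0) / (2.0 * (1.0 + np.exp(d2)))
```

The expression is monotonically decreasing in $R$ for fixed $\alpha$ and $\beta$, consistent with Fig. 8.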
Given a fixed value of $\alpha$, Fig. 8 shows that a highly entangled state loses entanglement faster than a weakly entangled state as $R$ increases.
At $R=0$, the amount of entanglement corresponds to the initial entanglement. The output state entanglement is then displayed for nonzero photon loss rates. At some values of $R$, the initially weakly entangled state can retain relatively more entanglement than the initially highly entangled state. The surviving output entanglement contributes to producing a relatively high quantum Fisher information through its interference terms. In Fig. 8 (d), the ordering of the amounts of entanglement is distinctly reversed with increasing photon loss rate.
Thus, in the presence of photon loss, it is worthwhile to initially prepare a less entangled coherent state as the photon loss rate increases.
\begin{figure}
\centerline{\scalebox{0.31}{\includegraphics[angle=0]{Output1}}}
\vspace{-0.5in}
\caption{(Photon loss in both arms) Output entanglement of the optimal entangled coherent state $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$ as a function of photon loss rate $R$ for fixed values of (a) $\alpha=1$, (b) $\alpha=1.8$, (c) $\alpha=3$, and (d) $\alpha=5$.
Green dashed curve ($\beta=-0.2\alpha$), red dotted curve ($\beta=0$), purple dotdashed curve ($\beta=0.5\alpha$), and
blue solid curve ($\beta=0.7\alpha$).
}
\label{fig:fig8}
\end{figure}
\subsection{Losses in one arm}
The negativity of the output state is given by
\begin{eqnarray}
E_N(\rho_{out})=\bigg|\frac{B_1-\sqrt{B^2_2-4B_3}}{16N_T}\bigg|,
\end{eqnarray}
where $B_1=8(1-e^{-R(\beta-\alpha)^2/2})(1-e^{-(1+T)(\beta-\alpha)^2/2})$,
$B_2=8(1-e^{-R(\beta-\alpha)^2/2})(e^{-T(\beta-\alpha)^2/2}-e^{-(\beta-\alpha)^2/2})$,
$B_3=16(1+e^{-R(\beta-\alpha)^2/2})^2(1-e^{-T(\beta-\alpha)^2})(1-e^{-(\beta-\alpha)^2})$,
and $N_T=2(1+e^{-(\beta-\alpha)^2})$.
This shows a behavior similar to Fig. 8. For a fixed value of $\alpha$, a highly entangled state loses entanglement faster than a weakly entangled state as $R$ increases, as shown in Fig. 9. After photon loss, the initially weakly entangled state can retain relatively more entanglement than the initially highly entangled state.
\begin{figure}
\centerline{\scalebox{0.31}{\includegraphics[angle=0]{Output2}}}
\vspace{-0.4in}
\caption{(Photon loss in one arm) Output entanglement of the optimal entangled coherent state $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$ as a function of photon loss rate $R$ for fixed values of (a) $\alpha=1$, (b) $\alpha=1.8$, (c) $\alpha=3$, and (d) $\alpha=5$.
Green dashed curve ($\beta=-0.1\alpha$), red dotted curve ($\beta=0$), purple dotdashed curve ($\beta=0.4\alpha$), and
blue solid curve ($\beta=0.5\alpha$).
}
\label{fig:fig9}
\end{figure}
\section{Optimal measurement}
Now we turn to the optimal measurement for the quantum Fisher information.
The corresponding optimal measurement is derived with the symmetric logarithmic derivative, the eigenbasis of which represents the optimal measurement basis \cite{Paris}.
The SLD is defined as $\hat{L}_{\phi}=2\sum_{n,m}\frac{\langle\lambda_m| \partial_{\phi}\rho_{\phi}|\lambda_n\rangle}{\lambda_n+\lambda_m}|\lambda_m\rangle\langle\lambda_n|$, where $\rho_{\phi}=\sum_n \lambda_n|\lambda_n\rangle\langle\lambda_n|$,
$\langle \lambda_n|\lambda_m\rangle=\delta_{n,m}$, and
$\partial_{\phi}\rho_{\phi}=\frac{\partial\rho_{\phi}}{\partial\phi}=\sum_n(\partial_{\phi}\lambda_n|\lambda_n\rangle\langle\lambda_n|+\lambda_n|\partial_{\phi}\lambda_n\rangle\langle\lambda_n|+\lambda_n|\lambda_n\rangle\langle\partial_{\phi}\lambda_n|)$.
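As an illustration of this eigenbasis formula (a toy sketch with a hypothetical qubit model, not the interferometric state of this paper), consider a Bloch vector of length $r$ rotating in the equatorial plane; the known QFI of this family is $r^2$, which $\text{Tr}[\rho_{\phi}\hat{L}_{\phi}^2]$ built from the formula above reproduces:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)

def rho_qubit(phi, r):
    # Qubit with Bloch vector (r cos(phi), r sin(phi), 0)
    return 0.5 * (np.eye(2) + r * (np.cos(phi) * sx + np.sin(phi) * sy))

def sld_qfi(phi, r, h=1e-6):
    # Build the SLD from the eigenbasis formula and return Tr[rho L^2]
    p = rho_qubit(phi, r)
    dp = (rho_qubit(phi + h, r) - rho_qubit(phi - h, r)) / (2 * h)
    w, v = np.linalg.eigh(p)
    L = np.zeros((2, 2), complex)
    for n in range(2):
        for m in range(2):
            if w[n] + w[m] > 1e-12:
                elem = v[:, m].conj() @ dp @ v[:, n]  # <lam_m| d_phi rho |lam_n>
                L += 2 * elem / (w[n] + w[m]) * np.outer(v[:, m], v[:, n].conj())
    return np.trace(p @ L @ L).real
```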
\subsection{Equal losses in both arms}
Using the SLD formula, we derive the SLD of the output state $\rho_{\phi}$ as
\begin{eqnarray}
\hat{L}_{\phi}=A(|\lambda_-\rangle\langle \lambda_+|- |\lambda_+\rangle\langle \lambda_-|),
\end{eqnarray}
where
\begin{eqnarray}
&&A=\frac{iT(\alpha^2-\beta^2)(e^{-T(\beta-\alpha)^2}+e^{-R(\beta-\alpha)^2})}{(1+e^{-(\beta-\alpha)^2})\sqrt{1-e^{-2T(\beta-\alpha)^2}}},\\
&&|\lambda_{\pm}\rangle=\frac{(|\sqrt{T}\alpha e^{i\phi}\rangle_a|\sqrt{T}\beta\rangle_b\pm |\sqrt{T}\beta e^{i\phi}\rangle_a|\sqrt{T}\alpha\rangle_b)}{\sqrt{2(1\pm e^{-T(\beta-\alpha)^2})}}. \nonumber
\end{eqnarray}
One of the corresponding eigenbases is $|\lambda_+\rangle \pm i|\lambda_-\rangle$.
To attain the QFI, we need to perform a measurement in the eigenbasis $|\lambda_+\rangle \pm i|\lambda_-\rangle$,
which consists of $|\sqrt{T}\alpha e^{i\phi}\rangle_a|\sqrt{T}\beta\rangle_b$ and $|\sqrt{T}\beta e^{i\phi}\rangle_a|\sqrt{T}\alpha\rangle_b$. This requires a correlated detection setup which performs measurements either on the state $|\sqrt{T}\alpha e^{i\phi}\rangle_a|\sqrt{T}\beta\rangle_b$ or on the state
$|\sqrt{T}\beta e^{i\phi}\rangle_a|\sqrt{T}\alpha\rangle_b$. Each detection setup can be implemented with heterodyne (also called double homodyne) detection on each output mode. However, it is not known how to correlate the different detection setups, and we were not able to devise such a scheme either. We leave this for future work.
Thus, the optimal measurement basis of the output state is a correlated measurement basis which is not achieved with currently known detection schemes.
\subsection{Losses in one arm}
For photon loss in only one arm of the interferometer, the SLD is also given by Eq. (11), but with the components derived as
\begin{eqnarray}
&&A=\frac{-iT(\alpha^2-\beta^2)(e^{-\frac{1}{2}(1+T)(\beta-\alpha)^2}+e^{-\frac{R}{2}(\beta-\alpha)^2})}
{(1+e^{-(\beta-\alpha)^2})\sqrt{1-e^{-(1+T)(\beta-\alpha)^2}}},\nonumber\\
&&|\lambda_{\pm}\rangle=\frac{(|\sqrt{T}\beta e^{i\phi}\rangle_a|\alpha\rangle_b
\pm |\sqrt{T}\alpha e^{i\phi}\rangle_a|\beta\rangle_b)}{\sqrt{2(1\pm e^{-\frac{1}{2}(1+T)(\beta-\alpha)^2})}}.
\end{eqnarray}
We also need to perform a correlated measurement
which consists of the states $|\sqrt{T}\alpha e^{i\phi}\rangle_a|\beta\rangle_b$ and $|\sqrt{T}\beta e^{i\phi}\rangle_a|\alpha\rangle_b$. This requires a correlated detection setup which performs measurements either on the state $|\sqrt{T}\alpha e^{i\phi}\rangle_a|\beta\rangle_b$ or on the state
$|\sqrt{T}\beta e^{i\phi}\rangle_a|\alpha\rangle_b$.
Regarding feasible measurement setups, one may ask whether the classical Fisher information (CFI) obtained with photon number resolving detection (PNRD) can approach the quantum Fisher information bound in the presence of photon loss.
The CFI is given by $F(\phi)=\sum_{n_a,n_b}\frac{1}{P(n_a,n_b|\phi)}[\frac{\partial P(n_a,n_b|\phi)}{\partial \phi}]^2$, where $P(n_a,n_b|\phi)$ is a probability of detecting $n_a$ photons on mode $a$ and $n_b$ photons on mode $b$ for a given phase $\phi$.
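The CFI sum is straightforward to evaluate for any discrete outcome distribution. The sketch below (with a hypothetical two-outcome fringe model, not the PNRD statistics of our state) recovers the textbook value $F(\phi)=1$ for $P(0|\phi)=(1+\cos\phi)/2$:

```python
import numpy as np

def cfi(P, phi, h=1e-6):
    # F(phi) = sum_x (dP_x/dphi)^2 / P_x over outcomes with nonzero probability
    p = np.asarray(P(phi), float)
    dp = (np.asarray(P(phi + h), float) - np.asarray(P(phi - h), float)) / (2 * h)
    mask = p > 1e-12
    return float(np.sum(dp[mask] ** 2 / p[mask]))

def fringe(phi):
    # Hypothetical two-outcome interference model
    return [(1 + np.cos(phi)) / 2, (1 - np.cos(phi)) / 2]
```

In this toy model the CFI is independent of $\phi$; a $\phi$-dependent CFI, by contrast, signals that the QFI bound cannot be saturated at all phases.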
An example of the entangled coherent states, i.e., $|\alpha\rangle_a|0\rangle_b+|0\rangle_a|\alpha\rangle_b$, cannot attain the QFI bound with the CFI using the PNRD \cite{LLLN16}, since the CFI of this state depends on the phase parameter.
It is known that the quantum Fisher information of a single parameter is independent of the parameter \cite{Helstrom}.
Thus, in the presence of photon loss, we cannot attain the ultimate precision limit of the QFI with the CFI using the PNRD.
As another feasible measurement setup, we may also consider Gaussian measurement \cite{Gaussian19}, which is implemented with general-dyne detection. This will be handled in our next project.
\section{Summary and Discussion}
\begin{figure}
\centerline{\scalebox{0.31}{\includegraphics[angle=0]{diagram}}}
\vspace{-1.6in}
\caption{Economical point as a function of $R$, $\alpha$, and $\beta$.
Given a value of $\alpha$, we find the optimal value of $\beta$ in the range of $-\alpha\leq \beta< \alpha$ to maximize $F_Q/\langle \hat{n}_a\rangle$.
At $R\rightarrow 0$, the optimal value of $\beta$ goes to $-\alpha$ for small $\alpha$ and it becomes zero for large $\alpha$.
}
\label{fig:fig10}
\end{figure}
We investigated the optimal distance between the two components ($\alpha$ and $\beta$) of an entangled coherent state
$|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$ for quantum phase estimation in lossy interferometry. Defining the economical point as the quantum Fisher information over the mean photon number of input mode $a$, we found that the optimal distance between the two components gets smaller with the photon loss rate $R$. This indicates that we initially need to prepare a less entangled coherent state as $R$ increases.
We verified that the initially weakly entangled state can retain relatively more entanglement than the initially highly entangled state as $R$ increases.
At low photon loss rate ($R\rightarrow 0$), the optimal entangled state is close to
$|\alpha\rangle_a|-\alpha\rangle_b+|-\alpha\rangle_a|\alpha\rangle_b$ for small $\alpha$ and
$|\alpha\rangle_a|0\rangle_b+|0\rangle_a|\alpha\rangle_b$ for large $\alpha$.
This is summarized in Fig. 10.
We also showed that the optimal entangled coherent state preserves its quantum advantage in the presence of photon loss, surpassing the SIL of a separable coherent state $|\alpha\rangle_a|\alpha\rangle_b$.
Under a fixed input mean photon number, the optimal entangled coherent state is more resilient to photon loss than the separable coherent state, even in high photon loss rates.
Then we derived the corresponding optimal measurement, which is not a simple detection scheme but requires correlated measurement bases.
It is natural to consider the other type of entangled coherent states, i.e., $|\alpha\rangle_a|\beta\rangle_b-|\beta\rangle_a|\alpha\rangle_b$.
Since the state does not contain a vacuum contribution, it is less energy-efficient than $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$ in terms of the economical point.
Moreover, the state can exhibit even worse performance than the separable coherent state $|\alpha\rangle_a|\alpha\rangle_b$ under the constraint of the input mean photon number, as shown in Appendix A. That is why we only considered the state $|\alpha\rangle_a|\beta\rangle_b+|\beta\rangle_a|\alpha\rangle_b$ for lossy quantum-enhanced metrology.
As further work, we may consider the optimal entangled coherent states propagating in a turbulent atmosphere \cite{SV09,Bohmann17}.
The idea can also be exploited with microwave-optical photon pairs \cite{Stefano15}.
\begin{acknowledgments}
This work was supported by a grant to Quantum Frequency Conversion Project funded by Defense Acquisition Program Administration and Agency for Defense Development.
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
Given a data matrix composed of a set of objects in its rows and their corresponding attributes in its columns, biclustering is a data mining technique characterized by the simultaneous clustering of both rows and columns of the data matrix, aiming at revealing highly consistent patterns in sub-matrices \cite{ChengChurch2000}. The biclustering result cannot be achieved by two sequential clustering steps, and the internal consistency of each bicluster may involve more general affinity measures than those usually associated with conventional clustering approaches. In fact, a bicluster may be interpreted as a local model, clearly indicating which subset of attributes is responsible for keeping those objects together. As a consequence, when more flexible biclustering structures are considered, such as the ones admitting arbitrarily positioned and overlapping biclusters, as will be the case in this paper, any object or attribute of the data matrix may belong to none, one, or more than one of the obtained biclusters \cite{MadeiraOliveira2004}.
Besides those distinctive aspects when compared to conventional clustering, the biclustering problem has a strong connection with several other relevant problems in multivariate data analysis, including subspace clustering \cite{KriegelEtAl2009}, formal concept analysis (FCA) \cite{Ganter1997}, frequent pattern mining (FPM) \cite{ceglarEtAl2006, HippEtAl2000}, and graph theory. In the subspace clustering area, the biclustering problem is called \emph{pattern-based clustering} \cite{KriegelEtAl2009}. The problem of mining the \emph{concept lattice} (i.e., to enumerate all \emph{formal concepts}) from a \emph{formal context} is the same as enumerating all maximal biclusters of ones from a binary data matrix \cite{Kaytoue2011}, and it is the same as enumerating all \emph{maximal bicliques} from a \emph{bipartite graph}. Besides that, the \emph{intent} of a formal concept is the same as a \emph{closed itemset} \cite{LakhalStumme2005}. Several algorithms of FCA and FPM, which are restricted to binary datasets, such as In-Close2 \cite{Andrews2011} and Charm \cite{ZakiEtAL2002}, are characterized by exhibiting four key properties: (1) efficiency (take polynomial time per bicluster), (2) completeness (find all maximal biclusters), (3) correctness (all biclusters attend the user-defined measure of similarity), and (4) non-redundancy (all the obtained biclusters are maximal and the same bicluster is not enumerated twice). So, very powerful biclustering algorithms have already been proposed to deal with binary datasets.
Recently, Veroneze \emph{et al.} \cite{VeronezeEtAl2017} proposed a family of algorithms, called RIn-Close, also exhibiting those four key properties when enumerating biclusters directly in numerical (not only binary, but also integer or real-valued) data matrices. It may be considered a significant achievement, given that, before the RIn-Close family of algorithms, finding biclusters in numerical data matrices was accomplished by algorithms not exhibiting those four properties, or by discretizing and itemizing the numerical matrix, ultimately treating binary matrices, which implies information loss \cite{Besson2007,SrikantAgrawal1996}. Notice that the RIn-Close family of algorithms is capable of mining perfect and perturbed biclusters with constant values on rows (CVR) and constant values on columns (CVC), and also perfect biclusters with coherent values (CHV). There is also an algorithm to enumerate perturbed CHV biclusters, but in this case the algorithm can not be considered efficient, due to the necessity of dealing with expanded matrices and mining the CHV biclusters from CVC biclusters \cite{VeronezeEtAl2017}.
Motivated by the existence of relevant practical biclustering problems in which numerical (discrete or continuous) and categorical (ordinal or nominal) attributes are simultaneously present in the same dataset, we are going to propose here an extension of one of the RIn-Close algorithms to directly treat this kind of mixed-attribute dataset, retaining the four key properties and thus enlarging the applicability of the RIn-Close family. The authors are aware of the existence in the literature of alternative biclustering proposals to deal with mixed-attribute datasets (see Section~\ref{sec:relwork} for more details), but none of those existing proposals exhibits the four key properties or, when the four key properties are present, the numerical attributes must pass through discretization and itemization before the mining process, which inevitably promotes information loss. So, we are going to present in this paper the first enumerative biclustering algorithm with those four key properties to directly mine all maximal biclusters in a mixed-attribute dataset, without the necessity of any pre-processing step. It is worth anticipating that ($i$) according to the convenience of the user, pre-processing, such as normalization, scaling, or discretization of any attribute, is fully admissible, thus being optional, but not mandatory; ($ii$) fully numerical or even fully categorical datasets are special cases of mixed-attribute datasets, being promptly treatable by our new proposal as well.
In Section~\ref{sec:bic}, we will formally present the types of biclusters that can be mined from a data matrix. Most importantly, we will demonstrate that CVC biclusters are the only type of biclusters that makes immediate sense when mixed-attribute datasets are considered. Therefore, we first extend the definition of a CVC bicluster provided by \cite{VeronezeEtAl2017}, so that it works with numerical and/or categorical attributes. Subsequently, we will generalize RIn-Close\_CVC \cite{VeronezeEtAl2017} to enumerate all maximal CVC biclusters in mixed-attribute datasets. Even when handling a strictly numerical data matrix, the previous version of RIn-Close\_CVC requires normalization or scaling of the real-valued attributes, particularly when the ranges of the attributes are very different. The extended version of RIn-Close\_CVC, to be proposed here, makes this pre-processing step optional, in the sense that the final results will not be influenced by normalizing or scaling any attribute (column of the data matrix), provided the user has properly defined the consistency threshold for each numerical column. So, we also have a threshold to decide if a row or a column will enter a given bicluster, but here the threshold is applied over the original attribute values, without information loss.
An additional advantage of the extended version of RIn-Close\_CVC is the ability to directly handle missing values, without the necessity of performing an imputation step, as required by the original version of RIn-Close\_CVC \cite{VeronezeEtAl2017}.
Essentially, we are going to propose a general-purpose and low-cost enumerative algorithm devoted to biclustering mixed-attribute datasets, characterized by no information loss and no introduction of additional noise. Besides, the sparser the matrix, the faster the enumeration tends to be \cite{Veroneze2016}. This is because our proposal simply ignores the missing elements of the data matrix. Notice that this ability to deal with missing data can be easily incorporated to the other RIn-Close algorithms, as was done by Veroneze \cite{Veroneze2016}.
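The kind of per-column consistency test involved can be sketched as follows (a hypothetical illustration, not the actual RIn-Close\_CVC implementation): numerical columns must fit within a user-defined threshold $\epsilon_j$ on their observed range, categorical columns must agree exactly, and missing entries are simply ignored:

```python
def cvc_consistent(rows, eps):
    # rows: equal-length records mixing numbers, categories, and None (missing)
    # eps: per-column thresholds; eps[j] is None for categorical columns
    for j in range(len(rows[0])):
        vals = [r[j] for r in rows if r[j] is not None]
        if not vals:
            continue  # only missing entries: no constraint to violate
        if eps[j] is None:
            if len(set(vals)) > 1:  # categorical: observed values must coincide
                return False
        elif max(vals) - min(vals) > eps[j]:  # numerical: range within threshold
            return False
    return True
```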
Given that enumerative biclustering algorithms may return a huge amount of biclusters, part of them being useless or at least of low relevance, the contribution of this paper goes further. We have explored the strong connections between biclustering and FPM, borrowing metrics generally adopted to evaluate association rules (AR) \cite{AgrawalEtAl1993} and class association rules (CAR) \cite{LiuEtAl1998} to select a subset of relevant biclusters, where relevance may be associated with user-defined thresholds for these FPM metrics. The motivation for not adopting other internal evaluation metrics available in the literature, such as the \emph{Mean Squared Error} \cite{ChengChurch2000}, is the lack of consensus about which is the most indicated, particularly in the context of mixed-attribute datasets. We also have several proposals for external evaluation metrics \cite{HortaCampello2014}, but they require a reference solution, not attainable in real datasets.
Nonetheless, even using a filter based on FPM metrics, the number of biclusters may still be too large for a manual inspection of each bicluster. So, we will incorporate a simple greedy heuristic to select an even smaller subset of the enumerated biclusters to present to the user. This twice-reduced subset of biclusters keeps the same object coverage as the output of the aforementioned FPM relevance filter, aiming at preserving representativeness. These two filters are similar to the ones proposed by Veroneze \cite{Veroneze2016}. In that work, RIn-Close was adapted to mine only \emph{pure biclusters} from a labeled dataset, which are full-confidence biclusters, in the sense of being composed of objects sharing the same class label. Also, the greedy heuristic presented here prioritizes biclusters with small intents, being a slightly different version of the one proposed in \cite{Veroneze2016}.
Based on the results provided by this cascade of two filters, we will discuss the potential of the biclustering approach to provide interpretative models for labeled datasets. In fact, it is not easy for the user to properly interpret a biclustering solution. We argue that \emph{quantitative association rules} (QARs) \cite{zhu2009} and \emph{quantitative class association rules} (QCARs) are simple and interpretative formats to present biclusters to the user. For instance, using QCARs directly extracted from the biclusters, the user is informed about the attributes involved, their range of values and the associated class that is being represented.
The remainder of the paper is organized as follows. Section~\ref{sec:bic} introduces definitions and mathematical notation for biclustering, and also specifically for mixed-attribute biclustering. Section~\ref{sec:CAR} reviews some FPM definitions and metrics, and describes the two filters used in this work to select biclusters. Section~\ref{sec:relwork} is devoted to related works. The extended version of RIn-Close\_CVC is presented in Section~\ref{sec:rinclose}. Experimental results are discussed in Section~\ref{sec:exp}. Concluding remarks and further steps of the research are outlined in Section~\ref{sec:conclusion}.
\section{Biclustering}
\label{sec:bic}
The formalism used here to describe a bicluster and its variants is based on \cite{VeronezeEtAl2017}.
Let $\mathbf{A}_{n \times m}$ be a data matrix with the row index set $X = \left \{ 1, 2,..., n \right \}$ and the column index set $Y = \left \{ 1, 2, ...,m \right \}$. Each row represents an object, and each column represents an attribute. Each element $a_{ij} \in \mathbf{A}$ represents the relationship between object $i$ and attribute $j$. We use $(X,Y)$ to denote the entire matrix $\mathbf{A}$. Considering that $I \subseteq X$ and $J \subseteq Y$, $\mathbf{A}_{IJ} = (I, J)$ denotes the submatrix of $\mathbf{A}$ with the row index subset $I$ (named \emph{extent} in FCA) and column index subset $J$ (named \emph{intent} in FCA).
\begin{mydef}
A bicluster is a submatrix $(I,J)$ of the data matrix $\mathbf{A}_{n \times m}$ such that the rows in the index subset $I = \left \{ i_1,..., i_k \right \}$ ($I \subseteq X$ and $k \leq n$) exhibit a consistent pattern across the columns in the index subset $J = \left \{ j_1,..., j_s \right \}$ ($J \subseteq Y$ and $s \leq m$), and vice-versa.
\label{def:bic}
\end{mydef}
Thus, a bicluster $(I,J)$ is a $k \times s$ submatrix of the matrix $\mathbf{A}$, not necessarily with contiguous rows and columns, such that it meets a certain homogeneity criterion. A biclustering algorithm looks for a set of biclusters $\mathfrak{B} = (I_l, J_l)$, $l = 1, ..., q$, such that each bicluster $(I_l, J_l)$ satisfies some specific characteristics of homogeneity \cite{MadeiraOliveira2004}. Considering these characteristics, there are four major types of biclusters \cite{MadeiraOliveira2004}: ($i$) biclusters with constant values (CTV), ($ii$) biclusters with constant values on columns (CVC) or rows (CVR), ($iii$) biclusters with coherent values (CHV), and ($iv$) biclusters with coherent evolutions (CHE). There are many subtypes of CHE biclusters, and the order-preserving submatrix (OPSM) biclusters are the most famous among them. The total number of biclusters, $q$, will depend on the features of the selected biclustering algorithm, on the constraints imposed, and on the behaviour of the dataset being analysed.
\subsection{Types of Biclusters}
\label{sec:bicTypes}
Although perfect biclusters can be found in some data matrices, they are usually masked by noise in real datasets. Therefore, a user-defined parameter $\epsilon \geq 0$ determines the maximum residue (perturbation) allowed in a bicluster. Perfect biclusters are mined using $\epsilon = 0$, whereas perturbed biclusters are mined using $\epsilon > 0$. Dealing with numerical data matrices, the RIn-Close family of algorithms has specialized algorithms for mining perfect biclusters that are faster than the algorithms for mining perturbed biclusters. Fig.~\ref{fig:typesOfBic} shows examples of different types of numerical biclusters, in both perfect and perturbed cases.
\begin{figure*}
\centering
\subfigure[Perfect biclusters.]{
\includegraphics[trim=2.5cm 13cm 7.5cm 2cm, clip, scale=0.65]{typesOfBic1.pdf}
\label{fig:typesOfBica}
}
\subfigure[Perturbed biclusters.]{
\includegraphics[trim=2.5cm 13cm 7.4cm 2cm, clip, scale=0.65]{typesOfBic2.pdf}
\label{fig:typesOfBicb}
}
\caption{Examples of different types of biclusters (extracted from \cite{VeronezeEtAl2017}).}
\label{fig:typesOfBic}
\end{figure*}
\begin{mydef}[CTV biclusters]
A \emph{CTV bicluster} is a submatrix $(I, J)$ of a data matrix $\mathbf{A}_{n \times m}$ such that
\begin{equation}
\max_{i \in I, j \in J} (a_{ij}) - \min_{i \in I, j \in J} (a_{ij}) \leq \epsilon.
\label{eq:ctvbic}
\end{equation}
\label{def:ctvbic}
\end{mydef}
Fig.~\ref{fig:typesOfBica} shows an example of a perfect CTV bicluster, and Fig.~\ref{fig:typesOfBicb} shows an example of a perturbed CTV bicluster that can be mined using $\epsilon \geq 1$.
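As a minimal sketch (in Python, with an illustrative helper name and data of our own), Definition~\ref{def:ctvbic} can be checked directly on a given submatrix:

```python
def is_ctv_bicluster(A, I, J, eps=0.0):
    # CTV condition: the spread (max - min) over all entries of the
    # submatrix A[I, J] must not exceed the perturbation eps.
    values = [A[i][j] for i in I for j in J]
    return max(values) - min(values) <= eps

A = [[1.0, 1.0, 5.0],
     [1.0, 2.0, 2.0],
     [3.0, 4.0, 9.0]]
assert is_ctv_bicluster(A, [0, 1], [0, 1], eps=1.0)      # perturbed, eps = 1
assert not is_ctv_bicluster(A, [0, 1], [0, 1], eps=0.0)  # not perfect
```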
\begin{mydef}[CVC biclusters]
A \emph{CVC bicluster} is a submatrix $(I, J)$ such that
\begin{equation}
\max_{i \in I} (a_{ij}) - \min_{i \in I} (a_{ij}) \leq \epsilon, \forall j \in J.
\label{eq:cvcbic}
\end{equation}
\label{def:cvcbic}
\end{mydef}
Fig.~\ref{fig:typesOfBica} shows an example of a perfect CVC bicluster, and Fig.~\ref{fig:typesOfBicb} shows an example of a perturbed CVC bicluster that can be mined using $\epsilon \geq 1$.
The definition of a CVR bicluster is the equivalent transpose of the definition of a CVC bicluster. So, we can mine CVR biclusters by transposing the original data matrix and using an algorithm to mine CVC biclusters. See examples of CVR biclusters in Fig.~\ref{fig:typesOfBic}.
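Definition~\ref{def:cvcbic}, together with the transposition argument for CVR biclusters, can be sketched as follows (Python; helper names and data are illustrative):

```python
def is_cvc_bicluster(A, I, J, eps=0.0):
    # CVC condition: each column j in J may vary by at most eps
    # over the rows in I.
    return all(max(A[i][j] for i in I) - min(A[i][j] for i in I) <= eps
               for j in J)

def is_cvr_bicluster(A, I, J, eps=0.0):
    # CVR is the transposed case: check the CVC condition on the
    # transposed matrix, swapping the roles of I and J.
    At = [list(row) for row in zip(*A)]
    return is_cvc_bicluster(At, J, I, eps)

A = [[1.0, 7.0],
     [1.0, 7.5],
     [1.5, 7.2]]
assert is_cvc_bicluster(A, [0, 1, 2], [0, 1], eps=0.5)
assert not is_cvc_bicluster(A, [0, 1, 2], [0, 1], eps=0.0)
assert is_cvr_bicluster([[1, 1], [5, 5]], [0, 1], [0, 1], eps=0.0)
```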
There are two perspectives for CHV biclusters: (\textit{i}) the additive model, and (\textit{ii}) the multiplicative model (see Fig.~\ref{fig:typesOfBic}). Biclusters based on the additive model are called \textit{shifting biclusters}, and biclusters based on the multiplicative model are called \textit{scaling biclusters}. Any row (column) of a perfect shifting bicluster can be obtained by adding a constant value to any other row (column) of the bicluster. Similarly, any row (column) of a perfect scaling bicluster can be obtained by multiplying any other row (column) of the bicluster by a constant value. The problems of mining shifting and scaling biclusters are equivalent: using an algorithm to mine shifting (scaling) biclusters, we can mine scaling (shifting) biclusters by previously taking the logarithm (exponential) of all entries of the data matrix. Therefore, we provide only the definition of a shifting bicluster in this paper.
\begin{mydef}[CHV biclusters - additive model]
Let $Z^{jl} = \{a_{ij} - a_{il}\}_{i \in I}$, $j,l \in J$, be the set of values of the difference between two attributes for the subset of rows $I$. A \emph{shifting bicluster} is a submatrix $(I,J)$ such that
\begin{equation}
\max(Z^{jl}) - \min(Z^{jl}) \leq \epsilon, \forall j,l \in J.
\label{eq:chvabic}
\end{equation}
\label{def:chvabic}
\end{mydef}
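A direct check of Definition~\ref{def:chvabic} can be sketched as follows (Python; names and data are ours):

```python
def is_shifting_bicluster(A, I, J, eps=0.0):
    # For every pair of columns (j, l), the per-row differences
    # Z^{jl} = {a_ij - a_il : i in I} must span at most eps.
    for j in J:
        for l in J:
            Z = [A[i][j] - A[i][l] for i in I]
            if max(Z) - min(Z) > eps:
                return False
    return True

# Each row is a shifted copy of the others (perfect shifting bicluster).
A = [[1.0, 3.0, 6.0],
     [2.0, 4.0, 7.0],
     [0.0, 2.0, 5.0]]
assert is_shifting_bicluster(A, [0, 1, 2], [0, 1, 2], eps=0.0)
assert not is_shifting_bicluster([[1, 3], [2, 5]], [0, 1], [0, 1], eps=0.0)
```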
Algorithms for finding CHE (coherent evolution) biclusters address the problem of finding coherent evolutions across the rows and/or columns of the data matrix, regardless of their exact values \cite{MadeiraOliveira2004}. The OPSM biclusters are the most famous among the CHE biclusters.
\begin{mydef}[OPSM biclusters]
An \emph{OPSM bicluster} is a submatrix $(I, J)$ of a data matrix $\mathbf{A}_{n \times m}$ such that there is a permutation $P = \{p_1, p_2, ..., p_s\}$ of the set of columns $J$, where $a_{ip_1} \leq a_{ip_2} \leq ... \leq a_{ip_s}$, $\forall i\in I$.
\label{def:opsmbic}
\end{mydef}
Figure~\ref{fig:typesOfBic} shows an OPSM bicluster, where $P = \{4, 2, 3, 1, 5\}$.
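Checking Definition~\ref{def:opsmbic} does not require trying all permutations: a valid permutation exists iff the columns, restricted to the rows in $I$, are pairwise comparable componentwise, in which case sorting the columns as vectors yields one. A sketch (Python; illustrative names and data):

```python
def is_opsm_bicluster(A, I, J):
    # Sort the columns of J as vectors over the rows in I; if any
    # valid permutation exists, this sorted order is also valid.
    P = sorted(J, key=lambda j: tuple(A[i][j] for i in I))
    return all(A[i][P[k]] <= A[i][P[k + 1]]
               for i in I for k in range(len(P) - 1))

A = [[3, 1, 2],
     [6, 4, 5]]
assert is_opsm_bicluster(A, [0, 1], [0, 1, 2])       # permutation (1, 2, 0)
assert not is_opsm_bicluster([[1, 2], [2, 1]], [0, 1], [0, 1])
```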
\subsection{Maximality and Algebraic Properties}
\label{sec:bicMaxProp}
\begin{mydef}[Maximal bicluster]
Given the desired characteristics of homogeneity, a bicluster $(I,J)$ is called a \emph{maximal bicluster} if and only if:
\begin{itemize}
\item $\forall x \in X \setminus I$, $(I \cup \{x\}, J)$ is not a (valid) bicluster, and
\item $\forall y \in Y \setminus J$, $(I, J \cup \{y\})$ is not a (valid) bicluster.
\end{itemize}
\end{mydef}
\noindent This means that a bicluster is maximal if we cannot add any object/attribute without violating the desired characteristics of homogeneity. For instance, a CTV bicluster $(I,J)$ is called a maximal CTV bicluster iff:
\begin{itemize}
\item $\forall x \in X \setminus I$, $\max_{i \in I \cup \{x\}, j \in J} (a_{ij}) - \min_{i \in I \cup \{x\}, j \in J} (a_{ij}) > \epsilon$, and
\item $\forall y \in Y \setminus J$, $\max_{i \in I, j \in J \cup \{y\}} (a_{ij}) - \min_{i \in I, j \in J \cup \{y\}} (a_{ij}) > \epsilon$.
\end{itemize}
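A brute-force maximality check for CTV biclusters, mirroring the two conditions above, can be sketched as follows (Python; illustrative names and data):

```python
def is_maximal_ctv(A, I, J, eps=0.0):
    n, m = len(A), len(A[0])
    def ok(rows, cols):
        vals = [A[i][j] for i in rows for j in cols]
        return max(vals) - min(vals) <= eps
    if not ok(I, J):          # must be a valid bicluster in the first place
        return False
    # No outside row and no outside column can be added without
    # violating the homogeneity criterion.
    return (all(not ok(I + [x], J) for x in range(n) if x not in I) and
            all(not ok(I, J + [y]) for y in range(m) if y not in J))

A = [[1, 1, 9],
     [1, 1, 9],
     [5, 5, 9]]
assert is_maximal_ctv(A, [0, 1], [0, 1], eps=0.0)
assert not is_maximal_ctv(A, [0], [0, 1], eps=0.0)   # row 1 can still be added
```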
For all bicluster definitions in Subsection~\ref{sec:bicTypes}, we have the following algebraic properties.
\begin{property}[Anti-Monotonicity]
Let $(I,J)$ be a bicluster. Any submatrix $(I', J')$, where $I' \subseteq I$ and $J' \subseteq J$, is also a (valid) bicluster.
\end{property}
\begin{property}[Monotonicity]
Let $(I,J)$ be a maximal bicluster. Any supermatrix of $(I,J)$ is not a (valid) bicluster.
\end{property}
Efficient enumerative algorithms in the FCA and FPM areas are usually based on the monotonicity and anti-monotonicity properties \cite{Besson2007}. In fact, we are not aware of any efficient FCA / FPM algorithm that is not based on these properties. Notice that the RIn-Close family of algorithms also relies on both properties.
\subsection{Mixed-Attribute Biclustering}
A dataset may have \emph{numerical} and \emph{categorical} attributes. The numerical attributes can be \emph{discrete} (integer attributes) or \emph{continuous} (real-valued attributes). The categorical attributes can be \emph{ordinal} (attributes that have some kind of implicit or natural order) or \emph{nominal} (attributes that do not have an implicit or natural order or rank). Binary attributes can be seen as nominal attributes that can assume only two values, such as \emph{Yes} or \emph{No}.
\begin{mydef}
A \emph{mixed-attribute dataset} is a dataset that contains more than one attribute type associated with its columns.
\label{def:mixedData}
\end{mydef}
Table~\ref{tab:maDataEx1} shows an example of a mixed-attribute dataset. The attributes \emph{Sex}, \emph{Smoker}, and \emph{Religion} are nominal attributes, with \emph{Sex} and \emph{Smoker} being binary attributes. \emph{Social Class} is an ordinal attribute, where the label \emph{E} represents the poorest people, and the label \emph{A} represents the richest people. \emph{Age} is an integer attribute. \emph{Weight} and \emph{Height} are real-valued attributes.
\linespread{1}
\begin{table}[]
\footnotesize
\centering
\caption{Example of a mixed-attribute dataset, with two perturbed biclusters highlighted. There are 20 objects and 7 attributes.}
\label{tab:maDataEx1}
\begin{tabular}{cccrrclc}
\toprule
\textbf{\#} & \textbf{Sex} & \textbf{Age} & \textbf{Weight (kg)} & \textbf{Height (m)} & \textbf{Smoker} & \textbf{Religion} & \textbf{Social Class} \\
\midrule
1 & \colorbox[rgb]{0.7,0.7,0.7}{F} & 32 & 94.87 & 1.72 & Y & \colorbox[rgb]{0.7,0.7,0.7}{Christian} & \colorbox[rgb]{0.7,0.7,0.7}{C} \\
2 & F & 34 & 99.39 & 1.63 & N & Christian & D \\
3 & F & 33 & 124.15 & 1.66 & N & Hindu & C \\
4 & M & 52 & 49.77 & 1.71 & Y & Christian & E \\
5 & F & 57 & 65.13 & 1.80 & N & Hindu & C \\
6 & F & 39 & 58.71 & 1.74 & N & Buddhist & E \\
7 & \colorbox[rgb]{0.7,0.7,0.7}{F} & 39 & 67.41 & 1.56 & N & \colorbox[rgb]{0.7,0.7,0.7}{Christian} & \colorbox[rgb]{0.7,0.7,0.7}{C} \\
8 & \colorbox[rgb]{0.7,0.7,0.7}{F} & 47 & 67.19 & 1.79 & Y & \colorbox[rgb]{0.7,0.7,0.7}{Christian} & \colorbox[rgb]{0.7,0.7,0.7}{B} \\
9 & M & 58 & 42.95 & 1.48 & N & Christian & A \\
10 & \colorbox[rgb]{0.9,0.9,0.9}{M} & 17 & 109.52 & \colorbox[rgb]{0.9,0.9,0.9}{1.62} & \colorbox[rgb]{0.9,0.9,0.9}{N} & Christian & \colorbox[rgb]{0.9,0.9,0.9}{C} \\
11 & F & 42 & 91.12 & 1.76 & N & Buddhist & D \\
12 & F & 48 & 58.07 & 1.50 & N & Islamist & D \\
13 & \colorbox[rgb]{0.9,0.9,0.9}{M} & 43 & 46.69 & \colorbox[rgb]{0.9,0.9,0.9}{1.61} & \colorbox[rgb]{0.9,0.9,0.9}{N} & Hindu & \colorbox[rgb]{0.9,0.9,0.9}{B} \\
14 & \colorbox[rgb]{0.9,0.9,0.9}{M} & 55 & 85.38 & \colorbox[rgb]{0.9,0.9,0.9}{1.54} & \colorbox[rgb]{0.9,0.9,0.9}{N} & Islamist & \colorbox[rgb]{0.9,0.9,0.9}{C} \\
15 & M & 34 & 39.77 & 1.70 & N & Christian & B \\
16 & M & 34 & 83.90 & 1.74 & N & Islamist & D \\
17 & M & 51 & 55.72 & 1.93 & Y & Islamist & B \\
18 & \colorbox[rgb]{0.7,0.7,0.7}{F} & 47 & 57.10 & 1.51 & N & \colorbox[rgb]{0.7,0.7,0.7}{Christian} & \colorbox[rgb]{0.7,0.7,0.7}{C} \\
19 & M & 38 & 54.01 & 1.85 & Y & Islamist & C \\
20 & \colorbox[rgb]{0.9,0.9,0.9}{M} & 45 & 73.10 & \colorbox[rgb]{0.9,0.9,0.9}{1.59} & \colorbox[rgb]{0.9,0.9,0.9}{N} & Islamist & \colorbox[rgb]{0.9,0.9,0.9}{C} \\
\bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
Given the bicluster types presented in Subsection~\ref{sec:bicTypes}, we argue that only CVC biclusters make direct sense in mixed-attribute datasets - see, for instance, the two highlighted biclusters in Table~\ref{tab:maDataEx1}. Clearly, CVR, CTV, CHV and OPSM biclusters cannot be properly characterized with heterogeneous attributes, because attributes of a distinct nature cannot be directly related to each other, as required by those bicluster types.
However, Definition~\ref{def:cvcbic}, already provided to describe CVC biclusters, is specific to numerical datasets. Moreover, Definition~\ref{def:cvcbic} also assumes that all attributes take values in the same range, because the same value of $\epsilon$ is adopted for all attributes, which requires a normalization pre-processing step. Therefore, to account for mixed-attribute datasets, the definition of CVC biclusters must be generalized accordingly.
Notice that categorical attributes are discrete entities. The domain of a discrete attribute can be represented by a set of symbols. In the ordinal case, a set of integer values obeying a bijective mapping is a straightforward choice. Thus, we can use integer attributes instead of ordinal attributes without any loss of information. In the nominal case, a one-hot binary representation is generally adopted to impose the same Hamming distance between any pair of distinct values of that attribute. However, in the case of two categories, a single bit is enough. For instance, the values of the nominal attributes \emph{Sex} and \emph{Smoker} may each be represented by a single bit, while the values of the nominal attribute \emph{Religion} may be mapped to one-hot binary sequences. The values \emph{F} and \emph{M} of the \emph{Sex} attribute are mapped to 0 and 1, respectively. The values \emph{N} and \emph{Y} of the \emph{Smoker} attribute are mapped to 0 and 1, respectively. The values \emph{Christian}, \emph{Islamist}, \emph{Hindu}, and \emph{Buddhist} of the \emph{Religion} attribute are mapped to 1000, 0100, 0010 and 0001, respectively. Finally, the discrete values of the ordinal attribute \emph{Social Class}, \emph{A}, \emph{B}, \emph{C}, \emph{D}, and \emph{E}, are mapped to 1, 2, 3, 4 and 5, respectively.
Before generalizing the definition of a CVC bicluster, it is possible to further simplify the numerical representation of a nominal attribute. Given that a nominal attribute is discrete and finite, and supposing we are just interested in detecting whether attribute values are equal or not, we may convert each nominal value to a distinct integer, producing a more concise numerical representation than the one-hot binary one. We are aware that this representation imposes an arbitrary ordinal relation among the previous nominal attribute values, but this ordinal relation will not affect the results, being an internal manipulation transparent to the user. So, the values \emph{Christian}, \emph{Islamist}, \emph{Hindu}, and \emph{Buddhist} of the \emph{Religion} attribute can be mapped to 1, 2, 3 and 4, respectively. Table~\ref{tab:maDataEx2} contains only numbers and carries essentially the same information as Table~\ref{tab:maDataEx1}.
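The categorical-to-integer conversion just described can be sketched as follows (Python; the mapping dictionaries reflect the illustrative choices above, and the helper name is ours):

```python
# Mappings mirroring the ones used for the Religion and Social Class
# attributes (illustrative choices, not imposed by the method).
religion_map = {'Christian': 1, 'Islamist': 2, 'Hindu': 3, 'Buddhist': 4}
social_class_map = {'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5}

def encode_column(values, mapping=None):
    # Map a categorical column to integers. Without a user-supplied
    # mapping, codes are assigned in order of first appearance (nominal
    # case: the induced order is arbitrary and transparent to the user).
    if mapping is None:
        mapping = {}
        for v in values:
            mapping.setdefault(v, len(mapping) + 1)
    return [mapping[v] for v in values], mapping

codes, _ = encode_column(['Christian', 'Hindu', 'Christian'], religion_map)
assert codes == [1, 3, 1]
```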
\linespread{1}
\begin{table}[]
\footnotesize
\centering
\caption{Mixed-attribute dataset of Table~\ref{tab:maDataEx1} with its categorical attributes mapped to integer and binary representations. The two highlighted biclusters here are equivalent to the two highlighted biclusters in Table~\ref{tab:maDataEx1}.}
\label{tab:maDataEx2}
\begin{tabular}{cccrrclc}
\toprule
\textbf{\#} & \textbf{Sex} & \textbf{Age} & \textbf{Weight (kg)} & \textbf{Height (m)} & \textbf{Smoker} & \textbf{Religion} & \textbf{Social Class} \\
\midrule
1 & \colorbox[rgb]{0.7,0.7,0.7}{0} & 32 & 94.87 & 1.72 & 1 & \colorbox[rgb]{0.7,0.7,0.7}{1} & \colorbox[rgb]{0.7,0.7,0.7}{3} \\
2 & 0 & 34 & 99.39 & 1.63 & 0 & 1 & 4 \\
3 & 0 & 33 & 124.15 & 1.66 & 0 & 3 & 3 \\
4 & 1 & 52 & 49.77 & 1.71 & 1 & 1 & 5 \\
5 & 0 & 57 & 65.13 & 1.80 & 0 & 3 & 3 \\
6 & 0 & 39 & 58.71 & 1.74 & 0 & 4 & 5 \\
7 & \colorbox[rgb]{0.7,0.7,0.7}{0} & 39 & 67.41 & 1.56 & 0 & \colorbox[rgb]{0.7,0.7,0.7}{1} & \colorbox[rgb]{0.7,0.7,0.7}{3} \\
8 & \colorbox[rgb]{0.7,0.7,0.7}{0} & 47 & 67.19 & 1.79 & 1 & \colorbox[rgb]{0.7,0.7,0.7}{1} & \colorbox[rgb]{0.7,0.7,0.7}{2} \\
9 & 1 & 58 & 42.95 & 1.48 & 0 & 1 & 1 \\
10 & \colorbox[rgb]{0.9,0.9,0.9}{1} & 17 & 109.52 & \colorbox[rgb]{0.9,0.9,0.9}{1.62} & \colorbox[rgb]{0.9,0.9,0.9}{0} & 1 & \colorbox[rgb]{0.9,0.9,0.9}{3} \\
11 & 0 & 42 & 91.12 & 1.76 & 0 & 4 & 4 \\
12 & 0 & 48 & 58.07 & 1.50 & 0 & 2 & 4 \\
13 & \colorbox[rgb]{0.9,0.9,0.9}{1} & 43 & 46.69 & \colorbox[rgb]{0.9,0.9,0.9}{1.61} & \colorbox[rgb]{0.9,0.9,0.9}{0} & 3 & \colorbox[rgb]{0.9,0.9,0.9}{2} \\
14 & \colorbox[rgb]{0.9,0.9,0.9}{1} & 55 & 85.38 & \colorbox[rgb]{0.9,0.9,0.9}{1.54} & \colorbox[rgb]{0.9,0.9,0.9}{0} & 2 & \colorbox[rgb]{0.9,0.9,0.9}{3} \\
15 & 1 & 34 & 39.77 & 1.70 & 0 & 1 & 2 \\
16 & 1 & 34 & 83.90 & 1.74 & 0 & 2 & 4 \\
17 & 1 & 51 & 55.72 & 1.93 & 1 & 2 & 2 \\
18 & \colorbox[rgb]{0.7,0.7,0.7}{0} & 47 & 57.10 & 1.51 & 0 & \colorbox[rgb]{0.7,0.7,0.7}{1} & \colorbox[rgb]{0.7,0.7,0.7}{3} \\
19 & 1 & 38 & 54.01 & 1.85 & 1 & 2 & 3 \\
20 & \colorbox[rgb]{0.9,0.9,0.9}{1} & 45 & 73.10 & \colorbox[rgb]{0.9,0.9,0.9}{1.59} & \colorbox[rgb]{0.9,0.9,0.9}{0} & 2 & \colorbox[rgb]{0.9,0.9,0.9}{3} \\
\bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
Now we are ready to propose a simple generalization of Definition~\ref{def:cvcbic}, associated with CVC biclusters, to allow an immediate manipulation of mixed-attribute datasets. We use one particular $\epsilon$ per column: whenever a nominal attribute occupies a specific column of the mixed-attribute dataset, its $\epsilon$ should be set to zero. On the other hand, for categorical attributes exhibiting an ordinal relation, a suitable integer value should be adopted for $\epsilon$ (it will depend on what the user accepts as being part of the same group).
\begin{mydef}[CVC biclusters]
A \emph{CVC bicluster} is a submatrix $(I, J)$ such that
\begin{equation}
\max_{i \in I} (a_{ij}) - \min_{i \in I} (a_{ij}) \leq \epsilon_j, \forall j \in J,
\label{eq:cvcbic2}
\end{equation}
\noindent where $\epsilon_j$ is the user-defined maximum allowed perturbation for attribute $j$.
\label{def:cvcbic2}
\end{mydef}
The two highlighted biclusters in Table~\ref{tab:maDataEx2} are the same as the two highlighted biclusters in Table~\ref{tab:maDataEx1}, considering the numerical conversion defined in this subsection. Therefore, this new definition of CVC biclusters fully meets the requirements for mining biclusters in mixed-data matrices.
Remarkably, this new definition of CVC biclusters also satisfies the monotonicity and anti-monotonicity properties, which is fundamental for enumerative algorithms (as pointed out in \cite{VeronezeEtAl2017}). This new definition can also be used to mine biclusters in data matrices formed solely by numerical attributes: for each numerical attribute $j$, $\epsilon_j$ will reflect the range of values assumed by that attribute. Clearly, it can also be used to mine biclusters in data matrices formed solely by categorical attributes.
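A direct check of Definition~\ref{def:cvcbic2} can be sketched as follows (Python; illustrative names). The example reproduces the lighter-shaded bicluster of Table~\ref{tab:maDataEx2} (rows 10, 13, 14 and 20; attributes \emph{Sex}, \emph{Height}, \emph{Smoker} and \emph{Social Class}):

```python
def is_cvc_bicluster_mixed(A, I, J, eps):
    # eps[j] is the per-attribute tolerance: 0 forces equality for
    # nominal attributes; numerical/ordinal ones may vary within eps[j].
    return all(max(A[i][j] for i in I) - min(A[i][j] for i in I) <= eps[j]
               for j in J)

# Rows 10, 13, 14, 20 of Table 2; columns Sex, Height, Smoker, Social Class.
A = [[1, 1.62, 0, 3],
     [1, 1.61, 0, 2],
     [1, 1.54, 0, 3],
     [1, 1.59, 0, 3]]
eps = [0, 0.1, 0, 1]
assert is_cvc_bicluster_mixed(A, [0, 1, 2, 3], [0, 1, 2, 3], eps)
# A tighter tolerance on Height rejects the same submatrix.
assert not is_cvc_bicluster_mixed(A, [0, 1, 2, 3], [0, 1, 2, 3],
                                  [0, 0.05, 0, 1])
```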
\section{Class Association Rules}
\label{sec:CAR}
The concepts and metrics for traditional \emph{association mining} \cite{ceglarEtAl2006} are based on binary datasets. So, we will first provide the main concepts and metrics based on the binary case and, after that, we will generalize them to mixed-attribute datasets.
Let $\mathbf{A}_{n \times m}$ be a binary matrix with the row index set $X = \left \{ 1, 2,..., n \right \}$ and the column index set $Y = \left \{ 1, 2, ...,m \right \}$. Each row represents an object, and each column represents an attribute (or \emph{item}, which is the name commonly used in FPM).
\begin{mydef}
A subset $J = \left \{ j_1,..., j_s \right \} \subseteq Y$ is called an \emph{itemset}.
\end{mydef}
Let $I$ be the set of objects that are common to all the items in the itemset $J$, which is given by:
\begin{equation}
I = \{i \in X \,|\, a_{ij} = 1, \; \forall j \in J\}.
\end{equation}
\noindent Notice that the pair $(I,J)$ is a CTV bicluster of 1s.
The \emph{support} of an itemset $J$ is given by
\begin{equation}
sup(J) = |I|,
\label{eq:support}
\end{equation}
\noindent where $|\zeta|$ is the number of elements in the set $\zeta$. The \emph{relative support} of an itemset $J$ is given by
\begin{equation}
rsup(J) = \frac{sup(J)}{n}.
\end{equation}
\begin{mydef}
\noindent An itemset $J$ is a \emph{frequent itemset} if $sup(J) \geq mR$, where $mR$ is a user-defined threshold.
\end{mydef}
\begin{mydef}
\noindent An \emph{association rule} (AR) is an expression in the form $J \Rightarrow H$, where $J$ and $H$ are itemsets and $J \cap H = \emptyset$. $J$ is called the \emph{body} or \emph{antecedent}, and $H$ is called the \emph{head} or \emph{consequent} of the rule.
\end{mydef}
Let us assume that the objects of the data matrix $\mathbf{A}$ are labeled, let $C = \{c_1, c_2, ..., c_k\}$ be the set of possible class labels of the objects, and let $c \in C$.
\begin{mydef}
A \emph{class association rule} (CAR) is an expression of the form $J \Rightarrow c$, where $J$ is an itemset and $c$ is a class label.
\end{mydef}
As we are still only talking about the binary case, a CAR of the type $J \Rightarrow c$ means that the presence of the attributes in $J$ implies class label $c$. For instance, let $\mathbf{A}$ be a matrix whose objects are patients and whose attributes are symptoms. Let the set of class labels $C$ represent some diseases. Let the itemset $J$ represent the symptoms $\{fever, nausea, lumbarPain, urethraBurning\}$, and let the class label $c$ represent the disease \emph{Nephritis}. Thus, $J \Rightarrow c$ ($\{fever, nausea, lumbarPain, urethraBurning\} \Rightarrow Nephritis$) means that if a patient has fever, nausea, lumbar pain and urethra burning, then the patient has Nephritis, with a certain confidence.
Let $c_{I}$ be the set of objects from the data matrix $\mathbf{A}$ with class label $c$, i.e, $c_{I} = \{i \in X | label(i) = c\}$, where $label(.)$ is a function that returns the class label of an object. The \emph{support} of a class label $c$ is given by $sup(c) = |c_{I}|$.
The \emph{support} of a CAR of the type $J \Rightarrow c$ is given by:
\begin{equation}
sup(J \Rightarrow c) = |I \cap c_{I}|.
\end{equation}
\noindent Thus, the \emph{relative support} of a CAR of the type $J \Rightarrow c$ provides an estimate of the probability of the joint occurrence of itemset $J$ and class label $c$:
\begin{equation}
rsup(J \Rightarrow c) = \frac{sup(J \Rightarrow c)}{n} = P(J \wedge c).
\end{equation}
The \emph{confidence} of a CAR of the type $J \Rightarrow c$ is the conditional probability that an object belongs to the class $c$ given that it contains the itemset $J$:
\begin{equation}
conf(J \Rightarrow c) = P(c|J) = \frac{P(J \wedge c)}{P(J)} = \frac{rsup(J \Rightarrow c)}{rsup(J)} = \frac{sup(J \Rightarrow c)}{sup(J)}.
\end{equation}
The \emph{completeness} of a CAR of the type $J \Rightarrow c$ is given by
\begin{equation}
comp(J \Rightarrow c) = P(J|c) = \frac{P(J \wedge c)}{P(c)} = \frac{rsup(J \Rightarrow c)}{rsup(c)} = \frac{sup(J \Rightarrow c)}{sup(c)}.
\end{equation}
\noindent Thus, completeness is the proportion of instances that are predicted by a CAR of the type $J \Rightarrow c$, while confidence is the fraction of correct predictions made by this CAR.
\emph{Lift} is defined as the ratio of the observed joint probability of $J$ and $c$ to the expected joint probability if they were statistically independent, that is,
\begin{equation}
lift(J \Rightarrow c) = \frac{P(J \wedge c)}{P(J)P(c)} = \frac{rsup(J \Rightarrow c)}{rsup(J)rsup(c)} = \frac{conf(J \Rightarrow c)}{rsup(c)}.
\end{equation}
\noindent One common use of lift is to measure the degree of surprise of a rule. A lift value close to 1 means that the support of a rule is expected considering the support of its components. We usually look for values that are much larger than 1 (i.e., above expectation) or smaller than 1 (i.e., below expectation). Notice that lift is always larger than or equal to the confidence because it is the confidence divided by the consequent's probability.
\emph{Leverage} measures the difference between the observed and expected joint probability of $J$ and $c$ assuming they are independent, that is,
\begin{equation}
leverage(J \Rightarrow c) = P(J \wedge c) - P(J)P(c) = rsup(J \Rightarrow c) - rsup(J)rsup(c).
\end{equation}
\noindent Leverage gives an ``absolute'' measure of how surprising a rule is. If two rules have the same confidence and lift, the leverage metric indicates which one is stronger.
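The metrics above can be computed from the extent $I$ of an itemset $J$ and the vector of class labels (Python sketch; the function name and data are ours):

```python
def car_metrics(labels, I, c):
    # Metrics of the CAR J => c, given the extent I of itemset J
    # and the class labels of all n objects.
    n = len(labels)
    c_I = {i for i in range(n) if labels[i] == c}
    sup_rule = len(set(I) & c_I)
    rsup_rule, rsup_J, rsup_c = sup_rule / n, len(I) / n, len(c_I) / n
    return {
        'sup': sup_rule,
        'conf': sup_rule / len(I),        # fraction of correct predictions
        'comp': sup_rule / len(c_I),      # fraction of class c covered
        'lift': rsup_rule / (rsup_J * rsup_c),
        'leverage': rsup_rule - rsup_J * rsup_c,
    }

labels = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]   # n = 10, sup(c = 0) = 6
m = car_metrics(labels, I=[0, 1, 2, 6], c=0)
assert m['sup'] == 3 and abs(m['conf'] - 0.75) < 1e-9
assert abs(m['comp'] - 0.5) < 1e-9
assert abs(m['lift'] - 1.25) < 1e-9 and abs(m['leverage'] - 0.06) < 1e-9
```

Note that, as stated in the text, the computed lift ($1.25$) is indeed no smaller than the confidence ($0.75$).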
\subsection{Quantitative Class Association Rules}
\label{ssec:QCAR}
Now, let us think about more general cases where we do not consider only binary attributes. In the literature, these rules are called \emph{quantitative association rules} (QAR) \cite{SrikantAgrawal1996, zhu2009}. In the binary case, we omit the domain of interest of the attributes because they are always equal to 1. For instance, in the itemset $\{fever, nausea, lumbarPain, urethraBurning\}$, we are assuming that our objects of interest are the ones exhibiting these symptoms. So, we could rewrite this itemset as $\{fever\{1\}, nausea\{1\}, lumbarPain\{1\}, urethraBurning\{1\}\}$. In more general cases, each rule must always indicate the range of values of its attributes. For instance, the \emph{quantitative-itemset} $\{Sex\{M\}, Height[1.54,1.62], Smoker\{N\}, SocialClass\{B,C\}\}$ refers to the objects of the dataset that have the attribute \emph{Sex} equal to \emph{M}, the attribute \emph{Height} in the interval $[1.54,1.62]$, the attribute \emph{Smoker} equal to \emph{N}, and the attribute \emph{Social Class} equal to \emph{B} or \emph{C}. Notice that this information is provided by one of the biclusters highlighted in Table~\ref{tab:maDataEx1}: the one composed of the rows \{10, 13, 14, 20\} and attributes \{Sex, Height, Smoker, SocialClass\}.
\begin{mydef}
A \emph{quantitative-itemset} $\mathfrak{J}$ is a set of attributes and their domain of interest, i.e., $\mathfrak{J} = \{j_1 \in D_1, j_2 \in D_2, ..., j_s \in D_s \}$, where $j_\mathfrak{i}$ is an attribute and $D_\mathfrak{i}$ is its domain of interest. If $j_\mathfrak{i}$ is a discrete attribute, $D_\mathfrak{i}$ is a finite set of values; if $j_\mathfrak{i}$ is a continuous attribute, $D_\mathfrak{i}$ is an interval.
\end{mydef}
\begin{mydef}
A \emph{quantitative association rule} (QAR) is an expression of the form $\mathfrak{J} \Rightarrow \mathfrak{H}$, where $\mathfrak{J}$ and $\mathfrak{H}$ are quantitative-itemsets, and the intersection between the attributes of $\mathfrak{J}$ and $\mathfrak{H}$ is empty.
\label{def:qar}
\end{mydef}
\begin{mydef}
A \emph{quantitative class association rule} (QCAR) is an expression of the form $\mathfrak{J} \Rightarrow c$, where $\mathfrak{J}$ is a quantitative-itemset and $c$ is a class label.
\end{mydef}
Thus, a quantitative-itemset is simply a generalization of an itemset, as well as a QAR and a QCAR are generalizations of an AR and a CAR, respectively.
Notice that $\mathbf{A}_{n \times m}$ is now a mixed-data matrix, with each column admitting only numerical or only categorical values. Let $I$ be the set of objects that meets the requirement imposed by the quantitative-itemset $\mathfrak{J}$:
\begin{equation}
I = \{i \in X \,|\, a_{ij} \in D, \; \forall (j \in D) \in \mathfrak{J}\}.
\end{equation}
The \emph{support} of $\mathfrak{J}$ is given by
\begin{equation}
sup(\mathfrak{J}) = |I|,
\end{equation}
\noindent and it follows that all other metrics previously presented for CARs will be calculated in the same way for QCARs.
Let $J$ be equal to the column indexes of the attributes in $\mathfrak{J}$. Thus, the pair $(I,J)$ is a CVC bicluster. So, a CVC bicluster $(I,J)$ provides all the necessary information to build a quantitative-itemset (that is a component of a QAR, or the antecedent of a QCAR), and vice versa. Notice also that a CVC bicluster is a generalization of a CTV bicluster.
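The correspondence between a CVC bicluster and a quantitative-itemset can be sketched as follows (Python; the per-column type flag \texttt{discrete} and the helper name are ours). The example reproduces the quantitative-itemset discussed above, built from the bicluster with rows \{10, 13, 14, 20\} of Table~\ref{tab:maDataEx2}:

```python
def bicluster_to_qitemset(A, I, J, discrete):
    # Quantitative-itemset of a CVC bicluster: a discrete attribute gets
    # its set of observed values, a continuous one its [min, max] interval.
    q = {}
    for j in J:
        col = [A[i][j] for i in I]
        q[j] = sorted(set(col)) if discrete[j] else (min(col), max(col))
    return q

# Columns Sex, Height, Smoker, Social Class (Height is continuous).
A = [[1, 1.62, 0, 3],
     [1, 1.61, 0, 2],
     [1, 1.54, 0, 3],
     [1, 1.59, 0, 3]]
q = bicluster_to_qitemset(A, [0, 1, 2, 3], [0, 1, 2, 3],
                          discrete={0: True, 1: False, 2: True, 3: True})
assert q == {0: [1], 1: (1.54, 1.62), 2: [0], 3: [2, 3]}
```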
It is important to highlight that quantitative association rules are generally divided into two classes in the literature: frequent rules and distributional rules \cite{zhu2009}. Our definitions are based on \emph{frequent rules} because it is the case that has a direct relation with the biclustering problem. See \cite{zhu2009} for more details about distributional rules.
\subsection{Filters to select significant biclusters from the enumerative solution}
Enumerative algorithms generally return a huge amount of biclusters. In this paper, we are proposing two filters to select a reduced set of significant biclusters from the enumerative solution.
The first filter (\emph{1st-Filter}) is based on FPM metrics that measure the quality of a rule. To mine the biclusters, the user must set the parameter $mR$ in RIn-Close, which is equivalent to the minimum \emph{support} of an itemset (or rule). To discard mined biclusters of low relevance, we rely upon the metrics \emph{confidence} and \emph{lift}: confidence informs us of the fraction of correct predictions, and lift measures the degree of surprise of the rule. Thus, the first step to implement this filter is to build QCARs from the enumerated biclusters. The second and final step is to select the biclusters whose QCARs meet the user-defined thresholds for these two metrics.
A bicluster may contain objects belonging to more than one class in its extent, but we assume that each bicluster represents only one class: the one that holds the majority of the objects in the bicluster's extent. For instance, let the bicluster's extent be $I = \{4, 8, 14, 15, 17\}$ and the class labels of these objects be, respectively, $\{0, 0, 1, 0, 1\}$. Then, the class label represented by the bicluster is $0$. For each enumerated bicluster, we build a QCAR based only on the class label that it represents. This rule has the highest confidence among the alternative QCARs that could be built from this bicluster.
The number of biclusters selected by the \emph{1st-Filter} can still be large, so we propose a greedy heuristic as a second filter (\emph{2nd-Filter}). Its goal is to select a very small set of representative biclusters, thus allowing manual inspection. The selected biclusters are exhibited in the form of QCARs, which are highly interpretable.
Algorithm~\ref{alg:greedyHeur} shows the pseudocode of the greedy heuristic. To compute the row-coverage of a bicluster, we consider only the objects belonging to the class represented by that bicluster. For instance, in the previous example, the rows covered by the bicluster considering the represented class are 4, 8, and 15. The row-coverage of a biclustering solution $\mathfrak{B}$ is the union of the row-coverages of its biclusters. The final set of chosen biclusters will have the same row-coverage as the filter input. When more than one bicluster provides the same increase in the row-coverage of $\mathfrak{B}'$, the bicluster with the smallest intent is prioritized. For each class, the bicluster with the highest \emph{completeness} will always be selected by this heuristic.
\linespread{1}
\begin{algorithm}
\caption{Greedy Heuristic (2nd-Filter)}
\label{alg:greedyHeur}
\begin{algorithmic}[1]
\small
\REQUIRE Biclustering solution $\mathfrak{B}$
\ENSURE Biclustering solution $\mathfrak{B}'$
\STATE $cov \leftarrow$ row-coverage of $\mathfrak{B}$
\STATE $aux \leftarrow 0$
\WHILE{$aux < cov$}
\STATE Select the bicluster $(I,J)$ from $\mathfrak{B}$ that maximizes the row-coverage of $\mathfrak{B}'$ \COMMENT{In the case of a tie, choose the bicluster with the smallest intent}
\STATE Insert $(I,J)$ in $\mathfrak{B}'$
\STATE Remove $(I,J)$ from $\mathfrak{B}$
\STATE $aux \leftarrow$ row-coverage of $\mathfrak{B}'$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\linespread{1.5}
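Algorithm~\ref{alg:greedyHeur} can be sketched as follows (Python; data and helper names are illustrative, and the tie-breaking rule follows the text):

```python
from collections import Counter

def greedy_filter(biclusters, labels):
    # Each bicluster is a pair (I, J); its row-coverage counts only
    # objects of its represented (majority) class.
    def class_rows(I):
        c = Counter(labels[i] for i in I).most_common(1)[0][0]
        return {i for i in I if labels[i] == c}
    remaining = list(biclusters)
    target = set().union(*(class_rows(I) for I, _ in remaining))
    covered, chosen = set(), []
    while covered != target:
        # Largest coverage gain wins; ties go to the smallest intent.
        best = max(remaining,
                   key=lambda b: (len(class_rows(b[0]) - covered),
                                  -len(b[1])))
        chosen.append(best)
        remaining.remove(best)
        covered |= class_rows(best[0])
    return chosen

labels = [0, 0, 0, 0, 1]
bics = [([0, 1, 2], [0, 1]), ([2, 3], [0, 1, 2]), ([3], [0])]
chosen = greedy_filter(bics, labels)
# The tie between the last two biclusters is broken by intent size.
assert chosen == [([0, 1, 2], [0, 1]), ([3], [0])]
```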
According to Xin \emph{et al.} \cite{XinEtAl2006}, a useful compact pattern set should simultaneously exhibit high significance and low redundancy. These two goals are achieved by our cascade of filters. By means of the \emph{1st-Filter}, we select only significant biclusters. And, by means of the \emph{2nd-Filter}, which locally maximizes the row-coverage, we select a smaller set of biclusters with low redundancy, while keeping the same row-coverage as the filter input.
\section{Related Work}
\label{sec:relwork}
In \cite{VandrommeEtAL2016}, the authors presented a biclustering method designed to handle mixed-attribute datasets. This method uses a pre-processing step to simplify the data by means of discretization, and a constructive greedy heuristic that builds the biclusters by iteratively adding columns. Their goal was, as expected, to detect CVC biclusters. According to Vandromme \emph{et al.} \cite{VandrommeEtAL2016}, this was the first method to handle mixed-attribute datasets in the biclustering literature.
A core observation here is that, once we discretize the numerical attributes, we are no longer looking for biclusters in a mixed-data matrix. Each real-valued attribute is mapped to a discrete attribute, and even integer attributes with many distinct values are mapped to a smaller set of discrete values. So, after pre-processing, any biclustering algorithm that handles discrete matrices can be used. Discretization may simplify the biclustering task, but it implies loss of information, and there is no control over the overall effect of the discretization step on the final results.
In fact, once discretization is imposed, there are better proposals than \cite{VandrommeEtAL2016}, especially when we consider the connection between biclustering, FPM, and FCA. Notice that we can extract CVC biclusters from quantitative itemsets (and vice-versa), and (quantitative) association rules are mined from (quantitative) frequent itemsets. Veroneze \emph{et al.} \cite{VeronezeEtAl2017} also showed that well-known heuristic-based biclustering algorithms can perform poorly when trying to identify the existing biclusters even in a simple and controlled scenario, thus fully favouring the use of efficient enumerative algorithms, such as the ones provided in the FPM and FCA literature and the RIn-Close family.
An approach to mine biclusters from non-binary datasets using traditional FPM and FCA algorithms devoted to binary datasets (such as Apriori \cite{AgrawalSrikant1994}, Charm \cite{ZakiEtAL2002}, or In-Close2 \cite{Andrews2011}) is (1) to discretize the dataset, and (2) to itemize the discrete dataset. Notice that, in the binary case, each dataset attribute is an item. Basically, the itemization (the second step of this approach) consists of creating a binary dataset from a discrete dataset, without information loss; the first step, in contrast, necessarily involves some kind of information loss. An item here is a pair $<att,v>$, where $att$ is an attribute (of the original dataset), and $v$ is a discretized value. So, we have as many items as the number of pairs $<att,v>$. Thus, there is a trade-off between faster execution time, with fewer discretized values, and reduced information loss, with more discretized values. Therefore, depending on the nature of the dataset, the user is not totally free to choose the granularity of the discretization.
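A minimal sketch of this discretize-then-itemize pipeline, in Python (our own illustration; the helper names are hypothetical and not taken from any cited proposal):

```python
def equi_width_bins(values, width):
    """Equi-width discretization: map each value to an integer bin index."""
    lo = min(values)
    return [int((v - lo) // width) for v in values]

def itemize(discrete_rows):
    """Itemization: turn each row of a discrete matrix into a transaction of
    <att, v> items, one item per (attribute, discretized value) pair."""
    return [{(att, row[att]) for att in range(len(row))} for row in discrete_rows]
```

After `itemize`, any binary FPM or FCA algorithm can be applied to the transactions; the number of distinct items equals the number of observed $<att,v>$ pairs.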
As far as we know, Srikant \& Agrawal \cite{SrikantAgrawal1996} were the first to address the problem of mining quantitative association rules in mixed-attribute datasets. Their proposal discretizes the numerical (quantitative) attributes into partitions or intervals, using equi-depth partitioning. The authors proposed a metric, called partial completeness, to estimate the information loss and help the user choose the number of intervals. Notice that if a numerical attribute has few distinct values, it does not need to be partitioned into intervals. After the partitioning into intervals, all the attributes (numerical and categorical) are mapped to positive integers. The next step is the itemization. To solve the problem of not finding rules due to the minimum support, the proposal combines adjacent intervals (or values) of quantitative attributes. Therefore, instead of using a pair $<att,v>$ for an item, the proposal uses a triplet $<att,l,u>$, where $l$ is the lower bound and $u$ is the upper bound, with $l = u$ if the attribute is categorical. A single element of the original matrix can be assigned to more than one item (which is similar to the multiple item assignments used in BicPAM \cite{HenriquesMadeira2014}). As a consequence, the computational cost tends to increase, and redundant frequent itemsets (and, consequently, redundant rules) tend to occur. To make the demand for computational resources and the degree of redundancy even worse, the proposal adopts an Apriori-based algorithm to mine the frequent itemsets (Apriori-based algorithms mine all frequent itemsets, not only the closed frequent itemsets).
Let us emphasize that the set of closed frequent itemsets uniquely determines the exact frequency of all frequent itemsets, and it can be orders of magnitude smaller than the set of all frequent itemsets \cite{ZakiEtAL2002}. Moreover, the usage of closed frequent itemsets instead of frequent itemsets drastically reduces the number of rules that have to be presented to the user, without any information loss \cite{LakhalStumme2005}. Charm \cite{ZakiEtAL2002} is an example of an FPM algorithm that mines closed frequent itemsets.
Garcia \emph{et al.} \cite{GarciaEtAl2010} proposed a multivariate discretization algorithm based on clustering, called \emph{Clustering Based Discretization} (CBD). Only the attributes with higher values of purity (a measure that informs how well the attribute discriminates the classes) are used. So, CBD considers class labels in its discretization routine, making it suitable only for labeled datasets. After the conversion of continuous attributes to discrete ones, the dataset is itemized and a traditional FPM algorithm can be applied.
BicPAM \cite{HenriquesMadeira2014} and BiC2PAM \cite{HenriquesMadeira2016} also rely on discretization, itemization, and the usage of a traditional FPM algorithm to mine the biclusters. BiC2PAM extends BicPAM to incorporate constraints derived from background knowledge into the mining process. BicPAM and BiC2PAM are available in a free biclustering software package called BicPAMS \cite{HenriquesEtAl2017}.
BicPAM is a framework that relies on three steps: pre-processing (which includes normalization, discretization, itemization, handling of missing values, and tackling varying levels of noise), mining (where some FPM algorithm is used to mine the biclusters), and post-processing (in which the biclusters can be extended, merged, and filtered out, among other possibilities). BicPAM makes available three discretization options (each one with key implications for the target solution), and the user can easily incorporate other options into the framework. BicPAM also makes available several FPM algorithms in the mining step, and the user can incorporate others as well. To alleviate common drawbacks related to discretization procedures (such as information loss), the user can choose to assign multiple items to a single element, tackling the items-boundary problem. The drawback of this strategy is that it usually generates many redundant biclusters (even when using algorithms that mine closed frequent itemsets), leading to extra computational cost, and the information loss is still present, though attenuated. For more contributions regarding biclustering based on FPM algorithms, see the survey of Henriques \emph{et al.} \cite{HenriquesEtAl2015}.
Missing values can simply be ignored in methods that rely on itemization to mine the biclusters. Henriques \& Madeira \cite{HenriquesMadeira2014} also proposed the use of additional items, specially handled according to a level of relaxation imposed by the user.
Aiming to bypass the itemization step, we may resort to enumerative biclustering algorithms that mine CVC biclusters directly from numerical matrices, such as RIn-Close\_CVC, RIn-Close\_CVCP, and their competitors \cite{VeronezeEtAl2017}. They are able to mine the biclusters from a discretized matrix (one containing only integer numbers), thus avoiding itemization. This implies that it is possible to use a more flexible discretization, without restrictions on the arity of an attribute. Additionally, RIn-Close\_CVCP is a very efficient algorithm, exhibiting a computational cost similar to that of In-Close2.
In conceptual terms, approaches relying on discretization, as well as our proposal, can control the level of noise inside a bicluster, but not to the same extent. By construction, after defining the discretization policy, approaches relying on discretization are able to guarantee that the level of noise in the mined biclusters will belong to a specific interval. However, they are not able to guarantee, for arbitrary matrices, finding all the biclusters exhibiting a level of noise inside that interval. On the other hand, our proposal guarantees finding all the maximal biclusters exhibiting a level of noise inside a given interval. Therefore, no matter how optimized the discretization policy, any a priori and computationally feasible discretization involves information loss, and cannot compete with online approaches such as ours.
To illustrate this relevant limitation of approaches based on a priori discretization, Table~\ref{tab:rvMatrix} shows an arbitrary example of a real-valued matrix with 10 objects and 3 attributes, and Table~\ref{tab:dMatrix} shows this matrix after discretization using equi-width partitioning with bins of size 0.2. As the matrix of Table~\ref{tab:rvMatrix} was randomly created using a uniform distribution, equi-width partitioning is a reasonable choice. Notice that the itemization process would produce 15 items, so even this small example indicates the restrictions imposed by itemization on the arity of the discretization. Using our proposal with the minimum number of objects and attributes set to 2, and $\epsilon = 0.2$ for all attributes, we would obtain 12 biclusters, which are listed in Table~\ref{tab:bics_rvMatrix}. Using an FPM or FCA algorithm, such as Charm, on the itemized matrix computed from the matrix of Table~\ref{tab:dMatrix} (with the same restrictions on the minimum number of objects and attributes), we would obtain only 7 of these 12 biclusters, 5 of which would be recovered only partially. These 7 biclusters are highlighted in bold in Table~\ref{tab:bics_rvMatrix}. Of course, we could alleviate the information loss using multiple item assignments. However, how many items per element should we assign in a real-world problem? Moreover, multiple item assignment tends to generate redundant biclusters and extra computational cost.
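The root of the partial recovery just described can be seen in isolation: two values may be within $\epsilon$ of each other and yet fall into different equi-width bins, splitting a CVC bicluster across items. A tiny Python check (illustrative values of our own choosing):

```python
import math

def bin_index(v, width):
    """Equi-width bin of a value (bins anchored at 0)."""
    return math.floor(v / width)

eps = 0.2
a, b = 0.19, 0.21            # |a - b| = 0.02 <= eps: same CVC bicluster
same_bin = bin_index(a, 0.2) == bin_index(b, 0.2)
# same_bin is False: after itemization, an FPM/FCA miner sees two
# different items and can recover the bicluster only partially.
```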
As already mentioned, an adequate discretization process has a significant impact on the quality of the biclusters, and also on the computational cost of the proposals. There are numerous discretization methods available in the literature. Liu \emph{et al.} \cite{LiuEtAl2002} presented a survey of discretization methods and discussed various dimensions along which discretization methods can be categorized. They also gave some guidelines on how to choose a discretization method under various circumstances. However, they stated that the choice of a suitable discretization method is generally a complex matter, largely depending on the demands and particularities of the application. For instance, if the data does not have class information, only unsupervised methods can be applied. They also stated that the availability of parallel computing or computer clusters opens the possibility of using multivariate discretization.
As there is a combinatorial explosion of possibilities for discretization, a fair comparison would require a vast series of experiments and a careful analysis of the context involved, properly addressing pros and cons. Therefore, such an issue is out of the scope of this work, and we end the section recalling that, no matter the quality of the discretization, information loss will occur. That is why we have conceived our approach to avoid a priori discretization.
\linespread{1}
\begin{table}[!htb]
\begin{minipage}[t]{.47\textwidth}
\caption{Example of a real-valued matrix.}
\label{tab:rvMatrix}
\centering
\footnotesize
\begin{tabular}{crrr}
\toprule
\textbf{\#} & \textbf{1} & \textbf{2} & \textbf{3} \\
\midrule
\textbf{1} & 0.278 & 0.422 & 0.743 \\
\textbf{2} & 0.547 & 0.916 & 0.392 \\
\textbf{3} & 0.958 & 0.792 & 0.655 \\
\textbf{4} & 0.965 & 0.959 & 0.171 \\
\textbf{5} & 0.158 & 0.656 & 0.706 \\
\textbf{6} & 0.971 & 0.036 & 0.032 \\
\textbf{7} & 0.957 & 0.849 & 0.277 \\
\textbf{8} & 0.485 & 0.934 & 0.046 \\
\textbf{9} & 0.800 & 0.679 & 0.097 \\
\textbf{10} & 0.142 & 0.758 & 0.823 \\
\bottomrule
\end{tabular}
\end{minipage}
\begin{minipage}[t]{.53\textwidth}
\caption{Matrix of Table~\ref{tab:rvMatrix} after discretization using equi-width partitioning with bins of size 0.2.}
\label{tab:dMatrix}
\centering
\footnotesize
\begin{tabular}{crrr}
\toprule
\textbf{\#} & \textbf{1} & \textbf{2} & \textbf{3} \\
\midrule
\textbf{1} & 2 & 3 & 4 \\
\textbf{2} & 3 & 5 & 2 \\
\textbf{3} & 5 & 4 & 4 \\
\textbf{4} & 5 & 5 & 1 \\
\textbf{5} & 1 & 4 & 4 \\
\textbf{6} & 5 & 1 & 1 \\
\textbf{7} & 5 & 5 & 2 \\
\textbf{8} & 3 & 5 & 1 \\
\textbf{9} & 4 & 4 & 1 \\
\textbf{10} & 1 & 4 & 5 \\
\bottomrule
\end{tabular}
\end{minipage}
\end{table}
\linespread{1.5}
\linespread{1}
\begin{table}[!htb]
\caption{Biclusters mined from the data matrix of Table~\ref{tab:rvMatrix} using our proposal (to be formally presented in Section~\ref{sec:rinclose}) with minimum number of objects and attributes set to 2, and $\epsilon = 0.2$ for all attributes.}
\label{tab:bics_rvMatrix}
\centering
\footnotesize
\begin{tabular}{ccc}
\toprule
\textbf{\#} & \textbf{Objects} & \textbf{Attributes} \\
\midrule
\textbf{1} & 1, 5, 10 & 1, 3 \\
\textbf{2} & \textbf{5, 10} & \textbf{1, 2}, 3 \\
\textbf{3} & \textbf{2, 8} & \textbf{1, 2} \\
\textbf{4} & 3, 7, 9 & 1, 2 \\
\textbf{5} & 7, 9 & 1, 2, 3 \\
\textbf{6} & 3, \textbf{4, 7} & \textbf{1, 2} \\
\textbf{7} & \textbf{4, 7} & \textbf{1, 2}, 3 \\
\textbf{8} & \textbf{4, 6}, 9 & \textbf{1, 3} \\
\textbf{9} & 4, 7, 9 & 1, 3 \\
\textbf{10} & \textbf{3, 5}, 10 & \textbf{2, 3} \\
\textbf{11} & \textbf{2, 7} & \textbf{2, 3} \\
\textbf{12} & \textbf{4, 8} & \textbf{2, 3} \\
\bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
\section{The Extended Version of RIn-Close\_CVC}
\label{sec:rinclose}
Veroneze \emph{et al.} \cite{VeronezeEtAl2017} proposed an algorithm to enumerate all maximal CVC biclusters in numerical datasets, named RIn-Close\_CVC. From now on, we will call it RIn-Close for simplicity. RIn-Close is based on Definition~\ref{def:cvcbic} for CVC biclusters. In this section, we will generalize RIn-Close to prepare this biclustering algorithm to enumerate CVC biclusters in mixed-data matrices. Thus, this new version will be based on Definition~\ref{def:cvcbic2}, which is a generalization of Definition~\ref{def:cvcbic}. Strictly numerical datasets and strictly categorical datasets can also be treated by this extended version of RIn-Close.
The previous version of RIn-Close \cite{VeronezeEtAl2017} was not prepared to enumerate biclusters directly from a dataset with missing values. Some strategies to handle missing values were proposed for the RIn-Close family of algorithms \cite{VeronezeEtAl2017}. The simplest one is to remove the rows and/or columns (usually the ones with smaller dimension) containing missing values, at the cost of information loss. Another simple strategy is the previous estimation of the missing values using some imputation technique from the literature. The problem with this approach is that it will generally introduce additional noise into the dataset, which may significantly reduce the biclusters' homogeneity, thus promoting unnecessary bicluster partitioning. Essentially, a single large original bicluster with some missing elements may be recovered as dozens of smaller biclusters, possibly with a high overlap among them \cite{OliveiraEtAl2015}.
Here, RIn-Close will be extended to mine biclusters directly from datasets with missing values. We will look for biclusters in the regions of the dataset without missing values, ignoring the regions with missing values. Thus, the sparser the matrix, the smaller the portion of the matrix to be mined. Our approach to dealing with missing data has a low computational cost, avoids information loss, and does not introduce additional noise into the dataset \cite{Veroneze2016}, being consistent with highly competitive approaches in the literature \cite{HenriquesMadeira2014}.
Besides the incorporation of these new features, this new version of RIn-Close keeps the four key properties of the original proposal \cite{VeronezeEtAl2017}: efficiency, completeness, correctness, and non-redundancy. Also, it has the same worst-case time complexity.
Algorithms~\ref{alg:rinclose} to \ref{alg:ComputeRM} present the pseudocode of this new version of RIn-Close and of its main functions. The proposed new features are highlighted in red. Note that, with a few simple modifications, we are able to reach our goal: to provide an algorithm to enumerate all maximal biclusters in mixed-data (or strictly numerical/categorical) matrices with (or without) missing values.
Firstly, we create the \emph{supremum} bicluster $(I,J)$ in Algorithm~\ref{alg:rinclose}, which contains all rows of the dataset in its \emph{extent} (set of rows of the bicluster), and no column in its \emph{intent} (set of columns of the bicluster). From the supremum, all other biclusters will be mined recursively. This strategy was already adopted by In-Close2 \cite{Andrews2011}, which is the enumerative algorithm for binary datasets that, after generalizations and extensions, gave rise to the RIn-Close family of algorithms \cite{VeronezeEtAl2017}.
\linespread{1}
\begin{algorithm}
\caption{RIn-Close}
\label{alg:rinclose}
\begin{algorithmic}[1]
\small
\REQUIRE Data matrix $\mathbf{A}_{n \times m}$, minimum number of rows $mR$, minimum number of columns $mC$, vector with the user-defined maximum perturbation for each attribute $\boldsymbol{\epsilon}$
\ENSURE Biclustering solution $\mathfrak{B}$
\STATE $y \leftarrow 1$ \COMMENT{index of the initial attribute}
\STATE $I \leftarrow \{ 1, 2,..., n \}$ \COMMENT{extent - set of rows of the bicluster}
\STATE $J \leftarrow \{ \}$ \COMMENT{intent - set of columns of the bicluster}
\STATE $\Gamma \leftarrow \{ \}$ \COMMENT{set of rows to check the row-maximality of the descendants}
\STATE ComputeBiclustersFrom($(I,J),y, \Gamma$)
\end{algorithmic}
\end{algorithm}
\linespread{1.5}
\linespread{1}
\begin{algorithm}
\caption{ComputeBiclustersFrom}
\label{alg:ComputeBiclustersFrom}
\begin{algorithmic}[1]
\small
\REQUIRE Bicluster $(I,J)$ to be closed, current attribute $y$, set of rows to check the row-maximality of the descendants $\Gamma$
\FOR{$j \leftarrow y$ to $m$}
\IF{$j \notin J$}
\IF{$\max_{i \in I}(a_{ij}) - \min_{i \in I}(a_{ij}) \leq {\color{red}\epsilon_j}$ {\color{red}\AND $a_{ij} \neq mv, \forall i \in I$}}
\STATE $J \leftarrow J \cup \{j\}$
\ELSE
\STATE Compute the possible new extents \COMMENT{Eq.~\ref{eq:rinc_cvc_compExt}}
\FOR{each possible new extent $G$}
\IF{$|G| \geq mR$ \AND $G \notin ST$ \AND $G$ is canonical \AND $G$ is row-maximal}
\STATE Sort the row indexes in $G$
\STATE Insert $G$ in the symbol table $ST$
\STATE $\Omega \leftarrow ComputeRM(G, j, \Gamma)$
\STATE PutInQueue($G, j, \Omega$)
\ENDIF
\ENDFOR
\ENDIF
\ENDIF
\ENDFOR
\IF{$|J| \geq mC$}
\STATE Store the bicluster $(I,J)$ in the solution $\mathfrak{B}$
\ENDIF
\WHILE{GetFromQueue($G, j, \Omega$)}
\STATE $H \leftarrow J \cup \{j\}$
\STATE ComputeBiclustersFrom($(G,H),j+1, \Omega$)
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\linespread{1.5}
In Algorithm~\ref{alg:ComputeBiclustersFrom}, each bicluster $(I,J)$ is closed, i.e., its intent is completed with all possible columns for the extent $I$ (line 4). The expression $a_{ij} \neq mv$ means that the element $a_{ij}$ is not a missing value. If the attribute $j$ is not an inherited attribute and it cannot be added to the intent $J$, the possible new extents are computed (line 6). Given that $I$ is the current extent, and $j$ is the current attribute, the possible new extents are given by
\begin{equation}
\{G | [G \subseteq I] \; \wedge \; [\max_{i \in G}(\{a_{ij}\}) - \min_{i \in G}(\{a_{ij}\}) \leq {\color{red}\epsilon_j}] {\color{red} \; \wedge \; [a_{ij} \neq mv, \forall i \in G] } \; \wedge \; [G \; \mathrm{is \; maximal}]\}.
\label{eq:rinc_cvc_compExt}
\end{equation}
\noindent This is easily achieved by sorting the values of the data matrix $\mathbf{A}$ in rows $I$ and column $j$. The user should use a large number to represent the missing values ($mv$), so that they end up at the bottom of the sorted list, and this whole portion of the list can be ignored. If a possible new extent $G$ passes the verifications of line 8, then it will give rise to a new bicluster with extent $G$.
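The sorting-based computation of the possible new extents can be sketched as a sliding window, shown below in Python (our own illustration of Eq.~\ref{eq:rinc_cvc_compExt}, not the paper's actual code; missing values are encoded as `math.inf` so they sink to the end of the sorted list and are skipped):

```python
import math

def possible_extents(col, I, eps):
    """col: dict mapping row -> value in column j (math.inf for missing);
    I: current extent. Returns every maximal subset of I whose values in
    column j span at most eps, ignoring rows with missing values."""
    rows = sorted((r for r in I if col[r] != math.inf), key=lambda r: col[r])
    extents, start = [], 0
    for end in range(len(rows)):
        # shrink the window from the left until its spread is <= eps
        while col[rows[end]] - col[rows[start]] > eps:
            start += 1
        # record only maximal windows, i.e., those that cannot grow right
        if end == len(rows) - 1 or col[rows[end + 1]] - col[rows[start]] > eps:
            extents.append(set(rows[start:end + 1]))
    return extents
```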
Letting $J$ be the current intent, and $j$ be the current attribute, a possible new extent $G$ of a bicluster is not canonical if
\begin{equation}
\exists k \in Y \setminus J | [k < j] \: \wedge \: [\max_{i \in G}(a_{ik}) - \min_{i \in G}(a_{ik}) \leq {\color{red}\epsilon_k}] {\color{red}\: \wedge \: [a_{ik} \neq mv, \forall i \in G]},
\label{eq:rinc_cvc_iscan}
\end{equation}
\noindent i.e., if there is an attribute $k < j$ that we can add to the bicluster $(G, J)$ and it remains a valid CVC bicluster.
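In Python, the canonicity test of Eq.~\ref{eq:rinc_cvc_iscan} amounts to scanning the earlier attributes. The sketch below is illustrative only (0-based indices, unlike the 1-based notation of the paper; missing values encoded as `float('inf')`):

```python
def is_canonical(A, G, J, j, eps, mv=float('inf')):
    """A: row-major data matrix; G: candidate extent; J: current intent;
    j: current attribute; eps: per-attribute perturbations.
    G is canonical iff no attribute k < j outside J fits all rows of G."""
    for k in range(j):
        if k in J:
            continue
        vals = [A[i][k] for i in G]
        if mv not in vals and max(vals) - min(vals) <= eps[k]:
            return False  # attribute k could be added: G was generated earlier
    return True
```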
Letting $J$ be the current intent, $j$ be the current attribute, $H = J \cup \{j\}$, and $\Gamma$ be the set of rows that must be checked to verify the row-maximality, a possible new extent $G$ is not row-maximal if there is an object $g \in \Gamma$ that we can add to the bicluster $(G,H)$ and it remains a valid CVC bicluster, i.e.,
\begin{equation}
\exists g \in \Gamma | [\max_{i \in \{G \cup \{g\}\}}(a_{ik}) - \min_{i \in \{G \cup \{g\}\}}(a_{ik}) \leq {\color{red}\epsilon_k}] {\color{red} \: \wedge \: [a_{gk} \neq mv]}, \forall \; k \in H.
\label{eq:cvc_ismaximal}
\end{equation}
Besides the canonicity and row-maximality verifications, we also verify that the possible new extent $G$ does not already belong to a symbol table $ST$. This verification is based on the fact that two distinct CVC biclusters must have two distinct extents in order to be maximal. So, to avoid redundant maximal biclusters, the extents that have already been generated are tracked using an efficient symbol table implementation, such as hash tables (HTs) or balanced search trees (BSTs).
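With a hash-based symbol table, the duplicate-extent check is a constant-time membership test on average. A minimal sketch (our own; a production implementation would key the table more compactly):

```python
seen = set()                       # the symbol table ST

def is_new_extent(G):
    """True iff extent G was not generated before; registers it if new."""
    key = frozenset(G)             # hashable and order-independent
    if key in seen:
        return False
    seen.add(key)
    return True
```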
Among these verifications, the only one inspired by In-Close2 is the canonicity test, which was originally proposed in \cite{Kuznetsov1996} and was generalized by Veroneze \emph{et al.} \cite{VeronezeEtAl2017} to deal with numerical datasets. The other two verifications were already part of the previous version of RIn-Close. Here, we are just updating these three verifications to accommodate the new features of RIn-Close.
\linespread{1}
\begin{algorithm}
\caption{ComputeRM}
\label{alg:ComputeRM}
\begin{algorithmic}[1]
\small
\REQUIRE new extent $G$, current attribute $j$, set of rows to check the row-maximality $\Gamma$
\ENSURE new set of rows to check the row-maximality $\Omega$
\STATE $\mathbf{v} \leftarrow \{a_{ij}\}_{i \in G}$
\STATE $\mathbf{v} \leftarrow sort(\mathbf{v})$ \COMMENT{ascending order}
\STATE $p1 \leftarrow \mathbf{v}_{mR}$ \COMMENT{pivot value 1}
\STATE $p2 \leftarrow \mathbf{v}_{|\mathbf{v}| - mR + 1}$ \COMMENT{pivot value 2}
\STATE $\Omega \leftarrow \Gamma \cup \{i \in X \setminus G| \; [[p1 - a_{ij} \leq {\color{red}\epsilon_j}] \; \wedge \; [a_{ij} - p2 \leq {\color{red}\epsilon_j}]] {\color{red}\; \wedge \; [a_{ij} \neq mv]} \}$
\end{algorithmic}
\end{algorithm}
\linespread{1.5}
To explain the function $ComputeRM$ of Algorithm~\ref{alg:ComputeRM}, consider the example in Figure~\ref{fig:RM}, which assumes $\epsilon = 3$ and $mR = 2$. Suppose that $m_x$ is the current attribute and $I = \{g_a, g_b, ..., g_l\}$ is the current extent. So, we have four possible new extents: (d1), (d2), (d3) and (d4). To exemplify, let us compute the set $\Omega$ for (d2), supposing that $\Gamma = \{\}$. The pivot elements are $g_e$ and $g_h$ because they are the $mR$-$th$ first and last elements of (d2), respectively. Their values are $g_e = 3$ and $g_h = 5$. Rows with values greater than or equal to 0 ($g_e - \epsilon$) and less than or equal to 8 ($g_h + \epsilon$) must comprise $\Omega$, so $\Omega = \Gamma \cup \{g_a, g_b, g_c, g_j\} = \{g_a, g_b, g_c, g_j\}$.
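The pivot logic of $ComputeRM$ can be condensed into a few lines of Python (our own sketch, not the paper's implementation; rows whose value in the current column lies within $\epsilon$ of the interval delimited by the two pivots are kept, mirroring the worked example above):

```python
def compute_rm(col, G, all_rows, gamma, mR, eps, mv=float('inf')):
    """col: dict row -> value in column j (mv for missing); G: new extent;
    gamma: inherited rows to check; mR: minimum number of rows."""
    v = sorted(col[i] for i in G)
    p1, p2 = v[mR - 1], v[len(v) - mR]   # mR-th smallest and mR-th largest
    return set(gamma) | {i for i in all_rows - G
                         if col[i] != mv
                         and p1 - col[i] <= eps
                         and col[i] - p2 <= eps}
```

For instance, with $G$ holding values $\{3,4,5\}$, $mR = 2$, and $\epsilon = 3$, only outside rows with values in $[p1-\epsilon, \, p2+\epsilon] = [1, 7]$ survive.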
\begin{figure}
\centering
\includegraphics[trim=2cm 14.5cm 13cm 2.5cm, clip, scale=0.65]{RM3.pdf}
\caption{Example of how to find $RM$ (considering $\epsilon = 3$ and $mR = 2$). Extracted from \cite{VeronezeEtAl2017}.}
\label{fig:RM}
\end{figure}
This new version of RIn-Close has the same worst-case time complexity as its previous version: $O(kmn(mn + x))$, where $k$ is the number of enumerated biclusters, and $x$ is the worst-case time of searching in the symbol table, so $x = O(\log k)$ for BSTs and $x = O(k)$ for HTs. But HTs have a much better average computational cost: $O(1)$. For this reason, our RIn-Close implementation uses an HT.
One detail worth mentioning is that we can abort the closure of a bicluster if, even after adding all remaining attributes to its intent, it will not meet the minimum number of columns $mC$ (and therefore its descendants will not meet $mC$ either). Although this restriction can only be checked during the closure of a bicluster, it also prunes the search space and saves computational resources because ($i$) it stops the construction of a bicluster that would be discarded later for not meeting the restriction $mC$, and ($ii$) it avoids generating descendants that would not meet the restriction $mC$ either \cite{VeronezeEtAl2017}. This aspect was omitted from Algorithm~\ref{alg:ComputeBiclustersFrom} to emphasize the main steps.
\section{Experimental Results}
\label{sec:exp}
This section describes the datasets used in our experiments and presents the results, followed by an extensive discussion of the main achievements.
\subsection{Description of the datasets}
Table~\ref{tab:datasets} briefly describes the datasets used in our experiments, extracted from the UCI Repository \cite{Lichman2013}. Their attributes are outlined in Tables~\ref{tab:dat_acute} to \ref{tab:dat_zoo}, together with the maximum perturbation $\epsilon$ for each attribute, to be used by the RIn-Close algorithm when mining the biclusters. The attributes are labeled as R (real-valued), I (integer), O (ordinal), or N (nominal). Table~\ref{tab:dataLabels} contains the description of the class labels associated with all the datasets.
These datasets were chosen because they come with a description of all the attributes and class labels, so biclustering results can be easily interpreted. Given that the number of attributes is not so high, we can easily illustrate how the biclusters lead to interpretable models. Similarly, we can provide QCARs that are able to properly discriminate the class labels.
For the nominal attributes, $\epsilon = 0$ is the only choice that makes sense. For the other types of attributes, we set this parameter based on trial and error. Our goal was to provide a biclustering solution with a good coverage of the dataset instances. The choice of the parameter $\epsilon$ for real-valued attributes, and for integer attributes with many distinct values, was assisted by the bin sizes returned by the function \emph{histcounts} of MATLAB R2015a.
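For readers without MATLAB, a data-driven starting point for $\epsilon$ can be obtained from any histogram binning rule. The sketch below uses the Freedman--Diaconis width via Python's standard library; this mirrors the spirit, not the exact rule, of \emph{histcounts}, so the result should be treated only as a hint to be refined by trial and error:

```python
import statistics

def suggest_epsilon(values):
    """Freedman-Diaconis bin width: 2 * IQR * n^(-1/3)."""
    q = statistics.quantiles(values, n=4)    # quartiles
    iqr = q[2] - q[0]
    return 2 * iqr * len(values) ** (-1 / 3)
```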
There are datasets with balanced and unbalanced classes, with 2 to 7 distinct class labels. Most of the datasets are unbalanced, and there are class labels with very few instances; for instance, class label 5 of the Zoo dataset has only 4 instances. The dataset Acute has two decision variables, and we will provide results for both of them. Thus, we explore several scenarios to show the capability of biclustering to provide interesting rules.
\linespread{1}
\begin{table}[]
\footnotesize
\centering
\caption{Numerical aspects of the datasets.}
\label{tab:datasets}
\begin{tabular}{rlrrrll}
\toprule
\textbf{\#} & \textbf{Name} & \textbf{\# rows} & \textbf{\# columns} & \textbf{\# labels} & \textbf{$mv$} & \textbf{Description}\\
\midrule
1 & Acute \cite{CzerniakEtAl2003} & 120 & 6 & 2 & no & Acute Inflammations \\
2 & Car & 1728 & 6 & 4 & no & Car Evaluation \\
3 & Heart & 270 & 13 & 2 & no & Heart Disease \\
4 & Voting & 435 & 16 & 2 & yes & Congressional Voting Records \\
5 & Zoo & 101 & 16 & 7 & no & Zoo Animals \\
\bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
\linespread{1}
\begin{table}[]
\footnotesize
\centering
\caption{Description of the attributes in the Acute dataset.}
\label{tab:dat_acute}
\begin{tabular}{rllclr}
\toprule
\textbf{\#} & \textbf{Name} & \textbf{Description} & \textbf{Type} & \textbf{Domain} & \textbf{$\epsilon$}\\
\midrule
1 & temperature & Temperature of patient & R & $[35.5, 41.5]$ & 2.4 \\
2 & nausea & Occurrence of nausea & N & \{yes, no\} & 0.0 \\
3 & lumbarPain & Lumbar pain & N & \{yes, no\} & 0.0 \\
4 & urinePushing & Urine pushing (continuous need for urination) & N & \{yes, no\} & 0.0 \\
5 & micturitionPain & Micturition pains & N & \{yes, no\} & 0.0 \\
6 & urethraBurning & Burning of urethra, itch, swelling of urethra outlet & N & \{yes, no\} & 0.0 \\
\bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
\linespread{1}
\begin{table}[]
\footnotesize
\centering
\caption{ Description of the attributes in the Car dataset.}
\label{tab:dat_car}
\begin{tabular}{rllclr}
\toprule
\textbf{\#} & \textbf{Name} & \textbf{Description} & \textbf{Type} & \textbf{Domain} & \textbf{$\epsilon$}\\
\midrule
1 & buying & Buying price & O & \{v-high, high, med, low\} & 0 \\
2 & maint & Maintenance price & O & \{v-high, high, med, low\} & 1 \\
3 & doors & Number of doors & O & \{2, 3, 4, 5-more\} & 1 \\
4 & persons & Capacity in terms of persons to carry & O & \{2, 4, more\} & 0 \\
5 & lugBoot & Size of the luggage boot & O & \{small, med, big\} & 0 \\
6 & safety & Estimated safety of the car & O & \{low, med, high\} & 0 \\
\bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
\linespread{1}
\begin{table}[]
\footnotesize
\centering
\caption{ Description of the attributes in the Heart dataset.}
\label{tab:dat_heart}
\begin{tabular}{rlp{5cm}cp{5cm}r}
\toprule
\textbf{\#} & \textbf{Name} & \textbf{Description} & \textbf{Type} & \textbf{Domain} & \textbf{$\epsilon$}\\
\midrule
1 & age & Age & I & \{29, 30, ..., 77\} & 4.0 \\
2 & sex & Sex & N & \{female, male\} & 0.0 \\
3 & chestPain & Chest pain type & N & \{typical angina, atypical angina, non-anginal pain, asymptomatic\} & 0.0 \\
4 & bloodPres & Resting blood pressure & R & {[}94, 200{]} & 10.0 \\
5 & chol & serum cholestoral in mg/dl & R & {[}126, 564{]} & 30.0 \\
6 & fastBSugar & fasting blood sugar \textgreater 120 mg/dl & N & \{yes, no\} & 0.0 \\
7 & electro & resting electrocardiographic results & N & \{normal, having ST-T wave abnormality, showing probable or definite left ventricular hypertrophy by Estes' criteria\} & 0.0 \\
8 & heartRate & maximum heart rate achieved & R & {[}71, 202{]} & 10.0 \\
9 & exercIAngina & exercise induced angina & N & \{yes, no\} & 0.0 \\
10 & oldpeak & ST depression induced by exercise relative to rest & R & {[}0, 6.2{]} & 0.5 \\
11 & slope & the slope of the peak exercise ST segment & O & \{upsloping, flat, downsloping\} & 0.0 \\
12 & vesselsColor & number of major vessels colored by flourosopy & I & \{0, 1, 2, 3\} & 0.0 \\
13 & thal & thal & N & \{normal, fixed defect, reversable defect\} & 0.0 \\
\bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
\linespread{1}
\begin{table}[]
\footnotesize
\centering
\caption{ Description of the attributes in the Voting dataset.}
\label{tab:dat_voting}
\begin{tabular}{rllclr}
\toprule
\textbf{\#} & \textbf{Name} & \textbf{Description} & \textbf{Type} & \textbf{Domain} & \textbf{$\epsilon$}\\
\midrule
1 & hInfants & handicapped-infants & N & \{yes, no\} & 0 \\
2 & wProject & water-project-cost-sharing & N & \{yes, no\} & 0 \\
3 & budgetRes & adoption-of-the-budget-resolution & N & \{yes, no\} & 0 \\
4 & physicianFF & physician-fee-freeze & N & \{yes, no\} & 0 \\
5 & ES-aid & el-salvador-aid & N & \{yes, no\} & 0 \\
6 & rgSchools & religious-groups-in-schools & N & \{yes, no\} & 0 \\
7 & antiSatelliteTT & anti-satellite-test-ban & N & \{yes, no\} & 0 \\
8 & aidNicaraguaC & aid-to-nicaraguan-contras & N & \{yes, no\} & 0 \\
9 & mxMissile & mx-missile & N & \{yes, no\} & 0 \\
10 & immigration & immigration & N & \{yes, no\} & 0 \\
11 & sfCorpCut & synfuels-corporation-cutback & N & \{yes, no\} & 0 \\
12 & eduSpending & education-spending & N & \{yes, no\} & 0 \\
13 & superfundRS & superfund-right-to-sue & N & \{yes, no\} & 0 \\
14 & crime & crime & N & \{yes, no\} & 0 \\
15 & dutyFree & duty-free-exports & N & \{yes, no\} & 0 \\
16 & admSA & export-administration-act-south-africa & N & \{yes, no\} & 0 \\
\bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
\linespread{1}
\begin{table}[]
\footnotesize
\centering
\caption{ Description of the attributes in the Zoo dataset.}
\label{tab:dat_zoo}
\begin{tabular}{rlclr}
\toprule
\textbf{\#} & \textbf{Name} & \textbf{Type} & \textbf{Domain} & \textbf{$\epsilon$}\\
\midrule
1 & hair & N & \{yes, no\} & 0 \\
2 & feathers & N & \{yes, no\} & 0 \\
3 & eggs & N & \{yes, no\} & 0 \\
4 & milk & N & \{yes, no\} & 0 \\
5 & airborne & N & \{yes, no\} & 0 \\
6 & aquatic & N & \{yes, no\} & 0 \\
7 & predator & N & \{yes, no\} & 0 \\
8 & toothed & N & \{yes, no\} & 0 \\
9 & backbone & N & \{yes, no\} & 0 \\
10 & breathes & N & \{yes, no\} & 0 \\
11 & venomous & N & \{yes, no\} & 0 \\
12 & fins & N & \{yes, no\} & 0 \\
13 & legs & I & \{0,2,4,5,6,8\} & 0 \\
14 & tail & N & \{yes, no\} & 0 \\
15 & domestic & N & \{yes, no\} & 0 \\
16 & catsize & N & \{yes, no\} & 0 \\
\bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
\linespread{1}
\begin{table}[]
\footnotesize
\centering
\caption{Class Labels of each dataset.}
\label{tab:dataLabels}
\begin{tabular}{cp{12cm}r}
\toprule
\multicolumn{3}{c}{\textbf{Acute - 1st decision variable}} \\
\midrule
\textbf{Label} & \textbf{Description} & \textbf{\# instances} \\
\midrule
0 & No inflammation of urinary bladder & 61 \\
1 & Inflammation of urinary bladder & 59 \\
\midrule
\multicolumn{3}{c}{\textbf{Acute - 2nd decision variable}} \\
\midrule
\textbf{Label} & \textbf{Description} & \textbf{\# instances} \\
\midrule
0 & No nephritis of renal pelvis origin & 70 \\
1 & Nephritis of renal pelvis origin & 50 \\
\midrule
\multicolumn{3}{c}{\textbf{Car}} \\
\midrule
\textbf{Label} & \textbf{Description} & \textbf{\# instances} \\
\midrule
1 & Unacceptable & 1210 \\
2 & Acceptable & 384 \\
3 & Good & 69 \\
4 & Very Good & 65 \\
\midrule
\multicolumn{3}{c}{\textbf{Heart}} \\
\midrule
\textbf{Label} & \textbf{Description} & \textbf{\# instances} \\
\midrule
0 & Absence of heart disease & 150 \\
1 & Presence of heart disease & 120 \\
\midrule
\multicolumn{3}{c}{\textbf{Voting}} \\
\midrule
\textbf{Label} & \textbf{Description} & \textbf{\# instances} \\
\midrule
0 & Republicans & 168 \\
1 & Democrats & 267 \\
\midrule
\multicolumn{3}{c}{\textbf{Zoo}} \\
\midrule
\textbf{Label} & \textbf{Description} & \textbf{\# instances} \\
\midrule
1 & aardvark, antelope, bear, boar, buffalo, calf, cavy, cheetah, deer, dolphin, elephant, fruitbat, giraffe, girl, goat, gorilla, hamster, hare, leopard, lion, lynx, mink, mole, mongoose, opossum, oryx, platypus, polecat, pony, porpoise, puma, pussycat, raccoon, reindeer, seal, sealion, squirrel, vampire, vole, wallaby, wolf & 41 \\
2 & chicken, crow, dove, duck, flamingo, gull, hawk, kiwi, lark, ostrich, parakeet, penguin, pheasant, rhea, skimmer, skua, sparrow, swan, vulture, wren & 20 \\
3 & pitviper, seasnake, slowworm, tortoise, tuatara & 5 \\
4 & bass, carp, catfish, chub, dogfish, haddock, herring, pike, piranha, seahorse, sole, stingray, tuna & 13 \\
5 & frog, frog, newt, toad & 4 \\
6 & flea, gnat, honeybee, housefly, ladybird, moth, termite, wasp & 8 \\
7 & clam, crab, crayfish, lobster, octopus, scorpion, seawasp, slug, starfish, worm & 10 \\
\bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
\subsection{Parameter setting}
To enumerate the biclusters, we set the minimum number of rows and the minimum number of columns to $mR = 5$ and $mC = 1$, respectively, for all datasets but Zoo, for which we use $mR = 3$ due to the existence of a class with only 4 instances. The $\epsilon$ value associated with each attribute is presented in Tables~\ref{tab:dat_acute} to \ref{tab:dat_zoo}.
The parameters of the 1st-Filter are set as follows: the minimum confidence is 0.95, and the minimum distance from 1 for the lift metric is 0.2. The only exception is again the Zoo dataset, for which the minimum confidence is raised to 1.00 due to the ease with which rules with high row-coverage are found. The 2nd-Filter has no user-defined parameters.
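As a concrete illustration of the 1st-Filter criteria above, the confidence and lift of the class rule derived from a bicluster can be computed and thresholded as in the following Python sketch (function and variable names are ours, for illustration only, not the paper's implementation; a bicluster is represented just by its row indices and its class label):

```python
def confidence(rows, labels, target):
    """Fraction of the bicluster's rows carrying the target class label."""
    return sum(1 for r in rows if labels[r] == target) / len(rows)

def lift(rows, labels, target):
    """Confidence divided by the overall support of the target class."""
    class_support = sum(1 for y in labels if y == target) / len(labels)
    return confidence(rows, labels, target) / class_support

def first_filter(biclusters, labels, min_conf=0.95, min_lift_dist=0.2):
    """Keep only the biclusters whose derived rule passes both thresholds."""
    kept = []
    for rows, target in biclusters:
        c = confidence(rows, labels, target)
        if c >= min_conf and abs(lift(rows, labels, target) - 1.0) >= min_lift_dist:
            kept.append((rows, target))
    return kept
```

For the Zoo dataset, \texttt{min\_conf} would simply be raised to 1.00, as stated above.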
\subsection{Results and Discussion}
Table~\ref{tab:results} summarizes the results of the biclustering solutions returned by RIn-Close, before and after the application of the cascade of two filters. As already explained, \emph{\% row-coverage} considers only the objects from the class label represented by each bicluster.
This is the only part of the experiments that would admit a comparison with contenders in the literature. However, all the existing contenders are based on discretization of the numerical attributes and thus cannot guarantee the enumeration of all maximal biclusters. The discussion at the end of Section~\ref{sec:relwork}, supported by an illustrative example, therefore provides sufficient evidence that no discretization-based approach would achieve a biclustering solution capable of surpassing the results in Table~\ref{tab:results}.
As we have already mentioned, an enumerative algorithm may return a huge number of biclusters. For instance, RIn-Close mined $189,785$ biclusters in the Voting dataset. Our 1st-Filter, which is based on FPM metrics, reduced the number of biclusters by $40\%$ to $90\%$, so it was very effective in selecting a reduced subset of significant biclusters. Nonetheless, the number of biclusters is still far above what could be manually inspected. Applying the 2nd-Filter (the greedy heuristic), we were able to select between 4 and 54 biclusters, depending on the dataset.
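The greedy selection performed by the 2nd-Filter can be sketched as a set-cover-style pass that repeatedly picks the bicluster covering the most still-uncovered rows (a hedged illustration under our own assumptions, not the paper's exact procedure; each bicluster is a pair of a row set and a class label):

```python
def second_filter(biclusters):
    """Greedy set-cover pass over biclusters, preserving total row-coverage:
    repeatedly pick the bicluster covering the most still-uncovered rows."""
    if not biclusters:
        return []
    remaining = set().union(*(rows for rows, _ in biclusters))
    pool, selected = list(biclusters), []
    while remaining and pool:
        best = max(pool, key=lambda bc: len(bc[0] & remaining))
        if not best[0] & remaining:  # nothing new would be covered
            break
        selected.append(best)
        pool.remove(best)
        remaining -= best[0]
    return selected
```

By construction, the selected subset covers exactly the same rows as the input set, which is consistent with both filters reporting the same row-coverage.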
By construction, both filters yield the same row-coverage. The 1st-Filter did not have a great impact on the row-coverage of the biclustering solution: even keeping only the biclusters with high confidence and lift, the solutions comprise most of the objects of each dataset. The largest reduction occurred for the Car dataset, at about $13\%$; for the other datasets, the reduction was absent or insignificant, even though our choices for the user-defined thresholds of the 1st-Filter were quite strict. The smaller the parameter $mR$ (of RIn-Close) and the more relaxed the user-defined thresholds of the 1st-Filter, the greater the coverage. On the other hand, relaxed user-defined thresholds also imply a stronger need for the opinion of an expert to determine the relevance of a bicluster.
As we are dealing with datasets with few attributes, the filters did not have a great impact on the column-coverage. An exception was the result for the second decision variable of the Acute dataset: only 3 attributes are needed to determine the presence or absence of \emph{nephritis of renal pelvis origin}. In case studies characterized by many more attributes, not considered here, the results tend to be different. For instance, in \cite{Veroneze2016}, RIn-Close was used to analyse a numerical dataset with $2,308$ attributes (genes). A filter similar to the 1st-Filter selected $641$ genes, a column-coverage reduction of more than $70\%$, and a filter similar to the 2nd-Filter selected only $62$ of these $641$ genes. This is a promising practical tendency, given that, in the presence of a high number of attributes, being able to automatically select a small subset of relevant attributes is highly desirable in biosciences and other related areas.
\linespread{1}
\begin{table}[]
\footnotesize
\centering
\caption{Biclustering Results.}
\label{tab:results}
\begin{tabular}{lrrr}
\toprule
\multicolumn{4}{c}{\textbf{Acute - 1st decision variable}} \\
\midrule
& \textbf{Original} & \textbf{1st Filter} & \textbf{2nd-Filter} \\
\midrule
\textbf{\# of biclusters} & 172 & 54 & 4 \\
\textbf{\% row-coverage} & 100.00 & 100.00 & 100.00 \\
\textbf{\% column-coverage} & 100.00 & 100.00 & 83.33 \\
\midrule
\multicolumn{4}{c}{\textbf{Acute - 2nd decision variable}} \\
\midrule
& \textbf{Original} & \textbf{1st-Filter} & \textbf{2nd-Filter} \\
\midrule
\textbf{\# of biclusters} & 172 & 66 & 4 \\
\textbf{\% row-coverage} & 100.00 & 100.00 & 100.00 \\
\textbf{\% column-coverage} & 100.00 & 100.00 & 50.00 \\
\midrule
\multicolumn{4}{c}{\textbf{Car}} \\
\midrule
& \textbf{Original} & \textbf{1st-Filter} & \textbf{2nd-Filter} \\
\midrule
\textbf{\# of biclusters} & 4,147 & 1,940 & 54 \\
\textbf{\% row-coverage} & 98.67 & 85.01 & 85.01 \\
\textbf{\% column-coverage} & 100.00 & 100.00 & 100.00 \\
\midrule
\multicolumn{4}{c}{\textbf{Heart}} \\
\midrule
& \textbf{Original} & \textbf{1st-Filter} & \textbf{2nd-Filter} \\
\midrule
\textbf{\# of biclusters} & 82,150 & 15,808 & 38 \\
\textbf{\% row-coverage} & 100.00 & 99.63 & 99.63 \\
\textbf{\% column-coverage} & 100.00 & 100.00 & 100.00 \\
\midrule
\multicolumn{4}{c}{\textbf{Voting}} \\
\midrule
& \textbf{Original} & \textbf{1st-Filter} & \textbf{2nd-Filter} \\
\midrule
\textbf{\# of biclusters} & 189,785 & 109,873 & 13 \\
\textbf{\% row-coverage} & 99.77 & 99.08 & 99.08 \\
\textbf{\% column-coverage} & 100.00 & 100.00 & 87.50 \\
\midrule
\multicolumn{4}{c}{\textbf{Zoo}} \\
\midrule
& \textbf{Original} & \textbf{1st-Filter} & \textbf{2nd-Filter} \\
\midrule
\textbf{\# of biclusters} & 4,429 & 346 & 9 \\
\textbf{\% row-coverage} & 100.00 & 100.00 & 100.00 \\
\textbf{\% column-coverage} & 100.00 & 100.00 & 100.00 \\
\bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
Tables~\ref{tab:rulesAcute} to \ref{tab:rulesZoo} show, for each dataset, the QCARs directly extracted from the biclusters selected by the cascade of two filters, after the enumeration of all existing biclusters. Notice the interpretative power of the QCARs when compared to the corresponding crude biclusters.
Table~\ref{tab:rulesAcute} shows the rules for the Acute dataset. For its first decision variable, the main rule is the third one, given its very high completeness and leverage: almost all the patients with inflammation of urinary bladder presented urine pushing (continuous need for urination) and micturition pains. The variable indicating continuous need for urination appears in two other rules (the second and fourth), a strong indication that this variable is important to determine the presence or absence of inflammation of urinary bladder. The difference between the second and fourth rules is the presence or absence of urine pushing, with both indicating the absence of burning of urethra, itch, or swelling of urethra outlet. This set of rules shows that all patients with inflammation of urinary bladder had urine pushing, but some patients without inflammation of urinary bladder also presented urine pushing. In the first rule, the attribute indicating the presence or absence of micturition pains appears again: here, the absence of micturition pains and nausea, together with the presence of lumbar pain, indicates the absence of inflammation of urinary bladder.
Now, let us analyse the rules for the second decision variable of the Acute dataset. Note that the third and fourth rules, which indicate the presence of nephritis of renal pelvis origin, overlap and could be rewritten as a single rule: temperature[37.90,41.50], lumbarPain\{yes\} $\Rightarrow$ 1. This happens because of the user-defined parameter $\epsilon$ for the attribute temperature (see Table~\ref{tab:dat_acute}). To avoid this event, we envisage two possibilities. The first is to run the enumerative algorithm a few times with different settings for the vector $\epsilon$, producing a pool of biclustering solutions from which relevant biclusters can be selected. The second is to post-process the rules, grouping the ones that overlap or are adjacent; this option is commonly adopted by algorithms that discretize the dataset. We will explore these two possibilities in a future work, with the goal of providing classifiers based on the QCARs created from the biclusters. In any case, these results indicate that all patients with fever and lumbar pain presented nephritis of renal pelvis origin. Practically all the patients who did not present the disease had temperature below 37.90 degrees Celsius and absence of nausea. Likewise, a high number of patients without nausea and lumbar pain did not have the disease.
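The post-processing possibility, grouping rules whose numeric intervals overlap or are adjacent, amounts to a standard interval-merging step; a minimal sketch of our own, for illustration:

```python
def merge_intervals(intervals):
    """Merge overlapping numeric intervals from rules sharing the same
    remaining conditions and consequent, e.g. temperature[39.40,41.50]
    and temperature[37.90,40.30] become temperature[37.90,41.50]."""
    merged = []
    for lo, hi in sorted(intervals):
        if merged and lo <= merged[-1][1]:  # overlaps the last kept interval
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged
```

Applied to the third and fourth rules of the second decision variable, this yields exactly the single rule discussed above.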
\linespread{1}
\begin{table}[]
\footnotesize
\centering
\caption{Rules for the Acute dataset.}
\label{tab:rulesAcute}
\begin{tabular}{lp{10cm}rrrr}
\toprule
\multicolumn{6}{c}{\textbf{Acute - 1st decision variable}} \\
\midrule
\# & \textbf{Rule} & \textbf{Comp.} & \textbf{Conf.} & \textbf{Lift} & \textbf{Lev.} \\
\midrule
1 & nausea\{no\}, lumbarPain\{yes\}, micturitionPain\{no\} $\Rightarrow$ 0 & 0.67 & 1.00 & 1.97 & 0.17 \\
2 & urinePushing\{no\}, urethraBurning\{no\} $\Rightarrow$ 0 & 0.66 & 1.00 & 1.97 & 0.16 \\
3 & urinePushing\{yes\}, micturitionPain\{yes\} $\Rightarrow$ 1 & 0.83 & 1.00 & 2.03 & 0.21 \\
4 & urinePushing\{yes\}, urethraBurning\{no\} $\Rightarrow$ 1 & 0.51 & 1.00 & 2.03 & 0.13 \\
\midrule
\multicolumn{6}{c}{\textbf{Acute - 2nd decision variable}} \\
\midrule
\# & \textbf{Rule} & \textbf{Comp.} & \textbf{Conf.} & \textbf{Lift} & \textbf{Lev.} \\
\midrule
1 & temperature[35.50,37.90], nausea\{no\} $\Rightarrow$ 0 & 0.98 & 1.00 & 1.71 & 0.21 \\
2 & nausea\{no\}, lumbarPain\{no\} $\Rightarrow$ 0 & 0.82 & 1.00 & 1.71 & 0.17 \\
3 & temperature[39.40,41.50], lumbarPain\{yes\} $\Rightarrow$ 1 & 0.71 & 1.00 & 2.40 & 0.20 \\
4 & temperature[37.90,40.30], lumbarPain\{yes\} $\Rightarrow$ 1 & 0.34 & 0.95 & 2.29 & 0.09 \\
\bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
Table~\ref{tab:rulesCar} depicts the selected rules for the Car dataset. This dataset has 4 distinct class labels and was the one requiring the most rules to cover its instances, 54 in total. The rules of class 2, which represents 22\% of the instances, have low completeness, so many rules are needed to cover its instances; this means that the objects (cars) classified as acceptable compose a heterogeneous group. Class 1 has 70\% of the instances, and Rules \#1 and \#2 each have a completeness of almost 50\% for this class. Rule \#1 indicates that a car with capacity for only two persons was considered unacceptable, and Rule \#2 indicates that a car with a low safety rating was also considered unacceptable. So, these two attributes (carrying capacity and safety rating) are decisive to explain half of the members of class 1. Rule \#3 also has a high completeness, of almost 20\%: it indicates that if a car has a very high buying price and a high or very high maintenance price, then the car is considered unacceptable. Rules \#4 and \#5 could be merged into one, indicating that if a car has a high or very high buying price, a small luggage boot, and only a medium safety rating, then the car is considered unacceptable. The other rules for class 1 are more specific, but all of them have maximum confidence (i.e., 100\%); in fact, all the selected rules for this dataset have maximum confidence. As we have said, the rules for class 2, acceptable cars, are fragmented, and most of them are very specific, having a low completeness. The main rules for this class have 6\% completeness and involve four attributes: buying price, maintenance price, capacity in terms of persons to carry, and estimated safety rating. The same subset of attributes, together with the size of the luggage boot, appears in the rules for class 3 (good cars) and class 4 (very good cars).
The numbers of instances covered by the rules presented in Table~\ref{tab:rulesCar} were 1,132, 273, 24, and 40 for class labels 1, 2, 3, and 4, respectively.
\linespread{1}
\begin{table}[]
\tiny
\centering
\caption{Rules for the Car dataset.}
\label{tab:rulesCar}
\begin{tabular}{lp{10cm}rrrr}
\toprule
\# & \textbf{Rule} & \textbf{Comp.} & \textbf{Conf.} & \textbf{Lift} & \textbf{Lev.} \\
\midrule
1 & persons\{2\} $\Rightarrow$ 1 & 0.48 & 1.00 & 1.43 & 0.100 \\
2 & safety\{low\} $\Rightarrow$ 1 & 0.48 & 1.00 & 1.43 & 0.100 \\
3 & buying\{v-high\}, maint\{high,v-high\} $\Rightarrow$ 1 & 0.18 & 1.00 & 1.43 & 0.037 \\
4 & buying\{high\}, lugBoot\{small\}, safety\{med\} $\Rightarrow$ 1 & 0.04 & 1.00 & 1.43 & 0.008 \\
5 & buying\{v-high\}, lugBoot\{small\}, safety\{med\} $\Rightarrow$ 1 & 0.04 & 1.00 & 1.43 & 0.008 \\
6 & buying\{med\}, maint\{high,v-high\}, lugBoot\{small\}, safety\{med\} $\Rightarrow$ 1 & 0.02 & 1.00 & 1.43 & 0.004 \\
7 & buying\{high\}, doors\{2,3\}, persons\{4\}, lugBoot\{med\}, safety\{med\} $\Rightarrow$ 1 & 0.01 & 1.00 & 1.43 & 0.001 \\
8 & buying\{v-high\}, doors\{2,3\}, persons\{4\}, lugBoot\{med\}, safety\{med\} $\Rightarrow$ 1 & 0.01 & 1.00 & 1.43 & 0.001 \\
9 & buying\{med\}, maint\{high,v-high\}, persons\{4\}, safety\{high\} $\Rightarrow$ 2 & 0.06 & 1.00 & 4.50 & 0.011 \\
10 & buying\{high\}, maint\{low,med\}, persons\{4\}, safety\{high\} $\Rightarrow$ 2 & 0.06 & 1.00 & 4.50 & 0.011 \\
11 & buying\{high\}, maint\{med,high\}, persons\{4\}, safety\{high\} $\Rightarrow$ 2 & 0.06 & 1.00 & 4.50 & 0.011 \\
12 & buying\{v-high\}, maint\{low,med\}, persons\{4\}, safety\{high\} $\Rightarrow$ 2 & 0.06 & 1.00 & 4.50 & 0.011 \\
13 & buying\{med\}, maint\{high,v-high\}, doors\{3,4\}, persons\{more\}, safety\{high\} $\Rightarrow$ 2 & 0.03 & 1.00 & 4.50 & 0.005 \\
14 & buying\{med\}, maint\{high,v-high\}, doors\{4,5-more\}, persons\{more\}, safety\{high\} $\Rightarrow$ 2 & 0.03 & 1.00 & 4.50 & 0.005 \\
15 & buying\{high\}, maint\{low,med\}, doors\{3,4\}, persons\{more\}, safety\{high\} $\Rightarrow$ 2 & 0.03 & 1.00 & 4.50 & 0.005 \\
16 & buying\{high\}, maint\{low,med\}, doors\{4,5-more\}, persons\{more\}, safety\{high\} $\Rightarrow$ 2 & 0.03 & 1.00 & 4.50 & 0.005 \\
17 & buying\{high\}, maint\{med,high\}, doors\{3,4\}, persons\{more\}, safety\{high\} $\Rightarrow$ 2 & 0.03 & 1.00 & 4.50 & 0.005 \\
18 & buying\{high\}, maint\{med,high\}, doors\{4,5-more\}, persons\{more\}, safety\{high\} $\Rightarrow$ 2 & 0.03 & 1.00 & 4.50 & 0.005 \\
19 & buying\{v-high\}, maint\{low,med\}, doors\{3,4\}, persons\{more\}, safety\{high\} $\Rightarrow$ 2 & 0.03 & 1.00 & 4.50 & 0.005 \\
20 & buying\{v-high\}, maint\{low,med\}, doors\{4,5-more\}, persons\{more\}, safety\{high\} $\Rightarrow$ 2 & 0.03 & 1.00 & 4.50 & 0.005 \\
21 & buying\{low\}, maint\{low,med\}, persons\{4\}, lugBoot\{small\}, safety\{med\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
22 & buying\{low\}, maint\{med,high\}, persons\{4\}, lugBoot\{small\}, safety\{med\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
23 & buying\{low\}, maint\{high,v-high\}, persons\{4\}, lugBoot\{small\}, safety\{high\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
24 & buying\{low\}, maint\{high,v-high\}, persons\{4\}, lugBoot\{big\}, safety\{med\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
25 & buying\{low\}, maint\{high,v-high\}, persons\{more\}, lugBoot\{big\}, safety\{med\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
26 & buying\{med\}, maint\{low,med\}, persons\{4\}, lugBoot\{small\}, safety\{med\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
27 & buying\{med\}, maint\{med,high\}, persons\{4\}, lugBoot\{small\}, safety\{high\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
28 & buying\{med\}, maint\{med,high\}, persons\{4\}, lugBoot\{big\}, safety\{med\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
29 & buying\{med\}, maint\{med,high\}, persons\{more\}, lugBoot\{big\}, safety\{med\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
30 & buying\{med\}, maint\{high,v-high\}, persons\{4\}, lugBoot\{big\}, safety\{med\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
31 & buying\{med\}, maint\{high,v-high\}, persons\{more\}, lugBoot\{med\}, safety\{high\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
32 & buying\{med\}, maint\{high,v-high\}, persons\{more\}, lugBoot\{big\}, safety\{med\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
33 & buying\{med\}, maint\{high,v-high\}, persons\{more\}, lugBoot\{big\}, safety\{high\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
34 & buying\{high\}, maint\{low,med\}, persons\{4\}, lugBoot\{big\}, safety\{med\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
35 & buying\{high\}, maint\{low,med\}, persons\{more\}, lugBoot\{med\}, safety\{high\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
36 & buying\{high\}, maint\{low,med\}, persons\{more\}, lugBoot\{big\}, safety\{med\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
37 & buying\{high\}, maint\{low,med\}, persons\{more\}, lugBoot\{big\}, safety\{high\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
38 & buying\{high\}, maint\{med,high\}, persons\{4\}, lugBoot\{big\}, safety\{med\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
39 & buying\{high\}, maint\{med,high\}, persons\{more\}, lugBoot\{med\}, safety\{high\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
40 & buying\{high\}, maint\{med,high\}, persons\{more\}, lugBoot\{big\}, safety\{med\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
41 & buying\{high\}, maint\{med,high\}, persons\{more\}, lugBoot\{big\}, safety\{high\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
42 & buying\{v-high\}, maint\{low,med\}, persons\{4\}, lugBoot\{big\}, safety\{med\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
43 & buying\{v-high\}, maint\{low,med\}, persons\{more\}, lugBoot\{med\}, safety\{high\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
44 & buying\{v-high\}, maint\{low,med\}, persons\{more\}, lugBoot\{big\}, safety\{med\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
45 & buying\{v-high\}, maint\{low,med\}, persons\{more\}, lugBoot\{big\}, safety\{high\} $\Rightarrow$ 2 & 0.02 & 1.00 & 4.50 & 0.004 \\
46 & buying\{low\}, maint\{low,med\}, persons\{4\}, lugBoot\{small\}, safety\{high\} $\Rightarrow$ 3 & 0.12 & 1.00 & 25.04 & 0.004 \\
47 & buying\{low\}, maint\{low,med\}, persons\{4\}, lugBoot\{big\}, safety\{med\} $\Rightarrow$ 3 & 0.12 & 1.00 & 25.04 & 0.004 \\
48 & buying\{low\}, maint\{low,med\}, persons\{more\}, lugBoot\{big\}, safety\{med\} $\Rightarrow$ 3 & 0.12 & 1.00 & 25.04 & 0.004 \\
49 & buying\{low\}, maint\{low,med\}, persons\{4\}, lugBoot\{big\}, safety\{high\} $\Rightarrow$ 4 & 0.12 & 1.00 & 26.58 & 0.004 \\
50 & buying\{low\}, maint\{low,med\}, persons\{more\}, lugBoot\{big\}, safety\{high\} $\Rightarrow$ 4 & 0.12 & 1.00 & 26.58 & 0.004 \\
51 & buying\{low\}, maint\{med,high\}, persons\{4\}, lugBoot\{big\}, safety\{high\} $\Rightarrow$ 4 & 0.12 & 1.00 & 26.58 & 0.004 \\
52 & buying\{low\}, maint\{med,high\}, persons\{more\}, lugBoot\{big\}, safety\{high\} $\Rightarrow$ 4 & 0.12 & 1.00 & 26.58 & 0.004 \\
53 & buying\{med\}, maint\{low,med\}, persons\{4\}, lugBoot\{big\}, safety\{high\} $\Rightarrow$ 4 & 0.12 & 1.00 & 26.58 & 0.004 \\
54 & buying\{med\}, maint\{low,med\}, persons\{more\}, lugBoot\{big\}, safety\{high\} $\Rightarrow$ 4 & 0.12 & 1.00 & 26.58 & 0.004 \\
\bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
Table~\ref{tab:rulesHeart} presents the selected rules for the Heart dataset. We have 38 rules (19 per class) describing the patients with and without heart disease. Only one patient is not covered by a rule (a patient with heart disease). More than 30\% of the patients with heart disease presented asymptomatic chest pain, reversible defect in thal, and a flat slope of the peak exercise ST segment (rule \#34). Also, more than 30\% of the patients with heart disease presented the first two characteristics together with resting electrocardiographic results showing probable or definite left ventricular hypertrophy by Estes' criteria (rule \#33). Another 17\% of the patients with heart disease have the same electrocardiographic result and are male with serum cholesterol in the range [274, 304] mg/dl (rule \#29). A portion of 23\% of the patients with heart disease are male, with asymptomatic chest pain and one major vessel coloured by fluoroscopy (rule \#27). Among the rules of class 1 (presence of heart disease), six contain male sex and only one contains female sex. On the other hand, the rule with the highest completeness for class 0 (rule \#8) indicates that 30\% of the patients without heart disease are female, with no exercise-induced angina, and with 0 major vessels coloured by fluoroscopy. Another two rules with a high coverage of the patients without heart disease (15\%) are rules \#11 and \#13, both indicating ST depression induced by exercise relative to rest in the range [0, 0.40]. Besides this, rule \#11 indicates atypical angina chest pain and normal thal, and rule \#13 indicates resting blood pressure in the range [110, 120] and 0 major vessels coloured by fluoroscopy.
\linespread{1}
\begin{table}[]
\tiny
\centering
\caption{Rules for the Heart dataset.}
\label{tab:rulesHeart}
\begin{tabular}{lp{10cm}rrrr}
\toprule
\# & \textbf{Rule} & \textbf{Comp.} & \textbf{Conf.} & \textbf{Lift} & \textbf{Lev.} \\
\midrule
1 & age[42,46], heartRate[156,165] $\Rightarrow$ 0 & 0.04 & 1.00 & 1.80 & 0.01 \\
2 & age[50,54], chestPain\{non-anginal Pain\} $\Rightarrow$ 0 & 0.13 & 0.95 & 1.71 & 0.03 \\
3 & age[51,54], fastBSugar\{yes\}, exercIAngina\{no\} $\Rightarrow$ 0 & 0.05 & 1.00 & 1.80 & 0.01 \\
4 & age[53,57], heartRate[158,168] $\Rightarrow$ 0 & 0.08 & 1.00 & 1.80 & 0.02 \\
5 & age[54,58], bloodPres[100,110], fastBSugar\{no\}, vesselsColor\{0\} $\Rightarrow$ 0 & 0.04 & 1.00 & 1.80 & 0.01 \\
6 & age[55,59], fastBSugar\{no\}, heartRate[145,155], vesselsColor\{0\} $\Rightarrow$ 0 & 0.03 & 1.00 & 1.80 & 0.01 \\
7 & age[62,66], bloodPres[120,128], oldpeak[0,0.40] $\Rightarrow$ 0 & 0.03 & 1.00 & 1.80 & 0.01 \\
8 & sex\{F\}, exercIAngina\{no\}, vesselsColor\{0\} $\Rightarrow$ 0 & 0.30 & 0.96 & 1.72 & 0.07 \\
9 & sex\{M\}, chestPain\{non-anginal Pain\}, bloodPres[120,130], chol[226,255], electro\{normal\} $\Rightarrow$ 0 & 0.04 & 1.00 & 1.80 & 0.01 \\
10 & chestPain\{typical Angina\}, fastBSugar\{no\}, oldpeak[1.40,1.90] $\Rightarrow$ 0 & 0.03 & 1.00 & 1.80 & 0.01 \\
11 & chestPain\{atypical Angina\}, oldpeak[0,0.40], thal\{normal\} $\Rightarrow$ 0 & 0.15 & 0.96 & 1.72 & 0.03 \\
12 & chestPain\{non-anginal Pain\}, slope\{upsloping\}, vesselsColor\{1\} $\Rightarrow$ 0 & 0.07 & 1.00 & 1.80 & 0.02 \\
13 & bloodPres[110,120], oldpeak[0,0.40], vesselsColor\{0\} $\Rightarrow$ 0 & 0.15 & 0.96 & 1.72 & 0.03 \\
14 & bloodPres[132,142], oldpeak[0,0.50], vesselsColor\{0\} $\Rightarrow$ 0 & 0.14 & 0.95 & 1.72 & 0.03 \\
15 & bloodPres[150,160], fastBSugar\{yes\} $\Rightarrow$ 0 & 0.04 & 1.00 & 1.80 & 0.01 \\
16 & chol[204,234], electro\{LVH\}, vesselsColor\{0\} $\Rightarrow$ 0 & 0.11 & 1.00 & 1.80 & 0.03 \\
17 & chol[295,325], electro\{normal\}, exercIAngina\{no\} $\Rightarrow$ 0 & 0.08 & 1.00 & 1.80 & 0.02 \\
18 & electro\{normal\}, slope\{upsloping\}, vesselsColor\{0\}, thal\{normal\} $\Rightarrow$ 0 & 0.25 & 0.95 & 1.71 & 0.06 \\
19 & oldpeak[0.30,0.80], thal\{normal\} $\Rightarrow$ 0 & 0.15 & 0.96 & 1.72 & 0.03 \\
20 & age[48,52], sex\{M\}, fastBSugar\{no\}, exercIAngina\{no\}, thal\{reversable Defect\} $\Rightarrow$ 1 & 0.05 & 1.00 & 2.25 & 0.01 \\
21 & age[57,60], bloodPres[150,160], fastBSugar\{no\}, electro\{LVH\} $\Rightarrow$ 1 & 0.04 & 1.00 & 2.25 & 0.01 \\
22 & age[58,62], vesselsColor\{2\} $\Rightarrow$ 1 & 0.10 & 1.00 & 2.25 & 0.02 \\
23 & age[59,63], sex\{F\}, electro\{normal\}, slope\{flat\} $\Rightarrow$ 1 & 0.04 & 1.00 & 2.25 & 0.01 \\
24 & age[65,67], chestPain\{asymptomatic\}, oldpeak[0.60,1] $\Rightarrow$ 1 & 0.04 & 1.00 & 2.25 & 0.01 \\
25 & age[66,70], exercIAngina\{yes\} $\Rightarrow$ 1 & 0.07 & 1.00 & 2.25 & 0.02 \\
26 & sex\{M\}, chestPain\{non-anginal Pain\}, slope\{flat\}, vesselsColor\{1\} $\Rightarrow$ 1 & 0.05 & 1.00 & 2.25 & 0.01 \\
27 & sex\{M\}, chestPain\{asymptomatic\}, vesselsColor\{1\} $\Rightarrow$ 1 & 0.23 & 0.97 & 2.17 & 0.06 \\
28 & sex\{M\}, bloodPres[136,146], fastBSugar\{no\}, oldpeak[1.60,2] $\Rightarrow$ 1 & 0.06 & 1.00 & 2.25 & 0.01 \\
29 & sex\{M\}, chol[274,304], electro\{LVH\} $\Rightarrow$ 1 & 0.17 & 0.95 & 2.15 & 0.04 \\
30 & sex\{M\}, heartRate[124,132], oldpeak[0.80,1.20] $\Rightarrow$ 1 & 0.04 & 1.00 & 2.25 & 0.01 \\
31 & chestPain\{asymptomatic\}, bloodPres[130,140], heartRate[103,111] $\Rightarrow$ 1 & 0.04 & 1.00 & 2.25 & 0.01 \\
32 & chestPain\{asymptomatic\}, bloodPres[150,160], fastBSugar\{no\}, thal\{reversable Defect\} $\Rightarrow$ 1 & 0.07 & 1.00 & 2.25 & 0.02 \\
33 & chestPain\{asymptomatic\}, electro\{LVH\}, thal\{reversable Defect\} $\Rightarrow$ 1 & 0.32 & 0.97 & 2.19 & 0.08 \\
34 & chestPain\{asymptomatic\}, slope\{flat\}, thal\{reversable Defect\} $\Rightarrow$ 1 & 0.33 & 0.95 & 2.14 & 0.08 \\
35 & bloodPres[124,132], heartRate[131,141], thal\{reversable Defect\} $\Rightarrow$ 1 & 0.07 & 1.00 & 2.25 & 0.02 \\
36 & bloodPres[130,140], chol[330,353] $\Rightarrow$ 1 & 0.04 & 1.00 & 2.25 & 0.01 \\
37 & bloodPres[132,140], oldpeak[2.60,3.10] $\Rightarrow$ 1 & 0.04 & 1.00 & 2.25 & 0.01 \\
38 & fastBSugar\{no\}, oldpeak[3.40,3.80], slope\{flat\} $\Rightarrow$ 1 & 0.04 & 1.00 & 2.25 & 0.01 \\ \bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
The selected rules for the Voting dataset are exhibited in Table~\ref{tab:rulesVoting}: 5 rules describe the Republicans and 8 rules describe the Democrats. Almost all Democrats, 92\%, are identified by the negative vote on the physician-fee-freeze (rule \#6). At the same time, 83\% of the Republicans are identified by the positive vote on the physician-fee-freeze together with the negative vote on the adoption-of-the-budget-resolution (rule \#1). When the same attribute appears both in rules of class 0 and in rules of class 1, it always appears with opposite values; examples are physician-fee-freeze, education-spending, and adoption-of-the-budget-resolution. Rule \#10 shows that the Democrats voted in favour of the adoption-of-the-budget-resolution and the synfuels-corporation-cutback, whereas rule \#2 shows that the Republicans voted against these two topics, and also against duty-free-exports. Similarly, rule \#11 shows that Democrats voted in favour of the synfuels-corporation-cutback and against education-spending, while rule \#4 indicates that Republicans voted against the synfuels-corporation-cutback, in favour of education-spending, and against the anti-satellite-test-ban. Of the 4 voters not covered by any rule in Table~\ref{tab:rulesVoting}, 3 have too many missing values; the fourth is a Democrat exhibiting an abnormal pattern.
Notice that the Voting dataset contains only binary attributes, thus allowing a direct comparison with results provided by traditional FPM/FCA algorithms. Our rules contain positive (yes) and negative (no) responses for the attributes, whereas rules mined in the traditional way by FPM/FCA algorithms contain only positive answers. Of course, it is possible to use strategies such as itemization, which creates an augmented binary matrix with twice the number of columns of the original one; depending on the number of attributes, this matrix augmentation may become computationally prohibitive.
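A minimal sketch of the itemization strategy mentioned above (our own naming, for illustration): each yes/no attribute is replaced by two items, one per answer, so the binary matrix doubles in width.

```python
def itemize(binary_rows):
    """Replace each yes/no attribute by two items (attr=yes, attr=no),
    doubling the number of columns of the binary matrix."""
    return [[bit for v in row for bit in ((1, 0) if v else (0, 1))]
            for row in binary_rows]
```

For the 16-attribute Voting dataset this yields 32 columns; with many more attributes, the blow-up noted above becomes the limiting factor.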
\linespread{1}
\begin{table}[]
\footnotesize
\centering
\caption{Rules for the Voting dataset.}
\label{tab:rulesVoting}
\begin{tabular}{lp{10cm}rrrr}
\toprule
\# & \textbf{Rule} & \textbf{Comp.} & \textbf{Conf.} & \textbf{Lift} & \textbf{Lev.} \\
\midrule
1 & budgetRes\{no\}, physicianFF\{yes\} $\Rightarrow$ 0 & 0.83 & 0.96 & 2.48 & 0.19 \\
2 & budgetRes\{no\}, sfCorpCut\{no\}, dutyFree\{no\} $\Rightarrow$ 0 & 0.62 & 0.97 & 2.52 & 0.15 \\
3 & physicianFF\{yes\}, admSA\{yes\} $\Rightarrow$ 0 & 0.56 & 0.96 & 2.48 & 0.13 \\
4 & antiSatelliteTT\{no\}, sfCorpCut\{no\}, eduSpending\{yes\} $\Rightarrow$ 0 & 0.55 & 0.97 & 2.51 & 0.13 \\
5 & ES-aid\{yes\}, antiSatelliteTT\{yes\}, mxMissile\{no\}, sfCorpCut\{no\} $\Rightarrow$ 0 & 0.11 & 0.95 & 2.46 & 0.03 \\
6 & physicianFF\{no\} $\Rightarrow$ 1 & 0.92 & 0.99 & 1.62 & 0.21 \\
7 & aidNicaraguaC\{yes\}, eduSpending\{no\} $\Rightarrow$ 1 & 0.70 & 0.96 & 1.56 & 0.15 \\
8 & hInfants\{yes\}, mxMissile\{yes\} $\Rightarrow$ 1 & 0.43 & 0.96 & 1.56 & 0.10 \\
9 & budgetRes\{yes\}, immigration\{no\} $\Rightarrow$ 1 & 0.43 & 0.96 & 1.56 & 0.10 \\
10 & budgetRes\{yes\}, sfCorpCut\{yes\} $\Rightarrow$ 1 & 0.40 & 0.97 & 1.58 & 0.09 \\
11 & sfCorpCut\{yes\}, eduSpending\{no\} $\Rightarrow$ 1 & 0.36 & 0.97 & 1.58 & 0.08 \\
12 & wProject\{yes\}, superfundRS\{no\} $\Rightarrow$ 1 & 0.24 & 0.98 & 1.60 & 0.06 \\
13 & wProject\{yes\}, dutyFree\{yes\} $\Rightarrow$ 1 & 0.24 & 0.97 & 1.58 & 0.05 \\
\bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
Finally, Table~\ref{tab:rulesZoo} presents the selected rules for the Zoo dataset. Only one rule is necessary to describe the animals of classes 1, 2, 4, 5, and 6. Classes 3 and 7 are each described by two rules. A peculiar aspect of the obtained results is that some attributes, such as milk and feathers, appear in all rules. Milk has the value yes only for class 1, and feathers has the value yes only for class 2. In fact, each of these attributes alone can fully describe its class. They appear together with other attributes because the biclusters are maximal, so all discriminant aspects present in the instances of a class are shown. For instance, all animals from class 1 do not have feathers, produce milk, have a backbone, breathe, and are not venomous.
\linespread{1}
\begin{table}[]
\tiny
\centering
\caption{Rules for the Zoo dataset.}
\label{tab:rulesZoo}
\begin{tabular}{lp{10cm}rrrr}
\toprule
\# & \textbf{Rule} & \textbf{Comp.} & \textbf{Conf.} & \textbf{Lift} & \textbf{Lev.} \\
\midrule
1 & feathers\{no\}, milk\{yes\}, backbone\{yes\}, breathes\{yes\}, venomous\{no\} $\Rightarrow$ 1 & 1.00 & 1.00 & 2.46 & 0.24 \\
2 & hair\{no\}, feathers\{yes\}, eggs\{yes\}, milk\{no\}, toothed\{no\}, backbone\{yes\}, breathes\{yes\}, venomous\{no\}, fins\{no\}, legs\{2\}, tail\{yes\} $\Rightarrow$ 2 & 1.00 & 1.00 & 5.05 & 0.16 \\
3 & hair\{no\}, feathers\{no\}, eggs\{yes\}, milk\{no\}, airborne\{no\}, aquatic\{no\}, backbone\{yes\}, breathes\{yes\}, fins\{no\}, tail\{yes\}, domestic\{no\} $\Rightarrow$ 3 & 0.80 & 1.00 & 20.20 & 0.04 \\
4 & hair\{no\}, feathers\{no\}, milk\{no\}, airborne\{no\}, predator\{yes\}, toothed\{yes\}, backbone\{yes\}, fins\{no\}, legs\{0\}, tail\{yes\}, domestic\{no\}, catsize\{no\} $\Rightarrow$ 3 & 0.60 & 1.00 & 20.20 & 0.03 \\
5 & hair\{no\}, feathers\{no\}, eggs\{yes\}, milk\{no\}, airborne\{no\}, aquatic\{yes\}, toothed\{yes\}, backbone\{yes\}, breathes\{no\}, fins\{yes\}, legs\{0\}, tail\{yes\} $\Rightarrow$ 4 & 1.00 & 1.00 & 7.77 & 0.11 \\
6 & hair\{no\}, feathers\{no\}, eggs\{yes\}, milk\{no\}, airborne\{no\}, aquatic\{yes\}, toothed\{yes\}, backbone\{yes\}, breathes\{yes\}, fins\{no\}, legs\{4\}, domestic\{no\}, catsize\{no\} $\Rightarrow$ 5 & 1.00 & 1.00 & 25.25 & 0.04 \\
7 & feathers\{no\}, eggs\{yes\}, milk\{no\}, aquatic\{no\}, toothed\{no\}, backbone\{no\}, breathes\{yes\}, fins\{no\}, legs\{6\}, tail\{no\}, catsize\{no\} $\Rightarrow$ 6 & 1.00 & 1.00 & 12.62 & 0.07 \\
8 & hair\{no\}, feathers\{no\}, eggs\{yes\}, milk\{no\}, airborne\{no\}, toothed\{no\}, backbone\{no\}, fins\{no\}, legs\{0\}, tail\{no\}, domestic\{no\}, catsize\{no\} $\Rightarrow$ 7 & 0.40 & 1.00 & 10.10 & 0.04 \\
9 & hair\{no\}, feathers\{no\}, milk\{no\}, airborne\{no\}, predator\{yes\}, toothed\{no\}, backbone\{no\}, fins\{no\}, domestic\{no\} $\Rightarrow$ 7 & 0.80 & 1.00 & 10.10 & 0.07 \\
\bottomrule
\end{tabular}
\end{table}
\linespread{1.5}
\section{Concluding remarks}
\label{sec:conclusion}
In this paper, we provided an enumerative biclustering algorithm to mine all maximal biclusters directly in mixed-attribute datasets, with or without missing values. A mixed-attribute dataset may be represented by a mixed-data matrix having any kind of attribute in each column, ranging from numerical (discrete or continuous) to categorical (ordinal or nominal). Of course, when a single type of data is present, such as all attributes being binary or all being real-valued, our proposal works as well. As far as we know, all alternative biclustering algorithms in the literature devoted to handling mixed-attribute datasets must rely on discretization and/or itemization routines, thus incurring information loss. This new algorithm is an extension of an existing proposal to mine constant-values-on-columns (CVC) numerical biclusters, denoted RIn-Close\_CVC \cite{VeronezeEtAl2017}, and keeps the four key properties of its predecessor: efficiency, completeness, correctness, and non-redundancy. The extension does not require additional computational cost, and it exhibits the same worst-case time complexity as the original algorithm.
Additionally, the strong connection between biclustering and frequent pattern mining (FPM) is extensively explored to (1) present the biclusters in a user-friendly and intuitive form, by automatically converting them into quantitative class association rules (QCARs), and (2) select a subset of meaningful biclusters from the enumerative solution by means of threshold indices derived from consolidated FPM metrics, more specifically confidence and lift. Moreover, our experimental results indicated that the QCARs extracted from the biclusters are a valuable, automatic means of providing relevant and interpretable models of a dataset.
In addition to the selection of biclusters based on FPM metrics, we also provided a simple heuristic to select a small but still representative group of biclusters. Our results showed that these biclusters yield a parsimonious set of relevant rules for discriminating the class labels.
In a future work, the interplay between RIn-Close\_CVC biclustering and QCARs will be further explored in the context of associative classification, which is an emerging FPM research field devoted to the synthesis of high-performance rule-based classifiers \cite{LiuEtAl1998, NguyenEtAl2015}. There are open issues in the selection of the mined CARs when building rule-based classifiers. Our intention is to address those open issues and to incorporate QCARs and fuzzy CARs \cite{AntonelliEtAl2015}.
Given that the enumerated biclusters are maximal, the associated QCARs are not the most parsimonious ones, since we are mainly focused on representative power. As another future direction of this research, we intend to explore strategies for selecting a small set of the most informative attributes to discriminate between class labels.
\section*{Acknowledgments}
F. J. Von Zuben would like to thank CNPq (process 309115/2014-0) for the financial support.
\section{Introduction}
Our work is part of the active research stream which studies the close link between continuous dissipative dynamical systems and the optimization algorithms obtained by temporal discretization. In this context, second-order evolution equations provide a natural and intuitive way to speed up algorithms.
The optimization properties then come from the damping term; it is the skill of the mathematician to design this term so as to obtain rapidly converging trajectories and algorithms (ideally, with optimal convergence rates). Precisely, we will consider the following system (ADIGE), which covers a large number of situations. Let us first fix the setting.
Let $\mathcal H$ be a real Hilbert space endowed with the scalar product $\< \cdot,\cdot\>$ and norm $\|\cdot\|$.
Let $f: {\mathcal H} \to {\mathbb R}$ be a differentiable function (not necessarily convex), whose gradient $\nabla f: {\mathcal H} \to {\mathcal H}$ is Lipschitz continuous on the bounded subsets of ${\mathcal H}$, and such that $\inf_{{\mathcal H}} f > -\infty$ (when considering the Hessian of $f$, we will assume that $f$ is twice differentiable).
Our objective is to study from the optimization point of view the Autonomous Damped Inertial Gradient Equation
\begin{equation*}
\mbox{\rm (ADIGE)} \qquad \ddot{x}(t) + \mathcal G \Big( \dot{x}(t), \nabla f({x}(t)), \nabla^2 f({x}(t))\Big) + \nabla f (x(t)) = 0,
\end{equation*}
where the damping term $\mathcal G \Big( \dot{x}(t), \nabla f (x(t)), \nabla^2 f({x}(t)) \Big) $ acts as a closed-loop control. Under suitable assumptions, this term will induce dissipative effects, which tend to stabilize asymptotically ({\it i.e.}\,\, as $t \to +\infty$) the trajectories to critical points of $f$ (minimizers in the case where $f$ is convex).
We will use the generic terminology \textit{damped inertial continuous dynamics} to designate second order evolution systems which have a strict Lyapunov function. To be specific we will refer to (ADIGE) or to some of its particular cases.
From there, we can distinguish two distinct classes of dynamics and algorithms, depending on whether the damping term involves coefficients which are given a priori as functions of time (open-loop damping, non-autonomous dynamic), or is a feedback of the current state of the system (closed-loop damping, adaptive methods, autonomous dynamic). We will use these terminologies interchangeably; to be precise, they correspond to the non-autonomous and autonomous cases, respectively.
Indeed, one of our objectives is to understand whether closed-loop damping can do as well as (and possibly improve on) the fast convergence properties of the accelerated gradient method of Nesterov. Recall that, in convex optimization, the accelerated gradient method of Nesterov (which is associated with a non-autonomous damped inertial dynamic) provides a convergence rate of order $1/t^2$, which is optimal for first-order methods (involving only evaluations of $\nabla f$ at the iterates). This justifies the importance of inertial dynamics for developing fast optimization methods (recall that the continuous steepest descent, which is a first-order evolution equation, only guarantees the convergence rate $1/t$ for general convex functions).
Closely related questions concern the impact of geometric properties of data (damping term, objective function) on the convergence rates of trajectories and iterations. This is a wide subject which concerns continuous optimization, as well as the study of the stabilization of oscillating systems in mechanics and physics.
Due to the highly nonlinear characteristics of (ADIGE) (the nonlinearity occurs both in the damping term and in the gradient of $f$), our convergence analysis will mainly rely on the combination of the quasi-gradient approach for inertial systems initiated by B\'egout--Bolte--Jendoubi \cite{BBJ} with the Kurdyka--Lojasiewicz theory. The price to pay is that some of the results are only valid in finite dimensional Hilbert spaces.
It should be noted that the relative simplicity of the functional framework (single functional space, differentiable objective function) does not allow direct application to the corresponding PDEs.
Our objective is mainly the study of optimization problems, but the Lyapunov analysis developed in the article can be a very useful guide for its extension to the PDE framework, as it was done in \cite{AA2}, \cite{BCD}, \cite{CFr}.
\subsection{Presentation of the results}
For each of the following systems, we will show existence and uniqueness of the solution of the Cauchy problem, and study its asymptotic behavior.
\subsubsection{(ADIGE-V)}
Our study mainly concerns the differential inclusion
\begin{equation}\label{closed_loop-phi_def}
\mbox{\rm (ADIGE-V)} \quad 0\in \ddot x(t) +\partial \phi(\dot x(t))+ \nabla f(x(t)),
\end{equation}
where $\phi: {\mathcal H} \to {\mathbb R}$ is a convex continuous function which achieves its minimum at the origin, and the operator $\partial \phi: {\mathcal H} \to 2^{{\mathcal H}}$ is its convex subdifferential.
The damping term $\mathcal G$ depends only on the velocity, which is reflected by the suffix V.
This model encompasses several classic situations:
\smallskip
\noindent $\bullet$ \, The case
$\phi (u)= \frac{\gamma}{2} \|u\|^2$ corresponds to the Heavy Ball with Friction method
\begin{equation}\label{HBF}
\mbox{\rm (HBF)} \quad \ddot x(t) + \gamma \dot x(t)+ \nabla f(x(t))=0
\end{equation}
introduced by B. Polyak \cite{Pol,Polyak2} and further studied by Attouch--Goudou--Redont \cite{AGR} (exploration of local minima), Alvarez \cite{Alvarez} (convergence in the convex case), Haraux-Jendoubi \cite{HJ1, HJ2} (convergence in the analytic case),
B\'egout--Bolte--Jendoubi \cite{BBJ} (convergence based on the Kurdyka-Lojasiewicz property), to cite part of the rich literature devoted to this subject.
\smallskip
\noindent $\bullet$ \, The case $\phi (u)= r\|u\|$ corresponds to the dry friction
effect. Then, (ADIGE-V) is a differential inclusion (because $\phi$ is nondifferentiable) which, when $\dot x(t)$ is not equal to zero, reads
$$
\quad \ddot x(t) + r \frac{\dot x(t)}{\|\dot x(t)\|}+ \nabla f(x(t))=0.
$$
The importance of this case in optimization comes from the finite time stabilization property of the trajectories, which is satisfied generically with respect to the initial data.
The rigorous mathematical treatment of this case has been considered by Adly--Attouch--Cabot \cite{AAC} and Amann--Diaz \cite{AmaDia}; see Adly--Attouch \cite{AA-preprint-jca, AA0, AA} for recent developments.
\smallskip
\noindent $\bullet$ \, Taking $\phi (u)= \frac{1}{p}\|u\|^p$ with $ p \geq 1$ allows us to
treat these questions in a unified way. We will pay particular attention to the role played by the parameter $p$ in the asymptotic convergence analysis. For $p>1$ the dynamic reads
$$
\ddot x(t) + \| \dot x(t)\|^{p-2} \dot x(t)+ \nabla f(x(t))=0.
$$
We will see that the case $p=2$ separates the weak damping ($p>2$) from the strong damping ($p<2$), hence the importance of this case.
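A quick numerical illustration of this dichotomy (ours, not part of the analysis below): we integrate the one-dimensional model $\ddot x + |\dot x|^{p-2}\dot x + x = 0$, i.e. $f(x)=x^2/2$, with a semi-implicit Euler scheme, writing the damping as $|\dot x|^{p-1}\mathrm{sign}(\dot x)$ so that it is well defined at $\dot x = 0$ for $p>1$.

```python
import math

# Semi-implicit Euler for  x'' + |x'|^(p-2) x' + x = 0,  f(x) = x^2/2.
# Illustrates the damping potential phi(u) = |u|^p / p for several p.
def simulate(p, x0=1.0, v0=0.0, h=1e-3, T=20.0):
    x, v = x0, v0
    for _ in range(int(T / h)):
        damp = abs(v) ** (p - 1) * math.copysign(1.0, v) if v != 0.0 else 0.0
        v += h * (-damp - x)   # velocity step with the current position
        x += h * v             # position step with the updated velocity
    return 0.5 * v * v + 0.5 * x * x   # mechanical energy at time T

energy = {p: simulate(p) for p in (1.5, 2.0, 3.0)}
```

All three runs dissipate the initial energy; the viscous case $p=2$ decays exponentially, while for $p=3$ the damping degenerates near $\dot x = 0$ and the observed decay is markedly slower.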
\subsubsection{(ADIGE-VH)}
Then,
we will extend the previous results to the differential inclusion
\begin{equation*}
\mbox{(ADIGE-VH)}\quad \ddot{x}(t) + \partial \phi (\dot{x}(t))+ \beta \nabla^2 f (x(t))\dot{x}(t) + \nabla f (x(t)) \ni 0,
\end{equation*}
which, besides a damping potential $\phi$ as above acting on the velocity, also involves a geometric damping driven by the Hessian of $f$, hence the terminology.
The inertial system
\begin{equation*}
{\rm \mbox{(DIN)}}_{\gamma,\beta} \qquad \ddot{x}(t) + \gamma \dot{x}(t) + \beta \nabla^2 f (x(t)) \dot{x}(t) + \nabla f (x(t)) = 0,
\end{equation*}
was introduced in \cite{AABR}. In the same spirit as (HBF), the dynamic ${\rm \mbox{(DIN)}}_{\gamma,\beta} $ contains a \textit{fixed} positive viscous friction coefficient $\gamma>0$. The introduction of the Hessian-driven damping makes it possible to damp the transversal oscillations that might arise with (HBF), as observed in \cite{AABR} in the case of the Rosenbrock function. The need for a geometric damping adapted to $f$ had already been observed by Alvarez \cite{Alvarez}, who considered the inertial system
\[
\ddot{x}(t) + \Gamma \dot{x}(t) + \nabla f (x(t)) = 0 ,
\]
where $\Gamma: {\mathcal H} \to {\mathcal H}$ is a linear positive anisotropic operator (see also \cite{BotCse}). But this damping operator is still fixed. For a general convex function, the Hessian-driven damping in $\mbox{\rm (DIN)}_{\gamma,\beta}$ performs a similar operation in an adaptive way. The acronym (DIN) stands for Dynamic Inertial Newton system; it refers to the natural link between this dynamic and the continuous Newton method, see
Attouch--Svaiter \cite{ASv}.
Recent works have been devoted to the study of the dynamic
\[
\ddot{x}(t) + \frac{\alpha}{t} \dot{x}(t)+ \beta \nabla^2 f (x(t)) \dot{x}(t) + \nabla f (x(t)) = 0 ,
\]
which combines asymptotic vanishing damping with Hessian-driven damping.
The corresponding algorithms involve a correcting term in the Nesterov accelerated gradient method which reduces the oscillatory aspects, see Attouch--Peypouquet--Redont \cite{APR}, Attouch--Chbani--Fadili--Riahi \cite{ACFR},
Shi--Du--Jordan--Su \cite{SDJS}.
\smallskip
\subsubsection{(ADIGE-VGH)}
Finally, we will consider the new dynamical system
\begin{equation*}
\mbox{(ADIGE-VGH)}\quad \ddot{x}(t) + \partial \phi \Big(\dot{x}(t) + \beta \nabla f (x(t))\Big) + \beta \nabla^2 f (x(t)) \dot{x} (t) + \nabla f (x(t)) \ni 0,
\end{equation*}
where the damping term $\partial \phi \Big(\dot{x}(t) + \beta \nabla f (x(t))\Big)$ involves both the velocity vector and the gradient of the potential function $f$.
The parameter $\beta \geq 0$ is attached to the geometric damping induced by the Hessian. As previously considered, $\phi$ is a damping potential function.
Assuming that $f$ is convex and $\phi$ is a sharp function at the origin, that is
$\phi (u) \geq r\|u\|$ for some $r>0$,
we will show that, for each trajectory generated by (ADIGE-VGH), the following properties are satisfied:
\smallskip
$i)$ $x(\cdot)$ converges weakly as $t\to +\infty$, and its limit belongs to $\argmin _{{\mathcal H}} f$.
\smallskip
$ii)$ $\dot{x}(t)$ and $\nabla f (x(t))$ converge strongly to zero as $t\to +\infty$.
\smallskip
$iii)$ After a finite time, $x(\cdot)$ follows the steepest descent dynamic.
\subsection{Contents}
The paper is organized in accordance with the above presentation.
In Section \ref{sec:classic}, we recall some classical facts concerning the Heavy Ball with Friction method, the Su--Boyd--Cand\`es dynamic approach to the Nesterov method, the Hessian-driven damping, and the dry friction.
Then, we successively examine each of the cases considered above:
Sections \ref{sec: basic_1}, \ref{sec: basic_2}, \ref{rate-f-str-conv}, \ref{sec:weakdamping}, \ref{sec: basic_3} are devoted to the closed-loop control of the velocity, which is the main part of our study. We show the existence and uniqueness of a global solution for the Cauchy problem, the exponential convergence rate in the case where $f$ is strongly convex, the effect of weak damping, and finally analyze the convergence under the Kurdyka--Lojasiewicz property (KL). Section \ref{sec: basic_4} presents some first related algorithmic results. Section \ref{Sec:Hessian} is devoted to the closed-loop control combined with Hessian-driven damping.
Section \ref{Sec: combine} is devoted to the closed-loop damping involving the velocity and the gradient.
We conclude by mentioning several lines of research for the future.
\section{Classical facts}\label{sec:classic}
Let us recall some classical facts which will serve as comparison tools.
\subsection{(HBF) dynamic system}
The Heavy Ball with Friction system
\begin{equation*}
{\rm (HBF)}_{r} \quad \ddot{x}(t) + r \dot{x}(t) + \nabla f (x(t)) = 0,
\end{equation*}
was introduced by B. Polyak \cite{Pol,Polyak2}. It involves a fixed viscous friction coefficient $r>0$.
Assuming that $ f $ is a convex function such that $\argmin_{{\mathcal H}} f \neq \emptyset$, we know, by Alvarez's theorem \cite{Alvarez},
that each trajectory of ${\rm (HBF)}_{r}$ converges weakly, and its limit belongs to $\argmin_{{\mathcal H}} f$.
In addition, we have the following convergence rates, whose proof (see \cite{AC10}) is based on the decrease property of the Lyapunov function
\begin{equation*}
\mathcal E (t) :=
\frac{1}{r^2}(f(x(t))-\min_\mathcal H f)+\frac{1}{2}\|x(t)-x^* + \frac{1}{r}\dot x(t)\|^2 ,
\end{equation*}
where $x^* \in \argmin_{{\mathcal H}} f$.
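Let us sketch why $\mathcal E$ decreases (a classical computation, recalled here for the reader's convenience). Differentiating $\mathcal E$ along a trajectory, and using ${\rm (HBF)}_{r}$ in the form $\dot{x}(t) + \frac{1}{r}\ddot{x}(t) = -\frac{1}{r}\nabla f(x(t))$, we obtain
\begin{align*}
\frac{d}{dt}\mathcal E (t) &= \frac{1}{r^2}\langle \nabla f(x(t)), \dot{x}(t)\rangle + \Big\langle x(t)-x^* + \frac{1}{r}\dot x(t),\; \dot{x}(t) + \frac{1}{r}\ddot{x}(t) \Big\rangle\\
&= \frac{1}{r^2}\langle \nabla f(x(t)), \dot{x}(t)\rangle - \frac{1}{r}\Big\langle x(t)-x^* + \frac{1}{r}\dot x(t),\; \nabla f(x(t)) \Big\rangle\\
&= -\frac{1}{r}\langle x(t)-x^*, \nabla f(x(t))\rangle \;\leq\; -\frac{1}{r}\big( f(x(t))-\min_\mathcal H f \big),
\end{align*}
the last inequality being the gradient inequality for the convex function $f$ at $x(t)$ with respect to $x^*$. Integrating this inequality, and using that $\mathcal E$ is bounded from below, yields the integral estimate $(i)$ of the theorem below.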
\begin{theorem}\label{th.HBF-rate of conv}
Let $f :\mathcal H\to \mathbb R$ be a convex function of class ${\mathcal C}^1$ such that $\mbox{\rm argmin} f \neq~\emptyset$, and let $r$ be a positive parameter. Let $x(\cdot): [0, + \infty[ \rightarrow \cal H$ be a solution trajectory of ${\rm (HBF)}_r$. Set $x(0)= x_0$ and $\dot x (0)= x_1$. Then, we have
\smallskip
$(i)$ $\displaystyle\int_{0}^{+\infty} (f(x(t))-\min_\mathcal H f)\, dt<+\infty$, \quad $\displaystyle\int_{0}^{+\infty} t \|\dot x(t)\|^2\, dt <+\infty$.
\smallskip
$(ii)$ \ $f(x(t))-\min_\mathcal H f \leq \displaystyle\frac{C(x_0, x_1)}{t}$, \quad $\displaystyle\|\dot x(t)\| \leq
\frac{\sqrt{ 2C(x_0, x_1)}}{\sqrt{t}} $, where
$$
C(x_0, x_1) := \frac{3}{2r} \left( f(x_0) - \min_\mathcal H f \right) + r \mbox{\rm dist}(x_0, \mbox{\rm argmin} f)^2
+ \frac{5}{4r}\|x_1\|^2 .
$$
$(iii)$ $\displaystyle f(x(t))-\min_\mathcal H f = o \left(\frac{1}{t}\right)$ \quad and \quad
$\displaystyle\|\dot x(t)\|= o\,\!\left(\frac{1}{\sqrt {t}}\right)$\quad as $t\to +\infty$.
\end{theorem}
Let us now consider the case of a strongly convex function. Recall that a function $f: {\mathcal H} \to \mathbb R$ is $\mu$-strongly convex for some $\mu >0$ if $f- \frac{\mu}{2}\| \cdot\|^2$ is convex.
We have the following exponential convergence rate, whose proof relies on the decrease property of the following Lyapunov function
$$
\mathcal E (t):= f(x(t))- \min_{\mathcal H}f + \frac{1}{2} \| \sqrt{\mu} (x(t) -x^*) + \dot{x}(t)\|^2,
$$
where $x^*$ is the unique minimizer of $f$.
\begin{theorem}\label{strong-conv-thm}
Suppose that $f: {\mathcal H} \to \mathbb R$ is a function of class ${\mathcal C}^1$ which is $\mu$-strongly convex for some $\mu >0$.
Let $x(\cdot): [0, + \infty[ \to {\mathcal H}$ be a solution trajectory of
\begin{equation}\label{dyn-sc-a}
\ddot{x}(t) + 2\sqrt{\mu} \dot{x}(t) + \nabla f (x(t)) = 0.
\end{equation}
Set $x(0)= x_0$ and $\dot x (0)= x_1$.
Then, the following property is satisfied: for all $t\geq 0$
$$
f(x(t))- \min_{\mathcal H}f \leq C e^{-\sqrt{\mu}t},
$$
where \quad
$ C:= f(x_0)- \min_{\mathcal H}f + \mu \mbox{\rm dist}(x_0,S)^2 +
\| x_1\|^2 .$
\end{theorem}
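The exponential decay can be obtained as follows (a sketch of the standard argument). Differentiating $\mathcal E$ along a trajectory of \eqref{dyn-sc-a}, and using $\sqrt{\mu}\,\dot{x}(t) + \ddot{x}(t) = -\sqrt{\mu}\,\dot{x}(t) - \nabla f(x(t))$, we get
\begin{align*}
\frac{d}{dt}\mathcal E (t) &= \langle \nabla f(x(t)), \dot{x}(t)\rangle + \big\langle \sqrt{\mu}\,(x(t)-x^*) + \dot{x}(t),\; \sqrt{\mu}\,\dot{x}(t) + \ddot{x}(t) \big\rangle\\
&= -\sqrt{\mu}\,\langle x(t)-x^*, \nabla f(x(t))\rangle - \mu \langle x(t)-x^*, \dot{x}(t)\rangle - \sqrt{\mu}\,\|\dot{x}(t)\|^2 .
\end{align*}
By $\mu$-strong convexity, $\langle x(t)-x^*, \nabla f(x(t))\rangle \geq f(x(t)) - \min_{\mathcal H}f + \frac{\mu}{2}\|x(t)-x^*\|^2$, and expanding $\sqrt{\mu}\,\mathcal E(t)$ then gives
$$
\frac{d}{dt}\mathcal E (t) + \sqrt{\mu}\, \mathcal E (t) \leq -\frac{\sqrt{\mu}}{2}\|\dot{x}(t)\|^2 \leq 0 .
$$
Gronwall's lemma yields $\mathcal E (t) \leq \mathcal E (0)\, e^{-\sqrt{\mu}t}$, and one checks that $\mathcal E(0) \leq C$ and $f(x(t))- \min_{\mathcal H}f \leq \mathcal E (t)$, which gives the claimed rate.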
A recent account on the best tuning of the damping coefficient can be found in Aujol--Dossal--Rondepierre \cite{ADR}.
The above results show the important role played by the geometric properties of the data in the convergence rates.
Apart from the convex case, the first convergence result for (HBF) was obtained by Haraux--Jendoubi \cite{HJ1} in the case where $f:{\mathbb R}^n \to {\mathbb R}$ is a real-analytic function. They have shown the central role played by Lojasiewicz's inequality (see also \cite{Chergui}).
Then, on the basis of Kurdyka's work in real algebraic geometry, the Lojasiewicz inequality was extended in \cite{BDLM} by Bolte--Daniilidis--Ley--Mazet to a large class of tame functions, possibly nonsmooth.
This is the Kurdyka--Lojasiewicz inequality, to which we will briefly refer (KL).
The convergence of first and second-order proximal-gradient dynamical systems in the context of the (KL) property was obtained by Bo\c t--Csetnek \cite{BotCseESAIM} and Bo\c t--Csetnek--L\'aszl\'o \cite{BotCseLaJEE}. The (KL) property will be a key tool for obtaining convergence rates based on the geometric properties of the data.
Note that this theory only works in the finite dimensional setting \footnote{In the field of PDE's, the Lojasiewicz--Simon theory \cite{CHJ} makes it possible to deal with certain classes of particular problems, such as semi-linear equations.} (the infinite dimensional setting is a difficult topic which is the subject of current research), and only for autonomous systems.
This explains why working with autonomous systems is important: it allows us to use the powerful (KL) theory.
\subsection{Su-Boyd-Cand\`es dynamic approach to Nesterov accelerated gradient method}
\noindent The following non-autonomous system
\begin{equation*}
\mbox{\rm (AVD)}_{\alpha} \qquad \ddot{x}(t) + \frac{\alpha}{t} \dot{x}(t) + \nabla f (x(t)) = 0,
\end{equation*}
will serve as a reference for comparison of our results with the open-loop damping approach. It was introduced in the context of convex optimization by Su--Boyd--Cand\`es in \cite{SBC}. As a specific feature, the viscous damping coefficient $\frac{\alpha}{t}$ vanishes (tends to zero) as $t$ goes to infinity, hence the terminology ``Asymptotic Vanishing Damping''. This contrasts with (HBF), where the viscous friction coefficient is fixed, which prevents obtaining fast convergence of the values for general convex functions.
Recall the main results concerning the asymptotic behaviour of the trajectories generated by $\mbox{\rm (AVD)}_{\alpha}$.
\smallskip
\begin{itemize}
\item For $\alpha \geq 3$, for each trajectory $x(\cdot)$ of $\mbox{\rm (AVD)}_{\alpha}$, \, $f(x(t)) - \inf_{{\mathcal H}}f ={\mathcal O} \left(1 /t^2\right)$ as $t \to +\infty$.
\smallskip
\item For $\alpha >3$, each trajectory converges weakly to a minimizer of $f$, see
\cite{ACPR}. In addition, it is shown in \cite{AP} and \cite{May} that $f(x(t)) - \inf_{{\mathcal H}}f = o\left(1 /t^2\right)$ as $t \to +\infty$.
\smallskip
\item For $\alpha \leq 3$, we have $\displaystyle f(x(t)) - \inf_{{\mathcal H}}f = {\mathcal O}\Big( t^{-\frac{2\alpha}{3}}\Big)$, see
\cite{AAD} and \cite{ACR-subcrit}.
\item $\alpha=3$ is a critical value.\footnote{The convergence of the trajectories is an open question in this case.} It corresponds to the historical case studied by Nesterov \cite{Nest1,Nest2}.
\end{itemize}
The implicit time discretization
of $\mbox{\rm (AVD)}_{\alpha}$ provides an inertial proximal algorithm that enjoys the same properties as the continuous dynamics. Replacing the proximal step by a gradient step gives the
following Nesterov accelerated gradient method (illustrated in Figure \ref{Nest_picture})
$$
\left\{
\begin{array}{rcl}
y_k&= & x_{k} + \left(1 -\frac{\alpha}{k}\right) ( x_{k} - x_{k-1}) \\
\rule{0pt}{15pt}
x_{k+1}& = & y_k- s\nabla f (y_k),
\end{array}\right.
$$
which still enjoys the same properties when the step size $s$ is less than the inverse of the Lipschitz constant of $\nabla f$.
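For concreteness, here is a direct transcription of the scheme above into Python. The test problem (an ill-conditioned quadratic with gradient Lipschitz constant $L=10$ and step $s=1/L$) is our own choice, and we clamp the extrapolation coefficient $1-\alpha/k$ at $0$ for the first iterates, a common convention not specified in the display.

```python
# Nesterov accelerated gradient for f(x1, x2) = (x1^2 + 10 x2^2) / 2,
# following the two-step scheme: extrapolate to y_k, then gradient step.
def f(x):
    return 0.5 * (x[0] ** 2 + 10.0 * x[1] ** 2)

def grad(x):
    return [x[0], 10.0 * x[1]]

def nesterov(x0, alpha=3.0, s=0.1, iters=500):
    x_prev, x = list(x0), list(x0)
    for k in range(1, iters + 1):
        m = max(0.0, 1.0 - alpha / k)   # momentum (1 - alpha/k), clamped at 0
        y = [x[i] + m * (x[i] - x_prev[i]) for i in range(len(x))]
        g = grad(y)
        x_prev, x = x, [y[i] - s * g[i] for i in range(len(x))]
    return x

x_star = nesterov([1.0, 1.0])
```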
Based on the dynamic approach above, many recent studies have been devoted to the convergence properties of the sequences $(x_k)$ and $(y_k)$, which have led to a better understanding and improvement of Nesterov's accelerated gradient algorithm \cite{AAD}, \cite{AC2}, \cite{ACFR}, \cite{ACPR}, \cite{ACR-subcrit},
\cite{AP}, \cite{CD}, \cite{SBC}, and of the Ravine algorithm \cite{GT}, \cite{PKD}.
\smallskip
\begin{figure}
\setlength{\unitlength}{6cm}
\begin{picture}(0.5,0.7)(-0.65,0.00)
\textcolor{blue}{
\put(0.357,0.51){$y_k = x_{k} + \left(1- \frac{\alpha}{k} \right)( x_{k} - x_{k-1})$}
\put(0.328,0.493){{\tiny $\bullet$}}
\put(0.369,0.582){$x_k$}
\put(0.348,0.565){{\tiny $\bullet$}}
\put(0.37,0.657){$x_{k-1}$}
\put(0.364,0.635){{\tiny $\bullet$}}
\put(0.293,0.365){$x_{k+1} = \ y_k - s\nabla f \left( y_k\right) $}
\put(-0.1,0.29){$\argmin f$}
}
\qbezier(-0.1,0.2)(0.4,0.37)(-0.1,0.42)
\qbezier(-0.1,0.15)(0.6,0.4)(-0.1,0.48)
\qbezier(-0.1,0.02)(1.05,0.45)(-0.1,0.6)
\qbezier(0.,-0.01)(1.2,0.53)(-0.1,0.65)
\qbezier(0.16,-0.01)(1.42,0.59)(-0.1,0.709)
\put(-0.1,0.23){\line(1,1){0.16}}
\put(-0.02,0.23){\line(1,1){0.136}}
\put(-0.1,0.32){\line(1,1){0.085}}
\textcolor{red}{
\put(0.019,0.633){\line(5,-2){0.6}}
\put(0.341,0.504){\vector(-1,-2){0.06}}
\put(0.385,0.485){\line(-1,-2){0.022}}
\put(0.32,0.463){\line(2,-1){0.042}}
}
\textcolor{red}{
\put(0.318,0.505){\line(2,8){0.035}}
}
\end{picture}
\caption{Nesterov accelerated gradient method}
\label{Nest_picture}
\end{figure}
\subsubsection{Optimal convergence rates} In the above results the convergence rates are optimal, that is, they can be reached, or approached arbitrarily closely, as shown by the following example from \cite{ACPR}.
Let us show that $\mathcal O (1/t^2)$ is the worst possible case for the rate of convergence of the values along the $ \mbox{(AVD)}_{\alpha} $ trajectories when $\alpha \geq 3$; it is attained as a limit in the following example.
Take $\mathcal H = \mathbb R$ and $f (x) = c|x|^{\gamma}$, where $c$ and $\gamma$ are positive parameters. We look for nonnegative solutions of $\rm{(AVD)_{\alpha}}$ of the form $x(t)= \frac{1}{t^{\theta}}$, with $\theta >0$. This means that the trajectory does not oscillate; it is a completely damped trajectory. Let us determine the values of $c$, $\gamma$ and $\theta$ that provide such solutions. We have
$$\ddot{x}(t) + \frac{\alpha}{t} \dot{x}(t) = \theta (\theta +1 -\alpha) \frac{1}{t^{\theta+ 2}}, \quad \nabla f (x(t))= c \gamma |x(t)|^{\gamma -2}x(t) = c \gamma \frac{1}{t^{\theta (\gamma -1)}}.
$$
Thus, $x(t)= \frac{1}{t^{\theta}}$ is solution of $\rm{(AVD)_{\alpha}}$ if, and only if,
\smallskip
\begin{itemize}
\item [i)] $\theta+ 2 = \theta (\gamma -1)$, which is equivalent to $\gamma >2$ and $\theta= \frac{2}{\gamma -2}$; and
\item [ii)] $c \gamma = \theta (\alpha -\theta -1)$, which is equivalent to $\alpha > \frac{\gamma}{\gamma -2}$ and $c= \frac{2}{\gamma(\gamma -2)}( \alpha - \frac{\gamma}{\gamma -2})$.
\end{itemize}
\smallskip
\noindent We have $\min f = 0$ and
$
f (x(t)) =\frac{2}{\gamma(\gamma -2)}( \alpha - \frac{\gamma}{\gamma -2} ) \frac{1}{t^{\frac{2 \gamma}{\gamma -2 }}}.$\\
The speed of convergence of $f (x(t))$ to $0$ depends on the parameter $\gamma$. The exponent $\frac{2 \gamma}{\gamma -2 }$ is greater than $2$, and tends to $2$ as $\gamma$ tends to $+\infty$. This limiting situation is obtained by taking a function $f$ which becomes very flat around the set of its minimizers. Therefore, without any further geometric assumption on $f$, we cannot expect a convergence rate better than $\mathcal O (1/t^2)$: it is not possible to obtain a rate $\mathcal O (1/t^{r})$ with $r>2$ valid for all convex functions. Hence, when $\alpha \geq 3$, the rate $\mathcal O (1/t^{2})$ is sharp. This does not contradict the rate $o (1/t^{2})$ obtained for each fixed trajectory when $\alpha > 3$.
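This computation is easy to check numerically. Taking $\gamma=4$ and $\alpha=3$, the formulas above give $\theta = 1$ and $c = 1/4$, so $f(x)=|x|^4/4$ and $x(t)=1/t$ should solve $\rm{(AVD)_{\alpha}}$ exactly; the following sanity check (ours) evaluates the residual of the ODE at a few times.

```python
# Check that x(t) = 1/t solves  x'' + (alpha/t) x' + f'(x) = 0  for
# f(x) = |x|^4 / 4 (i.e. gamma = 4, c = 1/4, theta = 1) with alpha = 3.
def residual(t, alpha=3.0):
    x = 1.0 / t              # x(t) = t^(-theta) with theta = 1
    xdot = -1.0 / t ** 2
    xddot = 2.0 / t ** 3
    grad_f = x ** 3          # f'(x) = |x|^2 x = x^3 for x > 0
    return xddot + (alpha / t) * xdot + grad_f

residuals = [residual(t) for t in (1.0, 2.5, 10.0)]
```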
\subsection{Hessian-driven damping}\label{sec:Hessian_intro}
The inertial system
\begin{equation*}
{\rm \mbox{(DIN)}}_{\gamma,\beta} \qquad \ddot{x}(t) + \gamma \dot{x}(t) + \beta \nabla^2 f (x(t)) \dot{x}(t) + \nabla f (x(t)) = 0,
\end{equation*}
was introduced in \cite{AABR}. In line with (HBF), the viscous friction coefficient $\gamma$ is a \textit{fixed} positive real number. The introduction of the Hessian-driven damping makes it possible to neutralize the oscillations likely to occur with (HBF), a key property for numerical optimization purposes.\\
To accelerate this system, several studies considered the case where the viscous damping is vanishing. As a model example, which is based on the Su--Boyd--Cand\`es continuous model for the Nesterov accelerated gradient method, we have
\begin{equation}\label{DIN-AVD}
{\rm (DIN-AVD)}_{\alpha, \beta} \qquad \ddot{x}(t) + \frac{\alpha}{t}\dot{x}(t) + \beta \nabla^2 f (x(t)) \dot{x} (t) + \nabla f (x(t)) = 0.
\end{equation}
For this system, let us quote Attouch--Peypouquet--Redont \cite{APR}, Attouch--Chbani--Fadili--Riahi \cite{ACFR},
Bo\c t--Csetnek--L\'{a}szl\'{o} \cite{BCL},
Castera--Bolte--F\'evotte--Pauwels \cite{CBFP}, Kim \cite{Kim}, Lin--Jordan \cite{LJ}, Shi--Du--Jordan--Su \cite{SDJS}.
While preserving the convergence properties of $\mbox{\rm (AVD)}_{\alpha}$, the above system provides fast convergence to zero of the gradients, namely $\int_{t_0}^{\infty} t^2 \|\nabla f (x(t)) \|^2 dt < + \infty$ for $\alpha \geq 3$ and $\beta>0$, and reduces the oscillatory aspects.
\begin{small}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{avddynavd_2d_T5.pdf}
\end{center}
\caption{Evolution of the objective (left) and trajectories (right) for ${\rm (AVD)}_{\alpha}$ ($\alpha=3.1)$ and ${\rm (DIN-AVD)}_{\alpha, \beta}$ ($\alpha=3.1,\beta=1$) on an ill-conditioned quadratic problem in ${\mathbb R}^2$.}
\label{figH}
\end{figure}
\end{small}
To illustrate the remarkable effect of the Hessian-driven damping, let us compare the two dynamics ${\rm (AVD)}_{\alpha}$ and ${\rm (DIN-AVD)}_{\alpha, \beta}$ on a simple ill-conditioned quadratic minimization problem. In the following example of \cite{ACFR}, the trajectories can be computed in closed form. Take ${\mathcal H}= {\mathbb R}^2$ and $ f(x_1,x_2)=\frac{1}{2}(x_1^2+1000x_2^2)$. We take the parameters $\alpha=3.1$ and $\beta=1$, so as to satisfy the condition $\alpha >3$. Starting from the initial conditions $(x_1(1),x_2(1))=(1,1)$ and $(\dot x_1(1),\dot x_2(1))=(0,0)$, we obtain the trajectories displayed in Figure~\ref{figH}. We observe that the wild oscillations of ${\rm (AVD)}_{\alpha}$ are neutralized by the presence of the Hessian-driven damping in ${\rm (DIN-AVD)}_{\alpha, \beta}$.
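This behavior can also be reproduced by direct numerical integration. The following sketch (an illustrative semi-implicit Euler discretization of the two systems; the scheme and step size are our own choices, not taken from \cite{ACFR}) counts the sign changes of the oscillating component $x_2$ on $[1,5]$:

```python
import numpy as np

# f(x1, x2) = 0.5*(x1^2 + 1000*x2^2); diagonal Hessian (1, 1000)
H = np.array([1.0, 1000.0])

def simulate(alpha, beta, T=5.0, dt=1e-4):
    # (DIN-AVD): xdd + (alpha/t) xd + beta*H xd + H x = 0; beta = 0 gives (AVD)
    x = np.array([1.0, 1.0]); v = np.zeros(2); t = 1.0
    traj = [x.copy()]
    for _ in range(int((T - 1.0) / dt)):
        a = -(alpha / t) * v - beta * H * v - H * x
        v = v + dt * a          # semi-implicit Euler: update the velocity first,
        x = x + dt * v          # then the position with the new velocity
        t += dt
        traj.append(x.copy())
    return np.array(traj)

def sign_changes(u):
    s = np.sign(u[np.abs(u) > 1e-9])
    return int(np.sum(s[1:] != s[:-1]))

traj_avd = simulate(alpha=3.1, beta=0.0)
traj_din = simulate(alpha=3.1, beta=1.0)
print(sign_changes(traj_avd[:, 1]), sign_changes(traj_din[:, 1]))
```

With these data, the $x_2$ component of ${\rm (AVD)}_{\alpha}$ changes sign many times on $[1,5]$, whereas that of ${\rm (DIN-AVD)}_{\alpha, \beta}$ decays without oscillating.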
At first glance, the presence of the Hessian may seem to cause numerical difficulties. However, this is not the case because the Hessian intervenes in the above ODE in the form $\nabla^2 f (x(t)) \dot{x} (t)$, which is nothing other than the derivative with respect to time of $\nabla f (x(t))$. Thus, the temporal discretization of this dynamic provides first-order algorithms
which, by comparison with the accelerated gradient method of Nesterov, contain a correction term which is equal to the difference of the gradients at two consecutive steps.
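As an illustration, here is one possible discretization of this type (a hypothetical sketch: the extrapolation coefficient $k/(k+\alpha)$ and the correction coefficient $\beta \sqrt{s}$ are illustrative choices, not the exact algorithms of the references), run on the quadratic example above:

```python
import numpy as np

# f(x1, x2) = 0.5*(x1^2 + 1000*x2^2)
def grad(x):
    return np.array([x[0], 1000.0 * x[1]])

s = 1.0 / 1000.0                 # step size of the order of 1/L
alpha, beta = 3.1, 1.0
x_prev = np.array([1.0, 1.0]); x = x_prev.copy()
for k in range(1, 2001):
    # inertial extrapolation + correction by the difference of two gradients
    y = x + (k / (k + alpha)) * (x - x_prev) \
          - beta * np.sqrt(s) * (grad(x) - grad(x_prev))
    x_prev, x = x, y - s * grad(y)

f_final = 0.5 * (x[0]**2 + 1000.0 * x[1]**2)
print(f_final)
```
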
\noindent The following closely related inertial system was recently introduced by
Alecsa--L\'aszl\'o--Pinta \cite{ALP}
\begin{equation*}
\ddot{x}(t) + \frac{\alpha}{t} \dot{x}(t) + \nabla f \Big (x(t) + \beta \dot{x}(t) \Big) = 0.
\end{equation*}
The link with ${\rm (DIN-AVD)}_{\alpha, \beta}$ results from Taylor expansion: as $t \to +\infty$ we have $\dot{x}(t) \to 0$, and so
$ \nabla f \Big (x(t) + \beta \dot{x}(t) \Big) \approx \nabla f (x(t)) + \beta \nabla^2 f (x(t)) \dot{x} (t) $.
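This approximation can be checked numerically: the gap between $\nabla f (x + \beta \dot{x})$ and its linearization is of order $\|\dot{x}\|^2$. A one-dimensional sketch, with the illustrative choice $f(x)=x^4$ (so that $f'(x)=4x^3$ and $f''(x)=12x^2$):

```python
# Taylor check for f(x) = x^4: the remainder of the expansion of f'(x + beta*v)
# around v = 0 is quadratic in v, so dividing v by 10 divides the error by ~100.
def fp(x):  return 4.0 * x**3     # f'
def fpp(x): return 12.0 * x**2    # f''

beta, x = 1.0, 0.7
def taylor_error(v):
    return abs(fp(x + beta * v) - (fp(x) + beta * fpp(x) * v))

e1, e2 = taylor_error(1e-2), taylor_error(1e-3)
print(e1 / e2)
```
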
\subsubsection{Hessian-driven damping and unilateral mechanics}
Another motivation for the study of ${\rm \mbox{(DIN)}}_{\gamma,\beta}$ comes from mechanics, and the modeling of damped shocks.
In \cite{AMR}, Attouch--Maing\'e--Redont consider the inertial system with Hessian-driven damping
\begin{equation}
\label{shoks}
\ddot{x}(t) + \gamma \dot{x}(t) + \beta \nabla^2 f (x(t))\dot{x}(t) + \nabla f (x(t)) + \nabla g(x(t))\; =0,
\end{equation}
where $g:{\mathcal H}\to{\mathbb R}$ is a smooth real-valued function.
An interesting property of this system
is that, after the introduction of an auxiliary variable $y$, it can be equivalently written as a first-order
system involving only the time derivatives $\dot{x}(t)$, $\dot{y}(t)$ and the gradient terms $\nabla f (x(t))$, $\nabla g (x(t))$. More precisely, the system \eqref{shoks} is equivalent to the following first-order differential equation
\begin{equation}
\label{shoks2}
\left\{
\begin{array}{l}
\dot{x}(t)+\beta \nabla f(x(t))+ax(t)+by(t)=0,\vspace{1mm}\\
\dot{y}(t)-\beta\nabla g(x(t))+ax(t)+by(t)=0,
\end{array}
\right.
\end{equation}
where $a$ and $b$ are real numbers such that $a+b=\gamma$ and $\beta b=1$. Note that \eqref{shoks2} is different from the classical Hamiltonian formulation, which would still involve the Hessian of $f$. In contrast, the formulation \eqref{shoks2} uses only first-order information from the function $f$ (no occurrence of the Hessian of $f$).
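This equivalence can be verified numerically. The sketch below uses illustrative one-dimensional data ($f(x)=x^2/2$, $g(x)=x^2/4$, $\gamma=3$, $\beta=1/2$, hence $b=1/\beta=2$ and $a=\gamma-b=1$), integrates \eqref{shoks} and \eqref{shoks2} with matched initial conditions, and compares the two $x$-trajectories:

```python
# 1-D data: f(x) = x^2/2, g(x) = x^2/4, so grad f = x, hess f = 1, grad g = x/2
gam, beta = 3.0, 0.5
b = 1.0 / beta                 # beta * b = 1
a = gam - b                    # a + b = gamma
dt, T = 1e-4, 2.0
x0, v0 = 1.0, 0.0

# second-order system: xdd + gam*xd + beta*(hess f)*xd + grad f + grad g = 0
x, v = x0, v0
for _ in range(int(T / dt)):
    acc = -gam * v - beta * v - x - 0.5 * x
    x, v = x + dt * v, v + dt * acc
x_second = x

# first-order system; y(0) is chosen so that xdot(0) = v0
x, y = x0, (-v0 - beta * x0 - a * x0) / b
for _ in range(int(T / dt)):
    xd = -(beta * x + a * x + b * y)
    yd = beta * 0.5 * x - a * x - b * y
    x, y = x + dt * xd, y + dt * yd
x_first = x

print(abs(x_second - x_first))   # the two formulations produce the same x
```
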
Replacing $\nabla f$ by $\partial f$ in \eqref{shoks2}
allows us to extend the analysis to the case of a convex lower semicontinuous function $f :{\mathcal H} \to \mathbb R\cup\{+\infty\} $, and so to introduce constraints in the model. When $f = \delta_K$ is the indicator
function of a closed convex set $K \subset {\mathcal H}$ , the subdifferential operator $\partial f$ takes account of the
contact forces, while $\nabla g$ takes account of the driving forces. In this setting, by playing with
the geometric damping parameter $\beta$, one can describe nonelastic shock laws with restitution
coefficient (for more details we refer to \cite{AMR} and references therein).
The combination of dry friction ($\phi(u)= r\|u\|$) with Hessian-driven damping
has been considered by Adly--Attouch \cite{AA-preprint-jca,AA}.
\subsection{Inertial dynamics with dry friction}
Although dry friction (also called Coulomb friction) plays a fundamental role in mechanics, its use in optimization has only recently been developed. Due to the nonsmooth character of the associated damping function $ \phi (u) = r \| u \| $,
the dynamics is a differential inclusion, which, when the speed is not equal to zero, is given by
$$
\quad \ddot x(t) + r \frac{\dot x(t)}{\|\dot x(t)\|}+ \nabla f(x(t))=0.
$$
In this case, the energy estimate gives
$\int_{0}^{+\infty} \| \dot{x}(t) \| dt <+\infty.$
Therefore, the trajectory has finite length, and it converges strongly.
The limit $x_{\infty}$ of the trajectory $x(\cdot)$ satisfies
$$
\| \nabla f(x_{\infty} )\| \leq r.
$$
Thus, $x_{\infty}$ is an ``approximate'' critical point
of $f$. In practice, for optimization purposes, we choose a small $r>0$.
This amounts to solving the optimization problem $\min_{{\mathcal H}} f$ with the variational principle of Ekeland, instead of the Fermat rule.
The importance of this case in optimization comes from the finite time stabilization property of the trajectories, which is satisfied generically with respect to the initial data.
The rigorous mathematical treatment of this case has been considered by Adly--Attouch--Cabot \cite{AAC}; see Adly--Attouch \cite{AA-preprint-jca, AA0, AA} for recent developments.
Corresponding PDE results have been obtained by Amann--Diaz \cite{AmaDia} for the nonlinear wave equation, and by
Carles--Gallo \cite{CarlesGallo} for the nonlinear Schr\"odinger equation.
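A minimal simulation illustrates these properties. In the sketch below (illustrative data: $f(x)=\frac{1}{2}\|x\|^2$ on ${\mathbb R}^2$ and $r=0.1$; the scheme, a velocity shrinkage which is the proximal step associated with the dry friction term, is our own choice), the trajectory stops in finite time at a point $x_{\infty}$ with $\|\nabla f (x_{\infty})\| \leq r$, away from the minimizer:

```python
import numpy as np

# Dry friction for f(x) = 0.5*||x||^2 (grad f(x) = x), r = 0.1.
# Implicit treatment of the friction: explicit gradient step on the velocity,
# then the proximal operator of dt*r*||.||, i.e. a shrinkage of the norm.
r, dt, T = 0.1, 1e-3, 30.0
x = np.array([1.0, 0.5]); v = np.zeros(2)
for _ in range(int(T / dt)):
    w = v - dt * x
    nw = np.linalg.norm(w)
    v = np.zeros(2) if nw <= r * dt else (1.0 - r * dt / nw) * w
    x = x + dt * v

print(np.linalg.norm(v), np.linalg.norm(x))
```

The velocity vanishes exactly after a finite number of steps, and the limit point satisfies $\|\nabla f(x_{\infty})\| = \|x_{\infty}\| \leq r$, although it is not a critical point of $f$.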
\subsection{Closed-loop versus open-loop damping}
In the strongly convex case, the autonomous system (HBF) provides an exponential rate of convergence.
On the other hand, the $\mbox{\rm (AVD)}_{\alpha}$ system provides a convergence rate of order $1/t^{\alpha}$.
Thus, in this case, the closed-loop damping behaves better than the open-loop damping.
For general convex functions ({\it i.e.}\,\, in the worst case), we have the opposite situation.
$\mbox{\rm (AVD)}_{\alpha}$ provides a convergence rate $1/t^2$, while (HBF) gives only $1/t$.
In this paper, we will study the impact of the choice of the damping potential on the rate of convergence.
A related question is: using autonomous systems, can we obtain for general convex functions, a convergence rate of order $1/t^2$,
{\it i.e.}\,\, as good as the Nesterov accelerated gradient method?
As we will see, to answer these questions, we will have to study different types of closed-loop damping, and rely on the geometric properties of the data.
These questions fall within the framework of an active current of research; let us quote some recent works: Apidopoulos--Aujol--Dossal--Rondepierre \cite{AADR} (geometrical properties of the data), Iutzeler--Hendricx \cite{IH} (online acceleration), Lin--Jordan \cite{LJ} (control perspective on high-order optimization), Poon--Liang \cite{PL} (geometry of first-order methods and adaptive acceleration).
\vspace{3mm}
\section{Damping via closed-loop velocity control, existence and uniqueness}\label{sec: basic_1}
In this section, we will successively introduce the notion of damping potential, then prove the existence and uniqueness of the solution of the corresponding Cauchy problem.
\subsection{Damping potential}
We consider the differential inclusion
\begin{equation*}
\mbox{\rm (ADIGE-V)} \qquad 0\in \ddot x(t) +\partial \phi(\dot x(t))+ \nabla f(x(t)),
\end{equation*}
where $\phi$ is a convex damping potential, which is defined below.
\begin{definition}\label{def1}
A function $\phi : {\mathcal H} \to {\mathbb R}_+$ is a damping potential if it satisfies (i), (ii), (iii):
$(i)$ $\phi$ is a nonnegative convex continuous function;
\smallskip
$(ii)$ $\phi (0) = 0 = \min_{{\mathcal H}} \phi $;
\smallskip
$(iii)$ the minimal section of $\partial \phi$ is bounded on the bounded sets, that is, for any $R>0$
$$
\sup_{\|u\|\leq R} \|(\partial \phi)^0(u) \| <+\infty.
$$
\end{definition}
In the above, $(\partial \phi)^0(u)$ denotes the element of minimal norm of the nonempty closed convex set $\partial \phi(u)$, see \cite[Proposition 2.6]{Brezis}.
Note that, when ${\mathcal H}$ is finite dimensional, property $(iii)$ is automatically satisfied. Indeed, in this case, $\partial \phi$ is bounded on the bounded sets, see \cite[Proposition 16.17]{BC}.
\noindent The concept of damping potential is flexible, and allows us to cover various situations. For example,
$$ \phi_1(u)=\frac{\gamma}{2}\|u\|^{2} + r\|u\|, \quad \phi_2(u)=\max\{\frac{\gamma}{2}\|u\|^{2}; r\|u\| \}$$
are damping potentials
which combine dry friction with viscous damping, see \cite{AA}.
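For $\phi_1$, property $(iii)$ of Definition \ref{def1} can be checked by a direct computation: for $u \neq 0$, $\partial \phi_1 (u)$ reduces to the singleton $\big\{ \gamma u + r \frac{u}{\|u\|} \big\}$, while $\partial \phi_1 (0) = \{ \xi \in {\mathcal H}: \ \|\xi\| \leq r \}$, whose element of minimal norm is $0$. Hence, for any $R>0$,
$$
\sup_{\|u\|\leq R} \|(\partial \phi_1)^0(u) \| \leq \gamma R + r <+\infty.
$$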
\subsection{Existence and uniqueness results}
In this section, we study the existence and the uniqueness of the solution of the Cauchy problem associated with \mbox{\rm (ADIGE-V)}, where $\phi$ is a convex damping potential.
No convexity assumption is made on the function $f$, which is supposed to be differentiable.
Since we work with autonomous (dissipative) systems, we can take an arbitrary initial time $t_0$.
As is usual, we take $t_0 =0$, and hence work on the time interval $[0, +\infty[$.\\
Let us make precise the notion of strong solution.
\begin{definition}
The trajectory $x: [0, +\infty[ \to {\mathcal H}$ is said to be a strong global solution of
\mbox{\rm (ADIGE-V)} if it satisfies the following properties:
\smallskip
$(i)$ $x\in \mathcal C^1 ([0, +\infty[; {\mathcal H})$,
\smallskip
$(ii)$ $\dot{x} \in {\rm Lip} (0, T; {\mathcal H})$, $\ddot x \in L^{\infty} (0, T; {\mathcal H})$ for all $T>0$,
\smallskip
$(iii)$ For almost all $t>0$, \,
$
0\in \ddot x(t) +\partial \phi(\dot x(t))+ \nabla f(x(t)).
$
\end{definition}
Note that, since $\dot{x} \in {\rm Lip} (0, T; {\mathcal H})$, it is absolutely continuous on the bounded time intervals, and its distributional derivative coincides almost everywhere with its pointwise derivative (which exists almost everywhere). Thus, the acceleration $\ddot x$ belongs to $L^{\infty} (0, T; {\mathcal H})$ for all
$T>0$, but it is not necessarily continuous; see \cite[Appendix]{Brezis} for further details on vector-valued Lebesgue and Sobolev spaces.
Let us prove the following existence and uniqueness result for the associated Cauchy problem.
\begin{theorem}\label{basic_exist_thm} Let $f: {\mathcal H} \to {\mathbb R}$ be a differentiable function whose gradient is Lipschitz continuous on the bounded subsets of ${\mathcal H}$, and such that $\inf_{{\mathcal H}} f >-\infty$. Let $\phi : {\mathcal H} \to {\mathbb R}_+$ be a damping potential (see Definition \ref{def1}). Then, for any $x_0,x_1\in \mathcal H$, there exists a unique strong global solution $x: [0, +\infty[ \to {\mathcal H}$ of \mbox{\rm (ADIGE-V)} such that $x(0)=x_0$ and $\dot x(0)=x_1$, that is
$$\left\{
\begin{array}{l}
0\in \ddot x(t) +\partial \phi(\dot x(t))+ \nabla f(x(t)) \vspace{3mm}\\
x(0)=x_0, \, \dot x(0)=x_1.
\end{array}\right.
$$
\end{theorem}
\begin{proof} We successively consider the case where $ \nabla f $ is Lipschitz continuous over the whole space, then the case where it is supposed to be Lipschitz continuous only on the bounded sets.
In both cases, the idea is to mix the existence results for ODEs which are respectively based on the Cauchy--Lipschitz theorem, and on the theory of maximally monotone operators. We treat the two cases independently because the proof is much simpler in the first case.
\medskip
\textbf{Case a) $\nabla f$ is Lipschitz continuous on the whole space}. The Hamiltonian formulation of
(ADIGE-V) gives the equivalent first-order differential
inclusion in the product space $\mathcal H\times \mathcal H$:
\begin{equation}\label{first_order_cl_loop_0}
0\in\dot z(t)+\partial \Phi(z(t))+F(z(t)),
\end{equation}
where $z(t)=(x(t), \dot x(t)) \in \mathcal H\times \mathcal H $, and
\begin{itemize}
\item
$\Phi:\mathcal H\times \mathcal H\to {\mathbb R}$ is the convex function defined by $\Phi(x,u)=\phi(u)$
\item $F: \mathcal H\times \mathcal H\to \mathcal H\times \mathcal H$
is defined by \, $F(x,u)=(-u,\nabla f(x))$.
\end{itemize}
Since $\nabla f$ is Lipschitz continuous on the whole space ${\mathcal H}$, we immediately get that $F$ is a Lipschitz continuous mapping on $\mathcal H\times \mathcal H$. So, we can apply a result related to evolution equations governed by Lipschitz perturbations of
convex subdifferentials \cite[Proposition 3.12]{Brezis} in order to conclude that \eqref{first_order_cl_loop_0} has
a unique strong global solution with initial data $z(0)=(x_0,x_1)$.
\medskip
\textbf{Case b) $\nabla f$ is Lipschitz continuous on the bounded sets}.
The major difficulty in (ADIGE-V) is the presence of the term
$\partial \phi (\dot{x}(t))$, which involves a possibly nonsmooth operator $\partial \phi $.
A natural idea is to regularize this operator, and thus obtain a classical evolution equation.
To this end, we use the Moreau-Yosida regularization.
Let us recall some basic facts concerning this regularization procedure.
For any $\lambda >0$, the Moreau envelope of $\phi$ of index $\lambda$ is the function $\phi_{\lambda}: {\mathcal H} \to \mathbb R $ defined by: for all $u\in {\mathcal H}$,
$$
\phi_{\lambda} (u) = \min_{\xi \in {\mathcal H}} \left\lbrace \phi (\xi) + \frac{1}{2 \lambda} \| u - \xi \|^2 \right\rbrace.
$$
The function $\phi_{\lambda} $ is convex, of class $ {\mathcal C}^{1,1}$, \, and satisfies $\inf_{{\mathcal H}} \phi_{\lambda} = \inf_{{\mathcal H}} \phi $, $\argmin_{{\mathcal H}} \phi_{\lambda} = \argmin_{{\mathcal H}} \phi$.
One can consult \cite[section 17.2.1]{ABM}, \cite{AE}, \cite{BC}, \cite{Brezis} for an in-depth study of the properties of the Moreau envelope in a Hilbert framework.
In our context, since $\phi: {\mathcal H} \to {\mathbb R}$ is a damping potential, we can easily verify that
$\phi_{\lambda} $ is still a damping potential. In particular $\phi_{\lambda}(0) = \inf_{{\mathcal H}} \phi_{\lambda}=0 $. According to the subdifferential inequality for convex functions, this implies that, for all $u\in {\mathcal H}$
\begin{equation}\label{ineq_phi}
\left\langle \nabla \phi_{\lambda}(u), u \right\rangle \geq 0.
\end{equation}
We will also use the following inequality, see \cite[Proposition 2.6]{Brezis}: for any $\lambda >0$, for any $u\in {\mathcal H}$
\begin{equation}\label{ineq_phi_b}
\| \nabla \phi_{\lambda}(u)\| \leq \|(\partial \phi)^0(u) \|.
\end{equation}
So, for each $\lambda >0$, we consider the approximate evolution equation
\begin{equation}\label{existence_approx}
\ddot{x}_{\lambda}(t) + \nabla \phi_{\lambda} (\dot{x}_{\lambda}(t)) + \nabla f (x_{\lambda}(t)) = 0,\; t\in [0,+\infty[.
\end{equation}
We will first prove the existence and uniqueness of a global classical solution $x_{\lambda}$ of \eqref{existence_approx} satisfying
$x_{\lambda}(0)=x_0$ and $\dot {x}_{\lambda}(0)=x_1$.
Then, we will prove that the filtered sequence $(x_{\lambda})$
converges uniformly as $\lambda \to 0$ over the bounded time intervals towards a solution of (ADIGE-V).
According to the Hamiltonian formulation of \eqref{existence_approx}, it is equivalent
to consider the first-order (in time) system
\begin{equation}\label{Hamilton_Yosida}
\left\{
\begin{array}{l}
\dot x_{\lambda}(t) -u_{\lambda}(t) =0; \\
\rule{0pt}{18pt}
\dot{u}_{\lambda}(t) +\nabla \phi_{\lambda}(u_{\lambda}(t))+ \nabla f(x_{\lambda}(t)) = 0 ,
\hspace{2.3cm}
\end{array}\right.
\end{equation}
with the Cauchy data
$x_{\lambda}(0) =x_0$, \, $u_{\lambda}(0)= x_1 $.
Set
$Z_{\lambda}(t) = (x_{\lambda}(t), u_{\lambda}(t)) \in {\mathcal H} \times {\mathcal H} .$\\
The system (\ref{Hamilton_Yosida}) can be written equivalently as
$$
\dot{Z}_{\lambda}(t) + F_{\lambda}( Z_{\lambda}(t)) = 0, \quad Z_{\lambda}(0) = (x_0, x_1),
$$
where $F_{\lambda}: {\mathcal H} \times {\mathcal H}\rightarrow {\mathcal H} \times {\mathcal H},\;\;(x,u)\mapsto F_{\lambda}(x,u)$ is defined by
$$
F_{\lambda}(x,u)= \Big( 0, \nabla \phi_{\lambda}(u) \Big) +
\Big( -u, \nabla f(x) \Big).
$$
Hence $F_{\lambda}$ splits as follows
$
F_{\lambda}(x,u) = \nabla \Phi_{\lambda} (x,u) + G (x,u),
$
where
$$
\Phi (x,u) = \phi(u), \quad \Phi_{\lambda}(x,u)= \phi_{\lambda}(u),
\quad
G(x,u) = \Big( -u, \, \nabla f(x) \Big).
$$
Therefore, it is equivalent to consider the first-order evolution equation with Cauchy data
\begin{equation}
\label{1odd_existence}
\dot{Z}_{\lambda}(t) +\nabla \Phi_{\lambda}(Z_{\lambda}(t)) + G( Z_{\lambda}(t))= 0, \quad Z_{\lambda}(0) = (x_0, x_1).
\end{equation}
According to the Lipschitz continuity of $\nabla \Phi_{\lambda}$, and the fact that $G$ is Lipschitz continuous on the bounded sets, we have that the sum operator $ \nabla \Phi_{\lambda} + G$ which governs \eqref{1odd_existence} is Lipschitz continuous on the bounded sets.
As a consequence, the existence of a local solution to \eqref{1odd_existence} follows from the classical Cauchy--Lipschitz theorem.
To pass from a local solution to a global solution, we use a standard energy argument, and the following a priori estimate on the solutions of \eqref{existence_approx}. After taking the scalar product of
\eqref{existence_approx} with $\dot{x}_{\lambda} $, and using
\eqref{ineq_phi}, we get that the global energy
\begin{equation}\label{energy_approx}
\mathcal E_{\lambda} (t):= f(x_{\lambda}(t)) -\inf\nolimits_{{\mathcal H}}f + \frac{1}{2} \| \dot{x}_{\lambda}(t) \|^2 ,\end{equation}
is a decreasing function of $t$. Since the Cauchy data are fixed and $f$ is bounded from below,
this implies that, on any bounded time interval, the filtered sequences of functions
$(x_{\lambda})$ and $(\dot{x}_{\lambda}) $ are bounded.
According to the property \eqref{ineq_phi_b} of the Yosida approximation, and the property $(iii)$ of the
damping potential $\phi$, this implies that
$$
\| \nabla \phi_{\lambda} (\dot{x}_{\lambda}(t))\| \leq \| (\partial \phi )^{0} (\dot{x}_{\lambda}(t))\|
$$
is bounded uniformly with respect to $\lambda >0$ and to $t$ in bounded intervals. According to the constitutive equation \eqref{existence_approx}, this in turn implies that the filtered sequence $(\ddot{x}_{\lambda} )$ is also bounded.
This implies that if a maximal solution is defined on a finite time interval $[0, T[$, then the limits of $x_{\lambda}(t)$ and $\dot{x}_{\lambda} (t)$
exist, as $t \to T$. Then, we can apply the local existence result, which gives a solution defined on a larger interval, thus contradicting the maximality of $T$.
To prove the uniform convergence of the filtered sequence $(Z_{\lambda})$ on the bounded time intervals, we proceed in a similar way as in the proof of Br\'ezis \cite[Theorem 3.1]{Brezis}, see also
Adly-Attouch \cite{AA-preprint-jca} in the context of damped inertial dynamics.
Take $T >0$, and $ \lambda, \mu >0$.
Consider the corresponding solutions of \eqref{1odd_existence} on $[0, T]$
\begin{eqnarray*}
&&\dot{Z}_{\lambda}(t) +\nabla \Phi_{\lambda}(Z_{\lambda}(t)) + G( Z_{\lambda}(t))= 0, \quad Z_{\lambda}(0) = (x_0, x_1) \smallskip \\
&&\dot{Z}_{\mu}(t) +\nabla \Phi_{\mu}(Z_{\mu}(t)) + G( Z_{\mu}(t))= 0, \quad Z_{\mu}(0) = (x_0, x_1).
\end{eqnarray*}
Let us take the difference of the two equations above, and take the scalar product with $Z_{\lambda}(t) - Z_{\mu}(t)$. We obtain
\begin{eqnarray}
\frac{1}{2} \frac{d}{dt}\| Z_{\lambda}(t) - Z_{\mu}(t) \|^2 &+ &
\left\langle \nabla \Phi_{\lambda}(Z_{\lambda}(t)) - \nabla \Phi_{\mu}(Z_{\mu}(t)) , Z_{\lambda}(t) - Z_{\mu}(t) \right\rangle \nonumber\\
&+& \left\langle G( Z_{\lambda}(t)) - G( Z_{\mu}(t)) , Z_{\lambda}(t) - Z_{\mu}(t) \right\rangle =0 . \label{basic_ex_Y}
\end{eqnarray}
We now use the following ingredients:
\medskip
\noindent a) According to the properties of the Yosida approximation (see \cite[Theorem 3.1]{Brezis}), we have
$$
\left\langle \nabla \Phi_{\lambda}(Z_{\lambda}(t)) - \nabla \Phi_{\mu}(Z_{\mu}(t)) , Z_{\lambda}(t) - Z_{\mu}(t) \right\rangle
\geq -\frac{\lambda}{4} \|\nabla \Phi_{\mu}(Z_{\mu}(t)) \|^2 -
\frac{\mu}{4} \|\nabla \Phi_{\lambda}(Z_{\lambda}(t)) \|^2.
$$
Since the filtered sequences $(x_{\lambda})$ and $(\dot{x}_{\lambda}) $ are uniformly bounded on $[0, T]$, there exists a constant $C_T$ such that, for all $0\leq t \leq T$
$$\| Z_{\lambda}(t)\|\leq C_T .$$
According to \eqref{ineq_phi_b}, and the fact that $\phi$ is a damping potential (property $(iii)$ of Definition \ref{def1}), we deduce that
$$
\|\nabla \Phi_{\lambda}(Z_{\lambda}(t)) \| \leq \sup_{\|\xi\|\leq C_T} \|(\partial \phi)^0(\xi) \|= M_T <+\infty.
$$
Therefore
$$
\left\langle \nabla \Phi_{\lambda}(Z_{\lambda}(t)) - \nabla \Phi_{\mu}(Z_{\mu}(t)) , Z_{\lambda}(t) - Z_{\mu}(t) \right\rangle
\geq -\frac{1}{4} M_T (\lambda +\mu).
$$
b) According to the local Lipschitz assumption on $\nabla f$, the mapping $G : {\mathcal H} \times {\mathcal H} \to {\mathcal H} \times {\mathcal H}$ is Lipschitz continuous on the bounded sets.
Using again that the sequence $(Z_{\lambda})$ is uniformly bounded on $[0, T]$, we deduce that there exists a constant $L_T$ such that, for all $t\in [0,T]$, for all $\lambda, \mu >0$
$$
\| G( Z_{\lambda}(t)) - G( Z_{\mu}(t)) \| \leq L_T \| Z_{\lambda}(t) - Z_{\mu}(t) \|.
$$
Combining the above results, and using the Cauchy--Schwarz inequality, we deduce from
\eqref{basic_ex_Y} that
$$
\frac{1}{2} \frac{d}{dt}\| Z_{\lambda}(t) - Z_{\mu}(t) \|^2
\leq \frac{1}{4} M_T (\lambda +\mu) + L_T \| Z_{\lambda}(t) - Z_{\mu}(t) \|^2 .
$$
We now proceed with the integration of this differential inequality.
Using that $ Z_{\lambda}(0) - Z_{\mu}(0) =0$, elementary calculus gives
$$
\| Z_{\lambda}(t) - Z_{\mu}(t) \|^2 \leq \frac{M_T}{4L_T}(\lambda +\mu) \Big( e^{2L_T t} -1 \Big).
$$
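In detail, set $h(t) := \| Z_{\lambda}(t) - Z_{\mu}(t) \|^2$ and $c := \frac{1}{2} M_T (\lambda +\mu)$, so that the previous differential inequality reads $h'(t) \leq c + 2 L_T h(t)$. Multiplying by $e^{-2L_T t}$, we get $\frac{d}{dt} \big( e^{-2L_T t} h(t) \big) \leq c \, e^{-2L_T t}$; integrating from $0$ to $t$, and using $h(0)=0$, gives $e^{-2L_T t} h(t) \leq \frac{c}{2L_T} \big( 1 - e^{-2L_T t} \big)$, which is the above estimate.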
Therefore, the filtered sequence $(Z_{\lambda})$ is a Cauchy sequence for the uniform convergence on $[0, T]$, and hence it converges uniformly.
This means that $x_{\lambda}$ and $\dot{x}_{\lambda}$ converge uniformly to $x$ and $\dot{x}$, respectively.
To go to the limit on \eqref{existence_approx}, let us write it as follows
\begin{equation}\label{hbdf_lambda_bbb}
\nabla \phi_{\lambda} (\dot{x}_{\lambda}(t)) = \xi_{\lambda} (t)
\end{equation}
where
$
\xi_{\lambda} (t) := -\ddot{x}_{\lambda}(t) - \nabla f (x_{\lambda}(t)) .
$
We now rely on the variational convergence properties of the Yosida approximation.
Since $(\phi_{\lambda})$ converges increasingly to $\phi$ as
$\lambda \downarrow 0$, the sequence of integral functionals
$$
\Psi^{\lambda}(\xi) := \int_{0}^T \phi_{\lambda} (\xi(t))dt
$$
converges increasingly to
$
\Psi (\xi)= \int_{0}^T \phi (\xi(t))dt.
$
Therefore $(\Psi^{\lambda})$ Mosco-converges to $\Psi$ in
$\mathbb L^2 (0, T; {\mathcal H})$.
According to the theorem which links the Mosco convergence of a sequence of convex lower semicontinuous functions with the graph convergence of their subdifferentials (see Attouch \cite[Theorem 3.66]{Att00}), we have that
$$
\partial \Psi^{\lambda} \to \partial \Psi
$$
with respect to the topology ${\rm strong}-\mathbb L^2 (0, T; {\mathcal H}) \times {\rm weak}-\mathbb L^2 (0, T; {\mathcal H}) $. According to \eqref{hbdf_lambda_bbb} we have
$$
\xi_{\lambda} =\nabla \Psi^{\lambda} (\dot{x}_{\lambda}).
$$
Since
$
\dot{x}_{\lambda} \to \dot{x} \, \mbox{ strongly in} \, \mathbb L^2 (0, T; {\mathcal H})
$
and $\xi_{\lambda}$ converges weakly in $\mathbb L^2 (0, T; {\mathcal H})$ to $\xi$ given by
\begin{equation}\label{def:xi}
\xi (t) = -\ddot{x}(t) - \nabla f (x(t)) ,
\end{equation}
we deduce that $\xi \in \partial \Psi (\dot{x})$, that is
$$
\xi (t) \in \partial \phi (\dot{x}(t)).
$$
According to the formulation (\ref{def:xi}) of $\xi$, we finally obtain that $x$ is a solution of (ADIGE-V).\\
The uniqueness of the solution of the Cauchy problem is obtained exactly in the same way as in the case of the global Lipschitz assumption.
\end{proof}
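To illustrate the Moreau envelope which is at the heart of the above proof, consider the scalar damping potential $\phi (u) = r|u|$, whose envelope $\phi_{\lambda}$ is the classical Huber function. The sketch below (illustrative values $r=2$, $\lambda = 1/2$) compares a brute-force minimization on a grid with the closed form, in line with $\inf_{{\mathcal H}} \phi_{\lambda} = 0$ and with the gradient bound \eqref{ineq_phi_b}:

```python
import numpy as np

r, lam = 2.0, 0.5
xi = np.linspace(-10.0, 10.0, 200001)        # grid for the inner minimization

def moreau(u):                               # numerical Moreau envelope of r|.|
    return np.min(r * np.abs(xi) + (u - xi)**2 / (2.0 * lam))

def huber(u):                                # closed form when phi = r|.|
    if abs(u) <= lam * r:
        return u**2 / (2.0 * lam)
    return r * abs(u) - lam * r**2 / 2.0

err = max(abs(moreau(u) - huber(u)) for u in np.linspace(-3.0, 3.0, 25))
print(err)   # the two computations agree up to the grid resolution
```

One also reads off the gradient $\nabla \phi_{\lambda}(u) = \max (-r, \min (r, u/\lambda))$, which satisfies $|\nabla \phi_{\lambda}(u)| \leq r = |(\partial \phi)^0 (u)|$ for $u \neq 0$, together with $\nabla \phi_{\lambda}(u)\, u \geq 0$.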
\begin{remark} {\rm
The above existence and uniqueness result uses as an essential ingredient the fact that the potential function $f$ to be minimized is differentiable, with a locally Lipschitz continuous gradient.
The introduction of constraints into $f$ via the indicator function would lead to solutions involving shocks when reaching the boundary
of the constraint. In this case, existence can still be obtained in finite dimension, but uniqueness may fail; see
Attouch--Cabot--Redont \cite{ACR}.}
\end{remark}
\section{Closed-loop velocity control, preliminary convergence results }\label{sec: basic_2}
Let $x: [0, +\infty[ \to {\mathcal H}$ be a solution trajectory of (ADIGE-V).
\subsection{Energy estimates}
Define the global energy at time $t$ as follows:
\begin{equation}\label{energy}
\mathcal E (t):= f(x(t)) -\inf\nolimits_{{\mathcal H}}f + \frac{1}{2} \| \dot{x}(t) \|^2 .
\end{equation}
Take the scalar product of (ADIGE-V) with $ \dot {x} (t) $. According to the chain rule, we get
\begin{equation}\label{closed_loop_2}
\frac{d}{dt} \mathcal E (t) + \left\langle \partial \phi(\dot x(t)), \dot x(t) \right\rangle = 0.
\end{equation}
The convex subdifferential inequality, combined with $\phi (0)=0$, gives, for all $u \in {\mathcal H}$
\begin{equation}\label{energy_2}
\left\langle \partial \phi(u), u \right\rangle \geq \phi (u).
\end{equation}
Combining the two relations above, we get
\begin{equation}\label{closed_loop_2_b}
\frac{d}{dt} \mathcal E (t) + \phi(\dot x(t)) \leq 0.
\end{equation}
Since $\phi$ is nonnegative, this implies that the global energy is non-increasing. Since $f$ is minorized, this implies that the velocity $\dot{x}(t)$ is bounded over $[0, +\infty[$. Precisely,
\begin{equation}\label{closed_loop_2_c}
\sup_{t \geq 0} \| \dot{x}(t) \| \leq R_1:= \sqrt{2 \mathcal E (0)}.
\end{equation}
To go further, suppose that the trajectory $x(\cdot)$ is bounded (this is verified for example if $f$ is coercive), and set
\begin{equation}\label{closed_loop_2_d}
\sup_{t \geq 0} \| x(t) \| \leq R_2.
\end{equation}
Let us now establish a bound on the acceleration.
For this, we rely on the approximate dynamics
\begin{equation}\label{existence_approx_b}
\ddot{x}_{\lambda}(t) + \nabla \phi_{\lambda} (\dot{x}_{\lambda}(t)) + \nabla f (x_{\lambda}(t)) = 0,\; t\in [0,+\infty[.
\end{equation}
An estimate similar to the one above gives $\sup_{t \geq 0} \| \dot{x}_{\lambda}(t) \| \leq R_1:= \sqrt{2 \mathcal E (0)}$.
According to property $(iii)$ of the damping potential, we obtain
$$
\|\nabla \phi_{\lambda} (\dot{x}_{\lambda}(t)) \| \leq \sup_{\|u\|\leq R_1} \|(\partial \phi)^0(u) \|= M_1 <+\infty.
$$
According to the local Lipschitz continuity property of $\nabla f$
$$
\|\nabla f (x_{\lambda}(t)) \| \leq \sup_{\|x\|\leq R_2} \|\nabla f (x) \|= M_2 <+\infty.
$$
Combining the two above inequalities with \eqref{existence_approx_b}, we get that for all $\lambda >0$, and all $t\geq 0$
$$
\| \ddot{x}_{\lambda}(t) \| \leq M_1 + M_2.
$$
Since $\ddot{x}_{\lambda}(t)$ converges weakly to $\ddot{x}(t)$
as $\lambda \to 0$ (see the proof of Theorem \ref{basic_exist_thm}), we obtain
\begin{equation}\label{closed_loop_2_e}
\sup_{t \geq 0} \| \ddot{x}(t) \| < +\infty.
\end{equation}
Moreover,
by integrating \eqref{closed_loop_2_b}, we immediately obtain
$
\int_{0}^{+\infty} \phi(\dot x(t)) dt <+\infty.
$
Let us summarize the above results, and complete them, in the following proposition.
\begin{proposition}\label{preliminary_est}
Let $x: [0, +\infty[ \to {\mathcal H}$ be a solution trajectory of {\rm (ADIGE-V)}.
Then, the global energy $\mathcal E (t)= f(x(t)) -\inf_{{\mathcal H}}f + \frac{1}{2} \| \dot{x}(t) \|^2 $ is non-increasing, and
$$
\sup_{t \geq 0} \| \dot{x}(t) \| < +\infty, \quad \int_{0}^{+\infty} \phi(\dot x(t)) dt <+\infty.
$$
Suppose moreover that $x$ is bounded. Then
\begin{equation}\label{closed_loop_2_f}
\sup_{t \geq 0} \| \ddot{x}(t) \| < +\infty.
\end{equation}
Suppose moreover that there exists $p\geq 1$, and $r>0$ such that, for all $u\in {\mathcal H}$, $\phi (u) \geq r \|u\|^p$. Then
\begin{equation}\label{dx-conv-0}
\lim_{t\to +\infty} \| \dot{x}(t) \|=0.
\end{equation}
\end{proposition}
\begin{proof}
We just need to prove the last point.
From $\int_{0}^{+\infty} \phi(\dot x(t)) dt <+\infty$ and
$\phi (u) \geq r \|u\|^p$, we get $\int_{0}^{+\infty} \| \dot{x}(t) \|^{p} dt <+\infty$.
This estimate, combined with $\sup_{t \geq 0} \| \ddot{x}(t) \| < +\infty$, classically implies that
$
\lim_{t\to +\infty} \| \dot{x}(t) \|=0.
$
Indeed, if we had $\|\dot{x}(t_n)\| \geq \varepsilon$ for some $\varepsilon >0$ and some sequence $t_n \to +\infty$, the uniform bound on the acceleration would give $\|\dot{x}(t)\| \geq \varepsilon /2$ on intervals of fixed positive length around the $t_n$, contradicting $\int_{0}^{+\infty} \| \dot{x}(t) \|^{p} dt <+\infty$.
\end{proof}
\medskip
Let us complete the above result by examining the convergence of the acceleration towards zero. To get this result, we need additional assumptions on the data $ f $ and $ \phi $.
\begin{proposition}\label{est_acceleration}
Let $x: [0, +\infty[ \to {\mathcal H}$ be a bounded solution trajectory of {\rm (ADIGE-V)}.
Suppose that $f$ is a ${\mathcal C}^2$ function, and that $\phi$ is a ${\mathcal C}^2$ function which satisfies (i) and (ii):
\medskip
\noindent (i) (local strong convexity)
there exist positive constants $\gamma >0$ and $\rho >0$ such that, for all $u$ in
${\mathcal H}$ with $\|u\| \leq \rho$, the following inequality holds
$$
\left\langle \nabla^2 \phi (u) \xi, \xi \right\rangle \geq \gamma \|\xi\|^2 \quad \mbox{for all } \xi \in {\mathcal H};
$$
(ii) (global growth) there exist $p\geq 1$ and $r>0$ such that $\phi (u) \geq r \|u\|^p$ for all $u\in {\mathcal H}$.
\smallskip
\noindent Then, the following convergence property is satisfied:
\begin{equation}\label{acc}
\lim_{t\to +\infty} \| \ddot{x}(t) \|=0.
\end{equation}
\end{proposition}
\begin{proof}
Let us differentiate (ADIGE-V) with respect to time, and set $w(t):= \ddot x(t)$. We obtain
$$
\dot w(t) + \nabla^2 \phi (\dot{x}(t))w(t) = - \nabla^2 f(x(t))\dot{x}(t).
$$
Take the scalar product of the above equation with $w(t)$. We get
$$
\frac{1}{2} \frac{d}{dt} \|w(t)\|^2 + \left\langle \nabla^2 \phi (\dot{x}(t))w(t), w(t) \right\rangle = - \left\langle\nabla^2 f(x(t))\dot{x}(t),w(t)\right\rangle.
$$
According to Proposition \ref{preliminary_est}, we have
$
\lim_{t\to +\infty} \| \dot{x}(t) \|=0.
$
From the local strong convexity assumption $(i)$, and the
Cauchy--Schwarz inequality, we deduce that for $t$ sufficiently large, say $t\geq t_1$
$$
\frac{1}{2} \frac{d}{dt} \|w(t)\|^2 + \gamma \| w(t)\|^2 \leq \|\nabla^2 f(x(t))\dot{x}(t)\| \|w(t)\|.
$$
Since $x(\cdot)$ is bounded and $\nabla f$ is Lipschitz continuous on bounded sets, we get that for some $C>0$
$$
\frac{1}{2} \frac{d}{dt} \|w(t)\|^2 + \gamma \| w(t)\|^2 \leq C\|\dot{x}(t)\| \|w(t)\| \ \mbox{for all } t \geq t_1.
$$
After multiplication by $e^{2\gamma t}$,
and integration from $t_1$ to $t$, we get
$$
\frac{1}{2} \left(e^{\gamma t} \|w(t)\| \right)^2 \leq \frac{1}{2} \left(e^{\gamma t_1} \|w(t_1)\| \right)^2 + C \int_{t_1}^t
e^{\gamma \tau}\|\dot{x}(\tau)\|\left( e^{\gamma \tau}\|w(\tau)\| \right)d \tau.
$$
According to the Gronwall Lemma (see \cite[Lemma A.5]{Brezis}) we obtain
$$
e^{\gamma t} \|w(t)\| \leq e^{\gamma t_1}\|w(t_1)\| + C \int_{t_1}^t
e^{\gamma \tau}\|\dot{x}(\tau)\|d \tau.
$$
Therefore
$$
\|\ddot x(t)\| \leq \|\ddot x(t_1)\| e^{-\gamma (t-t_1)} + C e^{-\gamma t} \int_{t_1}^t
e^{\gamma \tau}\|\dot{x}(\tau)\|d \tau.
$$
Since $\lim_{t\to +\infty} \| \dot{x}(t) \|=0,$ we have
$
\lim_{t\to +\infty} e^{-\gamma t} \int_{t_1}^t
e^{\gamma \tau}\|\dot{x}(\tau)\|d \tau=0.
$
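Indeed, given $\varepsilon >0$, choose $t_2 \geq t_1$ such that $\|\dot{x}(\tau)\| \leq \varepsilon$ for all $\tau \geq t_2$. Then, for $t \geq t_2$,
$$
e^{-\gamma t} \int_{t_1}^t e^{\gamma \tau}\|\dot{x}(\tau)\|d \tau
\leq e^{-\gamma t} \int_{t_1}^{t_2} e^{\gamma \tau}\|\dot{x}(\tau)\|d \tau
+ \frac{\varepsilon}{\gamma} \Big( 1 - e^{-\gamma (t-t_2)} \Big),
$$
where the first term on the right tends to zero as $t \to +\infty$, and the second one is bounded by $\varepsilon / \gamma$.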
Therefore, by passing to the limit on the above inequality we get
$\lim_{t\to +\infty} \| \ddot{x}(t) \|=0.$
\end{proof}
\begin{corollary}\label{attractor}
Let us make the assumptions of Proposition \ref{est_acceleration}, and suppose that the trajectory $x(\cdot)$ is relatively compact.
Then, for any sequence $x(t_n) \to x_{\infty}$ with $t_n \to +\infty$, we have $\nabla f ( x_{\infty})=0$.
Set $S= \{x\in {\mathcal H}: \, \nabla f ( x)=0 \}$. Therefore,
$$
\lim_{t \to + \infty} d(x(t), S)=0.
$$
\end{corollary}
\begin{remark}{\rm
\textit{a)} Without geometric assumption on the function $f$, the trajectories of (ADIGE-v) may fail to converge.
In \cite{AGR} Attouch--Goudou--Redont exhibit a function $f: {\mathbb R}^2 \to {\mathbb R}$ which
is $\mathcal C^1$, coercive, whose gradient is Lipschitz continuous on the bounded sets,
and such that the (HBF) system
admits an orbit $t \mapsto x(t)$ which does not converge as $t \to +\infty$.
The above result shows that in such a situation, the attractor is the set $S= \{x\in {\mathcal H}: \, \nabla f ( x)=0 \}$.
\textit{b)} It is necessary to assume that $\phi$ is a smooth function in order to get the conclusion of Corollary \ref{attractor}. In fact, in the case of dry friction, that is, $\phi (u)= r \|u\|$, the orbits converge to points satisfying
$ \|\nabla f ( x_{\infty})\| \leq r$, which are not in general critical points of $f$.
}
\end{remark}
\subsection{Model example} Consider the case
$\phi (u)= \frac{r}{p}\|u\|^p$ with $p >1$, in which case the dynamic
(ADIGE-V) becomes
\begin{equation}\label{closed_loop_p}
\ddot x(t) + r\| \dot x(t)\|^{p-2} \dot x(t)+ \nabla f(x(t))=0.
\end{equation}
Therefore, for $p>2$, the viscous damping coefficient $\gamma(\cdot)$ that enters equation \eqref{closed_loop_p}, namely
\begin{equation}\label{g-speed}
\gamma (t) :=r \| \dot{x}(t) \|^{p-2}
\end{equation}
tends to zero as $t\to +\infty$. So, we are in the setting of the inertial dynamics with vanishing damping coefficient. Consequently, in the associated inertial gradient algorithms, the extrapolation coefficient tends to $1$, and we can expect fast asymptotic convergence results.
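To make the vanishing-damping regime concrete, the following sketch (ours, not part of the original analysis; step size, horizon and data are arbitrary choices) integrates \eqref{closed_loop_p} in dimension one with $f(x)=\frac{1}{2}x^2$, $r=1$, $p=3$, and monitors the coefficient $\gamma(t)=r\|\dot x(t)\|^{p-2}$:

```python
import math

def simulate(p=3.0, r=1.0, x0=1.0, v0=0.0, dt=0.01, T=200.0):
    """Classical RK4 for x'' + r|x'|^{p-2} x' + x = 0, the one-dimensional
    instance of (closed_loop_p) with f(x) = x^2/2."""
    def rhs(x, v):
        return v, -r * abs(v) ** (p - 2.0) * v - x

    x, v = x0, v0
    for _ in range(int(T / dt)):
        k1x, k1v = rhs(x, v)
        k2x, k2v = rhs(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = rhs(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = rhs(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return x, v

x_end, v_end = simulate()
gamma_end = abs(v_end)                 # gamma(t) = r|x'(t)|^{p-2}, here p - 2 = 1
energy_end = 0.5 * v_end ** 2 + 0.5 * x_end ** 2
```

Both the energy and $\gamma(t)$ are small at $t=200$, but the decay is only polynomial (for $p=3$ the energy behaves like $1/t^2$), in contrast with the exponential rate available for $p=2$.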
To summarize, in the case of \eqref{closed_loop_p}, and for $f$ coercive, we have obtained that, for $p>2$
\begin{equation}\label{closed_loop_33}
\lim_{t\to +\infty} \gamma (t)=0, \quad \gamma (\cdot) \in L^{\frac{p}{p-2}}(0, +\infty).
\end{equation}
Is this information sufficient to derive the convergence rate of the values, and to obtain convergence properties similar to those of the $\mbox{\rm (AVD)}_{\alpha}$ system?\\
To give a first answer to this question, we rely on the results of Cabot--Engler--Gadat \cite{CEG}, Attouch--Cabot \cite{AC10} and Attouch--Chbani--Riahi \cite{ACR-Pafa} which concern the asymptotic stabilization of inertial gradient dynamics with general time-dependent viscosity coefficient $\gamma(t)$.
In the case of a vanishing damping coefficient, the key property which ensures the asymptotic minimization property is that
$$
\int_{0}^{+\infty} \gamma (t)\, dt = +\infty.
$$
This means that the coefficient $\gamma(t)$ can go to zero as $t\to +\infty$, but not too fast, so that enough energy is dissipated.
On the positive side, $\gamma (t) = \frac{\alpha}{t}$ does satisfy
the conditions \eqref{closed_loop_33} for any $p>2$, which does not exclude the Nesterov case.
On the negative side, we can easily find $\gamma(t)$ such that
\begin{equation}\label{closed_loop_333}
\lim_{t\to +\infty} \gamma (t)=0, \quad \gamma (\cdot) \in L^{\frac{p}{p-2}}(0, +\infty) \, \mbox{ and } \int_{0}^{+\infty} \gamma (t)\, dt < +\infty.
\end{equation}
So, without any other hypothesis, we cannot conclude from this information alone.
At this point, the idea is to introduce additional information, assuming a geometric property on the function $f$ to minimize.
In the next two sections, we successively consider the case where $f$ is a strongly convex function, then the case of the functions $ f $ satisfying the Kurdyka--Lojasiewicz property.
\section{The strongly convex case: exponential convergence rate}
\label{rate-f-str-conv}
We will study the asymptotic behavior of the system (ADIGE-V)
when $f$ is a strongly convex function. Recall that $f: {\mathcal H} \to {\mathbb R}$ is said to be $\mu$-strongly convex (with $\mu >0$) if $f - \frac{\mu}{2}\| \cdot \|^2$ is convex.
Then, we will consider the particular case where $f$ is strongly convex and quadratic. Finally, we will give numerical illustrations in dimension one.
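For a quadratic function this definition can be made completely explicit: $f(x)=\frac{1}{2}\langle Ax,x\rangle$ with $A$ symmetric positive definite is $\mu$-strongly convex with $\mu$ the smallest eigenvalue of $A$. A small numerical sanity check (ours; the matrix and the sample points are arbitrary):

```python
import random

# A = [[2, 1], [1, 2]] is symmetric positive definite with eigenvalues
# 1 and 3, so f(x) = 0.5 <Ax, x> is mu-strongly convex with mu = 1.
A = [[2.0, 1.0], [1.0, 2.0]]
mu = 1.0

def Ax(x):
    return [A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]]

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1]

def f(x):
    return 0.5 * dot(Ax(x), x)

rng = random.Random(0)
min_gap = float("inf")
for _ in range(200):
    x = [rng.uniform(-5, 5), rng.uniform(-5, 5)]
    y = [rng.uniform(-5, 5), rng.uniform(-5, 5)]
    d = [y[0] - x[0], y[1] - x[1]]
    # strong convexity: f(y) >= f(x) + <grad f(x), y - x> + (mu/2)||y - x||^2
    gap = f(y) - f(x) - dot(Ax(x), d) - 0.5 * mu * dot(d, d)
    min_gap = min(min_gap, gap)
```

Here the gap equals $\frac{1}{2}\langle (A-\mu I)d, d\rangle \geq 0$, so the inequality holds with equality exactly when $d$ is an eigenvector for the smallest eigenvalue.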
\subsection{General strongly convex function \textit{f}}\label{strong-convex}
\begin{theorem}\label{strong_convex_thm} Let $f: {\mathcal H} \to {\mathbb R}$ be a differentiable function which is $\mu$-strongly convex for some $\mu>0$, and whose gradient is Lipschitz continuous on the bounded sets. Let $\overline{x}$ be the unique minimizer of $f$.\\
Let $\phi : {\mathcal H} \to {\mathbb R}_+$ be a damping potential (see Definition \ref{def1}) which is differentiable, and whose gradient is Lipschitz continuous on the bounded subsets of ${\mathcal H}$. Suppose that $\phi$ satisfies the following growth conditions (i) and (ii):
\medskip
$(i)$ (local) there exist positive constants $\alpha$ and $\rho$ such that for all $u$ in
${\mathcal H}$ with $\|u\| \leq \rho$
$$ \left\langle \nabla \phi (u), u \right\rangle \geq \alpha \|u\|^2 .$$
$(ii)$ (global) there exist $p\geq 1$ and $c>0$ such that for all $u$ in ${\mathcal H}$, $\phi (u) \geq c\|u\|^p$.
\medskip
Then, for any solution trajectory $x: [0, +\infty[ \to {\mathcal H}$ of \mbox{\rm (ADIGE-V)}, the quantities $f(x(t))-f(\overline{x})$, $\| x(t)-\overline{x}\|$ and the velocity $\|\dot x (t)\|$ converge to zero at an exponential rate as $t \to +\infty$.
\end{theorem}
\begin{proof}
We will use the following inequalities, which follow from the strong convexity of $f$:
\begin{eqnarray}
&& f(\overline{x})-f(x(t)) \geq \langle \nabla f(x(t)),\overline{x}-x(t)\rangle+\frac{\mu}{2}\|x(t)-\overline{x}\|^2 \label{from-str-conv1} \\
&& f(x(t)) - f(\overline{x}) \geq \frac{\mu}{2}\|x(t)-\overline{x}\|^2. \label{from-str-conv2}
\end{eqnarray}
Let us consider the global energy (introduced in \eqref{energy}, in the preliminary estimates)
$$
{\mathcal E} (t):=\frac{1}{2}\|\dot x(t)\|^2 + f(x(t)) - f(\overline{x}).
$$
By Proposition \ref{preliminary_est},
$\dot x(t)$ is bounded on ${\mathbb R}_+$.
Moreover, ${\mathcal E} (\cdot)$ is non-increasing, and hence bounded from above. By definition of ${\mathcal E} (t)$, this implies that $f(x(t))$ is bounded from above. Since $f$ is strongly convex, it is coercive, which implies that $x(\cdot)$ is bounded.
Since $x(\cdot)$ and $\dot{x}(\cdot)$ are bounded, and the vector fields $\nabla f $ and $\nabla \phi$ are locally Lipschitz continuous, we deduce from the constitutive equation
$
\ddot x(t) = -\nabla \phi(\dot x(t))- \nabla f(x(t))
$
that $\ddot x (\cdot)$
is also bounded.
According to the preliminary estimates established in Proposition \ref{preliminary_est}, we have
$
\int_0^{+\infty} \phi (\dot{x}(t)) dt < +\infty.
$
Combining this property with the global growth assumption $(ii)$ on $\phi$, we deduce that
$$
\int_0^{+\infty} \| \dot{x}(t)\|^p dt < +\infty.
$$
Since $\ddot x (\cdot)$
is bounded, this implies that
$\dot{x}(t) \to 0$ as $t \to + \infty$.
So, for $t$ sufficiently large, say $t\geq t_1$
$$
\| \dot{x}(t)\| \leq \rho.
$$
Time derivation of ${\mathcal E} (\cdot)$, together with the constitutive equation (ADIGE-V), gives for $t\geq t_1$
\begin{eqnarray} \dot {\mathcal E}(t) & = & \langle \dot x(t), -\nabla\phi(\dot x(t))-\nabla f(x(t))\rangle + \langle \dot x(t),\nabla f(x(t))\rangle
\nonumber \\
&=& -\langle \dot x(t),\nabla \phi(\dot x(t))\rangle \nonumber \\
&\leq & -\alpha \|\dot x(t)\|^2, \label{strong_conv_1}
\end{eqnarray}
where the last inequality comes from the growth condition $(i)$ on $\phi$, and $\| \dot{x}(t)\| \leq \rho$ for $t\geq t_1$.\\
Since $\dot{x}(\cdot)$ is bounded,
let $L$ be the Lipschitz constant of $\nabla\phi$ on a ball that contains the velocity vector $\dot x(t)$ for all $t\geq 0$. Since $\nabla \phi (0) =0$ we have, for all $t\geq 0$
\begin{equation}\label{local_Lip}
\| \nabla \phi(\dot x(t))\| \leq L \| \dot x(t)\|.
\end{equation}
Using successively (ADIGE-V), \eqref{local_Lip}
and \eqref{from-str-conv1}, we obtain
\begin{eqnarray}\frac{d}{dt}\Big(\langle x(t)-\overline{x},\dot x(t)\rangle \Big) &=&
\|\dot x(t)\|^2 + \langle x(t)-\overline{x},-\nabla \phi(\dot x(t))-\nabla f(x(t))\rangle \nonumber\\
&\leq & \|\dot x(t)\|^2 + L\|x(t)-\overline{x}\| \|\dot x(t)\| - \langle x(t)-\overline{x},\nabla f(x(t))\rangle \nonumber\\
&\leq & \|\dot x(t)\|^2 + \frac{L^2}{2\mu}\|\dot x(t)\|^2 +\frac{\mu}{2} \|x(t)-\overline{x}\|^2 + \langle \overline{x} -x(t),\nabla f(x(t))\rangle \nonumber\\
&\leq & \left(1+\frac{L^2}{2\mu}\right)\|\dot x(t)\|^2 + f(\overline{x}) - f(x(t)). \label{strong_conv_2}
\end{eqnarray}
Take now $\epsilon >0$ (we will specify below how it should be chosen), and define
$$h_{\epsilon}(t) := {\mathcal E}(t) + \epsilon \langle x(t)-\overline{x},\dot x(t)\rangle.$$
Time derivation of $h_{\epsilon}$, together with \eqref{strong_conv_1} and \eqref{strong_conv_2}, gives for $t\geq t_1$
$$\dot h_{\epsilon}(t) \leq -\left(\alpha-\epsilon\left(1+\frac{L^2}{2\mu}\right)\right)\|\dot x(t)\|^2
-\epsilon(f(x(t))-f(\overline{x})).$$
Choose $\epsilon>0$ such that $\alpha-\epsilon\left(1+\frac{L^2}{2\mu}\right)>0$. Set $C_1:= \min \{\alpha-\epsilon\left(1+\frac{L^2}{2\mu}\right); \epsilon \}$.
We deduce that
\begin{equation}\label{dot_h-str-conv}\dot h_{\epsilon}(t) \leq -C_1\Big(\|\dot x(t)\|^2+f(x(t))-f(\overline{x})\Big).\end{equation}
Further, from \eqref{from-str-conv2} and the Cauchy--Schwarz inequality we easily obtain
\begin{eqnarray*} h_{\epsilon}(t) &\leq & \frac{1}{2}\|\dot x(t)\|^2 + f(x(t)) - f(\overline{x}) +
\frac{\epsilon}{2}\|x(t)-\overline{x}\|^2+\frac{\epsilon}{2}\|\dot x(t)\|^2\\
&\leq & \left(\frac{1}{2}+\frac{\epsilon}{2}\right)\|\dot x(t)\|^2 + \left(1+\frac{\epsilon}{\mu}\right)(f(x(t))-f(\overline{x}))\\
&\leq & \left(1+\epsilon\left(\frac{1}{2}+\frac{1}{\mu}\right)\right)\Big(\|\dot x(t)\|^2 + f(x(t))-f(\overline{x})\Big).
\end{eqnarray*}
Combining this inequality with \eqref{dot_h-str-conv}, we obtain
$\dot {h}_{\epsilon}(t) + C_2 h_{\epsilon}(t)\leq 0,$
with $C_2:= \frac{C_1}{1+\epsilon\left(\frac{1}{2}+\frac{1}{\mu}\right)} >0$. Then, the Gronwall inequality classically implies
\begin{equation}\label{rate_h-str-conv}h_{\epsilon}(t) \leq h_{\epsilon}(0)e^{-C_2t}.\end{equation}
Finally, from \eqref{from-str-conv2} and the Cauchy--Schwarz inequality we have
\begin{eqnarray*}h_{\epsilon}(t) &\geq & \frac{1}{2}\|\dot x(t)\|^2 + f(x(t)) - f(\overline{x})
-\frac{\epsilon}{2}\|x(t)-\overline{x}\|^2 - \frac{\epsilon}{2}\|\dot x(t)\|^2\\
&\geq & \left(\frac{1}{2}-\frac{\epsilon}{2}\right)\|\dot x(t)\|^2 + \left(1-\frac{\epsilon}{\mu}\right)(f(x(t))-f(\overline{x})).\end{eqnarray*}
Therefore, by taking $\epsilon$ small enough, we obtain the existence of $C_3>0$ such that
$$h_{\epsilon}(t)\geq C_3 \Big(\|\dot x(t)\|^2 + f(x(t))-f(\overline{x})\Big).$$
Combining this inequality with \eqref{rate_h-str-conv} and \eqref{from-str-conv2}, we obtain an exponential
convergence rate to zero for $f(x(t))-f(\overline{x}) $, $\|x(t)-\overline{x}\|$ and the velocity $\|\dot x (t)\|$.
\end{proof}
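As a check of Theorem \ref{strong_convex_thm} (a sketch of ours, with arbitrary discretization parameters), take the simplest data satisfying all the hypotheses: $\phi(u)=\frac{1}{2}\|u\|^2$ (condition $(i)$ holds with $\alpha=1$, condition $(ii)$ with $p=2$, $c=\frac{1}{2}$) and $f(x)=\frac{1}{2}x^2$ ($\mu=1$, $\overline x=0$). Then (ADIGE-V) reduces to the linear equation $\ddot x+\dot x+x=0$, whose solutions decay like $e^{-t/2}$:

```python
import math

def energy_at(T, dt=0.01, x0=1.0, v0=0.0):
    """RK4 for x'' + x' + x = 0; returns E(T) = |x'(T)|^2/2 + |x(T)|^2/2."""
    def rhs(x, v):
        return v, -v - x

    x, v = x0, v0
    for _ in range(int(T / dt)):
        k1x, k1v = rhs(x, v)
        k2x, k2v = rhs(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = rhs(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = rhs(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return 0.5 * v * v + 0.5 * x * x

E0, E10, E20 = energy_at(0.0), energy_at(10.0), energy_at(20.0)
# characteristic roots (-1 +- i*sqrt(3))/2: the energy decays like e^{-t}
```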
\medskip
\begin{remark}
In subsection \ref{Sec:KL_poly_growth}, as a consequence of the Kurdyka-Lojasiewicz theory, we will extend the above results to the case where we only assume the quadratic growth condition
$$
f(x) -\inf\nolimits_{{\mathcal H}} f \geq c\dist (x, \argmin f)^2.
$$
\end{remark}
\begin{remark} In section \ref{sec:weakdamping}, we will give indications concerning the case of a general convex function $f$, whose solution set $\argmin f$ is nonempty.
Let us recall that, in the case of (HBF), which corresponds to $\phi(u)= r \|u\|^2$, each trajectory converges weakly and its limit belongs to $\argmin f$.
Apart from this important case, the convergence of the trajectories depends both on the geometric properties of the function to be minimized $f$ and on those of the damping potential $\phi$. Precisely in section \ref{sec:weakdamping} we will give an example in dimension one, with trajectories which do not converge.
\end{remark}
\subsection{Case \textit{f} convex quadratic positive definite}\label{f_convex_quadratic}
Let us make precise the previous results in the case
$
f(x) = \frac{1}{2} \left\langle Ax, x \right\rangle,
$
where $A :{\mathcal H} \to {\mathcal H}$ is a linear continuous positive definite self-adjoint operator.
Then
$
\nabla f(x)= Ax,
$
and (ADIGE-V) becomes
\begin{equation}\label{closed_loop_1_bbb}
\ddot{x}(t) + \partial \phi ( \dot{x}(t)) + A(x(t)) \ni 0.
\end{equation}
Let us prove the following ergodic convergence result, valid for a general damping potential $\phi$.
\begin{theorem}\label{edp} Let
$x:[0,+\infty[\to\mathcal H$ be a solution trajectory of \eqref{closed_loop_1_bbb},
where $\phi$ is a damping potential, and $A :{\mathcal H} \to {\mathcal H}$ is a linear continuous positive definite self-adjoint operator.
Then, we have the following ergodic convergence result for the weak topology: as $t \to +\infty$,
$$
\frac{1}{t} \int_0^t x(\tau) d \tau \rightharpoonup x_{\infty},
$$
where the limit $x_{\infty}$ satisfies
$0\in \partial \phi(0)+ Ax_{\infty}.$\\
When $\phi$ is differentiable at the origin, we have $Ax_{\infty}=0$, that is $x_{\infty}=0$.\\
When $\phi (x)= r \|x\|$, we have $\| Ax_{\infty}\|\leq r.$
\end{theorem}
\begin{proof}
The Hamiltonian formulation of \eqref{closed_loop_1_bbb} gives
the equivalent first-order differential
inclusion in the product space $\mathcal H\times \mathcal H$:
\begin{equation}\label{first_order_cl_loop_0_linear}
0\in\dot z(t)+\partial \Phi(z(t))+F(z(t)),
\end{equation}
where $z(t)=(x(t), \dot x(t)) \in \mathcal H\times \mathcal H $, and
\begin{itemize}
\item
$\Phi:\mathcal H\times \mathcal H\to {\mathbb R}$ is the convex continuous function defined by $\Phi(x,u)=\phi(u)$
\item $F: \mathcal H\times \mathcal H\to \mathcal H\times \mathcal H$
is defined by \, $F(x,u)=(-u , Ax)$.
\end{itemize}
The trick is to renorm the product space $\mathcal H\times \mathcal H$ as follows:
\noindent The mapping $(x,y) \mapsto \left\langle Ax, y \right\rangle$
defines a scalar product on $\mathcal H$, which is equivalent to the initial one.
Accordingly, let us equip the product space $\mathcal H\times \mathcal H$ with the scalar product
$$
\left\langle \left\langle (x_1, u_1), (x_2 ,u_2) \right\rangle \right\rangle := \left\langle A x_1, x_2 \right\rangle + \left\langle u_1, u_2 \right\rangle.
$$
With respect to this new scalar product, let us observe that:
\begin{itemize}
\item $F: \mathcal H\times \mathcal H\to \mathcal H\times \mathcal H$ is a linear continuous skew-symmetric operator. Since $A$ is self-adjoint
\begin{eqnarray*}
\left\langle \left\langle F(x,u), (x,u) \right\rangle \right\rangle &=& \left\langle \left\langle (-u,Ax), (x,u) \right\rangle \right\rangle \\
&=& -\left\langle Au, x \right\rangle + \left\langle Ax, u \right\rangle =0.
\end{eqnarray*}
\item The subdifferential of $\Phi$ is unchanged, that is, $\partial\Phi (x,u) = (0, \partial \phi (u)).$
\end{itemize}
\noindent Therefore, the differential inclusion \eqref{first_order_cl_loop_0_linear} is governed by the sum of two maximally monotone operators: one of them is the subdifferential of a convex continuous function, the other is a monotone skew-symmetric operator.
By the classical Rockafellar theorem (see \cite[Corollary 24.4]{BC}), their sum is still maximally monotone.
Consequently, we can apply the theory concerning the semigroups generated by general maximally monotone operators, and conclude that $z(t)$ converges weakly and in an ergodic way towards
a zero
$z_{\infty}= (x_{\infty}, u_{\infty}) $ of $\partial \Phi+F$. This means
$$
(0, \partial \phi (u_{\infty})) + (-u_{\infty}, Ax_{\infty}) =(0,0).
$$
Equivalently $u_{\infty} =0$ and $\partial \phi (0) + Ax_{\infty}\ni 0$.
\end{proof}
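The skew-symmetry of $F$ for the renormed scalar product is easily tested numerically; in the following sketch (ours, in dimension $2$ with an arbitrary positive definite matrix), the quantity $\langle\langle F(x,u),(x,u)\rangle\rangle$ vanishes up to rounding errors:

```python
import random

# A symmetric positive definite (eigenvalues 1 and 3)
A = [[2.0, 1.0], [1.0, 2.0]]

def Ax(x):
    return [A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]]

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1]

def inner(z1, z2):
    """<<(x1,u1),(x2,u2)>> = <A x1, x2> + <u1, u2> (renormed product space)."""
    (x1, u1), (x2, u2) = z1, z2
    return dot(Ax(x1), x2) + dot(u1, u2)

def F(z):
    x, u = z
    return ([-u[0], -u[1]], Ax(x))

rng = random.Random(1)
max_abs = 0.0
for _ in range(100):
    z = ([rng.uniform(-3, 3), rng.uniform(-3, 3)],
         [rng.uniform(-3, 3), rng.uniform(-3, 3)])
    max_abs = max(max_abs, abs(inner(F(z), z)))   # should vanish identically
```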
\medskip
In the case of the wave equation, this type of argument was developed by Haraux in \cite[Lecture 12, Theorem 45]{Haraux_PDE}.
A recent account on these questions can be found in
Haraux-Jendoubi \cite{HJ2} and
Alabau-Boussouira--Privat--Tr\'elat \cite{APT}.
\subsection{Numerical illustrations}\label{sec:num}
Finding explicit closed-form solutions of nonlinear oscillators has direct applications in various fields.
In the one-dimensional case, the corresponding second-order differential equation
$
\ddot x(t) + d(x(t), \dot x(t))\dot x(t)+ g(x(t))=0
$
is known as the Levinson-Smith equation. It reduces to the Li\'enard equation when $d$ depends only on $x$.
One can consult \cite{GCG, GGH} for recent reports on the subject and the description of some of the different techniques developed to resolve these questions.
In our setting, we will provide some insight into this question by combining energetic and topological arguments.
\subsubsection{A numerical one-dimensional example}
Consider the case ${\mathcal H} = {\mathbb R}$, $f(x) = \frac{1}{2} |x|^2$, and $\phi (u) = \frac{1}{p} |u|^p$ with $p>1$.
Then, (ADIGE-V) becomes
\begin{equation}\label{adige_v_one_dim}
\ddot x(t) + |\dot x(t)|^{p-2} \dot x(t)+ x(t)=0.
\end{equation}
It is a linear oscillator with nonlinear damping. According to the previous results,
\medskip
$\bullet$ For $p=2$, according to the strong convexity of the potential function $f(x)=\frac{1}{2} |x|^2$ and Theorem \ref{strong_convex_thm}, we have convergence at an exponential rate of $x(t)$ and $\dot{x}(t)$ toward $0$.
Indeed, $p=2$ is the only value of $p$ for which the hypotheses of
Theorem \ref{strong_convex_thm} are satisfied. For $p>2$ the local hypothesis $(i)$ is not satisfied, and for $p<2$ the gradient of $\phi$ fails to be Lipschitz continuous on the bounded sets containing the origin.
\medskip
$\bullet$ For $p>1$, let us first show that
$
\lim_{t \to +\infty} \dot{x}(t) =0.
$
This results from Proposition \ref{preliminary_est}, and the fact that the trajectory is bounded. This last property results from
the fact that
the global energy
$
\mathcal E (t)= \frac{1}{2} |\dot{x}(t) |^2 + \frac{1}{2} |x(t) |^2
$
is non-increasing, and hence convergent (and bounded from above).
Let us show that $x(t)$ tends to zero.
Since $\lim_{t \to +\infty} \dot{x}(t) =0$, and $\lim_{t \to +\infty} \mathcal E (t)$ exists, we have
\begin{equation}\label{conv _dim_1}
\lim_{t \to +\infty} |x(t) |^2 = \lim_{t \to +\infty} \mathcal E (t) \quad \mbox{\rm exists}.
\end{equation}
Since the identity operator clearly satisfies the assumptions of Theorem \ref{edp}, we have the following ergodic convergence result
$
\lim_{t \to +\infty} \frac{1}{t} \int_0^t x(\tau) d \tau =0.
$
There are two possibilities:
\textit{a)} For $t$ sufficiently large, $x(t)$ has a fixed sign.
According to \eqref{conv _dim_1}, we get that $\lim_{t \to +\infty} x(t)=: x_{\infty}$ exists.
The convergence implies the ergodic convergence. Therefore,
$\lim_{t \to +\infty} \frac{1}{t} \int_0^t x(\tau) d \tau = x_{\infty}. $
But we know that the ergodic limit is zero, hence $x_{\infty}=0. $
\medskip
\textit{b)} The trajectory changes sign an infinite number of times as $t\to +\infty$. This means that there exist sequences $s_n$ and $t_n$ which tend to infinity such that $x(t_n) x(s_n) <0$.
Since the trajectory is continuous, by the intermediate value theorem, this implies the existence of $\tau_n \in[s_n, t_n]$ such that $x(\tau_n)=0$.
Hence $x(\tau_n)^2=0$ for all $n\in {\mathbb N}$, with $\tau_n \to +\infty$. Since $\lim_{t \to +\infty} |x(t) |^2$ exists, this implies that
$\lim_{t \to +\infty} |x(t) |^2 =0$.
Clearly, this implies that $\lim_{t \to +\infty} x(t) =0$.
So, for $p>1$, for any solution trajectory of
\eqref{adige_v_one_dim}, we have:
\begin{equation}\label{conv _dim_2}
\lim_{t \to +\infty} x(t) =0 \, \, \mbox{ and }\, \lim_{t \to +\infty} \dot{x}(t) =0.
\end{equation}
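The conclusions \eqref{conv _dim_2}, as well as the ergodic convergence provided by Theorem \ref{edp}, can be observed numerically. The sketch below (ours; it uses $p=4$ and the starting point $(3,1)$ of the figures, with arbitrary horizon and step size) integrates \eqref{adige_v_one_dim} and records the time average $\frac{1}{t}\int_0^t x(\tau)\, d\tau$:

```python
import math

def run(p=4.0, T=300.0, dt=0.01, x0=3.0, v0=1.0):
    """RK4 for x'' + |x'|^{p-2} x' + x = 0, accumulating (1/T) int_0^T x."""
    def rhs(x, v):
        return v, -abs(v) ** (p - 2.0) * v - x

    x, v, integral = x0, v0, 0.0
    for _ in range(int(T / dt)):
        integral += x * dt              # left-endpoint quadrature of int x dt
        k1x, k1v = rhs(x, v)
        k2x, k2v = rhs(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = rhs(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = rhs(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return x, v, integral / T

x_T, v_T, mean_T = run()   # trajectory, velocity and ergodic mean at t = T
```

For $p=4$ the decay is slow (the energy behaves like $1/t$), but both $x(T)$, $\dot x(T)$ and the ergodic mean are already small at $T=300$.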
Now let's analyze how the trajectories and their speeds go to zero.
As we shall see, the case $p>2$ corresponds to a weak damping, while the case $p<2$ corresponds to a strong damping.
\smallskip
\textbf{Case $p>2$.}
Since the speed $| \dot x(t)|$ tends to zero, we have
$\gamma (t):= | \dot x(t)|^{p-2} \to 0 \, \mbox{ as } \, t\to +\infty.$
\begin{figure}[h]
\centering
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{p=2.pdf}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{p=2andhalf.pdf}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{p=3.pdf}}\\
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{p=5.pdf}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{p=10.pdf}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{p=50.pdf}}
\caption{\small The evolution of the trajectories $x(t)$ (blue line) and $\dot x(t)$ (red line) of the dynamical system \eqref{adige_v_one_dim} for different values of $p \geq 2$.
}
\label{fig:ex0}
\end{figure}
The viscous damping coefficient $\gamma(t)$ becomes asymptotically small. As a result, the damping effect also becomes weak (that's what we call weak damping). As $p$ increases, the damping effect tends to decrease, the trajectory tends to oscillate more and more, and the rate of convergence deteriorates.\\
This is illustrated in Figure \ref{fig:ex0}, where we can see the evolution of the trajectory $x(t)$ (blue line) and its derivative $\dot x(t)$ (red line) of the dynamical system \eqref{adige_v_one_dim} with starting point $(x(0), \dot x(0)) = (3,1)$ and for different values of $p$ greater than or equal to $2$. The trajectory and the velocity tend towards zero; however, the oscillations become stronger as $p$ increases, and the convergence towards zero becomes very slow.
For large $p$, the oscillatory behavior is consistent with the ergodic convergence of the trajectory to $0$ (indeed, in dimension one the trajectory tends towards zero, but we can expect that in higher dimensions there is only ergodic convergence towards zero).
Note also that for $p>2$, and $p$ close to $2$, the trajectory is close to that corresponding to $p=2$, and therefore enjoys excellent convergence properties. It would be interesting to study this situation, because it is a natural candidate to obtain convergence rates similar to the accelerated gradient method of Nesterov.
\textbf{Case $1<p<2$}.
According to \eqref{conv _dim_2} we have
$\lim_{t \to +\infty} \dot{x}(t) =0,$
and
$\lim_{t \to +\infty} x(t) =0.$\\
Since $\lim_{t \to +\infty} \dot{x}(t) =0$ and $2-p >0$, we have that the viscous damping coefficient
$$\gamma (t):= \frac{1}{|\dot x(t)|^{2-p}} \to +\infty \, \mbox{ as } \, t\to +\infty .$$
We are in the setting of a strong damping effect.
This situation was analyzed in the following result of \cite{AC1}, which we reproduce here. It concerns the asymptotic behaviour of
$$
{\rm(IGS)_{\gamma}} \quad \ddot x(t) + \gamma (t) \dot x(t) + \nabla f(x(t))=0.
$$
\begin{proposition}\label{large_gamma}
Let $f:{\mathcal H}\to {\mathbb R}$ be a function of class ${\mathcal C}^1$ such that $\nabla f$ is bounded on the bounded subsets of ${\mathcal H}$. Given $r>0$ and $\theta>1$, assume that $\gamma(t)=r\, t^\theta$ for every $t\geq t_0\geq 0$.\\ Then each bounded solution trajectory $x(\cdot)$ of $\rm{(IGS)_{\gamma} }$ satisfies $\int_{t_0}^{+\infty}\|\dot x(t)\|\, dt <+\infty$, and hence converges strongly toward some $x^*\in {\mathcal H}$.
\end{proposition}
According to this result, we can obtain some information about the convergence rate of the velocity to zero.
We have two cases: either
$\int_{t_0}^{+\infty}\|\dot x(t)\| \, dt <+\infty$,
or $\int_{t_0}^{+\infty}\|\dot x(t)\| \, dt = +\infty$.
In this last case, according to Proposition \ref{large_gamma}, we cannot have $\gamma(t)= \frac{1}{|\dot x(t)|^{2-p}} $ of order $r\, t^\theta$ with $\theta>1$.
This excludes the possibility that $|\dot x(t)|$ is of order
$\frac{1}{t^\frac{\theta}{2-p}}$ with $\theta>1$.
So, the best that we can expect is
$
|\dot x(t)| \sim \frac{1}{t^\frac{1}{2-p}} \, \, \mbox{ as} \; \; t \to +\infty.
$
This estimate is in accordance with the exponential decay when $p=2$, and the finite length property when $p=1$.
We emphasize the fact that the above argument is not a rigorous proof, it just gives an indication of the type of convergence rate that we can expect.
\smallskip
\begin{figure}[h]
\centering
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{p=1comma1.pdf}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{p=1comma3.pdf}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{p=1comma5.pdf}}
\caption{\small Evolution of $x(t)$ (blue) and $\dot x(t)$ (red) for different values of $1 < p < 2$.
}
\label{fig:ex2}
\end{figure}
In Figure \ref{fig:ex2} we can see the evolution of the trajectory $t \mapsto x(t)$ (blue line) and of its derivative $t \mapsto \dot x(t)$ (red line) of the dynamical system \eqref{adige_v_one_dim} with starting point $(x(0), \dot x(0)) = (3,1)$ for different values of $1 < p < 2$.
Because of the strong damping, the trajectories exhibit small oscillations, and the velocity converges quickly to zero.
By contrast, the convergence of $x(t)$ to zero highly depends on the parameter $p$.
When $p$ is close to $1$, the convergence of the trajectory to zero is slow; however, already a slight increase of $p$ improves it noticeably.
Indeed, as $p$ increases towards $2$, the convergence of the trajectory improves.
\vspace{3mm}
\section{Weak damping: from slow convergence to attractor effect}
\label{sec:weakdamping}
As we already noticed, even in the case of a strongly convex function $f$, when the damping effect becomes too weak, the convergence properties degrade. In the case of the damping $\| \dot x (t) \|^{p-2} \dot x (t)$, this corresponds to situations where $p>2$.
In this section, we give examples showing that in the case of a general convex function, the situation is even worse, and the trajectory may not converge in the case of weak damping.
In this case, one has to replace the convergence notions by the concept of attractor, a central subject in the theory of dynamical systems and PDEs; see Hale \cite{Hale} and Haraux \cite{Haraux} for seminal contributions to the subject in the case of gradient systems ({\it i.e.}\,\, systems for which there exists a Lyapunov function). For optimization purposes, this is a promising research topic, largely to be explored in the case of a general damping function.
In the next section, we take a convex function $ f $ with a continuum of minimizers, and examine the lack of convergence when the damping becomes too weak.
In fact, as we have already underlined, convergence depends both on the geometric properties of the damping potential and on the potential function $ f $ to be minimized. The corresponding geometric aspects concerning $ f $ will be examined a little later.
\subsection{An example where convergence fails to be satisfied}
The following example is based on Haraux \cite[section 5.1]{Haraux}.
Take ${\mathcal H} ={\mathbb R}$, and $f: {\mathbb R} \to {\mathbb R}$ a convex function of class ${\mathcal C}^1$ which achieves its minimal value on the line segment $[a,b]$, with $a<b$.
We suppose that $f$ is coercive, {\it i.e.}\,\, $\lim_{|x|\to+\infty} f(x)=+\infty$. Its graph looks like a bowl with a flat bottom.
\noindent Consider the evolution equation with closed-loop damping
\begin{equation}\label{closed_loop_1_counter_example}
\ddot{x}(t) + | \dot{x}(t) |^{p-2} \dot{x}(t) + \nabla f (x(t)) = 0.
\end{equation}
Let us discuss, according to the value of $ p $, the convergence properties of the trajectories of this system. We will need the following elementary lemma, see \cite[Lemma 5.1.3]{Haraux}.
\begin{lemma}\label{lem_tech} Let $v \in {\mathcal C}^2 ({\mathbb R}_+)$ satisfy, for some $c>0$
$$
\dot{v}(0) >0; \quad \ddot{v}(t) \geq -c \dot{v}(t)^2 \, \mbox{ for all } \, t \geq 0.
$$
Then, $v$ is an increasing function, and \,
$
\lim_{t\to +\infty} v(t) = + \infty.
$
\end{lemma}
\begin{proof} As long as $\dot{v}(t) >0$, by integration of the differential inequality $ \ddot{v}(t) +c \dot{v}(t)^2 \geq 0$, we obtain
$$
\dot{v}(t) \geq \frac{1}{ct + \frac{1}{\dot{v}(0)}}.
$$
This immediately implies that $\dot{v}(t) >0$ for all $t\geq 0$.
By integrating the above inequality, we obtain
$$
v(t) \geq v(0) + \int_0^t \frac{1}{c\tau + \frac{1}{\dot{v}(0)}} d\tau
$$
which implies
$\lim_{t\to +\infty} v(t) = + \infty.$
\end{proof}
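In the equality case $\ddot v(t)=-c\,\dot v(t)^2$, the integrations in the proof are exact and give $\dot v(t)=\big(ct+\frac{1}{\dot v(0)}\big)^{-1}$ and $v(t)=v(0)+\frac{1}{c}\log\big(1+c\,\dot v(0)\,t\big)$, so $v$ grows to $+\infty$ logarithmically. A numerical sketch checking this closed form (ours; the constants are arbitrary):

```python
import math

c, w0 = 2.0, 1.0          # w = dv/dt, with w(0) = 1 > 0 and v(0) = 0
dt, T = 0.001, 50.0

def rhs(v, w):
    # equality case of the lemma: v' = w, w' = -c w^2
    return w, -c * w * w

v, w = 0.0, w0
for _ in range(int(T / dt)):
    k1v, k1w = rhs(v, w)
    k2v, k2w = rhs(v + 0.5 * dt * k1v, w + 0.5 * dt * k1w)
    k3v, k3w = rhs(v + 0.5 * dt * k2v, w + 0.5 * dt * k2w)
    k4v, k4w = rhs(v + dt * k3v, w + dt * k3w)
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    w += dt * (k1w + 2 * k2w + 2 * k3w + k4w) / 6.0

v_exact = (1.0 / c) * math.log(1.0 + c * w0 * T)
w_exact = 1.0 / (c * T + 1.0 / w0)
```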
\begin{proposition}\label{prop_counterex}
Suppose that $p \geq 3$.
Then, any solution trajectory of \eqref{closed_loop_1_counter_example}
which is not constant, passes infinitely many times through the points $a$ and $b$.
\end{proposition}
\begin{proof}
According to Proposition \ref{preliminary_est}, the trajectory $x(\cdot)$ is bounded and satisfies
\begin{equation}\label{dx-conv-00}
\lim_{t\to +\infty} \| \dot{x}(t) \|=0 \, \mbox{ and }\, \sup_{t \geq 0} \| \ddot{x}(t) \| < +\infty.
\end{equation}
Let us argue by contradiction, and assume that there exists some $t_1 >0$ such that $x(t) \geq a$ for all $t\geq t_1$.
We can distinguish two cases:
\smallskip
\noindent $\bullet$ \textbf{First case}: $\dot{x}(t) \geq 0$ for all $t \geq t_1$. Then, $t\mapsto x(t)$ is increasing and bounded, hence converges to some $x_{\infty}\in {\mathbb R}$. From the constitutive equation \eqref{closed_loop_1_counter_example}, $\lim_{t\to +\infty} \| \dot{x}(t) \|=0$, and the continuity of $\nabla f$, we deduce that
$\lim_{t\to +\infty} \ddot{x}(t) = - \nabla f (x_{\infty})$.
Using again that $\lim_{t\to +\infty} \| \dot{x}(t) \|=0$, we deduce that $\nabla f (x_{\infty})=0$.
Since $x \mapsto \nabla f( x)$ is nondecreasing, $x(t)\leq x_{\infty}$ and $\nabla f (x_{\infty})=0$, we have $\nabla f( x(t)) \leq 0$; on the other hand, $x(t)\geq a$ gives
$\nabla f( x(t)) \geq 0$. Hence
$$
\nabla f( x (t))=0 \mbox{ for all } \, t\geq t_1.
$$
Returning to the constitutive equation \eqref{closed_loop_1_counter_example}, we get
$$
\ddot{x}(t) + | \dot{x}(t) |^{p-2} \dot{x}(t)=0 \mbox{ for all } \, t\geq t_1.
$$
Since $\lim_{t\to +\infty} \| \dot{x}(t) \|=0$, and $p\geq 3$, we have for $t$ sufficiently large, say $t\geq t_2 \geq t_1$\\
$
| \dot{x}(t) |^{p-1} \leq | \dot{x}(t) |^2.
$
Therefore, for all $t\geq t_2$
$$
\ddot{x}(t) + | \dot{x}(t) |^{2} \geq 0 .
$$
Since $x(\cdot)$ is not constant, there exists some $t_3 \geq t_2$ such that $\dot{x}(t_3) >0$.
According to Lemma \ref{lem_tech}, we have
$\lim_{t\to +\infty} x(t) = + \infty$,
a clear contradiction with the convergence of $x(t)$.
\smallskip
\noindent $\bullet$ \textbf{Second case}: there exists $t_2 \geq t_1$ such that $\dot{x}(t_2) < 0$. From the constitutive equation \eqref{closed_loop_1_counter_example} and $x(t)\geq a$
we get, for all $t\geq t_2$
$$
\ddot{x}(t) + | \dot{x}(t) |^{p-2} \dot{x}(t)= - \nabla f(x(t)) \leq 0.
$$
This implies, for $t$ large enough
$$
\ddot{x}(t) \leq | \dot{x}(t) |^{2}.
$$
Let's apply Lemma \ref{lem_tech} to $-x(\cdot)$. Since $-\dot{x}(t_2) > 0$, we obtain $\lim_{t\to +\infty} x(t) = - \infty$, a contradiction.
A similar argument gives the same kind of result for $b$, namely, for every $t_1>0$ there exists $t>t_1$ such that $x(t)>b$. Therefore, there is an infinite number of times at which the trajectory takes the values $a$ and $b$, which means that it oscillates indefinitely between $a$ and $b$.
\end{proof}
By contrast, if the damping effect is sufficiently strong, there is convergence. In our situation, this corresponds to the case $2\leq p<3$, as shown in the following proposition.
\begin{proposition}\label{prop_conv}
Suppose that $2 \leq p < 3$.
Then, any solution trajectory of \eqref{closed_loop_1_counter_example} converges, and its limit belongs to $[a,b]$.
\end{proposition}
\begin{proof} When $p=2$ the convergence follows from Alvarez's theorem for (HBF). So suppose $2<p<3$. We sketch the main lines of the proof, whose details can be found in Haraux-Jendoubi \cite[Theorem 9.2.1]{HJ2}, which deals with a slightly more general situation.
Take $x$ a solution trajectory of \eqref{closed_loop_1_counter_example}, and denote by $\omega(x)$ its limit set, that is, the set of all its limit points as $t \to +\infty$ (limits of sequences $x(t_n)$ with $t_n \to +\infty$). By a classical argument, this set is a connected subset of $\{\nabla f =0\}$, that is $\omega(x) \subset [a,b]$.
If $\omega(x)$ is reduced to a singleton, the proof is finished.
Let us therefore examine the complementary case
$$
\omega(x) = [c,d]\subset [a,b], \, \mbox{ with } \, c<d,
$$ and show that this leads to a contradiction.
Set $l:= \frac{1}{2}(c+d)$. Let us prove that
$\lim_{t\to +\infty} x(t)=l$, which gives $\omega(x) = \{l\}$, a clear contradiction with $\omega(x) = [c,d]$, $c\neq d$.
First, since $l$ belongs to the interior of $\omega(x) $, according to the intermediate value property, there exists a sequence $(t_n)$ with $t_n \to +\infty$ such that $ x(t_n)=l$. By continuity of $x$, and since $l$ belongs to the interior of $[c,d]$, for each $n\in {\mathbb N}$ there exists $\delta_n >0$ such that
$$
x(t) \in [c,d] \mbox{ for all } \, t\in [t_n, t_n + \delta_n].
$$
Let us prove that for $n$ large enough we can take $\delta_n =+\infty$. Set
$$
\theta_n = \inf \left\lbrace t >t_n: \, x(t) \notin [c,d] \right\rbrace,
$$
and assume $\theta_n <+\infty$. So for all $t \in [t_n, \theta_n]$ we have $\nabla f(x(t))=0$, and \eqref{closed_loop_1_counter_example} reduces to
$$
\ddot{x}(t) + | \dot{x}(t) |^{p-2} \dot{x}(t)=0 .
$$
After multiplying by $\dot{x}(t)$, we get for all $t \in [t_n, \theta_n]$
$$
\frac{d}{dt} | \dot{x}(t) |^2 + 2 | \dot{x}(t) |^{p}=0.
$$
After integration from $t_n$ to $t \in [t_n, \theta_n]$, we get for all $t \in [t_n, \theta_n]$
$$
| \dot{x}(t) | = \Big( | \dot{x}(t_n) |^{-p+2} + (p-2) (t-t_n)\Big)^\frac{-1}{p-2}.
$$
After further integration, we get for all $t \in [t_n, \theta_n]$
\begin{align*}
|x(t) - l| & = |x(t) - x(t_n)| \leq \int_{t_n}^t | \dot{x}(s) | ds \\
& = \frac{1}{p-3} \Big( | \dot{x}(t_n) |^{-p+2} + (p-2) (t-t_n)\Big)^\frac{p-3}{p-2} + \frac{1}{3-p} | \dot{x}(t_n)|^{3-p}\\
& \leq \frac{1}{3-p} | \dot{x}(t_n)|^{3-p},
\end{align*}
where, to obtain the last inequality, we use the hypothesis $2<p<3$.
Since $\dot{x}(t_n)$ converges to zero as $t_n \rightarrow + \infty$, for $n$ large enough we have that $\theta_n = +\infty$. This means that for $n$ large enough
$$x(t) \in [c,d] \, \, \mbox{and} \, \, |x(t) - l| \leq \frac{1}{3-p} | \dot{x}(t_n)|^{3-p} \quad \forall t \in [t_n,+\infty),$$
which implies that $x(t)$ converges to $l$ as $t\to +\infty$.
\end{proof}
\vspace{2mm}
\noindent Figure \ref{fig:ex4} illustrates the attractor effect when the damping becomes too weak. Take $f:{\mathbb R}\to{\mathbb R}$ defined by
$f(x)=\frac{1}{2}(x+1)^2 \, \mbox{ for } x\leq -1, \ f(x)=0 \, \mbox{ for }\, |x|<1, \, \mbox{ and } f(x)=\frac{1}{2}(x-1)^2\, \mbox{ for } x\geq 1.$
\begin{figure}[h]
\centering
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{weakdampingp=21}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{weakdampingp=25}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{weakdampingp=29}}\\
\vspace{3mm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{weakdampingp=3}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{weakdampingp=4}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{weakdampingp=10}}
\caption{\small Evolution of the trajectories $x(t)$ (blue) and $\dot x(t)$ (red) of the dynamical system \eqref{adige_v_one_dim} for $p \geq 2$.
}
\label{fig:ex4}
\end{figure}
For $p= 3$ there is a radical change in the behavior of the trajectories. For $p\geq 3$, they do not converge asymptotically, and exhibit highly oscillatory behavior, passing repeatedly through the points $-1$ and $+1$. For $2 < p <3$, there is numerical evidence that the trajectories converge, which confirms the conclusion of Proposition \ref{prop_conv}.
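This qualitative picture can be reproduced with a minimal numerical sketch (a plain explicit Euler scheme; the initial data, step size and horizon below are illustrative choices, not those used for the figures).

```python
# Explicit Euler integration of  x'' + |x'|^(p-2) x' + f'(x) = 0  for the
# piecewise quadratic f above: f'(x) = x+1 if x <= -1, 0 on (-1,1), x-1 if x >= 1.

def grad_f(x):
    if x <= -1.0:
        return x + 1.0
    if x >= 1.0:
        return x - 1.0
    return 0.0

def simulate(p, x0=3.0, v0=0.0, dt=1e-3, T=60.0):
    x, v = x0, v0
    for _ in range(int(T / dt)):
        a = -abs(v) ** (p - 2) * v - grad_f(x)
        x += dt * v
        v += dt * a
    return x, v

# For p = 2 (heavy ball with friction) the trajectory converges,
# with a limit lying in the flat zone [-1, 1] (Proposition prop_conv).
x_end, v_end = simulate(p=2.0)
print(x_end, v_end)
```

For $p \geq 3$ the same integrator exhibits the persistent oscillations between $-1$ and $+1$ visible in Figure \ref{fig:ex4}.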
\subsection{An explicit one-dimensional example}
Take $\mathcal H = \mathbb R$ and $f(x) = c|x|^{\gamma}$, where $c$ and $\gamma$ are positive parameters. Let us look for solutions of
\begin{equation}\label{closed_loop_1_example}
\ddot{x}(t) + r| \dot{x}(t) |^{p-2} \dot{x}(t) + \nabla f (x(t)) = 0,
\end{equation}
when $p>1$.
Precisely, we look for nonnegative solutions
of the form $x(t)= \displaystyle{\frac{1}{t^{\theta}}}$, with $\theta >0$. This means that the trajectory does not oscillate; it is a completely damped trajectory. We proceed by identification, and determine the values of the parameters $c$, $\gamma$, $r$, $p$ and $\theta$ which provide such solutions. On the one hand,
$$
\ddot{x}(t) + r| \dot{x}(t) |^{p-2} \dot{x}(t) = \frac{\theta (\theta +1)}{t^{\theta +2}} - \frac{ \theta^{p-1} r}{t^{(\theta +1)(p-1)}}.
$$
On the other hand, $\nabla f (x)= c \gamma |x|^{\gamma -2}x $, which gives
$$
\nabla f (x(t))= \frac{c \gamma}{t^{\theta (\gamma -1)}}.
$$
Thus, $x(t)= \frac{1}{t^{\theta}}$ is a solution of \eqref{closed_loop_1_example} if, and only if,
$$
\frac{\theta (\theta +1)}{t^{\theta +2}} - \frac{ \theta^{p-1} r}{t^{(\theta +1)(p-1)}} + \frac{c \gamma }{t^{\theta (\gamma -1)}}=0.
$$
This is equivalent to solving the following system:
\begin{itemize}
\item [$i)$] $\theta+ 2 = \theta (\gamma -1)$;
\item [$ii)$] $\theta+ 2 = (\theta +1)(p-1) $;
\item [$iii)$] $c \gamma = \theta^{p-1} r -\theta (\theta +1)$;
\item [$iv)$] $\theta >0$, $c>0$
\end{itemize}
\noindent Solving $i)$ and $ii)$ in terms of $\gamma$ gives $\gamma >2$, $2<p<3$, and the following values of $\theta$ and $p$:
$$
\theta = \frac{2}{\gamma -2}, \quad p = \frac{3\gamma -2}{\gamma}.
$$
Condition $iii)$ gives
$
c= \frac{\theta}{\gamma}\left( \theta^{p-2} r - (\theta +1) \right)
$
and the positivity condition $iv)$ gives
$
r > \frac{\theta + 1}{\theta^{p-2}}.
$
We have $\min f = 0$ and
$$
f (x(t)) =c \frac{1}{t^{\frac{2 \gamma}{\gamma -2 }}}= \frac{c}{t^{\frac{2 }{p-2}}}.
$$
To summarize, we have shown that when taking $2<p<3$, and $f(x)= c|x|^{\frac{2}{3-p}} $ there exists a solution of
$$
\qquad \ddot{x}(t) +r | \dot{x}(t) |^{p-2} \dot{x}(t) + \nabla f (x(t)) = 0,
$$
of the form $x(t)= \frac{1}{t^{\frac{3-p}{p-2}}}$, for which
$$
f (x(t)) - \min f = \frac{c}{t^{\frac{2 }{p -2}}}.
$$
As expected,
the speed of convergence of $f (x(t))$ to $0$ depends on the parameter $p$. Therefore, without other geometric assumptions on $f$, for $2 <p<3$, we cannot expect a convergence rate better than
$\mathcal O \Big(\displaystyle{\frac{1}{t^{\frac{2 }{p -2}}}}\Big)$.
When $p \to 3$ from below, the function $f(x)= c|x|^{\frac{2}{3-p}} $ becomes very flat around its minimum (the origin) and the convergence rate of $x(t) =\frac{1}{t^{\frac{3-p}{p-2}}}$ to the origin becomes very slow.
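The identification above can be checked numerically. The values below are one admissible instance of conditions $i)$--$iv)$: $\gamma=4$, hence $\theta=1$ and $p=5/2$, with $r=3$ (so that $r>(\theta+1)/\theta^{p-2}=2$) and $c=1/4$; the residual of \eqref{closed_loop_1_example} along $x(t)=t^{-\theta}$ then vanishes.

```python
# Check that x(t) = t^(-theta) solves x'' + r|x'|^(p-2) x' + c*gamma*|x|^(gamma-2)*x = 0
# for the parameter values derived above (gamma = 4 is an illustrative choice).

gamma, r = 4.0, 3.0
theta = 2.0 / (gamma - 2.0)               # theta = 1
p = (3.0 * gamma - 2.0) / gamma           # p = 2.5
c = (theta / gamma) * (theta ** (p - 2) * r - (theta + 1))  # c = 0.25

def residual(t):
    x = t ** (-theta)
    xd = -theta * t ** (-theta - 1.0)                  # x'(t)
    xdd = theta * (theta + 1.0) * t ** (-theta - 2.0)  # x''(t)
    return xdd + r * abs(xd) ** (p - 2) * xd + c * gamma * abs(x) ** (gamma - 2) * x

print(c, [residual(t) for t in (1.0, 2.0, 5.0, 10.0)])
```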
\section{Damping via closed-loop velocity control, quasi-gradient and (KL)}\label{sec: basic_3}
In this section, ${\mathcal H}=\mathbb R^N$ is a finite-dimensional Euclidean space. This will allow us to use the Kurdyka--Lojasiewicz property, which we abbreviate as (KL). Unless otherwise indicated, no convexity assumption is made on the function $f$ to be minimized; instead, $f$ will be assumed to satisfy (KL). The need for a geometric assumption on the function to be minimized in order to obtain the convergence of the orbits has long been recognized.
As for the steepest descent, without additional geometric assumptions on the potential function $f$, the bounded orbits of the heavy ball with friction dynamic (HBF) may fail to converge. Let us recall the result from
\cite{AGR}, which exhibits a function $f: {\mathbb R}^2 \to {\mathbb R}$ which
is $\mathcal C^1$, coercive, whose gradient is Lipschitz continuous on bounded sets,
and such that the (HBF) system
admits an orbit $t \mapsto x(t)$ which does not converge as $t \to +\infty$. This example is an inertial version of the famous Palis--De Melo counterexample for the continuous steepest descent \cite{PDM}.
In this section, we examine an important situation where the convergence property is satisfied, namely when $f$ is assumed to satisfy the property (KL), a geometric notion which is presented below.
\subsection{Some basic facts concerning (KL)}
A function $G: {\mathbb R}^N \to {\mathbb R}$ satisfies the (KL) property if its values can be reparametrized in a neighborhood of each of its
critical points, so that the resulting function becomes sharp. This means the existence of a continuous, concave, increasing function $\theta$ such that, for all $u$ in a slice of $G$,
$$
\| \nabla (\theta \circ G) (u)\| \geq 1.
$$
The function $\theta$ captures the geometry of $G$ around its critical points; it is called a desingularizing function; see \cite{BDLM}, \cite{ABS}, \cite{abrs1} for further results.
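As a toy illustration (the data below are illustrative choices, not tied to the dynamics studied here): for $G(x)=x^4$, whose unique critical point is the origin, the reparametrization $\theta(s)=2s^{1/4}$ yields $(\theta\circ G)(x)=2|x|$, so that $\|\nabla(\theta\circ G)\|=2\geq 1$ on the slice $0<G<1$.

```python
# Numerical check that theta(s) = 2*s^(1/4) desingularizes G(x) = x^4:
# the gradient of theta o G has norm 2 >= 1 away from the critical point.

def G(x):
    return x ** 4

def theta(s):
    return 2.0 * s ** 0.25

def sharp_gradient(x, h=1e-6):
    # central finite difference of theta o G
    return abs(theta(G(x + h)) - theta(G(x - h))) / (2.0 * h)

vals = [sharp_gradient(x) for x in (0.1, 0.3, 0.5, 0.9)]
print(vals)
```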
Tame functions satisfy the (KL) property.
Tameness refers to a ubiquitous geometric property of functions and sets encountered in most finite
dimensional optimization problems.
Sets or functions are called tame when they can be described by a finite number of basic formulas/
inequalities/Boolean operations involving standard functions such as polynomial, exponential,
or max functions. Classical examples of tame objects are piecewise
linear objects (with finitely many pieces), or semi-algebraic objects. The general notion covering these situations is the concept of $o$-minimal structure; see van den Dries \cite{vdDries}.
Tameness models nonsmoothness via the so-called stratification property of
tame sets/functions. It was this property which motivated the vocable of tame topology, “la topologie
modérée” according to Grothendieck.
All these aspects have been well documented in a series of recent papers devoted to nonconvex nonsmooth optimization; see Ioffe \cite{Ioffe} ("An invitation to tame optimization"), Castera--Bolte--F\'evotte--Pauwels \cite{CBFP} for an application to deep learning, and references therein. We refer to \cite{abrs1} for illustrations, and examples within a general
optimization setting.
This property is particularly interesting in our context, because we work with an \textit{autonomous dynamical system}, in which case the (KL) theory applies to quasi-gradient systems. This contrasts with the accelerated gradient method of Nesterov, which is based on a non-autonomous dynamical system, and for which we do not have a convergence theory based on the (KL) property.
Under this property, we will obtain convergence results with convergence rates linked to the geometry of the data functions $f$ and $\phi$, via the desingularizing function.
\subsection{Quasi-gradient systems}
Let us first recall the main lines of the quasi-gradient approach to the inertial gradient systems as developed by B\'egout--Bolte--Jendoubi in \cite{BBJ}.
The geometric interpretation is simple: a vector field $F$ is called quasi-gradient for a function $E$
if it has the same singular points as $E$ and if the angle between the field $F$ and the gradient $\nabla E$ remains acute
and bounded away from $\pi/2$. A precise definition is given below. Of course, such systems have a behavior
which is very similar to that of gradient systems. We refer to Barta--Chill--Fa\v{s}angov\'a \cite{BCF}, \cite{BF}, \cite{CF}, Chergui \cite{Chergui}, Huang \cite{Huang} and the references therein for
further geometrical insights on the topic.
\begin{definition}\label{quasi_grad_def_0}
Let $\Gamma$ be a nonempty closed subset of $\mathbb R^N$, and let
$F : \mathbb R^N \to \mathbb R^N$ be a locally Lipschitz continuous
mapping. We say that the first-order system
\begin{equation}\label{quasi_grad_def_1}
\dot{z}(t) + F (z(t))=0,
\end{equation}
has a quasi-gradient structure for $E$ on $\Gamma$ if there exist a differentiable function $E : \mathbb R^N \to {\mathbb R}$ and $\alpha >0$ such that the following two conditions are satisfied:
\medskip
(angle condition) \quad \qquad $\left\langle \nabla E (z), F(z) \right\rangle
\geq \alpha \|\nabla E (z) \| \| F(z) \| $ \, for all $z\in \Gamma$;
\medskip
(rest point equivalence) \quad $\mbox{\rm crit}E \cap \Gamma = F^{-1} (0) \cap \Gamma$.
\end{definition}
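A minimal concrete instance of this definition (with illustrative data): take $E(x,y)=\frac{1}{2}(x^2+y^2)$ and perturb its gradient by a skew-symmetric term, $F(x,y)=(x-\varepsilon y,\, y+\varepsilon x)$. Then $\langle \nabla E, F\rangle = \|\nabla E\|^2$ and $\|F\| = \sqrt{1+\varepsilon^2}\,\|\nabla E\|$, so the angle condition holds with $\alpha = 1/\sqrt{1+\varepsilon^2}$, and $F$ and $E$ share the unique rest point $(0,0)$.

```python
import math

# Toy quasi-gradient field: gradient of E(x,y) = (x^2 + y^2)/2 plus a skew term.
eps = 0.5
alpha = 1.0 / math.sqrt(1.0 + eps ** 2)

def angle_condition(x, y):
    gE = (x, y)                          # gradient of E
    F = (x - eps * y, y + eps * x)       # quasi-gradient field
    dot = gE[0] * F[0] + gE[1] * F[1]
    return dot >= alpha * math.hypot(*gE) * math.hypot(*F) - 1e-12

print(all(angle_condition(x, y) for x in (-2.0, 0.3, 1.7) for y in (-1.1, 0.0, 2.4)))
```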
Based on this notion, we have the following convergence properties for the bounded trajectories of \eqref{quasi_grad_def_1}.
The following result is a localized version and straightforward adaptation of \cite[Theorem 3.2]{BBJ}.
\begin{theorem} \label{quasi_grad_thm_1}
Let $F: \mathbb R^N \to \mathbb R^N$ be a locally Lipschitz continuous mapping.
Let $z: [0, +\infty[ \to \mathbb R^N $ be a bounded solution trajectory of \eqref{quasi_grad_def_1}.
Take $R \geq \sup_{t\geq 0} \|z(t)\|$.
Assume that $F$ defines a quasi-gradient vector field for $E_R$ on $\bar{B}(0,R)$, where $E_R: \mathbb R^N \to {\mathbb R}$ is a differentiable function.
Assume
further that the function $E_R$ is {\rm(KL)}.
Then, the following properties are satisfied:
\medskip
$(i)$ \, $z(t) \to z_{\infty}$ as $t \to + \infty$, where $ z_{\infty}\in F^{-1} (0)$;
\medskip
$(ii)$ \, $\dot{z} \in L^1 (0, +\infty; {\mathbb R}^N )$, $\dot{z}(t) \to 0$ as $t \to + \infty$;
\medskip
$(iii)$
$ \| z(t) - z_{\infty}\| \leq \frac{1}{\alpha_R} \theta \Big( E_R(z(t)) - E_R (z_{\infty})\Big)
$
\medskip
\noindent where $\theta$ is the desingularizing function for $E_R$ at $z_{\infty}$, and $\alpha_R$ enters the angle condition of Definition \ref{quasi_grad_def_0}.
\end{theorem}
\subsection{Convergence of systems with closed-loop velocity control under (KL)}\label{Sec-conv-KL}
Let us apply the above approach to the inertial system with
closed-loop damping
\begin{equation}\label{first_order_cl_loop_quasi_0}
\ddot x(t) +\nabla \phi(\dot x(t))+ \nabla f(x(t))=0,
\end{equation}
by writing it as a first-order system, via its Hamiltonian formulation.
We will assume that $\nabla \phi$ is locally Lipschitz continuous. Indeed, we can reduce to this situation by using a regularization procedure based on the Moreau envelope.
\begin{theorem} \label{quasi_grad_thm_2}
Let $f: {\mathbb R}^N \to {\mathbb R}$ be a $\mathcal C^2$ function whose gradient is Lipschitz continuous on the bounded sets, and such that $\inf_{{\mathbb R}^N} f >-\infty$.
Let $ E_{\lambda}: {\mathbb R}^N \times {\mathbb R}^N\to {\mathbb R} $ be defined by: for all $(x,u)\in {\mathbb R}^N \times {\mathbb R}^N$
$$
E_{\lambda}(x,u):= \frac{1}{2} \|u\|^2 + f(x) +\lambda \left\langle \nabla f (x), u\right\rangle.
$$
Suppose that the function $ E_{\lambda}$ satisfies the {\rm (KL)} property.\\
Let $\phi : {\mathbb R}^N \to {\mathbb R}_+$ be a damping potential (see Definition \ref{def1}) which is differentiable, and which satisfies the following growth conditions:
\medskip
$(i)$ (local) there exist positive constants $\gamma$, $\delta$ and $\epsilon$ such that, for all $u$ in
${\mathbb R}^N$ with $\|u\| \leq \epsilon$
$$\phi (u) \geq \gamma \|u\|^2 \mbox{ and } \|\nabla \phi (u) \| \leq \delta \|u\|.$$
$(ii)$ (global) there exist $p\geq 1$ and $c>0$ such that, for all $u$ in ${\mathbb R}^N$, $\phi (u) \geq c\|u\|^p$.
\medskip
\noindent Let $x: [0, +\infty[ \to \mathbb R^N $ be a bounded solution trajectory of
$$
\ddot x(t) +\nabla \phi(\dot x(t))+ \nabla f(x(t)) =0 .
$$
Then, the following properties are satisfied:
\medskip
$(i)$ \, $x(t) \to x_{\infty}$ as $t \to + \infty$, where $ x_{\infty}\in \crit f$;
\medskip
$(ii)$ \, $\dot{x} \in L^1 (0, +\infty; {\mathbb R}^N )$, $\dot{x}(t) \to 0$ as $t \to + \infty$;
\medskip
$(iii)$ \, For $\lambda$ sufficiently small, and $t$ sufficiently large
$$ \| x(t) - x_{\infty}\| \leq \frac{1}{\alpha} \theta \Big( E_{\lambda}(x(t),u(t)) - E_{\lambda} (x_{\infty},0)\Big)
$$
\smallskip
\noindent where $\theta$ is the desingularizing function for $ E_{\lambda}$ at $(x_{\infty},0)$, and $\alpha$ enters the corresponding angle condition.
\end{theorem}
\begin{proof}
According to the preliminary estimates established in Proposition \ref{preliminary_est}, we have
$$
\int_0^{+\infty} \phi (\dot{x}(t)) dt < +\infty \, \, \mbox{ and }\, \, \sup_{t\geq 0} \| \dot{x}(t)\| <+\infty.
$$
Combining the first above property with the global growth assumption on $\phi$, we deduce that there exists $p\geq 1$ such that
$$
\int_0^{+\infty} \| \dot{x}(t)\|^p dt < +\infty.
$$
According to the constitutive equation, we have
$$
\ddot x(t) = -\nabla \phi(\dot x(t))- \nabla f(x(t)).
$$
Since $x(\cdot)$ and $\dot{x}(\cdot)$ are bounded, and $\nabla f $ is locally Lipschitz continuous, we deduce that $\ddot x (\cdot)$
is also bounded. Classically, these properties imply that
$\dot{x}(t) \to 0$ as $t \to + \infty$.
Take $R \geq \sup_{t\geq 0} \|x(t)\|$.
Therefore, for $t$ sufficiently large, the trajectory
$t \mapsto (x(t),\dot{x}(t))$ in the phase space ${\mathbb R}^N \times {\mathbb R}^N$
lies in the closed set $\Gamma = \bar{B}(0,R) \times \bar{B}(0,\epsilon)$.
\noindent The Hamiltonian formulation of \eqref{first_order_cl_loop_quasi_0} gives the first-order differential system
\begin{equation}\label{first_order_cl_loop_quasi_1}
\dot z(t) + F(z(t)) =0,
\end{equation}
where $z(t)=(x(t), \dot x(t)) \in {\mathbb R}^N \times {\mathbb R}^N $, and
$F: {\mathbb R}^N \times {\mathbb R}^N \to {\mathbb R}^N \times {\mathbb R}^N$
is defined by
\begin{equation}\label{ham}
F(x,u)=(-u, \nabla \phi(u)+ \nabla f(x)).
\end{equation}
Following \cite{BBJ}, take $E_{\lambda} : \mathbb R^N \times \mathbb R^N\to {\mathbb R}$ defined by
\begin{equation}\label{e_l_quasi_grad_th2}
E_{\lambda}(x,u):= \frac{1}{2} \|u\|^2 + f(x) +\lambda \left\langle \nabla f (x), u\right\rangle,
\end{equation}
where the parameter $\lambda >0$ will be adjusted to verify the quasi-gradient property. We have
$$
\nabla E_{\lambda}(x,u) = \Big( \nabla f (x)+ \lambda \nabla^2 f (x)u, \, u + \lambda \nabla f (x) \Big).
$$
Let us analyze the angle condition with $\Gamma = \bar{B}(0,R) \times \bar{B}(0,\epsilon) $. According to the above formulation of $F$ and $\nabla E_{\lambda}$, we have
\begin{eqnarray*}
\left\langle \nabla E_{\lambda}(x,u), F(x,u) \right\rangle
&=& \left\langle \Big( \nabla f (x)+ \lambda \nabla^2 f (x)u, \, u + \lambda \nabla f (x) \Big), \Big(-u, \nabla \phi(u)+ \nabla f(x)\Big) \right\rangle \\
&=& -\left\langle \nabla f (x)+ \lambda \nabla^2 f (x)u, \, u \right\rangle + \left\langle u+ \lambda \nabla f (x), \, \nabla \phi(u)+ \nabla f(x) \right\rangle.
\end{eqnarray*}
Expanding and simplifying, we get
\begin{eqnarray*}
\left\langle \nabla E_{\lambda}(x,u), F(x,u) \right\rangle
&=& - \lambda \left\langle \nabla^2 f (x)u, \, u \right\rangle + \left\langle u, \, \nabla \phi(u) \right\rangle
+ \lambda \left\langle \nabla f (x) , \, \nabla \phi(u) \right\rangle
+ \lambda \| \nabla f (x) \|^2.
\end{eqnarray*}
According to the local Lipschitz assumption on $\nabla f$, let
$$
M:= \sup_{\|x\| \leq R} \| \nabla^2 f (x)\| <+\infty.
$$
Since $\phi$ is a damping potential, we have
$$
\left\langle u, \, \nabla \phi(u) \right\rangle \geq \phi (u).
$$
Combining the above results, we obtain
\begin{eqnarray}
\left\langle \nabla E_{\lambda}(x,u), F(x,u) \right\rangle
&\geq & - \lambda M \|u\|^2 + \phi(u)
+ \lambda \left\langle \nabla f (x) , \, \nabla \phi(u) \right\rangle
+ \lambda \| \nabla f (x) \|^2 \nonumber \\
&\geq & - \lambda M \|u\|^2 + \phi(u)
-\frac{\lambda}{2} \| \nabla f (x) \|^2 - \frac{\lambda}{2} \| \nabla \phi(u)\|^2 + \lambda \| \nabla f (x) \|^2 \nonumber \\
&\geq & - \lambda M \|u\|^2 + \phi(u)
- \frac{\lambda}{2} \| \nabla \phi(u)\|^2 + \frac{\lambda}{2} \| \nabla f (x) \|^2 . \label{quasi_gradient_1}
\end{eqnarray}
At this point, we use the local growth assumption on $\phi$: for all $u$ in
${\mathbb R}^N$ with $\|u\| \leq \epsilon$
\begin{equation} \label{quasi_gradient_11}
\phi (u) \geq \gamma \|u\|^2 \mbox{ and } \|\nabla \phi (u) \| \leq \delta \|u\|.
\end{equation}
By
combining \eqref{quasi_gradient_1} with \eqref{quasi_gradient_11}, we obtain
\begin{eqnarray}
\left\langle \nabla E_{\lambda}(x,u), F(x,u) \right\rangle
&\geq & \Big( \gamma -\lambda M - \frac{\lambda}{2} \delta^2 \Big) \|u\|^2
+ \frac{\lambda}{2} \| \nabla f (x) \|^2 . \label{quasi_gradient_2}
\end{eqnarray}
Take $ \lambda $ small enough to satisfy
$$
\gamma > \lambda \left(M + \frac{\delta^2}{2} \right).
$$
Then,
\begin{eqnarray}
\left\langle \nabla E_{\lambda}(x,u), F(x,u) \right\rangle
&\geq & \alpha_0 ( \|u\|^2
+ \| \nabla f (x) \|^2) \label{quasi_gradient_3}
\end{eqnarray}
with $\alpha_0:= \min\{\gamma - \lambda \left(M + \frac{\delta^2}{2} \right) , \frac{\lambda}{2} \}$.\\
On the other hand,
\begin{eqnarray*}
\| \nabla E_{\lambda}(x,u)\| &\leq& \sqrt{2} \left( 1+ \lambda \max\{1, M\} \right) ( \|u\|^2
+ \| \nabla f (x) \|^2)^{\frac{1}{2}} \\
\| F(x,u)\| &\leq& \sqrt{2}(1+ \delta ) ( \|u\|^2
+ \| \nabla f (x) \|^2)^{\frac{1}{2}} .
\end{eqnarray*}
Therefore
\begin{eqnarray}
\| \nabla E_{\lambda}(x,u)\| \| F(x,u) \|
&\leq & 2 \left( 1+ \lambda \max\{1, M\} \right) (1+\delta) ( \|u\|^2
+ \| \nabla f (x) \|^2). \label{quasi_gradient_4}
\end{eqnarray}
As a consequence, the angle condition
$$
\left\langle \nabla E (z), F(z) \right\rangle
\geq \alpha \|\nabla E (z) \| \| F(z) \|
$$
is satisfied on $\Gamma$, by taking
$$
\alpha = \frac{\min\{\gamma - \lambda \left(M + \frac{\delta^2}{2} \right) , \frac{\lambda}{2} \}}{2 \left( 1+ \lambda \max\{1, M\} \right) (1+\delta) }.
$$
Finally, the rest point equivalence is a consequence of the inequality
\eqref{quasi_gradient_3}.
Then, apply the abstract Theorem \ref{quasi_grad_thm_1} to obtain the claims $(i)$, $(ii)$, $(iii)$.
\end{proof}
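The angle condition established in the proof, together with the explicit value of $\alpha$, can be checked numerically on model data. The choices below are illustrative: $N=1$, $f(x)=\frac{1}{2}x^2$ and $\phi(u)=\frac{1}{2}u^2$, so that $M=1$, $\gamma=\frac{1}{2}$, $\delta=1$, and $\lambda=0.1$ satisfies $\gamma>\lambda(M+\delta^2/2)$.

```python
import math
import random

# Model data (illustrative): f(x) = x^2/2, phi(u) = u^2/2, in dimension N = 1.
lam, M, gamma, delta = 0.1, 1.0, 0.5, 1.0
alpha0 = min(gamma - lam * (M + delta ** 2 / 2.0), lam / 2.0)
alpha = alpha0 / (2.0 * (1.0 + lam * max(1.0, M)) * (1.0 + delta))

def angle_ok(x, u):
    gE = (x + lam * u, u + lam * x)   # grad E_lambda(x, u)
    F = (-u, u + x)                   # F(x, u) from the Hamiltonian formulation
    dot = gE[0] * F[0] + gE[1] * F[1]
    return dot >= alpha * math.hypot(*gE) * math.hypot(*F) - 1e-12

random.seed(0)
print(all(angle_ok(random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))
          for _ in range(1000)))
```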
\begin{remark}\label{rem-quasi-grad-kl}
$(i)$ The above result allows us to consider nonlinear damping.
The main restrictive assumption is that the damping potential behaves nearly quadratically close to the origin. It need not be quadratic close to the origin, but it has to satisfy:
for all $u$ in
${\mathbb R}^N$ with $\|u\| \leq \epsilon$
$$\phi (u) \geq \gamma \|u\|^2 \mbox{ and } \|\nabla \phi (u) \| \leq \delta \|u\|.$$
$(ii)$ According to \cite[Proposition 3.11]{BBJ}, a desingularizing function of $f$ (see \cite[Definition 2.1]{BBJ}) is desingularizing of $E_{\lambda }$ too, for all $\lambda\in[0,\lambda_1]$.
\smallskip
\noindent $(iii)$ In Section \ref{rem-quasi-hessian} and Section \ref{rem-quasi-hessian_both-quasi} we will develop a similar analysis for related dynamical systems which involve Hessian-driven damping.
\smallskip
\noindent $(iv)$ Following \cite[Theorem 4.1 and Theorem 3.7]{BBJ}, a key condition which induces convergence rates for the
trajectories of a quasi-gradient system in the framework of the (KL) property is
\begin{equation}\label{bolte-cond-rate-quasi}\| \nabla E_{\lambda}(x,u)\|\leq b\| F(x,u)\| \mbox{ for } (x,u)\in\Gamma
\end{equation}
with $b>0$. Let us check this condition in the setting of Theorem \ref{quasi_grad_thm_2}.
We have seen there that
$$\| \nabla E_{\lambda}(x,u)\|^2\leq C_1 (\|u\|^2+\|\nabla f(x)\|^2),$$
with $C_1 > 0$.
Further, from the Cauchy--Schwarz inequality and the properties of $\phi$ we derive for $\sigma > 1$:
\begin{eqnarray*}\| F(x,u)\|^2 &=& \|u\|^2 + \|\nabla\phi(u)+\nabla f(x)\|^2\\
&\geq& \|u\|^2 + \|\nabla\phi(u)\|^2+\|\nabla f(x)\|^2 -2\|\nabla \phi(u)\|\|\nabla f(x)\|\\
&\geq& \|u\|^2 + \|\nabla\phi(u)\|^2+\|\nabla f(x)\|^2 -\sigma \|\nabla \phi(u)\|^2-\frac{1}{\sigma}\|\nabla f(x)\|^2\\
&\geq& \left(1-\frac{1}{\sigma}\right)\|\nabla f(x)\|^2+\left(1-(\sigma-1)\delta^2\right)\|u\|^2.
\end{eqnarray*}
From there, we can choose $\sigma > 1$ close enough to $1$ such that
$$\| F(x,u)\|^2 \geq C_3(\|u\|^2+\|\nabla f(x)\|^2),$$
with $C_3>0$. Condition \eqref{bolte-cond-rate-quasi} is now met with $b=\sqrt{C_1/C_3}$.
\noindent As in \cite[Section 5]{BBJ}, explicit convergence rates can be derived from \cite[Theorem 4.1 and Theorem 3.7]{BBJ}, based on (3.19) and Remark 3.4(c) in \cite{BBJ}.
\end{remark}
\subsection{Application: \textit{f} with polynomial growth}\label{Sec:KL_poly_growth}
This concerns the question raised at the end of Section \ref{strong-convex}. In addition to the
hypotheses of Theorem \ref{quasi_grad_thm_2}, assume that $f$ is convex, $\argmin f\neq\emptyset$, and for each $x^*\in\argmin f$ there exists $\eta > 0$ such that
$$f(x) -\inf\nolimits_{{\mathcal H}} f \geq c\dist (x, \argmin f)^r \quad \forall x\in B(x^*,\eta),$$
with $r\geq 1$ and $c>0$.
According to the proof of \cite[Corollary 5.5]{BBJ}, $f$ satisfies the \L{}ojasiewicz inequality with
desingularizing function (see \cite[Definition 2.1]{BBJ}) of the form $\varphi(s)=c's^{1/r}$, with
$c'>0$. According to \cite[Proposition 3.11]{BBJ}, this is a desingularizing function of $E_{\lambda }$ too, for all $\lambda\in[0,\lambda_1]$ (with $E_{\lambda}$ defined in Theorem \ref{quasi_grad_thm_2}).
In Remark \ref{rem-quasi-grad-kl}(iv) we have shown that \eqref{bolte-cond-rate-quasi} holds.
Relying now on \cite[Theorem 3.7]{BBJ} and \cite[Remark 3.4(c)]{BBJ}, we derive sublinear rates for $\|x(t)-x_{\infty}\|$ in the case $r<2$, and an exponential rate in the case $r=2$.
\subsection{Application: fixed damping matrix}
We will recover and improve the results of Alvarez \cite[Theorem 2.6]{Alvarez}, which concerns the case $f$ convex, and the damped inertial equation
$$
\ddot{x}(t) + A (\dot{x}(t)) + \nabla f (x(t)) = 0,
$$
where $A: {\mathcal H} \to {\mathcal H}$ is a positive definite self-adjoint linear operator, which is possibly anisotropic (see also \cite{BotCse}).
While the proof of Theorem 2.6 in \cite{Alvarez} works in general Hilbert spaces, we have to restrict ourselves to finite-dimensional spaces; however, we can drop the convexity assumption on $f$.
The following result is a direct consequence of Theorem \ref{quasi_grad_thm_2} applied to $\phi : {\mathbb R}^N \rightarrow {\mathbb R}_+, \, \phi(x) = \frac{1}{2}\langle Ax, x\rangle,$ where $A : {\mathbb R}^N \rightarrow {\mathbb R}^N$ is a positive definite self-adjoint linear operator. We note that in this setting $\phi$ is a damping potential: it is convex, continuous, and attains its minimum at the origin. Moreover, the local and global growth conditions are met.
Indeed, for all $u\in {\mathbb R}^N$, we have $\phi (u)\geq \frac{1}{2}\lambda_{min} \|u\|^2 $ and $\|\nabla \phi (u)\| \leq \lambda_{max}\|u\|$, where $\lambda_{min}$ and $\lambda_{max}$ are the smallest and largest positive eigenvalues of $A$ respectively.
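These two bounds are easy to check on a small example (the matrix $A$ below is an illustrative choice with $\lambda_{min}=1$ and $\lambda_{max}=3$).

```python
import math
import random

# phi(u) = <Au, u>/2 with A = [[2, 1], [1, 2]]: eigenvalues are 1 and 3.
A = ((2.0, 1.0), (1.0, 2.0))
lam_min, lam_max = 1.0, 3.0

def phi_and_grad_norm(u):
    Au = (A[0][0] * u[0] + A[0][1] * u[1], A[1][0] * u[0] + A[1][1] * u[1])
    return 0.5 * (Au[0] * u[0] + Au[1] * u[1]), math.hypot(*Au)

random.seed(1)
ok = True
for _ in range(1000):
    u = (random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0))
    val, grad_norm = phi_and_grad_norm(u)
    norm_u = math.hypot(*u)
    ok = ok and val >= 0.5 * lam_min * norm_u ** 2 - 1e-9   # phi(u) >= lam_min ||u||^2 / 2
    ok = ok and grad_norm <= lam_max * norm_u + 1e-9        # ||A u|| <= lam_max ||u||
print(ok)
```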
\begin{theorem} \label{quasi_grad_thm_3}
Let $f: {\mathbb R}^N \to {\mathbb R}$ be a $\mathcal C^2$ function whose gradient is Lipschitz continuous on the bounded sets, and such that $\inf_{{\mathbb R}^N} f >-\infty$. Let $A : {\mathbb R}^N \rightarrow {\mathbb R}^N$ be a positive definite self-adjoint linear operator.
Suppose that the function $ E_{\lambda}$ satisfies the {\rm(KL)} property (which is the case if $f$ does), where
$$
E_{\lambda}(x,u):= \frac{1}{2} \|u\|^2 + f(x) +\lambda \left\langle \nabla f (x), u\right\rangle.
$$
\noindent Let $x: [0, +\infty[ \to \mathbb R^N $ be a bounded solution trajectory of
$$
\ddot x(t) + A(\dot x(t))+ \nabla f(x(t)) =0 .
$$
Then, the following properties are satisfied:
\medskip
$(i)$ \, $x(t) \to x_{\infty}$ as $t \to + \infty$, where $ x_{\infty}\in \crit f$;
\medskip
$(ii)$ \, $\dot{x} \in L^1 (0, +\infty; {\mathbb R}^N )$, $\dot{x}(t) \to 0$ as $t \to + \infty$;
\medskip
$(iii)$ \, $f(x(t)) \to f(x_{\infty}) \in f(\crit f)$ as $t \to + \infty$.
\end{theorem}
Moreover, this result can be complemented with convergence rates linked to the desingularizing function provided by the (KL) property of $f$.
\section{Algorithmic results: an inertial type algorithm}
\label{sec: basic_4}
The following convergence result is a discrete algorithmic version of Theorem \ref{quasi_grad_thm_2}. To stay close to the continuous dynamic
we use a semi-implicit discretization: implicit with respect to the damping potential $\phi$, and explicit with respect to the function $f$ to minimize. This makes it possible to impose minimal assumptions on the damping potential $\phi$, and thus cover various situations.
As in the continuous case, the underlying structure of the proof is the
quasi-gradient property. We choose to give a direct proof, which is a bit simpler in this case.
Consider the following temporal discretization of (ADIGE-V) with step size $h>0$
$$
\frac{1}{h^2}\left( x_{n+2}-2x_{n+1}+x_n\right) +\nabla\phi\left(\frac{1}{h}(x_{n+2}-x_{n+1})\right)+\nabla f(x_{n+1})=0.
$$
Equivalently,
\begin{equation}\label{alt-alg}
\frac{ x_{n+2}-x_{n+1}}{h} - \frac{ x_{n+1}-x_{n}}{h} + h\nabla\phi\left(\frac{ x_{n+2}-x_{n+1}}{h}\right)+h\nabla f(x_{n+1})=0.
\end{equation}
This gives the proximal-gradient algorithm
\begin{equation}\label{prox-grad-alg}
x_{n+2}= x_{n+1}+ h{\prox}_{h\phi}\left(\frac{1}{h}(x_{n+1}-x_n)-h\nabla f(x_{n+1})\right).
\end{equation}
Recall that, for any $x \in \mathcal H= {\mathbb R}^N $, for any $\lambda >0$
$${\prox}_{\lambda\phi}(x):=\argmin_{\xi \in\mathcal H}\,\left\lbrace \lambda\phi(\xi)+\frac{1}{2} \| x-\xi \|^2 \right\rbrace.
$$
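A minimal executable instance of this scheme (the data are illustrative): take $f(x)=\frac{1}{2}x^2$ in one dimension, so $L=1$, and $\phi(u)=\gamma \|u\|^2$, for which the proximal map is explicit, ${\prox}_{h\phi}(v)=v/(1+2h\gamma)$.

```python
# Algorithm (prox-grad-alg) with f(x) = x^2/2 and phi(u) = gamma*u^2,
# whose prox is closed form: prox_{h phi}(v) = v / (1 + 2*h*gamma).
gamma, h = 1.0, 0.5    # step size satisfies h < 2*gamma/L with L = 1

def grad_f(x):
    return x

def prox(v):
    return v / (1.0 + 2.0 * h * gamma)

x_prev, x_cur = 2.0, 2.0   # x_0, x_1
W = []                     # energies W_n = |u_n|^2/2 + f(x_{n+1})
for _ in range(200):
    x_next = x_cur + h * prox((x_cur - x_prev) / h - h * grad_f(x_cur))
    W.append(0.5 * ((x_next - x_cur) / h) ** 2 + 0.5 * x_next ** 2)
    x_prev, x_cur = x_cur, x_next

print(x_cur, W[0], W[-1])
```

Here $\gamma = 1 > \frac{1}{2}Lh = \frac{1}{4}$, so the recorded energies decrease monotonically and the iterates converge to the minimizer, in accordance with the results below.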
Let us start by establishing a decrease property for the sequence $ (W_n)_{n \in {\mathbb N}}$ of global energies
$$
W_n:= \frac{1}{2} \|u_n\|^2 + f(x_{n+1}),
$$
where we set $u_n= \frac{1}{h}\left(x_{n+1}-x_{n}\right)$ for the discrete velocity.
\begin{lemma}\label{basic-energy-lem-1}
Suppose that $f: {\mathbb R}^N \to {\mathbb R}$ is a differentiable function whose gradient is $L$-Lipschitz continuous on a ball containing the iterates $(x_n)_{n \in {\mathbb N}}$, and such that $\inf_{{\mathbb R}^N} f >-\infty$.
Suppose that $\phi (u) \geq \gamma \|u\|^2$ for all $u \in {\mathbb R}^N$.
Then, for all $n \in {\mathbb N}$
$$
W_{n+1} -W_n + h\left( \gamma - \frac{1}{2} Lh \right)\|u_{n+1}\|^2 \leq 0.
$$
As a consequence, under the assumption $\gamma > \frac{1}{2} Lh $, we have
$$
\sum_{n \in {\mathbb N}} \|u_{n}\|^2 <+\infty \quad\mbox{and} \quad u_{n} \to 0 \;\mbox{ as } \; n \to + \infty.
$$
\end{lemma}
\begin{proof}
Write the algorithm as
$$
u_{n+1} -u_{n} + h \nabla \phi(u_{n+1}) +h\nabla f(x_{n+1})=0.
$$
By taking the scalar product with $u_{n+1}$, we obtain
\begin{equation}\label{energy-est-2021-a}
\langle u_{n+1} -u_{n}, u_{n+1} \rangle +
h\langle \nabla \phi(u_{n+1}) , u_{n+1} \rangle
+ \langle \nabla f(x_{n+1}), x_{n+2}-x_{n+1} \rangle =0.
\end{equation}
We have
\begin{eqnarray*}
&&\langle u_{n+1} -u_{n}, u_{n+1} \rangle \geq \frac{1}{2} \|u_{n+1}\|^2 -\frac{1}{2} \|u_n\|^2 \\
&& \langle \nabla \phi(u_{n+1}) , u_{n+1} \rangle \geq \phi (u_{n+1}) \geq \gamma \|u_{n+1}\|^2 \\
&& f(x_{n+2}) - f(x_{n+1}) \leq \langle \nabla f(x_{n+1}), x_{n+2}-x_{n+1} \rangle + \frac{Lh^2}{2} \| u_{n+1}\|^2,
\end{eqnarray*}
where the last inequality follows from the gradient descent lemma.
By combining the above inequalities with (\ref{energy-est-2021-a}), we obtain
\begin{equation}\label{energy-est-2021-b}
\frac{1}{2} \|u_{n+1}\|^2 -\frac{1}{2} \|u_n\|^2 +
h\gamma \|u_{n+1}\|^2
+ f(x_{n+2}) - f(x_{n+1}) - \frac{Lh^2}{2} \| u_{n+1}\|^2 \leq 0.
\end{equation}
Equivalently,
$$
W_{n+1} -W_n + h\left( \gamma - \frac{1}{2} Lh \right)\|u_{n+1}\|^2 \leq 0.
$$
By summing the above inequalities, and since $f$ is bounded from below, we get
$$
h\left( \gamma - \frac{1}{2} Lh \right) \sum_{n \geq 1} \|u_{n}\|^2 \leq W_0 - \inf f.
$$
Since $\gamma - \frac{1}{2} Lh >0$, we get $ \sum_{n \in {\mathbb N}} \|u_{n}\|^2 <+\infty$, and hence $u_n \to 0$ as $n \to +\infty$.
\end{proof}
\begin{theorem} \label{quasi_grad_thm_algo}
Let $f: {\mathbb R}^N \to {\mathbb R}$ be a $\mathcal C^2$ function whose gradient is Lipschitz continuous on the bounded sets, and such that $\inf_{{\mathbb R}^N} f >-\infty$. Let $\phi : {\mathbb R}^N \to {\mathbb R}_+$ be a damping potential (see Definition \ref{def1}) which is differentiable.
Let $(x_n)_{n\in {\mathbb N}} $ be a bounded sequence generated by the algorithm
\begin{equation}\label{prox-grad-alg-b}
x_{n+2}= x_{n+1}+ h{\prox}_{h\phi}\left(\frac{1}{h}(x_{n+1}-x_n)-h\nabla f(x_{n+1})\right).
\end{equation}
We make the following assumptions on the data $f$, $\phi$, and $h$:\\
\smallskip
\noindent $\bullet$ (assumption on $f$): Suppose that the function $H$ satisfies the {\rm (KL)} property, where
$H: {\mathbb R}^N \times {\mathbb R}^N\to {\mathbb R} $ is defined for all $(x,y)\in {\mathbb R}^N \times {\mathbb R}^N$ by
$$
H(x,y):= f(x) + \frac{1}{2h^2}\|x-y\|^2 .
$$
\noindent$\bullet$ (assumption on $\phi$):
Suppose that $\phi$ satisfies the following growth conditions:
\medskip
there exist positive constants $\gamma$, $\varepsilon$ and $\delta$ such that $\phi (u) \geq \gamma \|u\|^2$ for all $u$ in ${\mathbb R}^N$, and $\|\nabla \phi(u)\|\leq \delta \|u\|$ for all $u$ with
$\|u\|\leq \varepsilon$.\\
\medskip
\noindent $\bullet$ (assumption on $h$):
Suppose that the step size $h$ is taken small enough to satisfy
$$
0 < h < \frac{2\gamma}{L},
$$
where
$L$ is the Lipschitz constant of $\nabla f$ on the ball centered at the origin and with radius $R= \sup_{n \in {\mathbb N}} \|x_n\|$.\\
\smallskip
\noindent Then, the following properties are satisfied:\\
\medskip
\quad $(i)$ \, $x_n \to x_{\infty}$ as $n \to + \infty$, where $ x_{\infty}\in \crit f$;\\
\medskip
\quad $(ii)$ \, $\sum_{n\in {\mathbb N}} \| x_{n+1} - x_n\| < +\infty $.
\end{theorem}
\begin{proof} By assumption, the sequence $(x_n)_{n\in{\mathbb N}}$ is bounded. From Lemma \ref{basic-energy-lem-1} we have that $(u_n)_{n\in{\mathbb N}}$ tends to zero where $u_n= \frac{1}{h}\left(x_{n+1}-x_{n}\right)$. In addition,
$$
W_{n+1} -W_n + h\left( \gamma - \frac{1}{2} Lh \right)\|u_{n+1}\|^2 \leq 0
$$
for all $n \in {\mathbb N}$, where
$$
W_n:= \frac{1}{2} \|u_n\|^2 + f(x_{n+1}).
$$
Equivalently, by setting
$$
H(x,y)= f(x) + \frac{1}{2h^2}\|x-y\|^2,
$$
we have for all $n\in {\mathbb N}$
\begin{equation}\label{alt-decr-H-bb}
H(x_{n+2},x_{n+1}) + C\|x_{n+2}-x_{n+1}\|^2 \leq H(x_{n+1},x_{n}),
\end{equation}
where $C=\frac{1}{h}\left( \gamma - \frac{1}{2} Lh \right) > 0$.
The rest of the proof is classical in the framework of the KL theory; we refer the reader to \cite{BotCseLaEUR,ipiano} for similar techniques relying on the above decreasing property.
Relation \eqref{alt-decr-H-bb} implies that there exists
\begin{equation}\label{e-lim-H} \lim_{n\to+\infty}H(x_{n+1},x_n)\in{\mathbb R}.
\end{equation}
Further, let us denote by $\omega ((x_n)_{n \in {\mathbb N}})$ the set of cluster points of the sequence $(x_n)_{n\in {\mathbb N}}$, and by
$\crit f=\{x \in {\mathbb R}^N : \nabla f(x)=0\}$ the set of critical points of $f$. \\
We easily derive
$$\crit H=\{(x,x)\in{\mathbb R}^N\times{\mathbb R}^N:x\in \crit f\} $$
and notice that $\omega((x_n)_{n \in {\mathbb N}})\subseteq \crit f$, thus $\omega((x_{n+1},x_n)_{n \in {\mathbb N}})\subseteq \crit H.$ From \eqref{e-lim-H} one can easily conclude that $H$ is constant on $\omega((x_{n+1},x_n))$. Indeed, for $x^*\in\omega((x_n)_{n \in {\mathbb N}})$, we have from above (and the definition of $H$) that
\begin{equation}\label{lim_H-f}\lim_{n\to+\infty}H(x_{n+1},x_n)=f(x^*)=H(x^*,x^*).
\end{equation}
\noindent Assume now that $H$ satisfies the (KL) property with corresponding desingularizing function $\theta$.
We consider two cases.\\
I. There exists $\overline{n} \geq 0$ such that $H(x_{\overline{n}+1},x_{\overline{n}})=H(x^*,x^*)$.
From the decreasing property \eqref{alt-decr-H-bb} we obtain that $(x_n)_{n\geq \overline{n}}$ is a constant sequence and the conclusion follows.\\
II. For all $n \geq 0$ we have $H(x_{n+1},x_{n}) > H(x^*,x^*)$. Since $\theta$ is concave and $\theta'>0$,
we derive from \eqref{alt-decr-H-bb} that there exists $n' \geq 0$ such that for all $n\geq n'$ it holds
\begin{eqnarray} \Delta_{n,n+1}: &=& \theta\left(H(x_{n+1},x_n)-H(x^*,x^*)\right) -
\theta\left(H(x_{n+2},x_{n+1})-H(x^*,x^*)\right)
\nonumber \\
&\geq & \theta'\left(H(x_{n+1},x_n)-H(x^*,x^*)\right)\cdot \left(H(x_{n+1},x_n)-H(x_{n+2},x_{n+1})\right)\nonumber \\
&\geq& \theta'\left(H(x_{n+1},x_n)-H(x^*,x^*)\right)\cdot C\cdot \|x_{n+2}-x_{n+1}\|^2 \nonumber\\
&\geq & \frac{C \|x_{n+2}-x_{n+1}\|^2}{\|\nabla H(x_{n+1},x_n)\|}, \label{last_ineq}
\end{eqnarray}
where the last inequality \eqref{last_ineq} follows from the uniformized (KL) property (\cite[Lemma 6]{BST}) applied to the nonempty compact and connected set $\Omega=\omega((x_{n+1},x_n)_{n \in {\mathbb N}})$ (according to \cite[Remark 5]{BST} the connectedness of this set is generic for sequences satisfying $\lim_{n\to+\infty}(x_{n+1}-x_n)=0$). \\
Further, since $\nabla H(x,y)=(\nabla f(x)+\frac{1}{h^2}(x-y),\frac{1}{h^2}(y-x))$, we derive from \eqref{alt-alg},
the fact that $\lim_{n\to+\infty}(x_{n+1}-x_n)=0$ and the properties of $\phi$ that there exists $C_2>0$ such that
$$\|\nabla H(x_{n+1},x_n)\|\leq C_2(\|x_{n+2}-x_{n+1}\|+\|x_{n+1}-x_{n}\|) \ \quad \forall n \in {\mathbb N}.$$
Hence there exist $C_3>0$ and $n^{''} \in {\mathbb N}$ such that for all $n\geq n^{''}$
$$\frac{a_{n+1}^2}{a_{n+1}+a_n}\leq C_3\Delta_{n,n+1},$$
where $a_n:=\|x_{n+1}-x_n\|$. From here we get that for all $n\geq n^{''}$
$$a_{n+1}\leq \sqrt{C_3\Delta_{n,n+1}(a_{n+1}+a_n)}\leq \frac{a_{n+1}+a_n}{4} + C_3\Delta_{n,n+1},$$
which implies
$$a_{n+1}\leq \frac{1}{3}a_n+\frac{4}{3}C_3\Delta_{n,n+1}.$$
Summing up the last inequality we obtain $\sum_{n\in {\mathbb N}} \| x_{n+1} - x_n\| < +\infty $. This classically implies that $(x_n)_{n\in {\mathbb N}}$ is a Cauchy sequence in ${\mathbb R}^N$, hence it converges to a critical point of $f$.
\end{proof}
\begin{remark}
$(i)$ If $f: {\mathbb R}^N \to {\mathbb R}$ is a $\mathcal C^2$ coercive function whose gradient is Lipschitz continuous on ${\mathbb R}^N$, then the boundedness of the sequence $(x_n)_{n \in {\mathbb N}}$ follows from \eqref{alt-decr-H-bb}. The function $H$ is a KL function if $f$ is, for instance, semialgebraic; we refer to \cite{LiPongFCM} for other results related to the preservation of the KL property under addition.\\
$(ii)$ For a general damping function $\phi$ we obtain at the limit
$$
\nabla f(x_{\infty}) + \partial \phi (0) \ni 0.
$$
When $\phi$ is differentiable at the origin, it attains its minimum at this point, and hence $\nabla \phi (0)=0$. So we get $\nabla f(x_{\infty})=0$, {\it i.e.}\,\, $x_{\infty}$ is a critical point of $f$, which is the situation considered above. In the case of dry friction, for example
$\phi(u)=r \|u\|$, we get
$\|\nabla f(x_{\infty}) \| \leq r$,
which gives an approximate critical point; see \cite{AA0,AA,AAC}. \\
$(iii)$ The above result has been given as an illustration, showing that the continuous dynamical approach provides a valuable guideline for developing corresponding algorithmic results. In the particular case $\phi (u)=\|u\|^2$ one can also consult \cite{GP}, \cite{ipiano}. The explicit discretization gives rise to inertial gradient algorithms, an interesting subject to explore in this general setting.
\end{remark}
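To make the scheme concrete, here is a minimal Python sketch of algorithm \eqref{prox-grad-alg-b} in a model case. The choices $f(x)=\frac{1}{2}\|x\|^2$ and $\phi(u)=\gamma\|u\|^2$ are ours, made for illustration only; for this $\phi$ the proximal step has the closed form ${\prox}_{h\phi}(v)=v/(1+2\gamma h)$.

```python
import numpy as np

def inertial_prox_grad(grad_f, x0, h=0.1, gamma=1.0, n_iter=2000):
    """Iterates x_{n+2} = x_{n+1} + h * prox_{h phi}((x_{n+1}-x_n)/h - h grad_f(x_{n+1}))
    for the illustrative choice phi(u) = gamma ||u||^2, whose prox is v / (1 + 2 gamma h)."""
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        v = (x - x_prev) / h - h * grad_f(x)     # argument of the prox
        u = v / (1.0 + 2.0 * gamma * h)          # prox_{h phi}(v) in closed form
        x_prev, x = x, x + h * u                 # next iterate x_{n+2}
    return x

# f(x) = 0.5 ||x||^2, so grad f = identity and L = 1; the step h = 0.1 satisfies h < 2 gamma / L
x_inf = inertial_prox_grad(lambda x: x, np.array([1.0, -2.0]))
```

On this quadratic the iterates converge to the unique critical point $x_{\infty}=0$, consistently with conclusion $(i)$ of Theorem \ref{quasi_grad_thm_algo}.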
\section{Closed-loop velocity control with Hessian driven damping}\label{Sec:Hessian}
\subsection{Hessian damping}
We propose to tackle questions similar to the previous sections, concerning the combination of closed-loop velocity control with Hessian driven damping.
The following system combines closed-loop velocity control with Hessian driven damping:
\begin{equation}\label{DIN}
\qquad \ddot{x}(t) + \partial \phi(\dot{x}(t)) + \beta \nabla^2 f (x(t)) \dot{x} (t) + \nabla f (x(t)) = 0.
\end{equation}
This autonomous system will be our main subject of study in this section.
\smallskip
\noindent $\bullet$
The case $\phi (u)= \frac{\gamma}{2} \|u\|^2$ of a fixed viscous coefficient was first considered by
Alvarez--Attouch--Bolte--Redont in \cite{AABR}.
In this case, (\ref{DIN}) can be equivalently written as a first-order system in time and space (different from the Hamiltonian formulation), which makes it possible to extend this system naturally to the case of a nonsmooth function $f$. This property has been exploited by Attouch--Maing\'e--Redont \cite{AMR}
for modeling non-elastic shocks in unilateral mechanics.
To accelerate this system, several recent studies considered the case where the viscous damping is vanishing, that is
\begin{equation}\label{DIN_AVD}
\qquad \ddot{x}(t) + \frac{\alpha}{t}\dot{x}(t) + \beta \nabla^2 f (x(t)) \dot{x} (t) + \nabla f (x(t)) = 0;
\end{equation}
see \cite{APR}, \cite{ACFR},
\cite{BCL},
\cite{CBFP}, \cite{Kim}, \cite{LJ}, \cite{SDJS}, and Section
\ref{sec:Hessian_intro} for the properties of this system.
\smallskip
\noindent $\bullet$ The case $\phi(u)=\frac{\gamma}{2} \|u\|^2 + r\|u\|$ which combines viscous friction with dry friction and Hessian damping
has been considered by Adly--Attouch \cite{AA-preprint-jca}, \cite{AA}.
\smallskip
\noindent $\bullet$ By taking $\phi (u)= \frac{r}{p} \|u\|^p$, we get
\begin{equation}\label{Hessian_1}
\qquad \ddot{x}(t) + r\| \dot{x}(t) \|^{p-2} \dot{x}(t) + \beta \nabla^2 f (x(t)) \dot{x} (t) + \nabla f (x(t)) = 0,
\end{equation}
for which we will address issues similar to those of the previous theme.
In addition to the fast minimization property, one can also expect fast convergence of the gradients to zero.
\subsection{Existence and uniqueness results}
Let us consider the differential inclusion
\begin{equation}\label{Hessian_def_1}
\mbox{\rm (ADIGE-VH)}\quad \ddot{x}(t) + \partial \phi (\dot{x}(t))+ \beta \nabla^2 f (x(t))\dot{x}(t) + \nabla f (x(t)) \ni 0,
\end{equation}
which involves a damping potential $\phi$ (see Definition \ref{def1}), and a geometric damping driven by the Hessian of $f$.
The suffix V refers to the velocity and H to the Hessian, both of which enter the damping terms.
This formulation covers different situations; in particular, system \eqref{Hessian_1} corresponds to $\phi (u)= \frac{r}{p} \| u\|^{p}$ for $p>1$.
To prove existence and uniqueness results for the associated Cauchy problem, we make additional assumptions. We assume that $f$ is convex, and
that the Hessian mapping $x \in {\mathcal H} \mapsto \nabla^2 f (x) \in \mathcal L ({\mathcal H}, {\mathcal H}) $ is Lipschitz continuous on the bounded sets,
where $\mathcal L ({\mathcal H}, {\mathcal H}) $ is equipped with the operator norm.
Note that this property implies that $\nabla f$ is Lipschitz continuous on the bounded subsets of ${\mathcal H}$ (apply the mean value theorem in the vectorial case).
However, in the following statement, we formulate the two hypotheses for the sake of clarity.
\begin{theorem}\label{th.existence_uniqueness}
Let $f:{\mathcal H} \to {\mathbb R}$ be a convex function which is twice continuously differentiable, and such that $\inf_{{\mathcal H}} f >-\infty$. We suppose that
\medskip
$(i)$ $\nabla f$ is Lipschitz continuous on the bounded subsets of ${\mathcal H}$;
\smallskip
$(ii)$ $\nabla^2 f$ is Lipschitz continuous on the bounded subsets of ${\mathcal H}$.
\smallskip
\noindent Let $\phi: {\mathcal H} \to {\mathbb R}$ be a convex continuous damping function.
Then, for any Cauchy data $(x_0, x_1 ) \in {\mathcal H} \times {\mathcal H}$, there exists a unique strong global solution $x : [0, +\infty[ \to {\mathcal H}$ of
{\rm (ADIGE-VH)}
satisfying $x(0) = x_0$, and $\dot{x}(0)=x_1 $.
\end{theorem}
\begin{proof} To make the reading of the proof easier, we distinguish several steps.
\smallskip
\textbf{Step 1}: \textit{A priori estimate}. Let's establish a priori energy estimates on the solutions of \eqref{Hessian_def_1}. After taking the scalar product of
\eqref{Hessian_def_1} with $\dot{x}(t) $, we get
$$
\frac{d}{dt} \mathcal E (t) + \left\langle \partial \phi (\dot{x}(t)), \dot{x}(t) \right\rangle + \beta
\left\langle \nabla^2 f (x(t))\dot{x}(t), \dot{x}(t) \right\rangle =0,
$$
where
$
\mathcal E (t):= f(x(t)) -\inf_{{\mathcal H}}f + \frac{1}{2} \| \dot{x}(t) \|^2 $
is the global energy.
Since $\phi$ is a damping potential, the subdifferential inequality for convex functions, combined with $\phi(0)=0$, gives
$$
\left\langle \partial \phi (\dot{x}(t)), \dot{x}(t) \right\rangle \geq \phi (\dot{x}(t)).
$$
Since $f$ is convex, we have that $\nabla^2 f$ is positive semidefinite, which gives
$$
\left\langle \nabla^2 f (x(t))\dot{x}(t), \dot{x}(t) \right\rangle \geq 0.
$$
Collecting the above results, we obtain the following decay property of the energy
\begin{equation}\label{closed_loop_2b}
\frac{d}{dt} \mathcal E (t) + \phi (\dot{x}(t)) \leq 0.
\end{equation}
Therefore, the energy is nonincreasing, which implies that, as long as the trajectory is defined
\begin{equation}\label{closed_loop_2c}
\| \dot{x}(t) \|^2 \leq 2 \mathcal E (0).
\end{equation}
\textbf{Step 2}: \textit{Hamiltonian formulation of \eqref{Hessian_def_1}}.
In its Hamiltonian formulation, solving \eqref{Hessian_def_1} is equivalent
to solving the first-order system
$$ \quad \left\{
\begin{array}{l}
\dot x(t) -u(t) =0; \\
\rule{0pt}{18pt}
\dot{u}(t) +\partial \phi(u(t)) + \nabla f(x(t)) + \beta
\nabla^2 f (x(t))u(t) \ni 0 ,
\hspace{2.3cm}
\end{array}\right.
$$
with the Cauchy data
$x(0) =x_0$, \, $u(0)= x_1$.
Set
$Z(t) = (x(t), u(t)) \in {\mathcal H} \times {\mathcal H} .$\\
The above system can be written equivalently as
$$
\dot{Z}(t) + F( Z(t))\ni 0, \quad Z(0) = (x_0, x_1),
$$
where $F: {\mathcal H} \times {\mathcal H}\rightrightarrows {\mathcal H} \times {\mathcal H},\;\;(x,u)\mapsto F(x,u)$ is defined by
$$
F(x,u)= \Big( 0, \partial \phi(u) \Big) +
\Big( -u, \nabla f(x) +\beta
\nabla^2 f (x)u \Big).
$$
Hence $F$ splits as follows
$
F(x,u) = \partial \Phi (x,u) + G (x,u),
$
where
\begin{equation}\label{Hamilton_Hessian}
\Phi (x,u) = \phi(u)
\, \mbox{ and } \,
G(x,u) = \Big( -u, \, \nabla f(x) +\beta
\nabla^2 f (x)u \Big).
\end{equation}
Therefore, it is equivalent to solve the following first-order differential inclusion with Cauchy data
\begin{equation}
\label{1odd}
\dot{Z}(t) +\partial\Phi(Z(t)) + G( Z(t))\ni 0, \quad Z(0) = (x_0, x_1).
\end{equation}
Let us prove that the mapping $(x,u)\mapsto G(x,u)$ is Lipschitz continuous on the bounded subsets of ${\mathcal H}\times{\mathcal H}$.
For any $(x, u) \in {\mathcal H} \times {\mathcal H} $, set $G(x,u) = (-u, K(x,u))$ where
$$
K(x,u):= \nabla f(x) +\beta \nabla^2 f (x)u .
$$
Let $L_R$ be the Lipschitz constant of $\nabla f$ and $\nabla^2 f$ on the ball centered at the origin and with radius $R$, and
set $M_R = \sup_{\|x\|\leq R} \|\nabla^2 f (x) \|$.
Take $(x_i,u_i) \in {\mathcal H} \times {\mathcal H}$, $i=1,2$ with $\| (x_i,u_i)\| \leq R$. We have
$$
K (x_2,u_2) -K (x_1,u_1)= \nabla f(x_2)- \nabla f(x_1) +\beta
(\nabla^2 f (x_2)u_2 -\nabla^2 f (x_1)u_1).
$$
According to the triangle inequality, and the local Lipschitz continuity of $\nabla f$ and $\nabla^2 f$, we obtain
\begin{eqnarray*}
\| K (x_2,u_2) -K (x_1,u_1) \| &\leq& \|\nabla f(x_2)- \nabla f(x_1) \|
+\beta
\| \nabla^2 f (x_2)u_2 -\nabla^2 f (x_1)u_2\| \\
& + &
\beta
\| \nabla^2 f (x_1)u_2 -\nabla^2 f (x_1)u_1\| \\
&\leq& L_R \| x_2 -x_1 \|
+\beta L_R \| x_2 -x_1 \|\|u_2\| + \beta M_R \| u_2 -u_1 \|\\
&\leq& L_R (1+ R\beta)\| x_2 -x_1 \|+ \beta M_R \| u_2 -u_1 \|.
\end{eqnarray*}
Therefore,
\begin{eqnarray}\label{Lip_G_1}
\| G (x_2,u_2) -G (x_1,u_1) \|&\leq& L_R (1+ R\beta)\| x_2 -x_1 \|+ (1+\beta M_R) \| u_2 -u_1 \|,
\end{eqnarray}
which gives that the mapping $(x,u)\mapsto G(x,u)$ is Lipschitz continuous on the bounded subsets of ${\mathcal H}\times{\mathcal H}$.
\smallskip
\textbf{Step 3}: \textit{Approximate dynamics}. We proceed in a similar way as in Theorem \ref{basic_exist_thm} (which corresponds to the case $\beta=0$), and consider the approximate dynamics
\begin{equation}\label{hbdf_lambda_existence}
\ddot{x}_{\lambda}(t) + \nabla \phi_{\lambda} (\dot{x}_{\lambda}(t)) + \beta \nabla^2 f (x_{\lambda}(t))\dot{x}_{\lambda}(t) + \nabla f (x_{\lambda}(t)) = 0,\; t\in [0,+\infty[
\end{equation}
which uses the Moreau--Yosida approximations $(\phi_{\lambda})$ of $\phi$.
We will prove that the filtered sequence $(x_{\lambda})$
converges uniformly as $\lambda \to 0$ over the bounded time intervals towards a solution of \eqref{Hessian_def_1}.
The Hamiltonian formulation of \eqref{hbdf_lambda_existence} gives the first-order (in time) system
$$ \quad \left\{
\begin{array}{l}
\dot x_{\lambda}(t) -u_{\lambda}(t) =0; \\
\rule{0pt}{18pt}
\dot{u}_{\lambda}(t) +\nabla \phi_{\lambda}(u_{\lambda}(t)) + \nabla f(x_{\lambda}(t)) + \beta
\nabla^2 f (x_{\lambda}(t))u_{\lambda}(t) = 0 ,
\hspace{2.3cm}
\end{array}\right.
$$
with the Cauchy data
$x_{\lambda}(0) =x_0$, \, $u_{\lambda}(0)= x_1 $.
Set
$Z_{\lambda}(t) = (x_{\lambda}(t), u_{\lambda}(t)) \in {\mathcal H} \times {\mathcal H} .$\\
The above system can be written equivalently as
$$
\dot{Z}_{\lambda}(t) + F_{\lambda}( Z_{\lambda}(t))= 0, \quad Z_{\lambda}(0) = (x_0, x_1),
$$
where $F_{\lambda}: {\mathcal H} \times {\mathcal H}\rightarrow {\mathcal H} \times {\mathcal H},\;\;(x,u)\mapsto F_{\lambda}(x,u)$ is defined by
$$
F_{\lambda}(x,u)= \Big( 0, \nabla \phi_{\lambda}(u) \Big) +
\Big( -u, \nabla f(x) +\beta
\nabla^2 f (x)u \Big).
$$
Hence $F_{\lambda}$ splits as follows
$
F_{\lambda}(x,u) = \nabla \Phi_{\lambda} (x,u) + G (x,u)
$
where $\Phi $ and $G$ have been defined in \eqref{Hamilton_Hessian}.
Therefore, the approximate equation is equivalent to the first-order differential system with Cauchy data
\begin{equation}
\label{1odd_existence_b}
\dot{Z}_{\lambda}(t) +\nabla \Phi_{\lambda}(Z_{\lambda}(t)) + G( Z_{\lambda}(t))= 0, \quad Z_{\lambda}(0) = (x_0, x_1).
\end{equation}
Let's argue with $\lambda >0$ fixed.
According to the Lipschitz continuity of $\nabla \Phi_{\lambda}$, and the fact that $G$ is Lipschitz continuous on the bounded sets, we have that the sum operator $ \nabla \Phi_{\lambda} + G$ which governs \eqref{1odd_existence_b} is Lipschitz continuous on the bounded sets.
As a consequence, the existence of a local solution to \eqref{1odd_existence_b} follows from the classical Cauchy--Lipschitz theorem.
To pass from a local solution to a global solution, we use the a priori estimate obtained in Step 1 of the proof. Note that this estimate is valid for any damping potential, in particular for $\phi_{\lambda}$.
Since the Cauchy data are fixed and $f$ is minorized,
this estimate implies that, on any bounded time interval, the functions
$(x_{\lambda})$ and $(\dot{x}_{\lambda}) $ are bounded.
According to the property \eqref{ineq_phi_b} of the Yosida approximation, and the property $(iii)$ of the
damping potential $\phi$, this implies that
$$
\| \nabla \phi_{\lambda} (\dot{x}_{\lambda}(t))\| \leq \| (\partial \phi )^{0} (\dot{x}_{\lambda}(t))\|
$$
is also bounded, uniformly for $t$ in bounded intervals.
Moreover, according to the local boundedness assumption made on the gradient and the Hessian of $f$, we have that $\nabla f (x_{\lambda}(t))$ and
$\nabla^2 f (x_{\lambda}(t))\dot{x}_{\lambda}(t)$ are also bounded.
According to the constitutive equation \eqref{hbdf_lambda_existence}, this in turn implies that $(\ddot{x}_{\lambda} )$ is also bounded.
This implies that if a maximal solution is defined on a finite time interval $[0, T[$, then the limits of $x_{\lambda}(t)$ and $\dot{x}_{\lambda} (t)$
exist as $t \to T$. From this property, passing from a local to a global solution
is a classical argument.
So, for any $\lambda >0$, we have a unique global solution of
\eqref{hbdf_lambda_existence} which satisfies the Cauchy data $x_{\lambda}(0) =x_0$, $\dot{x}_{\lambda}(0)= x_1 $.
\smallskip
\textbf{Step 4}: \textit{Passing to the limit as $\lambda \to 0$}.
Take $T >0$, and $ \lambda , \mu >0$.
Consider the corresponding solutions on $[0, T]$
\begin{eqnarray*}
&& \dot{Z}_{\lambda}(t) +\nabla \Phi_{\lambda}(Z_{\lambda}(t)) + G( Z_{\lambda}(t))= 0, \quad Z_{\lambda}(0) = (x_0, x_1)
\\
&&\dot{Z}_{\mu}(t) +\nabla \Phi_{\mu}(Z_{\mu}(t)) + G( Z_{\mu}(t))= 0, \quad Z_{\mu}(0) = (x_0, x_1).
\end{eqnarray*}
Taking the difference of the two equations, and the scalar product with $Z_{\lambda}(t) - Z_{\mu}(t)$, we get
\begin{eqnarray}
\frac{1}{2} \frac{d}{dt}\| Z_{\lambda}(t) - Z_{\mu}(t) \|^2 &+ &
\left\langle \nabla \Phi_{\lambda}(Z_{\lambda}(t)) - \nabla \Phi_{\mu}(Z_{\mu}(t)) , Z_{\lambda}(t) - Z_{\mu}(t) \right\rangle \nonumber\\
&+& \left\langle G( Z_{\lambda}(t)) - G( Z_{\mu}(t)) , Z_{\lambda}(t) - Z_{\mu}(t) \right\rangle =0 . \label{basic_ex_Y_b}
\end{eqnarray}
We now use the following ingredients:
\medskip
i) According to the properties of the Yosida approximation (see \cite[Theorem 3.1]{Brezis}), we have
$$
\left\langle \nabla \Phi_{\lambda}(Z_{\lambda}(t)) - \nabla \Phi_{\mu}(Z_{\mu}(t)) , Z_{\lambda}(t) - Z_{\mu}(t) \right\rangle
\geq -\frac{\lambda}{4} \|\nabla \Phi_{\mu}(Z_{\mu}(t)) \|^2 -
\frac{\mu}{4} \|\nabla \Phi_{\lambda}(Z_{\lambda}(t)) \|^2.
$$
According to the energy estimates, the sequence $(Z_{\lambda})$ is uniformly bounded on $[0, T]$, say
$$\| Z_{\lambda}(t)\|\leq C_T .$$
From these properties we immediately infer
$$
\|\nabla \Phi_{\lambda}(Z_{\lambda}(t)) \| \leq \sup_{\|\xi\|\leq C_T} \|(\partial \phi)^0(\xi) \|= M_T <+\infty,
$$
because our assumption on $\phi$ gives that $(\partial \phi)^0$ is bounded on the bounded sets.
Therefore
$$
\left\langle \nabla \Phi_{\lambda}(Z_{\lambda}(t)) - \nabla \Phi_{\mu}(Z_{\mu}(t)) , Z_{\lambda}(t) - Z_{\mu}(t) \right\rangle
\geq -\frac{1}{4} M_T^2 (\lambda +\mu).
$$
ii) Since the mapping $G : {\mathcal H} \times {\mathcal H} \to {\mathcal H} \times {\mathcal H}$ is Lipschitz continuous on the bounded sets, and
using again that the sequence $(Z_{\lambda})$ is uniformly bounded on $[0, T]$, we deduce that there exists a constant $L_T$ such that
$$
\| G( Z_{\lambda}(t)) - G( Z_{\mu}(t)) \| \leq L_T \| Z_{\lambda}(t) - Z_{\mu}(t) \|.
$$
Combining the above results, and using the Cauchy--Schwarz inequality, we deduce from
\eqref{basic_ex_Y_b} that
$$
\frac{1}{2} \frac{d}{dt}\| Z_{\lambda}(t) - Z_{\mu}(t) \|^2
\leq \frac{1}{4} M_T^2 (\lambda +\mu) + L_T \| Z_{\lambda}(t) - Z_{\mu}(t) \|^2 .
$$
We now integrate this differential inequality.
Since $ Z_{\lambda}(0) - Z_{\mu}(0) =0$, Gronwall's lemma gives
$$
\| Z_{\lambda}(t) - Z_{\mu}(t) \|^2 \leq \frac{M_T^2}{4L_T}(\lambda +\mu) \Big( e^{2L_T t} -1 \Big).
$$
Therefore, the filtered sequence $(Z_{\lambda})$ is a Cauchy sequence for the uniform convergence on $[0, T]$, and hence it converges uniformly.
This means the uniform convergence on $[0, T]$ of $x_{\lambda}$ and $\dot{x}_{\lambda}$ to $x$ and $\dot{x}$ respectively.
Proving that $x$ is a solution of \eqref{Hessian_def_1} is done in a similar way as
in Theorem \ref{basic_exist_thm}; one just relies on the classical chain rule
$\frac{d}{dt}\left( \nabla f (x_{\lambda}(t)) \right) = \nabla^2 f (x_{\lambda}(t))\dot{x}_{\lambda}(t) $ to pass to the limit in the Hessian term.
\end{proof}
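Step 3 of the proof hinges on replacing $\phi$ by its Moreau--Yosida approximation $\phi_{\lambda}$, whose gradient is $\nabla \phi_{\lambda}(u) = \frac{1}{\lambda}\left(u - {\prox}_{\lambda \phi}(u)\right)$. The following minimal Python sketch checks this mechanism numerically for the one-dimensional dry friction potential $\phi(u)=r|u|$ (an illustrative choice on our part): the prox is soft-thresholding, so $\nabla \phi_{\lambda}$ is the clipped map $u \mapsto \mathrm{sign}(u)\min(|u|/\lambda, r)$, Lipschitz continuous and bounded by $r$, in line with the bound by $\|(\partial \phi)^0\|$ used in the proof.

```python
import math

def prox_abs(v, lam, r):
    # prox of lam * r|.| in dimension one: soft-thresholding with threshold lam * r
    return math.copysign(max(abs(v) - lam * r, 0.0), v)

def grad_phi_lam(u, lam, r):
    # gradient of the Moreau--Yosida approximation: (u - prox_{lam phi}(u)) / lam
    return (u - prox_abs(u, lam, r)) / lam

r = 2.0
for lam in (1.0, 0.1, 0.01):
    for u in (-3.0, -0.5, 0.0, 0.4, 5.0):
        g = grad_phi_lam(u, lam, r)
        # clipped form sign(u) * min(|u|/lam, r), hence |grad phi_lam| <= r everywhere
        assert abs(g) <= r + 1e-12
        assert abs(g - math.copysign(min(abs(u) / lam, r), u)) < 1e-12
```

As $\lambda \to 0$ the clipped map stiffens towards the set-valued subdifferential $r\,\mathrm{sign}(\cdot)$, which is why the a priori bound of Step 1, valid for every damping potential $\phi_{\lambda}$, is needed to pass to the limit.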
\subsection{Convergence based on the quasi-gradient approach}\label{rem-quasi-hessian}
Our objective is to address, from the perspective of quasi-gradient systems, the system (ADIGE-VH)
\begin{equation}\label{Hessian_def_1-quasi}
\ddot{x}(t) + \nabla \phi (\dot{x}(t))+ \beta \nabla^2 f (x(t))\dot{x}(t) + \nabla f (x(t)) = 0,
\end{equation}
as it was done in Section \ref{Sec-conv-KL}.
We assume that ${\mathcal H} = {\mathbb R}^N$ is a finite-dimensional Hilbert space, and that the hypotheses of
Theorem \ref{quasi_grad_thm_2} and Theorem \ref{th.existence_uniqueness} hold.
We follow the steps of the proof of Theorem \ref{quasi_grad_thm_2}.
By using the estimates in Step 1 of the proof of Theorem \ref{th.existence_uniqueness}, we easily derive the first part of
the proof of Theorem \ref{quasi_grad_thm_2}, namely that the trajectory
$t \mapsto (x(t),\dot{x}(t))$ in the phase space ${\mathbb R}^N \times {\mathbb R}^N$
belongs to the closed bounded set $\Gamma = \bar{B}(0,R) \times \bar{B}(0,\epsilon) $.
According to Step 2 in the proof of Theorem \ref{th.existence_uniqueness}, the Hamiltonian formulation of \eqref{Hessian_def_1-quasi} gives the first-order differential system
\begin{equation}\label{first_order_cl_loop_quasi_1-hessian}
\dot z(t) + F(z(t)) =0,
\end{equation}
where $z(t)=(x(t), \dot x(t)) \in {\mathbb R}^N \times {\mathbb R}^N $, and
$F: {\mathbb R}^N \times {\mathbb R}^N \to {\mathbb R}^N \times {\mathbb R}^N$
is defined by
$$F(x,u)=(-u, \nabla \phi(u)+ \nabla f(x)+\beta\nabla^2f(x)u).
$$
Let us focus on the key point, which is the angle condition ($E_{\lambda}$ is defined as in Theorem \ref{quasi_grad_thm_2}). We have
\begin{center}
$\left\langle \nabla E_{\lambda}(x,u), F(x,u) \right\rangle
= \left\langle \Big( \nabla f (x)+ \lambda \nabla^2 f (x)u, \, u + \lambda \nabla f (x) \Big), \Big(-u, \nabla \phi(u)+ \nabla f(x) + \beta\nabla^2f(x)u\Big) \right\rangle .$
\end{center}
After development and simplification, we get
\begin{eqnarray*}
\left\langle \nabla E_{\lambda}(x,u), F(x,u) \right\rangle
&=& - \lambda \left\langle \nabla^2 f (x)u, \, u \right\rangle + \left\langle u, \, \nabla \phi(u) \right\rangle
+ \lambda \left\langle \nabla f (x) , \, \nabla \phi(u) \right\rangle
+ \lambda \| \nabla f (x) \|^2\\
&& + \beta\left\langle u+\lambda\nabla f(x), \nabla^2 f (x)u\right\rangle\\
&\geq& - \lambda \left\langle \nabla^2 f (x)u, \, u \right\rangle + \left\langle u, \, \nabla \phi(u) \right\rangle
+ \lambda \left\langle \nabla f (x) , \, \nabla \phi(u) \right\rangle
+ \lambda \| \nabla f (x) \|^2\\
&& + \lambda\beta\left\langle \nabla f(x), \nabla^2 f (x)u\right\rangle,
\end{eqnarray*}
where we used that $\nabla^2 f(x)$ is positive semidefinite.
The only difference with respect to the corresponding step in the proof of Theorem \ref{quasi_grad_thm_2} is that we need
to estimate the extra term $\lambda\beta\left\langle \nabla f(x), \nabla^2 f (x)u\right\rangle$. We do this by writing
$\lambda\beta\left\langle \nabla f(x), \nabla^2 f (x)u\right\rangle\geq -\frac{\lambda}{4}\|\nabla f(x)\|^2
-\lambda\beta^2 M^2\|u\|^2$,
and get
\begin{eqnarray}
\left\langle \nabla E_{\lambda}(x,u), F(x,u) \right\rangle
&\geq & \Big( \gamma -\lambda M - \frac{\lambda}{2} \delta^2 -\lambda \beta^2M^2 \Big) \|u\|^2
+ \frac{\lambda}{4} \| \nabla f (x) \|^2 . \label{quasi_gradient_2_hessian}
\end{eqnarray}
Take $ \lambda $ small enough to satisfy
$
\gamma > \lambda \left(M + \frac{\delta^2}{2} +\beta ^2M^2 \right).
$
Then
\begin{eqnarray}
\left\langle \nabla E_{\lambda}(x,u), F(x,u) \right\rangle
&\geq & \alpha_0 ( \|u\|^2
+ \| \nabla f (x) \|^2), \label{quasi_gradient_3_hessian}
\end{eqnarray}
with $\alpha_0:= \min\{\gamma - \lambda \left(M + \frac{\delta^2}{2} +\beta^2M^2 \right) , \frac{\lambda}{4} \}$.
On the other hand, as in Theorem \ref{quasi_grad_thm_2},
\begin{eqnarray*}
\| \nabla E_{\lambda}(x,u)\| &\leq& C_1 ( \|u\|^2
+ \| \nabla f (x) \|^2)^{\frac{1}{2}} \\
\| F(x,u)\| &\leq& C_2 ( \|u\|^2
+ \| \nabla f (x) \|^2)^{\frac{1}{2}} ,
\end{eqnarray*}
where $C_2=\sqrt{4+3\delta^2+3\beta^2M^2}$.
Therefore
\begin{eqnarray}
\| \nabla E_{\lambda}(x,u)\| \| F(x,u) \|
&\leq & C_1 C_2 ( \|u\|^2
+ \| \nabla f (x) \|^2). \label{quasi_gradient_4_hessian}
\end{eqnarray}
Therefore, for
$
\alpha := \frac{\alpha_0 }{C_1C_2},
$
the angle condition
$
\left\langle \nabla E (z), F(z) \right\rangle
\geq \alpha \|\nabla E (z) \| \| F(z) \|
$
is satisfied on $\Gamma$.
Let us summarize the above results.
\begin{theorem} \label{quasi_thm_VH}
Let $f:{\mathcal H} \to {\mathbb R}$ be a convex function which is twice continuously differentiable, and such that $\inf_{{\mathcal H}} f >-\infty$. We suppose that
\medskip
$(i)$ $\nabla f$ is Lipschitz continuous on the bounded subsets of ${\mathcal H}$;
\smallskip
$(ii)$ $\nabla^2 f$ is Lipschitz continuous on the bounded subsets of ${\mathcal H}$.
\smallskip
\noindent Let $ E_{\lambda}: {\mathbb R}^N \times {\mathbb R}^N\to {\mathbb R} $ be defined by: for all $(x,u)\in {\mathbb R}^N \times {\mathbb R}^N$
$$
E_{\lambda}(x,u):= \frac{1}{2} \|u\|^2 + f(x) +\lambda \left\langle \nabla f (x), u\right\rangle.
$$
Suppose that $ E_{\lambda}$ satisfies the {\rm (KL)} property.
Let $\phi : {\mathbb R}^N \to {\mathbb R}_+$ be a damping potential (see Definition \ref{def1}) which is differentiable, and which satisfies the following growth conditions $(i)$ and $(ii)$:
\medskip

$(i)$ (local) there exist positive constants $\gamma$, $\delta$, and $\epsilon$ such that, for all $u$ in
${\mathbb R}^N$ with $\|u\| \leq \epsilon$,
$$\phi (u) \geq \gamma \|u\|^2 \mbox{ and } \|\nabla \phi (u) \| \leq \delta \|u\|.$$
$(ii)$ (global) there exist $p\geq 1$ and $c>0$ such that $\phi (u) \geq c\|u\|^p$ for all $u$ in ${\mathbb R}^N$.
\medskip
\noindent Let $x: [0, +\infty[ \to \mathbb R^N $ be a bounded solution trajectory of
$$
\ddot{x}(t) + \nabla \phi (\dot{x}(t))+ \beta \nabla^2 f (x(t))\dot{x}(t) + \nabla f (x(t)) = 0.
$$
Then, the following properties are satisfied:
\medskip
$(i)$ \, $x(t) \to x_{\infty}$ as $t \to + \infty$, where $ x_{\infty}\in \crit f$;
\medskip
$(ii)$ \, $\dot{x} \in L^1 (0, +\infty; {\mathbb R}^N )$ , $\dot{x}(t) \to 0$ as $t \to + \infty$;
\medskip
$(iii)$ \, For $\lambda$ sufficiently small, and $t$ sufficiently large
$$ \| x(t) - x_{\infty}\| \leq \frac{1}{\alpha} \theta \Big( E_{\lambda}(x(t),u(t)) - E_{\lambda} (x_{\infty},0)\Big)
$$
\smallskip
\noindent where $\theta$ is the desingularizing function for $ E_{\lambda}$ at $(x_{\infty},0)$, and $\alpha$ enters the angle condition.
\end{theorem}
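As a sanity check, the lower bound \eqref{quasi_gradient_3_hessian} behind the angle condition can be verified numerically on a model problem. The Python sketch below uses the illustrative data $f(x)=\frac{1}{2}\|x\|^2$ and $\phi(u)=\frac{1}{2}\|u\|^2$ (so $M=1$, $\gamma=\frac{1}{2}$, $\delta=1$), with $\beta$ and $\lambda$ chosen by us so that $\gamma > \lambda \left(M + \frac{\delta^2}{2} +\beta^2 M^2\right)$; none of these values come from the text.

```python
import numpy as np

# Illustrative model data: f(x) = 0.5||x||^2 (grad f = x, Hessian = I, M = 1)
# and phi(u) = 0.5||u||^2 (grad phi = u, gamma = 1/2, delta = 1).
beta, lam = 0.1, 0.1
M, gamma, delta = 1.0, 0.5, 1.0
# lam satisfies gamma > lam*(M + delta^2/2 + beta^2 M^2), so alpha0 > 0
alpha0 = min(gamma - lam * (M + delta**2 / 2 + beta**2 * M**2), lam / 4)

def grad_E(x, u):
    # nabla E_lam(x,u) = (grad f(x) + lam * Hess f(x) u, u + lam * grad f(x))
    return np.concatenate([x + lam * u, u + lam * x])

def F(x, u):
    # F(x,u) = (-u, grad phi(u) + grad f(x) + beta * Hess f(x) u)
    return np.concatenate([-u, u + x + beta * u])

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.uniform(-1, 1, 3)          # sample points in a bounded set Gamma
    u = rng.uniform(-1, 1, 3)
    lhs = grad_E(x, u) @ F(x, u)
    rhs = alpha0 * (u @ u + x @ x)     # lower bound (quasi_gradient_3_hessian)
    assert lhs >= rhs - 1e-12
```

For this quadratic model the inner product reduces to $(1+\beta-\lambda)\|u\|^2 + \lambda(1+\beta)\langle x, u\rangle + \lambda\|x\|^2$, and Young's inequality on the cross term confirms the bound analytically as well.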
\subsection{Numerical illustrations}\label{sec:num_H}
We revisit the numerical examples of Section \ref{sec:num}, where we introduce an additional Hessian damping.
\begin{small}
\begin{figure}[h!]
\centering
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{beta=0.pdf}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{beta=005.pdf}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{beta=01.pdf}}\\
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{beta=05.pdf}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{beta=1.pdf}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{beta=3.pdf}}\\
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{beta=10.pdf}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{beta=25.pdf}}\hspace{0.03cm}
{\includegraphics*[viewport=78 200 540 600,width=0.325\textwidth]{beta=100.pdf}}
\caption{\small Evolution of the trajectories $x(t)$ (blue) and $\dot x(t)$ (red line) of \eqref{adige_v_one_dim_H} for different values of $\beta $.
}
\label{fig:ex3}
\end{figure}
\end{small}
So, we take ${\mathcal H} = {\mathbb R}$, $f(x) = \frac{1}{2} |x|^2$, and $\phi (u) = \frac{1}{p} |u|^p$ with $p>1$.
Then, (ADIGE-VH) reads
\begin{equation}\label{adige_v_one_dim_H}
\ddot x(t) + |\dot x(t)|^{p-2} \dot x(t)+ \beta \dot x(t) + x(t)=0.
\end{equation}
For $\beta >0$, we are in the framework of Theorem \ref{strong_convex_thm}, with $\phi (u)= \frac{\beta}{2}|u|^2 + \frac{1}{p}|u|^p $. So we have convergence at an exponential rate of $x(t)$ and $\dot{x}(t)$ towards zero. This makes a big contrast with the case $\beta=0$, for which we have convergence towards zero, but with many oscillations in the case of weak damping ($p$ large).
Note that even for very small $\beta >0$, we have a rapid stabilization of the trajectory towards the origin. On the other hand, taking $\beta$ large is not beneficial: we can observe in Figure \ref{fig:ex3} that the quality of convergence is degraded in this case.
Indeed, since the damping attached to $|\dot x(t)|^{p-2} \dot x(t)$ is negligible for large $p$ with respect to the damping attached to $\beta \dot x(t)$, the ``optimal'' value of $\beta$ is close to the optimal value for (HBF). So, according to Theorem \ref{strong-conv-thm}, it is close to $2\sqrt{\mu}$, where $\mu$ is the modulus of strong convexity of $f$. In our situation, this gives
$\beta \sim 2$.
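The behavior described above is easy to reproduce. The following minimal Python sketch integrates \eqref{adige_v_one_dim_H} with a standard RK4 scheme; the values $p=3$, $\beta=2$, the step size, and the initial data are our illustrative choices, not the exact settings of Figure \ref{fig:ex3}.

```python
# RK4 integration of  x'' + |x'|^{p-2} x' + beta x' + x = 0  (adige_v_one_dim_H);
# p = 3 and beta = 2 are illustrative choices (beta = 2 is near-optimal here).
p, beta = 3.0, 2.0

def rhs(x, u):
    # first-order form: x' = u,  u' = -(|u|^{p-2} u + beta u + x)
    return u, -(abs(u) ** (p - 2) * u + beta * u + x)

def integrate(x0, u0, h=1e-3, T=30.0):
    x, u = x0, u0
    for _ in range(int(T / h)):
        k1x, k1u = rhs(x, u)
        k2x, k2u = rhs(x + h/2 * k1x, u + h/2 * k1u)
        k3x, k3u = rhs(x + h/2 * k2x, u + h/2 * k2u)
        k4x, k4u = rhs(x + h * k3x, u + h * k3u)
        x += h/6 * (k1x + 2*k2x + 2*k3x + k4x)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
    return x, u

x_T, u_T = integrate(1.0, 0.0)   # exponential decay expected for beta > 0
```

At the final time both $x(T)$ and $\dot x(T)$ are numerically negligible, in agreement with the exponential rate of Theorem \ref{strong_convex_thm}; setting $\beta=0$ and taking $p$ large in the same script reproduces the slowly damped oscillations.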
\subsection{Link with the regularized Newton method}
Let us specify the link between our study and Newton's method for solving $Ax \ni 0$, where $A$ is a general maximally monotone operator (for convex minimization, take $A =\partial f$).
To overcome the ill-posed character of the continuous Newton method, the following first-order evolution system was studied by
Attouch--Svaiter \cite{ASv},
\begin{equation*}
\left\{
\begin{array}{l}
v(t) \in A(x(t)) \hspace{2cm} \\
\rule{0pt}{15pt}
\gamma(t) \dot{x}(t) + \beta \dot{v}(t) + v(t) =0 .
\end{array}\right.
\end{equation*}
The system can be considered as a continuous version of the
Levenberg--Marquardt method, which acts as a regularization of the Newton method.
Under a fairly general assumption on the regularization parameter $\gamma (t)$, this system is well-posed and generates trajectories that converge weakly to equilibria. Parallel results have been obtained for the associated proximal algorithms obtained by implicit temporal discretization, see \cite{AAS}, \cite{AMAS}, \cite{ARS}.
Formally, when $A$ is differentiable, this system can be written as
$
\gamma(t) \dot{x}(t) + \beta \frac{d}{dt} \left( A(x(t))\right) + A(x(t)) = 0.
$
When $A =\nabla f$ we obtain
\begin{equation}\label{ARS_Newton}
\gamma(t) \dot{x}(t) + \beta \nabla^2 f (x(t)) \dot{x}(t) + \nabla f(x(t)) = 0.
\end{equation}
The system (ADIGE-VH) considered in the previous section can be seen as an inertial version of the above system \eqref{ARS_Newton}.
Most interestingly, Attouch--Redont--Svaiter developed in \cite{ARS} a closed-loop version of the above results.
They showed the convergence of the trajectories generated by the closed-loop control system when $0<p<1$, where $A$ is a general maximally monotone operator:
\begin{equation*}
\left\{
\begin{array}{l}
v(t) \in A(x(t)) \vspace{3mm}\\
\| v(t) \|^p \dot{x}(t) + \dot{v}(t) + v(t)=0 \vspace{3mm}\\
x(0) =x_0, \, v(0)=v_0\in A (x_0), \, v_0 \neq 0.
\end{array}\right.
\end{equation*}
\noindent For optimization problems, this naturally suggests considering autonomous inertial systems where the damping coefficient is a closed-loop control of the gradient of $f$.
A first answer to this question has been obtained by Lin--Jordan \cite{LJ}, who considered the autonomous system
\begin{equation}\label{general_coef}
\ddot{x}(t) + \gamma (t) \dot{x}(t) + \beta (t) \nabla^2 f (x(t)) \dot{x} (t) + b(t)\nabla f (x(t)) = 0,
\end{equation}
where $\gamma$, $\beta$ and $b$ are defined by the following formulas:
\begin{equation}\label{general_coef_b}
\left\{
\begin{array}{l}
| \lambda(t)|^p \| \nabla f (x(t)) \|^{p-1} =\theta \vspace{2mm}
\\
a(t)= \frac{1}{4}\left( \int_0^t \sqrt{\lambda (s)}ds +c \right)^2
\vspace{2mm}\\
\gamma(t) = 2 \frac{\dot{a}(t)}{a(t)} - \frac{\ddot{a}(t)}{\dot{a}(t)} \vspace{2mm} \\
\beta (t) = \left(\frac{\dot{a}(t)}{a(t)}\right)^2 \vspace{2mm}\\
b(t)= \frac{\dot{a}(t)( \dot{a}(t) + \ddot{a}(t) )}{a(t)}
\end{array}\right.
\end{equation}
As a specific feature, the damping coefficients are expressed with the help of $\lambda (t)$ which is equal to a power of the inverse of the norm of the gradient of $f$.
The authors give some interesting non-trivial convergence rates for various values of the parameters. Thanks to the presence of the Hessian-driven damping term, they show the fast convergence towards zero of the gradient norms.
\section{Closed-loop damping involving the velocity and the gradient}
\label{Sec: combine}
Let us consider the following system, where the damping term $\partial \phi \Big(\dot{x}(t) + \beta \nabla f (x(t))\Big)$ involves both the velocity vector and the gradient of the potential function $f$:
\begin{equation}\label{closed_loop_inertial_both_1}
\mbox{\rm (ADIGE-VGH)} \quad \ddot{x}(t) + \partial \phi \Big(\dot{x}(t) + \beta \nabla f (x(t))\Big) + \beta \nabla^2 f (x(t)) \dot{x} (t) + \nabla f (x(t)) \ni 0.
\end{equation}
The parameter $\beta \geq 0$ is attached to the geometric damping induced by the Hessian. As previously considered, $\phi$ is a damping potential function. The suffixes V, G, H refer respectively to the Velocity, the Gradient of $f$, and the Hessian of $f$, which enter the damping terms of the above dynamic.
This model makes it possible to encompass several situations.
\medskip
\noindent $\bullet$ When $\beta=0$, we recover the closed-loop controlled system
\begin{equation}\label{closed_loop_inertial_both_2}
\ddot{x}(t) + \partial \phi \Big( \dot{x}(t) \Big) + \nabla f (x(t)) = 0,
\end{equation}
studied from Section \ref{sec: basic_1} to \ref{sec: basic_3}. So studying \eqref{closed_loop_inertial_both_1} can be viewed as an extension of our previous study.
Still, we will see that taking $ \beta> 0 $ induces several favorable properties.
\medskip
\noindent $\bullet$ When $\phi (u) = \frac{\gamma}{2} \|u\|^2$, we obtain the system
\begin{equation}\label{closed_loop_inertial_both_3}
\ddot{x}(t) + \gamma \dot{x}(t) + \beta \nabla^2 f (x(t)) \dot{x} (t) + (1+ \gamma \beta) \nabla f (x(t)) = 0,
\end{equation}
studied in Section \ref{Sec:Hessian}, and which was introduced by Alvarez--Attouch--Bolte--Redont in \cite{AABR}.
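To fix ideas, the special case \eqref{closed_loop_inertial_both_3} is easy to integrate numerically. The following Python sketch is not part of the original analysis: the quadratic test function, the parameter values and the explicit Euler discretization are our own illustrative choices. It shows the trajectory and the velocity being driven to the unique minimizer; note that this second-order form requires the Hessian of $f$ (here constant).

```python
import numpy as np

# Hypothetical test problem: f(x) = 0.5 * x^T A x with A = diag(1, 10),
# so grad f(x) = A x and Hess f(x) = A for every x.
A = np.array([1.0, 10.0])

def run_aabr(x0, v0, gamma=1.0, beta=0.2, h=1e-3, T=20.0):
    """Explicit Euler on the linear-damping system
    x'' + gamma x' + beta Hess f(x) x' + (1 + gamma*beta) grad f(x) = 0."""
    x, v = np.array(x0, dtype=float), np.array(v0, dtype=float)
    for _ in range(int(T / h)):
        acc = -gamma * v - beta * (A * v) - (1.0 + gamma * beta) * (A * x)
        x = x + h * v          # position update with the old velocity,
        v = v + h * acc        # then velocity update with the acceleration
    return x, v

x_T, v_T = run_aabr(x0=[1.0, 1.0], v0=[0.0, 0.0])
# Both x(T) and x'(T) are driven close to the unique minimizer 0 of f.
```

The viscous and geometric damping terms together give an exponential decay, in accordance with the strongly convex case treated at the end of this section.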
\subsection{Existence and uniqueness results}
A key property for studying \eqref{closed_loop_inertial_both_1} is the following equivalent formulation, different from the Hamiltonian formulation, and whose proof is immediate. Just introduce
the new variable $u(t) := \dot{x}(t) + \beta \nabla f (x(t)) $.
\begin{proposition}\label{first_order_both}
The following are equivalent
\medskip
\quad $(i) $ \, $\ddot{x}(t) + \partial \phi \Big(\dot{x}(t) + \beta \nabla f (x(t))\Big) + \beta \nabla^2 f (x(t)) \dot{x} (t) + \nabla f (x(t)) \ni 0.$
\begin{equation*}
\quad (ii) \; \left\{
\begin{array}{l}
\dot{x}(t) + \beta \nabla f (x(t)) -u(t) = 0 \hspace{8cm} \\
\rule{0pt}{15pt}
\dot{u}(t) +\partial \phi (u(t)) + \nabla f (x(t)) \ni 0.\hspace{7.5cm}
\end{array}\right.
\end{equation*}
\end{proposition}
\noindent A major interest of the formulation $(ii)$ is that it is a first-order system in time and space (without occurrence of the Hessian). As such, it requires fewer regularity assumptions on $f$ than in Theorem \ref{th.existence_uniqueness}.
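This practical interest of $(ii)$ can be seen on a minimal numerical sketch (our own construction, with a hypothetical quadratic objective and the smooth damping potential $\phi(u) = \frac{1}{2}\|u\|^2$): the explicit Euler discretization of $(ii)$ only calls the gradient of $f$, never its Hessian.

```python
import numpy as np

# Hypothetical test data: f(x) = 0.5 * x^T A x, A = diag(1, 10); the smooth
# damping potential phi(u) = 0.5*||u||^2 gives grad phi(u) = u.
A = np.array([1.0, 10.0])
grad_f = lambda x: A * x   # the Hessian of f is never evaluated below

def integrate(x0, x1, beta=0.5, h=1e-3, T=20.0):
    x = np.array(x0, dtype=float)
    # u(0) = x'(0) + beta * grad f(x(0))
    u = np.array(x1, dtype=float) + beta * grad_f(np.array(x0, dtype=float))
    for _ in range(int(T / h)):
        g = grad_f(x)
        # x' = u - beta grad f(x);  u' = -grad phi(u) - grad f(x)
        x, u = x + h * (u - beta * g), u + h * (-u - g)
    return x, u

x_T, u_T = integrate(x0=[2.0, -1.0], x1=[0.0, 0.0])
# x approaches argmin f = {0}, and u = x' + beta*grad f(x) tends to 0.
```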
\begin{theorem}\label{th.existence_uniqueness_both}
Let $f:{\mathcal H} \to {\mathbb R}$ be a convex function which is twice continuously differentiable, and such that $\inf_{{\mathcal H}} f >-\infty$. Suppose that $\nabla f$ is Lipschitz continuous on the bounded subsets of ${\mathcal H}$. Let $\phi: {\mathcal H} \to {\mathbb R}$ be a convex continuous damping function.
Then, for any Cauchy data $(x_0, x_1 ) \in {\mathcal H} \times {\mathcal H}$, there exists a unique strong global solution $x : [0, +\infty[ \to {\mathcal H}$ of
\mbox{\rm (ADIGE-VGH)}
satisfying $x(0) = x_0$, and $\dot{x}(0)=x_1 $.
\end{theorem}
\begin{proof} The structure of the proof being similar to Theorem \ref{th.existence_uniqueness}, we only develop the original aspects.
\smallskip
\textbf{Step 1}: \textit{A priori estimate}. Note that \eqref{closed_loop_inertial_both_1} can be equivalently written as
\begin{equation}\label{closed_loop_inertial_both_4}
\frac{d}{dt} \Big( \dot{x}(t) + \beta \nabla f (x(t)) \Big)+ \partial \phi \Big(\dot{x}(t) + \beta \nabla f (x(t))\Big) + \nabla f (x(t)) \ni 0.
\end{equation}
After taking the scalar product of
\eqref{closed_loop_inertial_both_4} with $\dot{x}(t) + \beta \nabla f (x(t))$, we get
%
\begin{eqnarray}
\frac{1}{2} \frac{d}{dt} \| \dot{x}(t) &+& \beta \nabla f (x(t)) \|^2 + \left\langle \partial \phi (\dot{x}(t) + \beta \nabla f (x(t))), \dot{x}(t) + \beta \nabla f (x(t)) \right\rangle \nonumber \\
&+&\left\langle \nabla f (x(t)), \dot{x}(t) + \beta \nabla f (x(t))\right\rangle =0. \label{closed_loop_both_energy_1}
\end{eqnarray}
Since $\phi$ is a damping potential, the subdifferential inequality for convex functions gives
$$
\left\langle \partial \phi (\dot{x}(t) + \beta \nabla f (x(t))), \dot{x}(t) + \beta \nabla f (x(t)) \right\rangle \geq \phi (\dot{x}(t) + \beta \nabla f (x(t))).
$$
Collecting the above results, we obtain
\begin{equation}\label{closed_loop_both_b}
\frac{d}{dt} \left( \frac{1}{2} \| \dot{x}(t) + \beta \nabla f (x(t)) \|^2 + f(x(t)) - \inf\nolimits_{{\mathcal H}} f\right) + \phi \Big(\dot{x}(t) + \beta \nabla f (x(t))\Big) + \beta \|\nabla f (x(t))\|^2 \leq 0.
\end{equation}
Therefore, the energy-like function
\begin{equation}\label{closed_loop_both_energy-decrease}
t \mapsto \frac{1}{2} \| \dot{x}(t) + \beta \nabla f (x(t)) \|^2 + f(x(t)) - \inf\nolimits_{{\mathcal H}} f \quad \mbox{is nonincreasing}.
\end{equation}
This implies that, as long as the trajectory is defined
\begin{equation}\label{closed_loop_both_c}
\| \dot{x}(t) + \beta \nabla f (x(t)) \|^2 \leq C:= \| x_1 + \beta \nabla f (x_0) \|^2 + 2 (f(x_0) - \inf\nolimits_{{\mathcal H}} f ).
\end{equation}
From this, we will obtain a bound on the trajectory.
We have
$$
\dot{x}(t) + \beta \nabla f (x(t)) = k(t)
$$
with $\|k(t)\| \leq \sqrt{C}$. Take the scalar product of the above equation with $x(t)-x_0$.
$$
\frac{1}{2} \frac{d}{dt} \|x(t) -x_0 \|^2 + \beta \left\langle \nabla f (x(t)) - \nabla f (x_0), x(t) -x_0 \right\rangle +
\beta \left\langle \nabla f (x_0), x(t) -x_0 \right\rangle = \left\langle k(t)
, x(t) -x_0 \right\rangle. $$
According to the convexity of $f$ (and hence the monotonicity of $\nabla f$) and the Cauchy--Schwarz inequality,
\begin{equation}\label{dx2_leq_k}
\frac{1}{2} \frac{d}{dt} \|x(t) -x_0 \|^2 \leq ( \|k(t)\| + \beta \|\nabla f (x_0) \|)\|x(t) -x_0 \|.
\end{equation}
Integrating this differential inequality, and using $\|k(t)\| \leq \sqrt{C}$, we obtain
\begin{equation}\label{closed_loop_both_d}
\|x(t) -x_0 \| \leq t \left( \| x_1 + \beta \nabla f (x_0) \|+ \sqrt{2 (f(x_0) - \inf\nolimits_{{\mathcal H}} f )} + \beta \|\nabla f (x_0)\| \right).
\end{equation}
\medskip
\textbf{Step 2}: \textit{first-order formulation of \eqref{closed_loop_inertial_both_1}}.
According to Proposition \ref{first_order_both}, it is equivalent
to solve the first-order system
\begin{equation*}
\left\{
\begin{array}{l}
\dot{x}(t) + \beta \nabla f (x(t)) -u(t) = 0 \hspace{8cm} \\
\rule{0pt}{15pt}
\dot{u}(t) +\partial \phi (u(t)) + \nabla f (x(t)) \ni 0,\hspace{7.5cm}
\end{array}\right.
\end{equation*}
with the Cauchy data
$x(0) =x_0$, \, $u(0)= x_1 + \beta \nabla f (x_0)$.
Set
$Z(t) = (x(t), u(t)) \in {\mathcal H} \times {\mathcal H} .$\\
The above system can be written equivalently as
$$
\dot{Z}(t) + F( Z(t))\ni 0, \quad Z(0) = (x_0, x_1 + \beta \nabla f (x_0)),
$$
where $F: {\mathcal H} \times {\mathcal H}\rightrightarrows {\mathcal H} \times {\mathcal H},\;\;(x,u)\mapsto F(x,u)$ is defined by
$$
F(x,u)= \Big( 0, \partial \phi(u) \Big) +
\Big(\beta \nabla f(x) -u, \nabla f(x) \Big).
$$
Hence $F$ splits as follows
$$
F(x,u) = \partial \Phi (x,u) + G (x,u),
$$
where
\begin{equation}\label{Hamilton_Hessian_b}
\Phi (x,u) = \phi(u)
\, \mbox{ and } \,
G(x,u) = \Big(\beta \nabla f(x) -u, \nabla f(x) \Big).
\end{equation}
Therefore, it is equivalent to solve the following first-order differential inclusion with Cauchy data
\begin{equation}
\label{1odd_b}
\dot{Z}(t) +\partial\Phi(Z(t)) + G( Z(t))\ni 0, \quad Z(0) = (x_0, x_1 + \beta \nabla f (x_0)).
\end{equation}
According to the local Lipschitz assumption on the gradient of $f$, we immediately obtain that the mapping $(x,u)\mapsto G(x,u)$ is Lipschitz continuous on the bounded subsets of ${\mathcal H}\times{\mathcal H}$.
\textbf{Step 3}: \textit{Approximate dynamics}. We consider the approximate dynamics
\begin{equation}\label{hbdf_lambda_existence_b}
\ddot{x}_{\lambda}(t) + \nabla \phi_{\lambda} \Big(\dot{x}_{\lambda}(t) + \beta \nabla f (x_{\lambda}(t))\Big) + \beta \nabla^2 f (x_{\lambda}(t)) \dot{x}_{\lambda} (t) + \nabla f (x_{\lambda}(t)) = 0,\; t\in [0,+\infty[
\end{equation}
which uses the Moreau--Yosida approximates $(\phi_{\lambda})$ of $\phi$.
We will prove that the filtered sequence $(x_{\lambda})$
converges uniformly as $\lambda \to 0$ over the bounded time intervals towards a solution of \eqref{closed_loop_inertial_both_1}.
The first-order formulation of \eqref{hbdf_lambda_existence_b} gives the following system
$$ \quad \left\{
\begin{array}{l}
\dot{x}_{\lambda}(t) + \beta \nabla f (x_{\lambda}(t)) -u_{\lambda}(t) = 0 ; \\
\rule{0pt}{18pt}
\dot{u}_{\lambda}(t) +\nabla \phi_{\lambda}(u_{\lambda}(t)) + \nabla f(x_{\lambda}(t)) = 0 ,
\hspace{2.3cm}
\end{array}\right.
$$
with the Cauchy data
$x_{\lambda}(0) =x_0$, \, $u_{\lambda}(0)= x_1 + \beta \nabla f (x_0)$.
Set
$Z_{\lambda}(t) = (x_{\lambda}(t), u_{\lambda}(t)) \in {\mathcal H} \times {\mathcal H} .$\\
The above system can be written equivalently as
$$
\dot{Z}_{\lambda}(t) + F_{\lambda}( Z_{\lambda}(t)) = 0, \quad Z_{\lambda}(0) = (x_0, x_1 + \beta \nabla f (x_0)),
$$
where $F_{\lambda}: {\mathcal H} \times {\mathcal H}\rightarrow {\mathcal H} \times {\mathcal H},\;\;(x,u)\mapsto F_{\lambda}(x,u)$ is defined by
$$
F_{\lambda}(x,u)= \Big( 0, \nabla \phi_{\lambda}(u) \Big) +
\Big(\beta \nabla f(x) -u, \nabla f(x) \Big).
$$
Hence $F_{\lambda}$ splits as follows
$
F_{\lambda}(x,u) = \nabla \Phi_{\lambda} (x,u) + G (x,u)
$
where $\Phi $ and $G$ have been defined in \eqref{Hamilton_Hessian_b}.
Therefore, the approximate equation is equivalent to the first-order differential system with Cauchy data
\begin{equation}
\label{1odd_existence_bb}
\dot{Z}_{\lambda}(t) +\nabla \Phi_{\lambda}(Z_{\lambda}(t)) + G( Z_{\lambda}(t))= 0, \quad Z_{\lambda}(0) = (x_0, x_1 + \beta \nabla f (x_0)).
\end{equation}
Let us argue with $\lambda >0$ fixed.
According to the Lipschitz continuity of $\nabla \Phi_{\lambda}$, and the fact that $G$ is Lipschitz continuous on the bounded sets, we have that the sum operator $ \nabla \Phi_{\lambda} + G$ which governs \eqref{1odd_existence_bb} is Lipschitz continuous on the bounded sets.
As a consequence, the existence of a local solution to \eqref{1odd_existence_bb} follows from the classical
Cauchy--Lipschitz theorem.
To pass from a local solution to a global solution, we use the a priori estimates \eqref{closed_loop_both_c} and \eqref{closed_loop_both_d} obtained in Step 1 of the proof.
Note that these estimates are valid for any damping potential, in particular for $\phi_{\lambda}$.
Suppose that a maximal solution is defined on a finite time interval
$[0,T[$.
According to \eqref{closed_loop_both_d} we first obtain that $x_{\lambda}(t)$ remains bounded on $[0,T[$. Then, using \eqref{closed_loop_both_c} and the fact that the gradient of $f$ is Lipschitz continuous on the bounded sets, we obtain that $\dot{x}_{\lambda}(t) $ is also bounded on $[0,T[$.
According to the property \eqref{ineq_phi_b} of the Yosida approximation, the property $(iii)$ of the
damping potential $\phi$, and \eqref{closed_loop_both_c}, this implies that
$$
\| \nabla \phi_{\lambda} \Big(\dot{x}_{\lambda}(t) + \beta \nabla f (x_{\lambda}(t))\Big)\| \leq \| (\partial \phi )^{0} \Big(\dot{x}_{\lambda}(t) + \beta \nabla f (x_{\lambda}(t))\Big)\|
$$
is also bounded on $[0,T[$.
Moreover, since $\nabla f$ is Lipschitz continuous on the bounded sets (so that the Hessian is bounded on bounded sets), and since $x_{\lambda}(t)$ and $\dot{x}_{\lambda}(t)$ are bounded, we have that
$\nabla^2 f (x_{\lambda}(t))\dot{x}_{\lambda}(t)$ is also bounded.
According to the constitutive equation \eqref{hbdf_lambda_existence_b}, this in turn implies that $\ddot{x}_{\lambda}(t)$ is also bounded.
This implies that the limits of $x_{\lambda}(t)$ and $\dot{x}_{\lambda} (t)$
exist, as $t \to T$. According to this property, passing from a local to a global solution
is a classical argument.
So, for any $\lambda >0$ we have a unique global solution of
\eqref{hbdf_lambda_existence_b} which satisfies the Cauchy data $x_{\lambda}(0) =x_0$, $\dot{x}_{\lambda}(0)= x_1 $.
\smallskip
\textbf{Step 4}: \textit{Passing to the limit as $\lambda \to 0$}.
Take $T >0$, and $ \lambda , \mu >0$.
Consider the corresponding solutions on $[0, T]$
\begin{eqnarray*}
&& \dot{Z}_{\lambda}(t) +\nabla \Phi_{\lambda}(Z_{\lambda}(t)) + G( Z_{\lambda}(t))= 0, \quad Z_{\lambda}(0) = (x_0, x_1 + \beta \nabla f (x_0))
\\
&&\dot{Z}_{\mu}(t) +\nabla \Phi_{\mu}(Z_{\mu}(t)) + G( Z_{\mu}(t))= 0, \quad Z_{\mu}(0) = (x_0, x_1 + \beta \nabla f (x_0)).
\end{eqnarray*}
Subtracting the two equations, and taking the scalar product with $Z_{\lambda}(t) - Z_{\mu}(t)$, we get
\begin{eqnarray}
\frac{1}{2} \frac{d}{dt}\| Z_{\lambda}(t) - Z_{\mu}(t) \|^2 &+ &
\left\langle \nabla \Phi_{\lambda}(Z_{\lambda}(t)) - \nabla \Phi_{\mu}(Z_{\mu}(t)) , Z_{\lambda}(t) - Z_{\mu}(t) \right\rangle \nonumber\\
&+& \left\langle G( Z_{\lambda}(t)) - G( Z_{\mu}(t)) , Z_{\lambda}(t) - Z_{\mu}(t) \right\rangle =0 . \label{basic_ex_Y_bb}
\end{eqnarray}
We now use the following ingredients:
\medskip
i) According to the general properties of the Yosida approximation (see \cite[Theorem 3.1]{Brezis}), we have
$$
\left\langle \nabla \Phi_{\lambda}(Z_{\lambda}(t)) - \nabla \Phi_{\mu}(Z_{\mu}(t)) , Z_{\lambda}(t) - Z_{\mu}(t) \right\rangle
\geq -\frac{\lambda}{4} \|\nabla \Phi_{\mu}(Z_{\mu}(t)) \|^2 -
\frac{\mu}{4} \|\nabla \Phi_{\lambda}(Z_{\lambda}(t)) \|^2.
$$
According to the energy estimates, the sequence $(Z_{\lambda})$ is uniformly bounded on $[0, T]$, say
$$\| Z_{\lambda}(t)\|\leq C_T .$$
From these properties we immediately infer
$$
\|\nabla \Phi_{\lambda}(Z_{\lambda}(t)) \| \leq \sup_{\|\xi\|\leq C_T} \|(\partial \phi)^0(\xi) \|= M_T <+\infty
$$
because our assumption on $\phi$ gives that $(\partial \phi)^0$ is bounded on the bounded sets.
Therefore
$$
\left\langle \nabla \Phi_{\lambda}(Z_{\lambda}(t)) - \nabla \Phi_{\mu}(Z_{\mu}(t)) , Z_{\lambda}(t) - Z_{\mu}(t) \right\rangle
\geq -\frac{1}{4} M_T^2 (\lambda +\mu).
$$
ii) Since the mapping $G : {\mathcal H} \times {\mathcal H} \to {\mathcal H} \times {\mathcal H}$ is Lipschitz continuous on the bounded sets, and
using again that the sequence $(Z_{\lambda})$ is uniformly bounded on $[0, T]$, we deduce that there exists a constant $L_T$ such that
$$
\| G( Z_{\lambda}(t)) - G( Z_{\mu}(t)) \| \leq L_T \| Z_{\lambda}(t) - Z_{\mu}(t) \|.
$$
Combining the above results, and using the Cauchy--Schwarz inequality, we deduce from
\eqref{basic_ex_Y_bb} that
$$
\frac{1}{2} \frac{d}{dt}\| Z_{\lambda}(t) - Z_{\mu}(t) \|^2
\leq \frac{1}{4} M_T^2 (\lambda +\mu) + L_T \| Z_{\lambda}(t) - Z_{\mu}(t) \|^2 .
$$
We now proceed with the integration of this differential inequality.
According to the fact that $ Z_{\lambda}(0) - Z_{\mu}(0) =0$, elementary calculus gives
$$
\| Z_{\lambda}(t) - Z_{\mu}(t) \|^2 \leq \frac{M_T^2}{4L_T}(\lambda +\mu) \Big( e^{2L_T t} -1 \Big).
$$
Therefore, the filtered sequence $(Z_{\lambda})$ is a Cauchy sequence for the uniform convergence on $[0, T]$, and hence it converges uniformly.
This gives the uniform convergence on $[0, T]$ of $x_{\lambda}$ and $u_{\lambda}$ to limits $x$ and $u$; since $\dot{x}_{\lambda} = u_{\lambda} - \beta \nabla f (x_{\lambda})$, the velocities $\dot{x}_{\lambda}$ also converge uniformly to $\dot{x} = u - \beta \nabla f (x)$.
Proving that $x$ is solution of \eqref{closed_loop_inertial_both_1} is obtained in a similar way as
in Theorem \ref{basic_exist_thm}. Just rely on the property
$\frac{d}{dt}\left( \nabla f (x_{\lambda}(t)) \right) = \nabla^2 f (x_{\lambda}(t))\dot{x}_{\lambda}(t) $ to pass to the limit on the Hessian term.
\end{proof}
\subsection{Convergence properties}
We have the following convergence properties for the solution trajectories of the system \eqref{closed_loop_inertial_both_1}
with closed loop damping involving both the velocity and the gradient.
\begin{theorem}\label{th.existence_uniqueness_both_conv}
Let $f:{\mathcal H} \to {\mathbb R}$ be a convex function which is twice continuously differentiable, and such that $\argmin _{{\mathcal H}} f \neq \emptyset$. We suppose that $\nabla f$ is Lipschitz continuous on the bounded subsets of ${\mathcal H}$. Suppose that $\beta >0$.
Let $\phi: {\mathcal H} \to {\mathbb R}$ be a convex continuous damping function.
Then, for any solution trajectory $x : [0, +\infty[ \to {\mathcal H}$ of
\mbox{\rm (ADIGE-VGH)} we have
\begin{eqnarray*}
&&(i) \mbox{ The energy-like function} \, \, t \mapsto \frac{1}{2} \| \dot{x}(t) + \beta \nabla f (x(t)) \|^2 + f(x(t)) \mbox{ is nonincreasing};\\
&&(ii) \int_0^{+\infty} \phi \Big(\dot{x}(t) + \beta \nabla f (x(t))\Big) dt <+\infty ;\\
&&(iii) \int_0^{+\infty} \|\nabla f (x(t)) \| ^2 dt <+\infty.
\end{eqnarray*}
Suppose moreover that there exists $r>0$ such that for all $u\in {\mathcal H}$
$\phi (u) \geq r\|u\|$.
Then the following properties are satisfied:
\medskip
a) The trajectory $x(\cdot)$ converges weakly as $t\to +\infty$, and its limit belongs to $\argmin _{{\mathcal H}} f$.
\medskip
b) $\dot{x}(t)$ and $\nabla f (x(t))$ converge strongly to zero as $t\to +\infty$.
\end{theorem}
\begin{proof}
Items $(i)$ to $(iii)$ are direct consequences of the estimate \eqref{closed_loop_both_b} established in the Step 1 of the proof of Theorem \ref{th.existence_uniqueness_both}.\\
Let us now make the additional assumption $\phi (u) \geq r\|u\|$.
According to item $(ii)$, we obtain
$$
\int_0^{+\infty} \| \dot{x}(t) + \beta \nabla f (x(t))\| dt
\leq \frac{1}{r} \int_0^{+\infty} \phi \Big(\dot{x}(t) + \beta \nabla f (x(t))\Big) dt <+\infty .
$$
Therefore, $x(\cdot)$ is solution of the non-autonomous steepest descent equation
$$
\dot{x}(t) + \beta \nabla f (x(t)) = k(t)
$$
with $k \in L^1 (0, +\infty; {\mathcal H})$.
We can apply Theorem 3.11 of \cite{Brezis}, which gives the convergence of the trajectory to a point in $\argmin _{{\mathcal H}} f$.
In particular, the trajectory remains bounded. According to item $(i)$, we get that $\dot{x}(t)$ is also bounded.
Returning to the constitutive equation \eqref{closed_loop_inertial_both_1}, we deduce that the acceleration $\ddot{x}(t)$ is also bounded.
This implies that $\xi(t)= \dot{x}(t) + \beta \nabla f (x(t))$ satisfies
$$\int_0^{+\infty} \|\xi (t)\| dt <+\infty \, \mbox{ and } \, \|\dot{\xi}(t)\|\leq M
$$
for some $M >0$.
This classically implies that $\xi(t)= \dot{x}(t) + \beta \nabla f (x(t))$ tends to zero as $t \to +\infty$.
According to item $(iii)$, the same argument applied to $\nabla f (x(t))$ gives that $\nabla f (x(t))$ tends to zero as $t \to +\infty$. Since $\dot{x}(t) = \xi(t) - \beta \nabla f (x(t))$, it also tends to zero as $t \to +\infty$.
\end{proof}
Indeed, Theorem 3.11 of \cite{Brezis} was proved under the additional assumption that $f$ is inf-compact. Recent progress based on the Opial lemma \cite{Op} and the Bruck theorem \cite{Bru} allows us to extend it to a general convex function $f$, without this additional assumption.
This is made precise below.
\begin{proposition} Let $f: {\mathcal H} \to \mathbb R\cup\{+\infty\} $ be a convex lower semicontinuous
proper function such that $\argmin_{{\mathcal H}} f \neq \emptyset$, and let $k \in L^1 (0, +\infty; {\mathcal H})$.
Suppose that $x: [0, +\infty[ \to {\mathcal H}$ is a strong global solution trajectory of
$$
\dot{x}(t) + \partial f (x(t)) \ni k(t).
$$
Then, the trajectory $x(\cdot)$ converges weakly as $t\to +\infty$, and its limit belongs to $\argmin _{{\mathcal H}} f$.
\end{proposition}
\begin{proof}
Take $\epsilon >0$. Since $k \in L^1 (0, +\infty; {\mathcal H})$, there exists $T_{\epsilon} >0$ such that $\int_{T_{\epsilon}}^{+\infty} \|k(t) \| dt < \epsilon$.
Let us consider the solution $v: [0, +\infty[ \to {\mathcal H}$ of
$$
\dot{v}(t) + \partial f (v(t)) \ni 0; \quad v(0) = x(T_{\epsilon}).
$$
According to the semigroup of contractions property, we have, for all
$t\geq T_{\epsilon}$
\begin{equation}\label{sg_property}
\| x(t) - v(t-T_{\epsilon})\| \leq \| x(T_{\epsilon}) - v(0)\| +
\int_{T_{\epsilon}}^{t} \|k(s) \| ds \leq \epsilon.
\end{equation}
Take $\xi \in {\mathcal H}$. By the Cauchy--Schwarz inequality, we have
$$
| \left\langle x(t) - v(t-T_{\epsilon}), \xi \right\rangle | \leq \| x(t) - v(t-T_{\epsilon})\| \, \|\xi\| \leq \epsilon \|\xi\|.
$$
By the triangle inequality, we deduce that, for all $t\geq T_{\epsilon}$, $t'\geq T_{\epsilon}$
$$
| \left\langle x(t), \xi \right\rangle - \left\langle x(t'), \xi \right\rangle| \leq | \left\langle v(t-T_{\epsilon}) -v(t'-T_{\epsilon}), \xi \right\rangle| + 2\epsilon \|\xi\|.
$$
According to the Bruck theorem, we know that the weak limit of $v(t)$
exists. Passing to the limsup on the above inequality we get
$$
\limsup_{t,t' \to +\infty} | \left\langle x(t), \xi \right\rangle - \left\langle x(t'), \xi \right\rangle| \leq \limsup_{t,t' \to +\infty} | \left\langle v(t-T_{\epsilon}) -v(t'-T_{\epsilon}), \xi \right\rangle| + 2\epsilon \|\xi\| \leq 2\epsilon \|\xi\|.
$$
This being true for any $\epsilon >0$, we deduce that the limit of
$\left\langle x(t), \xi \right\rangle$ exists, which implies that the
weak limit of $x(t)$ exists as $t\to +\infty$; denote it by $x_{\infty}$.
Passing to the lower limit in \eqref{sg_property}, and using the lower semicontinuity of the norm for the weak topology, we deduce that
\begin{equation}\label{sg_property_b}
\| x_{\infty} - v_{\infty}\| \leq \epsilon,
\end{equation}
where $v_{\infty}$ denotes the weak limit of $v(t)$ as $t\to +\infty$.
Since the weak limit of $v(t)$ belongs to $\argmin _{{\mathcal H}} f$, we deduce that $\dist (x_{\infty}, \argmin _{{\mathcal H}} f) \leq \epsilon.$
This being true for any $\epsilon >0$, and since $\argmin _{{\mathcal H}} f$ is closed, we finally get that $x_{\infty} \in \argmin _{{\mathcal H}} f$.
\end{proof}
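In finite dimension, this proposition can be observed numerically. The sketch below is our own toy example (not from the text): $f(x_1,x_2)=\frac{1}{2}x_1^2$, whose argmin is the whole $x_2$-axis, with the integrable perturbation $k(t)=(e^{-t},e^{-t})$. Explicit Euler on $\dot{x}(t)+\nabla f(x(t))=k(t)$ converges, and the limit lies in $\argmin f$, although the perturbation shifts it away from the projection of $x_0$.

```python
import numpy as np

def solve(x0, h=1e-3, T=20.0):
    # toy objective f(x1, x2) = 0.5 * x1**2, so grad f(x) = (x1, 0)
    x = np.array(x0, dtype=float)
    t = 0.0
    for _ in range(int(T / h)):
        k = np.exp(-t) * np.ones(2)             # integrable perturbation k(t)
        x = x + h * (k - np.array([x[0], 0.0]))  # x' = k(t) - grad f(x)
        t += h
    return x

x_inf = solve([1.0, 0.5])
# x1 -> 0, so the limit belongs to argmin f, while
# x2 -> 0.5 + int_0^infty e^{-t} dt = 1.5, shifted by the perturbation.
```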
\subsection{An approach based on Opial's lemma} \label{rem-quasi-hessian_both}
Here we will prove the weak convergence of the trajectory
$x$ to a minimizer of $f$, based on the continuous version of the Opial Lemma \cite{Op}.
As in the proof of Theorem \ref{th.existence_uniqueness_both_conv}, items $(i)$ to $(iii)$ hold. Assume $\phi (u) \geq r\|u\|$ for all $u\in {\mathcal H}$.
According to item $(ii)$ we obtain
$$
\int_0^{+\infty} \| \dot{x}(t) + \beta \nabla f (x(t))\| dt <+\infty .
$$
Equivalently, we have
$$
\dot{x}(t) + \beta \nabla f (x(t)) = k(t)
$$
with $k \in L^1 (0, +\infty; {\mathcal H})$.
Let us prove that $x$ is bounded. Relying on step 1 of the proof of Theorem \ref{th.existence_uniqueness_both}, notice that \eqref{dx2_leq_k} holds for
a generic $x_0\in {\mathcal H}$. Taking an arbitrary $z\in\argmin f$, we derive from \eqref{dx2_leq_k}
\begin{equation}\label{dx2_leq_k_for_opial}
\frac{1}{2} \frac{d}{dt} \|x(t) -z \|^2 \leq \|k(t)\|\cdot \|x(t) -z \|.
\end{equation}
Integrating we obtain
\begin{equation}\label{dx2_leq_k_for_opial_integr}
\frac{1}{2}\|x(T) -z \|^2 \leq \frac{1}{2}\|x_0 -z \|^2 + \int_0^T\|k(t)\|\cdot \|x(t) -z \|dt \ \ \forall T\geq 0.
\end{equation}
Now apply \cite[Lemme A.5, p.~157]{Brezis} to conclude
$$\|x(T)-z\|\leq \|x_0 -z \| + \int_0^T\|k(t)\|dt \ \ \forall T\geq 0. $$
Since $k \in L^1 (0, +\infty; {\mathcal H})$ we obtain that $x$ is bounded.
Now we can repeat the arguments in the proof of Theorem \ref{th.existence_uniqueness_both_conv} to conclude that
$\lim_{t\to\infty}\dot x(t)=\lim_{t\to\infty}\nabla f( x(t))=0$, so we omit the proof.
Let us go further and see how the Opial Lemma \cite{Op} can be applied.\\
Since $x$ is bounded and $k \in L^1 (0, +\infty; {\mathcal H})$, we get from \eqref{dx2_leq_k_for_opial} that
$\lim_{t\to\infty}\|x(t)-z\|$ exists, hence the first condition in the Opial Lemma is fulfilled.
Checking the second condition in the Opial Lemma is standard. Take $\overline{x}\in {\mathcal H} $ and $t_n\to +\infty$ such that $x(t_n)$ converges weakly
to $\overline{x}$, as $n\to + \infty$. The convexity of $f$ yields for all
$x\in {\mathcal H}$ and all $n\in{\mathbb N}$
$$f(x)\geq f(x(t_n))+\langle \nabla f(x(t_n)),x-x(t_n)\rangle.$$
Fixing $x$ and taking the limit as $n\to+\infty$, and relying on the strong convergence of $\nabla f(x(t))$ to $0$ and the boundedness of $x$, we derive
$$f(x)\geq \liminf_{n\rightarrow+\infty}f(x(t_n))\geq f(\overline {x}),$$
where the last inequality follows from the weak lower semicontinuity of the convex function $f$. Since the last inequality holds for an arbitrary $x$,
we obtain $\overline{ x}\in\argmin f$. Therefore, the second condition in the Opial Lemma is fulfilled as well.
\subsection{A finite stabilization property}
As we already noticed, (ADIGE-VGH) can be equivalently written as
$$
\dot{u}(t) + \partial \phi (u(t)) \ni - \nabla f (x(t) )
$$
where $ u(t) =\dot{x}(t) + \beta \nabla f (x(t)) $.
After taking the scalar product of the above equation with $u(t)$, we get
$$
\frac{1}{2} \frac{d}{dt} \| u(t) \|^2 + \left\langle \partial \phi (u(t)), u(t) \right\rangle = - \left\langle \nabla f (x(t)), u(t) \right\rangle .
$$
When $\phi (u) \geq r\|u\|$, the subdifferential inequality $\left\langle \partial \phi (u(t)), u(t) \right\rangle \geq \phi (u(t))$ and the Cauchy--Schwarz inequality give
$$
\frac{1}{2} \frac{d}{dt} \| u(t) \|^2 + r \|u(t)\| \leq \| \nabla f (x(t))\| \|u(t) \| .
$$
Since $\nabla f (x(t))$ converges strongly to zero as $t\to +\infty$
(this is the last point of Theorem \ref{th.existence_uniqueness_both_conv}), we get for $t$ large enough
$\| \nabla f (x(t))\| \leq \frac{1}{2} r$, and hence
$$
\frac{1}{2} \frac{d}{dt} \| u(t) \|^2 + \frac{1}{2} r \|u(t)\| \leq 0 .
$$
This gives that $u(t) \equiv 0$ after a finite time.
Let us summarize the above results in the following Proposition.
\begin{proposition} Under the hypothesis of Theorem \ref{th.existence_uniqueness_both_conv}, and when $\phi (u) \geq r\|u\|$ for some $r>0$, we have that after a finite time
$$
\dot{x}(t) + \beta \nabla f (x(t)) \equiv 0,
$$
{\it i.e.}\,\, the trajectory follows the steepest descent dynamic.
\end{proposition}
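This finite stabilization can be observed numerically. The sketch below uses our own discretization choices (not from the text): a semi-implicit scheme where the step in $\partial \phi$, for the dry-friction potential $\phi(u)=r\|u\|$, is realized by the shrinkage (proximal) operator, together with a hypothetical quadratic objective. The shrinkage step can set $u_k$ to exactly zero, which is the discrete counterpart of the finite stabilization; afterwards the scheme follows the discretized steepest descent dynamic.

```python
import numpy as np

def shrink(v, tau):
    # proximal operator of tau*||.||: returns exactly 0 when ||v|| <= tau
    n = np.linalg.norm(v)
    return np.zeros_like(v) if n <= tau else (1.0 - tau / n) * v

def run(x0, x1, beta=1.0, r=0.5, h=1e-3, T=40.0):
    grad_f = lambda x: x          # hypothetical objective f(x) = 0.5*||x||^2
    x = np.array(x0, dtype=float)
    u = np.array(x1, dtype=float) + beta * grad_f(np.array(x0, dtype=float))
    t_hit = None                  # start of the final stretch on which u = 0
    for k in range(int(T / h)):
        x = x + h * (u - beta * grad_f(x))    # x' = u - beta grad f(x)
        u = shrink(u - h * grad_f(x), h * r)  # implicit step in r*||.||
        if np.linalg.norm(u) == 0.0:
            if t_hit is None:
                t_hit = k * h
        else:
            t_hit = None
    return x, u, t_hit

x_T, u_T, t_hit = run(x0=[1.0, -1.0], x1=[0.0, 0.0])
# u = x' + beta*grad f(x) vanishes at the finite time t_hit and stays at zero;
# from then on the iterates follow the discretized steepest descent flow.
```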
\subsection{The case \textit{f} strongly convex: exponential convergence rate}
\begin{theorem}\label{strong_convex_thm-vel-grad} Let $f: {\mathcal H} \to {\mathbb R}$ be a $\gamma$-strongly convex function (for some $\gamma>0$) which is twice continuously differentiable, and whose gradient is Lipschitz continuous on the bounded sets. Let $\overline{x}$ be the unique minimizer of $f$.
Let $\phi : {\mathcal H} \to {\mathbb R}_+$ be a damping potential (see Definition \ref{def1}) which is differentiable, and whose gradient is Lipschitz continuous on the bounded subsets of ${\mathcal H}$. Suppose that $\phi$ satisfies the following growth conditions:
\smallskip
$(i)$ (local) there exist positive constants $\alpha$ and $\epsilon$ such that, for all $u$ in
${\mathcal H}$ with $\|u\| \leq \epsilon$
$$ \left\langle \nabla \phi (u), u \right\rangle \geq \alpha \|u\|^2 .$$
$(ii)$ (global) there exists $p\geq 1$, $r>0$, such that for all $u$ in ${\mathcal H}$, $\phi (u) \geq r\|u\|^p$.
\medskip
\noindent Suppose $\beta > 0$. Let $x: [0, +\infty[ \to {\mathcal H}$ be a solution trajectory of \mbox{\rm (ADIGE-VGH)}
\begin{equation}\label{closed_loop_inertial_both_1-quasi-case-str-conv}
\ddot{x}(t) + \nabla \phi \Big(\dot{x}(t) + \beta \nabla f (x(t))\Big) + \beta \nabla^2 f (x(t)) \dot{x} (t) + \nabla f (x(t)) = 0.
\end{equation}
Then, we have exponential convergence rate to zero as $t \to +\infty$ for $f(x(t))-f(\overline{x}) $, $\| x(t)-\overline{x}\|$ and $\|\dot x (t)+\beta\nabla f(x(t))\|$.
As a consequence, we also have exponential convergence rate to zero as $t \to +\infty$ for $\|\dot x(t)\|$ and $\|\nabla f(x(t))\|$.
\end{theorem}
\begin{proof} Since $f$ is strongly convex, $f$ is a coercive function. According to the decrease property of the global energy, see \eqref{closed_loop_both_energy-decrease} and Theorem \ref{th.existence_uniqueness_both_conv} $(i)$, we have that $f(x(t))$ is bounded from above, and hence the trajectory $x$ is bounded.
Item $(ii)$ of Theorem \ref{th.existence_uniqueness_both_conv}, and the global growth assumption on $\phi$ give that, for some $p\geq 1$
$$
\int_0^{+\infty} \| \dot{x}(t) + \beta \nabla f (x(t))\|^p dt <+\infty.
$$
By a similar argument as in the proof of Theorem \ref{th.existence_uniqueness_both_conv} (where we argued with $p=1$)
we deduce that $\lim_{t\to +\infty} \| \dot{x}(t) + \beta \nabla f (x(t))\| =0$.
Therefore, for $t$ sufficiently large
$$\|\dot x (t)+\beta\nabla f(x(t))\|\leq \epsilon.$$
From \eqref{closed_loop_both_energy_1} and the local property (i) we derive
\begin{equation}\label{energy-multiplied-local-pr}
\frac{d}{dt} \left(\frac{1}{2}\| \dot{x}(t) + \beta \nabla f (x(t))\|^2 + f(x(t)) - f(\overline{x})\right) + \alpha\|\dot x(t)+\beta\nabla f(x(t))\|^2 + \beta\| \nabla f (x(t)) \|^2\leq 0.
\end{equation}
Since $\dot{x}(\cdot)+\beta\nabla f(x(\cdot))$ is bounded,
let $L>0$ be the Lipschitz constant of $\nabla\phi$ on a ball that contains the vector $\dot x(t)+\beta\nabla f(x(t))$ for all $t\geq 0$. Since $\nabla \phi (0) =0$ we have, for all $t\geq 0$
\begin{equation}\label{local_Lip-case-vel-grad}
\| \nabla \phi(\dot x(t)+\beta\nabla f(x(t)))\| \leq L \| \dot x(t)+\beta\nabla f(x(t))\|.
\end{equation}
Using successively \eqref{closed_loop_inertial_both_1-quasi-case-str-conv}, \eqref{local_Lip-case-vel-grad}
and \eqref{from-str-conv1}, we obtain
\begin{eqnarray}\frac{d}{dt}\langle x(t)-\overline{x},\dot x(t)+\beta\nabla f(x(t))\rangle &=&
\|\dot x(t)\|^2 + \beta \frac{d}{dt}(f(x(t)) - f(\overline{x})) \nonumber \\
&& + \langle x(t)-\overline{x},-\nabla \phi(\dot x(t)+\beta\nabla f(x(t)))-\nabla f(x(t))\rangle \nonumber\\
&\leq & \|\dot x(t)\|^2 + \beta \frac{d}{dt}(f(x(t)) - f(\overline{x})) + \frac{L^2}{2\gamma}\|\dot x(t)+\beta\nabla f(x(t))\|^2\nonumber\\ && +\frac{\gamma}{2} \|x(t)-\overline{x}\|^2 + \langle \overline{x} -x(t),\nabla f(x(t))\rangle \nonumber\\
&\leq & \|\dot x(t)\|^2 + \beta \frac{d}{dt}(f(x(t)) - f(\overline{x})) + \frac{L^2}{2\gamma}\|\dot x(t)+\beta\nabla f(x(t))\|^2\nonumber\\ && + f(\overline{x}) - f(x(t)). \label{strong_conv_2-case-vel-grad}
\end{eqnarray}
Take now $\varepsilon >0$ (we will specify below how it should be chosen), and define
$$h_{\varepsilon,\beta}(t) := \frac{1}{2}\| \dot{x}(t) + \beta \nabla f (x(t))\|^2 + (1-\beta\varepsilon) \big(f(x(t)) - f(\overline{x})\big) + \varepsilon \langle x(t)-\overline{x},\dot x(t)+\beta\nabla f(x(t))\rangle.$$
Multiplying \eqref{strong_conv_2-case-vel-grad} with $\varepsilon$ and adding the result to \eqref{energy-multiplied-local-pr}, we derive
\begin{eqnarray*}
\dot h_{\varepsilon,\beta}(t) \leq &&- \alpha\|\dot x(t)+\beta\nabla f(x(t))\|^2 - \beta\| \nabla f (x(t)) \|^2 +\varepsilon\|\dot x(t)\|^2 + \frac{\varepsilon L^2}{2\gamma}\|\dot x(t)+\beta\nabla f(x(t))\|^2 \\
&& - \varepsilon (f(x(t))-f(\overline{x})).
\end{eqnarray*}
We use the inequality
\begin{equation}\label{usef-ineq}\varepsilon\|\dot x(t)\|^2 \leq 2\varepsilon \|\dot x(t)+\beta\nabla f(x(t))\|^2
+2\varepsilon\beta^2 \|\nabla f(x(t))\|^2\end{equation}
and we obtain
\begin{equation}\label{d-h-e-b} \dot h_{\varepsilon,\beta}(t) \leq - \left(\alpha - 2\varepsilon - \frac{\varepsilon L^2}{2\gamma}\right)\|\dot x(t)+\beta\nabla f(x(t))\|^2 - (\beta - 2\varepsilon\beta^2)\| \nabla f (x(t)) \|^2 - \varepsilon (f(x(t))-f(\overline{x})).
\end{equation}
Choose $\varepsilon > 0$ small enough such that $C_1: =\min\left\{\left(\alpha - 2\varepsilon - \frac{\varepsilon L^2}{2\gamma}\right), \beta - 2\varepsilon\beta^2, \varepsilon\right\} > 0$. We obtain
\begin{equation}\label{d-h-e-b-ok} \dot h_{\varepsilon,\beta}(t) \leq - C_1\Big(\|\dot x(t)+\beta\nabla f(x(t))\|^2 + \| \nabla f (x(t)) \|^2 + f(x(t))-f(\overline{x})\Big).
\end{equation}
Further, we have
\begin{eqnarray} h_{\varepsilon,\beta}(t) &=& \frac{1}{2}\| \dot{x}(t) + \beta \nabla f (x(t))\|^2 +
\varepsilon\beta\big(\langle x(t)-\overline{x},\nabla f(x(t))\rangle + f(\overline{x}) - f(x(t))\big)\label{h-e-b-leq}\\
&& + f(x(t)) - f(\overline{x}) + \varepsilon \langle x(t)-\overline{x},\dot x(t)\rangle. \nonumber
\end{eqnarray}
Since $f$ is strongly convex, we have (see for example Theorem 2.1.10 in \cite{Nest2})
\begin{equation}\label{nes-str-cv}\langle x(t)-\overline{x},\nabla f(x(t))\rangle + f(\overline{x}) - f(x(t)) \leq \frac{1}{2\gamma}\|\nabla f(x(t))\|^2.
\end{equation}
Moreover, from \eqref{from-str-conv2} and \eqref{usef-ineq} we get
\begin{eqnarray*} && f(x(t)) - f(\overline{x}) + \varepsilon \langle x(t)-\overline{x},\dot x(t)\rangle \leq
f(x(t)) - f(\overline{x}) +
\frac{\varepsilon}{2}\|x(t)-\overline{x}\|^2+\frac{\varepsilon}{2}\|\dot x(t)\|^2\\
&&\leq \left(1+\frac{\varepsilon}{\gamma}\right) \big(f(x(t)) - f(\overline{x})\big)
+ \varepsilon \|\dot x(t)+\beta\nabla f(x(t))\|^2
+\varepsilon\beta^2 \|\nabla f(x(t))\|^2.
\end{eqnarray*}
From this, \eqref{nes-str-cv} and \eqref{h-e-b-leq} we get
\begin{eqnarray*} h_{\varepsilon,\beta}(t) &\leq& \left(\frac{1}{2}+\varepsilon\right)\|\dot x(t)+\beta\nabla f(x(t))\|^2 +
\left(\frac{\varepsilon\beta}{2\gamma} + \varepsilon\beta^2\right)\|\nabla f(x(t))\|^2 +
\left(1+ \frac{\varepsilon}{\gamma}\right)(f(x(t))-f(\overline{x}))\\
&\leq & C_2\Big(\|\dot x(t)+\beta\nabla f(x(t))\|^2 + \| \nabla f (x(t)) \|^2 + f(x(t))-f(\overline{x})\Big),
\end{eqnarray*}
where $C_2:= \max\left\{ \frac{1}{2}+\varepsilon, \frac{\varepsilon\beta}{2\gamma} + \varepsilon\beta^2, 1+ \frac{\varepsilon}{\gamma}\right\} > 0$.
Combining this inequality with \eqref{d-h-e-b-ok}, we obtain
$$\dot {h}_{\varepsilon,\beta}(t) + C_3 h_{\varepsilon,\beta}(t)\leq 0,$$
with $C_3:= \frac{C_1}{C_2} >0$. Then, the Gronwall inequality classically implies
\begin{equation}\label{rate_h-str-conv-vel-grad}h_{\varepsilon,\beta}(t) \leq h_{\varepsilon,\beta}(0)e^{-C_3t}.\end{equation}
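For completeness, the Gronwall step is immediate here: multiplying $\dot h_{\varepsilon,\beta}(t)+C_3 h_{\varepsilon,\beta}(t)\leq 0$ by $e^{C_3 t}$ gives
$$\frac{d}{dt}\Big(e^{C_3 t} h_{\varepsilon,\beta}(t)\Big)=e^{C_3 t}\Big(\dot h_{\varepsilon,\beta}(t)+C_3 h_{\varepsilon,\beta}(t)\Big)\leq 0,$$
so $t\mapsto e^{C_3 t} h_{\varepsilon,\beta}(t)$ is nonincreasing, which is \eqref{rate_h-str-conv-vel-grad}.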
Finally, from \eqref{from-str-conv2} and the Cauchy--Schwarz inequality we have
\begin{eqnarray*}h_{\varepsilon,\beta}(t) &\geq & \frac{1}{2}\|\dot x(t)+\beta\nabla f(x(t))\|^2 + (1-\beta\varepsilon)\big(f(x(t)) - f(\overline{x})\big)\\
&&-\frac{\varepsilon}{2}\|x(t)-\overline{x}\|^2 - \frac{\varepsilon}{2}\|\dot x(t)+\beta\nabla f(x(t))\|^2\\
&\geq & \frac{1-\varepsilon}{2}\|\dot x(t)+\beta\nabla f(x(t))\|^2 + \left(1-\beta\varepsilon-\frac{\varepsilon}{\gamma}\right)(f(x(t))-f(\overline{x})).\end{eqnarray*}
Therefore, by taking $\varepsilon$ small enough, we obtain
\begin{equation}\label{rate-1part-vel-grad}h_{\varepsilon,\beta}(t)\geq C_4 \Big(\|\dot x(t)+\beta\nabla f(x(t))\|^2 + f(x(t))-f(\overline{x})\Big),\end{equation}
with $C_4:= \min\left\{\frac{1-\varepsilon}{2},1-\beta\varepsilon-\frac{\varepsilon}{\gamma}\right\} > 0$.
Combining this inequality with \eqref{rate_h-str-conv-vel-grad} and \eqref{from-str-conv2}, we obtain an exponential
convergence rate to zero for $f(x(t))-f(\overline{x}) $, $\|x(t)-\overline{x}\|$ and
$\|\dot x (t)+\beta\nabla f(x(t))\|$.\\
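Explicitly, combining \eqref{rate_h-str-conv-vel-grad} with \eqref{rate-1part-vel-grad} and \eqref{from-str-conv2} yields, for all $t\geq 0$,
$$f(x(t))-f(\overline{x})\leq \frac{h_{\varepsilon,\beta}(0)}{C_4}e^{-C_3 t},\quad
\|x(t)-\overline{x}\|^2\leq \frac{2h_{\varepsilon,\beta}(0)}{\gamma C_4}e^{-C_3 t},\quad
\|\dot x(t)+\beta\nabla f(x(t))\|^2\leq \frac{h_{\varepsilon,\beta}(0)}{C_4}e^{-C_3 t}.$$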
Since $\nabla f$ is Lipschitz continuous on bounded sets, and $x(t)$ converges to $\overline{x}$, there exists $L_f >0$ such that for all $t\geq 0$
$$
\|\nabla f (x(t)) \|=\|\nabla f (x(t)) - \nabla f (\overline{x})\| \leq L_f \| x(t) - \overline{x}\|.
$$
Based on the exponential convergence rate of $\|x(t)-\overline{x}\|$ to zero, we deduce that the same property holds for
$\|\nabla f (x(t)) \|$. By combining this last property with the exponential convergence rate of $\|\dot x (t)+\beta\nabla f(x(t))\|$ to zero, we finally get that $\|\dot x (t)\|$ converges exponentially to zero when $t \to +\infty$.
\end{proof}
\begin{remark} Similar rates have been reported in \cite[Theorem 4.2]{ACFR} for the heavy ball method
with Hessian driven damping.
\end{remark}
\begin{remark} It is possible to derive similar exponential rates also for the system \eqref{Hessian_def_1},
albeit only for a restrictive choice of $\beta > 0$. To see this, notice that for $\theta > 0$ we have
\begin{eqnarray*}
&&\frac{d}{dt} \left(\frac{1}{2}\| \dot{x}(t) + \beta \nabla f (x(t))\|^2 + f(x(t)) - f(\overline{x})\right)\\
&& = -\langle \dot x(t), \nabla\phi(\dot x(t))\rangle - \beta\langle \nabla \phi(\dot x(t)), \nabla f(x(t))\rangle - \beta\|\nabla f(x(t))\|^2\\
&&\leq -\alpha \|\dot x(t)\|^2 + \frac{\beta\theta}{2}\|\nabla f(x(t))\|^2 + \frac{\beta L^2}{2\theta}\|\dot x(t)\|^2
- \beta\|\nabla f(x(t))\|^2\\
&&= -\left(\alpha - \frac{\beta L^2}{2\theta}\right)\|\dot x(t)\|^2 - \beta\left(1 - \frac{\theta}{2}\right)\|\nabla f(x(t))\|^2.
\end{eqnarray*}
Hence, by choosing $\theta \in (0,2)$ and then $0 < \beta < \frac{2\alpha\theta}{L^2}$, both coefficients on the right-hand side are positive, and the analysis can be carried out as above.
\end{remark}
\subsection{Further convergence results based on the quasi-gradient approach}\label{rem-quasi-hessian_both-quasi}
Let us consider the dynamical system (ADIGE-VGH) in the case where $\phi$ is differentiable, $f: {\mathbb R}^N \to {\mathbb R}$ is a $\mathcal C^2$ function (possibly nonconvex) whose gradient is Lipschitz continuous on bounded sets, and such that $\inf_{{\mathbb R}^N} f >-\infty$:
\begin{equation}\label{closed_loop_inertial_both_1-quasi}
\ddot{x}(t) + \nabla \phi \Big(\dot{x}(t) + \beta \nabla f (x(t))\Big) + \beta \nabla^2 f (x(t)) \dot{x} (t) + \nabla f (x(t)) = 0.
\end{equation}
The considerations are similar to those of Section 7.3 and Theorem \ref{quasi_grad_thm_2}.\\
According to Step 2 in the proof of Theorem \ref{th.existence_uniqueness_both}, the first-order reformulation is
\begin{equation}\label{first_order_cl_loop_quasi_1_both-hessian}
\dot z(t) + F(z(t)) =0,
\end{equation}
where $z(t)=(x(t), u(t) ) \in {\mathbb R}^N \times {\mathbb R}^N $, and
$F: {\mathbb R}^N \times {\mathbb R}^N \to {\mathbb R}^N \times {\mathbb R}^N$
is defined by
$$F(x,u)=(\beta\nabla f(x)-u, \nabla \phi(u)+ \nabla f(x)).$$
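Indeed, taking $u(t):=\dot x(t)+\beta\nabla f(x(t))$, the first component of \eqref{first_order_cl_loop_quasi_1_both-hessian} reads $\dot x(t)=u(t)-\beta\nabla f(x(t))$, while differentiating $u$ along \eqref{closed_loop_inertial_both_1-quasi} gives
$$\dot u(t)=\ddot x(t)+\beta\nabla^2 f(x(t))\dot x(t)=-\nabla\phi\big(\dot x(t)+\beta\nabla f(x(t))\big)-\nabla f(x(t))=-\nabla\phi(u(t))-\nabla f(x(t)),$$
which is the second component of \eqref{first_order_cl_loop_quasi_1_both-hessian}.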
Let us check the angle condition ($E_{\lambda}$ is defined as in Theorem \ref{quasi_grad_thm_2}). We have
\begin{eqnarray*}
\left\langle \nabla E_{\lambda}(x,u), F(x,u) \right\rangle
&=& \left\langle \Big( \nabla f (x)+ \lambda \nabla^2 f (x)u, \, u + \lambda \nabla f (x) \Big), \Big(\beta\nabla f(x)-u, \nabla \phi(u)+ \nabla f(x) \Big) \right\rangle.
\end{eqnarray*}
After expanding and simplifying, we get
\begin{eqnarray*}
\left\langle \nabla E_{\lambda}(x,u), F(x,u) \right\rangle
&=& - \lambda \left\langle \nabla^2 f (x)u, \, u \right\rangle + \left\langle u, \, \nabla \phi(u) \right\rangle
+ \lambda \left\langle \nabla f (x) , \, \nabla \phi(u) \right\rangle
+ \lambda \| \nabla f (x) \|^2\\
&& + \beta\|\nabla f(x)\|^2+\lambda\beta\left\langle \nabla f(x), \nabla^2 f (x)u\right\rangle.
\end{eqnarray*}
We estimate the term $\lambda\beta\left\langle \nabla f(x), \nabla^2 f (x)u\right\rangle$ by writing
$$\lambda\beta\left\langle \nabla f(x), \nabla^2 f (x)u\right\rangle\geq -\frac{\lambda}{4}\|\nabla f(x)\|^2
-\lambda\beta^2 M^2\|u\|^2$$
and get (as in the proof of Theorem \ref{quasi_grad_thm_2})
\begin{eqnarray}
\left\langle \nabla E_{\lambda}(x,u), F(x,u) \right\rangle
&\geq & \Big( \gamma -\lambda M - \frac{\lambda}{2} \delta^2 -\lambda \beta^2M^2 \Big) \|u\|^2
+ \left(\frac{\lambda}{4}+\beta\right) \| \nabla f (x) \|^2 . \label{quasi_gradient_2_both_hessian}
\end{eqnarray}
We also have
\begin{equation*}
\| F(x,u)\| \leq C_2 ( \|u\|^2 + \| \nabla f (x) \|^2)^{\frac{1}{2}},
\end{equation*}
where $C_2=\sqrt{2(2+\beta^2+\delta^2)}$.
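Indeed, using $\|p+r\|^2\leq 2\|p\|^2+2\|r\|^2$ together with the bound $\|\nabla\phi(u)\|\leq \delta\|u\|$ (as in the proof of Theorem \ref{quasi_grad_thm_2}), we have
$$\|F(x,u)\|^2=\|\beta\nabla f(x)-u\|^2+\|\nabla\phi(u)+\nabla f(x)\|^2\leq (2+2\delta^2)\|u\|^2+(2+2\beta^2)\|\nabla f(x)\|^2\leq C_2^2\big(\|u\|^2+\|\nabla f(x)\|^2\big).$$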
The rest can be done along the lines of the proof of Theorem \ref{quasi_grad_thm_2}.
\section{Conclusion, perspectives}
In this article, from the point of view of optimization, we put forward some classical and new properties concerning the asymptotic convergence of autonomous damped inertial dynamics.
From a control point of view, the damping terms of these dynamics can be considered as closed-loop controls of the current data: position, speed, gradient of the objective function, Hessian of the objective function, and combination of these objects.
Let us cite some of the main results and advantages of the autonomous approach compared to the non-autonomous approach, where damping involves parameters given from the start as functions of time.
\subsection{PRO}
\begin{itemize}
\item Autonomous systems are easy to implement. It is not necessary to adjust the damping coefficient as is the case for non-autonomous systems.
\smallskip
\item When the function to be minimized is strongly convex, there is convergence at an exponential rate, and this is valid for a large class of damping potentials.
\smallskip
\item We were able to exploit the quasi-gradient structure of the autonomous damped dynamics and combine them with the Kurdyka-Lojasiewicz theory to obtain convergence rates for a large class of functions $f$, possibly non-convex.
This is specific to the autonomous case because the theories mentioned above are not developed in the non-autonomous case.
\smallskip
\item The Hessian damping naturally comes within the framework of autonomous systems.
It notably improves the theoretical and numerical behavior of the trajectories, by reducing the oscillatory aspects.
Its introduction into the algorithms does not change their numerical complexity (it only introduces the difference of the gradients at two consecutive steps). This makes this geometric damping successful; several recent articles have been devoted to it.
\smallskip
\item The closed-loop approach clearly distinguishes between the strong and weak damping effects, and the transition between them. It also shows how, when the damping becomes too weak, convergence theory is replaced by the notion of an attractor.
\smallskip
\item We have introduced a new autonomous system where the damping involves both the speed and the gradient of $f$, and which benefits from very good convergence properties. Initially it takes advantage of the inertial effect; after a finite time it turns into a steepest descent dynamic, thus avoiding the oscillatory aspects. This regime change has some similarities with the restart method, and also with the recent work of Poon-Liang \cite{PL} on adaptive acceleration.
\smallskip
\item The closed-loop approach makes it possible to make the link with different fields, such as PDE and control theory, where the stabilization of oscillating systems is a central issue.
Although the simple mathematical framework chosen in this article (single functional space ${\mathcal H}$, differentiable objective function $f$) does not make it possible to deal directly with the associated PDEs, the Lyapunov analysis that we have developed can naturally be extended to this framework.
\smallskip
\item We have developed an inertial algorithm which shares the good convergence properties of the related continuous dynamics, in the case of the quasi-gradient and Kurdyka-Lojasiewicz approach.
Note that the quasi-gradient approach reflects relative errors in the algorithms, and therefore gives a lot of flexibility.
It is this approach that has made it possible to deal with many different algorithms in Attouch-Bolte-Svaiter \cite{ABS} in the nonconvex nonsmooth case. It would be interesting to develop these aspects for splitting algorithms, such as proximal gradient algorithms, regularized Gauss--Seidel algorithms, and PALM (see also \cite{BotCseLaJDE} for a continuous-time approach to structured optimization problems).
\end{itemize}
\subsection{CONS}
\smallskip
\begin{enumerate}
\item To date, we do not know, in the autonomous case, an equivalent of the Nesterov accelerated gradient method and of the Su--Boyd--Cand\`es damped inertial dynamic, that is to say, an adjustment of the damping potential which guarantees the convergence rate of values $1/t^2$ for any convex function.
This is a current research subject, for recent progress in this direction, see Lin-Jordan \cite{LJ}.
\smallskip
\item The general approach based on the quasi-gradient and the Kurdyka-Lojasiewicz theory (as developed in Section \ref{sec: basic_3}) works mainly in finite dimension.
The extension of the (KL) theory to infinite-dimensional spaces is a current research subject.
\end{enumerate}
\medskip
\subsection{Perspectives}
$\mbox{ }$
\smallskip
\begin{enumerate}
\item Develop closed-loop versions of the Nesterov accelerated gradient method from a theoretical and numerical point of view.
Our analysis allowed us to better delineate the type of damping potential $\phi$ capable of achieving this, but this remains an open question for study. Indeed, the case $p = 2$ ({\it i.e.}\,\, quadratic behavior of the damping potential near the origin) is the critical case separating the weak damping from the strong damping. Taking $p>2$, with $p$ close to $2$, provides a vanishing viscosity damping coefficient, which is a specific property of the Nesterov accelerated gradient method. Our intuition is that we need to refine the power scale, which is not precise enough to provide the correct setting of the vanishing damping term ({\it i.e.}\,\, going from $p = 2$ to $p> 2$, even with $p$ very close to $2$, is too sudden a change).
\smallskip
\item Extend our study to the case of nonsmooth optimization possibly involving a constraint. This is an important subject, which is closely related to item 6 of this list, because a common device to deal with a constrained optimization problem is to use a gradient-projection method, which falls under fixed point methods.
\smallskip
\item Develop a control perspective with closed-loop damping for the restarting methods.
Restarting methods take advantage of the inertial effect to accelerate the trajectories, stop when a given criterion deteriorates, then restart from the current point with zero velocity, and so on.
In many ways, the dynamic we developed in Section \ref{Sec: combine} follows a similar strategy. Our results are valid with general data functions $f$ and $\phi$, while the known results concerning the restart methods only concern the case where $f$ is strongly convex. It is an important subject of study that remains largely unexplored.
\smallskip
\item Obtain a closed-loop version of the Tikhonov regularization method, and make the link with the Haugazeau method.
The objective is then, within the framework of convex optimization, to obtain an autonomous dynamic whose trajectories converge strongly towards the minimum norm solution; see
Attouch--Cabot--Chbani--Riahi \cite{AC2R-JOTA}, and
Bo\c t--Csetnek--L\'{a}szl\'{o} \cite{BCL} for some recent results in the open-loop case (the Tikhonov regularization parameter tends to zero in a controlled manner, not too fast) and references therein.
\smallskip
\item Develop the corresponding algorithmic results.
Continuous dynamics provide a valuable guide to introduce and analyze algorithms that benefit from similar convergence properties.
In Theorem \ref{quasi_grad_thm_algo} we have analyzed the convergence properties of an inertial algorithm with general damping potential $\phi$ and general (tame) function $f$. A similar analysis can certainly be developed on the basis of Theorems \ref{quasi_thm_VH} and \ref{th.existence_uniqueness_both_conv}, which also involve Hessian-driven damping.
Natural extensions then consist in studying structured optimization problems and the corresponding proximal-based algorithms.
\smallskip
\item In recent years, most of the previous themes have been extended (in the open-loop case) to the case of maximally monotone operators, see \'Alvarez--Attouch \cite{AA1}, Attouch--Maing\'e \cite{AM}, Attouch--Peypouquet \cite{AP-max}, Attouch--Cabot \cite{AC1}, Attouch--L\'{a}szl\'{o} \cite{AL}, Bo\c t--Csetnek \cite{BotCse}. It would be interesting to consider the closed-loop version of these dynamics, as was done by Attouch-Redont-Svaiter \cite{ARS} for first-order Newton-like evolution systems.
\smallskip
\item Time rescaling is a powerful tool to accelerate the inertial systems; see Attouch--Chbani--Riahi \cite{ACR-Pafa},
Shi--Du--Jordan--Su \cite{SDJS} and references therein. It leads naturally to non-autonomous dynamics. It would be interesting to study autonomous closed-loop versions. This first requires extracting quantities which tend monotonically to $+\infty$.
\smallskip
\item Study of the stability of the dynamics and algorithms with respect to perturbations/errors. This is an important topic from a numerical point of view, see \cite{AC2R-JOTA}, \cite{ACPR}, \cite{ACR-Pafa}, \cite{VSBV}.
\smallskip
\item The concepts of control theory and dissipative dynamical systems have proven to be useful and intuitive design guidelines for speeding up stochastic gradient methods, especially for the variance-reduction methods for the finite-sum
problem, see \cite{HWL} and accompanying bibliography.
It is likely that our approach fits these questions well.
\end{enumerate}
\section{Introduction}\label{sec:intro}
For any $r\geq 2$, an $r$-uniform hypergraph $\mathcal{H}$, and integer $n$, the \emph{Tur\'{a}n number} for $\mathcal{H}$, denoted $\operatorname{ex}(\mathcal{H}, n)$, is the maximum number of hyperedges in any $r$-uniform hypergraph on $n$ vertices containing no copy of $\mathcal{H}$. The \emph{Tur\'{a}n density} of $\mathcal{H}$ is defined to be $\pi(\mathcal{H}) = \lim_{n \to \infty} \operatorname{ex}(\mathcal{H}, n) \binom{n}{r}^{-1}$ and a well-known averaging argument shows that this limit always exists. While the Tur\'{a}n densities of graphs are well-understood and exact Tur\'{a}n numbers are known for some classes of graphs, few exact results for either Tur\'{a}n numbers or densities are known for the cases $r \geq 3$. The interested reader is directed to the excellent survey by Keevash \cite{pK11} for more details on known Tur\'{a}n results for hypergraphs.
One particular extremal problem, which has a close connection to the Tur\'{a}n density of a particular $3$-uniform hypergraph was considered by Frankl and F\"{u}redi \cite{FF84}. Recall that a subset of vertices, $X$, in a hypergraph is said to span an edge $A$ if $A \subseteq X$. In their paper, Frankl and F\"{u}redi considered those $3$-uniform hypergraphs that have the property that any set of $4$ vertices span either $0$ or exactly $2$ hyperedges. They proved the exact and strongly structural result that any such hypergraph is in one of two classes: either it is the blow-up of a fixed $3$-graph on $6$ vertices with $10$ edges, or else it is isomorphic to a hypergraph obtained by taking vertices as points around a unit circle and then letting the hyperedges be those triples whose convex hull contains the origin (assuming that no two points lie on a line containing the origin).
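This circle construction is easy to verify numerically. The sketch below is our own illustration (all function names are ours, not from \cite{FF84}): it places random points on the unit circle and checks that every $4$ points span $0$ or $2$ of the triples whose convex hull contains the origin.

```python
import itertools
import math
import random

def contains_origin(p, q, r):
    """True iff the origin lies strictly inside triangle pqr; assuming
    no two points are collinear with the origin, this holds iff the
    cross products p x q, q x r, r x p all have the same sign."""
    def cross(u, v):
        return u[0] * v[1] - u[1] * v[0]
    signs = [cross(p, q), cross(q, r), cross(r, p)]
    return all(s > 0 for s in signs) or all(s < 0 for s in signs)

def circle_hypergraph(angles):
    """Frankl--Furedi circle construction: the 3-graph of triples whose
    convex hull contains the origin (vertices indexed into `angles`)."""
    pts = [(math.cos(t), math.sin(t)) for t in angles]
    return [trip for trip in itertools.combinations(range(len(pts)), 3)
            if contains_origin(*(pts[i] for i in trip))]
```

A random choice of angles is generic with probability one, so the degenerate configurations excluded above do not arise in practice.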
Let $K_4^-$ be the $3$-uniform hypergraph on $4$ vertices with $3$ edges. Using a recursive construction based on a $3$-uniform hypergraph on $6$ vertices, Frankl and F\"{u}redi show that $\pi(K_4^-) \geq \frac{2}{7}$. Using flag algebra techniques, Baber and Talbot \cite{BT11} showed that the Tur\'{a}n density for $K_4^-$ is at most $0.2871$. It is conjectured in this case that the exact value is $2/7$ (\emph{e.g.} \cite{dM03}). Many further results on extremal numbers for small cliques in hypergraphs can be found, for example, in \cite{BT12, EFR86, F-RV12, FF87, LZ09, MM62, aR10, aS97, jT07}.
In the same paper, Frankl and F\"{u}redi ask about the following more general question. For any $r \geq 4$, what is the maximum number of hyperedges in an $r$-uniform hypergraph with the property that any set of $r+1$ vertices span 0 or 2 edges? They point out that the construction given by points on a circle can be generalized to points on a sphere in $r-1$ dimensions with hyperedges being simplices containing the origin. A random choice of points shows that there exist such $r$-uniform hypergraphs with at least $\binom{n}{r}2^{-r+1}(1+o(1))$ hyperedges as $n$ tends to infinity.
In this paper, we consider their question in the case $r=4$ and give a construction for an infinite family of $4$-uniform hypergraphs with the property that every set of $5$ vertices spans either $0$ or $2$ hyperedges with the maximum number of hyperedges among all such hypergraphs on the same number of vertices. One of the main results of this paper is the following, whose proof appears in Section \ref{sec:design}.
\begin{theorem}\label{thm:main-const}
For each prime power $q \equiv 3 \pmod 4$ there exists a $3-(q+1,4,\frac{q+1}{4})$ design denoted $\mathcal{H}_q$ on $q+1$ vertices with the following properties:
\begin{itemize}
\item[(a)] any set of $5$ vertices spans 0 or 2 edges;
\item[(b)] $\mathcal{H}_q$ has $\frac{q+1}{16}\binom{q+1}{3}$ hyperedges; and
\item[(c)] $\mathcal{H}_q$ is $3$-transitive with a subgroup of its automorphism group isomorphic to $PGL(2,q)$.
\end{itemize}
\end{theorem}
A straightforward modification of the proof of an upper bound on Tur\'{a}n numbers due to de Caen \cite{dC83} shows that any $4$-uniform hypergraph on $n$ vertices in which every $5$ vertices span either $0$ or $2$ hyperedges has at most $\frac{n}{16}\binom{n}{3}$ hyperedges (see Section \ref{sec:ub}). Furthermore, any such hypergraph attaining the maximum number of hyperedges is a $3$-design. Designs arising from both the groups $PGL(2, q)$ and $PSL(2, q)$ have been examined in a number of previous works, including papers by Cusack, Graham, and Kreher \cite{CGK95}, by Cameron, Omidi, and Tayfeh-Rezaie \cite{COT-R06}, and by Cameron, Maimani, Omidi, and Tayfeh-Rezaie \cite{CMOT-R06}. To the best of our knowledge, the hypergraph in Theorem \ref{thm:main-const} has not been previously studied.
The proofs of the upper bounds on the number of hyperedges in the class of hypergraphs of interest are given, for completeness, in Section \ref{sec:ub}. This shows that the hypergraphs constructed in Theorem \ref{thm:main-const} are maximum. Furthermore, the same upper bound on the number of hyperedges applies for hypergraphs in which every set of $5$ vertices contains \emph{at most} $2$ hyperedges. Thus, the upper bound, together with Theorem \ref{thm:main-const} imply the following result on Tur\'{a}n numbers for the $4$-uniform hypergraph on $5$ vertices with $3$ hyperedges, which is unique up to isomorphism.
\begin{corollary}\label{cor:ex-num}
For any prime power $q \equiv 3\pmod{4}$,
\[
\operatorname{ex}(q+1, \{1234, 1235, 1245\}) = \frac{(q+1)}{16} \binom{q+1}{3}.
\]
\end{corollary}
In general, these extremal hypergraphs need not be unique. Hughes \cite{dH65} considered certain designs arising from groups and showed that there is a $3-(12, 4, 3)$ design with $165$ blocks associated with the Mathieu group $M_{11}$, which has $M_{11}$ as a group of automorphisms. In Section \ref{sec:M11}, this design is examined and the following properties are shown.
\begin{theorem}\label{thm:M11}
There exists a $3-(12,4,3)$ design $\mathcal{M}$ on $12$ vertices with the following properties:
\begin{itemize}
\item[(a)] any set of $5$ vertices spans 0 or 2 edges;
\item[(b)] $\mathcal{M}$ has $165$ hyperedges;
\item[(c)] $\mathcal{M}$ is 3-transitive with a subgroup of its automorphisms isomorphic to $M_{11}$;
\end{itemize}
\end{theorem}
The hyperedges of the hypergraph $\mathcal{M}$ are listed in the Appendix for the interested reader. One can show that the hypergraph $\mathcal{M}$ is not isomorphic to the hypergraph $\mathcal{H}_{11}$. In particular, using SAGE \cite{wS15}, it was verified by direct examination that $\mathcal{H}_{11}$ has the property that every set of $6$ vertices contains at least one hyperedge and that $\mathcal{M}$ contains $22$ independent sets of size $6$.
The extremal numbers given in Corollary \ref{cor:ex-num} give a new proof of the fact that the Tur\'{a}n density for the $5$ vertex hypergraph with $3$ edges is
\begin{equation}\label{eq:density}
\pi(\{1234, 1235, 1245\}) = \frac{1}{4}.
\end{equation} The first proof for the Tur\'{a}n density in Equation \eqref{eq:density} was given by Baber \cite{rB} who proved the lower bound using the following construction and random tournaments. Let $T$ be any tournament on $n$ vertices and construct a $4$-uniform hypergraph $\mathcal{H}_T$ by taking all sets of four vertices $\{a, b, c, d\}$ with the property that the edges are directed $a \to b$, $b \to c$ and $c \to a$ and such that the edges between $d$ and $a$, $b$, or $c$ are either all directed towards $d$ or all directed away from $d$. The two possible sub-tournaments that give hyperedges are shown in Figure \ref{fig:tourn}. One can check that $\mathcal{H}_T$ has the property that any $5$ vertices contain at most two hyperedges and that, in fact, every $5$ vertices span $0$ or $2$ hyperedges.
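Baber's construction is straightforward to implement. The following sketch is our own illustration (function names are ours): it builds $\mathcal{H}_T$ from a tournament by searching each $4$-set for a $3$-cycle whose fourth vertex dominates, or is dominated by, all three.

```python
import itertools
import random

def random_tournament(n, rng):
    """beats[(u, v)] is True iff the edge between u and v is directed u -> v."""
    beats = {}
    for u, v in itertools.combinations(range(n), 2):
        forward = rng.random() < 0.5
        beats[(u, v)], beats[(v, u)] = forward, not forward
    return beats

def hypergraph_from_tournament(n, beats):
    """Baber's 4-uniform hypergraph H_T: {a,b,c,d} is a hyperedge iff three
    of the vertices form a 3-cycle and the fourth sends all its edges to
    them, or receives all its edges from them."""
    edges = set()
    for quad in itertools.combinations(range(n), 4):
        for d in quad:
            a, b, c = (v for v in quad if v != d)
            cyclic = ((beats[(a, b)] and beats[(b, c)] and beats[(c, a)]) or
                      (beats[(b, a)] and beats[(c, b)] and beats[(a, c)]))
            apex = (all(beats[(d, x)] for x in (a, b, c)) or
                    all(beats[(x, d)] for x in (a, b, c)))
            if cyclic and apex:
                edges.add(frozenset(quad))
    return edges
```

One can then confirm, on random tournaments, that every $5$ vertices of $\mathcal{H}_T$ span $0$ or $2$ hyperedges.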
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
[decoration={markings, mark=at position 0.6 with {\arrow{>}}}]
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\node[vertex, label=left:{$d$}] at (-1, 0) (x) {};
\node[vertex, label=above:{$a$}] at (0, 1) (y) {};
\node[vertex, label=below:{$b$}] at (0, -1) (z) {};
\node[vertex, label=right:{$c$}] at (1, 0) (w) {};
\draw[postaction={decorate}] (y) -- (z);
\draw[postaction={decorate}] (z) -- (w);
\draw[postaction={decorate}] (w) -- (y);
\draw[postaction={decorate}] (x) -- (y);
\draw[postaction={decorate}] (x) -- (z);
\draw[postaction={decorate}] (x) -- (w);
\end{tikzpicture} \hspace{20pt}
\begin{tikzpicture}
[decoration={markings, mark=at position 0.6 with {\arrow{>}}}]
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\node[vertex, label=left:{$d$}] at (-1, 0) (x) {};
\node[vertex, label=above:{$a$}] at (0, 1) (y) {};
\node[vertex, label=below:{$b$}] at (0, -1) (z) {};
\node[vertex, label=right:{$c$}] at (1, 0) (w) {};
\draw[postaction={decorate}] (y) -- (z);
\draw[postaction={decorate}] (z) -- (w);
\draw[postaction={decorate}] (w) -- (y);
\draw[postaction={decorate}] (y) -- (x);
\draw[postaction={decorate}] (z) -- (x);
\draw[postaction={decorate}] (w) -- (x);
\end{tikzpicture}
\end{center}
\caption{Possible edge directions for hyperedge}
\label{fig:tourn}
\end{figure}
The expected number of such hyperedges given by a random tournament on $n$ vertices is $\frac{1}{4}\binom{n}{4}$ and so for every $n$, there is such a tournament yielding a hypergraph with at least this many hyperedges.
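The constant $\frac14$ can be confirmed by brute force: among the $2^6=64$ tournaments on a fixed set of $4$ labelled vertices, exactly $16$ realize one of the two configurations of Figure \ref{fig:tourn}. A quick enumeration (our own check, with our own function names):

```python
import itertools

def is_hyperedge(beats):
    """beats[(u, v)] is True iff the edge is directed u -> v; vertices 0..3."""
    for d in range(4):
        a, b, c = (v for v in range(4) if v != d)
        cyclic = ((beats[(a, b)] and beats[(b, c)] and beats[(c, a)]) or
                  (beats[(b, a)] and beats[(c, b)] and beats[(a, c)]))
        apex = (all(beats[(d, x)] for x in (a, b, c)) or
                all(beats[(x, d)] for x in (a, b, c)))
        if cyclic and apex:
            return True
    return False

pairs = list(itertools.combinations(range(4), 2))
count = 0
for orientation in itertools.product([True, False], repeat=len(pairs)):
    beats = {}
    for (u, v), forward in zip(pairs, orientation):
        beats[(u, v)], beats[(v, u)] = forward, not forward
    count += is_hyperedge(beats)
print(count, "of", 2 ** len(pairs))  # 16 of 64, i.e. a fraction of 1/4
```

This matches the direct count: $4$ choices of apex, $2$ orientations of the $3$-cycle, and $2$ choices of source versus sink.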
In fact, since the construction yields hypergraphs with every $5$ vertices spanning exactly $0$ or $2$ hyperedges, it also shows that
\[
\limsup_{n \to \infty} \left( \frac{\max\{e(\mathcal{H}) \mid |V(\mathcal{H})| = n \text{ and every $5$-set spans $0$ or $2$ edges}\}}{\binom{n}{4}} \right)= \frac{1}{4}.
\]
Examples of tournaments which are sometimes considered to have pseudo-random properties are the Paley tournaments. In Section \ref{sec:tourn}, we give a construction starting from a Paley tournament showing that the hypergraphs $\mathcal{H}_q$ from Theorem \ref{thm:main-const} can be represented by tournaments using Baber's construction. Furthermore, by examining a `switching' operation on tournaments that preserves the resulting hypergraphs, we give easily verifiable necessary conditions for a $4$-uniform hypergraph to be represented in this form. In particular, we show that a hypergraph $\mathcal{H}$ with the property that any set of $5$ vertices spans 0 or 2 edges cannot be realized using a tournament whenever it contains a pair of vertices $u,v$ such that $N_\mathcal{H}(u,v)$ contains an odd cycle (see Proposition \ref{p:oddcycle}). This is used to show that not only is the hypergraph $\mathcal{M}$ from Theorem \ref{thm:M11} not equal to $\mathcal{H}_{11}$, but that it also cannot be represented as $\mathcal{H}_T$ for any tournament on $12$ vertices.
The structure of the remainder of the paper is as follows. In Section \ref{sec:design}, the construction of the family of $4$-uniform hypergraphs described in Theorem \ref{thm:main-const} is given. In Lemma \ref{lem:3tran}, it is shown that the hypergraphs are $3$-transitive and that there is a subgroup of automorphisms isomorphic to $PGL(2, q)$. In Theorem \ref{thm:paley0or2}, it is shown that for each of the constructed hypergraphs, every set of $5$ vertices spans either $0$ or $2$ hyperedges. In Theorem \ref{thm:paley-edge-count}, it is shown that the hypergraphs $\mathcal{H}_q$ are $3$-designs and it is shown that for every $q$, the hypergraph $\mathcal{H}_q$ has exactly $\frac{(q+1)}{16}\binom{q+1}{3}$ hyperedges.
In Section \ref{sec:M11}, the properties of the hypergraph $\mathcal{M}$ described in Theorem \ref{thm:M11} are given.
In Section \ref{sec:ub}, the modification of de Caen's counting argument is given to prove an upper bound on the number of hyperedges for $r$-uniform hypergraphs in which any set of $r+1$ vertices spans either exactly $0$ or $2$ hyperedges or else at most $2$ hyperedges.
In Section \ref{sec:tourn}, Baber's construction for hypergraphs from tournaments is examined and it is shown in Theorem \ref{thm:isotourn} that for every $q$, there is a tournament $T^*(q)$ for which $\mathcal{H}_q = \mathcal{H}_{T^*(q)}$. Two tournaments $T_1$ and $T_2$ on the same vertex set $V$ are defined to be \emph{switching-equivalent} if{f} there exists $A \subseteq V$ so that $T_2$ is equal to the tournament obtained from $T_1$ by reversing the orientation of all edges in $T_1$ between $A$ and $V\setminus A$. Some implications of this switching operation are given for the hypergraphs resulting from tournaments and these are used to show that the hypergraph $\mathcal{M}$ does not arise from a tournament.
Finally, in Section \ref{sec:open}, we note some remaining open problems and further possible directions.
\section{Paley hypergraphs}\label{sec:design}
Throughout, let $q=p^\ell$ be a prime power with $q \equiv 3 \pmod 4$. The purpose of this section is to construct a 4-uniform hypergraph $\mathcal{H}_q$ on $q+1$ vertices with $\frac{(q+1)}{16} \binom{q+1}{3}$ edges and with the property that every set of 5 vertices spans exactly 0 or exactly 2 hyperedges. This construction uses the projective line over a finite field.
\begin{definition}
Define an equivalence relation, $\sim$, on $\mathbb{F}_q^2\setminus\{(0,0)\}$ by $(a, b) \sim (c, d)$ if{f} there exists $\lambda \in \mathbb{F}_q\setminus\{0\}$ with $(a, b) = (\lambda c, \lambda d)$. The \emph{projective line $\mathbb{P}^1\mathbb{F}_q$} is the set of equivalence classes of $\mathbb{F}_q^2 \setminus \{(0,0)\}$ under this equivalence relation. We write $[a:b]$ for the equivalence class of the point $(a, b)$ so that:
\[
\mathbb{P}^1\mathbb{F}_q = \{[x:1] \mid x \in \mathbb{F}_q\} \cup \{[1:0]\}.
\]
\end{definition}
\begin{definition}
Define $D: \mathbb{F}_q^2 \times \mathbb{F}_q^2 \to \mathbb{F}_q$ by
\[
D\left((u_1, u_2), (v_1, v_2) \right) = \begin{vmatrix} u_1 & v_1\\ u_2 & v_2\end{vmatrix}= u_1v_2 - u_2 v_1,
\]
called the \emph{determinant of $(u_1, u_2)$ and $(v_1, v_2)$}.
\end{definition}
Note that the determinant is not constant on the equivalence classes that define the projective line. Next, let $\chi: \mathbb{F}_q \to \{-1, 0, +1\}$ be the square character of the multiplicative group $\mathbb{F}_q^\times$, extended by $\chi(0) = 0$. Explicitly,
\[
\chi(x) = \begin{cases}
0 &\text{if } x=0\\
+1 &\text{if $x$ is a square in $\mathbb{F}_q$, and}\\
-1 &\text{otherwise.}
\end{cases}
\]
Since $\chi$ is a linear character, it is multiplicative. Also note that $\chi(-1) = -1$ since $q \equiv 3 \pmod{4}$.
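For concreteness, $\chi$ can be computed via Euler's criterion; the following sketch is an illustration only, assuming $q = p$ is an odd prime (a general prime power $q = p^\ell$ would require full finite-field arithmetic):

```python
# Illustrative sketch only: assumes q = p is an odd prime (a general prime
# power q = p^l would need full finite-field arithmetic).
def chi(x, p):
    """Square character on F_p via Euler's criterion:
    x^((p-1)/2) mod p, read as 0, +1 or -1."""
    x %= p
    if x == 0:
        return 0
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

print([chi(x, 7) for x in range(7)])  # squares mod 7 are {1, 2, 4}
print(chi(-1, 7), chi(-1, 11))        # -1 is a non-square when p = 3 (mod 4)
```

Multiplicativity and $\chi(-1) = -1$ for $p \equiv 3 \pmod 4$ are then easy to confirm numerically.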
\begin{definition}
Define the function $S: (\mathbb{F}_q^2)^4 \to \{-1, 0, 1\}$ by
\[
S(a, b, c, d) = \chi\left(D(a, b) D(b, c) D(c, d) D(d, a)\right).
\]
As the function $S$ is constant on the equivalence classes of $\mathbb{F}_q^2$ (scaling any one argument by $\lambda \neq 0$ multiplies the product of determinants by $\lambda^2$, a square), it extends in the natural way to $(\mathbb{P}^1 \mathbb{F}_q)^4$.
\end{definition}
With this definition, the key construction in this paper can now be given.
\begin{definition}[Paley $4$-graph]\label{def:paley-hgraph}
Define $\mathcal{H}_q$ to be the $4$-uniform hypergraph on vertex set $\mathbb{P}^1\mathbb{F}_q$ where $\{a, b, c, d\}$ is a hyperedge if{f} for every permutation $\pi$ of $\{a, b, c, d\}$,
\[
S(\pi(a), \pi(b), \pi(c), \pi(d)) = -1.
\]
\end{definition}
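To make Definition \ref{def:paley-hgraph} concrete, the sketch below builds $\mathcal{H}_p$ by brute force. It is illustrative only: it assumes $q = p$ prime, represents $[x:1]$ as $(x,1)$ and $[1:0]$ as $(1,0)$, and uses the cyclic determinant convention $D(a,b)D(b,c)D(c,d)D(d,a)$:

```python
# Brute-force construction of H_p (illustrative; assumes q = p prime).
# [x:1] is represented by (x, 1) and [1:0] by (1, 0).
from itertools import combinations, permutations

def chi(x, p):
    x %= p
    return 0 if x == 0 else (1 if pow(x, (p - 1) // 2, p) == 1 else -1)

def D(u, v):
    return u[0] * v[1] - u[1] * v[0]

def S(a, b, c, d, p):
    return chi(D(a, b) * D(b, c) * D(c, d) * D(d, a), p)

def paley_hypergraph(p):
    """Return the vertices of P^1(F_p) and the hyperedge set of H_p:
    a 4-set is a hyperedge iff S = -1 for every ordering of its points."""
    V = [(x, 1) for x in range(p)] + [(1, 0)]
    E = {frozenset(q4) for q4 in combinations(V, 4)
         if all(S(*perm, p) == -1 for perm in permutations(q4))}
    return V, E

V, E = paley_hypergraph(7)
print(len(V), len(E))  # expect 8 vertices and (8/16)*C(8,3) = 28 hyperedges
```

For $p = 7$ this yields $8$ vertices and $28$ hyperedges, in line with Theorem \ref{thm:paley-edge-count} below.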
The following lemma gives a useful criterion for testing whether or not a given set of 4 vertices lies in $\mathcal{H}_q$.
\begin{lemma}\label{lem:relabel}
For every four distinct points $a, b, c, d \in \mathbb{P}^1\mathbb{F}_q$, the set $\{a, b, c, d\}$ is a hyperedge in $\mathcal{H}_q$ if{f}
\[
S(a, b, c, d) = S(a, b, d, c) = S(a, c, b, d) = -1
\]
and $\{a, b, c, d\} \notin \mathcal{H}_q$ if{f} among $S(a, b, c, d), S(a, b, d, c)$, and $S(a, c, b, d)$ exactly one is $-1$ and the other two are $+1$.
\end{lemma}
\begin{proof}
Since $S$ is invariant under cyclic permutations of its arguments, and also under reversal of their order, the three values $S(a, b, c, d)$, $S(a, b, d, c)$, and $S(a, c, b, d)$ determine the values of $S$ under all $24$ permutations of the set $\{a, b, c, d\}$. This implies the first part of the lemma.
For the second part, note that
\begin{align}
S(a, b, c, d)&S(a, b, d, c) \notag\\
&=\chi\left(D(a, b)D(b, c) D(c, d) D(d, a)\right) \chi\left(D(a, b) D(b, d) D(d, c) D(c, a) \right) \notag\\
&=\chi\left(D(b, c) D(c, d) D(d, a) D(b, d) D(d, c) D(c, a)\right) \notag\\
&=-\chi\left(D(a, c) D(c, b) D(b, d) D(d, a)\right) \notag\\
&=-S(a, c, b, d). \label{eq:threeS}
\end{align}
Since these values are all either $+1$ or $-1$, equation~\eqref{eq:threeS} shows that their product is $-1$; hence if one of $S(a, b, c, d), S(a, b, d, c)$ and $S(a, c, b, d)$ is $+1$, then exactly two are $+1$ and the other is $-1$.
\end{proof}
We next show that $\mathcal{H}_q$ is 3-transitive. Let PGL$(2,q)$ be the group of all invertible $2 \times 2$ matrices over $\mathbb{F}_q$, considered up to multiplication by scalar matrices. Observe that PGL$(2,q)$ acts naturally on $\mathbb{P}^1\mathbb{F}_q$ as follows:
$$A = \begin{pmatrix} a &b \\ c &d \end{pmatrix} \mbox{ sends } [u_1: u_2] \mbox{ to } [au_1+bu_2: cu_1+du_2].$$ It is well known (see for example \cite[p. 245]{DM}) that this action is 3-transitive.
\begin{lemma}\label{lem:3tran}
For any prime power $q \equiv 3 \pmod{4}$, the hypergraph $\mathcal{H}_q$ is $3$-transitive.
\end{lemma}
\begin{proof}
Let $u:= (u_1, u_2)$ and $v:= (v_1, v_2)$ be elements of $\mathbb{F}_q^2$ and let $A \in$ PGL$(2,q)$. Then,
\begin{align*}
D(uA^T, vA^T)
& = D\left( (au_1+b u_2, cu_1 + du_2), (a v_1 + b v_2, c v_1+ d v_2)\right)\\
&=(au_1 + bu_2)(cv_1+dv_2) - (av_1 + b v_2)(c u_1 + d u_2)\\
&=(ad-bc)(u_1v_2 - v_1 u_2)\\
&=\det(A) D(u, v).
\end{align*}
Thus, for any $x, y, u, v \in \mathbb{F}_q^2$,
\[
S(xA^T, yA^T, uA^T, vA^T) = \chi\left(\det(A)^4\right) S(x, y, u, v) = S(x, y, u, v).
\]
This shows that the map $x \mapsto xA^T$ is an automorphism of the hypergraph $\mathcal{H}_q$. In particular, PGL$(2,q) \leq$ Aut$(\mathcal{H}_q)$ and $\mathcal{H}_q$ is 3-transitive.
\end{proof}
We can now prove our first main result:
\begin{theorem}\label{thm:paley0or2}
For any prime power $q \equiv 3\pmod{4}$ and five distinct vertices $a, b, c, d, e \in \mathbb{P}^1\mathbb{F}_q$, the set $\{a, b, c, d, e\}$ contains either $0$ or exactly $2$ hyperedges of $\mathcal{H}_q$.
\end{theorem}
\begin{proof}
If there are no edges in $\{a, b, c, d, e\}$, then we are done, so suppose without loss of generality that $\{a, b, c, d\} \in E(\mathcal{H}_q)$. Note that as long as $a, b, c, d, e$ are all distinct, then
\begin{equation}\label{eq:doublecover}
S(a, b, c, d) S(b, c, e, d) S(a, d, b, e) S(a, e, d, c) S(a, b, e, c) = +1.
\end{equation}
To see equation \eqref{eq:doublecover}, note that each unordered pair of points from $\{a, b, c, d, e\}$ contributes exactly two determinant factors to the product, and exactly four of the ten pairs contribute their two factors in opposite orders, so the product of all twenty determinants is a square in $\mathbb{F}_q$. Equation \eqref{eq:doublecover} implies that not all $5$ different $4$-sets in $\{a, b, c, d, e\}$ are edges, or else the product would be $-1$. Therefore, there is at least one non-edge, say $\{b, c, d, e\}$. By Lemma \ref{lem:relabel}, after possibly relabelling points, we can assume that $S(b, c, d, e) = -1$ and $S(b, c, e, d) = S(b, d, c, e) = +1$. Furthermore, by Lemma \ref{lem:3tran}, we may assume that $b = [0:1]$, $c = [1:1]$ and $d = [1:0]$. Thus, it can be assumed that $a = [a_1:1]$ and $e = [e_1:1]$ for some $a_1,e_1 \in \mathbb{F}_q$. Note that for any $x \in \mathbb{F}_q$, $D((x, 1), (1, 0)) = -1$ and $D((1, 0), (x, 1)) = 1$.
Then, $1-e_1$ is a square in $\mathbb{F}_q$ since
\[
+1 = S([0:1], [1:1], [e_1:1], [1:0]) =\chi\left((-1)\cdot(1-e_1) \cdot (-1)\cdot (1) \right) = \chi\left(1-e_1 \right).
\]
Also, $e_1$ is a non-square since $S([0:1], [1:1], [1:0], [e_1:1]) = -1.$ The fact that
\[
S([0:1], [1:1], [1:0], [a_1:1]) = S([0:1], [1:1], [a_1:1], [1:0]) = -1
\]
implies that both $a_1$ and $1-a_1$ are non-squares.
Thus, since
\[
S([a_1:1], [1:0], [e_1:1], [0:1]) = \chi\left((-1)(1)(e_1)(-a_1)\right) = +1
\]
we must have that $\{a, b, d, e\}$ is not an edge.
Finally, note that
\begin{align*}
S([a_1:1], [0:1], [e_1:1], [1:1]) &= \chi\left(a_1 (-e_1)(e_1-1)(1-a_1)\right) = -1,\\
S([a_1:1], [0:1], [1:1], [e_1:1]) & = \chi\left(a_1(-1)(1-e_1)(e_1-a_1) \right) = \chi\left(e_1-a_1\right),\\
S([a_1:1], [1:0], [e_1:1], [1:1]) &= \chi\left((-1)(+1)(e_1-1)(1-a_1) \right) = -1, \text{ and}\\
S([a_1:1], [1:0], [1:1], [e_1:1]) &= \chi\left((-1)(+1)(1-e_1)(e_1 - a_1) \right) = -\chi\left(e_1-a_1\right).
\end{align*}
Hence if $e_1-a_1$ is a square in $\mathbb{F}_q$, then $\{a, b, c, e\}$ is a non-edge and $\{a, c, d, e\}$ is an edge. Likewise, if $e_1 - a_1$ is a non-square, then $\{a, b, c, e\}$ is an edge and $\{a, c, d, e\}$ is a non-edge.
In either case, if the set $\{a, b, c, d, e\}$ contains at least one hyperedge, then it contains exactly two and the proof is complete.
\end{proof}
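Theorem \ref{thm:paley0or2} can also be checked by exhaustive computation for small prime $q$. A sketch, using the same coordinate representation and prime assumption as before:

```python
# Exhaustive check of the 0-or-2 property for small primes (illustrative).
from itertools import combinations, permutations

def chi(x, p):
    x %= p
    return 0 if x == 0 else (1 if pow(x, (p - 1) // 2, p) == 1 else -1)

def S(a, b, c, d, p):
    def D(u, v):
        return u[0] * v[1] - u[1] * v[0]
    return chi(D(a, b) * D(b, c) * D(c, d) * D(d, a), p)

def paley_hypergraph(p):
    V = [(x, 1) for x in range(p)] + [(1, 0)]
    return V, {frozenset(q4) for q4 in combinations(V, 4)
               if all(S(*perm, p) == -1 for perm in permutations(q4))}

for p in (7, 11):
    V, E = paley_hypergraph(p)
    counts = {sum(frozenset(f) in E for f in combinations(five, 4))
              for five in combinations(V, 5)}
    print(p, counts)  # every 5-set spans 0 or 2 hyperedges
```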
Next, we count the number of edges in $\mathcal{H}_q$. For this, two facts about sums of the function $\chi$ are used. The only condition required for each is that $q$ is an odd prime power. The first identity is
\begin{equation}\label{eq:sumLeg}
\sum_{x \in \mathbb{F}_q} \chi(x) = 0
\end{equation}
and the second is that for any $y \neq 0$,
\begin{equation}\label{eq:sumLegpairs}
\sum_{x \in \mathbb{F}_q} \chi(x)\chi(x+y) = -1.
\end{equation}
Both of these facts are standard exercises in number theory.
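Both identities are also easy to confirm numerically; the sketch below (illustrative, assuming $q = p$ prime) checks them for several small primes:

```python
# Numerical check of the character-sum identities (illustrative; q = p prime).
def chi(x, p):
    x %= p
    return 0 if x == 0 else (1 if pow(x, (p - 1) // 2, p) == 1 else -1)

for p in (3, 7, 11, 19, 23):
    # sum of chi over F_p vanishes; shifted products sum to -1 for y != 0
    assert sum(chi(x, p) for x in range(p)) == 0
    assert all(sum(chi(x, p) * chi(x + y, p) for x in range(p)) == -1
               for y in range(1, p))
print("both identities hold for p in (3, 7, 11, 19, 23)")
```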
\begin{theorem}\label{thm:paley-edge-count}
For any prime power $q \equiv 3 \pmod{4}$, the hypergraph $\mathcal{H}_q$ has $e(\mathcal{H}_q) = \frac{(q+1)}{16} \binom{q+1}{3}$ hyperedges and every set of $3$ vertices occurs in exactly $(q+1)/4$ hyperedges.
\end{theorem}
\begin{proof}
Consider first hyperedges of the form $\{[0:1], [1:1], [1:0], [a:1]\}$. If such a set is a hyperedge then,
\begin{align*}
-1 = S([a:1], [0:1], [1:1], [1:0]) &= \chi\left(a\cdot(-1)\cdot(-1)\cdot(+1)\right) =\chi \left(a\right); \mbox{ and}\\
-1 = S([a:1], [1:1], [0:1], [1:0]) &= \chi\left((a-1)\cdot(+1)\cdot(-1)\cdot(+1)\right)= \chi\left(1-a \right).
\end{align*}
Thus, $[a:1]$ is in a hyperedge with $\{[0:1], [1:1], [1:0]\}$ if{f} both $a$ and $1-a$ are non-squares in $\mathbb{F}_q$. Consider
\[
\frac{1}{4}\left(1 - \chi\left(a \right) \right)\left(1 - \chi\left(1-a \right) \right) =
\begin{cases}
1 &\text{if $a$ and $1-a$ are both non-square, and}\\
0 &\text{otherwise}.
\end{cases}
\]
The number of hyperedges in $\mathcal{H}_q$ containing $\{[0:1], [1:1], [1:0]\}$ is then exactly
\begin{multline}\label{eq:edge-count}
\frac{1}{4} \sum_{a \in \mathbb{F}_q \setminus \{0,1\}} \left(1 - \chi\left(a\right)\right)\left(1 - \chi\left(1-a\right)\right) \\
= \frac{1}{4}\sum_{a \in \mathbb{F}_q \setminus \{0,1\}}\left(1 - \chi\left(a \right) - \chi\left(1-a\right) + \chi\left(a \right)\chi\left(1-a\right)\right) .
\end{multline}
Consider the terms in equation \eqref{eq:edge-count} separately. Note that, by equation \eqref{eq:sumLeg},
\[
\sum_{a \in \mathbb{F}_q \setminus \{0,1\}} \chi\left(a \right) = -\chi\left(1\right) = -1, \qquad \text{and} \qquad
\sum_{a \in \mathbb{F}_q \setminus \{0,1\}}\chi\left(1-a\right) = -\chi\left(1\right) = -1.
\]
Then, by equation \eqref{eq:sumLegpairs},
\[
\sum_{a \in \mathbb{F}_q \setminus \{0,1\}}\chi\left(a\right)\chi\left(1-a\right)
=-\sum_{a \in \mathbb{F}_q \setminus \{0,1\}}\chi\left(a\right)\chi\left(a-1\right)
=-(-1 - 0 - 0) = 1.
\]
Thus, substituting into equation \eqref{eq:edge-count} gives,
\[
\frac{1}{4} \sum_{a \in \mathbb{F}_q \setminus \{0,1\}} \left(1 - \chi\left(a\right)\right)\left(1 - \chi\left(1-a\right)\right)
= \frac{1}{4}\left((q-2) - (-1) - (-1) +1 \right) = \frac{q+1}{4}.
\]
Thus, the three vertices $\{[0:1], [1:1], [1:0]\}$ are contained together in exactly $(q+1)/4$ hyperedges of $\mathcal{H}_q$ and since the hypergraph is $3$-transitive, the same is true of any other set of $3$ vertices. That the total number of hyperedges is
\[
e(\mathcal{H}_q) = |E(\mathcal{H}_q)| = \frac{(q+1)}{16}\binom{q+1}{3}
\]
follows immediately.
\end{proof}
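The design property of Theorem \ref{thm:paley-edge-count} can likewise be verified directly for small prime $q$; the sketch below (illustrative, same representation as before) counts the hyperedges through each $3$-set of vertices:

```python
# Check that every 3-set lies in exactly (p+1)/4 hyperedges (illustrative).
from itertools import combinations, permutations

def chi(x, p):
    x %= p
    return 0 if x == 0 else (1 if pow(x, (p - 1) // 2, p) == 1 else -1)

def S(a, b, c, d, p):
    def D(u, v):
        return u[0] * v[1] - u[1] * v[0]
    return chi(D(a, b) * D(b, c) * D(c, d) * D(d, a), p)

def paley_hypergraph(p):
    V = [(x, 1) for x in range(p)] + [(1, 0)]
    return V, {frozenset(q4) for q4 in combinations(V, 4)
               if all(S(*perm, p) == -1 for perm in permutations(q4))}

for p in (7, 11):
    V, E = paley_hypergraph(p)
    degrees = {sum(set(t) <= e for e in E) for t in combinations(V, 3)}
    print(p, len(E), degrees)  # expect (p+1)/16 * C(p+1,3) edges, degrees {(p+1)/4}
```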
\section{A $4$-hypergraph associated to $M_{11}$}\label{sec:M11}
In this section, we describe a single $4$-uniform hypergraph on $12$ vertices with the property that any $5$ vertices span either $0$ or $2$ hyperedges and that has the same number of hyperedges as $\mathcal{H}_{11}$, which is also a hypergraph on $12$ vertices.
Hughes \cite{dH65} examined certain designs arising from groups and showed that there is a $3-(12, 4, 3)$ design which occurs as an orbit on 4-subsets under the natural action of the Mathieu group $M_{11}$ on 12 points. We denote this design by $\mathcal{M}$ here. A listing of the hyperedges of $\mathcal{M}$ is given in an appendix. One can verify directly that the hypergraph $\mathcal{M}$, with $165$ hyperedges, has the property that every set of $3$ vertices is contained in exactly $3$ hyperedges and that every set of $5$ vertices contains either exactly $0$ or $2$ hyperedges.
This $3$-design was also examined by Devillers, Giudici, Li, and Praeger \cite{DGHP} and their results can be used to give alternative proofs of these facts. In \cite{DGHP}, a graph $\Gamma$ is defined with vertex set being the hyperedges of $\mathcal{M}$ and two vertices $A, B$ being adjacent if{f} $|A \cap B| = 3$.
\begin{theorem}[in Theorem 2.5 of \cite{DGHP}]\label{thm:gamma-graph}
The graph $\Gamma$ is an $8$-regular graph on $165$ vertices with the property that any two cliques of size $3$ intersect in at most one vertex.
\end{theorem}
Theorem \ref{thm:gamma-graph} is used to give another proof of the fact that any set of $5$ vertices of $\mathcal{M}$ contains either $0$ or exactly $2$ hyperedges.
\begin{proposition}\label{prop:m11-0or2}
Let $a, b, c, d, e$ be five distinct vertices of $\mathcal{M}$ with $\{a, b, c, d\} \in \mathcal{M}$. There is exactly one other hyperedge of $\mathcal{M}$ in the set $\{a, b, c, d, e\}$.
\end{proposition}
\begin{proof}
Note that since $\mathcal{M}$ is a $3-(12, 4, 3)$ design, for each subset $X \subseteq \{a, b, c, d\}$ of size $3$, there are two vertices $x, y \in V(\mathcal{M})$ so that $X \cup \{x\}, X \cup \{y\} \in \mathcal{M}$. Then the three sets $\{a, b, c, d\}$, $X \cup \{x\}$, and $X \cup \{y\}$ form a clique of size $3$ in $\Gamma$. Since the four cliques obtained in this way all contain the vertex $\{a, b, c, d\}$, Theorem \ref{thm:gamma-graph} implies that they are otherwise pairwise disjoint. Let $x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8$ be such that $\mathcal{M}$ contains the hyperedges
\[
\begin{tabular}{llll}
$\{a, b, c, x_1\}$, &$\{a, b, d, x_3\}$, &$\{a, c, d, x_5\}$, &$\{b, c, d, x_7\}$\\
$\{a, b, c, x_2\}$, &$\{a, b, d, x_4\}$, &$\{a, c, d, x_6\}$, &$\{b, c, d, x_8\}$.
\end{tabular}
\]
Since these sets are all distinct, $x_1 \neq x_2$, $x_3 \neq x_4$, $x_5 \neq x_6$, and $x_7 \neq x_8$. Furthermore, since $\Gamma$ is $8$-regular, these are the only $8$ neighbours of the vertex $\{a, b, c, d\}$, as in Figure \ref{fig:Gamma-graph}.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\node[vertex, label={80:$abcd$}] at (0, 0) (0) {};
\node[vertex, label=right:{$abcx_1$}] at (337.5:2) (1) {};
\node[vertex, label=right:{$abcx_2$}] at (22.5:2) (2) {};
\node[vertex, label=above:{$abdx_3$}] at (67.5:2) (3) {};
\node[vertex, label=above:{$abdx_4$}] at (112.5:2) (4) {};
\node[vertex, label=left:{$acdx_5$}] at (157.5:2) (5) {};
\node[vertex, label=left:{$acdx_6$}] at (202.5:2) (6) {};
\node[vertex, label=below:{$bcdx_7$}] at (247.5:2) (7) {};
\node[vertex, label=below:{$bcdx_8$}] at (292.5:2) (8) {};
\draw (0) -- (1) -- (2) --(0)-- (3) -- (4) -- (0) -- (5) -- (6) -- (0) -- (7)--(8)--(0);
\end{tikzpicture}
\end{center}
\caption{Neighbourhood of a vertex in $\Gamma$}
\label{fig:Gamma-graph}
\end{figure}
Suppose, for some $i \neq j$, that $x_i = x_j$. Let $A, B \subseteq \{a, b, c, d\}$ be the two different sets of size $3$ with $A \cup \{x_i\}, B \cup \{x_i\} \in \mathcal{M}$. Then, since $|(A \cup \{x_i\}) \cap (B \cup \{x_i\})| = 3$, there is an edge in $\Gamma$ between $A \cup \{x_i\}$ and $B \cup \{x_i\}$. As shown previously, there is $x_{i'} \neq x_i$ with $A \cup \{x_{i'}\} \in \mathcal{M}$. Then, in $\Gamma$, the three vertices $\{a, b, c, d\}$, $A \cup \{x_i\}$, and $B \cup \{x_i\}$ form a clique on $3$ vertices that shares two vertices with the clique formed by $\{a,b, c, d\}$, $A \cup \{x_i\}$, and $A \cup \{x_{i'}\}$. This contradicts Theorem \ref{thm:gamma-graph}, and so the vertices $x_1, x_2, \ldots, x_8$ are pairwise distinct and none of them lies in $\{a, b, c, d\}$. Since there are exactly $8$ vertices outside $\{a, b, c, d\}$, there is exactly one $i \in \{1, \ldots, 8\}$ with $x_i = e$. Thus, there is exactly one hyperedge containing $e$ and three vertices from $\{a, b, c, d\}$.
This completes the proof that every set of $5$ vertices in $\mathcal{M}$ either contains no hyperedges or contains exactly $2$.
\end{proof}
Furthermore, using a connection to the Witt design $\mathcal{W}_{11}$, it is shown in \cite{DGHP} that the full automorphism group of $\mathcal{M}$ is the group $M_{11}$.
As described in the introduction, a direct examination shows that $\mathcal{M}$ is not isomorphic to $\mathcal{H}_{11}$, although they both have $165$ hyperedges. In Section \ref{sec:tourn}, another property of hypergraphs is examined to prove that the two hypergraphs $\mathcal{M}$ and $\mathcal{H}_{11}$ are different.
\section{Extremal bounds}\label{sec:ub}
We now consider the maximum number of hyperedges possible in a $4$-uniform hypergraph with the property that every $5$ vertices span either $0$ or $2$ hyperedges, and deduce that the hypergraph $\mathcal{H}_q$ constructed in Section \ref{sec:design} has the maximum number of hyperedges among $4$-uniform hypergraphs on $q+1$ vertices with this property. The bound given in Proposition \ref{prop:ub0or2} follows directly from an argument used by de Caen \cite{dC83} to give upper bounds for the Tur\'{a}n numbers for complete hypergraphs. The full proof is included here both for completeness and to highlight the fact that those hypergraphs which attain the upper bound are necessarily designs. Though we shall only use this result in the case when $r=4$, we state and prove the result for arbitrary $r$-uniform hypergraphs.
\begin{proposition}\label{prop:ub0or2}
Let $r \geq 2$ and let $\mathcal{H}$ be an $r$-uniform hypergraph on $n$ vertices with the property that every set of $r+1$ vertices contains at most $2$ hyperedges. Then,
\[
|E(\mathcal{H})| \leq \frac{n}{r^2} \binom{n}{r-1},
\]
with equality if{f} $\mathcal{H}$ is such that every set of $(r-1)$ vertices occurs in exactly $n/r$ hyperedges.
\end{proposition}
\begin{proof}
The proof involves double-counting the set
\begin{equation}\label{eq:edge-nonedge}
\left\{(A, B) \mid |A| = |B|=r,\ |A \cap B| = r-1,\ A \in \mathcal{H} \text{ and } B \notin \mathcal{H} \right\}.
\end{equation}
The size of the set in \eqref{eq:edge-nonedge} can be bounded from below as follows. Let $E_1, E_2, \ldots, E_m$ be the hyperedges of $\mathcal{H}$. Fix $i \leq m$ and $x \notin E_i$. Since the set $E_i \cup \{x\}$ contains $r+1$ vertices and at least one hyperedge, by assumption it contains at most $2$. That is, there is at most one vertex $y \in E_i$ so that $E_i \cup \{x\} \setminus \{y\} \in \mathcal{H}$. Hence, for each of the at least $r-1$ vertices $z \in E_i \setminus \{y\}$, the pair $(E_i, E_i \cup \{x\} \setminus \{z\})$ is in the set in \eqref{eq:edge-nonedge}. Furthermore, all pairs in this set are of this form. Thus,
\begin{align}
\big|\big\{(A, B) \mid &|A| = |B|=r,\ |A \cap B| = r-1,\ A \in \mathcal{H} \text{ and } B \notin \mathcal{H} \big\}\big| \notag\\
&=\sum_{i = 1}^{m} \sum_{x \notin E_i} |\{z \in E_i \mid E_i \cup\{x\} \setminus \{z\} \notin \mathcal{H}\}|\notag\\
&\geq \sum_{i=1}^m \sum_{x \notin E_i} (r-1) \notag\\
&=m(n-r)(r-1) = e(\mathcal{H}) (n-r)(r-1). \label{eq:exact-count}
\end{align}
Note that the inequality in \eqref{eq:exact-count} is, in fact, an identity in the case that every $r+1$ vertices span either $0$ or exactly $2$ hyperedges.
For an upper bound on the size of the set in equation \eqref{eq:edge-nonedge}, order the $(r-1)$-sets of vertices $\{C_i \mid 1\leq i \leq \binom{n}{r-1}\}$ and for each $i \leq \binom{n}{r-1}$, let $a_i$ be the number of hyperedges of $\mathcal{H}$ containing the set $C_i$. Note that, by double counting, $\sum a_i = r e(\mathcal{H})$. Further, the number of pairs $(A, B)$ with $A \in \mathcal{H}$, $B \notin \mathcal{H}$ and $A \cap B = C_i$ is $a_i(n-r+1-a_i)$. Thus,
\begin{align}
\sum_{i = 1}^{\binom{n}{r-1}} &\left| \{(A, B) \mid A \in \mathcal{H},\ |B| = r, B \notin \mathcal{H}, A \cap B = C_i \}\right| \notag\\
&=\sum_{i = 1}^{\binom{n}{r-1}} a_i (n-r+1 - a_i) \notag\\
&=(n-r+1) r e(\mathcal{H}) - \sum_{i = 1}^{\binom{n}{r-1}} a_i^2 \notag\\
&\leq (n-r+1) r e(\mathcal{H}) - \frac{1}{\binom{n}{r-1}} \left(r e(\mathcal{H}) \right)^2 &&\text{(by Jensen's ineq.)} \label{eq:conv}\\
& = (n-r+1)r e(\mathcal{H}) - \frac{r^2}{\binom{n}{r-1}} e(\mathcal{H})^2. \label{eq:ub}
\end{align}
Further, by convexity, equality holds in \eqref{eq:conv} if{f} all of the $a_i$s are equal.
Combining equations \eqref{eq:exact-count} and \eqref{eq:ub} shows that
\begin{equation}\label{eq:combined-bd}
e(\mathcal{H}) \leq \frac{n}{r^2} \binom{n}{r-1}
\end{equation}
and by the convexity properties of \eqref{eq:conv}, equality holds in equation \eqref{eq:combined-bd} if{f} every set of $r-1$ vertices is contained in exactly the same number of hyperedges. By double counting, this means that equality holds if{f} every set of $r-1$ vertices is contained in exactly $n/r$ hyperedges of $\mathcal{H}$.
\end{proof}
Combining Proposition \ref{prop:ub0or2} with Theorems \ref{thm:paley0or2} and \ref{thm:paley-edge-count} we obtain:
\begin{corollary}
For each prime power $q \equiv 3 \pmod{4}$, the hypergraph $\mathcal{H}_q$ has the maximum possible number of hyperedges among $4$-uniform hypergraphs on $q+1$ points with the property that every set of $5$ vertices contains at most $2$ hyperedges.
\end{corollary}
Note in particular that the upper bound given in Proposition \ref{prop:ub0or2} is attained for infinitely many values of $n$.
Furthermore, we have the following collection of exact Tur\'{a}n numbers. Note that there is, up to isomorphism, just one $4$-uniform hypergraph on $5$ vertices with $3$ hyperedges: $\{1234, 1235, 1245\}$.
\begin{theorem}
For any prime power $q \equiv 3\pmod{4}$,
\[
\operatorname{ex}(q+1, \{1234, 1235, 1245\}) = \frac{(q+1)}{16} \binom{q+1}{3}.
\]
\end{theorem}
\section{Tournaments}\label{sec:tourn}
\subsection{The extended Paley tournament and $\mathcal{H}_q$}
We begin with the Paley hypergraphs $\mathcal{H}_q$ considered in Section \ref{sec:design}. For any prime power $q \equiv 3 \pmod 4$, recall that the Paley tournament, denoted here by $T(q)$, is the tournament whose vertices are the elements of $\mathbb{F}_q$, with the edge between $x$ and $y$ directed $x \to y$ if{f} $y-x$ is a square in $\mathbb{F}_q$. Note that this is well-defined since $-1$ is not a square in $\mathbb{F}_q$ when $q \equiv 3 \pmod{4}$. We shall consider a class of tournaments that contain a Paley tournament.
\begin{definition}
For any prime power $q \equiv 3 \pmod{4}$, define a tournament, denoted $T^*(q)$, with vertex set $V := \{(x, 1) \mid x \in \mathbb{F}_q\} \cup \{(1,0)\}$ where for every pair of vertices $(a_1, a_2), (b_1, b_2)$, the edge is directed $(a_1, a_2) \to (b_1, b_2)$ if{f} $D\left( (b_1, b_2), (a_1, a_2)\right)$ is a square in $\mathbb{F}_q$.
\end{definition}
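In the coordinates used earlier (illustrative only, assuming $q = p$ prime, with $[x:1]$ represented as $(x,1)$ and $[1:0]$ as $(1,0)$), the tournament $T^*(p)$ can be generated directly from the definition:

```python
# The tournament T*(p) in coordinates (illustrative; q = p prime).
from itertools import combinations

def chi(x, p):
    x %= p
    return 0 if x == 0 else (1 if pow(x, (p - 1) // 2, p) == 1 else -1)

def D(u, v):
    return u[0] * v[1] - u[1] * v[0]

def extended_paley_tournament(p):
    """Arcs (a, b) mean a -> b; the edge is directed a -> b iff
    D(b, a) is a square in F_p."""
    V = [(x, 1) for x in range(p)] + [(1, 0)]
    arcs = {(a, b) if chi(D(b, a), p) == 1 else (b, a)
            for a, b in combinations(V, 2)}
    return V, arcs

V, arcs = extended_paley_tournament(7)
inf = (1, 0)
print(all((v, inf) in arcs for v in V if v != inf))   # all arcs point into (1, 0)
print(all((((x, 1), (y, 1)) in arcs) == (chi(y - x, 7) == 1)
          for x in range(7) for y in range(7) if x != y))  # Paley tournament on F_7
```

The two printed checks illustrate that every edge at $(1,0)$ is directed towards it and that the remaining vertices induce the Paley tournament $T(p)$.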
In other words, on $\{(x, 1) \mid x \in \mathbb{F}_q\}$, the tournament $T^*(q)$ is isomorphic to the Paley tournament $T(q)$ and all edges incident to the vertex $(1, 0)$ are directed towards it. Our next goal will be to show that $\mathcal{H}_{T^*(q)}$ is isomorphic to the hypergraph $\mathcal{H}_q$ constructed in Section \ref{sec:design}. For this, we need the following observation which is a characterisation of tournaments of the form shown in Figure \ref{fig:tourn}.
Given a tournament on four vertices $\{x_1, x_2, x_3, x_4\}$ and a cyclic permutation of the vertices $(\pi(x_1)\ \pi(x_2)\ \pi(x_3)\ \pi(x_4))$, say that a pair $\{\pi(x_j), \pi(x_{j+1})\}$ of consecutive vertices in the permutation is `reverse-oriented' with respect to the permutation if the direction of the edge in the tournament is $\pi(x_{j+1}) \to \pi(x_j)$.
\begin{fact}\label{lem:circ}
Let $T$ be a tournament on four vertices $\{a,b,c,d\}$. Then, $T$ has the property that in any cyclic permutation of the vertices, the number of reverse-oriented pairs is odd if{f} $T$ is isomorphic to one of the two tournaments in Figure \ref{fig:tourn}.
\end{fact}
\begin{theorem}\label{thm:isotourn}
For any prime power $q \equiv 3 \pmod{4}$, the hypergraph $\mathcal{H}_{T^*(q)}$ constructed from the tournament $T^*(q)$ is isomorphic to the hypergraph $\mathcal{H}_q$.
\end{theorem}
\begin{proof}
We show that the natural mapping $\Phi$ from $\mathcal{H}_q$ to $\mathcal{H}_{T^*(q)}$ which sends a vertex $[x:1]$ to $(x,1)$ and $[1:0]$ to $(1,0)$ induces an isomorphism.
Let $\{[a_1:a_2],[b_1:b_2],[c_1:c_2],[d_1:d_2]\}$ be an edge of $\mathcal{H}_q$ so that for any permutation $\pi$, $$S(\pi([a_1:a_2]),\pi([b_1:b_2]),\pi([c_1:c_2]),\pi([d_1:d_2]))=-1.$$ This is equivalent to saying that in the subtournament induced by the image of $\{[a_1:a_2],[b_1:b_2],[c_1:c_2],[d_1:d_2]\}$ under $\Phi$, any cyclic permutation has an odd number of reverse-oriented consecutive pairs. Now by Fact \ref{lem:circ}, this subtournament is isomorphic to one of the two tournaments in Figure \ref{fig:tourn} and $\{\Phi([a_1:a_2]),\Phi([b_1:b_2]),\Phi([c_1:c_2]),\Phi([d_1:d_2])\}$ is a hyperedge in $\mathcal{H}_{T^*(q)}$.
Conversely, let $\{x, y, z, w\}$ be a non-edge in $\mathcal{H}_q$. Then, possibly after relabelling, the edges between the four vertices are as in one of the three possibilities in Figure \ref{fig:non-edge-tourn}.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
[decoration={markings, mark=at position 0.6 with {\arrow{>}}}]
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\node[vertex, label=left:{$x$}] at (0, 0) (x) {};
\node[vertex, label=left:{$y$}] at (0, 1) (y) {};
\node[vertex, label=right:{$z$}] at (1, 1) (z) {};
\node[vertex, label=right:{$w$}] at (1, 0) (w) {};
\draw[postaction={decorate}] (x) -- (y);
\draw[postaction={decorate}] (y) -- (z);
\draw[postaction={decorate}] (z) -- (w);
\draw[postaction={decorate}] (w) -- (x);
\end{tikzpicture} \hspace{20pt}
\begin{tikzpicture}
[decoration={markings, mark=at position 0.6 with {\arrow{>}}}]
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\node[vertex, label=left:{$x$}] at (0, 0) (x) {};
\node[vertex, label=left:{$y$}] at (0, 1) (y) {};
\node[vertex, label=right:{$z$}] at (1, 1) (z) {};
\node[vertex, label=right:{$w$}] at (1, 0) (w) {};
\draw[postaction={decorate}] (y) -- (x);
\draw[postaction={decorate}] (y) -- (z);
\draw[postaction={decorate}] (w) -- (z);
\draw[postaction={decorate}] (w) -- (x);
\end{tikzpicture} \hspace{20pt}
\begin{tikzpicture}
[decoration={markings, mark=at position 0.6 with {\arrow{>}}}]
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\node[vertex, label=left:{$x$}] at (0, 0) (x) {};
\node[vertex, label=left:{$y$}] at (0, 1) (y) {};
\node[vertex, label=right:{$z$}] at (1, 1) (z) {};
\node[vertex, label=right:{$w$}] at (1, 0) (w) {};
\draw[postaction={decorate}] (x) -- (y);
\draw[postaction={decorate}] (y) -- (z);
\draw[postaction={decorate}] (w) -- (z);
\draw[postaction={decorate}] (x) -- (w);
\end{tikzpicture}
\end{center}
\caption{Three possible edge orientations for a non-hyperedge of $\mathcal{H}_q$}
\label{fig:non-edge-tourn}
\end{figure}
In the first case, on the left, the set $\{x, y, z, w\}$ is not a hyperedge in $\mathcal{H}_{T^*(q)}$ because no vertex has all edges directed towards or away from the other three. In the second case, in the middle, the set is not a hyperedge in $\mathcal{H}_{T^*(q)}$ because no three vertices are part of a cyclic ordering. Finally, in the third case on the right, the vertex $x$ could have all edges directed out, but the vertices $\{x, y, w\}$ are not a subset of a cyclic ordering. Similarly, the vertex $z$ could have all edges directed in, but the remaining vertices are not cyclically ordered. This shows that non-edges in $\mathcal{H}_q$ are mapped to non-edges in $\mathcal{H}_{T^*(q)}$.
Thus, the two hypergraphs are isomorphic.
\end{proof}
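Theorem \ref{thm:isotourn} can be confirmed computationally for small prime $q$: building $\mathcal{H}_{T^*(p)}$ via the reverse-orientation parity condition of Fact \ref{lem:circ} recovers exactly the hyperedges of $\mathcal{H}_p$. A sketch (illustrative only, assuming $q = p$ prime and the coordinate representation used earlier):

```python
# Check H_{T*(p)} = H_p for p = 7 (illustrative; q = p prime). The hypergraph
# of a tournament is computed via the parity criterion: a 4-set is a hyperedge
# iff every cyclic ordering has an odd number of reverse-oriented pairs.
from itertools import combinations, permutations

def chi(x, p):
    x %= p
    return 0 if x == 0 else (1 if pow(x, (p - 1) // 2, p) == 1 else -1)

def D(u, v):
    return u[0] * v[1] - u[1] * v[0]

def S(a, b, c, d, p):
    return chi(D(a, b) * D(b, c) * D(c, d) * D(d, a), p)

def paley_hypergraph(p):
    V = [(x, 1) for x in range(p)] + [(1, 0)]
    return V, {frozenset(q4) for q4 in combinations(V, 4)
               if all(S(*perm, p) == -1 for perm in permutations(q4))}

def extended_paley_tournament(p):
    V = [(x, 1) for x in range(p)] + [(1, 0)]
    return V, {(a, b) if chi(D(b, a), p) == 1 else (b, a)
               for a, b in combinations(V, 2)}

def tournament_hypergraph(V, arcs):
    return {frozenset(q4) for q4 in combinations(V, 4)
            if all(sum((perm[(i + 1) % 4], perm[i]) in arcs
                       for i in range(4)) % 2 == 1
                   for perm in permutations(q4))}

V, E = paley_hypergraph(7)
_, arcs = extended_paley_tournament(7)
print(tournament_hypergraph(V, arcs) == E)  # True
```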
Note that, in light of the remarks at the start of this section, Theorem \ref{thm:isotourn} supplies a different proof of the fact that in $\mathcal{H}_q$ any set of $5$ vertices spans $0$ or $2$ hyperedges.
\subsection{Switching tournaments}\label{sec:switching}
Recall from the introduction that Baber's construction associates to each tournament $T$ a 4-hypergraph $\mathcal{H}_T$ with the property that 5 vertices span 0 or 2 hyperedges. In this section we consider the opposite problem of associating a tournament to any hypergraph satisfying this condition.
The notion, examined by Frankl and F\"{u}redi \cite{FF84}, of $3$-uniform hypergraphs in which every $4$ vertices span $0$ or $2$ hyperedges is a special case of what is called a `two-graph' (not to be confused with a graph). Two-graphs were introduced by Higman (see \cite{dT77}) and are defined to be $3$-uniform hypergraphs with the property that every set of $4$ vertices spans an even number of hyperedges. As described by Cameron and van Lint \cite{CL}, a two-graph can be constructed from a graph $G = (V, E)$ by defining a hypergraph on $V$ whose hyperedges are the sets of $3$ vertices that contain an odd number of edges in $G$. Furthermore, every two-graph arises from such a construction. A survey on two-graphs was given by Seidel and Taylor \cite{ST81}.
For example, the $3$-uniform hypergraph on $6$ vertices with $10$ hyperedges given by Frankl and F\"{u}redi in \cite{FF84} corresponds to the graph shown in Figure \ref{fig:two-graph}. Note that the graph in Figure \ref{fig:two-graph} is the Paley graph for $\mathbb{F}_5$ with an additional isolated vertex (labeled $5$). This is the only such construction from a Paley graph with the property that every set of $4$ vertices contain either $0$ or $2$ subsets of size $3$ that span an odd number of edges. Indeed, all other Paley graphs either contain a copy of $K_4$ or else an induced subgraph consisting of $K_3$ and an isolated vertex.
\begin{figure}[htb]
\begin{minipage}{0.5\linewidth}
\begin{center}
\begin{tikzpicture}
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\node[vertex, label=left:{$5$}] at (0, 0) (5) {};
\node[vertex, label=right:{$4$}] at (18:2) (4) {};
\node[vertex, label=above:{$0$}] at (90:2) (0) {};
\node[vertex, label=left:{$1$}] at (162:2) (1) {};
\node[vertex, label=below:{$2$}] at (234:2) (2) {};
\node[vertex, label=below:{$3$}] at (306:2) (3) {};
\draw (0) -- (1) -- (2) -- (3) -- (4) -- (0);
\end{tikzpicture}
\end{center}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\begin{center}
Hyperedges:\\ \vspace*{5pt}
\begin{tabular}{ll}
015 &013\\
125 &124\\
235 &230\\
345 &341\\
045 &012
\end{tabular}
\end{center}
\end{minipage}
\caption{Two-graph representation of $3$-uniform hypergraph from \cite{FF84}}
\label{fig:two-graph}
\end{figure}
In \cite{CAM77} Cameron introduced the notion of an \textit{oriented two-graph} which, like the hypergraphs $\mathcal{H}_T$, may also be associated to a tournament $T$. Indeed, if $T$ is regarded as an antisymmetric function $f$ from ordered pairs of distinct vertices to $\{ \pm 1\}$ (where $f(x, y) = 1$ if and only if there is a directed edge from $x$ to $y$) then the associated oriented two-graph is given by the function $g$ defined on ordered triples of distinct vertices $x,y,z$ as follows: $$g(x,y,z)=f(x,y)f(y,z)f(z,x)$$ (see \cite[Section 2]{BC2000}). We contrast this with the definition of $\mathcal{H}_T$, which may be regarded as a function defined on unordered quadruples of distinct vertices.
The following operation on tournaments was introduced by Moorhouse \cite{EM95} in connection to oriented two-graphs.
\begin{definition}\label{def:switch}
Given a tournament $T$ on vertex set $V$ and a set $A \subseteq V$, then \emph{$T$ switched with respect to $A$} is the tournament obtained from $T$ by reversing the orientation of all edges between $A$ and $V \setminus A$.
Two tournaments $T_1$ and $T_2$ both on vertex set $V$ are said to be \emph{switching equivalent} if{f} there exists $A \subseteq V$ so that $T_2$ is precisely $T_1$ switched with respect to $A$.
\end{definition}
Moorhouse \cite{EM95} also notes that if $M$ is the $(0, \pm 1)$-adjacency matrix of a tournament $T$ and $T'$ is a switching equivalent tournament, then there is a $(\pm 1)$-diagonal matrix $D$ so that the $(0, \pm 1)$-adjacency matrix of $T'$ is $DMD$. Note that switching with respect to the empty set (or equivalently the entire vertex set) is a legal operation, but leaves the tournament unchanged. One can further verify directly that being switching equivalent is indeed an equivalence relation. To show transitivity, note that switching with respect to a set $A$ and then switching with respect to a set $B$ corresponds to switching with respect to the set $(A \cap B) \cup (A^c \cap B^c)$.
The notion of switching for tournaments is closely related to a concept of switching in graph theory (see \cite[Chapter 4]{CL}).
It was shown by Moorhouse \cite{EM95} that two tournaments are switching equivalent if and only if they induce the same oriented two-graph. In one direction, an analogous statement holds for the hypergraphs $\mathcal{H}_T$.
\begin{lemma}\label{l:tt'switch}
Let $T,T'$ be two tournaments which are switching equivalent. Then $\mathcal{H}_T=\mathcal{H}_{T'}$.
\end{lemma}
\begin{proof}
It suffices to consider the effect of switching on tournaments on $4$ vertices. Note that for any cyclic permutation of $4$ vertices $(x_1\ x_2\ x_3\ x_4)$, any switching operation changes the orientation of the edges between either $0$, $2$ or $4$ of the pairs $\{x_1, x_2\}, \{x_2, x_3\}, \{x_3, x_4\}$, and $\{x_4, x_1\}$. Thus, if the number of reverse-oriented edges is odd before switching, it remains odd after switching also. Thus, by Fact \ref{lem:circ}, both edges and non-edges in hypergraphs $\mathcal{H}_T$ are preserved by switching.
\end{proof}
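As a sanity check on Lemma \ref{l:tt'switch}, the sketch below (illustrative only) implements switching for a tournament stored as a set of arcs and confirms, for the extended Paley tournament with $p = 7$ and an arbitrarily chosen set $A$, that switching leaves $\mathcal{H}_T$ unchanged; here $\mathcal{H}_T$ is computed via the parity condition of Fact \ref{lem:circ}:

```python
# Switching a tournament and invariance of H_T (illustrative; arcs (u, v)
# mean u -> v, and the base tournament is T*(7) from earlier sketches).
from itertools import combinations, permutations

def chi(x, p):
    x %= p
    return 0 if x == 0 else (1 if pow(x, (p - 1) // 2, p) == 1 else -1)

def D(u, v):
    return u[0] * v[1] - u[1] * v[0]

def extended_paley_tournament(p):
    V = [(x, 1) for x in range(p)] + [(1, 0)]
    return V, {(a, b) if chi(D(b, a), p) == 1 else (b, a)
               for a, b in combinations(V, 2)}

def switch(arcs, A):
    """Reverse every arc with exactly one endpoint in A."""
    A = set(A)
    return {(v, u) if (u in A) != (v in A) else (u, v) for (u, v) in arcs}

def tournament_hypergraph(V, arcs):
    return {frozenset(q4) for q4 in combinations(V, 4)
            if all(sum((perm[(i + 1) % 4], perm[i]) in arcs
                       for i in range(4)) % 2 == 1
                   for perm in permutations(q4))}

V, arcs = extended_paley_tournament(7)
A = {(0, 1), (2, 1), (1, 0)}
print(tournament_hypergraph(V, switch(arcs, A)) == tournament_hypergraph(V, arcs))  # True
```

Note that switching twice with respect to the same set $A$ restores the original tournament, illustrating that switching is an involution for each fixed $A$.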
The converse to Lemma \ref{l:tt'switch} fails: reversing all edges of a tournament preserves the parity condition in Fact \ref{lem:circ}, and hence the hypergraph, yet the first tournament in Figure \ref{fig:tourn} is clearly not switching equivalent to the tournament obtained by reversing all of its edges.
In fact, even if one allows the additional operation of ``reversing all the edges,'' the converse still fails as the two tournaments on 5 vertices, given in Figure \ref{fig:not-switching} show. The two tournaments differ only in the orientation of the edge between vertices $1$ and $5$ and yet both yield the $4$-uniform hypergraph $\{1234, 2345\}$.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
[decoration={markings, mark=at position 0.6 with {\arrow{>}}}]
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\node[vertex, label=above:{$1$}] at (90:2) (1) {};
\node[vertex, label=right:{$2$}] at (18:2) (2) {};
\node[vertex, label=below:{$3$}] at (306:2) (3) {};
\node[vertex, label=below:{$4$}] at (234:2) (4) {};
\node[vertex, label=left:{$5$}] at (162:2) (5) {};
\draw[postaction={decorate}] (1) -- (2);
\draw[postaction={decorate}] (1) -- (3);
\draw[postaction={decorate}] (1) -- (4);
\draw[postaction={decorate}, ultra thick] (1) -- (5);
\draw[postaction={decorate}] (3) -- (2);
\draw[postaction={decorate}] (2) -- (4);
\draw[postaction={decorate}] (5) -- (2);
\draw[postaction={decorate}] (4) -- (3);
\draw[postaction={decorate}] (5) -- (3);
\draw[postaction={decorate}] (5) -- (4);
\end{tikzpicture} \hspace{20pt}
\begin{tikzpicture}
[decoration={markings, mark=at position 0.6 with {\arrow{>}}}]
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\node[vertex, label=above:{$1$}] at (90:2) (1) {};
\node[vertex, label=right:{$2$}] at (18:2) (2) {};
\node[vertex, label=below:{$3$}] at (306:2) (3) {};
\node[vertex, label=below:{$4$}] at (234:2) (4) {};
\node[vertex, label=left:{$5$}] at (162:2) (5) {};
\draw[postaction={decorate}] (1) -- (2);
\draw[postaction={decorate}] (1) -- (3);
\draw[postaction={decorate}] (1) -- (4);
\draw[postaction={decorate}, ultra thick] (5) -- (1);
\draw[postaction={decorate}] (3) -- (2);
\draw[postaction={decorate}] (2) -- (4);
\draw[postaction={decorate}] (5) -- (2);
\draw[postaction={decorate}] (4) -- (3);
\draw[postaction={decorate}] (5) -- (3);
\draw[postaction={decorate}] (5) -- (4);
\end{tikzpicture}
\end{center}
\caption{Two tournaments that determine the hypergraph $\{1234, 2345\}$}
\label{fig:not-switching}
\end{figure}
The following application of switching is used in later results; it also shows that the defining feature of the extended Paley tournament construction, namely a vertex with all incident edges directed towards it, is not a particularly unusual condition.
\begin{lemma}\label{lem:univ}
Let $T$ be a tournament on vertex set $V$ and let $w \in V$. There is a tournament $T'$ that is switching equivalent to $T$ in which all edges incident to $w$ are directed towards $w$.
\end{lemma}
\begin{proof}
Let $A$ be the set of vertices $a \in V\setminus \{w\}$ for which the edge between $a$ and $w$ is directed away from $w$. Switching $T$ with respect to $A$ yields a tournament with all edges incident to $w$ directed towards it.
\end{proof}
Similarly, one could obtain a tournament with all edges directed away from a particular vertex.
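The proof of Lemma \ref{lem:univ} translates directly into a two-line procedure. A sketch (our illustration), storing a tournament as a dictionary with beats[(x, y)] = True exactly when the edge is directed $x \to y$:

```python
def point_towards(beats, V, w):
    """The Lemma's proof as code: switch with respect to the out-neighbours
    of w, so that every edge incident to w ends up directed towards w."""
    A = {a for a in V if a != w and beats[(w, a)]}
    return {(x, y): (not b if (x in A) != (y in A) else b)
            for (x, y), b in beats.items()}

# tiny check on the 3-cycle 0 -> 1 -> 2 -> 0, with w = 0
beats = {(0, 1): True, (1, 0): False, (1, 2): True,
         (2, 1): False, (2, 0): True, (0, 2): False}
T2 = point_towards(beats, {0, 1, 2}, 0)
assert T2[(1, 0)] and T2[(2, 0)]
```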
As an application of switching, we have the following result which gives a \textit{necessary} condition on a 4-hypergraph $\mathcal{H}$ for the existence of a tournament $T$ with $\mathcal{H}=\mathcal{H}_T$:
\begin{proposition}\label{p:oddcycle}
Let $\mathcal{H}$ be a 4-hypergraph with the property that $\mathcal{H}=\mathcal{H}_T$ for some tournament $T$ with vertex set $V$. Then for each $u,v \in V$ the graph given by the neighbourhood of the pair $\{u, v\}$:
\[
N_\mathcal{H}(u,v):=\{\{x,y\} \mid \{u,v,x,y\} \in \mathcal{H}\}
\]
is a bipartite graph on $V \setminus \{u,v\}$.
\end{proposition}
\begin{proof}
Suppose that $\mathcal{H}=\mathcal{H}_T$ for some tournament $T$ with vertex set $V$. By Lemma \ref{lem:univ}, we can assume that all edges incident to $u$ are directed towards $u$ in $T$. If $N_{\mathcal{H}}(u, v)$ contains no cycles, then it is a forest and so bipartite. Suppose, then, that $N_{\mathcal{H}}(u, v)$ contains a cycle with vertices $a_1, a_2, a_3, \ldots, a_k$ (in that order).
Consider now the orientations of the edges in $T$ between consecutive vertices of the cycle, and between the vertices of the cycle and $v$. Let $1 \leq i \leq k-1$ and suppose that the edge between $a_i$ and $a_{i+1}$ is directed $a_i \to a_{i+1}$. Since $\{u, v, a_i, a_{i+1}\} \in \mathcal{H}_T$ and all edges incident to $u$ are directed towards $u$, the remaining edges incident to $v$ are directed $v \to a_i$ and $a_{i+1} \to v$. Similarly, if the edge between $a_i$ and $a_{i+1}$ is directed $a_{i+1} \to a_i$, then the edges incident to $v$ are directed $a_i \to v$ and $v \to a_{i+1}$. In either case, the vertices of the cycle alternate between being in-neighbours of $v$ and out-neighbours of $v$. Since this also holds for the pair $\{a_{k}, a_1\}$, $k$ must be even.
Thus, if the graph $N_{\mathcal{H}}(u,v)$ contains any cycles, they are even cycles and hence $N_{\mathcal{H}}(u,v)$ is a bipartite graph.
\end{proof}
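Proposition \ref{p:oddcycle} yields a cheap computational filter for candidate hypergraphs. A sketch (our illustration), with a hypergraph stored as a set of $4$-element frozensets and each neighbourhood graph two-coloured by depth-first search:

```python
from itertools import combinations

def is_bipartite(vertices, edges):
    """Two-colour the graph by depth-first search; edges are 2-element frozensets."""
    adj = {v: set() for v in vertices}
    for e in edges:
        x, y = tuple(e)
        adj[x].add(y)
        adj[y].add(x)
    colour = {}
    for s in vertices:
        if s in colour:
            continue
        colour[s] = 0
        stack = [s]
        while stack:
            v = stack.pop()
            for u in adj[v]:
                if u not in colour:
                    colour[u] = 1 - colour[v]
                    stack.append(u)
                elif colour[u] == colour[v]:
                    return False
    return True

def passes_necessary_condition(H, V):
    """Check the Proposition's condition for every pair {u, v}."""
    for u, v in combinations(sorted(V), 2):
        N = {e - {u, v} for e in H if {u, v} <= e}
        if not is_bipartite([x for x in V if x not in (u, v)], N):
            return False
    return True
```

For instance, any hypergraph in which some pair's neighbourhood contains a triangle is rejected immediately.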
The converse of Proposition \ref{p:oddcycle} is not true. For example, the $4$-uniform hypergraph given by sets in \eqref{eq:non-tourn} has the property that every $5$-set of vertices spans either $0$ or $2$ hyperedges and the neighbourhood graph of every pair of vertices is bipartite, but one can verify directly that the hypergraph cannot be represented by a tournament.
\begin{equation}\label{eq:non-tourn}
\begin{tabular}{llll}
$\{6, 7, 8, 11 \}$, &$\{6, 7, 9, 12\}$, &$\{6, 7, 10, 11\}$, &$\{6, 7, 10, 12\}$,\\
$\{6, 8, 9, 12\}$, &$\{6, 9, 10, 11\}$, &$\{6, 9, 11, 12\}$, &$\{7, 8, 9, 11\}$,\\
$\{7, 8, 10, 12\}$, &$\{7, 8, 11, 12\}$, &$\{8, 9, 10, 11\}$, &$\{8, 9, 10, 12\}$
\end{tabular}
\end{equation}
The hypergraph given in Equation \eqref{eq:non-tourn} consists of the hyperedges in $\mathcal{M}$ that contain none of the vertices from $\{1, 2, 3, 4, 5\}$.
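The stated $0$-or-$2$ property of this example can be confirmed mechanically. A short script (our illustration) checking every $5$-subset of the vertex set $\{6,\dots,12\}$; the bipartiteness of each neighbourhood graph can be verified in the same spirit:

```python
from itertools import combinations

# the 12 hyperedges of Eq. (eq:non-tourn), on vertex set {6, ..., 12}
H = [{6, 7, 8, 11}, {6, 7, 9, 12}, {6, 7, 10, 11}, {6, 7, 10, 12},
     {6, 8, 9, 12}, {6, 9, 10, 11}, {6, 9, 11, 12}, {7, 8, 9, 11},
     {7, 8, 10, 12}, {7, 8, 11, 12}, {8, 9, 10, 11}, {8, 9, 10, 12}]

# number of hyperedges spanned by each 5-subset of the vertex set
spans = [sum(e <= set(q) for e in H) for q in combinations(range(6, 13), 5)]
assert set(spans) == {0, 2}   # every 5-set spans either 0 or 2 hyperedges
```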
Proposition \ref{p:oddcycle} can be used to show that even extremal hypergraphs for the property that any $5$ vertices span either $0$ or $2$ hyperedges need not arise from tournaments.
\begin{corollary}\label{cor:M11-notourn}
There is no tournament $T$ such that $\mathcal{M} = \mathcal{H}_T$.
\end{corollary}
\begin{proof}
For any two vertices $a, b \in [1, 12]$, consider the graph $$N_{\mathcal{M}}(a, b):=\left\{\{c, d\} \mid \{\infty,a,b,c, d\} \in \mathcal{M}\right\}.$$ By \cite[Lemma 4.8]{DGHP} (or alternatively via a direct GAP \cite{GAP} computation), $N_{\mathcal{M}}(a, b)$ is isomorphic to the Petersen graph, which is not bipartite. Thus, by Proposition \ref{p:oddcycle}, there is no tournament $T$ with $\mathcal{M} = \mathcal{H}_T$.
\end{proof}
\section{Open questions}\label{sec:open}
Note that using the recursive construction of Frankl and F\"{u}redi \cite{FF84} with a Paley hypergraph, one can show that for any $\varepsilon >0$ and sufficiently large $n$, there is a $4$-uniform hypergraph with $\frac{1}{4}\binom{n}{4}(1-\varepsilon)$ hyperedges with the property that any $5$ vertices span at most $2$ hyperedges. However, for divisibility reasons, if the upper bound given in Proposition \ref{prop:ub0or2} is attained, then the number of vertices in the hypergraph is divisible by $4$.
\begin{question}
For which natural numbers $n \equiv 0 \pmod 4$ does there exist a $3-(n,4,n/4)$ design with the property that every set of 5 vertices spans either 0 or 2 hyperedges?
\end{question}
One might further ask for an improved upper bound in the cases when $n$ is not divisible by $4$. A slight improvement in Proposition \ref{prop:ub0or2} follows immediately by convexity.

While a complete classification of 4-hypergraphs with the property that every set of 5 vertices spans either 0 or 2 hyperedges (which parallels that given for 3-hypergraphs in \cite{FF84}) appears difficult, it may be of interest to compare the ways in which these hypergraphs arise. As previously mentioned, two natural sources of examples are given by:
\begin{itemize}
\item[(i)] finite subsets of $S^2$ (where edges are given by 4-subsets of points whose convex hull contains the origin);
\item[(ii)] the hypergraphs $\mathcal{H}_T$ where $T$ is a tournament.
\end{itemize}
\begin{question}
What is the relationship between these two families of 4-hypergraphs? For example, can every hypergraph which arises from points on the unit sphere be realized using a tournament?
\end{question}
\section*{Acknowledgement}
The authors wish to thank John Talbot for the interesting discussion on this topic and Rahil Baber for sharing his tournament construction. We would also like to thank Jonathan Bober for showing us the usefulness of the identity in equation \eqref{eq:sumLegpairs} for counting certain sets of squares in prime fields.
\section{Introduction} \label{sec:introduction}
With the advent of deep and wide multi-band photometric surveys there has been
a resurgence of interest in photometric redshifts as a means of estimating the
distance to a range of astrophysical objects.
Depending on the objects of interest and the information to hand, the derived
photometric redshifts will be of varying precision and accuracy, but all can be
described by a probability density function (PDF).
As our understanding of photometric redshifts improves our confidence in, and
ability to characterise, these PDFs, their use in cosmological statistical analyses is sure to increase.
In the sense that photo-$z$s represent color-redshift relations,
the use of an {\it ensemble\/} of PDFs for a {\it set\/} of objects is a
decades-old approach \citep[e.g.][and references therein]{Koo99}.
An example of this is the selection of cluster galaxies
\citep[e.g., via the Red Sequence;][]{Gla00}.
Cluster galaxy selection techniques have, in fact, recently been updated to
incorporate full PDFs \citep{vanB09} but approaches that use full PDFs remain
rare.
\citet{Sub96} introduced a method that used Gaussian PDFs to estimate
luminosity functions, a problem that has been studied for more arbitrary
PDF shapes by \citet{Che03} and \citet{Sheth07}.
Full PDFs are particularly underutilised in clustering work, where the use of
broad redshift bins is more prevalent.
By using broad redshift bins to measure photometric clustering one can
ameliorate uncertainties in the photo-$z$ ``peak", but typically at the
expense of constraining power.
One of the most fundamental statistics of any population of objects, and one
which carries much physical information, is the 2-point correlation function
\citep[e.g.][]{Tot69}.
Provided the redshift distribution of the objects is well known, the underlying
3D clustering can be robustly inferred from the measured clustering in
projection \citep{Limber}, but the number of objects required increases
dramatically when the redshift distribution is broad. For this reason,
estimates of the 2-point function can in principle gain tremendously from
improved utilization of the redshift information associated with photometric objects.
Often photo-$z$s are derived from the information in a subset of the objects
for which spectroscopy has been obtained.
In addition to calibrating the photo-$z$s, this subset of spectroscopic
objects can be used as distance anchors with which to set the real-space
transverse scale for distances to the photometric objects.
Measuring the cross-clustering of photometric objects around spectroscopic
objects has several advantages:
the properties of the spectroscopic objects, such as luminosity or spectral
type are precisely known;
the photometric objects are distributed more uniformly, meaning their
background clustering signature (the ``mask'') is simple to obtain and
issues like fiber collisions and more complex hidden selection dependencies that
might be introduced by the spectrographic setup are completely absent;
the cross-correlation probes the clustering only in a well-defined and
localised $z$-range, reducing the sensitivity to photometric outliers while
the number of pairs is dramatically increased by using the higher number
density of the photometric sample to improve statistics.
The use of spectroscopic-photometric cross-correlations to estimate clustering is not new \citep[e.g.][]{Lon79,Yee84,Yee87,Wol00,Hil91}; however, using the information inherent in full PDFs to improve the
clustering signal in cross-correlation methods is in its infancy.
In this paper we develop a clustering measure which uses the full
photometric redshift PDF and which optimally weights
photometric-spectroscopic pairs in the limit that the error is
Poisson. Our method circumvents the need to use the peak of the
photometric redshift PDF to select which objects lie in a redshift bin
of interest, or indeed to bin objects at all. It allows every object
that can be assigned a photometric redshift to be usefully
cross-correlated against every spectroscopic object in the interval of
interest. We also provide simple, informative equations that indicate
when photometric redshifts are precise enough, for a given sample
size, to provide improved constraints over the spectroscopic
auto-correlation. We find that this condition is very hard to satisfy,
which explains why even relatively small spectroscopic surveys can
produce clustering measurements comparable to much larger photometric
samples. We additionally provide a quick method to calculate how much
our optimal weighting scheme for spectroscopic-photometric
cross-correlations can help satisfy this condition by using full PDF
information. The various equations we discuss should be very useful in
establishing a survey design to optimise clustering measurements.
To demonstrate our approach with real-world data we apply our new
method to measure the clustering of quasars (QSOs). The measurement of
QSO clustering sheds light on both QSO demographics and the physics
powering these systems. The amplitude of clustering on large scales
is related to the masses of the dark matter halos which host the QSOs
(their environment), which together with the observed number density
allows QSO lifetimes or duty cycles \citep{ColKai89,HaiHui01,MarWei01}
to be constrained. The small-scale clustering of QSOs can shed light
on their triggering mechanism, and on the nature of QSO progenitors.
With the advent of large, well-characterised samples, QSOs can now be
efficiently photometrically classified
\citep[e.g.][]{Ric04,Dab09,Ricopt09,RicIR09} but still have quite
imprecise photometric redshifts
\citep[e.g.][]{Bud01,Ric01,Wei04,Bal08}. This suggests that an
estimator that takes full advantage of the information in a
photometric redshift might be expected to dramatically improve
measurements of the clustering of QSOs. Most previous work on QSO
clustering used purely spectroscopic analysis
\citep{PorMagNor04,Cro05,PorNor06,Hen06,She07,Ang08,Mye08}, but all such
analyses are limited by the extremely low number density of objects
with spectra. Higher number densities of objects can be achieved by
using photometric QSO selection \citep{Mye06,Mye07a,Mye07b} but
systematic errors must be carefully controlled because photometric
redshifts for QSOs are still frequently inaccurate. The use of
cross-correlations to measure QSO clustering has thus proven quite
popular \citep[e.g.][]{Cro04,AdeSte05a,AdeSte05b,Ser06,DEEP2,Str08,PWNP09,Mou09}. Our new technique
builds on such approaches, particularly that of \cite{PWNP09}, by
incorporating new information from photometric PDFs to improve the
clustering signal.
We note that, although we choose QSOs as our illustrative data set, our
methods and results are significantly more general and {\it our optimal
estimator will improve the signal for any real-space clustering measurement
that uses photometric redshifts}. Although the methods developed in this paper can be easily applied to any spectroscopic-photometric cross-correlation measurement, they will be of particular use in upcoming surveys where sparse spectroscopic data (e.g., from BOSS), is embedded in deeper photometric data, such as from PanSTARRS, DES and the LSST.
The outline of the paper is as follows.
\S\ref{sec:method} introduces our new optimal spectroscopic-photometric
cross-clustering estimator.
In \S\ref{sec:data} we introduce the QSO data we use as an example, and
in \S\ref{sec:qsoresults} we present the clustering results of this sample
and use it to demonstrate the improvement our new technique provides over
existing estimators that do not utilise the full PDF.
We finish in \S\ref{sec:conclusions} with some conclusions and lessons
learned.
We assume a $\Lambda$CDM cosmological model with
$\Omega_{\rm m}=0.25$ and $\Omega_\Lambda=0.75$, consistent with the maximum likelihood estimates from the 5-year WMAP data \citep{Dun09}. All quoted magnitudes are corrected for Galactic extinction using the dust maps of \citet{Sch98}.
\section{Methodology} \label{sec:method}
\subsection{Real Space Clustering Measurements with Photometric Objects}
\label{sec:oldapproach}
Imagine we have a set of objects for which multi-band photometry has allowed
us to estimate photometric redshifts and a second (possibly disjoint)
set of objects for which spectroscopic redshifts are available.
For the spectroscopic objects we know (up to small uncertainties due to
peculiar velocities and uncertainties in the background cosmology) a physical
distance to each object, which can be used to anchor the physical scale.
Consider the cross-clustering between the set of objects with known
spectroscopic redshifts and the set of objects for which only photometric
redshifts are known. To begin let us assume that the spectroscopic objects
all lie at a single redshift (and hence distance, $\chi_\star$) and relax
this assumption later. We may estimate\footnote{More complex estimators,
such as that of \cite{LanSza93}, could also be used. One would simply substitute each
estimator into Eq.~(\ref{eqn:cweightfinal}) or (\ref{eqn:enhanced}) evaluating
the $R_s(\chi_\star\theta)$ terms at different angular positions but at the
comoving distance of the spectroscopic data point. We prefer the robustness
of Eq.~(\ref{eq:wtheta_DDDR}) to likely inaccuracies in the spectroscopic ``mask" over, e.g., the reduced variance
of the \cite{LanSza93} estimator.} the correlation function using the $DD/DR$
estimator \citep[e.g.][]{Sha83}
\begin{equation}
w_{\theta}(R) = \frac{N_R}{N^{\rm phot}}\frac{D_sD_p(R)}{D_sR_p(R)} - 1 \,\,,
\label{eq:wtheta_DDDR}
\end{equation}
where we are measuring the cross-clustering of pairs of spectroscopic
and photometric objects, ``$D$'' denotes a data point ``$R$'' denotes
a point drawn from a random catalogue that mimics the data distribution
and the subscripts ``$p$'' and ``$s$'' denote ``photometric" and ``spectroscopic".
The factor $N_R/N^{\rm phot}$ scales the counts appropriately if the random
catalogue has a different size than the photometric catalogue.
We denote the random points $R_p$ both to specify that the random distribution
mimics the photometric data and to distinguish the term from $R=\chi_\star\theta$,
the transverse separation.
Note that Eq.~(\ref{eq:wtheta_DDDR}) only requires knowledge of the angular
selection function, or ``mask'', of the photometric data, not the
typically far more complex selection function of the spectroscopic data.
We have labeled this estimator $w_\theta(R)$ because it looks like a normal
angular correlation function in the photometric sample, except that angles
have been converted to distances using the distance to the spectroscopic
partner.
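In code, Eq.~(\ref{eq:wtheta_DDDR}) amounts to two pair counts and a ratio. A flat-sky sketch (our own illustration, brute-force and suitable only for small catalogues; the array names are hypothetical):

```python
import numpy as np

def w_theta_R(spec_xy, chi_star, phot_xy, rand_xy, R_edges):
    """D_sD_p / D_sR_p estimator of Eq. (eq:wtheta_DDDR) in the flat-sky
    approximation; angles become transverse separations via R = chi_star * theta."""
    def counts(a, b):
        theta = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return np.histogram((chi_star * theta).ravel(), bins=R_edges)[0]
    DsDp = counts(spec_xy, phot_xy).astype(float)
    DsRp = counts(spec_xy, rand_xy).astype(float)
    # N_R / N_phot rescales the random counts to the size of the data catalogue
    return (len(rand_xy) / len(phot_xy)) * DsDp / DsRp - 1.0

# sanity check: if the "data" are distributed exactly like the randoms, w vanishes
spec = np.array([[0.0, 0.0]])
phot = np.array([[0.1, 0.0], [0.0, 0.2]])
w = w_theta_R(spec, 1.0, phot, phot.copy(), np.array([0.0, 0.15, 0.3]))
assert np.allclose(w, 0.0)
```

In practice one would replace the brute-force distance matrix with a tree-based pair counter, but the estimator itself is unchanged.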
As detailed in \citet{PWNP09} we infer the projected, real-space,
cross-correlation function, $w_p(R)$, under the assumption that the
clustering is constant across the redshift slice and within the \citet{Limber}
approximation, using the relation
\begin{eqnarray}
w_\theta(R) &=& \int d\chi\ f(\chi)
\,\xi\left(R, \chi-\chi_\star\right) \nonumber \\
&\approx& f(\chi_\star) \int d\Delta\chi
\ \xi\left(R, \Delta\chi\right) \nonumber \\
&=& f(\chi_\star) w_{p}(R) \,\,,
\label{eq:wtheta}
\end{eqnarray}
where $f(\chi)$ is the normalised radial distribution function of the
photometric objects with $\int f(\chi)d\chi=1$ and all of the spectroscopic
objects lie at $\chi_\star$.
Note that this is a real space measurement and for broad enough $f(\chi)$
we can use the real-space correlation function in the integral, avoiding
the need to model redshift-space distortions.
Also note that we are making use of the fact that $f(\chi)$ is typically almost
constant across the entire line-of-sight range of integration employed in
defining $w_p$. If this is not true then a more sophisticated analysis, which
factors in the changing selection function of ``random pairs'' with distance, is required.
For a distribution of spectroscopic redshifts one replaces $f(\chi_\star)$
in the above with the average, $\langle f(\chi_\star)\rangle$, across the
spectroscopic distribution. For a small spectroscopic bin
($\chi_1\leq\chi<\chi_2$)
the redshift distribution will typically be flat.
In this case, $\langle f(\chi)\rangle$ tells us the fraction of objects in
the photometric data set that genuinely have redshifts in the spectroscopic
bin of interest ($f_z$) per comoving interval
($\langle f(\chi_\star)\rangle \approx f_z/(\chi_2-\chi_1)$).
We can use Eq.~(\ref{eq:wtheta}) to answer the question: how large does a
photometric sample need to be before a photometric-spectroscopic
cross-clustering measurement can compete with a spectroscopic auto-correlation?
Clearly, clustering estimates using photometric objects will improve as
photometric redshift precision (and accuracy) approaches the level of a
spectroscopic redshift (though in this limit our assumption of constant
$f(\chi)$ breaks down).
In the limit that the objects of interest are rare enough that their
clustering is dominated by Poisson shot-noise, then the angular bins
in $w_{\theta}(R)$ are independent and
\begin{equation}
\frac{\delta w_\theta}{1+w_\theta} = N_{\rm pair}^{-1/2}
\quad \Rightarrow \quad
\frac{\delta w_p}{w_p} = \frac{f^{-1}+w_p}{w_p}\ N_{\rm pair}^{-1/2}
\label{eqn:dwp}
\end{equation}
where $N_{\rm pair}$ is the number of data pairs in the bin and $f$
is $\langle f(\chi_\star)\rangle$ for the photometric sample. Note
that both $f^{-1}$ and $w_p$ have dimensions of length.
Eq.~(\ref{eqn:dwp}) neatly shows the main drawback of spectroscopic-photometric
cross-correlation measurements as compared to auto-correlation measurements
using only spectroscopic objects. If the photometric redshift solutions are
significantly extended along the line-of-sight then $f$ is small (perhaps
as low as the reciprocal of the depth of the survey). This suppresses the
measured clustering, $w_\theta$, which for a given sample is proportional to
$f$. A very large number of pairs are thus necessary to measure $w_\theta$
with any precision.
How large is the typical suppression?
When measuring the spectroscopic auto-correlation the clustering is
integrated along the line-of-sight to eliminate the effects of
redshift-space distortions. The limits of integration tend to vary
from author to author but typically the line-of-sight interval is
$\mathcal{O}(100\,h^{-1}\,{\rm Mpc})$. In the language of
Eq.~(\ref{eqn:dwp}) such an auto-correlation estimate can approach a
limit of $f\approx 0.01\,h\,{\rm Mpc}^{-1}$. If the photometric
sample is extended over, say, $1\,h^{-1}$Gpc, then
$f=\mathcal{O}(10^{-3}\,h\,{\rm Mpc}^{-1})$, and the number of
photometric objects needs to be larger by a factor of $\sim100$ in
order to measure the clustering as well as if precise redshifts were
known.
If the extent is $500\,h^{-1}$Mpc one needs $\sim 25$ times more
objects, and for $300\,h^{-1}$Mpc one needs $\sim 10$ times as many.
Of course, if obtaining spectroscopy or improved PDFs for the photometric
sample is unrealistic then one has no other choice but to use the existing
information.
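The factors quoted above follow from the Poisson-limit bookkeeping of Eq.~(\ref{eqn:dwp}): the fractional error scales as $f^{-1}N_{\rm pair}^{-1/2}$, so matching a spectroscopic measurement requires $(f_{\rm spec}/f_{\rm phot})^2$ times as many pairs. A quick numerical check (the $f$ values are the illustrative ones used in the text):

```python
def sample_size_factor(f_spec, f_phot):
    """Relative number of pairs needed for equal fractional error on w_p,
    in the weak-clustering Poisson limit of Eq. (eqn:dwp):
    error ~ 1/(f sqrt(N_pair)), so N scales as (f_spec/f_phot)**2."""
    return (f_spec / f_phot) ** 2

f_spec = 0.01  # ~ (100 h^-1 Mpc)^-1, a typical auto-correlation window
for extent in (1000.0, 500.0, 300.0):  # photometric extent in h^-1 Mpc
    # prints approximately 100, 25 and 9, matching the estimates in the text
    print(extent, sample_size_factor(f_spec, 1.0 / extent))
```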
\subsection{An Optimal Estimator for Real-Space Clustering using
Photometric Redshifts}
We have noted two major drawbacks to measuring the real-space clustering of
photometrically classified objects around spectroscopic objects.
First, it is not clear how to establish which photometric objects should
be cross-correlated with a given set of spectroscopic objects.
The typical approach would be to use objects with a peak photometric redshift
solution in the redshift bin of interest.
This, however, discards much of the information codified in the photometric
redshift PDF and ignores the fact that an object with a peak photometric
redshift in the range of interest may actually have less chance of being in
that redshift range than an object with a peak photometric redshift beyond
that range, particularly as the peak of the PDF may itself be poorly defined.
We illustrate this in Figure~\ref{fig:problempdfs}.
The second drawback is the possible extension of the ensemble of the
photometric redshifts along the line-of-sight, which causes $f$ to be
small in Eq.~(\ref{eqn:dwp}).
\begin{figure}
\begin{center}
\resizebox{3.2in}{!}{\includegraphics{f1.eps}}
\end{center}
\caption{In analyses that use the PDF peak, only the PDF in the centre panel ($z_{\rm peak}=2.17$) would be considered to overlap the spectroscopic bin of interest ($1.8 < z_{\rm spec} < 2.2$ in this plot). In reality each PDF has a 50\% overlap with the spectroscopic bin. We illustrate some typical problems with using PDF peaks; PDFs that overlap the spectroscopic bin but have a preferred peak solution far from the bin (a ``catastrophic" redshift; upper panel), PDFs with a peak solution in the bin but that are smeared out across a large range of redshifts (centre panel), and well-defined PDFs that lie just outside the bin of interest (lower panel). The PDFs are for real photometric QSOs calculated using the method of \citet{Bal08}.}
\label{fig:problempdfs}
\end{figure}
We now introduce a new method designed to circumvent these issues.
Consider breaking the photometric sample into very thin slices in photometric
redshift, $z_p$, and labelling the slices from $i=1,\cdots,k$. Each photometric sample,
$i$, provides an estimate of $w_p(R)$ via
$w_\theta(R)/f_i$. Writing this estimate as $w_i(R)$, with an error
proportional to $f_i^{-1}N_{\rm pair}^{-1/2}$ in the limit of weak
clustering, we can inverse variance weight the different measurements
to obtain
\begin{equation}
w_p(R) = \sum_i N_i^{\rm phot} f_i^2 w_i(R) \Bigm\slash
\sum_i N_i^{\rm phot} f_i^2
\label{eqn:wpvarweight}
\end{equation}
where $N^{\rm phot}_i$ is the number of photometric objects in sample $i$.
This circumvents the issue of which photometric objects to
cross-correlate against a set of spectroscopic objects in a chosen bin
of redshift. Clearly photometric samples which peak at very different
redshifts from the spectroscopic sample are significantly
down-weighted in the sum. Note that our method also down-weights both
objects with unusual colours that might have multi-peaked PDFs and
objects with poorly constrained photometry, such as near survey
limits, where the PDF might be very broad.
Since the binning is so far arbitrary we can consider the limit where each
slice in Eq.~(\ref{eqn:wpvarweight}) represents a single photometric object,
i.e.~$N_i^{\rm phot}=1$ for each $i$.
In this case photometric objects that have some overlap with the spectroscopic
bin of interest are included in the sum and photometric objects with zero
overlap have zero weight.
Treating the photometric objects individually, rather than in an ensemble,
removes the need for any arbitrary binning and effectively reduces the
extension of the ensemble PDF along the line-of-sight and should thus
significantly improve the clustering signal-to-noise.
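The inverse-variance combination of Eq.~(\ref{eqn:wpvarweight}) is then a one-line weighted average. A minimal sketch (our illustration):

```python
import numpy as np

def combine_wp(w_i, N_i, f_i):
    """Inverse-variance combination of per-slice estimates of w_p with
    weights sigma_i^-2 = N_i * f_i**2, as in Eq. (eqn:wpvarweight)."""
    w_i, N_i, f_i = map(np.asarray, (w_i, N_i, f_i))
    wgt = N_i * f_i ** 2
    return np.sum(wgt * w_i) / np.sum(wgt)

# an object whose PDF has no overlap with the bin (f_i = 0) drops out entirely
assert np.isclose(combine_wp([5.0, -99.0], [1, 1], [0.002, 0.0]), 5.0)
```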
\begin{figure}
\begin{center}
\resizebox{3.2in}{!}{\includegraphics{f2.eps}}
\end{center}
\caption{The calculation of $\langle f(\chi_\star)\rangle$ and $f_i$, the
``comoving overlaps'', in units of $10^{-3}~h~{\rm Mpc^{-1}}$. The upper panel
demonstrates the old method (\S\ref{sec:oldapproach}), in which the photometric redshift PDFs are combined into an ensemble, normalised, comoving distribution, which is then averaged over the spectroscopic bin of interest ($1.8 < z_{\rm spec} < 2.2$ in this plot) to give $\langle f(\chi_\star)\rangle$. The lower panels demonstrate our new bin-weighted estimator
(Eq.~\ref{eqn:cweightfinal}) in which each PDF is transformed into a
normalised comoving distribution and averaged across the bin of
interest $f_1, f_2, f_3...f_k$. The lower panels displays the case for
$N_i^{\rm phot}=1$ in Eq.~(\ref{eqn:wpvarweight}) but any number $N^{\rm phot}$ of PDFs can be
combined into an ensemble.}
\label{fig:chibin}
\end{figure}
Because the weights in Eq.~(\ref{eqn:wpvarweight}) are
$\sigma_i^{-2} = N_i^{\rm phot} f_i^2$
a rough determination of how much this new estimator will improve the
signal-to-noise of a $w_p$ estimate over existing methods, which only
consider objects that have a peak photometric redshift in the bin of
interest is
\begin{equation}
\sum_i N_i^{\rm phot}f_i^2 \Bigm\slash n\langle f(\chi_\star)\rangle^2
\label{eqn:comp}
\end{equation}
where the $i$ subscripts represent our new optimal estimator for a
slice containing $N_i^{\rm phot}$ photometric objects and $n$ represents the
number of photometric objects with a PDF peak in the
spectroscopic bin of interest. The $f_i$ are the comoving fractional
photometric redshift overlaps for objects in slice $i$ and $\langle
f(\chi_\star)\rangle$ is the same for the ensemble of photometric
objects with a peak photometric redshift in the spectroscopic bin of
interest. This is illustrated in Figure~\ref{fig:chibin}, in which the upper panel
plots the ensemble of the ($n=110410$) PDFs with $1.8 < z_{\rm peak} < 2.2$.
This ensemble has an $\langle f(\chi_\star)\rangle=1.26\times10^{-3}~h~{\rm Mpc^{-1}}$ overlap with the true range $1.8 < z < 2.2$.
The lower panels plot three individual (i.e. $N_1^{\rm phot}=N_2^{\rm phot}=N_3^{\rm phot}=1$) PDFs
and their overlaps with $1.8 < z < 2.2$.
\subsection{The Optimal Estimator in Practice}
In \S\ref{sec:qsoresults}, we illustrate the degree to which our optimal estimator
can improve clustering estimates for a ``typical'' analysis, using a sample
of spectroscopic and photometric QSOs.
QSOs may be particularly well suited to our estimator as they are rare
enough that their clustering is dominated by Poisson noise (e.g., see Figure~\ref{fig:bootstrap}) out to reasonably
large scales and $f(\chi)$ is quite broad.
We note, though, that our optimal estimator should improve the signal-to-noise
for any photometric clustering analysis.
The exact methodology we use in practice is as follows.
Eq~(\ref{eqn:wpvarweight}) can be rewritten as
\begin{equation}
w_p(R) = \sum_i c_i w_i^{\theta}(R)
\label{eqn:cweight}
\end{equation}
\noindent where
\begin{equation}
c_i = N_i^{\rm phot} f_i \Bigm\slash
\sum_i N_i^{\rm phot} f_i^2
\label{eqn:fweight}
\end{equation}
and we have used $w_p=w_\theta/f_i$.
Now, consider substituting Eq.~(\ref{eq:wtheta_DDDR}), the typical $DD/DR$
estimator for $w(\theta)$, into Eq.~(\ref{eqn:cweight})
\begin{equation}
w_p(R) = \sum_i c_i \left[\frac{N_R}{N_i^{\rm phot}}\,\frac{D_sD_p(R)}{D_sR_p(R)} -
1\right]
\label{eqn:wpcweight}
\end{equation}
where the transverse separation, $R$, is evaluated using the angle
between a spectroscopic-photometric pair and the distance to the
spectroscopic object. Finally we obtain a simple equation for
calculating the real-space clustering of a sample of photometric
objects with full PDFs around a sample of spectroscopic objects
\begin{equation}
w_p(R) = N_R\sum_i \frac{c_i}{N_i^{\rm phot}}\,\frac{D_sD_p(R)}{D_sR_p(R)} -
\sum_ic_i \quad .
\label{eqn:cweightfinal}
\end{equation}
The $1/N_i^{\rm phot}$ factor reflects the fact that care must be taken to weight
the random catalogue correctly, i.e., on a slice-by-slice basis. Note that
$\sum_i c_i$ approximates $\langle f(\chi_\star)\rangle^{-1}$, the
reciprocal of the comoving overlap from the unweighted estimator. We prefer
Eq.~(\ref{eqn:cweightfinal}) to other versions of this expression as
it facilitates simple tracking of the data-data counts to construct
error estimates from subsampling of the counts.
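In practice the only non-trivial ingredient of the weights in Eq.~(\ref{eqn:fweight}) is each comoving overlap $f_i$: the PDF, expressed as a normalised comoving distribution, averaged over the spectroscopic bin, as in Figure~\ref{fig:chibin}. A sketch (our illustration) on a uniform $\chi$ grid; a trapezoidal rule on a non-uniform grid works equally well:

```python
import numpy as np

def comoving_overlap(chi, p, chi1, chi2):
    """f_i: the fraction of the (normalised) comoving PDF lying in the
    spectroscopic bin [chi1, chi2], divided by the bin width.
    Assumes a uniform chi grid (hypothetical inputs, for illustration)."""
    dchi = chi[1] - chi[0]
    p = p / (p.sum() * dchi)          # enforce the normalisation int p dchi = 1
    inside = (chi >= chi1) & (chi <= chi2)
    return p[inside].sum() * dchi / (chi2 - chi1)

# a PDF spread uniformly over 0 < chi < 100 has f_i = 1/100 for a bin
# covering the full range
chi = np.linspace(0.0, 100.0, 101)
assert abs(comoving_overlap(chi, np.ones_like(chi), 0.0, 100.0) - 0.01) < 1e-9
```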
Finally, we note that one can express the weights in Eq.~(\ref{eqn:fweight}) based on overlaps between each individual spectroscopic and photometric object (i.e. weighting fully by pairs rather than by how much a photometric object overlaps a {\em bin} of many spectroscopic objects) without loss of generality. The equations of interest would then reduce to
\begin{equation}
c_{i,j} = N_i^{\rm phot}N_j^{\rm spec}f_{i,j} \Bigm\slash
\sum_{i,j} N_i^{\rm phot}N_j^{\rm spec}f_{i,j}^2
\label{eqn:enhancedvarweight}
\end{equation}
\noindent where $N_j^{\rm spec}$ is the number of spectroscopic objects in slice $j$. We will choose $N_j^{\rm spec} = 1$ (as well as $N_i^{\rm phot} = 1$) throughout. Similarly
\begin{equation}
w_p(R) = N_RN_s\sum_{i,j} \frac{c_{i,j}}{N_i^{\rm phot}N_j^{\rm spec}}\,\frac{D_sD_p(R,\Delta\chi)}{D_sR_p(R,\Delta\chi)} -
\sum_{i,j}c_{i,j} \quad
\label{eqn:enhanced}
\end{equation}
\begin{figure}
\begin{center}
\resizebox{3.2in}{!}{\includegraphics{f2b.eps}}
\end{center}
\caption{The calculation of $f_{ij}$, the
``comoving overlaps'' for the pair-weighted approach of Eq.~(\ref{eqn:enhanced}). A comoving window ($\Delta\chi=\pm100~h^{-1}~{\rm Mpc}$ in the case of this plot) is adopted around each spectroscopic QSO, indexed $j$. There will be many spectroscopic QSOs in a given redshift bin of interest but here we plot only two at $z=1.90$ and $z=2.19$ for illustrative purposes. Each photometric PDF, indexed $i$, is then averaged across each of the comoving windows to produce pairs of weights $f_{ij}$. We display the case for
$N_i^{\rm phot}=N_j^{\rm spec}=1$ in Eq.~(\ref{eqn:enhancedvarweight}) but any number $N^{\rm phot}$ of PDFs and $N^{\rm spec}$ of spectroscopic slices can be combined into ensembles.}
\label{fig:pairweight}
\end{figure}
\noindent where $N_s$ is the total number of spectroscopic objects analyzed in the spectroscopic bin of interest and $\Delta\chi$ is the size of the comoving window integrated around each spectroscopic object. The additional normalization of $N_s$ arises by analogy with Eq.~(\ref{eqn:cweightfinal}) and the addition of new spectroscopic slices. The extent of the comoving window is entirely flexible, requiring some trial-and-error to determine the optimal choice, although $\Delta\chi\sim\mathcal{O}(50$--$100\,h^{-1}\,{\rm Mpc})$, as used when integrating out the spectroscopic autocorrelation to eliminate
the effects of redshift-space distortions, is an obvious choice. This slightly enhanced approach should provide additional
signal-to-noise gains over Eq.~(\ref{eqn:cweightfinal}) provided the photometric PDFs are sufficiently sampled
to accurately estimate their overlap with small comoving distance intervals. We illustrate this final, full pair-weighted approach in Figure~\ref{fig:pairweight}.
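The pair overlaps $f_{ij}$ and weights $c_{ij}$ described above can be sketched as follows, assuming each photometric PDF has been converted to a normalised density in comoving distance tabulated on a common grid (Python/NumPy; the function names and data layout are our own assumptions):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration along the last axis."""
    dx = np.diff(x)
    return np.sum((y[..., 1:] + y[..., :-1]) * dx / 2.0, axis=-1)

def pair_overlaps(chi_grid, pdfs_chi, chi_spec, half_window):
    """f_{ij}: integral of photometric PDF i (as a density in comoving
    distance) over the window chi_j +/- half_window around spectroscopic
    object j.  pdfs_chi has shape (n_phot, n_grid), each row normalised
    so its integral over chi_grid is 1."""
    chi_spec = np.atleast_1d(np.asarray(chi_spec, dtype=float))
    f = np.empty((pdfs_chi.shape[0], chi_spec.size))
    for j, chi in enumerate(chi_spec):
        mask = np.abs(chi_grid - chi) <= half_window
        f[:, j] = _trapz(pdfs_chi[:, mask], chi_grid[mask])
    return f

def pair_weights(f):
    """c_{ij} of (eqn:enhancedvarweight) with N_i^phot = N_j^spec = 1."""
    return f / np.sum(f**2)
```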
\section{Data} \label{sec:data}
Although our main result is the new methodology outlined
in \S\ref{sec:method}, in \S\ref{sec:qsoresults} we will
illustrate our new method with real-world samples to demonstrate
the improvements that it can return. We will
make use of quasars selected from the SDSS, as described here.
\subsection{Photometric Quasars} \label{sec:KDE}
The photometric quasar sample that we analyze is constructed using the
Kernel Density Estimation (KDE) technique of \citet{Ric04},
a technique to classify quasars in photometric surveys which draws
on several innovations inherent to the SDSS (e.g., \citealt{York00}) --
extensive and carefully monitored $ugriz$ imaging
(e.g., \citealt{Gun98,Hog01}) calibrated to a standard photometric system
(e.g., \citealt{Fuk96,Smi02}) with a precision of a few-hundredths of a
magnitude \citep{Ive04}. These innovations allow quasars to be more easily separated
{}from the stellar locus.
We use the DR6 KDE sample, which is detailed in full in \citet{Ricopt09}.
The DR6 KDE sample is drawn from a test sample of all point sources in the
SDSS DR6 imaging data \citep{DR6} with $i <21.3$, where $i$ refers to the
{\em asinh} magnitude \citep{Lup99} in the ``uber-calibrated'' system of \citet{uber}.
The DR6 primary imaging data covers an area of $8417\,{\rm deg}^2$ but further
cuts \citep{Mye06,Ricopt09} remove approximately $150\,{\rm deg}^2$ or $1.7$\% of
the area.
In this paper we concern ourselves only
with DR6 KDE objects that have a very high probability of being QSOs.
As such, we apply a \textit{uvxts=1} cut within the sample. This cut selects QSOs at particularly high efficiency by limiting the DR6 KDE sample to QSOs that would have been selected by traditional UV-excess techniques. As noted in Table~4 of \citet{Ricopt09}, and discussed in \citet{Mye06}, only
$\sim$5\% of the \textit{uvxts=1} QSOs should,
in reality, be stars\footnote{\citet{Ricopt09} advocate a \textbf{good~$\ge$~0} cut to improve efficiency. We ignore this, as for
\textit{uvxts=1} it only discards a further 2.4\% of the
data.}. The UV-excess nature of the \textit{uvxts=1} cut limits the spectroscopic redshift range to $0.8~\approxlt~z~\approxlt~2.4$.
\begin{figure}
\begin{center}
\resizebox{3.2in}{!}{\includegraphics{f3.eps}}
\end{center}
\caption{The ratio of the bootstrap error to the Poisson error for the
old, ensemble method of \S\ref{sec:oldapproach}. We plot three
separate realizations to demonstrate that the error is stable to
$\sim1$\% for 10,000 bootstraps. The bootstrap error tracks the
Poisson error to around 6\%. On scales $\approxlt~0.5~h^{-1}~{\rm Mpc}$, where
there are few QSO pairs, 10,000 bootstraps is insufficient to recreate
the shot noise. On scales $\approxgt~20~h^{-1}~{\rm Mpc}$, where QSO pairs are not
independent, Poisson errors underestimate the true error. This plot
demonstrates that bootstrapping (at N=10,000) and Poisson errors agree
well in the range $0.5 < R < 20~h^{-1}~{\rm Mpc}$.}
\label{fig:bootstrap}
\end{figure}
\setlength{\tabcolsep}{5.58pt}
\begin{table}
\centering
\begin{tabular*}{0.4703\textwidth}{|c|cccccccc|}
\hline
$z$ & 0.8 & 1.0 & 1.2 & 1.4 & 1.6 & 1.8 & 2.0 & 2.2 \\ \hline
Imp. & 1.87 & 1.61 & 1.22 & 1.63 & 1.53 & 1.40 & 1.77 & 1.90 \\ \hline
(Imp.)$^2$ & 3.50 & 2.60 & 1.48 & 2.65 & 2.35 & 1.96 & 3.15 & 3.63 \\ \hline
\end{tabular*}
\caption{\small{``Imp.'' is the expected improvement due to our new
method (Eq.~\ref{eqn:cweightfinal}) over the old ensemble approach
(\S\ref{sec:oldapproach}) as characterised by Eq.~(\ref{eqn:comp}). As
this value approximates the improvement in Poisson noise, its square
approximates the equivalent increase in survey size.}}
\label{table:expec}
\end{table}
\subsubsection{Redshift Distribution of Photometric Quasars}
\label{sec:dndz}
While estimating the redshift of a QSO with a large number of narrow
filters can be precise (e.g., \citealt{Hat00,Wol01,Wol03})
results using broadband filters are more mixed (e.g.,
\citealt{Ric01,Bud01}). Although photometric redshifts are often
expressed as a single value, they are, in reality,
probabilistic, with a full probability density function (or PDF)
representing the possible redshifts the object of interest could
occupy given the filter information. Our main goal in this paper is
to incorporate full PDF information into clustering
analyses. If we denote by $P_s^j(z)$ the probability density function
for QSO $j$, and assume $\int P_s^j(z)dz=1$ across all possible redshifts,
then the value that will interest us is the fraction
of the ensemble PDF that will genuinely lie in any redshift
interval $z_1<z<z_2$
\begin{equation}
f_z = \frac{1}{N^{\rm phot}}\sum_{j=1,N^{\rm phot}} \int_{z_1}^{z_2}dz\ P_s^j(z) \quad .
\label{eqn:f}
\end{equation}
This fraction can be deduced for arbitrary redshift intervals and could correspond to a single photometric QSO ($N^{\rm phot} =1$) having, say, a 60.3\%
chance of lying in the redshift range of interest, or equivalently a
sample of 100 PDFs in an ensemble from which we might derive that 60.3
of the 100 QSOs in the ensemble can be expected to actually lie in the interval of
interest.
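A direct numerical transcription of Eq.~(\ref{eqn:f}), assuming the PDFs are tabulated on a common redshift grid (Python/NumPy; the function name is ours):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration along the last axis."""
    dx = np.diff(x)
    return np.sum((y[..., 1:] + y[..., :-1]) * dx / 2.0, axis=-1)

def ensemble_fraction(z_grid, pdfs, z1, z2):
    """f_z of (eqn:f): mean fraction of the ensemble PDF in z1 < z < z2.

    pdfs has shape (N_phot, n_grid); each row integrates to 1 over z_grid.
    """
    mask = (z_grid >= z1) & (z_grid <= z2)
    return _trapz(pdfs[:, mask], z_grid[mask]).mean()
```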
We obtain our PDFs using the Nearest Neighbour approach outlined in
\citet{Bal08}. We perturb a QSO's colours relative to a spectroscopic training set drawn from the DR5 QSO sample \citep{DR5QSO}, determine the nearest neighbour over 100 perturbations, and
build a function that describes the probability that the photometric quasar
matches near spectroscopic neighbours.\footnote{Our PDFs for the DR6KDE catalog will be made available at \url{http://lcdm.astro.uiuc.edu/nbckde_dr6_pdfs}} Examples of these PDFs are shown in Figures~\ref{fig:problempdfs} and \ref{fig:chibin}.
\subsection{Spectroscopic Quasars}
We cross correlate the above QSOs with a sample of spectroscopic QSOs
drawn from the DR6 QSO sample (Schneider et al. 2009 in prep, see
\citealt{DR5QSO}). Our spectroscopic QSO sample populates the sky in a
complex manner but for our method, only the distribution of the
photometric sample, which is far simpler, needs to be modeled.
We impose the criterion that our spectroscopic QSOs must also appear
in the photometric sample discussed in \S\ref{sec:KDE}. We make no
additional cuts on flags or redshift quality, as the vast majority of
quasar redshifts are reliable if the object is, indeed, a QSO, and the
cuts made by \citet{Ricopt09} help ensure both the quality of the
photometry of the QSO, and the likelihood that it is a QSO.
\section{Example Implementation of the New Optimal Estimator}
\label{sec:qsoresults}
In this section, we apply the method developed in \S\ref{sec:method}
to the spectroscopic and photometric QSO samples discussed in
\S\ref{sec:data} to illustrate both our new methodology and its statistical
gains over current methods. As our goal is a simple
demonstration of our new methodology, we apply no cuts to the samples
beyond those discussed in \S\ref{sec:data}. This ensures that any
improved signal is due to the method itself, rather than any additional
magnitude, colour or redshift cuts that we might impose. As outlined in
\S\ref{sec:data}, the only significant cut we employ is the
\textit{uvxts=1} cut within the photometric sample. This cut, which is
purely to ensure that almost all of our photometric objects are genuinely QSOs, limits our spectroscopic redshift range to
$0.8~\approxlt~z~\approxlt~2.4$.
\subsection{Expected Improvement in Signal}
Eq.~(\ref{eqn:comp}) allows us to estimate how treating each
photometric QSO's PDF individually (i.e. Eq.~\ref{eqn:cweightfinal})
will improve the clustering signal over treating the photometric QSOs
as an ensemble (as discussed in \S\ref{sec:oldapproach}). In
Figure~\ref{fig:chibin} we demonstrate the calculation of $\langle
f(\chi_\star)\rangle$ for two different approaches; the ensemble
approach of \S\ref{sec:oldapproach} and our new bin-weighted approach
(Eq.~\ref{eqn:cweightfinal}), which treats each $f_i$ individually. In
Table~\ref{table:expec} we show the expected improvement implied by
Eq.~(\ref{eqn:comp}) for a range of spectroscopic redshift bins. This
improvement arises from using all of the information inherent in every
PDF for every individual photometric object and is about a
factor of $\sim1.6\times$. Based on Poisson
statistics, simply using our new approach should be roughly
equivalent to having a $\sim2$--$3\times$ larger survey.
\subsection{Actual Improvement in Signal}
Poisson errors are typically used to calibrate the noise in a clustering estimator (e.g.,
\citealt{LanSza93})
\begin{equation}
\Delta w_{\theta}(R) = \frac{1+ w_{\theta}(R)}{\sqrt{D_sD_p(R)}}
\label{eq:poiserr}
\end{equation}
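In code, this Poisson estimate is a one-liner (NumPy assumed; the function name is ours):

```python
import numpy as np

def poisson_error(w, DsDp):
    """Poisson error on a clustering estimate, Eq. (eq:poiserr):
    (1 + w) / sqrt(DsDp)."""
    return (1.0 + np.asarray(w, dtype=float)) / np.sqrt(np.asarray(DsDp, dtype=float))
```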
Poisson errors accurately reflect the clustering noise on small scales
(where many pairs remain independent) and remain very accurate for the
photometric sample being used out to at least $20~h^{-1}~{\rm Mpc}$ (e.g.,
consider deprojecting Figure~1 of \citealt{Mye06}). Poisson errors are
more complex to calculate for our new methodology because we
incorporate pairs of points with unequal weights, some that may be
completely outside the spectroscopic bin of interest, but they can in
principle be computed. However we estimate the errors by simply bootstrapping
\citep[e.g.,][]{Efron} on the individual {\em spectroscopic\/} QSOs, as was
done in \citet{PWNP09}. This approach is additionally useful as it
demonstrates how one might estimate errors for our new approach based
on other resampling approaches, such as jackknifes or field-to-field
variations. Resampling approaches are generally more accurate than
Poisson errors on large scales and facilitate the construction of a
full covariance matrix. Our preferred expressions for our new estimators
(Eq.~\ref{eqn:cweightfinal} and \ref{eqn:enhanced}) make it straightforward to track how
each {\em spectroscopic\/} QSO affects the pair counts and quickly
construct resampled error estimates.
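The bootstrap itself is straightforward once per-spectroscopic-object pair counts are tracked, as our preferred expressions allow. The following sketch (Python/NumPy; the names and data layout are our own assumptions, not the analysis pipeline itself) resamples the spectroscopic objects with replacement and returns a covariance matrix for $w_p(R)$:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def bootstrap_wp(DsDp_per_s, DsRp_per_s, estimator, n_boot=10000):
    """Bootstrap w_p(R) by resampling spectroscopic objects.

    DsDp_per_s, DsRp_per_s: arrays of shape (N_s, n_Rbins) holding the
    (weighted) pair counts contributed by each spectroscopic object.
    estimator(DD, DR) maps the summed counts to w_p(R).
    Returns the (n_Rbins x n_Rbins) covariance matrix of w_p.
    """
    n_s = DsDp_per_s.shape[0]
    samples = []
    for _ in range(n_boot):
        idx = rng.integers(0, n_s, size=n_s)   # resample with replacement
        samples.append(estimator(DsDp_per_s[idx].sum(axis=0),
                                 DsRp_per_s[idx].sum(axis=0)))
    return np.cov(np.asarray(samples), rowvar=False)
```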
\begin{figure}
\begin{center}
\resizebox{3.2in}{!}{\includegraphics{f4alt.eps}}
\end{center}
\caption{$w_p(R)$ as measured by the old, ensemble estimator (diamonds; Eq.~\ref{eq:wtheta}) and
our new bin-weighted estimator (crosses; Eq.~\ref{eqn:cweightfinal}) and full pair-weighted estimator (triangles; Eq.~\ref{eqn:enhanced}). The pair-weighted estimator for this plot used a comoving window of $\pm50~h^{-1}~{\rm Mpc}$. All plotted data are for QSOs with spectroscopic redshifts in the bin $1.8 < z_{\rm spec} < 2.2$. We fit a $\gamma=1.5$ power law over $1.6 <R<40~h^{-1}~{\rm Mpc}$ to each estimate using the full covariance matrix estimated from 10,000 bootstraps. The points have been offset slightly for
display purposes. The best fit value of the comoving scale length
$r_0$ (see Eq.~\ref{eq:powerlaw}) is displayed for each data set,
together with the ($2\sigma$) error on the fit.}
\label{fig:firstresult}
\end{figure}
In Figure~\ref{fig:bootstrap} we plot the relationship between the
Poisson and bootstrap errors derived for the ensemble estimator (i.e.,
derived using only QSOs with peak PDF solutions in the spectroscopic
bin of interest, as discussed in \S\ref{sec:oldapproach}) using a
spectroscopic bin of $1.8 \leq z_s < 2.2$. Across scales of $0.2 < R <
50~h^{-1}~{\rm Mpc}$ the bootstrap errors converge to within $\sim0.8$\% for
10,000 bootstraps, and the amplitude of the bootstrap errors closely
tracks (within $\sim5$--10\%) that of the Poisson errors. This
demonstrates that bootstrapping on the spectroscopic QSOs is close to
equivalent to using Poisson errors on the scales of interest. On
scales $\approxlt~0.5~h^{-1}~{\rm Mpc}$, where there are few QSO pairs, more
bootstrap samples are likely needed to recreate the precision of the
Poisson errors. On scales $\approxgt~20~h^{-1}~{\rm Mpc}$ the Poisson errors
likely begin to underestimate the noise as covariance increases.
Having demonstrated the validity of bootstrapping to obtain estimates
of the noise we plot the results for the old
ensemble approach, our new bin-weighted estimator (Eq.~\ref{eqn:cweightfinal}) and our full pair-weighted estimator (Eq.~\ref{eqn:enhanced}) in Figure~\ref{fig:firstresult}. To summarise our
results we fit power laws to our data. A power-law 3D
correlation function of the form $\xi(r)=(r/r_0)^{-\gamma}$ produces a
power-law projected correlation function
\begin{equation}
\frac{w_{p}(R)}{R} = \frac{\sqrt{\pi}
\,\Gamma[(\gamma-1)/2]}{\Gamma[\gamma/2]}
\left(\frac{r_0}{R}\right)^{\gamma}
\quad .
\label{eq:powerlaw}
\end{equation}
We fit this form to the measured correlations over the range $1.6
<R<40~h^{-1}~{\rm Mpc}$, using the full bootstrap covariance and holding the index
fixed at $\gamma=1.5$. In order to improve the numerical stability of
this procedure, we scale $w_p(R)$ by $R^{1/2}$, thereby removing the
artificially high condition number that arises due to the large
dynamic range of $w_p$. The power-law fit for the old, ensemble,
approach gives $r_0 = 4.20 \pm 0.88$, our new bin-weighted estimator (Eq.~\ref{eqn:cweightfinal}) gives $r_0 =
4.22 \pm 0.65$ ($2\sigma$) and our full pair-weighted method (Eq.~\ref{eqn:enhanced}) gives $r_0 =
4.56 \pm 0.48$ ($2\sigma$), which agree well with numerous recent
estimates of the amplitude of $w_p$ for QSO clustering near $z\sim2$
(e.g., \citealt{PorMagNor04,Cro05,PorNor06,Ang08}). We list $2\sigma$
errors to reflect the fact that our errors are likely underestimated
on large scales but the relative improvements for our new estimators are
identical whether we quote $1\sigma$ or $2\sigma$ errors.
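The fit described above is linear in $r_0^{\gamma}$ once $\gamma$ is held fixed, so it can be solved in closed form by generalised least squares. A sketch (Python/SciPy; a simplified stand-in, not the authors' actual fitting code) including the $R^{1/2}$ rescaling used to tame the condition number:

```python
import numpy as np
from scipy.special import gamma as Gamma

def fit_r0(R, wp, cov, gam=1.5):
    """GLS fit of r0 in (eq:powerlaw) with the slope gam held fixed.

    Scales w_p(R) by R^{1/2}, as in the text, then solves the linear
    problem for the amplitude r0**gam.
    """
    A = np.sqrt(np.pi) * Gamma((gam - 1.0) / 2.0) / Gamma(gam / 2.0)
    s = np.sqrt(R)
    y = wp * s                       # scaled data
    X = A * R**(1.0 - gam) * s       # scaled model template, linear in r0**gam
    Cs = cov * np.outer(s, s)        # covariance of the scaled data
    Ci = np.linalg.inv(Cs)
    amp = (X @ Ci @ y) / (X @ Ci @ X)   # best-fit r0**gam
    return amp**(1.0 / gam)
```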
\setlength{\tabcolsep}{4.6pt}
\begin{table}
\centering
\begin{tabular*}{0.4703\textwidth}{|c|cccccccc|}
\hline
$R$ & \multicolumn{8}{c|}{$z$} \\
$(~h^{-1}~{\rm Mpc})$ & 0.8 & 1.0 & 1.2 & 1.4 & 1.6 & 1.8 & 2.0 & 2.2 \\ \hline
0.8 & 1.41 & 1.25 & 1.08 & 1.29 & 1.30 & 1.25 & 1.39 & 1.36 \\
1.3 & 1.43 & 1.28 & 1.11 & 1.34 & 1.27 & 1.21 & 1.35 & 1.44 \\
2.0 & 1.41 & 1.28 & 1.11 & 1.29 & 1.27 & 1.18 & 1.39 & 1.44 \\
3.2 & 1.41 & 1.27 & 1.12 & 1.28 & 1.26 & 1.22 & 1.38 & 1.43 \\
5.1 & 1.42 & 1.29 & 1.13 & 1.30 & 1.26 & 1.21 & 1.36 & 1.43 \\
8.2 & 1.38 & 1.30 & 1.11 & 1.33 & 1.27 & 1.19 & 1.34 & 1.40 \\
12.9 & 1.41 & 1.30 & 1.10 & 1.30 & 1.27 & 1.20 & 1.35 & 1.43 \\
20.5 & 1.38 & 1.28 & 1.11 & 1.28 & 1.26 & 1.20 & 1.33 & 1.36 \\ \hline
10.5 & 1.36 & 1.28 & 1.11 & 1.25 & 1.25 & 1.20 & 1.33 & 1.44 \\ \hline
\end{tabular*}
\caption{\small{Improvement of our new bin-weighted estimator (Eq.~\ref{eqn:cweightfinal}) over the old methodology of \S\ref{sec:oldapproach}. Each column represents a bin width of 0.4 in (spectroscopic) redshift centred on $z$. The scales in the first column are logarithmic at five-per-decade. Table values are the ratio of the jackknife error for the old estimator to that for the new estimator ($\sigma_{\rm old}/\sigma_{\rm new}$). The final row is the total improvement over $1 < R < 20~h^{-1}~{\rm Mpc}$. Squaring the table values approximates the equivalent increase in survey size obtained by using our estimator.}}
\label{table:actual}
\end{table}
It is clear from the fits and errorbars in
Figure~\ref{fig:firstresult} that our new bin-weighted estimator (Eq.~\ref{eqn:cweightfinal}), which utilises all
of the redshift information in the PDF not just the peak of the PDF,
considerably improves the signal-to-noise in estimates of $w_p(R)$. In
Table~\ref{table:actual} we list the improvement in signal-to-noise as
a function of redshift and scale for our sample. Our new bin-weighted estimator,
across scales that are typically used to represent the quasi-linear
regime of clustering ($1 < R < 20~h^{-1}~{\rm Mpc}$) improves the signal-to-noise
of clustering estimates by 30\%. Adopting our
most basic approach of incorporating full PDFs into a clustering measurement is
thus equivalent to increasing the size of the photometric sample
discussed in \S\ref{sec:KDE} by 60\%. Photometric redshift
determinations for QSOs in broadband $ugriz$ are particularly poor
outside of the range $1 < z < 2$. Outside of this range, the
improvement yielded by our bin-weighted estimator is slightly larger, equivalent to
increasing the survey size by 80\%.
We note that our improvements in Table~\ref{table:actual} are slightly
smaller than the expected improvements listed in Table~\ref{table:expec}.
This could reflect a breakdown in our assumption of Poisson errors or
inaccuracy in our PDFs. In fact, one novel application of our methodology
would be to tune the PDFs until the figures in Table~\ref{table:actual}
peaked, thus constructing PDFs without using any colour information
(see also \citealt{Sch06}).
In Table~\ref{table:pairweight} we list the improvement in signal-to-noise as
a function of scale using our full pair-weighted estimator (Eq.~\ref{eqn:enhanced}) for a spectroscopic redshift bin of $1.8 < z < 2.2$.
We adopt a representative range of comoving windows (see the discussion of $\Delta\chi\sim\mathcal{O}(50$--$100\,h^{-1}\,{\rm Mpc})$ near Eq.~\ref{eqn:enhanced}). The improvement in signal-to-noise is about a factor of 2 for scales that are typically used to represent the quasi-linear regime of clustering ($1 < R < 20~h^{-1}~{\rm Mpc}$). Across some scales the improvement in signal approaches a factor of $2.2\times$ for a comoving window of $\Delta\chi=\pm50~h^{-1}~{\rm Mpc}$. Impressively, this means that our full pair-weighted estimator can potentially improve clustering by a factor equivalent to increasing the size of a survey by a factor of 4--5.
The improvements in Tables~\ref{table:actual} and \ref{table:pairweight} demonstrate that
the PDFs we use must carry additional information that can be used to improve
clustering signal, which was the main goal of this paper. In future, as our
knowledge of PDF construction is refined, the improvements facilitated by
our method should only grow.
\setlength{\tabcolsep}{9.9pt}
\begin{table}
\centering
\begin{tabular*}{0.439\textwidth}{|c|c|ccc|}
\hline
$R$ & Eq.~(\ref{eqn:cweightfinal}) & \multicolumn{3}{c|}{Eq.~(\ref{eqn:enhanced}); $\Delta\chi$ in $~h^{-1}~{\rm Mpc}$} \\
$(~h^{-1}~{\rm Mpc})$ & & $\pm200$ & $\pm100$ & $\pm50$ \\ \hline
0.8 & 1.39 & 1.41 & 1.76 & 2.03 \\
1.3 & 1.35 & 1.39 & 1.80 & 2.10 \\
2.0 & 1.39 & 1.43 & 1.79 & 2.10 \\
3.2 & 1.38 & 1.44 & 1.81 & 2.16 \\
5.1 & 1.36 & 1.42 & 1.76 & 2.05 \\
8.2 & 1.34 & 1.42 & 1.79 & 2.16 \\
12.9 & 1.35 & 1.39 & 1.77 & 2.11 \\
20.5 & 1.33 & 1.34 & 1.70 & 1.99 \\ \hline
10.5 & 1.33 & 1.34 & 1.68 & 2.04 \\ \hline
\end{tabular*}
\caption{\small{Improvement of our full pair-weighted estimator (Eq.~\ref{eqn:enhanced}) over the old methodology of \S\ref{sec:oldapproach} and our binned estimator (Eq.~\ref{eqn:cweightfinal}). Each calculation is over a spectroscopic bin of $1.8 < z < 2.2$. Table values are the ratio of the jackknife error for the old estimator to that for the new estimators ($\sigma_{\rm old}/\sigma_{\rm new}$). For the full pair-weighted estimator (Eq.~\ref{eqn:enhanced}) the columns are the adopted comoving window around each spectroscopic QSO. The equivalent window for Eq.~(\ref{eqn:cweightfinal}) would be $\sim220~h^{-1}~{\rm Mpc}$, corresponding to the full bin $1.8 < z < 2.2$. The final row is the total improvement over $1 < R < 20~h^{-1}~{\rm Mpc}$. Squaring the table values approximates the equivalent increase in survey size obtained by using our estimators.}}
\label{table:pairweight}
\end{table}
\section{Conclusions} \label{sec:conclusions}
We have introduced new correlation function estimators to improve measurements of how photometric objects cluster around spectroscopic objects. Spectroscopic-photometric cross-correlations have known benefits, due to the spectroscopic objects having narrowly-defined distance information and the photometric objects having significantly higher number densities. Our approach uses the full photometric probability density information, or PDFs, to optimise such cross-correlation estimates in the Poisson limit. We note that it is possible that a strict Poisson weighting for pairs can be improved upon, particularly on moderate scales.
We have additionally provided simple equations that can be used to calculate when our new estimators will improve on measurements from the spectroscopic autocorrelation. The parameters of interest are the overlap of the photometric data with the spectroscopic bin in comoving space, which depends on the PDF precision, and the relative number of photometric and spectroscopic objects. Because the number of photometric objects scales as the square of the comoving overlap it can be difficult for spectroscopic-photometric cross-correlations to improve on spectroscopic autocorrelation estimates.
Our improved estimator has several benefits over existing cross-correlation methods. Most obviously, because our estimator does not solely rely on the ``peak'' of a photometric object's PDF to determine which photometric objects should be cross-correlated against the spectroscopic objects of interest, the information from more photometric objects is used in clustering estimates. We show that, in the case of photometric QSOs, simply using the bin-weighted form of our estimator (Eq.~\ref{eqn:cweightfinal}) can thus improve signal-to-noise in the Poisson limit in a manner equivalent to obtaining almost $2\times$ as much survey data. Eq.~(\ref{eqn:comp}) suggests that the full gains on all scales may be closer to equivalent to obtaining $3\times$ as much survey data. Indeed, our full pair-weighted estimator Eq.~(\ref{eqn:enhanced}) demonstrates that gains equivalent to increasing survey size by as much as a factor of 4--5 can be realised. Although we have specifically used the example of QSOs, we stress that our estimator can and should be used to improve the signal for any real-space clustering measurement using photometric redshifts.
The current incarnation of our method has several shortcomings. If the PDFs peak sharply relative to the spectroscopic redshift distribution then $f(\chi)$ cannot be validly extracted, and the full integration across Eq.~(2) must be applied. Our assumptions similarly break down if the spectroscopic survey selection function varies rapidly across the redshift bin of interest. In these cases the full 2D correlation function must be integrated in the line-of-sight direction. These inadequacies cannot be countered by narrowing the spectroscopic bin indefinitely, as redshift-space distortions ultimately limit the scale where redshifts map to line-of-sight distances. As such, our assumptions are most robust for the pair-weighted methodology of Eq.~(\ref{eqn:enhanced}). In this pair-weighted approach, a strict spectroscopic window of, say, $\pm50~h^{-1}~{\rm Mpc}$ can be enforced, and our assumptions would then be valid until the PDFs are more precise than $\pm50~h^{-1}~{\rm Mpc}$ or the spectroscopic distribution varies rapidly over $\pm50~h^{-1}~{\rm Mpc}$.
A particular benefit of our estimator is that it can, very simply, incorporate {\em every} photometric object into an analysis, negating the need to bin the photometric objects. PDFs of varying precision from a range of photometric data can thus be simply combined in a single measurement, provided the mask of photometric object {\em detections} is well-controlled. One could thus envisage taking, say, multi-wavelength photometry from patchy space telescope data or a range of small dedicated surveys (to improve PDFs where possible) embedded in uniform optical photometry such as the SDSS (to establish detections of the photometric objects of interest), and straightforwardly cross-correlating this complex photometric data with a completely different spectroscopic data set. Further, there is no reason to limit the probabilistic information to a photometric redshift. Many techniques, such as star-galaxy separation or the star-QSO separation technique we have used in this paper \citep{Ricopt09}, provide classification probabilities as well as photometric redshifts. Such classification probabilities can naturally be incorporated into our method by, e.g., weighting a PDF heavily to $z=0$ if an object has a high probability of being a star.
Because of the flexibility of our estimator, it should be useful anywhere on the sky where spectroscopic data is embedded in deep, potentially complex and multi-wavelength, photometric data. This should make our estimator particularly useful for regions of the sky where extensive spectroscopy, such as from BOSS, the various 2dF surveys and the SDSS, is embedded in deep, well-calibrated photometry, with measurable PDFs such as from Pan-STARRS, DES and the LSST. Over the next decade, we expect that obvious applications of our estimator will include improved measurements of the clustering of photometric LBGs, LRGs and QSOs around spectroscopic QSOs and measuring the clustering of photometric galaxies and QSOs around absorption features in QSO spectra.
\section*{Acknowledgements}
ADM was supported in this work by NASA ADP grant NNX08AJ28G and by the
University of Illinois. MW is supported by NASA and the DOE. We thank
Gordon Richards, Alex Gray, Robert Nichol and Robert Brunner for their
substantial and important work in helping produce the KDE photometric
QSO catalogue and Nikhil Padmanabhan and Britt Lundgren for helpful conversations.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan
Foundation, the Participating Institutions, the National Science Foundation,
the U.S. Department of Energy, the National Aeronautics and Space
Administration, the Japanese Monbukagakusho, the Max Planck Society, and
the Higher Education Funding Council for England.
The SDSS Web Site is {\tt http://www.sdss.org}.
The SDSS is managed by the Astrophysical Research Consortium for the
Participating Institutions. The Participating Institutions are the
American Museum of Natural History, Astrophysical Institute Potsdam,
University of Basel, University of Cambridge, Case Western Reserve University,
University of Chicago, Drexel University, Fermilab, the Institute for
Advanced Study, the Japan Participation Group, Johns Hopkins University,
the Joint Institute for Nuclear Astrophysics, the Kavli Institute for
Particle Astrophysics and Cosmology, the Korean Scientist Group,
the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory,
the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute
for Astrophysics (MPA), New Mexico State University, Ohio State University,
University of Pittsburgh, University of Portsmouth, Princeton University,
the United States Naval Observatory, and the University of Washington.
\section{Introduction}\label{sec:1}
Continuous log-symmetric distributions are of particular interest for describing strictly positive and asymmetric data with the possibility of outlier observations; see, for example, \cite{j:08}, \cite{vp:16a,vanegasp:16b}, \cite{saulo2017log}, \cite{Balakrishnan2017}, \cite{franciscosilvia2017} and \cite{venturaletal:19}, for some discussions and applications of log-symmetric models. A continuous random variable $Y$ follows a log-symmetric distribution if its probability density function (PDF) is given by
\begin{align}\label{eq:ft}
f_{Y}(y|\boldsymbol{\theta})
=
\dfrac{(Z_g)^{-1}}{\sqrt{\phi}\,y}\, g\big[a_{\boldsymbol{\theta}}^2(y)\big],
\quad
a_{\boldsymbol{\theta}}(y)
=
\log \Big({y\over\lambda}\Big)^{1/\sqrt{\phi}},
\quad y>0;
\end{align}
where
$\boldsymbol{\theta}= (\lambda, \phi)$, $\lambda>0$ is a scale parameter and also the median of $Y$, $\phi>0$ is a shape parameter associated with the skewness or relative dispersion, $Z_g=\int^{\infty}_{-\infty} g(w^2) \,\textrm{d}w$ is the partition function, and the function $g$ is a density generating kernel such that $g(u)>0$ for $u>0$. The function $g$ is associated with an additional parameter $\xi$ (or vector $\bm\xi$). We use the notation $Y\sim\textrm{LS}(\boldsymbol{\theta},g)$. Note that if
$g(u)$ in \eqref{eq:ft} is
$\exp(-u/2)$;
$[1+(u/\xi)]^{-(\xi+1)/{2}}$, $\xi>0$;
$\exp[-u^{{1}/(1+\xi)}/2]$, $-1<{\xi}\leqslant{1}$;
$\sqrt{\xi_2}\exp(-\xi_2 u/2)+[{(1-\xi_1)}/{\xi_1}]
\exp(-u/2)$, $0<\xi_1,\xi_2<1$;
$\cosh(u^{1/2})\exp[-({2}/{\xi^2})\sinh^2(u^{1/2})] $, $\xi>0$; or
$\cosh(u^{1/2})\left[\xi_{2}\xi_{1}^2+4\sinh^2(u^{1/2})\right]^{-({\xi_{2}+1})/{2}} $, $\xi_{1},\xi_{2}>0$; we have the
log-normal,
log-Student-$t$,
log-power-exponential,
log-contaminated-normal,
extended Birnbaum-Saunders or
extended Birnbaum-Saunders-$t$ distributions, respectively; see \cite{vp:16a} and \cite{venturaletal:19}. If $Y\sim\textrm{LS}(\boldsymbol{\theta},g)$, then the associated cumulative distribution function (CDF) is given by
$
F_{Y}(y|\boldsymbol{\theta})
=G\big[a_{\boldsymbol{\theta}}(y)\big],
$
where the
function $G:(-\infty,+\infty)\to [0,1]$ is defined as
\begin{align}\label{def-a-g}
G(r)
=
(Z_g)^{-1}
{\int^{r}_{-\infty} g(z^2) \,\textrm{d}z },
\quad -\infty<r<+\infty.
\end{align}
This mapping is easily seen to have the following properties:
\begin{itemize}
\item[(a)] $G(0)=0.5$, $G(+\infty)=\lim_{r\to+\infty}G(r)=1$,
$G(-\infty)=\lim_{r\to-\infty}G(r)=0$;
\item[(b)] $G(\cdot)$ is a continuous and strictly monotonically increasing function; hence $G(\cdot)$ has an inverse function, denoted by $G^{\pmb{-1}}(\cdot)$;
\item[(c)] From Items (a) and (b), $G(\cdot)$ is a CDF; and
\item[(d)] $G^{\pmb{-1}}(1-p)=-G^{\pmb{-1}}(p)$ for $p\in(0,1)$ given.
\end{itemize}
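For the log-normal kernel $g(u)=\exp(-u/2)$ we have $Z_g=\sqrt{2\pi}$, and $G$ reduces to the standard normal CDF. The following sketch (Python/SciPy; the names are ours) constructs $G$ numerically for an arbitrary density generating kernel, which makes properties (a)--(d) easy to verify:

```python
import numpy as np
from scipy.integrate import quad

def make_G(g):
    """Numerical G of (def-a-g) for a density generating kernel g."""
    Z, _ = quad(lambda w: g(w**2), -np.inf, np.inf)   # partition function Z_g
    def G(r):
        val, _ = quad(lambda z: g(z**2), -np.inf, r)
        return val / Z
    return G

# Log-normal kernel: g(u) = exp(-u/2), for which G is the standard normal CDF.
G = make_G(lambda u: np.exp(-u / 2.0))
```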
Despite the wide use of log-symmetric distributions -- their most famous member is the log-normal model -- they are not appropriate in purely discrete contexts, for example, to model the number of cycles before failure of a piece of equipment or the number of weeks to cure a patient, among others; see \cite{vns:19}. Moreover, despite being useful, continuous log-symmetric models do not include zero in their support. In this paper, we define a discrete random variable associated to $Y$ in \eqref{eq:ft} as
$
X=\lfloor Y\rfloor,
$
where $\lfloor y\rfloor$ denotes the largest integer less than or equal to $y$. In other words, we propose a class of discrete log-symmetric distributions. The proposed class incorporates every distribution belonging to the log-symmetric family, and it is useful for asymmetric and non-negative discrete data.
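To make the construction concrete, the following Python sketch (assuming the log-normal member, for which $G=\Phi$, the standard normal CDF; the parameter values and seed are illustrative) draws $Y$, floors it, and compares the empirical frequencies of $X=\lfloor Y\rfloor$ with the increments of the CDF of $Y$ over unit intervals:

```python
import math, random

Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # G for g(u)=exp(-u/2)
lam, phi = 2.0, 1.0                                         # assumed example values

random.seed(1)
n = 200_000
# Y = lam * exp(sqrt(phi) * Z), Z ~ N(0,1), has median lam (log-normal case)
X = [int(lam * math.exp(math.sqrt(phi) * random.gauss(0.0, 1.0))) for _ in range(n)]

def F_Y(y):   # CDF of Y: F_Y(y) = Phi(log(y/lam)/sqrt(phi)), y > 0
    return 0.0 if y <= 0 else Phi(math.log(y / lam) / math.sqrt(phi))

for x in range(5):
    emp = sum(1 for v in X if v == x) / n
    assert abs(emp - (F_Y(x + 1) - F_Y(x))) < 0.01
```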
The rest of the paper proceeds as follows. In Section~\ref{sec:02}, we introduce the class of discrete log-symmetric models. In Section~\ref{sec:math}, we discuss some mathematical properties. In Section~\ref{sec:03}, estimation of the model parameters is approached via the maximum likelihood method for the censored and uncensored cases. In Section~\ref{sec:04}, we carry out a simulation study to evaluate the performance of the estimators taking into account different censoring proportions. In Section~\ref{sec:05}, we illustrate the proposed methodology with two real data sets. Finally, we make some concluding remarks and discuss future work.
\section{Discrete log-symmetric distributions}\label{sec:02}
We say that a discrete random variable $X$, taking values in the set $\{0,1,\ldots\}$, follows a { discrete log-symmetric distribution}
with parameter vector $\boldsymbol{\theta}= (\lambda, \phi)$, where
$\lambda>0, \phi>0$,
denoted by $X\sim\textrm{LS}_{\rm d}(\boldsymbol{\theta},g)$, if its
probability mass function (PMF) is given by
\begin{align}\label{relation}
p(x|\boldsymbol{\theta})=
G\big[a_{\boldsymbol{\theta}}(x+1)\big]
-
G\big[a_{\boldsymbol{\theta}}(x)\big], \quad
x=0,1, \ldots,
\end{align}
where $a_{\boldsymbol{\theta}}(\cdot)$ and
$G(\cdot)$ are as in \eqref{eq:ft} and \eqref{def-a-g}, respectively. Note that $G\big[a_{\boldsymbol{\theta}}(0)\big]=G(-\infty)=0$ and
that $G\big[a_{\boldsymbol{\theta}}(+\infty)\big]=G(+\infty)=1$.
Given the density generating kernel $g$, defined below \eqref{eq:ft}, the parameters $\lambda$ and $\phi$ completely determine the PMF
\eqref{relation} at $x=0.$
Since $G(\cdot)$ and $a_{\boldsymbol{\theta}}(\cdot)$ are strictly increasing functions, and
\[
\lim_{n\to\infty}
\sum_{x=0}^{n}p(x|\boldsymbol{\theta})
=
\lim_{n\to\infty}
G\big[a_{\boldsymbol{\theta}}(n+1)\big]
=
G(+\infty)
=
1,
\]
it is clear that $p(x|\boldsymbol{\theta})$ is indeed a PMF.
The CDF, reliability function (RF) and hazard rate (HR)
of the $\textrm{LS}_{\rm d}$ distribution, respectively, are given by
\begin{align*}
& F(x|\boldsymbol{\theta})
=
1- R(x|\boldsymbol{\theta})
=
G\big[a_{\boldsymbol{\theta}}(\lfloor x\rfloor+1)\big], \quad
x\geqslant 0;
\\[0,2cm]
& H(x|\boldsymbol{\theta})
=
{p(x|\boldsymbol{\theta})\over p(x|\boldsymbol{\theta})+R(x|\boldsymbol{\theta})}
=
{
G\big[a_{\boldsymbol{\theta}}(x+1)\big]
-
G\big[a_{\boldsymbol{\theta}}(x)\big]
\over
1-G\big[a_{\boldsymbol{\theta}}(x)\big]
}, \quad
x=0,1,\ldots.
\end{align*}
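As a quick sanity check (again assuming the log-normal kernel, $G=\Phi$; the values of $\lambda$ and $\phi$ below are arbitrary), the PMF sums to one and the RF and HR satisfy the displayed identities:

```python
import math

def Phi(z):  # G for the log-normal kernel g(u) = exp(-u/2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def a(x, lam, phi):  # a_theta(x) = log(x/lam)/sqrt(phi), with a(0) = -infinity
    return -math.inf if x == 0 else math.log(x / lam) / math.sqrt(phi)

def pmf(x, lam, phi):
    return Phi(a(x + 1, lam, phi)) - Phi(a(x, lam, phi))

lam, phi = 2.0, 1.0                      # illustrative parameter values
# the probabilities sum to G(+infinity) = 1
assert abs(sum(pmf(x, lam, phi) for x in range(5000)) - 1.0) < 1e-9
# R(x) = 1 - F(x) agrees with the tail sum of the PMF
x = 3
R = 1.0 - Phi(a(x + 1, lam, phi))
assert abs(R - sum(pmf(k, lam, phi) for k in range(x + 1, 5000))) < 1e-9
# H(x) = p(x) / (p(x) + R(x)) = {G[a(x+1)] - G[a(x)]} / {1 - G[a(x)]}
H = pmf(x, lam, phi) / (pmf(x, lam, phi) + R)
assert abs(H - (Phi(a(x + 1, lam, phi)) - Phi(a(x, lam, phi)))
           / (1.0 - Phi(a(x, lam, phi)))) < 1e-12
```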
\section{Mathematical properties}\label{sec:math}
Unless explicitly stated otherwise, the results in this section are valid for any discrete random variable $X$ with support $\{0,1,\ldots\}$.
Let $(b_n)$ be a sequence of real numbers.
For technical reasons in the next result we use the convention $\prod_{y=0}^{-1} b_y=1$.
The next result provides a characterization of the PMF and RF
of a discrete distribution in terms of the HR.
\begin{proposition}\label{chac-re}
If $X$ is a discrete random variable
then, for each $x=0,1,2,\ldots,$
\begin{itemize}
\item[\rm (a)]
$\displaystyle
p(x|\boldsymbol{\theta})
=
H(x|\boldsymbol{\theta})
\prod_{y=0}^{x-1}
\big[1-H(y|\boldsymbol{\theta})\big];$
\item[\rm (b)]
$\displaystyle
R(x|\boldsymbol{\theta})
=
\prod_{y=0}^{x}
\big[1-H(y|\boldsymbol{\theta})\big];$
\end{itemize}
where $H(\cdot|\boldsymbol{\theta})$ is the HR.
\end{proposition}
\begin{proof}
By using the identity
$
p(x|\boldsymbol{\theta})
=
R(x-1|\boldsymbol{\theta})-R(x|\boldsymbol{\theta})
=
\big[p(x|\boldsymbol{\theta})+R(x|\boldsymbol{\theta})\big] H(x|\boldsymbol{\theta}), \ x=0,1,\ldots,
$
with the convention $R(-1|\boldsymbol{\theta})=1$,
we have
\[
1=H(x|\boldsymbol{\theta})+{R(x|\boldsymbol{\theta})H(x|\boldsymbol{\theta})\over p(x|\boldsymbol{\theta})} , \quad x=0,1,2,\ldots.
\]
Since ${p(x|\boldsymbol{\theta})/ H(x|\boldsymbol{\theta})}
=
p(x|\boldsymbol{\theta})+R(x|\boldsymbol{\theta})
=
R(x-1|\boldsymbol{\theta}),
$
it follows that
\[
{R(x|\boldsymbol{\theta})\over R(x-1|\boldsymbol{\theta})}
=
1-H(x|\boldsymbol{\theta}),
\quad
x=0,1,2,\ldots.
\]
Exchanging $x$ for $y$ in the above identity and then
multiplying from $y=0$ to $y=x$, we get
\[
R(x|\boldsymbol{\theta})
=
\prod_{y=0}^{x}
{R(y|\boldsymbol{\theta})\over R(y-1|\boldsymbol{\theta})}
= \textstyle
\prod_{y=0}^{x}
\big[1-H(y|\boldsymbol{\theta})\big],
\quad
x=0,1,2,\ldots,
\]
verifying the identity for $R(x|\boldsymbol{\theta})$. On the other hand,
combining the above identity with the definition of HR, the identity for
$p(x|\boldsymbol{\theta})$ follows.
\end{proof}
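A numerical illustration of this hazard-rate characterization on an arbitrary PMF (the probabilities below are made-up values; the empty product follows the convention $\prod_{y=0}^{-1}b_y=1$):

```python
p = [0.2, 0.3, 0.4, 0.1]                      # an arbitrary PMF on {0,1,2,3}
R = [sum(p[k + 1:]) for k in range(len(p))]   # R(x) = P(X > x)
H = [p[x] / (p[x] + R[x]) for x in range(len(p))]

prod = 1.0                                    # prod_{y=0}^{x-1}[1-H(y)]; empty product = 1
for x in range(len(p)):
    assert abs(p[x] - H[x] * prod) < 1e-12    # p(x) = H(x) * prod_{y=0}^{x-1}[1-H(y)]
    prod *= 1.0 - H[x]
    assert abs(R[x] - prod) < 1e-12           # R(x) = prod_{y=0}^{x}[1-H(y)]
```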
\subsection{Moments and variance}
\begin{theorem}\label{moments}
If $X$ is a discrete random variable whose moments of all orders are finite,
then
\begin{itemize}
\item[\rm (a)]
$\displaystyle
\mathbb{E}(X^r) = \sum_{x=0}^{\infty}\big[(x+1)^r-x^r\big] R(x|\boldsymbol{\theta});$
\item[\rm (b)]
$\displaystyle
\mathbb{E}(X^r)
=
\sum_{x=0}^{\infty}\sum_{k=0}^{r-1}
\sum_{i=0}^{r-1-k}
\binom{r-1-k}{i}
x^{k+i}\, R(x|\boldsymbol{\theta});$
\item[\rm (c)]
$\displaystyle
{\rm Var}(X)=2\sum_{x=0}^{\infty} x R(x|\boldsymbol{\theta})
+
\sum_{x=0}^{\infty} R(x|\boldsymbol{\theta})
\bigg[
1-\sum_{x=0}^{\infty} R(x|\boldsymbol{\theta})
\bigg];
$
\end{itemize}
where $R(\cdot|\boldsymbol{\theta})$ is the RF.
\end{theorem}
\begin{proof}
In order to prove Item (a),
using the telescopic series $\sum_{x=0}^{i-1} \big[(x+1)^r-x^r\big]=i^r$, it follows that
\begin{align*}
\mathbb{E}(X^r)
=
\sum_{i=0}^{\infty} \sum_{x=0}^{\infty} \mathds{1}_{\{x<i\}} \big[(x+1)^r-x^r\big] \,
p(i|\boldsymbol{\theta})
=
\sum_{x=0}^{\infty} \big[(x+1)^r-x^r\big] \sum_{i=0}^{\infty} \mathds{1}_{\{i>x\}} \,
p(i|\boldsymbol{\theta}),
\end{align*}
where in the second equality we exchange the orders of the summations because the following series
\begin{align*}
\sum_{x=0}^{\infty} \mathds{1}_{\{x<i\}} \big|(x+1)^r-x^r\big| \,
p(i|\boldsymbol{\theta})
=
\sum_{x=0}^{\infty} \mathds{1}_{\{x<i\}} \cdot \big[(x+1)^r-x^r\big] \,
p(i|\boldsymbol{\theta})
=
i^r p(i|\boldsymbol{\theta})\eqqcolon M_i
\end{align*}
is finite for each $i=0,1,\ldots$,
and, by hypothesis, $\mathbb{E}(X^r)=\sum_{i=0}^{\infty}M_i$ is finite.
This proves the first item.
The second item follows by combining the expression for $\mathbb{E}(X^r)$ given in the first item with
the polynomial identity $a^{r}-b^{r} = (a-b) \sum_{k=0}^{r-1} a^{r-1-k}b^k$ and with the binomial expansion.
The proof of the third item immediately follows from Item (a).
Thus, the proof is complete.
\end{proof}
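The moment and variance identities can be checked directly on a toy PMF (the numbers below are arbitrary):

```python
p = {0: 0.2, 1: 0.3, 2: 0.4, 3: 0.1}          # a toy PMF
R = lambda x: sum(v for k, v in p.items() if k > x)   # R(x) = P(X > x)

for r in (1, 2, 3):                           # Item (a)
    direct = sum(k ** r * v for k, v in p.items())
    via_R = sum(((x + 1) ** r - x ** r) * R(x) for x in range(4))
    assert abs(direct - via_R) < 1e-9

m1 = sum(R(x) for x in range(4))              # E(X) = sum_x R(x)
var = 2.0 * sum(x * R(x) for x in range(4)) + m1 * (1.0 - m1)   # Item (c)
mean = sum(k * v for k, v in p.items())
assert abs(var - (sum(k ** 2 * v for k, v in p.items()) - mean ** 2)) < 1e-9
```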
\subsection{The $p$-quantile}
\begin{theorem}\label{quantile}
Let
$X=\lfloor Y\rfloor$ be a discrete random variable obtained from a positive continuous random variable $Y$ with CDF $F_Y(\cdot|\boldsymbol{\theta})$.
Given $p\in(0,1)$, let $Q_p=F_Y^{\pmb{-1}}(p|\boldsymbol{\theta})$ be the $p$-quantile for $Y$. The following statements are valid:
%
\begin{itemize}
\item[\rm (a)]
If $Q_p>0$ is a natural number, then $Q_p-1$ is the $p$-quantile for $X$;
\item[\rm (b)]
If $Q_p>0$ is not a natural number, then
every $y\in\big[\lfloor Q_p\rfloor,\lfloor Q_p\rfloor+1\big)$ is a $p$-quantile for $X$.
\end{itemize}
\end{theorem}
\begin{proof}
Given $p\in(0,1)$, assume that $Q_p=F_Y^{\pmb{-1}}(p|\boldsymbol{\theta})$.
By using the relations, for all $x>0$,
\begin{align}
& \displaystyle
F_X(x^-|\boldsymbol{\theta})=F_Y(\lfloor x\rfloor|\boldsymbol{\theta});
\label{id-1}
\\
& \displaystyle
\lfloor x\rfloor\leqslant x < \lfloor x\rfloor+1;
\label{id-2}
\end{align}
where $F_X(x^-|\boldsymbol{\theta})=\lim_{\delta\downarrow 0} F_X(x-\delta|\boldsymbol{\theta})$ denotes the left-hand limit of $F_X(\cdot|\boldsymbol{\theta})$ at $x$,
we have
\begin{align*}
F_X\big[(Q_p-1)^-|\boldsymbol{\theta}\big]
\leqslant
F_X\big(Q_p-1|\boldsymbol{\theta})
=
F_X\big(Q_p^-|\boldsymbol{\theta})
\stackrel{\eqref{id-1}}{=}
F_Y(Q_p|\boldsymbol{\theta})
=
p,
\end{align*}
whenever $Q_p>0$ is a natural number. Then, by definition of $p$-quantile for a discrete random variable, the statement in Item (a) follows.
On the other hand, when $Q_p>0$ is not a natural number, from \eqref{id-1} and \eqref{id-2} we have
\begin{align*}
& F_X(Q_p^-|\boldsymbol{\theta})
\stackrel{\eqref{id-1}}{=}
F_Y(\lfloor Q_p\rfloor|\boldsymbol{\theta})
\stackrel{\eqref{id-2}}{\leqslant}
F_Y(Q_p|\boldsymbol{\theta})
=
p;
\\
& F_X(Q_p|\boldsymbol{\theta})
=
F_X\big[(Q_p+1)^-|\boldsymbol{\theta}\big]
\stackrel{\eqref{id-1}}{=}
F_Y(\lfloor Q_p\rfloor+1|\boldsymbol{\theta})
\stackrel{\eqref{id-2}}{\geqslant}
F_Y(Q_p|\boldsymbol{\theta})
=
p.
\end{align*}
Therefore, $F_X(Q_p^-|\boldsymbol{\theta})\leqslant p\leqslant F_X(Q_p|\boldsymbol{\theta})$. Hence, the $p$-quantile for $X$ can be represented by any value in the interval $\big[\lfloor Q_p\rfloor,\lfloor Q_p\rfloor+1\big)$, and the proof of Item (b) follows. This completes the proof.
%
%
%
\end{proof}
The following two results apply exclusively to random variables with a discrete log-symmetric distribution.
\begin{proposition}\label{prop-med}
Let $X$ be a random variable with $\textrm{LS}_{\rm d}$ distribution. The following statements hold:
\begin{itemize}
\item[\rm (a)]
If $\lambda$ is a natural number, then $\lambda-1$ is the median for $X$;
\item[\rm (b)]
If $\lambda$ is not a natural number, then the median of the distribution of $X$ can be represented by any value in the interval
$\big[\lfloor \lambda\rfloor,\lfloor \lambda\rfloor+1\big)$.
\end{itemize}
\end{proposition}
\begin{proof}
Let $X\sim\textrm{LS}_{\rm d}(\boldsymbol{\theta},g)$.
Since $G(\cdot)$ and $a_{\boldsymbol{\theta}}(\cdot)$ are strictly increasing functions, the function $G\big[a_{\boldsymbol{\theta}}(\cdot)\big]$ is a strictly increasing CDF corresponding to some continuous random variable $Y$ with log-symmetric distribution $\textrm{LS}(\boldsymbol{\theta},g)$.
Furthermore, note that the median $Q_{0.5}$ for $Y$ can be written as
\begin{align*}
Q_{0.5}
=
(G\circ a_{\boldsymbol{\theta}})^{\pmb{-1}}(0.5)
=
a^{\pmb{-1}}_{\boldsymbol{\theta}}\big[G^{\pmb{-1}}(0.5)\big]
=\lambda\exp\left[\sqrt{\phi}\,G^{\pmb{-1}}(0.5)\right]
=
\lambda,
\end{align*}
where in the last equality we use that $G^{\pmb{-1}}(0.5)=0$; see
Item (a) below \eqref{def-a-g} in Section \ref{sec:1}.
Then, by Theorem \ref{quantile}, the proof of Items (a) and (b) follows.
\end{proof}
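For instance, for the discrete log-normal distribution with $\lambda=2$ (a natural number) and $\phi=1$ (both illustrative choices), the median $\lambda-1=1$ can be confirmed numerically:

```python
import math

Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # G for the log-normal kernel
lam, phi = 2.0, 1.0                     # lam = 2 is a natural number
F = lambda x: Phi(math.log((x + 1) / lam) / math.sqrt(phi))   # F_X(x) = G[a(x+1)]

# the median of X is the smallest x with F_X(x) >= 0.5
median = next(x for x in range(100) if F(x) >= 0.5)
assert median == lam - 1 == 1
```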
Let $X\sim\textrm{LS}_{\rm d}(\boldsymbol{\theta},g)$. For given $p\in(0,1)$, let $Q_{{\rm d}; p}$ be a $p$-quantile for $X$. Let us define
\begin{align*}
& \text{Dispersion:} \quad \zeta=Q_{{\rm d};0.75}-Q_{{\rm d};0.25}, \quad 0<\zeta<\infty;
\\
& \text{Relative dispersion:} \quad \varpi={\zeta\over \zeta+2
Q_{{\rm d};0.25}}, \quad 0<\varpi<1;
\\
& \text{Skewness:} \quad \varkappa(p)={Q_{{\rm d};p}+Q_{{\rm d};1-p}-2
Q_{{\rm d};0.5}\over Q_{{\rm d};1-p}+Q_{{\rm d};p}}, \quad 0<\varkappa(p)<1, \ 0<p<0.5;
\\
& \text{Kurtosis:} \quad \varsigma={Q_{{\rm d};7/8}-Q_{{\rm d};5/8}+
Q_{{\rm d};3/8}-Q_{{\rm d};1/8}\over Q_{{\rm d};6/8}-Q_{{\rm d};2/8}}, \quad
0\leqslant\varsigma<\infty.
\end{align*}
The relative dispersion, skewness and kurtosis have appeared in \cite{zwko:00}, \cite{hinkley:75} and \cite{moors:88}, respectively.
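In the log-normal case ($G=\Phi$), the dispersion computed from the continuous quantiles matches the closed form $2\lambda\sinh\big[\sqrt{\phi}\,G^{\pmb{-1}}(0.75)\big]$ of the next proposition (the unit shift of Theorem~\ref{quantile} cancels in the difference). A sketch with a bisection inverse for $\Phi$; parameter values are arbitrary:

```python
import math

Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def Phi_inv(p):
    lo, hi = -10.0, 10.0                # bisection inverse of the normal CDF
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam, phi = 2.0, 1.0                     # illustrative parameter values
Q = lambda p: lam * math.exp(math.sqrt(phi) * Phi_inv(p))   # continuous quantile

u = Phi_inv(0.75)
# dispersion: Q(0.75) - Q(0.25) = 2 * lam * sinh(sqrt(phi) * G^{-1}(0.75))
assert abs((Q(0.75) - Q(0.25)) - 2.0 * lam * math.sinh(math.sqrt(phi) * u)) < 1e-9
# symmetry: G^{-1}(1 - p) = -G^{-1}(p)
assert abs(Phi_inv(0.25) + u) < 1e-9
```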
\begin{proposition}
Given $p\in(0,1)$,
let $X\sim\textrm{LS}_{\rm d}(\boldsymbol{\theta},g)$ and let
$Q_p$ be the $p$-quantile of the corresponding
continuous log-symmetric distribution. If the quantiles $Q_p$ involved below are natural numbers, then
\begin{itemize}
\item[\rm (a)]
$\zeta=2\lambda \sinh\big[\sqrt{\phi}\, G^{\pmb{-1}}(0.75)\big];$
\item[\rm (b)]
$\varpi=\big\{{\rm cotanh}\big[\sqrt{\phi}\, G^{\pmb{-1}}(0.75)\big] - {\rm cosech}\big[\sqrt{\phi}\, G^{\pmb{-1}}(0.75)\big]\big\}^{-1};$
\item[\rm (c)]
$\varkappa(p)=\lambda;$
\item[\rm (d)] $\displaystyle
\varsigma=
{\sinh\big[\sqrt{\phi}\, G^{\pmb{-1}}(7/8)\big]-
\sinh\big[\sqrt{\phi}\, G^{\pmb{-1}}(5/8)\big]\over
\sinh\big[\sqrt{\phi}\, G^{\pmb{-1}}(6/8)\big]};$
\end{itemize}
where $G(\cdot)$ was defined in \eqref{def-a-g}.
\end{proposition}
\begin{proof}
Since $Q_p$ is a natural number,
by Theorem \ref{quantile}, $Q_{{\rm d};p}=Q_p-1$ is a $p$-quantile for $X$, where $Q_p=\lambda\exp\big[\sqrt{\phi}\,G^{\pmb{-1}}(p)\big]$. By using the identity $G^{\pmb{-1}}(1-p)=-G^{\pmb{-1}}(p)$ (see
Item (d) below \eqref{def-a-g} in Section \ref{sec:1}), from Proposition \ref{prop-med} and a simple algebraic computation, the proof of the statements in Items (a)-(d) follows.
\end{proof}
\subsection{Shape properties}
The next result shows that the shape of a discrete log-symmetric distribution depends on the choice of the density generating kernel and on the distance between the modes of the corresponding continuous log-symmetric distribution.
\begin{theorem}\label{teo-shapes}
Let $g$ be a density generating kernel so that the corresponding continuous log-symmetric distribution, of a random variable $Y$, is bimodal. Then the discrete log-symmetric distribution of $X=\lfloor Y\rfloor$ has the following shapes:
\begin{itemize}
\item[\rm (a)] It is bimodal, whenever the distance between the modes is big enough;
\item[\rm (b)] It is unimodal, whenever the distance between the modes is small enough.
\end{itemize}
\end{theorem}
\begin{proof}
Since the proof of Item (b) follows the same analysis and steps as the first item, we are concerned with proving only Item (a).
Let $f_{Y}(y|\boldsymbol{\theta})$, $y>0$, be the bimodal PDF of the continuous random variable $Y\sim\textrm{LS}(\boldsymbol{\theta},g)$, and suppose that the distance between its modes, denoted by $y_0>0$ and $y_\epsilon=y_0+\epsilon$, is large enough ($\epsilon>6$). From the bimodality property, there is $y_*\in(y_0, y_{\epsilon})$ such that the following inequalities hold:
\begin{align}
f_{Y}(y\vert\boldsymbol{\theta})
\geqslant
f_{Y}(y-1|\boldsymbol{\theta})
\quad \text{for all} \ y\leqslant y_0 \ \mbox{and} \
y_*\leqslant y \leqslant y_{\epsilon}; \label{relat-1}
\\[0,2cm]
f_{Y}(y\vert\boldsymbol{\theta})
\geqslant
f_{Y}(y+1\vert\boldsymbol{\theta})
\quad \mbox{for all} \ y_* \geqslant y\geqslant y_0
\ \mbox{and} \ y \geqslant y_{\epsilon}. \label{relat-2}
\end{align}
If $x$ is a natural number such that $x\leqslant \lfloor y_0 \rfloor -1$ or
$\lfloor y_*\rfloor +1\leqslant x \leqslant \lfloor y_{\epsilon}\rfloor-1$, from the above inequalities, we have
\begin{align*}
p(x|\boldsymbol{\theta})
=
\int_{x}^{x+1}
f_{Y}(y|\boldsymbol{\theta}) \, {\rm d} y
\stackrel{\eqref{relat-1}}{\geqslant}
\int_{x}^{x+1}
f_{Y}(y-1|\boldsymbol{\theta}) \, {\rm d} y
=
p(x-1|\boldsymbol{\theta}).
\end{align*}
Similarly, if $x$ is a natural number such that
$\lfloor y_0 \rfloor+1 \leqslant x \leqslant \lfloor y_*\rfloor-1$
or $x \geqslant \lfloor y_{\epsilon}\rfloor+1$, we have
\begin{align*}
p(x|\boldsymbol{\theta})
=
\int_{x}^{x+1}
f_{Y}(y|\boldsymbol{\theta}) \, {\rm d} y
\stackrel{\eqref{relat-2}}{\geqslant}
\int_{x}^{x+1}
f_{Y}(y+1|\boldsymbol{\theta}) \, {\rm d} y
=
p(x+1|\boldsymbol{\theta}).
\end{align*}
In other words, we have the following
\begin{align}\label{ineqs-1}
\begin{array}{lllll}
&p(0|\boldsymbol{\theta})
\leqslant
p(1|\boldsymbol{\theta})
\leqslant
\cdots
\leqslant
p(\lfloor y_0 \rfloor-1|\boldsymbol{\theta});
\\
&p(\lfloor y_0 \rfloor+1|\boldsymbol{\theta})
\geqslant
\cdots
\geqslant
p(\lfloor y_*\rfloor-2|\boldsymbol{\theta})
\geqslant
p(\lfloor y_*\rfloor-1|\boldsymbol{\theta});
\end{array}
\\[0,2cm]
\label{ineqs-2}
\begin{array}{lllll}
&p(\lfloor y_*\rfloor +1|\boldsymbol{\theta})
\leqslant
p(\lfloor y_*\rfloor +2|\boldsymbol{\theta})
\leqslant
\cdots
\leqslant
p(\lfloor y_{\epsilon}\rfloor-1|\boldsymbol{\theta});
\\
&
p(\lfloor y_{\epsilon}\rfloor+1|\boldsymbol{\theta})
\geqslant
p(\lfloor y_{\epsilon}\rfloor+2|\boldsymbol{\theta})
\geqslant
\cdots.
\end{array}
\end{align}
%
From monotonicities \eqref{ineqs-1} and \eqref{ineqs-2}, one can guarantee the bimodality property of the discrete log-symmetric distribution.
By using \eqref{ineqs-1}, we show how to obtain only one of the modes, since the other one can be obtained following a similar path. Indeed, by \eqref{ineqs-1} it remains to relate the probabilities
$p(\lfloor y_0 \rfloor-1|\boldsymbol{\theta})$,
$p(\lfloor y_0 \rfloor|\boldsymbol{\theta})$
and
$p(\lfloor y_0 \rfloor+1|\boldsymbol{\theta})$
to find the first mode, denoted by $x_0$, of the discrete log-symmetric distribution. A simple observation shows that $x_0$ is given by
%
\begin{align*}
x_0=
\begin{cases}
\lfloor y_0 \rfloor+1, & \text{if} \
p(\lfloor y_0 \rfloor-1|\boldsymbol{\theta})\leqslant
p(\lfloor y_0 \rfloor|\boldsymbol{\theta})<
p(\lfloor y_0 \rfloor+1|\boldsymbol{\theta}),
\\
\lfloor y_0 \rfloor, & \text{if} \
p(\lfloor y_0 \rfloor-1|\boldsymbol{\theta})<
p(\lfloor y_0 \rfloor|\boldsymbol{\theta})>
p(\lfloor y_0 \rfloor+1|\boldsymbol{\theta}),
\\
\lfloor y_0 \rfloor-1, & \text{if} \
p(\lfloor y_0 \rfloor-1|\boldsymbol{\theta})>
p(\lfloor y_0 \rfloor|\boldsymbol{\theta})\geqslant
p(\lfloor y_0 \rfloor+1|\boldsymbol{\theta}),
\\
\lfloor y_0 \rfloor-1 \ \text{and} \ \lfloor y_0 \rfloor, & \text{if} \
p(\lfloor y_0 \rfloor-1|\boldsymbol{\theta})=
p(\lfloor y_0 \rfloor|\boldsymbol{\theta})>
p(\lfloor y_0 \rfloor+1|\boldsymbol{\theta}),
\\
\lfloor y_0 \rfloor \ \text{and} \ \lfloor y_0 \rfloor+1, & \text{if} \
p(\lfloor y_0 \rfloor-1|\boldsymbol{\theta})<
p(\lfloor y_0 \rfloor|\boldsymbol{\theta})=
p(\lfloor y_0 \rfloor+1|\boldsymbol{\theta}),
\\
\lfloor y_0 \rfloor, & \text{if} \
p(\lfloor y_0 \rfloor-1|\boldsymbol{\theta})=
p(\lfloor y_0 \rfloor+1|\boldsymbol{\theta})<
p(\lfloor y_0 \rfloor|\boldsymbol{\theta}).
\end{cases}
\end{align*}
%
Notice that any other possible relation between $p(\lfloor y_0 \rfloor-1|\boldsymbol{\theta})$,
$p(\lfloor y_0 \rfloor|\boldsymbol{\theta})$
and
$p(\lfloor y_0 \rfloor+1|\boldsymbol{\theta})$ would contradict the fact that $y_0$ is a mode of the continuous log-symmetric distribution.
As mentioned above, using \eqref{ineqs-2}, the second mode of the discrete log-symmetric distribution is obtained in an analogous way. This completes the proof.
\end{proof}
As an immediate consequence of Theorem \ref{teo-shapes}, the following result follows.
\begin{corollary}
Let $g$ be a density generating kernel so that the corresponding continuous log-symmetric distribution, of a random variable $Y$, is unimodal. Then the discrete log-symmetric distribution of $X=\lfloor Y\rfloor$ is also unimodal.
\end{corollary}
\section{Maximum likelihood estimation}\label{sec:03}
\subsection{Uncensored data}
Let $(X_{1}, \dots, X_{n})$ be a random sample of size $n$ from a random variable $X$ with
PMF given by \eqref{relation} and $\boldsymbol{x}=(x_{1}, \dots, x_{n})$ their observations (data).
Then,
the log-likelihood function for a parameter vector $\bm{\theta}=(\lambda,\phi)$
is
given
by
\begin{align}
\ell(\bm\theta)
=
\ell(\bm\theta|\boldsymbol{x})
&=
\sum_{i=1}^n \log
p(x_i|\boldsymbol{\theta})
=
{\sum_{i=1}^n}
\log\left\{
G\big[a_{\boldsymbol{\theta}}(x_i+1)\big]
-
G\big[a_{\boldsymbol{\theta}}(x_i)\big]
\right\}.
\label{logvero}
\end{align}
The maximum likelihood estimates of $\lambda$ and $\phi$ are the roots of the system obtained by setting the partial derivatives of the log-likelihood function $\ell(\bm\theta)$ with respect to these parameters equal to zero. Thus, we must solve the following system of equations:
\begin{align*}
{\partial \ell(\bm\theta)\over \partial \theta}
&=
(Z_g)^{-1}
\sum _{i=1}^n
\sum_{j=0}^{1}
(-1)^{j+1} \,
{\partial a_{\boldsymbol{\theta}}(x_{i}+j)\over \partial \theta} \,
{g\big[a_{\boldsymbol{\theta}}^2(x_i+j)\big] \over p(x_i|\boldsymbol{\theta})}
=0,
\quad
\theta\in\{\lambda,\phi\},
\end{align*}
where
$Z_g=\int_{-\infty}^{\infty}g(w^2)dw$ and,
\begin{align}\label{pri-der-a}
{\partial a_{\boldsymbol{\theta}}(x_i)\over \partial\lambda}
=
-\big(\lambda\phi^{1/2}\big)^{-1},
\quad
{\partial a_{\boldsymbol{\theta}}(x_i)\over \partial \phi}
=
-{1\over 2\phi^{3/2}}\,\log\Big({x_i\over\lambda}\Big).
\end{align}
Note that these equations must be solved by an iterative procedure for non-linear optimization, such as the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method; see \citet[][p.\,199]{mjm:00}.
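As an illustration of the estimation step (a crude grid search stands in for BFGS here; the discrete log-normal case is assumed, with arbitrary true values $\lambda=2$, $\phi=1$, a fixed seed, and an assumed grid):

```python
import math, random
from collections import Counter

Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def loglik(lam, phi, counts):
    # log-likelihood of the discrete log-normal PMF, aggregated over unique values
    a = lambda x: -math.inf if x == 0 else math.log(x / lam) / math.sqrt(phi)
    return sum(m * math.log(max(Phi(a(x + 1)) - Phi(a(x)), 1e-300))
               for x, m in counts.items())

random.seed(7)
lam0, phi0 = 2.0, 1.0
data = [int(lam0 * math.exp(math.sqrt(phi0) * random.gauss(0.0, 1.0)))
        for _ in range(2000)]
counts = Counter(data)

# grid search over (lambda, phi); a quasi-Newton routine would be used in practice
best = max((loglik(l / 100.0, f / 100.0, counts), l / 100.0, f / 100.0)
           for l in range(150, 261, 5) for f in range(50, 161, 5))
_, lam_hat, phi_hat = best
assert abs(lam_hat - lam0) < 0.3 and abs(phi_hat - phi0) < 0.3
```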
Inference for $\bm \theta$ of the $\textrm{LS}_{\rm d}$ model can be based on the asymptotic distribution of the maximum likelihood estimator $\widehat{\bm \theta}$. Under classical regularity conditions, this estimator is asymptotically bivariate normal with mean $\bm \theta$ and covariance matrix $\bm{\Sigma}_{^{\widehat{\bm \theta}}}$, namely,
$$
\sqrt{n}\,(\widehat{{\bm \theta}} -{\bm \theta}) \stackrel{\rm D}{\to} \textrm{N}_{2}\big(\bm{0}, \bm{\Sigma}_{^{\widehat{\bm \theta}}} = [\mathcal{J}({\bm \theta})]^{-1}\big),
$$
as $n \to \infty$, where $\stackrel{\rm D}{\to}$ means ``convergence in distribution'', $\mathcal{I}({\bm \theta})$ is the expected Fisher information matrix, and $\mathcal{J}({\bm \theta}) = \lim_{n\to\infty}({1}/{n}) [\mathcal{I}({\bm \theta})]$. Observe that $[\widehat{\mathcal{I}}({\bm \theta})]^{-1}$ is a consistent estimator of the asymptotic covariance matrix of $\widehat{\bm \theta}$. Observe also that one may use the Hessian matrix to obtain the observed version of the expected Fisher information matrix.
The Hessian matrix of $\ell(\bm\theta)$ is given by
\begin{align*}
\big[{\ddot{\ell}_{\theta\theta'}(\bm\theta)}\big]_{2\times 2}
=
\begin{bmatrix}
{\partial^2 \ell(\bm\theta)\over\partial\lambda^2}
&
{\partial^2 \ell(\bm\theta)\over\partial\lambda \partial\phi}
\\[0,2cm]
{\partial^2 \ell(\bm\theta)\over\partial\phi \partial\lambda}
&
{\partial^2 \ell(\bm\theta)\over\partial\phi^2}
\end{bmatrix},
\end{align*}
where its elements, for each $\theta, \theta'\in\{\lambda,\phi\}$, are
\begin{align}
&\ddot{\ell}_{\theta\theta'}(\bm\theta)
=
(Z_g)^{-1}
\sum _{i=1}^n
\sum_{j=0}^{1}
(-1)^{j+1}\,
\left[
{\partial^2 a_{\boldsymbol{\theta}}({x_{i}+j})\over \partial\theta \partial\theta'}
+
\Theta_j(x_i)
-
\Omega_j(x_i)
\right]
{g\big[a^2({x_{i}+j})\big]\over p(x_i|\boldsymbol{\theta})}. \nonumber
\end{align}
Here we adopt the following notation:
\begin{align}
&
\Theta_j(x_i)=
2 a_{\boldsymbol{\theta}}({x_i+j})\,
g'\big[a^2({x_i+j})\big]\,
{\partial a_{\boldsymbol{\theta}}({x_i+j})\over\partial\theta} \,
{\partial a_{\boldsymbol{\theta}}({x_i+j})\over\partial\theta'}; \label{Mdef}
\\[0,1cm]
&
\Omega_j(x_i)=
(Z_g)^{-1}
{\partial a_{\boldsymbol{\theta}}({x_i+j})\over\partial\theta}
\sum_{k=0}^{1} (-1)^{k+1}\,
{\partial a_{\boldsymbol{\theta}}({x_i+k})\over\partial\theta'}\,
{g\big[a^2({x_i+k})\big]\over p(x_i|\boldsymbol{\theta})};
\label{Mdef-1}
\end{align}
whenever the density generating kernel $g$ is differentiable.
The above second-order partial derivatives of $a_{\boldsymbol{\theta}}(\cdot)$, with respect to the parameters, are given by
\begin{align}\label{sec-der-a}
\begin{matrix}
\displaystyle
{\partial^2 a_{\boldsymbol{\theta}}(x_i)\over\partial\lambda^2}
=
\big(\lambda^2\phi^{1/2}\big)^{-1};
&
\displaystyle
{\partial^2 a_{\boldsymbol{\theta}}(x_i)\over\partial\phi^2}
=
{3\over 4\phi^{5/2}}\,\log\Big({x_i \over \lambda}\Big);
\\[0,35cm]
\displaystyle
{\partial^2 a_{\boldsymbol{\theta}}(x_i)\over\partial\lambda\partial\phi}
=
\big(2\lambda\phi^{3/2}\big)^{-1};
&
\displaystyle
{\partial^2 a_{\boldsymbol{\theta}}(x_i)\over\partial\phi\partial\lambda}
=
\big(2\lambda\phi^{3/2}\big)^{-1}.
\end{matrix}
\end{align}
Under certain regularity conditions, the Fisher information matrix
\begin{align*}
\big[{\mathcal{I}_{\theta\theta'}(\bm\theta)}\big]_{2\times 2}
=
-
\begin{bmatrix}
\mathbb{E}\bigg[
{\partial^2 \log p(X|\boldsymbol{\theta}) \over\partial\lambda^2}
\bigg]
&
\mathbb{E}\bigg[
{\partial^2 \log p(X|\boldsymbol{\theta})\over\partial\lambda \partial\phi}
\bigg]
\\[0,2cm]
\mathbb{E}\bigg[
{\partial^2 \log p(X|\boldsymbol{\theta})\over\partial\phi \partial\lambda}
\bigg]
&
\mathbb{E}\bigg[
{\partial^2 \log p(X|\boldsymbol{\theta})\over\partial\phi^2}
\bigg]
\end{bmatrix},
\quad X\sim\textrm{LS}_{\rm d}(\boldsymbol{\theta},g),
\end{align*}
has elements of the following form
\begin{align*}
{\mathcal{I}_{\theta\theta'}(\bm\theta)}
=
(Z_g)^{-1}
\sum _{x=0}^\infty
\sum_{j=0}^{1}
(-1)^{j}
\bigg[
{\partial^2 a_{\boldsymbol{\theta}}({x+j})\over \partial\theta \partial\theta'}
+
\Theta_j(x)
-
\Omega_j(x)
\bigg]\,
g\big[a^2({x+j})\big],
\end{align*}
for each $\theta, \theta'\in\{\lambda,\phi\}$,
where $\Theta_j(\cdot)$ and
$\Omega_j(\cdot)$ are given in \eqref{Mdef} and \eqref{Mdef-1}, respectively, and
whenever the above series converges absolutely.
The extra parameter $\xi$ (or parameter vector $\bm\xi$) associated with $g$ is selected by using the profile log-likelihood function. For instance, in the case of the discrete log-Student-$t$ distribution, two steps are
required:
\begin{itemize}
\item[i)] Let $\xi_{k}=k$ and, for each $k=1,\ldots,100$, compute the $k$-th maximum likelihood estimate of ${\bm{\theta}}_k=({\lambda}_k,{\phi}_k)^\intercal$, say $\widehat{\bm{\theta}}_k=(\widehat{\lambda}_k,\widehat{\phi}_k)^\intercal$. Compute also the $k$-th log-likelihood value $\ell_k(\widehat{\bm\theta}_k)$;
\item[ii)] The final estimate of $\xi$, $\widehat{\xi}=\xi_k$ say, is the one which maximizes the log-likelihood function, that is, $\widehat{\xi}\in \{\mbox{argmax}_{\xi_k} \ell_k(\widehat{\bm\theta}_k)\}$, and the estimate of $\bm{\theta}$ is $\widehat{\bm{\theta}}_k=(\widehat{\lambda}_k,\widehat{\phi}_k)^\intercal$.
\end{itemize}
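A sketch of this two-step scheme in Python, using the log-contaminated-normal member instead of the log-Student-$t$ one, since its $G$ admits the closed form $G(r)=\xi_1\Phi(\sqrt{\xi_2}\,r)+(1-\xi_1)\Phi(r)$ (obtained by integrating the kernel listed in the Introduction; the fixed $\xi_1$, the $\xi_2$ grid, sample size, seed and true values below are all illustrative assumptions):

```python
import math, random
from collections import Counter

Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
xi1 = 0.5                                    # kept fixed for the illustration
G = lambda r, xi2: xi1 * Phi(math.sqrt(xi2) * r) + (1.0 - xi1) * Phi(r)

def loglik(lam, phi, xi2, counts):
    a = lambda x: -math.inf if x == 0 else math.log(x / lam) / math.sqrt(phi)
    return sum(m * math.log(max(G(a(x + 1), xi2) - G(a(x), xi2), 1e-300))
               for x, m in counts.items())

random.seed(3)
lam0, phi0, xi2_0 = 2.0, 1.0, 0.3
draws = []
for _ in range(3000):
    sd = 1.0 / math.sqrt(xi2_0) if random.random() < xi1 else 1.0
    draws.append(int(lam0 * math.exp(math.sqrt(phi0) * random.gauss(0.0, sd))))
counts = Counter(draws)

profile = {}
for xi2 in (0.1, 0.3, 0.5, 0.7, 0.9):        # step i): maximize over (lambda, phi)
    profile[xi2] = max((loglik(l / 10.0, f / 10.0, xi2, counts), l / 10.0, f / 10.0)
                       for l in range(15, 27) for f in range(5, 21))
xi2_hat = max(profile, key=lambda k: profile[k][0])   # step ii): best extra parameter
_, lam_hat, phi_hat = profile[xi2_hat]
assert abs(lam_hat - lam0) < 0.5
```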
\subsection{Censored data}
Let $Y_i\sim\textrm{LS}(\boldsymbol{\theta},g)$ be the failure time of the $i$-th
individual and let $\delta_i$ indicate whether
the $i$-th individual is censored or not.
Let us define
$d_k =$ ``number of failures at time $t_k$'',
$q_k =$ ``number censored at time $t_k$'' and
$N_k = \sum_{i=k}^{\infty} (d_i + q_i)$.
Note that $N_k-d_k$ represents the number surviving just before time $t_{k+1}$.
That is, at each given time $t_k$, there are $d_k$ failures and $N_k-d_k$ survivors.
Since the data are discrete, observing $\{(Y_i,\delta_i)\}$ is equivalent to observing $\{(d_k, q_k)\}$, and
the likelihood function for random censoring is given by
\begin{align*}
L^R(\bm\theta)
=
{\displaystyle\prod_{i=1}^n}
\big[f_{Y}(y_i|\boldsymbol{\theta})\big]^{\delta_i}
\big[1-F_{Y}(y_i|\boldsymbol{\theta})\big]^{1-\delta_i}
=
{\displaystyle\prod_{k=1}^\infty}
\big[p(x_k|\boldsymbol{\theta})\big]^{d_k}
\big[p(x_k|\boldsymbol{\theta})+R(x_k|\boldsymbol{\theta})\big]^{q_k}.
\end{align*}
This type of censoring has type I and type II censoring as special
cases.
The corresponding log-likelihood is
\begin{align}\label{log-lik-cens}
\ell^R(\bm\theta)
&=
{\sum_{k=1}^\infty}
\Big\{
{d_k}
\log
p(x_k|\boldsymbol{\theta})
+
{q_k}
\log\big[p(x_k|\boldsymbol{\theta})+R(x_k|\boldsymbol{\theta})\big]
\Big\}
\\[0,15cm]
&=
{\sum_{k=1}^\infty}
{d_k}
\log
\big\{
G\big[a_{\boldsymbol{\theta}}(x_k+1)\big]
-
G\big[a_{\boldsymbol{\theta}}(x_k)\big]
\big\}
+
{\sum_{k=1}^\infty}
{q_k}
\log\big\{1-G\big[a_{\boldsymbol{\theta}}(x_k)\big] \big\}
\nonumber.
\end{align}
\begin{remark}
By Proposition \ref{chac-re}, the log-likelihood \eqref{log-lik-cens} can be rewritten in terms of HR as
\[
\ell^R(\bm\theta)
=
{\sum_{k=1}^\infty}
\bigg\{
d_k
\log\big[ H(x_k|\boldsymbol{\theta})\big]
+
(d_k+q_k)
\sum_{y=0}^{x_k-1}
\log\big[1- H(y|\boldsymbol{\theta})\big]
\bigg\},
\]
whenever the above series converges absolutely.
\end{remark}
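The censored log-likelihood can be evaluated and maximized directly; a sketch for the discrete log-normal case with type I censoring at the (assumed) threshold $x=6$, again via a crude grid search with illustrative true values and seed:

```python
import math, random
from collections import Counter

Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def loglikR(lam, phi, counts):
    a = lambda x: -math.inf if x == 0 else math.log(x / lam) / math.sqrt(phi)
    ll = 0.0
    for (x, delta), m in counts.items():
        if delta:   # failure: d_k * log p(x_k)
            ll += m * math.log(max(Phi(a(x + 1)) - Phi(a(x)), 1e-300))
        else:       # censored: q_k * log[p(x_k) + R(x_k)] = q_k * log(1 - G[a(x_k)])
            ll += m * math.log(max(1.0 - Phi(a(x)), 1e-300))
    return ll

random.seed(11)
lam0, phi0, C = 2.0, 1.0, 6
obs = []
for _ in range(2000):
    x = int(lam0 * math.exp(math.sqrt(phi0) * random.gauss(0.0, 1.0)))
    obs.append((x, 1) if x < C else (C, 0))     # type I censoring at C
counts = Counter(obs)

best = max((loglikR(l / 100.0, f / 100.0, counts), l / 100.0, f / 100.0)
           for l in range(150, 261, 5) for f in range(50, 161, 5))
_, lam_hat, phi_hat = best
assert abs(lam_hat - lam0) < 0.3 and abs(phi_hat - phi0) < 0.3
```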
Differentiating \eqref{log-lik-cens}, a straightforward computation shows that
\begin{align*}
{\partial \ell^R(\bm\theta)\over\partial\theta}
&=
(Z_g)^{-1}
\sum _{k=1}^\infty
d_k
\sum_{j=0}^{1}
(-1)^{j+1}\,
{\partial a_{\boldsymbol{\theta}}(x_{k}+j)\over\partial\theta}\,
{g\big[a_{\boldsymbol{\theta}}^2(x_k+j)\big] \over p(x_k|\boldsymbol{\theta})}
\\[0,1cm]
&
-
(Z_g)^{-1}
\sum _{k=1}^\infty
q_k\,
{\partial a_{\boldsymbol{\theta}}(x_{k})\over\partial\theta} \,
{g\big[a_{\boldsymbol{\theta}}^2(x_k)\big] \over
p(x_k|\boldsymbol{\theta})+R(x_k|\boldsymbol{\theta}) },
\quad \theta\in\{\lambda,\phi\},
\end{align*}
where $Z_g=\int^{\infty}_{-\infty} g(w^2) \,\textrm{d}w$.
The second-order partial derivatives of $\ell^R(\bm\theta)$ are given by
\begin{align}
{\partial^2\ell^R(\bm\theta)\over \partial\theta\partial\theta' }
&=
(Z_g)^{-1}
\sum _{k=1}^\infty
d_k
\sum_{j=0}^{1}
(-1)^{j+1}\,
\bigg[
{\partial^2 a_{\boldsymbol{\theta}}({x_{k}+j})\over \partial\theta \partial\theta'}
+
\Theta_j(x_k)
-
\Omega_j(x_k)
\bigg]\,
{g\big[a^2_{\boldsymbol{\theta}}({x_{k}+j})\big]\over p(x_k|\boldsymbol{\theta})}
\nonumber
\\[0,1cm]
&
-
(Z_g)^{-1}
\sum _{k=1}^\infty
q_k\,
\bigg[
{\partial^2 a_{\boldsymbol{\theta}}({x_{k}})\over \partial\theta \partial\theta'}
+
\Theta_0(x_k)
+
\hat{\Omega}(x_k)
\bigg]\,
{g\big[a_{\boldsymbol{\theta}}^2(x_k)\big] \over
p(x_k|\boldsymbol{\theta})+R(x_k|\boldsymbol{\theta}) },
\quad \theta, \theta'\in\{\lambda,\phi\},
\nonumber
\end{align}
where
\[
\hat{\Omega}(x_k)=
(Z_g)^{-1}
{g\big[a^2({x_k})\big]\over p(x_k|\boldsymbol{\theta})+R(x_k|\boldsymbol{\theta}) } \,
{\partial a_{\boldsymbol{\theta}}({x_k})\over\partial\theta}
{\partial a_{\boldsymbol{\theta}}({x_k})\over\partial\theta'},
\]
and $\Theta_j(\cdot), \Omega_j(\cdot)$ are as in \eqref{Mdef} and \eqref{Mdef-1}, respectively. The first and second derivatives of function $a_{\boldsymbol{\theta}}({\cdot})$ are given in \eqref{pri-der-a} and \eqref{sec-der-a}, respectively.
\section{Monte Carlo simulation study}\label{sec:04}
A Monte Carlo simulation study was carried out to evaluate the performance of the maximum likelihood estimators for the $\textrm{LS}_{\rm d}$ models, particularly the log-normal, log-Student-$t$,
log-contaminated-normal, log-power-exponential, extended Birnbaum-Saunders and
extended Birnbaum-Saunders-$t$ cases. Note that when $\phi=4$ (fixed), the Birnbaum-Saunders and Birnbaum-Saunders-$t$ distributions are obtained. All numerical evaluations were done in the \texttt{R} software; see \cite{R:15}.
The simulation scenario considers: sample size $n \in \{40,120,400\}$, values of true
parameters $\phi \in \{1,4,8\}$, $\lambda \in \{2.00\}$, censoring proportions $\{0\%,10\%,30\%\}$, and $1,000$ Monte Carlo replications for each sample size. The values of the true extra parameters are presented in the caption of each table.
The maximum likelihood estimation results are presented in Tables~\ref{t1}--\ref{t6}. The following sample statistics for the maximum likelihood estimates are reported: empirical mean,
bias, and mean squared error (MSE). A look at the results in Tables~\ref{t1}--\ref{t6} allows us to conclude that, as the sample size increases, the bias and MSE of all the estimators decrease, indicating that they are asymptotically unbiased, as expected. Moreover, as the censoring proportion increases, the performances of the estimators of $\phi$ and $\lambda$ deteriorate. Generally, all of these results show the
good performance of the proposed model.
\begin{table}[H]
\caption{Empirical values of mean, bias and MSE from simulated discrete log-normal data for the indicated maximum likelihood estimators.}
\label{t1}
\scalefont{0.82}
\centering
\begin{tabular}{lccccccccccccc}
\hline
\multirow{2}{*}{\thead{$n$}}&\multirow{2}{*}{Cen.}&&\multicolumn{3}{c}{$\phi=1$}&&\multicolumn{3}{c}{$\phi=4$}&&\multicolumn{3}{c}{$\phi=8$}\\
\cline{4-6}\cline{8-10}\cline{12-14}
&&&Mean&Bias&MSE&&Mean&Bias&MSE&&Mean&Bias&MSE\\
\cline{4-14}
\multirow{2}{*}{40}&\multirow{6}{*}{0\%}&$\hat{\phi}$&1.0119&0.0119&0.0851&&4.0752&0.0752&1.7247&&8.1388&0.1388&7.5728\\
&&$\hat{\lambda}$&2.0262&0.0262&0.1166&&2.1126&0.1126&0.5629&&2.2586&0.2586&1.4043\\
\multirow{2}{*}{120}&&$\hat{\phi}$&1.0059&0.0059&0.0260&&4.0413&0.0413&0.5237&&8.0768&0.0768&2.1683\\
&&$\hat{\lambda}$&2.0086&0.0086&0.0394&&2.0317&0.0317&0.1759&&2.0750&0.0750&0.3897\\
\multirow{2}{*}{400}&&$\hat{\phi}$&1.0041&0.0041&0.0077&&4.0184&0.0184&0.1522&&8.0404&0.0404&0.6641\\
&&$\hat{\lambda}$&2.0021&0.0021&0.0110&&2.0086&0.0086&0.0464&&2.0198&0.0198&0.1020\\
\hline
\multirow{2}{*}{40}&\multirow{6}{*}{10\%}&$\hat{\phi}$&1.0189&0.0189&0.0998&&4.1507&0.1507&2.2755&&8.3002&0.3002&10.0324\\
&&$\hat{\lambda}$&2.0301&0.0301&0.1198&&2.1154&0.1154&0.5850&&2.2528&0.2528&1.4323\\
\multirow{2}{*}{120}&&$\hat{\phi}$&1.0114&0.0114&0.0290&&4.0519&0.0519&0.5978&&8.1162&0.1162&2.5822\\
&&$\hat{\lambda}$&2.0099&0.0099&0.0396&&2.0416&0.0416&0.1795&&2.0842&0.0842&0.4037\\
\multirow{2}{*}{400}&&$\hat{\phi}$&1.0015&0.0015&0.0090&&4.0262&0.0262&0.1817&&8.0675&0.0675&0.8152\\
&&$\hat{\lambda}$&2.0012&0.0012&0.0109&&2.0014&0.0014&0.0466&&2.0073&0.0073&0.0984\\
\hline
\multirow{2}{*}{40}&\multirow{6}{*}{30\%}&$\hat{\phi}$&1.0562&0.0562&0.1839&&4.4113&0.4113&5.7506&&8.9965&0.9965&28.9825\\
&&$\hat{\lambda}$&2.0435&0.0435&0.1407&&2.1396&0.1396&0.6858&&2.2925&0.2925&1.7228\\
\multirow{2}{*}{120}&&$\hat{\phi}$&1.0218&0.0218&0.0458&&4.1116&0.1116&1.1065&&8.2539&0.2539&5.0762\\
&&$\hat{\lambda}$&2.0124&0.0124&0.0438&&2.0450&0.0450&0.1915&&2.0868&0.0868&0.4270\\
\multirow{2}{*}{400}&&$\hat{\phi}$&1.0093&0.0093&0.0133&&4.0840&0.0840&0.3158&&8.1961&0.1961&1.5102\\
&&$\hat{\lambda}$&2.0046&0.0046&0.0119&&2.0076&0.0076&0.0501&&2.0131&0.0131&0.1043\\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\caption{Empirical values of mean, bias and MSE from simulated discrete log-Student-$t$ data for the indicated maximum likelihood estimators with $\xi=4$.}
\label{t2}
\scalefont{0.82}
\centering
\begin{tabular}{lccccccccccccc}
\hline
\multirow{2}{*}{\thead{$n$}}&\multirow{2}{*}{Cen.}&&\multicolumn{3}{c}{$\phi=1$}&&\multicolumn{3}{c}{$\phi=4$}&&\multicolumn{3}{c}{$\phi=8$}\\
\cline{4-6}\cline{8-10}\cline{12-14}
&&&Mean&Bias&MSE&&Mean&Bias&MSE&&Mean&Bias&MSE\\
\cline{4-14}
\multirow{2}{*}{40}&\multirow{6}{*}{0\%}&$\hat{\phi}$&1.0250&0.0250&0.1313&&4.1432&0.1432&2.6039&&8.3035&0.3035&14.7992\\
&&$\hat{\lambda}$&2.0351&0.0351&0.1589&&2.1416&0.1416&0.7463&&2.3098&0.3098&1.9332\\
\multirow{2}{*}{120}&&$\hat{\phi}$&1.0093&0.0093&0.0408&&4.0316&0.0316&0.8793&&7.9743&-0.0257&4.0502\\
&&$\hat{\lambda}$&2.0179&0.0179&0.0497&&2.0657&0.0657&0.2195&&2.1694&0.1694&0.7087\\
\multirow{2}{*}{400}&&$\hat{\phi}$&0.9974&-0.0026&0.0123&&3.9778&-0.0222&0.4006&&7.9731&-0.0269&2.8371\\
&&$\hat{\lambda}$&1.9986&-0.0014&0.0137&&2.0148&0.0148&0.0684&&2.0657&0.0657&0.2455\\
\hline
\multirow{2}{*}{40}&\multirow{6}{*}{10\%}&$\hat{\phi}$&1.0332&0.0332&0.1499&&4.2117&0.2117&3.2204&&8.5131&0.5131&14.0700\\
&&$\hat{\lambda}$&2.0318&0.0318&0.1692&&2.1338&0.1338&0.8105&&2.2965&0.2965&2.1397\\
\multirow{2}{*}{120}&&$\hat{\phi}$&1.0149&0.0149&0.0466&&4.0564&0.0564&0.9162&&8.1134&0.1134&4.0340\\
&&$\hat{\lambda}$&2.0068&0.0068&0.0510&&2.0434&0.0434&0.2086&&2.0916&0.0916&0.4621\\
\multirow{2}{*}{400}&&$\hat{\phi}$&1.0026&0.0026&0.0126&&4.0285&0.0285&0.2729&&8.0335&0.0335&1.1661\\
&&$\hat{\lambda}$&1.9987&-0.0013&0.0146&&2.0000&0.0000&0.0598&&2.0137&0.0137&0.1208\\
\hline
\multirow{2}{*}{40}&\multirow{6}{*}{30\%}&$\hat{\phi}$&1.0772&0.0772&0.2681&&4.5456&0.5456&8.5155&&9.3122&1.3122&41.0324\\
&&$\hat{\lambda}$&2.0430&0.0430&0.2007&&2.1605&0.1605&1.0059&&2.3570&0.3570&3.1707\\
\multirow{2}{*}{120}&&$\hat{\phi}$&1.0148&0.0148&0.0666&&4.0801&0.0801&1.4871&&8.1426&0.1426&6.8379\\
&&$\hat{\lambda}$&2.0032&0.0032&0.0534&&2.0429&0.0429&0.2210&&2.0866&0.0866&0.4900\\
\multirow{2}{*}{400}&&$\hat{\phi}$&1.0129&0.0129&0.0168&&4.0890&0.0890&0.4097&&8.1679&0.1679&1.8873\\
&&$\hat{\lambda}$&2.0021&0.0021&0.0160&&2.0040&0.0040&0.0645&&2.0179&0.0179&0.1297\\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\caption{Empirical values of mean, bias and MSE from simulated discrete log-contaminated-normal data for the indicated maximum likelihood estimators with $\bm{\xi}=(0.5,0.5)^\intercal$.}
\label{t3}
\scalefont{0.82}
\centering
\begin{tabular}{lccccccccccccc}
\hline
\multirow{2}{*}{\thead{$n$}}&\multirow{2}{*}{Cen.}&&\multicolumn{3}{c}{$\phi=1$}&&\multicolumn{3}{c}{$\phi=4$}&&\multicolumn{3}{c}{$\phi=8$}\\
\cline{4-6}\cline{8-10}\cline{12-14}
&&&Mean&Bias&MSE&&Mean&Bias&MSE&&Mean&Bias&MSE\\
\cline{4-14}
\multirow{2}{*}{40}&\multirow{6}{*}{0\%}&$\hat{\phi}$&1.0057&0.0057&0.0947&&4.0459&0.0459&1.8954&&8.0944&0.0944&7.9832\\
&&$\hat{\lambda}$&2.0437&0.0437&0.1706&&2.1750&0.1750&0.8474&&2.3687&0.3687&2.1616\\
\multirow{2}{*}{120}&&$\hat{\phi}$&1.0024&0.0024&0.0300&&4.0207&0.0207&0.6042&&8.0165&0.0165&2.5295\\
&&$\hat{\lambda}$&2.0161&0.0161&0.0545&&2.0589&0.0589&0.2503&&2.1297&0.1297&0.5661\\
\multirow{2}{*}{400}&&$\hat{\phi}$&1.0020&0.0020&0.0085&&4.0061&0.0061&0.1779&&7.9912&-0.0088&0.8225\\
&&$\hat{\lambda}$&2.0024&0.0024&0.0165&&2.0161&0.0161&0.0745&&2.0410&0.0410&0.1751\\
\hline
\multirow{2}{*}{40}&\multirow{6}{*}{10\%}&$\hat{\phi}$&1.0336&0.0336&0.1484&&4.1971&0.1971&3.1525&&8.4153&0.4153&12.8221\\
&&$\hat{\lambda}$&2.0562&0.0562&0.1819&&2.1952&0.1952&0.9257&&2.3944&0.3944&2.3858\\
\multirow{2}{*}{120}&&$\hat{\phi}$&1.0154&0.0154&0.0396&&4.0676&0.0676&0.7806&&8.1552&0.1552&3.3793\\
&&$\hat{\lambda}$&2.0182&0.0182&0.0588&&2.0688&0.0688&0.2649&&2.1338&0.1338&0.5975\\
\multirow{2}{*}{400}&&$\hat{\phi}$&1.0067&0.0067&0.0103&&4.0278&0.0278&0.2113&&8.0581&0.0581&0.9135\\
&&$\hat{\lambda}$&1.9988&-0.0012&0.0163&&2.0077&0.0077&0.0735&&2.0222&0.0222&0.1586\\
\hline
\multirow{2}{*}{40}&\multirow{6}{*}{30\%}&$\hat{\phi}$&1.0644&0.0644&0.2448&&4.4774&0.4774&7.4371&&9.1280&1.1280&33.9336\\
&&$\hat{\lambda}$&2.0635&0.0635&0.2088&&2.2106&0.2106&1.0845&&2.4422&0.4422&3.1094\\
\multirow{2}{*}{120}&&$\hat{\phi}$&1.0310&0.0310&0.0591&&4.1681&0.1681&1.3522&&8.4147&0.4147&6.3718\\
&&$\hat{\lambda}$&2.0237&0.0237&0.0635&&2.0767&0.0767&0.2895&&2.1436&0.1436&0.6536\\
\multirow{2}{*}{400}&&$\hat{\phi}$&1.0126&0.0126&0.0161&&4.0609&0.0609&0.3648&&8.1301&0.1301&1.6252\\
&&$\hat{\lambda}$&2.0005&0.0005&0.0176&&2.0102&0.0102&0.0771&&2.0237&0.0237&0.1628\\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\caption{Empirical values of mean, bias and MSE from simulated discrete log-power-exponential data for the indicated maximum likelihood estimators with $\xi=-0.5$.}
\label{t4}
\scalefont{0.82}
\centering
\begin{tabular}{lccccccccccccc}
\hline
\multirow{2}{*}{\thead{$n$}}&\multirow{2}{*}{Cen.}&&\multicolumn{3}{c}{$\phi=1$}&&\multicolumn{3}{c}{$\phi=4$}&&\multicolumn{3}{c}{$\phi=8$}\\
\cline{4-6}\cline{8-10}\cline{12-14}
&&&Mean&Bias&MSE&&Mean&Bias&MSE&&Mean&Bias&MSE\\
\cline{4-14}
\multirow{2}{*}{40}&\multirow{6}{*}{0\%}&$\hat{\phi}$&0.9783&-0.0217&0.0470&&3.9497&-0.0503&1.1520&&7.9186&-0.0814&5.6648\\
&&$\hat{\lambda}$&2.0072&0.0072&0.0481&&2.0493&0.0493&0.2856&&2.1332&0.1332&0.6806\\
\multirow{2}{*}{120}&&$\hat{\phi}$&0.9970&-0.0030&0.0134&&4.0014&0.0014&0.3399&&7.9966&-0.0034&1.5695\\
&&$\hat{\lambda}$&2.0032&0.0032&0.0153&&2.0185&0.0185&0.0863&&2.0482&0.0482&0.1996\\
\multirow{2}{*}{400}&&$\hat{\phi}$&0.9969&-0.0031&0.0042&&3.9906&-0.0094&0.1019&&7.9864&-0.0136&0.4742\\
&&$\hat{\lambda}$&1.9998&-0.0002&0.0047&&2.0036&0.0036&0.0282&&2.0098&0.0098&0.0640\\
\hline
\multirow{2}{*}{40}&\multirow{6}{*}{10\%}&$\hat{\phi}$&0.9941&-0.0059&0.0612&&4.0123&0.0123&1.5967&&8.0169&0.0169&8.0172\\
&&$\hat{\lambda}$&2.0046&0.0046&0.0510&&2.0472&0.0472&0.2900&&2.1398&0.1398&0.6989\\
\multirow{2}{*}{120}&&$\hat{\phi}$&0.9999&-0.0001&0.0167&&4.0151&0.0151&0.4585&&8.0260&0.0260&2.0633\\
&&$\hat{\lambda}$&2.0010&0.0010&0.0165&&2.0099&0.0099&0.0866&&2.0349&0.0349&0.1968\\
\multirow{2}{*}{400}&&$\hat{\phi}$&1.0020&0.0020&0.0053&&4.0178&0.0178&0.1283&&8.0441&0.0441&0.6083\\
&&$\hat{\lambda}$&2.0010&0.0010&0.0050&&2.0044&0.0044&0.0298&&2.0084&0.0084&0.0661\\
\hline
\multirow{2}{*}{40}&\multirow{6}{*}{30\%}&$\hat{\phi}$&1.0093&0.0093&0.0890&&4.1602&0.1602&3.3402&&8.2982&0.2982&15.8037\\
&&$\hat{\lambda}$&2.0111&0.0111&0.0646&&2.0512&0.0512&0.3185&&2.1408&0.1408&0.7509\\
\multirow{2}{*}{120}&&$\hat{\phi}$&1.0040&0.0040&0.0250&&4.0485&0.0485&0.7736&&8.1109&0.1109&3.7337\\
&&$\hat{\lambda}$&2.0019&0.0019&0.0191&&2.0104&0.0104&0.0927&&2.0350&0.0350&0.2090\\
\multirow{2}{*}{400}&&$\hat{\phi}$&1.0026&0.0026&0.0079&&4.0215&0.0215&0.1934&&8.0665&0.0665&0.9711\\
&&$\hat{\lambda}$&2.0010&0.0010&0.0056&&2.0045&0.0045&0.0318&&2.0093&0.0093&0.0694\\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\caption{Empirical values of mean, bias and MSE from simulated discrete extended Birnbaum-Saunders data for the indicated maximum likelihood estimators with $\zeta=0.5$.}
\label{t5}
\scalefont{0.82}
\centering
\begin{tabular}{lccccccccccccc}
\hline
\multirow{2}{*}{\thead{$n$}}&\multirow{2}{*}{Cen.}&&\multicolumn{3}{c}{$\phi=1$}&&\multicolumn{3}{c}{$\phi=4$}&&\multicolumn{3}{c}{$\phi=8$}\\
\cline{4-6}\cline{8-10}\cline{12-14}
&&&Mean&Bias&MSE&&Mean&Bias&MSE&&Mean&Bias&MSE\\
\cline{4-14}
\multirow{2}{*}{40}&\multirow{6}{*}{0\%}&$\hat{\phi}$&0.8741&-0.1259&0.1721&&3.9719&-0.0281&0.9531&&7.9871&-0.0129&4.3427\\
&&$\hat{\lambda}$&2.0126&0.0126&0.0075&&2.0117&0.0117&0.0275&&2.0152&0.0152&0.0542\\
\multirow{2}{*}{120}&&$\hat{\phi}$&0.9747&-0.0253&0.0353&&4.0090&0.0090&0.3178&&8.0289&0.0289&1.3258\\
&&$\hat{\lambda}$&2.0049&0.0049&0.0029&&2.0032&0.0032&0.0090&&2.0037&0.0037&0.0185\\
\multirow{2}{*}{400}&&$\hat{\phi}$&0.9913&-0.0087&0.0117&&4.0042&0.0042&0.0978&&8.0266&0.0266&0.4048\\
&&$\hat{\lambda}$&2.0020&0.0020&0.0009&&2.0011&0.0011&0.0026&&2.0004&0.0004&0.0051\\
\hline
\multirow{2}{*}{40}&\multirow{6}{*}{10\%}&$\hat{\phi}$&0.8825&-0.1175&0.1945&&4.0107&0.0107&1.2217&&8.1117&0.1117&5.3717\\
&&$\hat{\lambda}$&2.0128&0.0128&0.0076&&2.0120&0.0120&0.0289&&2.0172&0.0172&0.0565\\
\multirow{2}{*}{120}&&$\hat{\phi}$&0.9699&-0.0301&0.0460&&4.0048&0.0048&0.3449&&8.0872&0.0872&1.6128\\
&&$\hat{\lambda}$&2.0047&0.0047&0.0028&&2.0043&0.0043&0.0096&&2.0031&0.0031&0.0187\\
\multirow{2}{*}{400}&&$\hat{\phi}$&0.9907&-0.0093&0.0134&&4.0000&0.0000&0.1167&&8.0092&0.0092&0.4932\\
&&$\hat{\lambda}$&2.0005&0.0005&0.0009&&1.9997&-0.0003&0.0026&&2.0005&0.0005&0.0050\\
\hline
\multirow{2}{*}{40}&\multirow{6}{*}{30\%}&$\hat{\phi}$&0.7674&-0.2326&0.3577&&4.0841&0.0841&2.0748&&8.3517&0.3517&8.9625\\
&&$\hat{\lambda}$&2.0152&0.0152&0.0067&&2.0178&0.0178&0.0343&&2.0280&0.0280&0.0692\\
\multirow{2}{*}{120}&&$\hat{\phi}$&0.9263&-0.0737&0.0967&&4.0323&0.0323&0.5129&&8.1723&0.1723&2.5286\\
&&$\hat{\lambda}$&2.0052&0.0052&0.0028&&2.0057&0.0057&0.0106&&2.0055&0.0055&0.0210\\
\multirow{2}{*}{400}&&$\hat{\phi}$&0.9852&-0.0148&0.0212&&4.0188&0.0188&0.1660&&8.0638&0.0638&0.7290\\
&&$\hat{\lambda}$&2.0009&0.0009&0.0009&&2.0011&0.0011&0.0028&&2.0028&0.0028&0.0056\\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\caption{Empirical values of mean, bias and MSE from simulated discrete extended Birnbaum-Saunders-$t$ data for the indicated maximum likelihood estimators with $\bm{\xi}=(0.5,4)^\intercal$.}
\label{t6}
\scalefont{0.82}
\centering
\begin{tabular}{lccccccccccccc}
\hline
\multirow{2}{*}{\thead{$n$}}&\multirow{2}{*}{Cen.}&&\multicolumn{3}{c}{$\phi=1$}&&\multicolumn{3}{c}{$\phi=4$}&&\multicolumn{3}{c}{$\phi=8$}\\
\cline{4-6}\cline{8-10}\cline{12-14}
&&&Mean&Bias&MSE&&Mean&Bias&MSE&&Mean&Bias&MSE\\
\cline{4-14}
\multirow{2}{*}{40}&\multirow{6}{*}{0\%}&$\hat{\phi}$&0.9871&-0.0129&0.1511&&3.9863&-0.0137&1.4404&&8.0258&0.0258&6.0994\\
&&$\hat{\lambda}$&2.0028&0.0028&0.0114&&2.0100&0.0100&0.0387&&2.0167&0.0167&0.0756\\
\multirow{2}{*}{120}&&$\hat{\phi}$&0.9998&-0.0002&0.0457&&4.0319&0.0319&0.4972&&8.0689&0.0689&2.1333\\
&&$\hat{\lambda}$&2.0026&0.0026&0.0037&&2.0061&0.0061&0.0128&&2.0086&0.0086&0.0248\\
\multirow{2}{*}{400}&&$\hat{\phi}$&0.9984&-0.0016&0.0141&&3.9996&-0.0004&0.1569&&8.0282&0.0282&0.6348\\
&&$\hat{\lambda}$&2.0018&0.0018&0.0010&&2.0031&0.0031&0.0038&&2.0035&0.0035&0.0074\\
\hline
\multirow{2}{*}{40}&\multirow{6}{*}{10\%}&$\hat{\phi}$&1.0002&0.0002&0.1663&&4.1093&0.1093&1.8263&&8.2395&0.2395&7.9333\\
&&$\hat{\lambda}$&2.0009&0.0009&0.0109&&2.0029&0.0029&0.0387&&2.0095&0.0095&0.0760\\
\multirow{2}{*}{120}&&$\hat{\phi}$&1.0055&0.0055&0.0512&&4.0492&0.0492&0.5522&&8.1287&0.1287&2.3725\\
&&$\hat{\lambda}$&2.0027&0.0027&0.0037&&2.0044&0.0044&0.0129&&2.0080&0.0080&0.0245\\
\multirow{2}{*}{400}&&$\hat{\phi}$&1.0003&0.0003&0.0157&&3.9944&-0.0056&0.1649&&8.0085&0.0085&0.6848\\
&&$\hat{\lambda}$&2.0011&0.0011&0.0012&&2.0020&0.0020&0.0042&&2.0028&0.0028&0.0078\\
\hline
\multirow{2}{*}{40}&\multirow{6}{*}{30\%}&$\hat{\phi}$&0.9829&-0.0171&0.2728&&4.1695&0.1695&2.7474&&8.4158&0.4158&12.0941\\
&&$\hat{\lambda}$&2.0027&0.0027&0.0114&&2.0055&0.0055&0.0438&&2.0151&0.0151&0.0882\\
\multirow{2}{*}{120}&&$\hat{\phi}$&1.0115&0.0115&0.0708&&4.1024&0.1024&0.7985&&8.2566&0.2566&3.5320\\
&&$\hat{\lambda}$&2.0029&0.0029&0.0038&&2.0065&0.0065&0.0140&&2.0119&0.0119&0.0273\\
\multirow{2}{*}{400}&&$\hat{\phi}$&1.0009&0.0009&0.0238&&4.0094&0.0094&0.2337&&8.0494&0.0494&0.9990\\
&&$\hat{\lambda}$&2.0014&0.0014&0.0012&&2.0037&0.0037&0.0047&&2.0049&0.0049&0.0087\\
\hline
\end{tabular}
\end{table}
\section{Illustrative examples}\label{sec:05}
The $\textrm{LS}_{\rm d}$ models are now used to analyze two real-world data sets. The following discrete $\textrm{LS}_{\rm d}$ models are considered:
log-normal (LN),
log-Student-$t$ (L$t$),
log-contaminated-normal (LCN),
log-power-exponential (LPE),
Birnbaum-Saunders (BS),
extended Birnbaum-Saunders (EBS),
Birnbaum-Saunders-$t$ (BS$t$), and
extended Birnbaum-Saunders-$t$ (EBS$t$).
\begin{Example}
The first data set corresponds to the number of times that a DEC-20 computer broke down in each of 128 consecutive weeks of operation. This computer operated at the Open University during the 1980s; see Table \ref{t:data2} and \cite{trenkler:95}. Descriptive statistics for the computer breaks data set are the following: $128$ (sample size), $0$ (minimum), $22$ (maximum), $3$ (median), $4.016$ (mean), $3.808$ (standard deviation), $94.839$ (coefficient of variation, in \%), $1.732$ (coefficient of skewness) and $3.995$ (coefficient of kurtosis).
From these results, we observe positive skewness and a high degree of kurtosis. Figure~\ref{fig:ex2} (left) shows the histogram for the computer breaks data, which confirms the positive skewness. Moreover, Figure~\ref{fig:ex2} (right) shows the usual and adjusted boxplots; note that some points flagged as potential outliers by the usual boxplot are not outliers according to the adjusted boxplot.
\begin{table}[ht!]
\caption{Computer breaks data.}
\label{t:data2}
\scalefont{0.82}
\centering
\begin{tabular}{lccccccccccccccccc}
\hline
x&0 & 1 &2 &3 &4 &5 &6 &7 &8 &9 &10 &11 &12 &13 &16 &17 &22\\
frequency&15 &19 &23 &14 &15 &10 &8 &4 &6 &2 &3 &3 &2 &1 &1 &1 &1\\
\hline
\end{tabular}
\end{table}
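The descriptive statistics quoted above can be reproduced directly from the frequencies in Table \ref{t:data2}. A short sketch: the conventions that match the reported values are the $n-1$ divisor for the sample variance and the $n$ divisor for the central third moment, with the skewness given by the third moment divided by the cubed sample standard deviation.

```python
import math

# Frequency table of the computer breaks data:
xs    = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 16, 17, 22]
freqs = [15, 19, 23, 14, 15, 10, 8, 4, 6, 2, 3, 3, 2, 1, 1, 1, 1]

n = sum(freqs)                                               # 128
mean = sum(x * f for x, f in zip(xs, freqs)) / n             # 4.016
var = sum(f * (x - mean) ** 2 for x, f in zip(xs, freqs)) / (n - 1)
sd = math.sqrt(var)                                          # 3.808
cv = 100.0 * sd / mean                                       # 94.839 (%)
m3 = sum(f * (x - mean) ** 3 for x, f in zip(xs, freqs)) / n
skew = m3 / sd ** 3                                          # 1.732
```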
\begin{figure}[!ht]
\centering
\psfrag{0.00}[c][c]{\scriptsize{0.00}}
\psfrag{0.05}[c][c]{\scriptsize{0.05}}
\psfrag{0.10}[c][c]{\scriptsize{0.10}}
\psfrag{0.15}[c][c]{\scriptsize{0.15}}
\psfrag{0.20}[c][c]{\scriptsize{0.20}}
\psfrag{0.25}[c][c]{\scriptsize{0.25}}
\psfrag{0.30}[c][c]{\scriptsize{0.30}}
%
\psfrag{0.0}[c][c]{\scriptsize{0.0}}
\psfrag{0.1}[c][c]{\scriptsize{0.1}}
\psfrag{0.2}[c][c]{\scriptsize{0.2}}
\psfrag{0.3}[c][c]{\scriptsize{0.3}}
\psfrag{0.4}[c][c]{\scriptsize{0.4}}
\psfrag{0.5}[c][c]{\scriptsize{0.5}}
\psfrag{0.6}[c][c]{\scriptsize{0.6}}
\psfrag{0.7}[c][c]{\scriptsize{0.7}}
\psfrag{0.8}[c][c]{\scriptsize{0.8}}
\psfrag{1.0}[c][c]{\scriptsize{1.0}}
\psfrag{0}[c][c]{\scriptsize{0}} \psfrag{2}[c][c]{\scriptsize{2}}
\psfrag{4}[c][c]{\scriptsize{4}} \psfrag{5}[c][c]{\scriptsize{5}}
\psfrag{6}[c][c]{\scriptsize{6}} \psfrag{8}[c][c]{\scriptsize{8}}
\psfrag{0}[c][c]{\scriptsize{0}}
\psfrag{10}[c][c]{\scriptsize{10}}
\psfrag{15}[c][c]{\scriptsize{15}}
\psfrag{20}[c][c]{\scriptsize{20}}
\psfrag{30}[c][c]{\scriptsize{30}}
\psfrag{40}[c][c]{\scriptsize{40}}
\psfrag{50}[c][c]{\scriptsize{50}}
\psfrag{100}[c][c]{\scriptsize{100}}
\psfrag{150}[c][c]{\scriptsize{150}}
%
\psfrag{dp}[c][c]{\scriptsize{$x$}}
\psfrag{c}[c][c]{\scriptsize{$x$}}
\psfrag{dc}[c][c]{\scriptsize{Frequency}}
\psfrag{aa}[c][c]{\scriptsize{usual boxplot}}
\psfrag{ad}[c][c]{\scriptsize{adjusted boxplot}}
%
{\includegraphics[height=7.2cm,width=7.2cm]{histogram_ex1-eps-converted-to}}\hspace{-0.25cm}
{\includegraphics[height=7.2cm,width=7.2cm]{boxplots_ex1-eps-converted-to}}
\caption{\small Histogram (left) and boxplots (right) for the computer breaks data.}
\label{fig:ex2}
\end{figure}
Table~\ref{tab:ex2} presents the maximum likelihood estimates, computed by the BFGS method, and standard errors (SEs) for the parameters of the $\textrm{LS}_{\rm d}$ models. Moreover, the $p$-values of the $\chi^2$ and Cram\'er--von Mises (CVM) statistics, as well as the Akaike (AIC) and Bayesian information (BIC) criteria, are also reported. The results of Table \ref{tab:ex2} reveal that
the discrete BS model provides the best fit compared to the other models, based on the values of AIC and BIC.
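The AIC and BIC used for model comparison follow the standard definitions $\mathrm{AIC} = -2\hat\ell + 2k$ and $\mathrm{BIC} = -2\hat\ell + k\log n$, where $\hat\ell$ is the maximized log-likelihood, $k$ is the number of free parameters and $n$ is the sample size. A minimal sketch, with illustrative log-likelihood values that are not taken from the paper:

```python
import math

def aic_bic(loglik, k, n):
    """Akaike and Bayesian information criteria for a fitted model."""
    aic = -2.0 * loglik + 2.0 * k
    bic = -2.0 * loglik + k * math.log(n)
    return aic, bic

# Two hypothetical fits to a sample of size n = 128:
aic1, bic1 = aic_bic(loglik=-320.0, k=2, n=128)  # 2-parameter model
aic2, bic2 = aic_bic(loglik=-318.5, k=3, n=128)  # 3-parameter model
# Smaller values are better; the two criteria can disagree, since
# BIC penalizes each extra parameter more heavily whenever log(n) > 2.
```

In this illustrative example AIC prefers the larger model while BIC prefers the smaller one, which is why both criteria are reported in the tables.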
\begin{table}[H]
\caption{Maximum likelihood estimates (with SE in parentheses) and model selection measures for fit to the computer breaks data.}
\label{tab:ex2}
\scalefont{0.81}
\centering
\begin{tabular}{lcccccccc}
\hline
\multirow{2}{*}{Model}&\multicolumn{3}{c}{Estimates}&&\multicolumn{2}{c}{$p$-value}&\multirow{2}{*}{AIC}&\multirow{2}{*}{BIC}\\
\cline{2-4}
\cline{6-7}
&$\widehat{\lambda}\ (\text{SE})$&$\widehat{\phi}\ (\text{SE})$&$\widehat{\xi}$ ($\widehat{\bm\xi}$)&&$\chi^2$&CVM&&\\
\hline
LN&3.2280 (0.2526)&0.7541 (0.1048)&$-$&&0.7841&0.6959&643.5141&652.0702\\
L$t$&3.2653 (0.2574)&0.7065 (0.1026)&20.0&&0.8306&0.7762&644.2248&652.7809\\
LCN&3.2283 (0.2526)&0.6858 (0.0953)&(0.9, 0.9)&&0.7996&0.6967&645.5205&656.9287\\
LPE&3.1624 (0.2770)&1.0176 (0.0555)&-0.2&&0.7096&0.5151&642.8785&651.4346\\
BS&3.1704 (0.0589)&$*$&0.9&&0.6007&0.5283&640.6061&646.3102\\
EBS&3.1436 (0.2090)&2.9392 (0.0438)&1.1&&0.6517&0.4700&642.4045&650.9605\\
BS$t$&3.1803 (0.3008)&$*$&(0.9, 20.0)&&0.7966&0.5799&642.9026&651.4587\\
EBS$t$&3.1406 (0.2193)&2.0820 (0.0818)&(1.3, 20.0)&&0.6492&0.4364&644.5534&655.9615\\
\hline
\multicolumn{9}{l}{{\scriptsize $*$ indicates that $\phi=4$ (fixed)}.}\\
\end{tabular}
\end{table}
\end{Example}
\begin{Example}
The second data set refers to the number of physiotherapy sessions until a patient's chronic back pain is reduced or alleviated; see Table \ref{t:data3}. The patients were treated with electric currents, and the study was developed by the School of Physiotherapy Clinics of City University of Sao Paulo (UNICID), Sao Paulo, Brazil; see~\cite{SILVAETAL2017}. Observations were considered right-censored when patients did not report pain reduction or relief after $12$ treatment sessions, or when they had been lost to follow-up. As in \cite{vns:19}, the variable of interest is defined as $T = X - 1$, with $t = 0,1,2,3,\ldots$, where $t = 0$ denotes a patient who presented pain relief in the first session. Descriptive statistics for the pain relief data are the following: $100$ (sample size), $0$ (minimum), $11$ (maximum), $0$ (median), $0.98$ (mean), $1.933$ (standard deviation), $197.258$ (coefficient of variation), $2.802$ (coefficient of skewness) and $9.015$ (coefficient of kurtosis). These values indicate positive skewness and a high degree of kurtosis. Figure \ref{fig:ex3} shows the histogram and the survival function fitted by the Kaplan-Meier (KM) method.
\begin{table}[!ht]
\footnotesize
\centering
\caption{Number of sessions until a patient's chronic back pain is reduced or alleviated.}\label{t:data3}
\begin{tabular}{ccccc}
\hline
Sessions & $T$ & \# at risk & \# of events & censoring indicator \\
\hline
1 & 0 & 100 & 64 & 0 \\
2 & 1 & 36 & 16 & 0 \\
3 & 2 & 20 & 5 & 1 \\
4 & 3 & 14 & 4 & 0 \\
5 & 4 & 10 & 4 & 0 \\
6 & 5 & 6 & 3 & 0 \\
7 & 6 & 3 & 0 & 0 \\
8 & 7 & 3 & 1 & 0 \\
9 & 8 & 2 & 0 & 0 \\
10 & 9 & 2 & 1 & 0 \\
11 & 10 & 1 & 0 & 0 \\
12 & 11 & 1 & 0 & 1 \\
\hline
\end{tabular}
\end{table}
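The KM estimate is the product-limit recursion over the risk sets: after each time point, the running survival probability is multiplied by one minus the observed hazard $d_s/n_s$, while censored observations leave the risk set without contributing a factor. A minimal sketch using the ``\# at risk'' and ``\# of events'' columns of the table above; the convention used here need not coincide exactly with the one behind the reported KM values, so the numbers only illustrate the recursion.

```python
def kaplan_meier(at_risk, events):
    """Product-limit survival estimates after each time point."""
    surv, s = [], 1.0
    for n_s, d_s in zip(at_risk, events):
        s *= 1.0 - d_s / n_s  # multiply by the surviving fraction
        surv.append(s)
    return surv

# "# at risk" and "# of events" columns of the table above:
at_risk = [100, 36, 20, 14, 10, 6, 3, 3, 2, 2, 1, 1]
events  = [64, 16, 5, 4, 4, 3, 0, 1, 0, 1, 0, 0]
surv = kaplan_meier(at_risk, events)
```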
\begin{figure}[!ht]
\centering
{\includegraphics[height=7.2cm,width=7.2cm]{histogram_ex2-eps-converted-to}}\hspace{-0.25cm}
{\includegraphics[height=7.2cm,width=7.2cm]{km.pdf}}
\caption{\small Histogram (left) and KM (right) for the pain relief data.}
\label{fig:ex3}
\end{figure}
The maximum likelihood estimates of the parameters of the discrete log-symmetric distributions, along with the AIC and BIC criteria, are reported in Table \ref{t:ml-censo}. We note that the log-Student-$t$ model provides a better fit than the other models based on the values of AIC and BIC. Table \ref{t:ci_km} and Figure \ref{fno-} present the fitted survival functions obtained by the KM method and by the discrete log-symmetric models. These results suggest that the (extended) Birnbaum-Saunders and log-normal models yield the best fits to the pain relief data.
\begin{table}[H]
\caption{Maximum likelihood estimates and model selection measures for fit to the pain relief data.}
\label{t:ml-censo}
\scalefont{0.82}
\centering
\begin{tabular}{llcc}
\hline
Discrete distribution&Estimates (SE)&AIC&BIC\\
\hline
\multirow{2}{*}{Log-normal}&$\hat{\lambda}$=2.3229 (0.1329)&540.0872&549.1191\\
&$\hat{\phi}$=0.462 (0.0571)&&\\
\multirow{2}{*}{Log-Student-$t$}&$\hat{\lambda}$=1.8745 (0.1250)&513.1153&522.1472\\
&$\hat{\phi}$=0.122 (0.0538)&&\\
&$\hat{\zeta}$=2&&\\
\multirow{2}{*}{Log-Power-Exponential}&$\hat{\lambda}$=2.0046 (0.0999)&528.1035&537.1354\\
&$\hat{\phi}$=0.1713 (0.0259)&&\\
&$\hat{\zeta}$=0.5&&\\
\multirow{2}{*}{Log-Contaminated-Normal}&$\hat{\lambda}$=1.8654 (0.1329)&513.3146&525.3571\\
&$\hat{\phi}$=0.1018 (0.0519)&&\\
&$\hat{\zeta}$=(0.37;0.10)&&\\
\multirow{2}{*}{Birnbaum-Saunders}&$\hat{\lambda}$=2.4767 (0.8043)&543.6507&549.6719\\
&$\hat{\zeta}$=0.7&&\\
\multirow{2}{*}{Extended Birnbaum-Saunders}&$\hat{\lambda}$=2.3263 (0.156)&540.2164&549.2483\\
&$\hat{\phi}$=184.8547 (0.0701)&&\\
&$\hat{\zeta}$=0.1&&\\
\multirow{2}{*}{Birnbaum-Saunders-$t$}&$\hat{\lambda}$=1.8966 (0.2047)&515.0695&524.1014\\
&$\hat{\zeta}$=(0.4;2.0)&&\\
\multirow{2}{*}{Extended Birnbaum-Saunders-$t$}&$\hat{\lambda}$=1.8751 (0.1477)&515.1634&527.2060\\
&$\hat{\phi}$=49.0515 (0.1995)&&\\
&$\hat{\zeta}$=(0.1;2.0)&&\\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\caption{Estimates of the survival function via KM and discrete log-symmetric distributions.}
\label{t:ci_km}
\scalefont{0.82}
\centering
\begin{tabular}{lcccccccccc}
\hline
$x$&KM&LN&L$t$&LPE&LCN&BS&EBS&BS$t$&EBS$t$\\
\hline
0&1&0.8925&0.8931&0.8698&0.8848&0.9100&0.8930&0.8774&0.8930\\
1&0.4333&0.5871&0.4350&0.5018&0.4354&0.6202&0.5880&0.4533&0.4354\\
2&0.2400&0.3533&0.1552&0.2412&0.1610&0.3919&0.3541&0.1835&0.1557\\
3&0.1933&0.2120&0.0811&0.1315&0.0885&0.2447&0.2126&0.0982&0.0812\\
4&0.1588&0.1297&0.0534&0.0791&0.0614&0.1528&0.1301&0.0640&0.0534\\
5&0.1299&0.0813&0.0398&0.0511&0.0458&0.0958&0.0815&0.0466&0.0396\\
6&0.0866&0.0523&0.0318&0.0348&0.0352&0.0603&0.0524&0.0364&0.0316\\
7&0.0794&0.0344&0.0267&0.0247&0.0276&0.0381&0.0344&0.0297&0.0265\\
8&0.0577&0.0232&0.0231&0.0182&0.0220&0.0242&0.0231&0.0250&0.0228\\
10&0.0505&0.0111&0.0184&0.0106&0.0145&0.0098&0.0110&0.0190&0.0181\\
12&0.0361&0.0056&0.0155&0.0067&0.0101&0.0040&0.0056&0.0153&0.0152\\
\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[scale=0.55]{KMLSd}\setlength{\belowcaptionskip}{-8pt}
\caption{Estimation of the survival function using the KM (solid) and discrete log-symmetric distributions (dashed) with the pain relief data.}
\label{fno-}
\end{figure}
\end{Example}
\newpage
\section{Concluding remarks}\label{sec:06}
We have proposed a new class of distributions to deal with data that are discrete, asymmetric and nonnegative. The proposed approach is a discrete version of the family of continuous log-symmetric distributions. We have considered estimation of the model parameters by the maximum likelihood method with censored and uncensored data. A Monte Carlo simulation study was carried out to evaluate the behavior of the maximum likelihood estimators. We have applied the proposed models to two real-world data sets. In general, the results show that the proposed discrete family is a useful model for discrete data. As part of future research, it is of interest to study regression models as well as multivariate extensions. Moreover, time series models based on the proposed class may be of interest. Work on these issues is currently in progress, and we hope to report the findings in future papers.
\bibliographystyle{apalike}
\section{The Formalization}
\label{sec:formalization}
Large chunks of the material presented above have been formalized in the proof assistant \textsf{Coq}.
The version of \textsf{Coq} used is \textsf{Coq} 8.3pl5, patched according to the instructions given by
Voevodsky \cite{vv_foundations}.
Our formalization is based on Voevodsky's \emph{Foundations} library~\cite{vv_foundations},
and is available online \cite{rezk_coq}.
It is also available as an addendum to this arXiv submission.
\subsection*{Design principles}
Our general design principles largely follow the conventions established by Voevodsky \cite{vv_foundations}
with a few departures.
Both libraries use only three type constructors, namely $\Pi$, $\Sigma$, and $\textsf{Id}$, and avoid most of the syntactic sugar of \textsf{Coq} (such as record types).
Both make use of implicit arguments and, quite extensively, of coercions.
We restrict ourselves to these basic type constructors since they have a well-understood
semantics in various homotopy-theoretic models. Implicit arguments and coercions are
crucial to manage structures of high complexity. Furthermore, they reflect
familiar mathematical practice.
As for the differences, the use of notations, especially with infix symbols (for example, \lstinline!f ;; g! for
the composition of morphisms of a precategory) plays an important role in our formalization.
We also use the section mechanism of \textsf{Coq}
when several hypotheses are common to a series of constructions and lemmas, e.g.,
when constructing particular examples of complex structures.
\subsection*{Reading the code}
Since informal type theory, used in the previous sections, is supposed to match its formal equivalent quite closely,
the statements of the formalization are very similar to the corresponding statements of the informal type theory.
For example, our formal statement corresponding to \autoref{def:whisker} looks as follows:
\begin{lstlisting}
Lemma is_nat_trans_pre_whisker (A B C : precategory) (F : functor A B)
(G H : functor B C) (gamma : nat_trans G H) :
is_nat_trans (G o F) (H o F) (fun a : A => gamma (F a)).
\end{lstlisting}
The major differences occur when we split a large definition into parts, as, for example, for the definition of a precategory.
We first define:
\begin{lstlisting}
Definition precategory_ob_mor := total2 (
fun ob : UU => ob -> ob -> hSet).
\end{lstlisting}
Given an element \lstinline!C! of the above type, we write \lstinline!a : C!
for an inhabitant \lstinline!a! of its first component (using the \emph{coercion} mechanism of \textsf{Coq}) and \lstinline!a --> b! for the value of the second
component on \lstinline!a b : C!.
We complete the data of a precategory by:
\begin{lstlisting}
Definition precategory_data := total2 (
fun C : precategory_ob_mor =>
dirprod (forall c : C, c --> c)
(forall a b c : C, a --> b -> b --> c -> a --> c)).
\end{lstlisting}
In the following we write \lstinline!identity c! for the identity morphism
on an object \lstinline!c! and \lstinline!f ;; g! for the composite of
morphisms \lstinline!f : a --> b! and \lstinline!g : b --> c!.
We define a predicate expressing that this data constitutes a precategory:
\begin{lstlisting}
Definition is_precategory (C : precategory_data) :=
dirprod (dirprod (forall (a b : C) (f : a --> b),
identity a ;; f == f)
(forall (a b : C) (f : a --> b),
f ;; identity b == f))
(forall (a b c d : C)
(f : a --> b)(g : b --> c) (h : c --> d),
f ;; (g ;; h) == (f ;; g) ;; h).
\end{lstlisting}
As the last step, we say that a precategory is given by the data of a precategory
satisfying the necessary axioms:
\begin{lstlisting}
Definition precategory := total2 is_precategory.
\end{lstlisting}
\subsection*{Contents of the formalization}
In this part of the project we aimed at formalizing the Rezk completion together with its universal property. The formalization consists of 10 files:
\begin{itemize}
\item \texttt{precategories.v} which roughly covers \autoref{sec:cats}.
\item \texttt{functors\textunderscore transformations.v} which roughly covers \autoref{sec:transfors}.
\item \texttt{sub\textunderscore precategories.v} where we define sub-precategories and
the image factorization of a functor.
This is not a separate part of the paper, but it is used (in less generality) in \autoref{thm:rezk-completion}.
\item \texttt{equivalences.v} where we cover parts of \autoref{sec:equivalences} needed for \autoref{ct:cat-weq-eq}.
\item \texttt{category\textunderscore hset.v} where we define the precategory of sets and show that it is a category.
\item \texttt{yoneda.v} where we cover the main parts of \autoref{ct:yoneda}.
\item \texttt{whiskering.v} where we define the whiskering, see \autoref{def:whisker}.
\item \texttt{precomp\textunderscore fully\textunderscore faithful.v} that covers \autoref{lem:precomp-faithful} and \autoref{lem:precomp-full}.
\item \texttt{precomp\textunderscore ess\textunderscore surj.v} that covers \autoref{ct:cat-weq-eq}.
\item \texttt{rezk\textunderscore completion.v} that puts the previous files together exhibiting \autoref{thm:rezk-completion}.
\end{itemize}
\subsection*{Formalization vs informal definitions}
The formalization deviates very little from the informal definitions given in the previous sections.
We shall mention here the only example of such a deviation, resulting in a slicker definition.
In \autoref{def:adjoint} the natural transformations $(\epsilon F)$ and $(F\eta)$ (similarly,
$(G\epsilon)$ and $(\eta G)$) are actually not
composable! We have $\epsilon F : (FG)F \to 1_{B}F$ and $F\eta : F1_A \to F(GF)$.
However, $(FG)F$ and $F(GF)$ are not convertible, i.e.\ not \emph{definitionally} equal,
which would be necessary for the composition to typecheck. So in order to state the equality in question
we would have to insert a transport along propositional equality---see \autoref{ct:functor-assoc} and the subsequent discussion.
We overcome this issue by rephrasing the axiom: instead of requiring an equality of natural
transformations, we require it to hold pointwise.
These statements are logically and type-theoretically equivalent, but for the latter we have the
desired convertibility: for any $a : A$, the term $\big(F(GF)\big)(a)$ is convertible to $\big((FG)F\big)(a)$.
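For instance, with $F : A \to B$ and $G : B \to A$, the pointwise form of the first triangle identity can be stated so that both sides are morphisms $F(a) \to F(a)$ and the composition typechecks without any transport. Schematically (the identifiers below are illustrative and need not match the library verbatim):
\begin{lstlisting}
Definition triangle_id_pointwise (A B : precategory)
    (F : functor A B) (G : functor B A)
    (eta : nat_trans (functor_identity A) (G o F))
    (eps : nat_trans (F o G) (functor_identity B)) :=
  forall a : A, #F (eta a) ;; eps (F a) == identity (F a).
\end{lstlisting}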
\subsection*{Statistics}
Our library comprises ten files with ca.\ 180 definitions and 170 lemmas altogether.
The \texttt{coqwc} tool counts 1200 lines of specification---definitions and statements of lemmas and theorems---and
2700 lines of proof script overall.
\section{Introduction}
\label{sec:introduction}
Of the branches of mathematics, category theory is one which perhaps fits the least comfortably into existing ``foundations of mathematics''.
This is true both at an informal level, and when trying to be completely formal using a computer proof assistant.
One problem is that naive category theory tends to run afoul of Russellian paradoxes and has to be reinterpreted using universe levels; we will not have much to say about this.
But another problem is that most of category theory is invariant under weaker notions of ``sameness'' than equality, such as isomorphism in a category or equivalence of categories, in a way which traditional foundations (such as set theory) fail to capture.
This problem becomes especially important when formalizing category theory in a computer proof assistant.
Our aim in this paper is to show that this problem can be ameliorated using the new \emph{Univalent Foundations} of mathematics, a.k.a.\ \emph{homotopy type theory}, proposed by V.~Voevodsky \cite{vv_uf}.
It builds on the existing system of dependent type theory \cite{martin-lof:bibliopolis, werner:thesis}, a logical system that is feasible for large-scale formalization of mathematics \cite{gonthier:feit-thompson} and also for internal categorical logic.
The distinctive feature of Univalent Foundations (UF) is its treatment of equality inspired by homotopy-theoretic semantics \cite{awodey-warren, arndt-kapulkin, warren:thesis, garner-van-den-berg:top-and-simp-models}.
Using this interpretation, Voevodsky has extended dependent type theory with an additional axiom, called the \emph{Univalence Axiom}, which was originally suggested by the model of the theory in the category of simplicial sets~\cite{klv:ssetmodel}, and should also be valid in other homotopical models such as categories of higher stacks.
The univalence axiom identifies \emph{identity} of types with \emph{equivalence} of types.
In particular, this implies that anything we can say about sets is automatically invariant under isomorphism, because isomorphism is identified with identity.
In other words, under the univalence axiom, the category of sets \emph{automatically} behaves ``categorically'', in that isomorphic objects cannot be distinguished.
Our goal in this paper is to extend this behavior to other categories, which requires a more careful analysis of the definition of ``category''.
If we ignore size issues, then in set-based mathematics, a category consists of a \emph{set} of objects and, for each pair $x,y$ of objects, a \emph{set} $\hom(x,y)$ of morphisms.
Under Univalent Foundations, a ``naive'' definition of category would simply mimic this with a \emph{type} of objects and \emph{types} of morphisms.
However, if we allowed these types to contain arbitrary higher homotopy, then we ought to impose higher coherence conditions on the associativity and unitality axioms, leading to some notion of $(\infty,1)$-category.
Eventually this should be done, but at present our goal is more modest.
We restrict ourselves to 1-categories, and therefore we restrict the hom-types $\hom(x,y)$ to be \emph{sets} in the sense of UF, i.e.\ types satisfying the principle UIP of ``uniqueness of identity proofs''.
More interesting is whether the type of objects should have any higher homotopy.
If we require it also to be a set, then we end up with a definition that behaves more like the traditional set-theoretic one.
Following Toby Bartels, we call this notion a \emph{strict category}.
However, a (usually) better option is to require a generalized version of the univalence axiom, identifying the \emph{identity type} $(x=_{\mathsf{Obj}} y)$ between two objects with the type $\mathsf{iso}(x,y)$ of \emph{isomorphisms} from $x$ to $y$.
(In particular, this implies that each type $(x=_{\mathsf{Obj}} y)$ is a set, and that therefore the type of objects is a \emph{1-type}, containing no higher homotopy above dimension 1.)
This seems to have been first suggested by Hofmann and Streicher~\cite{hs:gpd-typethy}, who also introduced a precursor of the univalence axiom under the name ``universe extensionality''.
We consider it to be the ``correct'' definition of \emph{category} in Univalent Foundations, since it automatically implies that anything we say about objects of a category is invariant under isomorphism.
For emphasis, we may call such a category a \emph{saturated} or \emph{univalent} category.
Most categories encountered in practice are saturated, at least in the presence of the univalence axiom.
Those which are not saturated, such as the category of $n$-types and homotopy classes of functions for $n\ge 1$, tend to behave much worse than the saturated ones.
Thus, in the non-saturated and non-strict case, we use instead the slightly derogatory word \emph{precategory}.
A good example of the difference between the three notions of category is provided by the statement ``every fully faithful and essentially surjective functor is an equivalence of categories'', which in classical set-based category theory is equivalent to the axiom of choice.
\begin{enumerate}
\item For strict categories, this is still equivalent to the axiom of choice.
\item For precategories, there is no axiom of choice which can make it true.
\item For saturated categories, it is provable \emph{without} any axiom of choice.\label{item:satnoac}
\end{enumerate}
Saturated categories have the additional advantage that (as conjectured by Hofmann and Streicher~\cite{hs:gpd-typethy}) they are ``univalent as objects'' as well.
Specifically, just as isomorphic objects \emph{in} a saturated category are equal, \emph{equivalent} saturated categories are themselves equal.
When interpreted in Voevodsky's simplicial set model, our precategories are similar to a truncated analogue of the \emph{Segal spaces} of Rezk~\cite[Sec.~14]{rezk01css}, while our saturated categories correspond to his \emph{complete Segal spaces}.
Strict categories correspond instead to (a weakened and truncated version of) \emph{Segal categories}.
It is known that Segal categories and complete Segal spaces are equivalent models for $(\infty,1)$-categories (see e.g.~\cite{bergner:infty-one}), so that in the simplicial set model, strict and saturated categories yield ``equivalent'' category theories---although as mentioned above, the saturated ones still have many advantages.
However, in the more general categorical semantics of a higher topos, a strict category corresponds to an internal category (in the traditional sense) in the corresponding 1-topos of sheaves, while a saturated category corresponds to a \emph{stack}.
Internal categories are \emph{not} equivalent to stacks (in fact, stacks form a localization of internal categories~\cite{jt:strong-stacks}), and it is well-known that stacks are generally a more appropriate sort of ``category'' relative to a topos.
Besides developing the basic theory of precategories and saturated categories, one of the main goals of this paper is to describe a universal way of ``saturating'' a precategory.
More precisely, we show that the obvious inclusion of saturated categories into precategories has a left adjoint, in the appropriate bicategorical sense.
More concretely, from any precategory $A$, we construct a saturated category $\widehat{A}$, with a universal functor
$A \to \widehat{A}$ (the unit of the adjunction).
With the connection to Rezk's complete Segal spaces in mind, we call the saturation of a precategory its \emph{Rezk completion}.
However, with higher topos semantics in mind, it could also reasonably be called the \emph{stack completion}: a strict category in the internal type theory of a higher topos corresponds to an internal category in the 1-topos of sheaves, and its Rezk completion is essentially its stack completion.
Our construction uses a Yoneda embedding as in~\cite{bunge:stacks-morita-internal} rather than a transfinite localization argument as in~\cite{jt:strong-stacks,rezk01css}, but it is also possible to mimic the latter more closely in type theory using ``higher inductive types''~\cite{ls:hits}.
A slightly expanded version of this paper, which includes this alternative proof, is included in~\cite[Chapter 9]{HoTTbook}.
The Rezk completion also sheds further light on the notion of equivalence of categories.
For instance, the functor $A \to \widehat{A}$ is always fully faithful and essentially surjective, hence a ``weak equivalence''.
It follows that a precategory is a saturated category exactly when it ``sees'' all fully faithful and essentially surjective functors as equivalences.
(The analogous facts for complete Segal spaces and stacks are well-known.)
In particular, the notion of saturated category is already inherent in the notion of ``fully faithful and essentially surjective functor''.
Finally, as mentioned above, one of the virtues of Univalent Foundations (and type theory more generally) is the feasibility of formalizing it in a computer proof assistant.
We have taken advantage of this by verifying large parts of the theory of precategories and saturated categories in the proof assistant \textsf{Coq}, building on Voevodsky's \emph{Foundations} library for UF~\cite{vv_foundations}.
In particular, the formalization includes the Rezk completion together with its universal property.
Our \textsf{Coq} files are attached to this arXiv submission.
\begin{rmk}
Because saturated categories are the ``correct'' notion of category in UF, when working internally in UF we drop the adjective ``saturated'' and speak merely of \emph{categories}.
The adjective is only necessary when comparing such categories to other ``external'' notions of category.
\end{rmk}
\subsection*{Outline of the paper}
In \S\ref{sec:background} we recall some definitions from Univalent Foundations.
Then in \S\S\ref{sec:cats}--\ref{sec:yoneda} we develop the basic theory of precategories and saturated categories informally, working entirely inside of Univalent Foundations.
We define functors, natural transformations, adjunctions, equivalences, and prove the Yoneda lemma.
We also show that equivalent categories are equal.
In \S\ref{sec:rezk} we construct the Rezk completion which, as described above, universally saturates any precategory.
Finally, \S\ref{sec:formalization} describes the content of our formalization, the organization of the source files, and the differences between the informal presentation and its formal analog.
The actual \textsf{Coq} code is available as a supplement to this paper \cite{rezk_coq}.
\subsection*{Acknowledgements}
First and foremost, we would like to thank Vladimir Voevodsky for initiating the project of Univalent Foundations and for much assistance.
We are also very grateful to the organizers of the special year at the Institute for Advanced Study in 2012--2013, where much of this work was done.
The first- and third-named authors were supported by NSF grant DMS-1128155.
The second-named author was supported by NSF Grant DMS-1001191 (P.I. Steve Awodey).
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
The second-named author dedicates this work to his mother.
\section{Review of univalent foundations}
\label{sec:background}
Most of this paper is written in an informal style, with the intent of describing mathematics that could be formalized in Univalent Foundations, analogously to the way that traditional mathematics is discussed informally but is generally accepted to be formalizable in set theory.
We do not have space to give an introduction to UF here; instead we refer the reader to~\cite{pelayo-warren:univalent-foundations-paper}.
However, a brief reminder of the essential concepts may be helpful.
The basic objects are \emph{types}, which have \emph{elements}, with the basic judgment of elementhood denoted $a:A$.
There are the usual constructions on types such as dependent sums and dependent products, which we generally write about in English according to the propositions-as-types interpretation: we identify the activity of \emph{proving a theorem} with the activity of \emph{constructing a term in a type}.
For instance, a statement like ``for all $x:A$ we have $P(x)$'' indicates that we have an element of the type ${\textstyle\prod}_{x:A} P(x)$, while ``there exists an $x:A$ such that $P(x)$'' indicates $\sm{x:A} P(x)$.
Depending on context, we may also pronounce $\sm{x:A} P(x)$ as ``the type of $x:A$ such that $P(x)$'' and write it as $\setof{x:A | P(x)}$.
For $a,b:A$ there is an \emph{identity type} $a=b$ (or $a=_A b$ for emphasis), which in the homotopical semantics becomes a \emph{path type}.
It has the universal property that we may prove things about a general $p:a=b$ by restricting to the special case when $a$ and $b$ are the same and $p$ is ``reflexivity''.
We refer to this as \emph{path induction} or \emph{induction on identity}.
For instance, in this way we can show that if $(P(x))_{x:A}$ is a family of types indexed by $A$, and we have $p:a=_A b$ and $u:P(a)$, then we can \emph{transport} $u$ along $p$ to obtain an element $\trans{p}{u} : P(b)$.
Similarly, we can show that for any $f:A\to B$ and $p:x=_A y$, we have $f(p):f(x)=_B f(y)$, and we can compose paths (written $p\mathrel{\raisebox{.5ex}{$\centerdot$}} q$) and reverse paths (written $\opp{p}$).
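For reference, the basic path operations just described can be summarized as follows, for $p:x=_A y$, $q:y=_A z$, a family $(P(x))_{x:A}$, and $f:A\to B$:
\begin{align*}
\trans{p}{-} &: P(x) \to P(y), &
f(p) &: f(x)=_B f(y), \\
p\mathrel{\raisebox{.5ex}{$\centerdot$}} q &: x=_A z, &
\opp{p} &: y=_A x.
\end{align*}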
The identity type of many types can be characterized up to equivalence (see below).
For instance, to say $(x,u) = (y,v)$ in $\sm{a:A} P(a)$ is equivalent to saying that $p:x =_A y$ and $\trans{p}{u} =_{P(y)} v$.
And to say $f=g$ in ${\textstyle\prod}_{a:A} P(a)$ is to say that $f(x)=g(x)$ for all $x:A$ (this is \emph{function extensionality}, which follows from the univalence axiom below).
A type $A$ is called a \emph{mere proposition} if for all $a,b:A$ we have $a=b$.
Homotopically, these are the spaces which, if nonempty, are contractible.
With this in mind, we call a type $A$ \emph{contractible} if it is a mere proposition and has an element $a:A$.
On the other hand, we call $A$ a \emph{set} if for all $a,b:A$, the type $a=b$ is a mere proposition.
Homotopically, these are the spaces which are equivalent to discrete ones.
More generally, $A$ is an \emph{$n$-type} if each $a=b$ is an $(n-1)$-type, with the 0-types being the sets, the $(-1)$-types the mere propositions, and the $(-2)$-types the contractible ones.
This exactly matches the traditional notion of \emph{homotopy $n$-type}.
A \emph{quasi-inverse} of a function $f:A\to B$ is a function $g:B\to A$ together with paths $\eta_x:x = g(f(x))$ for all $x:A$ and $\epsilon_y:f(g(y))=y$ for all $y:B$.
We say $f$ is an \emph{equivalence} if it has a quasi-inverse such that $f(\eta_x) \mathrel{\raisebox{.5ex}{$\centerdot$}} \epsilon_{f(x)} = {\sf r}{f(x)}$ for all $x:A$.
In fact, if $f$ has a quasi-inverse, then it is an equivalence (by modifying $\epsilon$ or $\eta$); this is the usual way that we construct equivalences.
However, the type ``$f$ is an equivalence'' is better-behaved than ``$f$ has a quasi-inverse''; in particular it is a mere proposition.
We write $A\simeq B$ for the type $\sm{f:A\to B} \mathsf{isequiv}(f)$ of equivalences from $A$ to $B$.
In the formalization, we use an equivalent definition that $f:A\to B$ is an equivalence if for all $b:B$, its ``homotopy fiber'' $\sm{x:A} (f(x)=b)$ is contractible.
In some literature such functions are called ``weak equivalences'', but there is nothing weak about them, since in particular they have quasi-inverses.
The types in UF are stratified in a linearly ordered hierarchy of \emph{universes}, which are types whose elements are themselves types.
For most of the paper we avoid mentioning particular universes explicitly: we write simply ``\ensuremath{\mathsf{Type}}\xspace'' to indicate \emph{some} universe.
This is called \emph{typical ambiguity}: universes are implicitly quantified over.
However, in \S\S\ref{sec:yoneda}--\ref{sec:rezk} we will be a little more careful.
All our universes are assumed to satisfy the univalence axiom, which says that for types $A,B:\ensuremath{\mathsf{Type}}\xspace$ in some universe \ensuremath{\mathsf{Type}}\xspace, the canonical map $(A=_\ensuremath{\mathsf{Type}}\xspace B) \to (A\simeq B)$ is an equivalence.
We write $\set$ for the type $\sm{A:\ensuremath{\mathsf{Type}}\xspace} \mathsf{isset}(A)$ of all sets (in some universe \ensuremath{\mathsf{Type}}\xspace).
Technically, this is the type of pairs $(A,s)$ where $A$ is a type and $s$ inhabits the type ``$A$ is a set'', but since the latter type is a mere proposition, it is usually easy to ignore the distinction.
Similarly, we write $\ensuremath{\mathsf{Prop}}\xspace \defeq \sm{A:\ensuremath{\mathsf{Type}}\xspace} \mathsf{isprop}(A)$ for the type of all mere propositions.
One type forming operation we use in UF which is not as well-known in type theory is the \emph{propositional truncation} of a type $A$.
This is a type $\brck A$ that is a mere proposition, and has the universal property that whenever we want to prove a type $B$ (i.e.\ construct an element of $B$) assuming $\brck A$, and $B$ is a mere proposition, then we may assume $A$ instead of $\brck A$.
In the formalization, we define $\brck A$ with an impredicative encoding as
\[ \brck A \defeq {\textstyle\prod}_{P:\ensuremath{\mathsf{Prop}}\xspace} (A\to P) \to P. \]
This depends for its correctness on an impredicativity axiom for mere propositions (every mere proposition is equivalent to one living in the smallest universe), and also lives in a higher universe level than $A$.
However, $\brck A$ can be constructed as a higher inductive type~\cite{ls:hits}, avoiding both of these issues.
In informal mathematical English, we use the adverb \emph{merely} to indicate the propositional truncation; thus for instance ``there merely exists an $x:A$ such that $P(x)$'' indicates $\bbrck{\sm{x:A} P(x)}$.
In contrast to the type-theoretic ``there exists'' which is strongly constructive, ``mere existence'' is more like the usual mathematical sort of ``there exists'' which does not imply that any particular choice of such an object has been specified.
The propositional truncation is actually the case $n=-1$ of a more general $n$-truncation operation, which makes any type $A$ into an $n$-type $\trunc n A$ in a universal way.
However, we will not have much need of the $n$-truncation for $n\ge 0$.
A function $f:A\to B$ between types is called a \emph{monomorphism} if for all $x,y:A$, the function $f:(x=y) \to (f(x)=f(y))$ is an equivalence.
If $A$ and $B$ are sets, then it is equivalent to say that for all $x,y:A$, if $f(x)=f(y)$, then $x=y$; so in this case we also say that $f$ is \emph{injective}.
Also if $A$ and $B$ are sets, we say that $f:A\to B$ is \emph{surjective} if for every $b:B$ there merely exists an $a:A$ such that $f(a)=b$.
If in this definition we leave out the adverb ``merely'', we call the resulting notion being \emph{split surjective}; in the absence of the axiom of choice the two are different.
(Type theorists are accustomed to use the phrase ``the axiom of choice'' for a provable statement which is really about commutation of dependent sums and products; in UF one can state an axiom of choice that behaves more like the familiar one in set theory.
However, we will not need any such axiom.)
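For the record, the provable statement in question is the commutation of dependent products over dependent sums:
\[ \Big({\textstyle\prod}_{x:A}\sm{y:B}P(x,y)\Big) \to \Big(\sm{f:A\to B}{\textstyle\prod}_{x:A}P(x,f(x))\Big). \]
An axiom of choice closer to the set-theoretic one is obtained by inserting propositional truncations, e.g.\ asserting for a set $X$ and a family of sets $Y$ that ${\textstyle\prod}_{x:X}\brck{Y(x)}$ implies $\bbrck{{\textstyle\prod}_{x:X}Y(x)}$; as stated above, we do not assume this.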
\section{Categories and precategories}
\label{sec:cats}
We use a definition of category in which the arrows form a family of types indexed by the objects.
This matches the way hom-types are always used in category theory; for instance, we never even consider comparing two arrows unless we know their sources and targets agree.
Furthermore, it seems clear that for a theory of 1-categories, the hom-types should all be sets.
This leads us to the following.
\begin{defn}\label{ct:precategory}
A \textbf{precategory} $A$ consists of the following.
\begin{enumerate}
\item A type $A_0$ of \emph{objects}. We write $a:A$ for $a:A_0$.
\item For each $a,b:A$, a set $\hom_A(a,b)$ of \emph{arrows} or \emph{morphisms}.
\item For each $a:A$, a morphism $1_a:\hom_A(a,a)$.
\item For each $a,b,c:A$, a function of type
\[ \hom_A(b,c) \to \hom_A(a,b) \to \hom_A(a,c) \]
denoted infix by $g\mapsto f\mapsto g\circ f$, or sometimes simply by $gf$.
\item For each $a,b:A$ and $f:\hom_A(a,b)$, we have $\id f {1_b\circ f}$ and $\id f {f\circ 1_a}$.
\item For each $a,b,c,d:A$ and $f:\hom_A(a,b)$, $g:\hom_A(b,c)$, $h:\hom_A(c,d)$, we have $\id {h\circ (g\circ f)}{(h\circ g)\circ f}$.
\end{enumerate}
\end{defn}
The problem with the notion of precategory is that for objects $a,b:A$, we have two possibly-different notions of ``sameness''.
On the one hand, we have $\id[A_0]{a}{b}$.
But on the other hand, there is the standard categorical notion of \emph{isomorphism}.
\begin{defn}\label{ct:isomorphism}
A morphism $f:\hom_A(a,b)$ is an \textbf{isomorphism} if there is a morphism $g:\hom_A(b,a)$ such that $\id{g\circ f}{1_a}$ and $\id{f\circ g}{1_b}$.
We write $a\cong b$ for the type of such isomorphisms.
\end{defn}
\begin{lem}\label{ct:isoprop}
For any $f:\hom_A(a,b)$, the type ``$f$ is an isomorphism'' is a mere proposition.
Therefore, for any $a,b:A$ the type $a\cong b$ is a set.
\end{lem}
\begin{proof}
Suppose given $g:\hom_A(b,a)$ and $\eta:(\id{1_a}{g\circ f})$ and $\epsilon:(\id{f\circ g}{1_b})$, and similarly $g'$, $\eta'$, and $\epsilon'$.
We must show $\id{(g,\eta,\epsilon)}{(g',\eta',\epsilon')}$.
But since all hom-sets are sets, their identity types (in which $\eta$ and $\epsilon$ live) are mere propositions, so it suffices to show $\id g {g'}$.
For this we have
\[g' = 1_a\circ g' = (g\circ f)\circ g' = g\circ (f\circ g') = g\circ 1_b = g\]
using $\eta$ and $\epsilon'$.
\end{proof}
If $f:a\cong b$, then we write $\inv f$ for its inverse, which by \autoref{ct:isoprop} is uniquely determined.
The only relationship between these two notions of sameness that we have in a precategory is the following.
\begin{lem}[\textsf{idtoiso}]\label{ct:idtoiso}
If $A$ is a precategory and $a,b:A$, then
\[(\id a b)\to (a \cong b).\]
\end{lem}
\begin{proof}
By induction on identity, we may assume $a$ and $b$ are the same.
But then we have $1_a:\hom_A(a,a)$, which is clearly an isomorphism.
\end{proof}
The intuitive similarity to the univalence axiom should be clear.
More precisely, we have the following:
\begin{eg}\label{ct:precatset}
There is a precategory \ensuremath{\underline{\set}}\xspace, whose type of objects is \set, and with $\hom_{\ensuremath{\underline{\set}}\xspace}(A,B) \defeq (A\to B)$.
The identity morphisms are identity functions and the composition is function composition.
For this precategory, the map of \autoref{ct:idtoiso} is equal to the restriction to sets of the canonical identity-to-equivalence map, which the univalence axiom asserts to be an equivalence.
\end{eg}
Thus, it is natural to make the following definition.
\begin{defn}\label{ct:category}
A \textbf{category} is a precategory such that for all $a,b:A$, the function $\ensuremath{\mathsf{idtoiso}}\xspace_{a,b}$ from \autoref{ct:idtoiso} is an equivalence.
\end{defn}
In particular, in a category, if $a\cong b$, then $a=b$.
\begin{eg}\label{ct:eg:set}
The univalence axiom implies immediately that \ensuremath{\underline{\set}}\xspace is a category.
One can also show, using univalence, that any precategory of set-level structures such as groups, rings, topological spaces, etc.\ is a category; see for instance~\cite{dc:isoeq}.
\end{eg}
We also note the following.
\begin{lem}\label{ct:obj-1type}
In a category, the type of objects is a 1-type.
\end{lem}
\begin{proof}
It suffices to show that for any $a,b:A$, the type $\id a b$ is a set.
But $\id a b$ is equivalent to $a \cong b$, which is a set.
\end{proof}
We write $\ensuremath{\mathsf{isotoid}}\xspace$ for the inverse $(a\cong b) \to (\id a b)$ of the map $\ensuremath{\mathsf{idtoiso}}\xspace$ from \autoref{ct:idtoiso}.
The following relationship between the two is important.
Recall the notion of \emph{transport} along a path, denoted $\trans p z$.
Additionally, if $p:\id a a'$ and $q:\id b b'$, then we write $(p,q)$ for the induced path of type $\id{(a,b)}{(a',b')}$.
\begin{lem}\label{ct:idtoiso-trans}
For $p:\id a a'$ and $q:\id b b'$ and $f:\hom_A(a,b)$, we have
\begin{equation}\label{ct:idtoisocompute}
\id{\trans{(p,q)}{f}}
{\ensuremath{\mathsf{idtoiso}}\xspace(q)\circ f \circ \inv{\ensuremath{\mathsf{idtoiso}}\xspace(p)}}
\end{equation}
\end{lem}
\begin{proof}
By induction, we may assume $p$ and $q$ are ${\sf r} a$ and ${\sf r} b$ respectively.
Then the left-hand side of~\eqref{ct:idtoisocompute} is simply $f$.
But by definition, $\ensuremath{\mathsf{idtoiso}}\xspace({\sf r} a)$ is $1_a$, and $\ensuremath{\mathsf{idtoiso}}\xspace({\sf r} b)$ is $1_b$, so the right-hand side of~\eqref{ct:idtoisocompute} is $1_b\circ f\circ 1_a$, which is equal to $f$.
\end{proof}
Similarly, we can show
\begin{gather}
\id{\ensuremath{\mathsf{idtoiso}}\xspace(\opp{p})}{\inv {(\ensuremath{\mathsf{idtoiso}}\xspace(p))}}\\
\id{\ensuremath{\mathsf{idtoiso}}\xspace(p\mathrel{\raisebox{.5ex}{$\centerdot$}} q)}{\ensuremath{\mathsf{idtoiso}}\xspace(q)\circ \ensuremath{\mathsf{idtoiso}}\xspace(p)}\\
\id{\ensuremath{\mathsf{isotoid}}\xspace(f\circ e)}{\ensuremath{\mathsf{isotoid}}\xspace(e)\mathrel{\raisebox{.5ex}{$\centerdot$}} \ensuremath{\mathsf{isotoid}}\xspace(f)}
\end{gather}
and so on.
\begin{eg}\label{ct:orders}
A precategory in which each set $\hom_A(a,b)$ is a mere proposition is equivalently a type $A_0$ equipped with a mere relation ``$\le$'' that is reflexive ($a\le a$) and transitive (if $a\le b$ and $b\le c$, then $a\le c$).
We call this a \textbf{preorder}.
In a preorder, a morphism $f\colon a\le b$ is an isomorphism just when there exists some proof $g\colon b\le a$.
Thus, $a\cong b$ is the mere proposition that $a\le b$ and $b\le a$.
Therefore, a preorder $A$ is a category just when (1) each type $a=b$ is a mere proposition, and (2) for any $a,b:A_0$ there exists a function $(a\cong b) \to (a=b)$.
In other words, $A_0$ must be a set, and $\le$ must be antisymmetric (if $a\le b$ and $b\le a$, then $a=b$).
We call this a \textbf{(partial) order} or a \textbf{poset}.
\end{eg}
\begin{eg}\label{ct:gaunt}
If $A$ is a category, then $A_0$ is a set if and only if for any $a,b:A_0$, the type $a\cong b$ is a mere proposition.
Classically, a category satisfies this condition if and only if it is equivalent to one in which every isomorphism is an identity morphism.
A category of the latter sort is sometimes called \textbf{gaunt} (this term was introduced by Barwick and Schommer-Pries~\cite{bsp12infncats}).
\end{eg}
\begin{eg}\label{ct:discrete}
For any 1-type $X$, there is a category with $X$ as its type of objects and with $\hom(x,y) \defeq (x=y)$.
If $X$ is a set, we call this the \textbf{discrete} category on $X$.
In general, we call it a \textbf{groupoid}.
\end{eg}
\begin{eg}\label{ct:fundgpd}
For \emph{any} type $X$, there is a precategory with $X$ as its type of objects and with $\hom(x,y) \defeq \trunc0{x=y}$, the 0-truncation of its identity type.
We call this the \emph{fundamental pregroupoid} of $X$.
\end{eg}
\begin{eg}\label{ct:hoprecat}
There is a precategory whose type of objects is \ensuremath{\mathsf{Type}}\xspace and with $\hom(X,Y) \defeq \trunc0{X\to Y}$.
We call this the \emph{homotopy precategory of types}.
\end{eg}
\begin{rmk}\label{defn:strict}
As suggested in the introduction, if a precategory has the property that its type $A_0$ of objects is a \emph{set}, we call it a \textbf{strict category}.
We will not have much to say about strict categories in this paper, however.
\end{rmk}
\section{Functors and transformations}
\label{sec:transfors}
The following definitions are fairly obvious, and need no modification.
\begin{defn}\label{ct:functor}
Let $A$ and $B$ be precategories.
A \textbf{functor} $F:A\to B$ consists of
\begin{enumerate}
\item A function $F_0:A_0\to B_0$, generally also denoted $F$.
\item For each $a,b:A$, a function $F_{a,b}:\hom_A(a,b) \to \hom_B(Fa,Fb)$, generally also denoted $F$.
\item For each $a:A$, we have $\id{F(1_a)}{1_{Fa}}$.
\item For each $a,b,c:A$ and $f:\hom_A(a,b)$ and $g:\hom_A(b,c)$, we have
\[\id{F(g\circ f)}{Fg\circ Ff}.\]
\end{enumerate}
\end{defn}
Note that by induction on identity, a functor also preserves \ensuremath{\mathsf{idtoiso}}\xspace: for any $p:\id a b$ we have $\id{F(\ensuremath{\mathsf{idtoiso}}\xspace(p))}{\ensuremath{\mathsf{idtoiso}}\xspace(F_0(p))}$.
\begin{defn}\label{ct:nattrans}
For functors $F,G:A\to B$, a \textbf{natural transformation} $\gamma:F\to G$ consists of
\begin{enumerate}
\item For each $a:A$, a morphism $\gamma_a:\hom_B(Fa,Ga)$.
\item For each $a,b:A$ and $f:\hom_A(a,b)$, we have $\id{Gf\circ \gamma_a}{\gamma_b\circ Ff}$.
\end{enumerate}
\end{defn}
Since each type $\hom_B(Fa,Gb)$ is a set, its identity type is a mere proposition.
Thus, the naturality axiom is a mere proposition, so (invoking function extensionality) identity of natural transformations is determined by identity of their components.
In particular, for any $F$ and $G$, the type of natural transformations from $F$ to $G$ is again a set.
Similarly, identity of functors is determined by identity of the functions $A_0\to B_0$ and (transported along this) of the corresponding functions on hom-sets.
\begin{defn}\label{ct:functor-precat}
For precategories $A,B$, there is a precategory $B^A$ defined by
\begin{itemize}
\item $(B^A)_0$ is the type of functors from $A$ to $B$.
\item $\hom_{B^A}(F,G)$ is the type of natural transformations from $F$ to $G$.
\end{itemize}
\end{defn}
\begin{proof}
We define $(1_F)_a\defeq 1_{Fa}$.
Naturality follows by the unit axioms of a precategory.
For $\gamma:F\to G$ and $\delta:G\to H$, we define $(\delta\circ\gamma)_a\defeq \delta_a\circ \gamma_a$.
Naturality follows by associativity.
Similarly, the unit and associativity laws for $B^A$ follow from those for $B$.
\end{proof}
\begin{lem}\label{ct:natiso}
A natural transformation $\gamma:F\to G$ is an isomorphism in $B^A$ if and only if each $\gamma_a$ is an isomorphism in $B$.
\end{lem}
\begin{proof}
If $\gamma$ is an isomorphism, then we have $\delta:G\to F$ that is its inverse.
By definition of composition in $B^A$, $(\delta\gamma)_a\jdeq \delta_a\gamma_a$ and similarly $(\gamma\delta)_a\jdeq \gamma_a\delta_a$.
Thus, $\id{\delta\gamma}{1_F}$ and $\id{\gamma\delta}{1_G}$ imply $\id{\delta_a\gamma_a}{1_{Fa}}$ and $\id{\gamma_a\delta_a}{1_{Ga}}$, so $\gamma_a$ is an isomorphism.
Conversely, suppose each $\gamma_a$ is an isomorphism, with inverse called $\delta_a$, say.
We define a natural transformation $\delta:G\to F$ with components $\delta_a$; for the naturality axiom we have
\[ Ff\circ \delta_a = \delta_b\circ \gamma_b\circ Ff \circ \delta_a = \delta_b\circ Gf\circ \gamma_a\circ \delta_a = \delta_b\circ Gf. \]
Now since composition and identity of natural transformations is determined on their components, we have $\id{\gamma\delta}{1_G}$ and $\id{\delta\gamma}{1_F}$.
\end{proof}
The following result, due originally to Hofmann and Streicher~\cite{hs:gpd-typethy}, is fundamental.
\begin{thm}\label{ct:functor-cat}
If $A$ is a precategory and $B$ is a category, then $B^A$ is a category.
\end{thm}
\begin{proof}
Let $F,G:A\to B$; we must show that $\ensuremath{\mathsf{idtoiso}}\xspace:(\id{F}{G}) \to (F\cong G)$ is an equivalence.
To give an inverse to it, suppose $\gamma:F\cong G$ is a natural isomorphism.
Then for any $a:A$, we have an isomorphism $\gamma_a:Fa \cong Ga$, hence an identity $\ensuremath{\mathsf{isotoid}}\xspace(\gamma_a):\id{Fa}{Ga}$.
By function extensionality, we have an identity $\bar{\gamma}:\id[(A_0\to B_0)]{F_0}{G_0}$.
Now since the last two axioms of a functor are mere propositions, to show that $\id{F}{G}$ it will suffice to show that for any $a,b:A$, the functions
\begin{align*}
F_{a,b}&:\hom_A(a,b) \to \hom_B(Fa,Fb)\mathrlap{\qquad\text{and}}\\
G_{a,b}&:\hom_A(a,b) \to \hom_B(Ga,Gb)
\end{align*}
become equal when transported along $\bar\gamma$.
By computation for function extensionality, when applied to $a$, $\bar\gamma$ becomes equal to $\ensuremath{\mathsf{isotoid}}\xspace(\gamma_a)$.
But by \autoref{ct:idtoiso-trans}, transporting $Ff:\hom_B(Fa,Fb)$ along $\ensuremath{\mathsf{isotoid}}\xspace(\gamma_a)$ and $\ensuremath{\mathsf{isotoid}}\xspace(\gamma_b)$ is equal to the composite $\gamma_b\circ Ff\circ \inv{(\gamma_a)}$, which by naturality of $\gamma$ is equal to $Gf$.
This completes the definition of a function $(F\cong G) \to (\id F G)$.
Now consider the composite
\[ (\id F G) \to (F\cong G) \to (\id F G). \]
Since hom-sets are sets, their identity types are mere propositions, so to show that two identities $p,q:\id F G$ are equal, it suffices to show that $\id[\id{F_0}{G_0}]{p}{q}$.
But in the definition of $\bar\gamma$, if $\gamma$ were of the form $\ensuremath{\mathsf{idtoiso}}\xspace(p)$, then $\gamma_a$ would be equal to $\ensuremath{\mathsf{idtoiso}}\xspace(p_a)$ (this can easily be proved by induction on $p$).
Thus, $\ensuremath{\mathsf{isotoid}}\xspace(\gamma_a)$ would be equal to $p_a$, and so by function extensionality we would have $\id{\bar\gamma}{p}$, which is what we need.
Finally, consider the composite
\[(F\cong G)\to (\id F G) \to (F\cong G). \]
Since identity of natural transformations can be tested componentwise, it suffices to show that for each $a$ we have $\id{\ensuremath{\mathsf{idtoiso}}\xspace(\bar\gamma)_a}{\gamma_a}$.
But as observed above, we have $\id{\ensuremath{\mathsf{idtoiso}}\xspace(\bar\gamma)_a}{\ensuremath{\mathsf{idtoiso}}\xspace((\bar\gamma)_a)}$, while $\id{(\bar\gamma)_a}{\ensuremath{\mathsf{isotoid}}\xspace(\gamma_a)}$ by computation for function extensionality.
Since $\ensuremath{\mathsf{isotoid}}\xspace$ and $\ensuremath{\mathsf{idtoiso}}\xspace$ are inverses, we have $\id{\ensuremath{\mathsf{idtoiso}}\xspace(\bar\gamma)_a}{\gamma_a}$.
\end{proof}
In particular, naturally isomorphic functors between categories (as opposed to precategories) are equal.
\begin{defn}
For functors $F:A\to B$ and $G:B\to C$, their composite $G\circ F:A\to C$ is given by
\begin{itemize}
\item The composite $(G_0\circ F_0) : A_0 \to C_0$
\item For each $a,b:A$, the composite
\[(G_{Fa,Fb}\circ F_{a,b}):\hom_A(a,b) \to \hom_C(GFa,GFb).\]
\end{itemize}
It is easy to check the axioms.
\end{defn}
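For instance, writing out the composition axiom for $G\circ F$: for $f:\hom_A(a,b)$ and $g:\hom_A(b,c)$ we have
\begin{align*}
(G\circ F)(g\circ f) &\jdeq G(F(g\circ f))\\
&= G(F(g)\circ F(f))\\
&= G(F(g))\circ G(F(f))\\
&\jdeq (G\circ F)(g)\circ (G\circ F)(f),
\end{align*}
using the composition axioms of $F$ and $G$; the identity axiom is analogous.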
\begin{defn}\label{def:whisker}
For functors $F:A\to B$ and $G,H:B\to C$ and a natural transformation $\gamma:G\to H$, the composite $(\gamma F):GF\to HF$ is given by
\begin{itemize}
\item For each $a:A$, the component $\gamma_{Fa}$.
\end{itemize}
Naturality is easy to check.
Similarly, for $\gamma$ as above and $K:C\to D$, the composite $(K\gamma):KG\to KH$ is given by
\begin{itemize}
\item For each $b:B$, the component $K(\gamma_b)$.
\end{itemize}
\end{defn}
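Explicitly, naturality of $\gamma F$ at $f:\hom_A(a,a')$ reduces to naturality of $\gamma$ at $Ff$:
\[ (\gamma F)_{a'}\circ GF(f) \jdeq \gamma_{Fa'}\circ G(Ff) = H(Ff)\circ \gamma_{Fa} \jdeq HF(f)\circ (\gamma F)_{a}. \]
The case of $K\gamma$ instead uses the functoriality of $K$ applied to the naturality squares of $\gamma$.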
\begin{lem}\label{ct:interchange}
For functors $F,G:A\to B$ and $H,K:B\to C$ and natural transformations $\gamma:F\to G$ and $\delta:H\to K$, we have
\[\id{(\delta G)(H\gamma)}{(K\gamma)(\delta F)}.\]
\end{lem}
\begin{proof}
It suffices to check componentwise: at $a:A$ we have
\begin{align*}
((\delta G)(H\gamma))_a
&\jdeq (\delta G)_{a}(H\gamma)_a\\
&\jdeq \delta_{Ga}\circ H(\gamma_a)\\
&= K(\gamma_a) \circ \delta_{Fa} \hspace{2cm}\text{(by naturality of $\delta$)}\\
&\jdeq (K \gamma)_a\circ (\delta F)_a\\
&\jdeq ((K \gamma)(\delta F))_a.\qedhere
\end{align*}
\end{proof}
Classically, one defines the ``horizontal composite'' of $\gamma:F\to G$ and $\delta:H\to K$ to be the common value of ${(\delta G)(H\gamma)}$ and ${(K\gamma)(\delta F)}$.
We will refrain from doing this, because while equal, these two transformations are not \emph{definitionally} equal.
This restraint also has the consequence that we can use the symbol $\circ$ (or juxtaposition) for all kinds of composition unambiguously: there is only one way to compose two natural transformations (as opposed to composing a natural transformation with a functor on either side).
\begin{lem}\label{ct:functor-assoc}
Composition of functors is associative: $\id{H(GF)}{(HG)F}$.
\end{lem}
\begin{proof}
Since composition of functions is associative, this follows immediately for the actions on objects and on homs.
And since hom-sets are sets, the rest of the data is automatic.
\end{proof}
The equality in \autoref{ct:functor-assoc} is likewise not definitional.
(Composition of functions is definitionally associative, but the axioms that go into a functor must also be composed, and this breaks definitional associativity.) For this reason, we need also to know about \emph{coherence} for associativity.
\begin{lem}\label{ct:pentagon}
\autoref{ct:functor-assoc} is coherent, i.e.\ the following pentagon of equalities commutes:
\[ \xymatrix{ & K(H(GF)) \ar[dl] \ar[dr]\\
(KH)(GF) \ar[d] && K((HG)F) \ar[d]\\
((KH)G)F && (K(HG))F \ar[ll] }
\]
\end{lem}
\begin{proof}
As in \autoref{ct:functor-assoc}, this is evident for the actions on objects, and the rest is automatic.
\end{proof}
We will henceforth abuse notation by writing $H\circ G\circ F$ or $HGF$ for either $H(GF)$ or $(HG)F$, transporting along \autoref{ct:functor-assoc} whenever necessary.
We have a similar coherence result for units.
\begin{lem}\label{ct:units}
For a functor $F:A\to B$, we have equalities $\id{(1_B\circ F)}{F}$ and $\id{(F\circ 1_A)}{F}$, such that given also $G:B\to C$, the following triangle of equalities commutes.
\[ \xymatrix{
G\circ (1_B \circ F) \ar[rr] \ar[dr] &&
(G\circ 1_B)\circ F \ar[dl] \\
& G \circ F.}
\]
\end{lem}
\section{Adjunctions}
\label{sec:adjunctions}
We take as our definition of adjunction the purely diagrammatic one in terms of a unit and counit natural transformation.
\begin{defn}\label{def:adjoint}
A functor $F:A\to B$ is a \textbf{left adjoint} if there exists
\begin{itemize}
\item A functor $G:B\to A$.
\item A natural transformation $\eta:1_A \to GF$.
\item A natural transformation $\epsilon:FG\to 1_B$.
\item $\id{(\epsilon F)(F\eta)}{1_F}$.
\item $\id{(G\epsilon)(\eta G)}{1_G}$.
\end{itemize}
\end{defn}
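Since identities of natural transformations may be checked componentwise, the two triangle identities say that for all $a:A$ and $b:B$ we have
\[ \epsilon_{Fa}\circ F(\eta_a) = 1_{Fa} \qquad\text{and}\qquad G(\epsilon_b)\circ \eta_{Gb} = 1_{Gb}; \]
it is these componentwise forms that are used in the proofs below.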
\begin{lem}\label{ct:adjprop}
If $A$ is a category (but $B$ may be only a precategory), then the type ``$F$ is a left adjoint'' is a mere proposition.
\end{lem}
\begin{proof}
Suppose given $(G,\eta,\epsilon)$ with the triangle identities and also $(G',\eta',\epsilon')$.
Define $\gamma:G\to G'$ to be $(G'\epsilon)(\eta' G)$, and $\delta:G'\to G$ to be $(G\epsilon')(\eta G')$.
Then
\begin{align*}
\delta\gamma &=
(G\epsilon')(\eta G')(G'\epsilon)(\eta'G)\\
&= (G\epsilon')(G F G'\epsilon)(\eta G' F G)(\eta'G)\\
&= (G\epsilon)(G\epsilon'FG)(G F \eta' G)(\eta G)\\
&= (G\epsilon)(\eta G)\\
&= 1_G
\end{align*}
using \autoref{ct:interchange} and the triangle identities.
Similarly, we show $\id{\gamma\delta}{1_{G'}}$, so $\gamma$ is a natural isomorphism $G\cong G'$.
By \autoref{ct:functor-cat}, we have an identity $\id G {G'}$.
Now we need to know that when $\eta$ and $\epsilon$ are transported along this identity, they become equal to $\eta'$ and $\epsilon'$.
By \autoref{ct:idtoiso-trans}, this transport is given by composing with $\gamma$ or $\delta$ as appropriate.
For $\eta$, this yields
\begin{equation*}
(G'\epsilon F)(\eta'GF)\eta
= (G'\epsilon F)(G'F\eta)\eta'
= \eta'
\end{equation*}
using \autoref{ct:interchange} and the triangle identity.
The case of $\epsilon$ is similar.
Finally, the triangle identities transport correctly automatically, since hom-sets are sets.
\end{proof}
In \S\ref{sec:yoneda} we will mention another way to prove \autoref{ct:adjprop}.
\section{Equivalences}
\label{sec:equivalences}
It is usual to define an equivalence of categories to be a functor $F:A\to B$ for which there exists a functor $G:B\to A$ and natural isomorphisms $F\circ G \cong 1_B$ and $G\circ F \cong 1_A$.
However, because of the ``proof-relevant'' or ``constructive'' nature of ``there exists'' (dependent sum types) in UF, this definition does not produce a well-behaved \emph{type of equivalences} between two categories.
The solution is not surprising to a category theorist: whenever equivalences are ill-behaved, it usually suffices to consider \emph{adjoint} equivalences instead.
(This is exactly the same problem and solution as is encountered in the definition of equivalence of \emph{types} in UF.)
\begin{defn}
A functor $F:A\to B$ is an \textbf{equivalence of (pre)categories} if it is a left adjoint for which $\eta$ and $\epsilon$ are isomorphisms.
We write $A\simeq B$ for the type of equivalences of categories from $A$ to $B$.
\end{defn}
By \autoref{ct:adjprop} and \autoref{ct:isoprop}, if $A$ is a category, then the type ``$F:A\to B$ is an equivalence of precategories'' is a mere proposition.
\begin{lem}\label{ct:adjointification}
If for $F:A\to B$ there exists $G:B\to A$ and isomorphisms $GF\cong 1_A$ and $FG\cong 1_B$, then $F$ is an equivalence of precategories.
\end{lem}
\begin{proof}
We can repeat the standard proof that any equivalence of categories gives rise to an adjoint equivalence.
First note that for any $a:A$ we have
\begin{equation}
\eta_{GFa} = GF(\eta_a).\label{eq:gfeta}
\end{equation}
This follows by cancelling $\eta_a$ in the naturality condition $\eta_{GFa} \circ \eta_a = GF(\eta_a) \circ \eta_a$.
Now, given $G$ and $\eta:1_A \cong GF$ and $\epsilon : FG \cong 1_B$, we define $\epsilon'$ by
\[ \epsilon'_b \defeq
\epsilon_b \circ
F(\eta_{Gb})^{-1} \circ
(\epsilon_{FGb})^{-1}.
\]
This is evidently a natural isomorphism.
Then we have
\begin{align*}
\epsilon'_{Fa} \circ F\eta_{a}
&= \epsilon_{Fa} \circ F(\eta_{GFa})^{-1} \circ (\epsilon_{FGFa})^{-1} \circ F\eta_a\\
&= \epsilon_{Fa} \circ FGF(\eta_{a})^{-1} \circ (\epsilon_{FGFa})^{-1} \circ F\eta_a\\
&= \epsilon_{Fa} \circ (\epsilon_{Fa})^{-1} \circ F(\eta_{a})^{-1} \circ F\eta_a\\
&= 1_{Fa}.
\end{align*}
using~\eqref{eq:gfeta} and naturality of $\epsilon$.
For the other identity $G(\epsilon'_b) \circ \eta_{Gb} = 1_{Gb}$, it suffices to show $G(\epsilon'_b) \circ GFG(\epsilon'_b) = \eta_{Gb}^{-1} \circ GFG(\epsilon'_b)$, since $GFG(\epsilon'_b)$ is an isomorphism.
But we have
\begin{align*}
\eta_{Gb}^{-1} \circ GFG(\epsilon'_b)
&= G(\epsilon'_b) \circ \eta_{GFGb}^{-1}\\
&= G(\epsilon'_b) \circ GF(\eta_{Gb})^{-1}\\
&= G(\epsilon'_b) \circ G(\epsilon'_{FGb})\\
&= G(\epsilon'_b) \circ GFG(\epsilon'_b)
\end{align*}
using naturality of $\eta$,~\eqref{eq:gfeta}, the previous identity, and naturality of $\epsilon'$.
\end{proof}
We now investigate some alternative definitions of equivalences of categories.
\begin{defn}
We say a functor $F:A\to B$ is \textbf{faithful} if for all $a,b:A$, the function
\[F_{a,b}:\hom_A(a,b) \to \hom_B(Fa,Fb)\]
is injective, and \textbf{full} if for all $a,b:A$ this function is surjective.
If it is both (hence each $F_{a,b}$ is an equivalence) we say $F$ is \textbf{fully faithful}.
\end{defn}
\begin{defn}
We say a functor $F:A\to B$ is \textbf{split essentially surjective} if for all $b:B$ there exists an $a:A$ such that $Fa\cong b$.
\end{defn}
The reason for the adjective \emph{split} is that because of the strong type-theoretic meaning of ``there exists'', such a functor comes with a function assigning a specified $a$ for every $b$.
This has the following advantage.
\begin{lem}\label{ct:ffeso}
For any precategories $A$ and $B$ and functor $F:A\to B$, the following types are equivalent.
\begin{enumerate}
\item $F$ is an equivalence of precategories.\label{item:ct:ffeso1}
\item $F$ is fully faithful and split essentially surjective.\label{item:ct:ffeso2}
\end{enumerate}
\end{lem}
\begin{proof}
Suppose $F$ is an equivalence of precategories, with $G,\eta,\epsilon$ specified.
Then we have the function
\begin{equation*}
\begin{array}{rcl}
\hom_B(Fa,Fb) &\to& \hom_A(a,b)\\
g &\mapsto& \inv{\eta_b}\circ G(g)\circ \eta_a.
\end{array}
\end{equation*}
For $f:\hom_A(a,b)$, we have
\[ \inv{\eta_{b}}\circ G(F(f))\circ \eta_{a} =
\inv{\eta_{b}} \circ \eta_{b} \circ f=
f
\]
while for $g:\hom_B(Fa,Fb)$ we have
\begin{align*}
F(\inv{\eta_b} \circ G(g)\circ\eta_a)
&= F(\inv{\eta_b})\circ F(G(g))\circ F(\eta_a)\\
&= \epsilon_{Fb}\circ F(G(g))\circ F(\eta_a)\\
&= g\circ\epsilon_{Fa}\circ F(\eta_a)\\
&= g
\end{align*}
using naturality of $\epsilon$, and the triangle identities twice.
Thus, $F_{a,b}$ is an equivalence, so $F$ is fully faithful.
Finally, for any $b:B$, we have $Gb:A$ and $\epsilon_b:FGb\cong b$.
On the other hand, suppose $F$ is fully faithful and split essentially surjective.
Define $G_0:B_0\to A_0$ by sending $b:B$ to the $a:A$ given by the specified essential splitting, and write $\epsilon_b$ for the likewise specified isomorphism $FGb\cong b$.
Now for any $g:\hom_B(b,b')$, define $G(g):\hom_A(Gb,Gb')$ to be the unique morphism such that $\id{F(G(g))}{\inv{(\epsilon_{b'})}\circ g \circ \epsilon_b }$ (which exists since $F$ is fully faithful).
Finally, for $a:A$ define $\eta_a:\hom_A(a,GFa)$ to be the unique morphism such that $\id{F\eta_a}{\inv{\epsilon_{Fa}}}$.
It is easy to verify that $G$ is a functor and that $(G,\eta,\epsilon)$ exhibit $F$ as an equivalence of precategories.
Now consider the composite~\ref{item:ct:ffeso1}$\to$\ref{item:ct:ffeso2}$\to$\ref{item:ct:ffeso1}.
We clearly recover the same function $G_0:B_0 \to A_0$.
For the action of $G$ on hom-sets, we must show that for $g:\hom_B(b,b')$, $G(g)$ is the (necessarily unique) morphism such that $F(G(g)) = \inv{(\epsilon_{b'})}\circ g \circ \epsilon_b$.
But this equation holds by the assumed naturality of $\epsilon$.
We also clearly recover $\epsilon$, while $\eta$ is uniquely characterized by $\id{F\eta_a}{\inv{\epsilon_{Fa}}}$ (which is one of the triangle identities assumed to hold in the structure of an equivalence of precategories).
Thus, this composite is equal to the identity.
Finally, consider the other composite~\ref{item:ct:ffeso2}$\to$\ref{item:ct:ffeso1}$\to$\ref{item:ct:ffeso2}.
Since being fully faithful is a mere proposition, it suffices to observe that we recover, for each $b:B$, the same $a:A$ and isomorphism $F a \cong b$.
But this is clear, since we used this function and isomorphism to define $G_0$ and $\epsilon$ in~\ref{item:ct:ffeso1}, which in turn are precisely what we used to recover~\ref{item:ct:ffeso2} again.
Thus, the composites in both directions are equal to identities, hence we have an equivalence \eqv{\ref{item:ct:ffeso1}}{\ref{item:ct:ffeso2}}.
\end{proof}
However, if $B$ is not a category, then neither type in \autoref{ct:ffeso} need be a mere proposition.
Moreover, classically, one usually defines ``essentially surjective'' without specifying the witnesses in a determinate way.
In UF, the appropriate version of this definition is the following.
\begin{defn}
A functor $F:A\to B$ is \textbf{essentially surjective} if for all $b:B$, there \emph{merely} exists an $a:A$ such that $Fa\cong b$.
We say $F$ is a \textbf{weak equivalence} if it is fully faithful and essentially surjective.
\end{defn}
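In symbols, the difference between the two notions is precisely a propositional truncation: split essential surjectivity asserts, for each $b:B$, an inhabitant of
\[ \sm{a:A} (Fa\cong b), \]
whereas essential surjectivity asserts only an inhabitant of its truncation $\big\|\sm{a:A} (Fa\cong b)\big\|$, which is a mere proposition.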
Being a weak equivalence is \emph{always} a mere proposition, since a function being an equivalence of types is such, and the propositional truncation is so by definition.
For categories, however, there is no difference between equivalences and weak ones.
\begin{lem}\label{ct:catweq}
If $F:A\to B$ is fully faithful and $A$ is a category, then for any $b:B$ the type $\sm{a:A} (Fa\cong b)$ is a mere proposition.
Hence if $A$ and $B$ are categories, then the types ``$F$ is an equivalence'' and ``$F$ is a weak equivalence'' are equivalent (and mere propositions).
\end{lem}
\begin{proof}
Suppose given $(a,f)$ and $(a',f')$ in $\sm{a:A} (Fa\cong b)$.
Then $\inv{f'}\circ f$ is an isomorphism $Fa \cong Fa'$.
Since $F$ is fully faithful, we have $g:a\cong a'$ with $Fg = \inv{f'}\circ f$.
And since $A$ is a category, we have $p:a=a'$ with $\ensuremath{\mathsf{idtoiso}}\xspace(p)=g$.
Now $Fg = \inv{f'}\circ f$ implies $\trans{(\map{(F_0)}{p})}{f} = f'$, hence (by the characterization of equalities in dependent sums) $(a,f)=(a',f')$.
Thus, for fully faithful functors whose domain is a category, essential surjectivity is equivalent to split essential surjectivity, and so being a weak equivalence is equivalent to being an equivalence.
\end{proof}
This is an important advantage of our category theory over set-based approaches.
As remarked in the introduction, with a purely set-based definition of category, the statement ``every fully faithful and essentially surjective functor is an equivalence of categories'' is equivalent to the axiom of choice (in the appropriate sense of UF).
Here we have it for free, as a category-theoretic version of the function comprehension principle.
We will see in \S\ref{sec:rezk} that this property moreover characterizes categories among precategories.
On the other hand, the following characterization of equivalences of categories is perhaps even more useful.
\begin{defn}\label{ct:isocat}
A functor $F:A\to B$ is an \textbf{isomorphism of (pre)categories} if $F$ is fully faithful and $F_0:A_0\to B_0$ is an equivalence of types.
\end{defn}
Note that being an isomorphism of precategories is always a mere proposition.
Let $A\cong B$ denote the type of isomorphisms of (pre)categories from $A$ to $B$.
\begin{lem}\label{ct:isoprecat}
For precategories $A$ and $B$ and $F:A\to B$, the following types are equivalent.
\begin{enumerate}
\item $F$ is an isomorphism of precategories.\label{item:ct:ipc1}
\item There exist $G:B\to A$ and $\eta:1_A = GF$ and $\epsilon:FG=1_B$ such that\label{item:ct:ipc2}
\begin{equation}
\map{(F\circ -)}{\eta} = \map{(-\circ F)}{\opp\epsilon}.\label{eq:ct:isoprecattri}
\end{equation}
\item There merely exist $G:B\to A$ and $\eta:1_A = GF$ and $\epsilon:FG=1_B$.\label{item:ct:ipc3}
\end{enumerate}
\end{lem}
In~\eqref{eq:ct:isoprecattri}, $\map{(F\circ -)}{\eta}$ denotes application of the function $(F\circ -)$ (which goes from functors $A\to A$ to functors $A\to B$) to the equality $\eta$, and similarly for $\map{(-\circ F)}{\opp\epsilon}$.
Note that if $B_0$ is not a 1-type, then~\eqref{eq:ct:isoprecattri} may not be a mere proposition.
\begin{proof}
First note that since hom-sets are sets, equalities between equalities of functors are uniquely determined by their object-parts.
Thus, by function extensionality,~\eqref{eq:ct:isoprecattri} is equivalent to
\begin{equation}
\map{(F_0)}{\eta_0}_a = \opp{(\epsilon_0)}_{F_0 a}.\label{eq:ct:ipctri}
\end{equation}
for all $a:A_0$.
Note that this is precisely the coherence condition for $G_0$, $\eta_0$, and $\epsilon_0$ to be a proof that $F_0$ is an equivalence of types.
Now suppose~\ref{item:ct:ipc1}.
Let $G_0:B_0 \to A_0$ be the inverse of $F_0$, with $\eta_0: \idfunc[A_0] = G_0 F_0$ and $\epsilon_0:F_0G_0 = \idfunc[B_0]$ satisfying the triangle identity, which is precisely~\eqref{eq:ct:ipctri}.
Now define $G_{b,b'}:\hom_B(b,b') \to \hom_A(G_0b,G_0b')$ by
\[ G_{b,b'}(g) \defeq
\inv{(F_{G_0b,G_0b'})}\Big(\ensuremath{\mathsf{idtoiso}}\xspace(\opp{(\epsilon_0)}_{b'}) \circ g \circ \ensuremath{\mathsf{idtoiso}}\xspace((\epsilon_0)_b)\Big)
\]
(using the assumption that $F$ is fully faithful).
Since \ensuremath{\mathsf{idtoiso}}\xspace takes opposites to inverses and concatenation to composition, and $F$ is a functor, it follows that $G$ is a functor.
By definition, we have $(GF)_0 \jdeq G_0 F_0$, which is equal to $\idfunc[A_0]$ by $\eta_0$.
To obtain $1_A = GF$, we need to show that when transported along $\eta_0$, the identity function of $\hom_A(a,a')$ becomes equal to the composite $G_{Fa,Fa'} \circ F_{a,a'}$.
In other words, for any $f:\hom_A(a,a')$ we must have
\begin{multline*}
\ensuremath{\mathsf{idtoiso}}\xspace((\eta_0)_{a'}) \circ f \circ \ensuremath{\mathsf{idtoiso}}\xspace(\opp{(\eta_0)}_a)\\
= \inv{(F_{GFa,GFa'})}\Big(\ensuremath{\mathsf{idtoiso}}\xspace(\opp{(\epsilon_0)}_{Fa'})
\circ F_{a,a'}(f) \circ \ensuremath{\mathsf{idtoiso}}\xspace((\epsilon_0)_{Fa})\Big).
\end{multline*}
But this is equivalent to
\begin{multline*}
(F_{GFa,GFa'})\Big(\ensuremath{\mathsf{idtoiso}}\xspace((\eta_0)_{a'}) \circ f \circ \ensuremath{\mathsf{idtoiso}}\xspace(\opp{(\eta_0)}_a)\Big)\\
= \ensuremath{\mathsf{idtoiso}}\xspace(\opp{(\epsilon_0)}_{Fa'})
\circ F_{a,a'}(f) \circ \ensuremath{\mathsf{idtoiso}}\xspace((\epsilon_0)_{Fa}).
\end{multline*}
which follows from functoriality of $F$, the fact that $F$ preserves \ensuremath{\mathsf{idtoiso}}\xspace, and~\eqref{eq:ct:ipctri}.
Thus we have $\eta:1_A = GF$.
On the other side, we have $(FG)_0\jdeq F_0 G_0$, which is equal to $\idfunc[B_0]$ by $\epsilon_0$.
To obtain $FG=1_B$, we need to show that when transported along $\epsilon_0$, the identity function of $\hom_B(b,b')$ becomes equal to the composite $F_{Gb,Gb'} \circ G_{b,b'}$.
That is, for any $g:\hom_B(b,b')$ we must have
\begin{multline*}
F_{Gb,Gb'}\Big(\inv{(F_{Gb,Gb'})}\Big(\ensuremath{\mathsf{idtoiso}}\xspace(\opp{(\epsilon_0)}_{b'}) \circ g \circ \ensuremath{\mathsf{idtoiso}}\xspace((\epsilon_0)_b)\Big)\Big)\\
= \ensuremath{\mathsf{idtoiso}}\xspace((\opp{\epsilon_0})_{b'}) \circ g \circ \ensuremath{\mathsf{idtoiso}}\xspace((\epsilon_0)_b).
\end{multline*}
But this is just the fact that $\inv{(F_{Gb,Gb'})}$ is the inverse of $F_{Gb,Gb'}$.
And we have remarked that~\eqref{eq:ct:isoprecattri} is equivalent to~\eqref{eq:ct:ipctri}, so~\ref{item:ct:ipc2} holds.
Conversely, suppose given~\ref{item:ct:ipc2}; then the object-parts of $G$, $\eta$, and $\epsilon$ together with~\eqref{eq:ct:ipctri} show that $F_0$ is an equivalence of types.
And for $a,a':A_0$, we define $\overline{G}_{a,a'}: \hom_B(Fa,Fa') \to \hom_A(a,a')$ by
\begin{equation}
\overline{G}_{a,a'}(g) \defeq \ensuremath{\mathsf{idtoiso}}\xspace(\opp{\eta})_{a'} \circ G(g) \circ \ensuremath{\mathsf{idtoiso}}\xspace(\eta)_a.\label{eq:ct:gbar}
\end{equation}
By naturality of $\ensuremath{\mathsf{idtoiso}}\xspace(\eta)$, for any $f:\hom_A(a,a')$ we have
\begin{align*}
\overline{G}_{a,a'}(F_{a,a'}(f))
&= \ensuremath{\mathsf{idtoiso}}\xspace(\opp{\eta})_{a'} \circ G(F(f)) \circ \ensuremath{\mathsf{idtoiso}}\xspace(\eta)_a\\
&= \ensuremath{\mathsf{idtoiso}}\xspace(\opp{\eta})_{a'} \circ \ensuremath{\mathsf{idtoiso}}\xspace(\eta)_{a'} \circ f \\
&= f.
\end{align*}
On the other hand, for $g:\hom_B(Fa,Fa')$ we have
\begin{align*}
F_{a,a'}(\overline{G}_{a,a'}(g))
&= F(\ensuremath{\mathsf{idtoiso}}\xspace(\opp{\eta})_{a'}) \circ F(G(g)) \circ F(\ensuremath{\mathsf{idtoiso}}\xspace(\eta)_a)\\
&= \ensuremath{\mathsf{idtoiso}}\xspace(\epsilon)_{Fa'}
\circ F(G(g))
\circ \ensuremath{\mathsf{idtoiso}}\xspace(\opp{\epsilon})_{Fa}\\
&= \ensuremath{\mathsf{idtoiso}}\xspace(\epsilon)_{Fa'}
\circ \ensuremath{\mathsf{idtoiso}}\xspace(\opp{\epsilon})_{Fa'}
\circ g\\
&= g.
\end{align*}
(There are lemmas needed here regarding the compatibility between \ensuremath{\mathsf{idtoiso}}\xspace and whiskering, which we leave to the reader to state and prove.)
Thus, $F_{a,a'}$ is an equivalence, so $F$ is fully faithful; i.e.~\ref{item:ct:ipc1} holds.
Now the composite~\ref{item:ct:ipc1}$\to$\ref{item:ct:ipc2}$\to$\ref{item:ct:ipc1} is equal to the identity since~\ref{item:ct:ipc1} is a mere proposition.
On the other side, tracing through the above constructions we see that the composite~\ref{item:ct:ipc2}$\to$\ref{item:ct:ipc1}$\to$\ref{item:ct:ipc2} essentially preserves the object-parts $G_0$, $\eta_0$, $\epsilon_0$, and the object-part of~\eqref{eq:ct:isoprecattri}.
And in the latter three cases, the object-part is all there is, since hom-sets are sets.
Thus, it suffices to show that we recover the action of $G$ on hom-sets.
In other words, we must show that if $g:\hom_B(b,b')$, then
\[ G_{b,b'}(g) =
\overline{G}_{G_0b,G_0b'}\Big(\ensuremath{\mathsf{idtoiso}}\xspace(\opp{(\epsilon_0)}_{b'}) \circ g \circ \ensuremath{\mathsf{idtoiso}}\xspace((\epsilon_0)_b)\Big)
\]
where $\overline{G}$ is defined by~\eqref{eq:ct:gbar}.
However, this follows from functoriality of $G$ and the \emph{other} triangle identity, which is equivalent to~\eqref{eq:ct:ipctri}.
Now since~\ref{item:ct:ipc1} is a mere proposition, so is~\ref{item:ct:ipc2}, so it suffices to show that they are logically equivalent to~\ref{item:ct:ipc3}.
Of course,~\ref{item:ct:ipc2}$\to$\ref{item:ct:ipc3}, so let us assume~\ref{item:ct:ipc3}.
Since~\ref{item:ct:ipc1} is a mere proposition, we may assume given $G$, $\eta$, and $\epsilon$.
Then $G_0$ along with $\eta$ and $\epsilon$ imply that $F_0$ is an equivalence.
Moreover, we also have natural isomorphisms $\ensuremath{\mathsf{idtoiso}}\xspace(\eta):1_A\cong GF$ and $\ensuremath{\mathsf{idtoiso}}\xspace(\epsilon):FG\cong 1_B$, so by \autoref{ct:adjointification}, $F$ is an equivalence of precategories, and in particular fully faithful.
\end{proof}
From \autoref{ct:isoprecat}\ref{item:ct:ipc2} and $\ensuremath{\mathsf{idtoiso}}\xspace$ in functor categories, we conclude immediately that any isomorphism of precategories is an equivalence.
For precategories, the converse can fail.
\begin{eg}\label{ct:chaotic}
Let $X$ be a type and $x_0:X$ an element, and let $X_{\mathrm{ch}}$ denote the \emph{chaotic} or \emph{indiscrete} precategory on $X$.
By definition, we have $(X_{\mathrm{ch}})_0\defeq X$, and $\hom_{X_{\mathrm{ch}}}(x,x') = 1$ for all $x,x'$.
Then the unique functor $X_{\mathrm{ch}}\to 1$ is an equivalence of precategories, but not an isomorphism unless $X$ is contractible.
This example also shows that a precategory can be equivalent to a category without itself being a category.
Of course, if a precategory is \emph{isomorphic} to a category, then it must itself be a category.
\end{eg}
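Unwinding the definitions in this example: every hom-set of $X_{\mathrm{ch}}$ is contractible, so every morphism is invertible and for all $x,x':X$ we have
\[ (x\cong x') \simeq \hom_{X_{\mathrm{ch}}}(x,x') \simeq 1. \]
Hence $X_{\mathrm{ch}}$ is a category just when each identity type $\id{x}{x'}$ is contractible, i.e.\ when $X$ is a mere proposition, while the functor $X_{\mathrm{ch}}\to 1$ is an isomorphism just when $X$ itself is contractible.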
However, for categories, the notions of equivalence and isomorphism coincide.
\begin{lem}\label{ct:eqv-levelwise}
For categories $A$ and $B$, a functor $F:A\to B$ is an equivalence of categories if and only if it is an isomorphism of categories.
\end{lem}
\begin{proof}
Since both are mere propositions, it suffices to show they are logically equivalent.
So first suppose $F$ is an equivalence of categories, with $(G,\eta,\epsilon)$ given.
We have already seen that $F$ is fully faithful.
By \autoref{ct:functor-cat}, the natural isomorphisms $\eta$ and $\epsilon$ yield identities $\id{1_A}{GF}$ and $\id{FG}{1_B}$, hence in particular identities $\id{\idfunc[A_0]}{G_0\circ F_0}$ and $\id{F_0\circ G_0}{\idfunc[B_0]}$.
Thus, $F_0$ is an equivalence of types.
Conversely, suppose $F$ is fully faithful and $F_0$ is an equivalence of types, with inverse $G_0$, say.
Then for each $b:B$ we have $G_0 b:A$ and an identity $\id{FGb}{b}$, hence an isomorphism $FGb\cong b$.
Thus, by \autoref{ct:ffeso}, $F$ is an equivalence of categories.
\end{proof}
Of course, there is yet a third notion of sameness for (pre)categories: equality.
However, the univalence axiom implies that it coincides with isomorphism.
\begin{lem}\label{ct:cat-eq-iso}
If $A$ and $B$ are precategories, then the function
\[(\id A B) \to (A\cong B)\]
(defined by induction from the identity functor) is an equivalence of types.
\end{lem}
\begin{proof}
As usual for dependent sum types, to give an element of $\id A B$ is equivalent to giving
\begin{itemize}
\item an identity $P_0:\id{A_0}{B_0}$,
\item for each $a,b:A_0$, an identity
\[P_{a,b}:\id{\hom_A(a,b)}{\hom_B(\trans {P_0} a,\trans {P_0} b)},\]
\item identities $\id{\trans {(P_{a,a})} {1_a}}{1_{\trans {P_0} a}}$ and $\id{\trans {(P_{a,c})} {gf}}{\trans {(P_{b,c})} g \circ \trans {(P_{a,b})} f}$.
\end{itemize}
(Again, we use the fact that the identity types of hom-sets are mere propositions.)
However, by univalence, this is equivalent to giving
\begin{itemize}
\item an equivalence of types $F_0:\eqv{A_0}{B_0}$,
\item for each $a,b:A_0$, an equivalence of types
\[F_{a,b}:\eqv{\hom_A(a,b)}{\hom_B(F_0 (a),F_0 (b))},\]
\item and identities $\id{F_{a,a}(1_a)}{1_{F_0 (a)}}$ and $\id{F_{a,c}(gf)}{F_{b,c} (g)\circ F_{a,b} (f)}$.
\end{itemize}
But this consists exactly of a functor $F:A\to B$ that is an isomorphism of categories.
And by induction on identity, this equivalence $\eqv{(\id A B)}{(A\cong B)}$ is equal to the function obtained by induction.
\end{proof}
Thus, for categories, equality also coincides with equivalence.
We can interpret this as follows: define a ``pre-2-category'' to have a type of objects equipped with hom-precategories, composition functors, and so on.
Then categories, functors, and natural transformations form a pre-2-category whose hom-precategories are categories (this is \autoref{ct:functor-cat}), and \autoref{ct:cat-eq-iso} is a categorified version of the saturation property.
It is consistent to use the word \emph{2-category} for a pre-2-category satisfying both of these conditions.
The following theorem was conjectured by Hofmann and Streicher~\cite{hs:gpd-typethy}.
\begin{thm}\label{ct:cat-2cat}
If $A$ and $B$ are categories, then the function
\[(\id A B) \to (A\simeq B)\]
(defined by induction from the identity functor) is an equivalence of types.
\end{thm}
\begin{proof}
By \autoref{ct:cat-eq-iso} and \autoref{ct:eqv-levelwise}.
\end{proof}
As a consequence, the type of categories is a 2-type.
For since $A\simeq B$ is a subtype of the type of functors from $A$ to $B$, which are the objects of a category, it is a 1-type; hence the identity types $\id A B$ are also 1-types.
\section{The Yoneda lemma}
\label{sec:yoneda}
In this section we fix a particular universe \ensuremath{\mathsf{Type}}\xspace, and write \set for the type of sets in that universe and \ensuremath{\underline{\set}}\xspace for the category whose objects are sets in that universe and whose morphisms are functions between them.
Of course, \set and \ensuremath{\underline{\set}}\xspace do not themselves lie in the universe \ensuremath{\mathsf{Type}}\xspace, but rather in some higher universe.
Define a precategory to be \emph{locally small} if its hom-sets lie in our fixed universe \ensuremath{\mathsf{Type}}\xspace.
We now show that every locally small precategory has a \ensuremath{\underline{\set}}\xspace-valued hom-functor.
First we need to define opposites and products of (pre)categories.
\begin{defn}
For a precategory $A$, its \textbf{opposite} $A^{\textrm{op}}$ is a precategory with the same type of objects, with $\hom_{A^{\textrm{op}}}(a,b) \defeq \hom_A(b,a)$, and with identities and composition inherited from $A$.
\end{defn}
\begin{defn}
For precategories $A$ and $B$, their \textbf{product} $A\times B$ is a precategory with $(A\times B)_0 \defeq A_0 \times B_0$ and
\[\hom_{A\times B}((a,b),(a',b')) \defeq \hom_A(a,a') \times \hom_B(b,b').\]
Identities are defined by $1_{(a,b)}\defeq (1_a,1_b)$ and composition by $(g,g')(f,f') \defeq ((gf),(g'f'))$.
\end{defn}
\begin{lem}\label{ct:functorexpadj}
For precategories $A,B,C$, the following types are equivalent.
\begin{enumerate}
\item Functors $A\times B\to C$.
\item Functors $A\to C^B$.
\end{enumerate}
\end{lem}
\begin{proof}
Given $F:A\times B\to C$, for any $a:A$ we obviously have a functor $F_a : B\to C$.
This gives a function $A_0 \to (C^B)_0$.
Next, for any $f:\hom_A(a,a')$, we have for any $b:B$ the morphism $F_{(a,b),(a',b)}(f,1_b):F_a(b) \to F_{a'}(b)$.
These are the components of a natural transformation $F_a \to F_{a'}$.
Functoriality in $a$ is easy to check, so we have a functor $\widehat{F}:A\to C^B$.
Conversely, suppose given $G:A\to C^B$.
Then for any $a:A$ and $b:B$ we have the object $G(a)(b):C$, giving a function $A_0 \times B_0 \to C_0$.
And for $f:\hom_A(a,a')$ and $g:\hom_B(b,b')$, we have the morphism
\begin{equation*}
G(a')_{b,b'}(g)\circ G_{a,a'}(f)_b = G_{a,a'}(f)_{b'} \circ G(a)_{b,b'}(g)
\end{equation*}
in $\hom_C(G(a)(b), G(a')(b'))$.
Functoriality is again easy to check, so we have a functor $\check{G}:A\times B \to C$.
Finally, it is also clear that these operations are inverses.
\end{proof}
Now for any locally small precategory $A$, we have a hom-functor
\[\hom_A : A^{\textrm{op}} \times A \to \ensuremath{\underline{\set}}\xspace.\]
It takes a pair $(a,b): (A^{\textrm{op}})_0 \times A_0 \jdeq A_0 \times A_0$ to the set $\hom_A(a,b)$.
For a morphism $(f,f') : \hom_{A^{\textrm{op}}\times A}((a,b),(a',b'))$, by definition we have $f:\hom_A(a',a)$ and $f':\hom_A(b,b')$, so we can define
\begin{align*}
(\hom_A)_{(a,b),(a',b')}(f,f')
&\defeq (g \mapsto (f'gf))\\
&: \hom_A(a,b) \to \hom_A(a',b').
\end{align*}
Functoriality is easy to check.
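For instance, suppressing the subscripts on the hom-functor's action, for composable pairs $(f_1,f_1'):(a,b)\to(a',b')$ and $(f_2,f_2'):(a',b')\to(a'',b'')$ in $A^{\textrm{op}}\times A$ (so that $f_1:\hom_A(a',a)$ and $f_2:\hom_A(a'',a')$), we have for any $g:\hom_A(a,b)$
\begin{align*}
\hom_A\big((f_2,f_2')\circ (f_1,f_1')\big)(g)
&\jdeq (f_2'\circ f_1')\circ g\circ (f_1\circ f_2)\\
&= f_2'\circ (f_1'\circ g\circ f_1)\circ f_2\\
&\jdeq \big(\hom_A(f_2,f_2')\circ \hom_A(f_1,f_1')\big)(g)
\end{align*}
by associativity, recalling that composition in the first variable takes place in $A^{\textrm{op}}$.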
By \autoref{ct:functorexpadj}, therefore, we have an induced functor $\ensuremath{\mathbf{y}}\xspace:A\to \ensuremath{\underline{\set}}\xspace^{A^{\textrm{op}}}$, which we call the \textbf{Yoneda embedding}.
As usual, of course, $\ensuremath{\underline{\set}}\xspace^{A^{\textrm{op}}}$ may not be locally small unless $A$ is small (i.e.\ unless $A_0$ lies in our fixed universe \ensuremath{\mathsf{Type}}\xspace).
\begin{thm}[The Yoneda lemma]\label{ct:yoneda}
For any locally small precategory $A$, any $a:A$, and any functor $F:\ensuremath{\underline{\set}}\xspace^{A^{\textrm{op}}}$, we have an isomorphism
\begin{equation}\label{eq:yoneda}
\hom_{\ensuremath{\underline{\set}}\xspace^{A^{\textrm{op}}}}(\ensuremath{\mathbf{y}}\xspace a, F) \cong Fa.
\end{equation}
Moreover, this is natural in both $a$ and $F$.
\end{thm}
\begin{proof}
Given a natural transformation $\alpha:\ensuremath{\mathbf{y}}\xspace a \to F$, we can consider the component $\alpha_a : \ensuremath{\mathbf{y}}\xspace a(a) \to F a$.
Since $\ensuremath{\mathbf{y}}\xspace a(a)\jdeq \hom_A(a,a)$, we have $1_a : \ensuremath{\mathbf{y}}\xspace a(a)$, so that $\alpha_a(1_a) : F a$.
This gives a function $(\alpha \mapsto \alpha_a(1_a))$ from left to right in~\eqref{eq:yoneda}.
In the other direction, given $x:F a$, we define $\alpha:\ensuremath{\mathbf{y}}\xspace a \to F$ by
\[\alpha_{a'}(f) \defeq F_{a',a}(f)(x). \]
Naturality is easy to check, so this gives a function from right to left in~\eqref{eq:yoneda}.
To show that these are inverses, first suppose given $x:F a$.
Then with $\alpha$ defined as above, we have $\alpha_a(1_a) = F_{a,a}(1_a)(x) = 1_{F a}(x) = x$.
On the other hand, if we suppose given $\alpha:\ensuremath{\mathbf{y}}\xspace a \to F$ and define $x$ as above, then for any $f:\hom_A(a',a)$ we have
\begin{align*}
\alpha_{a'}(f)
&= \alpha_{a'} (\ensuremath{\mathbf{y}}\xspace a_{a',a}(f)(1_a))\\
&= (\alpha_{a'}\circ \ensuremath{\mathbf{y}}\xspace a_{a',a}(f))(1_a)\\
&= (F_{a',a}(f)\circ \alpha_a)(1_a)\\
&= F_{a',a}(f)(\alpha_a(1_a))\\
&= F_{a',a}(f)(x).
\end{align*}
Thus, both composites are equal to identities.
We leave the proof of naturality to the reader.
\end{proof}
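To make the bijection concrete, the following self-contained sketch (in Python; all names are ad hoc, and it is of course no substitute for the type-theoretic proof above) enumerates the natural transformations $\ensuremath{\mathbf{y}}\xspace a \to F$ for a three-object poset viewed as a category, and checks that $\alpha \mapsto \alpha_a(1_a)$ is a bijection onto the elements of $F(a)$.

```python
from itertools import product

# Objects of a finite poset category: an arrow x -> y exists iff x <= y.
objs = [0, 1, 2]
def hom(x, y):                       # hom-sets: at most one arrow in a poset
    return [(x, y)] if x <= y else []

a = 2                                # representing object; (y a)(b) = hom(b, a)
ya = {b: hom(b, a) for b in objs}
assert all(len(ya[b]) == 1 for b in objs)   # every b maps into a here

# A presheaf F: a set F(x) per object and, for x <= y, a restriction map
# F(y) -> F(x); matching elements by position is functorial because all
# the sets below have the same size.
F = {0: ['p', 'q'], 1: ['u', 'v'], 2: ['s', 't']}
def restrict(x, y):                  # F applied to the arrow x -> y
    return lambda e: F[x][F[y].index(e)]

# A candidate alpha assigns to each b the value of alpha_b on the unique
# arrow b -> a; naturality demands alpha_b = F(b <= b')(alpha_{b'}).
nats = [dict(zip(objs, choice)) for choice in product(F[0], F[1], F[2])
        if all(choice[b] == restrict(b, b2)(choice[b2])
               for b in objs for b2 in objs if b <= b2)]

# The Yoneda map alpha |-> alpha_a(1_a) is here simply alpha[a],
# and it is a bijection onto F(a).
assert len(nats) == len(F[a])
assert sorted(alpha[a] for alpha in nats) == sorted(F[a])
```

Only the families determined by their value at $1_a$ survive the naturality filter, matching the uniqueness argument in the proof.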
\begin{cor}\label{ct:yoneda-embedding}
The Yoneda embedding $\ensuremath{\mathbf{y}}\xspace :A\to \ensuremath{\underline{\set}}\xspace^{A^{\textrm{op}}}$ is fully faithful.
\end{cor}
\begin{proof}
By \autoref{ct:yoneda}, we have
\[ \hom_{\ensuremath{\underline{\set}}\xspace^{A^{\textrm{op}}}}(\ensuremath{\mathbf{y}}\xspace a, \ensuremath{\mathbf{y}}\xspace b) \cong \ensuremath{\mathbf{y}}\xspace b(a) \jdeq \hom_A(a,b). \]
It is easy to check that this isomorphism is in fact the action of \ensuremath{\mathbf{y}}\xspace on hom-sets.
\end{proof}
\begin{cor}\label{ct:yoneda-mono}
If $A$ is a category, then $\ensuremath{\mathbf{y}}\xspace_0 : A_0 \to (\ensuremath{\underline{\set}}\xspace^{A^{\textrm{op}}})_0$ is a monomorphism.
In particular, if $\ensuremath{\mathbf{y}}\xspace a = \ensuremath{\mathbf{y}}\xspace b$, then $a=b$.
\end{cor}
\begin{proof}
By \autoref{ct:yoneda-embedding}, \ensuremath{\mathbf{y}}\xspace induces an isomorphism on sets of isomorphisms.
But as $A$ and $\ensuremath{\underline{\set}}\xspace^{A^{\textrm{op}}}$ are categories and \ensuremath{\mathbf{y}}\xspace is a functor, this is equivalently an isomorphism on identity types, which is the definition of being mono.
\end{proof}
\begin{defn}\label{ct:representable}
A functor $F:\ensuremath{\underline{\set}}\xspace^{A^{\textrm{op}}}$ is said to be \textbf{representable} if there exists $a:A$ and an isomorphism $\ensuremath{\mathbf{y}}\xspace a \cong F$.
\end{defn}
\begin{thm}\label{ct:representable-prop}
If $A$ is a category, then the type ``$F$ is representable'' is a mere proposition.
\end{thm}
\begin{proof}
By definition ``$F$ is representable'' is just the fiber of $\ensuremath{\mathbf{y}}\xspace_0$ over $F$.
Since $\ensuremath{\mathbf{y}}\xspace_0$ is mono by \autoref{ct:yoneda-mono}, this fiber is a mere proposition.
\end{proof}
In particular, in a category, any two representations of the same functor are equal.
We could use this to give a different proof of \autoref{ct:adjprop} by characterizing adjunctions in terms of representability.
\section{The Rezk completion}
\label{sec:rezk}
In this section we will give a universal way to replace a precategory by a category.
It relies on the fact that ``categories see weak equivalences as equivalences''.
To prove this latter fact, we begin with a couple of lemmas which are completely standard category theory, phrased carefully so as to make sure we are using the eliminator for the propositional truncation correctly.
One would have to be similarly careful in classical category theory if one wanted to avoid the axiom of choice: any time we want to define a function, we need to characterize its values uniquely somehow.
\begin{lem}\label{lem:precomp-faithful}
If $A,B,C$ are precategories and $H:A\to B$ is an essentially surjective functor, then $(-\circ H):C^B \to C^A$ is faithful.
\end{lem}
\begin{proof}
Let $F,G:B\to C$, and $\gamma,\delta:F\to G$ be such that $\gamma H = \delta H$; we must show $\gamma=\delta$.
Thus let $b:B$; we want to show $\gamma_b=\delta_b$.
This is a mere proposition, so since $H$ is essentially surjective, we may assume given an $a:A$ and an isomorphism $f:Ha\cong b$.
But now we have
\[ \gamma_b = G(f) \circ \gamma_{Ha} \circ F(\inv{f})
= G(f) \circ \delta_{Ha} \circ F(\inv{f})
= \delta_b.\qedhere
\]
\end{proof}
\begin{lem}\label{lem:precomp-full}
If $A,B,C$ are precategories and $H:A\to B$ is essentially surjective and full, then $(-\circ H):C^B \to C^A$ is fully faithful.
\end{lem}
\begin{proof}
It remains to show fullness.
Thus, let $F,G:B\to C$ and $\gamma:FH \to GH$.
We claim that for any $b:B$, the type
\begin{equation}\label{eq:fullprop}
\sm{g:\hom_C(Fb,Gb)} {\textstyle\prod_{a:A}\prod_{f:Ha\cong b}} (\gamma_a = \inv{Gf}\circ g\circ Ff)
\end{equation}
is contractible.
Since contractibility is a mere property, and $H$ is essentially surjective, we may assume given $a_0:A$ and $h:Ha_0\cong b$.
Now take $g\defeq Gh \circ \gamma_{a_0} \circ \inv{Fh}$.
Then given any other $a:A$ and $f:Ha\cong b$, we must show $\gamma_a = \inv{Gf}\circ g\circ Ff$.
Since $H$ is full, there merely exists a morphism $k:\hom_A(a,a_0)$ such that $Hk = \inv{h}\circ f$.
And since our goal is a mere proposition, we may assume given some such $k$.
Then we have
\begin{align*}
\gamma_a &= \inv{GHk}\circ \gamma_{a_0} \circ FHk\\
&= \inv{Gf} \circ Gh \circ \gamma_{a_0} \circ \inv{Fh} \circ Ff\\
&= \inv{Gf}\circ g\circ Ff.
\end{align*}
Thus,~\eqref{eq:fullprop} is inhabited.
It remains to show it is a mere proposition.
Let $g,g':\hom_C(Fb, Gb)$ be such that for all $a:A$ and $f:Ha\cong b$, we have both $(\gamma_a = \inv{Gf}\circ g\circ Ff)$ and $(\gamma_a = \inv{Gf}\circ g'\circ Ff)$.
The dependent product types are mere propositions, so all we have to prove is $g=g'$.
But this is a mere proposition and $H$ is essentially surjective, so we may assume $a_0:A$ and $h:Ha_0\cong b$, in which case we have
\[ g = Gh \circ \gamma_{a_0} \circ \inv{Fh} = g'.\]
This proves that~\eqref{eq:fullprop} is contractible for all $b:B$.
Now we define $\delta:F\to G$ by taking $\delta_b$ to be the unique $g$ in~\eqref{eq:fullprop} for that $b$.
To see that this is natural, suppose given $f:\hom_B(b,b')$; we must show $Gf \circ \delta_b = \delta_{b'}\circ Ff$.
As before, we may assume $a:A$ and $h:Ha\cong b$, and likewise $a':A$ and $h':Ha'\cong b'$.
Since $H$ is full as well as essentially surjective, we may also assume $k:\hom_A(a,a')$ with $Hk = \inv{h'}\circ f\circ h$.
Since $\gamma$ is natural, $GHk\circ \gamma_a = \gamma_{a'} \circ FHk$.
Using the definition of $\delta$, we have
\begin{align*}
Gf \circ \delta_b
&= Gf \circ Gh \circ \gamma_a \circ \inv{Fh}\\
&= Gh' \circ GHk\circ \gamma_a \circ \inv{Fh}\\
&= Gh' \circ \gamma_{a'} \circ FHk \circ \inv{Fh}\\
&= Gh' \circ \gamma_{a'} \circ \inv{Fh'} \circ Ff\\
&= \delta_{b'} \circ Ff.
\end{align*}
Thus, $\delta$ is natural.
Finally, for any $a:A$, applying the definition of $\delta_{Ha}$ to $a$ and $1_a$, we obtain $\gamma_a = \delta_{Ha}$.
Hence, $\delta \circ H = \gamma$.
\end{proof}
The proof of the theorem itself follows almost exactly the same lines, with the saturation of $C$ inserted in one crucial step, which we have bolded below for emphasis.
This is the point at which we are trying to define a function into \emph{objects} without using choice, and so we must be careful about what it means for an object to be ``uniquely specified''.
In classical category theory, all one can say is that this object is specified up to unique isomorphism, but in set-theoretic foundations this is not a sufficient amount of uniqueness to give us a function without invoking AC.
In Univalent Foundations, however, if $C$ is a category, then isomorphism is equality, and we have the appropriate sort of uniqueness (namely, living in a contractible space).
\begin{thm}\label{ct:cat-weq-eq}
If $A,B$ are precategories, $C$ is a category, and $H:A\to B$ is a weak equivalence, then $(-\circ H):C^B \to C^A$ is an isomorphism.
\end{thm}
\begin{proof}
By \autoref{ct:functor-cat}, $C^B$ and $C^A$ are categories.
Thus, by \autoref{ct:eqv-levelwise} it will suffice to show that $(-\circ H)$ is an equivalence.
But since we know from the preceding two lemmas that it is fully faithful, by \autoref{ct:catweq} it will suffice to show that it is essentially surjective.
Thus, suppose $F:A\to C$; we want there to merely exist a $G:B\to C$ such that $GH\cong F$.
For each $b:B$, let $X_b$ be the type whose elements consist of:
\begin{enumerate}
\item An element $c:C$; and
\item For each $a:A$ and $h:Ha\cong b$, an isomorphism $k_{a,h}:Fa\cong c$; such that\label{item:eqvprop2}
\item For each $(a,h)$ and $(a',h')$ as in~\ref{item:eqvprop2} and each $f:\hom_A(a,a')$ such that $h'\circ Hf = h$, we have $k_{a',h'}\circ Ff = k_{a,h}$.\label{item:eqvprop3}
\end{enumerate}
We claim that for any $b:B$, the type $X_b$ is contractible.
As this is a mere proposition and $H$ is essentially surjective, we may assume given $a_0:A$ and $h_0:Ha_0 \cong b$.
Let $c^0\defeq Fa_0$.
Next, given $a:A$ and $h:Ha\cong b$, since $H$ is fully faithful there is a unique isomorphism $g_{a,h}:a\to a_0$ with $Hg_{a,h} = \inv{h_0}\circ h$; define $k^0_{a,h} \defeq Fg_{a,h}$.
Finally, if $h'\circ Hf = h$, then $\inv{h_0}\circ h'\circ Hf = \inv{h_0}\circ h$, hence $g_{a',h'} \circ f = g_{a,h}$ and thus $k^0_{a',h'}\circ Ff = k^0_{a,h}$.
Therefore, $X_b$ is inhabited.
Now suppose given another $(c^1,k^1): X_b$.
Then $k^1_{a_0,h_0}:c^0 \jdeq Fa_0 \cong c^1$.
\textbf{Since $C$ is a category, we have $p:c^0=c^1$ with $\ensuremath{\mathsf{idtoiso}}\xspace(p) = k^1_{a_0,h_0}$.}
And for any $a:A$ and $h:Ha\cong b$, by~\ref{item:eqvprop3} for $(c^1,k^1)$ with $f\defeq g_{a,h}$, we have
\[k^1_{a,h} = k^1_{a_0,h_0} \circ k^0_{a,h} = \trans{p}{k^0_{a,h}}.\]
This gives the requisite data for an equality $(c^0,k^0)=(c^1,k^1)$, completing the proof that $X_b$ is contractible.
Now since $X_b$ is contractible for each $b$, the type $\prod_{b:B} X_b$ is also contractible.
In particular, it is inhabited, so we have a function assigning to each $b:B$ a $c$ and a $k$.
Define $G_0(b)$ to be this $c$; this gives a function $G_0 :B_0 \to C_0$.
Next we need to define the action of $G$ on morphisms.
For each $b,b':B$ and $f:\hom_B(b,b')$, let $Y_f$ be the type whose elements consist of:
\begin{enumerate}[resume]
\item A morphism $g:\hom_C(Gb,Gb')$, such that
\item For each $a:A$ and $h:Ha\cong b$, and each $a':A$ and $h':Ha'\cong b'$, and any $\ell:\hom_A(a,a')$, we have\label{item:eqvprop5}
\[ (h' \circ H\ell = f \circ h)
\to
(k_{a',h'} \circ F\ell = g\circ k_{a,h}). \]
\end{enumerate}
We claim that for any $b,b'$ and $f$, the type $Y_f$ is contractible.
As this is a mere proposition, we may assume given $a_0:A$ and $h_0:Ha_0\cong b$, and each $a'_0:A$ and $h'_0:Ha'_0\cong b'$.
Then since $H$ is fully faithful, there is a unique $\ell_0:\hom_A(a_0,a_0')$ such that $h'_0 \circ H\ell_0 = f \circ h_0$.
Define $g_0 \defeq k_{a_0',h_0'} \circ F \ell_0 \circ \inv{(k_{a_0,h_0})}$.
Now for any $a,h,a',h'$, and $\ell$ such that $(h' \circ H\ell = f \circ h)$, we have $\inv{h}\circ h_0:Ha_0\cong Ha$, hence there is a unique $m:a_0\cong a$ with $Hm = \inv{h}\circ h_0$ and hence $h\circ Hm = h_0$.
Similarly, we have a unique $m':a_0'\cong a'$ with $h'\circ Hm' = h_0'$.
Now by~\ref{item:eqvprop3}, we have $k_{a,h}\circ Fm = k_{a_0,h_0}$ and $k_{a',h'}\circ Fm' = k_{a_0',h_0'}$.
We also have
\begin{align*}
Hm' \circ H\ell_0
&= \inv{(h')} \circ h_0' \circ H\ell_0\\
&= \inv{(h')} \circ f \circ h_0\\
&= \inv{(h')} \circ f \circ h \circ \inv{h} \circ h_0\\
&= H\ell \circ Hm
\end{align*}
and hence $m'\circ \ell_0 = \ell\circ m$ since $H$ is fully faithful.
Finally, we can compute
\begin{align*}
g_0 \circ k_{a,h}
&= k_{a_0',h_0'} \circ F \ell_0 \circ \inv{(k_{a_0,h_0})} \circ k_{a,h}\\
&= k_{a_0',h_0'} \circ F \ell_0 \circ \inv{Fm}\\
&= k_{a_0',h_0'} \circ \inv{(Fm')} \circ F\ell\\
&= k_{a',h'}\circ F\ell.
\end{align*}
This completes the proof that $Y_f$ is inhabited.
To show it is contractible, since hom-sets are sets, it thankfully suffices to take another $g_1:\hom_C(Gb,Gb')$ satisfying~\ref{item:eqvprop5} and show $g_0=g_1$.
However, we still have our specified $a_0,h_0,a_0',h_0',\ell_0$ around, and~\ref{item:eqvprop5} implies both $g_0$ and $g_1$ must be equal to $k_{a_0',h_0'} \circ F \ell_0 \circ \inv{(k_{a_0,h_0})}$.
This completes the proof that $Y_f$ is contractible for each $b,b':B$ and $f:\hom_B(b,b')$.
Therefore, there is a function assigning to each such $f$ its unique inhabitant; denote this function $G_{b,b'}:\hom_B(b,b') \to \hom_C(Gb,Gb')$.
The proof that $G$ is a functor is straightforward.
Finally, for any $a_0:A$, defining $c\defeq Fa_0$ and $k_{a,h}\defeq F g$, where $g:\hom_A(a,a_0)$ is the unique isomorphism with $Hg = h$, gives an element of $X_{Ha_0}$.
Thus, it is equal to the specified one; hence $GHa_0=Fa_0$.
Similarly, for $f:\hom_A(a_0,a_0')$ we can define an element of $Y_{Hf}$ by transporting along these equalities, which must therefore be equal to the specified one.
Hence, we have $GH=F$, and thus $GH\cong F$ as desired.
\end{proof}
Therefore, if a precategory $A$ admits a weak equivalence functor $A\to \widehat{A}$ where $\widehat{A}$ is a category, then that is its ``reflection'' into categories: any functor from $A$ into a category will factor essentially uniquely through $\widehat{A}$.
We now construct such a weak equivalence.
\begin{thm}\label{thm:rezk-completion}
For any precategory $A$, there is a category $\widehat A$ and a weak equivalence $A\to\widehat{A}$.
\end{thm}
\begin{proof}
The hom-sets of $A$ must lie in some universe \ensuremath{\mathsf{Type}}\xspace, so that $A$ is locally small with respect to that universe.
Write \ensuremath{\underline{\set}}\xspace for the category of sets in \ensuremath{\mathsf{Type}}\xspace, and let $\widehat{A}_0 \defeq \setof{ F:\ensuremath{\underline{\set}}\xspace^{A^{\textrm{op}}} | \bbrck{\sm{a:A} (\ensuremath{\mathbf{y}}\xspace a \cong F)}}$, with hom-sets inherited from $\ensuremath{\underline{\set}}\xspace^{A^{\textrm{op}}}$.
In other words, $\widehat{A}$ is the full subcategory of $\ensuremath{\underline{\set}}\xspace^{A^{\textrm{op}}}$ determined by the functors that are \emph{merely representable}.
Then the inclusion $\widehat{A} \to \ensuremath{\underline{\set}}\xspace^{A^{\textrm{op}}}$ is fully faithful and a monomorphism on objects.
Since $\ensuremath{\underline{\set}}\xspace^{A^{\textrm{op}}}$ is a category (by \autoref{ct:functor-cat}, since \ensuremath{\underline{\set}}\xspace is a category by univalence), $\widehat A$ is also a category.
Let $A\to\widehat A$ be the Yoneda embedding.
This is fully faithful by \autoref{ct:yoneda-embedding}, and essentially surjective by definition of $\widehat{A}_0$.
Thus it is a weak equivalence.
\end{proof}
\begin{rmk}
Note, however, that even if $A$ itself is a ``small category'' with respect to some universe \ensuremath{\mathsf{Type}}\xspace (that is, both $A_0$ and all its hom-sets lie in \ensuremath{\mathsf{Type}}\xspace), the category $\widehat A$ as we have constructed it will lie in the next higher universe.
One could imagine a ``resizing axiom'' that could deal with this.
It is also possible to give a direct construction of $\widehat A$ using higher inductive types~\cite{ls:hits}, which leaves its universe level unchanged; see~\cite[Chapter 9]{HoTTbook}.
\end{rmk}
We call the construction $A\mapsto \widehat A$ the \textbf{Rezk completion}, although as mentioned in the introduction, there is also an argument for calling it the \textbf{stack completion}.
We have seen that most precategories arising in practice are categories, since they are constructed from \ensuremath{\underline{\set}}\xspace, which is a category by the univalence axiom.
However, there are a few cases in which the Rezk completion is necessary to obtain a category.
\begin{eg}
Recall from \autoref{ct:fundgpd} that for any type $X$ there is a pregroupoid with $X$ as its type of objects and $\hom(x,y) \defeq \trunc0{x=y}$.
Its Rezk completion is the \emph{fundamental groupoid} of $X$.
Under the equivalence between groupoids and 1-types, we can identify this groupoid with the 1-truncation $\trunc1X$.
\end{eg}
\begin{eg}\label{ct:hocat}
Recall from \autoref{ct:hoprecat} that there is a precategory whose type of objects is \ensuremath{\mathsf{Type}}\xspace and with $\hom(X,Y) \defeq \trunc0{X\to Y}$.
Its Rezk completion may be called the \emph{homotopy category of types}.
Its type of objects can be identified with the 1-truncation of the universe, $\trunc1\ensuremath{\mathsf{Type}}\xspace$.
\end{eg}
Finally, the Rezk completion allows us to show that the notion of ``category'' is determined by the notion of ``weak equivalence of precategories''.
Thus, insofar as the latter is inevitable, so is the former.
\begin{thm}
A precategory $C$ is a category if and only if for every weak equivalence of precategories $H:A\to B$, the induced functor $(-\circ H):C^B \to C^A$ is an isomorphism of precategories.
\end{thm}
\begin{proof}
``Only if'' is \autoref{ct:cat-weq-eq}.
In the other direction, let $H$ be $I:A\to\widehat A$.
Then since $(-\circ I)_0$ is an equivalence, there exists $R:\widehat A\to A$ such that $RI=1_A$.
Hence $IRI=I$, but again since $(-\circ I)_0$ is an equivalence, this implies $IR =1_{\widehat A}$.
By \autoref{ct:isoprecat}\ref{item:ct:ipc3}, $I$ is an isomorphism of precategories.
But then since $\widehat A$ is a category, so is $A$.
\end{proof}
\input{formalization}
\section{Conclusions and further work}
\label{sec:conclusion}
We have presented a new foundation for category theory, based on the general system of Univalent Foundations, with the following advantages:
\begin{itemize}
\item All category-theoretic constructions and proofs are automatically invariant under isomorphism of objects and under equivalence of categories (when performed with saturated categories).
\item In the rare case when we want to treat categories less invariantly, there is a separate notion available to use (strict categories).
This allows both approaches to category theory to coexist simultaneously, with a type distinction making clear which one we are using at any given time.
\item There is a universal way to make a strict category (or, more generally, a precategory) into a saturated category, thereby passing to the invariant world in a very precise way.
In higher-topos-theoretic semantics, this operation corresponds to the natural and well-known notion of stack completion.
\item The basic theory has all been formalized in a computer proof assistant.
\end{itemize}
One obvious direction for future work is to push forward the development of basic category theory in this system.
Another is to move on to \emph{higher} category theory: a theory of pre-2-categories and saturated 2-categories, at least, should be within reach.
Ideally, we would like a full theory of $(\infty,1)$-categories, but it has proven difficult to formalize such infinite structures in currently available type theories.
\bibliographystyle{plain}
\section{Introduction}
\subsection{Aim of this work}
The aim of this work is to prove Schauder estimates at the boundary for sub-Laplacian type operators in Carnot groups.
As it is well known, Schauder estimates at the boundary in the Euclidean setting are based on two main ingredients. The first one, which is the core of the Schauder method, is the local reduction of general uniformly elliptic operators to the Laplace operator. The second one, which seems elementary in the Euclidean setting, is a reflection technique which reduces the boundary Schauder estimates to internal ones.
Unfortunately, this technique cannot be applied in the strongly anisotropic setting of a Carnot group, since a Laplace type operator
in this framework is not invariant with respect to reflection, nor can it be approximated by any invariant operator.
In the special case of the Heisenberg group, Schauder estimates are a classical result due to Jerison (see \cite{Jerison}),
but his technique, based on the standard Fourier transform, cannot be extended to general Lie groups, whose geometry is not related to the Fourier transform.
Since then, no new contribution has been made to the problem, which is still open,
although its solution would be necessary for the development
of the theory of nonlinear PDEs in this setting.
In this paper we introduce a completely different approach, which is new even in the Riemannian setting, and which allows one to build a Poisson kernel starting from the knowledge of a smooth fundamental solution of the problem on the whole space.
\subsection{Carnot groups}\label{introcarnot}
A Carnot group $\Gi$ can be identified with $\R^n$ with a polynomial group law $(\Gi,\cdot)$,
whose Lie algebra ${\mathfrak{g}}$ admits a step $\kappa$ stratification. Precisely
there exist linear subspaces $V^1,...,V^\kappa$ such that
\begin{equation}\label{strat}
\mathfrak{g}=V^1\oplus\cdots\oplus V^\kappa,\quad [V^1,V^{i-1}]=V^{i},{\textrm{ if }} 2\leq i\leq\kappa,
\quad [V^1,V^{\kappa}]=\{0\}.
\end{equation}
We will call the subspace $V^1$ the horizontal tangent bundle, and we will choose a basis for it, denoted
$\left\{X_1, \cdots, X_m\right\}$. By the stratification assumption on the Lie algebra,
this basis can be completed to a basis $\left\{X_1, \cdots, X_n\right\}$ of $\mathfrak{g}$
by adding suitable commutators of the $X_i$. On the vector space $V^1$ we define a Riemannian metric
which makes the vector fields $X_1, \cdots, X_m$ orthonormal.
Several equivalent left invariant distances
$d$ can be introduced on the whole space
by requiring that their restriction to $V^1$ be equivalent to the
fixed Riemannian metric
(see for example Nagel, Stein and Wainger in \cite{NSW}).
The subriemannian gradient of a regular function $f$ will be denoted $\nabla f=(X_1f, \cdots, X_m f)$ and $f$ will be called of class $C^1$ if this gradient is continuous
with respect to the distance $d$.
More generally, spaces of H\"older continuous functions $C^{k, \alpha}$ can be defined
in terms of this distance and this gradient. We will study here a subelliptic operator
expressed as follows:
\begin{equation}\label{laplacoperator}\Delta = \sum_{i=1}^m (X_i^2 + b_i X_i),\end{equation}
with regular coefficients $b_i$.
Operators of this type are hypoelliptic and have been deeply studied after the first works of Folland
and Stein \cite{FollandStein}, Rothschild and Stein \cite{RS}, Jerison and Sanchez-Calle \cite{JSC},
Fefferman and Sanchez-Calle \cite{FSC},
Kohn and Nirenberg \cite{KN}, and
Jerison \cite{Jerison, Jerison2} (see also \cite{BLU} for a recent monograph).
Their fundamental solution
$\Gamma_\Delta$ is of class $C^\infty$ far from the diagonal
and it can be estimated in term of the distance
as follows \begin{equation}\label{gammabehavior}\Gamma_\Delta(x,y)\approx \frac{1}{d^{Q-2}(x,y)},\end{equation}
for a suitable integer $Q$, called homogeneous dimension of the space (see \eqref{homodim} for a precise definition). A kernel with the behavior of $\Gamma_\Delta$ is called of local type 2.
In general we will say that a kernel $K$
is of local type $\lambda$ with respect to the distance $d$ if
for every open bounded set $V$ and
for every $p \geq 0$ there exists a positive constant $C_p$ such that, for every $x, y \in V$, with $x\not= y$
\begin{equation}
\label{e:sileva}
|X_{i_1}\cdots X_{i_p}K(x, y)|\leq C_p d(x, y)^{\lambda-p-2} \Gamma_\Delta(x,y).\end{equation}
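For instance, combining \eqref{e:sileva} with \eqref{gammabehavior}, a kernel $K$ of local type $2$, such as $\Gamma_\Delta$ itself, satisfies on $V$
\begin{equation*}
|K(x, y)|\leq C\, d(x,y)^{2-Q}, \qquad |X_i K(x, y)|\leq C\, d(x,y)^{1-Q}, \qquad |X_i X_j K(x, y)|\leq C\, d(x,y)^{-Q},
\end{equation*}
so that each horizontal derivative costs one power of the distance, while a larger type $\lambda$ corresponds to a less singular, hence more regularizing, kernel.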
A well-established theory of singular integrals in the H\"ormander setting (due to Folland and Stein
\cite{FollandStein}, Rothschild and Stein \cite{RS}, and Greiner and Stein \cite{GreinerStein}) allows one to prove interior Schauder estimates.
For more recent results we quote the
H\"older estimates by Citti \cite{C}, the Schauder estimates of Xu \cite{Xu} and Capogna
and Han \cite{CapognaHan} for uniformly subelliptic operators,
Bramanti and Brandolini \cite{BramantiBrandolini}
for heat-type operators and the results of Lunardi \cite{Lunardi}, Di
Francesco and Polidoro \cite{PolidoroDiFrancesco}, Gutierrez and Lanconelli \cite{GutierrezLanconelli}, Bramanti and Zhu \cite{bramantizhu} and Simon \cite{Simon}
for a large
class of operators. The problem at the boundary is completely different and largely unsolved.
\subsection{Schauder estimates at the boundary}
A surface $M$ in a Carnot group, smooth in the Euclidean sense,
can be locally expressed as the zero
level set of a function $f\in C^\infty$, but there can
be points of $M$ where the subriemannian gradient of $f$
vanishes. At these points, called characteristic points,
the geometry of the surface is not completely understood.
Far from characteristic points,
properties of regular surfaces have been largely studied starting
from the papers of Kohn and Nirenberg
in \cite{KN}, Jerison in \cite{Jerison} and more
recently by Franchi, Serapioni and Serra Cassano, \cite{FSSC, FSSC1}
(see also the references therein).
The stratification defined in (\ref{strat})
induces a stratification on the tangent plane of the manifold $M$. We will call ${\hat V}^1 = V^1 \cap TM$,
${\hat V}^2 = V^2 \cap TM$, $\cdots$, ${\hat V}^\kappa = V^\kappa \cap TM$. It is not restrictive to assume
that $X_1\in V^1$ is normal to ${\hat V}^1$ with respect to the metric fixed in $V^1$ so that we can denote by
$\{\hat X_i\}_{i=2,\cdots, m}$ a basis of ${\hat V}^1$.
We also require that the following condition holds:
\begin{equation}\label{assumption}\mathrm{Lie}({\hat V}^1) = TM.\end{equation}
Under this assumption the manifold $M$ has a H\"ormander structure,
and $\hat V^1$ inherits a metric from the immersion in $V^1$. Hence
a distance $\hat d$ and corresponding classes of H\"older continuous functions ${\hat C}^{k, \alpha}(M)$ are well defined. For every choice of regular coefficients $(b_i)_{i=2,\cdots, m}$,
a Laplace-type operator
\begin{equation}\label{laplacefundamental}\hat \Delta =
\sum_{i=2}^m \hat X_{i}^2 + \sum_{i=2}^m b_i \hat X_i
\end{equation}
is defined on $M$,
with fundamental solution
$\hat \Gamma_{\hat\Delta}.$
It has been proved by Kohn and Nirenberg in \cite{KN} that,
if $D$ is an open set with smooth boundary and $g$ is a smooth function defined on the boundary of $D$, then the problem
\begin{equation}\label{Poisson_problem}\Delta u=0 \ \text{in } D, \quad u=g \ \text{on }\partial D\end{equation}
has a unique solution, of class $C^{\infty}$ up to the boundary at non characteristic points. At the characteristic points very few results are known (see \cite{Jerison2}, already quoted, and \cite{CG98}, \cite{GV2000} and \cite{V}, where the existence of non tangential limits up to the boundary is established).
In this paper we prove the exact analogue of the classical Schauder estimates
at the boundary, providing estimates of the ${\hat C}^{2, \alpha}$ norm of the
solution in terms of the H\"older norm of the data.
Precisely our result can be stated as follows.
\begin{theorem} \label{c:schauderGroups}
Let $D \subset \Gi$ be a smooth, bounded domain and
assume that the vector fields
$\{X_i\}_{i=1, \cdots, m}$ satisfy the assumption \eqref{assumption}.
Denote by $u$ the unique
solution to
$$\Delta u=f\; \text{in}\ D, \quad u= g \text{ on }\, \partial D, $$
where $f \in C^\alpha(\bar D)$ and $g \in \hat C^{2, \alpha} (\partial D)$ and $\alpha>0$.
If $\bar x\in \partial D$ and $V$ is an open neighborhood of $\bar x$ without characteristic
points, for every $\phi\in C^\infty_0(V)$ we have
$\phi u \in C^{2, \alpha}(\bar D\cap V)$ and
\begin{equation}
\label{stime}
\|\phi u\|_{C^{2, \alpha}(\bar D\cap V)} \leq C (\|g\|_{\hat C^{2, \alpha} (\partial D)} + \|f\|_{C^\alpha(\bar D)}).\end{equation}
\end{theorem}
As we mentioned before, up to now subriemannian boundary
Schauder estimates are known only for the Heisenberg group (see \cite{Jerison}) and are based on the construction of a Poisson kernel.
If $D$ is an open bounded set, and
$V$ is a neighborhood of a non characteristic point $\overline x\in\partial D$,
we say that $P:C^{\infty}(\partial D\cap V)\rightarrow C^{\infty}(V\cap \overline{D})$ is a local Poisson operator
for the problem \eqref{Poisson_problem}
if, for every $g\in C^{\infty}(\partial D\cap V)$, the function $u=P(g)$
satisfies
$\Delta u=0$ in $D\cap V$ and $u(x)=g(x)$ for all $x\in \partial D\cap V$.
The construction of the Poisson kernel contained in \cite{Jerison} is based on
the standard Fourier transform and cannot be
directly repeated in general Lie groups.
General measure theory ensures the existence of a Poisson kernel under very weak assumptions on the vector fields
(see for example Lanconelli and Uguzzoni \cite{LAU}), but this
theory allows one to establish only $L^p$ regularity of the solution at the boundary.
A Poisson kernel has been built by
Ferrari and Franchi \cite{Ferrarifranchi} in the special case when the set $D$ is a half space of the form
$ \R^+\times\hat \Gi $. In this case, as well as in the Heisenberg groups considered in
\cite{Jerison}, assumption \eqref{assumption} is satisfied.
However, this assumption is verified by a
much larger class of Carnot groups, where the splitting of $\Gi$ is not possible. A simple example can be the following one:
\begin{equation}\label{campesempio}X_1 = \partial_1, \quad X_2 = \partial_2 + x_1^2 \partial_5 + x_3 \partial_4,
\quad X_3 = \partial_3 + x_4 \partial_5,
\end{equation}
with $M=\{x_1=0\}$.
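To see that \eqref{assumption} indeed holds in this example, one can compute the relevant commutators symbolically. The following sketch (Python with SymPy; an illustrative check with ad hoc names, not part of the proofs) verifies the H\"ormander condition for $X_1, X_2, X_3$ on $\R^5$, and then checks that the fields tangent to $M$, obtained by restricting the coefficients to $x_1=0$, generate the whole tangent space $TM$.

```python
import sympy as sp

x = sp.symbols('x1:6')                       # coordinates x1, ..., x5

def bracket(X, Y):
    # commutator of vector fields given as coefficient lists w.r.t. d_1,...,d_5
    return [sum(X[j] * sp.diff(Y[k], x[j]) - Y[j] * sp.diff(X[k], x[j])
                for j in range(5)) for k in range(5)]

def rank_at_origin(fields):
    subs0 = {v: 0 for v in x}
    return sp.Matrix([[sp.sympify(c).subs(subs0) for c in V]
                      for V in fields]).rank()

# the fields of the example
X1 = [1, 0, 0, 0, 0]
X2 = [0, 1, 0, x[2], x[0]**2]                # d_2 + x3 d_4 + x1^2 d_5
X3 = [0, 0, 1, 0, x[3]]                      # d_3 + x4 d_5

# Hormander condition on R^5: the fields and their brackets span R^5
brackets = [bracket(X2, X3), bracket(X1, bracket(X1, X2))]
assert rank_at_origin([X1, X2, X3] + brackets) == 5

# tangential fields on M = {x1 = 0} (coefficients restricted to x1 = 0)
X2h = [0, 1, 0, x[2], 0]                     # d_2 + x3 d_4
X3h = [0, 0, 1, 0, x[3]]                     # d_3 + x4 d_5
B = bracket(X2h, X3h)                        # = -d_4 + x3 d_5
hat = [X2h, X3h, B, bracket(X3h, B)]
# rank 4 = dim TM, so the tangential fields and their brackets span TM
assert rank_at_origin(hat) == 4
```

The restricted fields $\hat X_2, \hat X_3$ together with two brackets span the four-dimensional tangent space of $M$, even though no splitting $\Gi = \R\times\hat\Gi$ is available here.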
Our construction of the
Poisson operator is based on the knowledge of a smooth fundamental solution, its restriction to the boundary, and on the properties of singular integrals. Since our result is local, we can locally express the
boundary of $D$ as the graph of a smooth function $w$, and
perform a change of variables to reduce the boundary to a plane.
In the new coordinates the vector fields will explicitly depend on the
function $w$ defining the boundary, and will not be homogeneous in general.
For sub-Laplacian type operators associated to these vector fields we will obtain the following expression of the
Poisson kernel.
\begin{theorem}
\label{mainRn}
Let $D= \{(x_1,\x)\in \R \times \R^{n-1}: x_1>0\}\subset \Gi$
be a non
characteristic half space and let $g\in C^\infty(\partial D)$. Let $\bar x\in \partial D$,
let $V_0$ be a neighborhood of $\bar x$ in $\R^{n}$ and let
\begin{equation}K_1(g)(\hat y) :=\int_{\partial D \cap V_0} \Gamma_{\Delta}((0, \hat y), (0,\hat z)) \hat \Delta g(\hat z)
d\hat z.\end{equation} There exists
a lower order operator $R$ of type 3/2 with respect to the distance $\hat d$ defined on $\partial D$, such that
for every neighborhood $V$ of $\bar x$ in $\R^{n}$, $V\subset\subset V_0$,
the operator
\begin{equation}\label{poissonintro}P(g)(x) := \int_{\partial D \cap V_0} \Gamma_{\Delta}(x, (0,\hat y)) (K_1 + R)(g)(\hat y) d\hat y \end{equation}
is a Poisson kernel in $V$.
\end{theorem}
The representation \eqref{poissonintro}
and the properties of the fundamental solution
immediately ensure that $P(g)$ satisfies the equation in \eqref{Poisson_problem}.
In order to show that $P$ is a Poisson operator, we only have to
show that $P(g) = g$ on the boundary $\{x_1=0\}$.
Denoting by
$E_{\Gamma_{\Delta}(0, \cdot)}$ the operator associated to the
kernel $\Gamma_{\Delta}((0,\x), (0,\z))$,
this is equivalent to say that
$K_1 + R$ is the inverse of the operator $E_{\Gamma_{\Delta}(0, \cdot)}$.
Under the assumption \eqref{assumption} this is proved using
the fundamental solution $\hat \Gamma_{\hat\Delta}$ of the operator $\hat \Delta$ defined in
\eqref{laplacefundamental}. Indeed $\hat \Gamma_{\hat\Delta}$
satisfies the following approximate reproducing formula:
\begin{theorem}\label{teorema1} Let $D= \{(x_1,\x)\in \R \times \R^{n-1}: x_1>0\}\subset \Gi$ be a non
characteristic half space.
If $\bar x\in \partial D $, then there exists a neighborhood $V$ of $\bar x$ in $\Gi$ such that
the fundamental solution admits the following representation:
\begin{equation}\label{tesi1}\hat {\Gamma}_{\hat\Delta}(\x, \y) =\int_{\partial D \cap V}
\Gamma_{\Delta}((0,\x), (0,\z))
\Gamma_{\Delta}((0,\z), (0,\y)) d\z + \hat R_{\hat\Delta}(\x, \y), \end{equation}
for every $x = (0, \x), y= (0, \y)\in \partial D \cap V$,
where $\hat R_{\hat\Delta}$ is a kernel of type $5/2$ with respect to the distance $\hat d$.
\end{theorem}
This theorem ensures
that $K_1$ is the inverse of the operator $E_{\Gamma_{\Delta}(0, \cdot)}$ up
to a remainder. The proof of Theorem \ref{mainRn} will be concluded
with a standard version of the parametrix method, which allows to
carefully handle the remainder and to prove that $K_1 +R$ is indeed the inverse of
$E_{\Gamma_{\Delta}(0, \cdot)}$.
Theorem \ref{teorema1} expresses
$E_{\Gamma_{\Delta}(0, \cdot)}$ as
the square root of
the operator associated to
$\hat {\Gamma}_{\hat\Delta}$. This result, well known in the
Euclidean setting (see for example the results of
Caffarelli and Silvestre \cite{CS}), was not known
for general Carnot groups, but only in the special case when
the group $\Gi$ is expressed as $\Gi= \R \times \hat \Gi$ (see Ferrari and Franchi \cite{Ferrarifranchi}).
The proof in this setting is inspired by the results of Evans in \cite{E} (in the Euclidean setting) and
of Capogna, Citti and Senni (in Carnot groups) in \cite{CCS}.
\subsection{Structure of the paper and sketch of the proofs}
The paper starts with Section 2, where we fix notation and recall known properties of Carnot groups and their Riemannian approximation.
In Section 3 we show that a
non characteristic plane can always be represented as the plane $\{(x_1,\x)\in \mathbb{R}\times \mathbb{R}^{n-1}\,:\, x_1=0\}$
with the canonical exponential change of variables described in
\eqref{deftheta}. In these coordinates the vector fields attain an explicit
polynomial representation recalled in
\eqref{struttura campi}. Moreover, Section 3 contains the proof of
Theorem \ref{teorema1} under the assumption that the boundary of $D$
is a non characteristic plane and the vector fields are homogeneous.
The proof of this theorem is the most technical part of the paper and it is based on
a Riemannian approximation and a parabolic regularization of the operator $\Delta$.
Precisely the Riemannian approximation of the Laplace type operator $\Delta$ is an operator of the form
\begin{equation}\label{Definede}\Delta_{\e}=
\Delta+ \e^2 \sum_{i=m+1}^{n}X_i^2,\end{equation}
and its parabolic regularization leads to the operator
\begin{equation}\label{e:L_epsilon}
L_\e:= \partial_t - \Delta_{\e}.\end{equation}
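For instance, in the first Heisenberg group, where $m=2$, $n=3$ and $X_3=[X_1,X_2]$ (used here only as an illustration), the two operators read
\begin{equation*}
\Delta_{\e}= X_1^2+X_2^2+\e^2 X_3^2, \qquad L_\e = \partial_t - X_1^2-X_2^2-\e^2 X_3^2:
\end{equation*}
for every $\e>0$ the operator $\Delta_\e$ is elliptic, while for $\e=0$ it degenerates to the subelliptic operator $\Delta$.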
In a neighborhood of any non characteristic point $z$ of the plane $\partial D$
we will apply a new version of the freezing and parametrix methods to approximate
the fundamental solution $\Gamma_{\e}$ of $L_\e$ in terms of the fundamental solution $\hat \Gamma_{\e}$ of a
suitable tangential heat operator $\partial_t - \hat \Delta_{\e}$.
The parametrix method has already been largely used in the subriemannian setting
for estimating the fundamental solution
in terms of a known one (see for example \cite{RS, SC, JSC, C, blu_a}).
Here we are inspired by the papers \cite{CCS} and \cite{CM}, where the relation between the fundamental solution on the whole space and its restriction to the boundary was studied in the framework of diffusion-driven motion by curvature.
The main technical difficulty in our setting is due to the fact that neither the geometry of the subriemannian space nor the structure of the subriemannian operators is naturally represented as the direct sum of the tangential
and the normal part. This splitting does hold in the Riemannian approximation, and this is the reason for using it.
However the subriemannian structure and its Riemannian approximation have different homogeneous dimensions.
Hence we need to introduce a non homogeneous version of the parametrix method, which leads to the existence of a constant $C$ such that
\begin{equation}
\label{quellosopra}
\left| \Gn_\e((0, \x, t), (0, \y,\tau)) -
\frac{\hat \Gamma_{\e}((\x,t), (\y, \tau))}{
\sqrt{t-\tau}} \right|\leq
C\Gn_{\e}((0, \x, t), (0, \y, \tau))(t-\tau)^{1/4}
\end{equation}
for every $\x, \y\in \partial D$ (this is done in Proposition \ref{lemmaJe2} below).
The key point here is to prove that $C$ is independent of $\e$.
The proof is quite delicate, and it is based on an interplay between the Riemannian and subriemannian
nature of our operators.
Since all
constants in \eqref{quellosopra} are independent of $\e$, we can let $\e$ go to $0$ and obtain
an analogous estimate for subriemannian operators.
Denoting by $\Gamma$ the fundamental solution of $\partial_t - \Delta$ and by
$\hat \Gamma$ the fundamental solution of the operator $\partial_t - \hat \Delta$,
we will prove in
Theorem \ref{lemmaJe1} that
there exists a constant $C=C(T)$ such that
for
all $x=(0, \x)$, $y=(0, \y)$ in $\partial D$ and for every $t$ and $\tau$, with $0<t-\tau<T$, we have
\begin{equation}
\label{altraggiunta}
\left| \Gn((0, \x, t), (0, \y,\tau)) -
\frac{\hat \Gamma((\x,t), (\y, \tau))}{
\sqrt{t-\tau}} \right|\leq
C\Gn((0, \x, t), (0, \y, \tau))(t-\tau)^{1/4}.
\end{equation}
Now, integrating in the time variable, we deduce the proof of Theorem \ref{teorema1} for homogeneous
vector fields on a plane (stated as Lemma \ref{l:Gammaconvolve}).
In Section 4 we provide the full proof of Theorem \ref{teorema1} on smooth manifolds. Since
this is a local result, we show that, via a suitable change of variables, it is possible to identify the boundary of $D$ with the plane $\{x_1 =0\}$.
With this change of variables the vector fields $(X_{i})$ become non homogeneous, but they still define an H\"ormander structure. In Section 4.1 we describe this procedure and recall some properties of subriemannian spaces in this generality.
Then, in Section 4.2 we apply a new simplified version of the parametrix method
of Rothschild and Stein \cite{RS}, tailored to the present setting,
and locally we reduce the vector fields to homogeneous ones.
With this tool we can deduce the proof of Theorem \ref{teorema1} for
smooth surfaces from the one on planes, previously proved in Section 3.
Finally, Section 5 contains the construction of the Poisson kernel, which allows us to prove Theorem \ref{mainRn}.
The main idea of the proof of this theorem has been outlined above. First, we use
Theorem \ref{teorema1} to build an approximate kernel.
After that, a standard version of the parametrix method is applied to
obtain the Poisson kernel from the approximating one.
The Schauder estimates stated in
Theorem \ref{c:schauderGroups} are a consequence of the boundedness of the
operator associated to the Poisson kernel, and they will be proved with the same tools as in \cite{Jerison}.
\section{Notations and known results}\label{section2}
\subsection{The subriemannian structure}
As recalled in Section \ref{introcarnot}, a Carnot group $\Gi$ is
$\R^{n}$, with the group law induced by the exponential map and the stratification
$ V^1\oplus \cdots \oplus V^\kappa$ of the tangent space
recalled in
\eqref{strat}.
The stratification induces a natural notion of
degree of a vector field:
\begin{equation}\label{degree}deg(X)=j \quad\text {whenever}\; X\in V^j.\end{equation}
If $\{X_i\}_{i=1, \cdots, n}$ is the stratified basis introduced
in subsection \ref{introcarnot}, we will also write $deg(i)$ instead of $deg(X_i)$.
Via the exponential map,
$\R^{n}$ is endowed with a Lie group structure
and the resulting group is denoted by $\mathbb{G}$. Since in this setting the exponential
map around a fixed point $y$ is a global diffeomorphism, every other point
$x$ can be uniquely represented as
$\xn = \exp(v_1 X_1)\exp(\sum_{i=2}^n v_{i} X_i)(y). $
Consequently we can define a logarithmic function
$\Theta_{X_1, \cdots, X_n, y}$ as the inverse of the
exponential map:
\medbreak\noindent
\begin{equation}\label{deftheta}
\Theta_{X_1, \cdots, X_n, y}: \Gi \to \mathfrak
g, \quad \Theta_{X_1, \cdots, X_n, y}(\xn)= (v_1, \cdots, v_n).\end{equation}
We will simply denote $\Theta_y$ instead of $\Theta_{X_1, \cdots, X_n, y}$ when no ambiguity may arise.
Note that we are using exponential canonical coordinates of second type around a fixed point $y\in \mathbb{G}$,
which will simplify notations while dealing with a boundary problem.
In particular, the fixed point $y$, around which we choose the axes, has coordinates $0$ and the vector field $X_1$ is represented as $X_1=\partial_1$ and
all the other vector fields $(X_i)_{i\geq 2}$ coincide with the partial derivatives $\partial_i$
at the fixed point $y=0$. At any other point they can be represented in these coordinates as
\begin{equation}\label{struttura campi}
X_1=\partial_1, \quad X_i=\partial_i+\sum_{deg(j)> deg(i)}a_{ij}(v)\partial_j\quad i=2,\cdots,n,\end{equation}
where $a_{ij}$ are
homogeneous polynomials
of degree $deg(j) - deg(i)$ depending only on variables $v_h$, with $deg(h) \le deg(j)-deg(i)$ (see for example \cite{RS} for a detailed proof).
Note that if $deg(i)=\kappa $ then $X_i=\partial_i$.
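For instance, in the first Heisenberg group ($m=2$, $n=3$, $\kappa=2$, $X_3=[X_1,X_2]$) the representation \eqref{struttura campi} takes, up to a sign depending on the convention chosen for the exponential coordinates, the form
\begin{equation*}
X_1=\partial_1,\qquad X_2=\partial_2+v_1\,\partial_3,\qquad X_3=\partial_3,
\end{equation*}
and the only nontrivial coefficient $a_{23}(v)=v_1$ is a homogeneous polynomial of degree $deg(3)-deg(2)=1$, depending only on the variable $v_1$, with $deg(1)=1\le deg(3)-deg(2)$.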
By construction the vectors $\{X_{i}\}_{i=1, \cdots, m}$
and their commutators span
$\mathfrak g$ at every point, and consequently verify H\"ormander's finite rank condition
(\cite{hormander}).
Due to the stratification of the algebra, a natural
family of dilations $(\delta_\lambda)_{\lambda>0}$ acts on points
$v= \sum_{i=1}^n v_i X_i \in \mathfrak{g} $ as follows:
\begin{equation}\label{dilat}
\delta_\lambda(v):=\sum_{i=1}^n \lambda^{deg(i)}v_i\, X_i.
\end{equation}
On $V^1$, which is generated by $X_1, \cdots X_m$, we define a Riemannian metric which makes $X_1, \cdots, X_m$ an orthonormal basis.
The associated norm will be extended to a homogeneous norm on the whole of $\mathfrak g$,
defined as follows:
\begin{equation}\label{subnorm}\|v\| := \sum_{i=1}^{n} | v_{i}|^{1/{deg(i)}}.\end{equation}
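Homogeneity of this norm under the dilations \eqref{dilat}, namely $\|\delta_\lambda(v)\|=\lambda\|v\|$, follows at once from $|\lambda^{deg(i)}v_i|^{1/deg(i)}=\lambda\,|v_i|^{1/deg(i)}$; the following elementary numerical sketch (using the degrees $(1,1,2)$ of the first Heisenberg group, chosen only for illustration) checks it.

```python
# Numerical sketch: the homogeneous norm ||v|| = sum_i |v_i|^(1/deg(i))
# is homogeneous of degree 1 under the anisotropic dilations delta_lambda.
# Degrees (1, 1, 2) are those of the first Heisenberg group (illustrative).
deg = [1, 1, 2]

def dilate(lam, v):
    # (delta_lambda(v))_i = lambda^deg(i) * v_i
    return [lam ** d * vi for d, vi in zip(deg, v)]

def hnorm(v):
    # ||v|| = sum_i |v_i|^(1/deg(i))
    return sum(abs(vi) ** (1.0 / d) for d, vi in zip(deg, v))

v, lam = [0.3, -1.2, 0.7], 2.5
# ||delta_lambda(v)|| = lambda * ||v||
assert abs(hnorm(dilate(lam, v)) - lam * hnorm(v)) < 1e-12
```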
Via the logarithmic function defined in (\ref{deftheta}) the dilation $\delta_{\lambda}$
induces a one-parameter group of
automorphisms on $\Gi$, again denoted by $\delta_{\lambda}$.
A function $f: \Gi \rightarrow \R$ is called homogeneous of degree $\alpha$ if
$f(\delta_\lambda(x)) = \lambda^\alpha f(x)$ for any $ \lambda>0$ and $x\in \Gi$.
In particular we can define a
gauge distance $d(\cdot,\cdot)$ homogeneous of degree 1,
as the image of the norm through the function $\Theta$:
\begin{equation}\label{2.5bis}
d(y, \xn):= \|\Theta_{X_1,\cdots,X_n,y}(\xn)\|.\end{equation}
The Lebesgue measure is homogeneous of degree
\begin{equation}\label{homodim}
Q:= \sum_{i=1}^\kappa i \, dim(V^i) \end{equation}
with respect to the dilations, in the sense that $|\delta_\lambda(E)|=\lambda^{Q}|E|$ for every measurable set $E\subset \Gi$.
Hence $Q$ is called the homogeneous dimension of the space and
there exist constants $C_1, C_2$ such that
$$
C_1 r^Q \leq |B(x, r)|\leq C_2 r^Q \qquad\forall\, r>0, \ x\in \Gi,$$
where $B(\xn, r)$ denotes the metric ball centered in $x$ with radius $r$,
and $|\,\cdot\,|$ denotes the Lebesgue measure.
Any vector field $X$ will be identified with
the first order differential operator with the same coefficients.
If $\phi$ is a continuous function defined in an open set $V$ of $\Gi$ and if, for every $i=1,\cdots, m,$ the Lie derivative
$X_{i}\phi$ exists, then we call horizontal gradient of $\phi$ the vector
\begin{equation}\label{e:tutto subriemannian}
\nabla \phi=\sum_{i=1}^m (X_{i}\phi) X_{i}.
\end{equation}
The associated classes of H\"older continuous functions
will be defined as follows:
\begin{definition}\label{defholder}
Let $0<\alpha < 1$, $V\subset\Gi$ be an open set, and $u$ be a function defined on
$V.$ We say that $u \in C^{\alpha}(V)$ if there exists a positive constant $M$ such that for
every $x, x_0\in V$
\begin{equation}\label{e301}
|u(x) - u(x_0)| \le M d ^\alpha(x, x_{0}).
\end{equation}
We put
$$\|u\|_{C^{\alpha}(V)}=\sup_{x\neq x_{0}} \frac {|u(x) - u(x_{0})|}{d^\alpha(x, x_{0})}+
\sup_{V} |u|.$$
Iterating this definition,
if $k\geq 1$ we say that $u \in
C^{k,\alpha}(V)$ if
$X_iu \in C^{k-1,\alpha}(V)$ for all $i=1,\cdots, m$.
\end{definition}
The Laplace type operator defined in \eqref{laplacoperator}
is a differential operator of degree $2$, in the sense of the following definition.
\begin{definition} Let $\{X_{i_j}\}$ be differential operators of order 1 and degree $ deg(X_{i_j})$.
We say that the differential operator $Y_1= X_{i_1}\cdots X_{i_p}$ has order $p$ and degree
$\sum_{j=1}^p deg(X_{i_j})$.
Moreover, if
$Y$ is a differential operator represented as
\begin{equation}\label{order}
Y=a Y_1,\end{equation}
where $a$ is a homogeneous function of degree $\alpha$, then
we say that $Y$
is homogeneous of degree $deg(Y_1)-\alpha$. A differential operator will be called of degree $k-\alpha$ if it is a sum of operators with maximum degree
$k-\alpha$.
\end{definition}
Following \cite{Folland} we recall the definition of kernel of type $\alpha$:
\begin{definition}\label{kerneltype}
We say that $K$ is a kernel of type $\alpha$, if $K$ is of class $C^\infty$ away from $0$ and it is homogeneous of degree $\alpha-Q$.
\end{definition}
In a Carnot group, this implies that $K$ satisfies condition \eqref{e:sileva}.
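For example, if $Q>2$, the fundamental solution $\Gamma_\Delta$ of a homogeneous sub-Laplacian on a Carnot group is a kernel of type $2$ (see \cite{Folland}): it is smooth away from the pole and satisfies
\begin{equation*}
\Gamma_{\Delta}(\delta_\lambda(x))=\lambda^{2-Q}\,\Gamma_{\Delta}(x)\qquad \text{for every }\lambda>0.
\end{equation*}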
\subsection{The Riemannian approximation of the structure}
One of the key technical tools that we will use is a
Riemannian approximation of the subriemannian structure.
For every $\e>0$, we extend the Riemannian metric defined on $V^1$
to a left invariant Riemannian metric
defined on $\mathfrak g$
by requesting that \begin{equation}\label{e:pippo}(X_{1, \e},\cdots,X_{n, \e}):=
(X_1,\cdots,X_m,\e^{{deg}(m+1)-1}
X_{m+1},\cdots, \e^{{deg}(n)-1} X_n)\end{equation}
is an orthonormal frame. We say that these vector fields {have $\e$-degree}
equal to $1$, and we write $deg_\e(X_{i, \e})=1$.
Since the Lie algebra generated by these vectors also
contains the commutators of these vector fields, we also consider the vector fields
\begin{equation}\label{e:pippo1}X_{i, \e}:=
X_{i -n+m} \ \text{and } \ {deg}_{\e}(X_{i, \e}):= {deg}(X_{i-n+m}) \ \text{for all}\ i= n+1,\cdots, 2n-m.\end{equation}
Let $d_{cc}$ and $d_{cc, \e}$ denote the control distances
associated with the vector fields $X_{1},\cdots, X_{m}$ and $X_{1, \e},\cdots,X_{n, \e}$, respectively. It is well known (see for instance \cite{Gromov} and the references therein) that $(\mathbb{G},d_{cc, \e})$ converges
in the Gromov-Hausdorff sense, as $\e\to 0$, to the subriemannian
space $(\mathbb{G},d_{cc})$. Although the subriemannian structure is formally recovered
in the limit for $\e\rightarrow 0$, we will need to recognize that
the structure and all constants appearing in the estimates are stable in the limit.
In addition we will need to recognize that the space has a property of $\e$-homogeneity,
with respect to the natural distance.
A classical estimate of the distance $d_{cc, \e}$ is due to Nagel, Stein and Wainger in
\cite{NSW}. From the whole family $\{X_{i, \e}\}_{i=1,\cdots, 2n-m}$ it is possible to select different bases $\{X_{i_j, \e}\}_{i_j\in I}$, for different choices of indices $I = (i_1, \cdots,i_n) \subset \{1,...,2n-m\}^n$. As a consequence each vector
$v$ has different representations $v= \sum_{i_j\in I} v_{i_j, \e}X_{i_j, \e}$ in terms of the
different bases. The optimal choice of indices, denoted $I_{v, \e},$
is the one which minimizes the $\e$-homogeneous gauge norm:
\begin{equation}\label{rap}\|v\|_\e = \sum_{i_j\in I_{v, \e}} |v_{i_j, \e}|^{1/ deg_\e(i_j)}=\min_{I}\sum_{i_j\in I}|v_{i_j, \e}|^{1/deg_\e(i_j)}.
\end{equation}
This norm can be explicitly written as follows:
if $v = \Theta_{X_1,\cdots, X_n, y}(x)$ then
\begin{equation}\label{norma_e}\|v\|_{\e}=\sum_{i=1}^{m}|v_i| + \sum_{i=m+1}^{n} \min\left\{\frac{|v_i|}{\e^{deg(i)-1}},
|v_i|^{1/deg(i)} \right\}.
\end{equation}
In \cite{CCR} it is proved that the associated ball box distance
\begin{equation}\label{de}d_\e(y,\xn)= \|\Theta_{X_{1},\cdots, X_{n}, y}(\xn)\|_\e\end{equation}
is locally equivalent to the distance $d_{cc,\e}$, with equivalence constants independent of $\e$.
Let us explicitly note that this distance behaves differently near $0$ and at infinity. Indeed,
if the $v_i$ are small with respect to $\e$ for every $i\ge m+1$, then
the distance $d_{\e}$ has a Riemannian behavior, while it is purely subriemannian when the $v_i$ are large.
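This can be checked directly from \eqref{norma_e}: for a coordinate of degree $deg(i)=d$ the minimum switches from the Riemannian branch $|v_i|/\e^{d-1}$ to the subriemannian branch $|v_i|^{1/d}$ exactly at $|v_i|=\e^{d}$. A minimal numerical sketch (degrees $(1,1,2)$, taken from the Heisenberg group only for illustration):

```python
# Sketch of the two regimes of the epsilon-norm: for the degree-2 variable
# the minimum in \eqref{norma_e} is |v|/eps below the threshold |v| = eps^2
# and |v|^(1/2) above it.
eps, deg = 0.1, [1, 1, 2]

def norm_eps(v):
    return sum(
        abs(vi) if d == 1 else min(abs(vi) / eps ** (d - 1), abs(vi) ** (1.0 / d))
        for d, vi in zip(deg, v)
    )

small, large = eps ** 2 / 2, 4.0          # below / above the threshold eps^2
assert abs(norm_eps([0, 0, small]) - small / eps) < 1e-15   # Riemannian branch
assert abs(norm_eps([0, 0, large]) - large ** 0.5) < 1e-15  # subriemannian branch
```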
It is worthwhile to note that for every $\e>0$ there exists a constant $C_\e$ such that
$|B_\e(\xn, r)|= C_\e r^n$,
where
$B_\e(\xn, r)$ denotes the ball $\{\yn\in \mathbb{G}\,|\, d_\e(\xn, \yn)<r\}$ and
$|\cdot|$ the Lebesgue measure. In particular for every $\e>0$ the homogeneous dimension of the Riemannian space
is $n$, while by \eqref{homodim} for $\e=0$ the homogeneous dimension of the space is $Q$, with $Q>n$.
Hence this notion of homogeneity is not
stable with respect to $\e$, and the constant $C_\e$ blows up for $\e\rightarrow 0$.
However, the following uniform doubling property has recently been proved in \cite{CCR}: \begin{prop}\label{homog-stab}
There is a constant $C $ independent of $\e$ such that for every $\xn\in \mathbb{G}$ and $r>0$,
\begin{equation}\label{E:homog-stab}
|B_\e(\xn, 2r)|\le C |B_\e(\xn, r)|.
\end{equation}
\end{prop}
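A heuristic way to understand why the constant in \eqref{E:homog-stab} can be taken independent of $\e$ is the ball-box description of $d_\e$: if one models $|B_\e(\xn,r)|$ by $\prod_{i}\max\{r\,\e^{deg(i)-1},\, r^{deg(i)}\}$ (a model expression assumed here only for illustration, not a statement from \cite{CCR}), then each factor at radius $2r$ is bounded by $2^{deg(i)}$ times the corresponding factor at radius $r$, so that doubling holds with $C=2^Q$, uniformly in $\e$. The following sketch checks this for the model volume:

```python
# Sketch: uniform-in-eps doubling for a model ball-box volume
# |B_eps(r)| ~ prod_i max(r * eps^(deg(i)-1), r^deg(i))  (illustrative model).
deg = [1, 1, 2]
Q = sum(deg)                    # homogeneous dimension, here Q = 4

def vol(eps, r):
    out = 1.0
    for d in deg:
        out *= max(r * eps ** (d - 1), r ** d)
    return out

# each factor at radius 2r gains at most 2^deg(i), hence C = 2^Q works
for eps in (1e-3, 1e-1, 1.0):
    for r in (1e-4, 1e-2, 1.0, 10.0):
        assert vol(eps, 2 * r) <= 2 ** Q * vol(eps, r)
```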
The doubling inequality \eqref{E:homog-stab} can be considered as a weak form of homogeneity, and suggests that it is possible to give a new definition of $\e$-homogeneity.
Following \cite{RS} we will give the following definition of locally homogeneous
functions and operators:
\begin{definition} \label{egrado}A function $f$ is locally homogeneous of $\e$-degree
$\alpha$ in a neighborhood of a point $z$ with respect to the metric \eqref{de}
if $f\circ {\Theta_{X_1,\cdots,X_n, z}^{-1}}$ is homogeneous of
degree $\alpha$, with respect to the norm $\|\cdot\|_{\e}$ defined in \eqref{norma_e}.
A differential operator $Y$ is homogeneous of local $\e$-degree $\alpha$ in a neighborhood of a point $z$,
with respect to the metric \eqref{de} if
$d\Theta_{X_1,\cdots, X_n, z}(Y)$ is homogeneous of degree $\alpha$. \end{definition}
In particular this definition implies the following property:
\begin{remark}\label{edegreefo}
If $a$ is a homogeneous function of $\e$-degree $\alpha$ and $Y_1$ is an operator of $\e$-degree $k$,
then $a(z^{-1}x)Y_1 $
is a homogeneous operator of $\e$-degree $k-\alpha$. This implies that for every other smooth function $f$
of local $\e$-degree $\beta$ in a neighborhood of a point $z$,
$a(z^{-1} x)(Y_1 f)$
is a smooth function
of $\e$-degree $\beta+\alpha-k $ in a neighborhood of a point $z$, and
$|a(z^{-1} x)(Y_1 f)(x)|\leq C d_\e^{\beta+\alpha-k}(x,z)$.
\end{remark}
If $\phi\in C^{\infty}(\Gi)$ we define the $\e$-gradient of $\phi$ as follows
\[\nabla_\e \phi:=\sum_{i=1}^n (X_{i, \e}\phi) X_{i, \e}.\]
In terms of the vector fields with $\e$-degree $1$, defined in \eqref{e:pippo}, we consider the associated heat operator
\begin{equation}\label{operatore}
L_{\e}:= \p_t - \sum_{i=1}^n X^2_{i, \e} - \sum_{i=1}^m b_i X_{i, \e}. \end{equation}
In analogy with the operator introduced in \eqref{operatore},
the heat operator associated to the subriemannian structure has the form
\begin{equation}\label{operatoresub}L:= \p_t - \sum_{i=1}^m X^2_{i}- \sum_{i=1}^m b_i X_{i}. \end{equation}
The behavior of these operators in interior points
is well known: they admit fundamental solutions respectively
$\Gamma_{\e}(\xn, t)$ and $\Gamma(x,t)$ of class $C^\infty$ out of the pole
(see \cite{JSC} for precise estimates of $\Gamma(x,t)$ and \cite{CCM} for estimates of
$\Gamma_{\e}(\xn, t)$ uniform in $\e$).
In our work we will
need estimates which are uniform in the variable $\e$. We start with the following definition.
\begin{definition}\label{defkernelestimates}
We say that a family of kernels $(P_{\e})_{\e>0}$, defined on
$\Gi\times ]0, \infty[\times \Gi\times ]0, \infty[ $
and $C^\infty$ out of the diagonal, is of uniform exponential $\e$-type $\lambda +2$,
if for every $q\in \N$ and every $k$-tuple $(i_1,\cdots,i_k)\in \{1,\cdots,n\}^k$ there exists $C_{q,k}>0$, depending only on $k,q$ and on
the Riemannian metric, such that
\begin{equation}\label{e:Giovanna}
|(X_{i_1, \e}\cdots X_{i_k, \e}\p_t^q P_{\e})((\xn,t), (z,\tau))| \le C_{q,k} (t-\tau)^{-q-k/2 + \lambda/2}
\frac{e^{-\frac{d_\e(\xn, z)^2}{C_{q,k} (t-\tau)}}} {|B_\e(x, \sqrt{t-\tau})|}
\end{equation}
for all $x\in \Gi$ and $t>\tau$.
\end{definition}
According to the definition above, the fundamental solution $\Gamma_{\varepsilon}$ is a kernel of uniform exponential
$\e$-type $2$. Precisely, the following result, established in \cite{CC} (see also \cite{CM} and \cite{CCM}), holds:
\begin{theorem}\label{uniform heat kernel estimates-e}
The fundamental solutions $\Gamma_\e(x,t)$ of the operators $L_\e$ constitute
a family of kernels of uniform exponential
$\e$-type $2$ and there exist constants $C_0,C>0$
independent of $\e$ such that for each $\e>0$, $x\in \mathbb{G}$ and $t>\tau$ one has
\begin{equation}\label{gauss-sol}
C_0^{-1}\frac{e^{-C\frac{d_\e(\xn, z)^2}{(t-\tau)}}} {|B_\e (x, \sqrt{t-\tau})|}\le \Gamma_{\e}((\xn, t), (z, \tau))\le
C_0\frac{e^{-\frac{d_\e(\xn,z)^2}{C (t-\tau)}}} {|B_\e (x, \sqrt{t-\tau})|}.
\end{equation}
Moreover, for any
$k$-tuple $(i_1,\cdots,i_k)\in \{1,\cdots, m\}^k$ one has
\begin{equation}\label{GetobarG}{X}_{i_1}\cdots {X}_{i_k} \p_t^q \Gamma_{\e}\to {X}_{i_1}\cdots {X}_{i_k}\p_t^q \Gamma\quad \text{as $\e\to 0$}\end{equation}
uniformly on compact sets and in a dominated way on all $\Gi$.
\end{theorem}
\begin{remark}
In particular from this theorem we can obtain the well known Gaussian estimates
of the fundamental solution $\Gamma$ of the operator $L$. Indeed $\Gamma$ is a kernel of exponential
type $2$ and there exist constants $C_0,C>0$ such that for each $x\in \mathbb{G}$ and $t>\tau$ one has
\begin{equation}\label{gauss-sol2}
C_0^{-1}\frac{e^{-C\frac{d(\xn, z)^2}{(t-\tau)}}} {|B (x, \sqrt{t-\tau})|}\le \Gamma((\xn, t), (z, \tau))\le
C_0\frac{e^{-\frac{d(\xn,z)^2}{C (t-\tau)}}} {|B (x, \sqrt{t-\tau})|}.
\end{equation}
\end{remark}
\subsection{The parametrix method}\label{paramethod}
One of the main tools that we will use to estimate the fundamental solution is the parametrix method, originally due to Levi
and now extremely classical for elliptic and parabolic equations
(see \cite{Fri}). In the subriemannian setting the parametrix method
has been used to approximate general H\"ormander type operators with homogeneous ones: we refer to \cite{RS, SC} for the first results, \cite{JSC} for the subriemannian heat kernel, \cite{C} for estimates in case of low regularity, and \cite{blu_a}
for a recent self-contained presentation.
The method consists in
providing an explicit representation of the fundamental solution $\Gamma$ of an operator
$L$ in terms of the fundamental solution of an approximating operator
$L_{z_1}$ (with associated fundamental solution $\Gamma_{z_1}$).
Using the definition of fundamental solution and the fact that
$L_{z_1} (\Gamma - \Gamma_{z_1}) =(L_{z_1} -L)\Gamma,$
the difference between the
two solutions can be formally represented as
\begin{align}\nonumber (\Gamma - \Gamma_{z_1})((\xn, t), (z, \tau))&=
\int \Gamma_{z_1}((\xn, t), (y, \theta)) (L_{z_1} - L)(\Gamma - \Gamma_{z_1}) ((y, \theta), (z, \tau))\,dy\,d\theta \\
&+\int \Gamma_{z_1}((\xn, t), (y, \theta)) (L_{z_1} - L) \Gamma_{z_1} ((y, \theta), (z, \tau))\,dy\,d\theta.\label{e:acca}\end{align}
Denoting by
\begin{equation}\label{piantina}H: = L_{z_1} - L,\end{equation}
and $E_{\Gamma_{z_1}}$ the integral operator with kernel $\Gamma_{z_1}$,
the above expression \eqref{e:acca} can be written as
$$(I- E_{\Gamma_{z_1}} H)(\Gamma - \Gamma_{z_1}) = E_{\Gamma_{z_1}}H(\Gamma_{z_1}).$$
If the operator $(I- E_{\Gamma_{z_1}} H)$ is invertible, the difference $\Gamma - \Gamma_{z_1}$ can be formally expressed as
\begin{equation}\label{e: informal serie}
\Gamma - \Gamma_{z_1}=\sum_{j=0}^\infty (E_{\Gamma_{z_1}} H)^{j+1}(\Gamma_{z_1})= E_{\Gamma_{z_1}} \Phi, \quad \text{with}
\;\; \Phi:=\sum_{j=0}^\infty (H E_{\Gamma_{z_1}})^{j}H (\Gamma_{z_1}).
\end{equation}
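In finite dimensions the mechanism behind \eqref{e: informal serie} is simply the Neumann series for $(I-K)^{-1}$ when $\|K\|<1$, with $K$ playing the role of $E_{\Gamma_{z_1}}H$. The following toy sketch (with a small random matrix, purely for illustration) makes this explicit:

```python
# Toy sketch of the Neumann series: if ||K|| < 1 then
# (I - K)^{-1} f = sum_{j>=0} K^j f, the finite-dimensional analogue of
# inverting (I - E_Gamma H) in \eqref{e: informal serie}.
import numpy as np

rng = np.random.default_rng(0)
K = 0.05 * rng.standard_normal((4, 4))   # small norm: the series converges
f = rng.standard_normal(4)

s, term = f.copy(), f.copy()
for _ in range(80):                      # partial sums of sum_j K^j f
    term = K @ term
    s += term

assert np.allclose(s, np.linalg.solve(np.eye(4) - K, f))
```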
Roughly speaking the proof is obtained as follows:
\begin{itemize}
\item [1)] The first and most delicate part of the proof is to define the approximating operator $H$
and to prove that it is a differential operator of degree $2-\alpha$ for a suitable positive $\alpha$. From this fact it
follows that the kernel
\begin{equation}\label{erreuno}R_1(x,z): = H \Gamma_{z_1}(x,z)
\end{equation} is a kernel of type $\alpha$ with respect to the considered homogeneous space. It is important to note that
$$R_1(x,z)= (L_{z_1}-L)\Gamma_{z_1}(x,z) = L(\Gamma(x,z) - \Gamma_{z_1}(x,z))\,.$$
\item [2)] Identifying
the operator $HE_{\Gamma_{z_1}}$ with the integral
operator $E_{R_1}$ with kernel $R_1$,
the series $\Phi$ in \eqref{e: informal serie}
reduces to
\begin{equation}\label{Fi}\Phi=\sum_{j=0}^\infty (E_{R_1})^{j} R_1.\end{equation}
Using the fact that the convolution of a kernel of type $\alpha$ with a kernel of type $\beta$
provides a kernel of type $\alpha + \beta$, it is possible to prove that this series
converges uniformly (see for example Lemma 7.3 in \cite{Jerison}).
\item [3)] Finally, singular integral tools lead to the convergence of the derivatives, and the function $\Gamma$,
defined through \eqref{e: informal serie}, is a fundamental solution.
\end{itemize}
In the sequel we will consider kernels of type $\alpha$ in the sense of
Definition \ref{kerneltype}, when working with subelliptic operators,
while kernels of exponential $\e$-type
$\alpha$ in the sense of Definition \ref{defkernelestimates}, when studying Riemannian heat kernels. The main difficulty to be faced here is that the Riemannian approximation does not have
a standard notion of homogeneity, since the Riemannian homogeneous dimension $n$
collapses to the subriemannian one $Q$ in the limit. However
we have endowed the regularized space with an $\e$-homogeneous structure (see \eqref{E:homog-stab} and the remark below) and
we will see that this is enough to apply the method in this setting.
Therefore, even though our vector fields are homogeneous,
our approach is more similar to the ones in
\cite{RS, SC, JSC, C}, where the geometry of the given operator and that of the
approximating one do not coincide.
\section{Reproducing formula on a plane}\label{s:LaplaceBeltrami}
In this section we will prove a first version of Theorem \ref{teorema1},
under the simplified assumptions that $\partial D$ is a non characteristic plane and
that the vector fields $(X_i)_{i=1, \cdots, m}$ are the generators of a Carnot group and have the
explicit representation recalled in \eqref{struttura campi}.
This result will be obtained via a parabolic approximation and a Riemannian regularization.
The proof of the same Theorem \ref{teorema1} on any smooth hypersurface will be deduced from this result in the next section.
\subsection{Geometry of the plane} \label{geosection}
Let $\mathbb{G}$ be a Carnot group of step $\kappa$.
Consider a non characteristic plane $M_0$.
Using the logarithmic coordinates defined in \eqref{deftheta},
it is always possible to represent $M_0$ as follows:
\begin{equation}\label{M0}M_{0}=\{(x_1,\x)\in \mathbb{G}:\,x_1=0\},
\end{equation}
where
$x=(x_1, \x)$ is a point of the space, $x_1\in \R$ and $\x\in \R^{n-1}$.
This choice of coordinates is made in such a way that
the vector field $X_{1} = \partial_1$ coincides with the direction normal to the plane,
while $\{X_i\}_{i=2, \cdots, n}$ are tangent to $M_0$ and
are represented as in \eqref{struttura campi}. Hence the vector fields obtained
from $X_i$ by evaluating the coefficients $a_{ij}$ on the points of the plane $M_0$
are the generators of the first layer $\hat V^1$ on the plane,
so that
\begin{equation}
\label{campitildeu_noe} \hat X_{i}:= {X_{i}}_{|x_1 =0},\quad i=2,\cdots,n. \end{equation}
Thanks to this choice, not only is the plane $M_0$ non characteristic,
but also the planes $M_{z_1}=\{(x_1,\x)\in \mathbb{G}:\,x_1=z_1\}$
are non characteristic for every $z_1$ sufficiently small.
We note also that assumption \eqref{assumption}
ensures that the vector fields $\hat X_i$
satisfy a H\"ormander condition at every point and span a $n-1$ dimensional space at every point. In analogy with formula \eqref{homodim},
the homogeneous dimension of the plane is $$\hat Q= \sum_{i=2}^\kappa i \, dim(\hat V^i).
$$
As a consequence
\begin{equation}\label{hadimension}\hat Q= Q-1.\end{equation}
Via the exponential map and definition \eqref{2.5bis}
the vector fields $\hat X_{i}$ define a distance
\begin{equation}\label{hatdistance}
\hat d(\hat y, \hat x):= \|\Theta_{\hat X_2,\cdots, \hat X_n ,\y}(\x)\|
\end{equation}
on $M_0.$ By the H\"ormander condition a
Laplace operator and its time dependent counterpart are defined on $M_{0}$ as
\begin{equation}
\hat \Delta:= \sum_{i=2}^m \hat X_{i}^2, \quad \text{and} \quad
\hat L:=\partial_t -\hat \Delta,
\label{heat_plane_noe}
\end{equation}
and they have non negative fundamental solutions
$\hat \Gamma_{\hat\Delta}$ and $\hat \Gamma$ respectively.
In analogy with Definition \ref{defkernelestimates} we give the following
\begin{definition} We
say that a kernel $\hat P$, defined on
$\R^{n-1}\times ]0, \infty[\times \R^{n-1}\times ]0, \infty[ $
and $C^\infty$ out of the diagonal, is of exponential type $\lambda +2$,
if for every $q\in \N$ and every $k$-tuple $(i_1,\cdots,i_k)\in \{1,\cdots,n\}^k$ there exists $C_{q,k}>0$, depending only on $k,q$ and on
the subriemannian metric, such that
\begin{equation*}
|(\hat X_{i_1}\cdots \hat X_{i_k}\p_t^q \hat P)((\x,t), (\hat z,\tau))| \le C_{q,k} (t-\tau)^{-q-k/2 + \lambda/2}
\frac{e^{-\frac{\hat d(\x, \hat z)^2}{C_{q,k} (t-\tau)}}} {|\hat B(\hat x, \sqrt{t-\tau})|}
\end{equation*}
for all $\hat x\in \R^{n-1}$ and $t>\tau$.
\end{definition}
Our first result is the following one:
\begin{theorem}
\label{lemmaJe1}
Assume that $M_{0}=\{(x_1,\x)\in \mathbb{G}:\,x_1=0\}$ is a non characteristic plane
and let $T>0$.
Then there exists a constant $C=C(T)$ such that
for
all $z=(0, \z)$, $x=(0, \x)$ in $M_0$ and for every $t$ and $\tau$, with $0<t-\tau<T$, we have
\begin{equation}\label{goal}|\hat \Gamma((\x,t), (\z, \tau)) -\sqrt{t-\tau}\Gn((x, t), (z, \tau))|\le
C \hat \Gn((\hat x, t), (\hat z,\tau))(t-\tau)^{1/4}.\end{equation}
In addition $\hat \Gamma((\x,t), (\z, \tau)) -\sqrt{t-\tau}\Gn((x, t), (z, \tau))$ is a kernel of exponential type $1/4$ with respect to the
vector fields $\{\hat X_i\}_{i=2, \cdots, m}$.
\end{theorem}
This theorem will be proved with the parametrix method and a Riemannian approximation.
Classically, the method is applied for proving the existence of the fundamental solution
of a given operator. Extending an approach of \cite{CCS}, in Theorem \ref{lemmaJe1} we apply the method to find a relation
between the fundamental solutions since we already know that they exist.
Even though the parametrix method has been largely used in the subriemannian setting
for internal estimates, the vector fields $\hat X_{i}$
do not provide a subriemannian approximation of the vector fields $X_{i}$
and the standard parametrix method of Rothschild and Stein cannot be applied starting with the
fundamental solution of $\hat L$.
In order to clarify this fact we start with the concrete example of vector fields already introduced in
\eqref{campesempio}.
\begin{example}
Let us consider the vector fields in \eqref{campesempio}. Their commutators
are $\partial_4 = [X_1, X_2]$, which is a vector field of degree $2$, and
$\partial_5 = [[X_1, X_2],X_2]$, which is a vector field of degree $3$.
If we evaluate the vector fields $X_i$ on the plane $\{x_1 =0\}$
we obtain
$$\hat X_{2} = \partial_2 + x_3 \partial_4,
\quad \hat X_{3} = \partial_3 + x_4 \partial_5,
$$
so that
$$X_{2} - \hat X_{2}= x_1^2 \partial_5 \ \text{is an operator of degree }1.$$
Consequently, the difference
$$H=\hat L-L\, \text{ is an operator of degree } 2.$$
Hence it is not possible to apply the parametrix method, whose convergence requires
$H$ to be an operator of degree strictly less than $2$.
\end{example}
Due to these difficulties, we introduce a new version of the
parametrix method, using the $\e$-Riemannian approximation described in Section \ref{section2}.
The whole proof is based on a careful analysis of the Riemannian approximation
metric and relies on a delicate interplay between the Riemannian and subriemannian nature of our operators.
\subsection{A Riemannian and frozen approximating operator}
In Section \ref{geosection} we have chosen a point $0$
and a constant $\varepsilon$ sufficiently small such that, for every $z_1\in\R$ with $|z_1|\le \e^{2\kappa}$, the plane
$M_{z_1}=\{(x_1,\x)\in \mathbb{G}:\,x_1=z_1\}$
is non-characteristic.
In analogy with \eqref{campitildeu_noe}, we define the vector fields
$X_{i, z_1}:= {X_{i}}_{|_{x_1 =z_1}}$ as the vector fields whose coefficients
are evaluated at the points with first component $z_1$. Thanks to \eqref{struttura campi},
they can be represented as
\begin{equation}
\label{campizetauno}
X_{1, z_1}:=\partial_1,\quad X_{i, z_1}:= {X_{i}}_{|_{x_1 =z_1}}= \partial_i + \sum_{deg(j) > deg(i)} a_{ij}(z_1, \x) \partial_j
,\quad i=2, \cdots, n
\end{equation}
and, for every $\e>0$, we set
\begin{equation}
\label{campitildeu}
X_{1, z_1, \e}:=\partial_1,\quad X_{i, z_1, \e}:= {X_{i, \e}}_{|_{x_1 =z_1}},\quad i=2, \cdots, n.\end{equation}
We now introduce an
operator $L_{z_1, \e}$
which splits into the sum of a tangential and a normal part on every
plane $M_{z_1}$, and which we will use in the parametrix
method to approximate the tangential
and the normal parts of the operator $L_\e$.
Precisely,
we define
\begin{equation}
L_{z_1, \e}
:= \partial_t - \sum_{i=1}^n X_{i, z_1, \e}^2
\label{L0riemannian}
\end{equation}
with fundamental solution $\Gamma_{z_1, \e}$
on the whole space. On every plane $M_{z_1}$ we define the tangential operators
\begin{equation}
\hat L_{z_1, \e}:=\partial_t -\hat \Delta_{z_1, \e},\,
\quad \text{ where } \quad
\hat \Delta_{z_1, \e}:= \sum_{i=2}^n X_{i, z_1, \e}^2, \quad
\label{heat_plane}
\end{equation}
with nonnegative fundamental solutions $\hat \Gamma_{z_1, \e}$ and $\hat \Gamma_{\Delta, z_1, \e}$ respectively.
\begin{remark}
\label{r:splittingNC}
Let us explicitly note that the
operator $L_{z_1, \e}$ can be represented as
$$L_{z_1, \e}
= \partial_t - \partial^2_{11} - \hat \Delta_{z_1, \e}.$$
Since $\partial_{1}$ coincides with the
direction normal to the plane, the operator
splits into the sum of its orthogonal part
$\partial_t - \partial^2_{11}$ and its tangential part $\hat L_{z_1, \e}$.
Consequently its fundamental solution can be represented as
$$ \Gamma_{z_1,\e} = \Gamma_{\perp,z_1,\e}\, \hat \Gamma_{z_1,\e},$$
where $ \hat \Gamma_{z_1,\e}$ is defined above and $ \Gamma_{\perp,z_1,\e} $ is
the standard one-dimensional Gaussian, i.e. the fundamental solution of $\partial_t - \partial^2_{11}$.
\end{remark}
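For the reader's convenience, let us also record the explicit expression of the orthogonal factor. Being the fundamental solution of the constant coefficient operator $\partial_t - \partial^2_{11}$, the kernel $\Gamma_{\perp,z_1,\e}$ is in fact independent of $z_1$ and $\e$, and is given by
\begin{equation*}
\Gamma_{\perp,z_1,\e}((x_1,t),(y_1,\tau)) = \frac{1}{\sqrt{4\pi(t-\tau)}}\,
\exp\Big(-\frac{(x_1-y_1)^2}{4(t-\tau)}\Big), \qquad t>\tau.
\end{equation*}
In particular, evaluated at $x_1=y_1$ it reduces to the constant $(4\pi(t-\tau))^{-1/2}$, which is the source of the normalizing factor $\sqrt{4\pi(t-\tau)}$ appearing later on.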
\subsection{Estimates of the approximating operator}
As recalled in Section \ref{paramethod} the first step of the
parametrix method is to prove that the difference
$X_{i,\e}- X_{i,z_1,\e} $ is a differential operator of $\e$-degree strictly less than $1$
around the point $z_1$
and, as a consequence, that the operator $H_\e: =L_\e-L_{z_1, \e}$ (see also \eqref{operatore} and \eqref{piantina} above)
has $\e$-degree strictly less than 2.
\begin{lemma}
\label{lemmallemma}
Let $M_{z_1}=\{(x_1, \x)\in\mathbb{G}:x_1 =z_1\} $ be a non characteristic plane.
For every $z=(z_1, \hat z)\in M_{z_1}$, every $i\leq n$, and every $h$ such that
$deg(i) +1 \leq deg(h) \leq \kappa$, there exists a polynomial $b_{i,h, z_1}(v)$,
homogeneous of degree $deg(h)-deg(i)-1$ as a function of $v$ and $z_1$, such that, if $v =
\Theta_{X_{z_1}, z}(x)$,
\begin{equation} d\Theta_{X_{z_1}, z}({X_{i}}-{X_{i,z_1}})=
v_1\sum_{deg(h)=deg(i)+1}^\kappa b_{i,h, z_1}(v)d\Theta_{X_{z_1}, z}(X_{h,z_1}),\end{equation}
where $\Theta_{X_{z_1}, z}$ has been defined in \eqref{deftheta}.
Moreover
\begin{equation}\label{homihz} |b_{i, h, z_1}(v)|\leq C \sum_{j=0}^{deg(h)-deg(i)-1} |z_1|^j \|v\|^{deg(h)-deg(i)-1-j}.
\end{equation}
\end{lemma}
\begin{proof}
When $i=1,\cdots, n$, by the definitions
\eqref{struttura campi} and \eqref{campizetauno} of the vector fields,
we have, for $deg(i)=\kappa$,
\begin{equation}\label{eerre}X_{i}-
X_{i,z_1}= 0,
\end{equation}
hence the thesis is true in this case, and it remains to prove it only for $deg(i)<\kappa$.
Using the fact that the translation associated to the
vector fields $X_{z_1}$
acts only on the $\x$ variables,
we have
\begin{align}\nonumber &
d\Theta_{X_{z_1}, z}(X_{i}- X_{i,z_1}) = \sum_{deg(h)=deg(i)+1}^\kappa
\Big( a_{i, h}(x_1, \hat v) -
a_{i, h}(z_1,\hat v)
\Big) \partial_{h} =\\& = \sum_{deg(h)=deg(i)+1}^\kappa (x_1-z_1) a^1_{i, h, z_1}(v) \partial_{h}=v_1 \sum_{deg(h)=deg(i)+1}^\kappa a^1_{i, h, z_1}(v) \partial_{h}.
\label{Xiuestimatebis}\end{align}
In the last equality we have denoted by $(x_1-z_1)a^1_{i, h, z_1}(v) $ the polynomial
$ a_{i, h}(x_1, \hat v) -
a_{i, h}(z_1,\hat v)$, and we used the fact that $v_1 = x_1-z_1$. The polynomial $a^1_{i, h, z_1}(v)$ is homogeneous of degree
$deg(h)-deg(i)-1$ in the variables $v$ and $z_1$, and it can be estimated as
$$ |a^1_{i, h, z_1}(v)|\leq C \sum_{j=0}^{deg(h)-deg(i)-1} |z_1|^j \|v\|^{deg(h)-deg(i)-1-j}.$$
If $deg(i)= \kappa-1$, the proof is completed, by \eqref{eerre}.
For $deg(i)< \kappa-1$, using again the expression \eqref{campizetauno} for
$deg(h)<\kappa$ and \eqref{eerre} for $deg(h)=\kappa$, we get
$$d\Theta_{X_{z_1}, z} (X_{i}- X_{i, z_1}) = v_1 \sum_{deg(h)=deg(i)+1}^\kappa a^1_{i, h, z_1}(v)
d\Theta_{X_{z_1}, z}(X_{h, z_1}) -
$$
$$-v_1\sum_{deg(j)=deg(i)+2}^\kappa
\sum_{deg(h)=deg(i)+1 }^{deg(j)-1}a^1_{i, h, z_1}(v)a_{h, j, z}(v) \partial_{v_j}.$$
Since the Lie group is nilpotent of step $\kappa$,
after $\kappa-1$ iterations of this procedure we obtain a polynomial $b_{i,h,z_1}$
for which the representation formula in the statement is satisfied.
\end{proof}
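As an illustration of Lemma \ref{lemmallemma}, consider again the vector fields of the example at the beginning of this section, where (as computed there) $X_2 = \partial_2 + x_3\partial_4 + x_1^2\partial_5$, with $deg(2)=1$ and $deg(5)=3$. Neglecting, for the sake of illustration, the push-forward through $\Theta_{X_{z_1},z}$, and using $v_1 = x_1 - z_1$, we get
\begin{equation*}
X_2 - X_{2,z_1} = (x_1^2 - z_1^2)\,\partial_5 = v_1\,(v_1 + 2 z_1)\,\partial_5,
\end{equation*}
so that $b_{2,5,z_1}(v) = v_1 + 2z_1$: a polynomial homogeneous of degree $1 = deg(5)-deg(2)-1$ in the variables $v$ and $z_1$, which satisfies the estimate \eqref{homihz} with $C=2$, since $|v_1+2z_1|\le \|v\| + 2|z_1|$.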
From this lemma, Corollary \ref{7-1} below immediately follows. The proof is technically very simple,
but it is important to note that $X_{i,\e}-X_{i,z_1,\e}$ is a differential operator which has
local degree $1$, while it has local $\e$-degree $1/2$, in a neighborhood of the point $z$.
This property allows us to obtain a better approximation in the
Riemannian setting than in the subriemannian one.
\begin{corollary}
\label{7-1}
Let $M_0$ be a non characteristic plane.
Let $\mathcal{S}$ be the strip
\[\mathcal{S}:=\{x=(x_1, \x)\in\mathbb{G}:\,|x_1|\leq \e^{2\kappa}, \
|x_1-z_1| \leq \e^{2\kappa}\},\]
where $\kappa$ is the step of the group.
Then, on $\mathcal{S}$, the difference $X_{i,\e}-X_{i,z_1,\e}$ is a differential operator of
$\e$-degree $1/2$ with respect to the vector fields $X_{i,z_1,\e}$.
\end{corollary}
\begin{proof}
Applying Lemma \ref{lemmallemma}, setting
$p_{i,h, z_1, \e}(v) = \e^{-deg(h)} v_1 b_{i,h, z_1}(v)$
and using the
fact that $|v_1|\leq\e^{2\kappa}$ and $|z_1|\le 1$, we have
$$|p_{i,h, z_1, \e}(v)| \leq C |v_1|^{1/2} \sum_{deg(h) = deg(i) +1}^\kappa \sum_{j=0}^{deg(h)-deg(i)-1}\|v\|^{deg(h)-deg(i)-1-j}. $$
Since $X_{i,\e} = \e^{deg(i)} X_{i} $ and $v_1 b_{i,h, z_1}(v)X_{h,z_1} = p_{i,h, z_1, \e}(v)X_{h,z_1, \e}$, from Lemma \ref{lemmallemma} we also deduce that
\begin{equation}\label{euno} d\Theta_{X_{z_1}, z}(X_{i,\e}-X_{i,z_1,\e})=
\e^{deg(i)} \sum_{deg(h)=deg(i)+1}^\kappa p_{i,h, z_1,\e}(v) d\Theta_{X_{z_1}, z}(X_{h,z_1, \e}).\end{equation}
If $\|v\|\leq 1$, then
$|p_{i,h, z_1, \e}(v) |\leq C |v_1|^{1/2}.$ Since
$X_{h,z_1, \e}$ has degree $1$, then $p_{i,h, z_1,\e}(v) d\Theta_{X_{z_1}, z}(X_{h,z_1, \e})$ is a differential operator of
local $\e$-degree $1/2$ in the set $\|v\|\leq 1$ with respect to the vector fields $X_{i,z_1,\e}$.
On the other hand, if $\|v\|\geq 1$, we have that
$X_{h,z_1}$ is a differential operator of $\e$-degree $deg(h)$ and
$|p_{i,h, z_1, \e}(v)| \leq C |v_1|^{1/2} \|v\|^{deg(h)}, $
so that $p_{i,h, z_1,\e}(v) d\Theta_{X_{z_1}, z}(X_{h,z_1, \e})
$ is a differential operator of
$\e$-degree $1/2$ for $\|v\| \geq 1$ with respect to the vector fields $X_{i,z_1,\e}$.
\end{proof}
As a direct consequence of the previous corollary, we can prove that the
difference $H_\e= L_\e - L_{z_1,\e}$ is a differential operator of $\e$-degree strictly less than $2$
in a neighborhood of the point $z$.
\begin{lemma}
\label{7-2}Under the assumptions of Corollary \ref{7-1},
${L_{\e}}-{L_{z_1,\e}}$ is an operator of $\e$-degree $3/2$ with respect to the vector fields
$X_{z_1, \e}$.
Precisely there exist polynomials $p^{(1)}_{h,z_1,\varepsilon}$, $p^{(2)}_{i,h,z_1,\e}$
and a constant $C$ independent of $\e$ satisfying
$$|p^{(1)}_{h,z_1,\e}(v)|\leq C |v_1|^{1/2} \;\; \text{for } \|v\|\leq 1, \quad
|p^{(1)}_{h,z_1,\e}(v)|\leq C |v_1|^{1/2}\|v\|^{deg(h)} \;\; \text{for } \|v\|\geq 1$$
and
$$|p^{(2)}_{i,h,z_1,\e}(v)|\leq C |v_1|^{1/2} \;\; \text{for } \|v\|\leq 1, \quad
|p^{(2)}_{i,h,z_1,\e}(v)|\leq C |v_1|^{1/2}\|v\|^{deg(h)+deg(i)} \;\; \text{for } \|v\|\geq 1,$$
such that
\begin{equation}\label{somma}
d\Theta_{X_{z_1}, z}({L_{\e}-L_{z_1,\e}})=
\sum_{deg(h)=2}^\kappa p^{(1)}_{h, z_1, \e}(v) d\Theta_{X_{z_1}, z}(X_{h, z_1, \e}) \; +\end{equation}
$$+\sum_{deg(h)=2}^\kappa \sum_{deg(i)=1}^{deg(h)-1}p^{(2)}_{i,h,z_1,\e}(v)
d\Theta_{X_{z_1}, z}(X_{i, z_1, \e} X_{h, z_1, \e}).$$
\end{lemma}
\begin{proof}
By the definition of the operators we have:
\begin{equation}\label{lele}\begin{split}
& d\Theta_{X_{z_1}, z}({L_{\e}-L_{z_1,\e}}) \\&=
\sum_{i=1}^{n} d\Theta_{X_{z_1}, z}\Big({X_{i, z_1, \e}} {(X_{i, \e} - X_{i, z_1, \e})}\Big) +
\sum_{i=1}^{n} \Big(d\Theta_{X_{z_1}, z} ({X_{i, \e}} - {X_{i, z_1, \e}})\Big)^2+\\&
+ \sum_{i=1}^{n} d\Theta_{X_{z_1}, z}\Big(({X_{i, \e}} - {X_{i, z_1, \e}})
{X_{i, z_1, \e}}\Big) -\sum_{i=1}^{m} b_i d\Theta_{X_{z_1}, z}(X_{i, \e}). \end{split}
\end{equation}
Let us consider the first term on the right hand side.
By Corollary \ref{7-1}
$$ d\Theta_{X_{z_1}, z}({X_{i, z_1, \e}}) d\Theta_{X_{z_1}, z}(X_{i, \e} - X_{i, z_1, \e}) = $$$$=
d\Theta_{X_{z_1}, z}(X_{i, z_1, \e})\Big(\sum_{deg(h)=deg(i)+1}^\kappa p_{i,h,z_1,\e}(v)d\Theta_{X_{z_1}, z}(X_{h,z_1,\e})\Big).
$$
When the derivative falls on the coefficients $p_{i,h,z_1,\e}$,
we obtain a contribution to the first sum $p^{(1)}_{h,z_1,\e}(v) d\Theta_{X_{z_1}, z}(X_{h, z_1, \e})$ in \eqref{somma}.
When the derivative falls on the differential operator, we obtain a contribution
to the second sum on the right hand side of \eqref{somma}.
The other terms of \eqref{lele} can be handled in a similar way.
\end{proof}
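The algebraic identity underlying the decomposition \eqref{lele} is the noncommutative factorization
\begin{equation*}
A^2 - B^2 = B(A-B) + (A-B)^2 + (A-B)B,
\end{equation*}
valid for any pair of (not necessarily commuting) operators $A$ and $B$, applied here with $A = X_{i,\e}$ and $B = X_{i,z_1,\e}$. Indeed, expanding the right hand side, the mixed terms cancel:
\begin{equation*}
(BA - B^2) + (A^2 - AB - BA + B^2) + (AB - B^2) = A^2 - B^2.
\end{equation*}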
In analogy with \eqref{erreuno}
we define the kernel
\begin{equation}\label{formula}
R_{1,\e}((\xn, t), (z, \tau)):=(L_{{z_1,\e}}-L_{\e})\Gamma_{z_1,\e} ((\xn, t), (z,\tau)),
\end{equation}
for $t>\tau$.
As a consequence of Lemma \ref{7-2} and of the homogeneity of
the fundamental solution, we provide an estimate for $ R_{1,\e}$.
\begin{lemma} \label{lemmaR1}
If $M_0\subset\Gi$ is a non-characteristic plane, then
$R_{1,\e}$ is a family of kernels of uniform exponential $\e$-type $1/2$
in the set $\{|x_1|\leq \e^{2\kappa}\}$.
Precisely, for every bounded set there exists a constant $C$ such that, for every
$x= (x_1, \x), \, z=(z_1,\z)\in \Gi$ with $|x_1|, |z_1|\leq \e^{2\kappa}$,
\begin{equation}\label{claim1R1}
|R_{1,\e} ((x, t), (z, \tau))|
\leq C \frac{ \Gamma_{z_1,\e}((\xn, 2t), (z, 2\tau))}{|t-\tau|^{3/4}},
\end{equation}
with $C$ independent of $\e$ and $z_1$.
\end{lemma}
\begin{proof}
By the representation formulas obtained in the previous Lemma \ref{7-2}, used with $v_1=x_1-z_1$, we only have to estimate terms of the type
$|x_1-z_1|^{1/2}X_{i,z_1,\e} X_{h,z_1,\e}\Gamma_{z_1,\e}((\xn, t), (z, \tau))$.
Using \eqref{e:Giovanna}
for any $0<\e<1$ we immediately obtain
$$|R_{1,\e} ((x, t), (z, \tau))|
\leq C \frac{|x_1-z_1|^{1/2} \Gamma_{z_1,\e}((\xn, 2t), (z, 2\tau))}{|t-\tau|}.$$
In order to prove
\eqref{claim1R1} we note that we can assume that
$|x_1-z_1|> \sqrt{t-\tau}$, since otherwise the assertion is true.
In this case we can use the fact that $\rho^{1/4} e^{-\rho^2}\le Ce^{-\rho^2/2}$, for a suitable constant $C$, and the estimate \eqref{gauss-sol} of the fundamental solution to ensure that
\begin{equation}
\frac{|x_1-z_1|^{1/2}}{|t-\tau|^{1/4}} \Gamma_{z_1,\e}((\xn, t), (z, \tau))\leq
C\Gamma_{z_1,\e}((\xn, 2t), (z, 2\tau)).
\end{equation}
From this, the thesis follows at once.
\end{proof}
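The absorption argument in the last step can be made explicit as follows. Assuming a Gaussian upper bound with exponential factor $e^{-\frac{d_\e^2(x,z)}{c(t-\tau)}}$, as in \eqref{gauss-sol}, and observing that $|x_1 - z_1| \le d_\e(x,z)$, setting $\rho = |x_1-z_1|/\sqrt{t-\tau}$ we can write
\begin{equation*}
\frac{|x_1-z_1|^{1/2}}{(t-\tau)^{1/4}}\; e^{-\frac{d_\e^2(x,z)}{c(t-\tau)}}
= \rho^{1/2}\, e^{-\frac{d_\e^2(x,z)}{c(t-\tau)}}
\le C\, e^{-\frac{d_\e^2(x,z)}{2c(t-\tau)}},
\end{equation*}
since $\rho \le d_\e(x,z)/\sqrt{t-\tau}$ and $\sup_{s>0} s^{1/2}e^{-s^2/(2c)} < \infty$. The loss in the Gaussian exponent is then reabsorbed by doubling the times, as in \eqref{claim1R1}.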
\subsection{Convergence of the parametrix method}
The second step of the parametrix method consists in
proving that the
series $\Phi$, defined in \eqref{Fi}, is convergent.
In order to do this, we first need to obtain a uniform estimate
of the distances $d_{z_1, \e}$.
We denote by
$d_{z_1, \e} $ and $d_{z_1, 0} $ respectively the distances defined
as in \eqref{de} and \eqref{2.5bis} in terms of the vector fields
$X_{i, z_1, \e}$ and $X_{i, z_1}$:
\begin{equation}\label{finalmente2}
d_{z_1, 0}(x,z) = \|\Theta_{X_{1, z_1},\cdots, X_{n, z_1}, z}(\xn)\|,
\,\,
d_{z_1, \e}(x,z) = \|\Theta_{X_{1, z_1, \e},\cdots, X_{n, z_1, \e}, z}(\xn)\|_\e.
\end{equation}
Under the usual assumption that $M_0\subset\Gi$ is a non characteristic plane (so that also $M_{z_1}$ is non characteristic for $|z_1|$ sufficiently small),
we have the following lemmata.
\begin{lemma}\label{distanze-fuori}
For every $x=(x_1,\x)$, $z=(z_1,\z)$ in $\Gi$,
\begin{equation}\label{equivd2}
d(x,z) = d_{z_1, 0}(x,z), \quad
d_{\e}(x,z) = d_{z_1, \e}(x,z).
\end{equation}
In addition the distance $\hat d$ defined in Section \ref{geosection} satisfies
\begin{equation}\label{finalmente}\hat d (\x, \z) = d_{0, 0} ((0,\x), (0,\z))
\end{equation}
and
\begin{equation}\label{equivdhat}
\hat d(\hat x,\hat z) = d((0,\hat x),(0, \hat z)).
\end{equation}
\end{lemma}
\begin{proof}
The distance $d(x,z)$
is defined in \eqref{2.5bis} as the norm of the vector $v$ such that
\begin{equation}\label{carciofi_arrosto}
x=\operatorname{exp}(v_1 X_1)\operatorname{exp}(\sum_{i=2}^nv_i X_i)(z).\end{equation}
Since all the vector fields $(X_i)_{i=2,\cdots, n}$ are tangential to the plane $M_{z_1}$,
the integral curve $t\mapsto \operatorname{exp}(t\sum_{i=2}^nv_i X_i)(z)$ is tangent to the same plane. Therefore, along this curve
the vector fields $(X_i)_{i=2,\cdots, n}$ are computed for $x_1=z_1$ and coincide with the vector fields $X_{i, z_1}$. This implies that $d(x,z) = d_{z_1, 0}(x,z)$.
The same argument applies to the second equality in \eqref{equivd2}, to \eqref{finalmente}, and to
\eqref{equivdhat}.
\end{proof}
Since we have a good estimate of the kernel $R_{1, \e}$
only in an $\e$-neighborhood of the plane $M_0$,
we have to modify the classical parametrix method, restricting the
integral to this neighborhood.
To this end we consider a cut-off function depending only on the first variable $x_1$.
Precisely, we consider a piecewise constant function $\rho_\e$, supported
in a neighborhood of the origin of size $2\e^{2\kappa}$, defined as follows:
\begin{equation}\label{chiNC}
\rho_\e(x_1) =1 \ {\rm if}\ |x_1|\le 2\e^{2\kappa}, \quad \rho_\e(x_1) =0 \ {\rm elsewhere}.\end{equation}
For any suitable kernel $K$, we define
\begin{equation} \label{martedi}
E_{R_1, \e}(K)((\xn, t), (z, \tau)):=-\int_{\mathbb{R}^{n}\times [\tau,t]}
R_{1,\e}((x,t), (y, \theta))\,
K ((y, \theta), (z,\tau)) \rho_\e(y_1-z_1)\, dy d\theta
\end{equation}
and, in analogy with \eqref{Fi}, we consider
\begin{equation} \label{giovedi}\Phi_\e((x, t), (z,\tau)):=\sum_{j=0}^{\infty}(E_{R_1, \e})^j(R_{1,\e})((x, t), (z,\tau)).\end{equation}
We will prove that the series
is totally convergent on the set
\[\left\{0<t-\tau\le T, \ |x_1|,|z_1|\le \e^{2\kappa}, \ d_{\e}(x, z)+
|t-\tau|^{\frac{1}{2}}\ge \delta
\right\}\qquad \text{for all}\ T>0, \delta >0\]
and it is a kernel of uniform exponential $\e$-type $1/2$, i.e. it satisfies the estimate
\begin{equation}\label{e:phi}|\Phi_\e((x, t), (z,\tau))|\le c(T)(t-\tau)^{-\frac{3}{4}}
\Gamma_{z_1,\e}((x, ct), (z,c\tau))
\qquad 0<t-\tau\le T,
\end{equation}
with constants independent of $\e$ and of $z_1$.
As we mentioned in Section \ref{paramethod} the convergence of this
series relies on properties of convolutions of kernels.
Hence we will need the following property of the operator $E_{R_{1}, \e}$,
that ensures that the series can be estimated by a power
series, so that it is convergent on the mentioned set:
\begin{lemma}\label{lemmaRj} Let $M_0\subset\Gi$ be a non-characteristic plane.
For $z\in M_0$, $x \in\Gi$,
with $|x_1|\leq \e^{2\kappa}$ and for $j\in\N$ it holds that
\begin{equation}\label{stimaRj2}|(E_{R_{1},\e})^j R_{1,\e}((x, t), (z,\tau))| \leq \frac{C^jb_j}{2}
\frac{\Gamma_{z_1, \e}((x, c_1 t), (z,c_1 \tau))}{(t-\tau)^{3/4- j/4}},
\end{equation}
where $b_j=\Gamma(\frac{1}{4})^{j+1}/\Gamma(\frac{j+1}{4})$ and $\Gamma$ is the Euler
Gamma-function.
\end{lemma}
\begin{proof} We argue as in \cite{JSC} or \cite{blu_a} and we prove by induction that
$$
|(E_{R_{1},\e})^j R_{1,\e}((x, t), (z,\tau))| \leq \frac{C^jb_j}{2}
\frac{\Gamma_{\e}((x, c_2 t), (z, c_2\tau))}{(t-\tau)^{3/4- j/4}}.
$$
Here we only sketch the proof, in order to show that the constant is independent of $\e$. The estimate is true for $j=0$ (see \eqref{claim1R1}).
Let us assume that an analogous estimate holds for $j-1\in \mathbb{N}$. We have
\begin{align*}
&|(E_{R_{1},\e})^{j} R_1((x, t), (z,\tau))|\le \,
\\& \leq\frac{C^{j}b_{j-1}}{2}
\int_{\tau}^{t}(t-\theta)^{-\frac34}(\theta-\tau)^{-\frac34+\frac{j-1}{4}}
\int_{\mathbb{R}^{n}} \Gamma_{\e}((x, c_2 t), (y,c_2 \theta))
\Gamma_{\e}((y, c_2 \theta), (z,c_2 \tau))\, dy d\theta.
\end{align*}
By the reproducing property of the fundamental solution, we have
\[\int_{\mathbb{R}^{n}} \Gamma_{\e}((x, c_2 t), (y,c_2 \theta))
\Gamma_{\e}((y, c_2 \theta), (z, c_2 \tau))\, dy =
\Gamma_{\e}((x, c_2 t), (z,c_2\tau))
\]
and, by the change of variable $r=(t-\tau)^{-1}(\theta-\tau)$,
\begin{equation}\label{c.var.}\begin{split}b_{j-1}&\int_{\tau}^{t}(t-\theta)^{-\frac34}(\theta-\tau)^{-1+\frac{j}{4}}\,d\theta
\\=&b_{j-1}
(t-\tau)^{-\frac{3}{4}+\frac{j}{4}}\int_{0}^{1}(1-r)^{-\frac34}r^{-1+\frac{j}{4}}\,dr\,
=b_{j-1}(t-\tau)^{-3/4+{j}/{4}}\frac{\Gamma(\frac{1}{4})\cdot \Gamma(\frac{j}{4})}{\Gamma(\frac{j+1}{4})}.
\end{split}
\end{equation}
Recalling the definition of $b_{j-1}$, the last expression yields exactly $b_{j}$.
Thus, \eqref{stimaRj2} follows by induction for all $j\in \mathbb{N}$.
\end{proof}
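Let us point out explicitly why the estimate \eqref{stimaRj2} implies the total convergence of the series \eqref{giovedi}: since, by the Stirling formula, the Euler function $\Gamma(\frac{j+1}{4})$ grows faster than any geometric sequence, for $0< t-\tau\le T$ we have
\begin{equation*}
\sum_{j=0}^{\infty} \frac{C^j b_j}{(t-\tau)^{3/4 - j/4}}
= \frac{1}{(t-\tau)^{3/4}} \sum_{j=0}^{\infty}
\frac{\big(C\,\Gamma(\tfrac14)\, (t-\tau)^{1/4}\big)^j\, \Gamma(\tfrac14)}{\Gamma(\tfrac{j+1}{4})} < \infty,
\end{equation*}
with a bound depending only on $T$, uniformly in $\e$ and $z_1$; this gives the estimate \eqref{e:phi}.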
\begin{remark}\label{decay}
From the assertion above it follows that the
convolution of a family of kernels of uniform exponential $\e$-type $1/2$ with
a family of kernels with uniform exponential $\e$-type $\beta$
is a family of kernels with uniform exponential $\e$-type $\beta + 1/2$.
\end{remark}
For every $\e>0$ the operators $L_{\e}$ and $L_{z_1,\e}$ are uniformly elliptic, so that
the convergence of the parametrix method is a well-known fact
(see \cite{Fri}).
In particular
$\Gamma_{z_1,\e}$
provides a good approximation of $\Gamma_{\e} $ in a neighborhood of the plane. More precisely
\begin{equation} \begin{split}\label{RdefineJ}
\Gamma_{\e}((x, t), (z,\tau)) &= \Gamma_{z_1, \e}((x, t), (z,\tau))
\\&+
\int_{\mathbb{R}^{n}\times [\tau,t]} \Gamma_{y_1,\e}((x, t), (y, \theta))\,
\Phi_\e((y, \theta), (z,\tau))\rho_{\e}(y_1-z_1) \, dy d\theta.
\end{split}\end{equation}
In addition, for $i= 1,\cdots, n$,
\begin{align}\nonumber &X_{i,0,\e}^2\Gamma_{\e}((x, t), (z,\tau))= X_{i,0,\e}^2\Gamma_{z_1, \e}((x, t), (z,\tau))
\\ &+ \quad
\lim_{\delta\to 0^+}\int_{\mathbb{R}^{n}\times [\tau,t-\delta]}X_{i,0,\e}^2\Gamma_{y_1,\e}
((x, t),(y,\theta))\Phi_{\e}((y,\theta),(z, \tau))\rho_{\e}(y_1-z_1)\,dy d\theta.
\label{1sett}\end{align}
Using the explicit representation formulas above we can provide the following estimates for $\Gamma_{\e} - \Gamma_{z_1,\e}$
uniform in $\e$:
\begin{proposition}\label{lemmaJe2}
Let $M_0$ be a non characteristic plane
and
$x=(x_1,\x), z=(z_1,\z) \in \Gi$ such that $|x_1|, |z_1|\leq \e^{2\kappa}$. For every $T>0$ there exists a constant $C=C(T)$ such that
for every $\e>0$ and for every $t, \tau$ with
$0<t-\tau\le T$ the following inequality holds
\begin{equation} \label{Rdefine1}
|\Gamma_{\e}((x, t), (z,\tau)) - \Gamma_{z_1,\e}((x, t), (z,\tau))|\le C(t-\tau)^{1/4}\Gamma_{z_1,\e}((x, t), (z,\tau)).
\end{equation}
In addition $\Gamma_{\e}((x, t), (z,\tau)) - \Gamma_{z_1,\e}((x, t), (z,\tau))
$ is a family of kernels of uniform exponential $\e$-type $1/4$
with respect to the vector fields $(X_{i,\e})_{i=2,\cdots, n}.$
\end{proposition}
For the proof we refer to \cite{JSC}, while the uniformity with respect to $\e$ follows by \eqref{uniform heat kernel estimates-e}.
\begin{proof}[Proof of Theorem \ref{lemmaJe1} ]
We first prove a Riemannian version of Theorem \ref{lemmaJe1}. Precisely we show that
for all $(0, \z)$, $(0, \x)$ in $M_0$ and for every $t,\tau$, with $0<t-\tau\le T$ we have
\begin{equation}\label{zanzara}
|\hat \Gamma_{0,\e}((\x,t), (\z, \tau)) -\sqrt{4\pi(t-\tau)}\Gn_{\e}((0, \x, t), (0, \z, \tau))|\le
C \Gn_\e((0, \x, t), (0, \z,\tau))(t-\tau)^{3/4},
\end{equation}
where $C$ is a constant independent of $\e$.
Indeed,
$$\sqrt{4\pi(t-\tau)} \Gamma_{\e} ((0,\x, t), (0,\z, \tau)) -
\hat \Gamma_{0, \e} ((\x, t), (\z, \tau))= $$
$$=\sqrt{4\pi(t-\tau)} \Gamma_{0,\e} ((0,\x, t), (0,\z, \tau)) -
\hat \Gamma_{0, \e} ((\x, t), (\z, \tau)) $$
$$+\sqrt{4\pi(t-\tau)} \Big(\Gamma_{\e} ((0,\x, t), (0,\z, \tau)) - \Gamma_{0,\e} ((0,\x, t), (0,\z, \tau)) \Big). $$
The first difference on the right hand side is zero by Remark \ref{r:splittingNC}, since
$$\sqrt{4\pi(t-\tau)} \Gamma_{\perp, 0,\e} ((0,\x, t), (0,\z, \tau))={1},$$ and hence the estimate \eqref{zanzara} follows by \eqref{Rdefine1}. Again by Proposition \ref{lemmaJe2}
it follows that $\sqrt{4\pi(t-\tau)} \Gamma_{\e} ((0,\x, t), (0,\z, \tau)) -
\hat \Gamma_{0, \e} ((\x, t), (\z, \tau))$ is
a family of kernels of uniform exponential $\e$-type $3/4$
with respect to the vector fields $(X_{i,\e})_{i=2,\cdots, n}.$
Sending $\e$ to $0$ in the assertion \eqref{zanzara}, we obtain that for all $z=(0, \z)$, $x=(0, \x)$ in $M_0$ and for every $t$ and $\tau$, with $0<t-\tau<T$, we have
\begin{equation*}|\hat \Gamma((\x,t), (\z, \tau)) -\sqrt{t-\tau}\Gn((x, t), (z, \tau))|\le
C \Gn((x, t), (z,\tau))(t-\tau)^{3/4},\end{equation*}
and the left hand side is a kernel of exponential type $3/4$
with respect to the vector fields $(X_{i})_{i=2,\cdots, m}.$
Using the Gaussian estimate \eqref{gauss-sol2} of $\Gn$ and $\hat \Gamma$
together with formula \eqref{equivdhat} we obtain
$$|\hat \Gamma((\x,t), (\z, \tau)) -\sqrt{t-\tau}\Gn((x, t), (z, \tau))|\le
C \hat \Gn((\x, t), (\hat z,\tau))(t-\tau)^{1/4} $$
and the left hand side is a kernel of exponential type $1/4$
with respect to the vector fields $(\hat X_{i})_{i=2,\cdots, m}.$
Theorem \ref{lemmaJe1} follows immediately.
\end{proof}
\subsection{The reproducing formula for homogeneous sub-Laplacians on a plane}
Here we establish the analogue of Theorem \ref{teorema1}
for homogeneous vector fields
expressed as in \eqref{struttura campi},
under the assumption that the boundary of $D$
is the plane $\{x_1 =0\}$.
This is done by integrating in time the result of Theorem \ref{lemmaJe1}.
Let us first deduce an integral version of Theorem \ref{lemmaJe1}, based on the reproducing formula of the
heat kernel.
\begin{lemma}\label{gammaMconvolve}
Let $D=\{(x_1,\hat x)\in \mathbb{R}^{n}:\,x_1>0\}$, and assume that its boundary is non characteristic.
There exists $C>0$ such that for any $(0,\hat x), (0,\hat y)\in \partial D$ and for
all $t, \tau$, with $0\leq \tau \le t$ we have
\begin{equation}\label{eq:heatgammaconvolution}
\hat \Gamma((\hat x, t), (\hat y, \tau)) =
\end{equation}
$$= \int_\tau^t
\int_{\mathbb{R}^{n-1}} \Gamma((0,\hat x,t), (0,\hat z, \theta))
\Gamma((0,\hat z,\theta), (0,\hat y, \tau)) d{\hat z} d \theta + \hat R(\hat x, \hat y, t- \tau),
$$
where
\begin{equation}|\hat R(\hat x, \hat y, t)| \leq Ct^{1/4} \hat \Gamma((\hat x, t), (\hat y,0)),
\label{stimahatR3.13}\end{equation}
and $\hat R$ is a kernel of exponential type $5/2$ with respect to the
vector fields $\{\hat X_i\}_{i=2,\cdots, n}$.
\end{lemma}
\begin{proof}
Let us first prove \eqref{eq:heatgammaconvolution}. To this end we note that the thesis is true for $t-\tau\geq 1$. Indeed
$$\hat \Gamma((\hat x, t), (\hat y, \tau)) \leq c
(t-\tau)^{1/4} \hat \Gamma((\hat x, t), (\hat y, \tau))$$
and
by the standard Gaussian estimates \eqref{gauss-sol2} of the fundamental solution
and by the relation \eqref{equivdhat} between the distances $\hat d$ and $d$, we obtain
$$\int_\tau^t
\int_{\mathbb{R}^{n-1}} \Gamma((0,\hat x,t), (0,\hat z, \theta))
\Gamma((0,\hat z,\theta), (0,\hat y, \tau)) d{\hat z} d \theta $$$$\leq c
(t-\tau)^{3/4} \Gamma((0,\hat x, t), (0, \hat y,\tau))\leq c
(t-\tau)^{1/4} \hat \Gamma((\hat x, t), (\hat y, \tau)).$$
If $t-\tau<1$, by Theorem \ref{lemmaJe1}, we have
\begin{equation}
\label{e:inventa}
\Gamma((0,\hat x,t), (0, \hat z, \theta)) =
\frac{\hat \Gamma((\hat x,t), (\hat z, \theta)) }{\sqrt{t-\theta}}
(1 + O((t-\theta)^{1/4})).\end{equation}
Thus,
\begin{align*}&\int_{\tau}^t
\int_{\mathbb{R}^{n-1}} \Gamma((0,\hat x,t), (0,\hat z, \theta))
\Gamma((0,\hat z,\theta), (0,\hat y, \tau)) d{\hat z} d \theta
\\ &=
\int_{\tau}^t\Bigg(
\int_{\mathbb{R}^{n-1}} \hat \Gamma((\x,t), (\z, \theta)) \hat\Gamma((\z,\theta), (\y, \tau)) d{\hat z}
\\ &
\quad\quad\quad\left(\frac{1}{(t-\theta)^{1/2}} + O\left(\frac{1}{(t-\theta)^{1/4}}\right)\right)
\left(\frac{1}{(\theta- \tau)^{1/2}} + O\left(\frac{1}{(\theta - \tau)^{1/4}}\right)\right) \Bigg) d \theta
\\
& =\hat \Gamma((\x,t), (\y, \tau))\int_{\tau}^t\left(\frac{1}{(t-\theta)^{1/2}} +O\left(\frac{1}{(t-\theta)^{1/4}}\right)\right)
\left(\frac{1}{(\theta- \tau)^{1/2}} +O\left(\frac{1}{(\theta - \tau)^{1/4}}\right)\right)d \theta,\end{align*}
by the reproducing formula. Now, with
the change of variable $r=(t-\tau)^{-1}(\theta-\tau)$, we get
$$\int_{\tau}^t\left(\frac{1}{(t-\theta)^{1/2}} +O\left(\frac{1}{(t-\theta)^{1/4}}\right)\right) \left(\frac{1}{(\theta- \tau)^{1/2}} +
O\left(\frac{1}{(\theta - \tau)^{1/4}}\right)\right)d \theta=1 + O\left((t-\tau)^{1/4}\right).$$
Therefore, we get
\begin{equation}
\label{resto_cappuccio}
\int_{\tau}^t
\int_{\mathbb{R}^{n-1}} \Gamma((0,\hat x,t), (0,\hat z, \theta))
\Gamma((0,\hat z,\theta), (0,\hat y, \tau)) d{\hat z} d \theta
\end{equation}
$$
=\hat \Gamma((\x,t), (\y, \tau)) \big(1 + O((t-\tau)^{1/4})\big),
$$
so that $\hat R$ satisfies \eqref{stimahatR3.13}.
A similar argument applied to all derivatives
ensures that $\hat R$ is a kernel of exponential type $5/2$ with respect to the
vector fields $\{\hat X_i\}_{i=2,\cdots, n}$
and concludes the proof.
\end{proof}
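The $O((t-\tau)^{1/4})$ gain in the previous proof comes from a Beta-type computation: for $a, b<1$, the change of variable $r = (t-\tau)^{-1}(\theta - \tau)$ gives
\begin{equation*}
\int_\tau^t (t-\theta)^{-a}(\theta-\tau)^{-b}\, d\theta = B(1-a, 1-b)\, (t-\tau)^{1-a-b},
\end{equation*}
so that the leading term, corresponding to $a=b=1/2$, is of order $(t-\tau)^0$, while every term containing at least one factor $O((t-\theta)^{-1/4})$ or $O((\theta-\tau)^{-1/4})$ is of order at least $(t-\tau)^{1/4}$.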
\subsection{Proof of Theorem \ref{teorema1} for homogeneous vector fields on a plane}
We will provide in Lemma \ref{l:Gammaconvolve} below the proof of Theorem \ref{teorema1} on a plane and for homogeneous vector fields.
This result can be considered the time independent version of Lemma \ref{gammaMconvolve}.
It will be established by integrating in time the thesis of that lemma and
using the well known fact that the fundamental solutions
$\hat \Gamma_{\hat \Delta}$ of the Laplace type operator \eqref{heat_plane_noe}
and $\Gamma_{\Delta}$ of the Laplace operator \eqref{laplacoperator}
satisfy respectively
\begin{equation}
\label{hatG-eps}
\hat \Gamma_{\hat \Delta}(\x,\z)=\int_{0}^{+\infty} \hat \Gamma((\x,t), (\z,0)) dt,
\quad \Gamma_{\Delta}(x, z)=\int_{0}^{+\infty} \Gamma((x, t), (z, 0)) dt. \end{equation}
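The identities \eqref{hatG-eps} can be justified, at least formally, as follows. Since the heat kernel satisfies $\partial_t \Gamma = \Delta_x \Gamma$ for $t>0$, $\Gamma((\cdot,t),(z,0))\to \delta_z$ as $t\to 0^+$, and $\Gamma((x,t),(z,0))\to 0$ as $t\to +\infty$ (which holds when the homogeneous dimension is larger than $2$), integrating in time we get
\begin{equation*}
-\Delta_x \int_0^{+\infty} \Gamma((x,t),(z,0))\, dt
= -\int_0^{+\infty} \partial_t \Gamma((x,t),(z,0))\, dt = \delta_z(x),
\end{equation*}
up to the sign convention chosen for the fundamental solution. The same formal computation applies to $\hat\Gamma$ and $\hat\Gamma_{\hat\Delta}$ on the plane.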
\begin{lemma} \label{l:Gammaconvolve}Let the vector fields $(X_i)$ be represented as in \eqref{struttura campi}, and let
$M_0=\{(0,\hat x): \hat x\in \mathbb{R}^{n-1}\}$ be a non
characteristic plane. For any $(0,\hat x), (0,\hat y)\in M_0$
\begin{equation}\label{GammaGamma}
{\hat \Gamma}_{\hat \Delta}(\x, \y) =\int_{\R^{n-1}} \Gamma_{\Delta}((0,\x), (0,\z))
\Gamma_{\Delta}((0,\z), (0,\y)) d\z + \hat R_{\hat\Delta}(\x, \y),\end{equation}
where
\begin{equation}
\label{resto_cappuccio_laplaciano}
\hat R_{\hat\Delta}(\x, \y)=O(\hat d(\x,\y)^{\frac12}\hat\Gamma_{\hat\Delta}(\x,\y)).
\end{equation}
In particular $
\hat R_{\hat \Delta}(\x, \y)$
is a
kernel of type
$5/2$ in the sense of Definition \ref{kerneltype} with respect to the distance
$\hat d$ defined on the plane.
\end{lemma}
\begin{proof}
Using \eqref{hatG-eps} and
integrating both sides of expression (\ref{eq:heatgammaconvolution}) we obtain:
$$\hat \Gamma_{\hat\Delta}(\x, \y) + \int_{0}^{+\infty} \hat R((\x,t), (\y, 0)) dt$$
$$
=
\int_{0}^{+\infty} \int_{0}^{t}\int_{\mathbb{R}^{n-1}}
\Gamma((0,\x,t -\theta), (0,\z, 0)) \Gamma((0,\z,\theta), (0,\y, 0)) d\z\, d\theta \,dt.$$
Changing the order of integration, we get that the last term is equal to
$$ \int_{\mathbb{R}^{n-1}}
\int_{0}^{+\infty} \left(\int_{\theta}^{+\infty}
\Gamma((0,\x,t -\theta), (0,\z, 0))\,dt\right) \Gamma((0,\z,\theta), (0,\y, 0)) \, d\theta d\z$$
$$=\int_{\mathbb{R}^{n-1}}
\int_{0}^{+\infty}
\Gamma_{\Delta}((0,\x), (0,\z)) \Gamma((0,\z,\theta), (0,\y, 0)) d\theta d\z$$
and integrating with respect to $\theta$ we obtain
$$\int_{\mathbb{R}^{n-1}}\Gamma_{\Delta}((0,\x), (0,\z)) \Gamma_{\Delta}((0,\z), (0,\y)) d\z. $$
The estimate of $\hat R_{\hat\Delta}(\x,\y)$ follows by integrating \eqref{resto_cappuccio} in time
and using the estimate of $\hat \Gamma((\x, ct), (\y,0))$ provided in Lemma \ref{gammaMconvolve}. Therefore, recalling that $\widehat Q=Q-1$ denotes the homogeneous dimension of the plane, we have
\begin{equation*}
\begin{split}
\hat R_{\hat \Delta}(\x,\y)=& \int_{0}^{+\infty} \hat R((\x,t), (\y,0)) dt \le \\
& \le c \int_{0}^{+\infty}\hat \Gamma((\x, {\tilde c}\, t), (\y,0))t^{1/4}dt
\le c\int_{0}^{+\infty} \frac{e^{-\frac{\hat d(\x,\y)^2}{C t}}} {t^{\frac{{\widehat Q}}{2}-\frac14}}\,dt,
\end{split}
\end{equation*}
where the constants may vary from line to line.
Now, with the change of variables $v=\frac{\hat d(\x,\y)^2}{C t}$ we get
$$\hat R_{\hat \Delta}(\x,\y)
\leq c \int_{0}^{+\infty} e^{-v} \frac{v^{\frac{{\hat Q}}{2}-\frac94}}{{\hat d(\x,\y)}^{{{\widehat Q}}-\frac52}}\,dv \le c \hat d(\x,\y)^{-{\widehat Q}+2+\frac12}\le c \hat d(\x,\y)^{\frac12}\hat\Gamma_{\hat\Delta}(\x,\y)\,.
$$
An analogous inequality holds for any derivative
and the result is proved.
\end{proof}
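The time integral estimated in the previous proof is an instance of the elementary identity
\begin{equation*}
\int_0^{+\infty} t^{-\alpha}\, e^{-A/t}\, dt = \Gamma(\alpha - 1)\, A^{1-\alpha},
\qquad \alpha > 1,\ A>0,
\end{equation*}
obtained through the change of variables $v = A/t$. Applied with $\alpha = \frac{\widehat Q}{2} - \frac14$ and $A = \frac{\hat d(\x,\y)^2}{C}$, it gives precisely the power $\hat d(\x,\y)^{-\widehat Q + \frac52}$ used above, provided $\widehat Q > \frac52$.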
\section{Reproducing formula on a smooth hypersurface}
\subsection{Reduction of a general hypersurface to a plane with a subriemannian structure} \label{nuova}
Let us denote by $D$ a smooth, open bounded set in $\Gi$ and let
$0\in \partial D$ be a non characteristic point. In this section we show that
we can always reduce the boundary $\partial D$ to the plane $\{(x_1,\hat x): \ x_1=0\}$, via a change of variables.
Indeed, there exists a neighborhood $V_0$ of $0$ such that
the subriemannian normal $\nu$ satisfies $$\nu(s) \ne 0 \ \text{for every } s\in \partial D\cap V_0.$$
We can also choose an invariant basis
$(Z_{i})_{i=1, \cdots, n}$ of the tangent space of $\Gi$ around the point $0$, such that
$Z_{i}$ coincides with the standard element $\partial_i$ of the tangent basis at the point $0$, for every $i=1,\cdots, n$. In addition, we assume that
We also assume that the problem is expressed in canonical coordinates of second type
around the point $0$ associated to these vector fields.
In these coordinates the vector fields admit the representation
$$Z_1 = \partial_1, \quad Z_i = \partial_i + \sum_{deg(j) > deg(i)} a_{i,j}(s)\partial_j, \text{ for } i=2, \cdots, n,$$
while the boundary of $D$ can be identified in a neighborhood $V\subset\subset V_0$ with the graph of a regular
function $w$, defined on a neighborhood $\hat V=V\cap \R^{n-1}$ of $0$:
$$\partial D \cap V= \{(w(\hat s), \hat s): \hat s\in \hat V \}.$$
By the choice of coordinates we have in particular that
\begin{equation}\label{zetaiszero}
Z_{i}w(0) =0.
\end{equation}
On the set $V$ the function
$\Xi(s_1, \hat s) = (s_1 - w(\hat s), \hat s) $
is a diffeomorphism. It sends $\partial D\cap V$ to a subset of the plane $\{x_1 =0\}$:
$$\Xi(\partial D \cap V) =\{(x_1, \hat x): x_1 =0\}.$$
Through this change of variables
the vector fields $Z_i $ can be represented as
\begin{equation}\label{campinonomo}
\begin{array}{ll}X_1 & = d\Xi(Z_1) = \partial_{x_1}, \\
X_i &= d\Xi(Z_i) = \partial_{x_i} + \sum_{deg(j) > deg(i)} a_{i,j}(x_1 + w(\hat x), \hat x)\partial_{x_j} + Z_i w (\x)\partial_{x_1},
\end{array}
\end{equation}
for $i=2,\cdots, n$, where the polynomials $a_{ij}$ are the same as the ones defined in \eqref{struttura campi}.
A neighborhood of $0$ in the boundary of $D$ becomes, in the new coordinates, an open subset $M_0= \Xi(\partial D \cap V)$ of the plane $\{x_1 =0\}$. We can restrict the vector fields
$(X_i)_{i=2,\cdots,m}$ to the tangent space of $M_0$,
and we call the restrictions $\hat X_{i}$:
\begin{equation} \label{hatticampi}\hat X_{i} = \partial_{i} + \sum_{deg(j) = deg(i) + 1}^\kappa
a_{i,j}(w(\hat x), \hat x)\partial_{j}, \quad i=2,\cdots,n.\end{equation}
The vector fields $(\hat X_{i})_{i=2, \cdots, m}$ still satisfy the assumption
\eqref{assumption}, which ensures that they satisfy the
H\"ormander finite rank condition \cite{hormander}.
They do not define a general H\"ormander structure (see \cite{Montgomery}),
since they have been obtained from the generators of a Carnot group via a change
of variables. It is important to note that the vector fields $X_i$, as well as the vector fields $\hat X_i$, are not
homogeneous with respect to the new variables $x_i$. However, we will see in Lemma \ref{teoRS} and Lemma \ref{teo2RS} that at every point they admit approximating vector fields, respectively
$Z_i$ and $\hat Z_i$, which are homogeneous in the new variables.
Hence the local homogeneous dimension of $\R^{n}$ endowed with the choice of the vector fields $X_i$ is $Q$.
Since the H\"ormander condition is satisfied, a Carnot Carath\'eodory distance $d$ is defined
in terms of the vector fields $(X_i)_{i=1}^m$.
Thanks to assumption \eqref{assumption}, the vector fields $\{\hat X_{i}\}_{i=2, \cdots, m}$,
defined in \eqref{hatticampi}, generate on the plane $M_0$
a subriemannian structure with local homogeneous dimension $\hat Q= Q-1$
and induce a distance $\hat d$ on $M_0$ defined through the exponential map
as in \eqref{2.5bis},
which satisfies \eqref{finalmente} and
\eqref{equivdhat}.
The Laplace type operator, analogous to \eqref{laplacoperator}
and expressed in terms of the vector fields $X_i$ is denoted by
\begin{equation}\label{ohscusa}\Delta = \sum_{i=1}^m X_i^2 + \sum_{i=1}^m b_i X_i\end{equation}
and it has a fundamental solution $\Gamma_\Delta$,
of class $C^\infty$ out of the pole (see for example \cite{RS}). The operator
\eqref{heat_plane_noe}, expressed in terms of the vector fields
$\hat X_i$ becomes
\begin{equation}\label{scusa}\hat \Delta = \sum_{i=2}^m {\hat X}_i^2,
\end{equation}
with fundamental solution
$\hat \Gamma_{\hat \Delta}$. In analogy with the definition of type of a kernel with respect to the
vector fields $(X_i)$, given in \eqref{e:sileva},
we give here the definition of kernel of local type
$\lambda$ with respect to the vector fields $\hat X_2,\cdots,\hat X_n$:
\begin{definition}\label{defikernel}
$k$ is a kernel of local type
$\lambda$ with respect to the vector fields $\hat X_2,\cdots,\hat X_n$
and the distance $\hat d$ if it is a smooth function out of the
diagonal and, in any open set
$V$, the following holds:
for every $p$ there exists a positive constant
$C_p$
such that, for every
$\hat x,\hat y
\in \partial D\cap V$, $\hat x \not=\hat y$,
$$|\hat X_{i_1}\cdots\hat X_{i_p} k(\hat x,\hat y)|\leq C_p\,\hat d(\hat x,\hat y)^{\lambda - p-2}
\frac{\hat d(\hat x,\hat y)^{2}}{|\hat B(\hat x, \hat d(\hat x, \hat y))|}.$$
\end{definition}
Clearly, if the space is homogeneous, the previous definition coincides with Definition \ref{kerneltype}.
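Indeed, if for instance $|\hat B(\hat x, r)|\simeq c\, r^{\hat Q}$ uniformly (as in the homogeneous case), the estimate in Definition \ref{defikernel} formally reduces to

```latex
|\hat X_{i_1}\cdots \hat X_{i_p} k(\hat x,\hat y)|
 \;\leq\; C_p\,\hat d(\hat x,\hat y)^{\lambda-p-2}\,
 \frac{\hat d(\hat x,\hat y)^{2}}{c\,\hat d(\hat x,\hat y)^{\hat Q}}
 \;=\; \frac{C_p}{c}\,\hat d(\hat x,\hat y)^{\lambda-p-\hat Q},
```

which is the usual growth condition for a kernel of type $\lambda$ on a homogeneous space.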
\subsection{A freezing procedure}
Here we will show that, when we are studying pointwise properties around
a fixed point $x_0$, we can always reduce our vector fields to homogeneous ones.
The proof is made approximating the vector fields with nilpotent ones, adapting to this
context the Rothschild and Stein parametrix method.
In the classical case the vector fields are lifted to vector fields free up to step $\kappa$
and then they are reduced to the generators of a free algebra with a freezing method.
Here we cannot lift the vector fields to free ones, since otherwise we would lose assumption \eqref{assumption}.
However, we can
use the explicit expression of the vector fields \eqref{campinonomo} to obtain an ad hoc version of the Rothschild and Stein method.
Let $D$ be a smooth, open bounded set in $\R^n$.
As shown in the previous section we can
assume, up to a change of variable, that
$0\in \partial D$ and that there exists $V$ such that
$$\partial D \cap V =\{(x_1, \hat x): x_1 =0\}.$$
Moreover, there exists a regular function $w$ such that
the vector fields $(X_i)$ can be represented as
\eqref{campinonomo}. We prove the following result analogous to
\cite{NSW} in our simplified setting:
\begin{proposition} \label{proteta}
There exist open neighborhoods
$U$ of $\;0$ in $\R^n$ and $V,\, W$ of $\;0\in \partial D \subset \R^n$, with $W \subset V$ and, for every $z$ fixed in $W$,
a change of coordinates $\Xi_z$ such that
\begin{itemize}
\item {the function $x\rightarrow \Xi_z(x)$
is a diffeomorphism from $U$ onto its image;}
\item {in the new coordinates the vector fields admit the following representation:
$$ d\Xi_z (X_{1}) = \partial_{y_1},$$$$ d \Xi_z (X_{i})=\partial_{y_i} + \sum_{deg(j)>deg(i)} a_{i, j} (y_1 + w_z(\y), \y)\partial_{y_j} +
X_i w_z \partial_{y_1},\quad i=2,\cdots,n. $$
}
\end{itemize}
\end{proposition}
\begin{proof}
Let us set $M= \{(w(\x), \x):
(0,\x) \in V\cap \partial D \},$ where $w$ is the function introduced above, used to define the vector fields in \eqref{campinonomo}.
For every $z\in M$ we will denote by $T_z$ the group homomorphism which sends $0$ to $z$
and whose differential
sends ${X_1}_{|0}$ to the normal $\nu(z)$ to $M$ at $z$ and
${X_2}_{|0}, \cdots, {X_n}_{|0}$ to a basis
of the tangent space to $M$ at $z$.
If we fix $z$, the implicit function theorem (see \cite{FSSC1}, \cite{cittiman06}) ensures that there exists
a neighborhood $U = I \times \hat U$ of $0$ and
a function $w_z: \hat U \rightarrow \mathbb{R}$ such that $w_z(0)=0$ and
$$\{(w_z (\y), \y): \y\in \hat U\}= T_z^{-1}(M)\cap U,$$
so that $\{T_z(w_z (\y), \y): \y\in \hat U\}\subset M\cap V$. We can always assume that $\nabla w_z(0)=0$.
Due to the regularity of the boundary we can find an open set $W\subset V$
such that for every $z\in W\cap M$ the function $w_z$ is defined on the same set $\hat U$
with values in the same set $I$. Hence we can define the map
$$E_z : U \rightarrow V, \quad E_z(y_1, \y):= T_z(y_1 + w_z (\y), \y). $$
$ E_z$ is invertible on its image and sends the plane $\{y_1=0\} \cap U$ into a suitable subset
of $M$. The composition
$E_0^{-1} E_z$ sends the plane $\{y_1=0\}$ into the
plane $\{x_1 = 0\}$, the boundary of $D$.
For every $z\in W\cap M$ its inverse function
$\Xi_z(x)$
is a diffeomorphism onto its image, and $ \Xi_z(W) \subset U\subset \Xi_z(V)$.
The vector fields $X_i$ can be represented as follows in the new coordinates (see also \cite{ASV}, \cite{CM}):
\begin{equation*}
\begin{split}
d \Xi_z (X_{1}) & = \partial_{y_1}\\
d \Xi_z (X_{i}) & =\partial_{y_i} + \sum_{deg(j)>deg(i)} a_{i, j} (y_1 + w_z(\y), \y)\partial_{y_j} + X_i w_z(\y) \partial_{y_1}, \quad i=2,\cdots ,n.
\end{split}
\end{equation*}
\end{proof}
We can now prove the following result, analogous to Theorem 5 in \cite{RS}:
\begin{lemma} \label{teoRS}
With the same notation as in Proposition \ref{proteta},
let us call
$$Z_{i}= \partial_{y_i}+ \sum_{deg(j)>deg(i)} a_{ij}(y) \partial_{y_j},$$
for $i=1, \cdots, n$. Then we have
$$d \Xi_z(X_{i }) - Z_{i} = R_{i, z, \Xi}$$
where $R_{i, z, \Xi}$ are vector fields of local degree $\leq deg(i)-1$ depending smoothly on $z$. Precisely
$R_{i, z, \Xi}= \sum_j r_{ij, z}\partial_j$
where $r_{ij, z}=O(d(x,y)^{deg(j)-deg(i)+1})$.
\end{lemma}
\begin{proof}
It is a direct computation. Indeed the assertion is true for $i=1$. For every $i>1$ the difference
$d \Xi_z(X_{i }) - Z_{i}$
can be expressed as
$$d \Xi_z(X_{i }) - Z_{i} = \sum_{deg(j)>deg(i)} \Big(a_{ij}(y_1 + w_z(\hat y), \hat y) - a_{ij}(y_1, \hat y)\Big)\partial_{y_j} + X_i w_z(\hat y)\partial_{y_1}.$$
We first note that, since $w_z(0)=0$ and we may also assume that $X_i w_z(0)=0$, then
$X_i w_z(\hat y)\partial_{y_1}$ is an operator of degree 0.
Moreover, since the $a_{ij}$ are homogeneous polynomials,
their difference can be represented through a homogeneous polynomial. Precisely,
there exists a suitable polynomial $a^1_{ij}$, homogeneous of degree $deg(j)-deg(i)-1$, such that
$$a_{ij}(y_1 + w_z(\hat y), \hat y) - a_{ij}(y_1, \hat y)= w_z(\hat y)
a^1_{ij}(y_1,y_1 + w_z(\hat y), \hat y) =$$$$= O(\|\hat y\|^2)
a^1_{ij}(y_1,y_1 + w_z(\hat y), \hat y),$$
since $w_z$ and its gradient vanish at $\hat y=0$.
\end{proof}
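As an illustration of the last step, consider again a Heisenberg-type coefficient $a_{2,3}(s)=s_1$ (a model choice, not the general case): then

```latex
a_{2,3}(y_1+w_z(\hat y),\hat y)-a_{2,3}(y_1,\hat y)
 \;=\;\big(y_1+w_z(\hat y)\big)-y_1
 \;=\; w_z(\hat y)\;=\;O(\|\hat y\|^{2}),
```

so that here $a^1_{2,3}\equiv 1$, a homogeneous polynomial of degree $0=deg(3)-deg(2)-1$.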
A similar relation holds between the vector fields restricted to the boundary:
\begin{lemma} \label{teo2RS} Using the same notation as in Proposition \ref{proteta} and
setting $$\hat Z_{i}= \partial_{y_i}+ \sum_{deg(j)>deg(i)} a_{ij}(0, \hat y) \partial_{y_j},$$ we get:
$$d {\hat \Xi}_z(\hat X_{i }) = {\hat Z}_{i} + {\hat R}_{i, z, \Xi}$$
where ${\hat R}_{i, z, \Xi}$ are vector fields of local degree $\leq deg(i)-1$ depending smoothly on $z\in W$.
\end{lemma}
\begin{proof} We omit the proof, which is exactly the same as that of the
previous lemma.
\end{proof}
\subsection{Properties of the fundamental solution and its approximating ones}
The vector fields
$(X_i)_{i=1, \cdots, n }$ in \eqref{campinonomo}, as well as their restrictions to
the boundary $(\hat X_i)_{i=2, \cdots, n}$, are in general non homogeneous in the variables $x$,
but we have proved in the previous section that for every $z$ their images through $\Xi_z$ admit
homogeneous approximating vector fields. Then, setting
$X_{i, z } = d {\Xi_z}^{-1}({Z}_{i})$
for $i=1, \cdots, n,$ and applying the change of variable $\Xi$
to the result of Lemma \ref{teoRS}, we deduce that for every
$i=1,\cdots, n$
there exists an operator $R_{i, z }$ such that $deg(R_{i, z })\le deg(i)-1$ and
\begin{equation}\label{approssimacampo} X_{i } = X_{i, z } + {R}_{i, z}. \end{equation}
Calling $ \hat X_{i, z}= d {\hat \Xi_z}^{-1} (\hat Z_{i})
$ for $ i=2,\cdots, n,$ we obtain from Lemma \ref{teo2RS} that for every
$i=2,\cdots, n$ there exists a vector field
$ \hat R_{i, z}$ such that $ deg(\hat R_{i, z})\leq deg(i)-1$ and
$$ \hat X_{i } = \hat X_{i, z}+ {\hat R}_{i, z}.$$
The associated sub-Laplacian type operators
are defined as
\begin{equation}\label{ponteggio}
\Delta_z = \sum_{i=1}^m X_{i, z}^2,\quad \hat \Delta_z = \sum_{i=2}^m {\hat X}_{i, z}^2,\quad
\Delta_Z = \sum_{i=1}^m Z_{i}^2,\quad \hat \Delta_Z = \sum_{i=2}^m {\hat Z}_{i}^2,
\end{equation}
with fundamental solutions
$\Gamma_{z, \Delta}$, $\hat \Gamma_{z, \hat \Delta}$,
$\Gamma_{\Delta_Z}$ and $\hat \Gamma_{\hat \Delta_Z}$ respectively.
Note that $\Gamma_{\Delta_Z}$ and $\hat \Gamma_{\hat \Delta_Z}$ do not depend on the fixed point $z$.
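This independence reflects the left invariance of $\Delta_Z$. Let us also recall (a classical fact for homogeneous sub-Laplacians on Carnot groups, assuming as usual $Q>2$) that $\Gamma_{\Delta_Z}$ is homogeneous with respect to the group dilations $\delta_\lambda$:

```latex
\Gamma_{\Delta_Z}(\delta_\lambda x,\,\delta_\lambda y)
 \;=\;\lambda^{2-Q}\,\Gamma_{\Delta_Z}(x,y),
 \qquad \lambda>0,
```

consistently with the type-$2$ behavior $\Gamma_{\Delta_Z}(x,y)\approx d(x,y)^{2}/|B(x,d(x,y))|$.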
We can now apply the parametrix method of
\cite{JSC}, recalled in \eqref{piantina} and \eqref{e: informal serie} to
estimate the fundamental solutions
$\Gamma_\Delta$ and $\hat \Gamma_{\hat \Delta}$, associated to the operators \eqref{ohscusa} and \eqref{scusa} respectively.
The argument is similar to the one applied
in Section \ref{s:LaplaceBeltrami} but in this case
the proof is standard, since we do not have to take
care of the different homogeneous dimensions of the Riemannian and
subriemannian structures. Hence we state without proof the following lemma:
\begin{lemma} \label{lemmagamma}
Let us consider the operators defined in \eqref{ponteggio}. Then
$$H=\Delta - \Delta_z, \qquad \hat H=\hat\Delta - \hat\Delta_z $$
are differential operators of degree 1. As a consequence
\begin{equation} \nonumber
\Gamma_\Delta(x,z) - \Gamma_{z, \Delta}(x, z)=
\Gamma_\Delta(x,z) - \Gamma_{\Delta_Z}(\Xi_z(x), 0)
\end{equation}
are kernels of type $3$, with respect to the vector fields
$X_i$ and the distance $d$. Analogously
\begin{equation} \nonumber
\hat \Gamma_{\hat \Delta}(\hat x, \hat z) - \hat \Gamma_{z, \hat \Delta}(\x, \hat z)
=\hat \Gamma_{\hat \Delta}(\x, \hat z) - \hat \Gamma_{\hat \Delta_Z}(\hat \Xi_{\hat z}(\hat x), 0)
\end{equation}
are kernels of type $3$ with respect to the
vector fields
$\hat X_i$ and the distance $\hat d$. \end{lemma}
We will also denote $(X_i)^*$ the formal adjoint of $X_i$.
\begin{remark}
Let us note that for every $i=1,\cdots, n$, the vector field $X_i$ is no longer skew-adjoint, but its
formal adjoint differs from $-X_i$ by an operator of order 0. Indeed there exists a smooth function
$\phi_i$ such that
\begin{equation}\label{aggiuntoXX}(X_{i})^*= -X_{i} + \phi_i, \quad i=1,\cdots,n.\end{equation}
In fact, a direct computation gives
$$(X_i)^* = - X_i - \sum_{deg(j) > deg(i)} \sum_k \partial_{x_1} a_{i,j}(x_1 + w(\hat x), \hat x)\partial_{x_k} w.
$$
\end{remark}
In the sequel we will denote by $X_i^z$ the derivative with respect to $z$ and by $X_i^x$ the one
with respect to $x$ of a kernel $K(x,z)$.
From Proposition 5.10 in \cite{CC} (see also \cite{RS}, page 295, line 3 from below)
we have
\begin{prop}\label{uniformderivatives}
Assume that $f\in C^{\infty}_0(\R^{n-1})$, and for $x\in \R^n$ define
$$F(f)(x):= \int_{\R^{n-1}} \Gamma_\Delta (x, (0, \hat y)) f(\y) d\y.$$
For every $i, h=1, \cdots, m$
there exist kernels
$ \Gamma_{i,h}(x, y)$ and $S_{i}(x, y)$,
of type $2$ with respect to the distance $d$, such that
$$X_i F(f)(x) = $$
$$=-\int_{\R^{n-1}}
\sum_{h=1}^m (X^{y}_{h})^* \Gamma_{i,h }(x, (0, \hat y)) f(\y) d\y -
\int_{\R^{n-1}} S_{i}(x,(0, \hat y)) f(\y) d\y.$$
\end{prop}
\begin{lemma} \label{gammanabla}Let $f\in C_0^\infty(\R^n)$.
Let us call $$G(f)(x) := \int_{\R^n} \Gamma_\Delta(x,y)f(y)dy.$$
Then there exists a kernel $S$ of type 1 such that the
operator $G_1(f):= G(\nabla f)$ can be represented as
$$G(\nabla f) = E_S(f), $$
where $E_S$ is the operator with kernel $S$.
\end{lemma}
\begin{proof}
We have
$$G(X_i f) = \int \Gamma_\Delta(x,z) X_i^z f(z) dz = \int (X_i^z)^*\Gamma_\Delta(x,z) f(z) dz.$$
Hence we only have to prove that the kernel
$$S:=(X_i^z)^*\Gamma_\Delta(x,z)$$
is a kernel of type 1 with respect to the distance $d$. By \eqref{aggiuntoXX} there exist regular functions $\phi_i$ such that
$$(X_i^z)^* = - X_i^z + \phi_i.$$
On the other side, by \eqref{approssimacampo}
for every $i=1,\cdots, n$
there exists an operator $R_{i, z }$ such that $deg(R_{i, z })\le deg(i)-1$ and
$$ X^z_{i } = X^z_{i, z } + R^z_{i, z}. $$
Finally in \cite{RS}, page 295, line 3 from below, it is proved that
$$X^z_{i, z } \Gamma_{z, \Delta}$$
is a kernel of type 1. Now we use the fact that
$K=\Gamma_{\Delta}- \Gamma_{z, \Delta}$ is a kernel of type 3,
to conclude that
$$S= (X^z_i)^* \Gamma_\Delta = (- X^z_{i, z } - R^z_{i, z} + \phi_i)(\Gamma_{z, \Delta} + K)$$
is a kernel of local type 1 with respect to the distance $d_z$ associated with the
vector fields $X_{i,z}$. On the other hand, as in Lemma
\ref{distanze-fuori}, the distances $d$ and $d_z$ are equivalent, so that
the conclusion follows.
\end{proof}
\subsection{The reproducing formula for non homogeneous vector fields}
In this section we prove Theorem \ref{teorema1}.
The proof is obtained, via the results of the previous
section,
by reducing to the analogous result for homogeneous vector fields, already established in Lemma \ref{l:Gammaconvolve}.
\begin{proof}[\bf{Proof of Theorem \ref{teorema1}}]
By Lemma \ref{lemmagamma},
\begin{equation}\label{lenotazioni}{\hat \Gamma}_{\hat \Delta}(\x, \z) -
{\hat \Gamma}_{\hat \Delta_Z}(\hat \Xi_{\hat z}(\hat x), 0) \end{equation}
is a kernel of type $3$ with respect to the vector fields $\hat X_i$.
For the vector fields $(Z_i)_{i=1, \cdots, n}$ and the fundamental solution associated to the
corresponding sub-Laplacian type operator, we can apply Lemma \ref{l:Gammaconvolve}, so that
$$ {\hat \Gamma}_{\hat \Delta_Z}(\hat \Xi_{\hat z}(\hat x), 0) -
\int_{\R^{n-1}} \Gamma_{\Delta_Z}((0,\x), (0,\y))
\Gamma_{\Delta_Z}((0,\y), (0,\z)) d\y $$
is a kernel of type $5/2$ with respect to the vector fields $\hat X_{i,z}$.
Using Lemma \ref{teo2RS} we deduce that a kernel has the same type
with respect to the vector fields $\hat X_i$ and $\hat X_{i,z}$.
Combining this with \eqref{lenotazioni} we get that
\begin{equation}\label{labelaggiunta}{\hat \Gamma}_{\hat \Delta}(\x, \z) - \int_{\R^{n-1}} \Gamma_{\Delta_Z}((0,\x), (0,\y))
\Gamma_{\Delta_Z}((0,\y), (0,\z)) d\y \end{equation}
is a kernel of type $5/2$.
Applying again Lemma \ref{lemmagamma} we deduce that
the following difference,
$$\int_{\R^{n-1}} \Gamma_{\Delta_Z}((0,\x), (0,\y))
\Gamma_{\Delta_Z}((0,\y), (0,\z)) d\y -
\int_{\R^{n-1}} \Gamma_{\Delta}((0,\x), (0,\y))
\Gamma_{\Delta}((0,\y), (0,\z)) d\y =$$
$$\int_{\R^{n-1}} \Gamma_{\Delta_Z}((0,\x), (0,\y))
\Big(\Gamma_{\Delta_Z}((0,\y), (0,\z)) -
\Gamma_{\Delta}((0,\y), (0,\z)) \Big) d\y +$$
$$+
\int_{\R^{n-1}} \Big(\Gamma_{\Delta_Z}((0,\x), (0,\y))
- \Gamma_{\Delta}((0,\x), (0,\y))\Big)
\Gamma_{\Delta}((0,\y), (0,\z)) d\y,$$
is a kernel of type $3$.
As a consequence, we deduce from here and \eqref{labelaggiunta} that
$${\hat \Gamma}_{\hat \Delta}(\x, \z)-
\int_{\R^{n-1}} \Gamma_{\Delta}((0,\x), (0,\y))
\Gamma_{\Delta}((0,\y), (0,\z)) d\y $$
is a kernel of type $5/2$. The proof is complete.
\end{proof}
\section{Poisson kernel and Schauder estimates at the boundary}
In this section we will show the existence of a Poisson kernel for
the Dirichlet problem, stated in Theorem
\ref{mainRn}. From this we deduce the Schauder estimates at the boundary stated in
Theorem \ref{c:schauderGroups}.
Consider a bounded smooth set $D$ and a sub-Laplacian type operator $\Delta$
defined in $D$,
as in \eqref{laplacoperator}, in terms of the homogeneous
vector fields defined in \eqref{struttura campi}.
The corresponding Dirichlet problem is expressed as
\begin{equation}
\label{festa}
\Delta u=f \ \text{in }D, \quad u=g \ \text{on }\partial D,
\end{equation}
for a suitable boundary datum $g$ and a smooth function $f$ defined on $D$.
As mentioned in Section \ref{nuova},
we can locally perform a change of variable, and reduce the domain of the Dirichlet problem
to the half space. Hence there is an open set $V\subset \R^n$ such that $D\cap V= \{x=(x_1,\x)\in V: x_1>0\}$ and $\{x_1 =0\}$ is a non characteristic plane. Under this
change of variable, the vector fields
$\{X_i\}_{i=1,\cdots,m}$ will take the non homogeneous expression of
\eqref{campinonomo}.
Their restriction to the boundary, denoted $(\hat X_i)$ and defined in \eqref{hatticampi}, induces on the set $\partial D$ a distance $\hat d$ defined in \eqref{finalmente}.
The corresponding spaces of H\"older continuous functions will be denoted $\hat C^{k, \alpha}$.
We look for a Poisson operator
in a neighborhood $V$ of a point $x_0 \in \partial D$.
We say that $P:C^{\infty}( V\cap\partial D)\rightarrow C^{\infty}(V\cap \overline{D})$ is a local Poisson operator
for the problem \eqref{festa} if, for every $g\in C^{\infty}( V\cap\partial D)$, the function $u:=P(g)$
satisfies
$\Delta u=0$ in $D\cap V$ and $u(x)=g(x)$ for all $x\in \partial D\cap V$.
We will construct a parametrix for the Poisson kernel of the Dirichlet problem,
adapting to the present setting a method introduced by Greiner and Stein \cite{GreinerStein}
and Jerison \cite{Jerison}. They used an approximating kernel,
defined via pseudodifferential instruments,
while we use here the kernel found in Theorem
\ref{teorema1}. We will denote it as follows:
$$\hat\Gamma_{\Delta^2}(\x, \y):= \int_{\R^{n-1}} \Gamma_{\Delta}((0,\x), (0,\z))
\Gamma_{\Delta}((0,\z), (0,\y)) d\z.$$
In analogy with \eqref{erreuno} we call
$$R_1(\x, \y) := \hat \Delta \Big({\hat \Gamma}_{\hat \Delta}(\x, \y) - \hat\Gamma_{\Delta^2}(\x, \y)\Big).$$
In the present case $R_1$ is a kernel of type $1/2$ with respect to the distance $\hat d$.
As in \eqref{Fi} we now call
$$\Phi(\hat x, \hat y):= \sum_{j=0}^\infty (E_{R_1})^j(R_1)(\hat x, \hat y). $$
Using a standard singular integral argument,
we deduce that the series converges uniformly on any bounded open set $V_0$ and that its sum is a kernel of type $1/2$.
As a consequence
\begin{equation}\label{typekernelfi}
\begin{split}
&\int_{\mathbb{R}^{n-1}\cap V_0} \Gamma_\Delta((0, \hat x),(0, \hat z))\Phi(\hat z, \hat y) d\hat z \;\; \text {is of type 3/2 with respect to the distance }\hat d \\
& E_{\hat\Gamma_{\Delta^2}}(\Phi(\x, \y)) \;\; \text {is of type 5/2 with respect to the same distance},
\end{split}\end{equation}
where $E_{\hat\Gamma_{\Delta^2}}$ denotes the operator with kernel $\hat\Gamma_{\Delta^2}$.
In addition the fundamental solution of the operator $\hat \Delta$ can be represented as
\begin{equation}\label{semprepara}
\hat \Gamma_{\hat \Delta}(\x, \y) = \hat\Gamma_{\Delta^2}(\x, \y) + E_{\hat\Gamma_{\Delta^2}}\Phi(\x, \y),
\end{equation}
for $\x, \y\in V_0\cap \R^{n-1}.$
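Representation \eqref{semprepara} can be checked formally: by the definition of $R_1$ one has $\hat\Delta\,\hat\Gamma_{\Delta^2}=\delta-R_1$ in the sense of distributions, while the series defining $\Phi$ gives $\Phi = R_1 + E_{R_1}(\Phi)$; hence

```latex
\hat\Delta\Big(\hat\Gamma_{\Delta^2}+E_{\hat\Gamma_{\Delta^2}}(\Phi)\Big)
 \;=\;(\delta-R_1)+\big(\Phi-E_{R_1}(\Phi)\big)
 \;=\;\delta-R_1+R_1\;=\;\delta .
```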
Let us now prove Theorem \ref{mainRn} with
$$R(g)(\hat y) :=\int_{\mathbb{R}^{n-1}\cap V_0} \int_{\mathbb{R}^{n-1}\cap V_0} \Gamma_\Delta((0, \hat y), (0,\hat s)) \Phi((0, \hat s), (0,\hat z)) \hat \Delta g(\hat z) d\hat s d\hat z $$
and
\begin{equation}\label{cappa}
K: {\hat C}^{2}(\partial D\cap V_0) \rightarrow \hat C(\partial D\cap V_0), \quad K= K_1 + R.
\end{equation}
\begin{proof}[{\bf Proof of Theorem \ref{mainRn}}]
Since we are proving a local property, it is not restrictive to assume that the
boundary datum $g$ belongs to $C^\infty_0 (\partial D \cap V_0)$.
Since $\Gamma_{\Delta}$ is the fundamental solution of $\Delta$,
the function $u=
P(g) (x)$
satisfies
$\Delta u=0 $ in $ V\cap D.$
Hence by \eqref{semprepara}, we have
$$\nonumber P(g) (0, \x)=
\int_{\mathbb{R}^{n-1}} \Big(\hat\Gamma_{\Delta^2}(\x, \y) + E_{\hat\Gamma_{\Delta^2}}(\Phi(\x, \y))\Big)\hat \Delta g(\y) d\y =$$$$=
\int_{\mathbb{R}^{n-1}} \hat \Gamma_{\hat \Delta}(\x, \y)
\hat \Delta g(\y) d\y =g(\x).$$
\end{proof}
Once the existence of a Poisson kernel is established, the proof of the Schauder
estimates is based on properties of singular integrals. We follow here the same ideas as in \cite{Jerison}
and prove that the operator $P$ is bounded. Since it can be represented as
in \eqref{poissonintro} we will start with the properties of $K$.
Let us first note that both $K_1$ and $R$ can be extended to
operators with values in $C( D \cap V)$ by setting
$$K_1(g)(y) =\int_{\partial D \cap V_0} \Gamma_{\Delta}(y, (0,\hat z)) \hat \Delta g(\hat z)
d\hat z$$
and
$$R(g)(y) =\int_{\mathbb{R}^{n-1}\cap V_0} \int_{\mathbb{R}^{n-1}\cap V_0} \Gamma_\Delta(y, (0,\hat s)) \Phi((0, \hat s), (0,\hat z)) \hat \Delta g(\hat z) d\hat s d\hat z.$$
As a consequence $K= K_1 + R $ will be considered as an operator acting between the
following sets
$$K: \hat C^{2}(\partial D\cap V_0) \rightarrow C(D\cap V_0).$$
\begin{remark}\label{finiremomai}
Let us explicitly note that the spaces $C^{k, \alpha}$ associated with the
vector fields $X_i$ defined in \eqref{campinonomo}
are equivalent to the spaces $C^{k, \alpha}$ associated with the
vector fields
\begin{equation}\label{campinonomodue}
\begin{array}{ll}\Y_1 & = \partial_{x_1}, \\
\Y_i & = \partial_{x_i} + \sum_{deg(j) > deg(i)} a_{i,j}(x_1 + w(\hat x), \hat x)\partial_{x_j}, \quad i=2, \cdots, n,
\end{array}
\end{equation}
since these vector fields are linear combinations of the previous ones.
\end{remark}
\begin{lemma}
Let $D= \{(x_1,\x)\in \R \times \R^{n-1}: x_1>0\}$ be a half space with non characteristic boundary.
Then for every $V\subset \subset V_0$ there is a constant $C_1$ such that for every $g\in {\hat C}^{2, \alpha}(\partial D \cap V_0)$
\begin{equation}\label{tesicappag}
\|K(g)\|_{ C^{1, \alpha}( D \cap V)}\leq C_1 \|g\|_{{\hat C}^{2, \alpha}(\partial D \cap V_0)}.\end{equation}
In addition there is a constant $C_2$ such that
if $g\in C_0^\infty(\partial D \cap V_0)$, then
$$K(g)\in \left\{\phi:\, |\phi(0,\z)|\le C_2 \frac{\hat d(\z,\operatorname{supp}(g))}{|\hat B (\z, \hat d(\z,\operatorname{supp}(g)))|}\ \forall \hat z \ \text{s.t. }
\hat d(\hat z, \operatorname{supp}(g)) \geq 2
\operatorname{diam}(\operatorname{supp}(g)) \right\}.$$
\end{lemma}
\begin{proof}
Clearly $\Gamma_{\Delta}((0,\z), (0,\y))$ is a kernel of type $2$
with respect to the distance $d$ in the sense of Definition
\ref{defikernel}. Because of inequality
\eqref{equivdhat} we deduce that there are constants $C_1, C_2$ such that
\begin{equation}\label{homo}
C_1 \frac{\hat d(\z,\y)}{|\hat B (\z, \hat d(\z,\y))|} \leq \Gamma_{\Delta}((0,\z), (0,\y))\leq C_2 \frac{\hat d(\z,\y)}{|\hat B (\z, \hat d(\z,\y))|},
\end{equation}
so that $\Gamma_{\Delta}((0,\z), (0,\y))$ is a kernel of type $1$ with respect
to the distance $\hat d$ induced on $\partial D$,
while the first derivatives of $\Gamma_{\Delta}((0,\z), (0,\y))$
are
singular integrals. As a consequence we obtain (see for example \cite{NagelStein})
\begin{equation}\label{nucleokappa1}
\|E_{\Gamma_\Delta}(\phi)\|_{ C^{1, \alpha}( D \cap V)} \leq C \|\phi\|_{{\hat C}^{ \alpha}(\partial D \cap V_0)},
\end{equation}
for every $\phi\in {\hat C}^{\alpha}(\partial D \cap V_0)$,
where $E_{\Gamma_\Delta}$ denotes the operator with kernel $\Gamma_{\Delta}(z, (0,\y))$.
Therefore $K_1 = E_{\Gamma_{\Delta}} \circ\hat \Delta$ satisfies
$$
\|K_1(g)\|_{ C^{1, \alpha}( D \cap V)} \leq C \|\hat \Delta g\|_{{\hat C}^{\alpha}(\partial D \cap V_0)}\leq C \|g\|_{{\hat C}^{2, \alpha}(\partial D \cap V_0)}.
$$
Since
$\Phi$ is a kernel of type $1/2,$ its associated operator $E_{\Phi}$ satisfies
\begin{equation}
\|E_{\Phi} (\hat \Delta g)\|_{{\hat C}^{ \alpha + 1/2}(\partial D \cap V_0)}\leq
C \|\hat \Delta g\|_{{\hat C}^{\alpha}(\partial D \cap V_0)}\leq C\|g\|_{{\hat C}^{2, \alpha}(\partial D \cap V_0)}.\end{equation}
It follows that
$$\|R(g)\|_{ C^{1, \alpha}( D \cap V)} =\|E_{\Gamma_\Delta}E_{\Phi}(\hat \Delta g)\|_ {C^{1, \alpha}( D \cap V)}\leq
C\|E_{\Phi}(\hat \Delta g)\|_ {{\hat C}^{\alpha}(\partial D \cap V_0)}\leq C\|g\|_{{\hat C}^{2, \alpha}(\partial D \cap V_0)}.$$
In particular \eqref{tesicappag} directly follows. Also the decay property of $K$ immediately follows, since
$$d(\z,\y)\ge d(\z, \operatorname{supp}g)\; $$
for all $\y\in \operatorname{supp}g$ and for all $\z$ such that $\hat d(\z, \text{supp}\,g)\ge 2\operatorname{diam}(\operatorname{supp}g)$.
\end{proof}
Arguing as in Remark \ref{finiremomai} we have the following
\begin{remark}
Since, by \eqref{campinonomo} and \eqref{campinonomodue}, $X_i = \Y_i + Z_i w\, \Y_1$ for $i=2,\cdots,m$, the Laplace type operator $\Delta$
can be expressed as
$$\Delta = \Y_1^2 + \sum_{i=2}^m(\Y_i + Z_i w\, \Y_1)^2 + b_1\Y_1 + \sum_{i=2}^m b_i (\Y_i + Z_i w\, \Y_1)=$$
$$= \Y_1^2 + \sum_{i=2}^m(\Y_i + Z_i w\, \Y_1)^2 + \Big(b_1 + \sum_{i=2}^m b_i\, Z_i w \Big)\Y_1 + \sum_{i=2}^m b_i \Y_i =$$
$$= \Big(1 + \sum_{i=2}^m(Z_i w)^2\Big)\Y_1^2 + \sum_{i=2}^m\Y_i^2
+ \sum_{i=2}^m Z_i w \,(\Y_i\Y_1 + \Y_1\Y_i)$$
\begin{equation}\label{erratoquasiovunque}
+
\Big(b_1 + \sum_{i=2}^m b_i\, Z_i w + \sum_{i=2}^m\big((\Y_i + Z_i w\, \Y_1) Z_i w\big) \Big)\Y_1 + \sum_{i=2}^m b_i \Y_i. \end{equation}
In particular the coefficient $1 + \sum_{i=2}^m(Z_i w)^2 $ of $\Y_1^2$
is smooth and bounded from above and below by positive constants.
\end{remark}
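The chain of identities in the previous remark comes from the expansion of a single square: for a smooth function $c$ (here $c=Z_i w$) one has

```latex
(\Y_i+c\,\Y_1)^2
 \;=\;\Y_i^2+c^2\,\Y_1^2+c\,(\Y_i\Y_1+\Y_1\Y_i)
 +\big((\Y_i+c\,\Y_1)c\big)\,\Y_1 ,
```

as one checks by applying both sides to a test function and using $\Y_i(c\,\Y_1 f)=(\Y_i c)\,\Y_1 f + c\,\Y_i\Y_1 f$.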
Let us now conclude the proof of the boundedness of $P$.
\begin{theorem}\label{albicocca}
Let $V$, $V_0$ be open sets in $\R^{n}$, with $V\subset\subset V_0$, let $g \in {\hat C}^{2, \alpha}(\partial D \cap V_0)$. Then
there is a constant $C_1$ such that \begin{equation}\label{boundP}
\|P(g)\|_{{ C}^{2, \alpha}(D \cap V)}\leq C_1\|g\|_{{\hat C}^{2, \alpha}( \partial D \cap V_0)}.\end{equation}
\end{theorem}
\begin{proof}
Let us fix $V_1$ such that
$V\subset\subset V_1 \subset\subset V_0$. Thanks to the previous lemma we only have to prove that
the operator
$$\tilde K: {C}^{1,\alpha}(D\cap V_0)\cap
\left\{\varphi:\, |\varphi(0,\z)|\le C \frac{ \hat d(\z,\operatorname{supp}(g))}{|\hat B(\z, \hat d(\z,\operatorname{supp}(g)))|}, \quad \quad \quad \quad \quad
\quad \quad\right.$$$$\quad \quad \quad \quad \quad \quad \quad
\left.\forall \hat z \; \text{s.t. }\hat d(\hat z, \operatorname{supp} (g)) \geq 2 \operatorname{diam}(\operatorname{supp}(g)) \right\}
\to C^{2,\alpha}(D\cap V_0)
$$
defined as
$$\tilde K (\varphi)(x) :=
\int_{\R^{n-1}} \Gamma_\Delta(x, (0,\hat z)) \varphi(0,\hat z) d\hat z
$$
satisfies
\begin{equation}\label{semai}
\|\tilde K (\varphi)\|_{C^{2,\alpha}(D\cap V)}\leq C\,\|\varphi\|_{{C}^{1,\alpha}( D\cap V_0)}.
\end{equation}
It is standard to recognize that, for every $i, j = 2, \cdots, m$, the operator $ \Y_i \Y_j \tilde K $ is bounded as an operator with values in $C^{\alpha}(D\cap V_0)$
(see for example \cite{GreinerStein}, \cite{NagelStein}).
Hence we have to estimate the normal derivative. Let us begin with the derivatives
$ \Y_i \Y_1 \tilde K $ with $i = 2, \cdots, m$.
Let $\psi \in C_0^\infty(V_1)$ be such that $\psi\equiv 1$ in a neighborhood of $V$, and let $x\in V$ and $\varphi\in C^{1,\alpha}(D \cap V_0)$.
By Proposition \ref{uniformderivatives}
there exist kernels
$ (\Gamma_{1,i}(x, y))_{i=1, \cdots, m},$
$S_{1}(x, y)$
of type 2 such that
\begin{align*}\partial_1 \tilde K(\varphi)(x) =
&\int_{\R^{n-1}}
\sum_{i=1}^m (\Y^{z}_{i})^* \Gamma_{1, i }(x, (0, \hat z)) \varphi(0,\z) d\z +
\int_{\R^{n-1}} S_{1}(x, (0, \hat z)) \varphi(0,\z) d\z\\
=&- \int_{\R^{n-1}} \partial^z_1 \Gamma_{1,1 }(x, (0, \hat z)) \varphi(0,\z) d\z\\
&-\int_{\R^{n-1}}
\sum_{i=2}^m \Gamma_{1, i }(x, (0, \hat z)) \Y^{z}_{i} \varphi(0,\z) d\z
+\int_{\R^{n-1}} S_{1}(x, (0, \hat z)) \varphi(0,\z) d\z.
\end{align*}
Let us estimate the first term, using the fact that
$\partial_1^z = \partial^z_\nu$,
\begin{align*}
&\int_{\R^{n-1}}
\partial^z_1 \Gamma_{1, 1 }(x, (0, \hat z)) \varphi(0,\z) d\z \\
=&- \int_{\R^{n-1} \cap V_1} <\nu, \nabla \Gamma_{1, 1}(x, (0,\hat z))> \varphi(0,\hat z) \psi(0, \hat z) d\hat z \\
& - \int_{\R^{n-1} } <\nu, \nabla \Gamma_{1,1}(x, (0,\hat z))> \varphi(0,\hat z)
(1-\psi(0, \z)) d\hat z \\
=&
- \int_{V_1\cap D} < \nabla \Gamma_{1,1}(x, z), \nabla(\varphi \psi)(z) > d z \\
&- \int_{\R^{n-1} \backslash V_1} <\nu, \nabla \Gamma_{1,1}(x, (0,\hat z))> \varphi(0,\hat z) (1-\psi(0, \z)) d \hat z.
\end{align*}
If $x\in V$ the last integral contains a $C^\infty$ kernel, since $\psi=1$ on a closed set which contains $V$ in its interior. Thus, applying standard singular integral theory to all terms in the expression of $\partial_1 \tilde K$ we obtain
$$\|\partial_1 \tilde K(\varphi)\|_{C^{1, \alpha}(D \cap V)}\leq C \|\varphi\|_{{ C}^{1, \alpha}( D \cap V_0)}.$$
Analogously for every $i = 2, \cdots, m$ we have
$$\|\partial_1 \Y_i\tilde K(\varphi)\|_{C^{\alpha}(D \cap V)}\leq C \|\varphi\|_{{C}^{1, \alpha}( D \cap V_0)}.$$
Finally we note that $\Delta \tilde K(\varphi) =0$ in $D$; consequently the estimate
of $\Y^2_1 \tilde K(\varphi)$ follows by difference from the estimates of all the other
second derivatives and the expression \eqref{erratoquasiovunque}.
Assertion \eqref{semai} is proved, so that the conclusion follows.
\end{proof}
From this the following corollary immediately follows:
\begin{corollary}\label{operator_norms} Assume that the same assumptions as in Theorem \ref{albicocca} are satisfied.
If $V\subset\subset V_0$, $k\in \{0,1\}$,
$f\in C^{k,\alpha}(V_0)$, and
$$G(f) := E_{\Gamma_{\Delta}}(f) - P((E_{\Gamma_{\Delta}}(f))_{|{\partial D \cap V_0}}),$$
there exists a constant $C$ such that
\begin{equation}\label{Gnabla}
\|G(f)\|_{C^{2, \alpha }(V)} \leq C \|f\|_{C^{\alpha}(V_0)}
\quad \text{ and } \quad \|G(\nabla f)\|_{C^{k+1, \alpha }(V)} \leq C \|f\|_{C^{k,\alpha}(V_0)}.
\end{equation}
\end{corollary}
\begin{proof}
The first inequality follows from properties of singular integrals (see \cite{libroStein}) and the boundedness of $P$ established in Theorem \ref{albicocca}.
The last inequality follows by applying Lemma \ref{gammanabla}. Indeed there exists a kernel $S$ of type 1 such that
$$ E_{\Gamma_{\Delta}}(\nabla f) = E_{S}(f).$$
Consequently
$$G(\nabla f) = E_{S}(f) - P((E_S(f))_{|{\partial D \cap V_0}}),$$
and the assertion follows at once.
\end{proof}
Let $D= \{(x_1,\x)\in \R \times \R^{n-1}: x_1>0\}$ be a half space as above and consider the problem
\begin{equation}
\label{festa2}
\left\{\begin{array}{cl}
\Delta u=f& \text{in $D$},\\ u=g & \text{on $\partial D$.}
\end{array}\right.\end{equation}
From Theorem \ref{mainRn} the next theorem easily follows.
\begin{theorem}\label{con f}
If $f\in C^\infty _0(V_0)$ and $g\in C_0^\infty(\partial D \cap V_0)$ and
$$G(f) = E_{\Gamma_{\Delta}}(f) - P\big((E_{\Gamma_{\Delta}}(f))_{|\partial D \cap V_0}\big),$$
then the function $u=G(f) + P(g)$
solves the problem $$\Delta u=f \ \text{in }D, \quad u=g \ \text{on }\partial D\cap V_0.$$
\end{theorem}
As a consequence of the previous theorem, we immediately get an approximate
representation formula for a smooth function $u$.
\begin{lemma}
Let $V\subset\subset V_0$ and let $u\in C^{\infty}_0(V)$. Let us set $f:=\Delta u$ and $g:=u_{|\partial D \cap V_0}$,
and let $\phi \in C^\infty_0(V_0)$, $\phi =1$ on $V$.
Then
\begin{equation} \label{ravioli}u = \phi v + E_{\Gamma_\Delta}\Big(f (1-\phi) + v \sum _{i=1}^m b_i X_i\phi\Big) -
E_{S} (v \nabla\phi),\end{equation}
where $v= G(f)+P(g)$ and $b_i$ are the coefficients of the operator $\Delta$ in \eqref{laplacoperator}.
\end{lemma}
\begin{proof}
Setting $v= G(\Delta u)+P(u|_{\partial D \cap V_0})$
we have by Theorem \ref{con f}
$$
\left\{\begin{array}{lll}
\Delta (u - \phi v)&=f (1-\phi) + \nabla v \nabla \phi + v \Delta \phi &\quad \text{in}\, V_0\cap D,
\\
u - \phi v &= 0 &\text{on}\, \partial (V_0\cap D)\,.
\end{array}\right.
$$
where we have extended $u - \phi v$ to the whole space by $0$.
We deduce by \eqref{laplacoperator}
$$
u = \phi v + E_{\Gamma_\Delta}\Big(f (1-\phi) + \nabla v \nabla \phi + v \Delta \phi\Big)
$$$$=
\phi v +E_{\Gamma_\Delta}\Big(f (1-\phi) + v \sum_i b_i X_i\phi \Big) - E_{\Gamma_\Delta}(\nabla (v \nabla\phi)).$$
Now applying Lemma \ref{gammanabla} we obtain
$$u = \phi v + E_{\Gamma_\Delta}\Big(f (1-\phi) + v \sum _ib_i X_i\,\phi\Big) -
E_{S} (v \nabla\phi).$$
\end{proof}
\subsection{Schauder estimates}
We can now complete the proof of the Schauder estimates, stated in the introduction:
\begin{proof}[Proof of Theorem \ref{c:schauderGroups}]
Let $u$ be a solution of $\Delta u=f$ and $u_{|\partial D}=g$.
We will prove the a priori estimates for $u$ under the assumption that
$f\in C^\infty(\bar D)$, $g\in C^\infty(\partial D)$ and we will obtain
the conclusion for $f\in C^{\alpha}(\bar D)$, $g\in \hat C^{2, \alpha}(\partial D)$ by a density argument.
For smooth data, by \cite{KN} there exists a unique solution $u\in C^\infty(D)$, smooth up to the boundary at non-characteristic points.
We first note that
$$\|u\|_\infty \leq \|g\|_\infty$$
via the maximum principle. In addition,
extending $g$ to the interior of $D$ as a function of class $C^{2, \alpha}$ such that
$\|g\|_{C^{2, \alpha}(D)}\leq \|g\|_{\hat C^{2, \alpha}(\partial D)},$
we see that $u-g$ is a solution of $\Delta(u-g)= f- \Delta g$ in $D$ and $ u-g=0 $ on ${\partial D}$, hence the Moser iteration technique (see \cite{moser}) ensures that there exists a value of $\beta$ such that
$u-g\in C^{\beta}(\bar D),$ and
\begin{equation}\label{normabeta}
\|u\|_{C^\beta(\bar D)} \leq C\big(\|f\|_{C^\alpha(\bar D)} + \|g\|_{\hat C^{2, \alpha}(\partial D)}\big).
\end{equation}
We can choose a non-characteristic point, say $0\in \partial D$,
and denote by $V_0$ a neighborhood of $0$ such that
the subriemannian normal satisfies $$\nu(s) \ne 0 \ \text{for every } s\in \partial D\cap V_0.$$
Then we can perform the change of variables described in Section \ref{nuova}
on a set $V\subset\subset V_0$.
Through this change of variables
the vector fields $X_i $ can be represented as in \eqref{campinonomo}
$$d\Xi(X_1) = \partial_{x_1}, \quad
d\Xi(X_i) = \partial_{x_i} + \sum_{\deg(j) > \deg(i)} a_{i,j}(x_1 + w(\hat x), \hat x)\partial_{x_j} + X_i w (\x)\partial_{y_1},$$
so that the results of the previous section apply.
Let $\phi\in C_0^{\infty}(V)$, let $V_1$ be an open set such that $V\subset\subset V_1$, and let $\phi_1\in C_0^{\infty}(V_1)$ be identically $1$ on $V$. Define
\begin{equation}\label{preraviolo}
v:=G(\Delta (\phi u)) + P((\phi u)_{|\partial D \cap V})= G\Big(
f \phi + \nabla \phi \nabla u+ \Delta \phi u\Big) + P((\phi u)_{|\partial D \cap V}).
\end{equation}
By \eqref{ravioli} we get
\begin{equation}\label{postraviolo}
\phi u =
\phi_1 v + E_{\Gamma_\Delta}\Big(f (1-\phi_1) + v \sum_i b_i X_i\phi_1\Big) -
E_S (v \nabla\phi_1).
\end{equation}
Then, from the previous expressions and using \eqref{Gnabla}, for nested open sets $V \subset\subset V_3\subset \subset V_2\subset \subset V_1 \subset\subset V_0$ and for every $\gamma\leq \alpha$, we get that
\begin{equation}\label{stimabeta}
\|\phi u\|_{C^{1,\gamma}(V_3\cap D)} \leq C\big(\|v\|_{C^{1,\gamma}(V_2\cap D)} + \|f\|_{C^{\alpha}(V_2\cap D)}\big)\end{equation}
$$\leq C\big(\|u\|_{C^{\gamma}(V_1\cap D)} + \|f\|_{C^{\alpha}(\bar D)} + \|g\|_{\hat C^{2,\alpha}(\partial D)} \big).$$
In particular, using this inequality and the uniform estimate of $\|\phi u\|_{C^{\beta}(\bar D)}$ provided by \eqref{normabeta} we get for $V \subset\subset V_4 \subset\subset V_3 $
$$\|\phi u\|_{C^{\alpha}(V_4\cap D)} \leq C\|\phi u \|_{C^{1,\beta}(V_3\cap D)}
\leq C\big(\|f\|_{C^{\alpha}(\bar D)} + \|g\|_{\hat C^{2,\alpha}(\partial D)} \big). $$
Having an estimate of $\|\phi u\|_{C^{\alpha}}$, we apply again \eqref{stimabeta} with $\gamma =\alpha$
and obtain, for $V \subset\subset V_5\subset \subset V_4 $,
$$\|\phi u \|_{C^{1,\alpha}(V_5\cap D)}
\leq C\big(\|f\|_{C^{\alpha}(\bar D)} + \|g\|_{\hat C^{2,\alpha}(\partial D)} \big). $$
Finally, we iterate the same argument applying again \eqref{Gnabla} to \eqref{preraviolo} and \eqref{postraviolo}. Therefore we get
$$
\|\phi u\|_{C^{2, \alpha} (V\cap D)}\leq C\big(\|\phi u\|_{C^{1,\alpha}(V_5\cap D)} + \|f\|_{C^{\alpha}(\bar D)} + \|g\|_{\hat C^{2, \alpha}(\partial D)} \big)
$$$$\leq
C\big(\|f\|_{C^{\alpha}(\bar D)} + \|g\|_{\hat C^{2, \alpha}(\partial D)} \big).
$$
This concludes the proof.
\end{proof}
\titlespacing{\section}{0pt}{4pt plus 0pt minus 0pt}{1.5pt plus 0pt minus 0pt}
\usepackage{multirow}
\usepackage{listings}
\lstdefinelanguage{diff}{
basicstyle=\ttfamily\bfseries\scriptsize,
morecomment=[f][\color{diffstart}]{@},
morecomment=[f][\color{diffincl}]{+},
morecomment=[f][\color{diffrem}]{-},
keepspaces=true,
identifierstyle=\color{black},
}
\usepackage[shortlabels]{enumitem}
\setlength{\textfloatsep}{4pt plus 1.0pt minus 2.0pt}
\setlength{\floatsep}{4pt plus 1.0pt minus 2.0pt}
\setlength{\intextsep}{2pt plus 1.0pt minus 2.0pt}
\setlength{\dbltextfloatsep}{4pt plus 1.0pt minus 2.0pt}
\setlength{\dblfloatsep}{4pt plus 1.0pt minus 2.0pt}
\usepackage{subcaption}
\usepackage{ifthen}
\newboolean{showcomments}
\setboolean{showcomments}{true}
\ifthenelse{\boolean{showcomments}}
{ \newcommand{\mynote}[2]{\textcolor{red}{
\fbox{\bfseries\sffamily\scriptsize#1}
{\small$\blacktriangleright$\textsf{\emph{#2}}$\blacktriangleleft$}}}}
{ \newcommand{\mynote}[2]{}}
\ifthenelse{\boolean{showcomments}}
{ \newcommand{\hnote}[2]{\textcolor{blue}{
\fbox{\bfseries\sffamily\scriptsize#1}
{\small$\blacktriangleright$\textsf{\emph{#2}}$\blacktriangleleft$}}}}
{ \newcommand{\hnote}[2]{}}
\newcommand{\ft}[1]{\mynote{Ferdian}{#1}}
\newcommand{\sa}[1]{\hnote{Stefanus}{#1}}
\newcommand{\jlx}[1]{\mynote{JLX}{#1}}
\begin{document}
\author{
\IEEEauthorblockN{
Stefanus A. Haryono\IEEEauthorrefmark{1},
Ferdian Thung\IEEEauthorrefmark{1},
David Lo\IEEEauthorrefmark{1},
Lingxiao Jiang\IEEEauthorrefmark{1},
Julia Lawall\IEEEauthorrefmark{3},
Hong Jin Kang\IEEEauthorrefmark{1},\\
Lucas Serrano\IEEEauthorrefmark{2}, and
Gilles Muller\IEEEauthorrefmark{3},
}
\IEEEauthorblockA{\IEEEauthorrefmark{1}School of Information Systems, Singapore Management University, Singapore\\
\{stefanusah,ferdianthung,davidlo,hjkang.2018,lxjiang\}@smu.edu.sg}
\IEEEauthorblockA{\IEEEauthorrefmark{2}Sorbonne University/Inria/LIP6, France\\
Lucas.Serrano@lip6.fr}
\IEEEauthorblockA{\IEEEauthorrefmark{3}Inria, France\\
\{Gilles.Muller,Julia.Lawall\}@inria.fr}
}
\captionsetup{font=footnotesize,belowskip=1pt,aboveskip=1.0pt}
\pagestyle{plain}
\pagenumbering{arabic}
\title{AndroEvolve: Automated Android API Update with Data Flow Analysis and Variable Denormalization}
\maketitle
\begin{abstract}
The Android operating system is frequently updated, with each version bringing a new set of APIs. New versions may involve API deprecation; Android apps using deprecated APIs need to be updated to ensure the apps' compatibility with old and new versions of Android. Updating deprecated APIs is a time-consuming endeavor. Hence, automating the updates of Android APIs can be beneficial for developers. CocciEvolve{} is the state-of-the-art approach for this automation. However, it has several limitations, including its inability to resolve out-of-method-boundary variables and the low code readability of its updates due to the addition of temporary variables. In an attempt to further improve the performance of automated Android API updates, we propose an approach named AndroEvolve{}, which addresses the limitations of CocciEvolve{} through the addition of data flow analysis and variable name denormalization. Data flow analysis enables AndroEvolve{} to resolve the value of any variable within the file scope. Variable name denormalization replaces temporary variables that may be present in the CocciEvolve{} update with appropriate values in the target file. We have evaluated the performance of AndroEvolve{} and the readability of its updates on 360 target files. AndroEvolve{} produces 26.90\% more instances of correct updates compared to CocciEvolve{}. Moreover, our manual and automated evaluation shows that AndroEvolve{} updates are more readable than CocciEvolve{} updates.
\end{abstract}
\begin{IEEEkeywords}
Program transformation, Android, data flow analysis, readability, API deprecation, API update
\end{IEEEkeywords}
\section{Introduction}
Android is currently one of the most prominent operating systems (OS) due to its vast number of users.
The Android OS is frequently updated to add new features or to fix bugs. With each new version, changes and modifications in its APIs are inevitable. Changes to Android APIs may deprecate older APIs and render them unusable in newer versions of the OS. To prevent errors caused by such API deprecation, developers need to constantly update deprecated-API usages in their code, while still maintaining backward compatibility with older Android versions. This problem, termed Android fragmentation~\cite{han2012understanding, wei2016taming}, is a common occurrence. Aside from being cumbersome and time-consuming to mitigate, Android fragmentation also introduces security risks~\cite{fragmentationsecurity}.
Due to the nature of Android fragmentation, updating usages of deprecated Android APIs has become a priority. To help developers, several studies have proposed automatic approaches for updating Android API usages~\cite{fazzini2019automated, coccievolve}. AppEvolve is a recent approach~\cite{fazzini2019automated}. It uses both before- and after-update code examples to learn the update automatically. However, while it is able to provide an applicable update for some examples,
it was found to have several weaknesses:
a replication study by Thung et al.~\cite{thung2020automated} demonstrated that AppEvolve works correctly only when the target file to be updated has a very similar syntax to the code example.
More recently, Haryono et al.~\cite{coccievolve} presented a new tool for automatic Android API update called CocciEvolve{}
built on Coccinelle4J~\cite{lawall2018coccinelle}.
CocciEvolve{} shows better performance than AppEvolve on 112 target files.
The main improvements that CocciEvolve{} provides are:
update scripts generated in the Semantic Patch Language (SmPL)~\cite{lawall2018coccinelle}, updates performed using only a single after-update example, and the capability to update multiple instances of deprecated API invocations within a single file.
However, the code example used in CocciEvolve{} must be in the form of an {\tt if} statement containing the updated API in the ``then'' statement and the old API in the ``else'' statement, or vice versa. A sample of such an after-update code example can be seen in Figure~\ref{fig:example_update_code}.
\begin{figure}[h]
\centering
\scriptsize{
\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]
if (android.os.Build.VERSION.SDK_INT >=
android.os.Build.VERSION_CODES.M) {
minutes = picker.getMinute();
} else {
minutes = picker.getCurrentMinute();
}
\end{lstlisting}
\setlength{\belowcaptionskip}{7pt}
\caption{An example of after-update code for {\tt getCurrentMinute()} API}\label{fig:example_update_code}
}
\end{figure}
The major problem of CocciEvolve is its inability to resolve all the values used as API arguments.
These values in API method invocations in a method body can be expressions in various forms, such as literal expressions, name expressions, field access expressions, method invocations, and object creations.
When the expressions refer to a variable defined outside of the method boundary, CocciEvolve fails to resolve the variable or produce a working update.
Another problem in CocciEvolve{} is about the readability of its update results. During the update process, CocciEvolve introduces temporary variables that refer to other variables in the target file. The temporary variables are used to ease the transformation process in CocciEvolve{}, but
they clutter the update results.
In this work, we propose AndroEvolve{}, an improved automated Android API usages update tool that addresses the limitations of CocciEvolve{}, with two new features:
data flow analysis and variable name denormalization.
During the update-script creation, data flow analysis is used to resolve the values used as API arguments, including all variables that are defined outside of the current method containing deprecated API invocations to be updated. Definitions of such out-of-method-boundary variables are located
and used to replace the variables in the API invocations.
For brevity, we refer to such variables as out-of-method variables in the rest of the paper.
\begin{comment}
\jlx{this paragraph doesn't seem to be useful, although it explains where comes the temporary variables; may keep the explanation short if not the explanation doesn't show the advantage of our new tool.
Modified.}
Conforming with the approach taken by CocciEvolve{}, source file normalization is also used in AndroEvolve{} as a {\em pre-processing step}. Source file normalization mitigates the problem of failed update that is caused due to minor syntactic differences between the after-update example and the target file. Through source file normalization, the API invocation and its arguments are converted into normalized form to ease the update process by introducing temporary variables. Each temporary variable is an assignment which contains the reference to the arguments and class or object used in the API invocation.
\end{comment}
Variable name denormalization is added as a {\em post-processing step} to improve the readability of the updated code.
Since CocciEvolve{} introduces temporary variables to normalize syntactic differences between update examples and the target file, the readability of the updated code decreases as more temporary variables are used.
Our denormalization converts code that was normalized to use temporary variables back to its original form {\em after} the updates have been performed,
thus improving the readability of the produced updates.
To evaluate the performance of AndroEvolve{}, we have conducted an
experiment using a dataset of 360 target files containing 20 different Android APIs.
We have compared the performance of AndroEvolve{} against CocciEvolve{} by counting the number of successful updates produced by each tool. AndroEvolve{} has produced 26.9\% more successful updates.
For readability, we have compared the update results of AndroEvolve{} and CocciEvolve{} through both an automated measurement using a popular readability scoring tool~\cite{readabilitymodel} and a manual measurement based on the opinions of two experienced Android engineers. The measurements highlight that AndroEvolve{} produces updates that are about 50\% and 83\% more readable than CocciEvolve{} with respect to the automated and manual measurements,
respectively.
The main contributions of our work are as follows:
\begin{enumerate}[nosep,leftmargin=*]
\item We propose AndroEvolve{}, a tool that addresses the limitations of CocciEvolve{}. AndroEvolve{} adds data flow analysis to resolve out-of-method variables and introduces variable name denormalization to increase update readability.
\item We evaluate AndroEvolve{} on a dataset containing 360 target files involving 20 Android APIs and show that it outperforms CocciEvolve{} in terms of both update success rate and the readability of the updates.
\end{enumerate}
\smallskip
The rest of this paper is organized as follows. Section~\ref{sec:prelim} provides preliminaries on CocciEvolve{}, data flow analysis, and code readability. Section~\ref{sec:motivating_examples} provides motivating examples that show the problems present in CocciEvolve{}. Section~\ref{sec:approach} discusses our approach in creating AndroEvolve{} as the upgraded version of CocciEvolve{}. Section~\ref{sec:exp} provides the details of the experiments and their results. Section~\ref{sec:discuss} discusses the limitations of our work. Lastly, Section~\ref{sec:conclusion} concludes our work and provides a discussion of future plans.
\section{Preliminaries}\label{sec:prelim}
{\bf CocciEvolve}~\cite{coccievolve} is the state-of-the-art tool on automatic Android API usage update. It distinguishes itself by only requiring a single after-update example, providing a readable update-script, and introducing code normalization that
tolerates some syntactic differences during code updates.
Firstly, by using only a single after-update example,
CocciEvolve{} is applicable to more cases than its predecessor AppEvolve~\cite{fazzini2019automated}, which requires both before- and after-update examples.
Secondly, a readable update-script is achieved through the use of Coccinelle4J~\cite{kang2019semantic}. Coccinelle4J is a program matching and transformation tool for the Java language, ported from Coccinelle~\cite{lawall2018coccinelle,padioleau2008documenting}.
Coccinelle4J describes its transformation using a patch written in the Semantic Patch Language (SmPL), which has a syntax similar to \textit{diff}. Having a patch that describes the program transformation helps developers to better understand the updates and transformations applied.
Finally, by doing a normalization on both the after-update example and the target file, CocciEvolve{} minimizes the syntactic differences that may cause a failed update. Syntactic differences occur when the after-update and the target file API invocation arguments are expressed in different syntax. This was one of the main limitations of AppEvolve, as shown in a replication study by Thung et al.~\cite{thung2020automated}.
\vspace{0.2cm}\noindent{\bf Data flow analysis}, such as def-use analysis, is an analysis of the data within the code based on the control flow paths taken by the program. For each given expression at a point inside a program, data flow analysis
can determine the value of the expression.
Sample uses of data flow analysis are dead code elimination, variable value prediction, and program slicing.
For our work, data flow analysis is mainly used to determine the values of variables used in the arguments for API invocations and to conduct program slicing.
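As a toy illustration of how def-use information lets an analysis determine the value behind an expression, the following sketch (our own hypothetical code, not part of any tool described in this paper) resolves a variable through a chain of assignments by scanning for its reaching definition:

```java
import java.util.*;

public class DefUseToy {
    // Each assignment is a pair (variable, right-hand side), in program order.
    // The right-hand side is either an integer literal or another variable.
    static String resolve(List<String[]> assignments, String var) {
        // Scan backwards for the reaching definition of var.
        for (int i = assignments.size() - 1; i >= 0; i--) {
            if (assignments.get(i)[0].equals(var)) {
                String rhs = assignments.get(i)[1];
                // A variable on the right-hand side is resolved recursively,
                // using only the definitions that precede this one.
                return rhs.matches("\\d+") ? rhs : resolve(assignments.subList(0, i), rhs);
            }
        }
        return var; // no definition found: leave the expression unchanged
    }

    public static void main(String[] args) {
        List<String[]> defs = Arrays.asList(
                new String[]{"duration", "9"},
                new String[]{"time", "duration"});
        System.out.println(resolve(defs, "time")); // prints 9
    }
}
```

Real analyses work on control-flow paths rather than a flat assignment list, but the backward search for a definition is the same idea.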
\vspace{0.2cm}\noindent{\bf Code readability} is a measure of how easy it is to read a piece of code. It is an important code feature that developers look out for, especially for code that needs to be maintained in the long run, or code that is touched by multiple developers. Having readable code makes it easier for developers to understand and modify the code. Studies on code readability have been conducted extensively~\cite{readabilitymodel, readabilityempirical, busereadability, textualreadability}.
These studies define the metrics and features that are considered important factors in determining code readability. These features include structural features (e.g., number of lines of code, length of each line of code, etc.) and textual features (e.g., names of variables, consistency between comments and variable names, etc.).
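Structural features of this kind are straightforward to compute. The sketch below is illustrative only (it is not the scoring model used in our evaluation) and extracts two such features from a code snippet:

```java
public class ReadabilityFeatures {
    // Average line length: a structural feature used by readability models.
    static double avgLineLength(String code) {
        String[] lines = code.split("\n");
        double total = 0;
        for (String line : lines) {
            total += line.length();
        }
        return total / lines.length;
    }

    // Maximum line length: long lines tend to lower readability scores.
    static int maxLineLength(String code) {
        int max = 0;
        for (String line : code.split("\n")) {
            max = Math.max(max, line.length());
        }
        return max;
    }

    public static void main(String[] args) {
        String snippet = "int x = 1;\nint y = x + 2;";
        System.out.println(avgLineLength(snippet)); // prints 12.0
        System.out.println(maxLineLength(snippet)); // prints 14
    }
}
```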
\section{Motivating Example}\label{sec:motivating_examples}
The two major limitations of CocciEvolve{} are its inability to resolve variables defined outside of the current method containing invocations to deprecated APIs (so called {\it out-of-method variables}) and the presence of temporary variables in the updated code.
\textbf{Out-of-method variables.} An after-update example for the {\tt requestAudioFocus(...)} API is shown in Figure~\ref{fig:out_of_method_boundaries}. In this example, the deprecated API that is going to be updated in the target file is the {\tt requestAudioFocus(...)} in line 77. The updated API and its argument are shown in lines 74--75. In this example, the method invocation argument for the updated API is the {\tt request} object, shown in line 75. This object needs to be defined through a method invocation of {\tt audioFocusRequestOreo.getAudioFocusRequest()} (line 74).
The variable {\tt audioFocusRequestOreo} is defined outside the method, at line 41.
This variable and its definition form a new argument that is not yet defined in the deprecated API invocation, and thus they are not present in the target file.
Since CocciEvolve only performs an intra-procedural analysis of update examples and only considers lines 74--75 for the update script creation, the variables related to the {\tt request} object (lines 41, 110--133) cannot be resolved, hence creating a non-working update.
Our proposed solution is the addition of data flow analysis as a variable value resolver. We use data flow analysis on the after-update example's code to find, at file scope, the definitions of the variables used in the API invocation's arguments.
To further improve the functionality of this data flow analysis across files and Java classes, we also copy method and class definitions. If a variable's value is resolved as a method invocation or an object creation, its method or class definition is needed to create a working update. AndroEvolve{} copies such definitions into the updated file, allowing the uses of those method invocations or object creations.
\begin{figure}[t]
\centering
\scriptsize{
\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]
41 AudioFocusRequestOreo audioFocusRequestOreo = new
AudioFocusRequestOreo();
...
67 public void tryToGetAudioFocus() {
68 OnAudioFocusChangeListener listener = this;
69 int result;
70 int type = AudioManager.STREAM_MUSIC;
71 int duration = AudioManager.AUDIOFOCUS_GAIN;
72 if (android.os.Build.VERSION.SDK_INT >=
android.os.Build.VERSION_CODES.O) {
74 AudioFocusRequest request = audioFocusRequestOreo.
getAudioFocusRequest();
75 result = audioManager.requestAudioFocus(request);
76 } else {
77 result = audioManager.requestAudioFocus(listener,
type, duration);
78 }
79 }
...
110 private class AudioFocusRequestOreo {
111 public AudioFocusRequest getAudioFocusRequest() {
...
132 }
133 }
\end{lstlisting}
\caption{Sample out-of-method argument for an API invocation {\tt requestAudioFocus(...)}}\label{fig:out_of_method_boundaries}
}
\end{figure}
\textbf{Temporary variables in update results.}
Temporary variables are used in CocciEvolve{} to ease the process of code update. However, these variables remain in the updated code, affecting its readability.
Furthermore, these variables only refer to other variables that are already in the actual target file. Consider the sample updated code in Figure~\ref{fig:temporary_variables}: there are two temporary variables, {\tt parameterVariable0} (line 2) and {\tt classNameVariable} (line 3). These variables refer to other parameters of the {\tt setTimeH} method, and can be removed and replaced by their definitions.
To resolve this problem, our proposed solution is to add variable name denormalization. This denormalization removes the declarations and definitions of the temporary variables
and replaces uses of such temporary variables with the original variables that they refer to. For example, line 2 in Figure~\ref{fig:temporary_variables} will be removed, and {\tt parameterVariable0} in lines 5 and 7 will be replaced by {\tt hour}.
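The substitution itself can be pictured as a simple rewrite. The toy version below is our own illustration (AndroEvolve{} operates on the parsed program, not on raw text): given a map from each temporary variable to the expression it aliases, it drops the declaration line and replaces the remaining uses:

```java
import java.util.*;
import java.util.regex.Pattern;

public class DenormalizeToy {
    // aliases maps each temporary variable name to the expression it refers to.
    static String denormalize(String code, Map<String, String> aliases) {
        StringBuilder out = new StringBuilder();
        for (String line : code.split("\n")) {
            String processed = line;
            boolean isTempDecl = false;
            for (Map.Entry<String, String> e : aliases.entrySet()) {
                // Drop declarations such as "int parameterVariable0 = hour;".
                if (processed.matches(".*\\b" + e.getKey() + "\\s*=\\s*"
                        + Pattern.quote(e.getValue()) + "\\s*;\\s*")) {
                    isTempDecl = true;
                    break;
                }
            }
            if (isTempDecl) {
                continue;
            }
            for (Map.Entry<String, String> e : aliases.entrySet()) {
                // Replace remaining uses of the temporary variable.
                processed = processed.replaceAll("\\b" + e.getKey() + "\\b", e.getValue());
            }
            out.append(processed).append('\n');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> aliases = new LinkedHashMap<>();
        aliases.put("parameterVariable0", "hour");
        String code = "int parameterVariable0 = hour;\n"
                + "tp.setHour(parameterVariable0);";
        System.out.print(denormalize(code, aliases)); // prints tp.setHour(hour);
    }
}
```

Working on text keeps the sketch short but is fragile (e.g., aliased expressions containing regex metacharacters in the replacement); an AST-based rewrite avoids these issues.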
\begin{figure}[h]
\centering
\scriptsize{
\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]
1 public void setTimeH(TimePicker tp, int hour) {
2 int parameterVariable0 = hour;
3 TimePicker classNameVariable = tp;
4 if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
5 classNameVariable.setHour(parameterVariable0);
6 } else {
7 classNameVariable.setCurrentHour(parameterVariable0);
8 }
9 }
\end{lstlisting}
\caption{An example of temporary variables in the output of CocciEvolve after updating the {\tt setCurrentHour(...)} API invocation.}\label{fig:temporary_variables}
}
\end{figure}
\section{Approach}\label{sec:approach}
\subsection{AndroEvolve{} Overview}
\begin{figure}[t]
\centering
\captionsetup{belowskip=1.0pt,aboveskip=3.0pt}
\includegraphics[width=0.85\linewidth]{Diagrams/workflow_summary.png}
\caption{Summary of AndroEvolve{} workflow}
\label{fig:androevolve_summary}
\end{figure}
The workflow of AndroEvolve{} comprises two main functionalities: update-script creation and update-script application.
Figure~\ref{fig:androevolve_summary} provides a graphical description of this workflow.
Update script creation takes as input the API update mapping and the after-update example of the API to create the update script. The API update mapping defines the mapping between the deprecated API and the updated API. Within the update script creation process, several components are at work. First, data flow analysis is used to resolve any out-of-method variables used by the updated API arguments in the after-update example. This data flow analysis also locates the definitions of any method invocations or object creations used as API arguments. Following the data flow analysis, source file normalization is done on the code block containing the API invocation. Variable normalization introduces temporary variables as replacements for the API invocation arguments to facilitate the update process.
Finally, an update script is created based on this modified example code.
This update script, along with the API update mapping and the target file, is the input for the update application process. Within the update application process, several steps are applied. First, source file normalization is applied to the target file to ease the update process. The update script is then applied to the normalized code. After the update is done, we copy the method and class definitions that are used by the method invocations or object creations appearing in the updated API arguments. Finally, we apply variable name denormalization to remove the temporary variables introduced by the variable normalization and replace their usages with the original expressions used as the API arguments. Details of each functionality are provided below.
\subsection{Data Flow Analysis}
API method invocations may include arguments that are syntactically different but semantically equivalent.
Figure~\ref{fig:arguments_form} shows an example of different forms of arguments for {\tt setAudioAttributes} method invocations. In the first example, the argument is first instantiated and assigned to a variable (lines 2, 3) before being used as the argument for the first {\tt setAudioAttributes} method invocation (line 10). The {\tt builder} variable (used in line 3) in this example is a {\it free} out-of-method variable: it is only defined in line 2, outside of the method containing line 9, and is not passed as an argument.
The second example shows a code fragment where a complex expression is directly used as the argument of the second {\tt setAudioAttributes(...)} method invocation (line 26). Contrary to the first example, the argument used in this example is bound (i.e., a variable locally defined or passed in via a method parameter).
\begin{figure}[t]
\centering
\scriptsize{
\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]
// First Example
1 public class AudioPlayer {
2 AudioAttributes.Builder builder = new
AudioAttributes.Builder();
3 AudioAttributes attributes = builder.build();
...
8 private void setAttributes() {
9 if (android.os.Build.VERSION.SDK_INT >=
android.os.Build.VERSION_CODES.LOLLIPOP) {
10 mMediaPlayer.setAudioAttributes(attributes);
11 } else {
...
}
}
}
// Second Example
20 public class AudioPlayer {
...
24 private void setAttributes() {
25 if (android.os.Build.VERSION.SDK_INT >=
android.os.Build.VERSION_CODES.LOLLIPOP) {
26 mMediaPlayer.setAudioAttributes(new
AudioAttributes.Builder().build());
27 } else {
...
}
}
}
// Normalized first example
30 public class AudioPlayer {
30 AudioAttributes.Builder builder = new
AudioAttributes.Builder();
31 AudioAttributes attributes = builder.build();
...
38 private void setAttributes() {
39 if (android.os.Build.VERSION.SDK_INT >=
android.os.Build.VERSION_CODES.LOLLIPOP) {
40 AudioAttributes parameterVariable0 = attributes;
41 MediaPlayer classNameVariable = mMediaPlayer;
42 classNameVariable.setAudioAttributes(
parameterVariable0);
43 } else {
...
}
}
}
\end{lstlisting}
\caption{An example of different forms of argument for {\tt setAudioAttributes} method invocation}\label{fig:arguments_form}
}
\end{figure}
Suppose that CocciEvolve{} is given an after-update example as shown in the first example in Figure~\ref{fig:arguments_form}. The part of the code containing the API invocation and its argument is first normalized, resulting in the normalized first example code (lines 30--44). The slice of the normalized code that is used to create the update script is the part contained within the {\tt if} statement, shown in lines 40--42.
Using this normalized code slice, CocciEvolve{} will produce an incorrect update, as shown in Figure~\ref{fig:old_coccievolve_result_wrong}. This update script is incorrect since, based on the code slice, {\tt newParameterVariable0} is only resolved to the bound variable {\tt attributes} (line 40). This is because CocciEvolve{} cannot resolve the correct value of the expression used as the API invocation argument and only uses the bound variable found within the slice, which is {\tt attributes}. For this reason, CocciEvolve{} can generate a correct update script for the second example (as shown in Figure~\ref{fig:old_coccievolve_result_correct}), but not for the first example.
\begin{figure}[t]
\centering
\scriptsize{
\begin{lstlisting}[language=diff,numbers=none]
@bottomupper_classname@
expression exp0, exp1;
identifier iden0, classIden;
@@
...
// Created Update Script
+ if (android.os.Build.VERSION.SDK_INT >=
+ android.os.Build.VERSION_CODES.LOLLIPOP) {
+ AudioAttributes newParameterVariable0 = attributes;
+ classIden.setAudioAttributes(newParameterVariable0);
+ } else {
...
+ }
\end{lstlisting}
\caption{Incorrect update script for {\tt setAudioAttributes} API invocation generated by CocciEvolve{} based on the first example in Figure~\ref{fig:arguments_form}}\label{fig:old_coccievolve_result_wrong}
}
\end{figure}
\begin{figure}[t]
\centering
\scriptsize{
\begin{lstlisting}[language=diff,numbers=none]
@bottomupper_classname@
expression exp0, exp1;
identifier iden0, classIden;
@@
...
+ if (android.os.Build.VERSION.SDK_INT >=
+ android.os.Build.VERSION_CODES.LOLLIPOP) {
+ AudioAttributes newParameterVariable0 = new
+ AudioAttributes.Builder().build();
+ classIden.setAudioAttributes(newParameterVariable0);
+ } else {
...
+ }
\end{lstlisting}
\caption{Correct update script for {\tt setAudioAttributes} API invocation generated by CocciEvolve{} based on the second example in Figure~\ref{fig:arguments_form}}\label{fig:old_coccievolve_result_correct}
}
\end{figure}
This problem severely limits the coding styles that are acceptable as examples for CocciEvolve{}, which subsequently limits its effectiveness. This problem also prevents CocciEvolve{} from being able to produce a working update script for examples which contain free variables.
To alleviate this problem, AndroEvolve{} uses Data Flow Analysis (DFA) to resolve the values of expressions used as arguments in an API method invocation. The resolver handles all forms of Java expressions. The data flow analysis gathers and predicts the set of possible values at any given point in the code; hence, it can predict and resolve the correct replacement values for any expression used in the API invocation arguments.
We built a custom lightweight DFA for this purpose using the symbol resolver from Java Symbol Solver, which is part of JavaParser~\cite{javaparser}. The DFA conducts a bottom-up search from the bound variables or expressions used as the API method invocation arguments and expands the search scope until it finds the value or method definition referred to by the expressions, or until it has explored the entire file. Using this approach, we can predict the values of free variables that are referred to by the bound variables. Values and method invocations found by this analysis replace the original expressions. These replacements ensure that the expressions used as API invocation arguments are literal expressions, static class members, method invocations, or object creations.
The workflow of this DFA is shown in Figure~\ref{fig:data_flow_analysis}. The DFA receives as input the expression to be resolved. This expression is used as an API invocation argument and can be a method invocation expression, name expression, field access expression, or literal expression. Each form of expression requires specific processing, as given in the workflow diagram. To illustrate this workflow, consider the example code in Figure~\ref{fig:update_code_example}.
\setcounter{figure}{8}
\begin{figure}[t]
\centering
\scriptsize{
\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]
43 private int duration = 9;
44 private int frequency = 3;
45 public int amplitude = duration / frequency;
46 public VibrationEffect createVibration(int time,
int amplitude) {
47 return VibrationEffect.createOneShot(time, amplitude);
48 }
...
69 public void onCreate() {
70 if (android.os.Build.VERSION.SDK_INT >=
android.os.Build.VERSION_CODES.O) {
71 vibrator.vibrate(createVibration(3, amplitude));
72 } else {
73 vibrator.vibrate(50);
74 }
75 }
\end{lstlisting}
\caption{Update example code for {\tt vibrate(long)} API}\label{fig:update_code_example}
}
\end{figure}
In this update example, the updated {\tt vibrate} API uses a method invocation expression as its argument (line 71). Following the workflow, we resolve the method definition, {\tt createVibration(...)} (line 46), and process it using the copy method and class definition feature.
Then, we resolve the scope of the method.
However, since {\tt createVibration(...)} is a public method that can be referenced
directly, no object or class is used in its invocation, so there is no scope to resolve. Next, we resolve the arguments of the method invocation. The first argument, {\tt 3}, is an integer literal expression, so no replacement is needed. The second argument, {\tt amplitude}, is a name expression, so we need to resolve its definition. Resolving this argument yields its definition, {\tt duration / frequency} (line 45). Since this definition still contains name expressions, we resolve their values recursively. This process finds the literal expressions {\tt 9} (line 43) for {\tt duration} and {\tt 3} (line 44) for {\tt frequency}. These literals replace the corresponding names in the {\tt amplitude} variable definition in line 45, resulting in:
\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]
45 public int amplitude = 9 / 3;
\end{lstlisting}
In the end, we replace this definition of {\tt amplitude} variable into the value used as the API invocation argument in line 71, resulting in this updated API:
\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]
71 vibrator.vibrate(createVibration(3,
9 / 3));
\end{lstlisting}
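The recursive, bottom-up value resolution illustrated by this walkthrough can be sketched as follows. This is an illustrative toy, not AndroEvolve{}'s actual implementation (which operates on the JavaParser AST and handles scoping): variable definitions are kept in a plain map, every name in an expression that has a known definition is replaced by its recursively resolved value, and cyclic definitions are assumed absent.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy sketch of bottom-up value resolution: names with known definitions
// are recursively replaced by their resolved values.
class ValueResolver {
    private static final Pattern NAME = Pattern.compile("[A-Za-z_][A-Za-z0-9_]*");
    private final Map<String, String> definitions;

    ValueResolver(Map<String, String> definitions) {
        this.definitions = definitions;
    }

    // Replace every known name in the expression with its resolved definition.
    String resolve(String expression) {
        Matcher m = NAME.matcher(expression);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String name = m.group();
            String value = definitions.containsKey(name)
                    ? resolve(definitions.get(name)) // recurse into the definition
                    : name;                          // unknown names stay as-is
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // Definitions taken from lines 43--45 of the update example.
        Map<String, String> defs = Map.of(
                "duration", "9",
                "frequency", "3",
                "amplitude", "duration / frequency");
        System.out.println(new ValueResolver(defs).resolve("amplitude")); // prints: 9 / 3
    }
}
```

Resolving {\tt amplitude} yields {\tt 9 / 3}, mirroring the substitution performed above.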
\setcounter{figure}{7}
\begin{figure}[t]
\centering
\captionsetup{belowskip=1.0pt,aboveskip=3.0pt}
\includegraphics[width=1.0\linewidth]{Diagrams/data_flow_analysis_workflow.png}
\caption{Overview of the data flow analysis workflow}
\label{fig:data_flow_analysis}
\end{figure}
\setcounter{figure}{9}
This data flow analysis is built for AndroEvolve{} as an upgrade from CocciEvolve{}; it is run as a preprocessing step before the update script creation. As a preprocessing step, it does not change the internal behavior of CocciEvolve{} but instead adds a layer of functionality that modifies the input code.
\subsection{Source File Normalization}
Following the approach taken by CocciEvolve{}, AndroEvolve{} also uses source file normalization in its workflow. Source file normalization mitigates the problem of semantically equivalent code being expressed in different forms, which can cause a failed API update. Normalization in AndroEvolve{} focuses on the part of the file that contains the API invocations defined in the API update mapping, along with their arguments. Given an API invocation, source file normalization normalizes the code in three steps:
\begin{enumerate}[nosep,leftmargin=*]
\item Extract the class or object used in the API invocation into a variable assignment;
\item Extract the arguments of the API invocation into variable assignments;
\item Extract the returned value of the API invocation into a variable assignment.
\end{enumerate}
Consider a code fragment containing a {\tt fromHtml(String)} API invocation, as shown in the first example of Figure~\ref{fig:source_file_normalization}. Source file normalization converts the code given in the first example into the normalized form given in the second example. First, all arguments, including the class or object used in the API invocation, are extracted. This extraction introduces the variables {\tt classNameVariable} (line 2), containing the class used in the API invocation, and {\tt parameterVariable0} (line 3), containing the argument of the API invocation. Next, the return value of the API invocation is also extracted, resulting in the {\tt tempFunctionReturnValue} variable (lines 4--5).
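As a string-level illustration of these three steps (the actual normalization operates on the parsed AST; the names {\tt classNameVariable}, {\tt parameterVariable0}, and {\tt tempFunctionReturnValue} follow the figure), a single one-argument static API call could be normalized as:

```java
// Illustrative sketch: emit the normalized form of a one-argument API
// call, following the naming convention shown in the figure.
class Normalizer {
    static String normalize(String scope, String scopeType,
                            String arg, String argType,
                            String returnType, String assignedVar,
                            String api) {
        return scopeType + " classNameVariable = " + scope + ";\n"
                + argType + " parameterVariable0 = " + arg + ";\n"
                + returnType + " tempFunctionReturnValue;\n"
                + "tempFunctionReturnValue = classNameVariable." + api
                + "(parameterVariable0);\n"
                + assignedVar + " = tempFunctionReturnValue;";
    }

    public static void main(String[] args) {
        System.out.println(normalize("Html", "Html", "\"<h2>Title</h2><br>\"",
                "String", "Spanned", "span", "fromHtml"));
    }
}
```

Running this on the {\tt fromHtml} example reproduces lines 2--6 of the second example in Figure~\ref{fig:source_file_normalization}.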
\begin{figure}[t]
\centering
\scriptsize{
\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]
// First Example
1 Spanned span = Html.fromHtml("<h2>Title</h2><br>");
// Second Example
2 Html classNameVariable = Html;
3 String parameterVariable0 = "<h2>Title</h2><br>";
4 Spanned tempFunctionReturnValue;
5 tempFunctionReturnValue = classNameVariable.
fromHtml(parameterVariable0);
6 span = tempFunctionReturnValue;
\end{lstlisting}
\caption{An illustration of the source file normalization result}\label{fig:source_file_normalization}
}
\end{figure}
\subsection{Copying Method and Class Definition}
To provide a correct update, substituting the resolved value for an expression is insufficient if the expression is a method invocation or an object instantiation, because such expressions require their definitions to be available. Accordingly, we must copy the method and class definitions referenced by the resolved expression.
There are several important points to consider for this feature. First, the copied class or method must be defined within the file containing the after-update example. This is due to a limitation of AndroEvolve{} as a tool that works at file scope: if the class or method is defined outside the after-update example file, AndroEvolve{} cannot resolve it.
Second, the copied class or method must be given a unique name, as required by Java.
Lastly, the copied class or method must be in a scope that is accessible by the API invocation in the target file.
The workflow of this feature can be seen in Figure~\ref{fig:copy_method_class}. First, we extract from the code the definitions of the methods and classes referenced by the expression.
Then, we make sure that the target file contains no class or method with the same name as the extracted ones. If a duplicate name is detected, the copied class or method is renamed by appending a number to its name. After validating the names, we modify the access modifiers of the class and method to {\tt public} to make sure that the API invocation in the target file can access them. After all these steps, we copy the class and method definitions to the end of the target file.
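The duplicate-name check can be sketched as follows. This is an assumed simplification (the real tool works on parsed declarations, and its exact renaming scheme may differ): if the target file already declares the name, a number is appended until no clash remains.

```java
import java.util.Set;

// Sketch of unique-name generation for a copied class or method:
// append an increasing number until the name no longer clashes with
// any declaration already present in the target file.
class NameDeduplicator {
    static String uniqueName(String name, Set<String> declaredNames) {
        if (!declaredNames.contains(name)) {
            return name; // no clash, keep the original name
        }
        int suffix = 1;
        while (declaredNames.contains(name + suffix)) {
            suffix++;
        }
        return name + suffix;
    }
}
```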
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{Diagrams/copy_class_method_definition.png}
\caption{Overview of the copy method and class definition workflow}
\label{fig:copy_method_class}
\end{figure}
Figure~\ref{fig:fig:method_copy_illus} illustrates this task.
Lines 1--7 show an example for the {\tt requestAudioFocus(...)} API. The updated API uses an {\tt AudioFocusRequestOreo} object (lines 3--4) as an argument, but the object is not resolved.
To correct the update, we must resolve and copy the relevant class definition and instantiate the {\tt AudioFocusRequestOreo} object. Lines 10--51 show the result of this process. The {\tt AudioFocusRequestOreo} object is instantiated (line 11) and the relevant class is added to the updated code (lines 30--51).
\begin{figure}[t]
\centering
\scriptsize{
\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]
// Example of unresolved class audioFocusRequestOreo
1 if (android.os.Build.VERSION.SDK_INT >=
2 android.os.Build.VERSION_CODES.O) {
3 AudioFocusRequest request = audioFocusRequestOreo.
getAudioFocusRequest();
4 result = audioManager.requestAudioFocus(request);
5 } else {
6 result = audioManager.requestAudioFocus(listener,
type, duration);
7 }
// Example after resolving audioFocusRequestOreo class
10 if (android.os.Build.VERSION.SDK_INT >=
android.os.Build.VERSION_CODES.O) {
11 AudioFocusRequest request = new AudioFocusRequestOreo
(this).getAudioFocusRequest();
12 result = audioManager.requestAudioFocus(request);
13 } else {
14 result = audioManager.requestAudioFocus(listener,
type, duration);
15 }
...
30 class AudioFocusRequestOreo {
31 private AudioFocusRequest audioFocusRequest;
32 public AudioFocusRequestOreo(AudioManager.
33 OnAudioFocusChangeListener listener) {
...
40 }
41 public AudioFocusRequest getAudioFocusRequest() {
...
50 }
51 }
\end{lstlisting}
\caption{Sample updates for {\tt requestAudioFocus()} deprecated API}\label{fig:fig:method_copy_illus}
}
\end{figure}
\subsection{Variable Name Denormalization}
Another problem of CocciEvolve{} is the temporary variables that are added during the update process. These temporary variables add multiple lines of code to the updated file, which can be considered harmful since they make the code less readable and understandable. Typically, most of the added lines merely reference other variables in the code and are thus unnecessary. An example of the temporary variables created by CocciEvolve{} can be seen in Figure~\ref{fig:unnormalized_code}. In this figure, we can see that the temporary variables named {\tt parameterVariable} (lines 11--16) and {\tt classNameVariable} (line 17) only refer to other variables that already exist in the file (the method parameters in line 10).
\begin{figure}[t]
\centering
\scriptsize{
\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]
...
10 public int saveLayer(float left, float top, float right,
float bottom, @Nullable Paint paint,
int saveFlags) {
11 float parameterVariable0 = left;
12 float parameterVariable1 = top;
13 float parameterVariable2 = right;
14 float parameterVariable3 = bottom;
15 Paint parameterVariable4 = paint;
16 int parameterVariable5 = saveFlags;
17 Canvas classNameVariable = mCanvas;
18 if (VERSION.SDK_INT >= 21) {
19 tempFunctionReturnValue = classNameVariable.
saveLayer(parameterVariable0,
parameterVariable1, parameterVariable2,
parameterVariable3, parameterVariable4);
20 } else {
21 tempFunctionReturnValue = classNameVariable.
saveLayer(parameterVariable0,
parameterVariable1, parameterVariable2,
parameterVariable3, parameterVariable4,
parameterVariable5);
22 }
23 }
...
\end{lstlisting}
\caption{Sample CocciEvolve update result for deprecated API {\tt saveLayer}}\label{fig:unnormalized_code}
}
\end{figure}
Addressing this problem is beneficial to the readability and understandability of the updated code. It also makes the code closer to developer-written code and less artificial. Variable name denormalization is our proposed approach for this purpose. Through this denormalization, we aim to remove all unnecessary temporary variables and replace them with their values or the variables they refer to. An example of denormalized code based on the example from Figure~\ref{fig:unnormalized_code} can be seen in Figure~\ref{fig:denormalized_code}. In Figure~\ref{fig:denormalized_code}, the temporary variables are removed and the relevant values are used directly in the method invocations. This results in shorter, more concise, and more understandable code compared to the CocciEvolve{} update result.
\begin{figure}[t]
\centering
\scriptsize{
\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]
...
10 public int saveLayer(float left, float top, float right,
float bottom, @Nullable Paint paint, int saveFlags) {
11 if (VERSION.SDK_INT >= 21) {
12 tempFunctionReturnValue = mCanvas.saveLayer(left,
top, right, bottom, paint);
13 } else {
14 tempFunctionReturnValue = mCanvas.saveLayer(left,
top, right,bottom, paint, saveFlags);
15 }
16 }
...
\end{lstlisting}
\caption{Denormalized code of the one shown in Figure~\ref{fig:unnormalized_code}}\label{fig:denormalized_code}
}
\end{figure}
The steps in the denormalization process are shown in Figure~\ref{fig:variable_denorm}. First, we find the location of the API invocation that uses the temporary variables. For each temporary variable used in the API invocation, we resolve and locate its definition. These variable definitions contain the resolved values as their assigned expressions. These values then replace the temporary variables used as the API's arguments. Finally, we delete the declarations and definitions of the temporary variables, as they are no longer needed.
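A line-based sketch of these steps is shown below. It is a deliberately naive illustration (the actual denormalization works on the AST and resolves each variable properly): declarations of the {\tt parameterVariable}/{\tt classNameVariable} temporaries are collected and dropped, and later uses are replaced by the assigned expressions.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Naive line-based denormalization: collect "<type> tempVar = <expr>;"
// declarations, drop them, and substitute <expr> for each later use.
class Denormalizer {
    private static final Pattern DECL = Pattern.compile(
            "\\s*\\S+\\s+((?:parameterVariable|classNameVariable)\\w*)\\s*=\\s*(.+);");

    static String denormalize(String code) {
        Map<String, String> temps = new LinkedHashMap<>();
        StringBuilder out = new StringBuilder();
        for (String line : code.split("\n")) {
            Matcher m = DECL.matcher(line);
            if (m.matches()) {
                temps.put(m.group(1), m.group(2)); // record and drop the declaration
                continue;
            }
            for (Map.Entry<String, String> e : temps.entrySet()) {
                line = line.replace(e.getKey(), e.getValue()); // substitute uses
            }
            out.append(line).append("\n");
        }
        return out.toString();
    }
}
```

Applied to lines 11 and 17 of Figure~\ref{fig:unnormalized_code}, this turns {\tt classNameVariable.saveLayer(parameterVariable0, ...)} into {\tt mCanvas.saveLayer(left, ...)}.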
\begin{figure}[t]
\centering
\captionsetup{belowskip=1.0pt,aboveskip=3.0pt}
\includegraphics[width=0.95\linewidth]{Diagrams/variable_denormalization.png}
\caption{Overview of the variable denormalization workflow}
\label{fig:variable_denorm}
\end{figure}
\section{Experiment}\label{sec:exp}
\subsection{Dataset}\label{sec:dataset}
Our dataset contains 360 target files covering 20 deprecated Android APIs, extending the APIs that were used to evaluate CocciEvolve{}~\cite{coccievolve}. The dataset was obtained from randomly selected GitHub projects found using AUSearch~\cite{asyrofi2020ausearch}, a tool to search GitHub repositories for API usages. Using this tool, we collected public GitHub repositories that contain invocations of the deprecated APIs and their replacement APIs. Our dataset comprises after-update examples, target files to update that contain usages of the APIs, and one-to-one mappings from the deprecated APIs to the replacement APIs.
Detailed statistics of the target files are shown in Table~\ref{table:data_statistic}.
\begin{table}[t]
\caption{Number of targets in our evaluation dataset}
\begin{center}
\begin{tabular}{ |p{20em}|p{4.5em}| }
\hline
\textbf{API Description} & \textbf{\# Targets}
\\ \hline
addAction(...) & 2
\\ \hline
getAllNetworkInfo() & 11
\\ \hline
getCurrentHour() & 60
\\ \hline
getCurrentMinute() & 60
\\ \hline
setCurrentHour(Integer) & 32
\\ \hline
setCurrentMinute(Integer) & 15
\\ \hline
setTextAppearance(...) & 15
\\ \hline
addGpsStatusListener(...) & 10
\\ \hline
fromHtml(...) & 15
\\ \hline
release() & 11
\\ \hline
removeGpsStatusListener(...) & 5
\\ \hline
shouldOverrideUrlLoading(...) & 0
\\ \hline
startDrag(...) & 4
\\ \hline
abandonAudioFocus(...) & 1
\\ \hline
getDeviceId() & 29
\\ \hline
requestAudioFocus(...) & 53
\\ \hline
saveLayer(...) & 21
\\ \hline
setAudioStreamType(...) & 2
\\ \hline
vibrate(long) & 8
\\ \hline
vibrate(long[], int) & 6
\\ \hline
\end{tabular}
\end{center}
\label{table:data_statistic}
\end{table}
As shown in Table~\ref{table:data_statistic}, there are no target files for the API {\tt shouldOverrideUrlLoading(...)}: during our search, we did not find any after-update example or target file for it.
\subsection{Research Questions}\label{sec:rq}
\subsubsection{RQ1: How many updates can AndroEvolve{} apply correctly?}
\hfill \break
We assess update performance by counting the number of correct updates produced. A correct update contains both the deprecated and the replacement API method in an {\tt if} code block, along with all the methods and classes needed by the replacement API method.
We compare the update performance of AndroEvolve{} and CocciEvolve.
We also asked an experienced Java and Android engineer to check the measured update performance and verify its correctness.
\subsubsection{RQ2: How readable is the updated code produced by CocciEvolve{} and AndroEvolve{}?}
\hfill \break
We measured the readability of the updated code produced by AndroEvolve{} and CocciEvolve{}.
To gain better insight into the readability of the updates, we conducted both an automatic and a manual measurement.
For the automatic measurement, we utilized the state-of-the-art code readability scoring tool proposed by Scalabrino et al.~\cite{readabilitymodel}.
The tool outputs a code readability score on a scale of 0.0 to 1.0, with higher scores indicating better readability. As the readability score is affected by the length of the source code file, we performed static slicing to obtain the parts of the code affected by the update.
The static code slicing is done using JavaParser~\cite{javaparser} by first locating the deprecated and updated Android APIs based on their descriptions. After these APIs are found, we slice the API method invocations and all the variables used in the invocations. The sliced code is then put into a template class and method to allow readability measurement with the tool. An example of a sliced code file is shown in Figure~\ref{fig:code_slicing_example}.
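The final wrapping step can be sketched as follows (the template names {\tt MainActivity} and {\tt main} follow Figure~\ref{fig:code_slicing_example}; the slicing itself, done with JavaParser, is omitted here):

```java
// Sketch of the wrapping step: put sliced statements into a template
// class and method so the readability tool can score them.
class SliceWrapper {
    static String wrap(String slicedStatements) {
        return "class MainActivity {\n"
                + "  public static void main() {\n"
                + slicedStatements + "\n"
                + "  }\n"
                + "}\n";
    }
}
```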
\begin{figure}[t]
\centering
\scriptsize{
\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]
class MainActivity {
public static void main() {
int parameterVariable0 = lastHour + 1;
TimePicker classNameVariable = timePicker;
if (Build.VERSION.SDK_INT >=
Build.VERSION_CODES.M) {
classNameVariable.setHour(parameterVariable0);
} else {
classNameVariable.setCurrentHour(
parameterVariable0);
}
}
}
\end{lstlisting}
\caption{Sliced code example for deprecated {\tt setCurrentHour(...)} API}\label{fig:code_slicing_example}
}
\end{figure}
In the manual measurement, we asked two experienced Android developers to score 60 updated code files, 30 each from CocciEvolve{} and AndroEvolve{}. We chose 30 as the sample size to represent the variation in the syntax of the updated code. The developers were not told which tool produced each updated file. The first developer has 5 years of experience in Android development, while the second has 3 years. For each updated file, the developers were asked to give a score on a Likert scale of 1--5 for the readability of the code and for its naturalness. A higher score indicates higher readability and higher confidence that the code resembles code written by a human. For each pair of updated files produced by CocciEvolve{} and AndroEvolve{}, the developers were also asked to indicate which code they prefer.
\subsubsection{RQ3: How efficient is AndroEvolve{} in producing updates?}
\hfill \break
We measured the time needed for AndroEvolve{} to update a target file. Specifically, we measure the time to perform update script creation and update application. The measurement was conducted on a MacBook Pro with a 2.3 GHz Intel Core i5 processor and 8 GB of 2133 MHz memory. The system ran Java SE 11, with OpenJDK version 11.0.6.
\subsection{Results}\label{result_subsection}
\subsubsection{RQ1: Code Update Accuracy}\label{sec:RQ1}
In total, AndroEvolve{} and CocciEvolve{} managed to correctly update 316 and 249 out of the 360 target files, respectively. AndroEvolve{} thus outperforms CocciEvolve{} by 26.90\%.
Analysis of the results shows that the inclusion of data flow analysis significantly improves the update results of AndroEvolve{}, specifically for the {\tt vibrate(long)}, {\tt vibrate(long[],int)}, and {\tt requestAudioFocus(...)} APIs. After-update examples for these APIs include usages of out-of-method variables that are not handled by CocciEvolve{}. AndroEvolve{} has similar performance to CocciEvolve{} for APIs that do not use out-of-method variables in the after-update example.
Despite the large improvement over CocciEvolve{}, AndroEvolve{} still has several problems. One of these problems affects the {\tt addGpsStatusListener(...)} and {\tt removeGpsStatusListener(...)} APIs, which utilize out-of-file variables in their method invocations. Out-of-file variables are variables that are defined outside the file scope; they can be class members or method invocation arguments defined in other files. Since AndroEvolve{} works only at file scope, these out-of-file variables remain unresolved, causing an incomplete update. An example of updated code that exhibits this problem is shown in Figure~\ref{fig:input_parameter}. In this example, the {\tt callback} object (lines 2 and 6) is a class member whose value is defined in another file.
\begin{figure}[t]
\centering
\scriptsize{
\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]
1 public class MixedPositionProvider extends PositionProvider
implements LocationListener, GpsStatus.Listener {
2 private GnssStatus.Callback callback;
3 public void startUpdates() {
4 GpsStatus.Listener listener = this;
5 if (android.os.Build.VERSION.SDK_INT >=
android.os.Build.VERSION_CODES.N) {
6 locationManager.registerGnssStatusCallback(
callback);
7 }
8 else{
9 locationManager.addGpsStatusListener(listener);
10 }
11 }
12 }
\end{lstlisting}
\caption{An example usage of out-of-file variable in {\tt addGpsStatusListener} method invocation}\label{fig:input_parameter}
}
\end{figure}
AndroEvolve{} cannot handle complex updates that involve updating a single API into multiple APIs, such as updating the {\tt getAllNetworkInfo()} API to the {\tt getAllNetworks()} and {\tt getNetworkInfo(...)} APIs. Unlike the usual Android API update that replaces an API with a newer one, updating {\tt getAllNetworkInfo()} involves the addition of new control flow in the form of a loop that iterates over the {\tt Network} objects returned by {\tt getAllNetworks()} and retrieves their {\tt NetworkInfo} objects using the {\tt getNetworkInfo(...)} method.
AndroEvolve{} also cannot update multiple invocations of an API method written in a single line of code.
While uncommon, this problem has been found in some target files, resulting in incomplete updates. An example of this problem is shown in Figure~\ref{fig:multiple_invocation}. We can see that {\tt getCurrentHour()} is invoked multiple times in the last line (line 7), causing only the first invocation to be updated.
\begin{figure}[t]
\centering
\scriptsize{
\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]
1 int tempFunctionReturnValue;
2 if (android.os.Build.VERSION.SDK_INT >= 23) {
3 tempFunctionReturnValue = timePickerBegin.getHour();
4 } else {
5 tempFunctionReturnValue = timePickerBegin.
getCurrentHour();
6 }
7 dateTime = tempFunctionReturnValue + ":" +
timePickerBegin.getCurrentMinute() + "-" +
timePickerEnd.getCurrentHour() + ":";
\end{lstlisting}
\caption{An example of multiple invocations of {\tt getCurrentHour()} method in a single line}\label{fig:multiple_invocation}
}
\end{figure}
\subsubsection{RQ2: Code Readability}
For the automatic readability measurement, we computed the scores of all 355 updated files and averaged the scores per API.
The detailed scores are shown in Table~\ref{table:readability_measure}. Based on these results, code updated by AndroEvolve{} has higher scores for all APIs. Further analysis of the updated code shows that a bigger improvement is observed for APIs with multiple arguments (e.g., {\tt saveLayer(...)}, {\tt startDrag(...)}).
\begin{table}[t]
\centering
\caption{Updated code automated readability scores}
\label{table:readability_measure}
\begin{tabular}{|l|c|c|c|}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{API}}} & \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{\# Code}}} & \multicolumn{2}{|c|}{\textbf{Average Score}} \\
\cline{3-4}
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{} &
\textbf{AndroEv} & \textbf{CocciEv} \\ \hline
addAction(...) & 0 & 0 & 0
\\ \hline
getAllNetworkInfo() & 11 & 0 & 0
\\ \hline
getCurrentHour() & 60 & 0.6009 & 0.5071
\\ \hline
getCurrentMinute() & 60 & 0.5996 & 0.5172
\\ \hline
setCurrentHour(Integer) & 32 & 0.6006 & 0.3904
\\ \hline
setCurrentMinute(Integer) & 15 & 0.5766 & 0.3962
\\ \hline
setTextAppearance(...) & 15 & 0.5615 & 0.2947
\\ \hline
addGpsStatusListener(...) & 10 & 0.3945 & 0.2307
\\ \hline
fromHtml(...) & 15 & 0.4143 & 0.2593
\\ \hline
release() & 11 & 0.8311 & 0.6890
\\ \hline
removeGpsStatusListener(...) & 5 & 0.4006 & 0.2287
\\ \hline
shouldOverrideUrlLoading(...) & 0 & 0 & 0
\\ \hline
startDrag(...) & 4 & 0.4516 & 0.1440
\\ \hline
abandonAudioFocus(...) & 0 & 0 & 0
\\ \hline
getDeviceId() & 29 & 0.4545 & 0.3974
\\ \hline
requestAudioFocus(...) & 53 & 0.2413 & 0.2290
\\ \hline
saveLayer(...) & 21 & 0.4115 & 0.1011
\\ \hline
setAudioStreamType(...) & 0 & 0 & 0
\\ \hline
vibrate(long) & 8 & 0.5284 & 0.3629
\\ \hline
vibrate(long[], int) & 6 & 0.4437 & 0.2631
\\ \hline
\end{tabular}
\end{table}
The manual readability measurement strengthens the above findings. The average readability score given by the developers for code updated by AndroEvolve{} is 4.817, while the average readability score for code updated by CocciEvolve{} is 2.633. An improvement can also be seen in the scores for code naturalness:
AndroEvolve{}'s code was given an average score of 4.917, while CocciEvolve{}'s received an average score of only 2.433.
\subsubsection{RQ3: Update time}
The update time of AndroEvolve{} is shown in Table~\ref{table:time_measurement}.
\begin{table}[t]
\caption{Time measurement results of AndroEvolve{} (in seconds)}
\begin{center}
\begin{tabular}{ |p{12em}|p{5em}|p{5em}| }
\hline
\textbf{API} & \textbf{Update Creation} & \textbf{Update Application}
\\ \hline
addAction(...) & 9.601 & -
\\ \hline
getAllNetworkInfo() & 9.083 & 9.417
\\ \hline
getCurrentHour() & 13.840 & 11.140
\\ \hline
getCurrentMinute() & 8.660 & 10.172
\\ \hline
setCurrentHour(Integer) & 6.365 & 7.133
\\ \hline
setCurrentMinute(Integer) & 9.345 & 6.173
\\ \hline
setTextAppearance(...) & 7.935 & 13.144
\\ \hline
addGpsStatusListener(...) & 9.434 & 6.206
\\ \hline
fromHtml(...) & 9.278 & 13.448
\\ \hline
release() & 10.792 & 6.622
\\ \hline
removeGpsStatusListener(...) & 9.316 & 6.500
\\ \hline
shouldOverrideUrlLoading(...) & - & -
\\ \hline
startDrag(...) & 10.030 & 6.483
\\ \hline
abandonAudioFocus(...) & 12.230 & -
\\ \hline
getDeviceId() & 9.231 & 9.001
\\ \hline
requestAudioFocus(...) & 9.340 & 11.112
\\ \hline
saveLayer(...) & 8.767 & 6.589
\\ \hline
setAudioStreamType(...) & 9.197 & -
\\ \hline
vibrate(long) & 8.700 & 5.876
\\ \hline
vibrate(long[], int) & 8.292 & 5.747
\\ \hline
\end{tabular}
\end{center}
\label{table:time_measurement}
\end{table}
Both the update-script creation and update-script application steps in AndroEvolve{} took an average of less than 15 seconds to execute. Given an after-update code example, a target file, and an API update mapping, AndroEvolve{} can update the API usages in the target file in less than a minute.
Further analysis yields several observations. First, aside from API complexity, the number of API invocations in the target file also affects the update time. Second, when processing multiple files containing the same API usage, time can be saved by reusing the same update script.
\section{Discussion}\label{sec:discuss}
Based on the results in Section~\ref{result_subsection}, it is evident that AndroEvolve{} achieves better performance than CocciEvolve{}. Our approach solves the problem of out-of-method variables by using data flow analysis to resolve their values. Moreover, variable name denormalization improves the readability of the updated code, as demonstrated by our automatic and manual readability measurements.
Despite this achievement, AndroEvolve{} still fails to update some target files, as described in Section~\ref{sec:RQ1}. Some failures are caused by multiple invocations of the same API within a single line of code. A possible solution is to temporarily separate the repeated API invocations in the same line into multiple statements that can be updated independently. Other failures occur due to usages of out-of-file variables in the after-update example. To mitigate this problem, we need to extend the data flow analysis to work at project scope and change the input of AndroEvolve{} from a file to a project. Problems also occur for APIs with complex updates that convert a single API invocation into multiple API invocations. Solving this problem would require an overall change in AndroEvolve{} to accept non-one-to-one API update mappings.
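The statement-splitting mitigation for repeated invocations could look like the following sketch. It is a hypothetical string-level illustration, not an implemented feature; the temporary-variable name {\tt tmpReturnValue} and the {\tt int} result type are assumptions for the {\tt getCurrentHour()} case.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: extract each "<receiver>.<api>()" occurrence in a
// statement into its own temporary statement, so every occurrence can be
// updated independently.
class InvocationSplitter {
    static List<String> split(String statement, String apiName) {
        List<String> out = new ArrayList<>();
        Matcher m = Pattern.compile("(\\w+)\\." + apiName + "\\(\\)").matcher(statement);
        StringBuffer rest = new StringBuffer();
        int i = 0;
        while (m.find()) {
            String tmp = "tmpReturnValue" + i++;
            out.add("int " + tmp + " = " + m.group() + ";"); // one statement per call
            m.appendReplacement(rest, tmp);
        }
        m.appendTail(rest);
        out.add(rest.toString()); // the original statement, now using temporaries
        return out;
    }
}
```

For the statement in Figure~\ref{fig:multiple_invocation}, this yields one extraction statement per {\tt getCurrentHour()} call, each of which can then be updated on its own.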
\begin{figure}[t]
\centering
\scriptsize{
\begin{lstlisting}[language=java,numbers=none,sensitive=true,columns=flexible,basicstyle=\ttfamily]
1 public ChromeNotificationBuilder addAction(int icon,
CharSequence title, PendingIntent intent) {
2 if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
3 Notification.Action action = new Notification.
Action.Builder(Icon.createWithResource
(mContext, icon), title, intent).build();
4 mBuilder.addAction(action);
5 } else {
6 mBuilder.addAction(icon, title, intent);
7 }
8 return this;
9 }
\end{lstlisting}
\caption{Code example for the {\tt addAction(...)} API that uses an inner class constructor}\label{fig:inner_class_constructor}
}
\end{figure}
Another problem occurs when the after-update example makes use of an inner class constructor. This problem is due to a limitation of Coccinelle4J, which AndroEvolve{} inherits. Coccinelle4J supports only Middleweight Java, a subset of the Java grammar that does not include inner class constructors. As a consequence, AndroEvolve{} cannot handle an after-update example that contains inner class constructors. Such an example is shown in Figure~\ref{fig:inner_class_constructor}. In this example, the creation of the {\tt action} object (line 3) uses the inner class constructor {\tt Notification.Action.Builder}. This problem affects the update of several APIs, including {\tt addAction(...)}, {\tt abandonAudioFocus(...)}, and {\tt setAudioStreamType(...)}.
\section{Related Work}\label{sec:related}
\textbf{API deprecation}.
API deprecation has been studied extensively~\cite{li2018characterising, zhou2016api, brito2016developers, yang2018android, robbes2012developers, horadevelopersapievolution, sawant2018reaction, fazzini2019automated, coccievolve}.
Li et al.~\cite{li2018characterising} proposed a tool called CDA to characterize deprecated Android APIs. They found inconsistent annotation and documentation of deprecated APIs, and that most deprecated APIs are used in popular libraries.
Zhou et al.~\cite{zhou2016api} examined the usage of deprecated APIs in 26 open-source Java frameworks and libraries and found that many of these APIs were never updated. They proposed Deprecation Watcher, a tool to detect deprecated Android API usages in code examples on the web.
Brito et al.~\cite{brito2016developers} conducted a large-scale analysis of Java systems to measure the usage of deprecation replacement messages. Their analysis showed that a number of deprecated APIs did not provide such messages.
Yang et al.~\cite{yang2018android} investigated the impact of Android OS updates on Android apps. They presented an automatic approach to detect the parts of Android apps affected by an OS update.
Some studies focus on the effect of API deprecation~\cite{robbes2012developers, horadevelopersapievolution, sawant2018reaction}. Robbes et al.~\cite{robbes2012developers} conducted a case study on the Smalltalk ecosystem and found that API deprecation messages are not always helpful. Hora et al.~\cite{horadevelopersapievolution} conducted a case study on the impact of API evolution in the Pharo ecosystem, with similar findings: API changes can have a large impact on client systems, methods, and developers, and API replacements cannot be resolved uniformly.
Sawant et al.~\cite{sawant2018reaction} replicated the study on Java. They found that only a small fraction of developers react to API deprecation and that most of these developers prefer to remove usages of deprecated APIs.
Our study also deals with API deprecation. It focuses on automatically updating the usages of deprecated Android APIs. Similar studies on this topic have been done recently. AppEvolve~\cite{fazzini2019automated} is one of the first tools proposed for this purpose. It performs API updates by learning from both before- and after-update examples.
CocciEvolve{}~\cite{coccievolve} is the current state-of-the-art tool for the automated update of deprecated Android API usages. CocciEvolve{} improves on AppEvolve by using only a single after-update example to perform an API update and by providing a highly readable and configurable API update script in the form of a semantic patch. CocciEvolve{} also solves AppEvolve's failure to update code with a different form or syntax, as highlighted by the replication study of Thung et al.~\cite{thung2020automated}, by utilizing parameter variable normalization.
\textbf{Program transformation}. Program transformations have been studied extensively \cite{Visser:2001:SLP:647200.718711, LASE, Meng:2013:LLA:2486788.2486855, Rolim:2017:LSP:3097368.3097417, lawall2018coccinelle, brunel2009foundation, kang2019semantic}.
Stratego~\cite{Visser:2001:SLP:647200.718711} is a language for program transformation based on the paradigm of rewriting strategies; it performs transformations following user-written transformation rules. LASE~\cite{LASE, Meng:2013:LLA:2486788.2486855} is an example-based program transformation tool that is capable of locating and applying systematic edits. LASE provides users with a view of each syntactic edit and its corresponding context, allowing users to review and correct the edit suggestions. Rolim et al.~\cite{Rolim:2017:LSP:3097368.3097417} proposed REFAZER, a technique that automatically learns program transformations by observing code edits performed by developers.
Coccinelle~\cite{lawall2018coccinelle, brunel2009foundation} is a C-based program matching and transformation tool that has been utilized for the automated evolution of the Linux kernel. Coccinelle allows developers to write their transformation rules in the Semantic Patch Language (SmPL). Recently, Kang et al.~\cite{kang2019semantic} proposed Coccinelle4J, a port of Coccinelle to the Java language. Coccinelle4J allows the transformation of Java programs using the same method as Coccinelle, through semantic patches written in SmPL.
In our work, AndroEvolve{} applies program transformation to update deprecated Android API usages. It uses SmPL to express the transformations and Coccinelle4J to apply them.
\section{Conclusion and Future Work}\label{sec:conclusion}
Updating the usages of deprecated Android APIs is a priority to ensure the functionality of Android apps on current and previous versions of the Android OS. However, performing such updates is time-consuming and labor-intensive. In this work, we proposed AndroEvolve{}, an automated Android API usage update tool. AndroEvolve{} uses data flow analysis to resolve the values of out-of-method variables, allowing AndroEvolve{} to work at the file scope. AndroEvolve{} also performs variable denormalization to produce updated code with good readability. We evaluated AndroEvolve{} using a dataset of 360 target files, on which it produced 316 successful updates. On the same dataset, CocciEvolve{}, the previous state-of-the-art tool, only produced 249 successful updates. We also evaluated the readability of the updated code using both manual and automatic measurements. In the manual measurement, we asked two developers for their opinions on the readability of the updated code, while in the automatic measurement, we used a code readability scoring tool. In the two measurements, AndroEvolve{} outperforms CocciEvolve{} by 49.89\% and 82.94\%, respectively.
For future work, we plan to increase the capability of AndroEvolve{}. First, we plan to improve the data flow analysis to allow resolving values that are located in other files within the same project. This addition will make AndroEvolve{} capable of handling out-of-file variables. Second, we also plan to handle more complex Android API updates, especially for cases where a single API is updated into several different APIs. While such a case is uncommon, this improvement would increase the overall effectiveness of AndroEvolve{}.
\balance
\section{Introduction}
\label{sec:introduction}
Despite long model-building efforts, the origin and nature of dark matter remain one of the biggest puzzles in physics and astronomy. In recent years, Cold Dark Matter (CDM) has emerged as the class of models best able to reproduce the large scale structure formation of the universe. In these models, dark matter is made out of weakly interacting non-relativistic particles with a small initial velocity dispersion, inherited from interactions in the early universe, that does not erase structures on galactic and sub-galactic scales. Among the various models, the combination of cosmic acceleration measurements and the CMB evidence for a flat universe led to the $\Lambda$CDM model, which is nowadays considered the `Standard Model' of cosmology.
Despite its success in explaining the large scale structure of the universe, $\Lambda$CDM was believed to suffer from some problems related to galaxy formation, such as the cusp-core problem, the missing satellite problem and the too-big-to-fail problem~\cite{Bullock_2017}. In recent years, the increasing interest in small scales led to further investigations pointing out that the aforementioned problems may be due to unaccounted-for baryonic feedback mechanisms or to new exotic dark matter physics on small scales~\cite{Brooks_2013,Spergel:1999mh,Read:2018fxs,Colin:2000dn}, but a final and exhaustive solution is still lacking. Regardless of the veracity of the small-scale problems, Weakly Interacting Massive Particles (WIMPs) with mass $\sim \mathcal{O}(100)$ GeV, long considered the most promising CDM candidates, have continuously eluded every kind of experimental search, from collider searches to direct/indirect detection experiments.
These concerns about $\Lambda$CDM and WIMPs led to the study of alternative DM models. Among them, in recent years the idea of bosonic ultralight CDM, also called Fuzzy Dark Matter (FDM), has been proposed~\cite{Hu:2000ke,Schive:2014dra,Hui:2016ltb,Hui:2021tkt}. In one of its prominent versions, DM is made of ultralight axion-like particles that form halos as Bose-Einstein condensates.
In this theory each axionic particle can develop structures on the scale of the de Broglie wavelength thanks to gravitational interactions. This is an ensemble effect given by the mean properties of every single axion field. A prominent soliton, i.e. a state where self-gravity is balanced by the effective pressure arising from the uncertainty principle, develops at the centre of every bound halo. The soliton properties depend on the axion mass, but usually its extension is assumed to be much smaller than the galaxy or galaxy cluster size. An axion with mass around $10^{-22}$ eV and decay constant $f\sim 10^{16\div 17}$ GeV has been pointed out as the best candidate to represent the dominant part of CDM in the universe, since the wave nature of such a particle can suppress kpc scale cusps in DM halos and reduce the abundance of low mass halos~\cite{Schive:2014hza,Schive:2014dra,Hui:2016ltb}.
Recent studies put severe constraints on the vanilla FDM model without self-interactions, where the usual cosine axionic potential is approximated as $1-\cos(\phi/f)\sim \frac{\phi^2}{2f^2}$. Various analyses of the Lyman-$\alpha$ forest, satellite galaxy formation, the Milky Way core and black hole superradiance leave as the only viable mass window $m_\phi\gtrsim \mathcal{O}(1)\times 10^{-21}$ eV~\cite{Marsh:2018zyw,Chan:2021ukg,Jones:2021mrs,Nadler:2020prv,Zu:2020whs,Marsh_2019,Nebrin_2019,Maleki:2020sqn,Marsh:2021lqg} and constrain the abundance of FDM candidates at different mass scales.
These experimental bounds imply that FDM cannot solve the alleged small-scale problems affecting $\Lambda$CDM, as the Jeans mass (representing the lower bound on the mass of DM halos that can form) rapidly decreases with increasing ultralight boson mass \cite{Nebrin_2019}. Nevertheless, even in this case, these problems can be solved by baryonic physics, and a better understanding of galaxy formation may allow us to discriminate between standard CDM and FDM models. Indeed, it was proven that the suppression of small-mass halos in the FDM model causes a delay in the onset of Cosmic Dawn and the Epoch of Reionization. Future experiments, such as the HERA survey, will measure the neutral hydrogen (HI) 21cm line power spectrum at high statistical significance across a broad range of redshifts~\cite{Jones:2021mrs,Nebrin_2019}, and their findings may be able to discriminate between the standard WIMP and FDM scenarios.
Since experimental bounds and simulations strongly constrain the original FDM model with negligible self-interactions, many extensions of it have been studied. It was shown that for large initial misalignment angles ALP self-interactions can affect the baryonic structure and accelerate star formation in the early universe, or induce oscillon formation that can give rise to detectable low frequency stochastic gravitational waves~\cite{Arvanitaki:2019rax}. Other authors suggest that FDM may not represent the entirety of DM~\cite{Schwabe:2020eac} or that FDM may not be given by a single component, being made out of multiple ultralight ALPs~\cite{Broadhurst:2018fei}.
The extremely high value of the decay constant, together with the possible multiple axionic nature of FDM, has been claimed to be a possible sign in favour of the string axiverse~\cite{Hui:2016ltb,Rogers:2020ltq}, where a plethora of axion-like particles (ALPs) naturally emerges from 4D effective theories. However, in this paper we point out that obtaining an FDM axion with the correct mass and decay constant is not automatic in string theory. Indeed, even if one would naively think that ultralight axions generically emerge from string theory equipped with naturally high decay constants, reproducing the right relic abundance turns out to be hard and provides sharp predictions for fundamental microscopic parameters. We carry out a detailed analysis, studying the general features of closed and open string ALPs coming from type IIB string theory. Focusing on simple extra-dimensional geometries and using the most common moduli stabilisation prescriptions, for each class of ALPs we provide general predictions for the expected mass, decay constant and dark matter abundance. We discuss the settings of the microscopic parameters that lead to ultralight axions representing non-negligible fractions of DM, and we estimate how these requirements translate into stringent predictions for the relevant energy scales of the 4D effective field theory, such as the Kaluza-Klein (KK) scale, the gravitino mass and the scale of inflation. Finally, we compare our predictions for FDM ALPs with current observational constraints and we highlight which stringy FDM candidates occupy a region of the parameter space that will be probed by next generation experiments.
This work is organised as follows: in Section~\ref{sec:StringOrigin} we introduce our notation and we briefly sum up how ALPs naturally arise from type IIB string theory as closed and open string axions. Moreover, we discuss the non-trivial theoretical implications hiding behind the requirements of matching the right mass, decay constant and abundance. In Section~\ref{sec:closedALPs} we focus on closed string axion FDM models. We work with type IIB string theory compactified to 4D on six dimensional Calabi-Yau (CY) orientifolds. Considering the two most prominent moduli stabilisation prescriptions for this setting, i.e.\ the Large Volume Scenario (LVS)~\cite{Balasubramanian:2005zx} and KKLT~\cite{Kachru2003}, we scan over the different axion classes that can represent significant fractions of DM, i.e.\ $C_4$, $C_2$ axions and thraxions. While thraxions turn out to be good FDM candidates in both setups, $C_4$ and $C_2$ axions may give rise to FDM only in LVS. In Section~\ref{sec:PredExpConstr} we discuss our findings and compare them with state-of-the-art experimental bounds, also considering how future experiments will be able to constrain the allowed ultralight axion parameter space. We also provide some intuition about the probabilistic distribution of these particles in the string landscape and we try to figure out how our results would be affected by considering more complex extra-dimensional geometries.
\section{String origin of ultralight axionic DM candidates}
\label{sec:StringOrigin}
The 4D effective field theory coming from string compactification contains many scalar fields, named moduli, which parametrise the size and the shape of the extra dimensions. Moduli appear at tree-level as massless and uncharged scalar fields which, thanks to their effective gravitational coupling to all ordinary particles, would mediate some undetected long-range fifth forces. For this reason it is necessary to develop a potential for these particles in order to give them a mass. This problem goes under the name of moduli stabilisation.
Since the number of ALPs is related to the number of moduli, which can easily reach several hundred, we can have many ultralight axion candidates, which create the so-called axiverse~\cite{Cicoli:2012sz}. On the other hand, it is essential to notice that, although string compactifications carry plenty of candidates for axion and axion-like weakly interacting particles, there are several known mechanisms by which they can be removed from the low energy spectrum.
The low energy spectrum below the compactification scale generically contains many axion-like particles which arise either as closed string axions, which are the KK zero modes of 10D antisymmetric tensor fields, or as the phase of open string modes. While the number of closed string axions is related to the topology of the internal manifold, the number of open string axions is more model dependent since their existence relies upon the brane setup. In the next section, we will briefly describe the main properties of both closed and open string axions, trying to understand what conditions are required in order to reproduce viable FDM particles.
Let us now focus on the most relevant requirements that our fields need to satisfy in order to be good FDM candidates. A commonly used set of axion conventions is
\begin{equation}
{\cal L}=\frac{1}{2}f^2(\partial \theta)^2-Ae^{-S}\cos(\theta)\,,
\label{eq:AxionLagr}
\end{equation}
where $f$ is the axion decay constant and $S$ represents the instanton action that gives rise to the axion potential. With these conventions the canonically normalised axion $\phi=f\theta$ has periodicity $2\pi f$, and the value of $Sf$ corresponding to (half of a) Giddings-Strominger wormhole is (for a review see~\cite{Hebecker:2018ofv})
\begin{equation}
Sf=\frac{\sqrt{6}\pi}{8}\simeq 0.96\fstop
\end{equation}
Given that FDM particles have to be produced through the misalignment mechanism and that a GUT scale decay constant implies that the Peccei-Quinn symmetry is broken before the inflationary stage, the DM abundance of the physical ALP particle, $\phi=f\theta$, can be expressed as~\cite{Cicoli:2012sz}:
\begin{equation}
\label{eq:DMabundance}
\frac{\Omega_{\phi}h^2}{0.112}\simeq 1.4 \times \left(\frac{m_\phi}{10^{-22} \mbox{eV}}\right)^{1/2}\left(\frac{f}{10^{17}\mbox{GeV}}\right)^2 \theta_{mi}^2\sim1 \coma
\end{equation}
where $\theta_{mi}\in [0,2\pi]$ is the initial misalignment angle with respect to the minimum of the potential. In Eq.~\eqref{eq:DMabundance} we are considering a small initial field displacement; large misalignment will be briefly treated in Appendix~\ref{sec:anharm}. From Eq.~\eqref{eq:AxionLagr} we see that the axion mass is given by $m_\phi^2=A M_P^4 e^{-S}/f^2$.
Therefore, assuming an initial misalignment angle $\theta_{mi}\sim\mathcal{O}(1)$, a prefactor $A\sim\mathcal{O}(1)$, and imposing the right value for the axion mass and decay constant, $m_\phi\sim 10^{-22}$ eV and $f\sim 10^{-2} M_P$, we have that
\begin{equation}
\label{eq:naivebound}
Sf=-f\ln\left(\frac{m_\phi^2f^2}{A M_P^4} \right)\gtrsim 1\fstop
\end{equation}
This means that the existence of an FDM candidate tends to slightly violate the Weak Gravity Conjecture (WGC)~\cite{Alonso:2017avz,Hebecker:2018ofv}. Hence, in this paper we are going to check the most generic closed string axion candidates in terms of their ability to reach a regime where they acquire their mass from an instanton with $Sf={\cal O}(a\, few)$, as indicated by Eq.~\eqref{eq:naivebound}. What we find is summarised in Table~\ref{tab:closedaxions}: most candidates can provide only a fraction of FDM, while two candidates ($C_2$ axions and thraxions, and to some extent also $C_4$ in certain limits) can violate the bound $Sf\lesssim 1$ suggested by arguments surrounding the WGC (see below), thus potentially allowing for all dark matter to be FDM.
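As a quick numerical cross-check of the estimates above, the following sketch evaluates the wormhole value of $Sf$, the relic abundance of Eq.~\eqref{eq:DMabundance} at the FDM benchmark point, and the instanton action implied by Eq.~\eqref{eq:naivebound}. The reduced Planck mass $M_P\simeq 2.4\times 10^{18}$ GeV and the $\mathcal{O}(1)$ choices $A=\theta_{mi}=1$ are assumptions made here for concreteness, not values fixed by the text.

```python
import math

# Assumed convention: reduced Planck mass in GeV
M_P = 2.435e18
eV = 1e-9  # 1 eV in GeV

# Giddings-Strominger wormhole value quoted in the text: sqrt(6) pi / 8
Sf_wormhole = math.sqrt(6) * math.pi / 8

# Relic abundance ratio, Eq. (eq:DMabundance), at the FDM benchmark point
m_phi = 1e-22 * eV        # axion mass in GeV
f_GeV = 1e17              # decay constant in GeV
theta_mi = 1.0            # O(1) misalignment angle (assumption)
omega_ratio = 1.4 * math.sqrt(m_phi / (1e-22 * eV)) \
              * (f_GeV / 1e17)**2 * theta_mi**2

# Instanton action from m^2 = A M_P^4 e^{-S} / f^2 with A ~ 1, f ~ 1e-2 M_P
f = 1e-2 * M_P
S = -math.log(m_phi**2 * f**2 / M_P**4)
Sf = S * f / M_P          # S f in Planck units

print(f"wormhole Sf ~ {Sf_wormhole:.2f}, Omega/Omega_DM ~ {omega_ratio:.1f}, "
      f"S ~ {S:.0f}, Sf ~ {Sf:.1f} M_P")
```

With these inputs one finds $S\simeq 237$ and $Sf\simeq 2.4\,M_P$, i.e. a few times the wormhole value $Sf\simeq 0.96$, in line with the statement that an FDM candidate requires $Sf=\mathcal{O}(a\,few)$.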
The WGC \cite{ArkaniHamed:2006dz,Palti:2019pca} suggests that there must exist (some) charged states whose charge-to-mass ratio is larger than that of an extremal black hole in the theory, implying that gravity should be the weakest force. Since axions can be seen as $0$-form gauge fields, the WGC should hold for them as well. The axionic version of the WGC states that there must be an instanton whose action satisfies
\begin{equation}
S f \lesssim \alpha M_P\coma
\end{equation}
where $\alpha$ is an $\mathcal{O}(1)$ constant depending on the extremality bound entering the formulation of the conjecture. However, general extremal solutions for instantons have not been found yet, therefore the precise value of $\alpha$ is known only for special cases (see e.g.~\cite{Rudelius:2015xta,Brown:2015iha,Hebecker:2015zss,Demirtas:2019lfi}).
Nevertheless, we can refine the statement below Eq.~\eqref{eq:naivebound} in the following way. Since the axion mass has an exponential dependence on the instanton action $S$, whether the WGC is satisfied or violated crucially depends on the precise extremality bound, i.e. on the value of $\alpha$ entering the formulation of the WGC. It is then quite interesting that experiments constraining the parameter space of FDM ALPs may be able to probe the upper limit of the axionic WGC, thus shedding some light on the underlying theory of quantum gravity.
\subsection{Closed string axions}
\label{sec:csALPs}
In string theory, axion-like particles coming from closed string modes arise from the integration of $p$-form gauge potentials over $p$-cycles of the compact space. In what follows we consider type IIB string compactifications where axions arise from the integration of the NS-NS 2-form $B_2$ and the R-R 2-form $C_2$ over 2-cycles, $\Sigma_2^I$, or from the integration of the R-R 4-form $C_4$ over 4-cycles, $\Sigma_4^I$. Another axion is given by the R-R 0-form $C_0$. In order to understand where these axionic particles come from, we define the set of harmonic $(1,1)$-forms $\omega_I$, $I=\{1,\dots,h_{1,1}\}$, which are representatives of the Dolbeault cohomology group $H_{\bar{\partial}}^{1,1}(X_6,\mathbb{C})$, and the dual basis $\tilde{\omega}_I$ of $H^{2,2}$, which satisfy the following normalisation conditions~\cite{Baumann2015}
\begin{equation}
\displaystyle\int_{\Sigma_2^I} \omega^J=\alpha'\delta_I^J\coma \qquad \displaystyle\int_{\Sigma_4^I}\tilde{ \omega}^J=(\alpha')^2\delta_I^J \coma
\end{equation}
\begin{equation}
b_I=\frac{1}{\alpha'}\displaystyle\int_{\Sigma_2^I}B_2\coma \quad c_I=\frac{1}{\alpha'}\displaystyle\int_{\Sigma_2^I}C_2\coma\quad d_I=\frac{1}{(\alpha')^2}\displaystyle\int_{\Sigma_4^I}C_4\coma \quad
\end{equation}
\begin{equation}
B_2=B_2(x)+b^I(x)\omega_I\coma \quad C_2=C_2(x)+c^I(x)\omega_I\coma \quad C_4=d^I(x)\tilde{\omega}_I\fstop
\end{equation}
Here $B_2(x)=B_{\mu\nu}dx^\mu\wedge dx^\nu$ and $C_2(x)=C_{\mu\nu}dx^\mu\wedge dx^\nu$ are 4-dimensional 2-forms and $\alpha'$ is the inverse string tension.
After the orientifold involution the cohomology group $H^{1,1}$ splits into a direct sum of orientifold-even and orientifold-odd 2-form cohomologies. Therefore $\omega^I$ decomposes into $\omega^i$ (even) and $\omega^\alpha$ (odd), where $i=1,\dots,h^{1,1}_+$, $\alpha=1,\dots,h^{1,1}_-$ and $h^{1,1}_++h^{1,1}_-=h^{1,1}$. In addition $B_2(x)$, $C_2(x)$ are projected out and we are left with the following invariant 2- and 4-form fields:
\begin{equation}
B_2=b^\alpha(x)\omega_\alpha\coma C_2=c^\alpha(x)\omega_\alpha\coma C_4=d^i(x)\tilde{\omega}_i\fstop
\end{equation}
The K\"ahler form can be written as $J=t^i(x)\omega_i$, where $t^i(x)$ are orientifold invariant real scalar fields which parametrise the volume of internal 2-cycles that are even under orientifold involution.
The invariant complex structure moduli are given by $\zeta^a$, $a=1,\dots,h^{2,1}_-$ while the dilaton $\phi$ and $C_0$ are automatically invariant under orientifold involution.
After we determine the invariant scalar degrees of freedom, we need to rearrange them into the bosonic components of chiral multiplets of $\mathcal N=1$ supersymmetry.
The proper coordinates of the moduli space turn out to be $h_-^{1,1}$ 2-form fields $G_\alpha$, $h_+^{1,1}$ K\"ahler moduli $T_i$, $h_-^{2,1}$ complex structure moduli $\zeta^a$ and the axio-dilaton $S$~\cite{Grimm:2004uq}:
\begin{equation}
S=C_0+i\, e^{-\phi}\coma G_\alpha=c_\alpha-S b_\alpha\coma T_i=\tau_i + i\,d_i+\frac{i\,\kappa_{i\alpha\beta}}{2 \left(S-\bar{S}\right)}G^\alpha\left(G-\bar{G}\right)^{\beta}\coma
\end{equation}
where $\tau_i=\frac{1}{2}\kappa_{ijk}t^{j}t^{k}$, while $\kappa_{ijk}$ and $\kappa_{i\alpha\beta}$ are intersection numbers.
We immediately see that the axionic content of the theory coming from closed string modes is given by the fields $C_0$, $c_\alpha$, $b_\alpha$, $d_i$, whose number depends on the geometrical structure of the extra dimensions.
\begin{figure}
\begin{center}
\includegraphics[width=0.25\textwidth, angle=0]{SC1dof.png} \includegraphics[width=0.35\textwidth, angle=0]{FC2dof.png}
\includegraphics[width=0.35\textwidth, angle=0]{thrax_geo.jpeg}
\caption{Pictorial representation of Swiss-cheese (left), K3 fibred geometry (centre) and double-throats (right) in Calabi-Yau (CY) threefolds.} \label{fig:c4picture}
\end{center}
\end{figure}
Moreover, a new class of ultralight axions coming from flux compactifications of type IIB string theory was recently discovered~\cite{Hebecker:2018yxs}. These so-called \emph{thraxions} are axionic modes living at the tip of warped multi-throats of the compact manifold, near a conifold transition locus in complex structure moduli space. As shown in \cite{Hebecker:2018yxs}, at the tip of such throats there exists a 4D mode $c$ that can be thought of as the integral of the two-form $C_2$ over the $S^2$ collapsing at the conifold point, as measured far away from that point. Although so far no study has been carried out on the phenomenology of such axions, it was shown in \cite{Carta:2020ohw} that they exist in a sizeable fraction of orientifolds of the known compact manifolds realised as complete intersections of polynomial equations in products of projective spaces, also known as CICYs \cite{Candelas:1987kf}. More generally, it is expected that Klebanov-Strassler throats with tiny warp factors are widely present in type IIB CY orientifold or F-theory models \cite{Ashok:2003gk,Denef:2004ze,Hebecker:2006bn}. Therefore, in this work we study how they behave as possible FDM candidates, as they are known to be ultralight and to possess a flux-enhanced decay constant.
Being interested in axions that can nearly saturate the WGC bound, we analyse some simple setups that allow us to estimate the maximum value of $Sf$. These results are summed up in Table~\ref{tab:closedaxions} and further details can be found in Appendix~\ref{sec:closed_examples}. From our analysis it turns out that $C_4$ axions and thraxions are the best candidates to satisfy the constraint of Eq.~\eqref{eq:naivebound}. To study the behaviour of $C_4$ axions having $Sf\sim\mathcal{O}(1)$, we consider two different Calabi-Yau (CY) geometries: the Swiss-cheese case and the fibred case where the overall volume of the extra dimensions is parametrised by a single or by two degrees of freedom respectively. A pictorial view of these geometries is given in Figure~\ref{fig:c4picture}.
These results are in agreement with what was previously stated in the literature about the construction of explicit models and in works where a full mathematical analysis has been carried out for specific axion classes~\cite{Demirtas:2019lfi}. Indeed, no cases have been reported for $C_4$ and $C_2$ axions in which it was possible to clearly violate the WGC constraint, even in its weak form, while keeping the theory under control. Concerning thraxions, we analyse both the case in which their mass is independent of the stabilisation of the K\"ahler moduli and the case in which it gets lifted by their presence. Since their existence relies only on the presence of multi-throats and fluxes inside such throats, we do not have to specify any particular geometry for the compact manifold, as the thraxion features depend only on its volume $\mathcal{V}$. As shown in Table~\ref{tab:closedaxions}, we are also concerned with $K,\,M$, the flux numbers coming from the integrals of the $H_3,\, F_3$ field strengths, and with the string coupling $g_s$.
\begin{table}[t!]
\centering
\begin{tabular}{l|lcc}
Axion & $Sf$ \\[3pt]
\hline\\
$C_0$ & $\sim 1/\sqrt{2}\,M_P$ \\[3pt]
$B_2$ & $\lesssim \,M_P$ \\[3pt]
$C_2$ & $\sim$ \begin{tabular}{l}
$S_{ED1}f\sim \sqrt{g_s} \,M_P$ \\
$S_{ED3}f\sim \sqrt{g_s}\,{\cal V}^{1/3} M_P$
\end{tabular} \\[5pt]
$C_4\; (1\; dof)\qquad$ & $\sim \sqrt{3/2}\,M_P$ \\[3pt]
$C_4\; (2\; dof)$ & $< \sqrt{3}/2\,M_P$ \\[3pt]
$C_{2,thrax}$ & $\sim$ \begin{tabular}{l}
$S_{ED1}f
$\sim\frac{3 \sqrt{g_s K M}}{\sqrt{2}\, \mathcal{V}^{1/3}}M_P$\\
$S_{eff}f_{eff}$
$\sim \frac{3\sqrt{2}\, \pi K }{\mathcal{V}^{1/3}} M_P$
\end{tabular} \\[5pt]
\hline
\end{tabular}
\caption{Boundaries on $Sf$ for different classes of closed string axions. These results arise from the study of simple extra-dimensions geometries. Further details are contained in Appendix \ref{sec:closed_examples}.} \label{tab:closedaxions}
\end{table}
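To get a feel for the thraxion rows of Table~\ref{tab:closedaxions}, one can evaluate the two expressions for illustrative microscopic parameters. The flux numbers, string coupling and volume chosen below are assumptions made purely for orientation, not values singled out by the text.

```python
import math

# Illustrative (assumed) microscopic parameters
g_s = 0.1      # string coupling
K, M = 10, 10  # flux quanta threading the throat
V = 1e3        # compactification volume in string units

# Thraxion entries of Table 1, in units of M_P:
# S_ED1 f     ~ 3 sqrt(g_s K M) / (sqrt(2) V^{1/3})
# S_eff f_eff ~ 3 sqrt(2) pi K / V^{1/3}
Sf_ED1 = 3 * math.sqrt(g_s * K * M) / (math.sqrt(2) * V**(1 / 3))
Sf_eff = 3 * math.sqrt(2) * math.pi * K / V**(1 / 3)

print(f"S_ED1 f ~ {Sf_ED1:.2f} M_P, S_eff f_eff ~ {Sf_eff:.1f} M_P")
```

For these choices the effective entry comfortably exceeds the naive bound $Sf\sim 1$, illustrating why thraxions stand out among the closed string candidates.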
Besides looking at the constraint on $Sf$, we also need to consider that a good FDM axion must be extremely light. The current techniques developed to perform moduli stabilisation in type IIB are already able to exclude some possible candidates. The axio-dilaton, together with the complex structure moduli, is stabilised at high energies by background fluxes, so that these fields are naturally too heavy to represent FDM. The same conclusion holds for the orientifold-odd axions $B_2$, which are usually much heavier than the overall volume modulus \cite{Gao_2014,Hristov_2009}. The remaining candidates are given by the $C_2$, $C_4$ axions and the thraxions, which we analyse in the following sections.
\subsection{Open string axions}
\noindent If we are dealing with CY manifolds which contain collapsed cycles carrying a $U(1)$ charge, we might work with open string axions which come from anomalous $U(1)$ symmetries belonging to the gauge theory located at the singularity.
Anomalous $U(1)$ factors derive from D7-branes wrapping 4-cycles in the geometric regime or from D3-branes at singularities. The anomalous $U(1)$ gauge boson acquires a mass in the process of anomaly cancellation, eating up the open string axion for D7-branes or the closed string axion for D3-branes when the 4-cycle saxion is collapsed at the singularity \cite{Green:1984sg,Allahverdi:2014ppa}. At energies below the gauge boson mass, the theory features a global $U(1)$ symmetry. In the presence of 4-cycles collapsed at the singularity, some complex scalar matter field $C=|C|e^{i\sigma}$ can be charged under the global $U(1)$ symmetry and its phase $\sigma$ may represent an open string axion. Indeed, the global symmetry can be broken by subdominant supersymmetry breaking contributions coming from background fluxes~\cite{Cicoli:2013cha}, making $\sigma$ the Nambu-Goldstone boson of the broken $U(1)$. Under these conditions, the open string axion decay constant becomes
\begin{equation}
\begin{array}{ll}
f\propto \frac{1}{\mathcal{V}^\alpha}\coma\quad \alpha=1,2
\end{array}
\end{equation}
where the values of $\alpha$ correspond to the sequestered ($\alpha=1$) and super-sequestered ($\alpha=2$) scenarios, respectively~\cite{Cicoli:2013cha}.
This particle acquires a mass through instanton effects of the hidden sector strong dynamics. The scale of strong dynamics in the hidden sector is given by
\begin{equation}
\Lambda_{hid}=M_P e^{-c/g_s^2}\coma
\end{equation}
where $c$ is fixed by the 1-loop $\beta$ function
\begin{equation}
\frac{1}{g_s^2}=\frac{1}{g_{s,0}^2}+\frac{\beta}{4\pi}\ln(\dots)\fstop
\end{equation}
These quantities fix the open string axion mass scale to be
\begin{equation}
m_{\sigma}^2=\Lambda_{hid}^4/f_{seq}^2\fstop
\end{equation}
Being interested in ultralight axions, we will need an extremely low scale of strong dynamics in the hidden sector. The only parameter choice that may lead to a high decay constant is given by $\alpha=1$, i.e. the sequestered scenario, where
\begin{equation}
\begin{array}{ll}
f_{\sigma}= p\, \frac{M_P}{\mathcal{V}}\fstop
\end{array}
\end{equation}
Plugging this result into Eq.~\eqref{eq:DMabundance} and assuming $m_\sigma= 10^{-22}$ eV, we get
$\mathcal{V}\simeq 2 \times 10^{2}$, which is consistent with the sequestered assumptions described in Appendix~\ref{sec:open_example}.
On the other hand, matching the right mass requires
\begin{equation}
\langle S \rangle=\frac{1}{c}\ln{\left(\frac{M_P}{\Lambda_{hid}}\right)}\simeq \frac{59}{c}\fstop
\end{equation}
For $c\sim \mathcal{O}(1)$ this is consistent with a perturbative approach to string theory, since it corresponds to $g_s\simeq 0.13$, and it implies that $\Lambda_{hid}\simeq 70 \; \mbox{eV}$. Therefore, we see that if we want an open string axion to be the FDM particle we need to deal with small extra-dimension volumes and extremely low scales for the hidden sector strong dynamics.
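The chain of estimates above can be sketched numerically. The $\mathcal{O}(1)$ factors ($p$, $\theta_{mi}$, $c$) and the Planck-mass convention are left implicit in the text; here we assume $p=\theta_{mi}=c=1$ and the non-reduced Planck mass $M_P\simeq 1.22\times 10^{19}$ GeV, which best reproduces the quoted numbers, so the outputs should be read only at the order-of-magnitude level.

```python
import math

# Assumed conventions (not fixed by the text)
M_P = 1.22e19            # non-reduced Planck mass in GeV
eV = 1e-9                # 1 eV in GeV
m_sigma = 1e-22 * eV     # target FDM mass in GeV

# Decay constant required by Eq. (eq:DMabundance) with theta_mi ~ 1:
# 1.4 (f / 1e17 GeV)^2 = 1  =>  f = 1e17 / sqrt(1.4) GeV
f = 1e17 / math.sqrt(1.4)

# Volume from f_sigma = p M_P / V with p ~ 1
V = M_P / f

# Hidden sector scale from m_sigma^2 = Lambda_hid^4 / f^2
Lambda_hid = math.sqrt(m_sigma * f)   # GeV
Lambda_eV = Lambda_hid / eV

# Instanton VEV <S> = ln(M_P / Lambda_hid) / c, for c = 1
S = math.log(M_P / Lambda_hid)

print(f"f ~ {f:.2e} GeV, V ~ {V:.0f}, "
      f"Lambda_hid ~ {Lambda_eV:.0f} eV, <S> ~ {S:.0f}")
```

The outputs land close to the values quoted in the text: $\mathcal{V}=\mathcal{O}(10^2)$, $\Lambda_{hid}$ of order tens of eV, and $\langle S\rangle\simeq 59/c$.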
Although the fine-tuning of parameters is quite reduced in this context, the required setup is not as general as for closed string axions and it is not easy to give these axions a precise upper bound on $Sf$. In addition, one should take into account that the strong dynamics may induce a non-negligible production of glueballs, which may represent a non-vanishing contribution to DM. In order to give a precise estimate of the amount of glueball production it would be necessary to focus on some explicit models, but this is far beyond the aim of this paper.
Given that closed string axions represent a model-independent feature of string compactifications, the forthcoming sections will be devoted to the general constraints and the predictions coming from explicit FDM constructions in this context.
\section{FDM from closed string axions}
\label{sec:closedALPs}
Let us start by reviewing how to compute the decay constants and masses of $C_2$ and $C_4$ axions, which are the two relevant quantities for reproducing FDM components.
The axion fields $d_i$ and $c^\alpha$, coming from the compactification of the extra dimensions after the orientifold involution, have integer periodicities
\begin{equation}
\Phi_i \quad\rightarrow \quad \Phi_i\;+\;k \qquad k\in \mathbb{Z}\coma \quad \Phi_i=d_i,c^\alpha\fstop
\end{equation}
The kinetic part of the 4D Lagrangian contains the following terms associated to the axions
\begin{equation}
\mathcal{L}\supset \frac{g_{ij}}{2} \partial_\mu \Phi^i \partial^\mu \Phi^j\coma
\end{equation}
where $g_{ij}=2\frac{\partial^2 K}{\partial T^i \partial \bar{T}^j}$ for $C_4$ axions, $g_{ij}=2\frac{\partial^2 K}{\partial G^i \partial \bar{G}^j}$ for $C_2$ axions and thraxions, and $K$ is the K\"ahler potential of the theory.
Since in typical situations we want to interpret the axion field as an angle, we need to diagonalise the K\"ahler metric and find the axion metric eigenvalues $\lambda_i$ and eigenvectors $\tilde{\Phi}_i$; these have the same periodicity as the original coordinates. We then define the canonically normalised axion fields as $\phi_i =\sqrt{\lambda_i} \tilde{\Phi}_i M_P$ (restoring proper powers of $M_P$), where \cite{Cicoli:2012sz}
\begin{equation}
\mathcal{L}_{kin}\supset \frac{\lambda_i M_P^2}{2} \partial_\mu \tilde{\Phi}_i \partial^\mu \tilde{\Phi}_i=
\frac12 \partial_\mu \phi_i \partial^\mu \phi_i
\end{equation}
and
\begin{equation}
\phi_i \quad \rightarrow \quad \phi_i+ 2\pi \hat{f}_i \;k \qquad\mbox{where}\qquad \hat{f}_i = \sqrt{\lambda_i}\,\frac{M_P}{2\pi} \qquad k\in\mathbb{Z}\fstop
\end{equation}
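The diagonalisation step above can be sketched numerically. The following toy example uses an illustrative $2\times 2$ K\"ahler metric (its entries are made-up values, not derived from a specific K\"ahler potential), extracts the eigenvalues $\lambda_i$ in closed form, and builds the periodicity-preserving decay constants $\hat{f}_i = \sqrt{\lambda_i}\,M_P/(2\pi)$; the potential-defined decay constants of the next equation are then $f_i = N_i \hat{f}_i$.

```python
import math

M_P = 1.0  # work in reduced Planck units

# Toy symmetric Kahler metric g = [[a, b], [b, d]] for two axions
# (illustrative values only)
a, b, d = 0.5, 0.1, 0.2

# closed-form eigenvalues of a symmetric 2x2 matrix
disc = math.sqrt((a - d)**2 + 4*b**2)
lam = [(a + d - disc)/2, (a + d + disc)/2]

# decay constants preserving the unit periodicity of the eigenvectors
f_hat = [math.sqrt(l)*M_P/(2*math.pi) for l in lam]
```

The eigenvalues must be positive for the kinetic terms to have the right sign, which the test below checks along with the trace and determinant identities.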
The $C_4$ axion potential arises from non-perturbative corrections to the superpotential such that $V\propto \cos(a_i d_i)$ where $a_i=2\pi/N_i$, $N_i\in \Bbb N^+$. These corrections imply that the exact continuous axion shift symmetry breaks down to a discrete shift symmetry. Given that the decay constant of the field is defined as the quantity $f$ which satisfies $\phi \rightarrow \phi + 2\pi f$ and noticing that the field periodicity corresponds to that of the potential, the $C_4$ axion decay constant $f_i$ can be written as follows
\begin{equation}
\begin{array}{lll}
a_i \,d_i\quad &\rightarrow \quad a_i\,d_i+ 2\pi\;k \qquad\; \mbox{implies that}\\[10pt]
\phi_i \quad &\rightarrow \quad \phi_i + 2\pi f_i\;k \qquad\mbox{where}\qquad f_i = \sqrt{\lambda_i}\,\frac{M_P}{a_i}=N_i\,\hat{f}_i \qquad k\in\Bbb Z \fstop
\end{array}
\end{equation}
The thraxion potential comes instead from corrections to the superpotential $W$ governed by powers of the warp factor $\omega$, which tends to zero when approaching the conifold limit. In the pure ISD solution of~\cite{Giddings:2001yu}, the effective potential for the thraxion $c$ takes the form
$
V\sim\cos\left(c/M\right)
$,
where $M$ is the flux quantum coming from the presence of a 3-form flux integrated over the 3-cycle that is shrinking at the bottom of the warped throat. The corrections to $W$ break the continuous shift symmetry but they preserve a set of discrete ones. Using the same notation as in the $C_4$ case above, we get that the effective decay constant is enhanced by a factor $M$, namely $f_{\mbox{\tiny{eff}}}\simeq M \hat{f}$.
However, in the thraxion case this computation is quite model dependent. It is therefore better to derive the decay constant in its general form from the 10D perspective, by dimensionally reducing the $|F_3|^2$ term to 4D and plugging in the expansion of $C_2$ in harmonic forms. In this way, one can show that $f$ depends explicitly on inverse powers of the warp factor $\omega$ coming from the Klebanov-Tseytlin throat metric~\cite{Klebanov:2000nc}.
In order to estimate mass and decay constant values, we have to analyse how these depend on the microscopic parameters of the theory through moduli stabilisation. Two prominent prescriptions to perform moduli stabilisation in type IIB string compactification are given by LVS~\cite{Balasubramanian:2005zx} and KKLT~\cite{Kachru2003}. These rely on different constructions and give rise to different mass spectra for the moduli fields. For these reasons we analyse them separately.
\subsection{LVS: FDM from $C_4$ axions}
\label{sec:LVSC4}
As its name suggests, LVS moduli stabilisation allows the volume of the extra dimensions to be stabilised at exponentially large values. This creates a natural hierarchy between energy scales that can be parametrised by inverse powers of the overall volume, which is particularly convenient for phenomenology, since it allows us to perform moduli stabilisation step by step, at different energies. After flux stabilisation, the K\"ahler moduli are still flat directions thanks to the so-called `no-scale structure'. They can be stabilised using perturbative and non-perturbative corrections to the K\"ahler potential and the superpotential. In this section we will assume for simplicity that $h^{1,1}_-=0$.
LVS describes a way to stabilise K\"ahler moduli using the interplay between non-perturbative corrections to the superpotential coming from euclidean ED3 instantons or gaugino condensation and leading order $\alpha'$ corrections to the K\"ahler potential of the form
\begin{equation}\label{eq:LVS_contr}
\begin{cases}
K&=K_0 -2\ln \left(\mathcal{V}+\frac{\hat{\xi}}{2}\right) \\
W&= W_0 + \sum_{i} A_ie^{-a_iT_i} \coma
\end{cases}
\end{equation}
where $\hat{\xi}=\xi/g_s^{3/2}$, $\xi=-\frac{\zeta(3)\chi(Y_6)}{2(2\pi)^3}$, $\chi(Y_6)$ is the effective Euler characteristic of the CY manifold \cite{Becker:2002nn}, $W_0$ is the tree-level superpotential coming from background flux stabilisation, $A_i$ depends on the VEVs of the complex structure moduli and the dilaton, and $a_i=2\pi/N_i$ where $N_i=1$ for euclidean ED3 instantons or $N_i>1$ for gaugino condensation. The moduli stabilisation prescription of LVS holds if the number of 3-cycles is larger than the number of 4-cycles, i.e. $h^{1,2}>h^{1,1}>1$, and in the presence of at least one shrinkable 4-cycle.
Being interested in large 4-cycles parametrising the overall volume of extra dimensions, let us consider a simplified version of the so-called weak Swiss-cheese volume form, namely
\begin{equation}
\label{eq:CY_vol}
\mathcal{V}=\left(f_{3/2}(\tau_i)-\tau_s^{3/2}\right)\qquad i=1\dots N \coma
\end{equation}
where $f_{3/2}$ is a function of degree $3/2$ in $\tau_i$ that we assume to be given by a single term for simplicity and $\tau_s$ is a diagonal contractible blow-up cycle.
Given these simplifying assumptions and considering non-perturbative corrections to $W$ related only to the small cycle $T_s$, LVS stabilisation fixes three directions in the K\"ahler moduli space, namely the overall volume $\mathcal{V}$, the small cycle $\tau_s$ and the $C_4$ axion $d_s$, at
\begin{equation}
\langle \tau_s\rangle^{3/2}\simeq \frac{\hat{\xi}}{2}\,, \qquad e^{-a_s\langle \tau_s\rangle}\simeq\frac{\sqrt{\tau_s}|W_0|}{a_s|A_s| \mathcal{V}}\,, \qquad a_s d_s=(1 + 2k)\pi\coma
\end{equation}
where $k\in\Bbb Z$.
From the previous equations we see that the LVS minimum lies at exponentially large volume $\mathcal{V}\sim e^{a_s\tau_s}\gg 1$ and does not require any fine-tuning of the tree-level superpotential $W_0\sim 1\div 100$. Non-perturbative effects do not destabilise the flux-stabilised complex structure moduli and the dilaton. Moreover, supersymmetry is mostly broken by the F-terms of the K\"ahler moduli and the gravitino mass is exponentially suppressed with respect to $M_P$, allowing one to obtain low-energy supersymmetry in a natural way. These models are characterised by a non-supersymmetric anti-de Sitter minimum of the scalar potential at exponentially large volume. Since the value of the scalar potential at its minimum gives the value of the cosmological constant, we must find a way to uplift this negative minimum to a de Sitter vacuum. This can be done by switching on magnetic fluxes on D7-branes~\cite{Burgess:2003ic}, adding anti-D3-branes~\cite{Kachru:2003aw, Kallosh:2014wsa, Bergshoeff:2015jxa,Kallosh:2015nia,Aparicio:2015psl,Garcia-Etxebarria:2015lif,GarciadelMoral:2017vnz,Moritz:2017xto}, hidden sector T-branes~\cite{Cicoli:2015ylx}, non-perturbative effects at singularities~\cite{Cicoli:2012fh} or non-zero F-terms of the complex structure moduli~\cite{Gallego:2017dvd}.
If the CY volume of Eq.~\eqref{eq:CY_vol} is parametrised by a single K\"ahler modulus, i.e. $f_{3/2}=\tau_1^{3/2}$, LVS is able to stabilise all the real parts of the K\"ahler moduli. If this is not the case, e.g. $f_{3/2}=\tau_1\sqrt{\tau_2}$ or $f_{3/2}=\sqrt{\tau_1\tau_2\tau_3}$, we will be left with some flat directions in the K\"ahler moduli space. A potential for these fields can be generated at lower energies by e.g. higher order $\alpha'$ and $g_s$-loop corrections. Once these fields get stabilised, the scalar potential for the axions associated to volume cycles is just induced by non-perturbative terms as in Eq.~\eqref{eq:LVS_contr}.
The field dependence of the decay constant associated to $C_4$ axions is given by \cite{Cicoli:2012sz}
\begin{equation}
\label{eq:dc}
f\sim \left\{
\begin{array}{ll}
\frac{M_P}{\tau}&\qquad \mbox{volume closed string axion,} \\[10pt]
\frac{M_P}{\sqrt{\mathcal{V}}} &\qquad \mbox{blow-up closed string axion.} \\[10pt]
\end{array}\right.
\end{equation}
Moreover, in this setup the instanton action appearing in the axion potential of Eq.~\eqref{eq:AxionLagr} is given by $S=a\tau$.
Looking for a particle with a high decay constant and an extremely small mass, we immediately see from Eq.~\eqref{eq:dc} that a FDM particle is more likely to be an axion related to a large cycle parametrising the overall volume. While blow-up cycles seem to have a higher decay constant ($\sim \mathcal{V}^{-1/2}$) compared to volume cycles ($\sim \mathcal{V}^{-2/3}$), LVS stabilisation requires that $\mathcal{V}\sim e^{a_s\tau_s}$. Matching the right FDM mass value would then tune the overall volume to be far too large ($\mathcal{V}\sim e^{220}$), making the match between $m$ and $f$ unfeasible and, above all, pushing the string scale well below the eV scale, where the theory is no longer under control. In addition, looking at the $\tau$ dependence of the decay constant, the axion mass and the total amount of FDM, Eq.~\eqref{eq:DMabundance}, we can easily conclude that in the presence of multiple volume axions, the heavier particles will represent a higher percentage of dark matter. Indeed, assuming that all the other parameters and the initial misalignment angle are the same for every axion, we have that $\frac{\Omega_{\theta}}{0.112}\sim e^{-S/4}\propto m_{\theta}^{1/2}$.
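The scaling $\Omega_\theta \propto m_\theta^{1/2}$ can be made concrete with a toy example. Keeping only the exponential dependence (i.e. treating prefactors and misalignment angles as equal, as assumed above), $m \sim e^{-S/2}$ and $\Omega \sim e^{-S/4}$, so the abundance ratio of two volume axions is the square root of their mass ratio; the instanton actions below are made-up illustrative values.

```python
import math

# Two volume axions with instanton actions S1 < S2 (toy values);
# keeping only the exponentials: m ~ e^{-S/2}, Omega ~ e^{-S/4}
S1, S2 = 200.0, 210.0
m_ratio = math.exp((S2 - S1)/2)      # m_1 / m_2 > 1
omega_ratio = math.exp((S2 - S1)/4)  # Omega_1 / Omega_2

# the heavier axion (smaller S) carries the larger DM fraction,
# with Omega_1/Omega_2 = (m_1/m_2)^{1/2}
```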
In what follows, we are going to analyse two simple examples of concrete 4D effective models coming from type IIB string theory: Swiss-cheese and fibred CY threefolds.
\paragraph{Swiss-cheese geometry}
\noindent This model is based on a CY having the typical Swiss-cheese shape
\begin{equation}
\mathcal{V}=\alpha \left(\tau_\mathcal{V}^{3/2} -\lambda_s \tau_{s}^{3/2}\right)\coma
\end{equation}
where $\alpha$ and $\lambda_s$ are positive real coefficients of order one.
After LVS stabilisation, all K\"ahler moduli but the overall volume axion $d_{\mathcal{V}}$ have been stabilised. This will represent our FDM candidate whose mass is given by
\begin{equation}
m_{d_\mathcal{V}}^2=\frac{8\kappa S_\mathcal{V}^3 A_\mathcal{V}\,W_0\, e^{-S_\mathcal{V}}}{3 \mathcal{V}^2} M_P^2 \coma \qquad S_\mathcal{V}=a_\mathcal{V} \tau_\mathcal{V}\coma
\end{equation}
while its decay constant is
\begin{equation}
f_\mathcal{V} = \sqrt{\frac32}\frac{M_P}{ S_\mathcal{V}}\fstop
\end{equation}
The previous relations are based on the assumption that both the kinetic Lagrangian and the mass matrix associated to $C_4$ axions are diagonal. Working in the large volume limit, this can be safely assumed, as the off-diagonal terms of the K\"ahler matrix are suppressed by powers of $\tau_s/\tau_b\ll1$ while the off-diagonal terms of the mass matrix are exponentially suppressed.
In what follows, we try to understand which requirements are needed to match the FDM prescriptions. Having no prior knowledge of the cosmological history of the universe, we assume a uniform axion field distribution. Given a uniform probability density on the range $[0,2\pi]$, the mean value of $d_{\mathcal{V}}$ is $\pi$ and its variance is $\sigma_d^2=\pi^2/3$; we therefore consider a misalignment angle $d_{mi}=\pi/\sqrt{3}$, corresponding to the root-mean-square displacement. This assumption is supported by~\cite{Graham:2018jyp}, where it was shown that for any inflationary scale $H\gtrsim 1$ keV the misalignment angle distribution becomes flat through stochastic diffusion. The most stringent constraint on inflationary model building in FDM models comes from isocurvature perturbation bounds, as we briefly describe in Appendix~\ref{sec:anharm}.
\begin{table}[t!]
\centering
\begin{tabular}{l|cc}
& $c_{min}$ & $c_{max}$ \\[3pt]
\hline
$W_0$ & $1$ & $10^2$ \\[3pt]
$A_\mathcal{V}$ & $10^{-4}$ & $10^{4}$ \\[3pt]
\hline
\end{tabular}\hspace{20pt}
\begin{tabular}{l|c|c|c}
& $N=1$ & $N=2$ & $N=4$ \\[3pt]
\hline
$\mathcal{V}$ & $200\div 300$ & $500\div 800$ & $1500 \div 2300$\\[3pt]
\hline
\end{tabular}
\caption{Left: Parameter range considered in the evaluation of $c$, Eq. (\ref{eq:thetaVamount}). Right: Predicted overall volumes for different values of $N$ imposing 100\% of FDM. } \label{tab:cvalues}
\end{table}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.47\textwidth]{massesdifferentN.pdf}\hspace{6pt}\includegraphics[width=0.49\textwidth]{dcdifferentN2.pdf}
\caption{Predictions for axion mass (left) and decay constant (right) varying the percentage of axionic FDM and the non-perturbative effects giving rise to the ALP potential. }\label{Fig:MassFPercentage}
\end{center}
\end{figure}
In the Swiss-cheese geometry, the amount of DM depends only on the instanton action $S_\mathcal{V}$. This implies that once we fix the required amount of DM, we can immediately compute the natural value of mass and decay constant the FDM axion candidate needs to have.
Knowing the shape of the instanton action, we can write $\mathcal{V}\simeq (S_\mathcal{V}/a_\mathcal{V})^{3/2}$, with $a_\mathcal{V}=2\pi/N$, so that the formula for the DM abundance, Eq.~\eqref{eq:DMabundance}, becomes:
\begin{equation}
\label{eq:thetaVamount}
\frac{\Omega_{\theta}h^2}{0.112}\simeq 6.36\times 10^{27} a_\mathcal{V}^{3/4} (c\, g_s)^{1/4} \frac{e^{-S_\mathcal{V}/4}}{S_\mathcal{V}^2 }\coma
\end{equation}
where $c=W_0 A_\mathcal{V}$.
Given that the values of the parameters may vary across different models, we fix the maximum and minimum values that they may take and choose different values of $N=\{1,2,4\}$. Moreover, we use the LVS relation between the string coupling and the overall volume, $\mathcal{V}\sim e^{g_s^{-1}}$, to reduce the amount of fine-tuning. The extremal values we consider are listed in Table~\ref{tab:cvalues}.
Looking at the previous formula it is clear that once we fix the upper and lower bounds for $W_0$, $A_\mathcal{V}$ and the fraction of FDM $\frac{\Omega_{FDM}}{\Omega_{DM}}$, we can easily determine the mass and decay constant values of our axion candidate.
The natural amount of axionic DM with the right mass and decay constant range can be found in Fig.~\ref{Fig:MassFPercentage}. While the predictions for the decay constant are not significantly influenced by the parameter choices, the particle mass can vary across different setups. As shown in Table~\ref{tab:cvalues}, these setups put strong constraints on the predicted overall volume $\mathcal{V}$. The lightest DM particle representing a considerable fraction of FDM has $m\sim 10^{-20}$ eV and is related to ED3 brane effects.
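A rough numerical inversion of Eq.~\eqref{eq:thetaVamount} illustrates how the 100\% FDM requirement pins down the overall volume. The sketch below assumes $N=1$, the illustrative value $c=1$, and approximates the LVS relation $\mathcal{V}\sim e^{g_s^{-1}}$ as $g_s \simeq 1/\ln\mathcal{V}$; it solves $\Omega_\theta h^2/0.112 = 1$ for $S_\mathcal{V}$ by bisection and converts to $\mathcal{V}\simeq (S_\mathcal{V}/a_\mathcal{V})^{3/2}$.

```python
import math

def abundance(S, c=1.0, N=1):
    """Omega h^2 / 0.112 from the abundance formula, with gs ~ 1/ln(V)."""
    a = 2*math.pi/N
    V = (S/a)**1.5
    gs = 1.0/math.log(V)
    return 6.36e27 * a**0.75 * (c*gs)**0.25 * math.exp(-S/4) / S**2

# bisection for 100% FDM: abundance is monotonically decreasing here
lo, hi = 150.0, 300.0
for _ in range(60):
    mid = 0.5*(lo + hi)
    if abundance(mid) > 1.0:
        lo = mid
    else:
        hi = mid
S_sol = 0.5*(lo + hi)
V_sol = (S_sol/(2*math.pi))**1.5
```

For these assumed inputs the solution lands at $\mathcal{V}\sim 2\times 10^2$, in the ballpark of the $N=1$ column of Table~\ref{tab:cvalues}; scanning $c$ over its full range widens the interval.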
Concerning the implications of this FDM model, let us now estimate the relevant energy scales. The KK scales, i.e. the maximum energies at which a 4D treatment of the theory is allowed, associated with bulk KK modes and with KK replicas of open string modes living on D7-branes wrapping 4-cycles, are given by
\begin{equation}
\label{eq:KKmassC4}
M_{KK}^{(i)}=\frac{\sqrt{\pi}M_P}{\sqrt{\mathcal{V}}\tau_i^{1/4}}\fstop
\end{equation}
This implies that for the Swiss-cheese geometry $M_{KK}=\frac{\sqrt{\pi}M_P}{\mathcal{V}^{2/3}}\sim 10^{15}\div 10^{16}$ GeV. Moreover, the blow-up moduli which are stabilised through the LVS prescription receive masses comparable to the gravitino mass, $m_{3/2}=M_P W_0/\mathcal{V}\sim 10^{14}\div 10^{16}$ GeV. The last relevant energy scale is the inflationary scale.
Looking at the ALP decay constant and mass, we can estimate the predictions for inflation that would follow from the detection of an ultralight $C_4$ axion. These are mainly due to the isocurvature perturbation constraint and imply that the Hubble parameter during inflation needs to be low, $H_I<5\cdot 10^{11}$ GeV, giving rise to undetectable stochastic gravitational waves, since the tensor-to-scalar ratio is $r< 10^{-6}$. An extended derivation of these results can be found in Appendix~\ref{sec:anharm}. We conclude this paragraph by stressing that since FDM needs to be the dominant DM component, the mass spectrum of the theory between the inflationary scale and the FDM scale should be nearly empty. In particular, as we already stressed, since heavier axions naturally represent higher DM fractions, the axion spectrum in the aforementioned range needs to be exactly empty.
\paragraph{Fibred geometry} Consider a fibred CY, whose volume can be written as
\begin{equation}
\mathcal{V} =\alpha\left( \tau_b \sqrt{\tau_f}-\lambda_s \tau_s^{3/2} \right)\coma
\end{equation}
where $\tau_f$ parametrises the volume of a K3 fibre over a $\mathbb{P}_1$ base whose volume is controlled by $\tau_b$, and $\tau_s$ represents the volume of a rigid del Pezzo divisor. Again, $\alpha$ and $\lambda_s$ are positive real coefficients of order one. After LVS stabilisation, the fibre modulus is still a flat direction and requires additional corrections to be stabilised; these are usually taken to be $\alpha'$ corrections or KK and winding $g_s$ loop corrections~\cite{Berg:2005ja,Berg:2004ek,vonGersdorff:2005bf,Berg:2007wt,Cicoli:2007xp,Cicoli:2008va}. In this setup, the two good FDM candidates are the closed string axions related to the base and fibre moduli. The decay constants, in the case of ED3-brane instantons or pure gauge theories on the D7-branes wrapping the 4-cycles, are given by
\begin{equation}
\label{eq:decayconstants}
\left\{
\begin{array}{ll}
f_{d_b}=\frac{M_P}{a_b\tau_b}=\frac{M_P}{S_b}\\[10pt]
f_{d_f}=\frac{M_P}{\sqrt{2} a_f \tau_f}=\frac{1}{\sqrt{2}}\frac{M_P}{S_f}\\[10pt]
\end{array}\right.
\end{equation}
while their masses are
\begin{equation}
\label{eq:massesbf}
\begin{array}{lll}
m_{d_f}^2\simeq \frac{8\kappa S_f^3 A_f \,W_0\, e^{-S_f}}{\mathcal{V}^2} M_P^2 \\[10pt]
m_{d_b}^2\simeq\frac{4\kappa S_b^3 A_b \,W_0\, e^{-S_b}}{\mathcal{V}^2} M_P^2 \fstop\\
\end{array}
\end{equation}
Again, these relations are based on the assumption that both the kinetic Lagrangian and the mass matrix associated to $C_4$ axions are diagonal. As the field-space metric related to $\tau_f$ and $\tau_b$ is exactly diagonal, the same considerations provided in the Swiss-cheese geometry apply.
Without loss of generality we can consider the case where $\alpha=1$ so that
\begin{equation}
\mathcal{V}=\tau_b \sqrt{\tau_f}=\frac{S_b \sqrt{S_f}}{a_b \sqrt{a_f}} \fstop
\end{equation}
The masses of the two axions become
\begin{equation}
\begin{array}{lll}
m_{d_f}^2\simeq c_f a_b^2a_f \frac{S_f^2\,}{S_b^2} e^{-S_f} M_P^2\coma \qquad c_f=2 g A_f \\[10pt]
m_{d_b}^2\simeq c_b a_b^2a_f \frac{S_b\,}{S_f} e^{-S_b} M_P^2\coma \qquad c_b= g A_b\\
\end{array}
\end{equation}
where $g=4\kappa W_0$.
Fixing the ratio between the two decay constants to be $q=f_{d_b}/f_{d_f}$, we immediately see that the ratio between the abundances of the two DM components is given by
\begin{equation}
\frac{\Omega_{b}}{\Omega_f}\simeq 1.09\left(\frac{c_b}{c_f}\right)^{1/4}q^{5/4}e^{-\frac{M_P}{4f_b}\left(1-\frac{q}{\sqrt{2}}\right)}\fstop
\end{equation}
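The exponential sensitivity of this ratio to $q$ can be illustrated with a short numerical sketch. The formula below is the abundance ratio just derived; the value $f_b/M_P = 1/200$ (i.e. $S_b\sim 200$) is an illustrative assumption, not a prediction.

```python
import math

def omega_ratio(q, fb_over_MP, cb_over_cf=1.0):
    """Omega_b/Omega_f = 1.09 (c_b/c_f)^{1/4} q^{5/4}
       * exp[-(M_P/4f_b)(1 - q/sqrt(2))]."""
    return (1.09 * cb_over_cf**0.25 * q**1.25
            * math.exp(-(1.0/(4.0*fb_over_MP))*(1.0 - q/math.sqrt(2))))

fb = 1.0/200.0   # f_b = M_P/S_b with S_b ~ 200 (illustrative)

r_iso = omega_ratio(math.sqrt(2), fb)   # isotropic: exponent vanishes
r_aniso = omega_ratio(0.1, fb)          # anisotropic: d_f dominates
```

At $q=\sqrt{2}$ the exponential factor drops out and the two abundances are comparable (with a mild preference for the base axion), while for $q\ll 1$ the base axion contribution is exponentially negligible, matching the discussion below.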
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.7\textwidth]{dc_fibre_NfNb1line.pdf}
\end{center}
\caption{\label{fig:FDMfibrePercentages} Allowed percentages of axionic DM as a function of the axion decay constants. The coloured areas satisfy the constraint $\frac{\Omega_{b}h^2}{0.112}+\frac{\Omega_{f}h^2}{0.112}\leq 1$. The blue and yellow areas refer to regions where ultralight axionic DM represents different percentages of the total amount of DM of the universe. The green area identifies the region where we have two FDM axions. The black line is given by $q=f_b/f_f=\sqrt{2}$. }
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.49\textwidth]{Masses_1.pdf}\includegraphics[width=0.49\textwidth]{perc_1.pdf}
\vspace{5pt}
\includegraphics[width=0.49\textwidth]{Masses_2.pdf}\includegraphics[width=0.49\textwidth]{perc_2.pdf}
\vspace{5pt}
\includegraphics[width=0.49\textwidth]{Masses_3.pdf}\includegraphics[width=0.49\textwidth]{perc_3.pdf}
\end{center}
\caption{\label{fig:massisotropic} Left: $d_f$ and $d_b$ axion masses as a function of the total ultralight axion fraction of dark matter. Right: Relative contributions to $\Omega_{DM}$ coming from $d_f$ and $d_b$ axions. Top panels refer to equal values of the non-perturbative correction prefactors $A_i$, $i=f,b$. Central and bottom panels contain the results related to the cases where $A_f=A_{max}\gg A_b=A_{min}$ and $A_f=A_{min}\ll A_b=A_{max}$ respectively. The extremal values $A_{min}$ and $A_{max}$ can be read from Table~\ref{tab:cvalues}.}
\end{figure}
This result highlights that we can face two opposite scenarios. An isotropic compactification ($q\simeq \sqrt{2}$) implies that the two axions have similar masses and represent similar percentages of DM. On the other hand, given the exponential sensitivity of $\frac{\Omega_{b}}{\Omega_f}$ to the parameter $q$, in anisotropic compactifications ($q\ll 1$ or $q\gg 1$) just one axion can play the r\^ole of the FDM particle. As already mentioned in the previous sections, even in the case of nearly isotropic compactifications, the heavier axion will naturally represent the larger fraction of DM. Let us consider for $W_0$ and $A_i$, $i=b,f$, the same parameter ranges as described on the left side of Table~\ref{tab:cvalues}.
Moreover, given that the mass ranges of the two particles follow the same behaviour as in the Swiss-cheese geometry, let us focus on the case $N_f=N_b=1$, corresponding to the lighter axions. Even considering the whole parameter space, we can already dramatically restrict the predictions for the allowed decay constants. The results of this analysis are represented in Fig.~\ref{fig:FDMfibrePercentages}.
From this plot we can identify the narrow region where we have two suitable FDM candidates. Let us now fix the decay constant ratio $q$ in order to inspect the green central area and understand the composition and masses of the two axions. The results obtained fixing $q=\sqrt{2}$ are represented in Fig.~\ref{fig:massisotropic}. If we fix $A_b=A_f$, we find two different axions with mass $\sim 10^{-20}$ eV representing similar percentages of DM. If $A_f$ and $A_b$ take different values, one of the axions becomes much lighter than $10^{-22}$ eV, representing a negligible fraction of DM.
\begin{table}[t!]
\centering
\begin{tabular}{l|lc}
& $\mathcal{V}$ \\
\hline
$q=0.01$ & $(2.4\div 3.6)\cdot 10^4$ \\
$q=0.1$ & $(2.5\div 3.8)\cdot 10^3$ \\
$q=\sqrt{2}$ & $(1.9\div 2.9)\cdot 10^2$ \\
$q=10$ & $(4.9\div 7.3)\cdot 10^2 $ \\
$q=100$ & $(1.5\div 2.3)\cdot 10^3$ \\
\hline
\end{tabular}
\caption{Overall volume of the extra-dimensions for different values of $q$.}
\label{tab:qvalues}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\textwidth]{df_mass_qless1.pdf}\vspace{5pt}
\includegraphics[width=0.7\textwidth]{db_mass_qlarger1.pdf}
\end{center}
\caption{\label{fig:massanisotropic}We show $m_{d_b}$ and $m_{d_f}$ masses as a function of the axion DM fraction varying the ratio between the decay constants $q=f_b/f_f$. Top: When $q\leq 1$ the contributions coming from $d_b$ are negligible and its mass becomes $\ll 10^{-40}$ eV. Bottom: When $q\geq 2$ the contributions coming from $d_f$ are negligible and its mass becomes $\ll 10^{-40}$ eV. The maximum and minimum values of the parameters used to compute the allowed mass range can be found in Table \ref{tab:cvalues}. In these plots, we assumed for simplicity that $A_f=A_b$ given that, considering small and large $q$ values, a relative variation of these parameters does not have any impact on the predictions.}
\end{figure}
We now consider the effects of an anisotropic compactification. Also in this case, the predictions for the masses of the two candidates are quite robust. Indeed, as we show in Fig.~\ref{fig:massanisotropic}, choosing different values of $q=\{0.01,0.1,1,2,10,100\}$, the qualitative results for the mass and DM fraction do not change. For large values of $q$, $d_b$ is the FDM axion, representing a significant fraction of DM when it acquires a mass $m\gtrsim 10^{-20}$ eV, while $d_f$ is much lighter ($\ll 10^{-44}$ eV) and has a negligible impact on the DM abundance. On the other hand, when $q$ is fixed to small values, $d_f$ is the right FDM candidate, representing a large amount of DM when its mass is $m\sim 10^{-19}$ eV, while the contribution coming from $d_b$ is negligible. The predictions for the overall volume $\mathcal{V}$ for different values of $q$ and varying parameters are listed in Table~\ref{tab:qvalues}.
As far as the relevant energy scales of the model are concerned, i.e. the KK masses, Eq.~\eqref{eq:KKmassC4}, and the gravitino mass, the results found for the Swiss-cheese geometry are still valid in the presence of CY fibrations in the isotropic compactification limit. Anisotropic compactifications may lead to different results depending on the overall volume considered. The ratio between the KK masses related to the $\tau_f$ and $\tau_b$ 4-cycles scales as $M_{KK}^{(b)}/M_{KK}^{(f)}\sim q^{1/4}$. In this setup, the inflationary scale and the tensor-to-scalar ratio are suppressed compared to the Swiss-cheese geometry. For both isotropic and anisotropic compactifications and for any value of the initial misalignment angles, we have that the inflationary scale is $H_I< 10^{11}$ GeV and the tensor-to-scalar ratio is $r<10^{-7}$. Further details can be found in Appendix~\ref{sec:anharm}.
\subsection{LVS: FDM from $C_2$ axions}
Our discussion at the beginning of Sec.~\ref{sec:csALPs} made it clear that the $C_2$ axions can lay claim to being arguably the best axion candidates of the type IIB O3/O7 orientifold closed string sector. This is so because their shift symmetry remains protected even under orientifolding, and they acquire a potential from non-perturbative effects less easily than $C_4$ axions, as we now summarise (see e.g.~\cite{Cicoli:2021tzt}).
\begin{itemize}
\item{In the absence of brane~\cite{McAllister:2008hb} or flux~\cite{Kaloper:2008fb,Dong:2010in,Kaloper:2011jz} monodromy, scalar potentials for $C_2$ axions arise either via gaugino condensation on stacks of 4-cycle wrapping D7-branes with gauge flux or via bound states of ED3/ED1-brane instantons. The ED3/ED1-bound state instanton contribution to the superpotential takes the form of a theta function, which becomes exponentially damped for large enough real argument. In our cases, the total scalar potential stabilises $b=0$, so no extra damping from a finite $b$-VEV arises in the exponential in $W$. The suppression of the $C_2$-cosine potential comes from the $e^{-T}$ dependence of the ED3-parent instanton. Hence, in total, an ED3 with a dissolved ED1 \cite{Grimm:2007hs} gives a non-perturbative correction to $W$ of the form $e^{-T -G}$. Formally, the $G$-dependence of the ED1 dissolved inside the ED3 arises from an ED3 magnetised by 2-form gauge flux threading 2-cycles in the ED3-wrapped 4-cycle. As the ED3-brane itself is a purely euclidean instanton effect, the path integral enforces summation over the unmagnetised ED3 and all magnetised ED3/ED1-bound states, mandating the appearance of the $G$-dependence in $W$ for ED3-contributions on 4-cycles intersecting orientifold-odd 2-cycle combinations.}
\item{If instead a D7-brane stack is used to stabilise the $T$ moduli, the magnetisation of the D7-brane stack is a choice of compactification data (no path integral forces a sum over magnetised D7-brane states, since unlike a purely euclidean instanton the full D7-brane fills 4D space--time as well). Thus, by avoiding gauge fluxes on the D7-branes, one prevents single suppressed $e^{-T -G}$ terms in $W$ from arising~\cite{Long:2014dta,Jockers:2004yj,Jockers:2005pn,Jockers:2005zy,Grimm:2011dj}. However, the path integral will generate contributions from ED3/ED1-bound state instantons to the gauge kinetic function. Such a correction to the gauge kinetic function of the 7-brane stack, scaling like $e^{-T -G}$, in turn induces a superpotential correction of order $e^{-2T-G}$~\cite{McAllister:2008hb}. Compared to the scale of the superpotential terms $e^{-T}$ stabilising the $T$ moduli, this leads to a double suppression of the potential for the $C_2$ axion.}
\end{itemize}
We shall now summarise the scaling of the scalar potential for the $C_2$ axion arising from these non-perturbative effects in the concrete scenarios of KKLT and LVS stabilisation of the volume moduli on Swiss-cheese CY orientifolds with two volume moduli.
\begin{itemize}
\item{We first look at KKLT: if a harmonic zero-mode $C_2$ axion counted by $h^{1,1}_-$ acquires a single suppressed non-perturbative scalar potential from ED3/ED1-bound state instantons, then in KKLT it is too heavy to form FDM. Even if its potential comes from the double suppressed contribution of an unmagnetised 7-brane stack, the $C_2$ axion remains too heavy to constitute FDM. The reason is that in KKLT the lowest volume moduli masses $\sim e^{-T}$ are always around the gravitino mass scale. Since this in turn is bounded from below by ${\cal O}({\rm TeV})$, the resulting $C_2$ mass scale $\sim e^{-2T}$ is still too heavy. So in KKLT setups, $C_2$ axions cannot constitute FDM.}
\item{In LVS the CY volume must take the Swiss-cheese form for at least two of the volume moduli. For a $C_2$ axion we can now consider intersection couplings with either the small LVS blow-up cycle or the big cycle carrying the CY volume. We begin with the case of $C_2$ intersecting the small cycle. For a double suppressed term in $W$ from an unmagnetised 7-brane stack on a small cycle, the term in the exponent scales as $2 (2\pi/N) \tau_s$. For $N=2$ this scales like a single ED3/ED1-bound state instanton wrapping the small cycle. Moreover, in this case $N=2$ is the most favourable setup for a potential FDM r\^ole, as the volume needs to be $\mathcal{V}\sim e^{100}$, and it would be even larger for $N>2$. Hence, $C_2$-FDM cannot arise in LVS from the LVS blow-up cycle or similarly small blow-up cycles.}
\item{Conversely, in LVS a D7-brane stack wrapped around the large volume cycle induces a double suppressed mass term that would imply either an axion that is too light or a volume too small for control of the $\alpha'$-expansion.
What thus remains is the case of an ED3/ED1-bound state instanton wrapping the volume cycle. The resulting single-suppressed cosine potential for $C_2$ on the big cycle leads to a borderline situation, and the relation between the $C_4$ and $C_2$ masses, the decay constants and the FDM abundances requires further investigation.}
\end{itemize}
Let us consider the case where there is a non-vanishing intersection between a harmonic $C_2$ axion and the large volume 4-cycles in the Swiss-cheese geometry. If the axions acquire a mass via ED3/ED1 instanton corrections, the K\"ahler potential has the following form:
\begin{equation}
K=-2\ln\left(\mathcal{V}+\frac{\hat{\xi}}{2}\right) \coma
\end{equation}
\begin{equation}
\mbox{ with }\quad \mathcal{V}\simeq \left[T_b+\bar{T}_b-\kappa_{b}(G - \bar{G})^2\right]^{3/2}- (T_s+\bar{T}_s)^{3/2}\coma
\end{equation}
where $\kappa_b=\kappa_{b11}g_s/4$ and $\kappa_{b11}$ is the intersection number of the big divisor with the odd cycle. The superpotential receives leading order non-perturbative corrections given by
\begin{equation}
W=W_0+A_s\,e^{-a_sT_s}+A_b\,e^{-a_bT_b}+C e^{-a_b \left(T_b+i G\right)}\,.
\end{equation}
These corrections tend to make the volume $C_4$ axion and the $C_2$ axion degenerate in mass. After LVS and $b$-axion stabilisation, which we assume to take place at $b=0$, we are left with two ultralight axion candidates, namely $d_b$ and $c$. The field space metric associated with these fields is diagonal
\begin{equation}
\mathcal{L}_{kin}=\frac{1}{2}\left[\frac{3}{2\tau_b^2} (\partial d_b)^2 -\frac{6 \kappa_b}{\tau_b} (\partial c)^2\right]\coma
\end{equation}
while their scalar potential is given by
\begin{equation}
V_F\supset \frac{a_b \kappa W_0}{2\tau_b^2}e^{-a_b\tau_b}\left(A_b \cos(a_bd_b) +C \cos\left[a_b\left(d_b+c\right)\right]\right) \fstop
\end{equation}
In terms of the canonically normalised fields it becomes
\begin{equation}
\label{eq:C2pot}
V_F\supset \frac{a_b \kappa W_0}{2\tau_b^2}e^{-a_b\tau_b}\left(A_b \cos\left(\frac{\hat{d}_b}{g_{d_b}}\right) +C \cos\left[\left(\frac{\hat{d}_b}{g_{d_b}}+\frac{\hat{c}}{g_{c}}\right)\right]\right)\coma
\end{equation}
where
\begin{equation}
g_{d_b}=\sqrt{\frac{3}{2}} \frac{1}{a_b\tau_b}\,, \qquad g_c= \frac{1}{a_b}\sqrt{\frac{6 |\kappa_b|}{\tau_b}}\fstop
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[width=0.49\textwidth]{C2mass.pdf}
\includegraphics[width=0.49\textwidth]{C2f.pdf}\\[5pt]
\includegraphics[width=0.49\textwidth]{C2perc.pdf}
\includegraphics[width=0.49\textwidth]{C2phi.pdf}
\end{center}
\caption{\label{fig:massC2}Results for $C_2$ and $C_4$ axionic FDM from ED3/ED1-bound state instantons wrapping the overall volume cycle in the Swiss-cheese geometry. The results are given in terms of the mass matrix eigenvectors $\phi_1$ and $\phi_2$, Eq.~(\ref{eq:evec}). Top: axion masses (left) and decay constants (right) as a function of the total ultralight axionic dark matter fraction. Bottom-left: relative abundance of the axionic DM particles. Bottom-right: eigenvector components as a function of the overall volume $\mathcal{V}$. We see that even at small volumes the mass matrix eigenvectors $\phi_1$ and $\phi_2$ are mainly given by the $C_2$ and $C_4$ axion respectively.}
\end{figure}
As we will show below, we cannot identify $g_{d_b}$ and $g_c$ with the decay constants, as the physical fields are given by the mass matrix eigenvectors, which may not be aligned with $d_b$ and $c$. Let us consider for simplicity the case where $A_b=C$. The minimum of the scalar potential is given by
\begin{equation}
\frac{\hat{d}_b}{g_{d_b}}=(2k+1)\pi \,,\qquad \frac{\hat{c}}{g_c}=2m\pi\,, \qquad m,k \in \Bbb Z
\end{equation}
so that the mass matrix in a neighbourhood of the minimum becomes
\begin{equation}
M=\Lambda \bar{M}=\Lambda \begin{pmatrix}
\frac{2}{g_{d_b}^2} & \frac{1}{g_{d_b}g_c}\\
\frac{1}{g_{d_b}g_c} & \frac{1}{g_c^2}\\
\end{pmatrix} \quad \mbox{where}\quad \Lambda= \frac{a_b A_b \kappa W_0}{2\tau_b^2}e^{-a_b\tau_b}\fstop
\end{equation}
The eigenvalues, $\lambda_{i}$, and eigenvectors, $\phi_{i}$, of $\bar{M}$ are
\begin{equation}
\lambda_1=\frac{g_{d_b}^2+2 g_c^2-\sqrt{g_{d_b}^4+4g_c^4}}{2g_{d_b}^2 g_c^2}\coma \quad \lambda_2=\frac{g_{d_b}^2+2 g_c^2+\sqrt{g_{d_b}^4+4g_c^4}}{2g_{d_b}^2 g_c^2}
\end{equation}
\begin{equation}
\label{eq:evec}
\phi_1=\frac{1}{|g_{-}|}\begin{pmatrix}
\frac{2 g_c^2-g_{d_b}^2-\sqrt{g_{d_b}^4+4g_c^4}}{2g_c g_{d_b}}\\
1\\
\end{pmatrix} \coma \quad
\phi_2= \frac{1}{|g_{+}|}\begin{pmatrix}
\frac{2 g_c^2-g_{d_b}^2+\sqrt{g_{d_b}^4+4g_c^4}}{2g_c g_{d_b}}\\
1\\
\end{pmatrix}\,
\end{equation}
where $|g_{\pm}|=\sqrt{1+\left(\frac{2 g_c^2-g_{d_b}^2\pm\sqrt{g_{d_b}^4+4g_c^4}}{2g_c g_{d_b}}\right)^2}$ is just a normalisation factor ensuring $|\phi_i|=1$.
Using these results, we can write the decay constants and the masses of the physical axions as
\begin{equation}
f_{\phi_i}=\frac{1}{\sqrt{\lambda_i}}\coma m_{\phi_i}^2=\Lambda \lambda_i\fstop
\end{equation}
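As a quick numerical cross-check of the diagonalisation above, the closed-form eigenvalues can be compared with a direct trace/determinant computation of the $2\times 2$ matrix $\bar{M}$. The sketch below uses purely illustrative values of $g_{d_b}$ and $g_c$, not fitted to a concrete compactification, and works in units where $\Lambda=1$.

```python
import math

# Illustrative values of the couplings g_{d_b} and g_c (not from a model).
g_db, g_c = 0.02, 0.05

# Entries of Mbar (the overall scale Lambda is factored out).
a = 2.0 / g_db**2
b = 1.0 / (g_db * g_c)
d = 1.0 / g_c**2

# Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]] via trace/determinant.
tr, det = a + d, a * d - b * b
disc = math.sqrt(tr * tr - 4.0 * det)
lam1, lam2 = (tr - disc) / 2.0, (tr + disc) / 2.0

# Closed-form expressions quoted in the text.
rad = math.sqrt(g_db**4 + 4.0 * g_c**4)
lam1_cf = (g_db**2 + 2.0 * g_c**2 - rad) / (2.0 * g_db**2 * g_c**2)
lam2_cf = (g_db**2 + 2.0 * g_c**2 + rad) / (2.0 * g_db**2 * g_c**2)

assert math.isclose(lam1, lam1_cf, rel_tol=1e-9)
assert math.isclose(lam2, lam2_cf, rel_tol=1e-9)

# Decay constants of the physical axions (in units where Lambda = 1).
f1, f2 = 1.0 / math.sqrt(lam1), 1.0 / math.sqrt(lam2)
```

The product $\lambda_1\lambda_2$ reproduces $\det\bar{M}=1/(g_{d_b}^2 g_c^2)$, and since $f_{\phi_i}=1/\sqrt{\lambda_i}$, the lighter eigenstate carries the larger decay constant.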
Here we note that in the limit of large $\tau_b\sim {\cal V}^{2/3}$ the lightest axion $\phi_1$ has $f_{\phi_1}\sim g_c \sim \sqrt{g_s/\tau_b}$ and thus $S_{ED3}f_{\phi_1}\sim \sqrt{g_s\tau_b}$, in agreement with our summary in Table~\ref{tab:closedaxions}. This implies a violation of certain strong forms of the WGC.
Also in this case, we find that FDM particles naturally arise from string compactifications only if the overall volume of the extra dimensions is small. For simplicity, in this section we fix $W_0=1$ while we let $A_b$ vary in $A_b\in [10^{-4},10^{4}]$. The overall volumes which are compatible with having 100\% of ultralight axionic DM are $\mathcal{V}\in 200\div 300$. The results related to this setup are shown in Fig.~\ref{fig:massC2}. Even though the relations $\phi_1\equiv c$ and $\phi_2\equiv \hat{d}_b$ hold only at $\mathcal{V}\rightarrow \infty$, the eigenvectors $\phi_1$ and $\phi_2$ are mainly given by the $C_2$ and $C_4$ axion respectively. Although the shape of the potential in Eq.~\eqref{eq:C2pot} may suggest some mass degeneracy, the hierarchy in the mass scales and in the abundances of the two fields is apparent. While the two decay constants are comparable and their values lie in the expected range $\sim 10^{16}$ GeV, the $C_2$ axion is much lighter and accounts for a larger DM fraction than the $C_4$ axion. The reason why in this context the lighter axion can represent a higher fraction of DM is that the two fields acquire a mass through instanton corrections of a different nature, so that they do not share the same dependence of the mass and the decay constant on the instanton action. The $C_2$ axion, which represents the prominent FDM candidate in this setup, exhibits a mass that is lighter than the original FDM estimate, $m_c<10^{-22}$ eV. Since in this section we again rely on LVS moduli stabilisation, the same energy scales shown in the case of $C_4$ axions in the Swiss-cheese geometry remain valid.
\subsection{KKLT \& LVS: FDM from Thraxions }\label{sec:thraxions}
In the KKLT scenario~\cite{Kachru2003}, it is difficult to realise ultralight axions: axions get stabilised at the same energy scale as their moduli partners by the same non-perturbative contribution to $W$. This is a consequence of the fact that the KKLT AdS vacuum is supersymmetric. Their masses are then generically of the same order as the gravitino mass. Therefore, axions coming from KKLT moduli stabilisation behave just like the axionic partners of the small-cycle volume moduli in LVS: they are too heavy to be FDM candidates.
However, there is a way out: we can have a viable FDM candidate if the underlying internal manifold admits the presence of \emph{thraxions}~\cite{Hebecker:2018yxs}. Thraxions, or throat-axions, are a recently discovered class of ultralight axionic modes living in warped throats of the CY, near a conifold transition locus in moduli space. They occupy a special corner of the axion landscape as their mass is exponentially suppressed by powers of the warp factor $\omega\sim e^{-S/3}$ of the throat. At the level of complex structure moduli stabilisation via fluxes of~\cite{Giddings:2001yu}, their squared masses scale as~\cite{Hebecker:2018yxs}
\begin{equation}\label{eq:mthrax}
\frac{m^2}{ M_P^2}\sim \frac{2\,e^{-2 S}}{\sqrt{3}\,S^{3/2}\,\mathcal{V}^{2/3} M^2} \coma
\end{equation}
where $\mathcal{V}$ is the volume of the bulk CY and $M$ is a flux quantum coming from the integral of a 3-form field strength $F_3$ over the $\mathcal{A}$-type 3-cycle of the deformed conifold.
Note that here, compared to the axions studied so far (e.g. in Eq.~\eqref{eq:massesbf}), the dependence on the instanton action $S$ is enhanced by a factor of $2$, resulting in a bigger suppression of the mass. In principle, we should also consider the possible effects of ED1 instantons coming from Euclidean D1-branes wrapping the 2-cycle, which contribute an action
\begin{equation}
S_{ED1} \sim \sqrt{\frac{K M}{g_s}}\coma
\end{equation}
where $K$ is another flux quantum defined as the integral of the 3-form field strength $H_3$ over the $\mathcal{B}$-type 3-cycle.
The effective instanton action generating the thraxion potential reads
\begin{equation}
S_{eff} \sim \frac{2 \pi K}{g_s M}\fstop
\end{equation}
Note that $\omega\ll 1$ is ensured when $K>g_s M$.
The ED1 instanton effects come with a shorter periodicity. Yet, they can remain subdominant in the thraxion scalar potential while satisfying the WGC in its mild version. We should therefore require ED1-contributions to be suppressed compared to the flux-backreaction induced thraxion scalar potential scale. This can be achieved by requiring the following hierarchy among fluxes:
\begin{equation}
M\gtrsim \sqrt{\frac{K }{M g_s}}\fstop
\end{equation}
In this way, we are satisfying a milder version of the WGC. The decay constant reads
\begin{equation}\label{eq:fthrax}
f\sim \frac{3 g_s M}{\sqrt{2}\, \mathcal{V}^{1/3}} M_P\coma
\end{equation}
$g_s$ being the string coupling.
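To get a feel for the numbers, the scalings in Eqs.~\eqref{eq:mthrax} and \eqref{eq:fthrax} can be evaluated directly. The sketch below treats them as order-of-magnitude relations (dropping further $\mathcal{O}(1)$ factors); the flux, coupling and volume values are purely illustrative rather than taken from an explicit compactification, and the reduced Planck mass value is an external input.

```python
import math

M_P_eV = 2.435e27  # reduced Planck mass in eV (external input, approximate)

def thraxion_double(g_s, K, M, V):
    """Order-of-magnitude thraxion mass (double suppression) and decay constant.

    Implements the parametric scalings quoted in the text; returns (m, f) in eV.
    """
    S = 2.0 * math.pi * K / (g_s * M)  # effective instanton action S_eff
    m2 = 2.0 * math.exp(-2.0 * S) / (math.sqrt(3.0) * S**1.5 * V**(2.0 / 3.0) * M**2)
    m = math.sqrt(m2) * M_P_eV
    f = 3.0 * g_s * M / (math.sqrt(2.0) * V**(1.0 / 3.0)) * M_P_eV
    return m, f

# Illustrative flux/volume choices (an assumption, not a concrete model).
m_eV, f_eV = thraxion_double(g_s=0.12, K=20, M=10, V=1.0e6)
```

For these illustrative values the mass comes out at roughly $10^{-23}$ eV with $f$ of order $10^{16}$ GeV, i.e. in the ballpark of the FDM window discussed below; increasing $K$ at fixed $g_s M$ makes the thraxion exponentially lighter.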
The presence of no-scale breaking terms, which are necessary to stabilise the K\"ahler moduli sector, \emph{generically} induces cross terms between the thraxion and the moduli in the total potential~\cite{Carta:unpublished}. These new terms generate a mass for the thraxion which scales as $ e^{- S} $. Hence, the mass loses the double suppression, and the thraxion potentially becomes slightly heavier than in Eq.~\eqref{eq:mthrax}. The mass squared now reads
\begin{equation}\label{eq:mthrax_single}
\begin{split}
\frac{m^2}{ M_P^2}&\sim \frac{2\,e^{- S}}{3^{5/4}\,S^{3/4}\,\mathcal{V}^{2/3} M^2} \frac{|W_0|}{\mathcal{V}^{4/3}} \quad \mbox{ for KKLT stabilisation}\coma\\
\frac{m^2}{ M_P^2}&\sim \frac{2\,e^{- S}}{3^{5/4}\,S^{3/4}\,\mathcal{V}^{2/3} M^2} \frac{\ln{\mathcal{V}}}{\mathcal{V}^3} \,\,\quad \mbox{ for LVS stabilisation}
\coma
\end{split}
\end{equation}
where we distinguished between KKLT and LVS moduli stabilisation procedures. The decay constant remains the same as in \eqref{eq:fthrax}, as it is dominated by the physics in the UV.
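The hierarchy between the double- and single-suppressed regimes can be made concrete with a short sketch implementing Eqs.~\eqref{eq:mthrax} and \eqref{eq:mthrax_single} as order-of-magnitude scalings; the sample parameters below are illustrative assumptions, not outputs of a concrete model.

```python
import math

def m2_double(S, V, M):
    """Double-suppressed thraxion mass squared in units of M_P^2."""
    return 2.0 * math.exp(-2.0 * S) / (math.sqrt(3.0) * S**1.5 * V**(2.0 / 3.0) * M**2)

def m2_single_kklt(S, V, M, W0):
    """Single-suppressed mass squared (KKLT stabilisation), units of M_P^2."""
    return (2.0 * math.exp(-S) / (3.0**1.25 * S**0.75 * V**(2.0 / 3.0) * M**2)
            * abs(W0) / V**(4.0 / 3.0))

def m2_single_lvs(S, V, M):
    """Single-suppressed mass squared (LVS stabilisation), units of M_P^2."""
    return (2.0 * math.exp(-S) / (3.0**1.25 * S**0.75 * V**(2.0 / 3.0) * M**2)
            * math.log(V) / V**3)

# Illustrative comparison point.
S, V, M, W0 = 100.0, 1.0e4, 10, 1.0e-3
```

For instanton actions large enough to give ultralight masses, the single-suppressed contributions dominate by many orders of magnitude, which is why they set the thraxion mass once the cross terms are present.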
\begin{figure}
\centering
\includegraphics[width=0.49\linewidth]{thrax_m_KKLT.pdf}
\includegraphics[width=0.49\linewidth]{thrax_f_KKLT.pdf}
\includegraphics[width=0.49\linewidth]{thrax_m_LVS.pdf}
\includegraphics[width=0.49\linewidth]{thrax_f_LVS.pdf}
\caption{Predictions for the thraxion mass and decay constant as functions of the FDM abundance. All plots are drawn fixing $M=10$. The plots in the first row refer to the KKLT moduli stabilisation procedure, while those in the second row to LVS. For KKLT we relate $\mathcal{V}\sim (-\log|W_0|/a)^{3/2}$, thus we vary both $|W_0|$ and $g_s$ (for simplicity we fixed $a=0.2$). For LVS, the volume scales as $\mathcal{V}\sim e^{1/g_s}$, hence allowing us to deal with only one free parameter (as we consider $W_0\sim\mathcal{O}(1)$).}
\label{Fig:MassFPercentage:ThraxS}
\end{figure}
We display the results in Fig.~\ref{Fig:MassFPercentage:ThraxS}. First, we point out that we allowed the parameters to vary between the biggest and smallest values compatible with a consistent compactification, regardless of FDM astrophysical constraints. It is then remarkable that for a certain region of parameter space we cover the FDM window.
This is the first example in which we are able to get both $m\sim 10^{-21}$ eV and $f\sim 10^{16}$ GeV.
As explained in~\cite{Carta:unpublished}, in certain geometries it can happen that the cross terms vanish. In that case, the mass essentially scales again as in Eq.~\eqref{eq:mthrax}. We checked these setups as well and found no appreciable difference with the results given in Fig.~\ref{Fig:MassFPercentage:ThraxS} for the single-suppressed mass.
We should now discuss the peculiarities of the two stabilisation procedures. For KKLT, we see that the mass is more sensitive to a variation of $g_s$ than of $W_0$, and hence of the CY volume. In LVS, the mass is again highly sensitive to the value of $g_s$, but this time this corresponds to a sensitivity to the CY volume. For both examples, big values of $\mathcal{V}$ are preferred, as opposed to what we found in Section~\ref{sec:LVSC4}. The fact that thraxions rely on big volumes of the extra dimensions to lie in the FDM range could be a drawback: as we will discuss later, big values of the CY volume can be statistically less represented in the landscape of string vacua.
We are now able to estimate the mass of the warped KK modes living inside the warped-throat systems hosting the thraxion. They will be heavier than the thraxion, as their masses scale linearly with the warp factor $\omega$ as
\begin{equation}
\frac{m_{w,KK}}{M_P}\sim \frac{\omega}{R}\sim \frac{\omega}{\mathcal{V}^{1/6}\sqrt{\alpha'}}\coma
\end{equation}
where $R$ is the throat radius which can be rewritten in terms of the bulk CY volume and the parameter $\alpha'$. The KK masses change drastically from the double to the single suppression case, as we shall discuss below.
We can express $m_{w,KK}$ in terms of the variables of our setup as
\begin{subequations}
\begin{align}
&m_{w,KK}^{(s,\text{KKLT})}\simeq 5.5\times 10^{-2}\, g_s^{3/4}\left(\frac{m}{10^{-22}\mbox{ eV}}\right)^{2/3}K^{1/4}M^{5/12}\mbox{ eV}\coma\\
&m_{w,KK}^{(s,\text{LVS})}\simeq 2.5\times 10^{-3}\, g_s^{13/12}e^{5/(9 g_s)}\left(\frac{m}{10^{-22}\mbox{ eV}}\right)^{2/3}K^{1/4}M^{5/12}\mbox{ eV}\coma\\
&m_{w,KK}^{(d)}\simeq 342\, g_s^{3/4}\left(\frac{10^4}{\mathcal{V}}\right)^{5/9}\left(\frac{m}{10^{-22}\mbox{ eV}}\right)^{1/3}K^{1/4}M^{1/12}\mbox{ GeV}\coma
\end{align}
\end{subequations}
where the index $s$ stands for the single-suppressed case. We can give a rough estimate of $m_{w,KK}$ for $10^{-22}\mbox{ eV}\leq m\leq 10^{-19}\mbox{ eV}$ by choosing the other parameters accordingly. Hence, we find
\begin{subequations}
\begin{align}
& 50\mbox{ eV} \lesssim m_{w,KK}^{(s,\text{KKLT})}\lesssim 260 \mbox{ eV }\coma\\
& 30\mbox{ eV} \lesssim m_{w,KK}^{(s,\text{LVS})}\lesssim 1500 \mbox{ eV }\coma\\
& 0.3\mbox{ TeV} \lesssim m_{w,KK}^{(d,\text{KKLT})}\lesssim 2.5 \mbox{ TeV}\coma\\
& 0.2\mbox{ GeV} \lesssim m_{w,KK}^{(d,\text{LVS})}\lesssim 5 \mbox{ GeV}\fstop \label{eq:mwkk_s}
\end{align}
\end{subequations}
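As a cross-check of these ranges, the quoted scaling formulas for $m_{w,KK}$ can be coded up directly; the parameter choices below are illustrative assumptions, and the functions only reproduce the parametric expressions given above.

```python
import math

def m_wkk_s_kklt(g_s, m22, K, M):
    """Warped KK mass (eV), single-suppressed KKLT case; m22 = m/(1e-22 eV)."""
    return 5.5e-2 * g_s**0.75 * m22**(2.0 / 3.0) * K**0.25 * M**(5.0 / 12.0)

def m_wkk_s_lvs(g_s, m22, K, M):
    """Warped KK mass (eV), single-suppressed LVS case."""
    return (2.5e-3 * g_s**(13.0 / 12.0) * math.exp(5.0 / (9.0 * g_s))
            * m22**(2.0 / 3.0) * K**0.25 * M**(5.0 / 12.0))

def m_wkk_d(g_s, V, m22, K, M):
    """Warped KK mass (GeV), double-suppressed case."""
    return (342.0 * g_s**0.75 * (1.0e4 / V)**(5.0 / 9.0)
            * m22**(1.0 / 3.0) * K**0.25 * M**(1.0 / 12.0))
```

The $m^{2/3}$ (single-suppressed) versus $m^{1/3}$ (double-suppressed) dependence on the thraxion mass is then easy to verify numerically: increasing $m$ by a factor $10^3$ rescales $m_{w,KK}$ by $10^2$ and $10$ respectively.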
Note that we expect these modes, which live at the IR ends of the thraxion-carrying double-throat, to be nearly completely sequestered. Hence, their interactions with standard model particles are suppressed.
At this point we would like to discuss an intriguing possibility regarding the warped KK modes arising from the single-suppressed case. With the scaling found above, a warped KK mode might behave as standard CDM. Therefore, in the single-suppression case we may envision a scenario where the thraxion represents part of the total DM abundance as FDM, while the warped KK mode may constitute the rest. We leave this possibility for future work.
In this setup, the bulk energy scales strongly depend on the moduli stabilisation prescription in use. In LVS the bulk KK scale ranges in $M_{KK}^{bulk}\in 10^{12}\div 10^{17}$ GeV while the gravitino mass is $m_{3/2}\in 10^{9}\div 10^{16}$ GeV. In KKLT, instead, $M_{KK}^{bulk}\sim 10^{17}$ GeV while $m_{3/2}\in 10^{5}\div 10^{15}$ GeV. The constraints on inflation coming from isocurvature perturbation bounds can be shown to be comparable to those related to the $C_4$ and $C_2$ axions, implying a low inflationary scale and undetectable tensor modes.
Finally, we must point out that the results above rely on the internal manifold being (almost) CY. This is true when the throats in the multi-throat system are all symmetrical and host only one thraxion: in this particular case the thraxion minimum lies at vanishing vacuum energy. If this symmetry is not met by the system, $c$ will not necessarily minimise at zero, and could thus break the CY condition. Moreover, the single-suppressed terms introduced by K\"ahler moduli stabilisation induce an additional shift of the thraxion vacuum which pushes it further away from the vanishing VEV. This tends to increase the amount of CY breaking and could also lead to a non-supersymmetric vacuum. The fact that vacua at non-zero $c$ break the CY condition implies that the use of the effective 4D supergravity action, derived by compactifying type IIB string theory on CY orientifolds, is questionable in this situation.
However, we could be entitled to keep using the results based on the CY-derived 4D EFT if the CY breaking does not change the EFT (too) drastically. This could happen for instance if $c$ is sufficiently small, so that the manifold is `close to' the original conformal CY and the CY-based 4D supergravity approach still gives at least the qualitatively right behaviour. Alternatively, the CY-breaking effect of a non-vanishing $c$-VEV may turn out to be largely `decoupled' from the bulk CY (leaving the largest part of the Laplacian eigenvalue spectrum qualitatively unchanged compared to the actual CY) and to stay sequestered in the throats.
\section{Overall predictions and comparison with experimental constraints}
In what follows we wrap up all the results coming from the previous sections and we compare our findings with current and future experimental constraints.
As already mentioned, empirical bounds coming from Lyman-$\alpha$ forest, black hole superradiance and ultra-faint dwarf galaxies that are DM dominated put strong constraints on the vanilla FDM model, ruling out a non-negligible area of the parameter space~\cite{Marsh:2018zyw,Chan:2021ukg,Jones:2021mrs,Nadler:2020prv,Zu:2020whs,Marsh_2019,Nebrin_2019,Maleki:2020sqn,Marsh:2021lqg}. We sum up these bounds together with our results in Fig.~\ref{fig:final_plot_bounds}.
Our analysis was able to provide some sharp relations between the mass and the abundance of ultralight ALPs coming from type IIB string theory.
We found that non-negligible fractions of DM can only be given by $C_2$ and $C_4$ ALPs or thraxions under the following conditions:
\begin{itemize}
\item{$C_4$: 4-form axions can be good FDM candidates in LVS stabilisation only if the ALPs are related to cycles parametrising the overall volume. The overall extra-dimensional volume needs to be small, $\mathcal{V}\in 10^2 \div 10^4$, with $g_s \sim 0.2$. We considered for simplicity the case where the ALP mass is given by non-perturbative corrections coming from ED3 instantons and gaugino condensation on a stack of $N\leq 4$ branes. Results coming from higher numbers of branes do not show any significant difference and are highly constrained by Eridanus-II and black hole superradiance bounds. These particles can represent $\sim$20\% of DM when their mass is $m\sim 10^{-21}$ eV.}
\item{$C_2$: they can represent FDM in the LVS stabilisation setup when there is a non-vanishing intersection between the harmonic $C_2$ axions and the volume cycle in the extra dimensions. Also in this case the overall extra-dimensional volume needs to be small, $\mathcal{V}\sim \mathcal{O}(10^2)$. The $C_2$ axions need to acquire a mass through ED3/ED1 bound state instantons. These particles can represent nearly 50\% of DM when their mass is around $10^{-23}$ eV.}
\item{Thraxions: these particles can be FDM candidates in both LVS and KKLT scenarios. Here the allowed parameter region is wider compared to the previous cases. In LVS, the volume can vary between $\mathcal{V}\in 10^2\div 10^8$ and $g_s\in 0.05\div 0.2$. In KKLT, the tree-level superpotential may lie in $W_0\in 10^{-12}\div 10^{-2}$. These ALPs can represent 20\% of DM if $m\sim 10^{-21}$ eV and 100\% of DM when $m\in 10^{-25}\div 10^{-23}$ eV.}
\end{itemize}
\label{sec:PredExpConstr}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{sumup2.pdf}\vspace{5pt}
\end{center}
\caption{\label{fig:final_plot_bounds} Mass and total DM abundance predictions for large cycles $C_4$ axions (blue stripe), $C_2$ axions (light blue stripe), and thraxions in KKLT (yellow stripe) and LVS (sand stripe) stabilisation. These results are compared to the current experimental boundaries coming from CMB (solid grey area), Lyman-$\alpha$ forest (solid red), Eridanus II (solid pink area) observations and with theoretical predictions based on Black Hole Superradiance (solid purple area). Future experimental boundaries coming from CMB (grey), Lyman-$\alpha$ forest detection (red), Square Kilometre Array (brown) and Pulsar Timing Arrays (orange) are identified with solid lines. The reported experimental bounds were adapted from the recent review on ultralight bosonic dark matter~\cite{Marsh:2021lqg}. We refer the reader to that text and to the references therein for more details and extended bibliography.}
\end{figure}
Given the variety of possible ultralight axionic DM candidates, it is natural to ask whether some of them are more probable than others. Recent works have analysed the relation between the distribution of string vacua, the axion masses and the decay constants~\cite{Broeckel_2021, Mehta:2020kwu}. Although a full analysis is far beyond the scope of this paper, we try to provide a very short description of how the number of vacua varies across our FDM candidates.
In LVS, the relation between the overall volume and the string coupling leads to the following differential relation
\begin{equation}
d\mathcal{V}\simeq-\frac{e^{1/g_s}}{g_s^2}dg_s \fstop
\end{equation}
Given that the distribution of $g_s$ was shown to be uniform \cite{Broeckel_2020,Blanco-Pillado:2020wjn} we can write $dg_s\sim dN$, $N$ being the number of flux vacua, so that
\begin{equation}
dN\sim d\left[\log \mathcal{V}\right]^{-1}\fstop
\end{equation}
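This counting can be illustrated with a small Monte Carlo sketch: drawing $g_s$ uniformly (the prior range below is an arbitrary illustrative choice) and mapping to $\mathcal{V}\sim e^{1/g_s}$, the cumulative number of vacua above a given $\log\mathcal{V}$ indeed follows $(\log\mathcal{V})^{-1}$.

```python
import random

random.seed(0)

# Illustrative uniform prior range for the string coupling (an assumption).
g_min, g_max = 0.05, 0.5
g_samples = [random.uniform(g_min, g_max) for _ in range(200_000)]

# LVS relation V ~ e^{1/g_s}  =>  log V = 1/g_s.
logV = [1.0 / g for g in g_samples]

# The number of vacua with log V above a threshold L should follow
# P(g_s < 1/L) = (1/L - g_min)/(g_max - g_min), i.e. dN ~ d[(log V)^{-1}].
L = 5.0
frac = sum(1 for lv in logV if lv > L) / len(logV)
pred = (1.0 / L - g_min) / (g_max - g_min)
```

The empirical fraction agrees with the analytic prediction at the sub-percent level, confirming that large volumes are strongly disfavoured by the flat $g_s$ measure.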
Instead, in KKLT the relation between the tree-level superpotential and the overall volume (considering a single K\"ahler modulus for simplicity) leads to
\begin{equation}
d\mathcal{V}\sim -\frac{3}{2 a}\frac{\mathcal{V}^{1/3}}{W_0}dW_0\fstop
\end{equation}
The distribution of $W_0$ is assumed to be uniform in the complex plane, so that $d|W_0|^2\sim |W_0| d|W_0|\sim dN$ for standard values of $W_0$ \cite{Denef_2004}, while it scales as $|W_0|\sim e^{-1/g_s}$ for exponentially suppressed values of $W_0$ \cite{Demirtas_2020,Demirtas_2020b,Alvarez-Garcia:2020pxd}. This implies that in KKLT
\begin{subequations}
\begin{align}
&dN\sim d\left[e^{-2a\mathcal{V}^{2/3}}\right] \qquad \mbox{for not too small $|W_0|$}\coma\nonumber\\
&dN\sim d\left[\mathcal{V}^{-2/3}\right] \qquad\quad \mbox{for exponentially small $|W_0|$\fstop}\nonumber
\end{align}
\end{subequations}
The relation between the overall volume and the axion mass for large cycles $C_4$ axions, $C_2$ axions and thraxions in KKLT scenario scales as $m\sim e^{-\frac{a}{2}\mathcal{V}^{2/3}}$. Instead, for thraxions in LVS it reads $m\sim \frac{c}{\mathcal{V}^{11/6}}$, $c\in \Bbb R^+$. This implies that the relation between the number of vacua and the mass of the ALP is given by:
\begin{subequations}
\begin{align}
&dN \sim d\left[\log\left(\frac{2}{a} \log(m^{-1})\right)\right]^{-1} \;\qquad \mbox{for $C_2/C_4$ axions in LVS}\coma \nonumber\\[5pt]
&dN \sim d\left[\log(m^{-1})\right]^{-1} \qquad\qquad\qquad\; \mbox{for thraxions in LVS}\coma \nonumber\\[10pt]
& dN \sim d\left[m^4\right] \qquad\qquad\qquad\qquad\qquad \mbox{for thraxions in KKLT}\coma\nonumber\\
& dN \sim d\left[\frac{2}{a} \log(m^{-1})\right]^{-1} \qquad\qquad\quad \mbox{for thraxions in KKLT $(W_0\sim e^{-1/g_s})$}\fstop\nonumber
\end{align}
\end{subequations}
We can conclude that ALPs relying on LVS stabilisation do not show a strongly preferred mass value, given that the number of vacua is distributed at most logarithmically with respect to the axion mass. On the contrary, thraxions living in the KKLT setup show a polynomial distribution for fairly large values of $W_0$, implying that higher thraxion masses are more likely to appear in the string landscape. This distribution then flattens out towards a logarithmic one for exponentially suppressed $W_0$ values.
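The polynomial KKLT scaling $dN\sim d[m^4]$ can be checked in the same Monte Carlo spirit: with $|W_0|^2$ drawn uniformly, the chain $\mathcal{V}\sim(-\log|W_0|/a)^{3/2}$, $m\sim e^{-(a/2)\mathcal{V}^{2/3}}$ collapses to $m=|W_0|^{1/2}$ independently of $a$, so the cumulative vacuum count below a mass $m$ should scale as $m^4$ (the sketch below is illustrative and uses dimensionless units).

```python
import random

random.seed(1)

# Flat complex-plane measure at moderate |W_0|: |W_0|^2 uniform in [0, 1).
u = [random.random() for _ in range(200_000)]

# V ~ (-log|W_0|/a)^{3/2} and m ~ e^{-(a/2) V^{2/3}} combine to
# m = |W_0|^{1/2} = u**(1/4), independently of a.
m = [x**0.25 for x in u]

# Cumulative vacuum count below a mass threshold should scale as m^4.
frac = sum(1 for x in m if x < 0.5) / len(m)   # analytic value: 0.5**4 = 0.0625
```

The empirical fraction reproduces the analytic $m^4$ law, i.e. most KKLT vacua pile up at the heavy end of the thraxion mass range.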
We would like to stress that our results provide scaling relations for the simple setups analysed here. A more complete and general treatment of the problem as e.g. the number of moduli increases, also considering different geometries, is well beyond the scope of this paper. Nonetheless, we would like to give a hint of why we believe our results do not substantially change as the complexity of the extra dimensions increases. Thraxion fields depend on the CY geometry only via the overall volume, therefore changing the compactification manifold does not significantly affect their result. On the other hand, $C_4$ axions can be good FDM candidates if and only if they are the axionic partners of K\"ahler moduli parametrising the overall volume $\mathcal{V}$, so that they nearly saturate the WGC bound. Although it is not possible to write the most generic volume of a CY in terms of 4-cycles (the change from 2-cycle variables to 4-cycle volumes enforced by the O7 orientifold action is in general not feasible analytically), the number of moduli entering the volume with a positive sign must be finite. Furthermore, the K\"ahler cone conditions tend to create a hierarchy between the volumes of the 2-cycles, thus reducing the number of very large cycles. Moreover, the presence of many moduli tends to lower the value of $Sf$, as they increase the value of the total volume~\cite{Demirtas:2018akl}. It is therefore quite reasonable to think that as the complexity of the extra dimensions increases, the $C_4$ axions are naturally moved towards lower masses, away from the desired FDM value, which was shown to be exponentially sensitive to $\mathcal{V}$. Similar arguments also apply to the case of $C_2$ coupled to $C_4$ through ED3/ED1 instanton interactions, since this effect tends to make the $C_4$ and $C_2$ axions almost degenerate in mass.
\section{Conclusions}
In this work we systematically dissect the long-standing lore that string axions can represent viable FDM candidates. We focus on the string axiverse coming from type IIB string theory compactified down to 4D on a CY orientifold with $O3/O7$-planes. After studying the properties of the whole axionic spectrum, we restrict the discussion to those axions that can represent good FDM candidates. We find that this requirement is closely related to the WGC for axions and implies that FDM saturates the bound $Sf \sim 1$. The best candidates turn out to be the $C_2$ and $C_4$ closed string axions and thraxions. In order for the $C_2$ and $C_4$ axions to be ultralight, we need to stabilise them via the LVS procedure: since the LVS vacuum is non-supersymmetric, their mass can be many orders of magnitude lighter than that of their volume modulus partners. On the contrary, thraxions are axionic modes which stay ultralight regardless of the moduli stabilisation prescription chosen, given that their mass scaling is mostly dominated by the warp factor of the multi-throat systems they live in.
Our results show that string axions can exist in the FDM window allowed by experiments, but this translates into requiring specific properties of the compactification. As mentioned before, LVS is the preferred stabilisation procedure for this aim, but FDM in KKLT can still be viable if we consider thraxions. For the harmonic zero-mode $C_2$ and $C_4$ axions to fit the FDM window, the results suggest that the CY volume should be `smallish' (with respect to standard LVS volumes) and that the rank of the gauge group living on the brane stack providing the non-perturbative effect generating the FDM mass should stay low. The masses and decay constants are basically insensitive to all the other microscopic parameters, making our predictions quite sharp. We also checked the scenario where several ultralight $C_4$ axions are present by considering a fibred CY. While in general heavier axions represent considerably higher DM fractions, in the case of isotropic compactifications with similar internal parameters for all the axions, i.e. the same rank of the gauge group and the same prefactor coming from complex structure moduli stabilisation, we end up with multiple FDM particles. In this specific case the relative abundance of the FDM particles is determined by their $Sf$ value: axions that come closer to saturating the WGC bound represent higher percentages of DM.
For the $C_2$ axions, the situation is more involved. After checking many possibilities which can give rise to ultralight masses for these modes, we find that the only viable FDM scenario is that of ED3/ED1-bound state instantons wrapping the cycle supporting the CY volume. Hence, $C_2$ axions can be ultralight only in the presence of very light $C_4$ axions. This setup allows for a more enhanced mass hierarchy but, due to the different instanton properties, here the heavier particle, i.e. the $C_4$ axion, constitutes an almost irrelevant fraction of the DM halo.
Then, we analyse the predictions for the masses and decay constants as a function of the DM abundance for the thraxions. These axionic modes allow for a wider range of masses, making it easier for them to fit the FDM window. We study both K\"ahler moduli stabilisation scenarios (KKLT and LVS), as well as the two possible regimes arising there: i) the thraxion keeps its double-suppressed mass from warping even after K\"ahler moduli stabilisation; or ii) it receives corrections from K\"ahler moduli stabilisation which cut the power of the warp factors suppressing the thraxion mass by half. Surprisingly, our results for thraxion FDM largely decouple from these details. Hence, we infer that thraxions are insensitive to the moduli stabilisation prescription chosen. The most prominent requirements are that in LVS the volume of the bulk CY should be rather big, as opposed to the cases discussed previously, while in KKLT the string coupling should be low.
A few caveats are in order concerning our results for thraxion FDM. For one, the complete 4D EFT of thraxions is still being developed. Moreover, while warped throats are ubiquitous in CY manifolds, this may not be the case for thraxions: e.g. in the recently constructed landscape of $O3/O7$-orientifolds of CICYs, thraxions appear only in a fraction of them~\cite{Carta:2020ohw,Carta:unpublished}. Hence, while they appear to span a large portion of the parameter space in Fig.~\ref{fig:final_plot_bounds}, we leave questions as to their generality for the future.
Finally, we compare our results with current astrophysical and experimental bounds. For each scenario analysed, we discuss the relation between our predictions and the exclusion bands. Moreover, we provide a preliminary discussion of the vacuum distribution for the mass of such axions in the string landscape. The results show that our FDM candidates from string theory have a very flat mass distribution for almost all cases studied. It is particularly exciting that our predictions show overlap with the regions in reach of future experiments. Hence, if at some point axions were to be found at these mass scales, we may be able to learn from the data about the type of axion detected, as well as its couplings, and potentially even something about their underlying microscopic theory.
With all our caveats having been stated, in the end axions may yet turn out to be the missing link towards testing string theory.
\acknowledgments
We would like to thank Arthur Hebecker, David J.E. Marsh, Federico Carta, Alessandro Mininno, Andreas Schachner and Gary Shiu for useful discussions. N.R. is supported by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy - EXC 2121 `Quantum Universe' - 390833306. V.G. and A.W. are supported by the ERC Consolidator Grant STRINGFLATION under the HORIZON 2020 grant agreement no. 647995.
\section{Introduction}
\label{sec:intro}
After delivering a total integrated luminosity of more than 160~fb$^{-1}$ by the end of Run 2, at the beginning of 2019 the Large Hadron Collider (LHC) was shut down for two years (Long Shutdown 2, LS2) to upgrade the accelerator chain and the detectors for the High Luminosity LHC (HL-LHC) phase.
The Compact Muon Solenoid (CMS)~\cite{cms} Drift Tube (DT) muon detector system, originally built for the instantaneous and integrated luminosities expected at the LHC, will now be required to process the HL-LHC data, delivered at a factor of five larger instantaneous luminosity. Consequently, the accumulated radiation levels are expected to exceed those of the LHC by a factor of ten. During LS2, CMS aims to upgrade its electronics and detector performance to improve the data taking and to ensure a precise reconstruction of all particles in the high pile-up conditions of the HL-LHC using the existing DT chambers.
\section{CMS Drift Tubes}
\label{sec:dts}
The Drift Tube chambers are one of the key parts of the CMS muon system, responsible for identifying, measuring and triggering on muons through the precise measurement of their position and momentum. The electronics of the CMS DT chambers will need to be replaced for the HL-LHC operation~\cite{hllhc}.
\begin{figure}[htbp]
\centering
\includegraphics[width=.33\textwidth]{cell}
\includegraphics[width=.3\textwidth]{Chamber}
\includegraphics[width=.33\textwidth]{CMStranv}
\caption{\label{fig:i} DT cell (left), DT chamber (middle), CMS transverse cut (right).}
\end{figure}
The basic element of the DT system is the drift cell, shown in Fig.~\ref{fig:i} (left): a rectangular cell of 42 $\times$ 13~mm$^{2}$, filled with a gas mixture of 85\% Ar and 15\% CO$_{2}$, resulting in a roughly constant drift velocity of 54~$\mu$m/ns. There are approximately 172000 drift cells in total. A DT chamber, shown in Fig.~\ref{fig:i} (middle), is made of parallel layers of cells; four layers form a superlayer (SL), with two SLs measuring in r-$\phi$ and one SL in r-z (for the three innermost stations). A DT muon station consists of an assembly of chambers located at a fixed radial distance R, with four barrel stations labelled MB1 to MB4, where MB1 is the closest to the interaction point. Along the beam axis z, the DTs are divided into five slices, called wheels (see Fig.~\ref{fig:i}, right, for one wheel), with wheel 0 centred at z=0, wheels +1 and +2 in the positive z direction and wheels -1 and -2 in the negative z direction. Within each wheel, the chambers are arranged in twelve azimuthal sectors.
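For orientation, the cell geometry and drift velocity quoted above fix the maximum drift time of a cell. The following back-of-the-envelope estimate is our own, assuming the drift velocity is exactly constant and that the maximum drift distance is half the cell width:

```python
# Back-of-the-envelope drift-time estimate for a CMS DT cell.
# Assumptions (ours, not from the text): maximum drift distance is half
# the 42 mm cell width, and the 54 um/ns drift velocity is exactly constant.
half_width_um = 42e3 / 2          # 21 mm, expressed in micrometres
v_drift_um_per_ns = 54            # roughly constant drift velocity
t_max_ns = half_width_um / v_drift_um_per_ns
bx_ns = 25                        # LHC bunch-crossing period
print(f"max drift time ~ {t_max_ns:.0f} ns ~ {t_max_ns / bx_ns:.1f} bunch crossings")
```

The result, roughly 390~ns, i.e. about 16 bunch crossings, illustrates why bunch crossing identification is a non-trivial task for the trigger electronics.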
\section{Slice Test}
\label{sec:obdt}
For a slice test of the HL-LHC electronics, prototypes of the on-detector board for the drift tube chambers (OBDTs) have been installed in a single sector (wheel +2, sector 12) and integrated into the central data acquisition and trigger systems during LS2. The four chambers in this sector were instrumented with OBDT prototypes~\cite{obdt}. The DT chamber front-end pulses carrying the time information of the chamber hits are sent both to the existing on-detector electronics (minicrate) and to the OBDTs via specifically designed splitter boards that preserve the signal integrity. Thirteen OBDTs, distributed in five mechanical frames which also provide the thermal interface to the water cooling loop, are installed in this sector. The Phase-2 back-end functionality (slow control, trigger generation and DAQ) is implemented in firmware running on DT uTCA boards (TM7~\cite{track}) developed for the Phase-1 upgrade.
The OBDT boards use a PolarFire FPGA to digitize the pulses coming from the front-end boards located inside the chambers. The FPGA sends the digitized and formatted data to five AB7 boards through its five optical links. A new trigger system based on high-performance FPGAs is being designed, capable of providing precise muon reconstruction and bunch crossing identification.
The slice test results presented here were obtained in 2021 using cosmic rays. As mentioned previously, the signals from the chambers are split and reach both the existing legacy Phase-1 electronics and the Phase-2 demonstrator chain, which allows them to operate in parallel during LHC collisions. Figure~\ref{fig:2Dandeff} (left) shows the 2D distribution of the Phase-2 Trigger Primitive (TP) quality obtained by the AB7 board as a function of the number of hits associated to the offline reconstructed segment ($\phi$ view). The quality criteria are explained in Table~\ref{tab:qual}.
\begin{center}
\begin{table}[ht]
\centering
\caption{Trigger Primitive Quality criteria}
\begin{tabular}{c|c}
Quality & Description \\\hline
1& 3 hit track \\
3& 4 hit track \\
6& 3+3 hit track \\
7& 3+4 hit track \\
8& 4+4 hit track \\
\end{tabular}
\label{tab:qual}
\end{table}
\end{center}
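The quality codes of Table~\ref{tab:qual} can be encoded as a simple lookup. The mapping below from per-superlayer hit counts to a quality code is our illustrative reading of the table, not the actual firmware encoding:

```python
# Illustrative encoding of the TP quality criteria in Table 1.
# The interpretation of the codes as (hits in SL1-phi, hits in SL3-phi)
# pairs is our assumption for illustration; the firmware may differ.
def tp_quality(hits_sl1, hits_sl3):
    """Quality code of a phi trigger primitive from hits in the two phi SLs."""
    table = {(3, 0): 1,   # 3 hit track (single SL)
             (4, 0): 3,   # 4 hit track (single SL)
             (3, 3): 6,   # 3+3 hit track
             (4, 3): 7,   # 3+4 hit track
             (4, 4): 8}   # 4+4 hit track (highest quality)
    key = tuple(sorted((hits_sl1, hits_sl3), reverse=True))
    return table.get(key)  # None for combinations not in the table

assert tp_quality(4, 4) == 8
assert tp_quality(3, 4) == 7
```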
The highest number of events corresponds to the highest quality, which matches four plus four hits from a Phase-2 TP with eight hits from an offline reconstructed segment. The efficiency of finding a Phase-2 TP in any bunch crossing (BX) with respect to the local position of the offline segment reconstructed from hits detected by the Phase-1 system is shown in Fig.~\ref{fig:2Dandeff} (right). It includes all primitives (red), primitives built with more than 4 hits (blue), and primitives with more than 6 hits, i.e. 3 or more hits per superlayer (green), in MB4. Selected segments are built with more than 4 hits and have an inclination in the radial coordinate smaller than 30 degrees with respect to the direction perpendicular to the chamber. No geometrical matching between the offline segment and the TP is required.
\begin{figure}[htbp]
\centering
\includegraphics[width=.48\textwidth]{h2DHwQualSegNHits_MB1.png}
\includegraphics[width=.48\textwidth]{hEffHWvsSegX_MB4_combined.png}
\caption{\label{fig:2Dandeff} Left: 2D distribution of the TP Quality vs the number of hits associated to the offline reconstructed segment. Right: Efficiency of finding a Phase-2 TP in any BX with respect to the local position of the offline segment reconstructed out of
hits detected by the Phase-1 system.}
\end{figure}
The time difference between the trigger primitive time and the offline reconstructed segment time in a cosmic sample is shown in Fig.~\ref{fig:DeltaTime}. For Phase-2, only primitives fitting at least 4 hits are compared with the legacy system. The core resolution of the Phase-2 distribution is a few ns, whereas for the Phase-1 system the trigger output time is given in bunch crossing units (25~ns steps). The improved online time resolution in Phase-2 is reflected in this particular sample (unbunched cosmic muons) as a lower fraction of triggers at a wrong bunch crossing, i.e. more than 12.5~ns away from the time the muon crossed the chamber.
\begin{figure}[htbp]
\centering
\includegraphics[width=.5\textwidth]{DeltaTimeQ_MB4.png}
\caption{\label{fig:DeltaTime} Difference between the trigger primitive time and the offline reconstructed segment time in a cosmic sample.}
\end{figure}
\section{Summary}
\label{sec:summary}
For the HL-LHC the DT electronics will be fully replaced while keeping the existing chambers. Prototypes of the Phase-2 On Board DT electronics have been integrated on site in CMS as part of the DT Slice Test. The full Slice Test data-taking chain has been operated very satisfactorily, demonstrating the efficiency of the designed Phase-2 electronics and good overall performance. The hit detection and offline reconstruction are comparable to the Phase-1 system, already exploiting the ultimate DT cell resolution. Moreover, a significant improvement in the Phase-2 Level-1 DT local trigger resolution is also reached. We plan to run this Phase-2 parallel system in collisions during Run 3, which will allow us to test final preproduction prototypes under realistic conditions (radiation, magnetic field) and further refine the trigger algorithms.
\def\Section#1#2{\section[#1]{#2}}
\def\pr {\noindent {\it Proof.} }
\def\rmk {\noindent {\it Remark} }
\def\n{\nabla}
\def\bn{\overline\nabla}
\def\ir#1{\mathbb R^{#1}}
\def\hh#1{\Bbb H^{#1}}
\def\ch#1{\Bbb {CH}^{#1}}
\def\cc#1{\Bbb C^{#1}}
\def\f#1#2{\frac{#1}{#2}}
\def\qq#1{\Bbb Q^{#1}}
\def\cp#1{\Bbb {CP}^{#1}}
\def\qp#1{\Bbb {QP}^{#1}}
\def\grs#1#2{\bold G_{#1,#2}}
\def\bb#1{\Bbb B^{#1}}
\def\dd#1#2{\frac {d\,#1}{d\,#2}}
\def\dt#1{\frac {d\,#1}{d\,t}}
\def\mc#1{\mathcal{#1}}
\def\pr{\frac {\partial}{\partial r}}
\def\pfi{\frac {\partial}{\partial \phi}}
\def\pf#1{\frac{\partial}{\partial #1}}
\def\pd#1#2{\frac {\partial #1}{\partial #2}}
\def\ppd#1#2{\frac {\partial^2 #1}{\partial #2^2}}
\def\epw#1{\ep_1\w\cdots\w \ep_{#1}}
\def\td{\tilde}
\font\subjefont=cmti8 \font\nfont=cmr8
\def\inner#1#2#3#4{(e_{#1},\ep_1)(e_{#2},\ep_2)(\nu_{#3},\ep_1)(\nu_{#4},\ep_2)}
\def\second#1#2{h_{\a,i#1}h_{\be,i#2}\lan e_{#1\a},A\ran\lan e_{#2\be},A\ran}
\def\a{\alpha}
\def\be{\beta}
\def\gr{\bold G_{2,2}^2}
\def\r{\Re_{I\!V}}
\def\sc{\bold C_m^{n+m}}
\def\sg{\bold G_{n,m}^m(\bold C)}
\def\p#1{\partial #1}
\def\pb#1{\bar\partial #1}
\def\de{\delta}
\def\De{\Delta}
\def\e{\eta}
\def\ep{\varepsilon}
\def\eps{\epsilon}
\def\G{\Gamma}
\def\g{\gamma}
\def\k{\kappa}
\def\la{\lambda}
\def\La{\Lambda}
\def\om{\omega}
\def\Om{\Omega}
\def\th{\theta}
\def\Th{\Theta}
\def\si{\sigma}
\def\Si{\Sigma}
\def\ul{\underline}
\def\w{\wedge}
\def\vs{\varsigma}
\def\ze{\zeta}
\def\Hess{\mbox{Hess}}
\def\R{\Bbb{R}}
\def\C{\Bbb{C}}
\def\tr{\mbox{tr}}
\def\U{\Bbb{U}}
\def\lan{\langle}
\def\ran{\rangle}
\def\ra{\rightarrow}
\def\Dirac{D\hskip -2.9mm \slash\ }
\def\dirac{\partial\hskip -2.6mm \slash\ }
\def\bn{\bar{\nabla}}
\def\aint#1{-\hskip -4.5mm\int_{#1}}
\def\V{\mbox{Vol}}
\def\ol{\overline}
\def\mb{\mathbf}
\def\O{\Bbb{O}}
\def\H{\Bbb{H}}
\def\Re{\text{Re }}
\def\Im{\text{Im }}
\def\Id{\mathbf{Id}}
\def\Arg{\text{Arg}}
\renewcommand{\subjclassname}{\textup{2000} Mathematics Subject Classification}
\subjclass{58E20,53A10.}
\begin{document}
\title
[Bernstein type theorems] {Bernstein type theorems for
spacelike stationary graphs in Minkowski spaces}
\author
[Xiang Ma, Peng Wang and Ling Yang]{Xiang Ma, Peng Wang and Ling Yang}
\address{School of Mathematical Sciences, Peking University, Beijing 100871, China}
\email{maxiang@math.pku.edu.cn}
\address {Department of Mathematics, Tongji University,
Shanghai 200092, China.} \email{netwangpeng@tongji.edu.cn}
\address{School of Mathematical Sciences, Fudan University,
Shanghai 200433, China.} \email{yanglingfd@fudan.edu.cn}
\thanks{Xiang Ma is supported by NSFC Project 11171004; Peng Wang is supported by NSFC Project 11201340; Ling Yang is supported by NSFC Project 11471078}
\begin{abstract}
For entire spacelike stationary 2-dimensional graphs in Minkowski spaces, we establish Bernstein type theorems under specific boundedness assumptions either on the $W$-function or on the total (Gaussian) curvature. These conclusions imply the classical Bernstein theorem for minimal surfaces in $\R^3$ and Calabi's theorem for spacelike maximal surfaces in $\R_1^3$.
\end{abstract}
\maketitle
\Section{Introduction}{Introduction}
The classical Bernstein theorem \cite{be} says that any entire minimal graph in $\R^3$ has to be an affine plane. In other words,
suppose $f:\R^2\ra \R$ is an entire solution to the minimal surface equation
\begin{equation}
\text{div}\left(\f{\n f}{\sqrt{1+|\n f|^2}}\right)=0.
\end{equation}
Then $f$ has to be affine linear.
This conclusion is generally not true in the higher codimensional case. The simplest counter-example is the minimal graph $M=\text{graph }f:=\{(x,f(x)):x\in \C\}\subset \R^4$ of an arbitrary nonlinear holomorphic function $f:\C\ra \C$.
To find a suitable generalization, usually we have to add some boundedness assumptions on the growth rate of the function $f$.
Chern-Osserman \cite{c-o} obtained such a weak version of Bernstein type theorem as follows.
Let $f=(f_1,\cdots,f_m)$ be a smooth vector-valued function from
$\R^2$ to $\R^m$. If $M=\text{graph }f:=\{(x,f(x)):x\in \R^2\}$ is a minimal graph, and
\begin{equation}
W:=\left[\det\left(\de_{ij}+\sum_{1\leq \a\leq m} \pd{f_\a}{x_i}\pd{f_\a}{x_j}\right)\right]^{1/2}
\end{equation}
is uniformly bounded, then $M$ has to be an affine plane.
This $W$-function is a significant quantity for several reasons.
Firstly, for any $f:\R^2\ra \R^m$ and its graph, denote the metric on $\text{graph }f$ by $g=g_{ij}dx_idx_j$ under the
global coordinate chart $x=(x_1,x_2)\mapsto (x,f(x))\in \text{graph }f$; then the area element is given by $W\,dx_1\w dx_2$. Thus $W$ is a geometric measure of the area growth of the graph of $f$.
Secondly, Chern-Osserman's theorem can be restated in the language of PDE as follows: any entire solution to the PDE system
\begin{equation}\label{PDE}\aligned
\sum_{1\leq i\leq 2} \pf{x_i}(Wg^{ij})&=0\qquad j=1,2\\
\sum_{1\leq i,j\leq 2}\pf{x_i}\left(Wg^{ij}\pd{f_\a}{x_j}\right)&=0\qquad \a=1,\cdots,m
\endaligned\end{equation}
has to be affine linear provided that $W\leq C$ for a positive constant $C$, where
\begin{equation}\label{metric}
(g_{ij}):=I_2+J_f^T E J_f
\end{equation}
($I_2$ and $E$ denote the identity matrices of size $2$ and $m$, respectively, and $J_f:=(\pd{f^\a}{x_i})$ is the Jacobian matrix of $f$), $(g^{ij}):=(g_{ij})^{-1}$ and $W=\det(g_{ij})^{1/2}$.
A key point from the analytic viewpoint is that the boundedness of $W$ ensures that (\ref{PDE}) is a uniformly elliptic PDE system.
For more work on generalizations of Chern-Osserman's theorem in relation to the $W$-function, see \cite{b}, \cite{fc}, \cite{j-x-y1} and \cite{j-x-y2}.
Now we consider entire spacelike stationary graphs in Minkowski spaces.
They also correspond to solutions to (\ref{PDE}), with the differences being that $f=(f_1,\cdots,f_m)$ is now from $\R^2$ to the $m$-dimensional Minkowski space $\R_1^m$, and that $E$ appearing in \eqref{metric} should be replaced by the Minkowski inner product matrix $\mathrm{diag}(1,1,\cdots,1,-1)$. Here we need to assume that
$(g_{ij})$ is positive-definite everywhere.
When $m=1$, $M$ becomes a spacelike maximal graph in $\R_1^3$, which has to be an affine plane. This is a well-known Bernstein type result by E. Calabi \cite{ca}.
But for higher codimensional cases, the Bernstein type result fails to be true even if the $W$-function is uniformly bounded. Such a counterexample can be found in \cite{m-w-w} which is given by the function
$$f(x_1,x_2)=\left(2\sinh(x_1)\cos(-\f{\sqrt{2}}{2}x_2),
2\cosh(x_1)\cos(-\f{\sqrt{2}}{2}x_2)\right).$$
So it is a more subtle problem about the value distribution of the $W$-function for entire spacelike stationary graphs in Minkowski spaces. This is the main topic of the present paper.
As a first step, we generalize Osserman's result in \S 5 of \cite{o} to entire spacelike stationary graphs in Minkowski space: they are still conformally equivalent to
the complex plane (see Theorem \ref{iso}), and admit an explicit simple representation formula. Based on these formulas, we establish the following results:
1) Let $M$ be an entire spacelike stationary
graph in $\R_1^4$. Then the $W$-function is either constant, or takes each value in $[r^{-1},r]$ infinitely often for some $r>1$ depending on the graph; moreover, every $r\in(1,+\infty)$ arises in this way. In addition, $W\equiv \text{const}$ if and only if $M$ is a flat surface
(see Theorem \ref{t1}).
2) For any entire spacelike stationary
graph $M$ in $\R_1^4$, if $W\leq 1$ (or $W\geq 1$) holds everywhere on $M$, then $M$ has to be flat (see Corollary \ref{ber1}). Note that Calabi's theorem \cite{ca} and the classical Bernstein theorem \cite{be}
can easily be deduced from the above two conclusions, respectively.
3) For any entire spacelike stationary
graph $M$ in $\R_1^n (n\ge 4)$, if $W\leq 1$, then $M$ must be flat (see Theorem
\ref{ber2}). (In contrast, the same conclusion does not necessarily hold in the case $W\geq 1$; see Proposition \ref{ber3}.)
Another measure of the complexity of a complete stationary surface is its total Gaussian curvature $\int_M |K|\, dM$. It is closely related to the end behavior of the surface at infinity (see the generalized Jorge-Meeks formula in \cite{m-w-w}).
Using the Weierstrass representation formula given in \cite{m-w-w}, one can compute the integrals of the Gauss curvature and of the normal curvature of an arbitrary
spacelike stationary surface in $\R_1^4$. A Bernstein type theorem (Theorem \ref{ftc}) follows immediately, stating that an entire spacelike stationary graph in $\R_1^4$
has to be flat provided that $\int_M |K|\,dM<+\infty$. (This result cannot be generalized to higher codimensional cases.)
\Section{Entire graphs in Minkowski spaces and the $W$-function}{Entire graphs in Minkowski spaces and the $W$-function}\label{eg}
Let $\R_1^m$ denote the $m$-dimensional Minkowski space. For any $\mathbf{u}=(u_1,\cdots,u_{m-1},u_m)$, $\mathbf{v}=(v_1,\cdots,v_{m-1},v_m)\in
\R_1^m$, the Minkowski inner product is given by
\begin{equation}
\lan \mathbf{u},\mathbf{v}\ran=u_1v_1+\cdots+u_{m-1}v_{m-1}-u_mv_m.
\end{equation}
Let $f: \R^2\ra \R_1^m$
\begin{equation}
(x_1,x_2)\mapsto f(x_1,x_2)=(f_1(x_1,x_2),\cdots,f_m(x_1,x_2))
\end{equation}
be a smooth vector-valued function.
As in \S 3 of \cite{o}, we introduce the
vector notation
\begin{equation}
p:=\pd{f}{x_1},\qquad q:=\pd{f}{x_2}.
\end{equation}
Let $M=\text{graph }f:=\{(x,f(x)):x\in \R^2\}$ be the entire graph in $\R_1^{2+m}$ generated by $f$, then the metric on $M$ is
\begin{equation}
g=g_{ij}dx_idx_j
\end{equation}
with
\begin{equation}
g_{11}=1+\lan p,p\ran,\quad g_{22}=1+\lan q,q\ran,\quad g_{12}=g_{21}=\lan p,q\ran.
\end{equation}
According to the properties of positive-definite matrices, $M$ is a spacelike surface if and only if $1+\lan p,p\ran>0$ and $\det(g_{ij})>0$.
Hence
\begin{equation}
W=\det(g_{ij})^{1/2}>0
\end{equation}
for any spacelike graph.
Denote by
$\mc{P}_0$ the orthogonal projection of $\R_1^{2+m}$ onto $\R^2$; then $w:=W^{-1}$ is precisely the Jacobian determinant of
$\mc{P}_0|_M$. Thus $W\leq 1$ ($\equiv 1,\ \geq 1$) is equivalent to saying that $\mc{P}_0|_M$ is an area-increasing (area-preserving, area-decreasing) map.
For entire graphs in Euclidean space, it is well known that the orthogonal projection onto the coordinate plane is a length-decreasing map, which is an isometry
if and only if the graph is parallel to the coordinate plane. Therefore every entire graph in Euclidean space must be complete.
The following examples show that these properties cannot be generalized to entire graphs in Minkowski spaces.
\textbf{Examples:}
\begin{itemize}
\item Let $\mathbf{y}_0$ be a non-zero light-like vector in $\R_1^m$, $h$ be a smooth real-valued function on $\R^2$ and $f:=h\mathbf{y}_0$,
then $p=\pd{h}{x_1}\mathbf{y}_0, q=\pd{h}{x_2}\mathbf{y}_0$ and hence $g_{ij}=\de_{ij}$, which implies the projection of $M=\text{graph }f$ onto $\R^2$
is an isometry, but $M$ cannot be an affine plane of $\R_1^{2+m}$ whenever $h$ is nonlinear.
\item Let $t\in \R\mapsto \th(t)\in (-\pi/2,\pi/2)$ be a smooth odd function satisfying $\lim_{t\ra +\infty}\th(t)=\pi/2$ and $\pi/2-\th(t)=O(t^{-2})$.
Denote
$$h(t):=\int_0^t \sin(\th(s))\,ds,$$
then $h$ is a smooth even function on $\R$. Define
$$f(x_1,x_2)=(0,\cdots,0,h(r))\qquad (r=\sqrt{x_1^2+x_2^2}),$$
then $p=\pd{f}{x_1}=(0,\cdots,0,\f{h'(r)x_1}{r})$, $q=\pd{f}{x_2}=(0,\cdots,0,\f{h'(r)x_2}{r})$ and hence
$$\aligned
g_{11}&=1+\lan p,p\ran=1-\f{h'(r)^2 x_1^2}{r^2}\geq 1-h'(r)^2=\cos^2(\th(r))>0,\\
\det(g_{ij})&=\det\left(\begin{array}{cc}
1-\f{h'(r)^2 x_1^2}{r^2} & -\f{h'(r)^2 x_1x_2}{r^2}\\
-\f{h'(r)^2x_1x_2}{r^2} & 1-\f{h'(r)^2x_2^2}{r^2}
\end{array}\right)=1-h'(r)^2>0.
\endaligned$$
Therefore $M=\text{graph }f$ is an entire spacelike graph. Denote $\g: t\in \R\mapsto (t,0,f(t,0))$, then $\g$ is a smooth curve in $M$ tending to infinity. Since
$f(t,0)=(0,\cdots,0,h(t))$,
$$L(\g)=\int_{-\infty}^{+\infty} \sqrt{1-h'(t)^2}dt=\int_{-\infty}^{+\infty}\cos(\th(t))dt.$$
When $t\ra \infty$, $\cos(\th(t))\sim \pi/2-|\th(t)|\sim |t|^{-2}$, therefore $L(\g)<+\infty$ and hence $M$ cannot be complete.
\end{itemize}
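The first example can be spot-checked numerically. The concrete choices $\mathbf{y}_0=(1,1)$ and $h(x_1,x_2)=x_1^2x_2$ below are ours, for illustration only:

```python
# Numerical check of the first example: for f = h * y0 with y0 lightlike,
# g_ij = delta_ij regardless of h, since <p, p> = <p, q> = <q, q> = 0.
# The choices y0 = (1, 1) in R_1^2 and h(x1, x2) = x1**2 * x2 (a nonlinear
# function) are assumptions for illustration only.
import numpy as np

def minkowski(u, v):
    return u[0] * v[0] - u[1] * v[1]   # signature (+, -) on R_1^2

y0 = np.array([1.0, 1.0])              # lightlike: <y0, y0> = 0
h1 = lambda x1, x2: 2 * x1 * x2        # dh/dx1 for h = x1^2 * x2
h2 = lambda x1, x2: x1**2              # dh/dx2

rng = np.random.default_rng(1)
for x1, x2 in rng.uniform(-3, 3, size=(50, 2)):
    p, q = h1(x1, x2) * y0, h2(x1, x2) * y0
    g = np.array([[1 + minkowski(p, p), minkowski(p, q)],
                  [minkowski(p, q), 1 + minkowski(q, q)]])
    assert np.allclose(g, np.eye(2))   # the projection onto R^2 is an isometry
```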
\bigskip
\Section{Isothermal parameters of spacelike stationary graphs}{Isothermal parameters of spacelike stationary graphs}
Let $\mathbf{x}:M\ra \R_1^{2+m}$ be a spacelike surface in the Minkowski space. If the mean curvature vector field $\mathbf{H}$ vanishes everywhere, then $M$ is said to be \textit{stationary}. $M$ is stationary if and only if the restriction of any coordinate function on $M$ is harmonic. Namely,
$\De x_l\equiv 0$ for each $1\leq l\leq 2+m$, with $\De$ the Laplace-Beltrami operator with respect to the induced metric on $M$ (see \cite{m-w-w}).
Now we additionally assume $M$ to be an entire graph over $\R^2$. More precisely, there exists $f:\R^2\ra \R_1^m$, such that $M=\text{graph }f=:\{(x,f(x)):x\in \R^2\}$.
The denotation of $p,q,g_{ij},W$ is same as in Section \ref{eg}. For an arbitrary smooth function $F$ on $M$,
\begin{equation}
\De F=W^{-1}\p_i(Wg^{ij}\p_j F),
\end{equation}
where
\begin{equation}
(g^{ij})=(g_{ij})^{-1}=W^{-2}\left(\begin{array}{cc}
1+\lan q,q\ran & -\lan p,q\ran\\
-\lan p,q\ran & 1+\lan p,p\ran
\end{array}\right).
\end{equation}
The stationarity of $M$ implies $x_1,x_2$ are both harmonic functions on $M$, hence
\begin{equation}\aligned
0&=W\De x_1=\p_i (Wg^{ij}\p_j x_1)\\
&=\p_i(Wg^{ij}\de_{1j})=\p_i (Wg^{i1})\\
&=\pf{x_1}\left(\f{1+\lan q,q\ran}{W}\right)-\pf{x_2}\left(\f{\lan p,q\ran}{W}\right)
\endaligned
\end{equation}
and similarly
\begin{equation}\aligned
0&=W\De x_2=\p_i (Wg^{i2})\\
&=-\pf{x_1}\left(\f{\lan p,q\ran}{W}\right)+\pf{x_2}\left(\f{1+\lan p,p\ran}{W}\right).
\endaligned
\end{equation}
The above two equations imply the existence of smooth functions $\xi_1$ and $\xi_2$, such that
\begin{equation}\aligned
\pd{\xi_1}{x_1}&=\f{1+\lan p,p\ran}{W},\qquad \pd{\xi_1}{x_2}=\f{\lan p,q\ran}{W},\\
\pd{\xi_2}{x_1}&=\f{\lan p,q\ran}{W}, \qquad \pd{\xi_2}{x_2}=\f{1+\lan q,q\ran}{W}.
\endaligned
\end{equation}
As in \S 5 of \cite{o}, one can define the Lewy's transformation $L: (x_1,x_2)\in \R^2\ra (\eta_1,\eta_2)\in \R^2$ by
\begin{equation}
\eta_i=x_i+\xi_i(x_1,x_2)\qquad i=1,2.
\end{equation}
Since the Jacobian matrix of $L$
\begin{equation}
J_L=I_2+\left(\pd{\xi_i}{x_j}\right)=I_2+W^{-1}(g_{ij})
\end{equation}
is positive-definite, $L$ is a local diffeomorphism. Again based on the fact that $\big(\pd{\xi_i}{x_j}\big)$ is positive-definite, one can proceed as in
\cite{le} or \S 5 of \cite{o} to show that $L$ is length-increasing; thus $L$ is injective. Let $\Om$ be the image of $L$; then $\Om$ is open. If $\Om\neq \R^2$,
take $\eta$ in the complement of $\Om$ nearest to $L(0)$, and choose a sequence of points $\{\eta^{(k)}:k\in \Bbb{Z}^+\}$ such that
$|\eta^{(k)}-L(0)|<|\eta-L(0)|$ and $\lim_{k\ra \infty}\eta^{(k)}=\eta$; then there exist $x^{(k)}\in \R^2$ such that $\eta^{(k)}=L(x^{(k)})$.
Since $L$ is length-increasing, $\{x^{(k)}:k\in \Bbb{Z}^+\}$ lies in a bounded domain of $\R^2$, so a subsequence converges to some $x\in \R^2$,
which implies $L(x)=\eta$, a contradiction. Therefore $\Om=\R^2$, and $L$ is a diffeomorphism of $\R^2$ onto itself.
Denote by $\la_1^2, \la_2^2$ ($\la_1,\la_2>0$) the eigenvalues of $(g_{ij})$, then $W=\det(g_{ij})^{1/2}=\la_1\la_2$ and there exists an orthogonal matrix $O$, such that
$$(g_{ij})=O^T\left(\begin{array}{cc} \la_1^2 & \\ & \la_2^2\end{array}\right)O.$$
Hence
$$\aligned
J_L&=I_2+W^{-1}(g_{ij})=O^T\left(\begin{array}{cc}
1+\f{\la_1}{\la_2} & \\
& 1+\f{\la_2}{\la_1}
\end{array}\right)O\\
&=(\la_1^{-1}+\la_2^{-1})O^T\left(\begin{array}{cc}
\la_1 & \\ & \la_2\end{array}\right)O
\endaligned$$
and furthermore
$$\aligned
d\eta_1^2+d\eta_2^2&=(d\eta_1\ d\eta_2)\left(\begin{array}{c} d\eta_1 \\ d\eta_2\end{array}\right)
=(dx_1\ dx_2)J_L^T J_L\left(\begin{array}{c} dx_1 \\ dx_2\end{array}\right)\\
&=(\la_1^{-1}+\la_2^{-1})^2(dx_1\ dx_2)O^T\left(\begin{array}{cc}
\la_1^2 & \\ & \la_2^2\end{array}\right)O \left(\begin{array}{c} dx_1 \\ dx_2\end{array}\right)\\
&=(\la_1^{-1}+\la_2^{-1})^2(dx_1\ dx_2)(g_{ij}) \left(\begin{array}{c} dx_1 \\ dx_2\end{array}\right)\\
&=(\la_1^{-1}+\la_2^{-1})^2(g_{ij}dx_idx_j),
\endaligned$$
i.e.
\begin{equation}
g=g_{ij}dx_idx_j=(\la_1^{-1}+\la_2^{-1})^{-2}(d\eta_1^2+d\eta_2^2).
\end{equation}
This means that $(\eta_1,\eta_2)$ are global isothermal parameters on $M$.
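The matrix identity underlying this conclusion, $J_L^TJ_L=(\la_1^{-1}+\la_2^{-1})^2(g_{ij})$, can be verified numerically for a random positive-definite metric (a sanity check, not part of the argument):

```python
# Numerical check (ours, illustrative): with J_L = I + W^{-1} g and
# eigenvalues lam1^2, lam2^2 of a positive-definite g, one has
# J_L^T J_L = (1/lam1 + 1/lam2)^2 * g, so (eta1, eta2) are isothermal.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(2, 2))
g = A @ A.T + 0.1 * np.eye(2)            # random SPD "metric"
W = np.sqrt(np.linalg.det(g))            # W = lam1 * lam2
lam1, lam2 = np.sqrt(np.linalg.eigvalsh(g))
JL = np.eye(2) + g / W
factor = (1 / lam1 + 1 / lam2) ** 2
assert np.allclose(JL.T @ JL, factor * g)
```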
Denote
\begin{equation}
\ze:=\eta_1+\sqrt{-1}\eta_2
\end{equation}
and
\begin{equation}\label{phi1}
\beta_l:=\pd{x_l}{\ze}=\f{1}{2}\Big(\pd{x_l}{\e_1}-\sqrt{-1}\pd{x_l}{\e_2}\Big)\qquad l=1,\cdots,2+m.
\end{equation}
Then the harmonicity of coordinate functions implies
$$0=\f{\p^2 x_l}{\p \ze\p \bar{\ze}}=\f{\p \beta_l}{\p \bar{\ze}},$$
i.e. $\beta_1,\cdots,\beta_{2+m}$ are all holomorphic functions on $M$. A straightforward calculation shows that
$-4\text{Im}(\bar{\beta}_1\beta_2)$ equals the Jacobian determinant of the inverse of Lewy's transformation, which is positive everywhere;
thus $\f{\beta_2}{\beta_1}=\f{\bar{\beta}_1\beta_2}{|\beta_1|^2}$ is an entire function whose imaginary part is everywhere negative.
The classical Liouville theorem implies $\f{\beta_2}{\beta_1}\equiv c:=a-b\sqrt{-1}$, where $a,b\in \R$ and $b>0$. In conjunction with
(\ref{phi1}) we get
\begin{equation}\label{eta}
\aligned
\pd{x_2}{\e_1}&=a\pd{x_1}{\e_1}-b\pd{x_1}{\e_2}\\
\pd{x_2}{\e_2}&=b\pd{x_1}{\e_1}+a\pd{x_1}{\e_2}.
\endaligned
\end{equation}
Let $(u_1,u_2)$ be global parameters of $M$, satisfying $x_1=u_1$ and $x_2=au_1+bu_2$. Then (\ref{eta}) tells us
\begin{equation}
\pd{u_2}{\e_1}=-\pd{u_1}{\e_2},\quad \pd{u_2}{\e_2}=\pd{u_1}{\e_1}.
\end{equation}
This means the one-to-one map $(\eta_1,\eta_2)\in \R^2\mapsto (u_1,u_2)\in \R^2$ is bi-holomorphic. Thereby we arrive at the following conclusion:
\begin{thm}\label{iso}
Let $f:\R^2\ra \R_1^m$ be a smooth vector-valued function, such that $M=\text{graph }f:=\{(x,f(x)):x\in \R^2\}$ is a spacelike stationary surface,
then there exists a nonsingular linear transformation
\begin{equation}\label{tran}\aligned
x_1&=u_1\\
x_2&=au_1+bu_2,\quad (b>0)
\endaligned
\end{equation}
such that $(u_1,u_2)$ are global isothermal parameters for $M$.
\end{thm}
Now we introduce the complex coordinate $z:=u_1+\sqrt{-1}u_2$ and denote
\begin{equation}
\alpha=(\alpha_1,\cdots,\alpha_{2+m}):=\pd{\mathbf{x}}{z}=\f{1}{2}\Big(\pd{\mathbf{x}}{u_1}-\sqrt{-1}\pd{\mathbf{x}}{u_2}\Big).
\end{equation}
Then $\alpha$ is a holomorphic vector-valued function. The induced metric on $M$ can be written as
\begin{equation*}
\aligned
g&=\lan \pd{\mathbf{x}}{z},\pd{\mathbf{x}}{z}\ran dz^2+\lan \pd{\mathbf{x}}{\bar{z}},\pd{\mathbf{x}}{\bar{z}}\ran d\bar{z}^2+2\lan \pd{\mathbf{x}}{z},\pd{\mathbf{x}}{\bar{z}}\ran|dz|^2\\
&=2\text{Re}\big(\lan \alpha,\alpha\ran dz^2\big)+2\lan \alpha,\bar{\alpha}\ran|dz|^2.
\endaligned
\end{equation*}
Here $|dz|^2:=\f{1}{2}(dz\otimes d\bar{z}+d\bar{z}\otimes dz)=du_1^2+du_2^2$. Since $(u_1,u_2)$ are isothermal parameters for $M$,
\begin{equation}\label{phi2}
\lan \alpha,\alpha\ran=0
\end{equation}
and hence
\begin{equation}
g=2\lan \alpha,\bar{\alpha}\ran |dz|^2.
\end{equation}
Noting that $\alpha_1=\pd{x_1}{z}=\f{1}{2}$ and $\alpha_2=\pd{x_2}{z}=\f{1}{2}(a-b\sqrt{-1})=\f{c}{2}$, (\ref{phi2}) is equivalent to
\begin{equation}\label{phi3}
\alpha_{2+m}^2=\alpha_1^2+\cdots+\alpha_{1+m}^2=\f{1+c^2}{4}+\alpha_3^2+\cdots+\alpha_{1+m}^2.
\end{equation}
Thus
$$\aligned
\lan \alpha,\bar{\alpha}\ran&=|\alpha_1|^2+\cdots+|\alpha_{1+m}|^2-|\alpha_{2+m}|^2\\
&=\f{1+|c|^2}{4}+|\alpha_3|^2+\cdots+|\alpha_{1+m}|^2-\big|\f{1+c^2}{4}+\alpha_3^2+\cdots+\alpha_{1+m}^2\big|\\
&\geq \f{1+|c|^2-|1+c^2|}{4}
\endaligned$$
and moreover
\begin{equation}
g\geq \f{1+|c|^2-|1+c^2|}{2}|dz|^2.
\end{equation}
Observing that $1+|c|^2-|1+c^2|>0$ is a direct consequence of $b>0$, we obtain the following conclusion.
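As a sanity check (not part of the argument), both the inequality $1+|c|^2-|1+c^2|>0$ for $b>0$ and the lower bound on $\lan \alpha,\bar{\alpha}\ran$ can be tested numerically for random data subject to (\ref{phi3}):

```python
# Numerical spot-check (ours, illustrative) of two facts used above:
# 1 + |c|^2 - |1 + c^2| > 0 whenever b = -Im(c) > 0, and the lower bound
# <alpha, conj(alpha)> >= (1 + |c|^2 - |1 + c^2|)/4 for alpha satisfying
# the conformality relation alpha_{2+m}^2 = (1 + c^2)/4 + sum alpha_a^2.
import numpy as np

rng = np.random.default_rng(3)
for _ in range(200):
    a, b = rng.normal(), rng.uniform(0.1, 3.0)
    c = a - 1j * b
    gap = 1 + abs(c)**2 - abs(1 + c**2)
    assert gap > 0
    # choose alpha_3, ..., alpha_{1+m} freely (here m = 4) and fix |alpha_{2+m}|^2
    al = rng.normal(size=3) + 1j * rng.normal(size=3)
    al_last_sq = (1 + c**2) / 4 + np.sum(al**2)
    inner = (1 + abs(c)**2) / 4 + np.sum(np.abs(al)**2) - abs(al_last_sq)
    assert inner >= gap / 4 - 1e-12
```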
\begin{cor}
Let $M=\text{graph }f:=\{(x,f(x)):x\in \R^2\}$ be a spacelike stationary graph generated by $f:\R^2\ra \R_1^m$,
then the induced metric on $M$ is complete.
\end{cor}
(\ref{tran}) implies $dx_1\w dx_2=bdu_1\w du_2$, hence
$$\aligned
dM&=2\lan \alpha,\bar{\alpha}\ran du_1\w du_2\\
&=2b^{-1}\lan \alpha,\bar{\alpha}\ran dx_1\w dx_2\\
&=\f{1+|c|^2+4(|\alpha_3|^2+\cdots+|\alpha_{1+m}|^2-|\alpha_{2+m}|^2)}{2b}dx_1\w dx_2
\endaligned$$
with $dM$ the area element of $M$. In other words,
\begin{equation}\label{W}
W=\f{1+|c|^2+4(|\alpha_3|^2+\cdots+|\alpha_{1+m}|^2-|\alpha_{2+m}|^2)}{2b}.
\end{equation}
\bigskip
\Section{On $W$-functions for entire stationary graphs in $\R_1^4$}{On $W$-functions for entire stationary graphs in $\R_1^4$}
\begin{thm}\label{t1}
Let $f:\R^2\ra \R_1^2$ be a smooth function, such that $M=\text{graph }f:=\{(x,f(x)):x\in \R^2\}$ is a spacelike stationary graph,
then one and only one of the following three cases must occur:
(i) $f$ is affine linear and $W\equiv r$ for some positive constant $r$;
(ii) $f=h\mathbf{y}_0+\mathbf{y}_1$ with $h$ a nonlinear harmonic function on $\R^2$, $\mathbf{y}_0$ a nonzero lightlike vector in $\R_1^2$
and $\mathbf{y}_1$ a constant vector, and
$W\equiv 1$;
(iii) $W$ takes each value in $[r^{-1},r]$ infinitely often for some number $r\in(1,+\infty)$ depending on the graph; moreover, every $r\in(1,+\infty)$ arises in this way.
\end{thm}
\begin{proof}
(\ref{phi2}) is equivalent to
\begin{equation}\label{phi4}
\alpha_3^2-\alpha_4^2=-(\alpha_1^2+\alpha_2^2)=-\f{1+c^2}{4}
\end{equation}
and (\ref{W}) gives
\begin{equation}\label{W2}
W=\f{1+|c|^2+4(|\alpha_3|^2-|\alpha_4|^2)}{2b}.
\end{equation}
If $\alpha_3$ is a constant function, then (\ref{phi4}) shows $\alpha_4$ is also constant, and
$$x_a(z)=\text{Re}\int_0^z \alpha_a\, dz+x_a(0)\qquad \forall a=3,4$$
is affine linear. Hence $f$ is affine linear and $W\equiv r$,
where $r$ can take any value in $(0,\infty)$ as $f$ varies. This is Case (i).
Now we assume $\alpha_3$ is non-constant, then (\ref{phi4}) implies $\alpha_4$ is also non-constant.
If $c=-\sqrt{-1}$, then (\ref{phi4}) gives
$$0=\alpha_3^2-\alpha_4^2=(\alpha_3+\alpha_4)(\alpha_3-\alpha_4).$$
Noting that the zeros of a non-constant holomorphic function have to be isolated, we get
$\alpha_3+\alpha_4=0$ or $\alpha_3-\alpha_4=0$. Thus $|\alpha_3|=|\alpha_4|$
and then (\ref{W2}) shows $W\equiv 1$. Let $\beta$ be the unique holomorphic function such that
$\beta'=\alpha_3$ and $\beta(0)=0$, then $\alpha_3\pm \alpha_4=0$ implies
$$\aligned
f(x_1,x_2)&=(x_3(u_1,u_2),x_4(u_1,u_2))=(x_3(z),x_4(z))\\
&=\text{Re}\int_0^z (\alpha_3,\alpha_4)dz+(x_3(0),x_4(0))\\
&=\text{Re }\beta(z)(1,\mp 1)+f(0,0).
\endaligned$$
Now we put $h:=\text{Re }\beta(z)$, $\mathbf{y}_0:=(1,\mp 1)$ and $\mathbf{y}_1:=f(0,0)$, then $h$ is a nonlinear harmonic function,
$\mathbf{y}_0$ is a light-like vector and $f=h\mathbf{y}_0+\mathbf{y}_1$. This is Case (ii).
Otherwise $c\neq -\sqrt{-1}$ and hence $-\f{1+c^2}{4}\neq 0$. Let
$\mu\neq 0$ such that $\mu^2=-\f{1+c^2}{4}$, and $h_1,h_2$ be
holomorphic functions such that $\alpha_3=\mu h_1$, $\alpha_4=\mu
h_2$, then $\mu^2(h_1^2-h_2^2)=\alpha_3^2-\alpha_4^2=\mu^2$ gives
$$1=h_1^2-h_2^2=(h_1+h_2)(h_1-h_2),$$
which implies that $h_1+h_2$ is an entire function without zeros. Hence there exists an entire function $\beta$ such that
$h_1+h_2=e^\beta$, then $h_1-h_2=e^{-\beta}$ and hence
\begin{equation}
h_1=\cosh \beta,\qquad h_2=\sinh \beta.
\end{equation}
A direct computation gives
$$\aligned
&|h_1|^2-|h_2|^2=|\cosh \beta|^2-|\sinh \beta|^2\\
=&\f{1}{2}(e^{\beta-\bar{\beta}}+e^{-\beta+\bar{\beta}})=\f{1}{2}(e^{2\text{Im}\beta\sqrt{-1}}+e^{-2\text{Im}\beta\sqrt{-1}})\\
=&\cos(2\text{Im}\beta)
\endaligned$$
and hence
\begin{equation}
\aligned
W&=\f{1+|c|^2+4(|\alpha_3|^2-|\alpha_4|^2)}{2b}\\
&=\f{1+|c|^2+4|\mu|^2(|h_1|^2-|h_2|^2)}{2b}\\
&=\f{1+|c|^2+|1+c^2|\cos(2\text{Im}\beta)}{2b}.
\endaligned
\end{equation}
Denote $r_1:=\inf W=\f{1+|c|^2-|1+c^2|}{2b}$ and $r_2:=\sup W=\f{1+|c|^2+|1+c^2|}{2b}$.
By the Picard theorem, $W$ takes each value in $[r_1,r_2]$ infinitely often. Noting that $c=a-b\sqrt{-1}$,
one computes
$$\aligned
r_1r_2&=\f{(1+|c|^2)^2-|1+c^2|^2}{4b^2}=\f{1+2|c|^2+|c|^4-(1+c^2+\bar{c}^2+|c|^4)}{4b^2}\\
&=\f{4b^2}{4b^2}=1.
\endaligned$$
Hence $r_1\in (0,1)$ and $r_2\in (1,+\infty)$.
Now we take $b:=1$, then $c=a-\sqrt{-1}$ and
$r_2=\f{1}{2}(2+a^2+|a|\sqrt{a^2+4})$. Denote $\mu:t\in \R^+\mapsto \mu(t)=\f{1}{2}(2+t^2+|t|\sqrt{t^2+4})$,
then $\mu$ is a strictly increasing function and $\lim_{t\ra 0}\mu(t)=1$, $\lim_{t\ra +\infty}\mu(t)=+\infty$.
Hence for an arbitrary number $r\in (1,\infty)$, one can find $a\in \R^+$ such that $r_2=r$, and then $W$ takes every value
in $[r^{-1},r]$ infinitely often. This is Case (iii).
\end{proof}
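The relation $r_1r_2=1$, and the fact that with $b=1$ the supremum $r_2$ ranges over all of $(1,\infty)$ as $a$ varies, can also be verified numerically. The following Python snippet is purely a sanity check of the arithmetic, not part of the proof:

```python
import math

def radii(a, b):
    # r1 = inf W and r2 = sup W for c = a - b*sqrt(-1), b > 0
    c = complex(a, -b)
    m = abs(1 + c*c)
    return (1 + abs(c)**2 - m) / (2*b), (1 + abs(c)**2 + m) / (2*b)

# r1 * r2 = 1 for any a and b > 0
for a, b in ((0.8, 1.7), (0.0, 0.5), (3.2, 1.0)):
    r1, r2 = radii(a, b)
    assert abs(r1 * r2 - 1) < 1e-12

# with b = 1, r2 agrees with (2 + a^2 + a*sqrt(a^2 + 4))/2 and increases with a
vals = [radii(a, 1.0)[1] for a in (0.1, 0.5, 1.0, 2.0)]
assert all(x < y for x, y in zip(vals, vals[1:]))
assert abs(radii(2.5, 1.0)[1] - (2 + 2.5**2 + 2.5*math.sqrt(2.5**2 + 4)) / 2) < 1e-12
```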
\begin{cor}\label{ber1}
Let $M$ be an entire spacelike stationary graph in $\R_1^{4}$ generated by a smooth function $f=(f_1,f_2):\R^2\ra \R_1^2$, if $W\leq 1$ (resp. $W\geq 1$),
then $f$ is affine linear or $f=h\mathbf{y}_0+\mathbf{y}_1$, with $h$ a nonlinear harmonic function, $\mathbf{y}_0$ a nonzero lightlike
vector and $\mathbf{y}_1$ a constant vector. Moreover, $W>1$ (resp. $W<1$) forces $f$ to be affine linear, representing an affine plane
in $\R_1^4$.
\end{cor}
\noindent \textbf{Remark. }If $f_2\equiv 0$, then $M=\text{graph }f$ is a minimal entire graph in $\R^3$ and $W\geq 1$. By Corollary \ref{ber1},
$f$ is affine linear or $f=h\mathbf{y}_0+\mathbf{y}_1$, where $h$ is a nonlinear harmonic function and $\mathbf{y}_0$ is a nonzero
lightlike vector. But $f_2\equiv 0$ rules out the latter case. Hence $f$ is an affine linear function, and thereby the
classical Bernstein theorem \cite{be} can be derived from Corollary \ref{ber1}. Similarly, Corollary \ref{ber1} implies any
spacelike maximal entire graph in $\R_1^3$ has to be affine linear. This is Calabi's theorem \cite{ca}.
\Section{Bernstein type theorems for entire stationary graphs in $\R_1^{2+m}$}{Bernstein type theorems for entire stationary graphs in $\R_1^{2+m}$}
It is natural to ask whether one can generalize the conclusion of Corollary \ref{ber1} to higher codimensional cases.
For the first statement, i.e. $W\leq 1$, the answer is ``yes":
\begin{thm}\label{ber2}
Let $f:\R^2\ra \R_1^m$ be a smooth function, such that $M=\text{graph }f:=\{(x,f(x)):x\in \R^2\}$ is a spacelike stationary graph in $\R_1^{2+m}$.
If the orthogonal projection $\mc{P}_0$ of $M$ onto the coordinate plane $\R^2$ is area-increasing (i.e. $W\leq 1$), then $f$ is affine linear or $f=h\mathbf{y}_0+\mathbf{y}_1$, with $h$ a nonlinear harmonic function, $\mathbf{y}_0$ a nonzero lightlike
vector and $\mathbf{y}_1$ a constant vector. Moreover, if $\mc{P}_0$ is strictly area-increasing (i.e. $W<1$), then $f$ has to be affine linear and $M$
is an affine plane.
\end{thm}
\begin{proof}
We shall consider the problem in the following four cases.
\textbf{Case I.} $\alpha_1,\cdots,\alpha_{2+m}$ are all constant functions.
As in the proof of Theorem \ref{t1}, one can show $f$ is an affine linear function.
\textbf{Case II.} $\alpha_{2+m}$ is a constant function, but $\alpha_l$ is non-constant for some $1\leq l\leq 1+m$.
Since $\alpha_l$ is a non-constant entire function, the classical Liouville Theorem implies it is unbounded; hence there exists a point in $\C$ at which $|\alpha_{l}|^2\geq |\alpha_{2+m}|^2+b-\f{1}{4}(1+|c|^2)$
(note that $|\alpha_{2+m}|$ is constant). Combining this with (\ref{W}) gives
$$\aligned
W&=\f{1+|c|^2+4(|\alpha_3|^2+\cdots+|\alpha_{1+m}|^2-|\alpha_{2+m}|^2)}{2b}\\
&\geq \f{1+|c|^2+4(|\alpha_l|^2-|\alpha_{2+m}|^2)}{2b}\geq 2.
\endaligned$$
This contradicts the assumption that $W\leq 1$ everywhere. Hence this case cannot occur.
\textbf{Case III.} $\alpha_{2+m}$ is non-constant and $c\neq
-\sqrt{-1}$.
Since $b>0$, $c\neq -\sqrt{-1}$ implies
$$\f{1+|c|^2}{2b}=\f{1+b^2+a^2}{2b}>1.$$
Denote $\de:=\f{1+|c|^2}{2b}-1>0$. If $|\alpha_{2+m}|^2\geq \f{1}{2}b\de$ held everywhere, then $1/\alpha_{2+m}$ would be a bounded entire function, hence constant by the classical Liouville Theorem, contradicting that $\alpha_{2+m}$ is non-constant. Therefore there exists a point at which
$|\alpha_{2+m}|^2<\f{1}{2}b\de$. Hence
$$\aligned
W&=\f{1+|c|^2+4(|\alpha_3|^2+\cdots+|\alpha_{1+m}|^2-|\alpha_{2+m}|^2)}{2b}\\
&\geq \f{1+|c|^2-4|\alpha_{2+m}|^2}{2b}>1+\de-\f{4\cdot \f{1}{2}b\de}{2b}=1,
\endaligned$$
which gives a contradiction, and therefore this case cannot happen.
\textbf{Case IV.} $\alpha_{2+m}$ is non-constant and $c=-\sqrt{-1}$.
Let $h_3,\cdots,h_{1+m}$ be meromorphic functions, such that
$$\alpha_3^2=h_3\alpha_{2+m}^2,\cdots,\alpha_{1+m}^2=h_{1+m}\alpha_{2+m}^2.$$
Then (\ref{phi3}) tells us
$$\aligned
\alpha_{2+m}^2&=\f{1+c^2}{4}+\alpha_3^2+\cdots+\alpha_{1+m}^2=\alpha_3^2+\cdots+\alpha_{1+m}^2\\
&=(h_3+\cdots+h_{1+m})\alpha_{2+m}^2.
\endaligned$$
Since $\alpha_{2+m}$ is a non-constant function, we have
$$h_3+\cdots+h_{1+m}\equiv 1.$$
Due to the triangle inequality,
$$\aligned
W&=\f{1+|c|^2+4(|\alpha_3|^2+\cdots+|\alpha_{1+m}|^2-|\alpha_{2+m}|^2)}{2b}\\
&=1+2(|\alpha_3^2|+\cdots+|\alpha_{1+m}^2|-|\alpha_{2+m}^2|)\\
&=1+2(|h_3|+\cdots+|h_{1+m}|-1)|\alpha_{2+m}|^2\geq 1
\endaligned$$
and the equality holds if and only if the functions $h_3,\cdots,h_{1+m}$ all take values in $\R^+\cup \{0,\infty\}$.
Since a non-constant meromorphic function is an open mapping, $h_3,\cdots,h_{1+m}$ have to be constant (nonnegative real) functions. Therefore, there
exist $v_3,\cdots,v_{1+m}\in \R$, such that $v_3^2+\cdots+v_{1+m}^2=1$ and
$$(\alpha_3,\cdots,\alpha_{1+m},\alpha_{2+m})=(v_3,\cdots,v_{1+m},1)\alpha_{2+m}.$$
Let $\beta$ be the unique holomorphic function such that $\beta'=\alpha_{2+m}$ and $\beta(0)=0$. Denote $h:=\text{Re }\beta$,
$\mathbf{y}_0:=(v_3,\cdots,v_{1+m},1)$ and $\mathbf{y}_1:=f(0,0)$, then $h$ is a non-linear harmonic function and
$\mathbf{y}_0$ is a light-like vector. We can proceed as in the proof of Theorem \ref{t1} to show $f=h\mathbf{y}_0+\mathbf{y}_1$.
Note that in this case $W\equiv 1$.
\end{proof}
However, the answer is ``no'' for the second statement, i.e. $W\geq 1$. In fact, we have the following result:
\begin{pro}\label{ber3}
For any real number $C\geq 1$ and $\ep>0$, there exists an entire spacelike stationary graph in $\R_1^{2+m}$ ($m\geq 3$) generated by
$f:\R^2\ra \R_1^m$, such that $\inf W\cdot \sup W=C$ and $0<\sup W-\inf W<\ep$.
\end{pro}
\begin{proof}
Now we put $c:=-b\sqrt{-1}$ and let $d$ be a real number to be chosen. Let $\mu$ be a complex number such that
$$\mu^2=-\f{1+c^2+d^2}{4}=-\f{1-b^2+d^2}{4}.$$
Denote
$$\aligned
&\alpha_1=\f{1}{2},\alpha_2=\f{c}{2}=-\f{b}{2}\sqrt{-1},\alpha_3=\cdots=\alpha_{m-1}=0,\\
&\alpha_m=\f{d}{2},\alpha_{1+m}=\mu \cosh z,\alpha_{2+m}=\mu \sinh z.
\endaligned$$
Since
$$\lan \alpha,\alpha\ran=\alpha_1^2+\alpha_2^2+\alpha_m^2+\alpha_{1+m}^2-\alpha_{2+m}^2=0$$
and $\lan \alpha,\bar{\alpha}\ran>0$, $z\mapsto \mathbf{x}(z)=\text{Re}\int_0^z \alpha(w)\,dw$ gives an entire spacelike stationary graph in $\R_1^{2+m}$.
As in the proof of Theorem \ref{t1}, a similar calculation shows
$$\aligned
W&=\f{1+|c|^2+4(|\alpha_3|^2+\cdots+|\alpha_{1+m}|^2-|\alpha_{2+m}|^2)}{2b}\\
&=\f{1+b^2+d^2+|1-b^2+d^2|\cos(2\text{Im}z)}{2b}.
\endaligned$$
Denote $r_1:=\inf W$, $r_2:=\sup W$, then $r_1=\f{1+b^2+d^2-|1-b^2+d^2|}{2b}$,
$r_2=\f{1+b^2+d^2+|1-b^2+d^2|}{2b}$ and
$$\aligned
&r_1r_2=\f{(1+b^2+d^2)^2-(1-b^2+d^2)^2}{4b^2}=1+d^2,\\
&r_2-r_1=\f{|1-b^2+d^2|}{b}.
\endaligned$$
Now we put $d:=\sqrt{C-1}$, then $r_1r_2=C$, and one can choose $b\neq \sqrt{C}$ sufficiently close to $\sqrt{C}$, such that
$r_2-r_1=\f{|1-b^2+d^2|}{b}=\f{|C-b^2|}{b}\in (0,\ep)$.
\end{proof}
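The parameter choices in this proof can be checked numerically. In the Python snippet below, the target values $C$ and $\ep$ are sample values chosen only for illustration:

```python
import math

def W_range(b, d):
    # r1 = inf W and r2 = sup W for c = -b*sqrt(-1) and the parameter d
    gap = abs(1 - b*b + d*d)
    return (1 + b*b + d*d - gap) / (2*b), (1 + b*b + d*d + gap) / (2*b)

C, eps = 3.0, 0.05
d = math.sqrt(C - 1)
b = math.sqrt(C) + 0.01      # any b != sqrt(C) close enough to sqrt(C) works
r1, r2 = W_range(b, d)
assert abs(r1 * r2 - C) < 1e-12   # r1 * r2 = 1 + d^2 = C
assert 0 < r2 - r1 < eps          # oscillation of W as small as desired
```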
\Section{Stationary graphs with finite total curvature}{Stationary graphs with finite total curvature}
As demonstrated in \cite{m-w-w}, the Bernstein theorem cannot be generalized directly to stationary graphs in $\R^4_1$, because one can easily construct complete stationary graphs in $\R^4_1$ that are not flat. Interestingly, these examples have infinite total curvature.
On the other hand, examples of complete stationary surfaces with finite total curvature are abundant, and a generalized Jorge--Meeks formula holds for their total Gaussian curvature (and total normal curvature), provided that they are algebraic \cite{m-w-w}. Thus one naturally asks whether there could be a stationary graph with finite total curvature. The answer to this question is the following Bernstein type theorem (note that here we do not need the algebraic assumption).
\begin{thm}\label{ftc}
Let $f=(f_1,f_2):\R^2\ra \R_1^2$ be a smooth function, such that $M=\text{graph }f:=\{(x,f(x)):x\in \R^2\}$ is a spacelike stationary graph in $\R_1^4$ whose curvature integral $\int_M |K| dM$ converges absolutely. Then $f$ is affine linear or $f=h\mathbf{y}_0+\mathbf{y}_1$, with $h$ a nonlinear harmonic function, $\mathbf{y}_0$
a nonzero lightlike vector and $\mathbf{y}_1$ a constant vector. In both cases,
$M$ is flat, i.e. $K\equiv 0$.
\end{thm}
\begin{proof}
Denote $z=u_1+\sqrt{-1} u_2$ as before. Assume, to the contrary, that $M$ is not flat; then, as in the proof of Theorem \ref{t1}, the holomorphic differential
$\pd{\mathbf{x}}{z}$ can be expressed as
\begin{equation}\label{g}
(\alpha_1,\alpha_2,\alpha_3,\alpha_4)=
(\frac{1}{2},\frac{c}{2},\mu\cosh\be,\mu\sinh\be)
\end{equation}
where $c=a-b\sqrt{-1}$ is a complex constant with $b>0$, $\mu^2=-\frac{1+c^2}{4}$, and $\be=\be(z)$ is a non-constant holomorphic function defined on $\C$. We will derive a contradiction from this assumption.
By the Weierstrass representation formula given in \cite{m-w-w}, $\pd{\mathbf{x}}{z}$ can be expressed in terms of a pair of meromorphic functions $\phi,\psi$ (the \emph{Gauss maps}) and a holomorphic differential $dh=h'(z)dz$ (the \emph{height differential}) as below:
\begin{equation}
(\alpha_1,\alpha_2,\alpha_3,\alpha_4)
=(\phi+\psi,-\sqrt{-1}(\phi-\psi),1-\phi\psi,1+\phi\psi)h'.
\end{equation}
Comparing these two formulas, we obtain
\[
h'=\frac{\mu}{2}e^{\be},~~
\phi=\frac{1+c\sqrt{-1}}{2\mu}e^{-\be},~~
\psi=\frac{1-c\sqrt{-1}}{2\mu}e^{-\be}.
\]
Note that $\frac{1+c\sqrt{-1}}{2\mu}\cdot \frac{1-c\sqrt{-1}}{2\mu}=-1$,
and $b>0$ implies
$|\frac{1+c\sqrt{-1}}{2\mu}|>|\f{1+\bar{c}\sqrt{-1}}{2\mu}|$.
Denote $\frac{1+c\sqrt{-1}}{2\mu}:=re^{\sqrt{-1}\theta}$ with $r>1$ and $\th\in \R$, then $\frac{1-c\sqrt{-1}}{2\mu}=-r^{-1}e^{-\sqrt{-1}\theta}.$
In \cite{m-w-w} the Gaussian curvature and the normal curvature of a stationary surface were unified in a single formula in terms of $\phi,\psi$ and the Laplacian with respect to the
induced metric $g:=e^{2\om}|dz|^2$ as follows:
\begin{equation}\label{k1}
-K+\sqrt{-1}K^{\perp}
=\Delta\ln(\phi-\bar\psi)=4e^{-2\om}\frac{\phi_z\bar{\psi}_{\bar{z}}}{(\phi-\bar{\psi})^2}.
\end{equation}
Denote $\be:=v_1+\sqrt{-1}v_2$, where $v_1,v_2$ are both real functions on $\C$, then
\begin{equation}\label{k2}
\aligned
|K|e^{2\om}&=4\left|\text{Re}\left(\frac{\phi_z\bar{\psi}_{\bar{z}}}{(\phi-\bar{\psi})^2}\right)\right|\\
&=4\left|\text{Re}\left(\f{e^{2\sqrt{-1}\th}e^{-\be-\bar{\be}}}{(re^{\sqrt{-1}\th}e^{-\be}+r^{-1}e^{\sqrt{-1}\th}e^{-\bar{\be}})^2}\right)\right||\be'(z)|^2\\
&=4\left|\text{Re}\left(\f{1}{(re^{\f{1}{2}(\bar{\be}-\be)}+r^{-1}e^{\f{1}{2}(\be-\bar{\be})})^2}\right)\right||\be'(z)|^2\\
&=\f{4\left|2+(r^2+r^{-2})\cos(2v_2)\right|}{|re^{-\sqrt{-1}v_2}+r^{-1}e^{\sqrt{-1}v_2}|^4}|\be'(z)|^2\\
&\geq \f{4\left|2+(r^2+r^{-2})\cos(2v_2)\right|}{(r+r^{-1})^4}|\be'(z)|^2,
\endaligned
\end{equation}
where the last inequality uses $|re^{-\sqrt{-1}v_2}+r^{-1}e^{\sqrt{-1}v_2}|^2=r^2+r^{-2}+2\cos(2v_2)\leq (r+r^{-1})^2$.
Thus the assumption of finite total curvature implies
\begin{equation}\aligned\label{t-c}
+\infty>\int_M |K|dM
&=\int_{\C}
|K|e^{2\om}
du_1\wedge du_2\\
&\geq\int_{\C} \f{4\left|2+(r^2+r^{-2})\cos(2v_2)\right|}{(r+r^{-1})^4}|\be'(z)|^2
du_1\wedge du_2\\
&\geq\int_{\C} \f{4\left|2+(r^2+r^{-2})\cos(2v_2)\right|}{(r+r^{-1})^4}dv_1\w dv_2,
\endaligned\end{equation}
where the final inequality follows from the area formula and the fact that $\beta$ is a non-constant entire function on $\mathbb{C}$, which takes almost every value of $\mathbb{C}$ at least once.
The integrand on the right hand side of (\ref{t-c}) depends only on $v_2$, is nonnegative, and is positive outside a null set, so the last integral diverges, contradicting the finiteness of the total curvature.
\end{proof}
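The pointwise identity behind the curvature computation, $4\left|\mathrm{Re}\big(\phi_z\bar{\psi}_{\bar z}/(\phi-\bar\psi)^2\big)\right| = 4\left|2+(r^2+r^{-2})\cos(2v_2)\right||\be'|^2/(r^2+r^{-2}+2\cos 2v_2)^2$, can be spot-checked numerically. The Python snippet below is only an editorial sanity check with sample Weierstrass data ($\be(z)=z$, sample values of $r$ and $\th$):

```python
import cmath, math

R, TH = 2.0, 0.3   # sample Weierstrass data: a modulus r > 1 and a phase theta

def curvature_integrand_gap(z):
    # compare 4|Re(phi_z * conj(psi)_zbar / (phi - conj(psi))^2)|
    # with its closed form in r and v2 = Im(beta), for beta(z) = z
    beta = z
    phi = R * cmath.exp(1j*TH) * cmath.exp(-beta)
    psi = -cmath.exp(-1j*TH) / R * cmath.exp(-beta)
    phi_z = -R * cmath.exp(1j*TH) * cmath.exp(-beta)                    # d(phi)/dz
    psibar_zbar = cmath.exp(1j*TH) / R * cmath.exp(-beta.conjugate())   # conj(d(psi)/dz)
    lhs = 4 * abs((phi_z * psibar_zbar / (phi - psi.conjugate())**2).real)
    v2 = beta.imag
    closed = 4 * abs(2 + (R**2 + R**-2) * math.cos(2*v2)) \
             / (R**2 + R**-2 + 2*math.cos(2*v2))**2
    return abs(lhs - closed)

assert max(curvature_integrand_gap(complex(u, v))
           for u in (-1.0, 0.5) for v in (0.2, 1.8)) < 1e-10
```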
\noindent{\bf Remarks:}
\begin{itemize}
\item Taking the imaginary part of (\ref{k1}), one can proceed as in (\ref{k2})-(\ref{t-c}) to get a contradiction when the condition ``$\int_M |K| dM<+\infty$" is replaced by
``$\int_M |K^\bot| dM<+\infty$". Therefore, if $M\subset \R_1^4$ is an entire spacelike stationary graph over $\R^2$, whose normal curvature integral converges absolutely,
then $M$ has to be a flat surface.
\item Let $M$ be a non-compact surface equipped with a complete metric. If $\int_M |K| dM<+\infty$, then there is a compact Riemann surface $\bar{M}$, such that $M$ is
conformally equivalent to $\bar{M}\backslash \{p_1,p_2,\cdots,p_r\}$, with $p_1,\cdots,p_r\in \bar{M}$. This is a purely intrinsic result, discovered by A. Huber \cite{hu}.
Moreover, if we additionally assume $M$ to be a minimal surface in $\R^{2+m}$ ($m$ arbitrary), then the Gauss map of $M$ is algebraic, and vice versa (see Theorem 1 of \cite{c-o}).
But this conclusion is no longer true for spacelike stationary surfaces in $\R_1^4$, due to the examples with finite total curvature and essential singularities (see \cite{m-w-w}). Hence, differently from the $\R^4$ case \cite{o}, the conclusion of Theorem \ref{ftc} cannot be deduced directly from (\ref{g}).
\item Combining Theorem 1 of \cite{c-o} with \S 5 of \cite{o}, it is easy to conclude that $M=\text{graph }f:=\{(x,f(x)):x\in \R^2\}$ is a minimal surface in $\R^4$ with finite total curvature if and only if
$f=p(z)$ or $p(\bar{z})$, with $p$ an arbitrary polynomial. Noting that any minimal graph in $\R^4$ over $\R^2$ can be regarded as a spacelike stationary graph in
$\R_1^n$ ($n\geq 5$), the conclusion of Theorem \ref{ftc} cannot be generalized to spacelike stationary graphs in higher dimensional Minkowski spaces.
\end{itemize}
\bibliographystyle{amsplain}
\section{Introduction}
The planets of the solar system and the exoplanets with intrinsic magnetic fields are emitters of cyclotron microwave amplification by stimulated emission of radiation (MASER) emission at radio wavelengths \cite{Kaiser,Zarka5}. This radio emission is generated by energetic electrons (keV) traveling along magnetic field lines, particularly in the auroral regions \cite{Wu}, accelerated in the reconnection region between the interplanetary magnetic field (IMF) and the intrinsic magnetic field of the exoplanet. The magnetic energy is transferred as kinetic and internal energy to the electrons (a consequence of the local balance between the Poynting flux, enthalpy, and kinetic fluxes). Most of the power transferred is emitted as auroral emission in the visible electromagnetic range, but a fraction is invested in cyclotron radio emission \cite{Zarka5} that escapes from the exoplanet environment if the surrounding stellar wind plasma frequency is smaller than the maximum cyclotron frequency at the planetary surface \cite{Weber,Vidotto4}. There are other sources of radio emission from giant gaseous exoplanets, where the acceleration of electrons is related to the rapid rotation of the magnetic field or its interaction with the plasma released by satellites or even their magnetosphere.
Radio telescopes lack the resolution of optical or infrared telescopes because their angular resolution, defined as $\lambda / D$ with $\lambda$ the observation wavelength and $D$ the telescope diameter, is coarser: the radio wavelength is $10^{5} - 10^{6}$ times larger than the visible wavelength, so the telescope diameter must be correspondingly larger to reach the same angular resolution. To avoid this issue, current radio telescopes consist of arrays of widely spread antennas that act as a single aperture, maximizing their collecting area and effective diameter. Array radio telescopes have been used to observe young stars and protoplanetary disks; for example, in 2014 the Atacama Large Millimeter Array (ALMA) observed the young star HL Tau, finding gaps in the circumstellar disk attributed to young planet-like bodies \cite{Ricci}. The Very Large Array (VLA) also measured the radio emission of protoplanetary disks in the star-forming region LDN 1551 \cite{Torrelles}. Radio receivers like the LOw Frequency ARray (LOFAR) operate in the frequency range of $10 - 240$ MHz with a sensitivity of $1$ mJy (at $15$ MHz) up to $0.3$ mJy (at $150$ MHz) \cite{Haarlem}. It should be noted that if the exoplanet magnetic field intensity is much lower than $4 \cdot 10^{5}$ nT, the frequency of the signal is lower than $10$ MHz, below the LOFAR observation range. To be detectable from ground-based radio telescopes, the radio emission needs to be in the range of $15$--$200$ MHz, with the best chances for LOFAR if the emission is in the range of its peak sensitivity, between $50$ and $60$ MHz. In a recent study performed with LOFAR, the radio emission from the Jovian radiation belts was measured \cite{Girard}. Another study performed at the Giant Metrewave Radio Telescope (GMRT) tentatively identified radio emission from a hot Jupiter \cite{Lecavelier}, but this result could not be reproduced and is thus unconfirmed.
By contrast, other attempts to detect the radio emission from exoplanet magnetospheres have failed because the telescope was not sensitive enough \cite{Hallinan,Zarka6}, although the next generation of radio telescopes will be able to detect exoplanet radio emissions at distances of $\le 20$ parsec \cite{Carilli,Nan,Ricci2}.
The interaction of the stellar wind with the magnetosphere of an exoplanet can be described as the partial dissipation of the flow energy when a magnetized flow encounters an obstacle. The energy is dissipated as radiation in different ranges of the electromagnetic spectrum, depending on the magnetic properties of the flow and the obstacle. The power dissipated ($[P_{d}]$) can be approximated as the intercepted flux of the magnetic energy ($[P_{d}] \approx B^2 v \pi R^{2}_{obs} / 2 \mu_{0}$), with $B$ the magnetic field intensity perpendicular to the flow velocity in the frame of the obstacle, $\mu_{0}$ the magnetic permeability of vacuum, $v$ the flow velocity, and $R_{obs}$ the radius of the obstacle.
The radiometric Bode law links the incident magnetized flow power and obstacle magnetic field intensity with the radio emission as $[P_{rad}] = \beta [P_{d}]^{n}$, with $[P_{rad}]$ the radio emission and $\beta$ the efficiency of dissipated power to radio emission conversion with $n \approx 1$ \cite{Zarka3,Zarka4}. Recent studies pointed out $\beta$ values between $2 \cdot 10^{-3}$ and $10^{-2}$ \cite{Zarka8}.
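To make the scaling concrete, the Python snippet below evaluates both expressions; the field strength, wind speed, obstacle radius, and conversion efficiency in the example are assumed illustrative values, not quantities taken from the simulations:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum magnetic permeability [H/m]

def dissipated_power(B_perp, v, R_obs):
    # intercepted magnetic energy flux: [P_d] ~ B^2 v pi R_obs^2 / (2 mu_0)
    return B_perp**2 * v * math.pi * R_obs**2 / (2 * MU0)

def radio_power(P_d, beta_eff=5e-3, n=1.0):
    # radiometric Bode law: [P_rad] = beta * [P_d]^n, with beta ~ 2e-3 to 1e-2
    return beta_eff * P_d**n

# illustrative values: B = 5 nT, v = 400 km/s, obstacle radius 6e7 m
P_d = dissipated_power(5e-9, 4e5, 6e7)
P_rad = radio_power(P_d)
assert 0 < P_rad < P_d   # only a small fraction goes into radio emission
```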
The interaction of the stellar wind (SW) with planetary magnetospheres is studied using different numerical frameworks such as single fluid codes \cite{2008Icar..195....1K,2015JGRA..120.4763J,Strugarek2,Strugarek}, multifluid codes \cite{2008JGRA..113.9223K}, and hybrid codes \cite{2010Icar..209...46W,Muller2011946,Muller2012666,2012JGRA..11710228R}. The simulations show that the planetary magnetic field is enhanced or weakened in distinct locations of the magnetosphere according to the IMF orientation, modifying its topology \cite{Slavin,2000Icar..143..397K,2009Sci...324..606S}. To perform this study we use the magnetohydrodynamic (MHD) version of the single fluid code PLUTO in spherical 3D coordinates \cite{Mignone}. The present study is based on previous theoretical studies devoted to simulating global structures of the Hermean magnetosphere using MHD numerical models \cite{Varela,Varela2,Varela3,Varela4}. The analysis is part of recent modeling efforts to predict the radio emission from exoplanet magnetospheres \cite{Farrell2,Griemeier,Hess,Nichols}, complementary to other studies dedicated to analyzing the dependence of the radio emission on the stellar wind conditions \cite{Griemeier2,Jardine,Vidotto5,Vidotto6,See,Weber}. The model can be applied to exoplanet magnetospheres with topologies and intensities different from the Hermean case because the equations of ideal MHD contain no intrinsic spatial scales (such as the Debye length or the Larmor radius in kinetic plasma physics), and neither does the planetary dipole field. The only spatial scale of the problem is the planetary radius; however, this becomes negligible as soon as the magnetopause is far away from the planet surface (at least 2 times the planet radius).
Under these circumstances, for the given stellar wind parameters (i.e., sonic Mach number and plasma beta) there is no difference between a magnetosphere with standoff distance at $10$ planetary radii and a magnetosphere with standoff distance at $1000$ planetary radii (and $10^{6}$ times stronger dipole field). The planet is essentially a point with no spatial scale in the simulation. Consequently, the study of the magnetospheric radio emission in exoplanets with a low or high magnetic field is analogous; from the point of view of the magnetosphere structure the problem to solve is the same. Foreseeing the magnetospheric radio emission in an exoplanet with a stronger magnetic field is a scaling problem that can be approximated to the first order using extrapolations. It should be noted that in a model with a strong magnetic field, any effects on the radio emission related to small magnetopause standoff distances are not observed, an important issue in exoplanets with large quadrupolar magnetic field components. Another reason to perform simulations with low magnetic fields is to maximize the model resolution required to reduce the numerical resistivity and obtain a better approach of the power dissipation in the magnetosphere. In addition, the inner boundary of the model is inside the exoplanet to reduce any numerical effects in the computational domain, so the Alfven time (the characteristic time of the simulation) should be small enough to have manageable simulations.
Previous studies predicted the variability of the power dissipated in the Hermean magnetosphere with the solar wind hydrodynamic parameters (density, velocity, and temperature) and with the interplanetary magnetic field orientation and intensity, which modify the topology of the Hermean magnetic field, leading to different distributions of the energy dissipation hot spots (local maxima) and of the total emissivity \cite{refId0}. Exoplanet magnetospheres can also show very different configurations, for example a different ratio of dipolar to quadrupolar magnetic field components, magnetic axis tilt, intrinsic magnetic field intensity, rotation, or distortions driven by the magnetic field of other planets or satellites, leading to different exoplanet radio emissions even for the same configuration of the SW and IMF, namely a host star of the same type, age, rotation, and magnetic activity.
The aim of this study is to estimate the radio emission driven in the interaction of the SW with an exoplanet magnetosphere, analyzing the kinetic and magnetic energy flux distributions as well as the net power dissipated on the exoplanet's day and night side, exploring the radio emission as a tool to identify the exoplanet's magnetic field properties. The analysis is performed for different orientations of the IMF and exoplanet magnetosphere topologies: different ratios of the dipolar to quadrupolar magnetic field components, magnetic axis tilts, and intrinsic magnetic field intensities. The parametrization of the radio emission in different exoplanet magnetosphere topologies is a valuable tool for the interpretation of future radio emission measurements and is used to estimate thresholds of the exoplanet magnetic field intensity, the ratio of the dipolar component versus higher degree components, or the magnetic axis tilt. Furthermore, if an exoplanet magnetosphere topology is known, the radio emission measurements also tabulate the instantaneous stellar wind dynamic pressure, as well as the IMF orientation and intensity of the host star at the exoplanet orbit \cite{Hess,Vidotto,Vidotto2}.
This paper is structured as follows: Section 2, description of the simulation model, boundary and initial conditions; Section 3, analysis of the radio emission generation for exoplanets with different intrinsic magnetic field intensities; Section 4, analysis of the radio emission from an exoplanet magnetic field with different ratios of dipolar to quadrupolar components; Section 5, analysis of the radio emission for different tilts of the exoplanet magnetic axis; and Section 6, discussion and conclusions.
\section{Numerical model}
We use the ideal MHD version of the open source code PLUTO in spherical coordinates to simulate a single fluid polytropic plasma in the nonresistive and inviscid limit \cite{Mignone}.
The conservative form of the equations is integrated using a Harten--Lax--van Leer (hll) approximate Riemann solver associated with a diffusive limiter (minmod). The solenoidal condition on the magnetic field is ensured by a mixed hyperbolic--parabolic divergence cleaning technique \cite{Dedner}. The initial magnetic fields are divergenceless and remain so by applying the divergence cleaning method.
The grid is made of $256$ radial points, $48$ in the polar angle $\theta$ and $96$ in the azimuthal angle $\phi$ (the grid poles correspond to the magnetic poles). The simulation domain is confined within two spherical shells centered around the planet, representing the inner ($R_{in}$) and outer ($R_{out}$) boundaries of the system. Table 1 indicates the radial inner and outer boundaries of the system, characteristic length ($L$), effective numerical magnetic Reynolds number of the simulations due to the grid resolution ($R_{m}= V L/\eta$) and numerical magnetic diffusivity $\eta$. The kinetic Reynolds number ($R_{e}=V L/\nu$, with $\nu$ the numerical kinematic diffusivity) is in the range of the $[100,1000]$ for the different configurations. The numerical magnetic diffusivity is the driver of the reconnections in the model because we do not include an explicit value of the dissipation. Consequently, the numerical magnetic and kinetic diffusivities are determined by the grid resolution. \cite{Montgomery} calculated the kinematic viscosity and resistivity of the solar wind finding a value close to $1$ m$^2$/s and a Reynolds number of $10^{13}$. On the other hand, \cite{Verma} estimated an ion viscosity and resistivity of $5 \cdot 10^{7}$ m$^2$/s and a Reynolds number of $10^6$. If we assume \cite{Montgomery} results, the kinematic viscosity and resistivity values are too small and the Reynolds number too large to be simulated with the numerical resources available today because the grid resolution required is too large. If we consider the \cite{Verma} results, the numerical magnetic and kinetic diffusivities of the model for the given grid resolutions are closer to the solar wind parameters, particularly the B250 model (see columns VI and VII of table 1), so the study of the power dissipation should give a correct order of magnitude approximation. 
The numerical magnetic and kinematic diffusivity were evaluated in dedicated numerical experiments with the same grid resolution as the models and using a simpler setup, which indicated the limited impact of the grid resolution between models \cite{Varela,Varela2,Varela3,Varela4,Varela5,refId0}. The diffusivities change with the location because the grid is not uniform, so the dedicated experiments were performed using a resolution similar to the model resolution near the bow shock (BS) nose. It should be noted that the numerical resolution of the B6000 model is smaller than the B1000 and B250 models because the simulation domain is bigger for the same number of grid points, explaining why the numerical magnetic diffusivity is larger in that case.
Between the inner shell and the computational domain there is a ``soft coupling region'' ($R_{cr}$) where special conditions apply. Adding the soft coupling region improves the description of the plasma flows towards the planet surface, isolating the simulation domain from spurious numerical effects of the inner boundary conditions \cite{Varela2,Varela3}. The outer boundary is divided into two regions, the upstream part where the stellar wind parameters are fixed and the downstream part where we consider the null derivative condition $\frac{\partial}{\partial r} = 0$ for all fields. At the inner boundary the value of the exoplanet intrinsic magnetic field is specified. In the soft coupling region the velocity is smoothly reduced to zero when approaching the inner boundary. The magnetic field and the velocity are parallel, and the density is adjusted to keep the Alfven velocity constant, $\mathrm{v}_{A} = B / \sqrt{\mu_{0}\rho} = 25$ km/s, with $\rho = nm_{p}$ the mass density, $n$ the particle number density, and $m_{p}$ the proton mass. In the initial conditions we define a paraboloid on the night side with the vertex at the center of the planet, defined as $r < R_{cr} - 4\sin(\theta)\cos(\phi) / (\sin^{2}(\theta)\sin^{2}(\phi)+\cos^{2}(\theta))$, where the velocity is null and the density is two orders of magnitude smaller than in the stellar wind. The IMF is also cut off at $R_{cr} + 2R_{ex}$, with $R_{ex}$ the exoplanet radius. The radio emission of exoplanets with a magnetic field of $B_{ex}=6 \cdot 10^{3}$ nT is estimated on the day and night side using different $R_{out}$ values because several IMF orientations lead to the location of the magnetotail X point close to $R_{out}=150$, while the magnetopause standoff distance is around $4R_{ex}$.
Consequently, we use a model with $R_{out}=200$ to calculate the radio emission on the night side and a model with $R_{out}=75$ for the day side (improving the simulation resolution).
\begin{table*}[h]
\centering
\begin{tabular}{c | c c c c c c c}
Model & $R_{in}$ & $R_{out}$ & $R_{cr}$ & $L$ ($10^{6}$ m) & $R_{m}$ & $\eta$ ($10^{8}$ m$^{2}$/s) & $\nu$ ($10^{8}$ m$^{2}$/s)\\ \hline
B250 & $0.6$ & $16$ & $1.0$ & $2.44$ & $1800$ & $1.37$ & $0.30$\\
B1000 & $0.8$ & $30$ & $1.4$ & $4.30$ & $1020$ & $2.42$ & $0.42$\\
B6000 & $2.4$ & $75$ & $2.8$ & $36.59$ & $120$ & $25.81$ & $5.47$\\
Q02 & $2.0$ & $65$ & $2.5$ & $28.58$ & $150$ & $16.06$ & $4.10$\\
Q04 & $1.5$ & $50$ & $2.0$ & $17.82$ & $250$ & $10.01$ & $1.67$\\
\end{tabular}
\caption{Model inner boundary $R_{in}$, outer boundary $R_{out}$, soft coupling region $R_{cr}$, characteristic length $L$, numerical magnetic Reynolds number $R_{m}$, numerical magnetic diffusivity $\eta$, and numerical kinematic diffusivity $\nu$ for each configuration.}
\label{table1}
\end{table*}
The exoplanet magnetic field is implemented in our setup as an axisymmetric ($m = 0$) multipolar field up to $l = 2$. The magnetic potential $\Psi$ is expanded in dipolar and quadrupolar terms:
\begin{equation} \label{eq:1}
\Psi (r,\theta) = R_{M}\sum^{2}_{l=1} (\frac{R_{M}}{r})^{l+1} g_{0l} P_{l}(cos\theta)
\end{equation}
The current-free magnetic field is $B_{M} = -\nabla \Psi$, with $r$ the distance to the planet center, $\theta$ the polar angle, and $P_{l}(x)$ the Legendre polynomials. The numerical coefficients $g_{0l}$ for each model are summarized in Table 2. The model B6000 and the configurations with tilted magnetic axis have the same $g_{0l}$ coefficients. The effect of the tilt is emulated by modifying the orientation of the IMF and stellar wind velocity vectors, so we can use the same setup of the axisymmetric multipolar field for all the models.
\begin{table}[h]
\centering
\begin{tabular}{c | c c }
Model & $g_{01}$(nT) & $g_{02}/g_{01}$ \\ \hline
B250 & $-250$ & $0$ \\
B1000 & $-1000$ & $0$ \\
B6000 & $-6000$ & $0$ \\
Q02 & $-4800$ & $0.25$ \\
Q04 & $-3600$ & $0.67$ \\
\end{tabular}
\caption{Multipolar coefficients $g_{0l}$ for the exoplanet internal field.}
\label{table2}
\end{table}
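The corresponding field components follow from $B_r=-\partial\Psi/\partial r$ and $B_\theta=-(1/r)\,\partial\Psi/\partial\theta$. The Python sketch below is only an illustration of this evaluation (the function names and the sample check, using the B250 dipole coefficient of Table 2, are ours):

```python
import math

def B_multipole(r, theta, g01, g02, RM=1.0):
    # B = -grad(Psi) for the axisymmetric l = 1, 2 potential of Eq. (1),
    # with r measured in units of the planetary radius R_M
    x, s = math.cos(theta), math.sin(theta)
    P = {1: x, 2: 0.5 * (3*x*x - 1)}    # Legendre polynomials P_l(x)
    dP = {1: 1.0, 2: 3.0 * x}           # derivatives dP_l/dx
    g = {1: g01, 2: g02}
    Br = sum((l + 1) * (RM / r)**(l + 2) * g[l] * P[l] for l in (1, 2))
    Bt = sum((RM / r)**(l + 2) * g[l] * s * dP[l] for l in (1, 2))
    return Br, Bt

# pure dipole (model B250): |B_r| = 2|g01| at the pole, |B_theta| = |g01| at the equator
Br_pole, _ = B_multipole(1.0, 0.0, -250.0, 0.0)
Br_eq, Bt_eq = B_multipole(1.0, math.pi / 2, -250.0, 0.0)
assert abs(Br_pole + 500.0) < 1e-9
assert abs(Br_eq) < 1e-9 and abs(Bt_eq + 250.0) < 1e-9
```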
The simulation frame is such that the z-axis is given by the planetary magnetic axis pointing to the magnetic north pole and the star is located in the XZ plane with $x_{star} > 0$. The y-axis completes the right-handed system.
We assume a fully ionized proton electron plasma. The sound speed is defined as $\mathrm{v}_{s} = \sqrt {\gamma p/\rho} $ (with $p$ the total electron + proton pressure and $\gamma=5/3$ the adiabatic index), the sonic Mach number as $M_{s} = \mathrm{v}/\mathrm{v}_{s}$, and the Alfvenic Mach number as $M_{a} = \mathrm{v}/\mathrm{v}_{A}$. In the simulations the interaction of the stellar wind and the exoplanet magnetosphere leads to super-Alfvenic shocks ($M_{a}>1$), so the present model does not describe the radio emission from an exoplanet located in an orbit where the interaction is sub-Alfvenic ($M_{a}<1$).
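As a numerical illustration of these definitions (the solar-wind-like values below are assumed for the example, and equal electron and proton temperatures are taken for the total pressure):

```python
import math

MU0 = 4e-7 * math.pi    # vacuum permeability [H/m]
MP = 1.67e-27           # proton mass [kg]
KB = 1.38e-23           # Boltzmann constant [J/K]

def mach_numbers(n, v, T, B, gamma=5.0/3.0):
    # fully ionized proton-electron plasma; assuming equal electron and
    # proton temperatures, the total pressure is p = 2 n kB T
    rho = n * MP
    vs = math.sqrt(gamma * 2 * n * KB * T / rho)   # sound speed
    va = B / math.sqrt(MU0 * rho)                  # Alfven speed
    return v / vs, v / va

# illustrative solar-wind-like values: n = 5 cm^-3, v = 400 km/s, T = 1e5 K, B = 5 nT
Ms, Ma = mach_numbers(n=5e6, v=4e5, T=1e5, B=5e-9)
assert Ms > 1 and Ma > 1   # super-sonic and super-Alfvenic, as required by the model
```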
The present model does not resolve the plasma depletion layer as a global structure decoupled from the magnetosheath, owing to the limited model resolution, although simulations and observations share similar features between the magnetosheath and the magnetopause in the case of the Hermean magnetosphere \cite{Varela,Varela2,Varela3}. The magnetic diffusion of the model is larger than in the real plasma, so the reconnection between the interplanetary and exoplanet magnetic fields is instantaneous (no magnetic pile-up on the planet's day side) and stronger (enhanced erosion of the exoplanet magnetic field); nevertheless, the essential role of the reconnection region in the depletion of the magnetosheath, the injection of plasma into the inner magnetosphere, and the magnetospheric radio emission is reproduced \cite{refId0}. It should be noted that the exoplanet orbital motion is not included in the model, an effect likely important for the description of close-in exoplanets.
\section{Radio emission and exoplanet magnetosphere topology}
In this section we estimate the radio emission of exoplanets with different magnetosphere topologies and IMF orientations. We calculate the power dissipated by the interaction of the SW with the exoplanet magnetosphere on the planet's day side and at the magnetotail X point on the planet's night side; this dissipation drives irreversible processes in which internal, bulk-flow kinetic, magnetic, or potential energy of the system is transformed into accelerated electrons, and then into radiation and heating sources in the exoplanet magnetosphere. The transfer of energy can be assessed by evaluating the various energy fluxes ($\vec{F}$) involved in the system
\begin{equation}
\frac{\partial e}{\partial t} = -\vec{\nabla} \cdot \vec{F}
,\end{equation}
where
\begin{equation}
e = \rho \frac{v^{2}}{2} + \rho \frac{T}{\gamma - 1} + \frac{B^{2}}{2\mu_{0}}
,\end{equation}
and the energy flux
\begin{equation}
\vec{F} = \rho \vec{v} \left(\frac{v^{2}}{2}+\frac{\gamma T}{\gamma - 1}\right) + \vec{S} + \vec{Q}
.\end{equation}
The first term is the flux of kinetic energy, the second term is the enthalpy flux (the sum of the internal energy and the potential of the gas to do work by expansion), the third term is the Poynting flux $\vec{S} = \vec{E} \wedge \vec{B} / \mu_{0} \sim (\vec{v} \wedge \vec{B}) \wedge \vec{B}/\mu_{0}$, which represents the energy flux of the electromagnetic fields, and the last term $Q_{j} = -\mu \rho\, v_{i}\frac{\partial v_{i}}{\partial x_{j}}$ (summation over $i$, with $i,j = 1,2,3$) is the nonreversible energy flux. Here, $\vec{v}$ is the velocity field, $\vec{B}$ the magnetic field, $\vec{E}$ the electric field, $T$ the temperature, and $\mu$ the shear viscosity.
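As a small consistency check of the Poynting term (our own sketch with arbitrary illustrative values for $\vec{v}$ and $\vec{B}$, not simulation output): in ideal MHD $\vec{E} = -\vec{v}\wedge\vec{B}$, so $\vec{S}$ transports no energy along $\vec{B}$:

```python
import numpy as np

mu0 = 4e-7 * np.pi

# Illustrative (hypothetical) plasma state in SI units
v = np.array([4.0e5, 0.0, 0.0])      # velocity, m/s
B = np.array([5.0e-9, 0.0, 2.0e-8])  # magnetic field, T

# Ideal-MHD electric field and Poynting flux; the text absorbs the
# sign into the relation S ~ (v x B) x B / mu0
E = -np.cross(v, B)
S = np.cross(E, B) / mu0

# (v x B) x B is perpendicular to B, so S . B vanishes
along_B = np.dot(S, B)
```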
In the following, we calculate the dissipated power as a combination of the kinetic term $P_{k}$ (associated with the stellar wind dynamic pressure) and the magnetic Poynting term $P_{B}$ (due to the reconnection between the IMF and the exoplanet magnetic field). The enthalpy and the nonreversible energy fluxes are neglected because they are comparatively tiny. The net dissipated power is calculated as the volume integral of the divergence of the kinetic and Poynting fluxes in the regions of energy dissipation associated with hot spots (we define the threshold at $|P_{B}| > |P_{B}|_{max}/3$, with $|P_{B}|_{max}$ the absolute value of the maximum magnetic power in the hot spots):
\begin{equation} \label{eq:4}
[P_{k}] = \int_{V} \vec{\nabla} \cdot \left(\frac{\rho \vec{\mathrm{v}}\, |\vec{\mathrm{v}}|^{2}}{2} \right) dV
\end{equation}
\begin{equation} \label{eq:5}
[P_{B}] = -\int_{V} \vec{\nabla} \cdot \frac{(\vec{\mathrm{v}}\wedge\vec{B})\wedge\vec{B}}{\mu_{0}} dV
\end{equation}
On the day side, the integration volume extends from the BS to the inner magnetosphere near the radio emission hot spots (magnetosheath and magnetopause included). On the night side, the integration volume is localized at the magnetotail X point, where the magnetic field modulus is smaller than $10$ nT.
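The evaluation of Eqs. (4) and (5) can be sketched as follows on a uniform Cartesian grid; this is a hedged illustration with synthetic placeholder fields, not the actual simulation mesh or data:

```python
import numpy as np

mu0 = 4e-7 * np.pi

# Uniform Cartesian grid (synthetic; the actual model uses its own mesh)
N = 32
x = np.linspace(-4.0, 4.0, N)
dx = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Placeholder smooth fields standing in for the simulation output
rho = 1.0 + 0.1 * np.exp(-(X**2 + Y**2 + Z**2))
v = np.stack([0.5 * np.ones_like(X), 0.1 * Y, 0.0 * Z])
B = np.stack([0.0 * X, 0.0 * Y, 1.0 + 0.2 * X])

def divergence(F, dx):
    """div F for a vector field of shape (3, N, N, N)."""
    return sum(np.gradient(F[i], dx, axis=i) for i in range(3))

# Eq. (4) integrand: kinetic energy flux rho * v * |v|^2 / 2
Fk = rho * v * (v**2).sum(axis=0) / 2.0

# Eq. (5) integrand: Poynting-like flux (v x B) x B / mu0
S = np.cross(np.cross(v, B, axis=0), B, axis=0) / mu0

Pk_density = divergence(Fk, dx)
PB_density = -divergence(S, dx)

# Restrict the integrals to the hot spots: |P_B| > |P_B|_max / 3
mask = np.abs(PB_density) > np.abs(PB_density).max() / 3.0
Pk_int = (Pk_density * mask).sum() * dx**3
PB_int = (PB_density * mask).sum() * dx**3
```

The thresholding step mirrors the hot-spot criterion defined above; on real data the mask would select the magnetosheath/magnetopause (day side) or the magnetotail X point (night side).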
\subsection{Interaction between IMF and exoplanet magnetic field}
We now show examples of the interaction between the interplanetary and the exoplanet magnetic fields. In the following, the hydrodynamic parameters of the stellar wind are fixed in all simulations and summarized in Table 3 (including the stellar wind plasma frequency $\omega_{sw}$), as is the modulus of the IMF, which is kept at $20$ nT (the IMF orientation for each model can be found in Table B.1 of the Appendix). The selected IMF modulus and SW dynamic pressure are the expected typical values for an exoplanet in an orbit near the habitable zone of a star similar to the Sun (between 0.95 and 1.67 au; \cite{Kopparapu,Gallet}). We only consider typical values because the instantaneous SW and IMF parameters can be highly variable; in the case of Mercury, for example, the SW density ranges over $[10 , 180]$ cm$^{-3}$, the velocity over $[300 , 700]$ km/s, and the temperature over $[45\,000, 200\,000]$ K \cite{Varela2}. For a systematic study of the effect of the stellar wind dynamic pressure, temperature, and IMF on the radio emission we refer the reader to \cite{refId0}.
\begin{table}[h]
\centering
\begin{tabular}{c c c c}
$n$ (cm$^{-3}$) & $T$ (K) & $\mathrm{v}$ (km/s) & $\omega_{sw}$ (kHz) \\ \hline
$60$ & $90000$ & $350$ & $69.5$\\
\end{tabular}
\caption{Hydrodynamic parameters of the SW.}
\label{table3}
\end{table}
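As a consistency check (ours, not part of the simulation code), the parameters of Table 3 together with the $20$ nT IMF modulus imply a supersonic, super-Alfvenic flow and reproduce the quoted plasma frequency; equal electron and proton temperatures are assumed:

```python
import numpy as np

# Physical constants (SI)
kB, mp, me = 1.380649e-23, 1.6726e-27, 9.1094e-31
e, eps0, mu0 = 1.602177e-19, 8.8542e-12, 4e-7 * np.pi

# Stellar wind parameters of Table 3 and the 20 nT IMF modulus
n = 60e6        # density, m^-3
T = 9.0e4       # temperature, K
v = 350.0e3     # velocity, m/s
B = 20.0e-9     # IMF modulus, T
gamma = 5.0 / 3.0

rho = n * mp                    # proton mass dominates the mass density
p = 2.0 * n * kB * T            # electron + proton pressure, equal T assumed
vs = np.sqrt(gamma * p / rho)   # sound speed
vA = B / np.sqrt(mu0 * rho)     # Alfven speed
Ms, Ma = v / vs, v / vA         # both > 1: supersonic, super-Alfvenic regime

# Plasma frequency, ~69.5 kHz as quoted in Table 3
f_pe = np.sqrt(n * e**2 / (eps0 * me)) / (2.0 * np.pi)
```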
Figure 1 shows a 3D view of the system for a northward IMF orientation of a model with a dipolar magnetic field of $6000$ nT and a magnetic axis tilt of $60^{o}$ (with respect to the rotation axis). The BS is identified by the color scale of the density distribution (large density increase). The SW dynamic pressure bends the exoplanet magnetic field lines (red lines), compressing them on the exoplanet's day side and forming the magnetotail on the night side. The IMF (pink lines) reconnects with the exoplanet magnetic field lines, leading to the formation of the exoplanet magnetopause. The arrows indicate the orientation of the IMF, and the dashed white line marks the outer limit of the simulation domain (the star is not included in the model).
\begin{figure}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics[width=\columnwidth]{1.jpg}}
\caption{Three-dimensional view of a typical simulation setup. Density distribution (color scale), exoplanet magnetic field lines (red lines), and IMF (pink lines). The arrows indicate the orientation of the IMF (northward orientation). The dashed white line shows the beginning of the simulation domain. We note that the star is not included in the model.}
\label{1}
\end{figure}
In the following we denote the IMF orientation pointing from the exoplanet to the star as Bx simulations; from the star to the exoplanet as Bxneg simulations; northward with respect to the exoplanet's magnetic axis as Bz simulations (figure 1); southward as Bzneg simulations; and the orientations perpendicular to the previous two in the exoplanet orbital plane as By (east) and Byneg (west) simulations. The IMF and exoplanet intrinsic magnetic configurations of the simulations are summarized in Appendix B.
\begin{figure}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics[width=\columnwidth]{2.jpg}}
\caption{Exoplanet magnetic field lines, with the intensity imprinted on the field lines by a color scale, for the B6000 configuration with a magnetic axis tilt of $0^{o}$ (A) and $60^{o}$ (B). Magnetic field intensity at the frontal plane $X = 3R_{ex}$. Stellar wind stream lines (green). The pink arrow shows the reconnection region and the white arrow the magnetic field pile-up region. The IMF is oriented in the Bx direction.}
\label{2}
\end{figure}
Figure 2 illustrates the interaction of the IMF and the exoplanet magnetospheric field in the model B6000 (panel A) and the model tilt60 (panel B). The exoplanet magnetic field lines are color-coded with the magnetic field amplitude, and the green lines are the SW stream lines. The frontal plane at $X = 3R_{ex}$ shows the magnetic field intensity. The reconnection regions are identified as a local decrease of the magnetic field (blue color near the poles for model B6000 and near the south pole for model tilt60, highlighted by pink arrows), and the local magnetic field pile-up regions as an increase (yellow/orange colors near the equator for model B6000 and near the north pole for model tilt60, highlighted by white arrows). The reconnections are associated with regions of SW injection into the inner magnetosphere (plasma streams from the magnetosheath towards the exoplanet surface) \cite{Varela3}. The magnetic field pile-up regions are linked with radio emission hot spots (acceleration of electrons along the magnetic field lines) \cite{refId0}. In both cases there is a conversion of magnetic energy into kinetic and internal energy. Consequently, the exoplanet magnetic field topology and the IMF orientation are critical for the exoplanet radio emission, since it is a direct outcome of the location and intensity of the magnetosphere reconnection and magnetic field pile-up regions. It should be noted that the energy dissipation and radio emission hot spots are not localized in the same regions of the magnetosphere; the electrons are accelerated in the zones with large energy dissipation, whereas the radio emission is generated along their trajectory around the magnetic field lines towards the exoplanet surface, where the cyclotron frequency is the highest. Nevertheless, the energy dissipation and radio emission hot spots are correlated and show some common features in the simulations.
The present study is limited to the analysis of the dissipated power and radio emission driven by the stellar wind interaction with the magnetosphere of rocky and giant gaseous exoplanets. It should be noted that the radio emission from giant gaseous exoplanets is also caused by internal plasma sources combined with their fast rotation, as was observed for Jupiter and to a lesser extent for Saturn \cite{Bagenal,Krupp}, conditions not included in our present model so the predicted values may be considered as a lower bound. Icy exoplanets similar to Uranus or Neptune show strongly nonaxisymmetric magnetic fields and fast rotation, so an axisymmetric magnetic field model cannot reproduce their radio emission properly.
\subsection{Effect of the exoplanet magnetic field intensity}
In this section we analyze the effect of the intrinsic magnetic field intensity on the radio emission for different IMF orientations. We perform simulations for exoplanets with an intrinsic magnetic field intensity of $250$, $1000,$ and $6000$ nT.
\begin{figure*}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics[width=\columnwidth]{3.jpg}}
\caption{View of the magnetic power from the night side of the exoplanet for different IMF orientations and exoplanet intrinsic magnetic field intensities. The first color bar is related to the Bx--Bxneg IMF orientations, and the second color bar is related to the other IMF orientations. The plotted surface is defined between the bow shock and the magnetopause where the magnetic power reaches its maxima.}
\label{3}
\end{figure*}
Figure 3 shows a view of the magnetic power from the night side of the exoplanet ($P_{B}(DS)$) for different IMF orientations and exoplanet intrinsic magnetic field intensities. The minima of the magnetic power are correlated with a local decrease in the exoplanet magnetic field intensity, while the maxima are correlated with a local enhancement of the exoplanet magnetic field (pile-up). The hot spot distribution for the Bx--Bxneg IMF orientations is north--south asymmetric (panels 3A, B, G, H, N, and O) because there is a reconnection region to the south (north) of the magnetosphere if the IMF is oriented in the Bx (Bxneg) direction. The hot spots are displaced northward for Bx IMF orientations and southward for Bxneg IMF orientations as the exoplanet magnetic field intensity increases, because the reconnection regions are located farther away from the exoplanet surface (the magnetopause standoff distance increases). The hot spot distribution for the By--Byneg IMF orientations shows an east--west asymmetry, also correlated with the location of the reconnection regions (panels 3C, D, I, J, P, and Q). As the exoplanet magnetic field intensity increases, the hot spots on either side of the magnetic axis are located farther away from the exoplanet, caused by the increase in the magnetopause standoff distance and the counterclockwise (clockwise) rotation of the hot spots for the By (Byneg) IMF orientation. The reconnection regions for the Bz (panels E, K, and R) and Bzneg (panels 3F, M, and S) IMF orientations are located near the exoplanet poles and the equator, respectively. If the exoplanet magnetic field intensity increases, the reconnection regions are located farther away from the poles (equatorial region), and the hot spot distribution is located closer to the equatorial (polar) region. In summary, the hot spots are located farther away from the exoplanet surface as the magnetic field intensity increases.
It should be noted that a larger SW dynamic pressure leads to a more compact magnetosphere on the exoplanet's day side, so the hot spots will be located closer to the exoplanet surface. Consequently, the correct identification of the exoplanet magnetic field intensity requires an accurate determination of the host star SW dynamic pressure at the exoplanet orbit (a deviation larger than $50 \%$ from the real value, particularly if the stellar wind pressure is large, will lead to incorrect results). Such information can be partially inferred by analyzing the radio emission from the exoplanet's night side, because a strong radio emission is a sign of intense magnetotail stretching and high SW dynamic pressure \cite{Varela2}.
\begin{figure*}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics[width=\columnwidth]{4.jpg}}
\caption{View of the kinetic power from the night side of the exoplanet in the B6000 model for different IMF orientations. The plotted surface is defined between the bow shock and the magnetopause where the kinetic power reaches its maxima.}
\label{4}
\end{figure*}
Figure 4 shows a view of the kinetic power from the night side of the exoplanet ($P_{k}(DS)$) of the B6000 model for different IMF orientations. A local decrease (enhancement) in the magnetospheric field is associated with a local enhancement (decrease) in $P_{k}(DS)$, caused by the acceleration of the plasma in the reconnection regions where the stellar wind is injected into the inner magnetosphere. Consequently, the $P_{k}(DS)$ distribution is determined by the magnetosphere topology and the IMF orientation. An increase in the SW dynamic pressure enhances the magnetic and kinetic powers, although the hot spot distribution is only slightly disturbed \cite{refId0}.
\begin{figure*}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics[width=\columnwidth]{5.jpg}}
\caption{Magnetic power on the exoplanet's night side ($P_{B}(NS)$) of model B250 for the Bx (A), By (B), and Bzneg (C) IMF orientations. The reconnection region (isosurface with magnetic field intensity between 0 and 20 nT) is indicated in dark blue and dark green; the magnetotail reconnection region is indicated by the yellow rectangle. Magnetic field lines of the exoplanet and IMF are indicated in red.}
\label{5}
\end{figure*}
Figure 5 shows a zoomed view of the magnetic power ($P_{B}(NS)$) on the night side for an exoplanet with a magnetic field intensity of $250$ nT for Bx, By, and Bzneg IMF orientations. The different IMF orientations modify the exoplanet's magnetosphere topology (red magnetic field lines) and reconnection regions between the exoplanet magnetic field and the IMF (dark blue and dark green iso-surfaces linked to the magnetopause and the magnetotail X point). For this reason the location and intensity of the radio emission hot spots are different in each model. The last closed magnetic field line indicates the location of the magnetopause and the open magnetic field lines are reconnected lines between the IMF and the exoplanet magnetic field.
The expected radio emission is calculated from the net magnetic and kinetic power dissipated on the planet's day and night sides using the radiometric Bode law \cite{Zarka3,Farrell} for different exoplanet magnetic field intensities:
\begin{equation} \label{eq:6}
[P(DS)] = a [P_{k}(DS)] + b [P_{B}(DS)]
\end{equation}
\begin{equation} \label{eq:7}
[P(NS)] = a [P_{k}(NS)] + b [P_{B}(NS)]
\end{equation}
with $a$ and $b$ the efficiency ratios, assuming a linear dependence of $[P]$ on $[P_{k}]$ and $[P_{B}]$. The radio emission measured from the solar system planets can be explained by two possible combinations of efficiency ratios: ($a = 10^{-5}$, $b=0$) or ($a = 0$, $b=2\cdot10^{-3}$) \cite{Zarka4,Zarka10}. In the following, we only consider the combination $a = 0$, $b=2\cdot10^{-3}$ because the other combination leads to a radio emission several orders of magnitude smaller on the day and night sides. All the $[P(DS)]$ and $[P(NS)]$ values are calculated for an exoplanet with the same radius as Mercury ($R_{ex}= 2440$ km) to allow a straightforward comparison with the results of \cite{refId0}. The model uses dimensionless units with distances normalized to the exoplanet radius, so $[P(DS)]$ and $[P(NS)]$ can be expressed in terms of any exoplanet radius (recall that $1\,\mathrm{W} = 1\,\mathrm{kg\,m^{2}\,s^{-3}}$). For example, to calculate the radio emission of an exoplanet with the same radius as the Earth, the values in the tables must be multiplied by the factor $(R_{Earth}/R_{Mercury})^2 \approx 6.8$. It should be noted that the obstacle radius relevant in the analysis of the radio emission is the distance from the magnetopause to the exoplanet surface, not the exoplanet radius; the radio power is enhanced as the modulus of the exoplanet magnetic field increases because the magnetosphere is wider. On the other hand, the ratio of the exoplanet radii must be included in the extrapolation to be consistent with the fact that the magnetic field modulus is compared at the exoplanet surface.
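A minimal numerical sketch of Eqs. (6)--(7) and of the radius rescaling (the integrated powers are hypothetical values, and standard mean radii $R_{Earth}=6371$ km and $R_{Mercury}=2440$ km are assumed):

```python
# Radiometric Bode law [P] = a*[Pk] + b*[PB], with the efficiency pair
# retained in the text (a = 0, b = 2e-3); the power values are hypothetical
a, b = 0.0, 2e-3
Pk, PB = 1.0e8, 3.0e8          # illustrative integrated powers, W
P_radio = a * Pk + b * PB      # only the Poynting term contributes here

# Rescaling the tabulated powers from a Mercury-sized to an Earth-sized
# exoplanet (standard mean radii assumed)
R_mercury_km, R_earth_km = 2440.0, 6371.0
scale = (R_earth_km / R_mercury_km) ** 2   # ~6.8
```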
\begin{table}[h]
\centering
\begin{tabular}{c}
$[P(DS)]$ ($10^{5}$ W)
\end{tabular}
\begin{tabular}{c | c c c c c c}
Model & Bx & Bxneg & By & Byneg & Bz & Bzneg \\ \hline
B250 & $0.63$ & $0.63$ & $6.90$ & $8.72$ & $5.96$ & $11.3$ \\
B1000 & $2.18$ & $2.04$ & $12.4$ & $12.2$ & $10.6$ & $29.5$ \\
B6000 & $4.34$ & $4.21$ & $35.0$ & $32.5$ & $14.7$ & $72.5$ \\
\end{tabular}
\begin{tabular}{c}
Linear regression slope day side vs $B_{ex}$ ($10^{11}$ W/T)
\end{tabular}
\begin{tabular}{c | c c c c c c}
& Bx & Bxneg & By & Byneg & Bz & Bzneg \\ \hline
$\alpha$ & $0.77 $ & $0.74$ & $6.05$ & $5.65$ & $2.71$ & $12.6$ \\
$\Delta \alpha$ & $\pm 0.1$ & $\pm 0.1$ & $\pm 1$ & $\pm 1$ & $\pm 1$ & $\pm 2$ \\
\end{tabular}
\begin{tabular}{c}
$[P(NS)]$ ($10^{5}$ W)
\end{tabular}
\begin{tabular}{c | c c c c c c}
Model & Bx & Bxneg & By & Byneg & Bz & Bzneg \\ \hline
B250 & $0.10$ & $0.10$ & $0.53$ & $0.53$ & $0.13$ & $0.18$ \\
B1000 & $1.04$ & $0.72$ & $1.74$ & $1.71$ & $1.00$ & $0.66$ \\
B6000 & $10.2$ & $10.2$ & $15.8$ & $16.4$ & $7.75$ & $20.4$ \\
\end{tabular}
\begin{tabular}{c}
Linear regression slope night side ($10^{11}$ W/T)
\end{tabular}
\begin{tabular}{c | c c c c c c}
& Bx & Bxneg & By & Byneg & Bz & Bzneg \\ \hline
$\alpha$ & $1.68$ & $1.67$ & $2.61$ & $2.70$ & $1.28$ & $3.32$ \\
$\Delta \alpha$ & $\pm 0.08$ & $\pm 0.1$ & $\pm 0.1$ & $\pm 0.1$ & $\pm 0.04$ & $\pm 0.3$ \\
\end{tabular}
\caption{Expected radio emission on the exoplanet's day side (first table) and at the magnetotail X point on the exoplanet's night side (third table) for different IMF orientations ($a = 0$, $b=2\cdot10^{-3}$) and exoplanet magnetic field intensities. Slope ($\alpha$) and its uncertainty ($\Delta \alpha$) for the linear regression $[P] = \alpha B_{ex}$ for each IMF orientation (second and fourth tables). $B_{ex}$ is the magnetic field intensity on the exoplanet surface and the exoplanet radius is $R_{ex}=2440$ km.}
\label{table4}
\end{table}
Table 4 shows the predicted radio emission on the exoplanet's day side (first part of the table) and night side (third part). The radio emission increases with the magnetic field intensity, consistent with the theoretical scaling confirmed by the radio emission measurements of the gaseous planets in the solar system \cite{Desch,Zarka2,Zarka3,Zarka4,Nichols}. It should be noted that the scaling law ``emitted power'' versus ``impinging Poynting flux'' only gives an order-of-magnitude estimation of what may be observable with a given radio telescope, so the current paper is not about detection but about emission efficiency for various exoplanet magnetic field configurations and stellar wind magnetic field orientations. In addition, the total radio power is integrated over all the frequencies of the emission, whereas a radio telescope has a finite bandwidth, so the radio power in Table 4 overestimates the radio telescope measurements. The radio emission on the day side is almost one order of magnitude higher than the radio emission on the night side for all the IMF orientations in models B250 and B1000. On the other hand, the radio emission is larger on the night side in the model B6000 for the Bx--Bxneg IMF orientations. If we calculate the linear regression $[P] = \alpha B_{ex}$ (second and fourth parts of the table), with $B_{ex}$ the magnetic field intensity on the exoplanet surface, we observe that only the Bx--Bxneg IMF orientations lead to a stronger radio emission on the night side. For the By--Byneg and Bz--Bzneg IMF orientations the radio emission on the day side is 2 times larger than on the night side. The uncertainty of the linear regression slope ($\Delta \alpha$) indicates a reasonable agreement with the data trend. The radio emission on the night side of model B6000 shows a smaller variation between the different IMF orientations, because the internal magnetosphere topology is less affected by the IMF orientation as the exoplanet magnetic field intensity increases.
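The through-origin fit $[P] = \alpha B_{ex}$ can be reproduced from the tabulated values; for example, a least-squares sketch (our own check) for the day-side Bx column of Table 4:

```python
import numpy as np

# [P(DS)] for the Bx IMF orientation (Table 4, models B250/B1000/B6000)
B_ex = np.array([250e-9, 1000e-9, 6000e-9])   # surface field, T
P = np.array([0.63e5, 2.18e5, 4.34e5])        # day-side radio power, W

# Least-squares slope of the through-origin model P = alpha * B_ex
alpha = (P * B_ex).sum() / (B_ex**2).sum()
alpha_table_units = alpha / 1e11              # ~0.77, as in Table 4
```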
The strongest radio emission on the day side is observed for the Bzneg IMF orientation, followed by the By--Byneg orientations, whereas the Bx--Bxneg IMF orientations lead to the weakest radio emission. Thus, future radio emission measurements require an observation time long enough to average out the effect of the IMF orientation (as well as the variation in the SW dynamic pressure), because the instantaneous radio emission can change by up to one order of magnitude if the IMF is oriented in the exoplanet--star, southward, or northward directions. Similar trends were found in a previous study of the IMF effect on the Hermean magnetospheric radio emission \cite{refId0}.
The maximum cyclotron frequency at the planetary surface for the models B250, B1000, and B6000 is $f_{max} = 14$, $56$, and $336$ kHz, respectively. Consequently, the radio emission from exoplanets with less than $B_{ex} \approx 1000$ nT is less likely to be observed (at least for the stellar wind conditions analyzed in this article, where the plasma frequency is $69.5$ kHz). Based on our knowledge of Jupiter, Saturn, and the Earth, the auroral radio emission (CMI) is produced between very low frequencies (kHz) and $f_{max}$. It should be noted that the radio-magnetic scaling law relates to the total power integrated over the emission's spectrum and beaming pattern (and averaged over time variations). In addition, the average radio flux density can be deduced from the power divided by the spectral range ($\approx f_{max}$) and the solid angle in which the emission is beamed (typically $0.2$ to $1$ sr; see \cite{Zarka9}). The peak instantaneous flux density can exceed the average flux density by two orders of magnitude \cite{Galopeau,Zarka9,Lamy}.
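The quoted $f_{max}$ values follow from the electron cyclotron frequency $f = eB/(2\pi m_{e})$ evaluated at the polar surface field, which for a dipole is twice the equatorial coefficient $|g_{01}|$ (our assumption about the polar-field convention); a short check:

```python
import numpy as np

e_charge, m_e = 1.602177e-19, 9.1094e-31   # SI

def f_cyc_kHz(B_nT):
    """Electron cyclotron frequency f = e*B/(2*pi*m_e), in kHz."""
    return e_charge * (B_nT * 1e-9) / (2.0 * np.pi * m_e) / 1e3

# Polar surface field of a dipole is 2*|g01|: models B250, B1000, B6000
f_max = [round(f_cyc_kHz(2.0 * g)) for g in (250.0, 1000.0, 6000.0)]
# f_max -> [14, 56, 336] kHz, matching the values quoted above
```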
\subsection{Effect of the magnetic field quadrupolar-to-dipolar components ratio}
In this section we analyze the effect of the exoplanet magnetic field topology on the radio emission in configurations with different ratios of the dipolar and quadrupolar components, namely the models Q02 ($B_{dip}= 0.8 \cdot B_{ex}$ and $B_{quad}= 0.2 \cdot B_{ex}$) and Q04 ($B_{dip}= 0.6 \cdot B_{ex}$ and $B_{quad}= 0.4 \cdot B_{ex}$), with $B_{ex}=6000$ nT, for different IMF orientations.
Figure 6 shows a polar cut of the density distribution (color scale, panels A and B) and the frontal plane of the magnetic field modulus (color scale, panels C and D) of the models Q02 and Q04 for a Bx IMF orientation. The red lines show the exoplanet magnetic field lines. Compared to the B6000 model, the Q02 and Q04 models show wider regions of open magnetic field lines on the planet surface and a smaller magnetopause standoff distance, because an increase in the $g_{02}/g_{01}$ ratio leads to a northward displacement and a faster decay of the exoplanet magnetic field. Consequently, the magnetosphere topology of the models Q02 and Q04 differs from that of the model B6000, so the reconnection regions, the dissipation, and the location and intensity of the radio emission hot spots also change. Moreover, a higher $g_{02}/g_{01}$ ratio leads to a thinner and deformed magnetosheath, so the SW precipitates directly towards the exoplanet surface at low southern-hemisphere latitudes in the Q04 model.
\begin{figure}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics[width=\columnwidth]{6.jpg}}
\caption{Polar cut of the density distribution (color scale) and field lines of the exoplanet magnetic field (red lines) of models Q02 (A) and Q04 (B) for the Bx IMF orientation. The black disk radius is $R_{cr}$. Frontal cut of the magnetic field module on the star--exoplanet direction of models Q02 (C) and Q04 (D).}
\label{6}
\end{figure}
Figure 7 shows a view of the magnetic power from the night side of the exoplanet for different IMF orientations and exoplanet magnetic field topologies. If the IMF is oriented in the Bx--Bxneg directions (panels A, B, G, and H), an increase in the quadrupolar component of the exoplanet magnetic field leads to a northward drift of the hot spots, located farther away from the north pole for the Bx IMF orientation and closer to the equator for the Bxneg IMF orientation with respect to the B6000 model, due to the northward displacement of the magnetosphere. A similar effect is observed for the By--Byneg IMF orientations (panels C, D, I, and J), where the hot spots in the north of the magnetosphere are located farther away from the exoplanet, although the hot spots in the south of the magnetosphere are located closer to the exoplanet. For the Bz--Bzneg IMF orientations the hot spots are also displaced northward. The Q04 model shows, for all the IMF orientations, a region of large magnetic power near the exoplanet (panels K and M); compared to the B6000 and Q02 models (panels E and F), the magnetopause standoff distance is smaller and the reconnection regions are enhanced, which is caused by the strong deformation of the internal magnetospheric field compared to the dipolar case. These results indicate that a northward displacement (or southward, depending on the exoplanet magnetic field orientation) of the hot spot distribution, independently of the instantaneous IMF orientation, points to a large quadrupolar component of the exoplanet magnetic field. It should be noted that the radio telescope observation angle with respect to the exoplanet and the exoplanet--host star vector must be calculated accurately to avoid an overestimation of the quadrupolar component.
\begin{figure*}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics[width=\columnwidth]{7.jpg}}
\caption{View of the magnetic power from the night side of the exoplanet for different IMF orientations and exoplanet magnetic field topologies (different dipolar-to-quadrupolar component ratios). The first color bar is related to the Bx--Bxneg IMF orientations, and the second color bar is related to the other IMF orientations. The plotted surface is defined between the bow shock and the magnetopause where the magnetic power reaches its maxima.}
\label{7}
\end{figure*}
Figure 8 shows the magnetic power ($P_{B}(NS)$) on the exoplanet's night side and the magnetosphere topology of the model Q04 for the Bx, By, and Bz IMF orientations. Compared to the B6000 model (see figure 5) the magnetotail is more slender and stretched, a consequence of a stronger perturbation of the internal magnetosphere topology by the IMF due to the faster decay of the quadrupolar component with respect to the dipolar component, so the radio emission is lower. To quantify the magnetotail stretching we calculate the ratio of the averaged magnetotail width to the location of the X point, obtaining values around $0.115$ for the Bx IMF case, $0.169$ for the By IMF case, and $0.187$ for the Bz case, smaller than in the B6000 model, where the ratio is $0.313$ for the Bx IMF case ($2.7$ times larger), $0.399$ for the By IMF case ($2.4$ times larger), and $0.355$ for the Bz IMF case ($1.9$ times larger). Consequently, the radio emission on the exoplanet's night side varies by almost one order of magnitude between the different configurations.
\begin{figure*}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics[width=\columnwidth]{8.jpg}}
\caption{Magnetic power on the exoplanet's night side ($P_{B}(NS)$) of model Q04. The reconnection region (isosurface with magnetic field intensity between 0 and 20 nT) is indicated in dark blue and dark green; the magnetotail reconnection region is indicated by the yellow rectangle. Magnetic field lines of the exoplanet and IMF are indicated in red.}
\label{8}
\end{figure*}
Table 5 shows the expected radio emission for different ratios of the quadrupolar to dipolar components on the exoplanet's day side (the values on the night side are not shown because the trends only indicate a decrease in the radio emission as the quadrupolar-to-dipolar ratio increases due to the faster decay of the quadrupolar component):
\begin{table}[h]
\centering
\begin{tabular}{c}
$[P(DS)]$ ($10^{5}$ W)
\end{tabular}
\begin{tabular}{c | c c c c c c}
Model & Bx & Bxneg & By & Byneg & Bz & Bzneg \\ \hline
Q02 & $3.33$ & $4.40$ & $19.1$ & $16.9$ & $9.43$ & $33.5$ \\
Q04 & $26.6$ & $51.3$ & $63.9$ & $55.6$ & $42.1$ & $44.6$ \\
\end{tabular}
\begin{tabular}{c}
Model / B6000 $[P(DS)]$ ratio
\end{tabular}
\begin{tabular}{c | c c c c c c}
Model & Bx & Bxneg & By & Byneg & Bz & Bzneg \\ \hline
Q02 & $0.77$ & $1.05$ & $0.55$ & $0.52$ & $0.64$ & $0.46$ \\
Q04 & $6.13$ & $12.2$ & $1.83$ & $1.71$ & $2.86$ & $0.61$ \\
\end{tabular}
\caption{Expected radio emission on the exoplanet's day side for different IMF orientations ($a = 0$, $b=2\cdot10^{-3}$) and exoplanet magnetic field topologies (quadrupolar-to-dipolar ratios). Results for an exoplanet with a radius of $R_{ex}=2440$ km.}
\label{table5}
\end{table}
The day-side radio emission of the model Q02 decreases for all the IMF orientations (except for the Bxneg case, which shows a slight increase) because the faster decay of the quadrupolar component leads to a weaker reconnection region, with an internal magnetosphere topology similar to that of the B6000 model. If the quadrupolar component is large enough, as in model Q04, the internal magnetospheric topology changes with respect to the B6000 model, leading to an enhancement of the reconnection regions and of the radio emission near the exoplanet surface.
\subsection{Effect of the exoplanet magnetic axis tilt}
In this section we analyze the effect of the magnetic axis tilt on the exoplanet radio emission, namely for the models tilt30 (tilt $=30^{o}$), tilt60 (tilt $=60^{o}$), and tilt90 (tilt $=90^{o}$). The analysis of the magnetic axis tilt is performed in addition to the study of the IMF orientation because different angles between the magnetic axis and the stellar wind velocity vector lead to different exoplanet magnetosphere configurations, due to the effect of the stellar wind dynamic pressure.
Figure 9 shows a view of the magnetic power from the night side of the exoplanet for different IMF orientations and exoplanet magnetic axis tilts. The hot spot distributions for the Bx (panels A and G) and Bxneg (panels B and H) IMF orientations are displaced southward and northward, respectively, as the tilt increases from $0^{o}$ to $60^{o}$, because the reconnection regions are displaced closer to (or farther away from) the star and closer to the exoplanet equatorial plane. In addition, models with a large tilt and a Bx (Bxneg) IMF orientation show a hot spot distribution similar to that of models with a small tilt and a Bz (Bzneg) IMF orientation (compare panels N and O of fig. 3 with panels K, M, R, and S of fig. 9, or panels R and S of fig. 3 with panels G and H of fig. 9). This comes about because the magnetosphere topology is almost analogous if the SW dynamic pressure is not strong enough to drive major deformations of the magnetosheath structure. Compared to the B6000 model, the hot spots are more spread out and located farther from the exoplanet by the effect of the SW dynamic pressure, because as the tilt increases the IMF is more aligned with the SW flow. The model tilt90 has a reconnection region in the exoplanet equatorial plane, where the IMF and the magnetic field lines are (anti-)parallel if the IMF is oriented in the Bx (Bxneg) direction. Thus, the SW precipitates directly towards the surface at the equator and there is an enhancement of the magnetic power near the exoplanet. The hot spot distribution is located in the region with closed magnetic field lines, forming a ring around the exoplanet in the XY plane. For the By--Byneg IMF orientations, the hot spots at the north of the magnetosphere are located farther away from the exoplanet and more aligned with the magnetic axis as the magnetic axis tilt increases, so the east--west asymmetry of the magnetosphere decreases (panels C, D, I, J, P, and Q).
\begin{figure*}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics[width=\columnwidth]{9.jpg}}
\caption{View of the magnetic power from the night side of the exoplanet for different IMF orientations and exoplanet magnetic axis tilts. The color bars of the model tilt90 for the Bx--Bxneg IMF orientations are different from those in the rest of the panels. The plotted surface is defined between the bow shock and the magnetopause where the magnetic power reaches its maxima.}
\label{9}
\end{figure*}
Figure 10 shows the magnetic power ($P_{B}(NS)$) of the models tilt30, tilt60, and tilt90 on the planet's night side for the Bx IMF orientation. The magnetotail topology changes as the tilt increases, showing a more slender and stretched shape, so the radio emission also changes for each configuration. The ratio between the magnetotail width and X point location is $0.232$ for the Bx IMF tilt30 case, $0.212$ for the Bx IMF tilt60 case, and $0.135$ for the Bx IMF tilt90 case. Model tilt90 is an extreme case with a reconnection ring in the YZ plane due to the bending of the closed magnetic field lines by the SW at the north and south geographic poles towards the star--exoplanet direction.
\begin{figure*}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics[width=\columnwidth]{10.jpg}}
\caption{Magnetic power on the exoplanet's night side ($P_{B}(NS)$) of models tilt30, tilt60, and tilt90 for the Bx IMF orientation. Panels (G) and (H) show models tilt30 and tilt90 for a Bx IMF orientation. The reconnection region (isosurface with magnetic field intensity between 0 and 20 nT) is indicated in dark blue and dark green; the magnetotail reconnection region is indicated by the yellow rectangle. Magnetic field lines of the exoplanet and IMF are indicated in red.}
\label{10}
\end{figure*}
Table 6 shows the radio emission for different magnetic axis tilts on the planet's day side. Increasing the magnetic axis tilt from $0^{o}$ to $60^{o}$ leads to a stronger radio emission for the Bx--Bxneg IMF orientations because the reconnection regions and hot spots are wider. On the other hand, if the tilt is $90^{o}$ the radio emission for the Bx (Bxneg) IMF orientation decreases (increases) due to the increase (decrease) in the exoplanet magnetic field near the magnetic poles by the effect of the magnetic reconnections (see fig. 9). If the IMF is oriented in the By--Byneg direction, the radio emission increases from $0^{o}$ to $30^{o}$ because the hot spots are wider, although from $30^{o}$ to $90^{o}$ both the radio emission and the hot spot size saturate. For the Bz--Bzneg IMF orientations, the radio emission and hot spot size decrease as the tilt increases, although both increase again in the model tilt90. On the exoplanet's night side, the radio emission increases with the tilt because the magnetotail stretching is greater.
\begin{table}[h]
\centering
\begin{tabular}{c}
$[P(DS)]$ ($10^{5}$ W)
\end{tabular}
\begin{tabular}{c | c c c c c c}
Model & Bx & Bxneg & By & Byneg & Bz & Bzneg \\ \hline
tilt30 & $55.1$ & $19.6$ & $87.3$ & $80.1$ & $7.36$ & $49.9$ \\
tilt60 & $74.6$ & $32.4$ & $81.5$ & $82.0$ & $2.94$ & $4.06$ \\
tilt90 & $18.7$ & $57.6$ & $10.8$ & $7.87$ & $19.1$ & $60.1$ \\
\end{tabular}
\begin{tabular}{c}
Model / B6000 $[P(DS)]$ ratio
\end{tabular}
\begin{tabular}{c | c c c c c c}
Model & Bx & Bxneg & By & Byneg & Bz & Bzneg \\ \hline
tilt30 & $12.7$ & $6.66$ & $2.49$ & $2.46$ & $0.5$ & $0.69$ \\
tilt60 & $17.2$ & $7.70$ & $2.33$ & $2.52$ & $0.20$ & $0.06$ \\
tilt90 & $4.34$ & $13.7$ & $0.31$ & $0.24$ & $1.30$ & $0.83$ \\
\end{tabular}
\begin{tabular}{c}
$[P(NS)]$ ($10^{5}$ W)
\end{tabular}
\begin{tabular}{c | c c c c c c}
Model & Bx & Bxneg & By & Byneg & Bz & Bzneg \\ \hline
tilt30 & $70.7$ & $72.8$ & $95$ & $108$ & $48.9$ & $108$ \\
tilt60 & $98.3$ & $89.8$ & $114$ & $119$ & $62.1$ & $192$ \\
\end{tabular}
\begin{tabular}{c}
Model / B6000 $[P(NS)]$ ratio
\end{tabular}
\begin{tabular}{c | c c c c c c}
Model & Bx & Bxneg & By & Byneg & Bz & Bzneg \\ \hline
tilt30 & $6.93$ & $7.14$ & $6.01$ & $6.59$ & $6.31$ & $5.29$ \\
tilt60 & $9.64$ & $8.80$ & $7.21$ & $7.26$ & $8.01$ & $9.41$ \\
\end{tabular}
\caption{Expected radio emission on the exoplanet's day and night sides for different IMF orientations ($a = 0$, $b=2\cdot10^{-3}$) and magnetic axis tilts. Results for an exoplanet with a radius of $R_{ex}=2440$ km.}
\label{table6}
\end{table}
\section{Discussion and conclusions}
\label{Conclusions}
The aim and main contribution of the present communication is to show the radio emission as a potential tool for identifying the exoplanet's magnetic field properties. The information provided will be useful to guide future radio emission measurements to infer the exoplanet's magnetosphere properties such as the magnetic field intensity, tilt angle, and topology for different IMF orientations.
The analysis shows that the energy dissipation, hot spot distribution, and total radio emission are correlated with the exoplanet magnetic field topology and IMF orientation. Different magnetospheric configurations lead to different locations of the reconnection regions and hot spot distributions on the exoplanet's day side, associated with the maximum of the magnetic power and the minimum of the kinetic power, as well as local enhancements of the magnetospheric field. Therefore, the characteristics of the exoplanet's magnetic field could likely be inferred by future radio telescopes because the radio emission measurements contain information about the exoplanet's magnetic field intensity, dipolar-to-quadrupolar components ratio, and magnetic axis tilt. The present and planned low-frequency radio telescopes with high sensitivity will reach $0.1$'' to $1$'' angular resolution, enough to separate an exoplanet from a star if the exoplanet orbits at several AU and the system is not farther than a few tens of parsecs, but they are unlikely to resolve structures within the exoplanetary magnetosphere. On the other hand, from the modeling of the dynamic spectrum in intensity and circular polarization it is possible to deduce several physical parameters of the system, including the planet's magnetic field amplitude, tilt, offset, planetary rotation period, and the inclination of the orbital plane \cite{Hess}.
An increase in the exoplanet magnetic field intensity leads to an enhancement of the radio emission on the exoplanet's day and night sides, and to larger hot spots located farther away from the exoplanet surface (Fig. 11A). A linear regression between the exoplanet magnetic field and the radio emission on the day and night sides also shows reasonable agreement. In addition, a large quadrupolar component of the exoplanet magnetic field leads to a northward (or southward) displacement of the magnetospheric field and the hot spot distribution. If the quadrupolar component is large enough ($g_{20}/g_{10} > 2/3$), the internal magnetospheric field is deformed compared to a pure dipolar case, leading to a larger radio emission on the exoplanet's day side (Fig. 11C), although the radio emission on the night side decreases due to the faster decay of the quadrupolar component compared to the dipolar component.
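Such a linear regression is done in log--log space when a power-law dependence is expected. The following minimal Python sketch illustrates the procedure; the exponent, prefactor, and data points are synthetic placeholders generated from an assumed power law, not the simulation values:

```python
import math

# Synthetic power law P = C * B^alpha (alpha and C are illustrative
# placeholders, NOT fitted values from the simulations).
alpha_true, c_true = 1.7, 0.2
b_nt = [250.0, 1000.0, 6000.0]                  # field strengths of the runs
p_w = [c_true * b ** alpha_true for b in b_nt]  # synthetic day-side powers

# Least-squares fit in log-log space recovers the power-law exponent.
x = [math.log10(b) for b in b_nt]
y = [math.log10(p) for p in p_w]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
print(f"fitted exponent: {slope:.2f}")          # recovers alpha_true
```

With exact power-law data the fitted slope reproduces the assumed exponent; with simulation data the scatter of the points around the fit quantifies how well a single power law describes the trend.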
The models with a small tilt and a Bx (Bxneg) IMF orientation have a magnetospheric topology similar to configurations with a large tilt and a Bz (Bzneg) IMF orientation (and vice versa), although not identical because the angle between the magnetic axis and the stellar wind velocity vector is different. The magnetosphere topology can differ if the stellar wind dynamic pressure is large enough to drive strong deformations of the magnetosheath, for example if the host star has strong stellar wind fluxes or the exoplanet is in an orbit close to the host star. The consequence is an enhancement of the radio emission on the day side as the tilt increases for the Bx--Bxneg IMF orientations and a decrease for the Bz--Bzneg IMF orientations (Fig. 11D). On the other hand, if the exoplanet magnetic axis tilt is large, the hot spot distribution is more spread out due to the effect of the SW dynamic pressure, because the SW is more aligned with the magnetic poles, leading to an increase in the radio emission.
The radio emission on the night side is dominant for the Bx--Bxneg IMF orientations if the exoplanet magnetic field intensity is higher than $1000$ nT, although the radio emission on the day side is larger for the other IMF orientations (Fig. 11B). If we extrapolate the trends obtained for the radio emission and exoplanet magnetic field intensity on the day and night sides (using the stellar wind parameters listed in Table 3), the expected radio emission range of a hot Jupiter with $B_{ex}=5 \cdot 10^{5}$ nT and a radius of $R_{ex}=7.2 \cdot 10^4$ km (similar to Jupiter) is $[P(DS)] = 0.3 - 5.5 \cdot 10^{11}$ W and $[P(NS)] = 0.7 - 1.4 \cdot 10^{11}$ W, and that of a super Earth with $B_{ex}=6 \cdot 10^{4}$ nT and $R_{ex}=1.26 \cdot 10^4$ km is $[P(DS)] = 0.6 - 9.6 \cdot 10^{8}$ W and $[P(NS)] = 1.3 - 2.5 \cdot 10^{8}$ W, values consistent with the observational scaling \cite{Desch2,Zarka7,Zarka3,Nichols}. Previous numerical studies predicted the radio emission of a hot Jupiter located $3$ to $10$ radii away from a star similar to the Sun \cite{Nichols}, suggesting a value of $[5, 1300] \cdot 10^{12}$ W for exoplanet magnetic fields between $0.1$ and $10$ times Jupiter's magnetic field, several orders of magnitude above the present model expectations. The reason for this difference is the dynamic pressure, almost $3\cdot10^{3}$ times lower in the present model. As a proxy of the magnetic power enhancement with the increase in the dynamic pressure we consider the results of \cite{refId0}: a dynamic pressure $3000$ times higher leads to a radio emission enhancement of $2700$ times with respect to the present model. This means that the expected maximum magnetic power of the model is $[P_{B}(DS)] \approx 1.5 \cdot 10^{18}$ W and $[P_{B}(NS)] \approx 0.4 \cdot 10^{18}$ W, similar to the analysis performed by \cite{Saur} and \cite{Strugarek}, who expected a magnetic power around $10^{19}$ W.
For the same reason, the observational scaling shows a radio emission value almost one order of magnitude higher than the Jupiter radio emission measurements, because the dynamic pressure at the Jupiter orbit is lower. Nevertheless, the real radio emission of a hot Jupiter must be larger than the present results because we do not add the effect of other radio emission sources, such as fast rotation or internal plasma releases, that do not depend on the distance to the parent star; thus, the extrapolation result may be considered a lower bound. If the hot Jupiter is located at $20$ parsecs, the radio emission flux at Earth can be calculated as $\Phi = P / (\Omega d^2 \omega)$, with $\Omega \approx 1.6$ sr the solid angle, $d$ the distance to the exoplanet, and $\omega = 15$ MHz the detection bandwidth of the receiver, leading to a value of $0.1 - 1$ mJy, at the limit of the LOFAR observation range. For the case of a super Earth, $\Phi \leq 10^{-4}$ mJy. The radio emission flux is lower than $10^{-4}$ mJy in the simulations performed in our study (e.g., model B6000 with a Bzneg IMF orientation, $\Phi = 3 \cdot 10^{-5}$ mJy), although stronger wind conditions lead to a higher radio emission flux. In addition, the model is only representative of an exoplanet with a dipolar field without magnetic axis tilt in an orbit similar to Mercury's around a host star like the Sun. If the exoplanet is located closer to the host star, the SW dynamic pressure and IMF intensity are higher, so the radio emission is also enhanced. Likewise, if the host star magnetic activity is higher, the IMF is also larger (stars younger than the Sun with faster rotation are more active; see, e.g., \cite{Emeriau}), as well as the radio emission. The SW and IMF characteristics also change if the host star is not of the same type as the Sun, leading to a different scaling \cite{Reville}.
In other words, dedicated analyses are required to foresee the threshold of the exoplanet magnetic field topology for each star spectral type, age, magnetic activity, and exoplanet orbital distance \cite{Jardine,Vidotto5,See,Vidotto6,Weber}.
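The flux-density estimate $\Phi = P/(\Omega d^{2}\omega)$ used above can be sketched numerically as follows; the emitted power in the example is an assumed illustrative value, not one of the extrapolated powers of this work:

```python
PC_M = 3.0857e16                     # one parsec in meters

def flux_mjy(p_w, d_pc, omega_sr=1.6, bandwidth_hz=15e6):
    """Flux density at Earth, Phi = P / (Omega * d^2 * delta_nu), in mJy."""
    phi = p_w / (omega_sr * (d_pc * PC_M) ** 2 * bandwidth_hz)  # W m^-2 Hz^-1
    return phi / 1e-29               # 1 mJy = 1e-29 W m^-2 Hz^-1

# Illustrative case: an assumed emitted power of 1e13 W at 20 pc.
print(f"{flux_mjy(1.0e13, 20.0):.2f} mJy")   # ~0.11 mJy
```

Because $\Phi$ falls off as $d^{-2}$, moving the same source from 20 pc to 60 pc reduces the flux by an order of magnitude, which is why only nearby systems are realistic targets.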
The radio emission in models with different IMF orientations shows a variability factor of up to 20, describing how the radio emission of an exoplanet should change during the magnetic cycle of the host star or through IMF variations along the exoplanet orbit \cite{Vidotto3}. Such variability can partially explain the uncertainty in the determination of the average auroral radio powers using the radio Bode law, around one order of magnitude between the lower and upper bounds \cite{Zarka3}.
The radio emission can escape the exoplanet magnetosphere if the maximum CMI emission frequency is larger than the plasma frequency in the surrounding stellar wind. In the case of Mercury, the maximum CMI emission probably does not exceed a few tens of kHz, whereas the plasma frequency in the surrounding solar wind is between $70 - 100$ kHz; thus the CMI radiation---if it exists (which should be confirmed by BepiColombo and MMO measurements)---is trapped in the magnetospheric cavity. For close-in exoplanets, if the planet's exo-ionosphere is expanding, the CMI emission can be trapped inside the magnetosphere \cite{Weber}, although there are several possibilities for overcoming the CMI emission trapping: for example, if the exoplanet shows small-scale auroral plasma cavities as at the Earth, if there are second-harmonic emissions (especially in the ordinary mode), or if the exoplanet magnetic field is strong. Another option is to analyze the radio emission from exoplanets located farther away from the host star, where the plasma frequency is lower.
Using the results of the present study it is possible to identify, in a first approximation, the minimum exoplanet radio emission associated with a magnetic field strong enough to shield the exoplanet surface (at low latitudes) from the stellar wind. The exoplanet magnetopause standoff distance can be estimated by this simplified expression:
$$ \frac{R_{MP}}{R_{ex}} = \left(\frac{B_{ex}^{2}}{m_{p}n\mu_{0}\mathrm{v}^{2}}\right)^{1/6} $$
Here $R_{MP}$ is the magnetopause standoff distance and $m_{p}$ the proton mass (we consider the same SW dynamic pressure as in the simulations; see Table 3). If the ratio is $ R_{MP} / R_{ex} = 1$, the SW precipitates directly toward the exoplanet surface, so the magnetic field is not strong enough to shield the planet, namely $B_{ex} \approx 120$ nT. If the exoplanet has the same radius as the Earth and the magnetic field is a dipole without magnetic axis tilt, the radio emission range is $[P(DS)] = 0.6 - 10 \cdot 10^{5}$ W and $[P(NS)] = 1.3 - 2.7 \cdot 10^{5}$ W, so we can identify a threshold for the exoplanet habitability from the point of view of the radio emission output: if the radio emission measurement is lower than $[P] = 10^{6}$ W, the exoplanet is less likely to host life on the surface \cite{Vidotto7,Vidotto8}. There are other restrictions on the exoplanet habitability, for example the host star age. If the star is younger than the Sun, the magnetic activity is higher, due to a faster rotation, so extreme events such as coronal mass ejections (CMEs) are more frequent \cite{Aarnio}, which is why the exoplanet habitability requires a stronger magnetic field with a larger magnetopause standoff distance \cite{Khodachenko,Lammer}. If $ R_{MP} / R_{ex} = 5$, the exoplanet surface will also be shielded from most of the CMEs, so the exoplanet magnetic field must be at least $B_{ex} \approx 1.5 \cdot 10^{4}$ nT, leading to a radio emission of $[P(DS)] = 0.7 - 13 \cdot 10^{7}$ W and $[P(NS)] = 1.7 - 3.3 \cdot 10^{7}$ W. Thus, we can define another radio emission threshold for exoplanet habitability of $[P] = 1.3 \cdot 10^{8}$ W if the host star is younger and more
active than the Sun (but the same star type). Compared to the case of the Earth (older host star with lower SW dynamic pressure and IMF intensity at the exoplanet orbit), previous studies indicate a radio emission of at least $10^{7}$ W for a magnetosphere that can host life on the surface \cite{Zarka5}, a value compatible with the present results.
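The standoff-distance expression above can be evaluated with a minimal sketch; the stellar wind density and speed below stand in for the Table 3 values (not reproduced here) and are assumptions chosen to be Mercury-like:

```python
import math

MU0 = 4.0e-7 * math.pi               # vacuum permeability (T m A^-1)
M_P = 1.6726e-27                     # proton mass (kg)

def standoff_ratio(b_ex_t, n_m3, v_ms):
    """R_MP / R_ex = (B_ex^2 / (m_p * n * mu0 * v^2))^(1/6)."""
    return (b_ex_t ** 2 / (M_P * n_m3 * MU0 * v_ms ** 2)) ** (1.0 / 6.0)

# Assumed Mercury-like wind: n = 40 cm^-3, v = 400 km/s (illustrative only).
n, v = 40.0e6, 4.0e5
print(standoff_ratio(120e-9, n, v))  # ~1: field barely shields the surface
print(standoff_ratio(1.5e-5, n, v))  # ~5: shields against most CMEs

# Since R_MP/R_ex scales as B_ex^(1/3), the required field grows as the
# cube of the target ratio: B(ratio = 5) = 125 * B(ratio = 1).
```

The $B_{ex}^{1/3}$ scaling is why the two thresholds quoted in the text differ by a factor of $5^{3} = 125$, from $\approx 120$ nT to $\approx 1.5 \cdot 10^{4}$ nT.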
It should be noted that the expression used to calculate the standoff distance only provides an approximate value, because a dipolar magnetic field with no tilt is assumed and the effect of the IMF orientation is not considered; dedicated numerical experiments are therefore required to obtain more accurate thresholds. In addition, these results are only valid if the SW dynamic pressure and IMF intensity at the exoplanet orbit are similar to the case of Mercury.
The combination of efficiency ratios ($a = 0$, $b=2\cdot10^{-3}$) shows the highest radio emission values. A previous study of the radio emission from the Hermean magnetosphere identified these efficiency ratios as the most accurate option for reproducing the expected radio emission from Mercury \cite{refId0}, but these results should be confirmed by in situ measurements and radio emission data from the gaseous planets of the solar system. The present study's results can also be used to compare the expected radio emission of Bode's law with radio telescope measurements, with the aim of inferring the efficiency ratios that most closely fit the observations for different SW dynamic pressures, IMF orientations, and planet magnetic field topologies.
The trends obtained in the analysis are useful for the exoplanet magnetospheres detectable by current radio telescopes, particularly those of hot Jupiters. Among other conclusions, we show that the present radio telescopes have enough sensitivity to measure the hot Jupiter radio emission, and in the best cases can possibly even constrain their magnetic field topology. In addition, the model can be calibrated by analyzing the radio emission from the gaseous planets of the solar system---the results of the analysis and the scaling are similar to these measurements and to models by other authors---and in the near future by the measurement of the radio emission from Mercury by the BepiColombo satellite.
The net magnetic energy dissipation predicted by Bode's law and by the simulations shows good agreement, so the magnetic energy dissipation on the day side and in the magnetotail reconnection regions is well reproduced by the model in a first approximation. On the other hand, the net kinetic energy dissipation predicted by the simulations is smaller than that calculated by Bode's law, because the model reproduces more accurately the complex flows on the day and night sides of the exoplanet.
\begin{figure}[h]
\centering
\resizebox{\hsize}{!}{\includegraphics[width=\columnwidth]{11.jpg}}
\caption{(A) Radio emission on the day side vs exoplanet magnetic field intensity. (B) Radio emission on the night side vs the exoplanet magnetic field intensity. (C) Radio emission on the day side vs the ratio of the exoplanet quadrupolar to dipolar magnetic field components. (D) Radio emission on the day side vs the magnetic axis tilt. Results for an exoplanet with a radius of $R_{ex}=2440$ km.}
\label{11}
\end{figure}
Future measurements of the radio emission will allow testing of the different configurations of the exoplanet magnetospheres in the light of the model selection problem in Bayesian statistics \cite{William}. In other words, it will be possible to select the model that best reproduces the observations based on the computation of the Bayesian evidence \cite{Trotta,Corsaro1,Corsaro2}, a key parameter that provides a statistical weight, favoring models that provide a better fit to the data but penalizing those that have a more complex analytical representation, i.e., a larger number of parameters that configure the model itself. In this way it will be possible to unambiguously constrain the most favored theoretical interpretation for a given observational set.
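The evidence-based comparison can be sketched as follows; the log-evidence values are purely illustrative placeholders, not evidences computed for any configuration in this work:

```python
import math

# Hypothetical log-evidences ln(Z) for two competing magnetosphere models
# (placeholder numbers for illustration only).
ln_z = {"dipole_tilt0": -104.2, "dipole_tilt30": -101.7}

# Bayes factor of tilt30 over tilt0, and its posterior probability
# assuming equal prior odds.
ln_bayes = ln_z["dipole_tilt30"] - ln_z["dipole_tilt0"]
p_tilt30 = 1.0 / (1.0 + math.exp(-ln_bayes))
print(f"ln(Bayes factor) = {ln_bayes:.1f}, P(tilt30 | data) = {p_tilt30:.2f}")
```

Because the evidence integrates the likelihood over the whole prior volume, a model with more free parameters is automatically penalized unless the extra parameters genuinely improve the fit.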
In summary, radio emission data bring constraints on the exoplanet magnetosphere topology, essential information to foresee the potential habitability of exoplanets, associated with the presence of permanent magnetic fields strong enough to shield the planet surface and atmosphere from the stellar wind erosion. This information can be deduced by analyzing large time series of radio emission data when available \cite{Hess,Zarka6}, removing the instantaneous effect of the IMF orientation and intensity and of the stellar wind dynamic pressure and temperature. On the other hand, after identifying the characteristics of the exoplanet magnetic field, the radio emission data are useful for determining the properties of the stellar wind and the magnetic field of the star. This process can be carried out through the adoption of a Bayesian model comparison. In this view, the competing models to test will incorporate the different configurations of the exoplanet magnetospheres, as shown in this work, and will be fit to the radio emission data in order to obtain the set of free parameters that best matches the observed radio emission. In a subsequent step, the Bayesian evidences of the models are compared in order to select the most likely configuration that reproduces the same observational set \cite{Corsaro2}. Our aim is to develop this thorough statistical approach by computing a grid of predictive models for future releases of radio emission measurements, and to test the methodology using synthetic datasets. The final target is to create a catalog that illustrates the main features of the exoplanets' magnetic fields and identifies those that can harbor life.
\ack
This material is based on work supported by both the U.S. Department of Energy and the Office of Science, under Contract DE-AC05-00OR22725 with UT-Battelle, LLC. The research leading to these results has received funding from the European Commission's Seventh Framework Programme (FP7/2007-2013) under the grant agreement SHOCK (project number 284515), the grant agreement SPACEINN (project number 312844), and ERC STARS2 (207430). We extend our thanks to CNES for Solar Orbiter and PLATO science support and to INSU/PNST for our grant. We acknowledge GENCI allocation 1623 for access to the supercomputer where most of the simulations were run, and DIM-ACAV for supporting the ANAIS project and our graphics/post-analysis and storage servers, as well as DIO of the Paris Observatory. The authors would also like to acknowledge R.A. Garcia and E. Corsaro for the fruitful discussion.
This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05- 00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non exclusive, paid-up, irrevocable, world wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
\begin{appendix}
\section{Calculation of the numerical magnetic diffusivity}
We performed a set of simulations in a simplified test case to analyze the numerical magnetic diffusivity in a downscaled model of characteristic length $L=1$ m, with a grid of $196$ radial points, $48$ points in the polar angle, and $96$ in the azimuthal angle, for $R_{out}=12$. We study the evolution of a 3D Gaussian profile in a motionless fluid:
$$ \frac{\partial \vec{B}}{\partial t} + \cancel{\vec{\nabla} \wedge (-\vec{v} \wedge \vec{B})} = -\eta \frac{1}{\mu_{0}} \vec{\nabla} \wedge (\vec{\nabla} \wedge \vec{B})$$
$$ \Rightarrow \frac{\partial \vec{B}}{\partial t} = \eta \frac{1}{\mu_{0}} \frac{\partial}{\partial \vec{r}} \left( \frac{\partial \vec{B}}{\partial \vec{r}} \right) $$
If we follow the decay of the Gaussian profile in time, we can measure the decrease in the magnetic field modulus caused by the numerical magnetic diffusivity as $\Delta B \approx \Theta \Delta t + \mathrm{const.}$, where
$$\Theta = -\eta \frac{1}{\mu_{0}} \frac{\partial}{\partial \vec{r}} \left( \frac{\partial \vec{B}}{\partial \vec{r}} \right) $$
Figure 12 shows the decay of the Gaussian profile with time for each model dimension.
\begin{figure}[h]
\centering
\includegraphics[width=0.3\textwidth]{12.jpg}
\caption{Gaussian decay with time in the radial (a) and angular (b and c) directions at six different times.}
\label{12}
\end{figure}
The value of the decay rate for each model dimension is obtained from the slope of the linear regression: $\Theta_{r} = 4.65 \cdot 10^{-5}$, $\Theta_{\theta} = 4.86 \cdot 10^{-5}$, and $\Theta_{\zeta} = 4.29 \cdot 10^{-5}$. Next, the second derivative of the magnetic field is calculated at different time steps, and from it the numerical magnetic diffusivity. The numerical magnetic diffusivity is defined as the average of the values obtained: $\eta_{i} = (8.02 , 7.48 , 6.60) \cdot 10^{-3}$ m$^2$/s with $i=r$, $\theta$, $\zeta$. If we rescale these values to the planetary scale (now assuming a characteristic length scale $L$ of $10^{6}$ m), we obtain for our setup $|\eta| \approx 1.81 \cdot 10^{8}$ m$^2$/s and $R_{m} = 1350$.
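The averaging and the quoted magnetic Reynolds number can be checked with a short sketch. Since the flow speed is not restated in this appendix, it is derived here from the quoted $|\eta|$ and $R_{m}$ through the standard definition $R_{m} = vL/\eta$; the result is therefore an implied value, not a stated simulation parameter:

```python
# Diffusivities measured in the L = 1 m test case (m^2/s), from the text.
eta_components = (8.02e-3, 7.48e-3, 6.60e-3)
eta_avg = sum(eta_components) / len(eta_components)
print(f"average eta (downscale) = {eta_avg:.2e} m^2/s")    # ~7.37e-3

# Quoted planetary-scale values.
eta_planet = 1.81e8   # m^2/s after rescaling to L = 1e6 m
L = 1.0e6             # characteristic length scale (m)
r_m = 1350            # quoted magnetic Reynolds number

# R_m = v L / eta  =>  flow speed implied by the quoted numbers.
v_implied = r_m * eta_planet / L
print(f"implied flow speed = {v_implied / 1e3:.0f} km/s")  # ~244 km/s
```

The implied speed of a few hundred km/s is of the order of a solar wind speed, which is consistent with the stellar wind setup of the simulations.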
\section{Summary of simulation parameters}
\begin{table*}[h]
\centering
\begin{tabular}{c | c c c c c c c}
Model & $B_{SW}$ (nT) & tilt ($^{o}$) & $|B_{dip}|$ (nT) & $|B_{quad}|$ (nT) & $R_{MP}/R_{ex}$ & $R_{magl}/R_{ex}$ & $R_{magw}/R_{ex}$ \\ \hline
B250 Bx & $(20, 0, 0)$ & $0$ & $250$ & $0$ & $0.33$ & $10.8$ & $2.3$ \\
B250 Bxneg & $(-20, 0, 0)$ & $0$ & $250$ & $0$ & $0.32$ & $10.8$ & $2.2$ \\
B250 By & $(0, 20, 0)$ & $0$ & $250$ & $0$ & $0.32$ & $9.2$ & $4.1$ \\
B250 Byneg & $(0, -20, 0)$ & $0$ & $250$ & $0$ & $0.32$ & $9.2$ & $4.3$ \\
B250 Bz & $(0, 0, 20)$ & $0$ & $250$ & $0$ & $0.50$ & $14.8$ & $5.1$ \\
B250 Bzneg & $(0, 0, -20)$ & $0$ & $250$ & $0$ & $0.00$ & $11.3$ & $3.2$ \\
B1000 Bx & $(20, 0, 0)$ & $0$ & $1000$ & $0$ & $1.11$ & $22.0$ & $2.2$ \\
B1000 Bxneg & $(-20, 0, 0)$ & $0$ & $1000$ & $0$ & $1.11$ & $22.8$ & $2.9$ \\
B1000 By & $(0, 20, 0)$ & $0$ & $1000$ & $0$ & $1.06$ & $17.5$ & $6.6$ \\
B1000 Byneg & $(0, -20, 0)$ & $0$ & $1000$ & $0$ & $1.06$ & $17.7$ & $6.5$ \\
B1000 Bz & $(0, 0, 20)$ & $0$ & $1000$ & $0$ & $1.29$ & $28.9$ & $9.9$ \\
B1000 Bzneg & $(0, 0, -20)$ & $0$ & $1000$ & $0$ & $0.87$ & $22.0$ & $1.9$ \\
B6000 Bx & $(20, 0, 0)$ & $0$ & $6000$ & $0$ & $3.65$ & $73.2$ & $22.9$ \\
B6000 Bxneg & $(-20, 0, 0)$ & $0$ & $6000$ & $0$ & $3.50$ & $73.4$ & $23.2$ \\
B6000 By & $(0, 20, 0)$ & $0$ & $6000$ & $0$ & $3.50$ & $56.1$ & $22.4$ \\
B6000 Byneg & $(0, -20, 0)$ & $0$ & $6000$ & $0$ & $3.55$ & $49.4$ & $19.8$ \\
B6000 Bz & $(0, 0, 20)$ & $0$ & $6000$ & $0$ & $4.11$ & $74.3$ & $26.4$ \\
B6000 Bzneg & $(0, 0, -20)$ & $0$ & $6000$ & $0$ & $3.31$ & $56.8$ & $21.6$ \\
Q02 Bx & $(20, 0, 0)$ & $0$ & $4800$ & $1200$ & $2.75$ & $40.6$ & $19.2$ \\
Q02 Bxneg & $(-20, 0, 0)$ & $0$ & $4800$ & $1200$ & $2.65$ & $41.5$ & $22.2$ \\
Q02 By & $(0, 20, 0)$ & $0$ & $4800$ & $1200$ & $2.65$ & $37.2$ & $17.5$ \\
Q02 Byneg & $(0, -20, 0)$ & $0$ & $4800$ & $1200$ & $2.64$ & $37.5$ & $16.6$ \\
Q02 Bz & $(0, 0, 20)$ & $0$ & $4800$ & $1200$ & $2.83$ & $34.2$ & $21.4$ \\
Q02 Bzneg & $(0, 0, -20)$ & $0$ & $4800$ & $1200$ & $2.58$ & $33.5$ & $17.8$ \\
Q04 Bx & $(20, 0, 0)$ & $0$ & $3600$ & $2400$ & $1.59$ & $49.5$ & $5.7$ \\
Q04 Bxneg & $(-20, 0, 0)$ & $0$ & $3600$ & $2400$ & $1.64$ & $49.8$ & $5.6$ \\
Q04 By & $(0, 20, 0)$ & $0$ & $3600$ & $2400$ & $1.62$ & $41.9$ & $7.1$ \\
Q04 Byneg & $(0, -20, 0)$ & $0$ & $3600$ & $2400$ & $1.66$ & $42.3$ & $6.9$ \\
Q04 Bz & $(0, 0, 20)$ & $0$ & $3600$ & $2400$ & $1.55$ & $49.8$ & $9.3$ \\
Q04 Bzneg & $(0, 0, -20)$ & $0$ & $3600$ & $2400$ & $1.68$ & $46.6$ & $4.6$ \\
tilt30 Bx & $(20, 0, 0)$ & $30$ & $6000$ & $0$ & $3.82$ & $144.5$ & $33.6$ \\
tilt30 Bxneg & $(-20, 0, 0)$ & $30$ & $6000$ & $0$ & $3.95$ & $101.5$ & $33.5$ \\
tilt30 By & $(0, 20, 0)$ & $30$ & $6000$ & $0$ & $4.18$ & $44.7$ & $13.2$ \\
tilt30 Byneg & $(0, -20, 0)$ & $30$ & $6000$ & $0$ & $4.41$ & $43.3$ & $13.1$ \\
tilt30 Bz & $(0, 0, 20)$ & $30$ & $6000$ & $0$ & $3.77$ & $39.5$ & $15.8$ \\
tilt30 Bzneg & $(0, 0, -20)$ & $30$ & $6000$ & $0$ & $3.94$ & $59.8$ & $19.2$ \\
tilt60 Bx & $(20, 0, 0)$ & $60$ & $6000$ & $0$ & $4.48$ & $81.6$ & $17.3$ \\
tilt60 Bxneg & $(-20, 0, 0)$ & $60$ & $6000$ & $0$ & $4.34$ & $100.1$ & $16.4$ \\
tilt60 By & $(0, 20, 0)$ & $60$ & $6000$ & $0$ & $4.49$ & $42.3$ & $6.6$ \\
tilt60 Byneg & $(0, -20, 0)$ & $60$ & $6000$ & $0$ & $4.43$ & $38.9$ & $5.6$ \\
tilt60 Bz & $(0, 0, 20)$ & $60$ & $6000$ & $0$ & $4.74$ & $47.3$ & $14.1$ \\
tilt60 Bzneg & $(0, 0, -20)$ & $60$ & $6000$ & $0$ & $4.53$ & $67.0$ & $12.9$ \\
tilt90 Bx & $(20, 0, 0)$ & $90$ & $6000$ & $0$ & $0.00$ & $30.3$ & $4.1$ \\
tilt90 Bxneg & $(-20, 0, 0)$ & $90$ & $6000$ & $0$ & $0.00$ & $54.9$ & $4.0$ \\
tilt90 By & $(0, 20, 0)$ & $90$ & $6000$ & $0$ & $0.00$ & $32.5$ & $2.5$ \\
tilt90 Byneg & $(0, -20, 0)$ & $90$ & $6000$ & $0$ & $0.00$ & $31.2$ & $2.6$ \\
tilt90 Bz & $(0, 0, 20)$ & $90$ & $6000$ & $0$ & $0.00$ & $55.1$ & $3.3$ \\
tilt90 Bzneg & $(0, 0, -20)$ & $90$ & $6000$ & $0$ & $0.00$ & $35.8$ & $4.4$ \\
\end{tabular}
\caption{Summary of simulation parameters.}
\label{table7}
\end{table*}
The parameters $R_{magl}/R_{ex}$ and $R_{magw}/R_{ex}$ in Table B.1 are the magnetotail length and width normalized to the exoplanet radius.
\end{appendix}
\section{Introduction}
\begin{figure*}
\centering
\includegraphics[angle=-90,width=16cm]{fig0.eps}
\caption{Overview of the S235A-B star forming complex
before the observations presented in this paper.
Left: overlay of the single-dish HCN map (contours) from Cesaroni et
al. (1999) with the $K$-band image of the S235A-B region
from Felli et al. (1997). Right: overlay of the
interferometric maps of the 3.3 mm continuum emission (thick contours and
grey scale) with the HCO$^{+}$(1--0)
outflows (thin solid line: $V < -20.9$ km s$^{-1}$,
dashed line: $V> -15.5$ km s$^{-1}$) from Felli et al.\ (2004).
In both, a cross marks the location of the H$_{2}$O maser at -61.2 km
s$^{-1}$ from Tofani et al. (1995). The infrared
source with the largest near-IR excess detected in previous works, M1,
is also indicated.
The offsets between M1, the water maser, the HCN peak, and the mm
continuum peak are all larger than the position uncertainties.
\label{Fig0}}
\end{figure*}
This paper continues the study of the star-forming complex
S235A-B (see Felli et al.\ 2004 and references therein), focussing in
particular on a
deeply embedded Young Stellar Object (YSO) found
between S235A and S235B, close to a water maser.
The presence of the YSO is inferred from the typical signposts of early stellar
evolution, including two molecular outflows, a hot molecular core, a
sub-millimeter
peak, and a water maser. The YSO represents the youngest object in the
star-forming complex.
The site morphology
derived from all previous observations
is summarised in Fig.~\ref{Fig0}.
S235A is a small optical nebulosity coinciding with a compact, but
well-resolved, HII region. It appears as a less-evolved region, with respect
to the more extended and diffuse HII region S235
located further north, and probably is unrelated to the S235A-B complex.
S235A lies at the northern edge of a molecular clump
that
represents
the brightest peak of a more extended molecular cloud (Evans \& Blair 1981;
Nakano \& Yoshida 1986; Cesaroni et al.\ 1999).
Lying $\sim40\arcsec$ south of S235A (0.35 pc at the assumed distance of
1.8 kpc; Nakano \& Yoshida 1986), at the SW edge of the molecular core,
S235B is a smaller diffuse nebulosity detected both in the optical and
near-IR. It exhibits a near-IR excess and intense emission in optical and
IR hydrogen lines (H$\alpha$, Br$\gamma$), but has no radio continuum
counterpart, which makes it a rather peculiar object.
In H$\alpha$, it consists of an unresolved peak superimposed
on a circular nebula that is $\sim10\arcsec$ in diameter (Krassner et al.\ 1982;
Alvarez et al.\ 2004). S235B appears to be a young star with an expanding
ionized envelope surrounded by a diffuse nebulosity (Felli
et al.\ 1997).
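The angular-to-linear conversion used throughout (e.g.\ $40\arcsec$ at the assumed 1.8 kpc giving $\sim0.35$ pc) is a one-line small-angle calculation; a minimal sketch:

```python
import math

ARCSEC_PER_RAD = 3600.0 * 180.0 / math.pi   # ~206265 arcsec per radian

def projected_size_pc(theta_arcsec, distance_kpc):
    """Linear size in pc subtended by an angle at a given distance."""
    return theta_arcsec / ARCSEC_PER_RAD * distance_kpc * 1.0e3

# 40 arcsec at the assumed 1.8 kpc distance of the S235A-B complex
print(round(projected_size_pc(40.0, 1.8), 2))  # 0.35 pc
```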
A large-scale molecular outflow was first found in $^{12}$CO(1--0) by
Nakano \& Yoshida (1986) with a resolution of $\sim14\arcsec$; it was
centred at S235B and aligned in a NE-SW direction, about 35$\arcsec$
(0.3 pc) in
length.
Felli et al.\ (1997)
confirmed the blue lobe of the outflow in $^{13}$CO(2--1) with a
resolution of $\sim11\arcsec$, but failed to detect the red lobe.
Near-IR images revealed a highly obscured stellar cluster
between S235A and S235B, i.e. centred on the water maser,
with several sources with IR excess
(Felli et al.\ 1997), in particular source M1, which exhibits the largest near-IR
excess. This source has been suggested to be the
candidate YSO supplying energy to the $-60$ km s$^{-1}$ water maser,
but with great uncertainty since it lies more than 5$\arcsec$ to the south
(see also Fig.~\ref{Fig0}).
The above picture was clarified
by Felli et al.\ (2004), who presented high-resolution (between
2$\arcsec$ and 4$\arcsec$) mm line (HCO$^{+}$, C$^{34}$S, H$_{2}$CS,
SO$_{2}$, and CH$_{3}$CN) and continuum observations, together with far-IR
observations.
A compact molecular core (hereafter, the mm core) was found
both in the mm continuum (hot dust emission, $T_{\rm dust} =
T_{\rm CH_{3}CN}\sim30$ K) and in the
molecular lines, peaking close to the water maser position
and well-separated from S235A and S235B.
Two molecular outflows were found in HCO$^{+}$(1--0)
centred on the mm core. One of them (hereafter, the NE-SW outflow) is
aligned along the same NE-SW direction as the large-scale outflow detected by
Nakano \& Yoshida (1986). It spans $\sim0.4$ pc,
and has an estimated mass of 9 M$_{\sun}$ and a mechanical luminosity
$>19$ L$_{\sun}$.
The other (hereafter, the NNW-SSE outflow) is more compact and aligned in a
NNW-SSE direction. It spans $\sim0.3$ pc and has
an estimated mass of 4 M$_{\sun}$ and a mechanical luminosity of
$\sim0.3$ L$_{\sun}$.
Felli et al.\ (2004) derived an upper limit of
$\sim10^{3}$ L$_{\sun}$ for the bolometric luminosity of the mm core and
suggested the presence of an embedded intermediate-mass YSO
driving the NE-SW outflow and supplying the energy for the
$-60$ km s$^{-1}$ water maser.
Studying the high velocity red and blue emission of C$^{34}$S(5--4)
towards the mm core, they also found a compact structure with a
velocity gradient perpendicular to
the NE-SW outflow that might represent the signature of a circumstellar disk.
An elongated structure (called a ``jet'') protruding from the mm core and
coinciding with the blue lobe of the NNW-SSE outflow
was also detected in the continuum at 3.3 mm (see Fig.~\ref{Fig0}, right)
and, with a lower signal-to-noise ratio, also at 1.2 mm.
The spectral index of the jet, $\alpha \sim 0.6$ (defined by
$S_{\nu}\propto \nu^{\alpha}$),
is rather uncertain, but different from that of the mm core,
$\alpha \sim 2.5$, suggesting that the emission
might arise in an ionized wind rather than being due to dust.
Radio jets
are often observed at the base of molecular outflows (Rodr\'{\i}guez
1997; Anglada et al.\ 1998; Beltr\'{a}n et al.\ 2001) in both low-mass
and high-mass
star-forming regions (e.g., Rodr\'{\i}guez 1996). They are characterized
by spectral indices in the range $-0.1$ to $\sim1$ and are elongated in
the outflow direction.
Expanding ionized envelopes also have a spectral index
$\alpha$ = 0.6 (Panagia \& Felli 1975).
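The spectral-index discrimination invoked above ($\alpha \sim 0.6$ for an ionized wind or envelope versus $\alpha \sim 2.5$ for dust) rests on flux measurements at two frequencies; a minimal sketch, with illustrative (not measured) flux values:

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """alpha in S_nu proportional to nu**alpha, from two flux measurements."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# Round trip with illustrative numbers: a source rising with alpha = 2.5
# (optically thick dust, like the mm core) between 43 and 91 GHz
s_43 = 3.0                             # mJy, assumed
s_91 = s_43 * (91.0 / 43.0) ** 2.5
print(round(spectral_index(s_43, 43.0, s_91, 91.0), 2))  # 2.5
```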
Water masers are one of the most reliable signposts of early phases
in star formation
(see e.g. Tofani et al.\ 1995) since they provide the best indication
of the position of the required powering source, i.e. the YSO.
They occur both in low-mass
(see, e.g., Furuya et al.\ 2001, 2003) and in high-mass (see Churchwell
2002, and references therein) star-forming regions and are often found
to be associated with outflowing matter.
Sometimes they are also found in close association with radio jets
(e.g.\ G\'{o}mez et al.\ 1995).
The presence of a water maser in this region had been known since the
observations of Henkel et al.\ (1986) and Comoretto et al.\ (1990),
but with insufficient spatial resolution to properly locate it in the
region. Only with Very Large Array (VLA) cm line
observations (Tofani et al.\ 1995)
and interferometric mm observations of the continuum sources
(Felli et al.\ 2004), was the location of the water maser
in-between S235A and S235B, almost coincident with the mm core,
firmly established. This proved that the water maser is not
associated with either of the two nebulosities and
that a local early type star, presumably the YSO within the mm core,
is needed for its excitation.
In the VLA observations, maser emission was only searched for in a
limited velocity range, around $-60$ km s$^{-1}$, since at the time
this was the only component detected by single-dish observations.
The water maser in S235A-B has been monitored with the Medicina
radio telescope since 1987,
with coverage $\sim4$ times per year since 1993.
These observations
revealed
at least three separate velocity components: the one already known
at $\sim -60$ km s$^{-1}$, one between $-20$ and
$-30$ km s$^{-1}$, and one between $-10$ and 10 km s$^{-1}$. The last two
are always very weak (at most 10--20 Jy) and exhibit
strong variations.
Whether they are all related to the same YSO or to
separate
ones could not be established from single-dish observations because of
low
spatial resolution. This drove us to carry out new VLA line observations
at 22 GHz covering the whole maser velocity
interval. All three velocity components found in the Medicina
data
were active at the time of this VLA observation.
The S235A-B region contains other masers, namely methanol
(CH$_{3}$OH) and SiO (Nakano \& Yoshida 1986; Haschick et al.\ 1990;
Harju et al.\ 1998). Kurtz et al.\
(2004) included S235A-B in their recent VLA survey of the CH$_{3}$OH maser
line at 44 GHz, a Class I methanol transition
believed to trace outflow activity. These authors found
a cluster of 6 CH$_{3}$OH masers spread over an area of
a few square arcsec around the water maser spot at
$\sim -60$ km s$^{-1}$.
To clarify the nature of the embedded YSO and its relation to the
outflows found in the S235A-B region, we have performed an
extensive observational program using the VLA and the Medicina radio
telescopes, complemented by archival Spitzer data.
The two primary goals of the new VLA continuum observations
were: 1) to clarify the nature of the ``jet'' and to derive its spectrum over
a larger frequency interval and 2) to search further for cm emission from
ionized hydrogen in the mm core. VLA observations in the water maser line
together with the Medicina patrol can give indications on the
location and activity of the maser in the star-forming region.
Finally, for a better understanding of the precise correspondence
between IR and radio sources, in particular the role of M1 and the
possibility of detecting IR emission from the mm core,
archival Spitzer-IRAC observations of the S235A-B region in the four
wavelengths (3.6, 4.5, 5.8, and 8 $\mu$m) were retrieved and
analyzed.
In Sect.~\ref{obs} we describe the observations, and in Sect.~\ref{res}
we present the results, while in Sect.~\ref{discuss} we discuss our findings
and how they enrich our current understanding of the S235A-B region.
In Sect.~\ref{conclu}, the main results are summarised. The reader
can refer to Fig.~\ref{schema} for a comprehensive sketch of the
S235A-B star forming-region, including the latest data.
\section{Observations and data reduction}
\label{obs}
\begin{table*}
\begin{minipage}{\columnwidth}
\caption{Summary of VLA observations.
\label{obs:tab}}
\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{c c c c c c c c}
\hline\hline
Date & Frequency & \multicolumn{2}{c}{Synthesized beam\footnote{natural
weighting}} &
Largest
& Int.\ time & rms & Notes \\
& & Size & PA &
Angular Scale
& & & \\
& (GHz) & (arcsec $\times$ arcsec) & (degree) &
(arcsec)
& (sec) & (mJy/beam) & \\
\hline
07/03/2004, 26/02/2004 & 4.75 & $4.8 \times 4.3$ & $-8.2$ & 300 & 1200 & 0.09 & \\
07/03/2004 & 8.45 & $2.8 \times 2.6$ & $-20.8$ & 180 & 1800 & 0.08 & \\
26/02/2004 & 23 & $1 \times 1$ & $-40$ & 60 & 7224 & 0.03 & cont. \\
07/03/2004 & 23 & $1 \times 0.9$ & $-83.7$ & 60 & 434 & 0.05 & line \\
07/03/2004 & 45 & $0.6 \times 0.6$ & $88.6$ & 43 & 17590\footnote{fast switching} & 0.08 & \\
\hline
\end{tabular}
\end{minipage}
\end{table*}
\subsection{VLA observations}
The observations were carried out with the VLA of the
National Radio Astronomy Observatory
(NRAO)\footnote{The National Radio Astronomy
Observatory is a facility of the National Science Foundation operated
under cooperative agreement by Associated Universities, Inc.} in the C configuration
on February 26 and March 7, 2004.
The location of the water maser at $-60$ km s$^{-1}$ was used as phase centre;
its coordinates are:\\
$\alpha(2000)=05^{h}40^{m}53.42^{s}$,
$\delta(2000)=35\degr41\arcmin48\farcs 8$.\\
The observations
consisted of
calibrator-target-calibrator scans; in the
Q-band (0.7 cm) the fast-switching mode was used.
The phase calibrator was 0555+398, with
boot-strapped fluxes of 5.3 Jy (6 cm), 4.55 Jy (3.6 cm), 2.71 Jy (1.3 cm),
and 1.85 Jy (0.7 cm); the absolute amplitude calibrator was 3C147
(0542+498),
with assumed fluxes of 7.9 Jy (6 cm), 4.8 Jy (3.6 cm), 1.8 Jy (1.3 cm),
and 0.9 Jy (0.7 cm). The pointing accuracy was checked on 0555+398 (at 3.6 cm)
every hour. Line observations at 1.3 cm were carried out with
the correlator
set to 64 channels and a bandwidth of $6.25$ MHz. This resulted in a velocity
resolution of 1.3 km
s$^{-1}$ over a range of 84 km s$^{-1}$, sufficient to cover the whole
velocity range
of the water masers. The receivers were tuned so as to have the band centred
at $-30$ km s$^{-1}$ with respect to the Local Standard of Rest.
A summary of the observations is contained in Table~\ref{obs:tab}.
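As a cross-check of the quoted spectral setup, the channel spacing and total velocity coverage follow directly from the bandwidth and channel count (the 22.235 GHz H$_{2}$O rest frequency is assumed):

```python
C_KMS = 2.99792458e5      # speed of light in km/s
NU_H2O_GHZ = 22.23508     # H2O maser rest frequency (assumed)

bandwidth_mhz = 6.25
n_channels = 64

chan_width_mhz = bandwidth_mhz / n_channels
dv = C_KMS * chan_width_mhz / (NU_H2O_GHZ * 1.0e3)  # km/s per channel
coverage = dv * n_channels
print(round(dv, 1), round(coverage))  # 1.3 km/s per channel, ~84 km/s total
```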
Since the forthcoming analysis is based on a comparison of the position of the
radio sources with those
of near- and mid-IR sources, we checked
whether the radio
coordinate system is consistent with the near-IR coordinate system.
To this end, we searched the 2MASS point source catalogue
for a near-IR counterpart of our phase calibrator, finding an object at
the same position as the calibrator within 0$\farcs$1.
This gives us confidence that the two coordinate systems are consistent
with each other within this limit.
\subsection{Medicina observations}
The single-dish Medicina
radio telescope\footnote{The Medicina VLBI radio telescope is operated
by the Radioastronomy Institute of INAF, Italy} (HPBW 1\farcm 9)
observations are part
of a monitoring project of
a large sample of water masers. An autocorrelator with 1024 channels
and a 10 MHz bandwidth is usually employed. The typical
sensitivity for a 5-minute integration is of the order of 1 Jy, and the
average calibration error is about 20\%.
For a more detailed description of the radio telescope and the
relevant parameters of the water maser patrol, we refer to
Valdettaro et al.\ (2002) and Brand et al.\ (2003).
\subsection{Spitzer-IRAC observations}
Spitzer-IRAC observations of a large area around the S235A-B region in the four
wavelengths (3.6, 4.5, 5.8, and 8 $\mu$m) were extracted from the
Spitzer public archive.
The observations are part of the GTO program 201 ``The Role of
Photodissociation Regions in High Mass Star
Formation'' (Principal Investigator G. Fazio). The integration time is 12 s in
all filters.
The positional accuracy is better than $1\arcsec$.
Point-source FWHM resolutions range from $\sim1\farcs 6$ at 3.6 $\mu$m
to $\sim1\farcs 9$ at 8.0 $\mu$m.
The IRAC bands are broad and may contain various spectral features,
depending on the environment being observed. Among these, the
most important for our case are polycyclic aromatic
hydrocarbon (PAH) features (3.6, 5.8, and 8.0 bands) and the
Br$\alpha$ line (4.5 band).
An overview of IRAC is given by Fazio et al.\ (2004a).
\section{Results}
\label{res}
\subsection{Spitzer-IRAC observations}
A colour coded image of the region covering S235A-B, obtained by combining
4.5, 5.8 and 8.0 $\mu$m observations,
is shown in Fig.~\ref{spitzercolor}.
The two diffuse nebulosities S235A and S235B clearly dominate the
extended emission.
Most of the point sources detected in the $K$-band and shown in
Fig.~\ref{Fig0} are also present here.
Two
sources deserve more attention
in view of the present work:
1) M1,
which
is detected at all bands, and 2)
a source
(hereafter, S235AB-MIR)
detected only at 4.5, 5.8, and 8.0 $\mu$m, which
is present in the area of the mm core. Both
sources are marked in Figs.~\ref{spitzercolor} and
\ref{redsource}, where
we show an overlay of the mm core (at 1.2 mm) with
the 8.0 $\mu$m IRAC image. S235AB-MIR lies $1\farcs 5$ to the
south of the peak of the mm core.
\begin{figure*}
\centering
\includegraphics[angle=90,width=16cm]{color2.eps}
\caption{ Three colour (4.5 blue, 5.8 green, and 8.0 $\mu$m red) image of the
S235A-B region. The labels indicate the position of
M1, S235AB-MIR, and the other stars of the cluster for which we
performed photometry.
\label{spitzercolor}}
\end{figure*}
\begin{figure}
\centering
\includegraphics[angle=-90,width=8cm]{fig4.eps}
\caption{The 1.2 mm core (contours)
from Felli et al.\ (2004) overlaid on the
5.8 $\mu$m Spitzer-IRAC image (grey scale, S235A and S235B have been
saturated to show the weak emission from S235AB-MIR).
The positions of the mm core,
S235AB-MIR, and M1 are indicated.
\label{redsource}}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=-90,width=8cm]{col_col_latest.eps}
\caption{Colour-colour plots of the most relevant sources in the
area of the S235A-B cluster. The identifying numbers and symbols
are the same as those used in Table~\ref{spitz:tab} and Fig.~\ref{spitzercolor}.
In the bottom box, the regions occupied by Class I
and Class II sources are enclosed with dotted and dashed lines, respectively.
When only an upper limit to the flux density could be estimated in one
of the bands, the corresponding point in the plot is marked by an arrow.
\label{colcol}}
\end{figure}
We performed aperture photometry on the IRAC sources
found near the mm core using DAOPHOT in IRAF.
For all four bands, we selected
a radius of 2 pix ($\sim1$ FWHM; 1 pix $\sim1\farcs 2$)
and a 2-pix wide annulus with an
inner radius of 4 pix, to account for the highly
variable background. We applied aperture corrections as estimated
from the IRAC PSFs retrieved from the Spitzer Web Page
(http://ssc.spitzer.caltech.edu/obs/). To derive the
photometric zero points,
we used the zero-magnitude fluxes given by Fazio et al.\ (2004b).
Photometry was done only on
the most relevant sources present in the area of the S235A-B cluster,
which are indicated in Fig.~\ref{spitzercolor}.
Positions and flux densities in the four IRAC bands of the sources
labelled in Fig.~\ref{spitzercolor} are given in
Table~\ref{spitz:tab}.
\begin{table*}
\caption{MIR fluxes for the Spitzer-IRAC sources towards the mm core.
\label{spitz:tab}}
\renewcommand{\footnoterule}{}
\begin{tabular}{l c c c c c c}
\hline\hline
ID & \multicolumn{2}{c}{Position} & $F_{\nu}(3.6)$ &
$F_{\nu}(4.5)$ & $F_{\nu}(5.8)$ & $F_{\nu}(8.0)$ \\
& $\alpha(2000)$ & $\delta(2000)$ & (mJy) & (mJy) & (mJy)
& (mJy) \\
\hline
1 & $05^{h}40^{m}54.2^{s}$ & $35\degr41\arcmin59\arcsec$ &
$8.6 \pm 0.8$ & $7.0 \pm 0.8$ & $<12$ & $<39$ \\
2 & $05^{h}40^{m}53.7^{s}$ & $35\degr41\arcmin56\arcsec$ &
$7.8 \pm 0.9$ & $4.7 \pm 1.2$ & $<11$ & $<19$ \\
3 & $05^{h}40^{m}53.5^{s}$ & $35\degr41\arcmin53\arcsec$ &
$17.2 \pm 0.3$ & $21.2 \pm 0.6$ & $29 \pm 1$ & $43 \pm 4$ \\
4 & $05^{h}40^{m}53.7^{s}$ & $35\degr41\arcmin47\arcsec$ &
$1.9 \pm 0.2$ & $2.9 \pm 0.4$ & $<3$ & $<5$ \\
5 (M1) & $05^{h}40^{m}53.6^{s}$ & $35\degr41\arcmin43\arcsec$ &
$8.6 \pm 0.1$ & $15.0 \pm 0.1$ & $16 \pm 1$ & $13 \pm 1$ \\
6 & $05^{h}40^{m}54.0^{s}$ & $35\degr41\arcmin41\arcsec$ &
$2.1 \pm 0.1$ & $4.8 \pm 0.1$ & $9 \pm 1$ & $9 \pm 1$ \\
7 & $05^{h}40^{m}54.4^{s}$ & $35\degr41\arcmin29\arcsec$ &
$5.9 \pm 0.1$ & $7.5 \pm 0.1$ & $9 \pm 1$ & $10 \pm 1$ \\
8 & $05^{h}40^{m}53.8^{s}$ & $35\degr41\arcmin27\arcsec$ &
$33.6 \pm 0.1$ & $33.1 \pm 0.1$ & $36 \pm 1$ & $54 \pm 1$ \\
9$^a$ & $05^{h}40^{m}52.2^{s}$ & $35\degr41\arcmin41\arcsec$ &
$5.0 \pm 0.3$ & $4.7 \pm 0.2$ & $<5$ & $<24$ \\
10 & $05^{h}40^{m}52.7^{s}$ & $35\degr41\arcmin45\arcsec$ &
$8.5 \pm 0.2$ & $7.6 \pm 0.2$ & $9 \pm 1$ & $18 \pm 7$ \\
11 & $05^{h}40^{m}52.2^{s}$ & $35\degr41\arcmin55\arcsec$ &
$5.4 \pm 0.8$ & $2.9 \pm 0.7$ & $<17$ & $<95$ \\
12 & $05^{h}40^{m}51.7^{s}$ & $35\degr41\arcmin57\arcsec$ &
$3.5 \pm 0.8$ & $2.7 \pm 0.6$ & $<20$ & $<59$ \\
13 (S235AB-MIR) & $05^{h}40^{m}53.4^{s}$ & $35\degr41\arcmin49\arcsec$ &
$<0.6$ & $5.0 \pm 0.3$ & $14 \pm 1$ & $28 \pm 2$ \\
14 & $05^{h}40^{m}53.0^{s}$ & $35\degr41\arcmin53\arcsec$ &
$1.7 \pm 0.9$ & $3.7 \pm 1.0$ & $3 \pm 4$ & $<35$ \\
15 & $05^{h}40^{m}52.7^{s}$ & $35\degr41\arcmin52\arcsec$ &
$2.4 \pm 1.1$ & $1.2 \pm 0.9$ & $<9$ & $<22$ \\
\hline
\end{tabular}
\vspace*{1mm}
$^a$ NE of a nearby source only visible in the $5.8$ $\mu$m band.
\end{table*}
Colour-colour plots are shown in Fig.~\ref{colcol}. In the
[5.8]--[8.0]/[3.6]--[4.5] plot, the region occupied by Class I and Class II
objects (Allen et al.\ 2004) is indicated.
M1 has colours typical of a Class I object, while S235AB-MIR
is located in
a part of the plot redward of that
occupied by YSOs of mass $\sim6$ M$_{\sun}$,
according to Whitney et al.\ (2004), i.e. the region corresponding to
[5.8]--[8.0] $\ge 1$ and [3.6]--[4.5] $\ge 1$.
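The colour classification above can be reproduced from the flux densities in Table~\ref{spitz:tab}; a minimal sketch, assuming approximate IRAC zero-magnitude fluxes close to those of Fazio et al.\ (2004b):

```python
import math

# Approximate IRAC zero-magnitude fluxes in Jy (assumed values, close to
# those tabulated by Fazio et al. 2004b)
F0 = {3.6: 280.9, 4.5: 179.7, 5.8: 115.0, 8.0: 64.1}

def irac_mag(flux_mjy, band):
    """Magnitude in an IRAC band from a flux density in mJy."""
    return -2.5 * math.log10(flux_mjy * 1.0e-3 / F0[band])

# Source M1, fluxes from the photometry table: 8.6, 15.0, 16, 13 mJy
c1 = irac_mag(8.6, 3.6) - irac_mag(15.0, 4.5)    # [3.6]-[4.5]
c2 = irac_mag(16.0, 5.8) - irac_mag(13.0, 8.0)   # [5.8]-[8.0]
print(round(c1, 2), round(c2, 2))
```

With these assumed zero points, M1 comes out at [3.6]$-$[4.5] $\approx 1.1$ and [5.8]$-$[8.0] $\approx 0.4$, inside the Class I region of Allen et al.\ (2004), consistent with the classification quoted above.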
\subsection{Radio continuum}
\subsubsection{S235A}
\begin{figure}
\centering
\includegraphics[angle=0,width=8cm]{radio.eps}
\caption{Maps of the continuum radio emission from S235A at 6, 3.6, and
1.3 cm (contours). Levels are from 2 to 14 mJy/beam in steps of
2 mJy/beam for 6 cm, from 1 to
5 mJy/beam in steps of 0.5 mJy/beam for 3.6 cm, and from
0.1 to 0.5 mJy/beam in steps of 0.1 mJy/beam for 1.3 cm.
The 1.3 cm map has been smoothed to a lower resolution to increase
the S/N ratio. The 3.6 cm map is
overlaid with the 5.8 $\mu$m Spitzer-IRAC image (grey scale).
\label{s235a:radio}}
\end{figure}
The VLA radio continuum maps
are dominated by the emission from the
compact HII region S235A.
We calculated the integrated flux at all frequencies
over the area within the 3$\sigma$
contour of the emission at 6 cm (i.e. at the lowest resolution);
the obtained values are
listed in Table~\ref{cont:tab}. The integrated flux at 6 cm is in very
good agreement with previous measurements ($270 \pm 40$ mJy).
Israel \& Felli (1978)
found $220 \pm 30$ mJy at 21 cm and suggested partially optically thick
emission at 6 cm, but the
ratio of the fluxes at 6 and 3.6 cm that we derive from our data is consistent
with optically thin emission. The presence of an ionizing star of
spectral type B0.5 derived from the radio fluxes in previous
works (e.g. Felli et al.\ 1997) is therefore confirmed.
In Fig.~\ref{s235a:radio}, we show the maps of S235A at 6, 3.6, and 1.3 cm.
The 3.6 cm map is overlaid on
the 5.8 $\mu$m Spitzer-IRAC image and
shows a well-resolved spherical shape. The Spitzer-IRAC
image reveals the ionizing star S235A* at the centre of the nebula, previously
detected in the K band (Felli et al.\ 1997).
The main feature of the radio maps is the asymmetry of the isophotal contours in the
SE-NW direction. In the IRAC images, the morphology clearly indicates
the presence of a brighter ridge SE of S235A*.
The contours at 1.3 cm outline the brightest parts of the radio ridge.
In the original maps, the emission is more fragmented because of the
high resolution and low surface brightness.
In Fig.~\ref{s235a:radio}, the map has been smoothed to a
resolution of $3 \arcsec$.
At 1.3 and 0.7 cm, the radio fluxes are lower than expected
from an optically thin emission, as was also found by Felli et al.\ (2004) at 3.3 mm.
We attribute this to an instrumental effect caused by the filtering
of extended structures in the interferometric observations
(see Table~\ref{obs:tab}).
\subsubsection{The mm core, the jet, and the radio compact sources VLA-1 and VLA-2}
At none of the four VLA wavelengths were we able to
detect emission from the mm core, where the presence of an intermediate-mass
YSO is suggested by the mm observations.
This excludes any thermal emission from a UCHII region
associated with the YSO with a flux density above the noise level
given in Table~\ref{cont:tab} and implies
that emission from the core is dominated by dust.
At the same time, our non-detection does not contradict the existence of
a dust core. In fact,
extrapolating the flux density measured at 3.3 mm (20 mJy) to 0.7 cm
with a spectral index of 2.5
(Felli et al.\ 2004), we obtain an expected core flux of 3 mJy.
Felli et al.\ (2004) estimate that the core diameter in the continuum
at the highest resolution (1.2 mm) is $\sim3\arcsec$; hence, assuming that
all the emission is uniformly distributed in a circle of $1\farcs 5$ in radius
and using the synthesized beam size at 0.7 cm given in Table~\ref{obs:tab},
we obtain an expected flux of $\sim0.12$ mJy/beam, i.e.\ $< 2\sigma$
(see Table~\ref{obs:tab}). Hence, dust emission at 0.7 cm
could be present below the sensitivity limit of our observations.
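The two numbers quoted above (the $\sim3$ mJy extrapolated core flux and the $\sim0.12$ mJy/beam expected peak) can be reproduced as follows; the band centres and the geometric beam-area convention are assumptions:

```python
import math

# Extrapolate the 3.3 mm core flux to 0.7 cm with alpha = 2.5
s_33mm = 20.0                          # mJy (Felli et al. 2004)
nu_33mm, nu_07cm = 91.0, 43.0          # GHz (approximate band centres, assumed)
s_07cm = s_33mm * (nu_07cm / nu_33mm) ** 2.5   # ~3 mJy

# Spread the flux uniformly over a 1.5"-radius disk and take the fraction
# falling in one 0.6" x 0.6" beam (geometric beam area pi*a*b/4 assumed;
# a Gaussian-beam convention would give a somewhat larger value)
source_area = math.pi * 1.5 ** 2             # arcsec^2
beam_area = math.pi * 0.6 * 0.6 / 4.0        # arcsec^2
s_per_beam = s_07cm * beam_area / source_area
print(round(s_07cm, 1), round(s_per_beam, 2))  # ~3.1 mJy, ~0.12 mJy/beam
```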
Similarly, no extended emission
from the jet was observed at any of our four frequencies.
While this could be an effect of over-resolution
and low surface brightness of the jet at the shortest wavelengths,
it definitely rules out the hypothesis
of an ionized jet at 6 and 3.6 cm. In fact, at 6 cm, the flux density
per beam area
extrapolated from the 3.3 mm flux in the hypothesis of an ionized
jet (i.e. using a spectral index $\alpha$ = 0.6) would be a factor of 3 higher
than the upper limit quoted in Table~\ref{cont:tab}.
At 1.3 and 0.7 cm, where the resolution is higher,
two nearly unresolved sources are present.
They have been named VLA-1 and VLA-2 and are indicated in Fig.~\ref{Fig1},
where we show the VLA
map at 1.3 cm (contours) overlaid on the Plateau de Bure map at 1.2 mm
(grey scale) from Felli et al.\ (2004).
We derived the integrated fluxes of the two sources within the $3\sigma$ level
on the maps obtained with natural weighting.
At 3.6 and 6 cm, the two sources fall within the sidelobes of
S235A. This results in a noise higher than that predicted from the
total integration time (see Table~\ref{obs:tab}).
Using different weightings
to partially filter out the extended emission does not yield any significant
improvement in the
measurements. The flux densities are listed in Table~\ref{cont:tab}; the
upper limits at 6 cm (for both) and at 3.6 cm (for VLA-2) refer
to a point source.
\begin{figure}
\centering
\includegraphics[angle=-90,width=8cm]{fig1.eps}
\caption{Map of the continuum radio emission at 1.3 cm of the area
around the water masers (contours)
overlaid with the map of the continuum 1.2 mm emission
(grey scale) from Felli et al.\ (2004).
Levels are from 0.12 ($\sim4\sigma$) to
0.48 mJy/beam in steps of 0.12 mJy/beam. The locations of the
three H$_{2}$O maser spots detected in this work are indicated by ``+''.
M1 and the centre of S235B (asterisk) are also indicated.
The dashed lines define
the directions of the two outflows.
The dotted circle is the primary beam at 1.2 mm.
The synthesized beam at 1.3 cm is drawn in the bottom left-hand corner.
\label{Fig1}}
\end{figure}
VLA-1 lies $\sim10\arcsec$
south of the mm core and is located within the jet,
almost along its axis (see Figs.~\ref{Fig0} and
\ref{Fig1}).
It coincides with a small component of the fragmented jet observed at 1.2 mm
and, most noticeably, with the K-band source M1.
VLA-2 lies within the boundary of the
S235B nebulosity. VLA-2 represents
the first radio detection of this peculiar region.
Our flux densities of $\sim0.5$ mJy (see Table~\ref{cont:tab})
are not in conflict with the previously derived upper limits of 5 mJy at 6 cm
($10\arcsec$ beam; Israel \& Felli 1978)
and 0.3 mJy at 3.6 cm ($0\farcs 1$ beam; Tofani et al.\ 1995).
We have estimated the probability that VLA-1 and VLA-2 are
background sources. The expected number of extra\-galactic sources
at 1.3 cm in the field of view of Fig.~\ref{Fig1} ($\sim1$ square arcmin)
based on Eq.~(A11) of Anglada et al.\ (1998) is $N\sim 0.05$, making this
possibility very unlikely.
\begin{table*}
\begin{minipage}{\columnwidth}
\caption{Radio continuum fluxes. Upper limits are $3 \sigma$ noise levels.
\label{cont:tab}}
\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{l c c c c c c}
\hline\hline
Source & \multicolumn{2}{c}{Position} & \multicolumn{4}{c}{$S_{\nu}$(mJy)} \\
& $\alpha(2000)$ & $\delta(2000)$ & 0.7 cm & 1.3 cm & 3.6 cm & 6 cm \\
\hline
S235A & $05^{h}40^{m}52.70^{s}$ & $35\degr42\arcmin21\arcsec$ & $114 \pm 3$\footnote{Corrected
for primary beam attenuation.} &
$172 \pm 1$ & $248 \pm 2$ & $257 \pm 7$ \\
VLA-1 & $05^{h}40^{m}53.60^{s}$ & $35\degr41\arcmin43\arcsec$ & $0.59 \pm 0.08$ & $0.44 \pm 0.03$ & $0.39 \pm 0.08$ & $< 0.27$ \\
VLA-2 & $05^{h}40^{m}52.40^{s}$ & $35\degr41\arcmin30\arcsec$ & $0.48 \pm 0.08$ & $0.47 \pm 0.03$ & $<0.24$ & $<0.27$ \\
\hline
\end{tabular}
\end{minipage}
\end{table*}
\subsection{H$_{2}$O masers}
\subsubsection{VLA observations}
In the selected velocity range (roughly from $-70$ to 10 km s$^{-1}$),
three maser spots were detected
above the $5\sigma$ noise.
We have determined the flux densities and positions of the maser spots
by 2-dimensional Gaussian fits in each channel. All velocity components
in the same spot are spatially unresolved.
Coordinates of the three maser spots and the
flux density of each velocity peak are listed in Table~\ref{h2o:tab};
their relations to the other features present in the area are shown
in Figs.~\ref{Fig1} and \ref{Figmaser}.
It is important to note that the three maser spots
cover different velocity ranges, as shown in Fig.~\ref{Fig2}, so that
there is no velocity overlap among the three spatial components.
One of them (S235A-B-H2O/3)
coincides, within $\sim0\farcs 5$, with that found with the VLA at
$\sim -60$ km s$^{-1}$ by Tofani et al.\ (1995) and emits in the
same velocity range.
The other two (S235A-B-H2O/1 and S235A-B-H2O/2) occur at radial
velocities that had not been searched for in the previous
VLA observation because at that time they were not detectable in single-dish
observations, but they have since been revealed in the Medicina patrol.
The maser luminosity for each
spot was obtained by integrating the line emission
within the respective velocity ranges.
The results are listed in Table~\ref{lumi:tab},
along with the corresponding velocity range.
The H$_{2}$O luminosity is typical of masers associated with
far-Infrared (FIR) sources
of $10^{3} - 10^{4}$ L$_{\sun}$ (see Palagi et al.\ 1993). This is
consistent with the
upper limit of the bolometric luminosity inferred for the mm core
by Felli et al.\ (2004).
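A sketch of the luminosity conversion, using the standard isotropic relation for the 22.235 GHz line; the integrated flux value is an assumed illustration, not a measurement from this work:

```python
# Isotropic H2O line luminosity: L = 4*pi*d^2 * (nu/c) * Int S dv, which for
# the 22.235 GHz maser line reduces to the standard relation
#   L/Lsun ~ 2.3e-8 * (d/kpc)^2 * Int S dv [Jy km/s]
def h2o_luminosity_lsun(int_flux_jy_kms, distance_kpc):
    return 2.3e-8 * distance_kpc ** 2 * int_flux_jy_kms

# An integrated flux of ~4.2 Jy km/s (assumed) at 1.8 kpc reproduces the
# 3.1e-7 Lsun listed for S235A-B-H2O/1
print(h2o_luminosity_lsun(4.2, 1.8))
```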
\begin{table}
\begin{minipage}{\columnwidth}
\caption{Water masers: fluxes and positions.
\label{h2o:tab}}
\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{c c c c c}
\hline\hline
Name & \multicolumn{2}{c}{Position} & & \\
S235A-B & $\alpha(2000)$ & $\delta(2000)$ & $V_{\rm LSR}$ & $S_{\nu}$ \\
& & & (km s$^{-1}$) & (Jy) \\
\hline
H2O/1 & $05^{h}40^{m}53.27^{s}$ & $35\degr41\arcmin50\farcs 0$ & 7 & 1.20 \\
& & & 3 & 0.50 \\
H2O/2 & $05^{h}40^{m}53.63^{s}$ & $35\degr41\arcmin43\farcs 0$ & $-18$ & 0.16 \\
& & & $-25$ & 0.52 \\
& & & $-29$ & 1.66 \\
H2O/3 & $05^{h}40^{m}53.38^{s}$ & $35\degr41\arcmin48\farcs 6$ & $-58$ & 0.37 \\
& & & $-62$ & 0.23 \\
& & & $-64$ & 0.22 \\
& & & $-68$ & 1.44 \\
\hline
\end{tabular}
\end{minipage}
\end{table}
From Figs.~\ref{Fig1} and \ref{Figmaser},
S235A-B-H2O/2 clearly coincides with VLA-1 and M1
and does not seem to be
directly related to the mm core.
The other two spots (S235A-B-H2O/1 and S235A-B-H2O/3)
are very close to the mm core and are aligned perpendicular
to the NE-SW outflow, $\sim2\arcsec$ apart from each other.
The possibility that they might be tracing a disk or torus
perpendicular to the NE-SW outflow will be examined in Sect.~\ref{discuss}.
\begin{table}
\begin{minipage}{\columnwidth}
\caption{Water masers: line luminosity.
\label{lumi:tab}}
\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{c c c}
\hline\hline
Name & $L_{\rm H_{2}O}$\footnote{
All masers are assumed to be at the distance of the S235A-B complex.
}
& $\Delta V_{\rm LSR}$ \\
& ($10^{-7}$ L$_{\sun}$) & (km s$^{-1}$) \\
\hline
S235A-B-H2O/1 & $3.1$ & 9 to $-1$ \\
S235A-B-H2O/2 & $3.3$ & $-17$ to $-30$ \\
S235A-B-H2O/3 & $4.1$ & $-55$ to $-71$ \\
\hline
\end{tabular}
\end{minipage}
\end{table}
\begin{figure}
\centering
\includegraphics[angle=-90,width=8cm]{fig2.eps}
\caption{H$_{2}$O VLA spectra averaged over a circle 1$\arcsec$ in radius
centred on the location of each of the three maser spots. The name
of each spot is indicated above the corresponding
spectrum. The intensity scale is offset by 2 and 4 Jy for clarity.
The vertical line marks the velocity of the molecular
cloud.
\label{Fig2}}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=-90,width=8.5cm]{upenv.eps}
\caption{Upper envelope of all water maser spectra
observed with the Medicina radio telescope towards S235A-B,
up to July 2005. The three corresponding
maser spots detected with the VLA
observations are indicated. The vertical line defines the velocity of
the molecular cloud.
\label{medi:maser}}
\end{figure}
\begin{figure*}
\centering
\includegraphics[angle=-90,width=16cm]{new_var.eps}
\caption{Time-velocity-intensity plot of the water maser emission from
S235A-B observed with the Medicina radio telescope.
The starting date is March 31, 1987.
The dates of the first VLA observation by Tofani et al.\ (1995) and of
the present observations are indicated (month/year) with an arrow and
their velocity ranges are
enclosed within a long rectangle. The three maser
spots are indicated by bracketing the velocity ranges with vertical dashed lines.
The vertical solid line defines the velocity of the molecular
cloud. The black areas are time-velocity regions with no observations.
\label{Fig5}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[angle=-90,width=16cm]{var_small.eps}
\caption{Time-velocity-intensity plot of the water maser emission from
S235A-B observed with the Medicina radio telescope over the
velocity range from $-10$ to $+10$ km s$^{-1}$, which corresponds to
the maser spot S235A-B-H2O/1.
\label{Fig6}}
\end{figure*}
\subsubsection{Medicina observations}
The different velocity ranges of the maser spots detected
with the VLA
make the low-resolution Medicina observations a useful tool to follow
the evolution of each spot, even though they all fall
within the Medicina beam. The upper envelope of the water
maser emission using all the available single-dish observations is shown in
Fig.~\ref{medi:maser}. A comparison with Fig.~\ref{Fig2} shows
that the emission from the three spatially separated VLA maser spots
occurs in very different velocity ranges, so that the spots are separated
(although spatially unresolved) in the Medicina observations as well.
In Fig.~\ref{Fig5}, we show the time-velocity-intensity plot from the Medicina
patrol. The typical noise level in these observations throughout
the entire period is of the order of 1--2 Jy.
The starting date corresponds to March 31, 1987. The patrol is
sparser at the beginning. After 1992, there are about 4 observations
every year. Following the separation in velocity of the three maser spots,
each velocity component (at $\sim-60$, $\sim-25$, and $\sim0$ km s$^{-1}$)
is labelled in Fig.~\ref{Fig5} with the corresponding VLA name.
The dates of the two VLA observations are indicated.
The systemic velocity of the
molecular cloud ($-17$ km s$^{-1}$) is also indicated.
Only S235A-B-H2O/2 occurs at a
velocity close to that of the thermal molecular lines; the other two components
(S235A-B-H2O/1 and S235A-B-H2O/3) emit well outside the width of the
thermal molecular lines.
The biggest flare occurred from the component
S235A-B-H2O/3 at the time of the first VLA observation. The source
then disappeared below the noise and came up again just shortly before
the second VLA observation.
Component S235A-B-H2O/2 was undetectable
for most of the time and appeared above the noise only after 2000.
Component S235A-B-H2O/1, although always rather weak, is present
most of the time. Its most noteworthy aspect is that the velocity
changes by up to $\pm 5$ km s$^{-1}$ around a mean value of $\sim0$ km s$^{-1}$.
To better illustrate this effect, the time-velocity-intensity plot for the
velocity range from $-10$ to 10
km s$^{-1}$ is shown in Fig.~\ref{Fig6}.
To make sure that the change in velocity is a real effect and not an
instrumental one and to provide an independent estimate of the accuracy
on the velocity, we have examined another water maser (G32.74--0.08), that is
also included in the Medicina patrol, and that was chosen because it is characterized
by a single, narrow, and intense velocity component.
For this maser, the velocity of the peak displays a maximum deviation
from the mean value over
the entire period of $< 0.1$ km s$^{-1}$, a factor of 50 smaller than the
velocity spread observed in S235A-B-H2O/1.
\section{Discussion}
\label{discuss}
\subsection{S235A}
In Fig.~\ref{s235a:radio}, the overlay of the 3.6 cm map
with the 5.8 $\mu$m Spitzer image shows that the peak of
the IR emission occurs in a shell outside the boundary of the radio
emission. This proves that PAH and thermal dust emissions are mostly
located beyond the ionization front in the Photodissociation Region.
\begin{figure}
\centering
\includegraphics[angle=-90,width=8cm]{c17_over.eps}
\caption{ The HCO$^{+}$ cloud at $-17$ km s$^{-1}$ (contours)
overlaid on the 5.8 $\mu$m Spitzer-IRAC image (grey scale). Note that
the lower contours of the molecular emission closely follow
the boundary of the IR emission, suggesting that the S235A HII
region is interacting with the molecular cloud.
\label{s235Amol}}
\end{figure}
In Fig.~\ref{s235Amol}, we show the overlay of the HCO$^{+}$ integrated
emission around $-17$~km
s$^{-1}$ with the 5.8 $\mu$m Spitzer-IRAC image.
The overlay indicates that the lower
contours of the molecular emission closely follow the outer boundary of
S235A, suggesting that the HII region and the molecular cloud are
interacting. It is also worth noting that the mm core is just outside the S235A
boundary. This situation resembles that of some well-known cometary-shaped
UCHII regions such as G29.96--0.02 and G34.26+0.15 (Reid \& Ho 1985;
Wood \& Churchwell 1989; Fey et al. 1995), which face a density peak of
the molecular clump enshrouding them (Maxia et al. 2001; Gibb et al. 2004;
Watt \& Mundy 1999). In a number of cases, it has been found that such
a peak coincides with a hot molecular core, where
massive star formation is going on (Cesaroni et al. 1998;
Heaton et al. 1989; Garay \& Rodr\'{\i}guez 1990). In our case, the
temperature of the molecular core is
$\sim30$ K (see Felli et al. 2004), well below the typical temperature
of hot molecular cores (see Kurtz et al.\ 2000),
and the HII region S235A has an asymmetric rather than a cometary shape;
nevertheless, the interaction
between the ionized gas and the molecular cloud, as traced by
the overall structure
of the region, suggests that in this case also, one is observing an active
burst of star formation where different evolutionary phases (from cores to
evolved HII regions) co-exist. Whether the star formation episode in
the molecular core has been triggered by the expansion of the
nearby HII region S235A remains an open issue.
\subsection{VLA-1 and the jet}
Our radio continuum observations were unable
to detect the elongated structure that had previously been found
at 3.3 mm, which also coincides with the blue lobe
of the NNW-SSE HCO$^{+}$(1--0) outflow. Instead,
a compact radio source, VLA-1, was found coincident with
a small blob of emission in the central part of the 1.2 mm ``jet'',
close (3$\arcsec$ east) to the weaker molecular peak at $-19$ km s$^{-1}$,
called C19
(Felli et al.\ 2004). More importantly, VLA-1 coincides with the newly found
water maser S235A-B-H2O/1 and with M1, as shown in
Fig.~\ref{spitzer11cm}, where the 1.3 cm VLA map is
overlaid on the Spitzer-IRAC 3.6 $\mu$m image.
M1, which had been previously assumed to be associated with
the $-60$ km s$^{-1}$ water maser (Felli et al.\ 1997),
is instead associated with a radio continuum source and a new, separate
water maser at $\sim -30$ km s$^{-1}$.
Although the errors on the flux densities are large, the values given
in Table~\ref{cont:tab} for VLA-1 are consistent with
a spectral index $\alpha \approx 0.2$. This value is
typical of partially optically thick free-free emission.
Since the present morphology no longer supports an ionized
jet interpretation, we have to consider the alternative
possibility that VLA-1 is an independent UCHII region, a hypothesis
also suggested by
the precise association of the water maser with M1. A lower limit
to the total number $N_{\rm Ly}$ of ionizing photons can be obtained from the
1.3 cm radio flux (the one with the highest signal-to-noise ratio) by
assuming that the HII regions are optically thin (Mezger 1978). We
obtain $N_{\rm Ly} \sim10^{44}$ s$^{-1}$, typical of B2-B3 ZAMS stars
(Panagia 1973).
What may have caused the apparent disagreement between the cm radio observations,
which indicate the presence of a UCHII region, and the mm observations that
had suggested a jet?
Extrapolating the radio flux of VLA-1 according to $\nu^{\alpha}$
($\alpha \approx 0.2$), we derive $\sim0.8$ mJy
at 1.2 mm and $\sim0.6$ mJy at 3.3 mm. The 1.2 mm flux is below
$1\sigma$ ($\sim1$ mJy) in the Plateau de Bure map, so it could
not have been detected as a point source. The 1.2 mm
flux of the ``jet'', 13 mJy, was integrated over a much larger
area defined by the 3.3 mm map.
At 3.3 mm, the extrapolated flux of VLA-1 is at a $2 \sigma$ level,
again barely detectable as a point source, and in any case
difficult to see since it would lie within the elongated 3.3 mm structure
(about $10\arcsec$ in the elongated direction and unresolved in the perpendicular
direction), with 7 mJy of integrated flux.
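The extrapolation used above is a simple power law, $S(\nu)=S_0(\nu/\nu_0)^{\alpha}$; the short sketch below reproduces it. The 1.3 cm flux density is an illustrative round number, since only the extrapolated values are quoted in the text.

```python
# Power-law extrapolation of a radio flux density: S(nu) = S0 * (nu/nu0)**alpha.
def extrapolate_flux(s0_mjy, nu0_ghz, nu_ghz, alpha=0.2):
    return s0_mjy * (nu_ghz / nu0_ghz) ** alpha

# Assumed ~0.5 mJy at 1.3 cm (~23 GHz); alpha = 0.2 as for partially
# optically thick free-free emission.
s_1p2mm = extrapolate_flux(0.5, 23.0, 250.0)  # 1.2 mm corresponds to ~250 GHz
s_3p3mm = extrapolate_flux(0.5, 23.0, 91.0)   # 3.3 mm corresponds to ~91 GHz
# Both come out close to the ~0.8 and ~0.6 mJy discussed in the text.
```

With these round numbers the extrapolated fluxes land near the values quoted above, i.e. below (or barely at) the point-source detection thresholds of the mm maps.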
These contradictory aspects can be reconciled if
the emission from the elongated 3.3 mm structure comes
from dust, perhaps that associated with
the NNW-SSE blue outflow lobe or with the C19 molecular peak.
The spectral index $\alpha=0.6$ between 3.3 and 1.2 mm could be the result
of the simultaneous presence of a dusty jet with a steep spectral index
and a weak UCHII region showing up only at lower frequencies, where
dust emission is negligible.
Is VLA-1/M1
the driving source of the NNW-SSE outflow? Due to the overlap
with the NE-SW outflow, the centre of the NNW-SSE outflow
is ill-defined and any argument based on the position of
VLA-1 with respect to the centre of the NNW-SSE outflow is inconclusive.
However, we can exclude any association of VLA-1 with the NE-SW outflow,
given its offset with respect to the outflow axis.
Finally, we note that one
of the methanol masers (CH$_3$OH/4) lies close (2$\arcsec$
southwest) to VLA-1 and the water maser
S235A-B-H2O/2 (see Fig.~\ref{Figmaser}) and has velocities in a
similar range (from
$-16$ to $-21$ km s$^{-1}$, see Kurtz et al.\ 2004), suggesting a common origin.
The velocities of the water and methanol masers
are very close to that of the molecular core, indicating that in this
case, the component of the motions along the line of sight
of the masers with respect to the molecular core is negligible.
\begin{figure}
\centering
\includegraphics[angle=-90,width=8cm]{fig10.eps}
\caption{Overlay of the 1.3 cm VLA map (full contours) with the
Spitzer-IRAC 3.6 $\mu$m image (grey scale) and the 1.2 mm
continuum (dashed contours).
\label{spitzer11cm}}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=-90,width=8cm]{fig_maser.eps}
\caption{Position of the three water masers (H2O/1-3 $+$) and of
the six methanol masers (CH3OH/1-6 $\times$) detected by Kurtz et al.\ (2004).
The 1.2 mm core
(triangle), VLA-1 (square), and M1 are also indicated. Note that
the symbol used to mark the location of the 1.2 mm core
is smaller than the errorbars.
\label{Figmaser}}
\end{figure}
\subsection{VLA-2 and S235B}
The overlay of Fig.~\ref{spitzer11cm} shows that VLA-2 lies at the centre
of the S235B bright diffuse nebula (saturated in all IRAC bands).
The Spitzer-IRAC images, in particular
those at longer wavelengths,
indicate that this nebula is composed of two very close components
(see Fig.~\ref{spitzercolor}).
VLA-2 lies at the centre of the southern and more extended one.
Overall, the size of the mid-IR nebula is about 10$\arcsec$, similar to that
observed in H$\alpha$, so that the optical-IR
morphology is more reminiscent of S235A (resolved HII region)
than that of VLA-1 (IR and radio unresolved).
Besides being detected in H$\alpha$, strong Br$\gamma$ emission from
an unresolved source (i.e. $<$ 3$\arcsec$) coincident with the
$K$-band point source in S235B had been reported
by Krassner et al.\ (1982) and Felli et al.\ (1997), with an
integrated line
flux
of $F$(Br$\gamma$) = (2.0 $\pm$ 0.4) $\times$ 10$^{-12}$
erg s$^{-1}$ cm$^{-2}$.
The expected radio flux
density
from an optically thin HII region at 6 cm
would have been greater than 200 mJy, where the lower limit accounts
for the fact that the
Br$\gamma$ flux was not corrected for extinction. This is at odds
with the upper limits to the radio flux found in all previous
radio continuum observations, as well as with the present detection.
In the past, this forced the abandonment of
the hypothesis of a classical HII region (unless
extremely optically thick)
and suggested
that the Br$\gamma$
emission originates from an ionized expanding envelope around an early-type
star (Felli et al.\ 1997), in which case the ratio of radio-to-IR line
emission
would be
reduced by about two orders of magnitude (Simon et al.\ 1983).
Our measured fluxes represent the first detection of the radio
continuum from an underlying unresolved source.
The radio flux density at 3.6 cm
expected from a fully ionized envelope using
$F$(Br$\alpha$)/$F$(Br$\gamma$) $\sim0.8$
and Eq. (20) of Simon et al. (1983)
is 0.46 mJy. While this agreement with the observed value
might be fortuitous in view of the many unknown parameters involved
(velocity of the
wind, correction for extinction, etc.), it clearly indicates that an
ionized envelope around a lower-luminosity, more evolved star remains
the best interpretation. However, it must be noted that
the observed spectral index is smaller than the expected 0.6 value.
Our flux densities are fully
consistent with the previous upper limits at radio wavelengths and with
the unresolved nature of S235B.
The implied mass loss using Eq. (14) of
Felli \& Panagia (1981) is $4\times10^{-6}$ M$_{\sun}$ yr$^{-1}$, quite large
and indicative of a luminous star.
Future radio recombination line observations
with sufficiently high sensitivity may
permit measurement of
the line width to determine the velocity of the ionized wind.
There are no masers, molecular peaks, outflows, or mm
peaks associated with S235B, which suggests
that S235B-VLA-2 is more evolved than the mm core and VLA-1.
The diffuse H$\alpha$ and IR emission from
S235B may be attributed to reflected light from ionized stellar envelopes.
Finally,
what
is the luminosity and mass of the star
embedded in S235B? We cannot use
the Spitzer-IRAC observations because they are saturated.
Instead, we used
the J, H, and K magnitudes from Felli et al.\ (1997) and the MSX fluxes
(see Egan et al.\ 1999).
The integral of these values gives 410 L$_{\sun}$,
which must be considered to be a lower limit
because the FIR part of the spectrum is not taken into
account in our calculation.
\begin{figure}
\centering
\includegraphics[width=8cm]{fig12.eps}
\caption{HCO$^{+}$(1--0) emission from the NE-SW outflow (thin contours)
overlaid with the blue- and red-shifted emission of
C$^{34}$S(5--4) (thick contours) from Felli et al.\ (2004).
The ``$+$'' symbols mark
the location of the water maser spots H2O/1 ($\sim7$ km s$^{-1}$)
and H2O/3 ($\sim-60$ km s$^{-1}$).
The background is the 8.0 $\mu$m Spitzer-IRAC image (grey scale) and shows the
position of S235AB-MIR, coincident with the southern water maser
(H2O/3).
\label{Fig3}}
\end{figure}
\subsection{The mm core and the S235A-B-H2O/1 and S235A-B-H2O/3 masers}
As was already pointed out, an important result of the cm radio continuum
observations is the lack of emission
from the mm core.
In Felli et al.\ (2004), the luminosity of the embedded YSO had
been estimated to be $10^3$ L$_{\sun}$.
If this comes from a ZAMS star of spectral type B3 or earlier,
to make the radio
free-free
emission undetectable, the
radius of the associated HII region must be less than $\sim$6.5~AU due to
confinement by a density larger than $7\times10^6$~cm$^{-3}$. In any case, the
detection of pure thermal dust emission from the core indicates a very early
evolutionary phase of the embedded YSO.
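The quoted size limit can be checked against the Strömgren relation $R_S=[3N_{\rm Ly}/(4\pi n_{\rm e}^2\alpha_{\rm B})]^{1/3}$; in this hedged sketch the ionizing-photon rate and the case-B recombination coefficient are assumed round values, not measurements from this work.

```python
import math

# Stromgren radius of a dense, pressure-confined HII region.
N_LY = 1e44        # ionizing photons per second, assumed for an early-B ZAMS star
N_E = 7e6          # electron density in cm^-3, the value quoted in the text
ALPHA_B = 2.6e-13  # case-B recombination coefficient at ~1e4 K, in cm^3 s^-1
AU_CM = 1.496e13   # 1 AU in cm

r_s_cm = (3.0 * N_LY / (4.0 * math.pi * N_E**2 * ALPHA_B)) ** (1.0 / 3.0)
r_s_au = r_s_cm / AU_CM  # a few AU, the same order as the ~6.5 AU limit above
```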
The new result provided by the Spitzer-IRAC observations is the
detection of S235AB-MIR very close to the mm peak.
The positions of the mm core and
S235AB-MIR differ by $\sim1\farcs 5$, which is slightly greater
than the error on the relative positions, so that at this stage
it cannot be firmly established if
S235AB-MIR represents the YSO embedded in the mm core.
Its position in the colour-colour plots of Fig.~\ref{colcol} definitely puts
this source in the Class I category.
It is possible to check whether the S235AB-MIR fluxes
between $3.6$--8 $\mu$m are consistent with the spectral energy distribution
(SED) of a heavily embedded
(proto-)star of $10^3$ L$_{\sun}$.
Felli et al.\ (2004) show that the non-detection of such a source in
the $K$ band towards the mm core implies $A_{V} \ga 37$
mag, in agreement with the derived H$_2$ column density. Using the extinction
law found by Indebetouw et al.\ (2005), we derived the intrinsic fluxes
(or upper limits) of S235AB-MIR in the four
IRAC bands.
Of course, since S235AB-MIR is
a Class I source, most of the emission at these wavelengths
arises from circumstellar matter rather than from the (proto-)stellar
photosphere. For this reason, we compared our results to the SED
models for intermediate-mass Class I sources (star plus disk and envelope;
Whitney et al.\ 2003, 2004).
We found that, within the limits of
the many parameters of their
models,
the MIR SED of
S235AB-MIR is consistent with a central star later than B3.
The two northernmost water masers, S235A-B-H2O/1 and S235A-B-H2O/3,
are located close to the mm core and emit over velocity ranges
quite different from
those of the other water and methanol masers,
as well as those of the molecular
cloud ($\sim-17$ km s$^{-1}$).
Figure~\ref{Fig3} (adapted
from Fig.~21 of Felli et al.\ 2004) shows the NE-SW HCO$^{+}$ outflow
with the contours of a perpendicular bipolar
structure traced by the wings
of C$^{34}$S(5--4).
This was interpreted by Felli et al.\ (2004) as the signature of a rotating
disk around the YSO driving the outflow. Strikingly enough, the
two water masers are aligned in the same direction and very close to the
C$^{34}$S(5--4) structures: the blue-shifted
maser (S235A-B-H2O/3 at $\sim -60$ km s$^{-1}$) lying towards
the blue C$^{34}$S(5--4) lobe
(integrated from $-21$ to $-19$ km s$^{-1}$)
and the red-shifted maser (S235A-B-H2O/1 at $\sim7$ km s$^{-1}$)
lying towards the red C$^{34}$S(5--4) lobe (from $-16$ to $-14$ km s$^{-1}$).
However, in both cases the velocities of the water masers are more red- or
blue-shifted than those of the corresponding C$^{34}$S(5--4) lobes.
A simple calculation based on the maser velocities
shows that they cannot simply belong to
a disk in Keplerian rotation around the YSO. From the difference between
the most extreme maser velocities (7 and $-68$ km s$^{-1}$),
we can infer a lower limit to the rotation velocity of $37$ km s$^{-1}$.
If one assumes the half-distance between the two maser
spots ($\sim1\arcsec$ or 1800 AU) to be the orbital radius, then the mass needed
to maintain such a rotating disk should be $> 2500$ M$_{\sun}$,
much larger than the mass of the molecular core.
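The dynamical-mass estimate above can be reproduced directly from $M = v_{\rm rot}^2\,r/G$, using the values given in the text:

```python
# Lower limit on the central mass required for Keplerian rotation.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass in kg
AU = 1.496e11     # astronomical unit in m

# Half the difference of the extreme maser velocities (7 and -68 km/s).
v_rot = 0.5 * (7.0 - (-68.0)) * 1e3    # m/s
r = 1800.0 * AU                        # half-distance between the maser spots
m_min_msun = v_rot**2 * r / G / M_SUN  # ~2900 M_sun, above the > 2500 M_sun quoted
```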
This fact proves that the water masers cannot trace rotation about an
embedded YSO. One possibility is that they are instead the signature of the
interaction with the C$^{34}$S disk of high
velocity outflowing material
from the YSO. The higher (red- and blue-shifted) velocities of the water masers
with respect to C$^{34}$S, and their positions closer to the mm core, may suggest
that the acceleration of the outflowing material occurs in the immediate
surroundings of the YSO.
Another possibility is that the C$^{34}$S emission is not tracing a disk,
but rather an outflow, whose high-velocity component would be seen in the
H$_2$O maser lines. This scenario is more consistent with the common belief
that water masers are strictly associated with jets powering molecular outflows
(Felli et al. 1992), as observed, e.g., in the massive protostar
IRAS\,20126+4104 (Moscadelli et al. 2000, 2005). If this hypothesis is
correct, it remains to be established whether the bipolar outflow seen in the
C$^{34}$S and H$_2$O lines would be the same as the NNW-SSE outflow or a
distinct one oriented approximately in the same direction, but originating
from a different YSO.
Of the two hypotheses presented above, we believe that the outflow origin for
the C$^{34}$S and H$_2$O emission is the most likely, given the tight
association between outflows and water masers. However, at present it is
impossible to rule out the possibility that the C$^{34}$S emission is coming
from a disk, as in the case of the Keplerian disk in IRAS\,20126+4104
(Cesaroni et al. 2005). Class~I methanol masers are believed to be excited
in jets, so that a priori the 7~mm CH$_{3}$OH masers imaged by Kurtz et al.
(2004) in S235 could be used to establish the direction of the outflow and
hence choose between the two hypotheses. As shown in
Fig.~\ref{Figmaser}, five of the maser spots cluster around the mm core
suggesting that for them, too, the main source of energy is the YSO within the
mm core. In particular, CH$_{3}$OH/2 is very close to S235A-B-H2O/3. However
their velocities lie within a narrow range (from $-15.9$ to
$-21.0$~km~s$^{-1}$) and the spots do not show a clear bipolarity, either in
velocity or in their distribution. It is therefore difficult to associate the
methanol maser emission to any precise geometry. The only conclusion is
that they are unlikely to have the same dynamical origin as the two water
masers.
\subsection{Water maser variability}
Some indication of what occurs in the immediate surroundings of the
YSO embedded within the mm core may come from the variability
of the two associated water masers.
The emission at $\sim-60$ km s$^{-1}$
(S235A-B-H2O/3) reached its maximum (up to $\sim110$ Jy)
in 1992--1993, lasting at most
2 years, then disappearing for most of the Medicina patrol, and finally
reappearing just before the VLA observations.
Most noteworthy is
the emission from S235A-B-H2O/1 (between $-10$ and $10$ km s$^{-1}$), not only
because of its long lasting presence and high variability, but also
because of its velocity shifts, $\pm 5$ km s$^{-1}$.
Figure~\ref{medi:maser:velo} shows the LSR velocities
of S235A-B-H2O/1 (obtained by Gaussian fitting)
in the period 1989--2005.
While it is possible that the velocity variations are simply due to
the random flaring of spots at different velocities, we shall investigate
more physically appealing interpretations.
To check whether the effect is due to a rotational modulation, we
tried to fit all the observed velocities (the source is above the noise
for about 60\% of the time) with a sine function, as would be
expected for a maser spot on a rotating disk viewed edge-on.
No significant evidence was found.
The velocities in Fig.~\ref{medi:maser:velo} are all redshifted
with respect to the systemic velocity,
but they are not random and seem to fan out at later times.
We identify two groups of
points (labelled as 1 and 2 in
Fig.~\ref{medi:maser:velo}), which exhibit linear drifts of velocity
away from the systemic value.
A linear fit (dashed lines in Fig.~\ref{medi:maser:velo}) provides
velocity drifts of
0.93 and 0.98 km s$^{-1}$ yr$^{-1}$, respectively.
Velocity drifts of this amount have been observed in other water masers that are
stronger and less variable in intensity (Brand et al.\ 2003).
It is tempting to explain the velocity drifts
with shocked material that is accelerated from
a mean velocity $\sim0$ km s$^{-1}$
by mass outflow from a central YSO.
The lifetime of the accelerated spots is $\sim1000-2000$ days
($\sim3-6$ yrs) and could be related to the duration of
ejection events from
the YSO.
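The linear fits of the velocity drifts described above can be sketched with NumPy; the data below are synthetic and purely illustrative of the procedure, not the actual Medicina measurements.

```python
import numpy as np

# Synthetic peak velocities drifting away from the mean at ~0.95 km/s/yr,
# mimicking one of the two groups of points (illustrative values only).
rng = np.random.default_rng(0)
t_yr = np.linspace(0.0, 4.0, 12)                      # observing epochs in years
v_kms = 1.0 + 0.95 * t_yr + rng.normal(0.0, 0.1, 12)  # fitted peak velocities

# First-degree polynomial fit: the slope is the velocity drift in km/s/yr.
drift, offset = np.polyfit(t_yr, v_kms, 1)
```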
The maser outflows from S235A-B-H2O/1 clearly deserve further, proper motion
studies with VLBI techniques.
\begin{figure}
\centering
\includegraphics[angle=-90,width=8.5cm]{velo_maser.eps}
\caption{
Velocities of the peaks of S235A-B-H2O/1
from Gaussian fits to the data from
the Medicina radio telescope. We have tentatively outlined,
with dashed lines, two components (labelled as 1 and 2)
whose velocities might be drifting during the period
of our observations.
\label{medi:maser:velo}}
\end{figure}
\section{
Summary and conclusions
}
\label{conclu}
\begin{figure}
\centering
\includegraphics[width=8cm]{scheme.eps}
\caption{Sketch (not to scale)
of the star-forming region S235A-B in light
of the new data presented in this paper. New and already-known sources
are labelled, and their relationships are discussed in the text.
\label{schema}}
\end{figure}
We have presented new, more sensitive high-resolution VLA cm
radio observations of the S235A-B region, as well as the results
of the Medicina water maser patrol (started in 1987), and archive Spitzer-IRAC
observations. Several new aspects of this star-forming region emerge;
they are illustrated in Fig.~\ref{schema} and summarised in the
following:
\begin{enumerate}
\item
The radio-IR morphology of the S235A HII region confirms that it is
a classical HII region, optically thin in the cm range.
It appears to interact with the molecular cloud and may
have induced
the formation of a second-generation YSO in the mm core.
\item
No cm continuum emission is detected from the molecular core
discovered by Felli et al.\ (2004).
The lack of ionized hydrogen emission
suggests a very early evolutionary phase for the
intermediate-luminosity embedded YSO,
much before the appearance of a UCHII region.
We have found a new source, S235AB-MIR,
detectable only at 5.8 and 8.0 $\mu$m and close to the mm core, in the
archival Spitzer images.
Given its position in the colour-colour plot, this could be
the mid IR counterpart of the embedded YSO.
\item
We have observed no extended cm continuum emission from the
elongated jet-like structure detected at 3.3 mm,
suggesting that the putative 3.3 mm ``jet'' is due
to dust and not to an ionized jet.
\item
We found two compact radio-sources: VLA-1 and VLA-2.
Their spectral index is suggestive of partially optically thick free-free emission.
\item
VLA-1 is located at the centre of
the elongated 3.3 mm structure and coincides
with the near-IR source M1. It is close to the secondary molecular peak at
$-19$ km s$^{-1}$.
We estimate that VLA-1 could be a UCHII region associated with a B2-B3 star.
\item
We have discovered a water maser (S235A-B-H2O/2) at the same location as VLA-1.
A methanol maser (CH$_{3}$OH/4) lies close by at a similar velocity.
\item
VLA-2 is at the centre of S235B and represents the first radio
continuum detection from this source. Comparison with
the near-IR hydrogen lines confirms that both emissions come from
an ionized envelope.
\item
Two water masers (S235A-B-H2O/1 and S235A-B-H2O/3),
with very different velocities ($\sim7$ and $\sim-60$
km s$^{-1}$, respectively), are located close to the mm core and
are aligned parallel to a structure found in C$^{34}$S and perpendicular to
the NE-SW outflow.
\item
Our single-dish observations with the Medicina radio telescope
do not spatially resolve the three water masers detected with the VLA,
but do provide variability information for each of them,
since they do not overlap in velocity.
\item
A high degree of variability in the water maser emission
was found in all cases.
We found changes of the LSR
velocity with respect to the systemic velocity,
up to $\sim5$ km s$^{-1}$ for S235A-B-H2O/1.
A possible interpretation would be
velocity drifts due to
shocked gas accelerated by the flaring activity of the YSO.
\item
The duration
of the acceleration, of the order of $3-6$ yr, is similar to the
lifetime of the emission.
\end{enumerate}
This paper, together with the preceding ones resulting from our long-term
study, reveals the simultaneous presence in the S235A-B complex of
widely different evolutionary phases, from the well-developed
HII region S235A, to the peculiar object S235B-VLA-2, to the UCHII region
VLA-1, to the mm core which harbours a YSO in a very early stage.
The comparison of high resolution multiwavelength observations,
combined
with long-term monitoring of time variable phenomena,
provides unique information on the nature of young (proto)stars,
shedding new light on their interaction with the placental environment.
\begin{acknowledgements}
The Medicina observations are part of a long lasting project carried
out by the Arcetri-INAF and IRA-INAF water maser group (see e.g. Brand et
al.\ 2003 and references therein).
This work is based in part on observations made with the Spitzer Space
Telescope, which is operated by the Jet Propulsion Laboratory, California
Institute of Technology under a contract with NASA.
This research made use of data products from the Midcourse Space Experiment
(MSX). Processing of the data was funded by the Ballistic Missile Defense
Organization with additional support from the NASA Office of Space Science.
We acknowledge G. Comoretto and F. Palagi for their help in the study
of the variability of the water masers.
\end{acknowledgements}
\section{Introduction}
Depression and anxiety are two of the most common psychiatric disorders that, depending on their severity, can have a profound impact on an individual's well-being and the quality of life \citep{henning2007impairment, gurland1992impact, roshanaei2009longitudinal,lepine2002epidemiology, richards2011prevalence}. Thus, it is imperative that treatments for depression and anxiety are prioritized as intervention can greatly improve patient outcomes \citep{dadds1997prevention, reynolds2012early}. Global improvement of anxiety and depression treatment options is estimated to have a direct economic benefit over the period from 2016 to 2030 of \$239 billion and \$169 billion, respectively \citep{CHISHOLM2016415}.
Despite the importance of improving the treatment pipeline, many barriers remain. One of the primary barriers to effective depression and anxiety treatment is the screening process. Traditional methods for screening place a high burden on clinicians and patients in terms of their ease of administration and scoring, have no clear reference standard, and require a high degree of patient activation and monitoring \citep{nease2003depression}.
Assessment scales such as the Patient Health Questionnaire (PHQ-8) \citep{lowe2004monitoring} or Generalized Anxiety Disorder (GAD-7) \citep{spitzer2006brief} offer a more quantitative basis for screening.
From another perspective, speech and language are two modalities that form a promising and objective basis for mental health screening. It is well-established that depression and anxiety can alter an individual's general cognition, with specific biases in their attention and memory \citep{cohen1982effort, mathews2005cognitive}. These deficits can manifest in altered acoustic and linguistic dimensions of speech. Some of these include altered rate of speech or increased usage of first-person pronouns \citep{pope1970anxiety, junghaenel2008linguistic}.
With recent advances in natural language processing and computational power, we now have the ability to collect, measure, and analyze speech data on a larger scale. There is also the rise in popularity of digital platforms such as Amazon Mechanical Turk (mTurk) that has eased the burden of data collection from clinically significant populations \citep{engle2020amazon, tasnim2022depac}. All of this has accelerated development of ML models using speech-based biomarkers for depression and anxiety. These include models that classify anxiety and depression as well as those that predict the severity of these diseases \citep{banerjee2021predicting, toto2021audibert, yang2017multimodal}. We build upon the existing literature and extend AudiBERT \citep{toto2021audibert} for the classification of depression and anxiety from speech. Our model incorporates more recent sub-module advances in the architecture and experimental settings. Importantly, we also combine both deep-learned and hand-crafted features to best capture the signal of depression and anxiety that is carried through the acoustic and linguistic properties of speech. We demonstrate that our model achieves better performance on the validation dataset.
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{figures/wav2bert.png}
\caption{Classification module architecture diagram}
\label{fig:clf_diagram}
\end{figure}
\section{Modeling}
Depression and anxiety can present themselves through acoustic and linguistic features of speech \citep{pope1970anxiety, junghaenel2008linguistic}. Therefore, our architecture (Figure \ref{fig:clf_diagram}) leverages both of these modalities by parallel representation learning from audio and textual data in addition to representation learning from features hand-crafted by domain experts. Our architecture is inspired by AudiBERT \citep{toto2021audibert}.
Working with deep-learned representations of speech can allow for our models to capture more abstract signals in speech that can be used for better depression/anxiety detection. In our work, we use pre-trained speech and language representation models which have been shown to be effective and robust for generating representations of acoustics and text \citep{https://doi.org/10.48550/arxiv.2006.11477, liu2019roberta}.
We utilize Wav2Vec 2.0, one of the best acoustic signal representation models, to learn features from the speech signal. The output of the Wav2Vec 2.0 base model, pre-trained on 100k hours of the VoxPopuli dataset \citep{voxpopuli}, is forwarded to a two-layer biLSTM \citep{hochreiter1997long, graves2005framewise} and then to a multi-head attention layer with two heads. Vectors \textbf{$R1$}, the outputs of the multi-head attention representing the acoustic signal, are used jointly with the linguistic and hand-crafted features for classification.
Transformer-based architectures \citep{bert} have significantly improved language representation and performance on a variety of domain-specific tasks, including emotion classification \citep{siriwardhana2020jointly}. To represent transcripts of human speech, we select the base model of RoBERTa \citep{liu2019roberta}, one of the best-performing language models that can be trained on a single GPU. The output of RoBERTa is forwarded to a two-layer biLSTM, whose output is redirected to a multi-head attention layer with two heads. Vectors \textbf{$R2$} are the outputs of the multi-head attention, representing the linguistic signal.
There is a rich body of work studying the pathology of depression and anxiety that suggests specific changes in the acoustic, semantic, and lexico-syntactic content of the speech of those suffering from these diseases \citep{pope1970anxiety, junghaenel2008linguistic}. We use features hand-crafted by domain experts, \textbf{$R3$}, as an additional signal. A list of these features can be found in the Appendix, in Tables \ref{tab:lin_feats} and \ref{tab:acoust-feats}.
The vectors $R1$, $R2$, and $R3$ are concatenated to create a combined representation embedding of the subject's speech. This representation is passed through two feedforward layers, and the model is trained with a binary cross-entropy loss to classify between disease and no disease. We train two separate models, one for the depression task and one for the anxiety task.
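The fusion step above can be sketched as follows. This is an illustrative, untrained mock-up: the hidden size, the ReLU nonlinearity, and the vector dimensions are our own assumptions rather than the exact implementation, and the random weights stand in for weights that are, in the real model, learned with the binary cross-entropy loss.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(vec, out_dim, rng):
    # One linear layer with random (untrained) weights -- for illustration only.
    return [sum(rng.gauss(0.0, 0.05) * v for v in vec) for _ in range(out_dim)]

def fusion_classifier(r1, r2, r3, rng):
    """Concatenate R1 (acoustic), R2 (linguistic) and R3 (hand-crafted),
    pass the result through two feedforward layers, and output a
    probability for the binary disease / no-disease decision."""
    z = r1 + r2 + r3                                    # concatenation
    hidden = [max(0.0, x) for x in dense(z, 64, rng)]   # layer 1 + ReLU
    return sigmoid(dense(hidden, 1, rng)[0])            # layer 2 + sigmoid

rng = random.Random(0)
r1 = [rng.gauss(0, 1) for _ in range(768)]  # Wav2Vec 2.0 branch output (assumed dim)
r2 = [rng.gauss(0, 1) for _ in range(768)]  # RoBERTa branch output (assumed dim)
r3 = [rng.gauss(0, 1) for _ in range(30)]   # hand-crafted features (assumed dim)
prob = fusion_classifier(r1, r2, r3, rng)
```

With untrained weights the output is of course meaningless; the sketch only shows the data flow from the three representations to a single probability.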
\section{Experimental setup}
We train and evaluate our models using 5-fold cross-validation, with the folds constructed such that there is no overlap between the subjects in the training and test folds. We report the mean precision, recall and F1-score for each model over the 5 folds. Results are obtained using AdamW optimization with learning rate $3\times10^{-5}$. We use the binary cross-entropy with logits loss from the PyTorch library. The model is trained on a T4 Tensor Core GPU with 16 GB RAM.
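The subject-disjoint fold construction can be sketched as follows; this is a minimal illustration of the constraint, and the exact assignment procedure used in our experiments may differ.

```python
import random

def subject_folds(sample_subjects, n_folds=5, seed=0):
    """Assign each sample to a fold via its subject, so that a subject's
    samples never appear in both the training and the test fold."""
    subjects = sorted(set(sample_subjects))
    random.Random(seed).shuffle(subjects)
    fold_of = {subj: i % n_folds for i, subj in enumerate(subjects)}
    return [fold_of[subj] for subj in sample_subjects]

# Toy data: several audio samples per subject.
samples = ["s1", "s1", "s2", "s3", "s3", "s3", "s4", "s5", "s6", "s7"]
folds = subject_folds(samples)
for k in range(5):
    test_subjects = {s for s, f in zip(samples, folds) if f == k}
    train_subjects = {s for s, f in zip(samples, folds) if f != k}
    assert test_subjects.isdisjoint(train_subjects)
```

Because every subject is mapped to exactly one fold, the train/test subject sets are disjoint by construction for every fold.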
We use the Wav2Vec 2.0 and RoBERTa implementations from the \emph{HuggingFace} library. Due to memory and architecture constraints on inputting large audio files into Wav2Vec 2.0, we split the audio samples into consecutive 10-second intervals. The audio was sampled at a rate of 16\,000 Hz. The Wav2Vec 2.0 feature extractor is then used to create the input. RoBERTa's input is a speech transcript generated from the audio via ASR. The text is further transformed by the RoBERTa tokenizer and padded to a length of 512. Note that we also add several tokens to the tokenizer corresponding to a set of unfilled and filled pauses in the speech. The pre-trained model weights are not frozen and are fine-tuned for 10 epochs with a batch size of 4, due to GPU memory constraints.
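The chunking step can be sketched as below; keeping the final, shorter chunk is our assumption and may not match the exact preprocessing used.

```python
def split_into_chunks(signal, sample_rate=16_000, chunk_seconds=10):
    """Split a raw waveform into consecutive 10-second chunks before
    feeding it to Wav2Vec 2.0; the trailing shorter chunk is kept
    (an assumption for this sketch)."""
    step = sample_rate * chunk_seconds
    return [signal[i:i + step] for i in range(0, len(signal), step)]

audio = [0.0] * (16_000 * 25)  # 25 s of a dummy waveform sampled at 16 kHz
chunks = split_into_chunks(audio)
```

For a 25-second recording at 16 kHz this yields two full 10-second chunks of 160{,}000 samples each plus one 5-second chunk of 80{,}000 samples.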
As a baseline, we train a feedforward network using only the hand-crafted features provided by domain experts. The network consists of five linear layers, each followed by a Leaky ReLU activation function \citep{maas2013rectifier}. Each layer is half the size of the previous one, and we use a dropout of 0.2 throughout the network. The network is trained using AdamW optimization with learning rate $3\times10^{-4}$, a batch size of 8, and the binary cross-entropy with logits loss. We use the same 5-fold cross-validation process as for the proposed model and report the mean precision, recall and F1-score.
\subsection{Dataset}
The dataset used to train and test the model comes from an extended version of the DEPAC corpus \citep{tasnim2022depac}. The DEPAC corpus contains crowd-sourced (mTurk) audio samples from 3543 unique individuals performing a range of self-administered speech tasks. For the purposes of this analysis, we subset the data to include only speech from tasks that contain elements of narrative speech. In total, the dataset contains 4209 unique audio samples and corresponding transcripts from the speech tasks described below.
\textbf{Journaling} and \textbf{prompted narrative tasks:} the participant is asked to describe an experience or event based on a given prompt. For the journaling task, they are asked about their day, whereas in the prompted narrative task they are asked about hobbies or travel experiences, depending on the specific prompt. These narrative speech tasks can contain signals relevant for depression or anxiety prediction \citep{trifu2017linguistic}.
\textbf{Semantic fluency task:} the participant is prompted to describe, within one minute, positive experiences that will occur in the future. Similar verbal fluency tasks have been shown to correlate with the executive-function deficits associated with depression \citep{fossati2003qualitative}.
The dataset contains self-rated PHQ-8 and GAD-7 scores for each individual. GAD-7 is rated on a scale of 0--21 and PHQ-8 on a scale of 0--24. Following AudiBERT, the literature \citep{lowe2004monitoring, spitzer2006brief}, and consultations with experts, we adopt binary classification tasks. We convert these scores into a ``soft'' binary diagnosis label using a score of 10 as the cutoff on both scales. Approximately 25.3\% of subjects had a PHQ-8 score above 9, and 12.8\% had a GAD-7 score above 9 (diagnosis).
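A minimal helper implementing this cutoff:

```python
def binary_label(score, cutoff=10):
    """Binarize a PHQ-8 or GAD-7 total score: 1 = diagnosis (score >= 10),
    0 = no diagnosis (score < 10)."""
    return 1 if score >= cutoff else 0

assert binary_label(9) == 0    # below the cutoff: no diagnosis
assert binary_label(10) == 1   # at the cutoff: diagnosis
assert binary_label(24) == 1   # maximum PHQ-8 score
```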
For each complete audio sample and transcript, we extract the hand-crafted features listed in Tables \ref{tab:lin_feats} and \ref{tab:acoust-feats} in the Appendix.
\section{Results and Discussion}
The results of our experiments are displayed in Table \ref{tab:clf_results}.
Examining them in aggregate reveals that our models perform better at predicting no diagnosis and struggle to predict diagnosis. We hypothesize that this is partially a function of the data imbalance within our dataset, as most collected depression and anxiety data comes from individuals with lower scores.
The results show that the inclusion of deep-learned features enriches the representation by adding properties that are not fully captured by the hand-crafted features, improving the detection of depression and anxiety. This reflects previous results \citep{toto2021audibert}, where the addition of deep-learned features, especially text representation models, improved classification performance for depression.
One of the challenges in developing models for the classification of depression and anxiety comes from the distribution of the data. In our data, as in many corpora, a majority of subjects were classified as having PHQ-8/GAD-7 scores under 10, leading to class imbalance \citep{valstar2014avec, gratch2014distress}. Class imbalance in training data poses a hurdle in the development of robust models \citep{krawczyk2016learning}. Furthermore, within the classes, the distribution of scores is still uneven: the distribution of PHQ-8/GAD-7 scores is long-tailed and skewed towards lower-severity cases. This can lead to issues of within-class imbalance that are difficult to resolve \citep{japkowicz2001concept}.
\begin{table}
\caption{Anxiety and depression classification results. Bold indicates highest F1 score per disease.}
\begin{adjustbox}{max width=1\linewidth, center}
\begin{tabular}{|l|lll|lll|lll|lll|}
\hline
& \multicolumn{6}{c|}{Anxiety} & \multicolumn{6}{c|}{Depression} \\ \cline{2-13}
& \multicolumn{3}{c|}{\begin{tabular}[c]{@{}l@{}}Hand-crafted\\ features only\end{tabular}} & \multicolumn{3}{l|}{\begin{tabular}[c]{@{}c@{}}Deep-learned + hand-crafted\\ features\end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}Hand-crafted\\ features only\end{tabular}} & \multicolumn{3}{l|}{\begin{tabular}[c]{@{}c@{}}Deep-learned + hand-crafted\\ features\end{tabular}} \\ \cline{2-13}
& \multicolumn{1}{l|}{Precision} & \multicolumn{1}{l|}{Recall} & F1 & \multicolumn{1}{l|}{Precision} & \multicolumn{1}{l|}{Recall} & F1 & \multicolumn{1}{l|}{Precision} & \multicolumn{1}{l|}{Recall} & F1 & \multicolumn{1}{l|}{Precision} & \multicolumn{1}{l|}{Recall} & F1 \\ \hline
\begin{tabular}[c]{@{}l@{}}No diagnosis\\ ( score\textless 10)\end{tabular} & \multicolumn{1}{l|}{0.81} & \multicolumn{1}{l|}{0.65} & 0.72 & \multicolumn{1}{l|}{0.76} & \multicolumn{1}{l|}{0.72} & \textbf{0.73} & \multicolumn{1}{l|}{0.73} & \multicolumn{1}{l|}{0.78} & 0.75 & \multicolumn{1}{l|}{0.77} & \multicolumn{1}{l|}{0.83} & \textbf{0.80}\\ \hline
\begin{tabular}[c]{@{}l@{}}Diagnosis\\ (score$\geq$ 10)\end{tabular} & \multicolumn{1}{l|}{0.28} & \multicolumn{1}{l|}{0.41} & 0.33 & \multicolumn{1}{l|}{0.37} & \multicolumn{1}{l|}{0.42} & \textbf{0.40} & \multicolumn{1}{l|}{0.31} & \multicolumn{1}{l|}{0.42} & 0.35 & \multicolumn{1}{l|}{0.48} & \multicolumn{1}{l|}{0.39} & \textbf{0.43}\\ \hline
Overall & \multicolumn{2}{l|}{\cellcolor[HTML]{656565}} & 0.54 & \multicolumn{2}{l|}{\cellcolor[HTML]{656565}} & \textbf{0.57} & \multicolumn{2}{l|}{\cellcolor[HTML]{656565}} & 0.58 & \multicolumn{2}{l|}{\cellcolor[HTML]{656565}{\color[HTML]{656565} }} & \textbf{0.63} \\ \hline
\end{tabular}
\end{adjustbox}
\label{tab:clf_results}
\end{table}
Interestingly, we also find that depression classification results in a higher overall F1-score than anxiety classification. One likely reason is the data imbalance in the anxiety samples, which is particularly pronounced compared to depression (12.8\% vs. 25.3\% with scores above 9). Another potential reason for the worse performance is that acoustic features in anxiety have been shown not to vary as much with severity as they do in depression \citep{albuquerque2021association}. This suggests that anxiety prediction through speech assessment is a harder task than its counterpart for depression.
These findings add to the existing body of work suggesting that speech is an appropriate modality for depression and anxiety biomarker development. In particular, using both hand-crafted and deep-learned features maximizes the signal that can be extracted from the speech stream. They also show that the prediction performance of such models often varies with anxiety/depression severity.
\section{Conclusion}
In this work, we present a model for the prediction of anxiety and depression from self-administered speech tasks. Our model extends previous work on the classification of depression and anxiety by combining deep-learned representations with a set of hand-crafted features able to capture many of the nuanced changes in the acoustic and linguistic content of depressed and anxious speech.
We find that the proposed model, which combines hand-crafted features with deep-learned speech and language representations, improves the classification F1-score for both classes compared to the baseline.
The results presented in this paper form a promising basis towards the development of better screening tools for anxiety and depression via speech data.
\newpage
\bibliographystyle{plainnat}
\section{Introduction}
$\ab$-metrics form a special class of Finsler metrics, partly because they are ``computable''\cite{bacs-cxy-szm-curv}. Research on $\ab$-metrics enriches Finsler geometry, and the approaches developed for them offer references for further study.
Randers metrics, which arise from physical applications\cite{rand-onan}, are the simplest $\ab$-metrics. They are expressed in the form $F=\a+\b$, where $\a=\sqrt{a_{ij}(x)y^iy^j}$ is a Riemannian metric and $\b=b_i(x)y^i$ is a 1-form with $\|\b\|_\a<1$. The following Randers metric
\begin{eqnarray}
F=\frac{\sqrt{(1-|x|^2)|y|^2+\langle
x,y\rangle^2}}{1-|x|^2}+\frac{\langle x,y\rangle}{1-|x|^2}
\end{eqnarray}
is called the {\em Funk metric}\cite{funk-uber}. It is a projectively flat Finsler metric on $\mathbb B^n(1)$ with flag curvature $K=-\frac{1}{4}$. Recall that a Finsler metric $F$ on an open domain $\mathcal U\subset\RR^n$ is said to be {\em projectively flat} if all the geodesics of $F$ are straight lines\cite{css-szm-riem}.
Another important example of an $\ab$-metric was given by L. Berwald\cite{berw-uber},
\begin{eqnarray}
F=\frac{(\sqrt{(1-|x|^2)|y|^2+\langle x,y\rangle^2}+\langle
x,y\rangle)^2}{(1-|x|^2)^2\sqrt{(1-|x|^2)|y|^2+\langle
x,y\rangle^2}}.
\end{eqnarray}
It belongs to a special kind of $\ab$-metrics of the form $F=\frac{(\a+\b)^2}{\a}$ with $\|\b\|_\a<1$. Berwald's metric is also a projectively flat Finsler metric on $\mathbb B^n(1)$, with flag curvature $K=0$.
The concept of $\ab$-metrics was first proposed by M. Matsumoto in 1972 as a direct generalization of Randers metrics\cite{mats-oncr}, but some basic properties of $\ab$-metrics have been overlooked. In section 2, we clarify the geometric meaning of the indicatrices of $\ab$-metrics. Roughly speaking, a Minkowski norm $F$ is an $\ab$-norm if and only if the indicatrix of $F$ is a rotation hypersurface whose rotation axis passes through the origin.
The aim of this paper is to study a new class of Finsler metrics
given by
\begin{eqnarray}\label{gab}
F=\gab,
\end{eqnarray}
where $\p=\p(b^2,s)$ is a positive $C^\infty$ function and $b^2:=\|\b\|^2_\a$. These Finsler metrics generalize $\ab$-metrics in a natural way. They form a special class of the general $\ab$-metrics defined in section 3. But the most important reason we are interested in them is that they include some Finsler metrics constructed by R. Bryant.
Bryant's metrics\cite{brya-some,brya-proj,brya-fins} are rectilinear Finsler metrics on $S^n$ with flag curvature $K=1$, given in the following form for $X\in S^n$, $Y\in T_XS^n$,
\begin{eqnarray}\label{bryant}
F(X,Y)=\Re\left\{\frac{\sqrt{Q(X,X)Q(Y,Y)-Q(X,Y)^2}}{Q(X,X)}-i\frac{Q(
X,Y)}{Q(X,X)}\right\},
\end{eqnarray}
where
$$Q(X,Y)=x_0y_0+e^{ip_1}x_1y_1+e^{ip_2}x_2y_2+\cdots+e^{ip_n}x_ny_n$$
are complex quadratic forms on $\RR^{n+1}$ for $n\geq2$ with the
parameters satisfying
$$0\leq p_1\leq p_2\leq\cdots\leq p_n<\pi.$$
Note that the branch of the complex square root being used is the
one satisfying $\sqrt{1}=1$ and having the negative real axis as its
branch locus~(cf. \cite{brya-proj}).
The following result is related to Bryant's metrics. Here the constant $r_\mu$ is given by $r_\mu=\frac{1}{\sqrt{-\mu}}$ if $\mu<0$ and $r_\mu=+\infty$ if $\mu\geq0$.
\begin{thm}\label{thm-3}
The following general $\ab$-metrics are projectively flat on
$\mathbb B^n(r_\mu)$ with $n\geq2$,
\begin{eqnarray}\label{eqn:bry}
F=\Re\frac{\sqrt{(e^{ip}+b^2)\a^2-\b^2}-i\b}{e^{ip}+b^2}\quad(-\frac{\pi}{2}\leq
p\leq\frac{\pi}{2}),
\end{eqnarray}
where $\a$ and $\b$ are given by
\begin{eqnarray}
\a&=&\frac{\sqrt{(1+\mu|x|^2)|y|^2-\mu\langle
x,y\rangle^2}}{1+\mu|x|^2},\label{a}\\
\b&=&\frac{\lambda\langle x,y\rangle+(1+\mu|x|^2)\langle
a,y\rangle-\mu\langle a,x\rangle\langle
x,y\rangle}{(1+\mu|x|^2)^\frac{3}{2}},\label{b}
\end{eqnarray}
in which $\mu$ is the sectional curvature of $\a$, $\lambda$ is a
constant and $a\in\RR^n$ is a constant vector.
\end{thm}
\begin{rmk}
When $\mu=0$, $\lambda=1$ and $a=0$, the general $\ab$-metrics (\ref{eqn:bry}) belong to Bryant's metrics in an appropriate coordinate system; see section 4 for details. At the same time, we will point out that the metrics (\ref{bryant}) are not always regular on the whole sphere. Recall that a Finsler metric is said to be {\em regular} if its fundamental tensor is positive definite everywhere.
\end{rmk}
Moreover, we provide a sufficient condition for the general $\ab$-metrics (\ref{gab}) to be projectively flat. In this paper, a 1-form is called {\em conformal} with respect to a Riemannian metric if its dual vector field with respect to that metric is conformal.
\begin{thm}\label{thm-2}
Let $F=\gab$ be a general $\ab$-metric on a manifold $M$ with
dimension $n\geq2$. Then $F$ is locally projectively flat if the
following conditions hold:
\begin{enumerate}
\item
The function $\phi(b^2,s)$ satisfies the following partial
differential equation
\begin{eqnarray}\label{8}
\phi_{22}=2(\phi_{1}-s\phi_{12}).
\end{eqnarray}
\item
$\a$ is locally projectively flat, $\b$ is closed and conformal with
respect to $\a$.
\end{enumerate}
\end{thm}
\begin{rmk}\label{r-3}
Note that $\po$ denotes the derivative of $\p$ with respect to the first variable $b^2$. On the other hand, by Beltrami's theorem a Riemannian metric $\a$ is locally projectively flat if and only if it is of constant sectional curvature\cite{css-szm-riem}.
\end{rmk}
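As a numerical sanity check (ours, not part of the argument), one can verify Eq. (\ref{8}) for the function $\p$ of Theorem \ref{thm-3} by central finite differences; the sample point and the parameter value $p=0.7$ below are arbitrary choices with $|s|\leq b$ and $|p|\leq\frac{\pi}{2}$:

```python
import cmath

P = 0.7  # sample value of the parameter p, with |p| <= pi/2

def phi(b2, s):
    """phi(b^2, s) = Re[(sqrt(e^{ip} + b^2 - s^2) - i*s) / (e^{ip} + b^2)]."""
    u = cmath.exp(1j * P) + b2
    return ((cmath.sqrt(u - s * s) - 1j * s) / u).real

def pde_residual(b2, s, h=1e-5):
    """phi_22 - 2*(phi_1 - s*phi_12), approximated by central differences."""
    phi1 = (phi(b2 + h, s) - phi(b2 - h, s)) / (2 * h)
    phi22 = (phi(b2, s + h) - 2 * phi(b2, s) + phi(b2, s - h)) / h ** 2
    phi12 = (phi(b2 + h, s + h) - phi(b2 + h, s - h)
             - phi(b2 - h, s + h) + phi(b2 - h, s - h)) / (4 * h ** 2)
    return phi22 - 2 * (phi1 - s * phi12)

residual = pde_residual(b2=0.5, s=0.3)  # a point with |s| <= b = sqrt(0.5)
```

The residual is zero up to finite-difference error, consistent with $\p$ solving Eq. (\ref{8}).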
Projective flatness is connected with Hilbert's Fourth Problem. Recently, Z. Shen characterized all projectively flat $\ab$-metrics in dimension $n\geq3$\cite{szm-onpr}. The first author rewrote the $\ab$-metric $F=\frac{(\a+\b)^2}{\a}$ as $F=\frac{(\sqrt{1+\bar b^2}\ba+\bb)^2}{\ba}$ in his doctoral dissertation, where $\ba=(1-b^2)\a,\bb=\sqrt{1-b^2}\b$, and proved that this kind of Finsler metrics are locally projectively flat if and only if $\ba$ is locally projectively flat while $\bb$ is closed and conformal with respect to $\ba$.
Moreover, the first author has classified all locally projectively flat $\ab$-metrics in dimension $n\geq3$ in his doctoral dissertation. The results show that the projective flatness of an $\ab$-metric always arises from that of some Riemannian metric after performing certain special deformations. Therefore, we claim that the conditions in Theorem \ref{thm-2} are, in a sense, also necessary for a non-Randers general $\ab$-metric $F=\gab$ to be locally projectively flat when $n\geq3$.
To be specific, if $F$ is a non-Randers locally projectively flat general $\ab$-metric, then $F$ can be represented as $F=\gab$ such that $\p(b^2,s)$, $\a$ and $\b$ satisfy the conditions in Theorem \ref{thm-2}. For instance, suppose that $F=\frac{(\a+\b)^2}{\a}$ is a locally projectively flat $\ab$-metric. In this case, the corresponding function $\p(s)=(1+s)^2$ does not satisfy Eq. (\ref{8}); moreover, $\a$ is not locally projectively flat and $\b$ is not conformal with respect to $\a$ in general~\cite{szm-onpr}. But if we rewrite $F$ as $F=\frac{(\sqrt{1+\bar b^2}\ba+\bb)^2}{\ba}$, then the function $\p(\bar b^2,\bar s)=(\sqrt{1+\bar b^2}+\bar s)^2$ does satisfy Eq. (\ref{8}). Although $F=\frac{(\a+\b)^2}{\a}$ is simple in this form, the properties of $\a$ and $\b$ are not so simple. This phenomenon is similar to that of Randers metrics of constant flag curvature~\cite{db-cr-szm-zerm}.
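For completeness, this last claim can be checked directly: with $\p(\bar b^2,\bar s)=(\sqrt{1+\bar b^2}+\bar s)^2$ one computes
$$\phi_2=2(\sqrt{1+\bar b^2}+\bar s),\qquad\phi_{22}=2,$$
$$\phi_1=\frac{\sqrt{1+\bar b^2}+\bar s}{\sqrt{1+\bar b^2}}=1+\frac{\bar s}{\sqrt{1+\bar b^2}},\qquad\phi_{12}=\frac{1}{\sqrt{1+\bar b^2}},$$
so that $2(\phi_1-\bar s\phi_{12})=2=\phi_{22}$, i.e., Eq. (\ref{8}) holds.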
\section{The geometric meaning of $\ab$-norms}
Let $V$ be an $n$-dimensional vector space. By definition, an
$\ab$-norm on $V$ is a Minkowski norm expressed in the following
form,
$$F=\a\phi(s),\quad s=\frac{\b}{\a},$$
where $\a=\sqrt{a_{ij}y^iy^j}$ is an Euclidean norm and
$\b=b_iy^i\in V^*$ is a linear functional on $V$. The function
$\phi=\phi(s)$ is a $C^\infty$ positive function on some open
interval $(-b_o,b_o)$ satisfying
$$\phi(s)-s\phi'(s)+(b^2-s^2)\phi''(s)>0,\qquad\forall|s|\leq b<b_o,$$
where $b=:\|\b\|_\a$\cite{css-szm-riem}.
Let $\{e_1,e_2,\cdots,e_n\}$ be an orthonormal basis of $\a$. Then
$$\a(y)=\sqrt{(y^1)^2+(y^2)^2+\cdots+(y^n)^2},\qquad y=y^ie_i\in V\cong\RR^n.$$
It is obvious that the orthogonal group $O(n)$ acting on $V$
preserves $\a$. Conversely, a Minkowski norm on $V$ preserved under
the action of $O(n)$ must be Euclidean. In other words, Euclidean
norms are the most symmetric Minkowski norms.
By considering the symmetry of $\ab$-norms, Theorem \ref{thm-1}
shows that the symmetry of $\ab$-norms is just next to that of
Euclidean norms. Firstly, we give a description of the symmetry of a
Minkowski norm.
\begin{definition}
Let $F$ be a Minkowski norm on an $n$-dimensional vector space $V$
and $G$ be a subgroup of $GL(n,\RR)$. Then $F$ is called {\em
$G$-invariant} if the following condition holds for some affine
coordinate $(y^1,y^2,\cdots,y^n)$ of $V$,
\begin{eqnarray}\label{eqn:g}
F(y^1,y^2,\cdots,y^n)=F((y^1,y^2,\cdots,y^n)g),\qquad\forall y\in
V,\forall g\in G.
\end{eqnarray}
\end{definition}
The symmetry of Minkowski norms deserves more attention, since it restricts the global symmetry of Finsler manifolds.
\begin{thm}\label{thm-1}
Let $F$ be a Minkowski norm on a vector space $V$ of dimension
$n\geq2$. Then $F$ is an $\ab$-norm if and only if $F$ is
$G$-invariant, where
$$
G=\left\{g \in GL(n,R)~|~g=\left
\begin{array}{cc}
A & 0 \\
0 & 1 \\
\end{array
\right),~A\in O(n-1) \right\}.
$$
\end{thm}
\begin{rmk}
The above theorem is trivial when $n=1$ because every Finsler curve
is of Randers type by the navigation problem.
\end{rmk}
\begin{proof}
Let $F=\pab$ be an $\ab$-norm. Take an orthonormal basis
$\{e_1,e_2,\cdots,e_n\}$ with respect to $\a$, such that
$\ker\b=\mathrm{span}\{e_1,e_2,\cdots,e_{n-1}\}.$ Then
$$F(y)=\sqrt{(y^1)^2+(y^2)^2+\cdots+(y^n)^2}\phi\left(\frac{by^n}{\sqrt{(y^1)^2+(y^2)^2+\cdots+(y^n)^2}}\right),$$
where $y=y^ie_i$ and $b=\|\b\|_\a$. Obviously, $F$ is $G$-invariant.
Conversely, assume that (\ref{eqn:g}) holds for the affine
coordinate $(y^1,y^2,\cdots,y^n)$.\\
Case 1. $n\geq3$.
By restricting $F$ on the linear subspace given by $y^n=0$, one can
obtain an $O(n-1)$-invariant Minkowski norm, which must be Euclidean
by the previous discussions. So we can choose a positive number $a$,
such that the Euclidean norm
$\a=a\sqrt{(y^1)^2+(y^2)^2+\cdots+(y^n)^2}$ on $V$ satisfies
$\a|_{y^n=0}=F|_{y^n=0}$.
For $y\neq0$, define
\begin{eqnarray}\label{eqn:tp}
\tilde\phi(y^1,y^2,\cdots,y^n)=\frac{F(y^1,y^2,\cdots,y^n)}{\a(y^1,y^2,\cdots,y^n)},
\end{eqnarray}
then $\tilde\phi$ is $G$-invariant, i.e.
$$\tilde\p(y^1,y^2,\cdots,y^n)=\tilde\p((y^1,y^2,\cdots,y^n)g),\qquad\forall y\neq0,\forall g\in G.$$
In particular,
$$\tilde\phi(\cos ty^1+\sin ty^2,-\sin ty^1+\cos ty^2,y^3,\cdots,y^n)=\tilde\phi(y^1,y^2,\cdots,y^n).$$
Differentiating the above equality with respect to $t$ and setting
$t=0$, one obtains $\pp{\tilde\phi}{y^1} y^{2} -
\pp{\tilde\phi}{y^2}y^1=0$. The same argument yields
\begin{eqnarray}\label{eqn:sym}
\pp{\tilde\phi}{y^i} y^{j} - \pp{\tilde\phi}{y^j}y^i=0,\qquad 1\leq
i< j\leq n-1.
\end{eqnarray}
Moreover, since $F$ and $\a$ are both positively homogeneous with
degree one, $\tilde\phi$ is positively homogeneous with degree zero,
i.e., $\tilde\phi(\lambda y)=\tilde\phi(y),\forall\lambda>0.$
Differentiating this equality with respect to $\lambda$ and setting
$\lambda=1$, one obtains
\begin{eqnarray}\label{eqn:zero}
\pp{\tilde\phi}{y^i}y^i=0.
\end{eqnarray}
Taking the spherical coordinate transformation
\begin{eqnarray*}
\left\{\begin{array}{ll}
y^1=r\cos\theta^1\cos\theta^2\cdots\cos\theta^{n-2}\cos\theta^{n-1},\\
y^2=r\cos\theta^1\cos\theta^2\cdots\cos\theta^{n-2}\sin\theta^{n-1},\\
\ \ \ \ \ \ \ \cdots\\
y^{n-1}=r\cos\theta^1\sin\theta^2,\\
y^n=r\sin\theta^1,
\end{array}\right.
\end{eqnarray*}
where
$r>0,-\frac{\pi}{2}\leq\theta^\gamma\leq\frac{\pi}{2}(\gamma=1,\cdots,n-2),0\leq\theta^{n-1}<2\pi$,
and using (\ref{eqn:sym}) (\ref{eqn:zero}), we have
\begin{eqnarray*}
\pp{\tilde\phi}{r}&=&\pp{\tilde\phi}{y^i}\pp{y^i}{r}=\pp{\tilde\phi}{y^i}\frac{y^i}{r}=0,\\
\pp{\tilde\phi}{\theta^\gamma}&=&-\pp{\tilde\phi}{y^1}y^{n-\gamma+1}
\cos\theta^{\gamma+1}\cdots\cos\theta^{n-2}\cos\theta^{n-1}\\
&&-\pp{\tilde\phi}{y^2}y^{n-\gamma+1}\cos\theta^{\gamma+1}\cdots\cos\theta^{n-2}\sin\theta^{n-1}-\cdots\\
&&-\pp{\tilde\phi}{y^{n-\gamma}}y^{n-\gamma+1}\sin\theta^{\gamma+1}
+\pp{\tilde\phi}{y^{n-\gamma+1}}r\cos\theta^1\cdots\cos\theta^\gamma\\
&=&-\pp{\tilde\phi}{y^{n-\gamma+1}}y^1\cos\theta^{\gamma+1}\cdots\cos\theta^{n-2}\cos\theta^{n-1}\\
&&-\pp{\tilde\phi}{y^{n-\gamma+1}}y^2\cos\theta^{\gamma+1}\cdots\cos\theta^{n-2}\sin\theta^{n-1}-\cdots\\
&&-\pp{\tilde\phi}{y^{n-\gamma+1}}y^{n-\gamma}\sin\theta^{\gamma+1}
+\pp{\tilde\phi}{y^{n-\gamma+1}}r\cos\theta^1\cdots\cos\theta^\gamma\\
&=&0,\qquad\gamma=2,\cdots,n-2,\\
\pp{\tilde\phi}{\theta^{n-1}}&=&-\pp{\tilde\phi}{y^1}y^2+\pp{\tilde\phi}{y^2}y^1=0.
\end{eqnarray*}
So $\tilde\phi=\tilde\phi(\theta^1)=\phi\left(\frac{y^n}{\a}\right)$, where $\phi(s)=\tilde\p(\arcsin as)$, which means that
$F=\a\phi\left(\frac{y^n}{\a}\right)$ is an
$\ab$-norm.\\
Case 2. $n=2$.
In this case, (\ref{eqn:g}) is equivalent to
$F(y^1,y^2)=F(-y^1,y^2)$ for all $y\in V$. This equation implies that
the indicatrix of $F$ is reflection-symmetric with respect to the
$y^2$-axis, which means that the function defined by (\ref{eqn:tp})
has the form $\tilde\phi=\phi\left(\frac{y^2}{\a}\right)$ for some
function $\phi$.
\end{proof}
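The forward direction of the proof can be illustrated numerically; the following toy check (our own, with $n=4$, $b=0.5$ and the Randers-type function $\phi(s)=1+s$) confirms that $F$ is unchanged under a rotation of the first coordinates, i.e., under an element of the group $G$:

```python
import math
import random

def F(y, b=0.5):
    # alpha * phi(b * y^n / alpha) with phi(s) = 1 + s, a Randers-type
    # alpha-beta norm written in the adapted orthonormal basis of the proof.
    alpha = math.sqrt(sum(t * t for t in y))
    return alpha * (1.0 + b * y[-1] / alpha)

rng = random.Random(1)
y = [rng.gauss(0, 1) for _ in range(4)]
theta = 0.9
# An element of O(n-1) fixing y^n: rotate (y^1, y^2), leave (y^3, y^4) fixed.
y_rot = [math.cos(theta) * y[0] + math.sin(theta) * y[1],
         -math.sin(theta) * y[0] + math.cos(theta) * y[1]] + y[2:]
diff = abs(F(y) - F(y_rot))
```

The rotation preserves both $\a(y)$ and the last coordinate $y^n$, so $F$ is invariant up to floating-point error.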
\begin{rmk}
(\ref{eqn:tp}) shows that the function $\phi(s)$ contains the
information about the shape of the indicatrix.
\end{rmk}
From Zermelo's viewpoint~\cite{db-cr-szm-zerm}, we can obtain new Minkowski norms by shifting the indicatrix of an $\ab$-norm. We call them {\em navigation $\ab$-norms}. The indicatrix of a navigation $\ab$-norm is still a rotation hypersurface, but in general the rotation axis does not pass through the origin. We will not discuss this kind of Minkowski norms further in this paper. However, they should not be omitted when one studies properties of $\ab$-metrics beyond Randers metrics\cite{mxh-hlb-oncu,zlf-aloc}, although their algebraic form may be very complicated.
\section{General $\ab$-metrics}
Suppose that $F$ is a Finsler metric on a manifold $M$ such that $F(x,y)$ is an $\ab$-norm on $T_xM$ for every $x\in M$. Then $F$ is not an $\ab$-metric in general, because the shape of the indicatrix may differ from point to point. This observation leads to the following definition.
\begin{definition}
Let $F$ be a Finsler metric on a manifold $M$. $F$ is called a {\em
general $\ab$-metric}, if $F$ can be expressed as the form
$F=\a\phi\left(x,\frac{\b}{\a}\right)$ for some $C^\infty$ function
$\phi(x,s)$ where $x\in M$, some Riemannian metric $\a$ and some
1-form $\b$. $F$ is called a {\em (special) $\ab$-metric}, if $F$
can be expressed as $F=\pab$ for some $C^\infty$ function $\phi(s)$,
some Riemannian metric $\a$ and some 1-form $\b$.
\end{definition}
The Finsler metrics of the form (\ref{gab}) form the simplest class of general $\ab$-metrics beyond the special $\ab$-metrics. Here $\phi(b^2,s)$ is a positive $C^\infty$ function with variables $b^2$ and $s$, defined on the domain $|s|\leq b<b_o$ for some $0<b_o\leq+\infty$. We use $b^2$ instead of $b$ as the first variable, partly because it is convenient for computations. In the rest of this paper, we will focus on this special kind of general $\ab$-metrics. First, we can obtain the basic facts about these general $\ab$-metrics immediately from the corresponding ones for $\ab$-metrics given in \cite{css-szm-riem}.
\begin{prop}\label{fand}
For a general $\ab$-metric $F=\gab$, the fundamental tensor is given
by
$$g_{ij}=\rho a_{ij}+\rho_0b_ib_j+\rho_1(b_i\a_{y^j}+b_j\a_{y^i})-s\rho_1\a_{y^i}\a_{y^j},$$
where
$$\rho=\p(\p-s\pt),\quad\rho_0=\p\ptt+\pt\pt,\quad\rho_1=(\p-s\pt)\pt-s\p\ptt.$$
Moreover,
$$\det(g_{ij})=\p^{n+1}(\p-s\pt)^{n-2}\big(\p-s\pt+(b^2-s^2)\ptt\big)\det(a_{ij}),$$
$$g^{ij}=\rho^{-1}\left\{a^{ij}+\eta b^ib^j+\eta_0\a^{-1}(b^iy^j+b^jy^i)+\eta_1\a^{-2}y^iy^j\right\},$$
where $(g^{ij})=(g_{ij})^{-1},(a^{ij})=(a_{ij})^{-1},b^i=a^{ij}b_j$,
$$\eta=-\frac{\ptt}{\big(\p-s\pt+(b^2-s^2)\ptt\big)},
\qquad\eta_0=-\frac{(\p-s\pt)\pt-s\p\ptt}{\p\big(\p-s\pt+(b^2-s^2)\ptt\big)},$$
$$\eta_1=\frac{\big(s\p+(b^2-s^2)\pt\big)\big((\p-s\pt)\pt-s\p\ptt\big)}{\p^2\big(\p-s\pt+(b^2-s^2)\ptt\big)}.$$
\end{prop}
\begin{proof}
Recall that the fundamental tensor of a Finsler metric $F$ is given
by $g_{ij}=\frac{1}{2}[F^2]_{y^iy^j}$. Note that for a general
$\ab$-metric, the variable $b^2$ is independent of $y$, so one can
get the above formulas immediately from the corresponding ones of
$\ab$-metrics given in \cite{css-szm-riem}.
\end{proof}
\begin{prop}\label{ttt}
Let $M$ be an $n$-dimensional manifold. $F=\gab$ is a Finsler metric
on $M$ for any Riemannian metric $\a$ and 1-form $\b$ with
$\|\b\|_\a<b_o$ if and only if $\p=\p(b^2,s)$ is a positive
$C^\infty$ function satisfying
\begin{eqnarray}\label{ppp}
\p-s\pt>0,\quad\p-s\pt+(b^2-s^2)\ptt>0,
\end{eqnarray}
when $n\geq3$ or
$$\p-s\pt+(b^2-s^2)\ptt>0,$$
when $n=2$, where $s$ and $b$ are arbitrary numbers with $|s|\leq
b<b_o$.
\end{prop}
\begin{proof}
The case $n=2$ is similar to $n\geq3$, so it is omitted here.
Suppose that (\ref{ppp}) holds. Consider a family of functions
$\p_t(b^2,s)=1-t+t\p(b^2,s)$. Let
$F_t=\a\p_t\left(b^2,\frac{\b}{\a}\right)$ and
$g^t_{ij}=\frac{1}{2}\left[F_t^2\right]_{y^iy^j}$, then $F_0=\a$ and
$F_1=F$. It is easy to see that for any $0\leq t\leq1$ and $|s|\leq
b<b_o$,
$$\p_t-s(\p_t)_2=1-t+t(\p-s\pt)>0,$$
$$\p_t-s(\p_t)_2+(b^2-s^2)(\p_t)_{22}=1-t+t\big(\p-s\pt+(b^2-s^2)\ptt\big)>0.$$
Thus $\det(g^t_{ij})>0$ for all $0\leq t\leq1$. Since $(g^0_{ij})$
is positive definite, we conclude that $(g^t_{ij})$ is positive
definite for any $t\in[0,1]$. Therefore, $F_t$ is a Finsler metric
for any $t\in[0,1]$.
Conversely, assume that $F=\gab$ is a Finsler metric for any
Riemannian metric $\a$ and 1-form $\b$ with $b<b_o$. Then
$\phi(b^2,s)$ is positive. By Proposition \ref{fand},
$\det(g_{ij})>0$ is equivalent to
$$(\p-s\pt)^{n-2}\big(\p-s\pt+(b^2-s^2)\ptt\big)>0,$$
which implies $\p-s\pt\neq0$ when $n\geq3$. Since $\p(b^2,0)>0$, the
previous inequality implies that the first inequality in (\ref{ppp})
holds. The second one also holds because $\det(g_{ij})>0$.
\end{proof}
\begin{rmk}
Note that the second inequality in (\ref{ppp}) does not imply the
first one, even though it does for special $\ab$-metrics (cf.
\cite{css-szm-riem}).
\end{rmk}
Let $b_{i|j}$ denote the coefficients of the covariant derivative of
$\b$ with respect to $\a$. Let
$$r_{ij}=\frac{1}{2}(b_{i|j}+b_{j|i}),~s_{ij}=\frac{1}{2}(b_{i|j}-b_{j|i}),
~r_{00}=r_{ij}y^iy^j,~s^i{}_0=a^{ij}s_{jk}y^k,$$
$$r_i=b^jr_{ji},~s_i=b^js_{ji},~r_0=r_iy^i,~s_0=s_iy^i,~r^i=a^{ij}r_j,~s^i=a^{ij}s_j,~r=b^ir_i.$$
It is easy to see that $\b$ is closed if and only if $s_{ij}=0$.
\begin{prop}\label{prop:G}
For a general $(\alpha,\beta)$-metric $F=\gab$, its spray
coefficients $G^i$ are related to the spray coefficients $G^i_\a$ of
$\a$ by
\begin{eqnarray*}
G^i&=& G^i_\a+\a Q s^i{}_0+\left\{\Theta(-2\a Q s_0+r_{00}+2\a^2
R r)+\a\Omega(r_0+s_0)\right\}\frac{y^i}{\a}\\
&&+\left\{\Psi(-2\a Q s_0+r_{00}+2\a^2 R
r)+\a\Pi(r_0+s_0)\right\}b^i -\a^2 R(r^i+s^i),
\end{eqnarray*}
where
$$Q=\frac{\pt}{\p-s\pt},\quad R=\frac{\po}{\p-s\pt},$$
$$\Theta=\frac{(\p-s\pt)\pt-s\p\ptt}{2\p\big(\p-s\pt+(b^2-s^2)\ptt\big)},
\quad\Psi=\frac{\ptt}{2\big(\p-s\pt+(b^2-s^2)\ptt\big)},$$
$$\Pi=\frac{(\p-s\pt)\pye-s\po\ptt}{(\p-s\pt)\big(\p-s\pt+(b^2-s^2)\ptt\big)},\quad
\Omega=\frac{2\po}{\p}-\frac{s\p+(b^2-s^2)\pt}{\p}\Pi.$$
\end{prop}
\begin{proof}
Recall that the spray coefficients of a Finsler metric $F$ are given
by
$$G^i=\frac{1}{4}g^{il}\left\{\left[F^2\right]_{x^ky^l}y^k-\left[F^2\right]_{x^l}\right\}.$$
For the general $\ab$-metric $F=\gab$, direct computations yield
\begin{eqnarray*}
\left[F^2\right]_{x^k}&=&[\a^2]_{x^k}\p^2+2\a^2\p\po[b^2]_{x^k}+2\a^2\p\pt s_{x^k},\\
\left[F^2\right]_{x^ky^l}&=&[\a^2]_{x^ky^l}\p^2+2[\a^2]_{x^k}\p\pt
s_{y^l}+2[\a^2]_{y^l}\p\po[b^2]_{x^k}\\
&&+2\a^2\po\pt[b^2]_{x^k}s_{y^l}+2\a^2\p\pye[b^2]_{x^k}s_{y^l}+2[\a^2]_{y^l}\p\pt s_{x^k}\\
&&+2\a^2(\pt)^2s_{x^k}s_{y^l}+2\a^2\p\ptt
s_{x^k}s_{y^l}+2\a^2\p\pt s_{x^ky^l}.
\end{eqnarray*}
Set $G^i=G^i_1+G^i_2$, where $G^i_1$ includes $\po$ and $\pye$ but
$G^i_2$ does not, i.e.,
\begin{eqnarray}
G^i_1&=&\frac{1}{2}g^{il}\Big\{[\a^2]_{y^l}\p\po[b^2]_{x^k}y^k+\a^2\po\pt[b^2]_{x^k}y^ks_{y^l}\nonumber\\
&&+\a^2\p\pye[b^2]_{x^k}y^ks_{y^l}-\a^2\p\po[b^2]_{x^l}\Big\}.\label{G1}
\end{eqnarray}
It is easy to see that $G^i_2$ can be obtained immediately by
replacing $\phi'$ with $\pt$ and $\phi''$ with $\ptt$ in the spray
coefficients of $\ab$-metrics, which can be found in
\cite{css-szm-riem}. So
\begin{eqnarray*}
G^i_2=G_\a^i+\a Q s^i{}_0+\Theta\left\{-2\a Qs_0+r_{00}\right\}
\frac{y^i}{\a}+\Psi\left\{-2\a Qs_0+r_{00}\right\}b^i.
\end{eqnarray*}
In order to compute $G^i_1$, we need the following simple facts:
\begin{eqnarray}\label{fact}
[\a^2]_{y^l}=2y_l,\quad[b^2]_{x^l}=2(r_l+s_l),\quad s_{y^l}=\frac{\a
b_l -sy_l}{\a^2},
\end{eqnarray}
where $y_l=a_{lt}y^t$.\\
By (\ref{G1}) and (\ref{fact}), we have
\begin{eqnarray*}
G^i_1=g^{il}\big\{A y_l+B b_l+C(r_l+s_l)\big\}:=\rho^{-1}\big\{D
y^i+E b^i+F(r^i+s^i)\big\},
\end{eqnarray*}
where
\begin{eqnarray*}
&A=(2\p\po-s\po\pt-s\p\pye)(r_0+s_0),&\\
&B=\a(\po\pt+\p\pye)(r_0+s_0),\quad C=-\a^2\p\po,&
\end{eqnarray*}
and by Proposition \ref{fand},
\begin{eqnarray*}
D&=&A+(As+\a^{-1}Bb^2+\a^{-1}Cr)\eta_0+\big\{A+\a^{-1}Bs+\a^{-2}C(r_0+s_0)\big\}\eta_1,\\
E&=&B+(\a As+Bb^2+Cr)\eta+\big\{\a
A+Bs+\a^{-1}C(r_0+s_0)\big\}\eta_0,\\
F&=&C.
\end{eqnarray*}
Plugging $\eta,\eta_0,\eta_1,A,B,C$ into the above equalities yields
\begin{eqnarray*}
D&=&\Bigg\{\left[2(\p-s\pt)+\frac{s\ptt\big(s\p+(b^2-s^2)\pt\big)}{\p-s\pt+(b^2-s^2)\ptt}\right]
\po\\
&&-\frac{(\p-s\pt)\big(s\p+(b^2-s^2)\pt\big)}{\p-s\pt+(b^2-s^2)\ptt}
\pye\Bigg\}(r_0+s_0)\\
&&+\frac{(\p-s\pt)\pt-s\p\ptt}{\p-s\pt+(b^2-s^2)\ptt}\po\a r,\\
E&=&\Bigg\{\frac{\p(\p-s\pt)}{\p-s\pt+(b^2-s^2)\ptt}\pye-\frac{s\p\ptt}{\p-s\pt+(b^2-s^2)\ptt}
\po\Bigg\}\a(r_0+s_0)\\
&&+\frac{\p\ptt}{\p-s\pt+(b^2-s^2)\ptt}\po\a^2r.
\end{eqnarray*}
One can obtain the spray coefficients $G^i$ by the above
equalities.
\end{proof}
\section{Some constructions of projectively flat general
$\ab$-metrics}
Bryant's metrics (\ref{bryant}) contain some general $\ab$-metrics.
To see this, take $p_1=p_2=\cdots=p_{n-1}=0$ and $p_n=p$.
Then (\ref{bryant}) takes the following form in an appropriate
coordinate system obtained by stereographic projection,
$$F=\Re\frac{\sqrt{(e^{ip}+|x|^2)|y|^2-\langle x,y\rangle^2}-i\langle x,y\rangle}{e^{ip}+|x|^2}.$$
If we take $p_1=p_2=\cdots=p_n=p$, then (\ref{bryant}) is given by
$$F=\Re\frac{\sqrt{(e^{-ip}+|x|^2)|y|^2-\langle x,y\rangle^2}-i\langle x,y\rangle}{e^{-ip}+|x|^2}.$$
So it is natural to consider the general $\ab$-metrics in the form
(\ref{eqn:bry}).
\begin{lem}
$F=\Re\frac{\sqrt{(e^{ip}+b^2)\a^2-\b^2}-i\b}{e^{ip}+b^2}$ is a
Finsler metric if and only if $b<b_o$, where
\begin{eqnarray*}
b_o=\left\{\begin{array}{ll}
+\infty&\qquad\mbox{if}\quad|p|\leq\frac{\pi}{2},\\
\sqrt{\frac{1}{2}\sec(\frac{2\pi}{3}-\frac{|p|}{3})}&\qquad\mbox{if}\quad\frac{\pi}{2}<|p|<\pi.
\end{array}\right.
\end{eqnarray*}
\end{lem}
\begin{proof}
There is nothing to discuss when $p=0$, because in this case
$F=\frac{\sqrt{(1+b^2)\a^2-\b^2}}{1+b^2}$ is just a Riemannian
metric.
Define a complex-valued function $\Phi(b^2,s)$ by
\begin{eqnarray}\label{eqn:Phi}
\Phi(b^2,s)=\frac{\sqrt{e^{ip}+b^2-s^2}-is}{e^{ip}+b^2}=\frac{1}{\sqrt{e^{ip}+b^2-s^2}+is},
\end{eqnarray}
then $\phi(b^2,s)$ is the real part of $\Phi$. Direct computations
yield
\begin{eqnarray}
\Phi-s\Phi_2=\frac{1}{(e^{ip}+b^2-s^2)^\frac{1}{2}},\label{eqn:P1}\\
\Phi-s\Phi_2+(b^2-s^2)\Phi_{22}=\frac{e^{ip}}{(e^{ip}+b^2-s^2)^\frac{3}{2}}.\label{eqn:P2}
\end{eqnarray}
When $0<p<\pi$, it is easy to see that the argument of
$e^{ip}+b^2-s^2$, denoted by $\theta$, satisfies $0<\theta\leq p$
since $b^2-s^2\geq0$. We conclude $\p$ and $\p-s\pt$ are positive
because the arguments of $\Phi$ and $\Phi-s\Phi_2$ belong to the
interval $(-\frac{\pi}{2},\frac{\pi}{2})$.
On the other hand,
$$\arg\left(\Phi-s\Phi_2+(b^2-s^2)\Phi_{22}\right)=p-\frac{3}{2}\theta,$$
so $\p-s\pt+(b^2-s^2)\ptt$ is positive when $p\leq\frac{\pi}{2}$. In
other words, $b_o=+\infty$ when $0<p\leq\frac{\pi}{2}$.
In the case $p>\frac{\pi}{2}$, $\p-s\pt+(b^2-s^2)\ptt$ is not always
positive because $\theta$ may be very small. Let $b_o$ be the
largest number such that for all $|s|\leq b<b_o$,
$\p-s\pt+(b^2-s^2)\ptt>0$. Then $b_o$ must be the solution, which is
given in the lemma, of the following equation,
$$\arg\frac{e^{ip}}{(e^{ip}+b_o^2)^\frac{3}{2}}=\frac{\pi}{2}.$$
The case $-\pi<p<0$ can be treated by a similar argument, which
finishes the proof.
\end{proof}
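The defining equation for $b_o$ can also be checked numerically. The
following sketch (with an arbitrary sample value
$p=2\in(\frac{\pi}{2},\pi)$ chosen for illustration) verifies that
$b_o^2=\frac{1}{2}\sec(\frac{2\pi}{3}-\frac{p}{3})$ indeed makes the
argument of $e^{ip}/(e^{ip}+b_o^2)^{3/2}$ equal to $\frac{\pi}{2}$:

```python
import cmath
import math

def b_o_squared(p):
    # closed form from the lemma, valid for pi/2 < |p| < pi
    return 0.5 / math.cos(2 * math.pi / 3 - abs(p) / 3)

p = 2.0                       # sample value with pi/2 < p < pi
b2 = b_o_squared(p)
z = cmath.exp(1j * p) / (cmath.exp(1j * p) + b2) ** 1.5
# at b = b_o the argument reaches pi/2, the boundary of positivity
print(abs(cmath.phase(z) - math.pi / 2))  # ~0
```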
\begin{rmk}
By the above lemma, Bryant's metrics (\ref{bryant}) are not always
defined on the whole sphere. This conclusion has been confirmed by
R. Bryant. That is to say, in order to ensure the regularity of
(\ref{bryant}) on the whole sphere, some additional conditions on the
parameters $p_i\,(1\leq i\leq n)$ are required.
\end{rmk}
\begin{proof}[Proof of Theorem \ref{thm-2}]
Since $\a$ is locally projectively flat, we can assume that
$G^i_\a=\theta y^i$ in some local coordinate system $(\mathcal
U;x^i)$, where $\theta=\theta_i(x)y^i$ is a 1-form on $\mathcal U$.
On the other hand, $b_{i|j}=c(x)a_{ij}$ for some function $c(x)$
because $\b$ is closed and conformal with respect to $\a$. It is
obvious that
\begin{eqnarray}\label{tu}
r_{00}=c\a^2,\quad r_0=c\b,\quad r=cb^2,\quad r^i=cb^i,\quad s^i{}_0=0,\quad s_0=0,\quad s^i=0.
\end{eqnarray}
Substituting (\ref{tu}) into the spray coefficients in Proposition
\ref{prop:G} yields
\begin{eqnarray*}
G^i&=&\left\{\theta+c\a[\Theta(1+2Rb^2)+s\Omega]\right\}y^i+c\a^2\left\{\Psi(1+2Rb^2)+s\Pi-R\right\}b^i\\
&=&\left\{\theta+c\a\left[\frac{\pt+2s\po}{2\p}
-\frac{\big(\ptt-2(\po-s\pye)\big)\big(s\p+(b^2-s^2)\pt\big)}
{2\p\big(\p-s\pt+(b^2-s^2)\ptt\big)}\right]\right\}y^i\\
&&+c\a^2\left\{\frac{\ptt-2(\po-s\pye)}{2\big(\p-s\pt+(b^2-s^2)\ptt\big)}\right\}b^i.
\end{eqnarray*}
So the spray coefficients are given by
\begin{eqnarray}\label{GGG}
G^i=\left\{\theta+c\a\frac{\pt+2s\po}{2\p}\right\}y^i
\end{eqnarray}
if $\p$ satisfies the first condition of Theorem \ref{thm-2}. Recall
that a Finsler metric is projectively flat if and only if its spray
coefficients are of the form $G^i=Py^i$ \cite{css-szm-riem}.
Therefore $F$ is projectively flat on $\mathcal U$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm-3}]
The function $\Phi(b^2,s)$ is defined by (\ref{eqn:Phi}).
Differentiating (\ref{eqn:P1}) with respect to $b^2$ yields
$$\Phi_1-s\Phi_{12}=-\frac{1}{2(e^{ip}+b^2-s^2)^\frac{3}{2}}.$$
So by the above equality and (\ref{eqn:P2}), $\Phi$ satisfies the
following equality,
$$\Phi_{22}=2(\Phi_1-s\Phi_{12}).$$
The same relation holds for $\p$, as seen by taking the real parts of
the above equality.
On the other hand, set $\varrho=\sqrt{1+\mu|x|^2}$, then the
Christoffel symbols of (\ref{a}) are given by
$\Gamma^k{}_{ij}=-\varrho^{-2}\mu(x^i\delta^k{}_j+x^j\delta^k{}_i)$,
and
\begin{eqnarray*}
b_i&=&\varrho^{-3}\lambda x^i+\varrho^{-1}a^i-\varrho^{-3}\mu\langle a,x\rangle x^i,\\
\pp{b_i}{x^j}&=&\varrho^{-3}\lambda\delta_{ij}-3\varrho^{-5}\mu\lambda
x^ix^j-\varrho^{-3}\mu a^ix^j\\
&&-\varrho^{-3}\mu\langle
a,x\rangle\delta_{ij}-\varrho^{-3}\mu
a^jx^i+3\varrho^{-5}\mu^2\langle a,x\rangle x^ix^j,\\
b_{i|j}&=&\pp{b_i}{x^j}-b_k\Gamma^k{}_{ij}\\
&=&\varrho^{-3}(\lambda-\mu\langle
a,x\rangle)\delta_{ij}-\varrho^{-5}(\lambda-\mu\langle
a,x\rangle)\mu x^ix^j.
\end{eqnarray*}
The last equality implies $s_{ij}=0$ and
$r_{ij}=\varrho^{-1}(\lambda-\mu\langle a,x\rangle)a_{ij}$. So $\b$
is closed and conformal with respect to $\a$ with conformal factor
$c(x)=\varrho^{-1}(\lambda-\mu\langle a,x\rangle)$.
Moreover, the spray coefficients of $F$ are given by
$$G^i=\left\{-\frac{\mu\langle x,y\rangle}{1+\mu|x|^2}+\frac{(\lambda-\mu\langle
a,x\rangle)}{\sqrt{1+\mu|x|^2}}
\Im\frac{\sqrt{(e^{ip}+b^2)\a^2-\b^2}-i\b}{e^{ip}+b^2}\right\}y^i,$$
which are obtained by the simple equality $\Phi_2+2s\Phi_1=-i\Phi^2$
and (\ref{GGG}).
\end{proof}
\begin{example}
Take $\lambda=1,a=0$ in Theorem \ref{thm-3}, then the following
general $\ab$-metrics are projectively flat for $-\frac{\pi}{2}\leq
p\leq\frac{\pi}{2}$:
$$F=\Re\frac{\sqrt{(e^{ip}+|x|^2+\mu e^{ip}|x|^2)|y|^2-(1+\mu e^{ip})\langle
x,y\rangle^2} -\frac{i\langle
x,y\rangle}{\sqrt{1+\mu|x|^2}}}{e^{ip}+|x|^2+\mu e^{ip}|x|^2}.$$
\end{example}
\begin{example}\label{ex2}
It is easy to verify that the function
$\p(b^2,s)=(\sqrt{1+b^2}+s)^2$ satisfies the first condition of
Theorem \ref{thm-2}. Take $\lambda=1,a=0$, then the following
general $\ab$-metrics are projectively flat:
$$F=\frac{(\sqrt{1+(1+\mu)|x|^2}\sqrt{(1+\mu|x|^2)|y|^2-\mu\langle x,y\rangle^2}+\langle x,y\rangle)^2}
{(1+\mu|x|^2)^2\sqrt{(1+\mu|x|^2)|y|^2-\mu\langle x,y\rangle^2}}.$$
In particular, $F$ is Berwald's metric when $\mu=-1$.
\end{example}
\section{Some discussions about the PDE}
In this section, we will discuss some interesting properties of the
partial differential equation
\begin{eqnarray}\label{pde}
\ptt=2(\po-s\pye).
\end{eqnarray}
We will always assume $\lambda=1$ and $a=0$ in Theorem \ref{thm-3}
in this section. In this case, $\a$ and $\b$ are given by
$$\a_\mu=\frac{\sqrt{(1+\mu|x|^2)|y|^2-\mu\langle x,y\rangle^2}}{1+\mu|x|^2},
\qquad\b_\mu=\frac{\langle x,y\rangle}{(1+\mu|x|^2)^\frac{3}{2}}.$$
It is easy to verify that
$b^2_\mu:=\|\b_\mu\|^2_{\a_\mu}=\frac{|x|^2}{1+\mu|x|^2}$.
For any solution $\p$ of (\ref{pde}) satisfying Proposition
\ref{ttt}, $F=\a_\mu\p\left(b^2_\mu,\frac{\b_\mu}{\a_\mu}\right)$ is
a projectively flat general $\ab$-metric for any constant $\mu$ by
Theorem \ref{thm-2}. It is easy to see that such a metric can always
be rewritten in the form
\begin{eqnarray}
F=|y|\p_\mu\left(|x|^2,\frac{\langle x,y\rangle}{|y|}\right),
\end{eqnarray}
where the function $\p_\mu$ is given by
\begin{eqnarray}\label{T}
\p_\mu(b^2,s)=\frac{\sqrt{1+\mu(b^2-s^2)}}{1+\mu
b^2}\p\left(\frac{b^2}{1+\mu b^2},\frac{s} {\sqrt{1+\mu
b^2}\sqrt{1+\mu(b^2-s^2)}}\right).
\end{eqnarray}
In particular, $\p_0=\p$.
(\ref{T}) defines a family of transformations $\{\mathcal T_\mu\}$
by $\p_\mu=\mathcal T_\mu(\p)$. This family of transformations
forms a transformation group of the solution space of (\ref{pde}),
by the following proposition.
\begin{prop}\label{group}
For any solution $\p(b^2,s)$ of (\ref{pde}), the following facts
hold:
\begin{enumerate}
\item
$\p_\mu=\mathcal T_\mu(\p)$ is also a solution of (\ref{pde}) for
any constant $\mu$;
\item
$\mathcal T_0(\p)=\p$;
\item
$\mathcal T_\mu\circ\mathcal T_\nu(\p)=\mathcal T_{\mu+\nu}(\p)$.
\end{enumerate}
\end{prop}
\begin{proof}
Denote $\p_\mu$ by $\tilde\p$ and set $\tilde\p=A\p(B,S)$ where
\begin{eqnarray*}
&\displaystyle A(b^2,s)=\frac{\sqrt{1+\mu(b^2-s^2)}}{1+\mu b^2},\quad B(b^2,s)=\frac{b^2}{1+\mu
b^2},&\\
&\displaystyle S(b^2,s)=\frac{s}{\sqrt{1+\mu b^2}\sqrt{1+\mu(b^2-s^2)}}.&
\end{eqnarray*}
Then
\begin{eqnarray*}
\tilde\p_2&=&A_2\p(B,S)+AS_2\p_S(B,S)\\
&=&-\frac{\mu s\p(B,S)}{(1+\mu
b^2)\sqrt{1+\mu(b^2-s^2)}}+\frac{\p_S(B,S)}{\sqrt{1+\mu
b^2}\big(1+\mu(b^2-s^2)\big)},\\
\tilde\p-s\tilde\p_2&=&\frac{1}{\sqrt{1+\mu(b^2-s^2)}}\big(\p(B,S)-S\p_S(B,S)\big).
\end{eqnarray*}
Set $E=\frac{1}{\sqrt{1+\mu(b^2-s^2)}}$, then
\begin{eqnarray}
\big(\tilde\p-s\tilde\p_2\big)_1&=&E\big(\p(B,S)-S\p_S(B,S)\big)_BB_1
+E\big(\p(B,S)-S\p_S(B,S)\big)_SS_1\nonumber\\
&&+E_1\big(\p(B,S)-S\p_S(B,S)\big),\label{18}\\
\big(\tilde\p-s\tilde\p_2\big)_2&=&E\big(\p(B,S)-S\p_S(B,S)\big)_SS_2
+E_2\big(\p(B,S)-S\p_S(B,S)\big).\label{19}
\end{eqnarray}
The fact that $\p$ is a solution of (\ref{pde}) yields
\begin{eqnarray}\label{20}
\big(\p(B,S)-S\p_S(B,S)\big)_S=-2S\big(\p(B,S)-S\p_S(B,S)\big)_B.
\end{eqnarray}
Then by (\ref{18}), (\ref{19}) and (\ref{20}) we have
\begin{eqnarray*}
&&2(\tilde\p_1-s\tilde\p_{12})-\tilde\p_{22}\\
&=&2\big(\tilde\p-s\tilde\p_2\big)_1+s^{-1}\big(\tilde\p-s\tilde\p_2\big)_2\\
&=&(2ES_1+s^{-1}ES_2)\big(\p(B,S)-S\p_S(B,S)\big)_S+2EB_1\big(\p(B,S)-S\p_S(B,S)\big)_B\\
&&+(2E_1+s^{-1}E_2)\big(\p(B,S)-S\p_S(B,S)\big)\\
&=&2E(B_1-2SS_1-s^{-1}SS_2)\big(\p(B,S)-S\p_S(B,S)\big)_B\\
&&+(2E_1+s^{-1}E_2)\big(\p(B,S)-S\p_S(B,S)\big)\\
&=&0.
\end{eqnarray*}
The last equality holds because the terms $B_1-2SS_1-s^{-1}SS_2$ and
$2E_1+s^{-1}E_2$ both vanish by direct computations. So (1)
holds.
(2) holds since $\p_0=\p$.
In order to see that (3) is true, we only need to compute $\mathcal
T_\mu(\p_\nu)$. By (\ref{T}) and the definition of $\mathcal T_\mu$,
\begin{eqnarray*}
\mathcal T_\mu(\p_\nu)&=&\frac{\sqrt{1+\nu(b^2-s^2)}}{1+\nu
b^2}\frac{\sqrt{1+\mu\left(\frac{b^2}{1+\nu b^2}-\frac{s^2}{(1+\nu
b^2)\left(1+\nu(b^2-s^2)\right)}\right)}}{1+\mu\frac{b^2}{1+\nu b^2}}\\
&&\p\left(\frac{\frac{b^2}{1+\nu b^2}}{1+\mu\frac{b^2}{1+\nu
b^2}},\frac{\frac{s}{\sqrt{1+\nu
b^2}{\sqrt{1+\nu(b^2-s^2)}}}}{\sqrt{1+\mu\frac{b^2}{1+\nu
b^2}}\sqrt{1+\mu\left(\frac{b^2}{1+\nu b^2}-\frac{s^2}{(1+\nu
b^2)\left(1+\nu(b^2-s^2)\right)}\right)}}\right)\\
&=&\p_{\mu+\nu}(b^2,s),
\end{eqnarray*}
which means $\mathcal T_\mu\circ\mathcal T_\nu(\p)=\mathcal
T_{\mu+\nu}(\p)$.
\end{proof}
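The group law in Proposition \ref{group} can also be confirmed
numerically. The sketch below applies (\ref{T}) to an arbitrary
smooth test function (the function and the sample values are chosen
ad hoc, since the composition identity in the proof is purely
algebraic):

```python
import math

def T(mu, phi):
    """The transformation T_mu defined by (\\ref{T}), acting on phi(b2, s)."""
    def phi_mu(b2, s):
        A = math.sqrt(1 + mu * (b2 - s * s)) / (1 + mu * b2)
        B = b2 / (1 + mu * b2)
        S = s / (math.sqrt(1 + mu * b2) * math.sqrt(1 + mu * (b2 - s * s)))
        return A * phi(B, S)
    return phi_mu

phi = lambda b2, s: 1.0 + s + b2 * s * s   # arbitrary smooth test function
mu, nu, b2, s = 0.3, 0.5, 0.7, 0.4
lhs = T(mu, T(nu, phi))(b2, s)             # T_mu o T_nu
rhs = T(mu + nu, phi)(b2, s)               # T_{mu+nu}
print(abs(lhs - rhs))  # ~0, consistent with the group law
```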
Proposition \ref{group} implies a simple fact: if $\tilde\p$ can be
obtained from some solution $\p$ of (\ref{pde}) by some
transformation $\mathcal T_\mu$, then the two yield the same
projectively flat Finsler metrics by Theorem \ref{thm-2}. For
instance, $\p=1$ is obviously a solution of (\ref{pde}), and
$\mathcal T_\mu(1)=\frac{\sqrt{1+\mu(b^2-s^2)}}{1+\mu b^2}$. In this
case,
$$\p_\mu\left(b^2_\nu,\frac{\b_\nu}{\a_\nu}\right)=\p_{\mu+\nu}\left(b^2_0,\frac{\b_0}{\a_0}\right)=\a_{\mu+\nu}$$
are just the Riemannian metrics of constant sectional curvature.
We still do not know how to solve equation (\ref{pde})
completely, but the following lemma is helpful for obtaining solutions.
\begin{lem}\label{C}
For any $C^\infty$ functions $f$ and $g$, the following function is
a solution of (\ref{pde}):
\begin{eqnarray}\label{eqn:jie}
\p(b^2,s)=f(b^2-s^2)+2s\int_0^sf'(b^2-\sigma^2)\mathrm{d}\sigma+g(b^2)s.
\end{eqnarray}
\end{lem}
\begin{proof}
It is easy to verify that the above function satisfies
(\ref{pde}).
\end{proof}
Suppose that $\p$ satisfies (\ref{eqn:jie}). Direct computations
show that
$$\p-s\pt=f(t),\qquad\p-s\pt+(b^2-s^2)\ptt=f(t)+2tf'(t),$$
where $t=b^2-s^2\geq0$. Assume that $f(0)>0$, then the inequalities
$\p>0$ and (\ref{ppp}) always hold for $b$ small enough. So one can
construct infinitely many projectively flat general $\ab$-metrics by
Lemma \ref{C}. Some simple examples are given in the following:
\begin{itemize}
\item$f(t)=\frac{1}{\sqrt{1-t}}$,
$$\p(b^2,s)=\frac{\sqrt{1-b^2+s^2}}{1-b^2}+g(b^2)s.$$
In this case, $F$ is of Randers type. In particular, it is the
navigation representation of Randers metrics when
$g(b^2)=-\frac{1}{1-b^2}$ (cf. \cite{css-szm-riem}).
\item$f(t)=1+t$,
$$\p(b^2,s)=1+b^2+s^2+g(b^2)s.$$
In particular, it reduces to Example \ref{ex2} when
$g(b^2)=2\sqrt{1+b^2}$.
\item$f(t)=\sqrt{1-t},$
$$\p(b^2,s)=\sqrt{1-b^2+s^2}-s\ln(\sqrt{1-b^2+s^2}+s)+s\ln\sqrt{1-b^2}+g(b^2)s.$$
\item$f(t)=\sqrt{1+t}$,
$$\p(b^2,s)=\sqrt{1+b^2-s^2}+s\arcsin\frac{s}{\sqrt{1+b^2}}+g(b^2)s.$$
\item$f(t)=\ln(2+t)$,
$$\p(b^2,s)=\ln(2+b^2-s^2)+\frac{s}{\sqrt{2+b^2}}\ln\frac{\sqrt{2+b^2}+s}{\sqrt{2+b^2}-s}+g(b^2)s.$$
\item$f(t)=\ln(2-t)$,
$$\p(b^2,s)=\ln(2-b^2+s^2)-\frac{2s}{\sqrt{2-b^2}}\arctan\frac{s}{\sqrt{2-b^2}}+g(b^2)s.$$
\item$f(t)=1+\arctan t$,
\begin{eqnarray*}
\p(b^2,s)&=&1+\arctan(b^2-s^2)+\frac{s}{\sqrt{1+b^4}\sqrt{2\sqrt{1+b^4}-2b^2}}\\
&&\cdot\Bigg(\frac{1}{2}\left(\sqrt{1+b^4}-b^2\right)\ln\frac{\sqrt{1+b^4}+\sqrt{2\sqrt{1+b^4}+2b^2}s+s^2}
{\sqrt{1+b^4}-\sqrt{2\sqrt{1+b^4}+2b^2}s+s^2}\\
&&+\arctan\left(\sqrt{2\sqrt{1+b^4}+2b^2}s+\sqrt{1+b^4}+b^2\right)\\
&&+\arctan\left(\sqrt{2\sqrt{1+b^4}+2b^2}s-\sqrt{1+b^4}-b^2\right)\Bigg)+g(b^2)s.\\
\end{eqnarray*}
\end{itemize}
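As a sanity check, the entry above for $f(t)=\sqrt{1+t}$ (taking
$g\equiv0$) can be verified against (\ref{pde}) by central finite
differences; the sample point and step size below are chosen ad hoc:

```python
import math

def phi(u, s):
    # closed form for f(t) = sqrt(1 + t), g = 0, writing u for b^2
    return math.sqrt(1 + u - s * s) + s * math.asin(s / math.sqrt(1 + u))

u, s, h = 0.5, 0.3, 1e-4
# central-difference approximations of phi_{22}, phi_1 and phi_{12}
phi_ss = (phi(u, s + h) - 2 * phi(u, s) + phi(u, s - h)) / h ** 2
phi_u = (phi(u + h, s) - phi(u - h, s)) / (2 * h)
phi_us = (phi(u + h, s + h) - phi(u + h, s - h)
          - phi(u - h, s + h) + phi(u - h, s - h)) / (4 * h * h)
residual = phi_ss - 2 * (phi_u - s * phi_us)
print(abs(residual))  # ~0 up to discretization error
```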
Obviously, the general $\ab$-metrics include all the $\ab$-metrics.
But it seems a little difficult to determine whether a general
$\ab$-metric is an $\ab$-metric or not. If $\p=\p(s)$ is independent
of $b^2$, then there is no doubt that $F=\pab$ is an $\ab$-metric.
But if $\p=\p(b^2,s)$, we cannot conclude that $F=\gab$ is not an
$\ab$-metric. For instance, as noted in Section 1, the general
$\ab$-metric $F=\frac{(\sqrt{1+\bar b^2}\ba+\bb)^2}{\ba}$ is
actually an $\ab$-metric. So the following problem is still open:
give an approach to distinguish $\ab$-metrics from general
$\ab$-metrics.\\
{\bf Acknowledgement} The authors thank Professor R. Bryant for his
immediate reply, when we asked him for suggestions, confirming that
there is indeed something left out in his paper \cite{brya-some}. We
also thank Doctor Libing Huang. Before we introduced the concept of
general $\ab$-metrics, he had studied some Finsler metrics of the
form $F=|y|\p\left(|x|^2,\frac{\langle x,y\rangle}{|y|}\right)$ in a
different way; these are the simplest and most important general
$\ab$-metrics.
\section{Introduction}
\label{sec:intro}
\begin{figure*}[!bht]
\centering
\scalebox{0.8}{
\includegraphics[width=\textwidth]{systemoverview.pdf}}
\caption{Graphic overview of the summarization pipeline.}
\label{fig:systemoverview}
\end{figure*}
In recent years, the availability of video content online has been growing rapidly. YouTube alone has over a billion users, and every day people watch hundreds of millions of hours on YouTube \cite{youtube2016}. With the rapid growth of available content and the rising popularity of online video platforms, accessibility and discoverability become increasingly important. Specifically, in the video search scenario, it is crucial that the platforms enable effective discovery of relevant video content.
Previous research, indeed, has dedicated a great deal of attention to video retrieval \cite{2015trecvidover}, a task that is much harder than document retrieval due to the semantic mismatch between the keyword queries and the video frames. Therefore, video classification has been a prominent research topic \cite{karpathy2014large,brezeale2008automatic}, as well as detecting semantic concepts within video material \cite{jiang2007towards}. Both video categories and semantic concepts can be used for relevance matching between the query and parts of the video \cite{snoek2008concept}.
In this paper, we extend this existing research, and propose a system for query-based video summarization. Our system creates a brief, visually attractive trailer, which captures the parts of the video (or a collection of videos) that are most relevant to the user-issued query. For instance, for a query \emph{Istanbul}, and a video describing a trip to Istanbul, our system will construct an informative trailer, highlighting points of interest (\emph{Hagia Sophia}, \emph{Blue Mosque}, \emph{Grand Bazaar}), and skipping non-relevant content (shots of the tour bus, hotel room interior, etc.).
The applications for such a system are numerous, as such a trailer skips the extraneous parts of a video, thus enhancing the user experience and saving time. For instance, it can better inform user decisions, and save time and money for services where users pay per view or pay for mobile data consumption.
A trailer can also serve as an alternative to the standard thumbnail, a still image that represents a video in the query result list. It could potentially better capture the relevant contents of the full video than a single thumbnail image.
The query-based summarization done by our system has two main objectives. First, the trailer will capture a \emph{semantic} match between the query and the video frames that goes beyond simple entity matching.
For instance, for a query \emph{racecar}, a frame containing a \emph{car driving on a racetrack} will be more relevant than a frame containing a \emph{stationary car}.
We achieve this semantic match via the use of entity embeddings \cite{levy2014dependency}.
Second, the trailer will be visually attractive. For instance, we will prefer frames containing visually prominent, clear depictions of relevant content. We will also prefer summaries that have smooth contiguous frame transitions, similar to human-edited movie trailers.
The overall approach -- combining semantic match and visual similarities -- is outlined in Figure~\ref{fig:systemoverview}. In summary, the main contributions of this paper are:
\begin{enumerate}
\item A robust approach for semantically matching keyword queries to video frames, using entity embeddings trained on non-video corpora.
\item A scalable method for detecting prominent visual clusters within videos based on label propagation.
\item An efficient and effective graph-based approach that combines semantic and visual signals to construct trailers, which are both relevant and visually appealing.
\item Detailed empirical evaluation of the proposed method with comparison to several baseline systems.
\end{enumerate}
\section{Related Work}
Previous work on video summarization has taken many different approaches to the problem and interpretations of the task. The task of summarizing a video can be interpreted as creating a textual description, a story board, a graphical representation or a \emph{video skim} that captures the content of a video appropriately \cite{money2008video}.
In this study we address the task of constructing a \emph{video skim}, which is done by taking the video and skipping all unimportant parts. Thus all content in the resulting skim comes from the video and is played in the same chronological order. The main difference from this prior work is that our summaries are query-based.
Approaches to computing the prominence of a video fragment are widely varied. Some use only visual features, e.g. the model only adds a fragment if it is visually distinct from already added fragments \cite{zhao2014quasi,almeida2013online}. Others cluster all the frames in the video based on their visual similarity~\cite{carvajal2014summarisation}, and subsequently compose a summary by including a single fragment from each cluster. All of these approaches attempt to capture a video by covering all of its visually distinct parts.
Conversely,~\cite{gong2014diverse} propose a supervised system that learns from human created summaries. Furthermore, by using a collection of videos belonging to a very narrow category one could train a model to recognize the fragments that are the most characteristic of their category \cite{potapov2014category}. Moreover, if no such videos are available, the model can be trained on web images of the same category \cite{khosla2013large}.
Our method contrasts with these approaches, as we incorporate a semantic interpretation of the video segments, as well as use the visual information of the fragments. In addition, our approach scales much better, as it is not restricted to a specific video category.
Existing work has also looked into using higher level concepts to construct summaries. For instance, by recognizing events, summaries can better address user-issued event queries \cite{wang2012event}. In the same vein, detected events can be used to infer causality and construct a story-based summary \cite{lu2013story}.
More similar to our method is previous work which recognizes ontology concepts in sports videos. A rule based method is then used to detect and include the meaningful events within the video in the summary \cite{ouyang2013ontology}. Comparable to these methods, our system computes a semantic interpretation of the video content, however we use entity embeddings, which avoids the limitation of rigid event ontologies.
Although not used for summarization, semantic embeddings have been trained for video frames. These can embody a temporal aspect, as the embedding of a frame can also be based on the preceding and following frames \cite{ramanathan2015learning}. Similar embeddings have been used for thumbnail detection, where embeddings can be used to find the frame that is the most characteristic of the video's content \cite{liu2015multi}.
The novelty of our approach is that it uses embeddings to find the most relevant segments with respect to a keyword query and uses them for video summarization. Additionally, it is expected to create visually appealing summaries, by including visual features.
Lastly, text-based summarization methods for documents and other textual content have been long studied in the natural language processing literature. However, all these methods have primarily focused on summarizing text documents or user generated written content~\cite{submodularDispersion:2013,wangEtAl:2014}. Graph-based methods have also been used in the past for summarization~\cite{Ganesan:2010}, but in a very different context. For a detailed survey on existing text summarization techniques, see~\cite{NenkovaM12}.
\section{Method}
\label{sec:method}
In this section we propose two models for semantic query-based video summarization: the first uses only semantic information from the video, whereas the second incorporates both semantic and visual information.
\begin{figure*}[tb]
\centering
\includegraphics[scale=0.22]{trailergraph.pdf} \hspace{15mm} \includegraphics[scale=0.22]{trailerrerankinggraph.pdf}
\caption{\small Query-video graph used for summarization before (Left) and after (Right) discarding all segment nodes except for the hundred most strongly semantically connected to the query node. Query $q$ and segments $F$ from the video are represented by nodes; edges are based on visual similarity between $(F_i, F_j)$ and semantic similarity between $(q, F_i)$. For coherency, all segments besides the first four have been collapsed.}
\label{fig:fullgraph}
\end{figure*}
Both models take as input a query $q$ and a video $V$; the query has been issued by a user, and the video is judged to be relevant by a video retrieval algorithm. Each input video is first divided into one-second segments, which are eventually used to compose the trailer summary. Working with these segments makes the final summary more comprehensible, as a second is enough time for the viewer to perceive an included clip. Furthermore, it makes the system more scalable, as computationally expensive operations only have to be run every second instead of once for every frame in the full video. Both systems rank all the segments of a single video based on the segment content and the user query. The summary is then generated by taking the top $k=20$ ranked segments and stitching them together in order of chronological appearance in the full video. By keeping the ordering of the original video, the resulting trailer is expected to be more coherent; additionally, the generated summary is the equivalent of a video skim.
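This segment-selection step can be sketched as follows (a minimal illustration with hypothetical scores; in the actual systems, the per-segment relevance scores come from the matching models described below):

```python
def make_trailer(scored_segments, k=20):
    """scored_segments: list of (segment_index, relevance_score) pairs,
    one pair per one-second segment of the video."""
    # keep the k highest-scoring segments ...
    top_k = sorted(scored_segments, key=lambda p: p[1], reverse=True)[:k]
    # ... and stitch them back together in chronological order
    return [idx for idx, _ in sorted(top_k)]

scores = [(0, 0.1), (1, 0.9), (2, 0.4), (3, 0.8), (4, 0.2)]
print(make_trailer(scores, k=3))  # -> [1, 2, 3]
```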
\subsection{Query Representation}
All our models are based on the intuition that segments capturing the same semantic content as the query should be included. Thus, the model estimates how similar the content in the query and the segment are, and ranks them accordingly. The first step in similarity estimation is to process the query $q$ and map it to a universal representation of entities $e_q \in E_q$ (and their corresponding confidence scores $w_{e_q}$), extracted from a knowledge base such as Wikipedia.
\subsection{Direct Matching}
\label{sec:method:direct}
Given the entities $E_q$ in the query, a straightforward approach is to use an image-processing model to recognize the given entities in the frame image, e.g. a deep learning architecture for concept detection in images \cite{inception,he2015spatial}. Then, the query-segment matching is simply a confidence of the concept detection model in detecting the query entities in the segment. However, this direct matching approach has several major drawbacks.
First, the number of concepts that a state-of-the-art detection model can recognize is limited to 22,000 by the largest publicly available corpus \cite{russakovsky2015imagenet}, an extremely small subset of the entities a query can express. Moreover, processing the dataset of query-video pairs gathered for our experiments in Section \ref{sec:dataset}, which contains over 34,000 pairs, revealed that 57\% had no entity overlap.
Second, many summaries should contain segments that do not directly display the entities in the query but are relevant nonetheless. For instance, a good summary for the entity \emph{turkey} could contain a segment of \emph{turkey stuffing} being prepared, even though no \emph{turkey} is actually visible. However, direct detection models are not robust enough to recognize such related concepts.
Therefore, since direct matching models cannot be applied to the majority of the summarization cases, we instead focus our attention on more advanced approaches in the rest of the paper. We present two such methods next.
\subsection{Semantic Matching}
\label{sec:method:semantic}
As in the previous method, we first apply the Inception model~\cite{inception} -- state-of-the-art deep neural network architecture, that is trained to detect a large number of concepts in images -- on each frame $F_i$ in the segment. The model outputs a set of entity concepts $E_{F_i}$ with confidence scores $w_{e_f}$ for how certain the system is that each concept $e_f \in E_{F_i}$ is present in the segment $F_i$.
However, instead of directly matching concepts between the sparse entity mappings $E_{F_i}$ and $E_q$, we compute a dense semantic embedding representation for both the query $q$ and a given video frame $F_i$ using their entity mappings. In other words, we replace each concept $e$ with its pre-computed semantic embedding vector $\mathcal{S}_e$. Then, a semantic representation of the segment $F_i$ is given by
$$\mathcal{S}_{F_i} = \frac{1}{|E_{F_i}|}\sum_{e_f \in E_{F_i}} w_{e_f}\mathcal{S}_{e_f}$$
Similarly, we represent the query $q$, by weighted average of embeddings for its entities to create a semantic representation $\mathcal{S}_q$.
Semantic embeddings at the entity level are computed using the recent approach from Mikolov et al.~\shortcite{mikolov13b}, and trained on a large corpus of text documents from Wikipedia. The embedding model can be learned in an unsupervised manner, thus the amount of training data can be acquired at magnitudes greater than labeled data available for training visual recognition systems. This allows the embedding model to be applicable for a substantially larger number of entities. Recent work reports 175,000 embeddings can be trained from only using the English Wikipedia \cite{levy2014dependency}.
Finally, the similarity between the query $q$ and segment $F_i$ can be estimated using the cosine similarity of their associated embeddings $\mathcal{S}_q, \mathcal{S}_{F_i}$ as follows:
\begin{align*}
&\sum_{e_q \in E_{q}}\sum_{e_f \in E_{F_i}} w_{e_q} w_{e_f}\, cosine(\mathcal{S}_{e_q},\mathcal{S}_{e_f})\\
&\quad= cosine(\mathcal{S}_q, \mathcal{S}_{F_i})
\end{align*}
The ranking of segments $F_i$ is based on the estimated semantic similarity to $q$, where the most similar segment is added first to the summary.
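A minimal sketch of this semantic matching in Python (the embedding table, entity weights, and two-dimensional vectors are invented placeholders; a real system would use pre-trained high-dimensional entity embeddings):

```python
import math

def average_embedding(entities, embeddings):
    """entities: dict mapping entity -> confidence weight w_e."""
    dim = len(next(iter(embeddings.values())))
    vec = [0.0] * dim
    for e, w in entities.items():
        for i, x in enumerate(embeddings[e]):
            vec[i] += w * x               # weighted sum of entity vectors
    return [x / len(entities) for x in vec]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# toy embedding table and detections (hypothetical values)
emb = {"racecar": [0.9, 0.1], "racetrack": [0.8, 0.2], "kitchen": [0.1, 0.9]}
S_q = average_embedding({"racecar": 1.0}, emb)
segments = {"seg_car": {"racetrack": 0.9}, "seg_food": {"kitchen": 0.8}}
ranked = sorted(segments,
                key=lambda f: cosine(S_q, average_embedding(segments[f], emb)),
                reverse=True)
print(ranked)  # -> ['seg_car', 'seg_food']
```

Note that the \emph{racetrack} segment ranks first for the query \emph{racecar} even though the exact query entity was never detected, which is precisely the robustness the direct-matching approach lacks.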
\subsection{Graph-Based Matching}
\label{sec:method:graph}
The semantic matching approach provides a robust method of estimating the relevance of segments, however it only considers semantic similarity and treats all the segments independently. Next, we introduce a second graph-based approach that models the intuition that content {\it visually} prominent in a video must be relevant to the topic it covers. In other words, besides the semantic similarity between the query and segments, the prominence of the content in a segment should also be used to estimate its relevance. We estimate prominence using visual information, thus if large parts of the video look visually similar we will assume they cover relevant content.
To effectively combine the semantic and visual signals in our system, we use Expander, an efficient graph-based learning framework based on label propagation~\cite{expander}. The framework is typically used for semi-supervised learning scenarios over graph structures~\cite{Bengio+al-ssl-2006,expander,emailcateg2016}. Usually, the weight of the edge between two nodes indicates their similarity, and true labels are known for only a subset of the nodes. The approach relies on the assumption that nodes that are very similar are also very likely to have the same labels. Accordingly, the model iterates over the graph several times; at each iteration, all nodes acquire the labels of the nodes they are connected to. Each node keeps a confidence score for every label, based on how strongly it is connected to the nodes it acquired the label from and their corresponding confidence scores. In this manner, the labels are propagated through the graph at each iteration until a stable distribution of labels is reached.
The typical use of this method is considered semi-supervised, as only a fraction of the true labels need to be known and the remaining are not learned from training data but directly inferred from the graph structure.
Our model uses a graph for each query-video pair ($q, V$) to be summarized. Each segment $F_i$ extracted from the video $V$ is represented by a node in the graph; finally, there is a node representing the query $q$. The values of the edges between the query node and the segment nodes are computed using the semantic matching approach; thus these edges represent the semantic similarity $cosine(\mathcal{S}_{q}, \mathcal{S}_{F_i})$. The edges between the segments, on the other hand, are computed from their visual similarity. This is done by sampling a frame from each segment and calculating their resemblance $cosine({\mathcal{V}_{F_i}, \mathcal{V}_{F_j}})$, where $\mathcal{V}_{F_i}$ is a visual embedding of the frame $F_i$, computed using a hidden layer representation of the frame image within the deep learning network described earlier. A diagram of the resulting graph is displayed in Figure~\ref{fig:fullgraph}.
We learn a label assignment $\hat{L}$ on this graph that minimizes the following convex objective function:
\vspace{-0.05in}
\begin{small}
\begin{align}
\mathcal{C}(\bf \hat{L}) =&& \sum_{F_i \in V} w_{qF_i}|| \hat{L}_q - \hat{L}_{F_i} ||_2^2 \nonumber \\
&+& \sum_{F_i, F_j \in V} w_{ij} || \hat{L}_{F_i} - \hat{L}_{F_j} ||_2^2 \nonumber \\
&+& \sum_{F_i \in V} || L_{F_i} - \hat{L}_{F_i} ||_2^2
\label{expander:obj}
\end{align}
\end{small}
where $w_{qF_i}, w_{ij}$ represent the {\it semantic} and {\it visual} similarity scores as defined above; $\hat{L}$ is the learned label distribution for query and segment nodes in the graph; and $L_{F_i}$ is the seed label (i.e., identity) on the video segment nodes.
The segment nodes are each assigned a unique ``seed'' label (i.e., their identity). We optimize the above objective function using the iterative streaming algorithm described in \cite{expander}. After running label propagation, we consider the confidence scores of the labels acquired by the query node, $\hat{L}_q$, and rank the segments by how strongly their corresponding labels were propagated to the query node. In other words, the output label scores on the query node $\hat{L}_q$ indicate how well each segment is connected to the query in the graph. A segment can be strongly connected because it is semantically similar to the query or because it is visually similar to other segments that are strongly connected. Note that, contrary to the typical usage of label propagation, our approach is in fact unsupervised, as the initial labels can be assigned automatically. The streaming Expander algorithm permits efficient scaling to thousands or millions of frames for long videos while maintaining constant space complexity.
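The propagation step can be sketched as follows. This is a simplified, dense pure-Python re-implementation for illustration only; the actual system uses the streaming Expander solver of \cite{expander}, and the function and variable names below are our own, not part of that framework.

```python
def rank_segments(sem_sim, vis_sim, iters=30, alpha=1.0):
    """Rank segments by label propagation on the query-video graph.

    sem_sim: list of semantic similarities w_{qF_i} (query vs. segment i).
    vis_sim: n x n matrix of visual similarities w_{ij} between segments.
    Returns segment indices ranked by the label mass reaching the query.
    """
    n = len(sem_sim)

    def w(u, v):
        # Edge weight between graph nodes: node 0 is the query, 1..n segments.
        if u == v:
            return 0.0
        if u == 0:
            return sem_sim[v - 1]
        if v == 0:
            return sem_sim[u - 1]
        return vis_sim[u - 1][v - 1]

    # Seed labels: every segment node starts with its own identity label.
    seeds = [[1.0 if u - 1 == lab else 0.0 for lab in range(n)]
             for u in range(n + 1)]
    L = [row[:] for row in seeds]
    for _ in range(iters):
        nxt = []
        for u in range(n + 1):
            deg = sum(w(u, v) for v in range(n + 1)) + alpha
            # Average the neighbours' label distributions, pulled back
            # toward the node's own seed labels (third term of the objective).
            nxt.append([(sum(w(u, v) * L[v][lab] for v in range(n + 1))
                         + alpha * seeds[u][lab]) / deg
                        for lab in range(n)])
        L = nxt
    q_scores = L[0]  # label-confidence scores at the query node
    return sorted(range(n), key=lambda i: -q_scores[i])
```

A segment thus accumulates score either through its direct semantic edge to the query or through visual edges to segments that are themselves strongly connected, mirroring the discussion above.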
We could in principle ignore the semantic edges in the graph completely and propagate only the frame-ids over the visual edges. This would be equivalent to performing visual clustering; we do not consider this model here because it ignores the query and is therefore unsuited for this task. Similarly, the edges could be weighted so that the model values either the semantic or the visual signal more. We can also easily incorporate {\em diversity} among ranked results, as in traditional summarization approaches, by simply converting the visual similarity signal into a distance metric.\footnote{Different graph configurations were tried but are not included here for brevity.} Furthermore, the generic setup of the method allows it to be easily extended with novel signals in the future.
Though the intuition behind the previous graph construction is reasonable, preliminary results revealed some practical problems with this model. Namely, many videos contain visuals that recur throughout the video but are not relevant for a summary. For instance, news shows or documentaries can feature a presenter who talks periodically throughout the video. These segments will be very similar visually despite being the least interesting parts to include in a summary. This problem is especially prevalent in online video content, which often features an almost static outro where users are asked to leave favorable feedback and watch more videos. Because these outros usually consist of text on a near static background, they form very strong clusters in the graph, which boosts these segments into the summary.
To counter these issues, we change the model to consider only the hundred most semantically similar segments, thereby yielding a {\it graph-based reranking} model. The nodes representing the other segments, along with their edges, are disregarded completely, as can be seen in Figure~\ref{fig:fullgraph}. The intuition behind this reranking model is that content prominent among the relevant parts of a video is expected to be a good addition to a summary, while the irrelevant frames are automatically discarded.
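The reranking restriction amounts to a simple top-$k$ filter applied before the graph is built. A minimal sketch (the function name is our own, illustrative choice; $k=100$ in our setup), whose outputs would then be fed to the label propagation step:

```python
def rerank_top_k(sem_sim, vis_sim, k=100):
    """Keep only the k segments most semantically similar to the query.

    sem_sim: list of query-segment semantic similarities.
    vis_sim: full n x n visual similarity matrix between segments.
    Returns the kept segment indices plus the reduced similarity inputs,
    ready to be passed to the label propagation routine.
    """
    n = len(sem_sim)
    # Indices of the k segments with the highest semantic similarity.
    keep = sorted(range(n), key=lambda i: -sem_sim[i])[:k]
    sub_sem = [sem_sim[i] for i in keep]
    sub_vis = [[vis_sim[i][j] for j in keep] for i in keep]
    return keep, sub_sem, sub_vis
```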
\section{Experiments}
\label{sec:experiments}
In this section, we detail our experiments designed to evaluate the performance of our models. Section~\ref{sec:baselines} introduces two baselines for comparison; we then discuss the data used for evaluation and our experimental setup in Section~\ref{sec:dataset} and Section~\ref{sec:setup}, respectively.
\subsection{Baselines}
\label{sec:baselines}
To properly investigate the performance of the models introduced in Section~\ref{sec:method}, we introduce the {\bf uniform} baseline model for comparison. Like our models, the uniform baseline uses one-second segments; however, instead of judging their relevance, it selects segments according to a uniform distribution.
As a result, each segment is equally likely to appear in the generated summary.
Because uniform sampling covers all parts of the video equally, the summary is expected to represent the whole video. Since the video is selected using a state-of-the-art retrieval method, its content is expected to be very relevant to the topic, and thus the resulting summary is expected to be just as relevant to the query.
However, since this baseline takes into account neither the content of the video nor the query,
it is expected to fail on videos that spend disproportionate time on some topics or that contain material unrelated to the query. Both situations are unlikely if a strong retrieval model was used or if the video is short.
Additionally, we introduce a second baseline: the first twenty seconds model ({\bf first-20}). This baseline creates a summary of a video by taking its first twenty seconds. This simple model is based on two intuitions.
Firstly, the generated summaries keep the coherency of the original video, because each summary is an unaltered clip into which no \emph{film cuts} were introduced. Secondly, many videos start with an introduction of their topic, usually to gain the viewer's attention. Accordingly, this baseline attempts to select a single clip that gives an overview of the video.
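Both baselines are trivial to implement; the following sketch (illustrative function names of our own, with one-second segments indexed by integers) makes the two selection rules explicit:

```python
import random

def uniform_baseline(num_segments, summary_len=20, seed=None):
    """Pick summary_len one-second segments uniformly at random,
    returned in temporal order."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(num_segments), summary_len))

def first20_baseline(num_segments, summary_len=20):
    """Take the first summary_len seconds of the video as the summary."""
    return list(range(min(summary_len, num_segments)))
```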
\subsection{Dataset}
\label{sec:dataset}
Since our proposed system uses a query and a matching video, we make use of YouTube to collect these query-video pairs. Because YouTube receives millions of user queries per day and has a large variety of content, we consider it a good fit to test the effectiveness of our system.
We sampled 1800 of the most commonly issued queries; for each query, twenty matching videos were sampled uniformly from the top hundred search results. The summarization system was then applied to the resulting 34,725 videos (note that some videos are matched to multiple queries).
Sampling of videos was limited to those with a running length greater than ten minutes, which ensures that summarization is not a trivial task.
In addition, video-query pairs with an overlap in extracted entities were discarded. We chose to discard these videos to test the robustness of our system, since this limitation makes the direct match approach (described in Section~\ref{sec:method:direct}) impossible. As a result, the data only contains instances where the semantic similarity between segments and the query cannot be computed directly. As described in Section~\ref{sec:method:semantic}, our system can handle these entity mismatches by using semantic embeddings. We believe this focus on the mismatching cases is warranted, as we consider wide applicability more important than good performance on a particular video subset.
Lastly, since the system was evaluated using crowdsourcing, we were unable to use the entire set of summarized query-video pairs; instead, a subset of 127 query-video pairs was used for the crowdsourced evaluation.
\subsection{Experimental Setup}
\label{sec:setup}
The quality of a summary is difficult to judge objectively. Consequently, we used the Amazon Mechanical Turk platform to perform a crowdsourced experiment, with three raters per task. Our comparison of models and baselines is based on the crowdsourced assessments of generated summaries.
However, the task of judging a single summary proved to be very hard for most raters; we found that asking for a preference between two summaries is a more comprehensible task. Accordingly, the task consisted of a single question: ``\emph{Someone is looking for a video about }[query], \emph{which of the following two 20 second videos is best to show?}" followed by two side-by-side summary trailers: one generated by a model, and another by a baseline, their order randomized.
A judgement was collected for the combination of each query-video pair, model and baseline, giving us a total of 508 judgments.
However, we noticed that some raters rushed through the task to optimize for the monetary incentive. For this reason we disregarded any judgement made in under 30 seconds, bringing the number of judgements down to 449. Significance testing of the preferences between the systems was done by applying a two-sided Wilcoxon sign test.
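For binary preference counts, the sign test reduces to a binomial tail computation under the null hypothesis that either system is preferred with probability $1/2$. A minimal illustrative sketch (our own implementation, not the exact statistical package used for the paper's numbers):

```python
from math import comb

def sign_test_p(wins_a, wins_b):
    """Two-sided sign test p-value for preference counts wins_a vs. wins_b.

    Under H0, each judgement prefers either system with probability 1/2,
    so the smaller count follows Binomial(n, 0.5); we double the lower
    tail for a two-sided test and cap the result at 1.
    """
    n = wins_a + wins_b
    k = min(wins_a, wins_b)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```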
\section{Results}
\label{sec:results}
In this section, we present the results of our experiment described in Section~\ref{sec:experiments}, provide several example summaries and evaluate our proposed summarization method.
\subsection{Experimental results}
\label{sec:experimentalresults}
The results of our crowdsourcing experiment are displayed in Table~\ref{tab:preferences}. There is a clear preference for both models over the first twenty seconds baseline. Since the differences are statistically significant ($p~<~0.01$), we conclude that both our models create better summaries than this baseline. When compared to the uniform baseline, the graph-based approach yields more favorable summaries than the semantic-only model. However, the overall preference percentages for the two models over the uniform baseline are not as high. There could be several reasons for this, e.g., the task is not easy for people who are not familiar with video summarization.
Furthermore, the videos may not be appropriate for summarization; to investigate this further, judgements were split based on video category and length. Table~\ref{tab:preferences} shows the preferences for videos in the Gaming and Animation categories (29\% of videos) and all others. These categories were chosen because they are prevalent on YouTube and are expected to be less suited for summarization.
The results show that, compared to the uniform baseline, both models perform better outside the Gaming and Animation categories. Additionally, results split by video length are also displayed in Table~\ref{tab:preferences}; we chose to split at 20 minutes, as close to half of the videos (44\%) are under 20 minutes. Here we see that the semantic model performs substantially better on videos over 20 minutes, with a 13\% difference compared to the uniform model, while the graph-based model performs almost the same, with a 1\% difference. These results suggest that certain types of videos are more suited for auto-generating summary trailers.
In addition to the previous experiment, we performed a more detailed study on a smaller video dataset to better understand the differences between models. This experiment was also crowdsourced and showed judges a single summary together with a multiple-choice questionnaire; videos were sampled, and judgements were gathered for their summaries created by the uniform baseline and the graph-based model. In total 60 judgements were collected; the questions and results are displayed in Table~\ref{tab:questionnaire}, with answers ranging from 1 (most negative) to 5 (most positive). The questionnaire gives a clear signal that the graph-based method creates summary trailers that are visually more attractive than those of the uniform baseline.
\vspace{-1em}
\begin{table}[bth]
\small
\begin{tabular*}{\columnwidth}{ l c c c }
\toprule
\bf Question & \bf uniform & \bf \specialcell{graph\\-based} & \bf $\Delta$ \\
\midrule
\specialcell{Rate the visual quality\\of the summary,\\how good does it look?} & 3.54 & 3.94 & +11.16\% \\
\midrule
\specialcell{For query X, how well\\does the summary\\capture all relevant parts\\of the video?} & 4.38 & 4.47 & +1.87\% \\
\midrule
\specialcell{For query X, how relevant\\is the summary?} & 4.15 & 4.27 & +2.72\% \\
\bottomrule
\end{tabular*}
\vspace{-2.4pt}
\caption{\small Average results of questionnaire, scores range from 1 (most negative) to 5 (most positive).}
\label{tab:questionnaire}
\end{table}
\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{creativecommonsvideoskimming.pdf}
\vspace{-1.9em}
\caption{\small Summaries created by the uniform, semantic and graph-based models for the queries: \emph{frogs}, \emph{salmon pasta} and \emph{volvo P1800}. Visualized by sampling a frame every 2 seconds.}
\vspace{-2.4em}
\label{fig:videoskim}
\end{figure}
\vspace{-1.8em}
\subsection{Example summaries}
\label{sec:examplesummaries}
To further investigate the effects of using different models, we display example summaries in Figure~\ref{fig:videoskim}, obtained by applying the different models to the same three query-video pairs.
For this illustration the uniform baseline, semantic model and graph-based model were applied; the first twenty seconds baseline was omitted as
it performs significantly worse according to the results in Section~\ref{sec:experimentalresults}.
Three videos were sampled from different categories to illustrate robustness and diversity. The selected query-video pairs are: \emph{frogs}, an animal documentary; \emph{salmon pasta}, an amateur cooking video; and \emph{volvo P1800}, an informational video regarding a famous car model\footnote{Videos are available under the Creative Commons licence at: youtu.be/w-AItfioqlw, youtu.be/tR9ZtaGtCAM and youtu.be/FwCjOakOMKE}.
The uniform summaries cover the videos passably; however, they contain many shots unrelated to the query. Most notably, all uniform summaries contain shots of people who are presenting the video but are not relevant to the query.
In contrast, the semantic summaries only contain shots related to the query. For the first video we see that the semantic model has only included shots containing frogs, for the \emph{salmon pasta} video only shots of fish are included, and for the \emph{volvo P1800} video the summary consists of only shots that clearly display cars. Therefore we conclude that the semantic model can recognize semantic similarity robustly, as it found relevant shots effectively despite the fact that no direct annotations of the query were available in the video.
Lastly, we have the graph-based summaries; as expected, they are very similar to those of the semantic model. The differences are important though: the \emph{frogs} summary displays shots of more different frogs, which adds diversity to the video. The model picked up on shots where the frogs are less directly recognizable (for instance due to camouflage, or because only the head is displayed) through their visual similarity to semantically relevant shots. In the \emph{salmon pasta} summary, shots of the vegetable sauce are included; the model inferred their relevance from their prominence in the video. The semantic model did not include these, as \emph{salmon pasta} is defined by its fish, yet with respect to the cooking video this seems to be a good inclusion. Finally, the \emph{volvo P1800} summary displays more shots showing the outside of the car. The model picked up on interesting shots by their prominence, and the result is a more visually appealing summary.
These examples show a clear difference between the uniform baseline and our models. This contrasts with some of the results in Section~\ref{sec:experimentalresults}, where the preference differences between our models and the uniform model were not as pronounced. This suggests that query-based video summarization is a difficult task, and that visual summary evaluation is an interesting direction for future work.
\section{Conclusion}
We presented a system for query-based video summarization that effectively combines semantic interpretations and visual signals of the video to construct summary trailers.
Despite the difficulty of evaluating this complex task, we show that the new approach outperforms the baselines in terms of summarization quality as judged by human raters. We also show several examples which demonstrate that the approach of combining embeddings with frame annotations allows for robust semantic detection of relevant segments.
Moreover, our proposed graph-based model is able to recognize parts of the video that are both relevant to the query and visually prominent in the video. Future research could expand this approach by applying the graph-based model over several related videos to find latent topics using their visual similarity or to create multiple summary views per video each focused on a different topic. Finally, the usage of query-based summaries as dynamic thumbnails seems a promising direction for research.
\section{Introduction}
The number one issue in Higgs physics is the solution
of the hierarchy / fine-tuning problems that arise in the Standard
Model and Higgs sector extensions thereof from quadratically divergent
one-loop corrections to the Higgs mass. In fact, this ``quadratic
divergence fine-tuning'' is only one of three fine-tunings that we
will discuss. The second kind of fine-tuning is that sometimes called
``electroweak fine-tuning''; it is the fine-tuning associated with
getting the value of $m_Z$ correct starting from GUT-scale parameters
of some model that already embodies a solution to the quadratic
fine-tuning problem. A third type of fine-tuning will emerge in the context of
the next-to-minimal supersymmetric model solution to avoiding electroweak
fine-tuning.
Were it not for the quadratic divergence fine-tuning problem, there would be
nothing to forbid the SM from being valid all the way up to the Planck
scale. The two basic theoretical constraints on $m_{\hsm}$ as a function
of the scale $\Lambda$ at which new physics enters are:
\bit
\item the Higgs self coupling should not blow up below scale $\Lambda$
--- this leads to an upper bound on $m_{\hsm}$ as a function of $\Lambda$.
\item the Higgs potential should not develop a new minimum at large
values of the scalar field of order $\Lambda$ --- this leads to a
lower bound on $m_{\hsm}$ as a function of $\Lambda$.
\eit
The SM remains consistent with these two constraints all the way up to
$\Lambda\sim\mplanck$ if $130\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}}m_{\hsm}\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 180~{\rm GeV}$. This is shown in
Fig.~\ref{trivialityfinetuning}.
\begin{figure}[h!]
\includegraphics[width=3in]{triviality.ps}\includegraphics[width=3in]{kolda.ps}
\caption{Left: Triviality and global minimum constraints
on $m_{\hsm}$ vs. $\Lambda$ from Ref.~\cite{Riesselmann:1997kg}. Right: Fine-tuning constraints on $\Lambda$, from Ref.~\cite{Kolda:2000wi}.}
\label{trivialityfinetuning}
\end{figure}
However, it is generally believed that the SM cannot be the full
theory all the way up to $\mplanck$ due to quadratic divergence of loop
corrections to the Higgs mass. Because of this divergence, a light
Higgs is not ``natural'' in the SM context given the large
``hierarchy'' between the $100~{\rm GeV}$ and $M_{\rm P}$ scales. Assuming
that the SM is valid up to some large scale $\Lambda$, to obtain the
low Higgs mass favored by data (and required by $WW$ scattering
perturbativity) requires an enormous cancellation between top loop
corrections (as well as $W$, $Z$ and $h_{\rm SM}$ loops) and the bare Higgs
mass of the Lagrangian. At one-loop, assuming
cutoff scale $\Lambda$,
\beq
m_{h_{\rm SM}}^2=m_0^2+{3\over 16\pi^2 v_{SM}^2}
(2m_W^2+m_Z^2+m_{\hsm}^2-4m_t^2)\Lambda^2
\label{quaddiv}
\eeq
where $m_0^2=2\lambda v_{SM}^2$ with $v_{SM}\sim 174~{\rm GeV}$. (Here $V\ni
{1\over 2} \lambda [(\Phi^\dagger\Phi)^2 -2 v_{SM}^2 (\Phi^\dagger \Phi)]$ and
$\vev{\Phi}=v_{SM}$.) Assuming no particular connection between the
contributions, we must fine-tune $m_0^2$ to cancel the $\Lambda^2$
term with a precision of roughly one part in $10^{32}$ if
$\Lambda=M_{\rm P}$. Further, this requires that the Higgs
self-coupling strength, $\lambda$, be very large and
non-perturbative. Keeping only the $m_t$ term and writing $\Lambda\to
\Lambda_t$, one measure of fine-tuning is:
\beq
F_t(m_{\hsm})=\left|{\partial\,\delta m_{\hsm}^2\over \partial \Lambda_t ^2} {\Lambda_t ^2\over
m_{\hsm}^2}\right|={3\over 4\pi^2}{m_t^2\over v_{SM}^2}{\Lambda_t ^2\over m_{\hsm}^2}\equiv K {\Lambda_t^2\over m_{\hsm}^2}\,.
\eeq
Given a maximum acceptable $F_t$, new physics must enter at or below
the scale
\begin{equation}
\Lambda_t \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} {2\pi v_{SM}\over \sqrt 3 m_t} m_{\hsm} F_t^{1/2}\sim 400 \; \mbox{GeV} \left( {\frac{m_{\hsm}}{115 \; \mbox{GeV}}}
\right){F_t^{1/2}}.
\label{eq:LSM}
\end{equation}
$F_t>10$, corresponding to fine-tuning parameters with a precision of
better than 10\%, is deemed problematical. For $m_{\hsm}\sim 100~{\rm GeV}$, as
preferred by precision electroweak data, this implies new physics
somewhat below $1~{\rm TeV}$, in principle well within LHC reach.
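Eq.~(\ref{eq:LSM}) is easy to check numerically. The sketch below assumes $m_t = 174~{\rm GeV}$ and $v_{SM} = 174~{\rm GeV}$ for illustration; for $m_{\hsm}=115~{\rm GeV}$ and $F_t=1$ the bound comes out near the quoted $\sim 400~{\rm GeV}$.

```python
from math import pi, sqrt

# Assumed illustrative inputs in GeV.
m_t, v_sm = 174.0, 174.0

# K such that F_t = K * Lambda_t^2 / m_h^2, from the fine-tuning measure.
K = 3.0 / (4.0 * pi**2) * (m_t / v_sm)**2

def lambda_t_max(m_h, F_t):
    """Largest new-physics scale Lambda_t (GeV) allowed for fine-tuning F_t,
    per the bound Lambda_t <~ (2 pi v_SM / (sqrt(3) m_t)) * m_h * sqrt(F_t)."""
    return 2.0 * pi * v_sm / (sqrt(3.0) * m_t) * m_h * sqrt(F_t)
```

Inverting the bound reproduces the fine-tuning measure exactly, i.e. $K\Lambda_t^2/m_{\hsm}^2 = F_t$ at the boundary, and $F_t=10$ pushes the scale above $1~{\rm TeV}$, as stated in the text.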
\section{Options for Delaying New Physics}
Given that by definition new physics enters at scale $\Lambda$,
it is generically interesting to understand how the
quadratic divergence fine-tuning problem can be delayed to $\Lambda$
values substantially above $1~{\rm TeV}$, thereby making LHC new-physics
signals more difficult to detect. Two possible ways are the following.
\ben
\item $m_{\hsm}$ could obey the ``Veltman'' condition
\cite{Veltman:1980mj} (see also \cite{Fang:1996cn} and
\cite{Scadron:2006dy}),
\beq
m_{\hsm}^2=4m_t^2-2m_W^2-m_Z^2\sim (317~{\rm GeV})^2\,,
\label{veltcond}
\eeq
for which the coefficient of $\Lambda^2$ in Eq.~(\ref{quaddiv})
vanishes. However, it turns out that at higher loop order, one must
carefully coordinate the value of $m_{\hsm}$ with the value of $\Lambda$
\cite{Kolda:2000wi}. Just as we do not want to have a fine-tuned
cancellation of the two terms in Eq.~(\ref{veltcond}), we also do not
want to insist on too fine-tuned a choice for $m_{\hsm}$ (in the SM,
there is no symmetry that predicts any particular value). The
right-hand plot of Fig.~\ref{trivialityfinetuning} shows the result
after taking this into account. The upper bound for $\Lambda$ at
which new physics must enter is largest for $m_{\hsm}\sim 200~{\rm GeV}$ where
the SM fine-tuning would be $10\%$ if $\Lambda\sim 30~{\rm TeV}$. At this
point, one would have to introduce some kind of new physics. However,
we already know that there is a big problem with this approach --- the
latest $m_t$ and $m_W$ values when combined with LEP precision
electroweak data require $m_{\hsm}<160~{\rm GeV}$ at 95\% CL.
\item An alternative approach to delaying quadratic divergence
fine-tuning is to employ the multi-doublet model of
\cite{Espinosa:1998xj}. In this model, the $ZZ$ coupling is shared
among (perhaps many) Higgs mass eigenstates because the SM vev is
shared among the corresponding Higgs fields. A bit of care in
setting the scenario up is needed to avoid seeing other Higgs while
at the same time satisfying the precision EW constraint:
\bea \sum_i \frac{v_i^2}{v_{SM}^2} \ln m_{h_i} \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} \ln
\left( 160 ~{\rm GeV} \right),
\eea
where $\vev{\Phi_j}\equiv v_j$ and $\sum_j v_j^2=v_{SM}^2 \sim (175
~{\rm GeV})^2$ . If you don't want LEP to have seen any sign of a Higgs
boson, the PEW constraint can still be satisfied even if all the Higgs
decay in SM fashion, so long as the eigenstates are not too much below
$100~{\rm GeV}$ and not degenerate. But, of course, with enough $h_j$
eigenstates, Higgs decays will not be SM-like given the proliferation
of $h_j\to h_i h_i$ and $h_j\to a_ia_i$ decays. The combination of
such decays and weakened production rates for the individual Higgs
bosons would make Higgs detection very challenging at the LHC and
require a high-luminosity linear collider.
Returning to the quadratic divergence issue, we note that in the simplest
case where all $h_i$ fields have the same top-quark Yukawa, $\lambda_t$
in ${\cal L}=\lambda_t \overline t h_i t$, each $h_i$ has its top-quark-loop
mass correction scaled by $f_i^2\equiv {v_i^2\over v_{SM}^2}$ and
one gets a significantly reduced $F_t$ for each $h_i$:
\beq
F_t^i=f_i^2 F_t(m_i)=K f_i^2{ \Lambda_t^2\over m_i^2}.
\label{ftreduced}
\eeq
Thus, multiple mixed Higgs allow a much larger $\Lambda_t$ for a given
maximum acceptable common $F_t^i$. A model with $4$ doublets can have
$F_t^i<10$ for $\Lambda_t$ up to $5~{\rm TeV}$.
\een
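Both the Veltman mass of Eq.~(\ref{veltcond}) and the reduced fine-tuning of Eq.~(\ref{ftreduced}) can be checked with a few lines of arithmetic. In the sketch below the input masses are assumed illustrative values, and the $m_i \simeq 220~{\rm GeV}$ used in the test of the four-doublet case is a hypothetical eigenstate mass, not a prediction of the model.

```python
from math import pi, sqrt

# Assumed input masses in GeV (illustrative values).
m_t, m_W, m_Z, v_sm = 174.0, 80.4, 91.19, 174.0

# Veltman condition: the coefficient of Lambda^2 in the one-loop
# correction vanishes for m_h^2 = 4 m_t^2 - 2 m_W^2 - m_Z^2 (~317 GeV).
m_h_veltman = sqrt(4 * m_t**2 - 2 * m_W**2 - m_Z**2)

# Fine-tuning constant K, as in F_t = K * Lambda_t^2 / m_h^2.
K = 3.0 / (4.0 * pi**2) * (m_t / v_sm)**2

def F_t_shared(f2, m_i, lam_t):
    """Reduced fine-tuning F_t^i for a Higgs h_i carrying vev fraction
    f2 = v_i^2 / v_SM^2 (all mass inputs in GeV)."""
    return f2 * K * (lam_t / m_i)**2
```

With four equal-vev doublets ($f_i^2 = 1/4$) and eigenstate masses of a couple hundred GeV, $F_t^i$ indeed stays below 10 for $\Lambda_t$ up to roughly $5~{\rm TeV}$, whereas a single SM-like Higgs of the same mass would be fine-tuned well beyond that level.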
One good feature of delaying new physics is that large $\Lambda_t$ implies
that significant corrections to low-$E$ phenomenology from
$\Lambda_t$-scale physics ({\it e.g.}\ FCNC) are less likely. However, in the
end, there is always going to be a $\Lambda$ or $\Lambda_t$ for which
quadratic divergence fine-tuning becomes unacceptable. Ultimately we
will need new physics. So, why not have it right away ({\it i.e.}\ at
$\Lambda \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 1~{\rm TeV}$) and avoid the above somewhat ad hoc games.
This is the approach of supersymmetry, which (unlike Little Higgs or
UED or ....) solves the hierarchy problem once and for all, {\it i.e.}\ there
is no need for an unspecified ultraviolet completion of the theory. We
will return to supersymmetry momentarily.
\section{Criteria for an Ideal Higgs Theory}
Theory and experiment have led us to a set of
criteria for an ideal Higgs theory. We list these below.
\bit
\item It should allow for a light Higgs boson without quadratic
divergence fine-tuning.
\item It should predict a Higgs with SM couplings to $WW,ZZ$ and with
mass in the range preferred by precision electroweak data. The LEPEWWG
plot from winter 2008 is shown in Fig.~\ref{blueband}.
\begin{figure}
\includegraphics[width=4in,angle=0]{w08_blueband.ps}
\caption{The ``blue-band'' plot showing the preferred Higgs mass range
as determined using precision electroweak data and measured top
and $W$ boson masses.}
\label{blueband}
\end{figure}
At 95\% CL, $m_{\hsm}<160~{\rm GeV}$
and the $\Delta\chi^2$ minimum is between $80~{\rm GeV}$ and $100~{\rm GeV}$.
\item Thus, in an ideal model, the Higgs should have mass no larger
than $100~{\rm GeV}$. But, at the same time, one must avoid the LEP
limits on such a light Higgs. One generic possibility is for the
Higgs decays to be non-SM-like. The limits on various Higgs decay
modes from LEP are given in Table~\ref{lepmodes}, taken from
Ref.~\cite{Chang:2008cw}. From this table, we see that to have
$m_H\leq 100~{\rm GeV}$ requires that the Higgs decays to one of the final
three modes or something even more exotic.
\begin{table}
\small
\caption {LEP $m_H$ Limits for an $H$ with SM-like $ZZ$ coupling, but
varying decays. \label{lepmodes} }
\bigskip
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Mode & SM modes & $2\tau$ or $2b$ {\it only} & $2j$ & $ WW^*+ZZ^*$ & $\gamma\gam$ & $\emiss$ & $4e,4\mu,4\gamma$ \cr
Limit (GeV) & $114.4$ & $115$ & $113$ & $100.7$ & $117$ & $114$ & $114$? \cr
\hline
\hline
Mode & $4b$ & $4\tau$ & any (e.g. $4j$) & $2f+\emiss$ & & & \cr
Limit (GeV) & $110$ & $86$ & $82$ & $90$? & & & \cr
\hline
\end{tabular}
\end{table}
\item Perhaps the Higgs properties should be such as to predict the
$2.3\sigma$ excess at $M_{b\overline b} \sim 98~{\rm GeV}$ seen in the
$Z+b\overline b$ final state --- see Fig.~\ref{clbplot}.
\begin{figure}
\includegraphics[height=3in,width=3in,angle=90]{2b_flt25_tb10_meq123.ps}
\includegraphics[width=3in,angle=0]{hZbb_clb_lhwg.ps}
\caption{LEP plots for the $Zb\overline b$ final state from the LEP Higgs
Working Group.}
\label{clbplot}
\end{figure}
For consistency with the observed excess, the $e^+e^-\to Z H\to Z
b\overline b$ rate should be about one-tenth the SM value. There are two
obvious ways to achieve this: (1) one could have $B(H\to b\overline
b)\sim 0.1 B(H\to b\overline b)_{SM}$ and $g_{ZZH}^2\sim g_{ZZh_{\rm SM}}^2$;
or (2) $B(H\to b\overline b)$ could be SM-like but $g_{ZZH}^2\sim 0.1
g_{ZZh_{\rm SM}}^2$.
Regarding (1), almost any additional decay channel will severely
suppress the $b\overline b$ branching ratio. A Higgs of mass 100 GeV has
a decay width into Standard Model particles that is only $2.6 ~{\rm MeV}$,
or about $10^{-5}$ of its mass. This implies that it doesn't take a
large Higgs coupling to some new particles for the decay width to
these new particles to dominate over the decay width to SM particles
--- see \cite{Gunion:1984yn}, \cite{Li:1985hy}, and
\cite{Gunion:1986nh} (as reviewed in \cite{Chang:2008cw}). For
example, compare the decay width for $h\to b\overline b$ to that for $h\to
aa$, where $a$ is a light pseudoscalar Higgs boson. Writing ${\cal L}\ni
g_{h aa}haa$ with $g_{haa}\equiv c\, {g m_h^2\over 2m_W}$ and ignoring
phase space suppression, we find
\bea
{\Gamma(h\to
aa)\over \Gamma(h\to b\overline b)}&\sim & 310\, c^2\left({m_{h}\over
100~{\rm GeV}}\right)^2.
\eea
This expression includes QCD corrections to the $b\overline b$ width as
given in HDECAY which decrease the leading order $\Gamma(h\to b \overline
b)$ by about 50\%. The decay widths are comparable for $c\sim 0.057$
when $m_h=100~{\rm GeV}$. Values of $c$ at this level or substantially
higher (even $c=1$ is possible) are generic in BSM models containing
an extended Higgs sector.
Regarding possibility (2), let us return to the scenario of
\cite{Espinosa:1998xj} in which the $ZZ$ coupling is shared among many
Higgs mass eigenstates. To explain the $2.3 \sigma$ excess, there
should be a Higgs field having vev squared of order $0.1\times
v_{SM}^2$ and corresponding eigenstate with mass $\sim 100~{\rm GeV}$. (This
simple scenario assumes no Higgs mixing --- incorporation of mixing is
straightforward.) An interesting special case is to construct a 2HDM
with $m_{\hl}=98~{\rm GeV}$ and $g_{ZZh^0}^2=0.1 g_{ZZh_{\rm SM}}^2$ and with
$m_{\hh}=116~{\rm GeV}$ (the other LEP excess) and $g_{ZZH^0}^2\sim 0.9
g_{ZZh_{\rm SM}}^2$ (see, for example, \cite{Drees:2005jg}). As discussed
earlier, multiple Higgs games are also ``useful'' in that they can
delay the quadratic divergence fine-tuning problem to higher
$\Lambda$.
\eit
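The $\Gamma(h\to aa)/\Gamma(h\to b\overline b)$ estimate above is simple arithmetic; the following sketch merely re-evaluates it (the factor 310 is the approximate constant quoted in the text, which already folds in the QCD-corrected $b\overline b$ width):

```python
from math import sqrt

def width_ratio(c, m_h):
    """Gamma(h -> aa) / Gamma(h -> b bbar) from the approximate relation
    in the text: 310 * c^2 * (m_h / 100 GeV)^2 (away from thresholds)."""
    return 310.0 * c**2 * (m_h / 100.0)**2

# Coupling strength c at which the two widths become comparable
# for m_h = 100 GeV; solving 310 c^2 = 1 gives c ~ 0.057.
c_equal = 1.0 / sqrt(310.0)
```

This confirms that even a coupling as small as $c\sim 0.06$ lets the $aa$ mode dominate, which is why almost any open non-SM channel suppresses $B(h\to b\overline b)$ so strongly.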
\section{Why Supersymmetry}
Ultimately, however, we must solve the quadratic divergence problem.
There are many reasons why supersymmetry is regarded as the leading
candidate for a theory beyond the SM that accomplishes this.
Let us review them very briefly.
(a) SUSY is mathematically intriguing.
(b) SUSY is naturally incorporated in string theory.
(c) Elementary scalar fields have a natural place in SUSY, and so
there are candidates for the spin-0 fields needed for electroweak
symmetry breaking and Higgs bosons.
(d)
SUSY cures the naturalness / hierarchy problem (quadratic divergences
are largely canceled) in a particularly simple way. And, it does so
without electroweak fine-tuning (see definition below) provided the
SUSY breaking scale is $\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 500~{\rm GeV}$. For example, the top quark
loop (which comes with a minus sign) is canceled by the loops of the
spin-0 partners called ``stops'' (whose loops enter with a plus sign).
Thus, $\Lambda_t^2$ is effectively replaced by $\overline m_{\widetilde t}^2\equiv
m_{\stopone}m_{\stoptwo}$.
Overall, the most minimal version of SUSY, the MSSM, comes close to
being very nice. If we assume that all sparticles reside at the
${\cal O}(1~{\rm TeV})$ scale and that $\mu$ is also ${\cal O}(1~{\rm TeV})$, then the
MSSM has two particularly wonderful properties.
\begin{figure}[h!]
\includegraphics[width=2.5in]{gauge_un_sm.eps}\includegraphics[width=2.5in]{gauge_un_mssm.eps}
\caption{Unification of coupling constants ($\alpha_i=g_i^2/(4\pi)$)
in the minimal supersymmetric model (MSSM), as compared to the failure
of unification without supersymmetry.}
\label{gaugeunification}
\end{figure}
First, the MSSM sparticle content plus two-doublet
Higgs sector leads to gauge coupling unification
at $M_U\sim {\rm a~few}\times 10^{16}~{\rm GeV}$, close to
$M_{\rm P}$ --- see Fig.~\ref{gaugeunification}. High-scale unification correlates well with the
attractive idea of gravity-mediated SUSY breaking.
\begin{figure}[h!]
\includegraphics[width=3in,height=3in]{rewsb3.ps}
\caption{Evolution of the (soft) SUSY-breaking masses or masses-squared,
showing how $m_{H_u}^2$ is driven $<0$ at low $Q \sim {\cal O}(m_Z)$.}
\label{rewsb3}
\end{figure}
Second, starting with universal soft-SUSY-breaking masses-squared at
$M_U$, the RGE's predict that the top quark Yukawa coupling will
drive one of the soft-SUSY-breaking Higgs masses-squared ($m_{H_u}^2$)
negative at a scale of order $Q\sim m_Z$, thereby automatically
generating electroweak symmetry breaking
($\vev{H_u}=h_u,\vev{H_d}=h_d$, where $H_u$ and $H_d$ are the two
scalar Higgs fields of the MSSM) --- see Fig.~\ref{rewsb3}. However,
as we shall discuss, fine-tuning of the GUT-scale parameters may be
required in order to obtain the correct value of $m_Z$ unless, for
example, the stop masses are no larger than $2m_t$ or so.
\section{MSSM Problems}
However, the MSSM is suspect because of two critical problems.
\bit
\item {\bf The $\mu$ parameter problem:} In $W\ni \mu \widehat H_u \widehat
H_d$,\footnote{Hatted (unhatted) capital letters denote superfields
(scalar superfield components).} $\mu$ is dimensionful, unlike all
other superpotential parameters. Phenomenologically, it must be
${\cal O}(1~{\rm TeV})$ (as required for proper EWSB and in order that the
chargino mass be above the lower bounds from LEP and Tevatron
experiments). However, in the MSSM context the most natural values
are either ${\cal O}(M_U,M_{\rm P})$ or $0$.
\item {\bf LEP limits and Electroweak Fine-tuning:} Since the lightest
Higgs, $h$, of the (CP conserving) MSSM has SM-like couplings {\it
and} decays, the LEP limit of $m_{\h}>114.4~{\rm GeV}$ applies for most of
MSSM parameter space. Such an $h$ is only possible for special MSSM
parameter choices, for example large $\tan\beta=v_u/v_d$
and large stop masses (roughly
$\sqrt{\mstopone\mstoptwo}\gtrsim 900~{\rm GeV}$) or large stop mixing. To quantify the
problem we define
\beq
F={\rm Max}_p \left\vert {p\over m_Z}{\partial m_Z\over
\partial p} \right\vert,
\eeq
where $p\in\left\{M_{1,2,3}, m_Q^2, m_U^2, m_D^2, m_{H_u}^2, m_{H_d}^2, \mu,
A_t, B\mu,\ldots\right\}$ (all at $M_U$). These $p$'s are the
GUT-scale parameters that determine all the $m_Z$-scale SUSY
parameters, and these in turn determine $v_{SM}^2$ to which $m_Z^2$ is
proportional. For example, $F>20$ means worse than $5\%$ fine-tuning
of the GUT-scale parameters is required to get the right value of
$m_Z$, a level generally regarded as unacceptable. Thus, an important
question is what is the smallest $F$ that can be achieved while
keeping $m_{\h}>114~{\rm GeV}$. The answer is (see, in particular,
\cite{Dermisek:2007ah,Dermisek:2007yt}): (a) For most of parameter
space, $F>100$ or so; (b) For a part of parameter space with large
mixing between the stops, $F$ can be reduced to $16$ at best ($6\%$
fine-tuning), but this part of parameter space has many other
peculiarities. An ideal model would have $F\lesssim 5$, which
corresponds to absence of any significant electroweak fine-tuning.
\eit
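As a purely schematic illustration of how such a measure is evaluated (the function \texttt{mz\_of} below is an invented toy stand-in, not the actual computation, in which $m_Z$ is obtained by RGE evolution of the GUT-scale inputs down to the weak scale), $F$ can be estimated numerically by finite differences:

```python
from math import sqrt

# Toy numerical sketch of F = Max_p |(p/m_Z) dm_Z/dp| defined above.
# mz_of() is a made-up stand-in loosely patterned on the tree-level EWSB
# relation; a real analysis would run the RGEs from M_U down to m_Z.
def mz_of(params):
    mu, m_hu2, m_stop2 = params["mu"], params["m_Hu2"], params["m_stop2"]
    return sqrt(abs(-2.0 * m_hu2 - 2.0 * mu**2 + 0.1 * m_stop2))

def fine_tuning(params, eps=1e-6):
    """Max over parameters p of |(p/m_Z) dm_Z/dp|, by central differences."""
    mz0 = mz_of(params)
    F = 0.0
    for name, p in params.items():
        up, dn = dict(params), dict(params)
        up[name] = p * (1 + eps)
        dn[name] = p * (1 - eps)
        dmz_dp = (mz_of(up) - mz_of(dn)) / (2 * eps * p)
        F = max(F, abs(p / mz0 * dmz_dp))
    return F

params = {"mu": 150.0, "m_Hu2": -8100.0, "m_stop2": 350.0**2}
print(fine_tuning(params))
```

In this toy example the maximum sensitivity comes from $\mu$, illustrating why small $F$ prefers small $\mu$ and light stops; none of the numerical values here carry physical significance.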
\section{The NMSSM}
Both problems are nicely solved by the next-to-minimal supersymmetric
model (NMSSM) in which a single extra singlet superfield is added to
the MSSM. The new superpotential and associated soft-SUSY-breaking
terms are
\beq
W\ni \lambda\widehat S\widehat H_u\widehat H_d+\ifmath{{\textstyle{1 \over 3}}} \kappa \widehat S^3\,,\quad
V\ni \lambda A_\lam S H_u H_d+\ifmath{{\textstyle{1 \over 3}}}\kappa A_\kap S^3\,.
\eeq
The explicit $\mu \widehat H_u \widehat H_d$ term found in the MSSM
superpotential is removed. Instead, $\mu$ is automatically generated
by $\vev{S}\neq 0$, leading to $\mu_{\rm eff}\widehat H_u \widehat H_d$ with
$\mu_{\rm eff}=\lambda \vev{S}$. The only requirement is that $\vev{S}$ not
be too small or too large. This is automatic if there are no
dimensionful couplings in the superpotential, since $\vev{S}$ is then
of order the SUSY-breaking scale, which will be of order $1~{\rm TeV}$
or below.
Electroweak fine-tuning and its implications for the NMSSM have been
studied in
\cite{Dermisek:2007ah,Dermisek:2007yt,Dermisek:2006py,Dermisek:2006ya,Dermisek:2006wr,Dermisek:2005gg,Dermisek:2005ar}
and reviewed in \cite{Accomando:2006ga,Chang:2008cw}.
Electroweak fine-tuning can be absent since the sparticles, especially
the stops, can be light without predicting a light Higgs boson with
properties such that it would already have been ruled out by LEP, a point we
return to shortly. A plot of $F$ as a function of the mass of the
lightest CP-even Higgs, $m_{\h_1}$, appears in Fig.~\ref{fvsmhinmssm}.
\begin{figure}
\hspace*{-1.3cm}\includegraphics[width=4in,angle=90]{progressivecuts_tb10_meq123_all_flt50_fvsmh1_r.ps}
\caption{$F$ vs. $m_{\h_1}$
for $M_{1,2,3}=100,200,300~{\rm GeV}$ and $\tan\beta=10$. Small $\times$
points have no constraints other than the requirement that they
correspond to a global and local minimum, do not have a Landau pole
before $M_U$ and have a neutralino LSP. The O points are those
which survive after stop and chargino mass limits are imposed, but
no Higgs limits. The square points pass all LEP {\it single
channel}, in particular $Z+2b$ and $Z+4b$, Higgs limits. The large
yellow fancy crosses are those left after requiring $m_{\ai}<2m_b$, so
that LEP limits on $Z+b's$, where $b's=2b+4b$, are not violated. }
\label{fvsmhinmssm}
\end{figure}
The electroweak fine-tuning parameter has a minimum of order $F\sim 5$
(which arises for stop masses of order 350 GeV) for $m_{\h_1}\sim
100~{\rm GeV}$, even without placing any experimental constraints on the
model (the $\times$ points). This is perfect for precision
electroweak constraints because the $\h_1$ has very SM-like $WW,ZZ$
couplings and an ideal mass. However, most of the $\times$ points are
such that the $\h_1$ is excluded by LEP. Only the fancy-yellow-cross
points pass all LEP Higgs constraints, but there are many of these with
$F\sim 5$. These points are such that $m_{\h_1}\sim 100~{\rm GeV}$ {\it and}
the $\h_1$ avoids LEP Higgs limits by virtue of $B(\h_1\to
a_1\ai)>0.75$ with $m_{\ai}<2m_b$. (Here, $a_1$ is the lightest of the
two CP-odd Higgs bosons of the NMSSM.) In the $\h_1\to a_1\ai\to
4\tau$ channel, the LEP lower limit is $m_{\h_1}>87~{\rm GeV}$. In
the $\h_1\to a_1\ai\to 4j$ channel, the LEP lower limit is
$m_{\h_1}>82~{\rm GeV}$ --- see Table~\ref{lepmodes}.
Further, there is an intriguing coincidence. For the many points with
$B(\h_1\to a_1\ai)>0.85$, one finds $B(\h_1\to b\overline b)\sim 0.1$, and the
$2.3\sigma$ LEP excess near $m_{b\overline b}\sim 98~{\rm GeV}$ in $e^+e^-\to
Z+b's$ is perfectly explained. There are a significant number of such
points in NMSSM parameter space. For these points, the $\h_1$
satisfies all the properties listed earlier for an ``ideal'' Higgs.
Further, for these points the GUT-scale SUSY-breaking parameters (such
as the Higgs soft masses-squared, the $A_\kap$ and $A_\lam$
soft-SUSY-breaking parameters, and the $A_t$ stop mixing parameter)
are particularly appealing, being generically of the ``no-scale''
variety. That is, for the lowest $F$ points we are talking about,
almost all the soft-SUSY-breaking parameters are small at the GUT
scale. This is a particularly attractive possibility in the string
theory context.
There is one remaining issue for these NMSSM scenarios. We must ask
whether a light $a_1$ with the right properties is natural, or whether
it requires fine-tuning of the GUT-scale parameters. This is the topic
of \cite{Dermisek:2006wr}. The answer is that these scenarios can be
very natural. First, we note that the NMSSM has a $U(1)_R$ symmetry
obtained when $A_\kap$ and $A_\lam$ are set to zero. If this limit is
applied at scale $m_Z$, then $m_{\ai}=0$. But it turns out that then
$B(\h_1\to a_1\ai)\lesssim 0.3$, which does not allow escape from the
LEP limit. However, the much more natural idea is to impose the
$U(1)_R$ symmetry at the GUT scale. Then, the renormalization group
often generates exactly the values for $A_\kap$ and $A_\lam$ needed to obtain
a light $a_1$ with large $B(\h_1\to a_1\ai)$.
Quantitatively, we measure the tuning needed to get small $m_{\ai}$ and
large $B(\h_1\to a_1\ai)$ using a quantity called $G$ (the
``light-$a_1$ tuning measure''). We want small $G$ as well as small $F$
for scenarios such that the light Higgs is consistent with LEP limits. Fig.~\ref{gvsf}
shows that it is possible to get small $G$ and small $F$
simultaneously for phenomenologically acceptable points if
$m_{\ai}>2m_\tau$ (but still below $2m_b$).
\begin{figure}
\includegraphics[width=3in,angle=90]{gvsf_tb10_meq123_flt15.ps}
\caption{$G$ vs. $F$
for $M_{1,2,3}=100,200,300~{\rm GeV}$ and $\tan\beta=10$ for points with
$F<15$ having $m_{\ai}<2m_b$ and large enough $B(\h_1\to a_1\ai)$ to
escape LEP limits. The color coding is: blue = $m_{\ai}<2m_\tau$; red
$=2m_\tau<m_{\ai}<7.5~{\rm GeV}$; green $=7.5~{\rm GeV}<m_{\ai}<8.8~{\rm GeV}$; and black $=
8.8~{\rm GeV}<m_{\ai}<9.2~{\rm GeV}$. }
\label{gvsf}
\end{figure}
A phenomenologically important quantity is $\cos\theta_A$, the coefficient of
the MSSM-like doublet Higgs component, $A_{MSSM}$, of the $a_1$
defined by
\beq
a_1=\cos\theta_A A_{MSSM}+\sin\theta_A A_S\,
\eeq
where $A_S$ is the singlet pseudoscalar field. The value of $G$ as a
function of $\cos\theta_A$ for various $m_{\ai}$ bins is shown in
Fig.~\ref{gvscta} for points consistent with LEP bounds.
\begin{figure}
\includegraphics[width=2.5in,angle=90]{gvsa1nonsinglet_meq123_tb10_mu150.ps}\hspace*{-.5cm}\includegraphics[width=2.5in,angle=90]{gvsa1nonsinglet_meq123_tb10_flt15.ps}
\caption{ $G$ vs. $\cos\theta_A$
for $M_{1,2,3}=100,200,300~{\rm GeV}$ and $\tan\beta=10$ from $\mu_\mathrm{eff}=150~{\rm GeV}$
scan (left) and for points with $F<15$ (right) having $m_{\ai}<2m_b$
and large enough $B(\h_1\to a_1\ai)$ to escape LEP limits. The color
coding is: blue = $m_{\ai}<2m_\tau$; red $=2m_\tau<m_{\ai}<7.5~{\rm GeV}$; green
$=7.5~{\rm GeV}<m_{\ai}<8.8~{\rm GeV}$; and black $= 8.8~{\rm GeV}<m_{\ai}<9.2~{\rm GeV}$. }
\label{gvscta}
\end{figure}
Really small $G$ occurs for $m_{\ai}>7.5~{\rm GeV}$ and $\cos\theta_A\sim -0.1$. Also
note that there is a lower bound on $|\cos\theta_A|$. This lower
bound arises because $B(\h_1\to a_1\ai)$ falls below $0.75$ for too
small $|\cos\theta_A|$. For the preferred $\cos\theta_A\sim -0.1$ values, the $a_1$
is mainly singlet and its coupling to $b\overline b$, being proportional
to $\cos\theta_A\tan\beta$, is not enhanced. However, it is also not that
suppressed, which has important implications for $B$ factories.
\section{Detection of the NMSSM light Higgs bosons}
We now turn to how one can detect the $\h_1$ and/or the $a_1$. At the
{\bf LHC}, all standard LHC channels for Higgs detection fail: {\it e.g.}\
$B(\h_1\to\gamma\gam)$ is much too small because of large
$B(\h_1\to a_1\ai)$. The possible new LHC channels are as follows.
\underline{$WW\to \h_1\to a_1\ai\to 4\tau$.} This channel looks
moderately promising, but complete studies are not available.
\underline{$t\overline t \h_1\to t \overline t a_1\ai\to t\overline t
4\tau$.} A study is needed. \underline{$\widetilde\chi^0_2\to
\h_1\widetilde\chi^0_1$ with $\h_1\to a_1\ai\to4\tau$.} This might work given
that the $\widetilde\chi^0_2\to \h_1 \widetilde\chi^0_1$ channel provides a signal in the MSSM
when $\h_1\to b\overline b$ decays are dominant. A $4\tau$ final state
might have smaller backgrounds. Last, but definitely not least,
\underline{diffractive production $pp\to pp\h_1\to pp X$} looks quite
promising. The mass $M_X$ can be reconstructed with roughly a
$1-2~{\rm GeV}$ resolution, potentially revealing a Higgs peak, independent
of the decay of the Higgs. The event is quiet so that the tracks from
the $\tau$'s appear in a relatively clean environment, allowing track
counting and associated cuts. Our \cite{Forshaw:2007ra} results are
that one expects about $3$--$5$ clean, {\it i.e.}\ reconstructed and tagged,
events with no background per $30~{\rm fb}^{-1}$ of integrated luminosity.
Thus, high integrated luminosity will be needed.
The rather singlet nature of the $a_1$ and its low mass imply that
direct production/detection will be challenging at the LHC. But,
further thought is definitely warranted.
At the {\bf ILC}, $\h_1$ detection would be much more straightforward.
The process $e^+e^-\to ZX$ will reveal the
$M_X\sim m_{\h_1}\sim 100~{\rm GeV}$ peak no matter how the $\h_1$ decays.
But the ILC is decades away.
At {\bf B factories} it may be possible to detect the $a_1$
via $\Upsilon\to\gamma a_1$ decays \cite{Dermisek:2006py}. Both BaBar and
CLEO have been working on dedicated searches. CLEO has placed some
useful, but not (yet) terribly constraining, new limits. The predicted
values of $B(\Upsilon\to\gamma a_1)$ for $F<15$ NMSSM scenarios are shown
in Fig.~\ref{upsilon}. Note that the scenarios with no light-$a_1$
fine-tuning are those with $|\cos\theta_A|$ close to the lower bound and
$m_{\ai}$ near $\mupsilon$, implying the smallest values of
$B(\Upsilon\to\gamma a_1)$.
\begin{figure}
\includegraphics[width=3in,angle=90]{brupsvsa1nonsinglet_meq123_tb10_mu150_scan+flt15.ps}
\caption{$B(\Upsilon\to\gamma a_1)$ for NMSSM scenarios. Results are
plotted for various ranges of $m_{\ai}$ using the color scheme of
Fig.~\ref{gvscta} (blue, red, green, black correspond to increasing
$m_{\ai}$ in that order). The left plot comes from an $A_\lam,A_\kap$
scan, holding $\mu_\mathrm{eff}(m_Z)=150~{\rm GeV}$ fixed. The right plot shows
results for $F<15$ scenarios with $m_{\ai}<9.2~{\rm GeV}$ found in a general
scan over all NMSSM parameters. The lower bound on $B(\Upsilon\to \gamma a_1)$ arises
basically from the LEP requirement of $B(\h_1\to a_1\ai)>0.7$ which
leads to the lower bound on $|\cos\theta_A|$ noted in the text.}
\label{upsilon}
\end{figure}
Of course, as $m_{\ai}\to \mupsilon$, phase space for the decay causes
increasingly severe suppression. And, there is the small region of
$\mupsilon<m_{\ai}<2m_b$ that cannot be covered by $\Upsilon$ decays. However,
Fig.~\ref{upsilon} suggests that if $B(\Upsilon\to \gamma a_1)$ sensitivity can be pushed
down to the $10^{-7}$ level, one might discover the $a_1$.
The exact level of sensitivity needed for full coverage of points with $m_{\ai}<9.2~{\rm GeV}$
is $\tan\beta$-dependent, decreasing to a few times $10^{-8}$ for
$\tan\beta=3$ and increasing to near $10^{-6}$ for $\tan\beta=50$.
Discovery of the $a_1$ at a $B$ factory would be very important input to the LHC program.
\section{Cautionary Remarks}
The scenario with dominant $\h_1\to a_1\ai\to 4\tau$ and $m_{\h_1}\sim
100~{\rm GeV}$ certainly has many attractive properties. However, one can
get quite different scenarios by decreasing the attractiveness
somewhat. First, one could \underline{relax light-$a_1$ fine-tuning,
$G$}. While $m_{\ai}<2m_\tau$ points have larger $G$ values than points
with $m_{\ai}>2m_\tau$, we should be prepared for the former possibility.
It yields a very difficult scenario for a hadron collider,
$\h_1\to a_1\ai \to 4j$. Of course, a significant fraction will be
charmed jets. A question is whether the $pp\to pp \h_1$ production
mode might provide a sufficiently different signal from background in
the $\h_1\to 4j$ modes that progress could be made. If the $a_1$ is
really light, then $\h_1\to 4\mu$ could be the relevant mode. This
would seem to be a highly detectable mode, so don't forget to look for
it --- it should be a cinch compared to $4\tau$. Second, we can
\underline{allow more electroweak / $m_Z$-fine-tuning} corresponding
to higher $F$. In Fig.~\ref{fvsmhinmssm}, the blue squares show that
$m_{\h_1}\sim 115~{\rm GeV}$ with $m_{\ai}$ either below $2m_b$ or above $2m_b$ can
be achieved if one accepts $F>10$ rather than demanding the very
lowest $F\sim 5$ fine-tuning measure. Of course, we do not then
explain the $2.3\sigma$ LEP excess, but this is hardly mandatory.
And, $m_{\h_1} \sim 115~{\rm GeV}$ is still ok for precision electroweak. Thus,
one should work on $\h_1$ detection assuming: (a) $m_{\h_1}\geq 115~{\rm GeV}$
with $\h_1\to a_1\ai\to 4\tau$; and (b) $m_{\h_1}\geq 115~{\rm GeV}$ with $\h_1\to
a_1\ai\to 4b$. The $pp\to pp \h_1$ analysis in case (a) will be very
similar to that summarized earlier for $m_{\h_1}\sim 100~{\rm GeV}$, but
production rates will be smaller. In case (b), there are several
papers in the literature claiming that such a Higgs signal can be seen
~\cite{Carena:2007jk,Cheung:2007sva} in $W\h_1$ production.
The most basic thing to keep in mind is that
for a primary Higgs with mass $\lesssim 150~{\rm GeV}$,
dominance of $\h_1\to a_1\ai$ decays, or even
$\h_2\to \h_1\hi$ decays, is a very generic feature of any model with
extra Higgs fields, supersymmetric or otherwise.
And these Higgs bosons could decay in many ways in the most general case.
Further alternatives arise if there is more than one singlet
superfield. String models with SM-like matter content that have been
constructed to date have many singlet superfields. One should
anticipate the possibility of several, even many different Higgs-pair
states being of significance in the decay of the SM-like Higgs of the
model. Note that this motivates in a very general way the importance
of looking for the light CP-even or CP-odd Higgs states in $\Upsilon\to
\gamma X$ decays.
Another natural possibility is that the $\h_1$ could decay to final
states containing a pair of supersymmetric particles (one of which
must be a state other than the LSP if $m_{\h_1}<114~{\rm GeV}$). A particular
case that arises in supersymmetric models, especially those with extra singlets, is $\h_1\to \widetilde\chi^0_2
\widetilde\chi^0_1$ with $\widetilde\chi^0_2 \to f \overline f \widetilde\chi^0_1$ --- see
\cite{Chang:2007de,Chang:2008cw}. Once again, the very small $b\overline
b$ width of a Higgs with SM-like couplings to SM particles means that
this mode could easily dominate if allowed. As noted in
Table~\ref{lepmodes}, LEP constraints allow $m_{\h_1}<100~{\rm GeV}$ if this is
an important decay channel. Higgs discovery would be really
challenging if $\h_1\to a_1\ai\to 4\tau$ and $\h_1\to \widetilde\chi^0_2\widetilde\chi^0_1\to
f\overline f\ \emiss$ were both present.
\section{Conclusions}
The NMSSM can have small fine-tuning of all types. First, quadratic
divergence fine-tuning is erased ab initio. Second, electroweak
fine-tuning to get the observed value of $m_Z^2$ can be avoided for
$m_{\h_1}\sim 100~{\rm GeV}$, large $B(\h_1\to a_1\ai)$ and $m_{\ai}<2m_b$.
Light-$a_1$ fine-tuning to achieve $m_{\ai}<2m_b$ and (simultaneously)
large $B(\h_1\to a_1\ai)$ (as needed above) can be avoided ---
$m_{\ai}>2m_\tau$ with $a_1$ being mainly singlet is somewhat preferred to
minimize light-$a_1$ fine-tuning. Thus, requiring low fine-tuning of
all kinds in the NMSSM leads us to expect an $\h_1$ with $m_{\h_1}\sim
100~{\rm GeV}$ and SM-like couplings to SM particles but with primary decays
$\h_1\to a_1\ai\to 4\tau$.
The consequences are significant. Higgs detection will be quite
challenging at a hadron collider. Higgs detection at the ILC is easy
using the missing mass $e^+e^- \to Z X$ method of looking for a peak in
$M_X$. Higgs detection in $\gamma\gam\to \h_1\to a_1\ai$ will be easy.
The $a_1$ might be detected using dedicated $\Upsilon\to\gamma a_1$ searches.
The stops and other squarks should be light. Also, the gluino and,
assuming conventional mass orderings, the wino and bino should all
have modest mass. As a result, although SUSY will be easily seen at
the LHC, Higgs detection at the LHC will be a real challenge. Still,
it now appears possible with high luminosity using doubly-diffractive
$pp\to pp \h_1\to pp 4\tau$ events. Even if the LHC sees the
$\h_1\to a_1\ai$ signal directly, only the ILC and possibly $B$-factory
results for $\Upsilon\to\gamma a_1$ can provide the detailed measurements
needed to verify the model.
It is likely that other models in which the MSSM $\mu$ parameter is
generated using additional scalar fields can achieve small fine-tuning
in a manner similar to the NMSSM. However, it is always the case that
low electroweak fine-tuning will require low SUSY masses which in turn
typically imply $m_{\h_1}\sim 100~{\rm GeV}$. Then, to escape LEP limits, large
$B(\h_1\to a_1\ai+f\overline f\ \emiss+\ldots)$, with most final states not
decaying to $b$'s ({\it e.g.}\ $m_{\ai}<2m_b$), would be needed. In general, the
$a_1$ might not need to be so singlet as in the NMSSM and would then
have larger $B(\Upsilon\to \gamma a_1)$.
If the LHC Higgs signal is really marginal in the end, and even if
not, the ability to check the perturbativity of $WW\to WW$ scattering at
the LHC might prove crucial for making sure that there really is a
light Higgs accompanying light SUSY and that it carries most of the SM
coupling strength to $WW$.
It is also worth noting that a light $a_1$ allows for a light $\widetilde\chi^0_1$
to be responsible for dark matter of the correct relic density
\cite{Gunion:2005rw}: annihilation would typically be via
$\widetilde\chi^0_1\cnone\to a_1$. To check the details, the properties of the $a_1$
and $\widetilde\chi^0_1$ would need to be known fairly precisely. The ILC might
be able to measure their properties in sufficient detail to verify
that it all fits together. Also $\Upsilon\to\gammaa_1$ decay
information would help tremendously.
In general, as reviewed in \cite{Chang:2008cw}, the Higgs sector is
extraordinarily sensitive to new physics from some extended model
through operators of the form $H^\dagger H E$, where $H$ is a SM or
MSSM Higgs field and $E$ is a gauge singlet combination of fields from
the extended sector, such as $\phi^\dagger\phi$ or $\phi+\phi^\dagger$,
where $\phi$ is a singlet scalar field from the new physics sector. In
the former case, the operator will have a dimensionless coupling
coefficient and in the latter case a dimensionful coupling
coefficient. This implies that in either case this new operator is
likely to have a large impact on Higgs decays. In the NMSSM, the
supersymmetric structure implies a slightly more complex arrangement:
the superpotential component $\lambda\widehat S \widehat H_u\widehat H_d$ and
soft-SUSY-breaking term $\lambda A_\lam SH_uH_d$ both establish a
connection between the MSSM sector and the extended singlet field
sector and lead to large modifications of the light Higgs decays.
Ref.~\cite{Chang:2008cw} reviews other proposals for the extended
sector. In some, $E$ has higher dimensionality and the operator
coupling coefficient is suppressed by the new physics scale, but the
operator nonetheless would greatly influence Higgs physics. In general, the
precision electroweak preference for a Higgs $h$ with SM-like $WW,ZZ$
couplings and $m_{\h}\sim 100~{\rm GeV}$ greatly increases the odds that a
SM-like Higgs is present but decays to new physics channels. In this
context, SUSY is strongly motivated since electroweak fine-tuning is
minimized precisely for $m_{\h}\sim 100~{\rm GeV}$ and an extended SUSY model
such as the NMSSM can provide the needed non-SM Higgs decays.
\vspace*{-.2in}
\begin{theacknowledgments}
This work is supported in part by the U.S. Department of Energy. I am
grateful to the Kavli Institute for Theoretical Physics for support
during the project. Most importantly, I would like to thank George
Rupp for the opportunity to present this overview and honor Mike
Scadron on his 70th birthday in the process.
\end{theacknowledgments}
\vspace*{-.2in}
\bibliographystyle{aipproc}
\section{Introduction}
A Diophantine $m$-tuple is a set of $m$ positive
integers with the property that the product of any two of its distinct
elements is one less than a square. If a set of nonzero rationals
has the same property, then it is called
a rational Diophantine $m$-tuple.
Diophantus of Alexandria found the first example of a rational Diophantine quadruple
$\{1/16, 33/16, 17/4, 105/16\}$, while the first Diophantine quadruple in integers,
the set $\{1,3,8,120\}$, was found by Fermat.
It is well-known that there exist infinitely many integer Diophantine quadruples
(e.g. $\{k,k+2,4k+4,16k^3+48k^2+44k+12\}$ for $k\geq 1$),
while it was proved in \cite{D-crelle} that an integer Diophantine sextuple does not exist and that there are
only finitely many such quintuples. A folklore conjecture is that there does not exist an integer
Diophantine quintuple. There is an even stronger conjecture which predicts that all integer
Diophantine quadruples $\{a,b,c,d\}$ satisfy the equation $(a+b-c-d)^2=4(ab+1)(cd+1)$ (such
quadruples are called regular).
However, in the rational case, there exist larger sets with the same property.
Euler found infinitely many rational Diophantine quintuples,
e.g. he was able to extend the Fermat quadruple to the rational quintuple
$\{ 1, 3, 8, 120, 777480/8288641\}$. Gibbs \cite{Gibbs} found the first rational Diophantine sextuple
$$\{ 11/192, 35/192, 155/27, 512/27, 1235/48, 180873/16\}, $$
while Dujella, Kazalicki, Miki\'c and Szikszai \cite{DKMS} recently proved that there exist
infinitely many rational Diophantine sextuples.
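All of the examples above can be verified with exact rational arithmetic. The following sketch (the helper functions \texttt{is\_square} and \texttt{is\_diophantine} are our own additions, not taken from the cited papers) checks Fermat's quadruple and its regularity, the parametric family of integer quadruples, Euler's quintuple, and Gibbs's sextuple:

```python
from fractions import Fraction
from math import isqrt

def is_square(q):
    """Exact test of whether a nonnegative rational is a perfect square."""
    q = Fraction(q)
    return q >= 0 and isqrt(q.numerator)**2 == q.numerator \
                  and isqrt(q.denominator)**2 == q.denominator

def is_diophantine(tup):
    """Product of any two distinct elements is one less than a square."""
    return all(is_square(Fraction(x) * y + 1)
               for i, x in enumerate(tup) for y in tup[i+1:])

# Fermat's integer quadruple, which is also regular:
a, b, c, d = 1, 3, 8, 120
assert is_diophantine((a, b, c, d))
assert (a + b - c - d)**2 == 4 * (a*b + 1) * (c*d + 1)

# the parametric family of integer quadruples quoted above:
for k in range(1, 20):
    assert is_diophantine((k, k + 2, 4*k + 4, 16*k**3 + 48*k**2 + 44*k + 12))

F = Fraction
# Euler's rational quintuple and Gibbs's rational sextuple:
assert is_diophantine((F(1), F(3), F(8), F(120), F(777480, 8288641)))
assert is_diophantine((F(11, 192), F(35, 192), F(155, 27),
                       F(512, 27), F(1235, 48), F(180873, 16)))
```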
No example of a rational Diophantine septuple is known.
Moreover, we do not know any rational Diophantine quadruple
which can be extended to two different rational Diophantine sextuples.
On the other hand, by the construction from \cite{DKMS}, we know that there
exist infinitely many rational Diophantine triples,
each of which can be extended to rational Diophantine sextuples in infinitely
many ways.
In particular, there are infinitely many rational Diophantine sextuples
containing the triples
$\{15/14, -16/21, 7/6\}$ and $\{ 3780/73, 26645/252, 7/13140\}$.
The construction from \cite{DKMS} uses elliptic curves induced by Diophantine triples,
i.e. curves of the form $y^2 = (x+ab)(x+ac)(x+bc)$ where $\{a,b,c\}$ is a
rational Diophantine triple, with torsion group $\mathbb{Z}/2\mathbb{Z} \times
\mathbb{Z}/6\mathbb{Z}$ over $\mathbb{Q}$.
Piezas \cite{TP} studied Gibbs's examples of rational Diophantine sextuples which do not fit into the
construction from \cite{DKMS} and realized that most of them follow a common pattern:
they contain two regular subquadruples with two common elements (see Proposition \ref{prop:1}).
By studying sextuples of that special form, he obtained new simpler parametric formulas
for rational Diophantine sextuples, and also obtained infinitely many sextuples
$\{a,b,c,d,e,f\}$ with fixed products $ab$ and $cd$ (e.g. $ab=24$ and $cd=9/16$).
In this paper, we will reformulate results from \cite{TP} in terms of the geometry of a certain algebraic variety parameterizing rational Diophantine quadruples, in fact the fiber product of three Edwards curves over $\mathbb{Q}(t)$, and obtain a method for generating (new) parametric formulas for rational Diophantine sextuples.
\section{Construction}
\subsection{Correspondence}
Let $\{a,b,c,d\}$ be a rational Diophantine quadruple with elements in $\Q$ or $\Q(t)$, and let
\begin{align*}
ab+1&=t_{12}^2, & ac+1&=t_{13}^2, & ad+1&=t_{14}^2,\\
bc+1&=t_{23}^2, & bd+1&=t_{24}^2, & cd+1&=t_{34}^2.
\end{align*}
It follows that $(t_{12},t_{34},t_{13},t_{24},t_{14},t_{23}, m'=abcd)$ defines a point on an algebraic variety $\mathcal{C}$ defined by the following equations:
\begin{align*}
(t_{12}^2-1)(t_{34}^2-1)&=m'\\
(t_{13}^2-1)(t_{24}^2-1)&=m'\\
(t_{14}^2-1)(t_{23}^2-1)&=m'.
\end{align*}
Conversely, the points $(\pm t_{12},\pm t_{34},\pm t_{13},\pm t_{24},\pm t_{14},\pm t_{23}, m')$ on $\mathcal{C}$ determine two rational Diophantine quadruples $\pm(a,b,c,d)$ (for example $a^2=(t_{12}^2-1)(t_{13}^2-1)/(t_{23}^2-1)$) provided that the elements $a,b,c$ and $d$ are rational, distinct and non-zero.
Note that if one element is rational, then all the elements are rational.
The projection $(t_{12},t_{34},t_{13},t_{24},t_{14},t_{23}, m') \mapsto m'$ defines a fibration of $\mathcal{C}$ over the projective line, and a generic fiber is the product of three curves $\mathcal{D}: (x^2-1)(y^2-1)=m'$. Any point on $\mathcal{C}$ corresponds to the three points $Q_1=(t_{12},t_{34})$, $Q_2=(t_{13},t_{24})$ and $Q_3=(t_{14}, t_{23})$ on $\mathcal{D}$. The elements of the quadruple corresponding to these three points are distinct if and only if no two of these points can be transformed from one to another by changing signs and switching coordinates, e.g. for the points $(t_{12}, t_{34})$, $(-t_{34}, t_{12})$ and $(t_{14},t_{23})$, we have that $a=d$.
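As a concrete illustration of this correspondence (an exact-arithmetic check of our own, using Fermat's quadruple, whose $t_{ij}$ happen to be integers), one can verify the three defining equations of $\mathcal{C}$ and the recovery of an element of the quadruple:

```python
from fractions import Fraction
from math import isqrt

def rat_sqrt(q):
    """Exact square root of a rational that is assumed to be a square."""
    q = Fraction(q)
    n, den = isqrt(q.numerator), isqrt(q.denominator)
    assert n * n == q.numerator and den * den == q.denominator
    return Fraction(n, den)

# Fermat's quadruple {1, 3, 8, 120} as a point on the variety C
a, b, c, d = Fraction(1), Fraction(3), Fraction(8), Fraction(120)
t12, t13, t14 = rat_sqrt(a*b + 1), rat_sqrt(a*c + 1), rat_sqrt(a*d + 1)
t23, t24, t34 = rat_sqrt(b*c + 1), rat_sqrt(b*d + 1), rat_sqrt(c*d + 1)
m = a * b * c * d                                   # m' = abcd = 2880

# the three defining equations of C
assert (t12**2 - 1) * (t34**2 - 1) == m
assert (t13**2 - 1) * (t24**2 - 1) == m
assert (t14**2 - 1) * (t23**2 - 1) == m

# recovering an element of the quadruple from the point, as in the text
assert a**2 == (t12**2 - 1) * (t13**2 - 1) / (t23**2 - 1)
```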
\subsection{Extending quadruples to sextuples}
The following proposition gives a criterion for extending quadruples to sextuples.
\begin{proposition}[T. Piezas \cite{TP}]\label{prop:1}
Let $\{a, b, c, d\}$ be a rational Diophantine quadruple, and $x_1$ and $x_2$ the roots of
\[
(abcdx+2abc+a+b+c-d-x)^2=4(ab+1)(ac+1)(bc+1)(dx+1).
\]
If $x_1x_2 \ne 0$ and
\begin{equation}\label{eq:1}
(abcd-3)^2=4(ab+cd+3),
\end{equation}
then $\{a,b,c,d,x_1,x_2\}$ is a Diophantine sextuple. Furthermore,
\begin{eqnarray*}
(a+b-x_1-x_2)^2 &=& 4(ab+1)(x_1x_2+1)\\
(c+d-x_1-x_2)^2 &=& 4(cd+1)(x_1x_2+1).
\end{eqnarray*}
\end{proposition}
Note that $x_1$ and $x_2$ coincide with the extensions of rational Diophantine quadruples
given in \cite[Theorem 1]{D-acta2}, and the condition (\ref{eq:1}) implies
that $x_1x_2+1=\left(\frac{a+b-c-d}{abcd-1}\right)^2$.
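To make the roots $x_1, x_2$ concrete (the expansion of the quadratic into $Ax^2+Bx+C=0$ below is our own routine algebra), one can solve the equation exactly for Fermat's quadruple. That quadruple does not satisfy condition (\ref{eq:1}), so one root degenerates to $0$ and no sextuple results; the nonzero root is exactly Euler's fifth element $777480/8288641$ quoted in the introduction:

```python
from fractions import Fraction
from math import isqrt

# (abcd*x + 2abc + a+b+c-d - x)^2 = 4(ab+1)(ac+1)(bc+1)(dx+1)
# expands to A x^2 + B x + C = 0 with the coefficients below.
a, b, c, d = 1, 3, 8, 120
K = 2*a*b*c + a + b + c - d
L = 4 * (a*b + 1) * (a*c + 1) * (b*c + 1)
A = (a*b*c*d - 1)**2
B = 2 * K * (a*b*c*d - 1) - L * d
C = K**2 - L

disc = B*B - 4*A*C
r = isqrt(disc)
assert r * r == disc                 # the roots are rational here
x1 = Fraction(-B - r, 2*A)
x2 = Fraction(-B + r, 2*A)
assert x1 == 0                       # degenerate: (1) fails for Fermat
assert x2 == Fraction(777480, 8288641)   # Euler's extension
```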
In this section, we will reformulate Proposition \ref{prop:1} in terms of the geometry of the algebraic variety $\mathcal{C}$.
The condition \eqref{eq:1} is equivalent to $t_{12} t_{34}= \pm t_{12} \pm t_{34}$, or $t_{34}=\pm t_{12}/(t_{12}\pm 1)$. For the rest of the paper, we set $t_{12} =t$, $t_{34}=t/(t-1)$ and $m'=(t^2-1)(\frac{t^2}{(t-1)^2}-1)=\frac{2t^2 + t - 1}{t - 1}$, and thus condition \eqref{eq:1} is satisfied.
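As a quick consistency check of this parametrization (our own, with exact rationals), one can verify for sample values of $t$ that $t_{12}=t$, $t_{34}=t/(t-1)$ satisfy one branch of the sign condition, give the stated $m'$, and satisfy condition (\ref{eq:1}) identically:

```python
from fractions import Fraction

# For several rational t, check that t12 = t, t34 = t/(t-1) satisfy
# t12*t34 = t12 + t34 and condition (1) with ab = t12^2-1, cd = t34^2-1.
for t in (Fraction(6), Fraction(3, 2), Fraction(-5), Fraction(9, 4)):
    t12, t34 = t, t / (t - 1)
    assert t12 * t34 == t12 + t34
    ab, cd = t12**2 - 1, t34**2 - 1          # the products ab and cd
    m = ab * cd                              # m' = abcd
    assert m == (2*t**2 + t - 1) / (t - 1)
    assert (m - 3)**2 == 4 * (ab + cd + 3)   # condition (1)
```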
The curve $\mathcal{D}$ over $\Q(t)$
$$\mathcal{D}: (x^2-1)(y^2-1)=\frac{2t^2 + t - 1}{t - 1}$$ is birationally equivalent to the elliptic curve
$$E: S^2 = T^3 -2\cdot \frac{2t^2 - t + 1}{t-1}T^2 + \frac{(2t - 1)^2 (t + 1)^2}{(t-1)^2} T.$$
The map is given by $T = 2(x^2-1)y+2x^2-(2-m')$,
and $S = 2Tx$, where $m'=\frac{2t^2 + t - 1}{t - 1}$.
Denote by $P=\displaystyle\left[\frac{(2t-1)^2(t+1)}{t-1}, \frac{2t(2t-1)^2(t+1)}{t-1}\right]\in E(\Q(t))$ a point of infinite order on $E$, and by $R
= \displaystyle\left[\frac{(t+1)(2t-1)}{(t-1)}, \frac{2(t+1)(2t-1)}{t-1}\right]$ a point of order $4$. The point $(t_{12},t_{34}) \in \mathcal{D}(\Q(t))$ corresponds to the point $P\in E(\Q(t))$.
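A small exact-arithmetic check (our own) that the formulas above are mutually consistent: for sample rational $t$, the points $P$ and $R$ lie on $E$, and $P$ is the image of $(t_{12},t_{34})=(t,t/(t-1))$ under the stated map:

```python
from fractions import Fraction

def on_E(t, T, S):
    """S^2 = T^3 - 2(2t^2-t+1)/(t-1) T^2 + (2t-1)^2 (t+1)^2/(t-1)^2 T."""
    return S**2 == (T**3 - 2 * (2*t**2 - t + 1) / (t - 1) * T**2
                    + (2*t - 1)**2 * (t + 1)**2 / (t - 1)**2 * T)

for t in (Fraction(6), Fraction(5, 2), Fraction(-4)):
    # the point P of infinite order and the 4-torsion point R
    TP = (2*t - 1)**2 * (t + 1) / (t - 1)
    SP = 2 * t * TP
    TR = (t + 1) * (2*t - 1) / (t - 1)
    SR = 2 * (t + 1) * (2*t - 1) / (t - 1)
    assert on_E(t, TP, SP) and on_E(t, TR, SR)

    # P is the image of (t12, t34) = (t, t/(t-1)) under the birational map
    x, y = t, t / (t - 1)
    m = (2*t**2 + t - 1) / (t - 1)
    T = 2 * (x**2 - 1) * y + 2 * x**2 - (2 - m)
    assert (T, 2 * T * x) == (TP, SP)
```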
\begin{proposition}\label{prop:MW}
The Mordell-Weil group of $E(\Q(t))$ is generated by $P$ and $R$.
\end{proposition}
\begin{proof}
It is enough to prove that the specialization homomorphism at $t_0=6$ is injective. Then one can easily check that the specializations of points $P$ and $R$ generate the Mordell-Weil group of $E_{t_0}(\Q)$.
We use the injectivity criterion from Theorem 1.3 in \cite{GT}. It states that given an elliptic curve $y^2=x^3+A(t)x^2+B(t)x$, where $A,B \in \mathbb{Z}[t]$, with exactly one nontrivial $2$-torsion point over $\Q(t)$, the specialization homomorphism at $t_0\in \Q$ is injective if the following condition is satisfied: for every nonconstant square-free divisor $h(t)\in \mathbb{Z}[t]$ of $B(t)$ or $A(t)^2-4B(t)$ the rational number $h(t_0)$ is not a square in $\Q$.
The claim follows (after clearing out the denominators in the defining equation of $E$).
\end{proof}
If $Q\in E$ is the point that corresponds to the point $(x,y)\in \mathcal{D}$, then the points $-Q$ and $Q+R$ correspond to the points $(-x,y)$ and $(y,-x)$.
Hence the triple $(Q_1,Q_2,Q_3)\in E(\Q(t))^3$ corresponds to the quadruple whose elements are not distinct if and only if there are two points, say $Q_i$ and $Q_j$, such that $Q_i=\pm Q_j+kR$, where $k \in \{0,1,2,3\}$.
If, instead of $m'$, we fix on $\mathcal{C}$ the coordinates $t_{12}, t_{13}, t_{23}$, we obtain an elliptic curve on $\mathcal{C}$ consisting of the points $(t_{34}, t_{24}, t_{14}, m')$ which satisfy
\begin{align*}
(t_{34}^2-1)&=\frac{m'}{(t_{12}^2-1)}\\
(t_{24}^2-1)&=\frac{m'}{(t_{13}^2-1)}\\
(t_{14}^2-1)&=\frac{m'}{(t_{23}^2-1)}.
\end{align*}
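Note that each of the displayed relations says $(t_{ij}^2-1)(t_{kl}^2-1)=m'$ for a complementary index pair, so the three products must all agree. The sketch below verifies this on hypothetical test data of our choosing (Diophantus' classical rational quadruple, not an example from the paper), assuming $t_{ij}$ denotes a rational square root of the corresponding pair product plus one:

```python
from fractions import Fraction as F
from math import isqrt

# Hypothetical test data (ours): Diophantus' classical rational Diophantine
# quadruple; every pairwise product plus 1 is a rational square.
a, b, c, d = F(1, 16), F(33, 16), F(17, 4), F(105, 16)
q = {1: a, 2: b, 3: c, 4: d}

def t(i, j):
    # assumption: t_ij is a rational square root of q_i * q_j + 1
    v = q[i] * q[j] + 1
    n, m = isqrt(v.numerator), isqrt(v.denominator)
    assert n * n == v.numerator and m * m == v.denominator
    return F(n, m)

# The three relations pair complementary indices, so all three products
# coincide and equal abcd (= m' for this point of the variety).
products = [(t(1, 2)**2 - 1) * (t(3, 4)**2 - 1),
            (t(1, 3)**2 - 1) * (t(2, 4)**2 - 1),
            (t(2, 3)**2 - 1) * (t(1, 4)**2 - 1)]
assert products[0] == products[1] == products[2] == a * b * c * d
```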
Thus, to the point $(t_{12},t_{34},t_{13},t_{24},t_{14},t_{23}, m')$ on $\mathcal{C}$ that corresponds to the rational quadruple $\{ a, b, c, d \}$, we associate the elliptic curve $E_{abc}:y^2=(x+ab)(x+ac)(x+bc)$ together with the point $W=[abcd,abc \cdot t_{14} t_{24} t_{34}]$.
A short calculation shows that if we denote by $V=[1, t_{12} t_{13} t_{23}]$ a point on $E_{abc}$, then $x_1$ and $x_2$ from Proposition \ref{prop:1} are given by
\[
x_1 = \frac{x(W+V)}{abc} \quad\textrm{ and }\quad x_2=\frac{x(W-V)}{abc}.
\]
For more details on using the elliptic curve $E_{abc}$ for extending rational Diophantine triples and quadruples see \cite[Theorem 1]{D-acta2}, \cite[Theorem 1]{D-Nom} and \cite[Proposition 2.1]{DKMS}.
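The formulas for $x_1$ and $x_2$ can be exercised concretely with the chord addition law on $E_{abc}$. The sketch below is our own illustration (the quadruple, the square roots $t_{ij}$, and all helper names are ours, not the paper's): starting from Diophantus' classical rational quadruple, it computes $x(W\pm V)/abc$ and checks the known extension property that both values, together with $a,b,c,d$, have pairwise-product-plus-one a rational square.

```python
from fractions import Fraction as F
from math import isqrt

# Hypothetical test data (ours): Diophantus' rational Diophantine quadruple.
a, b, c, d = F(1, 16), F(33, 16), F(17, 4), F(105, 16)

def sqrt_frac(v):
    # exact rational square root; raises if v is not a rational square
    n, m = isqrt(v.numerator), isqrt(v.denominator)
    assert n * n == v.numerator and m * m == v.denominator
    return F(n, m)

# t_ij denotes a square root of the corresponding pair product plus 1
t12, t13, t23 = sqrt_frac(a*b + 1), sqrt_frac(a*c + 1), sqrt_frac(b*c + 1)
t14, t24, t34 = sqrt_frac(a*d + 1), sqrt_frac(b*d + 1), sqrt_frac(c*d + 1)

# E_abc: y^2 = (x+ab)(x+ac)(x+bc) = x^3 + A x^2 + ...
A = a*b + a*c + b*c
W = (a*b*c*d, a*b*c * t14*t24*t34)
V = (F(1), t12*t13*t23)

def add(p1, p2):
    # chord addition on y^2 = x^3 + A x^2 + B x + C, valid for x1 != x2
    (x1, y1), (x2, y2) = p1, p2
    lam = (y2 - y1) / (x2 - x1)
    x3 = lam*lam - A - x1 - x2
    return (x3, lam*(x1 - x3) - y1)

e = add(W, V)[0] / (a*b*c)
f = add(W, (V[0], -V[1]))[0] / (a*b*c)
# both candidates extend {a,b,c,d} to a rational Diophantine quintuple
for x in (e, f):
    for g in (a, b, c, d):
        sqrt_frac(g*x + 1)  # raises if g*x + 1 is not a rational square
```

Flipping the sign of the $y$-coordinate of $W$ merely swaps the two candidates, so the pair $\{x_1,x_2\}$ does not depend on the choice of square roots.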
\subsection{Degenerate case}
In this subsection we fix $Q_1=P$ and investigate conditions under which the triple $(Q_1, Q_2, Q_3)\in E(\Q(t))\times E(\Q(t)) \times E(\Q(t))$ corresponds to a degenerate Diophantine sextuple (i.e. $x_1 x_2 =0$). We call such a triple degenerate. Following the notation from the previous section, we see that the triple is degenerate if and only if $\pm W \pm V = [0,abc]\in E_{abc}(\Q(t))$ for some choice of the signs.
\begin{proposition} \label{prop:2}
Let $Q_2, Q_3 \in E(\Q(t))$. The triple $(Q_1, Q_2, Q_3)\in E(\Q(t))\times E(\Q(t)) \times E(\Q(t))$ is degenerate if and only if $\pm Q_1\pm Q_2 \pm Q_3= R$ for some choice of the signs.
\end{proposition}
\begin{proof}
Let $r=x(Q_2)$ and $s=x(Q_3)$. Direct calculation shows that the constant term of the polynomial from Proposition \ref{prop:1} is zero if and only if $g(r,s)h(r,s)=0$ where
\begin{eqnarray*}
g(r,s)&= \left( (-1+t)^2 r s-(1+t)^2(-1+2t)(r+s)+(1+t)^2(-1+2t)^2 \right)^2 - 16 r s t^2(1+t)^2(-1+2t),\\
h(r,s) &= \left( (-1+t)^2 r s-(1-t)^2(-1+2t)(r+s)+(1+t)^2(-1+2t)^2 \right)^2 - 16 r s t^2(1-t)^2(-1+2t).
\end{eqnarray*}
One can check that $r$ and $s$ satisfy this equation if $\pm Q_1\pm Q_2 \pm Q_3= R$ for some choice of the signs.
Conversely, both $g(r,s)=0$ and $h(r,s)=0$ define a curve that is birationally equivalent to $E$. Hence, we have a degree four map from ``the degeneracy locus'' in $E\times E$ to $E$ given by $(Q_2,Q_3) \mapsto (x(Q_2), x(Q_3))$. Since we already have $8$ irreducible components in ``the degeneracy locus'' (one for each choice of the signs), the claim follows.
\end{proof}
\subsection{Rationality}
Given a triple $(Q_1, Q_2, Q_3)\in E(\Q(t))\times E(\Q(t)) \times E(\Q(t))$, where $Q_1=P$, we want to know if the corresponding Diophantine quadruple is rational. It is enough to prove that one element is rational.
A short calculation shows that for the point $(T,S)\in E(\Q(t))$ we have
\begin{equation}\label{eq:2}
x^2-1=\left(\frac{S}{2T}\right)^2-1=T\left(\frac{T - \frac{2t^2 + t - 1}{t - 1}}{2T}\right)^2=:f(T).
\end{equation}
Since
\[
\begin{array}{ll}
a^2 &=\frac{f(Q_1)f(Q_2)f(Q_3)}{m'} \equiv x(Q_1)x(Q_2)x(Q_3)m'\equiv (2t-1)x(Q_2)x(Q_3)\\ & \equiv x(-P+R)x(Q_2)x(Q_3) \pmod{\Q(t)^{\times 2}}
\end{array}
\]
for the rationality of $a$ it is enough to prove that $x(-P+R)x(Q_2)x(Q_3)$ is a square in $\Q(t)$.
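The class computation $x(-P+R)\equiv(2t-1)$ used above can be made explicit. Assuming, as before, the reconstructed model $S^2=T^3+(4-2m')T^2+m'^2\,T$ (our assumption, not restated in the paper), chord addition gives the closed form $x(-P+R)=(t+1)^2(2t-1)/(t-1)^2$, which is manifestly $(2t-1)$ times a square; the sketch below checks this at sample rational values of $t$:

```python
from fractions import Fraction as F

def curve(t):
    m = (2*t*t + t - 1) / (t - 1)   # m'
    return 4 - 2*m, m*m             # A, B in S^2 = T^3 + A T^2 + B T

def add(t, p1, p2):
    # chord addition, valid for x1 != x2
    A, B = curve(t)
    (x1, y1), (x2, y2) = p1, p2
    lam = (y2 - y1) / (x2 - x1)
    x3 = lam*lam - A - x1 - x2
    return (x3, lam*(x1 - x3) - y1)

def P(t):
    return (((2*t - 1)**2 * (t + 1)) / (t - 1),
            (2*t * (2*t - 1)**2 * (t + 1)) / (t - 1))

def R(t):
    return (((t + 1) * (2*t - 1)) / (t - 1),
            (2 * (t + 1) * (2*t - 1)) / (t - 1))

for t0 in (F(2), F(3), F(5, 2), F(6)):
    Pm = (P(t0)[0], -P(t0)[1])              # -P
    x = add(t0, Pm, R(t0))[0]               # x(-P + R)
    assert x == (t0 + 1)**2 * (2*t0 - 1) / (t0 - 1)**2
```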
Since the point $(0,0)\in E(\Q(t))$ is a point of order $2$, the usual 2-descent homomorphism $E(\Q(t)) \rightarrow \Q(t)^\times/\Q(t)^{\times 2}$, which is for non-torsion points defined by $(T,S) \mapsto T$ (note that $(0,0)\mapsto 1$), implies the following proposition.
\begin{proposition}\label{prop:3} Let $Q_2, Q_3 \in E(\Q(t))$.
\begin{enumerate}
\item [a)] If $Q_2+Q_3\equiv \mathcal{O} \bmod{2E(\Q(t))}$ then $a^2 \equiv (2t-1) \bmod{\Q(t)^{\times 2}}$.
\item [b)] If $Q_2+Q_3\equiv R \bmod{2E(\Q(t))}$ then $a^2 \equiv (t-1)(t+1) \bmod{\Q(t)^{\times 2}}$.
\item [c)] If $Q_2+Q_3\equiv P \bmod{2E(\Q(t))}$ then $a^2 \equiv (t-1)(t+1)(2t-1) \bmod{\Q(t)^{\times 2}}$.
\item [d)] If $Q_2+Q_3\equiv P+R \bmod{2E(\Q(t))}$ then $a^2 \equiv 1 \bmod{\Q(t)^{\times 2}}$.
\end{enumerate}
\end{proposition}
\begin{remark}
In cases a) and b) we can still obtain parametric families of Diophantine sextuples if we specialize to those $t$'s for which $2t-1$ and $(t-1)(t+1)$ are squares (e.g. if we specialize $t$ to $\frac{t^2+1}{2}$ and $\frac{t^2+1}{2t}$, respectively). Concerning case c), the elliptic curve $y^2=(x-1)(x+1)(2x-1)$ has Mordell-Weil group isomorphic to $\Z/2\Z+\Z/4\Z$.
\end{remark}
\begin{remark}
The proposition covers all the possibilities, since the Mordell-Weil group of $E(\Q(t))$ is generated by $P$ and $R$ (see Proposition \ref{prop:MW}).
\end{remark}
\section{Examples}
\subsection{Family corresponding to $(P,2P,4P)$}
As an illustration, we calculate a parametric family $\{a, b, c, d, e, f \}$ of rational Diophantine sextuples that corresponds to the triple $(P,2P,4P)$. It follows from Proposition \ref{prop:2} that the triple is not degenerate. The rationality of the sextuple will follow if we replace $t$ by $\frac{t^2+1}{2}$ (see part a) of Proposition \ref{prop:3}). Then the corresponding Diophantine quadruple is equal to
\begin{eqnarray*}
a &\!\!=\!\!& \frac{ (t^{8} - 8 t^{6} - 14 t^{4} + 32 t^{2} - 27) \cdot (t^{8} + 26 t^{4} - 40 t^{2} - 3)}{64\cdot(t - 1) \cdot t \cdot (t + 1) \cdot (t^{4} - 2 t^{2} + 5) \cdot (t^{4} + 6 t^{2} - 3)}, \\
b &\!\!=\!\!& \frac{16 \cdot t \cdot (t - 1)^{2} \cdot (t + 1)^{2} \cdot (t^{2} + 3) \cdot (t^{4} - 2 t^{2} + 5) \cdot (t^{4} + 6 t^{2} - 3)}{(t^{8} - 8 t^{6} - 14 t^{4} + 32 t^{2} - 27) \cdot (t^{8} + 26 t^{4} - 40 t^{2} - 3)}, \\
c &\!\!=\!\!& \frac{t \cdot (t^{8} - 8 t^{6} - 14 t^{4} + 32 t^{2} - 27) \cdot (t^{8} + 26 t^{4} - 40 t^{2} - 3)}{(t - 1) \cdot (t + 1) \cdot (t^{2} - 3)^{2} \cdot (t^{2} + 1)^{2} \cdot (t^{4} - 2 t^{2} + 5) \cdot (t^{4} + 6 t^{2} - 3)},\\
d &\!\!=\!\!& \frac{4 \cdot t \cdot (t^{2} - 3)^{2} \cdot (t^{2} + 1)^{2} \cdot (t^{4} - 2 t^{2} + 5) \cdot (t^{4} + 6 t^{2} - 3)}{(t - 1) \cdot (t + 1) \cdot (t^{8} - 8 t^{6} - 14 t^{4} + 32 t^{2} - 27) \cdot (t^{8} + 26 t^{4} - 40 t^{2} - 3)}.
\end{eqnarray*}
Using Proposition \ref{prop:1} (let $e=x_1$ and $f=x_2$), we find that $e = e_1/e_2$ and $f = f_1/f_2$ are equal to
{\small
\begin{eqnarray*}
e_1&\!\!=\!\!& (t + 1) \cdot (t^{2} - 2 t + 3) \cdot (t^{2} + 2 t - 1) \cdot (t^{6} - 2 t^{5} + t^{4} + 12 t^{3} + 7 t^{2} - 2 t - 9) \\ & & \mbox{} \cdot (t^{6} + 2 t^{5} - 3 t^{4} + 4 t^{3} - 17 t^{2} + 18 t + 3) \\ & & \mbox{} \cdot (t^{12} - 4 t^{11} + 6 t^{10} + 20 t^{9} - t^{8} + 24 t^{7} - 12 t^{6} - 88 t^{5} - 177 t^{4} + 364 t^{3} - 90 t^{2} - 60 t + 81)\\ & & \mbox{} \cdot (t^{12} + 4 t^{11} - 2 t^{10} - 4 t^{9} - 41 t^{8} + 40 t^{7} + 100 t^{6} - 72 t^{5} + 63 t^{4} + 212 t^{3} - 66 t^{2} - 180 t + 9), \\
e_2&\!\!=\!\!&
64 \cdot (t - 1) \cdot t \cdot (t^{2} - 3)^{2} \cdot (t^{2} + 1)^{4} \cdot (t^{4} - 2 t^{2} + 5) \cdot (t^{4} + 6 t^{2} - 3) \cdot (t^{8} - 8 t^{6} - 14 t^{4} + 32 t^{2} - 27) \\ & & \mbox{} \cdot (t^{8} + 26 t^{4} - 40 t^{2} - 3), \\
f_1&\!\!=\!\!&
(t - 1) \cdot (t^{2} - 2 t - 1) \cdot (t^{2} + 2 t + 3) \cdot (t^{6} - 2 t^{5} - 3 t^{4} - 4 t^{3} - 17 t^{2} - 18 t + 3) \cdot (t^{6} + 2 t^{5} + t^{4} - 12 t^{3} + 7 t^{2} + 2 t - 9) \\ & & \mbox{} \cdot (t^{12} - 4 t^{11} - 2 t^{10} + 4 t^{9} - 41 t^{8} - 40 t^{7} + 100 t^{6} + 72 t^{5} + 63 t^{4} - 212 t^{3} - 66 t^{2} + 180 t + 9)\\ & & \mbox{} \cdot (t^{12} + 4 t^{11} + 6 t^{10} - 20 t^{9} - t^{8} - 24 t^{7} - 12 t^{6} + 88 t^{5} - 177 t^{4} - 364 t^{3} - 90 t^{2} + 60 t + 81), \\
f_2&\!\!=\!\!&
64 \cdot t \cdot (t + 1) \cdot (t^{2} - 3)^{2} \cdot (t^{2} + 1)^{4} \cdot (t^{4} - 2 t^{2} + 5) \cdot (t^{4} + 6 t^{2} - 3) \cdot (t^{8} - 8 t^{6} - 14 t^{4} + 32 t^{2} - 27)\\ & & \mbox{} \cdot (t^{8} + 26 t^{4} - 40 t^{2} - 3).\\
\end{eqnarray*}
\subsection{Rank two examples}
If we specialize $t$ to $t^2+1$, the elliptic curve $E$ will have another point of infinite order (independent of $P$), $S=\left[\frac{(2+t^2)^2}{t^2}, \frac{(2+t^2)^2(1+t^2)}{t^3}\right]$.
Now the triple $(P,2P+S,R+S-P)$ is not degenerate and satisfies the condition
of Proposition \ref{prop:3}(d). Our construction gives the following family of
rational Diophantine sextuples
\begin{eqnarray*}
a&\!\!=\!\!& \frac{(t^3+3t^2+t+1)\cdot (t^3+t^2+3t+1)\cdot (2t-1)}{2 \cdot(t-2)\cdot (t-1)\cdot (t^2+t+1) \cdot (t+1)}, \\
b&\!\!=\!\!& \frac{2\cdot (t-1)\cdot (t^2+t+1)\cdot (t+1)\cdot (t^2+2)\cdot (t-2)\cdot t^2}{(t^3+3 t^2+t+1) \cdot(t^3+t^2+3 t+1)\cdot (2t-1)},\\
c&\!\!=\!\!& \frac{(t^3+3t^2+t+1)\cdot (t^3+t^2+3t+1)\cdot (t-2)}{2\cdot (2t-1)\cdot (t-1)\cdot (t^2+t+1) \cdot(t+1)\cdot t^2},\\
d&\!\!=\!\!& \frac{2\cdot (2t^2+1)\cdot (2t-1)\cdot (t-1)\cdot (t^2+t+1)\cdot (t+1)}{t^2\cdot (t^3+3t^2+t+1) \cdot(t^3+t^2+3t+1)\cdot (t-2)},\\
e&\!\!=\!\!& \frac{8 \cdot t^2\cdot (t-1)\cdot (2 t+1)\cdot (t+2)\cdot (t+1)\cdot (t^2+1)}{(t-2)\cdot (2t-1)\cdot (t^2+t+1)\cdot (t^3+t^2+3t+1)\cdot (t^3+3t^2+t+1)},\\
f&\!\!=\!\!& \frac{3\cdot (3 t^2+2t+1)\cdot (t^2+2 t+3)\cdot (t^4+1)\cdot (t^4+4 t^2+1)}{2\cdot (t-1)\cdot (t-2)\cdot (2t-1)\cdot (t+1)\cdot (t^2+t+1)\cdot (t^3+t^2+3t+1)\cdot (t^3+3 t^2+t+1)}.\\
\end{eqnarray*}
If we further require $2(t^2+1)$ to be a square, then the resulting parametrization $t \mapsto 1+\left(\frac{4t^2 - 8t - 4}{4t^2 + 8t - 4}\right)^2$ yields a point $K$ on $E$
$$K=\left[\frac{(t^{2} - 2t + 1) \cdot (t^{2} + 2 t + 3) \cdot (t^{2} + 2t+ 1)^{2}}{(t^{2} - 2 t - 1)^{2} \cdot (t^{2} + 2 t - 1)^{2}},\frac{4 (t^{2} - 2 t + 1) \cdot (t^{2} + 1) \cdot (t^{2} + 2 t + 3) \cdot (t^{2} + 2 t + 1)^{2}}{(t^{2} + 2 t - 1)^{2} \cdot (t^{2} - 2 t - 1)^{3}}\right],$$
with the property that $2K=S$.
When we apply our construction to the triple $(P, K, -2K+R)$, we obtain a very simple family of sextuples also found by Piezas \cite{TP}
\begin{eqnarray*}
a&\!\!=\!\!& \frac{(t^2-2t-1)\cdot(t^2+2t+3)\cdot(3t^2-2t+1)}{4t\cdot(t^2-1)\cdot(t^2+2t-1)},\\
b&\!\!=\!\!& \frac{4t\cdot(t^2-1)\cdot(t^2-2t-1)}{(t^2+2t-1)^3},\\
c&\!\!=\!\!& \frac{4t\cdot(t^2-1)\cdot(t^2+2t-1)}{(t^2-2t-1)^3}, \\
d&\!\!=\!\!& \frac{(t^2+2t-1)\cdot(t^2-2t+3)\cdot(3t^2+2t+1)}{4t\cdot(t^2-1)\cdot(t^2-2t-1)},\\
e&\!\!=\!\!& \frac{ -t\cdot(t^2+4t+1)\cdot(t^2-4t+1)}{(t-1)\cdot(t+1)\cdot(t^2+2t-1)\cdot(t^2-2t-1)},\\
f&\!\!=\!\!& \frac{(t-1)\cdot(t+1)\cdot(3t^2-1)\cdot(t^2-3)}{4t\cdot(t^2+2t-1)\cdot(t^2-2t-1)}.\\
\end{eqnarray*}
{\bf Acknowledgements:} {The authors acknowledge support from the QuantiXLie Center of Excellence.
A.D. was supported by the Croatian Science Foundation under project no. 6422.}
The pion form factor is often considered a good observable for
studying the onset of the perturbative QCD (pQCD) regime in exclusive
processes. There are several reasons. First, the asymptotic forms of
the pion form factor at both large and small $Q^2$ are known. At
large $Q^2$ it scales as
\cite{Brodsky:1973kr,Brodsky:1974vy,Farrar:1979aw,Radyushkin:1977gp,Efremov:1978,Efremov:1978rn,Efremov:1979qk,
Jackson:1977,Lepage:1979zb}
\begin{equation}
\label{eq:large_qsq_scaling}
F_\pi(Q^2) =
\frac{8\pi\alpha_s(Q^2)f_\pi^2}{Q^2} \quad \mathrm{as} \quad Q^2 \to
\infty
\end{equation}
while at small $Q^2$, the pion form factor can be well described by
the Vector Meson Dominance (VMD) Model \cite{Holladay:1955,
Frazer:1959gy, Frazer:1960}
\begin{equation}
\label{eq:vmd_form} F_\pi(Q^2) \approx \frac{1}{1+Q^2 \left/
m_\mathrm{VMD}^2 \right.} \quad \mathrm{for} \quad Q^2 \ll
m_\mathrm{VMD}^2
\end{equation}
Therefore at some $Q^2$ there must be a transition from the VMD
behavior to the large $Q^2$ scaling predicted by pQCD. Since the
pion is the lightest hadron, the transition is expected to occur at
lower $Q^2$ than for heavier hadrons, which makes it relatively easier
to probe by both experiments and Lattice QCD (LQCD). Finally, there
is no disconnected diagram on the lattice for the pion form factor,
so the calculation is quite straightforward. Previous LQCD
studies on the pion form factor can be found in
\cite{Bonnet:2004fr,Hashimoto:2005am,Brommel:2005ee,Brommel:2006ww}
and the references therein.
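For orientation, the scale of the asymptotic prediction (\ref{eq:large_qsq_scaling}) is easy to evaluate numerically. The inputs below are illustrative assumptions on our part, not fitted quantities: $f_\pi \approx 131$ MeV (the convention matching this normalization) and a representative $\alpha_s \approx 0.3$ at a few GeV$^2$, ignoring the running.

```python
from math import pi

# Illustrative inputs (assumed, not from the paper):
f_pi = 0.1307      # GeV, pion decay constant in the 131 MeV convention
alpha_s = 0.3      # representative strong coupling at a few GeV^2

# Asymptotically Q^2 F_pi(Q^2) -> 8*pi*alpha_s*f_pi^2, up to the running
Q2F_asym = 8 * pi * alpha_s * f_pi**2
print(round(Q2F_asym, 3))   # -> 0.129 (GeV^2) for these inputs
```

This is roughly a factor of three below the VMD value $Q^2 F_\pi \sim Q^2/(1+Q^2/m_\mathrm{VMD}^2) \to m_\mathrm{VMD}^2 \approx 0.6$ GeV$^2$, which is why the intermediate-$Q^2$ region is interesting.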
The current results from various experiments are shown in Fig.
\ref{fig:Exp}, including the latest results from Jefferson Lab
(JLab) experiments E93-021 \cite{Volmer:2000ek,Tadevosyan:2007} and
E01-004 \cite{Horn:2007}. As the figure indicates, the
data points around $Q^2 \thicksim 2 \mathrm{GeV}^2$ start to show
some hints of deviation from the VMD fit. This is the energy regime
we would like to explore in our study.
\begin{figure}[h]
\centering
\includegraphics[width=0.66\textwidth]{Fpi_expt.png}
\caption{Summary of experimental data
for the pion electromagnetic form factor. The two points with open
circles are the latest data from Jefferson Lab (JLab). Shaded
regions are expected sensitivities of future experiments.}
\label{fig:Exp}
\end{figure}
\newpage
\section{Lattice Techniques}
In this section we explain the techniques we used in our lattice
calculations, namely the sequential source method (for calculating
the quark propagator) and the ratio method (for the correlation
functions). The pion electromagnetic form factor is obtained in
LQCD by placing a pion creation operator (the ``source'') at
Euclidean time $t_i$ with momentum $p_i$, a pion annihilation
operator (the ``sink'') at $t_f$ with momentum $p_f$, and a current
insertion at time $t$ with momentum transfer $q$, as shown in Fig.
\ref{fig:ThrPt}. The standard quark propagator calculation provides
the two propagator lines that originate from $t_i$, the remaining
propagator from $t_f$ is obtained by the \textit{sequential source
method}: completely specify the quantum numbers and $p_f$ at the
sink, and contract the propagator from $t_i$ to $t_f$ with the
annihilation operator to serve as the source vector of a second,
sequential propagator inversion. The advantage of using the
sequential source method is that various currents with different
$Q^2$ can be inserted at time $t$ without additional matrix
inversions. The largest $Q^2$ available lies in the Breit frame
($\vec{p}_f = -\vec{p}_i$).
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth,angle=-90]{ThrPt.pdf}
\caption{The quark propagators used to compute the pion form factor.}
\label{fig:ThrPt}
\end{figure}
To obtain a simple expression on the lattice, we construct the pion
form factor using the \textit{ratio method}. The pion form factor
$F(Q^2)$ is defined as
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\!
\left<\pi(\vec{p}_f)\left|V_\mu(0)\right|\pi(\vec{p}_i)\right>_{\rm
continuum}
\\
&&\!\!\! =
Z_V\left<\pi(\vec{p}_f)\left|V_\mu(0)\right|\pi(\vec{p}_i)\right> =
F(Q^2)(p_i+p_f)_\mu \nonumber
\end{eqnarray}
where $V_\mu(x)$ is the chosen vector current. We can extract
$F(Q^2)$ from some ratio of the three-point correlation function and
the two-point functions. The three-point function can be written as
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\!
\Gamma_{\pi\mu\pi}^{AB}(t_i,t,t_f,\vec{p}_i,\vec{p}_f)
= a^9\sum_{\vec{x}_i,\vec{x}_f}
e^{-i(\vec{x}_f-\vec{x})\cdot\vec{p}_f}
\nonumber \\
&&\times e^{-i(\vec{x}-\vec{x}_i)\cdot\vec{p}_i}
\left<0\left|\phi_B(x_f)V_\mu(x)\phi_A^\dagger(x_i)\right|0\right>
\end{eqnarray}
where $\phi$'s are operators with pion quantum numbers; $A\in(L,S)$
and $B\in(L,S)$ denote either ``local''($L$) or ``smeared''($S$).
Inserting complete sets of hadron states and requiring $t_i\ll t\ll
t_f$, gives
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\!
\Gamma_{\pi\mu\pi}^{AB}(t_i,t,t_f,\vec{p}_i,\vec{p}_f) \to
\left<0\left|\phi_B(x)\right|\pi(\vec{p}_f)\right>
\nonumber \\
&&
\times\left<\pi(\vec{p}_f)\left|V_\mu(x)\right|\pi(\vec{p}_i)\right>
\left<\pi(\vec{p}_i)\left|\phi_A^\dagger(x)\right|0\right> \nonumber \\
&&
\times\frac{a^3}{4E_\pi(\vec{p}_f)E_\pi(\vec{p}_i)} e^{-(t_f-t)E_\pi(\vec{p}_f)}e^{-(t-t_i)E_\pi(\vec{p}_i)}.
\end{eqnarray}
Similarly for the two-point correlator,
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\! \Gamma_{\pi\pi}^{AB}(t_i,t_f,\vec{p})
\to \left<0\left|\phi_B(x_i)\right|\pi(\vec{p})\right>
\nonumber \\ && \times
\left<\pi(\vec{p})\left|\phi_A^\dagger(x_i)\right|0\right>
\frac{a^3}{2E}e^{-(t_f-t_i)E}.
\end{eqnarray}
We can obtain $F(Q^2)$ from the following ratio
\begin{eqnarray}
F(Q^2) &=& \frac{\Gamma_{\pi
4\pi}^{AB}(t_i,t,t_f,\vec{p}_i,\vec{p}_f)
\Gamma_{\pi\pi}^{CL}(t_i,t,\vec{p}_f)}
{\Gamma_{\pi\pi}^{AL}(t_i,t,\vec{p}_i)
\Gamma_{\pi\pi}^{CB}(t_i,t_f,\vec{p}_f)}
\nonumber \\ && \times
\left(\frac{2Z_VE_\pi(\vec{p}_f)}{E_\pi(\vec{p}_i)+E_\pi{\vec{p}_f}}\right), \label{theratio}
\end{eqnarray}
where the indices $A$, $B$ and $C$ can be either $L$ (local) or $S$
(smeared).
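As a sanity check of this cancellation, the following sketch builds the two- and three-point functions from their spectral forms with hypothetical overlap factors, energies and $Z_V$ (made-up numbers, not lattice data) and confirms that the ratio recovers $F(Q^2)$ exactly: all $Z$-factors and exponentials cancel, provided the local operator's overlap is momentum independent.

```python
import math

# Hypothetical amplitudes and energies (illustration only, not lattice data).
F_true, Z_V = 0.42, 1.1
E = {'pi': 0.55, 'pf': 0.62}              # pion energies at p_i and p_f
Z = {('L', 'pi'): 0.8, ('L', 'pf'): 0.8,  # local overlap: p-independent
     ('S', 'pi'): 1.7, ('S', 'pf'): 2.3}  # smeared overlaps: p-dependent

def two_pt(Aop, Bop, p, t0, t1):
    # Z_B Z_A / (2E) * exp(-(t1 - t0) E), lattice spacing a = 1
    return Z[(Bop, p)] * Z[(Aop, p)] / (2 * E[p]) * math.exp(-(t1 - t0) * E[p])

def three_pt(Aop, Bop, ti, t, tf):
    # ground-state saturation of the three-point function with a V_4 insertion
    me = F_true * (E['pi'] + E['pf']) / Z_V   # bare lattice matrix element
    return (Z[(Bop, 'pf')] * me * Z[(Aop, 'pi')] / (4 * E['pf'] * E['pi'])
            * math.exp(-(tf - t) * E['pf']) * math.exp(-(t - ti) * E['pi']))

def ratio(ti, t, tf, A='S', B='S', C='S'):
    r = (three_pt(A, B, ti, t, tf) * two_pt(C, 'L', 'pf', ti, t)
         / (two_pt(A, 'L', 'pi', ti, t) * two_pt(C, B, 'pf', ti, tf)))
    return r * 2 * Z_V * E['pf'] / (E['pi'] + E['pf'])

assert abs(ratio(10, 14, 20) - F_true) < 1e-12
```

In practice the plateau of this ratio in $t$ is fitted; here the mock data contain only the ground state, so the ratio is flat by construction.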
\section{Simulation Details and Results}
We use lattices generated by MILC \cite{Bernard:2001av}, with volume
$20^3\times32$ and lattice spacing $a=0.125$ fm. The sea quark mass
$m_{sea}$ and the valence quark mass $m_{val}$ are tuned so that we
get the same lightest pion mass $m_\pi(m_{sea})=m_\pi(m_{val})$
\cite{Hagler:2007xi}. The pion operators are fixed at time $t_i=10$
and $t_f=20$, and the number of configurations used in this study is
$201$. We use five different sets of sink momenta: $\vec{p}_f =
(0,0,0), (1,0,0), (1,1,0), (1,1,1)$, and $(2,0,0)$.
We present our results in terms of the square of the pion charge
radius, obtained by the VMD fit:
\begin{equation}
\langle r_\pi^2 \rangle= \frac{6}{m_{VMD}^2}
\label{eq:charge_radius}
\end{equation}
as shown in Fig. \ref{fig:Radius_allPf}. The first point on the left
is from the data set with only zero sink momentum ($\vec{p}_f = (0,
0, 0)$), and for the next point we combined the data from both
$\vec{p}_f = (0, 0, 0)$ and $\vec{p}_f = (1, 0, 0)$, and for the
third point we added in $\vec{p}_f = (1, 1, 0)$, and so on.
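As a quick check of the scale set by Eq. (\ref{eq:charge_radius}), one can plug in the physical $\rho$ mass for $m_{VMD}$; the numbers below are illustrative assumptions, not the fitted values from our data.

```python
import math

# Illustrative inputs (assumed, not the paper's fit): m_VMD = physical rho mass
hbar_c = 0.197327        # GeV * fm
m_vmd = 0.77526          # GeV

r2 = 6 * (hbar_c / m_vmd)**2        # <r_pi^2> in fm^2
print(round(math.sqrt(r2), 3), "fm")  # -> 0.623 fm
```

This is close to the measured charge radius $r_\pi \approx 0.66$ fm, illustrating why VMD works so well at low $Q^2$.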
\begin{figure}[h]
\centering
\includegraphics[width=0.65\textwidth]{Radius_all.png}
\caption{Pion Form Factor VMD fit for $\vec{p}_f = (0, 0, 0)$ to $(2, 0, 0)$.}
\label{fig:Radius_allPf}
\end{figure}
We can see from Fig. \ref{fig:Radius_allPf} that the error bars of
$r_\pi^2$ increase as higher sink momenta are included. Since the
pion charge radius is related to the slope of $F(Q^2)$ \emph{at low
$Q^2$}, we derive $\langle r_\pi^2 \rangle$ from the data set of
zero sink momentum $\vec{p}_f = (0,0,0)$ and $Q^2 < 1
\mathrm{GeV}^2$, and check the consistency between the VMD fit and
the data above $1 \mathrm{GeV}^2$ to see if there is any deviation
from the VMD model.
The result of this ``consistency check'' is presented in Fig.
\ref{fig:QsqF_with_exp}, where we plot $Q^2F(Q^2)$ against $Q^2$.
While the quantity $Q^2F(Q^2)$ should approach a constant as
predicted by VMD, we can see that there are some hints of deviation
from the VMD model for points with $Q^2
> 2\mathrm{GeV}^2$. To further emphasize this observation, we
define $\Delta Q^2F(Q^2)= Q^2F(Q^2)_{Lattice}- Q^2F(Q^2)_{VMD}$, and
plot $\Delta Q^2F(Q^2)$ against $Q^2$ in Fig. \ref{fig:delta_QsqF}.
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{QsqF_with_Exp.png}
\caption{Consistency between data and the VMD fit from low $Q^2$ with zero sink momentum. The three red lines
represent the VMD fit and its error bars, and the triangle points correspond to the experimental data from JLab.}
\label{fig:QsqF_with_exp}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.65\textwidth]{delta_QsqF.png}
\caption{$\Delta Q^2F(Q^2)$, as defined in the text.}
\label{fig:delta_QsqF}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.65\textwidth]{new_BF_QsqF_with_VMD.png}
\caption{VMD fits with error bands from both data of low $Q^2$ with zero $\vec{p}_f$, and from data points
in the Breit frame.}
\label{fig:BF_QsqF_with_VMD}
\end{figure}
We also compared the VMD fit from low $Q^2$ with that from the
Breit frame (where $\vec{p}_f = -\vec{p}_i$), since the data points in
the Breit frame have relatively small error bars at high $Q^2$. The
result is shown in Fig. \ref{fig:BF_QsqF_with_VMD}. The figure
implies that the VMD fit from the Breit frame (the purple one) is
about $1 \sigma$ away from the fit of a single zero sink momentum
(the blue one), hence exploring further in the Breit frame may be
the correct direction for studying the pion form factor at higher
momentum transfer.
\section{Summary and Outlook}
In this study we have acquired enough lattice data for $Q^2 < 1
\mathrm{GeV}^2$ to extract a reliable pion charge radius $r_\pi$. By
comparing the VMD fit from data at low $Q^2$ with the data at high $Q^2$, we have
started to see some hints of a discrepancy between data points at
different momentum transfer, which may indicate the transition from
the VMD model to pQCD that we are seeking. We also found
that the VMD fit from the Breit frame is about $1 \sigma$ away from
the fit of a single zero sink momentum, and we infer that we may
explore further in the high $Q^2$ regime by studying the data from
the Breit frame. We are generating four times more data to shrink
the error bars in the Breit frame in the hope of clearer and
stronger evidence of the transition into the perturbative QCD
regime.
In the meantime, the JLQCD Collaboration has also reported their
calculation of the pion form factor based on all-to-all propagators.
Interested readers may find details in their upcoming publication
\cite{Kaneko:2007}.
\def\@section#1
\if@nobreak
\everypar{}%
\ifnum\LastMac=\Hae \addvspace{\half}\fi
\else
\addpen{\gds@cbrk}%
\addvspace{\two}%
\fi
\bgroup
\ninepoint\bf
\Raggedright
\ifAutoNumber
\global\advance\Sec \@ne
\noindent\@nohdbrk\number\Sec\hskip 1pc \uppercase{#1}\@par}
\global\SecSec=\z@
\else
\noindent\@nohdbrk\uppercase{#1}\@par}
\fi
\egroup
\nobreak
\vskip\half
\nobreak
\@noafterindent
\LastMac=\Hae\relax
}
\def\@ssection#1
\if@nobreak
\everypar{}%
\ifnum\LastMac=\Hae \addvspace{\half}\fi
\else
\addpen{\gds@cbrk}%
\addvspace{\two}%
\fi
\bgroup
\ninepoint\bf
\Raggedright
\noindent\@nohdbrk\uppercase{#1}\@par}
\egroup
\nobreak
\vskip\half
\nobreak
\@noafterindent
\LastMac=\Hae\relax
}
\def\subsection#1
\if@nobreak
\everypar{}%
\ifnum\LastMac=\Hae \addvspace{1pt plus 1pt minus .5pt}\fi
\else
\addpen{\gds@cbrk}%
\addvspace{\onehalf}%
\fi
\bgroup
\ninepoint\bf
\Raggedright
\ifAutoNumber
\global\advance\SecSec \@ne
\noindent\@nohdbrk\number\Sec.\number\SecSec \hskip 1pc\relax #1\@par}
\global\SecSecSec=\z@
\else
\noindent\@nohdbrk #1\@par}
\fi
\egroup
\nobreak
\vskip\half
\nobreak
\@noafterindent
\LastMac=\Hbe\relax
}
\def\subsubsection#1
\if@nobreak
\everypar{}%
\ifnum\LastMac=\Hbe \addvspace{1pt plus 1pt minus .5pt}\fi
\else
\addpen{\gds@cbrk}%
\addvspace{\onehalf}%
\fi
\bgroup
\ninepoint\it
\Raggedright
\ifAutoNumber
\global\advance\SecSecSec \@ne
\noindent\@nohdbrk\number\Sec.\number\SecSec.\number\SecSecSec
\hskip 1pc\relax #1\@par}
\else
\noindent\@nohdbrk #1\@par}
\fi
\egroup
\nobreak
\vskip\half
\nobreak
\@noafterindent
\LastMac=\Hce\relax
}
\def\paragraph#1
\if@nobreak
\everypar{}%
\else
\addpen{\gds@cbrk}%
\addvspace{\one}%
\fi%
\bgroup%
\ninepoint\it
\noindent #1\ \nobreak%
\egroup
\LastMac=\Hde\relax
\ignorespaces
}
\let\tx=\relax
\def\beginlist{%
\@par}\if@nobreak \else\addvspace{\half}\fi%
\bgroup%
\ninepoint
\let\item=\list@item%
}
\def\list@item{%
\@par}\noindent\hskip 1em\relax%
\ignorespaces%
}
\def\par\egroup\addvspace{\half}\@doendpe{\@par}\egroup\addvspace{\half}\@doendpe}
\def\beginrefs{%
\@par}
\bgroup
\eightpoint
\Raggedright
\let\bibitem=\bib@item
}
\def\bib@item{%
\@par}\parindent=1.5em\Hang{1.5em}{1}%
\everypar={\Hang{1.5em}{1}\ignorespaces}%
\noindent\ignorespaces
}
\def\par\egroup\@doendpe{\@par}\egroup\@doendpe}
\newtoks\CatchLine
\def\@journal{Mon.\ Not.\ R.\ Astron.\ Soc.\ }
\def\@pubyear{1994}
\def\@pagerange{000--000}
\def\@volume{000}
\def\@microfiche{} %
\def\pubyear#1{\gdef\@pubyear{#1}\@makecatchline}
\def\pagerange#1{\gdef\@pagerange{#1}\@makecatchline}
\def\volume#1{\gdef\@volume{#1}\@makecatchline}
\def\microfiche#1{\gdef\@microfiche{and Microfiche\ #1}\@makecatchline}
\def\@makecatchline{%
\global\CatchLine{%
{\rm \@journal {\bf \@volume},\ \@pagerange\ (\@pubyear)\ \@microfiche}}%
}
\@makecatchline
\newtoks\LeftHeader
\def\shortauthor#1
\global\LeftHeader{#1}%
}
\newtoks\RightHeader
\def\shorttitle#1
\global\RightHeader{#1}%
}
\def\PageHead
\begingroup
\ifsp@page
\csname ps@\sp@type\endcsname
\global\sp@pagefalse
\fi
\ifodd\pageno
\let\the@head=\@oddhead
\else
\let\the@head=\@evenhead
\fi
\vbox to \z@{\vskip-22.5\p@%
\hbox to \PageWidth{\vbox to8.5\p@{}%
\the@head
}%
\vss}%
\endgroup
\nointerlineskip
}
\def\today{%
\number\day\space
\ifcase\month\or January\or February\or March\or April\or May\or June\or
July\or August\or September\or October\or November\or December\fi
\space\number\year%
}
\def\PageFoot{}
\def\authorcomment#1{%
\gdef\PageFoot{%
\nointerlineskip%
\vbox to 22pt{\vfil%
\hbox to \PageWidth{\elevenpoint\noindent \hfil #1 \hfil}}%
}%
}
\newif\ifplate@page
\newbox\plt@box
\def\beginplatepage{%
\let\plate=\plate@head
\let\caption=\fig@caption
\global\setbox\plt@box=\vbox\bgroup
\TEMPDIMEN=\PageWidth
\hsize=\PageWidth\relax
}
\def\par\egroup\global\plate@pagetrue{\@par}\egroup\global\plate@pagetrue}
\def\plate@head#1{\gdef\plt@cap{#1}}
\def\letters{%
\gdef\folio{\ifnum\pageno<\z@ L\romannumeral-\pageno
\else L\number\pageno \fi}%
}
\everydisplay{\displaysetup}
\newif\ifeqno
\newif\ifleqno
\def\displaysetup#1$${%
\displaytest#1\eqno\eqno\displaytest
}
\def\displaytest#1\eqno#2\eqno#3\displaytest{%
\if!#3!\ldisplaytest#1\leqno\leqno\ldisplaytest
\else\eqnotrue\leqnofalse\def#2}\fi{#2}\def\eq{#1}\fi
\generaldisplay$$}
\def\ldisplaytest#1\leqno#2\leqno#3\ldisplaytest{%
\def\eq{#1}%
\if!#3!\eqnofalse\else\eqnotrue\leqnotrue
\def#2}\fi{#2}\fi}
\def\generaldisplay{%
\ifeqno \ifleqno
\hbox to \hsize{\noindent
$\displaystyle\eq$\hfil$\displaystyle#2}\fi$}
\else
\hbox to \hsize{\noindent
$\displaystyle\eq$\hfil$\displaystyle#2}\fi$}
\fi
\else
\hbox to \hsize{\vbox{\noindent
$\displaystyle\eq$\hfil}}
\fi
}
\def\@notice{%
\@par}\addvspace{\two}%
\noindent{\b@ls{11pt}\ninerm This paper has been produced using the
Blackwell Scientific Publications \TeX\ macros.\@par}}%
}
\outer\def\@notice\par\vfill\supereject\end{\@notice\@par}\vfill\supereject\end}
\def\start@mess{%
Monthly notices of the RAS journal style (\@typeface)\space
v\@version,\space \@verdate.%
}
\everyjob{\Warn{\start@mess}}
\newif\if@debug \@debugfalse
\def\Print#1{\if@debug\immediate\write16{#1}\else \fi}
\def\Warn#1{\immediate\write16{#1}}
\def\immediate\write\m@ne#1{}
\newcount\Iteration
\def\Single{0} \def\Double{1}
\def\Figure{0} \def\Table{1}
\def\InStack{0}
\def1{1}
\def2{2}
\def3{3}
\newcount\TEMPCOUNT
\newdimen\TEMPDIMEN
\newbox\TEMPBOX
\newbox\VOIDBOX
\newcount\LengthOfStack
\newcount\MaxItems
\newcount\StackPointer
\newcount\Point
\newcount\NextFigure
\newcount\NextTable
\newcount\NextItem
\newcount\StatusStack
\newcount\NumStack
\newcount\TypeStack
\newcount\SpanStack
\newcount\BoxStack
\newcount\ItemSTATUS
\newcount\ItemNUMBER
\newcount\ItemTYPE
\newcount\ItemSPAN
\newbox\ItemBOX
\newdimen\ItemSIZE
\newdimen\PageHeight
\newdimen\TextLeading
\newdimen\Feathering
\newcount\LinesPerPage
\newdimen\ColumnWidth
\newdimen\ColumnGap
\newdimen\PageWidth
\newdimen\BodgeHeight
\newcount\Leading
\newdimen\ZoneBSize
\newdimen\TextSize
\newbox\ZoneABOX
\newbox\ZoneBBOX
\newbox\ZoneCBOX
\newif\ifFirstSingleItem
\newif\ifFirstZoneA
\newif\ifMakePageInComplete
\newif\ifMoreFigures \MoreFiguresfalse
\newif\ifMoreTables \MoreTablesfalse
\newif\ifFigInZoneB
\newif\ifFigInZoneC
\newif\ifTabInZoneB
\newif\ifTabInZoneC
\newif\ifZoneAFullPage
\newbox\MidBOX
\newbox\LeftBOX
\newbox\RightBOX
\newbox\PageBOX
\newif\ifLeftCOL
\LeftCOLtrue
\newdimen\ZoneBAdjust
\newcount\ItemFits
\def1{1}
\def2{2}
\def\LineAdjust#1{\global\ZoneBAdjust=#1\TextLeading\relax}
\MaxItems=15
\NextFigure=\z@
\NextTable=\@ne
\BodgeHeight=6pt
\TextLeading=11pt
\Leading=11
\Feathering=\z@
\LinesPerPage=61
\topskip=\TextLeading
\ColumnWidth=20pc
\ColumnGap=2pc
\newskip\ItemSepamount
\ItemSepamount=\TextLeading plus \TextLeading minus 4pt
\parskip=\z@ plus .1pt
\parindent=18pt
\widowpenalty=\z@
\clubpenalty=10000
\tolerance=1500
\hbadness=1500
\abovedisplayskip=6pt plus 2pt minus 2pt
\belowdisplayskip=6pt plus 2pt minus 2pt
\abovedisplayshortskip=6pt plus 2pt minus 2pt
\belowdisplayshortskip=6pt plus 2pt minus 2pt
\ninepoint
\PageHeight=682pt
\PageWidth=2\ColumnWidth
\advance\PageWidth by \ColumnGap
\pagestyle{headings}
\newcount\DUMMY \StatusStack=\allocationnumber
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \NumStack=\allocationnumber
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \TypeStack=\allocationnumber
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \SpanStack=\allocationnumber
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newbox\DUMMY \BoxStack=\allocationnumber
\newbox\DUMMY \newbox\DUMMY \newbox\DUMMY
\newbox\DUMMY \newbox\DUMMY \newbox\DUMMY
\newbox\DUMMY \newbox\DUMMY \newbox\DUMMY
\newbox\DUMMY \newbox\DUMMY \newbox\DUMMY
\newbox\DUMMY \newbox\DUMMY \newbox\DUMMY
\def\immediate\write\m@ne{\immediate\write\m@ne}
\def\GetItemAll#1{%
\GetItemSTATUS{#1}
\GetItemNUMBER{#1}
\GetItemTYPE{#1}
\GetItemSPAN{#1}
\GetItemBOX{#1}
}
\def\GetItemSTATUS#1{%
\Point=\StatusStack
\advance\Point by #1
\global\ItemSTATUS=\count\Point
}
\def\GetItemNUMBER#1{%
\Point=\NumStack
\advance\Point by #1
\global\ItemNUMBER=\count\Point
}
\def\GetItemTYPE#1{%
\Point=\TypeStack
\advance\Point by #1
\global\ItemTYPE=\count\Point
}
\def\GetItemSPAN#1{%
\Point\SpanStack
\advance\Point by #1
\global\ItemSPAN=\count\Point
}
\def\GetItemBOX#1{%
\Point=\BoxStack
\advance\Point by #1
\global\setbox\ItemBOX=\vbox{\copy\Point}
\global\ItemSIZE=\ht\ItemBOX
\global\advance\ItemSIZE by \dp\ItemBOX
\TEMPCOUNT=\ItemSIZE
\divide\TEMPCOUNT by \Leading
\divide\TEMPCOUNT by 65536
\advance\TEMPCOUNT \@ne
\ItemSIZE=\TEMPCOUNT pt
\global\multiply\ItemSIZE by \Leading
}
\def\JoinStack{%
\ifnum\LengthOfStack=\MaxItems
\Warn{WARNING: Stack is full...some items will be lost!}
\else
\Point=\StatusStack
\advance\Point by \LengthOfStack
\global\count\Point=\ItemSTATUS
\Point=\NumStack
\advance\Point by \LengthOfStack
\global\count\Point=\ItemNUMBER
\Point=\TypeStack
\advance\Point by \LengthOfStack
\global\count\Point=\ItemTYPE
\Point\SpanStack
\advance\Point by \LengthOfStack
\global\count\Point=\ItemSPAN
\Point=\BoxStack
\advance\Point by \LengthOfStack
\global\setbox\Point=\vbox{\copy\ItemBOX}
\global\advance\LengthOfStack \@ne
\ifnum\ItemTYPE=\Figure
\global\MoreFigurestrue
\else
\global\MoreTablestrue
\fi
\fi
}
\def\LeaveStack#1{%
{\Iteration=#1
\loop
\ifnum\Iteration<\LengthOfStack
\advance\Iteration \@ne
\GetItemSTATUS{\Iteration}
\advance\Point by \m@ne
\global\count\Point=\ItemSTATUS
\GetItemNUMBER{\Iteration}
\advance\Point by \m@ne
\global\count\Point=\ItemNUMBER
\GetItemTYPE{\Iteration}
\advance\Point by \m@ne
\global\count\Point=\ItemTYPE
\GetItemSPAN{\Iteration}
\advance\Point by \m@ne
\global\count\Point=\ItemSPAN
\GetItemBOX{\Iteration}
\advance\Point by \m@ne
\global\setbox\Point=\vbox{\copy\ItemBOX}
\repeat}
\global\advance\LengthOfStack by \m@ne
}
\newif\ifStackNotClean
\def\CleanStack{%
\StackNotCleantrue
{\Iteration=\z@
\loop
\ifStackNotClean
\GetItemSTATUS{\Iteration}
\ifnum\ItemSTATUS=\InStack
\advance\Iteration \@ne
\else
\LeaveStack{\Iteration}
\fi
\ifnum\LengthOfStack<\Iteration
\StackNotCleanfalse
\fi
\repeat}
}
\def\FindItem#1#2{%
\global\StackPointer=\m@ne
{\Iteration=\z@
\loop
\ifnum\Iteration<\LengthOfStack
\GetItemSTATUS{\Iteration}
\ifnum\ItemSTATUS=\InStack
\GetItemTYPE{\Iteration}
\ifnum\ItemTYPE=#1
\GetItemNUMBER{\Iteration}
\ifnum\ItemNUMBER=#2
\global\StackPointer=\Iteration
\Iteration=\LengthOfStack
\fi
\fi
\fi
\advance\Iteration \@ne
\repeat}
}
\def\FindNext{%
\global\StackPointer=\m@ne
{\Iteration=\z@
\loop
\ifnum\Iteration<\LengthOfStack
\GetItemSTATUS{\Iteration}
\ifnum\ItemSTATUS=\InStack
\GetItemTYPE{\Iteration}
\ifnum\ItemTYPE=\Figure
\ifMoreFigures
\global\NextItem=\Figure
\global\StackPointer=\Iteration
\Iteration=\LengthOfStack
\fi
\fi
\ifnum\ItemTYPE=\Table
\ifMoreTables
\global\NextItem=\Table
\global\StackPointer=\Iteration
\Iteration=\LengthOfStack
\fi
\fi
\fi
\advance\Iteration \@ne
\repeat}
}
\def\ChangeStatus#1#2{%
\Point=\StatusStack
\advance\Point by #1
\global\count\Point=#2
}
\def\InZoneB{1}
\ZoneBAdjust=\z@
\def\MakePage{%
\global\ZoneBSize=\PageHeight
\global\TextSize=\ZoneBSize
\global\ZoneAFullPagefalse
\global\topskip=\TextLeading
\MakePageInCompletetrue
\MoreFigurestrue
\MoreTablestrue
\FigInZoneBfalse
\FigInZoneCfalse
\TabInZoneBfalse
\TabInZoneCfalse
\global\FirstSingleItemtrue
\global\FirstZoneAtrue
\global\setbox\ZoneABOX=\box\VOIDBOX
\global\setbox\ZoneBBOX=\box\VOIDBOX
\global\setbox\ZoneCBOX=\box\VOIDBOX
\loop
\ifMakePageInComplete
\FindNext
\ifnum\StackPointer=\m@ne
\NextItem=\m@ne
\MoreFiguresfalse
\MoreTablesfalse
\fi
\ifnum\NextItem=\Figure
\FindItem{\Figure}{\NextFigure}
\ifnum\StackPointer=\m@ne \global\MoreFiguresfalse
\else
\GetItemSPAN{\StackPointer}
\ifnum\ItemSPAN=\Single \def\InZoneB{2}\relax
\ifFigInZoneC \global\MoreFiguresfalse\fi
\else
\def\InZoneB{1}
\ifFigInZoneB \def\InZoneB{3}\fi
\fi
\fi
\ifMoreFigures\Print{}\FigureItems\fi
\fi
\ifnum\NextItem=\Table
\FindItem{\Table}{\NextTable}
\ifnum\StackPointer=\m@ne \global\MoreTablesfalse
\else
\GetItemSPAN{\StackPointer}
\ifnum\ItemSPAN=\Single\relax
\ifTabInZoneC \global\MoreTablesfalse\fi
\else
\def\InZoneB{1}
\ifTabInZoneB \def\InZoneB{3}\fi
\fi
\fi
\ifMoreTables\Print{}\TableItems\fi
\fi
\MakePageInCompletefalse
\ifMoreFigures\MakePageInCompletetrue\fi
\ifMoreTables\MakePageInCompletetrue\fi
\repeat
\ifZoneAFullPage
\global\TextSize=\z@
\global\ZoneBSize=\z@
\global\vsize=\z@\relax
\global\topskip=\z@\relax
\vbox to \z@{\vss}
\eject
\else
\global\advance\ZoneBSize by -\ZoneBAdjust
\global\vsize=\ZoneBSize
\global\hsize=\ColumnWidth
\global\ZoneBAdjust=\z@
\ifdim\TextSize<23pt
\Warn{}
\Warn{* Making column fall short: TextSize=\the\TextSize *}
\vskip-\lastskip\eject\fi
\fi
}
\def\MakeRightCol{%
\global\TextSize=\ZoneBSize
\MakePageInCompletetrue
\MoreFigurestrue
\MoreTablestrue
\global\FirstSingleItemtrue
\global\setbox\ZoneBBOX=\box\VOIDBOX
\def\InZoneB{2}
\loop
\ifMakePageInComplete
\FindNext
\ifnum\StackPointer=\m@ne
\NextItem=\m@ne
\MoreFiguresfalse
\MoreTablesfalse
\fi
\ifnum\NextItem=\Figure
\FindItem{\Figure}{\NextFigure}
\ifnum\StackPointer=\m@ne \MoreFiguresfalse
\else
\GetItemSPAN{\StackPointer}
\ifnum\ItemSPAN=\Double\relax
\MoreFiguresfalse\fi
\fi
\ifMoreFigures\Print{}\FigureItems\fi
\fi
\ifnum\NextItem=\Table
\FindItem{\Table}{\NextTable}
\ifnum\StackPointer=\m@ne \MoreTablesfalse
\else
\GetItemSPAN{\StackPointer}
\ifnum\ItemSPAN=\Double\relax
\MoreTablesfalse\fi
\fi
\ifMoreTables\Print{}\TableItems\fi
\fi
\MakePageInCompletefalse
\ifMoreFigures\MakePageInCompletetrue\fi
\ifMoreTables\MakePageInCompletetrue\fi
\repeat
\ifZoneAFullPage
\global\TextSize=\z@
\global\ZoneBSize=\z@
\global\vsize=\z@\relax
\global\topskip=\z@\relax
\vbox to \z@{\vss}
\eject
\else
\global\vsize=\ZoneBSize
\global\hsize=\ColumnWidth
\ifdim\TextSize<23pt
\Warn{}
\Warn{* Making column fall short: TextSize=\the\TextSize *}
\vskip-\lastskip\eject\fi
\fi
}
\def\FigureItems{%
\Print{Considering...}
\ShowItem{\StackPointer}
\GetItemBOX{\StackPointer}
\GetItemSPAN{\StackPointer}
\CheckFitInZone
\ifnum\ItemFits=1
\ifnum\ItemSPAN=\Single
\ChangeStatus{\StackPointer}{2}
\global\FigInZoneBtrue
\ifFirstSingleItem
\hbox{}\vskip-\BodgeHeight
\global\advance\ItemSIZE by \TextLeading
\fi
\unvbox\ItemBOX\vskip\ItemSepamount\relax
\global\FirstSingleItemfalse
\global\advance\TextSize by -\ItemSIZE
\global\advance\TextSize by -\TextLeading
\else
\ifFirstZoneA
\global\advance\ItemSIZE by \TextLeading
\global\FirstZoneAfalse\fi
\global\advance\TextSize by -\ItemSIZE
\global\advance\TextSize by -\TextLeading
\global\advance\ZoneBSize by -\ItemSIZE
\global\advance\ZoneBSize by -\TextLeading
\ifFigInZoneB\relax
\else
\ifdim\TextSize<3\TextLeading
\global\ZoneAFullPagetrue
\fi
\fi
\ChangeStatus{\StackPointer}{\InZoneB}
\ifnum\InZoneB=3 \global\FigInZoneCtrue\fi
\fi
\Print{TextSize=\the\TextSize}
\Print{ZoneBSize=\the\ZoneBSize}
\global\advance\NextFigure \@ne
\Print{This figure has been placed.}
\else
\Print{No space available for this figure...holding over.}
\Print{}
\global\MoreFiguresfalse
\fi
}
\def\TableItems{%
\Print{Considering...}
\ShowItem{\StackPointer}
\GetItemBOX{\StackPointer}
\GetItemSPAN{\StackPointer}
\CheckFitInZone
\ifnum\ItemFits=1
\ifnum\ItemSPAN=\Single
\ChangeStatus{\StackPointer}{2}
\global\TabInZoneBtrue
\ifFirstSingleItem
\hbox{}\vskip-\BodgeHeight
\global\advance\ItemSIZE by \TextLeading
\fi
\unvbox\ItemBOX\vskip\ItemSepamount\relax
\global\FirstSingleItemfalse
\global\advance\TextSize by -\ItemSIZE
\global\advance\TextSize by -\TextLeading
\else
\ifFirstZoneA
\global\advance\ItemSIZE by \TextLeading
\global\FirstZoneAfalse\fi
\global\advance\TextSize by -\ItemSIZE
\global\advance\TextSize by -\TextLeading
\global\advance\ZoneBSize by -\ItemSIZE
\global\advance\ZoneBSize by -\TextLeading
\ifFigInZoneB\relax
\else
\ifdim\TextSize<3\TextLeading
\global\ZoneAFullPagetrue
\fi
\fi
\ChangeStatus{\StackPointer}{\InZoneB}
\ifnum\InZoneB=3 \global\TabInZoneCtrue\fi
\fi
\global\advance\NextTable \@ne
\Print{This table has been placed.}
\else
\Print{No space available for this table...holding over.}
\Print{}
\global\MoreTablesfalse
\fi
}
\def\CheckFitInZone{%
{\advance\TextSize by -\ItemSIZE
\advance\TextSize by -\TextLeading
\ifFirstSingleItem
\advance\TextSize by \TextLeading
\fi
\ifnum\InZoneB=1\relax
\else \advance\TextSize by -\ZoneBAdjust
\fi
\ifdim\TextSize<3\TextLeading \global\ItemFits=2
\else \global\ItemFits=1\fi}
}
\def\BeginOpening{%
\thispagestyle{titlepage}%
\global\setbox\ItemBOX=\vbox\bgroup%
\hsize=\PageWidth%
\hrule height \z@
\ifsinglecol\vskip 6pt\fi
}
\let\begintopmatter=\BeginOpening
\def\EndOpening{%
\On
\egroup
\ifsinglecol
\box\ItemBOX%
\vskip\TextLeading plus 2\TextLeading
\@noafterindent
\else
\ItemNUMBER=\z@%
\ItemTYPE=\Figure
\ItemSPAN=\Double
\ItemSTATUS=\InStack
\JoinStack
\fi
}
\newif\if@here \@herefalse
\def\no@float{\global\@heretrue}
\let\nofloat=\relax
\def\beginfigure{%
\@ifstar{\global\@dfloattrue \@bfigure}{\global\@dfloatfalse \@bfigure}%
}
\def\@bfigure#1{%
\@par
\if@dfloat
\ItemSPAN=\Double
\TEMPDIMEN=\PageWidth
\else
\ItemSPAN=\Single
\TEMPDIMEN=\ColumnWidth
\fi
\ifsinglecol
\TEMPDIMEN=\PageWidth
\else
\ItemSTATUS=\InStack
\ItemNUMBER=#1%
\ItemTYPE=\Figure
\fi
\bgroup
\hsize=\TEMPDIMEN
\global\setbox\ItemBOX=\vbox\bgroup
\eightpoint\nostb@ls{10pt}%
\let\caption=\fig@caption
\ifsinglecol \let\nofloat=\no@float\fi
}
\def\fig@caption#1{%
\vskip 5.5pt plus 6pt%
\bgroup
\eightpoint\nostb@ls{10pt}%
\setbox\TEMPBOX=\hbox{#1}%
\ifdim\wd\TEMPBOX>\TEMPDIMEN
\noindent \unhbox\TEMPBOX\@par
\else
\hbox to \hsize{\hfil\unhbox\TEMPBOX\hfil}%
\fi
\egroup
}
\def\endfigure{%
\@par\egroup
\egroup
\ifsinglecol
\if@here \midinsert\global\@herefalse\else \topinsert\fi
\unvbox\ItemBOX
\endinsert
\else
\JoinStack
\Print{Processing source for figure \the\ItemNUMBER}%
\fi
}
\newbox\tab@cap@box
\def\tab@caption#1{\global\setbox\tab@cap@box=\hbox{#1\@par}}
\newtoks\tab@txt@toks
\long\def\tab@txt#1{\global\tab@txt@toks={#1}\global\table@txttrue}
\newif\iftable@txt \table@txtfalse
\newif\if@dfloat \@dfloatfalse
\def\begintable{%
\@ifstar{\global\@dfloattrue \@btable}{\global\@dfloatfalse \@btable}%
}
\def\@btable#1{%
\@par
\if@dfloat
\ItemSPAN=\Double
\TEMPDIMEN=\PageWidth
\else
\ItemSPAN=\Single
\TEMPDIMEN=\ColumnWidth
\fi
\ifsinglecol
\TEMPDIMEN=\PageWidth
\else
\ItemSTATUS=\InStack
\ItemNUMBER=#1%
\ItemTYPE=\Table
\fi
\bgroup
\eightpoint\nostb@ls{10pt}%
\global\setbox\ItemBOX=\vbox\bgroup
\let\caption=\tab@caption
\let\tabletext=\tab@txt
\ifsinglecol \let\nofloat=\no@float\fi
}
\def\endtable{%
\@par\egroup
\egroup
\setbox\TEMPBOX=\hbox to \TEMPDIMEN{%
\hss
\vbox{%
\hsize=\wd\ItemBOX
\ifvoid\tab@cap@box
\else
\noindent\unhbox\tab@cap@box
\vskip 5.5pt plus 6pt%
\fi
\box\ItemBOX
\iftable@txt
\vskip 10pt%
\eightpoint\nostb@ls{10pt}%
\noindent\the\tab@txt@toks
\global\table@txtfalse
\fi
}%
\hss
}%
\ifsinglecol
\if@here \midinsert\global\@herefalse\else \topinsert\fi
\box\TEMPBOX
\endinsert
\else
\global\setbox\ItemBOX=\box\TEMPBOX
\JoinStack
\Print{Processing source for table \the\ItemNUMBER}%
\fi
}
\def\UnloadZoneA{%
\FirstZoneAtrue
\Iteration=\z@
\loop
\ifnum\Iteration<\LengthOfStack
\GetItemSTATUS{\Iteration}
\ifnum\ItemSTATUS=1
\GetItemBOX{\Iteration}
\ifFirstZoneA \vbox to \BodgeHeight{\vfil}%
\FirstZoneAfalse\fi
\unvbox\ItemBOX\vskip\ItemSepamount\relax
\LeaveStack{\Iteration}
\else
\advance\Iteration \@ne
\fi
\repeat
}
\def\UnloadZoneC{%
\Iteration=\z@
\loop
\ifnum\Iteration<\LengthOfStack
\GetItemSTATUS{\Iteration}
\ifnum\ItemSTATUS=3
\GetItemBOX{\Iteration}
\vskip\ItemSepamount\relax\unvbox\ItemBOX
\LeaveStack{\Iteration}
\else
\advance\Iteration \@ne
\fi
\repeat
}
\def\ShowItem#1
{\GetItemAll{#1}
\Print{\the#1:
{TYPE=\ifnum\ItemTYPE=\Figure Figure\else Table\fi}
{NUMBER=\the\ItemNUMBER}
{SPAN=\ifnum\ItemSPAN=\Single Single\else Double\fi}
{SIZE=\the\ItemSIZE}}}
\def\ShowStack{%
\Print{}
\Print{LengthOfStack = \the\LengthOfStack}
\ifnum\LengthOfStack=\z@ \Print{Stack is empty}\fi
\Iteration=\z@
\loop
\ifnum\Iteration<\LengthOfStack
\ShowItem{\Iteration}
\advance\Iteration \@ne
\repeat
}
\def\B#1#2{%
\hbox{\vrule\kern-0.4pt\vbox to #2{%
\hrule width #1\vfill\hrule}\kern-0.4pt\vrule}
}
\newif\ifsinglecol \singlecolfalse
\def\onecolumn{%
\global\output={\singlecoloutput}%
\global\hsize=\PageWidth
\global\vsize=\PageHeight
\global\ColumnWidth=\hsize
\global\TextLeading=12pt
\global\Leading=12
\global\singlecoltrue
\global\let\onecolumn=\relax
\global\let\footnote=\sing@footnote
\global\let\vfootnote=\sing@vfootnote
\ninepoint
\message{(Single column)}%
}
\def\singlecoloutput{%
\shipout\vbox{\PageHead\pagebody\PageFoot}%
\advancepageno
\ifplate@page
\shipout\vbox{%
\sp@pagetrue
\def\sp@type{plate}%
\global\plate@pagefalse
\PageHead\vbox to \PageHeight{\unvbox\plt@box\vfil}\PageFoot%
}%
\message{[plate]}%
\advancepageno
\fi
\ifnum\outputpenalty>-\@MM \else\dosupereject\fi%
}
\def\ItemSep{\vskip\ItemSepamount\relax}
\def\ItemSepbreak{\@par\ifdim\lastskip<\ItemSepamount
\removelastskip\penalty-200\vskip\ItemSepamount\relax\fi%
}
\let\@@endinsert=\endinsert
\def\endinsert{\egroup
\if@mid \dimen@\ht\z@ \advance\dimen@\dp\z@ \advance\dimen@12\p@
\advance\dimen@\pagetotal \advance\dimen@-\pageshrink
\ifdim\dimen@>\pagegoal\@midfalse\p@gefalse\fi\fi
\if@mid \vskip\ItemSepamount\relax\box\z@\ItemSepbreak
\else\insert\topins{\penalty100
\splittopskip\z@skip
\splitmaxdepth\maxdimen \floatingpenalty\z@
\ifp@ge \dimen@\dp\z@
\vbox to\vsize{\unvbox\z@\kern-\dimen@}%
\else \box\z@\nobreak\vskip\ItemSepamount\relax\fi}\fi\endgroup%
}
\def\gobbleone#1{}
\def\gobbletwo#1#2{}
\let\footnote=\gobbletwo
\let\vfootnote=\gobbleone
\def\sing@footnote#1{\let\@sf\empty
\ifhmode\edef\@sf{\spacefactor\the\spacefactor}\/\fi
\hbox{$^{\hbox{\eightpoint #1}}$}\@sf\sing@vfootnote{#1}%
}
\def\sing@vfootnote#1{\insert\footins\bgroup\eightpoint\b@ls{9pt}%
\interlinepenalty\interfootnotelinepenalty
\splittopskip\ht\strutbox
\splitmaxdepth\dp\strutbox \floatingpenalty\@MM
\leftskip\z@skip \rightskip\z@skip \spaceskip\z@skip \xspaceskip\z@skip
\noindent $^{\scriptstyle\hbox{#1}}$\hskip 4pt%
\footstrut\futurelet\next\fo@t%
}
\def\kern-3\p@ \hrule height \z@ \kern 3\p@{\kern-3\p@ \hrule height \z@ \kern 3\p@}
\skip\footins=19.5pt plus 12pt minus 1pt
\count\footins=1000
\dimen\footins=\maxdimen
\def\landscape{%
\global\TEMPDIMEN=\PageWidth
\global\PageWidth=\PageHeight
\global\PageHeight=\TEMPDIMEN
\global\let\landscape=\relax
\onecolumn
\message{(landscape)}%
\raggedbottom
}
\output{%
\ifLeftCOL
\global\setbox\LeftBOX=\vbox to \ZoneBSize{\box255\unvbox\ZoneBBOX}%
\global\LeftCOLfalse
\MakeRightCol
\else
\setbox\RightBOX=\vbox to \ZoneBSize{\box255\unvbox\ZoneBBOX}%
\setbox\MidBOX=\hbox{\box\LeftBOX\hskip\ColumnGap\box\RightBOX}%
\setbox\PageBOX=\vbox to \PageHeight{%
\UnloadZoneA\box\MidBOX\UnloadZoneC}%
\shipout\vbox{\PageHead\box\PageBOX\PageFoot}%
\advancepageno
\ifplate@page
\shipout\vbox{%
\sp@pagetrue
\def\sp@type{plate}%
\global\plate@pagefalse
\PageHead\vbox to \PageHeight{\unvbox\plt@box\vfil}\PageFoot%
}%
\message{[plate]}%
\advancepageno
\fi
\global\LeftCOLtrue
\CleanStack
\MakePage
\fi
}
\Warn{\start@mess}
\def\mnmacrosloaded{}
\catcode `\@=12
\section{1 Introduction}
\tx The discovery with the Hubble Space Telescope (HST) of a number of
early-type galaxies with very small stellar discs, with scale lengths
of the order of 20 pc (van den Bosch et al.~ 1994; Forbes 1994; Lauer
et al.~ 1995), has opened new windows on galaxy dynamics and
formation. From a dynamical point of view, nuclear stellar discs are
interesting because their kinematics allow an accurate determination
of the central mass density of their host galaxies (van den Bosch \&
de Zeeuw 1996). The existence of a morphologically and kinematically
distinct stellar component in the nucleus of these galaxies raises the
question whether they formed coevally with the host galaxy, or arose
from evolution of the host galaxy at a later stage. One possible form
of evolution would be gas infall to the centre, induced by either a bar
or a merger, which after subsequent star formation could result in a
nuclear disc.
\begintable{1}
\caption{{\bf Table~1.} Parameters of the observed galaxies}
\halign{#\hfil&\quad #\hfil\quad& \hfil#\hfil&
\hfil# \quad& \hfil# \quad& \hfil#\hfil\quad& #\hfil\quad \cr
NGC & RC2 & $ M_{B} $ & $D_{25}$ & $V_{\rm hel}$ &
$S_{100 \mu{\rm m}}$ & $S_{21 {\rm cm}}$ \cr
& & & (arcsec) & (km/s) & (mJy) & (mJy) \cr
4342 & S0$^{-}$ & -17.47 & 84.8 & 714 & $0 \pm 160$ & $<3$ \cr
4570 & S0 & -19.04 & 244.4 & 1730 & $0 \pm 100$ & $<10$ \cr
}
\tabletext{Column~(1) gives the NGC number of the galaxy. Column~(2) gives
the galaxy type according to the Second Reference Catalogue (RC2; de
Vaucouleurs et al.~ 1976). The absolute blue magnitude (for a Virgo
distance of 15 Mpc) is listed in column~(3), whereas column~(4) gives
the major axis isophotal diameter at the surface brightness level
$\mu_{B} = 25.0$ mag arcsec$^{-2}$. Column~(5) lists the heliocentric
velocity in ${\rm\,km\,s^{-1}}$. Columns~(6) and~(7) give upper limits on the IRAS
flux density at $100 \mu {\rm m}$ (Knapp et al.~ 1989) and on the flux
density at 21 cm (Wrobel 1991).}
\endtable
Stellar discs with sizes of 0.2--1~kpc were found in several
elliptical galaxies from ground-based observations (e.g., Nieto
et al.~ 1991). The nuclear discs discussed here are considerably smaller,
and have sizes $\mathrel{\spose{\lower 3pt\hbox{$\sim$}}\raise 2pt\hbox{$<$}} 100$~pc. They were discovered in
monochromatic broad-band images that were taken with the HST Planetary
Camera (PC) before any corrections had been made for the spherical
aberration of the HST primary mirror.
To investigate the kinematics of the nuclear discs, and to learn about
their formation, we have taken spectroscopic and improved photometric
data for two early-type galaxies in the Virgo cluster: NGC~4342 and
NGC~4570. These galaxies are intermediate between ellipticals and
lenticulars, and in both cases previous HST images revealed the
presence of bright nuclear discs (van den Bosch et al.~ 1994). Global
parameters of the two galaxies are listed in Table~1.
We obtained multi-colour ($U$,$V$,$I$) images with the second Wide
Field and Planetary Camera (WFPC2) aboard the HST, in order to
investigate the detailed morphology of the different components in
NGC~4342 and NGC~4570, and to study their differences in colour. We
also obtained long-slit spectra with the William Herschel Telescope
(WHT), and higher spatial-resolution single-aperture spectra with the
HST Faint Object Spectrograph (FOS), to determine the stellar
kinematics of the different components and investigate the stellar
populations. The HST data were obtained after the telescope was
serviced to correct for the spherical aberration of the primary
mirror.
\begintable{2}
\caption{{\bf Table~2.} Log of HST/WFPC2 observations}
\halign{#\hfil&\quad \hfil#\hfil\quad& \hfil#\hfil\quad&
\hfil#\hfil\quad& \hfil#\hfil\quad& \hfil#\hfil\quad \cr
NGC & Filter & colour & date & $t_{\rm exp}$ & \# exp \cr
& & & & (sec) & \cr
4342 & F336W & U & 21/01/96 & 3600 & 4 \cr
& F555W & V & 21/01/96 & 200 & 2 \cr
& F814W & I & 21/01/96 & 200 & 2 \cr
4570 & F336W & U & 19/04/96 & 5100 & 5 \cr
& F555W & V & 19/04/96 & 400 & 2 \cr
& F814W & I & 19/04/96 & 460 & 2 \cr
}
\tabletext{Column~(4) lists the total exposure time per filter.
Column~(5) gives the number of exposures per filter.}
\endtable
The nuclear discs have a small angular size, even by HST standards,
and are embedded in complex larger structures. Reliable astrophysical
interpretation in terms of dynamics and stellar populations therefore
requires careful reduction and modeling. In this paper we describe the
observations, the reduction, and the parameterization of the
results. The quantity and diversity of the data has led us to
parameterize it into a form suitable for presentation and
modeling. The imaging data is parameterized in terms of both
elliptical isophotal parameters and multi-Gaussian decompositions; the
kinematic data is parameterized through Gauss-Hermite expansions of the
line-of-sight velocity distributions.
A detailed interpretation of the data will be presented in a series of
companion papers. Scorza \& van den Bosch (1997) will discuss the
results of the decomposition of both galaxies into bulge and disc
components. Cretton \& van den Bosch (1997) will present detailed
three-integral modeling of the dynamics of NGC~4342. Van den Bosch \&
Emsellem (1997) will present evidence that the galaxy NGC~4570 has
been shaped under the influence of a rapidly tumbling bar potential.
In Section~2 we present the reduction and parameterization of the
multi-colour HST photometry. Sections~3 and~4 contain the reduction of
the WHT and HST/FOS spectra, respectively. The stellar kinematical
analysis is presented in Section~5. Section~6 discusses the stellar
populations of both galaxies, based on the broad-band colour and line
strength data. We summarize and discuss our conclusions in
Section~7. Throughout this paper we adopt a distance of 15 Mpc for
both galaxies, as appropriate for the Virgo cluster (Jacoby, Ciardullo
\& Ford 1990).
\section{2 HST multi-colour photometry}
In this section we describe the observations (Section~2.1), the
reduction (Section~2.2) and the parameterization of the broad-band
WFPC2 images. We derive multi-Gaussian decompositions (Section~2.3)
and standard isophotal parameters (Sections~2.4 and~2.5). We also
subtract a pure elliptical model from the observations to emphasize
the disc structures (Figure~6).
\subsection{2.1 Observations}
We obtained $U$, $V$ and $I$ band images of NGC~4342 and NGC~4570
using the HST/WFPC2 as part of our GO-project
\#6107. A detailed description of the WFPC2 can be found in the HST
WFPC2 Instrument Handbook (Burrows et al.~ 1995). The nuclei of the
galaxies were centred in the Planetary Camera chip (PC1), which
consists of $800 \times 800$ pixels of $0.0455'' \times 0.0455''$
each. Exposures were taken with the broad band filters F336W, F555W
and F814W; these correspond closely to the Johnson~$U$ and~$V$ bands,
and the Cousins~$I$ band, respectively. In each band several separate
exposures were taken. Table~2 lists the log of the observations. All
exposures were taken with the telescope guiding in fine lock, yielding
a nominal pointing stability of $\mathord{\sim} 3$~mas. Since there was no
danger of saturation, the analogue-to-digital gain was set to its low
setting of 7.12 electrons/DN (where DN is the number of counts). The
CCD read-out noise was 5.24 electrons; the dark rate was only 0.003
electrons pixel$^{-1}$sec$^{-1}$.
\subsection{2.2 Reduction}
The images were calibrated with the standard `pipeline' that is
maintained by the Space Telescope Science Institute (STScI). The
reduction steps, including, e.g., bias subtraction, dark-current
subtraction and flat-fielding, are described in detail by Holtzman
et al.~ (1995a).
Subsequent reduction was done using standard {\tt IRAF} tasks. With
the $V$- and $I$-band filters, two separate frames were obtained for
each galaxy within one HST orbit. These frames were offset from each
other by $11 \times 11$ pixels. This allows removal of chip defects,
such as hot pixels and bad columns, as well as cosmic rays. After
shifting over an integer $11 \times 11$ pixels, we checked the
alignment of the two frames by measuring the positions of a number of
globular clusters present on the PC1. The alignment was found to be
better than 0.05 pixels. For the $U$-band images, 4--5 separate
exposures were available with different exposure times. Frames
obtained in different orbits were offset from each other by an
integer $11 \times 11$ pixels. Once again we used globular clusters to
determine the offsets, and found them to be accurate at the 0.05-pixel
level. Therefore, we could align the frames by shifting over integer
pixels, without the need for interpolation. Registered frames for the
same galaxy and filter were combined with cosmic-ray rejection. A
constant background was subtracted from all combined frames, as
measured at the boundaries of the WF2 CCD, where the galactic
contribution is negligible.
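The cosmic-ray rejection in the combination step above can be illustrated with a per-pixel median stack. This is only a sketch in NumPy with invented pixel values, not the actual IRAF combination task used:

```python
import numpy as np

def combine_frames(frames):
    """Combine registered exposures with a per-pixel median.

    A cosmic-ray hit contaminates a given pixel in only one frame,
    so the median over three or more registered frames rejects it.
    A crude stand-in for the rejection algorithm actually used.
    """
    return np.median(np.stack(frames, axis=0), axis=0)

# Three toy 2x2 "exposures"; the second has a cosmic-ray hit at (0, 0).
f1 = np.array([[10.0, 12.0], [11.0, 13.0]])
f2 = np.array([[900.0, 12.0], [11.0, 13.0]])
f3 = np.array([[10.0, 12.0], [11.0, 13.0]])
clean = combine_frames([f1, f2, f3])
```

With only two frames per filter, as for the $V$- and $I$-band exposures here, a median reduces to a mean, so pairwise rejection schemes compare the two frames against a noise model instead.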
\beginfigure{1}
\centerline{\psfig{figure=enc.ps,width=\hssize}}\smallskip
\caption{{\bf Figure~1.} The dots show measurements of the
encircled energy of the PSF, as a function of radius, for the HST
Planetary Camera (PC) with the F555W filter. The solid line is the
encircled energy of the model PSF used in the MGE method. This is
a sum of 5 circular Gaussians.}
\endfigure
\beginfigure*{2}
\centerline{\psfig{figure=tot.ps,width=\hdsize,clip=}}
\smallskip
\caption{{\bf Figure~2.} Contour maps of the WFPC2 $V$-Band images of NGC~4342
and NGC~4570 (without PSF deconvolution), at two different
scales: $20''\times 20''$ (plots on the left) and $4'' \times 4''$
(plots on the right). Contours of the best-fitting
MGE models are superimposed.}
\endfigure
In order to convert the raw counts in the F336W, F555W and F814W
frames to Johnson $U$ and $V$, and Cousins $I$ band magnitudes,
respectively, we performed a flux calibration following the guidelines
given by Holtzman et al.~ (1995b). The equations that convert counts to
$U$, $V$ and $I$ surface brightness magnitudes include $U-V$ and $V-I$
colour terms. We approximated those by the average values found for
ellipticals. After the photometric calibration, we calculated the
$U-V$ and $V-I$ colours from our images, and iterated the calibration
until the colours had converged.
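The iteration just described (calibrate, recompute colours, recalibrate) is a fixed-point problem, since the colour terms require the calibrated colours that the calibration itself produces. A minimal sketch; all zero-points and colour-term coefficients below are invented placeholders, not the Holtzman et al. (1995b) values:

```python
# Invented placeholder constants; the real zero-points and colour
# terms are those of Holtzman et al. (1995b).
ZP_U, ZP_V, ZP_I = 19.0, 21.0, 20.8   # assumed zero-points
C_U, C_V = 0.05, -0.05                # assumed colour-term coefficients

def calibrate(mu, mv, mi, tol=1e-8, max_iter=50):
    """Iterate a colour-dependent calibration to a fixed point.

    mu, mv, mi are instrumental magnitudes.  U and V depend on the
    calibrated colours U-V and V-I, so we start from the zeroth-order
    solution and iterate until the colours converge.
    """
    u, v, i = mu + ZP_U, mv + ZP_V, mi + ZP_I
    for _ in range(max_iter):
        u_new = mu + ZP_U + C_U * (u - v)
        v_new = mv + ZP_V + C_V * (v - i)
        if max(abs(u_new - u), abs(v_new - v)) < tol:
            return u_new, v_new, i
        u, v = u_new, v_new
    return u, v, i

u, v, i = calibrate(-2.0, -3.0, -3.2)
```

Because the colour-term coefficients are small, the map is strongly contracting and the iteration converges in a handful of steps.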
For the $U$-band, two further corrections are required. First,
photometry in the UV is sensitive to the presence of contaminants on
the CCD. The light lost to these contaminants increases linearly with
the number of days since the last decontamination of
the CCD. We used the formula of Holtzman et al.~ (1995b), and applied
corrections of 0.0070 and 0.0119 magnitudes to the zero-points of the
F336W images of NGC~4342 and NGC~4570, respectively. Secondly, the UV
filters have a considerable red leak. Table~6.5 in the WFPC2
Instrument Handbook gives an estimate for the percentage of red light
leaking through the filter for a number of stellar spectra. Since
early-type galaxies consist mainly of late-type stars, we estimate
that 2--15\% of the light falling through the F336W filter is coming
from wavelengths around 7500{\AA}. We assumed that 8\% of the flux
through the F336W filter is due to the red leak, and increased the
U-band photometric zero-point by 0.0905 magnitudes. We estimate
that our final photometric accuracies are $\mathrel{\spose{\lower 3pt\hbox{$\sim$}}\raise 2pt\hbox{$<$}} 0.02$ magnitudes for
the $V$- and $I$-band, and $\mathrel{\spose{\lower 3pt\hbox{$\sim$}}\raise 2pt\hbox{$<$}} 0.08$ magnitudes for the $U$-band
(mainly due to uncertainties in the amount of red leak).
\subsection{2.3 Multi-Gaussian fitting}
The HST point-spread-function (PSF) has improved significantly with
the 1993 refurbishment mission. Figure~1 shows measurements of the
encircled energy curve for the F555W filter and the PC CCD (Holtzman
et al.~ 1994). The FWHM of the PSF is only $\mathord{\sim} 0.1''$. However, the
PSF wings are still very broad; several percent of the light is
scattered more than 1 arcsec away. Since the luminosity profiles of
NGC 4342 and NGC 4570 are strongly peaked towards the centre, PSF
convolution still has a considerable degrading effect. Deconvolution
therefore remains essential to obtain the maximum amount of
information from our images. We have explored two methods of PSF
deconvolution: the Multi Gaussian Expansion (MGE) method (this
section), and direct Richardson-Lucy deconvolution (Section~2.4).
The MGE method was developed by Monnet, Bacon \& Emsellem (1992). It
builds a model of the galaxy while simultaneously correcting for the
effects of PSF convolution. The method assumes that both the PSF
and the deconvolved (i.e., intrinsic) surface brightness distribution
of the galaxy can be approximated by a sum of Gaussians. Each Gaussian
is parameterized by 6 parameters: the centre $(x_i,y_i)$, the position
angle, the flattening $q_i$, the standard deviation $\sigma_i$, and
the central intensity $I_i$. We approximated the HST $V$-band PSF as
the sum of 5 circular (i.e., $q=1$) Gaussians, chosen so as to fit the
observed encircled energy curve (Figure~1). Using this PSF model we
derived the parameters of the $N$ Gaussians that describe the {\it
deconvolved} surface brightness, by fitting to the HST $V$-band galaxy
images. Since both the PSF and the deconvolved surface brightness are
assumed to be sums of Gaussians, the convolution is analytical. In the
fitting we forced each of the $N$ Gaussians to have the same position
angle and centre (i.e., the MGE model is assumed to be
axisymmetric). Therefore, the model is described by $3N + 3$ free
parameters, which are fit simultaneously to the images using a global
bidimensional fitting process. More and more components are added
until convergence is achieved (for details on the MGE method, see
Emsellem, Monnet \& Bacon, 1994). We found our fits to converge for
$N=11$ Gaussian components, for both NGC~4342 and NGC~4570.
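The analytic-convolution property the MGE method exploits can be made explicit: a Gaussian convolved with a Gaussian is again a Gaussian whose variances add, so for circular components the convolution of two Gaussian mixtures is itself a Gaussian mixture. A sketch with invented fluxes and widths (the real models use flattened Gaussians, for which the variances add per axis):

```python
import math

def convolve_mge(galaxy, psf):
    """Analytically convolve two circular Gaussian mixtures.

    galaxy: list of (L, sigma), L = total flux of the component.
    psf:    list of (w, sigma), with the weights w summing to 1.
    Each galaxy/PSF pair convolves to a Gaussian whose variances add,
    so the convolved model is again a sum of Gaussians.
    """
    return [(L * w, math.hypot(s_gal, s_psf))
            for (L, s_gal) in galaxy
            for (w, s_psf) in psf]

def central_sb(mix):
    """Central surface brightness of a circular Gaussian mixture."""
    return sum(L / (2.0 * math.pi * s * s) for (L, s) in mix)

gal = [(100.0, 0.05), (400.0, 0.50)]   # toy two-component "galaxy"
psf = [(0.8, 0.08), (0.2, 0.30)]       # toy two-component "PSF"
obs = convolve_mge(gal, psf)           # what the telescope would see
```

Convolution conserves total flux but lowers the central surface brightness, which is why deconvolution matters most for strongly peaked profiles like those of NGC~4342 and NGC~4570.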
The results are shown in Figure~2, which displays contour maps
of the HST $V$-band images, with the best-fitting
MGE models superimposed. In general the fits are excellent. For NGC~4342 a small
discrepancy is seen at the largest radii: the
isophotes of this galaxy twist slightly there (see
Section~2.5), which we ignore by forcing the position angles of the
different Gaussians to be the same. In both galaxies there is a clear
multi-component structure: the isophotes are highly flattened and
discy at the outside (due to the outer disc), less flattened at
intermediate radii $(r \approx 1'')$, where the bulge dominates the
light, and again very elongated and discy close to the centre, due to
the nuclear disc. The parameters of the MGE models are listed
in Table~3.
\begintable*{3}
\caption{{\bf Table~3.} Parameters of MGE models for the
deconvolved $V$-band surface brightness}
\halign{#\hfil&\quad \hfil# \quad& \hfil# \quad& \hfil# \quad&
\hfil#\hfill \quad& \hfil# \quad& \hfil# \quad& \hfil# \quad \cr
& NGC 4342 &&&& NGC 4570 &&\cr
$i$ & \hfill$I_i$\hfill & \hfill$\sigma_i$\hfill & \hfill$q_i$\hfill & &
\hfill$I_i$\hfill & \hfill$\sigma_i$\hfill & \hfill$q_i$\hfill \cr
& (${\rm\,L_\odot} {\rm pc}^{-2}$) & (arcsec) & & & (${\rm\,L_\odot} {\rm pc}^{-2}$)
& (arcsec) & \cr
1 & 3136240.0 & 0.02 & 0.119 & & 1755160.0 & 0.02 & 0.158 \cr
2 & 95319.8 & 0.08 & 0.841 & & 61238.0 & 0.09 & 0.800 \cr
3 & 42954.3 & 0.26 & 0.632 & & 21526.4 & 0.23 & 0.748 \cr
4 & 48520.9 & 0.36 & 0.136 & & 21589.3 & 0.26 & 0.140 \cr
5 & 17155.4 & 0.42 & 0.848 & & 11285.3 & 0.51 & 0.780 \cr
6 & 4930.9 & 0.72 & 0.521 & & 5728.7 & 0.60 & 0.120 \cr
7 & 8657.3 & 0.79 & 0.840 & & 7911.9 & 1.11 & 0.809 \cr
8 & 3207.9 & 1.80 & 0.759 & & 3800.5 & 2.73 & 0.635 \cr
9 & 2154.3 & 3.89 & 0.275 & & 1624.8 & 4.20 & 0.700 \cr
10& 1085.9 & 9.11 & 0.270 & & 1095.8 & 12.88 & 0.350 \cr
11& 219.1 & 9.61 & 0.836 & & 334.4 & 17.12 & 0.583 \cr
}
\tabletext{Column (1) gives the index number of each Gaussian.
Columns~(2) and~(5) give its central surface brightness,
columns~(3) and~(6) give its standard deviation, and columns~(4) and~(7)
give its flattening.}
\endtable
\beginfigure*{3}
\centerline{\psfig{figure=lumprofs.ps,width=\hdsize}}\smallskip
\caption{{\bf Figure~3.} The projected $V$-band surface brightness profiles
(in mag arcsec$^{-2}$) of the Lucy-deconvolved (open circles) and
MGE-deconvolved (crosses) images of NGC~4342 and NGC~4570.
Agreement between the two methods of deconvolution is excellent.
Profiles along both the major and the minor axes are shown.
The excess light along the major axes clearly reveals the
`double-disc' structure in both galaxies. The minor axis profiles show
that the bulges have a luminosity profile similar to that of
low-luminosity elliptical galaxies, with very steep cusps.}
\endfigure
\subsection{2.4 Luminosity profiles}
Richardson-Lucy iteration (Lucy 1974) provides an alternative PSF
deconvolution method. For this, accurate knowledge of the PSF is
required. We calculated model PSFs appropriate for each given filter
and position of the nucleus on the PC1 CCD, using the TinyTim software
package. Since our observations were made while guiding in fine
lock, no corrections for telescope jitter were necessary.
We used the Richardson-Lucy algorithm to deconvolve the $V$-band
images of NGC~4342 and NGC~4570; 20 iterations were found to be
sufficient for convergence. We subsequently derived the luminosity
profiles along the major and minor axes, using the isophote fitting
procedure described below (Section~2.5). The resulting $V$-band
surface brightness profiles are shown in Figure~3 (open circles).
The difference
between the major and minor axis profiles clearly reveals the excess
light due to the nuclear and outer disc components. The minor axis
profiles, which have a negligible contribution of disc light, show a
double power-law behaviour for the bulge luminosity distribution, with
a steep cusp. Such profiles are characteristic of low-luminosity
ellipticals (Gebhardt et al.~ 1996). The bulge luminosity profiles
will be further discussed by Scorza \& van den Bosch (1997).
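The Richardson-Lucy update itself is compact. Below is a 1-D NumPy sketch on an invented blurred point source; the actual deconvolution was of course performed on the 2-D images with TinyTim model PSFs:

```python
import numpy as np

def richardson_lucy(data, psf, n_iter=20):
    """Minimal 1-D Richardson-Lucy deconvolution (Lucy 1974).

    data: observed (blurred) non-negative signal.
    psf:  point-spread function; normalised internally.
    The multiplicative updates keep the estimate non-negative and,
    away from the array edges, conserve total flux.
    """
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(data, data.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = data / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy example: a point source blurred by a small symmetric PSF.
true = np.zeros(64)
true[32] = 100.0
psf = np.array([0.05, 0.25, 0.40, 0.25, 0.05])
blurred = np.convolve(true, psf, mode="same")
restored = richardson_lucy(blurred, psf, n_iter=50)
```

In practice the number of iterations trades resolution against noise amplification; the 20 iterations quoted above were judged sufficient for convergence on these images.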
The crosses in Figure~3 correspond to the same luminosity profiles but
now determined from the MGE model of the intrinsic (i.e., deconvolved)
surface brightness, again using the isophote-fitting procedure.
The agreement with the luminosity profiles derived from the Lucy-deconvolved
images is excellent. Small discrepancies can be seen at the largest radii. These
are related to the discrepancies seen in Figure~2, and originate from
neglecting the small amount of isophote twisting when constructing the MGE
models.
\beginfigure*{4}
\centerline{\psfig{figure=isoph4342.ps,width=0.8\hdsize}}\smallskip
\caption{{\bf Figure~4.} The isophotal parameters as a function
of log(radius), for the $V$- and $I$-band images of NGC 4342. Both the
ellipticity and the $\cos 4\theta$-term clearly reveal the double-disc
structure.}
\endfigure
\beginfigure*{5}
\centerline{\psfig{figure=isoph4570.ps,width=0.8\hdsize}}\smallskip
\caption{{\bf Figure~5.} Same as Figure~4, but now for NGC 4570.}
\endfigure
\beginfigure*{6}
\line{\psfig{figure=residV4342.ps,width=\hssize}\hfill
\psfig{figure=residV4570.ps,width=\hssize}}\smallskip
\caption{{\bf Figure~6.} Contour maps of the $V$-band residual images
of NGC~4342 (left) and NGC 4570 (right), obtained after subtraction of
perfectly elliptical galaxy models. The images were rotated to align
the major axis of each galaxy with the $x$-axis. Highly flattened
nuclear discs are clearly visible inside the central arcsec. Only
positive contours are plotted for clarity. The apparent central point
sources are artefacts, due to the limited radial extent of the galaxy
models that were subtracted. The features `A' and `B' in NGC~4570 are
both at $1.7''$ from the centre, and will be discussed elsewhere. The
feature marked `G' is a globular cluster.}
\endfigure
\subsection{2.5 Isophotal analysis}
We derived the ellipticity and position angle of the isophotes, as a
function of radius, for each colour and each galaxy, from the
non-deconvolved images. In addition, the $\sin$ and $\cos$ $3\theta$
and $4\theta$ terms were derived that describe the high-order
deviations of the isophotes from pure ellipses (e.g., Lauer 1985;
Jedrzejewski 1987; Bender, D\"obereiner \& M\"ollenhoff 1988). For
a pure ellipse, these coefficients are all equal to
zero. Positive $\cos 4\theta$ terms correspond to `discy' isophotes,
whereas `boxy' isophotes give rise to negative values of the $\cos
4\theta$ term.
The results for the $V$- and $I$-band images are shown in Figures~4
and~5. Again, the double-disc structure of both galaxies is clear:
the isophotes are highly flattened and discy in the outer parts ($r >
1''$), moderately flattened and nicely elliptical at intermediate
radii ($r \approx 1''$), and again strongly elongated and discy inside
$1''$. At radii inside $\mathord{\sim} 0.5''$, the measured parameters are
not a reliable representation of their intrinsic values, due to
the convolution with the HST PSF.
For both galaxies, there is almost
no difference between the $V$- and $I$-band parameters. The same
appears to be true for the $U$-band parameters (not plotted here),
although these are noisier due to lower $S/N$.
Except for the $\cos 4\theta$ term, all other high-order
terms that express deviations from elliptical isophotes are close
to zero. For both galaxies the position angle is close to
constant, although in NGC~4342 there is a mild, but significant twist
of a few degrees. Comparison with the isophotal parameters derived
from Lucy deconvolved F555W images obtained with the PC before the HST
refurbishment (van den Bosch et al.~ 1994) generally shows good
agreement, with one exception. The pre-refurbishment images
revealed strange `wiggles' in the higher order terms of the Fourier
expansion, interpreted by van den Bosch et al.~ (1994) as indicative of
a patchy dust distribution. However, from the WFPC2 images presented
here no such evidence for dust is found. It therefore seems likely
that the pre-refurbishment images suffered from insufficiently
corrected `measles' due to contaminants.
Michard (1994) showed that the inner isophotes of strongly flattened
galaxies containing a sharp central feature are distorted in the
process of convolution and subsequent deconvolution, in a way that can
mimic the presence of a nuclear disc. Michard therefore suggested that
the nuclear discs inferred from isophotal analysis of deconvolved
images could be merely an artefact of the deconvolution procedure. Van
den Bosch et al.~ (1994) found similar deconvolution induced distortions
from tests performed on model galaxies and concluded that the
isophotal parameters inside $0.5''$ (i.e., five times the FWHM of the
PSF) indeed could not be trusted. Their evidence for the presence of
nuclear discs was therefore based solely on the photometry outside
$0.5''$. The new data presented here clearly show the nuclear discs
in NGC~4342 and NGC~4570 even in data that are {\it not} deconvolved
(Figures~2, 4 and~5). This proves incontrovertibly that the nuclear
discs are real structures.
In order to reveal more clearly the nuclear discs, we constructed
residual $V$-band images by subtracting a model image that has the
same luminosity and ellipticity profile as the real image, but is
taken to have perfectly elliptical isophotes. These residual images
reveal the structures that are responsible for the higher-order
deviations from perfectly elliptical isophotes. Contour maps of these
images are shown in Figure~6. We only show the inner $3''
\times 3''$ regions where the nuclear discs clearly stand out. The
models that were subtracted from the images are based on the isophotal
parameters outside $0.1''$; inside that radius no meaningful isophotes
can be fitted. As a resulting artefact, the residual images show a
central point source in addition to the nuclear discs. The residual
image of NGC~4570 reveals, besides the nuclear disc, two unresolved
features, marked `A' and `B'. These features are both at $1.7''$
offset from the centre, and are perfectly aligned with the nuclear
disc. The nature of these features is discussed in van den Bosch \&
Emsellem (1997). The feature marked `G' is a globular cluster.
\begintable{4}
\caption{{\bf Table~4.} Log of long-slit WHT observations}
\halign{#\hfil&\quad \hfil#\hfil\quad& \hfil#\hfil\quad&
\hfil#\hfil\quad& \hfil#\hfil\quad& \hfil#\hfil\quad&
\hfil#\hfil\quad& \hfil#\hfil \cr
NGC & slit pos. & PA & slit & $S$ & exp & airmass & off \cr
& & & ($''$) & ($''$) & (min) & & ($''$) \cr
4342 & major axis & 347 & 1.0 & 0.80 & 90 & 1.10 & 0.15 \cr
& minor axis & 257 & 1.0 & 0.95 & 90 & 1.52 & 0.24 \cr
4570 & major axis & 159 & 1.0 & 1.10 & 80 & 1.12 & -0.22 \cr
& offset axis & 159 & 1.0 & 1.05 & 70 & 1.38 & 0.94 \cr
& minor axis & 249 & 1.0 & 1.70 & 60 & 1.19 & 0.17 \cr
}
\tabletext{Column~(3) gives the position angle of the slit in degrees.
The slit width is given in column~(4). Column~(5) gives the seeing
FWHM $S$, defined as in Section~3.3. The exposure time is
given in column~(6), and the airmass during each exposure in
column~(7). Column~(8) gives the offset of the slit from the centre,
corrected for differential atmospheric refraction.}
\endtable
\section{3 Ground-based spectra}
We obtained high-$S/N$, ground-based, long-slit spectra of NGC~4342
and NGC~4570, with a spatial resolution of $\mathord{\sim} 1''$, using the 4.2m
WHT on La Palma. In this section we describe the observations
(Section~3.1), the reduction (Section~3.2), the seeing PSF of the
observations (Section~3.3), and our observations of a library of
template stars for use in the kinematical analysis (Section~3.4).
\subsection{3.1 Observations}
The observations were done in March 1994 with the WHT/ISIS
spectrograph. With ISIS two spectra are obtained simultaneously, on
the red and blue arms of the spectrograph. On both arms we used the
high-resolution gratings with 1200 lines/mm. The red arm was centred
around the Ca II triplet (8498, 8542, 8662{\AA}), while the blue arm
covered the Mg $b$ triplet (5167, 5173, 5184{\AA}). The blue spectra
were irreparably disturbed by an internal reflection (`ghost') and
will not be further discussed.
On the red arm we used a Tek CCD of $1124\times 1124$ pixels. Each
pixel measures $0.36''$ by 0.41{\AA}. All spectra were obtained with
a slit width of $1''$, which is roughly equal to the average
seeing. The instrumental resolution, expressed as the Gaussian
dispersion of spectral lines in the arc lamp frames, was $9 {\rm\,km\,s^{-1}}$.
The galaxy exposures were split into consecutive exposures of
typically 20--30 minutes. Before each galaxy exposure we took exposures
of arc lamps to allow accurate wavelength calibration. Bias frames,
tungsten lamp flats and sky flats were taken during twilight.
We also took spectra of a spectro-photometric standard, to allow
correction for the wavelength sensitivity of the CCD (see
Section~3.2). A log of the galaxy observations is given in Table~4.
Guiding during each exposure was done with a TV camera with a Johnson
$V$-band filter. Differential atmospheric refraction can play an
important role, because the $V$-band central wavelength ($\lambda_{\rm
cen} = 5500${\AA}) is offset considerably from the central wavelength
of our red spectra ($\lambda_{\rm cen} = 8580${\AA}). This results in
an offset of the slit from the intended position on the galaxy that
was selected with the TV camera. These offsets can be calculated and
are listed in Table~4. Note that for the `offset exposure' of NGC~4570
we aimed for an intentional offset of $1.5''$, perpendicular to the
major axis. However, due to the atmospheric refraction the real offset
only amounted to $0.94''$. Differences in atmospheric refraction over
the observed spectral range (8390--8770{\AA}) are at most $0.03''$,
and can be neglected.
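The magnitude of these guiding offsets can be estimated from a standard dry-air dispersion formula. The sketch below uses the Edl\'en (1953) refractivity at sea-level conditions and a plane-parallel atmosphere; these are simplifying assumptions, so it reproduces only the order of magnitude of the offsets in Table~4, not the exact values (which depend on the pressure, temperature and humidity at the WHT).

```python
import math

def n_minus_one(lam_um):
    """Edlen (1953) refractivity of dry air at standard sea-level
    conditions; lam_um is the wavelength in microns."""
    s2 = (1.0 / lam_um) ** 2
    return 1e-8 * (6432.8 + 2949810.0 / (146.0 - s2) + 25540.0 / (41.0 - s2))

def differential_refraction(lam1_um, lam2_um, airmass):
    """Angular offset (arcsec) between images at two wavelengths for a
    plane-parallel atmosphere: 206265 * [n(lam1) - n(lam2)] * tan(z)."""
    z = math.acos(1.0 / airmass)       # airmass = sec(z)
    return 206265.0 * (n_minus_one(lam1_um) - n_minus_one(lam2_um)) * math.tan(z)

# Guiding in V (0.55 um) while observing at 0.858 um, at airmass 1.10:
offset = differential_refraction(0.55, 0.858, 1.10)
```

At airmass 1.10 this yields an offset of a few tenths of an arcsecond, consistent in order of magnitude with the values in Table~4.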
\subsection{3.2 Reduction}
All spectra were reduced using {\tt IRAF}. The bias level was
determined from the overscan columns and subtracted. For each night
two flat-field frames were created. The tungsten flats were used to
create one high-$S/N$ flat-field that shows the pixel-to-pixel
sensitivity variations of the CCD. The spectra of the twilight sky
were used to construct a high-$S/N$ flat-field that shows the large-scale
illumination pattern due to vignetting of the slit. Both flat-fields
were normalized and divided into all spectra.
Cosmic rays were removed from all frames by interpolating over
high-$\sigma$ deviations, as judged from Poisson statistics and the
known gain and read-out noise of the detector. The wavelength
calibration was done using the arc-lamp frames. Spectra were
rebinned using the resulting wavelength solution, both in the
spatial direction (to align the direction of dispersion with the rows
of the frames), and in logarithmic wavelength. The latter was
done to a scale of $11.076 {\rm\,km\,s^{-1}}/{\rm pixel}$, covering the
wavelength range from 8390{\AA} to 8770{\AA}. Subsequently, we
determined the sensitivity as a function of wavelength from the
spectra of the spectro-photometric standards, and corrected all
spectra for these sensitivity variations. All exposures of the
same galaxy and slit position were added. Sky spectra were
determined from the data beyond $90''$ from the centre of the slit,
and were subtracted. Template star frames (see Section~3.4), were
summed along columns to yield one high-$S/N$ spectrum for each star.
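The velocity step of such a logarithmic rebinning follows directly from the wavelength range and the number of pixels, $\Delta v = c\,\ln(\lambda_2/\lambda_1)/N$. The short check below assumes pixel counts of 1199 (WHT) and 2048 (FOS, Section~4.4); these counts are inferred from the quoted step sizes, not stated in the text.

```python
import math

C_KMS = 299792.458  # speed of light in km/s

def log_rebin_step(lam1, lam2, npix):
    """Velocity width (km/s) of one pixel after logarithmic rebinning
    of the range [lam1, lam2] (Angstrom) onto npix pixels."""
    return C_KMS * math.log(lam2 / lam1) / npix

# WHT red-arm spectra (8390-8770 A): 1199 pixels reproduces the quoted
# 11.076 km/s per pixel; FOS spectra (4570-6817 A, Section 4.4): 2048
# pixels reproduces the quoted 58.539 km/s per pixel.
wht_step = log_rebin_step(8390.0, 8770.0, 1199)
fos_step = log_rebin_step(4570.0, 6817.0, 2048)
```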
\begintable*{5}
\caption{{\bf Table~5.} HST/FOS observations: log and kinematical results}
\halign{#\hfil&\quad \hfil#\hfil\quad& \hfil# \quad& \hfil# \quad&
\hfil# \quad& \hfil# \quad& \hfil# \quad& \hfil# \quad&
\hfil# \quad& \hfil# \quad& \hfil# \quad& \hfil# \quad\cr
id. & Galaxy & $x_{\rm ap}$ & $y_{\rm ap}$ & $t_{\rm exp}$ & Intensity &
$\gamma$ & $\Delta\gamma$ & $V_{\rm rot}$ & $\Delta V$ & $\sigma$ &
$\Delta\sigma$ \cr
& & (arcsec) & (arcsec) & (sec) & (counts/sec) & & & (${\rm\,km\,s^{-1}}$) &
(${\rm\,km\,s^{-1}}$) & (${\rm\,km\,s^{-1}}$) & (${\rm\,km\,s^{-1}}$) \cr
A1 & NGC 4342 & $-0.014$ & $-0.010$ & 1000 &$1421.1$ & 1.132 & 0.079 &
--0.2 & 30.8 & 418.3 & 32.7 \cr
A2 & & $+0.236$ & $-0.010$ & 1190 & $746.3$ & 1.165 & 0.091 &
+188.2 & 25.5 & 321.1 & 33.3 \cr
A3 & & $-0.264$ & $-0.010$ & 1200 & $714.4$ & 1.281 & 0.089 &
--202.1 & 21.1 & 308.2 & 26.8 \cr
A4 & & $+0.486$ & $-0.010$ & 990 & $407.6$ & 1.025 & 0.174 &
+239.8 & 55.0 & 312.7 & 53.5 \cr
A5 & & $-0.514$ & $-0.010$ & 900 & $384.3$ & 1.285 & 0.152 &
--253.7 & 25.1 & 237.5$^{*}$ & 23.9 \cr
A6 & & $+0.098$ & $+0.214$ & 500 & $456.6$ & 1.447 & 0.185 &
+111.0 & 49.0 & 327.2 & 50.2 \cr
A7 & & $-0.126$ & $-0.234$ & 570 & $521.4$ & 1.152 & 0.177 &
--185.2 & 73.0 & 391.9 & 101.0 \cr
& & & & & & & &
& & & \cr
B1 & NGC 4570 & $-0.011$ & $-0.052$ & 810 & $866.5$ & 1.147 & 0.082 &
+35.8 & 25.0 & 249.4 & 31.9 \cr
B2 & & $+0.239$ & $-0.052$ & 450 & $434.9$ & 1.335 & 0.158 &
--4.1 & 31.7 & 217.6 & 38.0 \cr
B3 & & $-0.261$ & $-0.052$ & 450 & $446.6$ & 1.323 & 0.162 &
--119.7 & 41.0 & 205.8 & 38.9 \cr
B4 & & $+0.489$ & $-0.052$ & 450 & $273.7$ & 0.990 & 0.168 &
+104.2 & 27.7 & 121.7 & 46.9 \cr
B5 & & $-0.511$ & $-0.052$ & 450 & $267.3$ & 1.073 & 0.159 &
--108.9 & 39.9 & 170.8$^{*}$ & 20.6 \cr
B6 & & $-0.011$ & $+0.198$ & 450 & $360.8$ & 1.279 & 0.162 &
+79.3 & 34.2 & 183.3 & 41.5 \cr
B7 & & $-0.011$ & $-0.302$ & 450 & $306.3$ & 1.466 & 0.230 &
+102.4 & 78.5 & 332.9 & 78.2 \cr
}
\tabletext{Column~(1) gives the label for the spectrum used in the remainder
of the paper. Columns~(3) and~(4) give the aperture position in a
$(x,y)$ coordinate system centred on the galaxy, and with $x$ along
the major axis. Column (5) gives the exposure time. In Column (6) we
list the observed intensity, integrated over the wavelength range
covered by the grating. Columns (7)--(12) give the results of the
kinematical analysis and their errors; $\gamma$ is the line strength,
$V_{\rm rot}$ the rotation velocity, and $\sigma$ the velocity
dispersion. Asterisks indicate dispersions that were used as
reference dispersions (see Appendix~A).}
\endtable
\subsection{3.3 Seeing estimates}
$I$-band images of photometric standard fields were taken during
the observing run using the Cassegrain focus Auxiliary Port of the
WHT. The detector used was an EEV CCD with $0.10'' \times 0.10''$
pixels. The images were bias subtracted, flat-fielded, and cleaned of
cosmic rays.
The shape of the seeing PSF was determined using bright stars
on these images. For all images obtained throughout the run, the
shape could be well described by a sum of two Gaussians:
\eqnam\psfshape
$${\rm PSF}(r) = A_1 {\rm e}^{-r^2/2\sigma_1^2} +
A_2 {\rm e}^{-r^2/2\sigma_2^2},\eqno\hbox{(\the\eqnumber )}\global\advance\eqnumber by 1 $$
with fixed ratios of the dispersions, $\sigma_2/\sigma_1 =
1.65$, and amplitudes, $A_2/A_1 = 0.22$, of the two components. The
PSF is normalized for $A_1 = 0.09897/\sigma_1^2$, and is fully
specified by its FWHM $S = 2.543\sigma_1$.
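The quoted normalization constant and FWHM relation can be verified numerically; a short check (trapezoidal integration of the double-Gaussian PSF above, nothing fitted):

```python
import numpy as np

def seeing_psf(r, sigma1):
    """Double-Gaussian seeing PSF with the fixed shape parameters of the
    text: sigma2/sigma1 = 1.65, A2/A1 = 0.22, A1 = 0.09897/sigma1**2."""
    a1 = 0.09897 / sigma1**2
    return (a1 * np.exp(-r**2 / (2 * sigma1**2))
            + 0.22 * a1 * np.exp(-r**2 / (2 * (1.65 * sigma1) ** 2)))

sigma1 = 1.0
r = np.linspace(0.0, 12.0, 200001)
# Total flux: integral of 2*pi*r*PSF(r) dr, by the trapezoid rule.
y = 2 * np.pi * r * seeing_psf(r, sigma1)
flux = float(((y[1:] + y[:-1]) / 2 * np.diff(r)).sum())
# The profile falls to half its central value at r = S/2, S = 2.543*sigma1.
half_ratio = float(seeing_psf(2.543 * sigma1 / 2, sigma1) / seeing_psf(0.0, sigma1))
```

The integrated flux comes out at unity to better than a per cent, and the profile indeed reaches half its central value at $r = S/2$.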
The auto-guider camera provides an independent measure of the seeing
FWHM, $S_{\rm auto}$, during each exposure. However, this measure does
not necessarily take the full PSF shape (equation~\psfshape) into
account. In addition, the auto-guider was equipped with a $V$-band
filter, whereas our Ca II triplet spectra fall in the $I$-band. We
therefore calibrated the autoguider FWHM estimates, by comparing them
to the FWHM values, $S_{\rm image}$, inferred from double-Gaussian fits
to the images of bright stars on the $I$-band Auxiliary Port
exposures. This yielded $S_{\rm image}/S_{\rm auto} \approx 1.0$,
indicating that $S_{\rm auto}$ can be used directly as an estimate of
the true $I$-band seeing FWHM. The FWHM values $S$ in Table~4 list the
seeing FWHM thus obtained for each of our spectra.
As a consistency check, we convolved the Lucy-de\-con\-vol\-ved
$I$-band HST image of NGC~4342 with the PSF of equation (\psfshape),
for different test values of the FWHM $S$. We subsequently overlaid a
slit (with the same width and position angle as for the WHT spectra),
and binned into pixels of the appropriate size. This profile was then
compared to the observed intensity profile along the slit for each of
the spectra. In all cases we found the best-fitting values of $S$ to
be consistent with the FWHM values listed in Table~4.
\subsection{3.4 Template spectra}
To infer stellar kinematical quantities from the galaxy spectra, they
must be compared to a template spectrum. It is important to choose a
template that closely matches the `average' spectrum of the stars in
the galaxy (without kinematical Doppler broadening), to minimize
possible systematic errors (e.g., van der Marel et al.~ 1994). Most of
the visible light of elliptical galaxies comes from stars on the
giant, asymptotic, and horizontal branches, and their spectra are
therefore comparable to those of G and K giants. The spectrum of a
single K giant star generally makes a reasonable template, but not an
ideal one, because galaxies are made up of stars of different stellar
types. We therefore observed 13 template stars with spectral types
ranging from F7 to M0. All these spectra were taken with the same
instrumental setup as the galaxy spectra, and were reduced in the same
way. We used this template library to determine the mix of template
stars that best matches the galaxy spectra. We assume that there are
no strong changes in stellar population over the galaxy, and we
therefore determined only one optimal template spectrum per
galaxy. For this purpose we summed the spectra along the major axes of
NGC~4342 and NGC~4570 inside the inner $1''$, to yield one high-$S/N$
spectrum per galaxy. We then used a method similar to a `biased random
walk', in order to search for the optimal template that consists of a
weighted sum of the stellar spectra in the library (van der Marel
1994).
\beginfigure{7}
\centerline{\psfig{figure=aper_plot.ps,width=\hssize}}\smallskip
\caption{{\bf Figure~7.} Aperture positions for the HST/FOS spectra.
The labels are as in Table~5.}
\endfigure
\section{4 HST spectra}
In this section we discuss the high spatial resolution spectra
obtained with the HST/FOS. We describe the observations
(Section~4.1), the target acquisition (Section~4.2), the reduction
(Section~4.3), the wavelength calibration (Section~4.4), and our
choice of template spectrum for use in the kinematical analysis
(Section~4.5). A detailed description of the FOS can be found in
the HST/FOS Instrument Handbook (Keyes et al.~ 1995).
\beginfigure{8}
\centerline{\psfig{figure=throughput.ps,width=\hssize}}\smallskip
\caption{{\bf Figure~8.} The aperture transmission for a star in the
circular $0.26''$ aperture (nominal size), as a function of
its distance from the aperture centre. Data points are from Evans
(1995). The relative throughput is normalized to $1.0$ for a star
centred in the circular $0.86''$ diameter aperture (the FOS {\tt 1.0}
aperture). The solid line is our model fit, which assumes a
PSF that is a sum of three circularly symmetric Gaussians, and an
aperture diameter of $0.238''$.}
\endfigure
\subsection{4.1 Observations}
We obtained spectra of NGC~4342 and NGC~4570 with the HST/FOS, using
the circular $0.26''$ diameter aperture (the so-called FOS {\tt 0.3}
aperture) and the G570H grating. This grating covers the wavelength
range from 4569 to 6818{\AA}, and has a dispersion of 4.37{\AA} per
diode. The main absorption lines in this spectral range are the Mg $b$
triplet (5167, 5173, 5184{\AA}) and the Na D lines at 5892{\AA}. The
spectra were quarter-stepped, yielding 2064 $1/4$-diode pixels of
1.09{\AA}. For each galaxy, 7 spectra were taken at different aperture
positions, as illustrated in Figure~7 and summarized in Table~5.
\subsection{4.2 Target acquisition}
Some form of target acquisition is required to properly position the
galaxy in the $0.26''$ aperture. We used the `peak-up' acquisition
mode to centre the aperture on the nucleus of each galaxy; spectra at
offset positions were obtained by slewing the telescope from this
position. Pointing drifts during the observations are generally
not significant ($\mathrel{\spose{\lower 3pt\hbox{$\sim$}}\raise 2.0pt\hbox{$<$}} 0.03''$; Keyes et al.~ 1995). The
acquisition consists of different stages. In each stage the total flux
through an aperture is measured for a grid of aperture positions. The
telescope is then centred on the aperture position with the highest
flux. In each subsequent stage a smaller aperture is used with a
tighter grid (smaller inter-point spacings), thus increasing the
accuracy of the target positioning. We used different target
acquisition patterns for NGC~4342 and NGC~4570. For NGC~4570, the
final stage consisted of a $3 \times 3$ grid with $0.1''$ inter-point
spacings, using the circular $0.26''$ aperture. This yields an
expected pointing accuracy $\mathrel{\spose{\lower 3pt\hbox{$\sim$}}\raise 2.0pt\hbox{$<$}} 0.08''$ (Keyes et al.~ 1995). For
NGC~4342, a $5 \times 5$ grid with $0.052''$ inter-point spacings was
adopted, again using the $0.26''$ aperture, yielding an expected
pointing accuracy $\mathrel{\spose{\lower 3pt\hbox{$\sim$}}\raise 2.0pt\hbox{$<$}} 0.04''$.
Precise knowledge of the aperture positions is of great importance
when interpreting the data and comparing it to models. We
therefore modeled the observed fluxes in the final peak-up stage to
verify the success of the target acquisition. We denote the offset
of the true galaxy centre from the grid position that produced the
most counts in the final peak-up stage by
($\Delta_x$,$\Delta_y$). Here $x$ is along the major axis of the
galaxy. We adopt the MGE-model for the $V$-band WFPC2 data to describe
the intrinsic surface brightness distribution of each galaxy. For a
given offset $(\Delta_x,\Delta_y)$, one may calculate the predicted
flux at each grid point in the final peak-up stage, taking into
account the HST/FOS PSF and the aperture size. These predictions can
be compared to the observed fluxes, and the best-fitting offset
$(\Delta_x,\Delta_y)$ can be determined using $\chi^2$-minimization.
The result describes the accuracy of the target acquisition.
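A toy version of this $\chi^2$ fit is sketched below. It keeps the $5\times5$ grid with $0.052''$ spacings and the $0.26''$ aperture from the text, but replaces the MGE surface-brightness model by a circular Gaussian and ignores the PSF, so the galaxy model and its FWHM are purely illustrative.

```python
import numpy as np

def aperture_flux(cx, cy, offset, fwhm=0.5, diam=0.26, n=21):
    """Flux of a circular-Gaussian 'galaxy' (centre at `offset`, arcsec)
    through a circular aperture centred at (cx, cy), by pixel summation."""
    sig = fwhm / 2.3548
    xs = np.linspace(cx - diam / 2, cx + diam / 2, n)
    x, y = np.meshgrid(xs, np.linspace(cy - diam / 2, cy + diam / 2, n))
    inside = (x - cx) ** 2 + (y - cy) ** 2 <= (diam / 2) ** 2
    img = np.exp(-((x - offset[0]) ** 2 + (y - offset[1]) ** 2) / (2 * sig**2))
    return float(img[inside].sum())

# "Observed" counts on the 5x5 peak-up grid for a galaxy whose true centre
# is offset by (-0.014'', -0.010'') from the central grid position.
grid = [(i * 0.052, j * 0.052) for j in range(-2, 3) for i in range(-2, 3)]
observed = np.array([aperture_flux(gx, gy, (-0.014, -0.010)) for gx, gy in grid])

# Brute-force chi^2 minimization over a grid of trial offsets.
steps = np.linspace(-0.05, 0.05, 21)
trials = [(dx, dy) for dx in steps for dy in steps]
chi2 = [sum((observed[k] - aperture_flux(gx, gy, t)) ** 2
            for k, (gx, gy) in enumerate(grid)) for t in trials]
best = trials[int(np.argmin(chi2))]
```

The recovered `best` offset agrees with the injected one to within the trial-grid spacing, which is the sense in which the real fit pins down $(\Delta_x,\Delta_y)$.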
\beginfigure{9}
\centerline{\psfig{figure=bestap2.ps,width=\hssize}}\smallskip
\caption{{\bf Figure~9.} Results of the final peak-up stage on the nucleus
of NGC~4342. The $0.26''$ aperture was placed at the positions of a
$5\times 5$ grid on the sky, as illustrated in the inset. The data
points show the measured counts for each aperture position, normalized
to unity for the position with the maximum count rate (position
\#19). The error bars indicate the Poisson noise on the measurements.
The solid curves connect the predictions of the best-fit model for
these data. This model is based on convolutions of the WFPC2 $V$-band
image, and assumes that the centre of NGC~4342 is offset from the
centre of aperture position \#19 by $-0.014''$ along the major axis,
and $-0.010''$ along the minor axis.}
\endfigure
To properly model the observations one must know the spatial
convolution kernel due to the combined effects of the HST/PSF and the
aperture size, neither of which has been particularly well calibrated
previously. We therefore performed a new calibration of these
quantities, using existing observations obtained by Evans (1995). This
follows the approach of van der Marel et al.~ (1997), who did the same
for the small {\it square} FOS apertures. Evans measured the
throughput of the $0.26''$ diameter aperture for a star positioned at various
distances from the aperture centre, as shown in Figure~8. We fitted
these data under the assumption of a purely circular aperture, and a
PSF that can be described as the sum of circularly symmetric
Gaussians. No good fit could be obtained if the aperture diameter was
kept fixed at its nominal value of $0.26''$. This can be due either to
the fact that our model ignores the effects of diffraction at the
aperture edges, or because the aperture does in fact have a different
diameter than its nominal value. An excellent fit to the calibration
observations could in fact be obtained if the aperture diameter was
treated as a free parameter, yielding a value of $0.238''$; this fit,
shown as the solid curve in Figure~8, assumes a PSF described by the
sum of three Gaussians. Whether the true aperture diameter is $0.26''$
or $0.238''$ is not relevant here; the diameter enters into the
analysis only through the kernel that describes the combined effect of
the PSF and the aperture size. This kernel is adequately fit by our
model, independent of what the actual aperture size is.
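The role of this kernel can be illustrated with a toy throughput calculation: a circular aperture numerically integrated over a multi-Gaussian PSF. The Gaussian dispersions and weights below are invented for the example and are not the calibrated FOS PSF.

```python
import numpy as np

def throughput(d, diam, sigmas=(0.04, 0.12, 0.35), weights=(0.7, 0.2, 0.1), n=201):
    """Fraction of a star's light passing a circular aperture of diameter
    `diam` when the star is offset by `d` (both in arcsec) from the
    aperture centre; the PSF is a weighted sum of circular Gaussians."""
    half = diam / 2.0
    xs = np.linspace(-half, half, n)
    x, y = np.meshgrid(xs, xs)
    inside = x**2 + y**2 <= half**2
    cell = (xs[1] - xs[0]) ** 2
    psf = sum(w / (2 * np.pi * s**2) * np.exp(-((x - d) ** 2 + y**2) / (2 * s**2))
              for s, w in zip(sigmas, weights))
    return float(psf[inside].sum() * cell)

# The throughput falls off as the star moves away from the aperture centre,
# qualitatively like the calibration curve in Figure 8.
t0, t1, t2 = (throughput(d, 0.238) for d in (0.0, 0.1, 0.2))
```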
The peak-up data for the final stage of the NGC~4342 target
acquisition are shown in Figure~9. It displays the count rate measured
at each of the 25 positions in the $5\times5$ grid at which the
$0.26''$ aperture was placed. The curve shows the predictions of our
best-fit model, which provides an excellent fit. It has an offset
$(\Delta_x,\Delta_y)$ of only $(+0.014'',+0.010'')$. Similar models
for the NGC~4570 data indicate a somewhat larger offset of
$(+0.011'',+0.052'')$. From a number of experiments we estimate the
errors on our determination of $\Delta_x$ and $\Delta_y$ to be smaller
than $0.005''$. Thus the peak-up acquisitions worked well for both
galaxies, and our models yield the precise positions of the apertures
to high accuracy.
\subsection{4.3 Reduction}
The spectra were reduced using the standard pipeline procedure
described by Keyes et al.~ (1995). The pipeline flat-field was checked
by cross-correlating it with our continuum subtracted galaxy
spectra. A clear cross-correlation peak at zero shift confirmed the
appropriateness of the flat-field. The spectra were converted from
counts to erg cm$^{-2}$ s$^{-1}$ {\AA}$^{-1}$ using the inverse
sensitivity file (IVS) for the circular $0.26''$ aperture. We did not
attempt to correct the calibrated spectra for the PSF dependence on
wavelength; this only affects the continuum slope, which is subtracted
in the stellar kinematical analysis anyway.
\subsection{4.4 Wavelength calibration}
A vacuum wavelength scale is computed by the STScI pipeline, based
upon dispersion coefficients for the given grating and aperture. Due
to non-repeatability of the filter-grating wheel and the aperture
wheel, offset errors in the wavelength scale can occur of up to
several Angstroms. Since we changed neither the aperture nor the
grating during our visits, this will not affect the relative velocity
scale for each galaxy. It may affect the absolute velocity scale, but
that is of little importance.
The FOS suffers from the so-called `geomagnetically induced image
motion problem' (GIMP). Although on-board corrections are applied to
correct for this, residual effects still affect the wavelength
scale considerably ($0.13${\AA} RMS, according to the HST Data
Handbook). Additional wavelength calibration is therefore useful. For
NGC~4570, one arc lamp spectrum was obtained after the acquisition. We
used this spectrum to check the pipeline wavelength calibration, by
comparing the wavelengths of the emission line centres to their actual
vacuum wavelengths. In addition to an offset of $\mathord{\sim} 2${\AA} (due to
the non-repeatability of the wheels), a small non-linearity of the
wavelength scale was found. We therefore recalibrated the wavelength
scale using the arc spectrum, including an additional shift of
$-0.769${\AA} to correct for the offset between internal and external
sources (Keyes et al.~ 1995). For NGC~4342, an arc lamp spectrum was
obtained at the end of each orbit. The wavelength scale of these
spectra was found to vary by 0.35{\AA} during the visit, as a result
of residual GIMP. The arc spectra were used to recalibrate the
wavelength scale of each galaxy spectrum. For this, we used linear
interpolation in time to estimate the wavelength scale for
observations between two arc spectra. After wavelength recalibration,
all spectra were rebinned logarithmically to a scale of $58.539
{\rm\,km\,s^{-1}}$/pixel, covering the wavelength range between 4570 and 6817{\AA}.
We estimate the uncertainties in the final wavelength scale for
NGC~4342 to be $\mathrel{\spose{\lower 3pt\hbox{$\sim$}}\raise 2.0pt\hbox{$<$}} 3.5 {\rm\,km\,s^{-1}}$. For NGC~4570, we could not correct for
residual GIMP variations from orbit to orbit, because only one arc
lamp spectrum was obtained. If the variations of the absolute
wavelength scale were of the same order as during the NGC~4342 visit
(i.e., 0.35{\AA}), the absolute velocity scales of different NGC~4570
spectra may vary by $\mathord{\sim} 20 {\rm\,km\,s^{-1}}$. This may induce systematic errors
in the rotation velocities of the same order (see Section~5.2).
\subsection{4.5 Template spectra}
To facilitate the kinematical analysis it is convenient to have
template spectra that are observed with the same instrumental setup.
However, because of the time consuming target acquisitions, only very
few template stars have been observed with the FOS. From the HST
archive we took a spectrum of the KIII-star F193, which was observed
with the same setup as our galaxy spectra under GO proposal
5744 (PI: H.C. Ford). Unfortunately, after reducing the spectrum, it
was found to provide a poor match to our galaxy spectra.
We therefore decided to use a template library obtained from
ground-based observations. The library consists of 27 stars of
different spectral type, obtained by M. Franx at the 4m telescope of
the KPNO with the RC Spectrograph (see van der Marel \& Franx 1993 for
more details on this template library). The spectra were rebinned
logarithmically to the same scale as the galaxy spectra, and were
shifted to a common velocity. The spectra cover the wavelength range
$4836-5547${\AA}. Although this range is smaller than that covered by
the FOS spectra, it is centred on the Mg $b$ triplet ($\mathord{\sim}
5170${\AA}), which is the most useful wavelength range for stellar
kinematic analysis. The other strong feature in the FOS spectra, the
Na D line, can be influenced by absorption from the interstellar
medium, and is not a good absorption line for kinematic analysis.
Given the relatively low $S/N$ of the FOS spectra, we decided
to construct one optimal template spectrum for each galaxy, rather
than for each separate spectrum. For this purpose, we constructed a
grand total spectrum of each galaxy, by summing all spectra at
different aperture positions. We determined the best-fitting stellar
mix using the same method as described in Section~3.4. For both
NGC~4342 and NGC~4570 we found the best-fitting template mix to
consist of giants and dwarfs of spectral types G and K.
\beginfigure*{10}
\centerline{\psfig{figure=ngc4342all.ps,width=0.8\hdsize}}\smallskip
\caption{{\bf Figure~10.} Major and minor axis stellar kinematics of
NGC~4342 inferred from the WHT spectra. The panels show, from top to
bottom: the line strength $\gamma$, the rotation velocity $V$, the
velocity dispersion $\sigma$, the RMS projected line-of-sight velocity
$V_{\rm rms} \equiv \sqrt{V^2+\sigma^2}$, and the Gauss-Hermite
coefficients $h_3$ and $h_4$. All velocities are in units of
${\rm\,km\,s^{-1}}$. The scales on the abscissa are different for the major and
minor axis data, as a result of the strong flattening of NGC~4342.}
\endfigure
\beginfigure*{11}
\centerline{\psfig{figure=ngc4570all.ps,width=0.8\hdsize}}\smallskip
\caption{{\bf Figure~11.} Same as Figure~10, but now for the major axis,
offset axis (see Table~4), and minor axis of NGC~4570.}
\endfigure
\beginfigure*{12}
\centerline{\psfig{figure=fosres.ps,width=0.8\hdsize}}\smallskip
\caption{{\bf Figure~12.} Major axis rotation velocities $V$ (left) and
velocity dispersions $\sigma$ (right) for the nuclear regions of
NGC~4342 (top) and NGC~4570 (bottom), inferred from the HST/FOS
spectra (data points with error bars). For comparison, the open
circles connected by dashed lines show the lower-spatial resolution
results from the WHT spectra.}
\endfigure
\section{5 Stellar kinematical analysis}
The accessible stellar kinematical information of galaxies is
contained in the stellar line-of-sight velocity profiles (VPs). We
expand each VP in a Gauss-Hermite series, following the approach of
van der Marel \& Franx (1993):
\eqnam\ghseries
$$ {\rm VP}(v) = {\gamma \over \sigma}
\alpha(w) \left( 1 + \sum_{j=3}^{N} h_j H_j(w) \right)
,\eqno\hbox{(\the\eqnumber )}\global\advance\eqnumber by 1$$
where
$$ \alpha(w) \equiv {1 \over \sqrt{2 \pi}} {\rm e}^{-{1\over2} w^2}
, \qquad
w \equiv (v-V)/\sigma . \eqno\hbox{(\the\eqnumber )}\global\advance\eqnumber by 1$$
Here $v$ is the line-of-sight velocity, $H_j$ are the Hermite
polynomials of degree $j$, and $h_j$ are the Gauss-Hermite
coefficients. The first term in equation (\ghseries) represents a
Gaussian with line strength $\gamma$, mean radial velocity $V$, and
velocity dispersion $\sigma$. The even Gauss-Hermite coefficients
quantify symmetric deviations of the VP from the best fitting
Gaussian, and the odd coefficients quantify anti-symmetric deviations.
We determined the best-fitting VP parameters for each galaxy spectrum
by $\chi^2$-minimization of the difference between the galaxy spectrum
and a broadened template spectrum (using the Gauss-Hermite series as
the broadening function). The fitting can be done either in Fourier
space (e.g., van der Marel \& Franx 1993) or in pixel space (e.g., van
der Marel 1994). We adopted the latter approach, because it allows
straightforward masking of sky lines (which are especially abundant in
the WHT Ca II triplet spectra) and bad pixel regions (e.g., due to
dead or noisy diodes in the FOS spectra). However, we also performed
Fourier-space fitting for all spectra, as a consistency check, and
found excellent agreement in all cases.
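The VP parameterization being fitted can be made concrete with a short
sketch. The following is a schematic NumPy fragment in our own
notation (not the actual fitting code), evaluating the Gauss-Hermite
series defined above, truncated at $N=4$; the Hermite normalization
follows van der Marel \& Franx (1993):

```python
import numpy as np

def gauss_hermite_vp(v, gamma, V, sigma, h3=0.0, h4=0.0):
    """Evaluate the Gauss-Hermite velocity profile, truncated at
    N = 4 (the h3, h4 terms used in this paper)."""
    w = (v - V) / sigma
    alpha = np.exp(-0.5 * w**2) / np.sqrt(2.0 * np.pi)
    # Hermite polynomials H3, H4 in the normalization of
    # van der Marel & Franx (1993)
    H3 = (2.0 * w**3 - 3.0 * w) / np.sqrt(3.0)
    H4 = (4.0 * w**4 - 12.0 * w**2 + 3.0) / np.sqrt(24.0)
    return (gamma / sigma) * alpha * (1.0 + h3 * H3 + h4 * H4)
```

In the pixel-space fit, this profile is used as the broadening
function of the template, and $(\gamma, V, \sigma, h_3, h_4)$ are
varied to minimize $\chi^2$.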
\subsection{5.1 The WHT spectra}
The stellar kinematical results obtained from the WHT spectra are
shown in Figures~10 and~11, for NGC~4342 and NGC~4570, respectively,
and are listed in the tables of Appendix B. Prior to analysis, the
spectra were spatially rebinned along the slit to a S/N $\geq 20$ per
$10 {\rm\,km\,s^{-1}}$. As template we used the optimal mix of stellar spectra
determined as described in Section~3.4.
The kinematics of both galaxies are remarkably similar. Each galaxy
shows rapid rotation along the major axis, no measurable rotation
along the minor axis, and a central peak in the velocity dispersion
profile. The central dispersion measured for NGC~4342 is $317{\rm\,km\,s^{-1}}$;
that for NGC~4570 is $198{\rm\,km\,s^{-1}}$. The central velocity dispersion of
NGC~4342 is extremely high. Only very few galaxies have central
velocity dispersions, measured from the ground, that are larger than,
or comparable to, that of NGC~4342; examples are: M87 (van der Marel
1994), NGC~3115 (Kormendy \& Richstone 1992) and NGC~4594 (Kormendy
1988; van der Marel et al.~ 1994). These galaxies are all strong
candidates for harbouring a massive nuclear black hole. The latter two
galaxies are S0s, and have rather similar kinematics as NGC~4342 and
NGC~4570.
The observed rotation curve shapes are typical for S0 galaxies (e.g.,
Simien, Michard \& Prugniel 1993; Fisher 1997). They are steep in the
centre, show a dip at intermediate radii, and then rise more gradually
out to the last measured point. The dip is also present in the radial
profile of the rms-velocity, $V_{\rm rms} = \sqrt{V^2 +
\sigma^2}$. For NGC~4342, the radial profile of $h_3$ also shows an
interesting feature. Although the profile is somewhat noisy, it
appears that $h_3$ changes its sign in the radial region where the dip
in the rotation curve occurs. This has in fact been observed in a
number of other S0s as well (Fisher 1997). Most likely, all these
kinematical features reflect radial changes in the relative
contributions of the different structural components. The even
Gauss-Hermite coefficient $h_4$, expressing symmetric deviations of
the VP from a Gaussian, is never significantly different from zero.
\subsection{5.2 The HST spectra}
{}From the HST/FOS spectra we determined only the mean velocity and
velocity dispersion of the best-fitting Gaussian VPs. The spectra are
not of sufficient $S/N$ to determine the deviations from a Gaussian
shape. A complication in the kinematical analysis is provided by the
fact that we must use a template spectrum that was obtained with a
different instrument (Section~4.5). This implies that the template and
galaxy spectra do not have the same line-spread-function (LSF; i.e.,
the observed response for a single monochromatic line). The parameters
${\widetilde V}$ and ${\widetilde \sigma}$ obtained from the
kinematical analysis must be corrected for these LSF differences, to
obtain unbiased estimates for the true mean stellar velocity $V$ and
velocity dispersion $\sigma$. The required corrections can be made,
since the LSFs of both the galaxy and template spectra can be measured
and/or calculated. Our approach for this is described in detail in
Appendix~A. The kinematical results obtained after correction for LSF
differences are listed in Table~5. Figure~12 shows the results for the
apertures that were centered on the major axis, as function of major
axis distance. The systemic velocity for each galaxy was estimated as
the mean velocity at $r=0$, obtained by linearly interpolating the
rotation curve between aperture positions~\#4 and~\#5.
For NGC~4342, the FOS spectra show a much higher central velocity
dispersion than the lower spatial resolution WHT spectra, $\sigma_0 =
418 {\rm\,km\,s^{-1}}$ vs.~$\sigma_0 = 320 {\rm\,km\,s^{-1}}$, respectively. Also, a very steep
central rotation gradient is measured with the FOS, much steeper than
that measured from the ground. The rotation velocity reaches $V_{\rm
rot} \approx 200{\rm\,km\,s^{-1}}$ at $0.25''$ from the centre (corresponding to 18
pc at a distance of 15 Mpc). These observations suggest the presence
of a strong central mass concentration in NGC~4342, possibly a massive
black hole. We will address this issue in a forthcoming paper through
detailed dynamical models.
For NGC 4570, the FOS results are somewhat more difficult to
interpret. There is certainly much less of a suggestion for a central
mass concentration on the basis of the qualitative features of the
data. The central velocity dispersion is larger than measured from the
ground, but only by a marginal amount. As for NGC~4342, the rotation
curve is steeper than measured from the ground, but the scatter
between neighbouring points (especially observations~B1 and~B2),
suggests the possible presence of small systematic errors. No known
error could be identified, but we cannot exclude GIMP-related
wavelength offsets of several tens of ${\rm\,km\,s^{-1}}$ between different spectra
(cf.~Section~4.4). Such offsets are not a problem for the NGC~4342
spectra, for which our wavelength calibration is more accurate.
\beginfigure*{13}
\centerline{\psfig{figure=color_cont.ps,width=\hdsize}}\smallskip
\caption{{\bf Figure~13.} Contour plots of $V-I$ (thick contours),
superimposed on isophotal $V$-band contours (thin contours). In both
galaxies, the outermost colour isophote corresponds to $V-I=1.22$.
Subsequent contours are 0.02 mag redder. For NGC~4342, the inner four
contours step by only 0.01 mag, in order to better sample the small
colour gradient in this galaxy (see Figure~14). The colour images were
smoothed to suppress noise, while maintaining the information in the
images. At large radii, the colour contours are flatter than the isophotal
contours. In the central region, the flattening of both contours is
similar. The dents in the $V-I=1.30$ contour of NGC~4570 are caused by
the relatively blue colours of the two features marked `A' and `B' in
Figure~6 (see van den Bosch \& Emsellem 1997, for a discussion on the
nature of these features).}
\endfigure
\@ifstar{\@ssection}{\@section}{6 Stellar populations}
In this section we investigate the stellar populations of NGC~4342
and NGC~4570, by studying broad-band colour images (Section~6.1) and
line strengths indices (Section~6.2). The results are compared to
models to study the age and metallicity of the populations
(Section~6.3). This allows us to address the formation of the nuclear
discs, and to discuss the evidence for either coeval or secular
formation.
\subsection{6.1 Broad-band colour images}
Fisher, Franx \& Illingworth (1996) presented $B-R_c$ colour images of
$148'' \times 148''$ for a number of close to edge-on S0s. They found
that the $B-R_c$ contours are flatter than the isophotes;
i.e., whereas the colour gradients along the minor axis decrease
outwards, the major axis colour gradients flatten out towards larger
radii. Similar behaviour was found for the Mg2 line strength
gradients. The H$\beta$ gradients, however, were found to be rather
flat throughout the entire galaxy. These findings suggest that
the (outer) discs of S0s are more metal rich than their bulges,
therewith contradicting formation scenarios in which the bulges are
formed from heated disc material.
We constructed $U-V$ and $V-I$ colour images from the HST/WFPC2 data.
To take into account that the PSFs are significantly different for the
three bands, we convolved the $U$-band image with the $V$-band PSF,
the $I$-band image with the $V$-band PSF, and the $V$-band image with
either the $U$-, or $I$-band PSF (using PSFs constructed with the
TinyTim software package). This approach degrades the spatial
resolution of the colour images somewhat, but provides the safest way
to avoid systematic colour errors near the centre.
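The cross-convolution scheme can be sketched as follows. This is an
illustrative NumPy/SciPy fragment only (function name and interface
are ours); the actual analysis used TinyTim PSFs and calibrated
photometric zero-points, which are omitted here:

```python
import numpy as np
from scipy.signal import fftconvolve

def matched_colour(img_u, img_v, psf_u, psf_v):
    """Cross-convolve two images so that both carry the common
    effective PSF (PSF_U convolved with PSF_V), then form the
    colour map in magnitudes (zero-points omitted)."""
    u_matched = fftconvolve(img_u, psf_v, mode='same')
    v_matched = fftconvolve(img_v, psf_u, mode='same')
    return -2.5 * np.log10(u_matched / v_matched)
```

Because both images end up with the same effective PSF, resolution is
degraded slightly, but spurious colour structure on the scale of the
PSF differences is suppressed.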
Figure~13 presents contour maps of the $V-I$ colour images (thick
contours), superimposed on contour maps of the $V$-band images (thin
contours). The images only go out to $\sim 10''$, and within this
limited extent, both the bulge and the outer disc add
significantly to the projected surface brightness. A determination of
the colours of the individual components is therefore not
straightforward. Nonetheless, our results clearly show that the colour
contours are flatter than the isophotes. Similar results were obtained
for the $U-V$ images. Our results are therefore at least
qualitatively consistent with those of Fisher, Franx \& Illingworth (1996).
\beginfigure*{14}
\centerline{\psfig{figure=colgrad.ps,width=\hdsize}}\smallskip
\caption{{\bf Figure~14.} Major-axis $U-V$ and $V-I$ colour gradients,
as a function of radius, for NGC~4342 and NGC~4570. The behaviour of
the gradients is markedly different for the two galaxies.}
\endfigure
Our high spatial resolution colour images are best suited to study
population differences between the bulges and the {\it nuclear} discs
of both galaxies. If these components had the same, uniform stellar
population, then the flattening of the colour contours and the
isophotes in the central arcsec would have to be similar. This is
exactly what is observed, and contrasts strongly with the results at
larger radii. Thus, the colour images do not suggest a clear
difference in colour between the nuclear discs and the central regions
of the bulges in either of the two galaxies. However, one has to keep
in mind that broad band colours are rather poor population diagnostics
(Worthey 1994), in that different populations can have similar broad
band colours (see e.g., Figure~15 below). Therefore, we cannot rule
out that a more detailed analysis may yet reveal some subtle
population differences. In Section~6.2 below we probe the stellar
populations of the nuclear regions in NGC~4342 and NGC~4570 through
their absorption line strengths.
\begintable{6}
\caption{{\bf Table~6.} Line strength indices}
\halign{#\hfil&\quad \hfil#\hfil\quad& \hfil#\hfil\quad \cr
& NGC~4342 & NGC~4570 \cr Mg2 & $0.338 \pm 0.087$ & $0.386 \pm 0.015$
\cr H$\beta$ & $1.52 \pm 0.30$ & $2.51 \pm 0.48$ \cr Mgb & $5.11 \pm
0.33$ & $5.32 \pm 0.53$ \cr Fe5270 & $2.94 \pm 0.37$ & $3.38 \pm 0.60$
\cr Fe5335 & $3.29 \pm 0.42$ & $3.40 \pm 0.67$ \cr }
\tabletext{Line strength indices and errors for the central regions of
NGC~4342 and NGC~4570 (corrected for velocity dispersion broadening
and converted to the Lick-IDS system), derived from grand-total FOS
spectra as described in the text.}
\endtable
Figure~14 shows the $U-V$ and $V-I$ colour gradients as a function of
major axis radius. There is a marked difference between the two
galaxies. NGC~4342 shows no clear colour-gradients inside $\mathord{\sim}
0.3''$ and outside $\mathord{\sim} 3''$. Around $\mathord{\sim} 1''$ we find
$\Delta(U-V)/\Delta \log r = -0.26 \pm 0.02$ and $\Delta(V-I)/\Delta
\log r = -0.05 \pm 0.01$. By contrast, NGC~4570 shows strong gradients
over the entire radial interval studied: $\Delta(U-V)/\Delta \log r$
ranges from $-0.41 \pm 0.02$ at the outside, to $-0.21 \pm 0.02$ in
the centre; $\Delta(V-I)/\Delta \log r$ has more or less a constant
value of $-0.06 \pm 0.01$ from the centre out to $\mathord{\sim} 20''$. The
$U-V$ gradient at the outside of NGC~4570 is extremely large as
compared with those of other early-type galaxies (cf.~Peletier 1989).
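The gradients quoted here are slopes of colour against the logarithm
of radius. A minimal sketch of such a fit, assuming NumPy (the
function name is ours):

```python
import numpy as np

def colour_gradient(r_arcsec, colour):
    """Least-squares slope d(colour)/d(log10 r), the quantity
    quoted as Delta(colour)/Delta log r in the text."""
    slope, _ = np.polyfit(np.log10(r_arcsec), colour, 1)
    return slope
```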
The central reddening typical of early-type galaxies is
generally interpreted as due to a metallicity gradient. The best
evidence for this comes from spectroscopic measurements of absorption
lines in ellipticals (e.g., Faber 1977; Burstein et al.~ 1984; Efstathiou
\& Gorgas 1985; Peletier 1989). Several studies have found a
correlation between colour (and line strength) gradients and
total luminosity. For low-mass galaxies ($M_B > -20.5$), the gradients
increase with the total mass of the galaxy (e.g., Vader et al.~ 1988;
Carollo, Danziger \& Buson 1993), consistent with the
predictions of simple models of dissipative collapse coupled with
supernovae-induced winds (Larson 1974; Carlberg 1984; Arimoto \& Yoshii
1987; Matteucci \& Tornamb\`e 1987). For NGC~4342 and
NGC~4570, the presence of both an outer and a nuclear disc indeed
indicates that dissipation has played a role during their
formation. The morphology of these galaxies is very similar, but
NGC~4570 is almost 1.6 magnitudes brighter. This indicates that
NGC~4570 is 4.2 times more massive, assuming that both galaxies have a
similar mass-to-light ratio. The finding that the colour gradients
in NGC~4570 are larger than those in NGC~4342 is therefore
qualitatively consistent with dissipative galaxy formation, in which
star formation lasts longer and the onset of a galactic wind starts
later, in higher mass galaxies.
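The quoted factor of 4.2 is simply the luminosity ratio
$10^{0.4\,\Delta m}$ implied by the magnitude difference (a difference
of $\approx 1.56$ mag, i.e.~`almost 1.6' magnitudes, gives 4.2), which
equals the mass ratio for equal mass-to-light ratios. As an arithmetic
check:

```python
def mass_ratio(delta_mag):
    """Luminosity ratio implied by a magnitude difference; for
    equal mass-to-light ratios this is also the mass ratio."""
    return 10.0 ** (0.4 * delta_mag)
```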
\beginfigure*{15}
\centerline{\psfig{figure=allcol.ps,width=\hdsize}}\smallskip
\caption{{\bf Figure~15.} Colour-colour diagrams for NGC~4342 and
NGC~4570. Small circles connected by solid lines show the colours for
isophotes of different radii; the outermost isophote is indicated by
an asterisk. Large symbols indicate the average colours of the nucleus
(triangle), nuclear disc (circle) and bulge (square). Overplotted as
dotted lines are the predictions of the single-burst stellar
population models of Worthey (1994), for a grid of age (3--17 Gyr) and
metallicity ([Fe/H] $=-0.25$ -- $+0.5$) values. The lines for [Fe/H]
$=-0.25$ and [Fe/H] $=0$ partly overlay each other, indicating a
degeneracy between metallicity and age at low metallicities. The arrow
indicates the slope of dust reddening for $R_V \equiv A(V)/E(B-V) =
3.1$, as typical for Galactic dust.}
\endfigure
\subsection{6.2 Line strengths}
Line strength measurements provide additional information about ages
and metallicities that is complementary to, and often more accurate
than, that provided by broad-band colours. The most commonly used
spectral indices are those of the Lick-IDS system (Faber et al.~ 1985;
Worthey et al.~ 1994), in particular H$\beta$, Mg2, Mgb, Fe5270 and
Fe5335. These are all in the wavelength range 4800--5400{\AA}, which
is included in the FOS spectra. The $S/N$ of the FOS spectra was not
sufficient to infer line strengths at each aperture position
individually, and we therefore measured line strength indices from one
grand-total spectrum for each galaxy, obtained by summing the
different spectra available for each galaxy. Two corrections have to
be applied to these indices. First, they must be corrected for the
broadening effect of velocity dispersion, which weakens most of the
lines. We determined empirical correction factors, $C(\sigma) \equiv
{\rm index}(0)/{\rm index}(\sigma)$, for each of the 5 indices listed
above; ${\rm index}(0)$ is the index measured from the template star
K193, $\sigma$ is the velocity dispersion of the grand-total spectrum
derived using K193 as template, and ${\rm index}(\sigma)$ is the index
of the K193 spectrum broadened with a Gaussian of dispersion
$\sigma$. Second, to be able to compare our indices with the Worthey
(1994) stellar population models (see Section~6.3), they must be
converted to the Lick scale. This is necessary to correct for
differences in the spectral resolution and in the spectral response
function between our FOS data and the Lick group data. We compared the
indices derived by us from the FOS spectrum of K193 with those
obtained by the Lick group from observations of the same star (Worthey
et al.~ 1994). This yields correction factors
${\rm index(LICK)/index(FOS)}$ for each of the four atomic
indices. For the molecular Mg2 index, we used the difference ${\rm
[index(LICK) - index(FOS)]}$. Table~6 lists the line strength indices
thus obtained for NGC~4342 and NGC~4570, corrected for velocity
dispersion broadening and converted to the Lick scale.
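The empirical broadening correction can be sketched as follows. This
is a schematic SciPy fragment (function names and the toy index
definition are ours, not the actual reduction code): the template is
broadened with a Gaussian of the measured dispersion, the index is
remeasured, and the ratio gives $C(\sigma)$.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def broadening_correction(template_flux, measure_index, sigma_pix):
    """Empirical correction factor C(sigma) = index(0)/index(sigma).
    measure_index is any function returning a line index from a
    spectrum; the template is broadened with a Gaussian of
    dispersion sigma_pix (in pixels)."""
    index_0 = measure_index(template_flux)
    broadened = gaussian_filter1d(template_flux, sigma_pix)
    return index_0 / measure_index(broadened)
```

Since velocity dispersion broadening weakens most lines,
$C(\sigma) > 1$ for a depth-sensitive index.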
\subsection{6.3 Comparison with stellar population models}
To address the age and metallicity of the stellar populations
in NGC~4342 and NGC~4570, we have compared their colours and line
strengths to the single-burst stellar population models of Worthey
(1994). These models give the fluxes, colours, and line strengths of
an evolving stellar population, as a function of age and
metallicity [Fe/H], calculated from isochrones and model flux
libraries of stars with a Salpeter (1955) initial mass function.
In Figure~15 we plot $V-I$ vs.~$U-V$ for the two galaxies. Small
open circles indicate results for isophotes at different radii;
neighbouring isophotes are connected, and an asterisk indicates the
outermost radius. Large symbols indicate the average colours of the
nucleus (open triangle), the nuclear disc (open circle) and the bulge
(open square). For the nucleus we used the region inside $0.18''$, for
the nuclear disc we used the region along the major axis between
$0.18''$ and $0.71''$ (which is where the nuclear discs most clearly
stand out from the isophotal analysis, cf.~Figures~4 and~5), and for
the bulge we used the region inside $0.71''$, but offset from the
major axis by $0.5'' - 0.6''$. Overplotted in Figure~15 are the
predictions of Worthey's models for an age-metallicity grid, with ages
between 3 and 17 Gyr, and [Fe/H] between $-0.25$ and $+0.50$. For
[Fe/H] $\mathrel{\spose{\lower 3pt\hbox{$\sim$}}\raise 2.0pt\hbox{$<$}} +0.25$ there is strong degeneracy between age and
metallicity, as the grid points crowd together. However, for high
metal abundances the $V-I$ vs.~$U-V$ diagram is a useful discriminator
between age and metallicity effects.
For NGC~4342, the main body of the galaxy lies close to the 8 Gyr,
solar-metallicity grid point. The nucleus and nuclear disc have
different colours than the main body, but have similar colours to each
other (cf.~Figure~15). Their colours are well fit with a
similar age as the main body, but with a somewhat higher metallicity
([Fe/H] $\approx +0.25$). There is considerable uncertainty in these
results though, since most points fall in the region of the model grid
where there is a strong age-metallicity degeneracy.
\beginfigure{16}
\centerline{\psfig{figure=line_index.ps,width=\hssize}}\smallskip
\caption{{\bf Figure~16.} Diagram of the age indicator H$\beta$ vs.~the
metallicity indicator [MgFe]. Data points with error bars indicate the
measurements from the grand-total FOS spectra. Overplotted as dotted
lines are the predictions of the single-burst stellar population
models of Worthey (1994), for a grid of age (3--17 Gyr) and
metallicity ([Fe/H] $=-0.25$--$+0.5$) values.}
\endfigure
The colour-colour diagram of NGC~4570 is very different. The
colours change dramatically from the outermost point to the
very nucleus. Whereas the outermost points fall outside the grid of
Worthey's models, the nucleus is consistent with an 8 Gyr old
population with [Fe/H] $\approx +0.35$. The run of the central
isophotes in the colour-colour diagram has a similar slope as would be
expected from reddening due to dust (as indicated by the arrow in
Figure~15), but dust reddening is not likely to be the cause of the
observed gradients: both the isophotes and the colour-images are very
smooth, and no $100\mu{\rm m}$ emission has been detected. It would be
tempting to interpret the observed colour differences between the
nucleus, nuclear disc, and bulge as due to changes in age. However,
the nuclear disc does not show up as a separate component in the
colour images. The colour differences therefore most likely reflect
mere changes in stellar population with radius from the centre, rather
than differences between separate components.
{}From the line indices listed in Table~6, we can calculate the new index
[MgFe], which is defined as $\sqrt{{\rm Mgb} \langle {\rm Fe}
\rangle}$, where $\langle {\rm Fe} \rangle = ({\rm Fe5270} + {\rm
Fe5335})/2$. This index is often used as a metallicity indicator
(e.g., Gonz\'alez 1993). We derive $\log{\rm [MgFe]} = 0.60
\pm 0.04$ and $0.63 \pm 0.06$, for NGC~4342 and NGC~4570,
respectively. The H$\beta$ index is a sensitive
age-indicator. Figure~16 shows H$\beta$ vs.~[MgFe] for each galaxy,
with overplotted the predictions of Worthey's stellar population
models. The observed indices refer to the central region
($\mathrel{\spose{\lower 3pt\hbox{$\sim$}}\raise 2.0pt\hbox{$<$}} 0.5''$) of each galaxy. The models indicate that this central region
in NGC~4342 has a high metallicity (${\rm [Fe/H]} \approx +0.35$) and
an age of $\mathord{\sim} 8$ Gyr. This is roughly consistent with the age and
metallicity found from the broad band colours (see Figure~15). The
indices for NGC~4570, on the other hand, fall outside the model
grid. The high H$\beta$ index suggests a very young stellar
population, whereas both the colours and the value of [MgFe] suggest a
high metallicity.
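The quoted $\log{\rm [MgFe]}$ values follow directly from the indices
in Table~6; the arithmetic can be checked in a few lines (NumPy
assumed, function name ours):

```python
import numpy as np

def log_mgfe(mgb, fe5270, fe5335):
    """log10 of the combined index [MgFe] = sqrt(Mgb * <Fe>),
    with <Fe> = (Fe5270 + Fe5335)/2."""
    fe_mean = 0.5 * (fe5270 + fe5335)
    return np.log10(np.sqrt(mgb * fe_mean))
```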
Although these results are suggestive, we do not wish to draw very
strong conclusions. The error bars on the measured line indices are
large, due to the low $S/N$ of the FOS spectra. We have been careful
and conservative in the determination of these errors, but note that
it is very difficult to quantify possible systematic errors. In
addition, the age and metallicity values inferred from the line
strengths refer only to the nuclear region $\mathrel{\spose{\lower 3pt\hbox{$\sim$}}\raise 2.0pt\hbox{$<$}} 0.5''$, and not
necessarily to the entire bulge population. Fisher, Franx \&
Illingworth (1996) have shown that the nuclear regions of S0s are
typically a few Gyr younger than the bulk of the galaxy. Also,
ages measured from the H$\beta$ index have to be interpreted with
care. First, the observed H$\beta$ index could be
dominated by horizontal branch stars, rather than by
main-sequence turnoff stars. In this case it would be more
sensitive to variations in the initial mass function, than to
variations in age. Second, the H$\beta$ index is very
sensitive to small numbers of young stars present. The
relatively young age inferred for the nuclear population in NGC~4570
may therefore reflect merely the most recent generation of
stars, rather than the age of the entire, presumably much older
generation. Indeed, van den Bosch \& Emsellem (1997) present
evidence that the central region of NGC~4570 has had recent
star formation. A more thorough investigation of the stellar
populations of the separate components in NGC~4342 and NGC~4570 than
presented here will have to await additional observations of line
indices, both at larger radii from the centre and with higher
$S/N$.
\@ifstar{\@ssection}{\@section}{7 Conclusions and discussion}
Small stellar discs embedded in the nuclei of early-type galaxies are
intriguing: they contain clues about the process of galaxy formation,
and their dynamics provide a useful tool to constrain the central mass
distribution. We have presented high spatial resolution photometric
and spectroscopic data for two E/S0 galaxies in the Virgo cluster,
NGC~4342 and NGC~4570, for which pre-refurbishment HST images showed
nuclear discs in addition to larger outer discs. New HST/WFPC2 images
confirm the existence of the nuclear discs, and dismiss suggestions
that the earlier detections were artifacts of Lucy deconvolution; the
new images clearly show the discs even without deconvolution. The
decomposition of both galaxies in disc and bulge components will be
presented in a forthcoming paper (Scorza \& van den Bosch 1997). Here
we have focussed on using the multi-colour WFPC2 images, WHT spectra
and HST/FOS spectra to do a first study of the stellar populations and
dynamics of the nuclear discs.
Broad-band colour images were constructed from the WFPC2 data,
properly taking into account the different PSFs in the different
bands. For both galaxies we find the colour contours outside $\mathord{\sim}
2''$ to be flatter than the isophotes. We have also determined the
radial colour gradients in both galaxies. The gradients in the more
massive galaxy NGC~4570 are larger than those for NGC~4342, consistent
with the predictions of simple models of dissipative collapse coupled
with supernovae-induced galactic winds. All these results are in good
agreement with those of Fisher, Franx \& Illingworth (1996), who
studied the ages and metallicities of a sample of 20 S0s, and seem to
suggest that the bulges of S0s are not formed out of heated disc
stars, but are most likely the results of dissipational formation.
By contrast, from the colour images and gradients inside
$\mathord{\sim} 2''$ we find no strong evidence for population differences
between the bulges and the {\it nuclear} discs in both galaxies.
We have measured line strength indices for several diagnostic lines
from the summed FOS spectra, to obtain additional stellar population
information about the nuclear ($\mathrel{\spose{\lower 3pt\hbox{$\sim$}}\raise 2.0pt\hbox{$<$}} 0.5''$) components of both
galaxies. Comparison with the single-burst stellar population models
of Worthey (1994) indicates that the central regions of both galaxies
have metallicities [Fe/H] $\approx +0.25$ or larger. NGC~4342 is well
fit with an age of $\mathord{\sim} 8$ Gyr. NGC 4570 has an unusually large
H$\beta$ line strength, which may be suggestive of recent star
formation. In van den Bosch \& Emsellem (1997) evidence is presented
that the central region of NGC~4570 has experienced bar induced
secular evolution. Unfortunately, neither the broad band colours nor
the line strength measurements presented here, place important
constraints on possible formation scenarios for the nuclear
discs. Additional observations of line indices, both at larger radii
from the centre and with higher $S/N$, are required for a more
thorough investigation of the stellar populations of the separate
components in these galaxies.
The WHT and FOS spectra were used to determine the nuclear stellar
kinematics of NGC~4342 and NGC~4570. The dynamical structure of both
galaxies is found to be remarkably similar to that of other
well-studied S0s, such as NGC~3115 (Kormendy \& Richstone 1992),
NGC~4026 and NGC~4111 (Simien, Michard \& Prugniel 1993; Fisher
1997). The long-slit WHT spectra have high $S/N$ and a large radial
extent. They reveal a very centrally peaked velocity dispersion
profile in both galaxies. The rotation curves clearly show the
different dynamical properties of the structural components identified
photometrically. The single-aperture FOS spectra of the nuclear
regions of both galaxies have lower $S/N$ than the WHT spectra, but
have four times higher spatial resolution. The FOS data of NGC~4342
show significantly higher velocities than the WHT data. The observed
central velocity dispersion is $\mathord{\sim} 420{\rm\,km\,s^{-1}}$, compared to `only'
$\mathord{\sim} 320{\rm\,km\,s^{-1}}$ as measured from the WHT spectra. The rotation
velocity reaches $\mathord{\sim} 200 {\rm\,km\,s^{-1}}$ at $0.25''$, implying a much steeper
rotation gradient than inferred from the WHT data. These observations
indicate a high central mass density. Detailed dynamical models to be
presented in a forthcoming paper (Cretton \& van den Bosch 1997, see
also van den Bosch \& Jaffe 1997) provide strong evidence for the
presence of a few times $10^8 {\rm\,M_\odot}$ black hole in NGC~4342. The FOS
data for NGC~4570 are more difficult to interpret. The rotation curve
is considerably steeper than measured from the ground, but there is
some indication for possible systematic errors. The central velocity
dispersion is larger than measured from the ground, but only
marginally so. Although these data certainly do not exclude the
possible presence of a black hole in NGC~4570, the qualitative
features of the data do not suggest such a black hole as strongly as
they do for NGC~4342.
\@ifstar{\@ssection}{\@section}*{Acknowledgments}
\tx The observations presented in this paper were obtained with the
NASA/ESA Hubble Space Telescope and with the William Herschel
Telescope. HST data are obtained at the Space Telescope Science
Institute, which is operated by AURA, Inc., under NASA contract NAS
5-26555. The WHT is operated on the island of La Palma by the Royal
Greenwich Observatory in the Spanish Observatorio del Roque de los
Muchachos of the Instituto de Astrofisica de Canarias. We are grateful
to Eric Emsellem for his help with the MGE analysis, and to Marijn
Franx for providing a ground-based template library. RPvdM was
supported by a Hubble Fellowship, \#HF-1065.01-94A, awarded by STScI.
\@ifstar{\@ssection}{\@section}*{References}
\beginrefs
\bibitem Arimoto N., Yoshii Y., 1987, A\&A, 173, 23
\bibitem Bender R., D\"obereiner S., M\"ollenhoff C., 1988, A\&AS, 74, 385
\bibitem Burrows C.J. et al., 1995, Hubble Space Telescope Wide Field
and Planetary Camera 2 Instrument Handbook, Version 3.0.
STScI, Baltimore
\bibitem Burstein D., 1979, ApJ, 234, 435
\bibitem Burstein D., Faber S.M., Gaskell C.M., Krumm N., 1984, ApJ, 287, 586
\bibitem Carlberg R.C., 1984, ApJ, 286, 403
\bibitem Carollo C.M., Danziger I.J., Buson L., 1993, MNRAS, 265, 553
\bibitem Cretton N., van den Bosch F.C., 1997, in preparation
\bibitem de Vaucouleurs G., de Vaucouleurs A., Corwin H.C., 1976,
Second Reference Catalogue of Bright Galaxies. University of Texas,
Austin (RC2)
\bibitem Dressler A., 1984, ApJ, 286, 97
\bibitem Efstathiou G., Gorgas J., 1985, MNRAS, 215, 37p
\bibitem Emsellem E., Monnet G., Bacon R., 1994, A\&A, 285, 723
\bibitem Evans I.N., 1995, FOS Instrument Science Report CAL/FOS-140,
`Post-COSTAR FOS small aperture relative throughputs derived from
SMOV data'. STScI, Baltimore
\bibitem Faber S.M., 1977, in Tinsley B.T., Larson R.B., eds.,
The Evolution of Galaxies and Stellar Populations.
Yale University Press, New Haven, p.~157
\bibitem Faber S.M., Friel E., Burstein D., Gaskell C.M., 1985, ApJS,
57, 711
\bibitem Fisher D., Franx M., Illingworth G.D., 1996, ApJ, 459, 110
\bibitem Fisher D., 1997, AJ, 113, 950
\bibitem Forbes D.A., 1994, AJ, 107, 2017
\bibitem Gebhardt K. et al., 1996, AJ, 112, 105
\bibitem Gonz\'alez J.J., 1993, PhD Thesis. University of California,
Santa Cruz
\bibitem Holtzman J.A. et al., 1995a, PASP, 107, 156
\bibitem Holtzman J.A. et al., 1995b, PASP, 107, 1065
\bibitem Jacoby G., Ciardullo R., Ford H.C., 1990, ApJ, 356, 332
\bibitem Jaffe W., Ford H.C., Ferrarese L., van den Bosch F.C., O'Connell R.W.,
1994, AJ, 108, 1567
\bibitem Jedrzejewski R.I., 1987, MNRAS, 226, 747
\bibitem Keyes T. et al., 1995, Hubble Space Telescope Faint Object
Spectrograph Instrument Handbook, Version 6.0. STScI, Baltimore
\bibitem Knapp G.R., Guhathakurta P., Kim D.-W., Jura M., 1989, ApJS, 70, 329
\bibitem Kormendy J., 1988, ApJ, 335, 40
\bibitem Kormendy J., Richstone D.O., 1992, ApJ, 393, 559
\bibitem Kuijken K., Merrifield M.R., 1993, MNRAS, 264, 712
\bibitem Larson R.B., 1974, MNRAS, 166, 385
\bibitem Lauer T.R., 1985, MNRAS, 216, 429
\bibitem Lauer T.R. et al., 1995, AJ, 110, 2622
\bibitem Lucy L.B., 1974, AJ, 74, 745
\bibitem Matteucci F., Tornamb\`e A., 1987, A\&A, 185, 51
\bibitem Michard R., 1996, A\&AS, 117, 583
\bibitem Nieto J.-L., Bender R., Arnaud J., Surma P., 1991, A\&A, 244, L25
\bibitem Peletier R.F., 1989, PhD Thesis. University of Groningen
\bibitem Rix H.-W., White S.D.M., 1992, MNRAS, 254, 389
\bibitem Sargent W.L.W., Young P.J., Boksenberg A., Shortridge K., Lynds C.R.,
Hartwick F.D.A., 1978, ApJ, 221, 731
\bibitem Scorza C., Bender R., 1990, A\&A, 235, 49
\bibitem Scorza C., Bender R., 1995, A\&A, 293, 20
\bibitem Scorza C., van den Bosch F.C., 1997, in preparation
\bibitem Simien F., Michard R., Prugniel P., 1993, in
Danziger I.J., Zeilinger W.W., Kj\"ar K., eds.,
Structure, Dynamics and Chemical Evolution of Elliptical Galaxies.
ESO Conference and Workshop Proceedings, Garching bei M\"unchen,
p.~211
\bibitem Tonry J.L., Davis M., 1979, AJ, 84, 1511
\bibitem Vader J.P., Vigroux L., Lachi\`eze-Rey M., Souviron J., 1988,
A\&A, 203, 217
\bibitem van den Bosch F.C., Ferrarese L., Jaffe W., Ford H.C., O'Connell R.W.,
1994, AJ, 108, 1579
\bibitem van den Bosch F.C., de Zeeuw, P.T., 1996, MNRAS, 283, 381
\bibitem van den Bosch F.C., Jaffe W., 1997, in Arnaboldi M., Da Costa G.S.,
Saha P., eds., The Nature of Elliptical Galaxies, A.S.P. Conference
Series Volume 116, p. 142
\bibitem van den Bosch F.C., Emsellem E., 1997, MNRAS, submitted
\bibitem van der Marel R.P., 1991, MNRAS, 253, 710
\bibitem van der Marel R.P., 1994, MNRAS, 270, 271
\bibitem van der Marel R.P., Franx M., 1993, ApJ, 407, 525
\bibitem van der Marel R.P., Rix H.-W., Carter D., Franx M., White S.D.M.,
de Zeeuw P.T., 1994, MNRAS, 268, 521
\bibitem van der Marel R.P., de Zeeuw P.T., Rix H.-W., 1997, ApJ, in press
\bibitem Worthey G., 1994, ApJS, 95, 107
\bibitem Worthey G., Faber S.M., Gonz\'alez J.J., Burstein D., 1994, ApJS, 94,
687
\bibitem Wrobel J.M., 1991, AJ, 101, 127
\par\egroup\@doendpe
\@ifstar{\@ssection}{\@section}*{Appendix A: LSF corrections for the HST/FOS spectra}
\subsection{A.1 Basic equations}
For each galaxy there are seven HST/FOS galaxy spectra, $G_i$
($i=1,\ldots,7$). Each spectrum is the convolution of the stellar velocity
profile, ${\rm VP}_i$, the characteristic stellar spectrum of the
population of the galaxy, $S_G$, and the line-spread-function of the
observation, ${\rm LSF}_i$:
$$ G_i = {\rm VP}_i \otimes S_G \otimes {\rm LSF}_i . \eqno (A1) $$
We assume that each ${\rm VP}_i$ is a Gaussian with mean velocity
$V_i$ and velocity dispersion $\sigma_i$. The template spectrum, $T$,
is the convolution of the stellar spectral mix, $S_T$, and the
line-spread-function of the template star observations, ${\rm LSF}_T$:
$$ T = S_T \otimes {\rm LSF}_T . \eqno (A2) $$
The stellar kinematical analysis minimizes
\eqnam\chisq
$$ \chi^2 = \int [G_i - (T \otimes {\widetilde {\rm VP}}_i)]^2 , \eqno (A3) $$
to find the parameters ${\widetilde V}_i$ and ${\widetilde \sigma}_i$ of
the best-fitting Gaussian broadening function ${\widetilde {\rm
VP}}_i$. It is generally assumed that there is no template mismatch,
i.e., $S_T = S_G$.
If the LSFs of the galaxy and template spectra are identical, ${\rm
LSF}_i \equiv {\rm LSF}_T$, as is usually assumed if all observations
are obtained with the same instrument, then ${\widetilde V}_i$ and
${\widetilde \sigma}_i$ are unbiased estimates of the true mean velocity
$V_i$ and velocity dispersion $\sigma_i$. In our case the galaxy and
template spectra were not obtained with the same instrument, and hence
the LSFs cannot be assumed to be identical. Thus, ${\widetilde V}_i$ and
${\widetilde \sigma}_i$ must be corrected for the LSF differences
between the galaxy and template spectra, to obtain proper estimates
for $V_i$ and~$\sigma_i$.
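The fit in equation (A3) can be illustrated with a minimal numerical sketch. All spectra and values below are synthetic toy data, not the observations of this paper: we broaden an artificial absorption-line template by a Gaussian VP of known dispersion and recover it with a grid search over $\chi^2$ (for simplicity only the dispersion is fitted, not the mean velocity).

```python
import numpy as np

def gaussian_kernel(v, sigma):
    """Normalized Gaussian broadening kernel on a velocity grid (km/s)."""
    g = np.exp(-0.5 * (v / sigma) ** 2)
    return g / g.sum()

# velocity grid and a toy periodic absorption-line "template" spectrum (synthetic)
v = np.arange(-2000.0, 2000.0, 10.0)
template = 1.0 - 0.5 * np.exp(-0.5 * ((np.mod(v, 400.0) - 200.0) / 30.0) ** 2)

# "galaxy" spectrum: the template convolved with a Gaussian VP of known dispersion
true_sigma = 250.0
galaxy = np.convolve(template - 1.0, gaussian_kernel(v, true_sigma), mode="same") + 1.0

def chi2(sigma):
    """Equation (A3) for a trial Gaussian broadening function."""
    model = np.convolve(template - 1.0, gaussian_kernel(v, sigma), mode="same") + 1.0
    return np.sum((galaxy - model) ** 2)

# grid search over trial dispersions; the minimum recovers true_sigma
trial_sigmas = np.arange(100.0, 400.0, 5.0)
best_sigma = trial_sigmas[np.argmin([chi2(s) for s in trial_sigmas])]
```

In the real analysis the template is an observed stellar spectrum and the broadening function is fitted in both $V$ and $\sigma$, but the structure of the minimization is the same.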
\subsection{A.2 The HST line-spread-functions}
For HST/FOS observations one may assume (van der Marel 1997) that the
LSF for observation $i$, ${\rm LSF}_i$, is the convolution of the
normalized intensity distribution of the light that falls onto the
grating, $A_i$, and the instrumental line-broadening function due to the
grating and the detector resolution, $H$:
$$ {\rm LSF}_i = A_i \otimes H . \eqno (A4) $$
The illumination function $A_i$ is a function of the unconvolved,
projected light distribution of the galaxy, the aperture position, and
the kernel function that describes the HST/FOS PSF and the aperture
geometry. These are all known: the MGE fits in Table~3 describe the
unconvolved light distribution, the aperture positions $(x_{\rm
ap},y_{\rm ap})$ for all observations are listed in Table~5, and the
PSF+aperture kernel was derived in Section~4.2. The functions $A_i$
can therefore be calculated explicitly for all galaxy observations.
One may write for any pair of observations at positions $i$ and $j$
within the same galaxy, at least formally,
\eqnam\aplumconv
$$ A_j = z_{ji} \otimes A_i . \eqno (A5) $$
We found, as did van der Marel (1997), that the functions $z_{ji}$ can be
well approximated by Gaussians with mean velocity ${\widehat V}_{ji}$
and velocity dispersion ${\widehat \sigma}_{ji}$.
\subsection{A.3 Kinematical corrections}
The functions $H$ and ${\rm LSF}_T$ influence the kinematical analysis
for each galaxy spectrum in exactly the same way. Their shape and
properties therefore do not enter into the {\it differences} between
the stellar kinematics inferred from observations at different
positions $i$ and $j$ in the same galaxy. Since the convolution of two
Gaussians is again a Gaussian, it is straightforward to show that
\eqnam\fosvel
$$ V_j - V_i = {\widetilde V}_{j} - {\widetilde V}_{i} + {\widehat V}_{ji}
\eqno (A6) $$
and
\eqnam\fossig
$$ \sigma^2_j - \sigma^2_i =
{\widetilde \sigma}^2_j - {\widetilde \sigma}^2_i +
{\widehat \sigma}^2_{ji} . \eqno (A7) $$
So if the stellar kinematics for any galaxy spectrum $j$ are known
{\it a priori}, then the kinematics at any other position $i$ can be
calculated without knowledge of either of the functions $H$ and ${\rm
LSF}_T$.
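Equations (A6) and (A7) can be transcribed directly; the sketch below does so for a single pair of positions. All numerical values here are hypothetical placeholders for illustration, not measurements from this paper.

```python
import math

def kinematics_at_i(V_j, sigma_j, Vt_i, Vt_j, Vhat_ji, sigt_i, sigt_j, sighat_ji):
    """Recover the true kinematics at position i from the a priori kinematics
    (V_j, sigma_j) at reference position j, the raw Gaussian-fit (tilde)
    values, and the Gaussian (hat) parameters relating the two illumination
    functions, using equations (A6) and (A7)."""
    V_i = V_j - (Vt_j - Vt_i + Vhat_ji)                                       # from (A6)
    sigma_i = math.sqrt(sigma_j**2 - (sigt_j**2 - sigt_i**2 + sighat_ji**2))  # from (A7)
    return V_i, sigma_i

# hypothetical input values, chosen only to exercise the formulas
V_i, sigma_i = kinematics_at_i(V_j=0.0, sigma_j=237.5,
                               Vt_i=10.0, Vt_j=5.0, Vhat_ji=2.0,
                               sigt_i=300.0, sigt_j=240.0, sighat_ji=50.0)
```

Note that neither $H$ nor ${\rm LSF}_T$ appears anywhere in the computation, as stated in the text.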
\subsection{A.4 Reference kinematics}
Position \#5 for each galaxy is located at $\mathord{\sim} 0.5''$ from the
galaxy center (see Table~5). This position is beyond the region
most affected by seeing in ground-based observations, and can
therefore be used as a `reference position' for use with
equations~(A6) and~(A7). One possibility is to assume that the
velocity dispersions for spectra~A5 and~B5 are known a priori, from
the results of the ground-based WHT observations. This yields
$\sigma_5 = 260.7 {\rm\,km\,s^{-1}}$ for NGC 4342, and $\sigma_5 = 173.1 {\rm\,km\,s^{-1}}$ for
NGC 4570. More accurate estimates for $\sigma_5$ can be obtained by
modeling the small residual effect of seeing on the ground-based
observations. For this purpose we calculated predicted kinematical
quantities from $f(E,L_z)$ Jeans equation models for the ground-based
kinematics, based on the MGE-fitted density distributions (see Cretton
\& van den Bosch 1997). We then calculated predictions for
$\sigma_5$, by convolving the unconvolved model kinematics with the
$0.26''$ aperture kernel at the appropriate positions. This yielded
$\sigma_5 = 237.5 {\rm\,km\,s^{-1}}$ and $\sigma_5 = 170.8 {\rm\,km\,s^{-1}}$ for NGC 4342 and
NGC 4570 respectively. Comparison with the directly observed values
confirms that the effect of seeing convolution at a galactocentric
distance of $0.5''$ is only modest.
The LSF corrected stellar kinematics listed in Table~5 were
obtained using equations~(A6) and~(A7), with the seeing corrected
$\sigma_5$ as reference value, and with ${\widehat V}_{i5}$ and
${\widehat \sigma}_{i5}$ chosen so as to best fit equation~(A5). The
systemic velocity was independently estimated and subtracted from the
velocities as described in Section~5.2. Hence, any arbitrary value may
be used for the reference velocity $V_5$, because it only enters into
equation~(A6) as an additive constant.
\begintable{7}
\caption{{\bf Table~A1.} HST/FOS stellar kinematics: a consistency check}
\halign{#\hfil&\quad \hfil#\hfil\quad& \hfil#\hfil\quad&
\hfil#\hfil\quad& \hfil#\hfil\quad& \hfil#\hfil\quad& \hfil#\hfil\quad \cr
\multispan3\quad\hfil NGC~4342 \hfil & &
\multispan3\quad\hfil NGC~4570 \hfil \cr
id. & $\delta_V$ & $\delta_{\sigma}$ & &
id. & $\delta_V$ & $\delta_{\sigma}$ \cr
A1 & $-0.73$ & $-0.41$ & & B1 & $-1.29$ & $+0.52$ \cr
A2 & $+0.41$ & $-0.39$ & & B2 & $-0.14$ & $-0.38$ \cr
A3 & $-0.55$ & $+0.93$ & & B3 & $+0.06$ & $+0.59$ \cr
A4 & $-0.32$ & $+0.02$ & & B4 & $-2.01$ & $+0.26$ \cr
A5 & $+0.68$ & $+0.76$ & & B5 & $-2.85$ & $-1.24$ \cr
A6 & $-0.26$ & $-0.50$ & & B6 & $-0.54$ & $-0.28$ \cr
A7 & $-0.32$ & $+0.33$ & & B7 & $-1.51$ & $-1.06$ \cr
}
\tabletext{The quantities $\delta_V$ and $\delta_{\sigma}$ are the
differences between the kinematical results obtained from the HST/FOS
galaxy spectra with: (a) a ground-based template that provides a good
match to the galaxy spectra; and (b) an HST template that has
significant template mismatch. The differences are expressed in units
of the formal error bars.}
\endtable
\subsection{A.5 Consistency check}
As a consistency check on the {\it assumed} values of $\sigma_5$, we
also tried to {\it calculate} $\sigma_5$ from the actual FOS
observations. This can be done under the assumption that the LSFs of
both the galaxy and template spectrum are Gaussian, with dispersions
$\sigma_G$ and $\sigma_T$, respectively. This is not a very accurate
approximation, but provides a useful consistency check. It yields:
\eqnam\testsigma
$$ \sigma_5^2 = {\widetilde \sigma}_5^2 + \sigma_T^2 - \sigma_G^2
. \eqno (A8)$$
The dispersion of the template spectrum is $\sigma_T = 71 {\rm\,km\,s^{-1}}$ (at
5170{\AA}), whereas $\sigma_G = 100 \pm 2 {\rm\,km\,s^{-1}}$ (Keyes et al.~1995).
The dispersions ${\widetilde \sigma}_5$ measured directly from the
$\chi^2$ minimization~(A3) are $240.6 {\rm\,km\,s^{-1}}$ and $171.5 {\rm\,km\,s^{-1}}$, for
NGC~4342 and NGC~4570, respectively. This yields with equation~(A8)
that $\sigma_5 = 230.1{\rm\,km\,s^{-1}}$ for NGC~4342, and $\sigma_5 = 156.4 {\rm\,km\,s^{-1}}$
for NGC~4570, in good agreement (within the error bars of the FOS
data) with the values derived from the Jeans modeling. The reference
values assumed on the basis of modeling of the ground-based data are
therefore confirmed by the FOS data.
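As a quick check of the arithmetic in equation (A8), using the values quoted above:

```python
import math

def sigma5_from_a8(sigma_tilde5, sigma_T=71.0, sigma_G=100.0):
    """Equation (A8): correct the directly measured dispersion for the
    difference between the (assumed Gaussian) template and galaxy LSFs.
    Default dispersions (in km/s) are those quoted in the text."""
    return math.sqrt(sigma_tilde5**2 + sigma_T**2 - sigma_G**2)

# measured dispersions from the chi^2 minimization (A3), as quoted above
n4342 = sigma5_from_a8(240.6)  # NGC 4342
n4570 = sigma5_from_a8(171.5)  # NGC 4570
```

Rounding `n4342` and `n4570` to one decimal reproduces the $230.1{\rm\,km\,s^{-1}}$ and $156.4{\rm\,km\,s^{-1}}$ quoted in the text.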
\subsection{A.6 Analysis with an HST template}
As yet another test, we also analyzed the FOS spectra with an actual
FOS template spectrum: the KIII star F193, observed with the same
instrumental setup as our galaxy spectra. There is definite template
mismatch between this stellar spectrum and our galaxy spectra, but
the results should nonetheless be roughly consistent with those
derived using the more appropriate ground-based template.
In this case, the instrumental broadening function $H$ (equation~[A4])
is the same for the galaxy and template spectra. However, the template
has a different illumination function $A_T$ than the galaxy
spectra. We calculated $A_T$ assuming that the star was properly
centred in the $0.26''$ aperture. In analogy with equation~(A5), we
subsequently assumed that each $A_i$ can be approximated as a
convolution of $A_T$ with a Gaussian. For each galaxy spectrum we
found the best Gaussian, and convolved it with the template spectrum.
This results in a different template spectrum for each galaxy
spectrum, that has the same LSF as the galaxy spectrum. We refer to
the stellar kinematics determined with these templates as $V_i^{*}$
and $\sigma_i^{*}$. In Table~A1 we list the relative differences
$\delta_V \equiv (V_i^{*} - V_i)/\Delta V_i$ and $\delta_{\sigma}
\equiv (\sigma_i^{*} - \sigma_i)/\Delta \sigma_i$, between these results
and those listed in Table~5. The agreement is satisfactory: most
residuals are smaller than the error bars listed in Table 5 (i.e.,
$\delta < 1$). The results in Table~5 are the more accurate ones,
because they suffer less from template mismatch.
The results obtained with an HST template are consistent with those
derived using a ground-based template. This confirms that there are no
large systematic errors in our analysis.
\@ifstar{\@ssection}{\@section}*{Appendix B: WHT kinematics}
The tables in this Appendix list the stellar kinematics inferred
from the WHT spectra. The quantities $V$ and $\sigma$ (in ${\rm\,km\,s^{-1}}$) are
the mean and dispersion of best-fitting Gaussian VPs, and $h_3$ and
$h_4$ are the lowest-order Gauss-Hermite moments of the VPs. The
kinematics inferred from the HST/FOS spectra are listed in
Table~5.
\begintable{8}
\caption{{\bf Table~B1.} NGC~4342 major axis}
\halign{\hfil#&\quad \hfil#\quad& \hfil#\quad& \hfil#\quad& \hfil#\quad&
\hfil#\quad& \hfil#\quad& \hfil#\quad& \hfil# \cr
R & $V$ & $\Delta V$ & $\sigma$ & $\Delta\sigma$ &
$h_3$ & $\Delta h_3$ & $h_4$ & $\Delta h_4$ \cr
-14.235&-238.5& 9.4& 94.6& 12.1&-0.111& 0.088& 0.041& 0.070\cr
-11.218&-223.6& 6.3& 78.6& 7.7&-0.040& 0.068&-0.038& 0.056\cr
-9.463&-233.1& 7.0& 95.8& 9.6& 0.042& 0.068& 0.037& 0.054\cr
-8.222&-231.1& 8.8& 118.0& 11.6&-0.052& 0.070& 0.035& 0.055\cr
-7.147&-216.7& 6.5& 100.2& 9.1&-0.003& 0.061& 0.051& 0.048\cr
-6.259&-227.8& 8.4& 109.8& 11.0& 0.093& 0.072& 0.003& 0.057\cr
-5.542&-212.2& 8.3& 117.9& 10.6&-0.062& 0.066& 0.007& 0.052\cr
-4.826&-204.2& 8.3& 144.1& 11.9& 0.114& 0.056& 0.042& 0.044\cr
-4.109&-190.2& 7.3& 129.4& 8.8& 0.078& 0.050&-0.016& 0.040\cr
-3.580&-155.8& 9.3& 126.5& 11.9& 0.059& 0.065& 0.029& 0.053\cr
-3.222&-145.9& 10.3& 152.3& 13.8& 0.055& 0.061& 0.038& 0.050\cr
-2.864&-152.3& 9.5& 134.9& 12.7&-0.011& 0.063& 0.046& 0.052\cr
-2.506&-141.1& 9.5& 165.3& 11.1&-0.043& 0.049&-0.025& 0.039\cr
-2.148&-137.6& 10.0& 180.0& 13.4&-0.049& 0.050& 0.024& 0.040\cr
-1.790&-123.6& 9.4& 183.8& 12.8&-0.057& 0.046& 0.031& 0.036\cr
-1.432&-114.9& 10.1& 220.5& 12.8& 0.001& 0.038& 0.010& 0.031\cr
-1.074&-135.0& 6.9& 196.3& 9.2&-0.008& 0.031& 0.025& 0.025\cr
-0.716&-124.6& 7.4& 243.5& 9.8& 0.071& 0.026&-0.011& 0.021\cr
-0.358& -71.2& 7.7& 292.9& 10.6& 0.046& 0.022&-0.012& 0.018\cr
0.000& 0.0& 7.3& 317.1& 10.6&-0.026& 0.021&-0.015& 0.016\cr
0.358& 70.5& 7.0& 276.8& 9.5&-0.050& 0.021&-0.007& 0.017\cr
0.716& 114.2& 7.1& 215.3& 9.5&-0.011& 0.027& 0.006& 0.022\cr
1.074& 134.1& 7.3& 197.1& 9.3& 0.002& 0.030&-0.016& 0.025\cr
1.432& 132.8& 8.0& 175.1& 10.9& 0.045& 0.038& 0.017& 0.032\cr
1.790& 120.4& 8.8& 160.5& 11.4& 0.041& 0.046& 0.002& 0.038\cr
2.148& 124.2& 8.7& 152.2& 11.1&-0.042& 0.047& 0.006& 0.040\cr
2.506& 154.8& 8.9& 147.9& 11.1& 0.032& 0.050&-0.020& 0.042\cr
2.864& 153.3& 10.1& 147.0& 12.2&-0.062& 0.056&-0.012& 0.047\cr
3.222& 158.6& 9.3& 139.1& 11.2&-0.039& 0.055&-0.016& 0.045\cr
3.580& 158.3& 10.1& 135.9& 12.5&-0.008& 0.063&-0.014& 0.052\cr
4.109& 178.7& 6.8& 121.0& 9.1&-0.094& 0.050& 0.020& 0.041\cr
4.826& 189.7& 6.6& 101.2& 9.2&-0.123& 0.059& 0.040& 0.047\cr
5.542& 197.2& 7.6& 108.1& 9.3&-0.041& 0.060&-0.024& 0.048\cr
6.429& 197.8& 7.1& 113.1& 8.1&-0.045& 0.052&-0.055& 0.042\cr
7.505& 203.9& 6.8& 90.0& 8.0&-0.130& 0.063&-0.053& 0.051\cr
8.749& 207.4& 6.3& 90.4& 7.9&-0.043& 0.061&-0.034& 0.049\cr
10.344& 219.8& 6.5& 81.4& 7.9&-0.112& 0.066&-0.024& 0.055\cr
12.614& 214.5& 7.1& 90.0& 8.8& 0.010& 0.067&-0.028& 0.055\cr
}
\endtable
\begintable{9}
\caption{{\bf Table~B2.} NGC~4342 minor axis}
\halign{\hfil#&\quad \hfil#\quad& \hfil#\quad& \hfil#\quad& \hfil#\quad&
\hfil#\quad& \hfil#\quad& \hfil#\quad& \hfil# \cr
R & $V$ & $\Delta V$ & $\sigma$ & $\Delta\sigma$ &
$h_3$ & $\Delta h_3$ & $h_4$ & $\Delta h_4$ \cr
-2.661& 15.2& 9.9& 107.4& 13.3& 0.138& 0.079& 0.044& 0.064\cr
-2.148& -3.7& 15.1& 190.4& 25.5&-0.092& 0.068& 0.133& 0.056\cr
-1.790& -6.2& 12.3& 200.4& 16.4&-0.035& 0.052& 0.019& 0.043\cr
-1.432& 16.8& 9.1& 183.8& 13.0&-0.027& 0.044& 0.028& 0.036\cr
-1.074& 5.0& 9.3& 219.5& 10.9& 0.036& 0.033&-0.031& 0.028\cr
-0.716& -5.8& 7.1& 240.9& 9.6& 0.017& 0.027&-0.034& 0.022\cr
-0.358& -0.8& 6.9& 299.0& 10.1& 0.048& 0.021&-0.008& 0.016\cr
0.000& 0.0& 6.2& 293.3& 9.5& 0.056& 0.020&-0.012& 0.016\cr
0.358& -3.7& 6.6& 265.5& 9.5& 0.054& 0.022& 0.009& 0.018\cr
0.716& -10.7& 6.9& 239.4& 10.1& 0.047& 0.025& 0.028& 0.021\cr
1.074& -1.7& 7.1& 216.7& 9.9& 0.013& 0.029& 0.007& 0.024\cr
1.432& 6.4& 8.6& 191.2& 11.0&-0.048& 0.037& 0.006& 0.031\cr
1.790& 9.0& 9.1& 156.1& 12.0&-0.051& 0.049& 0.025& 0.041\cr
2.148& 20.7& 9.8& 138.5& 12.3&-0.059& 0.060&-0.013& 0.050\cr
2.658& 37.9& 8.4& 114.3& 10.8&-0.007& 0.065&-0.011& 0.052\cr
}
\endtable
\begintable{10}
\caption{{\bf Table~B3.} NGC~4570 major axis}
\halign{\hfil#&\quad \hfil#\quad& \hfil#\quad& \hfil#\quad& \hfil#\quad&
\hfil#\quad& \hfil#\quad& \hfil#\quad& \hfil# \cr
R & $V$ & $\Delta V$ & $\sigma$ & $\Delta\sigma$ &
$h_3$ & $\Delta h_3$ & $h_4$ & $\Delta h_4$ \cr
-33.516&-161.3& 4.4& 71.4& 6.2& 0.051& 0.051& 0.037& 0.044\cr
-26.363&-157.4& 4.7& 86.6& 6.6& 0.061& 0.048& 0.001& 0.039\cr
-21.951&-165.5& 4.3& 84.3& 6.6&-0.065& 0.048&-0.003& 0.039\cr
-18.749&-158.6& 4.5& 88.2& 5.9& 0.093& 0.043&-0.024& 0.034\cr
-16.259&-148.2& 5.0& 99.2& 6.7& 0.133& 0.044&-0.016& 0.035\cr
-14.308&-127.7& 6.1& 113.8& 7.9& 0.025& 0.045&-0.006& 0.036\cr
-12.518&-118.5& 5.5& 108.1& 6.6&-0.012& 0.042&-0.045& 0.033\cr
-10.907&-114.0& 5.1& 95.1& 7.1& 0.029& 0.047& 0.003& 0.038\cr
-9.472& -96.3& 5.4& 118.7& 7.2& 0.014& 0.039&-0.016& 0.032\cr
-8.222& -83.5& 5.5& 113.9& 7.1&-0.066& 0.041&-0.025& 0.034\cr
-7.144& -91.2& 5.3& 118.2& 8.2& 0.029& 0.041& 0.037& 0.034\cr
-6.258& -90.6& 6.1& 112.9& 8.2&-0.030& 0.047&-0.017& 0.038\cr
-5.542& -96.7& 5.7& 131.7& 7.2& 0.081& 0.036&-0.036& 0.030\cr
-4.825& -95.4& 5.2& 130.8& 6.5& 0.014& 0.033&-0.038& 0.028\cr
-4.296& -81.6& 7.2& 142.8& 9.4&-0.028& 0.042&-0.036& 0.035\cr
-3.938& -85.8& 7.0& 149.1& 9.4& 0.010& 0.038&-0.012& 0.032\cr
-3.580& -88.5& 6.7& 149.1& 9.0& 0.005& 0.037&-0.019& 0.031\cr
-3.222& -86.3& 6.5& 141.2& 9.0&-0.018& 0.039& 0.009& 0.033\cr
-2.864& -74.9& 6.4& 147.5& 8.4&-0.090& 0.036&-0.033& 0.030\cr
-2.506& -75.1& 5.8& 157.1& 7.4&-0.032& 0.030&-0.041& 0.025\cr
-2.148& -66.1& 5.5& 158.6& 7.4& 0.017& 0.028&-0.016& 0.024\cr
-1.790& -64.6& 5.1& 161.8& 7.0& 0.018& 0.026&-0.014& 0.022\cr
-1.432& -59.2& 4.6& 167.8& 6.7& 0.018& 0.023&-0.007& 0.019\cr
-1.074& -43.2& 4.0& 161.9& 5.6&-0.003& 0.021&-0.006& 0.017\cr
-0.716& -33.6& 4.1& 178.4& 5.8& 0.003& 0.019&-0.011& 0.015\cr
-0.358& -15.7& 4.0& 193.0& 5.9&-0.001& 0.017&-0.012& 0.014\cr
0.000& 0.0& 3.9& 197.8& 5.8&-0.019& 0.016&-0.018& 0.013\cr
0.358& 8.6& 4.0& 197.6& 5.8&-0.018& 0.017&-0.030& 0.014\cr
0.716& 32.3& 4.1& 189.3& 6.0&-0.039& 0.018&-0.018& 0.015\cr
1.074& 50.9& 4.5& 186.9& 6.2&-0.047& 0.020&-0.020& 0.016\cr
1.432& 62.6& 4.8& 180.3& 6.9&-0.069& 0.022&-0.006& 0.018\cr
1.790& 63.8& 5.2& 171.8& 7.2&-0.014& 0.025&-0.002& 0.020\cr
2.148& 68.1& 5.6& 164.2& 8.0&-0.045& 0.029&-0.007& 0.024\cr
2.506& 77.8& 6.2& 159.6& 8.4&-0.061& 0.032&-0.015& 0.026\cr
2.864& 90.7& 5.8& 152.1& 7.8&-0.039& 0.032&-0.012& 0.027\cr
3.222& 100.0& 6.5& 147.4& 8.3&-0.049& 0.036&-0.012& 0.030\cr
3.580& 91.3& 7.5& 164.0& 10.2&-0.024& 0.037&-0.009& 0.031\cr
3.938& 101.5& 7.8& 161.2& 10.3& 0.024& 0.039&-0.018& 0.033\cr
4.296& 101.4& 7.7& 153.6& 9.7& 0.006& 0.040&-0.041& 0.034\cr
4.825& 100.0& 5.6& 154.5& 7.1& 0.019& 0.029&-0.039& 0.024\cr
5.541& 82.6& 6.0& 135.8& 7.9& 0.040& 0.037&-0.010& 0.031\cr
6.258& 88.4& 6.2& 133.5& 8.1&-0.037& 0.038&-0.005& 0.032\cr
7.145& 97.4& 5.2& 115.9& 6.7&-0.013& 0.039&-0.024& 0.032\cr
8.223& 85.1& 6.0& 117.6& 8.8&-0.019& 0.046& 0.027& 0.037\cr
9.470& 80.6& 5.4& 122.7& 7.4&-0.005& 0.038&-0.002& 0.031\cr
10.909& 104.2& 5.3& 95.0& 7.3& 0.002& 0.048& 0.017& 0.038\cr
12.517& 119.8& 4.5& 98.7& 5.2& 0.000& 0.037&-0.069& 0.029\cr
14.303& 128.3& 4.9& 102.3& 5.7&-0.057& 0.039&-0.046& 0.031\cr
16.266& 152.4& 4.9& 86.5& 5.5&-0.072& 0.046&-0.091& 0.037\cr
18.755& 148.4& 5.2& 90.6& 6.2&-0.167& 0.047&-0.045& 0.038\cr
21.948& 152.1& 5.2& 88.1& 5.7&-0.050& 0.046&-0.099& 0.037\cr
26.375& 176.4& 4.3& 83.1& 7.1&-0.095& 0.046& 0.064& 0.038\cr
33.529& 161.8& 4.6& 83.4& 5.5& 0.034& 0.045&-0.078& 0.037\cr
}
\endtable
\begintable{11}
\caption{{\bf Table~B4.} NGC~4570 offset axis}
\halign{\hfil#&\quad \hfil#\quad& \hfil#\quad& \hfil#\quad& \hfil#\quad&
\hfil#\quad& \hfil#\quad& \hfil#\quad& \hfil# \cr
R & $V$ & $\Delta V$ & $\sigma$ & $\Delta\sigma$ &
$h_3$ & $\Delta h_3$ & $h_4$ & $\Delta h_4$ \cr
-32.931&-182.6& 4.6& 68.5& 5.5&-0.046& 0.046& 0.022& 0.040\cr
-25.097&-155.7& 5.0& 96.9& 6.9& 0.051& 0.043& 0.034& 0.035\cr
-20.515&-147.1& 4.3& 90.5& 5.8& 0.142& 0.041& 0.002& 0.033\cr
-17.318&-145.0& 4.3& 99.9& 6.5&-0.054& 0.040& 0.027& 0.032\cr
-14.835&-128.6& 4.6& 107.3& 6.1&-0.006& 0.039&-0.025& 0.031\cr
-12.874&-121.6& 5.2& 111.3& 6.0&-0.051& 0.039&-0.060& 0.031\cr
-11.080& -92.7& 5.1& 106.5& 6.0&-0.162& 0.039&-0.038& 0.031\cr
-9.471& -92.7& 5.0& 120.9& 6.8&-0.055& 0.038&-0.020& 0.031\cr
-8.222& -71.1& 5.7& 107.2& 6.8&-0.104& 0.045&-0.037& 0.036\cr
-7.146& -73.8& 5.1& 102.8& 6.8&-0.072& 0.043&-0.006& 0.034\cr
-6.259& -90.8& 6.6& 127.4& 7.6&-0.012& 0.041&-0.050& 0.034\cr
-5.542& -85.9& 5.9& 138.3& 7.6& 0.030& 0.036&-0.051& 0.030\cr
-4.825& -76.3& 5.3& 120.6& 6.8& 0.023& 0.037&-0.013& 0.030\cr
-4.109& -78.4& 4.8& 132.3& 6.4& 0.000& 0.031&-0.010& 0.026\cr
-3.580& -71.8& 6.8& 135.6& 8.9& 0.063& 0.042&-0.017& 0.035\cr
-3.222& -60.1& 7.0& 147.2& 8.4& 0.007& 0.038&-0.054& 0.032\cr
-2.864& -62.4& 6.6& 147.3& 8.4&-0.022& 0.037&-0.036& 0.031\cr
-2.506& -60.5& 6.5& 161.9& 8.9&-0.023& 0.033& 0.000& 0.027\cr
-2.148& -58.5& 6.5& 163.3& 9.1&-0.038& 0.033&-0.007& 0.027\cr
-1.790& -47.3& 5.7& 149.8& 8.0&-0.026& 0.033&-0.012& 0.028\cr
-1.432& -40.1& 5.8& 159.6& 8.3&-0.025& 0.031&-0.003& 0.026\cr
-1.074& -30.7& 5.5& 165.8& 7.8& 0.013& 0.028&-0.006& 0.023\cr
-0.716& -23.3& 5.4& 175.2& 7.5& 0.002& 0.025&-0.012& 0.021\cr
-0.358& -5.7& 5.4& 176.9& 7.9&-0.015& 0.026& 0.000& 0.021\cr
0.000& 0.0& 5.4& 172.2& 8.1&-0.033& 0.026& 0.019& 0.022\cr
0.358& 16.0& 5.0& 165.9& 7.6&-0.047& 0.026& 0.013& 0.022\cr
0.716& 26.2& 5.4& 175.7& 7.9&-0.048& 0.026& 0.001& 0.021\cr
1.074& 34.9& 5.6& 174.3& 8.2&-0.028& 0.027& 0.001& 0.022\cr
1.432& 45.9& 5.9& 173.5& 8.3&-0.039& 0.028&-0.005& 0.023\cr
1.790& 62.2& 6.0& 163.4& 8.3&-0.048& 0.030&-0.003& 0.025\cr
2.148& 70.7& 6.0& 154.0& 8.4&-0.028& 0.033&-0.005& 0.028\cr
2.506& 72.6& 6.0& 145.8& 7.9&-0.029& 0.034&-0.022& 0.029\cr
2.864& 78.4& 6.6& 148.3& 8.5& 0.001& 0.037&-0.036& 0.031\cr
3.222& 76.1& 6.5& 141.4& 8.7& 0.012& 0.039&-0.015& 0.033\cr
3.580& 81.7& 6.9& 143.7& 9.1&-0.019& 0.040&-0.013& 0.033\cr
4.109& 89.0& 5.1& 146.6& 7.4&-0.068& 0.031& 0.013& 0.026\cr
4.826& 94.5& 5.5& 136.4& 7.1&-0.055& 0.034&-0.013& 0.028\cr
5.542& 92.6& 5.6& 127.1& 7.3& 0.031& 0.038&-0.013& 0.031\cr
6.259& 94.5& 6.5& 129.3& 8.7&-0.018& 0.042& 0.010& 0.035\cr
7.146& 99.7& 5.6& 126.9& 7.4&-0.074& 0.037& 0.003& 0.031\cr
8.223& 108.9& 6.2& 121.3& 7.9&-0.005& 0.045&-0.040& 0.037\cr
9.472& 97.2& 5.0& 96.2& 7.2& 0.044& 0.046& 0.030& 0.037\cr
11.080& 126.1& 5.6& 106.1& 6.7& 0.091& 0.045&-0.055& 0.036\cr
12.870& 137.5& 5.3& 99.6& 6.9& 0.022& 0.046&-0.022& 0.037\cr
14.841& 144.7& 5.1& 115.5& 6.1&-0.001& 0.038&-0.067& 0.030\cr
17.316& 158.0& 4.7& 100.6& 5.7&-0.061& 0.038&-0.031& 0.031\cr
20.516& 176.3& 4.2& 87.7& 5.8&-0.091& 0.041&-0.008& 0.033\cr
25.086& 176.7& 3.4& 73.9& 5.1&-0.136& 0.043&-0.020& 0.036\cr
32.870& 179.2& 3.6& 70.0& 4.7&-0.082& 0.043&-0.035& 0.037\cr
}
\endtable
\begintable{12}
\caption{{\bf Table~B5.} NGC~4570 minor axis}
\halign{\hfil#&\quad \hfil#\quad& \hfil#\quad& \hfil#\quad& \hfil#\quad&
\hfil#\quad& \hfil#\quad& \hfil#\quad& \hfil# \cr
R & $V$ & $\Delta V$ & $\sigma$ & $\Delta\sigma$ &
$h_3$ & $\Delta h_3$ & $h_4$ & $\Delta h_4$ \cr
-5.430& 8.4& 6.5& 125.1& 8.0&-0.010& 0.043&-0.031& 0.035\cr
-3.903& 18.4& 6.7& 121.5& 8.9& 0.025& 0.048&-0.010& 0.039\cr
-3.027& 10.5& 7.1& 138.6& 9.4&-0.066& 0.043&-0.020& 0.036\cr
-2.308& -2.1& 6.5& 156.6& 9.1&-0.022& 0.034& 0.007& 0.029\cr
-1.790& -1.8& 7.8& 170.5& 11.1&-0.043& 0.037&-0.002& 0.031\cr
-1.432& -1.3& 7.3& 185.9& 10.9& 0.003& 0.032& 0.006& 0.027\cr
-1.074& -10.7& 6.6& 201.8& 10.5& 0.006& 0.027& 0.015& 0.022\cr
-0.716& -5.5& 5.8& 189.5& 8.8& 0.018& 0.025& 0.003& 0.021\cr
-0.358& -4.7& 5.6& 184.3& 8.3&-0.016& 0.025&-0.004& 0.021\cr
0.000& 0.0& 5.7& 184.0& 8.4&-0.039& 0.025&-0.004& 0.021\cr
0.358& 3.1& 5.8& 190.6& 8.7& 0.003& 0.025&-0.005& 0.021\cr
0.716& -0.2& 5.8& 187.0& 8.1& 0.032& 0.025&-0.020& 0.021\cr
1.074& -3.5& 6.3& 176.6& 9.1& 0.019& 0.030&-0.003& 0.025\cr
1.432& -1.8& 7.1& 183.6& 11.7& 0.001& 0.033& 0.029& 0.027\cr
1.790& 3.5& 8.1& 178.4& 14.0&-0.041& 0.039& 0.043& 0.032\cr
2.148& 13.6& 8.0& 136.0& 11.1& 0.006& 0.053&-0.017& 0.044\cr
2.668& 12.0& 6.5& 149.0& 9.6& 0.004& 0.037& 0.014& 0.032\cr
3.543& 18.5& 6.5& 138.3& 9.2&-0.066& 0.040& 0.009& 0.034\cr
4.922& 22.2& 6.2& 116.2& 8.6&-0.062& 0.048&-0.003& 0.039\cr
8.059& 9.6& 6.6& 106.5& 9.7&-0.019& 0.054& 0.043& 0.043\cr
}
\endtable
\@notice\par\vfill\supereject\end
\subsection{Experimental Setup}
\paragraph{Data}
We evaluate the efficacy of our approach on standard, large-scale benchmarks and on low-resource scenarios, where the Transformer was shown to induce poorer syntax.
Following~\newcite{D17-1209}, we use News Commentary v11 (NC11) with En-De and De-En tasks to simulate low resources and test multiple source languages.
To compare with previous work, we train our models on WMT16 En-De and WAT En-Ja tasks, removing sentences in incorrect languages from WMT16 data sets.
For a thorough comparison with concurrent work, we also evaluate on the large-scale WMT17 En-De and low-resource WMT18 En-Tr tasks.
We rely on Stanford CoreNLP~\cite{manning-EtAl:2014:P14-5} to parse source sentences.\footnote{\label{ft:app}For a detailed description, see Appendix~\ref{sec:full_exp}.}
\paragraph{Training}
We implement our models in PyTorch on top of the Fairseq toolkit.\footnote{\url{https://github.com/e-bug/pascal}.}
Hyperparameters, including the number of \textsc{Pascal}\xspace heads, that achieved the highest validation \textsc{Bleu}\xspace~\cite{Papineni:2002:BMA:1073083.1073135} score were selected via a small grid search.
We report previous results in syntax-aware NMT for completeness, and train a Transformer model as a strong, standard baseline. We also investigate the following syntax-aware Transformer approaches:\textsuperscript{\ref{ft:app}}
\begin{itemize}[noitemsep,topsep=1pt]
\item \textbf{+\textsc{Pascal}\xspace:} The model presented in \cref{sec:model}.
The variance of the normal distribution was set to $1$ (i.e., an effective window size of $3$) as $99.99\%$ of the source words in our training sets are split into at most $7$ sub-word units.\\[-12pt]
\item \textbf{+\textsc{LISA}\xspace:} We adapt LISA~\cite{D18-1548} to NMT and sub-word units by defining the parent of a given token as its first sub-word (which represents the root of the parent word).\\[-12pt]
\item \textbf{+\textsc{Multi-Task}\xspace:} Our implementation of the multi-task approach by~\newcite{currey-heafield-2019-incorporating} where a standard Transformer learns to both parse and translate source sentences.\\[-12pt]
\item \textbf{+\textsc{S$\text{\&}$H}\xspace:} Following~\newcite{sennrich2016linguistic}, we introduce syntactic information in the form of dependency labels in the embedding matrix of the Transformer encoder.
\end{itemize}
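The Gaussian parent weighting used by the \textsc{Pascal}\xspace heads can be sketched as follows. This is a simplified, hypothetical re-implementation for illustration (function names and shapes are our own, not from the released code):

```python
import numpy as np

def pascal_head_weights(parent_pos, seq_len, variance=1.0):
    """Sketch of a syntactically-biased attention head: each source token
    attends with a normal distribution centred on the position of its
    dependency parent's first sub-word. With variance 1 the weight is
    effectively spread over a window of ~3 sub-words around the parent."""
    positions = np.arange(seq_len, dtype=float)
    parents = np.asarray(parent_pos, dtype=float)[:, None]
    # unnormalised Gaussian scores, one row per source token
    scores = np.exp(-0.5 * (positions[None, :] - parents) ** 2 / variance)
    return scores / scores.sum(axis=1, keepdims=True)  # rows sum to 1

# toy sentence of 5 sub-words; tokens 0-2 depend on position 1, tokens 3-4 on position 2
W = pascal_head_weights([1, 1, 1, 2, 2], seq_len=5)
```

Setting the variance close to zero recovers the ``parent-only'' behaviour discussed in the ablations, where essentially all mass falls on the parent token itself.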
\subsection{Results}
Table~\ref{tab:results} presents the main results of our experiments.
Clearly, the base Transformer outperforms previous syntax-aware RNN-based approaches, proving it to be a strong baseline in our experiments.
The table shows that the simple approach of~\newcite{sennrich2016linguistic} does not lead to notable advantages when applied to the embeddings of the Transformer model.
We also see that the multi-task approach benefits from better parameterization, but it only attains comparable performance with the baseline on most tasks.
On the other hand, \textsc{LISA}\xspace, which embeds syntax in a self-attention head, leads to modest but consistent gains across all tasks, proving that it is also useful for NMT.
Finally, \textsc{Pascal}\xspace outperforms all other methods, with consistent gains over the Transformer baseline independently of the source language and corpus size: It gains up to $+0.9$ \textsc{Bleu}\xspace points on most tasks and a substantial $+1.75$ in \textsc{Ribes}\xspace~\cite{isozaki2010ribes}, a metric with stronger correlation with human judgments than \textsc{Bleu}\xspace in En$\leftrightarrow$Ja translations.
On WMT17, our slim model compares favorably to other methods, achieving the highest \textsc{Bleu}\xspace score across all source-side syntax-aware approaches.\footnote{Note that modest improvements in this task should not be surprising as Transformers learn better syntactic relationships from larger data sets~\cite{raganato-tiedemann-2018-analysis}.}
Overall, our model achieves substantial gains given the grammatically rigorous structure of English and German.
Not only do we expect performance gains to further increase on less rigorous source languages and with better parses~\cite{zhang-etal-2019-syntax}, but we also expect higher robustness to the noisier syntax trees obtained from back-translated data when training with parent ignoring.
\paragraph{Performance by sentence length}
As shown in Figure~\ref{fig:len2bleu}, our model is particularly useful for long sentences, gaining more than $+2$ \textsc{Bleu}\xspace points on long sentences in all low-resource experiments, and $+3.5$ \textsc{Bleu}\xspace points on the distant En-Ja pair.
However, only a few sentences ($1\%$) in the evaluation data sets are this long.
\begin{table}[t]
\centering
\small
\setlength\tabcolsep{2pt}
\begin{CJK}{UTF8}{min}
\begin{tabular}{l|l}
\toprule
\textbf{SRC} & In a cooling experiment , \textbf{only} a tendency agreed \\
\textbf{BASE} & 冷却 実験 で は ,\textbf{わずか な} 傾向 が 一致 し た \\
\textbf{OURS} & 冷却 実験 で は 傾向 \textbf{のみ} 一致 し た \\
\midrule
\textbf{SRC} & Of course I \textbf{don't} hate you \\
\textbf{BASE} & Natürlich hass\textbf{te} ich dich nicht \\
\textbf{OURS} & Natürlich hass\textbf{e} ich dich nicht \\
\midrule
\textbf{SRC} & What are those people fighting for? \\
\textbf{BASE} & Was sind die Menschen, für die kämpfen? \\
\textbf{OURS} & Wofür kämpfen diese Menschen? \\
\bottomrule
\end{tabular}
\end{CJK}
\caption{Examples where \textsc{Pascal}\xspace translates correctly and the Transformer baseline makes a syntactic error.} \label{tab:example}
\end{table}
\paragraph{Qualitative performance}
Table~\ref{tab:example} presents examples where our model correctly translated the source sentence while the Transformer baseline made a syntactic error.
For instance, in the first example, the Transformer misinterprets ``only'' as an adjective modifying ``tendency,'' whereas it is in fact an adverb modifying the verb ``agreed.''
In the second example, ``don't'' is incorrectly translated into the past tense instead of the present.
\paragraph{\textsc{Pascal}\xspace layer}
When we introduced our model, we motivated our design choice of placing \textsc{Pascal}\xspace heads in the first layer in order to enrich the representations of words from their isolated embeddings by introducing contextualization from their parents.
We ran an ablation study on the NC11 data in order to verify our hypothesis.
As shown in Table~\ref{tab:pascal_layer}, the performance of our model on the validation sets is lower when \textsc{Pascal}\xspace heads are placed in upper layers, a trend that we also observed with the \textsc{LISA}\xspace mechanism.
These results corroborate the findings of~\newcite{raganato-tiedemann-2018-analysis} who noticed that, in the first layer, more attention heads solely focus on the word to be translated itself rather than its context.
We can then deduce that enforcing syntactic dependencies in the first layer effectively leads to better word representations, which further enhance the translation accuracy of the Transformer model.
Investigating the performance of multiple syntax-aware layers is left as future work.
\paragraph{Gaussian variance}
Another design choice we made was the variance of the Gaussian weighing function.
We set it to $1$ in our experiments motivated by the statistics of our datasets, where the vast majority of words is at most split into a few tokens after applying BPE.
Table~\ref{tab:var} corroborates our choice, showing higher \textsc{Bleu}\xspace scores on the NC11 validation sets when the variance equals $1$.
Here, ``parent-only'' denotes the case where all the weight is placed on the middle token (i.e., the parent).
\begin{table}[t]
\begin{subtable}{.5\linewidth}
\centering
\small
\setlength\tabcolsep{3pt}
\begin{tabular}{l|r|r}
\toprule
\textbf{Layer} & \textbf{En-De} & \textbf{De-En} \\
\midrule
1 & \textbf{23.2} & \textbf{24.6} \\
2 & 22.5 & 20.1 \\
3 & 22.5 & 23.8 \\
4 & 22.6 & 23.8 \\
5 & 22.9 & 23.8 \\
6 & 22.4 & 23.9 \\
\bottomrule
\end{tabular}
\caption{}\label{tab:pascal_layer}
\end{subtable}%
\begin{subtable}{.5\linewidth}
\centering
\small
\setlength\tabcolsep{3pt}
\begin{tabular}{l|r|r}
\toprule
\textbf{Variance} & \textbf{En-De} & \textbf{De-En} \\
\midrule
Parent-only & 22.5 & 22.4 \\
1 & \textbf{23.2} & \textbf{24.6} \\
4 & 22.7 & 24.3 \\
9 & 22.8 & 24.3 \\
16 & 22.7 & 24.4 \\
25 & 22.8 & 24.1 \\
\bottomrule
\end{tabular}
\caption{}\label{tab:var}
\end{subtable}
\caption{Validation \textsc{Bleu}\xspace as a function of \textsc{Pascal}\xspace layer (a) and Gaussian's variance (b) on NC11 data.}
\end{table}
\paragraph{Sensitivity to hyperparameters}
Due to the large computational cost required to train Transformer models, we only searched hyperparameters in a small grid.
In order to estimate the sensitivity of the proposed approach to hyperparameters, we trained the NC11 De-En model with the hyperparameters of the En-De one.
In fact, despite being trained on the same data set, the two directions favor different settings: we find that more \textsc{Pascal}\xspace heads help when German (which has a higher syntactic complexity than English) is used as the source language.
In this test, we observe a drop of only $0.2$ \textsc{Bleu}\xspace points with respect to the score listed in Table~\ref{tab:results}, showing that our approach remains effective without extensive hyperparameter tuning.
Additional analyses are reported in Appendix~\ref{sec:analysis}.
\section{Introduction}
\input{introduction/intro0.tex}
\section{Model}\label{sec:model}
\input{model/mod0.tex}
\section{Experiments}
\begin{figure*}[t]
\center
\includegraphics[width=\textwidth, trim={0cm 0.2cm 0cm 0.3cm}, clip]{figures/len2bleu.pdf}
\includegraphics[width=\textwidth, trim={0cm 0.2cm 0cm 0.3cm}, clip]{figures/len2nsents.pdf}
\vspace{-20pt}
\caption{Analysis by sentence length: $\Delta$\textsc{Bleu}\xspace with the Transformer (above) and percentage of data (below).}
\label{fig:len2bleu}
\end{figure*}
\begin{table*}[t]
\center
\small
\setlength\tabcolsep{5pt}
\begin{minipage}{\textwidth}
\center
\begin{tabular}{l|r|r|r||r|r|r|r}
\toprule
\multirow{2}{*}{\textbf{Method}}
& \textbf{NC11} & \textbf{NC11} & \textbf{WMT18} & \textbf{WMT16} & \textbf{WMT17} & \multicolumn{2}{c}{\textbf{WAT}} \\
& \textbf{En-De} & \textbf{De-En} & \textbf{En-Tr} & \textbf{En-De} & \textbf{En-De} & \textbf{En-Ja} [B] & \textbf{En-Ja} [R] \\
\midrule
\newcite{eriguchi2016tree} & & & & & & 34.9 & 81.58 \\
\newcite{D17-1209} & 16.1 & & & & &\\
\newcite{D17-1012} & & & & & & 39.4 & 82.83 \\
\newcite{tran2018inducing} & & & & 30.3 & 24.3 & &\\
SE+SD-NMT$^\dagger$~\cite{wu2018dep2dep} & & & & & 24.7 & 36.4 & 81.83 \\
SE+SD-Transformer$^\dagger$~\cite{wu2018dep2dep} & & & & & \textbf{26.2} & &\\
Mixed Enc.~\cite{currey-heafield-2019-incorporating} & & & 9.6 & 31.9 & 26.0 & &\\
Multi-Task~\cite{currey-heafield-2019-incorporating} & & & 10.6 & 29.6 & 23.4 & &\\
\hline
Transformer & 25.0 & 26.6 & 13.1 & 33.0 & 25.5 & 43.1 & 83.46\\
~~~~+ \textsc{Pascal}\xspace & \textbf{25.9}$^\Uparrow$ & \textbf{27.4}$^\Uparrow$ & \textbf{14.0}$^\Uparrow$ & \textbf{33.9}$^\Uparrow$ & \textit{26.1}$^\Uparrow$ & \textbf{44.0}$^\Uparrow$ & \textbf{85.21}$^\Uparrow$ \\
~~~~+ \textsc{LISA}\xspace & 25.3 & 27.1 & 13.6 & 33.6 & 25.7 & 43.2 & 83.51 \\
~~~~+ \textsc{Multi-Task}\xspace & 24.8 & 26.7 & \textbf{14.0} & 32.4 & 24.6 & 42.7 & 84.18 \\
~~~~+ \textsc{S$\text{\&}$H}\xspace & 25.5 & 26.8 & 13.0 & 31.9 & 25.1 & 42.8 & 83.88 \\
\bottomrule
\end{tabular}
\end{minipage}
\vspace{-5pt}
\caption{Test \textsc{Bleu}\xspace (and \textsc{Ribes}\xspace for En-Ja) scores on small-scale (left) and large-scale (right) data sets. Models that also require target-side syntax information are marked with $^\dagger$, while $^\Uparrow$ indicates statistical significance ($p < 0.01$) against the Transformer baseline via bootstrap re-sampling~\cite{koehn-2004-statistical}.}\label{tab:results}
\end{table*}
\input{experiments/exp0.tex}
\section{Conclusion}
\input{conclusion/end0.tex}
\section*{Acknowledgments}
We are grateful to the anonymous reviewers, Desmond Elliott and the CoAStaL NLP group for their constructive feedback.
The research results have been achieved by ``Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation,'' the Commissioned Research of National Institute of Information and Communications Technology (NICT), Japan.
\subsection{Parent-Scaled Self-Attention} \label{sec:pascal}
Figure~\ref{fig:pascal} shows our parent-scaled self-attention sub-layer.
Here, for a sequence of length $T$, the input to each head is a matrix $\mathbf{X}\in\mathbb{R}^{T\times d_{model}}$ of token embeddings and a vector $\textbf{p}\in\mathbb{R}^T$ whose $t$-th entry $p_t$ is the middle position of the $t$-th token's dependency parent.
Following~\newcite{vaswani2017attention}, in each attention head $h$, we compute three vectors (called query, key and value) for each token, resulting in the three matrices $\mathbf{K}^h\in\mathbb{R}^{T\times d}$, $\mathbf{Q}^h\in\mathbb{R}^{T\times d}$, and $\mathbf{V}^h\in\mathbb{R}^{T\times d}$ for the whole sequence, where $d = d_{model}/H$.
We then compute dot products between each query and all the keys, giving scores of how much focus to place on other parts of the input when encoding a token at a given position.
The scores are divided by $\sqrt{d}$ to alleviate the vanishing gradient problem arising if dot products are large:
\begin{equation}
\mathbf{S}^h = \mathbf{Q}^h~{\mathbf{K}^h}^\top / \sqrt{d}.
\end{equation}
Our main contribution is in weighing the scores of the token at position $t$, $\textbf{s}_t$, by the distance of each token from the position of $t$'s dependency parent:
\begin{equation}\label{eq:scaling}
n^h_{tj} = s^h_{tj} ~ d^p_{tj}, ~~\text{ for } j=1,...,T,
\end{equation}
where $\textbf{n}^h_{t}$ is the $t$-th row of the matrix $\mathbf{N}^h\in\mathbb{R}^{T\times T}$ representing scores normalized by the proximity to $t$'s parent. $d^p_{tj} = dist(p_t, j)$ is the $\left(t, j\right)^{th}$ entry of the matrix $\mathbf{D}^p\in\mathbb{R}^{T\times T}$ containing, for each row $\mathbf{d}_t$, the distances of every token $j$ from the middle position of token $t$'s dependency parent $p_t$.
In this paper, we
compute this distance as the value of the probability density of a normal distribution centered at $p_t$ and with variance $\sigma^2$, $\mathcal{N}\left(p_t,\sigma^{2}\right)$:
\begin{equation}
dist(p_t, j) = f_{\mathcal{N}}\left(j\middle\vert p_t, \sigma^2\right) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{\left(j - p_t\right)^2}{2\sigma^2}}.
\end{equation}
Finally, we apply a softmax function to yield a distribution of weights for each token over all the tokens in the sentence, and multiply the resulting matrix with the value matrix $\textbf{V}^h$, obtaining the final representations $\mathbf{M}^h$ for \textsc{Pascal}\xspace head $h$.
One of the major strengths of our proposal is being parameter-free: no additional parameter is required to train our \textsc{Pascal}\xspace sub-layer as $\textbf{D}^p$ is obtained by computing a distance function that only depends on the vector of tokens' parent positions and can be evaluated using fast matrix operations.
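For concreteness, the whole sub-layer can be sketched in a few lines of numpy. This is a minimal illustrative re-implementation, not the code used in our experiments; the function name, signature, and the optional parent-ignoring flag (a regularization described below) are ours:

```python
import numpy as np

def pascal_head(X, p, Wq, Wk, Wv, sigma2=1.0, ignore_prob=0.0, rng=None):
    """One parent-scaled self-attention (PASCAL) head.

    X  : (T, d_model) token embeddings
    p  : (T,) middle sub-word position of each token's dependency parent
    Wq, Wk, Wv : (d_model, d) projection matrices
    """
    T = X.shape[0]
    d = Wq.shape[1]
    Q, K, V = X @ Wq, X @ Wk, X @ Wv

    # S^h = Q K^T / sqrt(d): standard scaled dot-product scores
    S = Q @ K.T / np.sqrt(d)

    # D^p[t, j]: Gaussian pdf of position j centred on t's parent position p_t
    j = np.arange(T)
    D = np.exp(-(j[None, :] - p[:, None]) ** 2 / (2.0 * sigma2)) \
        / np.sqrt(2.0 * np.pi * sigma2)

    # parent ignoring: with probability q, replace a row of D^p by all-ones
    if ignore_prob > 0.0:
        rng = rng or np.random.default_rng()
        D[rng.random(T) < ignore_prob] = 1.0

    # N^h = S^h * D^p, then row-wise softmax and value mixing
    N = S * D
    A = np.exp(N - N.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)
    return A @ V  # M^h, shape (T, d)
```

Note how $\mathbf{D}^p$ is built purely from the parent-position vector $\mathbf{p}$ by broadcasting, reflecting the parameter-free nature of the sub-layer.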
\paragraph{Parent ignoring}
Due to the lack of parallel corpora with gold-standard parses, we rely on noisy annotations from an external parser. However, the performance of syntactic parsers drops abruptly when evaluated on out-of-domain data~\cite{dredze2007frustratingly}.
To prevent our model from overfitting to noisy dependencies, we introduce a regularization technique for the \textsc{Pascal}\xspace sub-layer: \textit{parent ignoring}.
In a similar vein as dropout~\cite{JMLR:v15:srivastava14a}, we disregard information during the training phase. Here, we ignore the position of the parent of a given token by randomly setting each row of $\mathbf{D}^p$ to $\mathbf{1}\in\mathbb{R}^T$ with some probability $q$.
\paragraph{Gaussian weighing function}
The choice of weighing each score by a Gaussian probability density is motivated by two of its properties.
First, its bell-shaped curve: It allows us to focus most of the probability density at the mean of the distribution, which we set to the middle position of the sub-word units of the dependency parent of each token.
In our experiments, we find that most words in the vocabularies are not split into sub-words, hence allowing \textsc{Pascal}\xspace to mostly focus on the actual parent. In addition, non-negligible weights are placed on the neighbors of the parent token, allowing the attention mechanism to also attend to them. This could be useful, for instance, to learn idiomatic expressions such as prepositional verbs in English.
The second property of Gaussian-like distributions that we exploit is their support: while most of the weight is placed in a small window of tokens around the mean of the distribution, all the values in the sequence are multiplied by non-zero factors, allowing a token $j$ farther away from the parent $p_t$ of token $t$ to still play a role in the representation of $t$ if its score $s_{tj}^h$ is high.
\paragraph{}
\textsc{Pascal}\xspace can be seen as an extension of the local attention mechanism of~\newcite{luong2015effective}, with the alignment now guided by syntactic information.
\newcite{yang-etal-2018-modeling} proposed a method to learn a Gaussian bias that is added to, instead of multiplied by, the original attention distribution.
As we will show next, our model significantly outperforms this approach.
\section*{Acknowledgements}
This work has been supported by a Human Capital and Mobility grant of the
European Union, contract no.\ ERBFMRX-CT96-0012.
\section*{References}
\section{Introduction}
\label{sec:introduction}
The color-kinematics duality is a conjectured property of the perturbative expansion of gauge theory amplitudes
proposed by Bern, Carrasco, and Johansson (BCJ) \cite{Bern:2008qj}.
It was born as a means of constructing gravity amplitudes via the \emph{double-copy} procedure \cite{Bern:2010ue}. The range of application of these techniques has been remarkably wide, from amplitudes to classical solutions of General Relativity and gravitational wave emission patterns, to string theory. A comprehensive review can be found in \cite{Bern:2019prr}.
It is therefore all the more remarkable that the property at the root of these developments, the color-kinematics duality, is known to hold to arbitrary multiplicity for tree-level processes \cite{BjerrumBohr:2009rd,Stieberger:2009hq,Feng:2010my,Bern:2010yg}. In particular, it is still a conjectured property of loop amplitudes, and it is even less clear how it is implemented at the level of non-linear classical solutions. If the conjecture can be proven true at higher loops, it would not only be very useful in simplifying computations of scattering amplitudes, but would also reflect a deep relationship between perturbative gauge theories and quantum gravity, invisible at the level of their respective Lagrangians.
This duality has been extensively checked for amplitudes at various loop orders, with the current bottleneck at five loops~\cite{Bern:2017yxu,Bern:2017ucb,Bern:2018jmv}. Despite its many successes, we remain ignorant as to whether the duality continues to hold, and as to how it should be applied, in a completely general setting.
Our approach to this problem, which has proven useful in the past, will be to use string theory. At tree-level, the color-kinematics duality is indeed fully understood from string theory. It originates from fundamental identities in open-string theory scattering amplitudes, known since the early days of dual models~\cite{Plahte:1970wy}, today known as \emph{monodromy relations}~\cite{Plahte:1970wy,BjerrumBohr:2010hn,BjerrumBohr:2010zs,BjerrumBohr:2009rd,Stieberger:2009hq}.
Those relations were generalized to loop-level in \cite{Tourkine:2016bak,Hohenegger:2017kqy}, and recently seen to emanate from a deeper mathematical structure known as {twisted homology}~\cite{Mizera:2017cqs,Mizera:2017rqa,Mizera:2019gea,Casali:2019ihm,Mizera:2019blq}.
Over the past few years a related approach based on ambitwistor string theory has emerged, see, e.g., \cite{Geyer:2015bja,Geyer:2015jch,Geyer:2016wjx,Geyer:2017ela,Geyer:2019hnn,Edison:2020uzf,Farrow:2020voh} (various other recent worldsheet approaches to color-kinematics duality include \cite{Mafra:2011kj,Ochirov:2013xba,He:2015wgf,Mafra:2017ioj,Fu:2018hpu,Fu:2020frx,Mafra:2018pll,Gerken:2020yii,Gerken:2019cxz,Edison:2020ehu}), which gives a handle on the problem of constructing BCJ numerators, at a cost of introducing linearized propagators which need to be transformed into quadratic ones using non-trivial partial fraction identities. Despite many successes of this research direction, our goal here is to obtain Feynman diagrams directly from worldsheet degenerations, which at present is understood most appropriately in the case of string theory.
\textbf{Mysterious transverse integrals in the monodromy relations:}
These relations, however, revealed another conundrum:
In open string theory, gauge bosons are represented by strings with color charges at their ends, the Chan-Paton factors. This implies that vertex operators of gluons are always inserted at the \textit{boundaries} of open-string worldsheets.
The mysterious feature of the monodromy relations and their associated twisted cycles at loop-level is that they involve integrating the vertex operators of gluons \textit{into the bulk} of the worldsheet. From the perspective of string theory, this is an exotic phenomenon, which, presently, has no physical interpretation.
In \cite{Tourkine:2016bak,Tourkine:2019ukp,Casali:2019ihm} it was suggested that they are related to the color-kinematics duality, but this statement was not made precise.
\textbf{The labeling problem in the color-kinematics duality:}
A Yang-Mills amplitude can be written as an expansion involving only trivalent Feynman diagrams by expanding the four-point vertex for instance. In $d$ space-time dimensions, the $n$-gluon Yang-Mills amplitude at the $L$-th loop order is then written as
\begin{equation}
\int \prod_{i=1}^L\frac{d^d\ell_i}{(2\pi)^{ d}}
\underbrace
{\sum_{\substack{\mathrm{trivalent}\\ \mathrm{graphs}\, \Gamma}} \frac{1}{S_\Gamma} \frac{n_\Gamma\, c_\Gamma}{D_\Gamma}}_{\textstyle =\mathcal{I}(\ell_1,\dots,\ell_L) }.
\label{eq:Int-YM}
\end{equation}
The contribution from each trivalent graph features the kinematic numerator $n_\Gamma$, which depends on external and internal kinematics; the color factor $c_\Gamma$, which is a product of structure constants $f^{abc}$; and the product of Feynman propagators $D_\Gamma$ associated to this specific graph. A symmetry factor $1/S_\Gamma$ also needs to be inserted.
Color-kinematics duality states that, given all triples of color factors $c_\Gamma$'s obeying Jacobi identities of the form
\begin{equation}
\includegraphics[valign=c]{color-jacobi}
\end{equation}
there exists a representation of the amplitude where the kinematic numerators $n_\Gamma$ also satisfy the same Jacobi identities.
When this representation exists, fewer kinematic numerators have to be computed, e.g. planar graphs relate to non-planar graphs: this reduces the complexity of computing the amplitude.
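As a toy illustration of the color side of the duality, the Jacobi identity satisfied by the $c_\Gamma$'s can be checked numerically in the simplest non-abelian case, $\mathfrak{su}(2)$, where $f^{abc}=\epsilon^{abc}$. This is our own sketch; the index conventions and normalizations below are ours:

```python
import itertools
import numpy as np

# Structure constants of su(2): f^{abc} = epsilon^{abc} (Levi-Civita symbol)
f = np.zeros((3, 3, 3))
for perm in itertools.permutations(range(3)):
    # sign of the permutation, via the determinant of a permutation matrix
    f[perm] = np.linalg.det(np.eye(3)[list(perm)])

def jacobi(a1, a2, a3, a4):
    """f^{a1 a2 e} f^{e a3 a4} + cyclic permutations of (a1, a2, a3).
    The Jacobi identity says this vanishes for any external indices."""
    return (f[a1, a2] @ f[:, a3, a4]
            + f[a2, a3] @ f[:, a1, a4]
            + f[a3, a1] @ f[:, a2, a4])
```

Each of the three terms is the color factor of one trivalent channel at four points; their sum vanishes identically, which is the statement depicted in the figure above.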
However, this representation suffers from some ambiguities. A natural one is the possibility to shift the numerators by quantities that vanish in a Jacobi identity. This is a structural ambiguity akin to gauge redundancy. A more severe ambiguity, and one we address in this text within our framework, comes from the freedom of redefining loop momenta in field theory. This means that a notion of ``the'' integrand $\mathcal{I}(\ell_1,\dots,\ell_L)$ as in eq.~\eqref{eq:Int-YM} is usually ill-defined.
In contrast, string theory has a well-defined notion of the integrand, on which a global definition of loop momentum can be introduced using the formalism of chiral splitting~\cite{DHoker:1988pdl,Tourkine:2019ukp}. It is then likely that following this notion of integrand through the field-theory (or tropical~\cite{Tourkine:2013rda}) limit gives, if not a canonical, at least a ``nice'' representation for a field theory integrand. Figure~\ref{fig:BCJ-labeling} illustrates this problem in the case of $n=4$ particles. The labeling induced by string theory is that the loop momentum always starts after leg $4$: this is a gauge choice coming from fixing translation invariance on the annulus. The problem is that there are Jacobi identities which exchange the position of this leg and modify the definition of the loop momentum in mismatching ways.
In field theory, one is able to cook up a solution and declare that the numerator of the mismatched graph is equal to that of the other, but at higher loop orders this question becomes trickier. This phenomenon is called the \textit{labeling problem} and is actually one of the bottlenecks in finding representations satisfying color-kinematics duality. There are no rules to determine which graphs should be used at higher loop orders, e.g. no rule to tell whether graphs with different labelings of internal loop momenta should count as different graphs with different numerators or not.
One goal of this paper is to use the field-theory limit of the string theory monodromy relations to see what string theory has to say about this question.
\begin{figure}
\centering
\includegraphics{jacobi}
\caption{Illustration of the labeling problem. At one-loop in string theory the loop momentum can be globally defined by the property that it always starts after the $n$-th leg, here leg $4$ on the left-hand side box graph.
\underline{Top:} example of BCJ identity which does not change the definition of the loop momentum. \underline{Bottom:} identity which changes the definition of the loop momentum. Note that the rightmost graph has a correctly defined loop momentum because leg $1$ is to the left of leg $4$.}
\label{fig:BCJ-labeling}
\end{figure}
\paragraph{Summary of the results:}
\label{sec:plan-paper}
\begin{itemize}[leftmargin=*]
\item We find that the field theory limit of the monodromy relations produces numerators which automatically satisfy Jacobi identities inside the graph, i.e., in those places where the definition of the loop momentum would not be changed by a Jacobi move, as explained above.
\item We characterize the extra contributions arising from the bulk transverse integrals of the annulus present in the monodromy relations. We carefully compute their field theory limit and show that it produces two types of graphs: contact terms, and graphs with trees attached to the loop. The existence of the first class was suggested in \cite{Ochirov:2017jby}, but the second is completely new. These graphs enter the monodromy relations in a crucial way by removing the graphs where a Jacobi identity would otherwise be ambiguous, in the sense that it would require a cancellation between two graphs with different definitions of the loop momentum. Therefore, string theory evades the problem of loop-momentum redefinition by effectively \textit{removing the ambiguous identities}.
\end{itemize}
We would like to add that it is not our intention to imply that monodromy relations lead to BCJ-satisfying numerators. In particular, the stringy way to solve the monodromies, as we detail in this text, does not produce BCJ identities at those points where the loop momentum jumps, but rather adds contact terms so as to satisfy the monodromy relations.
In the discussion section we elaborate on the significance of these results in the context of gravity. Contact terms to be squared seem in particular unavoidable, which furnishes an a posteriori justification for the generalized double-copy procedure of \cite{Bern:2017yxu,Bern:2017ucb}. This also hints towards the physical role of the bulk integrals as a possible new underlying structure in the color-kinematics duality.
The paper is organized as follows. In section~\ref{sec:reviews}, we review the mechanism of the field theory limit and the monodromy relations. In section~\ref{sec:field-theory-limit}, we describe the field theory limit of the bulk contours and how they generate contact terms and triangle-type graphs. This can be seen as a new item in the Bern-Kosower rules, required for the monodromy relations. In section~\ref{sec:field-theory-limit-monodromy} we show how the field theory limit of the monodromy relations produces numerators which satisfy BCJ identities in the bulk, and how the bulk contours remove the terms in which the BCJ identities could have been spoiled by redefinitions of the loop momentum. We summarize the paper in section~\ref{sec:discussion}, where we also comment on the extensions to higher-loop orders and the interpretation of bulk cycles in the context of the double-copy.
\section{Reviews of the tropical limit and monodromy relations}
\label{sec:reviews}
\subsection{Field-theory limit and Bern-Kosower rules}
\label{subsec:ftl}
In this section, we present a short review of the field-theory limit of open-string theory \cite{Bern:1990cu,Bern:1990ux,Bern:1991aq,Bern:1993wt}. Field theory amplitudes are generated by sending $\alpha'\to0$ in a string amplitude, more precisely $\alpha' k_i\cdot k_j\ll 1$ for all $i,j$. We take all external states to be massless, $k_i^2 =0$. In the absence of UV divergences, the leading order contributions to this amplitude, after suitable rescaling, become Feynman graphs.\footnote{When there are UV divergences, it is sufficient, for our purposes, to truncate the modulus integrations in the amplitudes, as our relations are valid pointwise in the moduli space. This results in Schwinger proper-time amplitudes with a hard cut-off of order $\alpha'$ for the Schwinger proper-time.}
This scaling limit can be also understood as coming from a tropicalization of the moduli space of punctured Riemann surfaces~\cite{Tourkine:2013rda}, therefore in the text we will use the terms \emph{tropical limit} and \emph{field-theory limit} interchangeably. We refer to~\cite{Tourkine:2013rda} for conventions, signs and factors of $\pi$ and $2$'s which are necessary for a clean analysis of the limit. It is crucial to keep track of these factors given how delicate some cancellations are.
In the field theory limit, the moduli space integration of string theory only receives contributions from regions near its boundaries, corresponding to the Riemann surface degenerating into graphs with different topologies. Intuitively, the open-string worldsheet becomes a collection of infinitely long and thin ribbons, with widths proportional to $\sqrt{\alpha'}$, joining and splitting at interaction points. The resulting object depends only on the length of the edges which correspond to Schwinger proper-time parametrization of Feynman graphs after suitable rescaling. At one-loop, this process is systematized by the Bern-Kosower rules~\cite{Bern:1990cu,Bern:1990ux,Bern:1991aq,Bern:1993wt}. We refer the reader to \cite{Schubert:2001he} for a thorough review, and recall below only the aspects of these rules necessary for our purposes.
For concreteness, we will focus on the one-loop case but the basic idea generalizes to all genera: we comment on the higher-loop case in the discussion section~\ref{sec:discussion}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{sep_deg.pdf}
\end{center}
\caption{Example of a separating degeneration at one-loop.}
\label{fig:sep_deg}
\end{figure}
There are two types of degenerations at the boundaries of the moduli space: \textit{separating} and \textit{non-separating}. A separating degeneration occurs when the original surface pinches and splits into two surfaces connected at a point, or equivalently by an infinitely long strip, see figure~\ref{fig:sep_deg}.
A non-separating degeneration occurs when the pinched surface is a connected surface with a double-point, see figure~\ref{fig:non_sep_deg}.
As an example, take a one-loop open-string amplitude with $n$ ordered punctures on the same worldsheet boundary. Its field theory limit generates all possible trivalent graphs that have this ordering: the $n$-gon, and all other one-loop graphs with trees attached to the loop. The attached trees are generated from boundary components where two or more punctures get very close together and the worldsheet pinches as depicted on the right of figure \ref{fig:sep_deg}.
The generic case of a $g$-loop graph with particles ordered on the $g+1$ boundaries obeys the same mechanism. Therefore, all graphs which respect a given ordering are generated in the field-theory limit.
However, this does not mean that a given string amplitude has support on all of these graphs. For instance, supersymmetry can prevent the appearance of certain graphs, such as triangles in maximally supersymmetric theories, see~\cite{Bern:1998sv,Bern:2005bb,BjerrumBohr:2005xx,BjerrumBohr:2006yw,Bern:2007xj,BjerrumBohr:2008ji}. What happens in this case is that the string integrand has zero support on those degenerations at
leading order in $\alpha'$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{non_sep_deg.pdf}
\end{center}
\caption{Example of a non-separating degeneration at one-loop.}
\label{fig:non_sep_deg}
\end{figure}
What properties of a string integrand indicate whether or not it has support on a given boundary of the moduli space? To answer this question, we specialize to one-loop, but the statements below are generic since they depend only on the local structure of the propagator and not the topology (genus) of the surface. A typical string integrand assumes the following form
\begin{equation}
\label{eq:typical-string}
\varphi(\{z_i\})\, \times e^{-\alpha' \ell^2 \Im\tau -2\pi \alpha' \sum_{i=1}^n \ell\cdot k_i \Im z_i +\sum_{i<j}k_i \cdot k_j G_{ij}},
\end{equation}
where the exponent, traditionally called the Koba-Nielsen factor, is universal to all string amplitudes, and $\varphi$ is a theory-dependent function with no branch cuts. The annulus is defined by a rectangle of height $t$ and width $1/2$, so that $\tau=it$. The punctures $z_i$ live on both boundaries, $0\leq \Im z_i\leq t$ and $\Re z_i=0,1/2$. We follow the conventions of \cite{Tourkine:2016bak,Casali:2019ihm}. The Koba-Nielsen factor is constructed out of the following function,\footnote{It differs from the Green's function $\langle X(z_i) X(z_j) \rangle$ by a non-holomorphic term proportional to $(\Im z_{ij})^2/\Im \tau$. We always compensate this term by working in the chiral splitting formalism~\cite{DHoker:1988pdl}, which introduces a loop momentum integration. Consequently, we always work at fixed loop momentum, i.e. before integration.
}
\begin{equation}
G_{ij}:=G(z_i -z_j)=-\alpha'\ln \bigg| \frac{\vartheta_1(z_i-z_j)}{\vartheta_1'(0)} \bigg| \,
\label{eq:corr-XX}
\end{equation}
explicitly given by
\begin{equation}
\label{eq:G-def}
G(z)/\alpha'=
-\ln | \sin(\pi z) | +
2\sum_{m=1}^\infty \frac{q^m}{1-q^m}\frac{1}{m}\cos(2\pi m z) +c(\tau)
\,.
\end{equation}
Here $\vartheta_1$ is the first Jacobi theta function, $q = e^{2\pi i\tau}$ is the nome of the Riemann surface with modular parameter $\tau$ and $c(\tau)$ is a function that eventually drops out of computations due to momentum conservation. For the annulus we have $\tau \in i\mathbb{R}$; for more conventions see appendix~\ref{sec:conventions}.
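As a sanity check of eq.~\eqref{eq:G-def}, one can compare it numerically with the standard product representation of $\vartheta_1$: the two must agree up to the $z$-independent constant $c(\tau)$. The following is our own short verification script (assuming the nome convention $q=e^{2\pi i\tau}$ stated above, with $\tau=it$ real on the annulus):

```python
import math

def G_product(z, t, N=60):
    """-ln|theta_1(z)/theta_1'(0)| from the product form of theta_1,
    for real z and tau = i t, with nome q = exp(-2 pi t)."""
    q = math.exp(-2.0 * math.pi * t)
    val = -math.log(abs(math.sin(math.pi * z) / math.pi))
    for n in range(1, N):
        val -= math.log((1 - 2 * q**n * math.cos(2 * math.pi * z) + q**(2 * n))
                        / (1 - q**n) ** 2)
    return val

def G_series(z, t, M=60):
    """Right-hand side of the q-expansion, dropping the constant c(tau)."""
    q = math.exp(-2.0 * math.pi * t)
    return (-math.log(abs(math.sin(math.pi * z)))
            + 2 * sum(q**m / (1 - q**m) * math.cos(2 * math.pi * m * z) / m
                      for m in range(1, M)))
```

Evaluating the difference of the two functions at different values of $z$ gives the same number, confirming that they differ only by $c(\tau)$.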
The monodromy properties of the integrand are entirely given by this factor, which contains all the branch cuts of the integrand. In other words, it controls the monodromy structure regardless of the matter content in a specific scattering process.
The remaining part of the integrand, $\varphi$, is naively given by a polynomial in the derivatives of $G$ and constant kinematic terms (powers of internal and external momenta). Bern and Kosower proved that it is always possible to find a sequence of integrations by parts such that $\varphi$ contains only first derivatives of $G$~\cite{Bern:1990cu,Bern:1990ux,Bern:1991aq,Bern:1993wt}, see also the review \cite{Schubert:2001he}.\footnote{This reasoning is valid at one-loop. A possible obstruction at higher loops is that integration by parts may interact non-trivially with picture-changing operators.}
Therefore, the function $\varphi$ can always be written as a polynomial in $\dot{G}_{ij}$, and takes the most general form:
\begin{equation}
\label{eq:schem}
\varphi(\{z_i\}) = \sum_\alpha c_\alpha \prod_{(i,j)\in \alpha} \dot G_{ij},
\end{equation}
where $\alpha$ is a set of pairs of labels $(i,j)$ appearing in a given term and $c_\alpha$'s contain the polarization and kinematic dependence of the amplitudes. Note that $n$ powers of $\dot G$ correspond to a numerator with $n$ powers of the loop momentum in the field theory limit.
Now, following Bern and Kosower, consider the pair $(i,j) \in \alpha$. We ask whether a given monomial $\varphi_\alpha = \prod_{(i,j)\in \alpha} \dot G_{ij}$ splits off a tree or not. There are two cases: 1) $\dot G_{ij}$ appears with two or no powers, i.e., $\dot G_{ij}^{2} \subset \varphi_\alpha$ or $\dot G_{ij} \not\subset \varphi_\alpha$, or 2) exactly one power of $\dot G_{ij}$ appears in $\varphi_\alpha$. The mechanism of the field theory limit, systematized by the Bern-Kosower rules, stipulates that case 1 gives an integrand with no support on graphs where legs $(i,j)$ form an external tree, while case 2 gives an integrand with support both on those graphs and on graphs where $(i,j)$ do not split off a tree.
\paragraph{Case 1: no (ij)-tree.}
In the field theory limit, the annulus becomes infinitely long, so that $\tau\to i\infty$, $z_j\to i \infty$ with a tropical scaling
\begin{equation}
\Im z_j = \frac{Y_j}{\pi \alpha'}\,,\qquad\Im \tau=\frac T {\pi\alpha'}\,.
\label{eq:trop-scal}
\end{equation}
The quantities $Y_j$ and $T$ are the field theory Schwinger proper-times of the graph.
The propagator reduces to
\begin{equation}
\label{eq:trop-G}
G_{ij}/\alpha'= -\ln|\sinh((Y_j-Y_i)/\alpha')| = -|Y_j-Y_i|/\alpha' +
{\cal O}(e^{-2 |Y_{ij}|/\alpha'}),
\end{equation}
where terms with non-zero powers of $q$ are exponentially suppressed in the field theory limit and drop out.\footnote{The story is more complicated than this and depends on the amount of supersymmetry. The string partition function may possess terms of order $q^{-1}$ or $q^{-1/2}$ which extract residues at order $q^{1}$ or $q^{1/2}$. The effect of these terms, fully systematized in the original Bern-Kosower rules, is to adapt the number of powers of $\dot G_{ij}$ in the numerator to the amount of SUSY and the spin of the particles. It does not change the fact that the integrand is solely made of powers of $\dot G_{ij}$.}
Equation \eqref{eq:trop-G} approaches, as expected, the worldline propagator
$-|Y_j -Y_i|$
in the limit $\alpha'\rightarrow 0$, when taking into account the factor of $\alpha'$ in eq.~\eqref{eq:typical-string}.
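This limit is easy to check numerically. The sketch below (our own script, not part of any standard package) evaluates $-\ln|\sinh(Y/\alpha')|$ in a numerically stable way and verifies that, after multiplying by $\alpha'$, it approaches $-|Y|$ as $\alpha'\to0$; the leftover $\alpha'\ln 2$ is the kind of constant that, like $c(\tau)$, drops out by momentum conservation:

```python
import math

def G_over_alpha(Y, alpha):
    """-ln|sinh(Y/alpha')|, using log(sinh x) = x - log 2 + log1p(-exp(-2x))
    for numerical stability when Y/alpha' >> 1."""
    x = abs(Y) / alpha
    return -(x - math.log(2.0) + math.log1p(-math.exp(-2.0 * x)))
```

Multiplying by smaller and smaller values of $\alpha'$ makes the residual against $-|Y|$ shrink linearly in $\alpha'$, as expected from the exact $\alpha'\ln 2$ correction.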
\paragraph{Case 2: (ij)-tree.}
A tree graph occurs when a separating degeneration pinches off a punctured disk from the original surface, or equivalently when a set of punctures comes infinitesimally close to each other. Consider the case where two particles, $z_i$ and $z_j$, approach each other such that a three-punctured disk splits off. The answer to our question above is that a string integrand will have support on this degeneration if $\varphi_\alpha$ contains exactly one power of $\dot G_{ij}$.
In the region $z_i-z_j\ll1$, the integrand can then be written as
\begin{equation}
\varphi(\{z_i\})\, e^{\sum_{r,s}k_r \cdot k_s G_{rs}}=\dot G_{ij}e^{k_i \cdot k_j G_{ij}}\times \left(\tilde \varphi\, e^{\sum_{r,s\neq i} k_r \cdot k_s G_{rs}}\right)\bigg|_{z_i=z_j}\,+{\cal O}(z_i-z_j),
\end{equation}
where the Bern-Kosower rules stipulate that the ${\cal O}(z_i-z_j)$ terms drop out in the field theory limit.
Thus, the only part of the integrand which still depends on the variable $z_i$ is $\dot G_{ij}e^{k_i \cdot k_j G_{ij}}$.
From the derivative of the Green's function,
\begin{equation}
\label{eq:G-dot-def}
\partial_z G(z)/\alpha' =-\frac{\vartheta_1'(z)}{\vartheta_1(z)} =
-\pi \cot(\pi z)- 4\pi
\sum_{m=1}^\infty \frac{q^m}{1-q^m}\sin(2\pi m z)\,
\end{equation}
and from $G_{ij}$, we retain only the first term in the $q$-expansion, as the ${\cal O}(q)$ terms are exponentially suppressed in the limit \eqref{eq:trop-scal}.
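As a quick numerical sanity check (not part of the derivation), the Lambert-series form of $\vartheta_1'/\vartheta_1$ quoted above can be compared with the log-derivative of the standard product representation $\vartheta_1(z)\propto\sin(\pi z)\prod_{m\geq1}(1-2q^m\cos(2\pi z)+q^{2m})$, assumed here with nome $q=e^{2i\pi\tau}$:

```python
import math

# Check: log-derivative of the product form of theta_1 (assumed standard,
# nome q = exp(2 i pi tau)) against the Lambert series quoted in the text.
# Both series are truncated at order M; with q = 0.1 the tails are negligible.
def dlog_theta1_product(z, q, M=200):
    total = math.pi / math.tan(math.pi * z)
    for m in range(1, M + 1):
        # d/dz ln(1 - 2 q^m cos(2 pi z) + q^(2m))
        num = 4 * math.pi * q**m * math.sin(2 * math.pi * z)
        den = 1 - 2 * q**m * math.cos(2 * math.pi * z) + q**(2 * m)
        total += num / den
    return total

def dlog_theta1_lambert(z, q, M=200):
    total = math.pi / math.tan(math.pi * z)
    for m in range(1, M + 1):
        total += 4 * math.pi * q**m / (1 - q**m) * math.sin(2 * math.pi * m * z)
    return total

assert abs(dlog_theta1_product(0.3, 0.1) - dlog_theta1_lambert(0.3, 0.1)) < 1e-12
```

Truncating both series at ${\cal O}(q)$ then leaves exactly the $\pi\cot(\pi z)$ term used in the field theory limit.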
Then, on an integration contour where $y_i=\Im z_i$ approaches $y_j=\Im z_j$ from below, we perform the tropical limit rescaling and zoom in on the piece of the contour near $y_j$. For a term involving $\dot G_{ij}e^{k_i \cdot k_j G_{ij}}$ this gives us
\begin{align}
\mathcal{I}_{\text{trop}} &= i \int_{y_j-L}^{y_j}dy_i \cot(i\pi(y_j-y_i)) e^{\alpha' k_i\cdot
k_j\ln(-i\sin(i\pi(y_j-y_i)))} \nonumber \\
&= \frac{(\sinh (\pi L )) ^{\alpha'
k_i\cdot k_j}}{\pi \alpha' k_i\cdot k_j} = \frac{1}{\pi \alpha'
k_i\cdot k_j}
+{\cal O}(1),\label{eq:tr-usual}
\end{align}
where $L$ is some cut-off which drops out in the limit. Moreover, since $L$ was inserted by hand, the full integral cannot depend on it order-by-order in $\alpha'$; the $L$-dependence must therefore cancel against the other parts of the contour, where $y_i$ lies above $y_j$. An explicit example is given in eqs.~\eqref{eq:tr-usual-1}, \eqref{eq:tr-usual-2}, where we see that the dependence on $L$ is pushed two orders in $\alpha'$ beyond the leading term.
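The leading pole and the structure of the cut-off dependence can be checked numerically. After the substitution $u=y_j-y_i$, the integrand of \eqref{eq:tr-usual} becomes $\coth(\pi u)\sinh(\pi u)^{a}$ with $a=\alpha' k_i\cdot k_j$, whose antiderivative is $\sinh(\pi u)^a/(\pi a)$; the sketch below (plain Simpson quadrature, kept away from the integrable singularity at $u=0$) confirms this:

```python
import math

# Check of eq. (tr-usual): int coth(pi u) sinh(pi u)^a du = sinh(pi u)^a/(pi a),
# exhibiting the 1/(pi a) pole as a -> 0, while the cutoff L only enters at
# subleading orders through sinh(pi L)^a = 1 + a ln sinh(pi L) + ...
def simpson(f, lo, hi, n=4000):
    # composite Simpson rule (n must be even)
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

a, L, eps = 0.2, 1.0, 0.05   # integrate on [eps, L] to avoid the u^(a-1) endpoint
f = lambda u: math.cosh(math.pi * u) / math.sinh(math.pi * u) * math.sinh(math.pi * u)**a
numeric = simpson(f, eps, L)
exact = (math.sinh(math.pi * L)**a - math.sinh(math.pi * eps)**a) / (math.pi * a)
assert abs(numeric - exact) < 1e-8
```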
On the right-hand side of \eqref{eq:tr-usual} we immediately recognize the propagator of a tree subgraph with legs $i$ and $j$. If particle $y_i$ approaches from above, the result is the same apart from an overall minus sign coming from the antisymmetry of the cotangent function. After integrating out $y_i$ in this way, the rest of the integrand is given by the previous integrand with $\dot G_{ij}$ removed and $z_i$ replaced everywhere by $z_j$. In section~\ref{sec:field-theory-limit}, we will redo this calculation to extract the field theory limit of the bulk contours entering the monodromy relations. Because the standard string amplitudes do not involve those contours, their analysis was absent from the older literature on the field theory limit of strings.
\subsection{Monodromy relations}
Monodromy relations \cite{BjerrumBohr:2009rd,BjerrumBohr:2010zs,Stieberger:2009hq} are linear relations between open-string amplitudes, known to exist at tree-level since the early days of dual models \cite{Plahte:1970wy}. These relations can be used to reduce the color-ordered amplitudes to a minimal basis realizing the BCJ color-kinematics duality, see the references above and also \cite{Feng:2010my}. They were extended to loop level in \cite{Tourkine:2016bak,Hohenegger:2017kqy,Ochirov:2017jby}, and formalized in the context of twisted homologies by the present authors in \cite{Casali:2019ihm}. The reader should refer to~\cite{Casali:2019ihm} for more details, conventions and proofs of the identities used in this paper.
Mathematically, monodromy relations are linear relations between integrals over the configuration space of points on sliced genus-$g$ Riemann surfaces, whose integrand involves a multi-valued function, the Koba-Nielsen factor $T_g$.
At tree-level, this function is given by
\begin{align}
T_0(\{z_1,\dots,z_n\})=\prod_{i<j}(z_j-z_i)^{\alpha' k_i\cdot k_j},
\end{align}
where $k_i$ is the momentum associated with puncture $z_i$.
The monodromy relations at tree-level can be expressed as relations among color-ordered open-string \textit{amplitudes}, where a single puncture circulates around, starting from its original position. Taking the ordering $12\dots n$ and circulating $1$ for instance gives Plahte's relations~\cite{Plahte:1970wy}:
\begin{align}
\sum_{i=1}^{n-1}e^{\pm \pi i\alpha' k_1\cdot \sum_{j=2}^ik_j}A_{\text{tree}}(2,\dots, i, 1, i{+}1,\dots,n ) =0,
\label{eq:plahte}
\end{align}
where $A_\text{tree}(1,\dots,n)$ denotes tree-level open string amplitudes in a particular color ordering. These are two separate relations labeled by a sign $\pm$.
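Plahte's relations can be checked numerically at $n=4$. In a standard parametrization (assumed here for illustration; the conventions are not fixed by the text), the three color orderings are integrals of $z^a(1-z)^b$ over $(-\infty,0)$, $(0,1)$ and $(1,\infty)$, with $a=\alpha' k_1\cdot k_2$ and $b=\alpha' k_1\cdot k_3$, all expressible as Beta functions; the relation follows from deforming the $z$-contour above the real axis:

```python
import math, cmath

# n = 4 check of Plahte-type relations. Parametrization assumed (not fixed by
# the text): the three orderings are integrals of z^a (1-z)^b over (-inf,0),
# (0,1), (1,inf), with a = alpha' k1.k2, b = alpha' k1.k3.
def beta(x, y):
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

a, b = -0.3, -1.2                 # sample kinematics, a + b < -1 for convergence
I_s = beta(a + 1, b + 1)          # z in (0, 1)
I_t = beta(-1 - a - b, b + 1)     # z in (1, inf)
I_u = beta(a + 1, -1 - a - b)     # z in (-inf, 0)

# deforming the z-contour above the real axis gives the phase-weighted sum
rel = cmath.exp(1j * math.pi * a) * I_u + I_s + cmath.exp(-1j * math.pi * b) * I_t
assert abs(rel) < 1e-9
```

Taking real and imaginary parts of this phase-weighted sum reproduces the two sign choices in \eqref{eq:plahte}.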
At loop-level, the monodromy relations can be expressed as relations between color-ordered open-string \textit{loop integrands}: they hold at fixed surface moduli and fixed loop momenta. Most of the contributions to these relations are integrated over the usual open-string cycles, i.e., the particles are ordered along the $g{+}1$ boundaries of a Riemann surface, in accordance with a given Chan-Paton ordering.
But there are also unavoidable contributions from \emph{bulk cycles}, coming from contours that run in the interior of the surface, along its $A$-cycles, see, e.g., the red lines in figure~\ref{contours_def}.
It is worth recalling that, so far, they have no interpretation as originating from the string theory path-integral.
At genus one, placing punctures $2,\dots,m$ on one boundary and $m{+}1,\dots,n$ on the other (we fix $z_m=it$ by translation invariance), the monodromy relations can be written as
\begin{align}\label{string_non_planar}
&\qquad\sum_{i=1}^{m-1}e^{\pm\pi i \alpha' k_1\cdot \sum_{j=2}^{i} k_{j}} \,\mathcal{I}(2,3,\dots, i,1,i{+}1,\dots, m | m{+}1,\dots, n)\nonumber\\
&\qquad\qquad\qquad +\sum_{i=m}^n e^{\pm\pi i \alpha' k_1\cdot(\ell+ \sum_{j=2}^{i}k_{j})} \,\mathcal{I}(2,\dots, m|m{+}1, \dots, i, 1, i{+}1,\dots, n)\\
&\!= \mp e^{\pm\pi i \alpha' k_1 \cdot \ell} \!\left( e^{\pm\pi i \alpha' k_1 \cdot \sum_{j=2}^{m} k_j}\mathcal{J}_{\mathbf{a}_\pm}(2,\dots, m|1,m{+}1,\dots, n) {-} \mathcal{J}_{\mathbf{c}_\pm}(2,\dots, m| m{+}1, \dots, n,1) \right)\!,\nonumber
\end{align}
where $\mathcal{I} (\cdots|\cdots)$ denotes a physical integration contour, with the two slots giving the ordering of punctures on each boundary, and $\mathcal{J}_{\mathbf{a}_{\pm}}$, $\mathcal{J}_{\mathbf{c}_{\pm}}$ denote the contributions integrated along the $A$-cycles shown in figure~\ref{contours_def}.
Those are the relations we use in this paper. The relation with minus signs in the phases and $\mathcal{J}_{\mathbf{a}_-/\mathbf{c}_-}$ is obtained by drawing the same vanishing contour, but on a reflected rectangle, see \cite[Figure 9]{Casali:2019ihm}.
\begin{figure}\begin{center}
\includegraphics{contours_def_2.pdf}
\end{center}
\caption{Illustration of the monodromy relations coming from the vanishing of an integral around the closed blue contour, for a generic non-planar amplitude. \underline{Left:} open string annulus, with punctures. Red line: $A$-cycle, along which the Riemann surface is cut; it defines where the loop momentum $\ell^\mu = \int_A \partial X^\mu$ is measured. Blue cycle: the contour over which the puncture $z_1$ is being integrated. As no pole exists in the bulk, the full integral vanishes. Each segment along the boundary gives one of the terms in the relation. \underline{Right:} rectangle representation of the annulus, with depictions of the contours for $z_1$. The $\mathcal{I}$ contours are the usual open string boundary contours, the $\mathcal{J}$ are bulk contours.}
\label{contours_def}
\end{figure}
The general form of the $\mathcal{J}$ terms is
\begin{multline}
\mathcal{J}_{\mathbf{a/c}_{\pm}}(2,\dots,m|1,m{+}1,\dots,n)=\int_{\Delta} \prod_{i\neq 1,m}d z_i e^{-2\pi \alpha' \ell\cdot \sum_{i\neq 1}^nk_i \Im(z_i)}\\\times\prod_{i,j\neq 1}|G(z_i,z_j)|^{\alpha' k_i\cdot k_j}\int_0^{\pm1/2} d x_1 T_1(z_1),
\end{multline}
where $x_1=\Re z_1$. The integration contours are $\Im z_1=0$ for $\mathcal{J}_{\mathbf{c}_\pm}$ and $\Im z_1=t$ for $\mathcal{J}_{\mathbf{a}_\pm}$. The contour $\Delta$ is the usual one for the $n{-}2$ punctures distributed along the two boundaries and we have fixed the $m$-th puncture to $z_m=\tau=it$. The function $T_1(z_1)$ is obtained from analytic continuation of the string integrand in the variable $z_1$,
\begin{equation}\label{T1}
T_1(z_1) := e^{2\pi i \alpha' k_1\cdot \ell\, z_1} \prod_{j=2}^m \left(-i \vartheta_1(iy_j{-}z_1)\right)^{\alpha' k_1\cdot k_j}\prod_{j=m+1}^n\vartheta_2(iy_j{-}z_1)^{\alpha' k_1\cdot k_j}.
\end{equation}
The field theory limit of the physical $\mathcal{I}$ terms in the relations is known and given by the Bern-Kosower rules, as explained in the previous subsection. In the next section, we will provide an analysis of the field theory limit of the $\mathcal{J}$ terms in the monodromy relations.
\section{Field theory limit of bulk contours at one-loop}
\label{sec:field-theory-limit}
\subsection{Overall phases}
We analyze here the field theory limit of the $\mathcal{J}$ cycles from~\eqref{string_non_planar}. The relevant part is the piece of the contour running along the $A$-cycle,
\begin{align}
\label{eq:bulk-integrals}
\mathcal{J}_{\mathbf{a}_\pm} :=&\int_0^{\pm 1/2} T_1(x+it) \,\varphi(x+it)\, dx,\\
\mathcal{J}_{\mathbf{c}_\pm} :=&\int_0^{\pm1/2} T_1(x)\, \varphi(x)\, dx,
\end{align}
with $T_1$ defined in \eqref{T1}. For now we take the string integrand $\varphi=1$ in order to analyze the overall complex phase that these contributions acquire in the field theory limit.
For definiteness we look at the integral $\mathcal{J}_{\mathbf{a}_+}$ in detail, while the other phases can be obtained from similar computations.
Define $\mathcal{E}$ as the exponent of the function $T_1=e^{\alpha' \mathcal{E}(z)}$, that is
\begin{equation}
\mathcal{E}(z)= 2i\pi \ell \cdot k_1 z + \sum_{j=2}^m k_1\cdot k_j
\ln(-i\vartheta_1(iy_j-z) )+ \sum_{j=m+1}^n k_1\cdot k_j \ln\vartheta_2(iy_j-z).
\end{equation}
Now we take the tropical limit using eq.~\eqref{eq:trop-scal}, but keep the stringy lower-case variables so as not to clutter the notation with factors of $\pi$ and $\alpha'$.
As $\alpha'\to 0$ we can safely drop the higher-order terms in $q$ and $w_j=\exp(2i\pi z_j)$ in
$\ln(-i\vartheta_{1})$ and $\ln(\vartheta_2)$. What remains are logarithms of trigonometric functions, $\ln(-i\sin(\cdot ))$ and $\ln(\cos(\cdot))$ (see appendix \ref{sec:conventions}). The sine and cosine functions are a sum of two terms, one of
which is exponentially growing, the other suppressed. On the $\mathbf{a}_+$ contour, we have $z=x+it$, which gives
\begin{align}
-i\sin(\pi(i y_j-z)) &= -\frac{1}{2}
e^{\pi(t-ix-y_j)}(1-e^{-2\pi(t-ix-y_j)}),\label{eq:first_phase}\\
-i \sin(\pi(it-z)) &= i \sin (\pi x),\label{eq:weird_phase}\\
\cos(\pi(i y_j-z)) &=\frac{1}{2}
e^{\pi(t-ix-y_j)}(1+e^{-2\pi(t-ix-y_j)})
\end{align}
for $j\neq m$.
Upon taking the logarithm we can also discard the exponentially
suppressed terms $e^{-2\pi(t-ix-y_j)}$, which correspond to the exchange of massive string states. The exponent then reduces to
\begin{align}
\mathcal{E}(it+x)\rightarrow \;&2i\pi \ell \cdot k_1 (it+x) + \sum_{j=2}^{m-1} k_1\cdot k_j
\left(\pi(t-y_j -ix)+i\pi\right)\nonumber\\
&+\pi \sum_{j=m+1}^n k_1\cdot k_j (t-y_j -ix) +
k_1\cdot k_m (\ln\sin(\pi x)+i \pi/2).
\label{eq:exotic-phase-energy}
\end{align}
Here we recover the $i\pi \sum_{j=2}^{m-1} k_1\cdot k_j$ from the term $\ln(-1)=\ln(e^{i\pi})$ in \eqref{eq:first_phase}. This is the usual phase in the monodromy relations obtained from the projection onto the physical branch of the analytically continued function $T_1(z)$.
But note that we have also obtained a term $i \pi k_1\cdot k_m/2$ with an unusual factor of one half, originating from the overall factor of $i=e^{i\pi/2}$ in \eqref{eq:weird_phase}. We can take it to define a canonical branch for these bulk contours. It would be very interesting to understand its significance in the worldsheet CFT, but we leave such considerations to future investigations.
In summary, we find that, in the field theory limit, the contour $\mathcal{J}_{\mathbf{a}_+}$ of \eqref{eq:bulk-integrals} has the following phase:
\begin{equation}
\label{eq:mono-half}
\boxed{ \mathlarger{e^{i\pi\alpha' (\sum_{j=2}^{m-1} k_1\cdot k_j + k_1\cdot k_m/2)}}}\,.
\end{equation}
The computation on the $\mathbf{c}_+$ contour is essentially the same and produces the phase
\begin{equation}
\label{eq:mono-half-c}
\boxed{ \mathlarger{e^{i\pi \alpha' k_1\cdot k_m/2}}}\,.
\end{equation}
Before concluding this part, let us emphasize the following point. At a generic value of $\alpha'$, the integrals $\mathcal{J}_{\mathbf{a}/\mathbf{c}_\pm}$ are complex, unlike the integrals on the physical contours, which are purely real for real kinematics. When applying monodromies, those physical integrals acquire a phase, which is unambiguous. The situation is different for the integrals $\mathcal{J}_{\mathbf{a}/\mathbf{c}_\pm}$, since even when factoring out the phase mentioned above the integral remains complex.\footnote{These points were already discussed in \cite{Casali:2019ihm}, where it was observed that no canonical choice of branch exists for these integrals.}
What the phase above means is that, upon taking the field theory limit of the integrands, the first term of the $\alpha'$ expansion is real and given by Feynman graphs, while higher order terms remain complex. Therefore, when we write
\begin{equation}
\mathcal{J}_{\mathbf{a}/\mathbf{c}_\pm}\underset{\alpha'\to 0}{\simeq} e^{i\pi\alpha' (\sum_{j=2}^{m-1} k_1\cdot k_j + k_1\cdot k_m/2)} \times (\mathrm{Feynman\ graphs}+O(\alpha'))\,,\label{eq:J-limit-schem}
\end{equation}
we do not mean that the higher order terms in $\alpha'$ are real, in contrast with the physical integrals, where they are.
\subsection{Contact terms and triangles}
\label{sec:contact_triangle}
We now turn to the evaluation of the integral over $x$ in \eqref{eq:bulk-integrals}. As in the usual Bern-Kosower rules reviewed in section~\ref{subsec:ftl}, there are two cases of interest: when the integrand contains a monomial with exactly one derivative of the worldsheet propagator; and when it doesn't. Taking this into consideration we again write a generic integrand along the $A$-cycles as
\begin{equation}
\varphi(z_1)=\varphi_1 \dot{G}(z_m-z_1) +\varphi_2
\end{equation}
where $\varphi_1$ and $\varphi_2$ do not contain any monomials in derivatives of the propagator with arguments involving $z_1$. We take the ordering of $\mathcal{J}_{\mathbf{a}/\mathbf{c}_\pm}$ to be the same as in~\eqref{string_non_planar}, but note that since these calculations are only sensitive to the local structure of the integrand near $z_1$, the results below are valid for any other permutation provided the gauge $z_m=it$ is kept fixed.
\textbf{Triangles:} Triangle graphs are generated by monomials that have exactly one derivative of the worldsheet propagator, which in the field theory limit reduces to a cotangent function, see eq. \eqref{eq:G-dot-def}.
Just like in the Bern-Kosower rules, the field theory limit is only affected by the local behavior, so it is sufficient to consider the case of $\mathcal{J}_{\mathbf{a}_\pm}$ with $\varphi(x+it)=\dot{G}(z_m-z_1)=\dot{G}(-x)$, that is $\varphi_1=1$ and $\varphi_2=0$; the result is similar for the contours $\mathcal{J}_{\mathbf{c}_\pm}$. In the tropical limit, this integral descends to an integral of the form
\begin{equation}
\label{eq:inttr}
\mathcal{J}_{\text{trop},\mathbf{a}_\pm} = \int_0^{\pm1/2} \cot(-\pi x) e^{\alpha' i c x + \alpha' k_1\cdot k_m \ln\sin(\pi x)} dx,
\end{equation}
where momentum conservation was used to rewrite the $x$-dependence of the exponent \eqref{eq:exotic-phase-energy} as
\begin{equation}
\label{eq:f-def}
i c x + k_1\cdot k_m \ln\sin(\pi x) \qquad \text{with}\qquad c=\pi(2\ell +k_m)\cdot
k_1\,.
\end{equation}
This integral can be computed by expanding the exponential $e^{\alpha' i c x}$ since this term is regular when $x\to0$, and we are interested only in the first few orders in $\alpha'$. These give the integrals:
\begin{equation}
\label{eq:inttr2}
\mathcal{J}_{\text{trop},\mathbf{a}_\pm}^0= -\int_0^{\pm1/2} \cot(\pi x) e^{\alpha' k_1\cdot k_m \ln(\sin(\pm\pi
x))} dx =-\frac{1}{\alpha'\pi k_1\cdot k_m}+{\cal O}(1)
\end{equation}
\begin{equation}
\begin{aligned}
\label{eq:inttrcorr}
\mathcal{J}_{\text{trop},\mathbf{a}_\pm}^1 &= -\alpha' i c \int_0^{\pm 1/2} x \cot(\pi x) e^{\alpha'
k_1\cdot k_m \ln(\sin(\pm \pi x))} dx
=-\alpha' i c\,\frac{\ln(2)}{2\pi}+{\cal O}(\alpha'^2).
\end{aligned}
\end{equation}
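Both integrals can be verified numerically: the antiderivative of $\cot(\pi x)\sin(\pi x)^a$ is $\sin(\pi x)^a/(\pi a)$, which produces the pole of \eqref{eq:inttr2} exactly, while $\int_0^{1/2}x\cot(\pi x)\,dx=\ln 2/(2\pi)$ follows by integration by parts against $\int_0^{1/2}\ln\sin(\pi x)\,dx=-\ln 2/2$. A short check (plain Simpson quadrature, with a sample value of $a=\alpha' k_1\cdot k_m$):

```python
import math

def simpson(f, lo, hi, n=4000):
    # composite Simpson rule (n must be even)
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

cot = lambda x: math.cos(math.pi * x) / math.sin(math.pi * x)
a, eps = 0.2, 0.05

# eq. (inttr2): antiderivative of cot(pi x) sin(pi x)^a is sin(pi x)^a/(pi a),
# so the integral over (0, 1/2) is exactly 1/(pi a) -- the triangle pole.
lhs = simpson(lambda x: cot(x) * math.sin(math.pi * x)**a, eps, 0.5)
rhs = (1 - math.sin(math.pi * eps)**a) / (math.pi * a)
assert abs(lhs - rhs) < 1e-8

# eq. (inttrcorr): int_0^{1/2} x cot(pi x) dx = ln(2)/(2 pi)
g = lambda x: x * cot(x) if x > 0 else 1 / math.pi   # integrand is regular at 0
assert abs(simpson(g, 0.0, 0.5) - math.log(2) / (2 * math.pi)) < 1e-8
```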
The first integral \eqref{eq:inttr2} produces a tree-like contribution, which is the analogue of the standard case described by the Bern-Kosower rules when two punctures collide on the boundary, except that now it happens in the bulk.\footnote{Note that, contrary to the computation in eq.~\eqref{eq:tr-usual}, it was not necessary to zoom in on a neighborhood of $z_m=it$: the triangle contribution came from the full integral between $0$ and $\pm 1/2$. This is because we had already dropped the higher order terms, which are exponentially suppressed. Colliding the punctures produces the same pole, together with a cut-off contribution of relative order $\alpha'$ which drops out, as in eq.~\eqref{eq:tr-usual} and eq.~\eqref{eq:tr-usual_2} below.}
The next term in the $\alpha'$ expansion \eqref{eq:inttrcorr} produces terms at two orders higher in $\alpha'$ than the contribution from \eqref{eq:inttr2}, so $ \mathcal{J}_{\text{trop},\mathbf{a}_\pm}^1$ does not contribute in the field theory limit. The second order in $\alpha'$ is important here, because the monodromies can be seen as ${\cal O}(1)$ and ${\cal O}(\alpha')$ relations, so a term at first order could in principle enter the first order monodromies. However, we show below that this does not happen.
In order to fix normalizations, we also reproduce here the triangle computation arising from the \textit{physical} contour, where $z_1\sim i y$ approaches $z_m=it$ from below:
\begin{equation}
\label{eq:tr-usual_2}
\begin{aligned}
\mathcal{I}_{\text{trop}} &= i \int_{t-L}^t \cot(i\pi(t-y)) e^{\alpha' k_1\cdot
k_m\ln(-i\sin(i\pi(t-y)))}dy \\ &= \frac{(\sinh (\pi L )) ^{\alpha'
k_1{\cdot} k_m}}{\pi \alpha' k_1\cdot k_m} = \frac{1}{\alpha' \pi
k_1\cdot k_m}+\frac{\log (\sinh (\pi L))}{\pi }+O(\alpha').
\end{aligned}
\end{equation}
We see that triangles from the $A$-cycle contours and from the physical contours have the same normalization. As explained below eq.~\eqref{eq:tr-usual}, $L$ is an IR cut-off which drops out when taking into account the part of the integration between $y_{m-1}$ and $t-L$.
\textbf{Contact term:} Finally, we return to the bulk contour and investigate the case $\varphi=\varphi_2=1$, so that it does not contain a derivative of the Green's function and no subtree is generated. The integration can be done explicitly and leads to
\begin{equation}
\label{eq:no-tr}
\int_0^{\pm1/2}e^{\alpha'k_1\cdot k_m \ln \sin(\pm \pi x)}dx =\frac{\Gamma
\left(\frac{1}{2} (\alpha' k_1\cdot k_m +1)\right)}{2 \sqrt{\pi }
\Gamma \left(\frac{\alpha' k_1\cdot k_m}{2}+1\right)}=\pm\frac12
\mp\alpha'k_1{\cdot} k_m{\ln(2)}/2+O(\alpha'^2)
\end{equation}
giving an important factor of $\pm1/2$ at leading order. Since in the limit the particles $z_1$ and $z_m$ are squeezed together, we interpret the leading order in \eqref{eq:no-tr} as a diagram with a contact term involving particles $1$ and $m$ as depicted in figure~\ref{fig:bulk_contributions}.
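The Gamma-function evaluation \eqref{eq:no-tr} and its expansion can be checked numerically (a sketch for the $+$ contour, with a sample value of $a=\alpha' k_1\cdot k_m$):

```python
import math

def simpson(f, lo, hi, n=4000):
    # composite Simpson rule (n must be even)
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

a = 0.1   # sample value of alpha' k_1.k_m, + contour
numeric = simpson(lambda x: math.sin(math.pi * x)**a, 0.0, 0.5)
exact = math.gamma((a + 1) / 2) / (2 * math.sqrt(math.pi) * math.gamma(a / 2 + 1))
assert abs(numeric - exact) < 1e-4          # mild tolerance: x^a behavior at 0
# leading behavior 1/2 - a ln(2)/2 + O(a^2), the source of the factor 1/2
assert abs(exact - (0.5 - a * math.log(2) / 2)) < a**2
```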
Note that since we do not rescale $z_1$ in the field theory limit the contributions from the above integrals come at a higher order in $\alpha'$ than the other terms in the monodromy relations. This is easily seen by looking at how the moduli space measure scales for different terms in the monodromy relations~\eqref{string_non_planar}. In the $\mathcal{I}$ contributions all coordinates are rescaled in the field theory limit such that
\begin{equation}\label{eq:I_meas}
d \tau\prod_{i=1;i\neq m}^{n}d z_i\rightarrow (\alpha')^{-n} dT\prod_{i=1;i\neq m}^{n}dY_i
\end{equation}
while the contributions from $\mathcal{J}$ only have an overall $(\alpha')^{-(n-1)}$ from the measure, since the coordinate $z_1$ is not scaled in the limit. Another way of seeing this is to realize that each propagator in a trivalent graph in the field theory limit has to come with a factor of $\pi\alpha'$. This comes from the fact that propagators are generated in the tropical limit by integrals of the tropical Koba-Nielsen factor, where Mandelstam variables are multiplied by $\pi\alpha'$. This implies that the contact term comes with an extra overall factor of $\pi\alpha'$ compared with the other terms.
Note also the presence of a relative factor of $-i$ in front of the contact term, coming from the measure: the original integration contour for $z$ runs along the imaginary axis, where under analytic continuation $dy=-i dz$. Since the $\mathcal{J}$ terms are integrated along the real axis, where $dz = dx$, a relative factor of $-i$ appears. The triangle terms originating from $\mathcal{J}$ also have this extra $-i$ for the same reason, but due to another factor of $i$ coming from the analytically continued $-i\vartheta_1(z_m-z_1)$ in the integrand, the overall coefficient of these terms is $-1$, the same as if particle $1$ were above $m$ in a contribution away from the edges.
\begin{figure}
\centering
\includegraphics[scale=0.9]{bulk_contributions.pdf}
\caption{Graphs appearing in the field theory limit of the bulk cycles.}
\label{fig:bulk_contributions}
\end{figure}
\section{Field theory limit of the monodromy relations}
\label{sec:field-theory-limit-monodromy}
Let us now turn to studying the field theory limit of the relations.
We start by reviewing the tree-level setup to recall how the limit works in this simple case.
At finite $\alpha'$, the monodromy relations~\eqref{string_non_planar} are two linearly-independent identities. Plahte's relations~\eqref{eq:plahte} can therefore be rewritten as their sum and difference, leading to
\begin{equation}
\begin{aligned}
\sum_{i=1}^{n-1}\cos(\pi \alpha' k_1\cdot \sum_{j=2}^ik_j)A_{\text{tree}}(2,\dots, i, 1, i{+}1,\dots,n ) =0,\\ \sum_{i=1}^{n-1}\sin(\pi \alpha' k_1\cdot \sum_{j=2}^ik_j)A_{\text{tree}}(2,\dots, i, 1, i{+}1,\dots,n ) =0.
\end{aligned}
\label{eq:plahte-trigo}
\end{equation}
As $\alpha'\to0$, the trigonometric functions simplify as $\sin(\pi\alpha' s)\simeq \pi\alpha' s$, $\cos(\pi\alpha' s)\simeq 1$, and $A_{\text{tree}}$ descends to the field theory amplitude $A_{\text{tree}}^{\text{FT}}$, leading to
\begin{align}
\sum_{i=1}^{n-1}A_{\text{tree}}^{\text{FT}}(2,\dots, i, 1, i{+}1,\dots,n ) =0, \label{eq:KK}\\
\sum_{i=1}^{n-1} (k_1\cdot \sum_{j=2}^ik_j) A_{\text{tree}}^{\text{FT}}(2,\dots, i, 1, i{+}1,\dots,n ) =0.\label{eq:fundbcj}
\end{align}
The first relations are the Kleiss-Kuijf relations~\cite{Kleiss:1988ne}, while the second are the fundamental BCJ relations, known to be equivalent to BCJ relations between graphs \cite{Feng:2010my,Feng:2011fja}, as initially observed in \cite{BjerrumBohr:2009rd,Stieberger:2009hq}.
A more cavalier way to arrive at these identities is to take the field theory limit before taking the linear combinations in \eqref{eq:plahte-trigo}, and to extract the ${\cal O}(1)$ and ${\cal O}(\alpha')$ terms from the phases only. This requires, in principle, checking that no ${\cal O}(\alpha')$ corrections arise from the amplitudes themselves and mix with the ${\cal O}(\alpha')$ terms coming from the phases. The relations obtained in either way are identical.\footnote{An interesting related point, which, to our knowledge, was not raised in the literature before, is the following (see also \cite{Broedel:2012rc}). From the $+$ relation, involving only cosines of the phases, one can see that any order $\alpha'$ correction to the amplitude $A^{FT}$, call it $A_{\alpha'}$, must independently satisfy Kleiss-Kuijf relations of the form $\sum_\mathrm{permutations}A_{\alpha'}=0$.
Therefore, any ${\cal O}(\alpha')$ corrections to the amplitudes which would occur in the ${\cal O}(\alpha')$ relations cancel among themselves, independently of the ${\cal O}(\alpha')$ terms coming from expanding the phases. Indeed, the ${\cal O}(\alpha')$ relations would read $\sum \alpha' k\cdot k\, A^{FT}+ \sum A_{\alpha'}=0$, and the second sum vanishes by virtue of what was said above.}
At loop level, it is not hard to convince oneself that this shortcut reasoning works for the standard part of the relations, which involves the usual $\mathcal{I}$ integrands, since they are purely real. But there are also new terms, the $\mathcal{J}$ bulk contours, which are complex and do not have a well-defined phase. Therefore terms of the form $\mathcal{J}_{+}\pm \mathcal{J}_-$ occur, and one needs to be careful in handling them. It turns out that the analysis at orders ${\cal O}(1)$ and ${\cal O}(\alpha')$ continues to hold, due to (anti)-symmetry properties of the $\mathcal{J}_{\pm}$'s under complex conjugation. We shall therefore present the analysis of the field theory limit in terms of ${\cal O}(1)$ and ${\cal O}(\alpha')$ relations directly, rather than sums and differences, as this is less cumbersome.
Following up on the discussion above, we consider only the $+$ monodromy relation in eq.~\eqref{string_non_planar}. It takes the schematic form:
\begin{equation}
\sum_i e^{i\alpha' \pi \phi_i} \mathcal{I}_i+\sum_j e^{i\alpha' \pi \theta_j}\mathcal{J}_j=0\label{eq:mono-schem}\,,
\end{equation}
where the phases $\phi_i$ and $\theta_j$ consist of combinations of external and internal kinematic invariants, and the indices $i,j$ run over the set of contours in the relation. In the field theory limit, our analysis in section \ref{sec:field-theory-limit} shows that the integrands behave as
\begin{align}
\mathcal{I}_i &=\mathcal{I}^{\text{FT}}_i +\mathcal{O}(\alpha'^2),\\
\mathcal{J}_j &=-\mathcal{J}^{\text{FT}}_j -i\pi\alpha'\mathcal{J}^{\text{CT}}_j +\mathcal{O}(\alpha'^2),
\end{align}
where the superscript $\text{FT}$ denotes contributions from the usual trivalent graphs appearing in the field theory limit as described in the previous section. In other words, $\mathcal{I}^\mathrm{FT}$ and $\mathcal{J}^\mathrm{FT}$ are simply sums of trivalent graphs with a common definition of the loop momentum. The superscript $\text{CT}$ denotes contact terms that appear with an extra factor of $-i\pi\alpha'$ in front as described at the end of section~\ref{sec:field-theory-limit}.
Other signs and factors of $i\pi \alpha'$ are also carefully explained there. Concretely, $\mathcal{J}^\mathrm{CT}$ is the result of taking all the graphs appearing in $\mathcal{J}^\mathrm{FT}$ and removing the legs generated by subtrees from the bulk integrals.
In the field theory limit the monodromy relations are therefore written as
\begin{equation}
\label{eq:mono-FTL-schem}
\left( \sum_i \mathcal{I}^{\text{FT}}_i-\sum_j\mathcal{J}^{\text{FT}}_j \right)+ i\pi\alpha'\left( \sum_i \phi_i \mathcal{I}^{\text{FT}}_i - \sum_j\left(\theta_j\mathcal{J}^{\text{FT}}_j+ \mathcal{J}^{\text{CT}}_j\right)\right) + {\cal O}(\alpha'^2)=0\,.
\end{equation}
As noted above, and originally observed in~\cite{Tourkine:2016bak}, these relations split into two different sets. The ${\cal O}(1)$ terms constitute the Bern-Dixon-Dunbar-Kosower relations \cite{Bern:1994zx,DelDuca:1999rs} at one loop, and contain the relations found in \cite{Feng:2011fja} at two loops, as shown in \cite{Tourkine:2016bak}. For the planar case at ${\cal O}(1)$, they are the Boels-Isermann relations~\cite{Boels:2011mn,Boels:2011tp}.
Our goal in the next two subsections is to study in detail these two relations, and in particular characterize exactly the types of graphs which enter the relations.
\subsection{Cancellations at $\mathcal{O}(1)$}
\label{sec:cancelations-at-o1}
At ${\cal O}(1)$, there are two main classes of graphs that we need to investigate: those coming from an integrand containing a monomial in $\dot{G}_{i1}$, and the others. As explained in section \ref{sec:field-theory-limit}, the former produces a tree where particles $i$ and $1$ attach to the rest of the graph; the latter produces a propagator connecting particles $i$ and $1$.\footnote{Contact terms can also be produced in the $\mathcal{J}$ contours, but these only contribute at higher order in $\alpha'$.} These two situations can be visualized as two ways of attaching particles $i$ and $1$ to the main graph using a ``bridge'', as in figure~\ref{fig:box_cancel}.
\begin{figure}
\centering
\includegraphics{box_cancel.pdf}
\caption{Contributions from different boundaries that cancel each other.}
\label{fig:box_cancel}
\end{figure}
The cancellation within these classes of diagrams is immediate: for each diagram with a propagator between $1$ and $i$ originating from a contour where $z_1$ is in one boundary, the same diagram is also generated from a term where $z_1$ is in the other boundary but with a negative sign, see figure~\ref{fig:box_cancel}. Since at this order the phases in the monodromy relation do not contribute, these diagrams cancel term by term.
The diagrams with attached trees also cancel term by term, but care must be taken with the edge cases, that is, when $z_1$ approaches $0$ or $it$. If the triangle diagrams originate from a contour away from $z=0$ or $z=it$, as in figure~\ref{fig:triangle_cancel}, they cancel term by term due to the antisymmetry of $\dot{G}_{i1}$ in the integrand. This is shown explicitly below: in \eqref{eq:tr-usual-1}, $z_1$ approaches $z_i$ from below, and in \eqref{eq:tr-usual-2} it approaches $z_i$ from above:
\begin{multline}
\label{eq:tr-usual-1}
\mathcal{I}_{\text{trop}}^0 = i \int_{y_i-L}^{y_i}\cot(i\pi(y_i-y)) e^{\alpha' k_1\cdot
k_i\ln(-i\sin(i\pi(y_i-y)))} dy\\ = \frac{(\sinh (\pi L )) ^{\alpha'
k_1\cdot k_i}}{\pi \alpha' k_1\cdot k_i} = \frac{1}{\alpha' \pi
k_1\cdot k_i}+\frac{\log (\sinh (\pi L))}{\pi }+ {\cal O}(\alpha'),
\end{multline}
\begin{multline}
\label{eq:tr-usual-2}
\mathcal{I}_{\text{trop}}^1 = i \int_{y_i}^{y_i+L} \cot(i\pi(y_i-y)) e^{\alpha' k_1\cdot
k_i\ln(-i\sin(i\pi(y-y_i)))}dy \\=- \frac{(\sinh (\pi L )) ^{\alpha'
k_1\cdot k_i}}{\pi \alpha' k_1\cdot k_i} =- \frac{1}{\alpha' \pi
k_1\cdot k_i}-\frac{\log (\sinh (\pi L))}{\pi }+ {\cal O}(\alpha').
\end{multline}
\begin{figure}
\centering
\includegraphics{triangle_cancel.pdf}
\caption{Graphs with attached trees that cancel each other due to a relative minus sign.}
\label{fig:triangle_cancel}
\end{figure}
Since phases do not contribute at this order, these diagrams are simply summed and therefore cancel. Note that the dependence on $L$ is pushed to order $(\alpha')^2$, preventing the risk of spoiling the ${\cal O}(\alpha')$ relations. As argued in section~\ref{subsec:ftl}, all dependence on $L$ should vanish in the full contour.
At the edges, $z_1=0$ or $z_1=it$, the argument given above does not apply. There are triangles generated by these edge terms, see figure~\ref{fig:edge_triangle} a) and b), but they come with different labeling of the loop momenta and cancellation is not possible at fixed loop momentum with just these terms.
However, as demonstrated in section~\ref{sec:contact_triangle}, the $\mathcal{J}$ terms produce diagrams with trees attached with the right propagator prefactor to cancel these edge contributions.
The integrals in eqs.~\eqref{eq:inttr} and \eqref{eq:tr-usual_2} give precisely the correct sign for this cancellation to occur. Moreover, they have the same loop momentum assignment, which gives a complete cancellation, at fixed loop momentum. These extra contributions are depicted in figure~\ref{fig:edge_triangle}, c) and d).
This concludes our analysis of the field theory limit of the monodromy relations at order ${\cal O}(1)$.
\afterpage{\begin{figure}
\centering
\includegraphics{trees-bulk.pdf}
\caption{Four graph degenerations near the worldsheet edges, $(a$-$d)$. \underline{Top:} Worldsheet (gray) with puncture $z_1$ approaching $z_m$ from various boundaries. \underline{Bottom:} Corresponding graphs with resulting $(1m)$ tree pinching off.}
\label{fig:edge_triangle}
\end{figure}}
\subsection{Cancellation at $\mathcal{O}(\alpha')$ and BCJ triples}
\label{sec:canc-at-mathc}
The only new graph that has to be considered is the contact term that appears at $\mathcal{O}(\alpha')$ in the expansion of the $\mathcal{J}$'s, see figure~\ref{fig:bulk_contributions}. This is a structurally new mechanism: unlike before, where at ${\cal O}(\alpha')$ only the phases from the exponentials contributed, here we see an ${\cal O}(\alpha')$ term coming from the diagram itself. It is remarkable that this term precisely cancels the other graphs generated by the monodromies. Nevertheless, the cancellation itself is not surprising, since we know that the relations hold at finite $\alpha'$ \cite{Casali:2019ihm} and the field theory limit is simply a consequence of these relations.
As at tree-level, the cancellation relies on two mechanisms: the ${\cal O}(\alpha')$ terms of the phases conspire to cancel propagators in the graphs, resulting in the grouping of numerators from different graphs into BCJ triples; and string theory, through the Bern-Kosower rules, automatically produces color-kinematics-satisfying numerators in the field theory limit, for the graphs where the definition of the loop momentum does not jump within a Jacobi identity.
To our knowledge, this last observation is new, and we shall devote some time to explaining it now.
\paragraph{Numerators away from the $A$-cycle.}
For cycles away from the edges we can argue that numerators obey color-kinematics as follows (see also \cite{Mizera:2019blq}). Consider a Jacobi identity with legs $1$ and $j$. Generically, a string theory numerator will be of the form
\begin{equation}
\varphi = \dot{G}_{j1} \varphi_1 +\varphi_2,
\label{eq:num-dec}
\end{equation}
where $\varphi_{1}$ and $\varphi_{2}$ do not contain any terms with single poles as $z_1\rightarrow z_j$ (this definition allows $\varphi_2$ to have a $\dot{G}_{j1}^2$ term). This integrand gives three types of graphs in the field theory limit, each with its respective numerator $n_s,\,n_t$ or $n_u$, see figure~\ref{fig:stu}.
The term $\dot{G}_{j1}\varphi_1$ produces diagrams with attached trees, which we call \emph{triangle-type}, and diagrams with propagators connecting $1$ and $j$, which we call \emph{box-type}, in analogy with the situation at four points. These come with numerators
\begin{equation}
n_{1,\text{triangle}} = 2 \varphi_1,\qquad n_{1,\text{box}} = \varphi_1,
\end{equation}
where the factor of $2$ comes from rewriting the propagators in the standard form $1/(k_1 \cdot k_j ) = 2/(k_1+k_j)^2$. The term $\varphi_2$ produces only box-type graphs with numerator
\begin{equation}
n_{2,\text{box}} = \varphi_2.
\end{equation}
\afterpage{\begin{figure}
\centering
\includegraphics{graphstu}
\caption{Illustration where the $s$-, $t$-, $u$-channel graphs come from on the worldsheet.}
\label{fig:stu}
\end{figure}}
The numerators for each graph are the following combinations of the above:
\begin{equation}
\begin{aligned}
n_s& = n_{1,\text{box}} + n_{2,\text{box}},\\
n_t& = n_{1,\text{triangle}} ,\\
n_u& = -n_{1,\text{box}} + n_{2,\text{box}},
\end{aligned}
\end{equation}
which can be immediately checked to satisfy
\begin{equation}
n_s-n_t=n_u.
\end{equation}
This is the kinematic Jacobi identity. We conclude that numerators given by string theory away from the origin of the loop momentum obey color-kinematics duality.
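This elementary algebra can be checked symbolically; the following snippet (our own check, keeping $\varphi_1$, $\varphi_2$ as free symbols as in \eqref{eq:num-dec}) confirms the identity:

```python
# Symbolic check that the numerators built from phi = Gdot_{j1} phi_1 + phi_2
# satisfy the kinematic Jacobi identity for arbitrary phi_1, phi_2.
import sympy as sp

phi1, phi2 = sp.symbols('phi1 phi2')

n1_box, n1_tri, n2_box = phi1, 2*phi1, phi2   # assignments from the text
n_s = n1_box + n2_box
n_t = n1_tri
n_u = -n1_box + n2_box

assert sp.simplify(n_s - n_t - n_u) == 0      # n_s - n_t = n_u
```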
\paragraph{Numerators near the $A$-cycle.}
The above argument cannot hold near the $A$-cycles of the worldsheet since the monodromy relations do not have the required contours either above or below the $A$-cycle where the surface is cut.
However, there are contributions along this cycle, the $\mathcal{J}$ integrands, which exactly cancel the leftover numerators. To see this explicitly consider again a generic integrand of the form \begin{equation}\label{eq:gen_2}
\varphi=\dot{G}_{m1}\varphi_1+\varphi_2,
\end{equation}
where $z_m$ is the position of the particle fixed at the corner of the cut worldsheet. There are six contours that contribute to graphs where $1$ and $m$ are adjacent, divided into two sets of three coming either from the top or the bottom of the rectangle.
\afterpage{\begin{figure}
\centering
\includegraphics{edge_contours.pdf}
\caption{Contours that produce graphs where particles $1$ and $m$ are adjacent in the field theory limit and which have mutually consistent definition of the loop momentum. Three more contours are on the lower side of the annulus.}
\label{fig:edge_contours}
\end{figure}}
\paragraph{Consistency with monodromy relations near the $A$-cycle.}
For example, one set of diagrams with mutually consistent definition of loop momenta comes from a term where both particles are on the same boundary $\mathcal{I}(\dots 1,m|\dots)$, one where they are on different boundaries $\mathcal{I}(\dots m|1\dots)$, and one where $1$ is integrated along an $A$-cycle: $\mathcal{J}(\dots m|1\dots)$. These are depicted in figure~\ref{fig:edge_contours}. The monodromy phases produce a coefficient of the box-type diagram that can be simplified:
\begin{equation}
\label{eq:box_coeff}k_1\cdot\sum_{i=2}^{m-1}k_i-k_1\cdot \left(\sum_{i=2}^m k_i-\ell\right)=-k_1\cdot(k_m-\ell)=\frac{(k_m-\ell)^2}{2}-\frac{(k_1+k_m-\ell)^2}{2}.
\end{equation}
The second term cancels the propagator between $1$ and $m$ in these diagrams, leaving behind a contact diagram with coefficient $\frac{1}{2}(\varphi_1+\varphi_2)$. Recall that $\mathcal{J}$ produces a contact diagram with exactly the same form as the one left over from the box type but with a coefficient $-\frac{1}{2}\varphi_2$. This cancels one of the terms, leaving only $\frac{1}{2}\varphi_1$. This last term should be canceled by the triangle-type term, which can be seen easily: both $\mathcal{I}(\dots 1,m|\dots)$ and $\mathcal{J}(\dots m|1\dots)$ produce triangles with phase coefficients
\begin{equation}
\label{eq:tri_phase}
k_1\cdot \left(\sum_{i=2}^{m-1}k_i\right)-k_1\cdot\left(\sum_{i=2}^{m-1}k_i+\frac{k_m}{2}\right)=-\frac{1}{2}k_1\cdot k_m,
\end{equation}
which, combined with the integrand numerator $\varphi_1$, cancels the leftover term exactly. In this cancellation we see the crucial role played by the unusual $1/2$ phase that appears in $\mathcal{J}$.
More precisely, we have:
\begin{equation}
\includegraphics[scale=.9,valign=c]{edge_cancellation.pdf}
\end{equation}
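The two ingredients of this cancellation can be verified explicitly. The sketch below (our own check; the momenta are random and purely illustrative, with signature $(+,-,-,-)$, and only $k_1^2=0$ plus momentum conservation are used) confirms the propagator identity \eqref{eq:box_coeff} numerically, and checks symbolically that the three leftover contact terms, $\tfrac{1}{2}(\varphi_1+\varphi_2)$ from the box type, $-\tfrac{1}{2}\varphi_2$ from $\mathcal{J}$, and $-\tfrac{1}{2}\varphi_1$ from the triangle type, sum to zero:

```python
# (i) eq. (box_coeff): the phase coefficient equals a difference of inverse
#     propagators, with k1 massless and sum_i k_i = 0 (all incoming);
# (ii) the three contact terms left over by the box, J and triangle
#     contributions sum to zero. Momenta are random and illustrative only.
import numpy as np
import sympy as sp

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])        # mostly-minus Minkowski metric
dot = lambda a, b: a @ eta @ b

p = rng.normal(size=3)
k1 = np.array([np.linalg.norm(p), *p])        # null vector: k1^2 = 0
m = 5                                         # number of punctures (arbitrary)
ks = [k1] + [rng.normal(size=4) for _ in range(m - 2)]
ks.append(-sum(ks))                           # momentum conservation fixes k_m
ell = rng.normal(size=4)                      # arbitrary loop momentum
km = ks[-1]

lhs = dot(k1, sum(ks[1:m-1])) - dot(k1, sum(ks[1:m]) - ell)
rhs = 0.5*dot(km - ell, km - ell) - 0.5*dot(k1 + km - ell, k1 + km - ell)
assert np.isclose(lhs, rhs)                   # eq. (box_coeff)

phi1, phi2 = sp.symbols('phi1 phi2')
leftover = sp.Rational(1, 2)*(phi1 + phi2) - sp.Rational(1, 2)*phi2 \
         - sp.Rational(1, 2)*phi1             # box + J + triangle contacts
assert sp.simplify(leftover) == 0
```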
\paragraph{Consistency with monodromy relations away from the $A$-cycle.}
As described in \cite{Ochirov:2017jby,Tourkine:2019ukp}, the mechanism of cancellation of the propagators also works for cycles that lie away from the $A$-cycles.
To see this in detail, consider the generic non-planar terms $\mathcal{I}(\dots,1,j,\dots|\dots,p,\dots)$ and $\mathcal{I}(\dots,j,1,\dots|\dots,p,\dots)$, where particle $1$ is away from the edge and particle $p$ is some fixed particle which we use to group together graphs with the same internal momenta, as in figure~\ref{fig:general_prop}. The graphs of interest are those where, in the field theory limit, the worldsheet degenerates such that particle $p$ sits right before $1$ and $j$ in the resulting cyclic ordering.
A tree-type graph can only be generated when $1$ and $j$ sit on the same boundary, so it comes with a coefficient
\begin{equation}
k_1\cdot\sum\limits_{i=2}^j k_i ,
\end{equation}
which cancels the propagator in the tree-type graph with $1$ right after $j$, giving a contact term with numerator $n_t$. The two box-type graphs are generated from both the planar and the non-planar string amplitude; in this case their phase coefficients combine:
\begin{align}\label{eq:phase_1_j}
k_1\cdot\left(\sum_{i=2}^{j-1}k_i-\sum_{i=2}^{p-1}k_i+\ell\right)&=-k_1\cdot\left(\sum_{i=j}^{p-1}k_i-\ell\right)\nonumber\\
&=\frac{1}{2}\left(\ell-\sum_{i=j}^{p-1}k_i\right)^2-\frac{1}{2}\left(\ell-k_1-\sum_{i=j}^{p-1}k_i\right)^2
\end{align}
for the diagram with $1$ before $j$ and
\begin{equation}\label{eq:phase_j_1}
\frac{1}{2}\left(\ell-\sum_{i=j+1}^{p-1}k_i\right)^2-\frac{1}{2}\left(\ell-k_1-\sum_{i=j+1}^{p-1}k_i\right)^2
\end{equation}
for the diagram with $j$ before $1$. These give four canceled propagators, but only two of them are related to the BCJ triple where $1$ and $j$ are exchanged; that is, we look for the terms that cancel the propagator between $1$ and $j$. Minding the signs (all particles are taken as incoming), these are the first term in \eqref{eq:phase_1_j} and the second term in \eqref{eq:phase_j_1}; the other two contribute to other BCJ triples. The result is two contact terms with numerators $n_s$ and $n_u$. Since we canceled the propagator between $1$ and $j$ for these box-type graphs and the exchange propagator for the tree-type graph, they all produce the same contact-term graph with a numerator given by the sum
\begin{equation}
n_s-n_t-n_u=(\varphi_1+\varphi_2)-2\varphi_1-(-\varphi_1+\varphi_2)=0.
\end{equation}
In more detail, we have:
\begin{equation}
\includegraphics[scale=1,valign=c]{BCJ.pdf}
\end{equation}
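As a sanity check of this propagator reconstruction, the identity \eqref{eq:phase_1_j} can be verified numerically; the snippet below is our own illustration (the labels $j$, $p$, $n$ and all momenta are arbitrary, and only $k_1^2=0$ is used):

```python
# Numerical check of eq. (phase_1_j): the combined planar/non-planar phase
# coefficient equals a difference of inverse propagators. The labels j, p, n
# and all momenta are illustrative; the identity relies only on k1 being
# massless. Metric signature (+,-,-,-).
import numpy as np

rng = np.random.default_rng(1)
eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ eta @ b

q = rng.normal(size=3)
k1 = np.array([np.linalg.norm(q), *q])                 # k1^2 = 0
j, p, n = 3, 6, 8                                      # hypothetical ordering 1 < j < p <= n
ks = {i: rng.normal(size=4) for i in range(2, n + 1)}
ell = rng.normal(size=4)
S = lambda a, b: sum(ks[i] for i in range(a, b + 1))   # k_a + ... + k_b

lhs = dot(k1, S(2, j - 1) - S(2, p - 1) + ell)
rhs = 0.5*dot(ell - S(j, p - 1), ell - S(j, p - 1)) \
    - 0.5*dot(ell - k1 - S(j, p - 1), ell - k1 - S(j, p - 1))
assert np.isclose(lhs, rhs)
```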
This proof is completely generic and leads us to the conclusion that any set of numerators obtained from the Bern-Kosower rules at one loop, after bringing it to the form \eqref{eq:schem}, is automatically in a BCJ-satisfying representation. Again, this analysis holds for the numerators of those graphs involved in BCJ identities which do not relabel the loop momentum.
\begin{figure}[t]\centering
\includegraphics{general_prop.pdf}
\caption{How the graphs participating in a particular BCJ triple are generated. \underline{Top:} Planar graphs contribute to tree- and box-type graphs. \underline{Bottom:} Non-planar graphs only contribute to box-type diagrams. (We use $q^2=(\sum_{i=2}^{j-1}k_i+\sum_{i=p}^nk_i+\ell)^2 =(\ell-\sum_{i=j}^{p-1}k_i-k_1)^2$.)}
\label{fig:general_prop}
\end{figure}
\section{Discussion}
\label{sec:discussion}
\subsection{Summary of results}
\label{sec:summary-results}
\paragraph{Field theory limit of the bulk cycles.}
As we have seen in the text, the bulk contours contribute significantly to both the ${\cal O}(1)$ and ${\cal O}(\alpha')$ relations, at fixed loop momentum.\footnote{It was first observed in~\cite{Tourkine:2016bak,Hohenegger:2017kqy} that the ${\cal O}(1)$ relations can be seen as amplitude relations once the loop momentum is integrated out; these are the Bern-Dixon-Dunbar-Kosower relations \cite{Bern:1994zx}. As the bulk integrals simply differ by a relabeling of the loop momentum, they cancel each other out after integration, since the phases do not contribute at this order.} %
However, at fixed loop momentum, as shown above, they are needed already at ${\cal O}(1)$ for an exact cancellation between graphs (including tree-type graphs) which are identical but for the definition of the loop momentum.
At ${\cal O}(\alpha')$ they are essential for a complete cancellation. We showed that in the field theory limit of these cycles there is an unconventional phase, not seen before, in front of the $\mathcal{J}$ contours:
\begin{align}
e^{\pm\frac{1}{2}i\alpha'\pi k_1\cdot k_m}.
\end{align}
The unusual factor of $1/2$ was crucial for a complete cancellation among diagrams at fixed loop momentum.
As mentioned previously, there is no physical justification from string theory as to the existence and importance of those bulk contours, although they are unavoidable in the theory of twisted cycles~\cite{Casali:2019ihm}. Our analysis shows that they are related to ambiguities that arise in shifting the loop momentum, and that they restore the well-definedness of the string integrand under loop-momentum redefinitions.
\paragraph{BCJ numerators in Bern-Kosower representations.} In checking the exact cancellations between all graphs appearing in the monodromy relations, we also studied ``standard'' cancellations, where no ambiguity linked to the loop-momentum definition arises. We found that all Bern-Kosower numerators satisfy BCJ identities away from the points where the loop momentum jumps. To our knowledge, this is a new result. Below, we comment on the fact that we expect this to hold at any loop order, since it stems from the elementary properties of the derivative of the worldsheet Green's function: antisymmetry and local behavior.
\subsection{Towards KLT and higher loops}
\label{sec:towards-klt-loops}
\paragraph{Contact-terms of higher valency in a KLT formulation.}
In \cite{Casali:2019ihm}, we argued that a complete basis of integration cycles for the Kawai-Lewellen-Tye \cite{Kawai:1985xq} construction at loop level has to include bulk cycles. The bulk cycles described there, and used here, are those where only one particle moves in the interior of the annulus. Because the construction~\cite{Casali:2019ihm} was recursive, it is not hard to see that twisted cycles have to include cycles where more than one particle is inside the annulus.\footnote{Actually, apart from the one particle whose position is gauge-fixed by translation invariance, all particles could even be on those cycles. Which of those cycles would be related by monodromy relations can also be worked out within our formalism.}
In the tropical limit, those higher-bulk cycles would generalize our analysis and give rise to contact terms where $k$ propagators are canceled if $k$ particles sit in the bulk. A more thorough analysis is left for future work.
Following up on the discussion in~\cite{Casali:2019ihm}, one could therefore speculate about the form of a KLT formula at loop level. Since the bulk cycles are unavoidable, and since in the field theory limit they create contact terms, we are led to expect the following. In a general formulation of the double copy in string theory, \`a la KLT, one should expect to find squared contact-term graphs and products of ordinary graphs and contact-term graphs.
This would arise from a KLT formulation where twisted cycles are multiplied together by a yet-to-be-discovered momentum kernel.
It is however interesting to remark that, in the context of the generalized double copy \cite{Bern:2017yxu,Bern:2017ucb}, contact-terms were only needed at high loop orders (five loops), while they would seem to appear naturally already at one-loop.
This conundrum makes the question of figuring out a KLT formula at loop level even more pressing and interesting.
\paragraph{Higher-loop generalizations.} While extension of our results to higher-loop integrands requires extra care and is left for future work, let us briefly comment on which parts generalize straightforwardly and which need new analysis.
\begin{figure}
\centering
\includegraphics{unknown}
\caption{Our Jacobi identities do not involve doing purely internal BCJ moves at higher loops.}
\label{fig:internal-triple}
\end{figure}
First of all, let us delineate which BCJ triples we expect to be describable within our framework. Higher-genus monodromy relations, at least as described in \cite{Tourkine:2016bak,Hohenegger:2017kqy,Casali:2019ihm}, only involve identities corresponding to a puncture traveling along boundaries of the cut Riemann surface. Therefore, these identities always involve BCJ triples that have at least one external leg. An example of an ``internal'' BCJ triple at four loops, which is not contained in our framework, is shown in figure~\ref{fig:internal-triple}.
Secondly, the Jacobi identities related to triplets without loop-momentum ambiguity (away from the $A$-cycles) have been described in \cite{Tourkine:2019ukp,Casali:2019ihm}. The phase factors of the monodromies were shown to adequately cancel neighboring propagators and regroup graphs by BCJ triplets. Indeed, one can straightforwardly extend the reasoning of section~\ref{sec:canc-at-mathc} and show that, if a Bern-Kosower representation is provided at higher loops for a worldline integrand, it has to satisfy color-kinematics duality. In other words, granted that Bern-Kosower representations may be found at higher loops, we claim that the corresponding numerators which do not redefine the origin of loop momenta should obey Jacobi identities.\footnote{Concerning this latter question, one may worry about a few things. In the RNS formalism, the supermoduli space non-splitness~\cite{Donagi:2013dua,Witten:2012bh} prevents the naive existence of a purely bosonic integrand, which is the type of the Bern-Kosower representations. Sen's vertical integration~\cite{Sen:2014pia, Sen:2015hia} is in principle a prescription to project the supermoduli space integral to a bosonic one, but nothing is known about whether or not this could lead to Bern-Kosower representations. One may for instance worry that, during the integration-by-parts process to reach a Bern-Kosower integrand (with single derivatives of the Green's function), vertex operators could collide with picture-changing operators. If this happens, additional contact-term-like contributions may be expected. In purely bosonic representations (bosonic string, pure spinors), there are no such obstructions and, since the integration-by-parts sequence which leads to a given Bern-Kosower integrand does not depend on the genus of the Riemann surface, one may expect these representations to exist.}
\begin{figure}
\centering
\includegraphics[scale=0.8]{3-loop-pinch-a}
\caption{Mercedes graph degeneration. \underline{Left:} The decomposition of the Riemann surface in terms of $A$- and $B$-cycles. The loop momenta $\ell_I$ are flowing through $A_I$ for $I=1,2,3$. \underline{Right:} The $\mathcal{J}$ cycles (green) get trapped between the $B$ cycles in the neighborhood of the Mercedes degeneration.}
\label{fig:J-pinch}
\end{figure}
Finally, we can discuss triples in the neighborhood of the $A$-cycles: those suffer from the loop-momenta ambiguities analogous to the ones at one loop. Here a genuinely new feature arises: the $\mathcal{J}$ cycles can be pinched between holes of a Riemann surface in the tropical limit, cf. figure~\ref{fig:J-pinch} for a 3-loop Mercedes diagram degeneration. Apart from this important issue, which will be carefully addressed elsewhere, the analysis for other types of degenerations is quite similar to the one at genus one.
Let us therefore finish the paper with a brief discussion of those cases and illustrate it with a two-loop example.
The arguments of section~\ref{sec:canc-at-mathc} generalize to higher loops immediately, as they depend only on the local properties of derivatives of the Green's function: its antisymmetry on the worldline, $\partial_{z_i} G^{WL}(z_i,z_j) = -\partial_{z_j} G^{WL}(z_j,z_i)$, and the local behavior $\partial_z G(z) \sim 1/z$, which gives rise to the subtree graphs required for $t$-channel graphs.\footnote{See \cite{Dai:2006vj,Tourkine:2013rda} for the explicit expressions of the worldline Green's function at higher genus coming from the tropical or field theory limit of string theory. Along an edge, it is given by the geometric distance between two points, whose derivative is just the sign function, and therefore antisymmetric.}
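These two properties can be made concrete on the worldline, where (as recalled in the footnote) the edge Green's function is the geometric distance. A minimal numerical illustration (our own, with arbitrary test points):

```python
# Along an edge the worldline Green's function is the geometric distance,
# G(x, y) = |x - y|, and its derivative is the sign function. We check the
# two properties used in the text: antisymmetry under exchange of the two
# points, and consistency with a finite-difference derivative.
import numpy as np

G = lambda x, y: abs(x - y)
dG = lambda x, y: np.sign(x - y)      # d/dx G(x, y) = sign(x - y)

x, y = 0.7, 0.2                        # arbitrary non-coincident test points
assert dG(x, y) == -dG(y, x)           # antisymmetry

eps = 1e-6                             # finite-difference check of dG
assert np.isclose((G(x + eps, y) - G(x - eps, y)) / (2*eps), dG(x, y))
```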
Considering the types of graphs present at higher loops, each internal strip of the string graphs has its length $t_i$ undergoing tropical scaling, $t_i \sim T_i/(2\pi\alpha')$, and the $\mathcal{J}$ cycles contribute contact terms with width $1/2$. One aspect of the discussion is simpler at higher loops compared to one loop: there are no triangles generated by the internal $\mathcal{J}$ contours with only one puncture. This stems from the fact that the translation invariance which, at one loop, allowed one to fix a point at the origin of the $\mathcal{J}$ cycles is absent at higher loops.\footnote{This also raises the possibility at one loop that a different gauge choice could lead to removing those graphs. But the definition of the loop momentum would be modified and would introduce some extra parameter related to the distance at which the loop momentum is measured relative to an origin of the coordinate system. This parameter would possibly affect the monodromy relations and we do not know what effect it would have in terms of graph representation in the field theory limit.} Therefore, puncture $1$ will not be able to go infinitesimally close to $m$ on a $\mathcal{J}$ cycle, as in figure~\ref{fig:edge_triangle} for instance.\footnote{One could wonder if a configuration where puncture $m$ collides towards the origin of the $\mathcal{J}$ cycles at the same time as puncture $1$, but along a $\mathcal{J}$ cycle, could not lead to a triangle. Such degenerations are of higher order and measure zero at leading order in $\alpha'$.}
\begin{figure}
\centering
\includegraphics{two-loop-CT}
\caption{Two-loop string amplitude with contours from eq.~\eqref{eq:2-loop-monodromy}.}
\label{fig:two-loop-ct}
\end{figure}
We now consider an explicit example: the planar two-loop monodromy represented by integrating the position of leg $1$ within the two loop surface of figure~\ref{fig:two-loop-ct}. It reads
\begin{multline}
\label{eq:2-loop-monodromy}
\mathcal{I}(1234|\cdot|\cdot)+
e^{i\pi k_1\cdot k_2}\mathcal{I}(2134|\cdot|\cdot)+
e^{i\pi k_1\cdot k_{23}}\mathcal{I}(2314|\cdot|\cdot)+
e^{i\pi k_1\cdot k_{234}}\mathcal{I}(2341|\cdot|\cdot)+\\
e^{i\pi k_1\cdot \ell_1}\mathcal{I}(234|1|\cdot)+
e^{i\pi k_1\cdot \ell_2}\mathcal{I}(234|\cdot|1)=
e^{i\pi k_1\cdot \ell_1}(\mathcal{J}_{a,1}- \mathcal{J}_{c,1})+ e^{i\pi k_1\cdot \ell_2}(\mathcal{J}_{a,2}- \mathcal{J}_{c,2}),
\end{multline}
where the various contours are depicted in the figure (for more details see \cite{Casali:2019ihm}). The field theory limit of this relation produces, as usual, two identities, one at order ${\cal O}(1)$ and one at ${\cal O}(\alpha')$. At leading order, the graphs cancel by virtue of the antisymmetry of the $\dot G$ propagator; no contact terms contribute, as they are higher order in $\alpha'$. At the next order, the phases produce $\alpha'$ factors and the contact terms do contribute. We focus on the graphs involved in the Jacobi identity which flips leg $1$ past the origin of the $\mathcal{J}$ cycles on the outer boundary, which is the analogue of the one-loop case we studied above.
\begin{figure}
\centering
\includegraphics{two-loop-contact-terms}
\caption{Cancellation systematics at two loops. Top: two-loop string amplitude with contours from eq.~\eqref{eq:2-loop-monodromy}. Red lines correspond to propagators canceled by the phases of the monodromy relations. Bottom: contact terms generated by the $\mathcal{J}$ contours. While the top graphs seem to constitute a Jacobi triplet, this would require relating graphs with different definitions of the loop momentum. Therefore the graphs cannot cancel at fixed loop momentum in the monodromy relations. The role of the contact terms from the $\mathcal{J}_{a/c,1/2}$ integrals is precisely to remove these terms.}
\label{fig:two-loop-graphs}
\end{figure}
The graphs (a) in figure~\ref{fig:two-loop-graphs} come from the contours $\mathcal{I}(1234|\cdot|\cdot)$ and $\mathcal{I}(234|1|\cdot)$, likewise (b) comes from $\mathcal{I}(2341|\cdot|\cdot)$ and $\mathcal{I}(234|\cdot|1)$ and (c) from $\mathcal{I}(234|\cdot|1)$ and $\mathcal{I}(234|1|\cdot)$. An inspection of the phases in eq.~\eqref{eq:2-loop-monodromy} at order $\alpha'$ shows that correct inverse propagators are reconstructed to cancel the propagators connected at the origin of the $\mathcal{J}$ cycles, as in, e.g., eq.~\eqref{eq:phase_1_j}. This results in three contact-term graphs, which are equal, apart from the important fact that they have different definitions of the loop momenta. Therefore, they cannot cancel by a BCJ mechanism and another contribution from the monodromy relation is needed to remove those graphs.\footnote{Remember that the relations hold at fixed loop-momentum.}
The $\mathcal{J}$ cycles come to the rescue here, once again. Although we have not computed this coefficient from first principles, if the contact terms they give rise to come with a coefficient $1/2$, in strict analogy with the one-loop case, the two types of contributions cancel each other out term by term, at fixed loop momentum: the contact term $(d)$ comes from $\mathcal{J}_{a,1}$, $(e)$ from $\mathcal{J}_{c,2}$, and $(f)$ from $\mathcal{J}_{c,1}$ and $\mathcal{J}_{a,2}$. Each of these contact terms has the exact same loop-momentum assignment as the graphs with canceled propagators pictured above in figure~\ref{fig:two-loop-graphs}.
Furthermore, one can see that no other region of the moduli space can produce the corresponding terms. The cancellation therefore \textit{must} happen in this way. In this sense we have bootstrapped the field theory limit of the $\mathcal{J}$ contours from the knowledge that the monodromy relations must be satisfied. The only ambiguity that could arise in this bootstrap reasoning is the relative coefficient of $\mathcal{J}_{c,1}$ and $\mathcal{J}_{a,2}$ in the sum that cancels the contribution $\mathcal{I}(234|\cdot|1)+\mathcal{I}(234|1|\cdot)$ giving rise to the diagram $(c)$. But this coefficient does not play a role in the field theory limit (only the sum does), so it is guaranteed that $(c)$ and $(g)$ cancel exactly.
\section{Introduction}
In recent decades, the efficiency estimation problem has been a key aspect \textcolor[rgb]{0,0,1}{of} evaluating safety performance for pedestrian evacuations.
During emergency events (such as fires, earthquakes, and terrorist attacks), the pedestrians in the evacuation space are usually myopic, and many irrational behaviors may inherently and negatively affect the pedestrians' decision making in the evacuation process, eventually leading to an efficiency loss that causes massive casualties \cite{meng2019pedestrian,cheng2018emergence,haghani2019panic,guan2019towards}; therefore, some evacuation strategies are often designed beforehand to enhance the evacuation efficiency. In the literature, those evacuation strategies include choices on the room layout \cite{liao2014layout}, the number of exits \cite{liu2020fuzzy}, and the location of obstacles \cite{varas2007cellular}, to name but a few. To effectively evaluate evacuation efficiency,
establishing mathematical models {to describe} the pedestrians' {decision-making} behavior and the resulting phenomena in the evacuation process should be of prime importance \cite{lovaas1994modeling,helbing2000simulating,muller2014study,PhysRevE,huang2017behavior,alizadeh2011dynamic,fu2015influence,shahhoseini2019pedestrian,yang2021effect,chraibi2010generalized,hao2014exit,HRABAK2017486}.
For example, a cellular automaton (CA) model describing the change of cooperative behavior in heterogeneous pedestrians was investigated in \cite{huang2017weighted} to
examine how the pedestrians' dependency relationship \textcolor[rgb]{0,0,1}{influences} evacuation efficiency.
A floor field model investigating agitated behaviors \textcolor[rgb]{0,0,1}{with} elastic characteristics was presented in \cite{xie2012agitated}, whereas a fine discrete field CA model integrating anisotropy, heterogeneity, and time-dependent characteristics, was discussed in \cite{fu2018fine}.
A multi-grid CA model capturing the turning behaviors was proposed in \cite{miyagawa2020cellular}.
Furthermore, an extended floor-field CA model describing the group behaviors in crowd evacuation was found in \cite{lu2017study}, whereas
a modified social force model and a force-driven CA model were proposed in \cite{yang2014guided} and \cite{REN2021125965}, respectively, to reflect the evacuation process under guidance.
With the development of modern technology, especially in the aspect of \textcolor[rgb]{0,0,1}{the} imaging process, communication networks, and IoT technologies,
some evacuation assistants (or equipment) can be adopted in the (computer-aided) evacuation process \textcolor[rgb]{0,0,1}{to provide} accurate route
or exit information to the evacuees and hence help (guide) them to escape as efficiently as possible.
In the related works with guiding behaviors \cite{yang2016necessity,ma2016effective,ma2017dual,zhou2019guided,chu2017variable,zhou2019optimization,long2020simulation,yang2020guide}, Yang et al. \cite{yang2016necessity} are the first {researchers} \textcolor[rgb]{0,0,1}{to claim} the necessity of using guides (dynamic leaders) in the evacuation process, whereas the effects of leadership in single-exit rooms were investigated in \cite{ma2016effective} with a limited visible range.
The authors in \cite{ma2016effective} further stated that guiding behavior may yield a negative effect when too many pedestrians are neighboring the leader. Considering the situation where the dynamic leader attracts the crowds and \textcolor[rgb]{0,0,1}{moves} together with them towards one of the exits, the optimal number and positions of those leaders were derived in \cite{yang2020guide} and \cite{yang2020pedestrian}.
Even though \cite{ma2016effective} and \cite{yang2020guide} have claimed that the density of pedestrians plays an important role in the evacuation process, to our best knowledge, our previous study \cite{REN2021125965} is the first work indicating the possibility and benefit \textcolor[rgb]{0,0,1}{of} controlling the pedestrians' density. It was found that for a symmetric-exit evacuation space, unbalanced pedestrian distributions may be yielded because of
the mutual attractions among the pedestrians, but {those unbalanced distributions} can be suppressed by imposing some control mechanisms based on the data of pedestrian density from an observed region. Given the constant observed region, by adjusting the guiding signals according to the pedestrian densities around the exits, \cite{REN2021125965} showed preliminary evidence that imposing control mechanisms may be able to pursue a more balanced pedestrian distribution and an efficient evacuation process.
However, the effects of the size of the observed regions are not yet revealed. Furthermore, it is essential to note that the results in \cite{REN2021125965} did not consider the influence of \textcolor[rgb]{0,0,1}{data} time delay in the control mechanisms, which is unrealistic for an actual computer-aided evacuation process.
\emph{Contributions:} Different from \cite{REN2021125965} and the other existing results, the main contribution of this paper is to find the optimal size of the observed regions under different data time delays for a dynamic guiding
assistant system in partially observable asymmetric-exit evacuation. To this end, inspired by control theory, we propose a general framework of dynamic guiding
assistant systems with a density control algorithm by assuming that the evacuation assistants can observe only a few parts of the evacuation spaces.
We consider a simple on-off-based density control algorithm for evacuation assistants with delayed data of the
observed information, i.e., pedestrian densities in the regions near the corresponding exits. By involving a force-driven
cellular automaton model, we discuss strategic suggestions on \textcolor[rgb]{0,0,1}{setting} the observed region in this paper by presenting the numerical simulation results under homogeneous and heterogeneous pedestrians with respect to \textcolor[rgb]{0,0,1}{the size of the visual field.} Our numerical findings contribute to \textcolor[rgb]{0,0,1}{providing} some new insights on
designing computer-aided (control-based) guiding strategies in actual evacuations.
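For illustration, a minimal sketch of such an on-off density controller with a data time delay could be organized as follows; the class and variable names are hypothetical, and the simple below-the-mean switching rule is our stand-in for the actual control law used later in the paper:

```python
# Hypothetical sketch of an on-off density controller with delayed data.
# Each evacuation assistant switches its guiding signal on when the delayed
# density reading of its observed region is at most the mean over all exits,
# so as to attract pedestrians towards under-used exits.
from collections import deque

class OnOffDensityController:
    def __init__(self, n_exits, delay):
        # buffer of past readings implementing the data time delay;
        # until the buffer fills, the oldest available reading is used
        self.history = deque(maxlen=delay + 1)
        self.n_exits = n_exits

    def signals(self, densities):
        """densities: current pedestrian densities in the observed regions."""
        self.history.append(list(densities))
        delayed = self.history[0]            # reading from `delay` steps ago
        mean = sum(delayed) / self.n_exits
        # on-off law: guide on (1) for under-used exits, off (0) otherwise
        return [1 if d <= mean else 0 for d in delayed]

ctrl = OnOffDensityController(n_exits=2, delay=2)
for rho in ([0.8, 0.2], [0.7, 0.3], [0.5, 0.5]):
    s = ctrl.signals(rho)   # decision at step 3 uses densities from step 1
```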
This paper is organized as follows: A framework of dynamic guiding
assistant system with density control algorithm is proposed in \autoref{section2}. In \autoref{section3}, we first introduce the necessity of using dynamic guiding assistance in the
asymmetric-exit evacuation process and then investigate the optimal size of the observed regions for various scenarios with a slight data delay and a large data delay. Finally, the conclusion is given in \autoref{section:4}.
\emph{Notation:} We use $\mathbb R_+$ for the set of positive real numbers, $\mathbb N_+$ for the positive integers, and $\emptyset$ for the empty set.
\vspace{-10pt}
\section{System Description and Problem Formulation}\label{section2}
\vspace{-3pt}
\subsection{System Description}\label{Sec22}
Consider an evacuation system $\Pi$ consisting of several pedestrians in a multi-exit evacuation space. Each pedestrian dynamically changes its location over time to escape from the evacuation space through one of the $m\in\mathbb N_+$ exits according to its personal information, e.g., visible exits, communication with the other evacuees. We denote the set of exits by $\mathcal N_{\rm ex}\triangleq \{1,2,\ldots,m\}$.
As myopic decision-makers with limited information, the pedestrians may keep moving in the same direction even when congestion exists around the target exit.
In such a case, the exits may not be used efficiently. To help the pedestrians escape from the space more efficiently, in this paper we consider a guiding assistant system for the aforementioned evacuation system with $\varepsilon\in\mathbb N_+$ evacuation assistants (EAs), where the set of EAs is given by $\mathcal N_{\rm EA}\triangleq \{1,2,\ldots,\varepsilon\}$. In this system, each evacuation assistant $i\in\mathcal N_{\rm EA}$ is assigned to a corresponding exit $j\in\mathcal N_{\rm ex}$, and sends a \emph{guiding signal} to the pedestrians {to attract} them towards its own assigned exit. We assume that the guiding signal is dynamically adjustable by the evacuation assistant according to the \textcolor[rgb]{0,0,1}{real-time} situation of the evacuation process. Henceforth, we denote the guiding assistant system by $\mathcal G$.
\begin{figure}
\centering
\includegraphics[width=100mm]{fig1_yan_new-eps-converted-to.pdf}
\caption{Structure of the guiding assistant-based evacuation system. Each evacuation assistant observes partial information on the agents' locations and adjusts the guiding signal according to the observed information. \textcolor[rgb]{0,0,1}{With the density controller involved, the classic (static) guiding assistant system becomes a dynamic guiding assistant system,} where the guiding signals of $\mathcal G$ are understood as the inputs (feedback) to the evacuation system $\Pi$. With a well-designed control algorithm (e.g., a density control law), the states (locations) of the pedestrians and the evacuation efficiency may be controlled (positively affected). The detailed control algorithm and pedestrian evacuation model considered in this paper are given in Sections~\ref{model} and \ref{Sec23} below. }
\label{fig:3}
\end{figure}
From the viewpoint of control theory, it is essential to note that the guiding signal of the guiding assistant system $\mathcal G$ can be understood as the \emph{input} to the evacuation system, so that the states (locations) of the pedestrians are controlled (affected) by the EAs.
This fact holds independently of the underlying mathematical models that describe the decision-making process of pedestrians under guidance (guiding signals).
However, as a common issue in control systems, the states (locations) of the pedestrians may not be fully observed. Inspired by classic control theory, we note that an unobservable system may \textcolor[rgb]{0,0,1}{still be} controllable under some well-designed control algorithms \cite{LIU201510,wang2018event}. Consequently, in this paper, we assume that the EAs may observe partial information on the locations of the pedestrians (e.g., the pedestrian density $\rho_i$ near exit $i\in\mathcal N_{\rm ex}$) using some computer-aided technologies. For example, the EAs can observe the regions near the exits and adjust the intensity of the guiding signals according to the observed information. A fundamental question is how to set the size of the observed regions and control the guiding signals so as to enhance the evacuation efficiency as much as possible. To address this problem, we introduce a force-driven CA model under a density control algorithm in the next section to implement the structure of the guiding assistant-based evacuation system
illustrated in \autoref{fig:3}.
\begin{figure}
\centering
\includegraphics[width=100mm]{figure2-eps-converted-to.pdf}
\caption{Preferred moving directions for a given social force $F_n$. In (a), the sequence of preferred moving directions is given by upper right, right, upward and bottom right. In (b), as the cells in the upper right and bottom right are occupied, the sequence of preferred moving directions is given by right and upward, i.e., $P_n^3=P_n^4=\emptyset$. Supposing that the pedestrians are only allowed to compete for 2 rounds in terms of preferred moving directions (i.e., $k=2$), it is understood that the pedestrian $n$ in (a) (resp., in (b)) first competes for the upper right direction (resp., the right direction). If he loses in the competition, then the next competition target is the right direction (resp., upward direction) in (a) (resp., in (b)).}
\label{fig:2}
\end{figure}
\vspace{-5pt}
\subsection{Brief Introduction of Force-driven CA Model Under Density Control}\label{model}
In this section, we characterize the force-driven CA model describing the guided evacuation process on two-dimensional Moore cells, where each cell may be occupied by at most one pedestrian and the eight neighbor cells are the possible moving directions for the next time instant (see \autoref{fig:2}(a)).
Specifically, \textcolor[rgb]{0,0,1}{to be consistent with the literature on guided
evacuation \cite{yang2016necessity,ma2016effective,ma2017dual,yang2020guide,yang2020pedestrian},} we consider a square evacuation space and divide the space into $N\times N$ cells.
Letting $\mathbb {RM}(t)$ denote the set of the pedestrians remaining in the evacuation system $\Pi$ at time instant $t=0,1,2,\ldots$, we suppose that the remaining pedestrians simultaneously update their locations according to a social force-based evolution rule given in the following sections.
\vspace{-7pt}
\subsubsection{Force Definitions}
Before we present the detailed expression of the evolution rule, we define the social force consisting of a guiding force from an evacuation assistant, the mutual forces among visible pedestrians, and the attractive forces to visible exits for each pedestrian $n\in\mathbb {RM}(t)$, \textcolor[rgb]{0,0,1}{i.e.,}
{\setlength\abovedisplayskip{4pt}
\setlength\belowdisplayskip{4pt}
\begin{align}\label{social}
{F_{n} = \textcolor[rgb]{0,0,1}{ w_1 F_{\rm guide}^n+ w_2\!\sum\nolimits_{m\in\mathbb {VP}_n}{F_{\rm mutual}^{n,m}}+ w_3\!\sum\nolimits_{i\in\mathbb {VE}_n} F_{\rm exit}^{n,i},}}
\end{align}
where} \textcolor[rgb]{0,0,1}{$w_1$, $w_2$ and $w_3\in\mathbb R_+$ denote the positive weighting factors depending on the evacuation scenario,} and $\mathbb {VP}_n$ (resp., $\mathbb {VE}_n$) represents the set of visible pedestrians (resp., visible exits) for the pedestrian $n\in\mathbb {RM}(t)$.
Note that in \autoref{social}, $ F_{\rm guide}^n$ represents the guiding force pointing to the exit whose EA's guiding signal possesses the maximal \emph{signal-to-interference ratio}, i.e.,
{\setlength\abovedisplayskip{4pt}
\setlength\belowdisplayskip{4pt}\begin{equation}\label{guiding}
F_{\rm guide}^n = u_{I_n}D \vec r_{n,I_n},\quad I_n\triangleq\arg\max_{i\in\mathcal N_{\rm EA}}\left(\frac{u_i}{1+\sum_{j\in\mathcal N_{\rm EA}\setminus\{i\}}(r_{n,i}^2/r_{n,j}^2)}\right),
\end{equation}
where} $u_{i}\in[0,1]\subset\mathbb R$ refers to the intensity coefficient controlled by the evacuation assistant $i\in\mathcal N_{\rm EA}$, $D\in\mathbb R _+$ refers to the maximal intensity of the guiding signal, $r_{n,i}$ refers to the distance (steps) to the exit $i$,
and $\vec{r}_{n,I_n}$ refers to the unit vector pointing from \textcolor[rgb]{0,0,1}{the} pedestrian $n$ to the exit $I_n$ (see Ref.~\cite{REN2021125965});
$F_{\rm mutual}^{n,m}$ represents a mutual force from the visible pedestrian $m$ \cite{chen2012study,REN2021125965}, i.e.,
{\setlength\abovedisplayskip{4pt}
\setlength\belowdisplayskip{4pt}\begin{eqnarray}\label{mutual}
F_{\rm mutual}^{n,m}= \begin{cases} \textcolor[rgb]{0,0,1}{{\eta_1}\vec{r}_{m,n}},& \mbox{if $r_{n,m}=1$} \\
{\eta_2}/{r_{n,m}^2}\vec{r}_{n,m}, &\mbox{if $1<r_{n,m}\leq w$}\end{cases}, \quad m\in \mathbb {VP}_n,
\end{eqnarray}
where} $\eta_1\in\mathbb R_+$ refers to the repulsion coefficient, $\eta_2\in\mathbb R_+$ refers to the attraction coefficient, $w$ refers to the length of the \textcolor[rgb]{0,0,1}{visual field}, ${r}_{n,m}$ refers to the distance (steps) between the pedestrians $m$ and $n$, and $\vec{r}_{n,m}$ refers to the unit vector pointing from pedestrian $n$ to pedestrian $m$;
$F_{\rm exit}^{n,i}$ represents an attractive force to a visible exit \cite{chen2012study,REN2021125965}, i.e.,
{\setlength\abovedisplayskip{4pt}
\setlength\belowdisplayskip{4pt}\begin{equation}\label{exit}
F_{\rm exit}^{n,i}= E\vec{r}_{n,i}, \quad i\in\mathbb {VE}_n\triangleq\{i\in\mathcal N_{\rm ex}: r_{n,i}\leq w\},
\end{equation}
where} $E\in\mathbb R_+$ refers to the constant intensity, $r_{n,i}$ and $\vec{r}_{n,i}$ respectively refer to the distance and the unit
vector to the exit $i\in\mathcal N_{\rm ex}$.
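For illustration, the force definitions in \autoref{social}--\autoref{exit} can be sketched in Python as follows. This is a minimal sketch under our own assumptions: positions are 2-D coordinates, distances are Euclidean (the model itself measures distances in steps), and all function and variable names are illustrative rather than part of the model specification.

```python
import numpy as np

def unit(vec):
    """Unit vector; the zero vector maps to zero to avoid division by zero."""
    n = np.linalg.norm(vec)
    return vec / n if n > 0 else vec

def guiding_force(pos, exits, u, D):
    """Guiding force of Eq. (2): pick the EA whose guiding signal has the
    maximal signal-to-interference ratio, then pull toward its exit."""
    r = np.array([np.linalg.norm(e - pos) for e in exits])  # distances r_{n,i}
    sir = np.array([u[i] / (1 + sum(r[i] ** 2 / r[j] ** 2
                                    for j in range(len(exits)) if j != i))
                    for i in range(len(exits))])
    I = int(np.argmax(sir))
    return u[I] * D * unit(exits[I] - pos)

def mutual_force(pos, others, eta1, eta2, w):
    """Sum of the mutual forces of Eq. (3): repulsion at distance 1,
    inverse-square attraction up to the visual-field length w."""
    F = np.zeros(2)
    for q in others:
        r = np.linalg.norm(q - pos)
        if r == 1:
            F += eta1 * unit(pos - q)         # repulsion, points away from m
        elif 1 < r <= w:
            F += eta2 / r ** 2 * unit(q - pos)  # attraction toward m
    return F

def exit_force(pos, exits, E, w):
    """Sum of the attractions of Eq. (4) to all visible exits (r_{n,i} <= w)."""
    return sum((E * unit(e - pos) for e in exits
                if np.linalg.norm(e - pos) <= w), np.zeros(2))

def social_force(pos, others, exits, u, D, E, eta1, eta2, w,
                 w1=1.0, w2=1.0, w3=1.0):
    """Total social force F_n of Eq. (1)."""
    return (w1 * guiding_force(pos, exits, u, D)
            + w2 * mutual_force(pos, others, eta1, eta2, w)
            + w3 * exit_force(pos, exits, E, w))
```

For a pedestrian with a single EA at full intensity ($u_1=1$, $D=30$) and no visible neighbors or exits, the sketch returns a force of magnitude $30$ pointing at that exit, as \autoref{guiding} prescribes.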
\vspace{-3pt}
\begin{rem}\emph{
\textcolor[rgb]{0,0,1}{It is necessary to note that repulsions between pedestrians and obstacles may need to be added
to the expression of the social force (\ref{social}) when an obstacle appears in the visual field. The expression of such pedestrian--obstacle repulsions
can be found in \cite{chen2012study}. To be consistent with the literature on guided evacuation (see \cite{yang2016necessity,ma2016effective,ma2017dual,yang2020guide,yang2020pedestrian}), we do not consider an evacuation space with obstacles, and hence those forces are excluded in this paper. On the other hand, when heterogeneity
in the quality of the pedestrians \cite{chen2012study} is considered, the expressions of the repulsions and attractions
should be changed to $
F_{\rm mutual}^{n,m}=
\begin{cases} {\eta_1}q_mq_n\vec{r}_{m,n},& \mbox{if $r_{n,m}=1$} \\
{\eta_2}q_mq_n/{r_{n,m}^2}\vec{r}_{n,m}, &\mbox{if $1<r_{n,m}\leq w$}
\end{cases},$
where $q_m$ and $q_n$ are the qualities of the pedestrians $m\in \mathbb {VP}_n$ and $n$, respectively.}}
\end{rem}\vspace{-6pt}
After defining the social force, the preferences among moving-direction
candidates are derived by sorting the magnitudes of the components of $F_{n}$ along the $x$-, $y$-, $s$-, and $f$-axes (see \autoref{fig:2}). Henceforth, we denote the sequence of ordered preferred target cells by $P_n=[P_n^1,P_n^2,P_n^3,P_n^4]$, where $P_n^i$ refers to the $i$-th preferred target.
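One plausible reading of this decomposition can be sketched as follows: project $F_n$ onto the four axes, let the sign of each projection pick one of the two opposite neighbor cells on that axis, and sort by the magnitude of the projection. The axis conventions below ($s$ as the upper-right diagonal, $f$ as the lower-right diagonal) are our assumption based on \autoref{fig:2}, not an explicit definition in the text.

```python
import numpy as np

# Axis directions: x (right), y (up), and the two diagonals s and f.
AXES = {
    "x": np.array([1.0, 0.0]),
    "y": np.array([0.0, 1.0]),
    "s": np.array([1.0, 1.0]) / np.sqrt(2),   # upper-right / lower-left
    "f": np.array([1.0, -1.0]) / np.sqrt(2),  # lower-right / upper-left
}

def preferred_targets(F):
    """Order the four axes by the magnitude of the projection of F onto
    each; the sign of the projection selects which of the two opposite
    neighbour cells that axis contributes. Returns [(axis, cell offset)]."""
    entries = []
    for name, axis in AXES.items():
        p = float(F @ axis)
        step = np.sign(p) * axis if p != 0 else axis
        entries.append((abs(p), name, np.round(step)))
    entries.sort(key=lambda e: -e[0])
    return [(name, tuple(int(c) for c in cell)) for _, name, cell in entries]
```

For a force pointing up and to the right, such as $F_n=(2,1)$, this ordering yields upper right, right, upward, and bottom right, matching the sequence described for \autoref{fig:2}(a).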
\vspace{-7pt}
\subsubsection{Social Force-Based Evolution Rule}
Given an initial distribution of the pedestrians $\mathbb {RM}(0)$ and the intensity coefficients $u_i(0)$, $i\in\mathcal N_{\rm EA}$ of the guiding signals from the guiding assistant system $\mathcal G$, the social force-based evolution rule of the force-driven CA model is summarized as follows:
\vspace{-6pt}
\begin{enumerate}[(i)]\setlength{\itemsep}{-0.08cm}
\item Calculate the social force according to \autoref{social} and determine the sequence of ordered preferred target cells $P_n=[P_n^1,P_n^2,P_n^3,P_n^4]$ for all the remaining pedestrians $n\in\mathbb {RM}(t)$. If some of the desired targets have already been occupied, then the corresponding elements of $P_n$ are understood as empty. For example, the cells in the upper-right and bottom-right
directions in \autoref{fig:2}(b) are not considered as preferred target cells, and hence $P_n^3=P_n^4=\emptyset$, because those cells have \textcolor[rgb]{0,0,1}{already been} occupied.
\item Collisions and target competition. We suppose that each pedestrian $n\in\mathbb {RM}(t)$ possesses at most $k$ chances to compete for target cells with the other pedestrians according to its own sequence $P_n$. If a cell is targeted as the preferred moving direction by several pedestrians in one of the competition rounds, \emph{collisions} may happen \cite{tanimoto2010study}, in which case none of the conflicting pedestrians wins the competition. Here, we suppose that the probability that a \emph{collision} happens is $\varphi\in[0,1)$, whereas the probability that one of the conflicting pedestrians wins (and hence the target cell is assigned to him) is $1-\varphi\in(0,1]$. In the case where a collision does not happen, the winner is selected uniformly at \emph{random}. If the pedestrian $n$ loses in one of the competitions, then he continues to compete for the next unassigned preferred target cell until all $k$ chances are used. If a pedestrian loses all $k$ rounds of competition, he does not change his location at the next time instant\footnote{\textcolor[rgb]{0,0,1}{According to our observation and personal experience in the real world, when two pedestrians are targeting the same location, one is likely to change his direction when a collision is about to happen. This change of direction may happen almost at the same time as another pedestrian successfully reaches the targeted location. Therefore, in the rest of the paper, we suppose that each pedestrian has one
chance to alter his target, i.e., $k=2$.}}.
\item Update the locations of all the pedestrians $n\in\mathbb{RM}(t)$. Since the pedestrians who have reached one of the exits are understood to have successfully escaped from the evacuation system $\Pi$, release the escaped pedestrians from the simulation data set, let $t=t+1$, and update the set $\mathbb {RM}(t)$.
\item Calculate and output the pedestrian density $\rho_i(t)$ observed at the exit $i\in\mathcal N_{\rm ex}$.
\item Acquire the new information $u_i(t)$, $i\in\mathcal N_{\rm EA}$, and go to (i) to continue until all of the pedestrians have left the evacuation system $\Pi$ (i.e., $\mathbb {RM}(t)=\emptyset$).
\end{enumerate}
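The competition mechanism of step (ii) can be sketched as follows. The data layout (pedestrian identifiers, cells as coordinate tuples) and the function names are our own illustrative assumptions; the full evolution rule additionally updates locations, releases escaped pedestrians, and recomputes densities as in steps (iii)--(v).

```python
import random

def competition_round(claims, phi, rng=random):
    """One competition round: `claims` maps each pedestrian id to the cell
    it targets this round; returns {ped: cell} of winners. A contested cell
    ends in a collision (nobody wins) with probability phi; otherwise one
    claimant is picked uniformly at random."""
    by_cell = {}
    for ped, cell in claims.items():
        by_cell.setdefault(cell, []).append(ped)
    winners = {}
    for cell, peds in by_cell.items():
        if len(peds) == 1:
            winners[peds[0]] = cell
        elif rng.random() >= phi:            # no collision: pick a winner
            winners[rng.choice(peds)] = cell
    return winners

def resolve_targets(P, occupied, k, phi, rng=random):
    """Up to k rounds of competition over the preference sequences P
    (ped -> ordered list of target cells). Losers fall through to their
    next unassigned preference; pedestrians who lose all k rounds keep
    their current location (they get no entry in the result)."""
    assigned, taken = {}, set(occupied)
    nxt = {ped: 0 for ped in P}              # index of next preference to try
    for _ in range(k):
        claims = {}
        for ped, prefs in P.items():
            if ped in assigned:
                continue
            while nxt[ped] < len(prefs) and prefs[nxt[ped]] in taken:
                nxt[ped] += 1                # skip cells already granted
            if nxt[ped] < len(prefs):
                claims[ped] = prefs[nxt[ped]]
        for ped, cell in competition_round(claims, phi, rng).items():
            assigned[ped] = cell
            taken.add(cell)
        for ped in claims:
            if ped not in assigned:
                nxt[ped] += 1                # losers move to the next preference
    return assigned
```

With $\varphi=0$ a contested cell is always granted to exactly one claimant, and with $\varphi=1$ every contest collides and nobody moves, which matches the two extremes of the rule above.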
\begin{rem}\emph{
It is important to note that the proposed evacuation model requires the inputs (i.e., the intensity coefficient $u_i$, $i\in\mathcal N_{\rm EA}$) from the guiding assistant system $\mathcal G$ and offers the real-time data (partial information) to the density controller for redesigning the inputs. Consequently, when a density controller is established, the guiding signals are understood as the \emph{feedback} to the evacuation system for enhancing the efficiency of the evacuation process (see \autoref{fig:3}).}
\end{rem}
\vspace{-7pt}
\subsection{Density Control Algorithm}\label{Sec23}
In this section, we consider an on-off-based density control algorithm as the feedback control law for the guiding assistant system to adjust the guiding signals and hence affect the evacuation efficiency. Specifically, we suppose that all of the EAs only observe the values of pedestrian density $\rho_j(t)$ around their own assigned exit $j\in\mathcal N_{\rm ex}$ based on some computer-aided technologies (e.g., image processing) and use the observed information to update $u_i$, $i\in\mathcal N_{\rm EA}$ \textcolor[rgb]{0,0,1}{for} the guiding signals.
The density control algorithm considered in this paper is given by
{\setlength\abovedisplayskip{4pt}
\setlength\belowdisplayskip{4pt}
\begin{eqnarray}\label{BB1}
u_i(t)=\begin{cases}
1, & \rho_j(t-\zeta)\leq\rho^{\rm aim}_i,\\
0, & \rho_j(t-\zeta)>\rho^{\rm aim}_i,
\end{cases}
\quad i\in\mathcal N_{\rm EA},\quad t=1,2,3,\ldots,\vspace{-4pt}
\end{eqnarray}
where} $\zeta$ denotes the constant time-step delay of the data (observed information), $j$ refers to the exit index assigned to \textcolor[rgb]{0,0,1}{the} evacuation assistant $i$, and $\rho^{\rm aim}_i$ refers to the target pedestrian density set by the evacuation assistant $i$. In this case, the evacuation assistant $i$ turns on (resp., turns off) the guiding signal when the current density around its exit is smaller (resp., larger) than the target density\footnote{If $\rho^{\rm aim}_i=1$ holds for some $i\in\mathcal N_{\rm EA}$, then it is understood that those EAs never stop sending the guiding signals attracting the pedestrians towards their assigned exits. Alternatively, if $\rho^{\rm aim}_i=0$ holds for some $i\in\mathcal N_{\rm EA}$, then it is understood that those EAs only begin to send the guiding signals when all the pedestrians around their assigned exits have already escaped from the evacuation system $\Pi$.}.
\textcolor[rgb]{0,0,1}{It is important to note that when $\rho^{\rm aim}_i=1$, $i\in\mathcal N_{\rm EA}$, the guiding signal is always on, and hence
the guidance from the guiding assistant system $\mathcal G$ is understood as the classical (\emph{static}) guiding assistance. Alternatively, when $\rho^{\rm aim}_i\not=1$, $i\in\mathcal N_{\rm EA}$, the classical (\emph{static}) guidance from the guiding assistant system $\mathcal G$ becomes \emph{dynamic} guidance. }
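The on-off law \autoref{BB1} for a single evacuation assistant can be sketched directly; the only addition is a start-up rule (keep the signal on until $\zeta+1$ density samples have been observed), which is our assumption since the paper does not specify the behaviour before the first delayed observation arrives.

```python
from collections import deque

def make_density_controller(rho_aim, zeta):
    """On-off density control law for one evacuation assistant with a
    constant observation delay of `zeta` steps:
    u_i(t) = 1 if rho_j(t - zeta) <= rho_aim, else 0.
    Keeping the signal on during start-up is our own assumption."""
    history = deque(maxlen=zeta + 1)    # rho_j(t - zeta), ..., rho_j(t)
    def update(rho_now):
        history.append(rho_now)
        if len(history) <= zeta:        # delayed sample not yet observed
            return 1
        return 1 if history[0] <= rho_aim else 0
    return update
```

Note that $\rho^{\rm aim}_i=1$ reproduces static guidance (the signal never turns off), consistent with the discussion above.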
\vspace{-3pt}
\begin{rem}\emph{
\textcolor[rgb]{0,0,1}{Note that a continuous-function control law can be considered as the density control algorithm to enhance the evacuation efficiency if
the guiding signals $u_i$, $i\in\mathcal N_{\rm EA}$, are quantifiable. However, establishing quantifiable guiding
signals relies on the structure (equipment) of the guidance system. For example, a PI control algorithm
can be used in the evacuation process when the strength of the guiding signals
is shown to the pedestrians on IoT devices. Since turning the guiding effect on and off
does not rely on the structure of the guidance system, we use the on-off-based density control
in this paper.}}
\end{rem}\vspace{-6pt}
\emph{Problem}: Consider the guiding assistant system $\mathcal G$ with the density control algorithm shown in \autoref{BB1}. The main question investigated in this paper is how to set the observed regions and the target density in the guiding assistant system so as to enhance the evacuation efficiency as much as possible. Since the data collection (e.g., image processing) and data transmission take time,
the time-step delay $\zeta$ may not be very small, and this parameter may affect the evacuation efficiency under the guiding assistant system $\mathcal G$. In the following sections, we first consider the case where the time-step delay $\zeta$ is very small, and then we discuss the situation with a \textcolor[rgb]{0,0,1}{larger} time-step delay $\zeta$.
We evaluate the evacuation efficiency by using the notion of \emph{travel time} $T_{\rm end}$, which is defined as
the time instant at which all the pedestrians have successfully escaped from the evacuation system, i.e., $\mathbb {RM}(t)=\emptyset$ at $t=T_{\rm end}$.
\textcolor[rgb]{0,0,1}{The dynamic guiding assistant system (or, the density control algorithm) is said to be effective if the travel time is shorter than the one under static guidance (i.e., $\rho^{\rm aim}_i=1$, $i\in\mathcal N_{\rm EA}$).}
\vspace{-12pt}
\section{Simulation and Analysis}\label{section3}\vspace{-4pt}
\textcolor[rgb]{0,0,1}{Limited by
the difficulty of obtaining real data under the COVID pandemic, the validity of the force-driven CA model is
verified against the fundamental results in \cite{chen2012study,gaofengqiang2016,REN2021125965}.} The main objective of this paper is to \textcolor[rgb]{0,0,1}{obtain the optimal} control strategies
(e.g., target density, observed region) for the guiding assistant system $\mathcal G$ by analyzing simulation results.
In this section, we present and analyze our simulation results for an evacuation system $\Pi$ with $m=4$ exits and $\varepsilon=4$ evacuation assistants, where every evacuation assistant is located \textcolor[rgb]{0,0,1}{at} its assigned exit and observes the pedestrian density near that exit to turn on or shut down the guiding signal according to \autoref{BB1}.
Specifically, letting $N=23$, we divide the indoor evacuation space into $23\times 23$ cells, so that the evacuation system $\Pi$ may contain at most 529 pedestrians. We set $w_1=w_2=w_3=1$ in \autoref{social}, $D=30$ in \autoref{guiding}, $E=60$ in \autoref{exit}, and $\eta_1=10$ and $\eta_2=20$ in \autoref{mutual}. We consider the evacuation space with the asymmetric exit layout shown in \autoref{fig:5}.
\begin{figure}
\centering
\includegraphics[width=50mm]{initial-eps-converted-to.pdf}
\caption{Evacuation space with an asymmetric exit layout, with $N=23$ and $\mathcal N_{\rm ex}=\mathcal N_{\rm EA}=\{1,2,3,4\}$, where the four colored regions around the exits represent the observed regions for calculating pedestrian density, with radius $d$. The two pedestrians shown, possessing fields of view $w=3$ and $w=4$, are invisible to each other. Since neither of the two pedestrians appears in the observed regions, their information (location) is unknown to the evacuation assistants. }
\label{fig:5}
\end{figure}
\vspace{-3pt}
\begin{rem}\emph{
\textcolor[rgb]{0,0,1}{In general, the exit distribution (e.g., the number of exits and the degree of asymmetry of the exits) may influence the evacuation efficiency (see the results in \cite{yue2011simulation}). For example, different from the main contribution of this paper, the effect of density control with a symmetric exit distribution is revealed in \cite{REN2021125965} without considering data time delay. In terms of asymmetric exit distributions, it turns out that the tendencies of the simulation results shown in the following sections can still be found when we alter the distribution of the exits. Therefore, the evacuation space shown in \autoref{fig:5} is considered without loss of generality in the rest of this paper. }}
\end{rem}\vspace{-6pt}
For simplicity, we let $\rho^{\rm aim}_i=\rho_{\rm aim}$ for all $i\in\mathcal N_{\rm ex}$ and define the observed region as a rectangular area extended from the exit $i\in\mathcal N_{\rm ex}$ by $d\in\mathbb N_+$ steps (see the typical example of the four observed regions with $d=3$ in \autoref{fig:5}, where each of the EAs is able to observe at most 21 pedestrians near its exit).
Recalling the fact that the time step delay $\zeta$ of the observed data depends on the data collection structure, we first assume that the time step delay $\zeta$ is very small in \autoref{small} and then relax this assumption in \autoref{large} to discuss how the result changes when we have a more significant time step delay $\zeta$.
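The cell counts used here can be checked directly. The formulas below assume a one-cell-wide exit on a straight wall and a square Moore-type visual field, which matches \autoref{fig:5} but is our reading rather than an explicit definition in the text.

```python
def observed_cells(d):
    """Cells in a rectangular observed region extended d steps into the
    room from a one-cell-wide exit on a straight wall: (2d + 1) columns
    by d rows. (Exit width and wall placement are our assumptions.)"""
    return (2 * d + 1) * d

def visible_neighbours(w):
    """Cells in a Moore-type visual field of radius w, excluding the
    pedestrian's own cell."""
    return (2 * w + 1) ** 2 - 1
```

With $d=3$ this gives the 21 observable cells quoted above, and with $w=3$ it gives the at most 48 visible pedestrians mentioned in the simulations of \autoref{small}.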
It is important to note that, as one of the pedestrians' inherent properties, the size of the \textcolor[rgb]{0,0,1}{visual field} may differ among pedestrians, and hence the pedestrians in the evacuation system $\Pi$ may be homogeneous or heterogeneous.
As the simulation results might be strongly connected to this inherent property (i.e., the size of the \textcolor[rgb]{0,0,1}{visual field}),
to derive a strategic conclusion on how to set the target density and observed region without loss of generality, we also consider evacuation processes with heterogeneous pedestrians \textcolor[rgb]{0,0,1}{in} the following sections.
\subsection{Small Time Step Delay in Density Control Algorithm}\label{small}
\subsubsection{Necessity of Using Dynamic Guiding Assistances}\label{exmaple}
To begin, to indicate the necessity of using dynamic guiding assistance in the asymmetric-exit evacuation process,
we use the initial distribution shown in \autoref{fig:02}(a) in the simulation, where 317 pedestrians are distributed randomly in the evacuation space. The field of view is given by $w=3$ for all the pedestrians, so that each of them is able to observe at most 48 pedestrians around its location (see the grey circle in \autoref{fig:5} representing a pedestrian with $w=3$).
Letting the collision probability be $\varphi=40\%$, the distributions of the pedestrians under \emph{static} guiding assistance (i.e., $\rho_{\rm aim}=1.0$) are shown in \autoref{fig:02}.
It can be seen from the figure that, since the EAs never stop attracting the pedestrians no matter how large the density is in their observed regions, the pedestrians are quickly separated in the horizontal direction, and hence a blank zone appears in the middle of the evacuation space from the time instant $t=20$. The blank zone expands over time, and the three exits on the left-hand side of the evacuation space are almost unused in the later stage of the evacuation; i.e., the remaining pedestrians are \emph{only} located around exit 4 after time instant $t=60$.
As a result, it is observed that a great deal of evacuation capacity is wasted in the evacuation process under static guiding assistance.
\begin{figure}
\centering
\includegraphics[width=100mm]{no_control_all3new-eps-converted-to.pdf}
\caption{Distributions of homogeneous pedestrians under \emph{static} guiding assistance. The guiding signal is always on in the evacuation process since $\rho_{\rm aim}=1$. Pedestrians immediately huddle near the exits and never reverse their moving direction. A large group of pedestrians huddles around one of the exits (exit 4) even though the other 3 exits are not utilized at all, which makes the evacuation process run inefficiently. }
\label{fig:02}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=100mm]{step20_100-eps-converted-to.pdf}
\caption{Distributions of homogeneous pedestrians under \emph{dynamic} guiding assistance with $\rho_{\rm aim}=0.3$, $d=3$, and $\zeta=1$. The 4 colored regions represent the regions in which the evacuation assistants observe the crowd density. Each evacuation assistant turns on or shuts down its guiding signal according to the real-time data of the observed crowd density. Pedestrians do not huddle near the exits but stay in the central area of the evacuation space until the pedestrians in the observed regions are all evacuated. The 4 exits are still utilized efficiently even at the time instant $t=100$. Compared to the simulation results under static guidance in \autoref{fig:02}, this example indicates the significance of using \emph{dynamic} guiding assistance in the asymmetric-exit evacuation process.}
\label{fig:0}
\end{figure}
Next, we show a typical example of pedestrian distributions which reveals the effectiveness and significance of using \emph{dynamic} guiding
assistance. Specifically, in the simulation, we let $\rho_{\rm aim}=0.3$, $\zeta=1$, and $d=3$ (see the colored areas representing the observed regions in \autoref{fig:5}). The distributions of the pedestrians are illustrated in \autoref{fig:0}. Compared to the case with \emph{static} guiding assistance, the numbers of remaining pedestrians at the time instants $t=20$, $40$, $60$, $80$, and $100$ under \emph{dynamic} guiding assistance in \autoref{fig:0} are consistently smaller than the ones in \autoref{fig:02}. It can easily be seen in \autoref{fig:0} that the pedestrian densities in the four observed regions under control are much smaller than the ones in \autoref{fig:02}, so that the possibility of collisions and congestion happening around the exits is suppressed, which makes the evacuation run more efficiently. Another direct and interesting observation found in the comparison is that the pedestrians are no longer separated into two crowds in the horizontal direction but stay in the central region under \emph{dynamic} guiding
assistance (see \autoref{fig:0}(b)--(e)).
These simulation results capture the fact that the dynamic guiding assistant system with only partially observable data may still positively control (affect) the locations of all pedestrians in the multi-exit
evacuation process by involving the density control algorithm.
\begin{figure}
\centering
\includegraphics[width=120mm]{target1-eps-converted-to.pdf}
\caption{Travel time versus the target density $\rho_{\rm aim}$ in density control when $\zeta=1$, with different sizes of the observed regions. To avoid misinterpretation caused by the stochasticity in the evacuation model, all the average travel times are generated from more than 30 simulation runs. The initial distribution is shown on the left-hand side of the figure. \textcolor[rgb]{0,0,1}{The fitting
results are generated by the ``Smoothing Spline'' function of the Curve Fitting toolbox in Matlab,
with the goodness of fit (R-square) being larger than 0.995.}
The optimal size of the observed region is found to be 2. That is to say, the observed region is suggested to be small for the case where the time-step delay $\zeta$ is small. The suggested target density for the dynamic evacuation assistant system ranges from 0.3 to 0.65.
}
\label{fig:16}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=100mm]{targetDensitynew-eps-converted-to.pdf}\vspace{-8pt}
\caption{Distributions of homogeneous pedestrians under \emph{dynamic} guiding assistance with different target densities at the time instant $t = 40$ when $\zeta=1$, $d=6$. It is observed that the pedestrians are slowly attracted towards the exits when the target density is small, and an arching phenomenon occurs when the target density is large.
The pedestrians are obviously separated in the horizontal direction for $\rho_{\rm aim}\geq 0.7$, which should be avoided as discussed in \autoref{exmaple}. The number of pedestrians trapped in the arching phenomenon increases when the target density increases.
}
\label{fig:40}\vspace{-13pt}
\end{figure}
\subsubsection{Influence of Target Density and Size of Observed Regions}\label{Sec312}
In this section, we investigate the influence of the target density of the dynamic guiding assistant system on the travel time of the evacuation process and hence derive the optimal size of the observed regions. Letting $\rho_{\rm aim}\in[0.1,1]$, the response curves of the travel time of the evacuation process with respect to the target density are shown in \autoref{fig:16} for several different sizes of the observed regions, where the collision probability is still set to $\varphi=40\%$. Note that the presented curves are obtained from the average of the data from more than 30 simulation runs and show similar tendencies with respect to $\rho_{\rm aim}$. On the one hand, it can be seen from the figure that, compared to the case with static guidance (i.e., when the target density equals 1.0), the evacuation efficiency is significantly improved when the target density $\rho_{\rm aim}$ is moderate in the density control algorithm \autoref{BB1}.
Without loss of generality, letting the size of the observed regions be $d=3$, \autoref{fig:40} shows the pedestrian distributions at the time instant $t = 40$ when different target densities are used in the simulations. This figure provides evidence supporting that the target density for the dynamic evacuation assistant system should be a moderate value. In particular, it is observed from \autoref{fig:40} that pedestrians are slowly attracted towards the exits when the target density is small, and \textcolor[rgb]{0,0,1}{an} arching phenomenon occurs when the target density is large\footnote{The arching phenomenon refers to the scene in which a large group of
pedestrians huddles around an exit in the evacuation process. This phenomenon leads to serious conflicts and congestion among the pedestrians and eventually reduces the traffic efficiency of the exits. When designing the dynamic guiding assistant system, the arching phenomenon should be suppressed as much as possible to avoid conflicts and congestion.}.
On the other hand, it can further be seen from \autoref{fig:16} that, except for the case with $d=2$, the efficiency improvement made by the density control algorithm drops off as the size of the observed regions increases. The optimal size $d$ of the observed regions and the optimal target density $\rho_{\rm aim}$ are found to be $d=2$ and $\rho_{\rm aim}\in[0.3,0.7]$, respectively.
Note that this observation reveals an interesting fact: to enhance the evacuation efficiency, we only need to observe the pedestrians' locations in a small region near each exit instead of a large region when the time-step delay in the density control algorithm (\ref{BB1}) is very small.
\begin{figure}
\centering
\includegraphics[width=85mm]{thmer_collision-eps-converted-to.pdf}\vspace{-11pt}
\caption{ \textcolor[rgb]{0,0,1}{Number of collisions (triggered around the exits) versus target density with different sizes of the observation region when $\zeta=1$. The collision count is reduced when we narrow the observation
region of the dynamic guiding assistant system. Decreasing the target density of the on-off-based density control can significantly reduce the collisions around the exits. }}
\label{fig:18}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=92mm]{thmer_percent-eps-converted-to.pdf}
\caption{ \textcolor[rgb]{0,0,1}{Proportion of use of each exit versus target density with different sizes of the observation region when $\zeta=1$. The use of exits is more balanced by
narrowing the observation region for the dynamic guiding assistant system. The use of exits is extremely imbalanced under static guidance.}}
\label{fig:17}
\end{figure}
To further understand the intrinsic reason why the target density and the size of the observed region matter in the evacuation process, we illustrate the response curves of the number of collisions (triggered around the exits) with respect to the target density $\rho_{\rm aim}$ in \autoref{fig:18}.
It can be observed from the figure that the
collision count is reduced when we narrow the observation region of the dynamic guiding assistant system. Moreover, it can be seen that decreasing the target density of the
on-off-based density control (\ref{BB1}) can significantly reduce the collisions around the exits in the evacuation process.
On the other hand, the proportion of use of each exit versus target density with different sizes of the observation region is demonstrated in \autoref{fig:17}. It can be seen from this figure that the use of exits is extremely imbalanced under static
guidance and the density control can significantly balance the use of the four asymmetric exits, especially when the target density is set to $\rho_{\rm aim}\in(0.3,0.7)$. Also, it can be observed that the use of exits is more
balanced by narrowing the observation region for the dynamic guiding assistant system.
Now, it is important to note that the above observations of the optimal size $d$ hold without loss of generality when we consider different initial distributions or sizes of the visual field.
For example, considering the situation where the pedestrians at the initial time instant may include some taller pedestrians possessing a larger field of view,
we present the simulation results with heterogeneous pedestrians holding \textcolor[rgb]{0,0,1}{visual fields} with $w=3$ and $w=4$ under different mixing ratios in \autoref{fig:38}. Even though the tendencies of the curves are slightly different from \autoref{fig:16}, the size $d=2$ remains the optimal size of the observed regions in all of the simulations under different mixing ratios. Moreover, when $d\geq 3$, it is still observed that the efficiency improvement made by the density control algorithm drops off as the size of the observed regions increases in each of the sub-figures.
Even though the mixing ratios change the tendency of the response curves, the travel time under the target density $\rho_{\rm aim}\in(0.5,0.6)$ with $d=2$ is always the minimum value. As a result, we conclude that, regardless of the mixing ratio in the simulation, the observed region should be small when the time step delay $\zeta$ is small.
\begin{figure}
\centering
\includegraphics[width=130mm]{82-eps-converted-to.pdf}
\caption{Response of the travel time to the target density under density control with mixed or pure pedestrians possessing different or the same view sizes when $\zeta=1$. (a) 20 percent of the pedestrians possess a bigger \textcolor[rgb]{0,0,1}{visual field} in the simulation. (b) 50 percent of pedestrians possess a bigger \textcolor[rgb]{0,0,1}{visual field} in the simulation. (c) 80 percent of pedestrians possess a bigger \textcolor[rgb]{0,0,1}{visual field} in the simulation. (d) All pedestrians possess a bigger \textcolor[rgb]{0,0,1}{visual field} in the simulation. The initial distributions set in the simulations are shown on the left-hand side of the sub-figures, where the blue circles denote the pedestrians with a larger \textcolor[rgb]{0,0,1}{visual field} (i.e., $w=4$ instead of $w=3$). \textcolor[rgb]{0,0,1}{All of the fitting results are generated by the ``Smoothing Spline'' function of the Curve Fitting Toolbox in Matlab, with the goodness of fit (R-square) being larger than 0.995.} In all of the cases, the optimal size of the observed region is found to be 2. That is to say, the observed region should be set as a small one around the exit when the time step delay $\zeta$ is small. }
\label{fig:38}
\end{figure}
\subsection{Influence of Data Time Delay}\label{large}
\begin{figure}
\centering
\includegraphics[width=100mm]{thmer-eps-converted-to.pdf}
\caption{Travel time versus the target density $\rho_{\rm aim}$ in density control with different sizes of the observed regions and different time delays $\zeta$. The initial distribution considered in the simulations is the same as that used in \autoref{fig:16}. \textcolor[rgb]{0,0,1}{The travel time is reduced by slightly increasing the delay. However, when the time delay $\zeta$ is too large, the density control algorithm may no longer enhance the evacuation efficiency.}}
\label{fig:48}
\end{figure}
Recalling that the time step delay $\zeta$ of the density control algorithm is a given positive parameter depending on the data collection structure of the guiding assistant system, it is important to discuss how the optimal size changes when the given time step delay $\zeta$ changes. The travel time versus the target density $\rho_{\rm aim}$ in density control with different sizes of the observed region and different time delays is shown in \autoref{fig:48}, where we used the same initial distribution as in \autoref{fig:16} in all the simulations. It can be seen from \autoref{fig:48} that the travel time can be reduced by slightly increasing the delay. When the given time step delay $\zeta$ is larger, setting up a larger region to observe the pedestrians' locations is better.
These observations are physically intuitive: data with a large delay cannot represent the characteristics of the pedestrians' current situation in a small observed region, but in a large observed region such data may still capture partial characteristics of the pedestrians' real situation because of the intrinsic inertia of the crowd.
However, when the time delay $\zeta$ is too large, the density control algorithm may no longer noticeably enhance the evacuation efficiency (see \autoref{fig:48}). As a result, the time step delay allowed in our proposed density control algorithm is bounded. These numerical findings give important insights into designing computer-aided (control-based) guiding strategies in real evacuations.
\section{Conclusion}\label{section:4}
To enhance the evacuation efficiency in partially observable asymmetric-exit evacuation \textcolor[rgb]{0,0,1}{under guidance, a general framework of the dynamic guiding assistant system was proposed to investigate the effect of density control.} In the characterized system, multiple evacuation assistants are established to observe partial information of the pedestrians' locations and to adjust the guiding signals of the dynamic guiding assistant systems according to the observed information (i.e., pedestrian densities in the observed regions near the corresponding exits). Specifically, a simple on-off-based density control algorithm associated with a target density is considered for the evacuation assistants, based on delayed data of (observed) pedestrian densities, to meet the physical challenges in data collection, transmission, and implementation that often exist in realistic computer-aided evacuation processes.
Using a force-driven cellular automaton (CA) model, we presented simulation results under different data delays to give a strategic suggestion on how to set the observed region and the target density.
According to our numerical simulations, we first revealed the necessity of using dynamic guiding assistance in the asymmetric-exit evacuation process.
It was found that the proposed density control algorithm can control (positively affect) the global distribution of the pedestrians' locations and suppress arching phenomena in the evacuation process even using partially observed information under time delays. To derive the optimal size of the observed regions, we investigated the influence of the target density of the dynamic guiding assistant system on the
travel time of the evacuation process.
It was found that the dynamic guiding assistant system with only partially observable data can suppress the triggering of collisions around the exits and avoid inefficiently separating the pedestrians in the evacuation process. After showing various simulation results, we revealed an interesting fact, which holds without loss of generality, that to enhance the evacuation efficiency we only need to observe the pedestrians' locations in a small region near the exit instead of a \textcolor[rgb]{0,0,1}{large} region when the time step delay in the density control algorithm is very small. Furthermore, we found that the time step delay allowed in our proposed density control algorithm is bounded, since the density control algorithm can no longer significantly enhance the evacuation efficiency when the time step delay is too large. For both the small delay case and the large delay case, we suggested setting the target density of the density control algorithm to a moderate value. Our numerical results
are expected to provide insights into designing the computer-aided guiding strategies in real evacuations.
\textcolor[rgb]{0,0,1}{However, due to the difficulty of recruiting human volunteers during the COVID pandemic, conducting an actual experiment is left for future work. Moreover, there may be many different heterogeneities in the evacuation process, such as psychological considerations (risk-averse/risk-seeking), physical abilities of the pedestrians, and willingness to share the visual field, to name but a few. The analysis of those factors is a promising and necessary future research direction.}
As a preliminary study connecting the control theory and pedestrian evacuation theory, we only used a simple and intuitive control algorithm for the evacuation assistant system. However, there may exist possible future research directions when we apply some novel control algorithms to the evacuation process, such as model predictive control \cite{kohler2020computationally}, system identification \cite{mauroy2019koopman}, Q-learning for optimal control \cite{lee2018primal}, noise elimination, etc.
\section*{CRediT Authorship Contribution Statement}
{\bf Fengqiang Gao}: Investigation; formal analysis; writing - original draft preparation; reviewing (equal).
{\bf Zhihao Chen}: Software; validation; reviewing (equal).
{\bf Yuyue Yan}: Conceptualization; methodology; investigation; writing - review \& editing (lead).
{\bf Linxiao Zheng}: Visualization; reviewing (equal).
{\bf Huan Ren}: Group management; reviewing (equal).
\section*{Acknowledgments}
This work was supported jointly by Program for Young Excellent Talents in University of Fujian Province (201847) and China Scholarship Council (201908050058).
We thank Xie Chen for participating in a discussion in the early stage of this work.
\bibliographystyle{elsarticle-num}
\section{Introduction}
The science goals of modern radio telescopes are diverse and require the highest quality data to be delivered to the end users. In order to achieve this, data are taken at the highest resolution in time and in frequency, so that radio frequency interference (RFI) mitigation \citep{Wilensky2019} can be carried out satisfactorily. An equally important data processing step is the elimination of systematic errors from the data, also called {\em calibration}. Systematic errors are introduced by the Earth's atmosphere and by the instrument itself. Calibration is essentially an estimation problem and, for its success, the data need to have a sufficient signal to noise ratio (SNR); in other words, a sufficient number of data samples need to be considered together (thus increasing the effective SNR). However, the number of data samples that can be accommodated in compute memory is limited. Hence, the common practice is to average the data before any calibration is performed. On the one hand, this is inevitable due to limited memory, but on the other hand, averaging also loses some valuable information.
In this paper, we propose a paradigm shift in the processing of radio interferometric data, as shown in Fig. \ref{new_cal}. We re-use a widely used concept in modern machine learning, namely stochastic learning or training, and introduce {\em stochastic} calibration of radio interferometric data. We calibrate data at the highest resolution, i.e., at the same resolution at which RFI mitigation is performed. Normally, calibrating data at this resolution would require a huge amount of compute memory. We overcome this by working with a subset of data at each iteration of calibration; we call this subset of data a {\em mini-batch}. These mini-batches are sequentially fed to the calibration algorithm, and at convergence we find a solution that is valid for the full dataset being calibrated. Note that what we call the {\em full dataset} here is the dataset within a specific time and frequency interval; in other words, the domain of the calibration solutions is defined by this time and frequency interval. Moreover, this full dataset is still only a fraction of the total data being calibrated in a long observation, where multiple calibration solutions are obtained.
Working with mini-batches of data introduces a fundamental problem: increased variance \citep{VarReduc} due to the lower SNR compared with the full dataset. This has already been noticed in calibration as well, for instance by \cite{Jeffs06} in their demixed peeling approach to calibration. There are many ways of reducing this variance and we refer the reader to e.g. \citep{robbins1951,VarReduc,Adam} for some widely used methods in machine learning. There is, however, a subtle difference between most machine learning problems and calibration, namely the size of the data. In most machine learning problems, the full dataset can be pre-loaded into compute memory, but this is nearly impossible for radio interferometric data at the highest resolution. Hence the data need to be read from disk during each iteration of calibration. The number of times the full dataset (divided into many mini-batches) is passed through the learning algorithm is called an {\em epoch} in machine learning jargon and we use the same term here as well. Because we read the data from disk storage during calibration, it is also important to minimize the number of epochs needed for finding a satisfactory solution. With this objective, we have already introduced a stochastic, limited memory Broyden Fletcher Goldfarb Shanno (LBFGS) algorithm in \citep{escience2018,DSW2019}. Compared with the commonly used gradient descent based algorithms \citep{robbins1951,Adam} in machine learning, the LBFGS algorithm has faster convergence \cite{Fletcher,Liu1989} and hence needs a lower number of epochs in calibration, as we show later.
\begin{figure*}
\begin{minipage}{0.98\linewidth}
\begin{center}
\centering
\input{stochastic_calibration.pdf_t}
\vspace{0.1in}
\end{center}
\end{minipage}
\caption{The proposed calibration of data at highest resolution, compared with traditional data processing.\label{new_cal}}
\end{figure*}
There is a wide variety of calibration algorithms (too numerous to list here, see e.g. \cite{DSW2019} and references therein), each having their own merits and demerits. However, one common aspect of these algorithms is that they all operate in full batch mode. By introducing stochastic calibration and enabling calibration of data at the highest resolution, we get many additional benefits as we describe below.
\begin{itemize}
\item RFI mitigation will work better if the signals from the celestial sources are subtracted from the data \citep{Wilensky2019}. Hence, the residual of stochastic calibration (where the sky is subtracted) will reveal many more weak RFI signals. { One caveat here is that the RFI mitigation is dependent on the accuracy (and completeness) of the sky model used in calibration.}
\item The spatial localization of fast radio bursts (FRB) \citep{Chat2017} need calibrated data at the highest resolution and stochastic calibration is an obvious choice to provide this.
\item The removal of strong celestial sources (the Sun, Cassiopeia A, Cygnus A etc.) that appear far away from the field of view is best done at the highest resolution of data. Stochastic calibration is an improvement to \citep{Jeffs06} in this regard.
\item Radio polarimetric science (rotation measure synthesis) \cite{Brentjens2005,Schnitz2015} can also benefit from stochastic calibration. First, by preserving the frequency resolution of the data, we can maximize the range of Faraday depth in the data. Secondly, by preserving the time resolution during calibration, we can overcome depolarization due to rapid Faraday rotation of the data.
\item Low power devices provide an energy efficient alternative for processing of data by telescopes such as the SKA \citep{SKA0,Spreeuw2019}. However, such devices have limited compute memory and stochastic calibration provides a feasible calibration algorithm in that case.
\item Distributed calibration where the data is calibrated using a network of compute agents { that exchange information available at multiple frequencies} is shown to give better results \citep{DCAL,Brossard2016,DMUX,OLLIER2018} { than conventional single-frequency calibration}. For instance, in \citep{Patil2017}, about $300$ sub-bands are calibrated using about $60$ compute agents. The data used in this case is averaged by a factor of about $60$ in frequency before calibration is performed. If the same data are calibrated at the original resolution, the number of compute agents needed would increase from $300$ to about $18000$ which would overwhelm the network. We propose a distributed stochastic calibration scheme for this situation where the number of compute agents that need to communicate is minimized.
\item The calibration for the instrumental pass band (bandpass calibration) is normally done using large time intervals because the bandpass is assumed to vary very slowly with time. With stochastic calibration, we can use such large time intervals without being limited by compute memory.
\end{itemize}
We emphasize that as shown in Fig. \ref{new_cal}, stochastic calibration is not a replacement for traditional calibration that is done at a later stage of data processing. On the contrary, it is an enhancement to traditional data processing where all existing data processing stages can follow stochastic calibration.
The rest of the paper is organized as follows. In section \ref{sec:model}, we introduce the data model we use in radio interferometry. In section \ref{sec:calib}, we present distributed stochastic calibration of multi-frequency radio interferometric data. In section \ref{sec:simul}, we compare the performance of the stochastic LBFGS algorithm \cite{DSW2019} to first order learning algorithms commonly used in machine learning \cite{Adam}, in stochastic calibration using PyTorch \cite{paszke}. We also provide an example of distributed stochastic calibration in section \ref{sec:simul}. Finally, we draw our conclusions in section \ref{sec:conc}.
{\em Notation}: Lower case bold letters refer to column vectors (e.g. ${\bf y}$). Upper case bold letters refer to matrices (e.g. ${\bf C}$). Unless otherwise stated, all parameters are complex numbers. The set of complex numbers is given as ${\mathbb C}$ and the set of real numbers as ${\mathbb R}$. The matrix inverse, pseudo-inverse, transpose, Hermitian transpose, and conjugation are referred to as $(\cdot)^{-1}$, $(\cdot)^{\dagger}$, $(\cdot)^{T}$, $(\cdot)^{H}$, $(\cdot)^{\star}$, respectively. The matrix Kronecker product is given by $\otimes$. The vectorized representation of a matrix is given by $\mathrm{vec}(\cdot)$. The $i$-th element of a vector ${\bf y}$ is given by $y[i]$. The identity matrix of size $N$ is given by ${\bf I}_N$. All logarithms are to the base $e$, unless stated otherwise. The Frobenius norm is given by $\|\cdot \|$. Rounding up to the nearest integer is done by $\lceil \cdot \rceil$.
\section{Radio interferometric data model}\label{sec:model}
In this section, we give an overview of the data model we use, especially in relation to stochastic calibration. The interferometric signal formed by the receiver pair $p$ and $q$ is given as \citep{HBS}
\begin{equation} \label{V}
{\bf V}_{pq}=\sum_{i=1}^K {\bf J}_{pi} {\bf C}_{pqi} {\bf J}_{qi}^H + {\bf N}_{pq}
\end{equation}
where we have signals from $K$ directions in the sky being received. The systematic errors along the $i$-th direction are given by ${\bf J}_{pi}$ and ${\bf J}_{qi}$ ($\in {\mathbb C}^{2\times 2}$) for the $p$-th and $q$-th station, respectively. The source { coherency} ${\bf C}_{pqi}$ ($\in {\mathbb C}^{2\times 2}$) is generally well known and calculated using a sky model \citep{TMS}. The noise ${\bf N}_{pq}$ ($\in {\mathbb C}^{2\times 2}$) is assumed to have complex circular Gaussian entries. All entries of (\ref{V}) are time and frequency dependent and this is implicitly assumed throughout the paper. { The total number of stations is $N$, therefore $p,q\in[1,N]$ and the total number of baselines per given time and frequency is $N(N-1)/2$.}
Calibration is the determination of ${\bf J}_{pi}$ and ${\bf J}_{qi}$ in (\ref{V}) for all $p,q,i$ and for the full time and frequency domain of the data. Since there are too many time and frequency points at which data are taken, solutions for ${\bf J}_{pi}$ and ${\bf J}_{qi}$ are obtained for finite time and frequency intervals, that cover many data points. In order to use our stochastic LBFGS \citep{DSW2019} algorithm for calibration, we need to convert (\ref{V}) to a model with real values. First, we vectorize (\ref{V}) and get
\begin{equation} \label{vecV}
{\bf v}_{pq}={\bf s}_{pq}({\bmath \theta}) +{\bf n}_{pq}
\end{equation}
where ${\bf s}_{pq}({\bmath \theta})=\sum_{i=1}^{K}({\bf J}_{qi}^{\star}\otimes{\bf J}_{pi}) \mathrm{vec}({\bf C}_{pqi})$, ${\bf v}_{pq}=\mathrm{vec}({\bf V}_{pq})$, and ${\bf n}_{pq}=\mathrm{vec}({\bf N}_{pq})$. We represent the parameters ${\bf J}_{pi}$ and ${\bf J}_{qi}$ (that are ${\mathbb C}^{2\times 2}$ matrices) as ${\bmath \theta}$, a vector of real parameters of length $8NK$ ($\in {\mathbb R}^{8NK\times 1}${, the factor $8$ comes from representing the $4$ complex entries of each $2\times 2$ Jones matrix as $8$ real values}). Consider that there are $T$ samples in time and $B$ samples in frequency within the time and frequency interval where calibration is performed. For a full observation, there will be many such intervals to cover the full integration time and bandwidth. We stack all data points within the calibration interval into vectors as
\begin{eqnarray} \label{modvec}
{\bf x}=[\mathrm{real}({\bf v}_{12}^T),\mathrm{imag}({\bf v}_{12}^T),\mathrm{real}({\bf v}_{13}^T),\ldots]^T\\\nonumber
{ {\bf m}({\bmath \theta})=[\mathrm{real}\left({\bf s({\bmath \theta})}_{12}^T\right),\mathrm{imag}\left({\bf s({\bmath \theta})}_{12}^T\right),\mathrm{real}\left({\bf s({\bmath \theta})}_{13}^T\right),\ldots]^T}\\\nonumber
\end{eqnarray}
where ${\bf x}$ and ${\bf m}({\bmath \theta})$ are vectors of size $N(N-1)/2\times 8\times T \times B$ ($\in {\mathbb R}^{4TBN(N-1)}$). { In (\ref{modvec}), ${\bf x}$ represents the data being calibrated and ${\bf m}({\bmath \theta})$ represents the predicted model visibilities based on the current value of ${\bmath \theta}$.}
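The vectorization identity used to go from (\ref{V}) to (\ref{vecV}), $\mathrm{vec}({\bf A}{\bf X}{\bf B})=({\bf B}^T\otimes{\bf A})\mathrm{vec}({\bf X})$, can be checked numerically. The following standalone sketch uses arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
def cplx(shape):
    return rng.normal(size=shape) + 1j * rng.normal(size=shape)

# Random 2x2 Jones matrices and source coherency for one direction.
J_p, J_q, C = cplx((2, 2)), cplx((2, 2)), cplx((2, 2))

# Direct form of eq. (V) for one direction (noise-free).
V = J_p @ C @ J_q.conj().T

# Vectorized form of eq. (vecV): vec(J_p C J_q^H) = (J_q^* kron J_p) vec(C).
# Note: vec() must stack columns, so use Fortran order with numpy arrays.
vec = lambda A: A.reshape(-1, order='F')
s = np.kron(J_q.conj(), J_p) @ vec(C)

print(np.allclose(vec(V), s))  # True
```

The column-stacking convention matters here: with row-major (C-order) reshaping the identity would not hold.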
{ We use a robust noise model for ${\bf n}_{pq}$ as in \cite{Kaz3} during calibration. For maximum likelihood estimation, the negative log-likelihood of the data is minimized. Ignoring the terms independent of ${\bmath \theta}$, the cost function to be minimized becomes}
\begin{equation} \label{gLBFGS}
g({\bmath \theta})=\sum_{i=1}^{4TBN(N-1)} \log\left(1 +\frac{\left({\bf x}[i]-{\bf m}({\bmath \theta})[i]\right)^2}{\nu}\right)
\end{equation}
where ${\bf x}[i]$ and ${\bf m}({\bmath \theta})[i]$ represent the $i$-th elements of ${\bf x}$ and ${\bf m}({\bmath \theta})$, respectively, and $\nu$ is the degrees of freedom \citep{Kaz3}. { It is possible to select the most suitable value for $\nu$ based on the data itself as in \citep{Kaz3}. However, as we are also after reducing the computational cost of calibration, we select a low value $\nu=2$ for improved robustness (note that by making $\nu \rightarrow \infty$, we get a Gaussian noise model).}
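To illustrate why (\ref{gLBFGS}) is robust, the following standalone sketch (our own illustration, not the implementation of \citep{Kaz3}) compares the Student's t based cost with the least-squares cost on residuals containing a single strong outlier, such as an RFI spike; the array size and random seed are arbitrary:

```python
import numpy as np

def robust_cost(x, m, nu=2.0):
    # Eq. (gLBFGS): sum_i log(1 + (x_i - m_i)^2 / nu)
    r = x - m
    return np.sum(np.log1p(r * r / nu))

def gaussian_cost(x, m):
    # Least-squares cost, recovered from (gLBFGS) as nu -> infinity
    r = x - m
    return np.sum(r * r)

rng = np.random.default_rng(0)
x = rng.normal(size=100)       # clean residuals (model m = 0)
x_out = x.copy()
x_out[0] += 100.0              # one strong outlier, e.g. an RFI spike
m = np.zeros(100)

# The robust cost grows only logarithmically with the outlier amplitude,
# whereas the Gaussian cost grows quadratically.
print(robust_cost(x_out, m) - robust_cost(x, m))      # a few nats
print(gaussian_cost(x_out, m) - gaussian_cost(x, m))  # of order 1e4
```

Because an outlying sample changes the robust cost by only a few nats, its influence on the minimizer of (\ref{gLBFGS}) is bounded, which is the rationale for the low $\nu$ chosen above.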
In (\ref{gLBFGS}), the total number of data points used to evaluate the cost function is $4TBN(N-1)$, and this is the full batch size. In time samples, the full batch size is $T$. In stochastic calibration, we use $M$ time samples $1\le M \le T$ to obtain a solution for (\ref{gLBFGS}). Therefore, the full batch is divided into $\lceil T/M \rceil$ mini-batches. In other words, if the $i$-th mini-batch has $g_i({\bmath \theta})$ as the cost function,
\begin{equation} \label{minicost}
g({\bmath \theta})=\sum_{i=1}^{\lceil T/M \rceil} g_i({\bmath \theta}).
\end{equation}
Note also that in (\ref{minicost}), in spite of working with mini-batches of data, we still find one solution for ${\bmath \theta}$ that minimizes $g({\bmath \theta})$, covering the full $4TBN(N-1)$ data points. Therefore, the time and frequency domain of the solution for ${\bmath \theta}$ is determined by the full batch of data. Because we work with $M$ time slots instead of $T$, we require less memory and computation if $M\ll T$. The main drawback of this approach however is increased variance \citep{robbins1951} and minimizing this has been well studied (e.g., \cite{VarReduc}). The stochastic LBFGS algorithm we have developed in \citep{DSW2019} also takes into account the increased variance as are other versions of stochastic LBFGS \citep{Berahas,Bolla,Li2018}. In section \ref{sec:calib}, we enhance the performance of stochastic calibration by exploiting the continuity of ${\bmath \theta}$ over frequency and using the full bandwidth of the observation into our advantage as in \cite{DCAL}.
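The decomposition (\ref{minicost}) can be sketched numerically: splitting the $T$ time samples into $\lceil T/M \rceil$ mini-batches and summing the per-batch costs reproduces the full-batch cost. A stochastic optimizer such as the LBFGS of \citep{DSW2019} would instead visit one $g_i({\bmath \theta})$ per iteration; this toy check (with arbitrary data and batch size) only verifies the decomposition itself:

```python
import math
import numpy as np

def robust_cost(x, m, nu=2.0):
    # Robust cost of eq. (gLBFGS), summed over the samples given.
    r = x - m
    return np.sum(np.log1p(r * r / nu))

def minibatch_costs(x, m, M, nu=2.0):
    # Split the T samples into ceil(T/M) mini-batches and return the
    # list of per-batch costs g_i, as in eq. (minicost).
    T = len(x)
    n_batches = math.ceil(T / M)
    return [robust_cost(x[i * M:(i + 1) * M], m[i * M:(i + 1) * M], nu)
            for i in range(n_batches)]

rng = np.random.default_rng(1)
T, M = 100, 16               # 100 time samples, mini-batches of 16
x = rng.normal(size=T)       # stand-in for the stacked data vector
m = np.zeros(T)              # stand-in for the model vector

g_i = minibatch_costs(x, m, M)
print(len(g_i))                                 # ceil(100/16) = 7
print(np.isclose(sum(g_i), robust_cost(x, m)))  # True: eq. (minicost) holds
```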
\section{Distributed stochastic calibration}\label{sec:calib}
Calibration is performed over a small time and frequency interval compared with the full observation that has a large integration time and a wide bandwidth. We propose a scheme where we can work with mini-batches of data and at the same time, exploit the continuity of systematic errors over frequency as in \citep{DCAL,DMUX}. We introduce the distributed computing framework as shown in Fig. \ref{block}, which is a refinement of our previous work (e.g. Fig. 1 of \cite{DMUX}). The main difference in Fig. \ref{block} compared to our previous work is that we have $D$ fusion centres instead of just one. On top of this, we have a higher level centre where only averaging is performed.
The motivation behind the framework shown in Fig. \ref{block} is to handle significantly more frequency channels than in our previous work \citep{DMUX}. As we discussed before (and as shown in Fig. \ref{new_cal}), data at the highest resolution will have orders of magnitudes more channels than data that is averaged. Therefore, adopting a strategy with only one fusion centre as in \citep{DMUX} would be prohibitive in terms of the network bandwidth required at the fusion centre. In order to overcome this, we propose a hierarchy, where we have $D$ fusion centres $D>1$ and each fusion centre is connected to a subset of compute agents. In Fig. \ref{block} for instance, the $1$-st fusion centre is connected to $C$ compute agents and these $C$ compute agents have access to data at $P$ frequencies. If we have a similar data distribution in other fusion centres as well, the top level federated averaging centre gets information only from $D$ agents instead of $PD$ (or $CD$ with multiplexing) as in our previous schemes.
Each fusion centre and the compute agents connected to it perform consensus optimization as in \citep{DCAL,DMUX}. We describe this for the $1$-st fusion centre in the following, but the same calibration scheme is also performed by other fusion centres and their compute agents. The Jones matrices for the $k$-th direction, at frequency $f_i$, for all $N$ stations are represented in block form as
\begin{equation}
{{\bf J}_{kf_i}}\buildrel\triangle\over=[{\bf{J}}_{1k{f_i}}^T,{\bf{J}}_{2k{f_i}}^T,\ldots,{\bf{J}}_{Nk{f_i}}^T]^T,
\end{equation}
where ${{\bf J}_{k{f_i}}} \in \mathbb{C}^{2N\times 2}$. We represent the cost function (\ref{gLBFGS}) with ${\bf J}_{kf_i},\ k\in[1,K]$ as input by $g_{f_i}(\{{\bf J}_{kf_i}:\ \forall k\})$. It is straightforward to get $g({\bmath \theta})$ from $g_{f_i}(\{{\bf J}_{kf_i}:\ \forall k\})$ (and vice versa) by mapping ${\{{\bf J}_{kf_i}:\ \forall k\}}$ to ${\bmath \theta}$ and we omit the details here.
Following our previous work, we enforce smoothness in frequency by the constraint
\begin{equation}
{\bf J}_{kf_i}={\bf B}_{f_i} {\bf Z}_{k}^{(1)}
\end{equation}
where ${\bf B}_{f_i} \in \mathbb{R}^{2N\times 2FN}$ and ${\bf Z}_{k}^{(1)} \in \mathbb{C}^{2FN\times 2}$. The polynomial basis (with $F$ basis functions) evaluated at frequency $f_i$ is given by ${\bf B}_{f_i}$. The global variable (but {\em local} to the fusion centre $1$) is given by ${\bf Z}_{k}^{(1)}$. We remind the reader that we use the superscript $(\cdot)^{(1)}$ to denote that ${\bf Z}_{k}^{(1)}$ is local to fusion centre $1$ and its $C$ compute agents. Calibration with the frequency smoothness constraint is formulated as
\begin{eqnarray} \label{conscalib}
\{{\bf {J}}_{kf_i},\ldots,{\bf {Z}}_k^{(1)}:\ \forall\ k,i\}=\underset{{\bf {J}}_{kf_i},\ldots,{\bf {Z}}_k^{(1)}}{\rm arg\ min} \sum_i g_{f_i}(\{{\bf J}_{kf_i}:\ \forall k\})\\\nonumber
{\rm subject\ to}\ \ {\bf {J}}_{kf_i}={\bf {B}}_{f_i} {\bf {Z}}_k^{(1)},\ \ i\in[1,P],k\in[1,K]\\\nonumber
{\rm and}\ \ {\bf {Z}}_k^{(1)}=\overline{{\bf Z}}_k, \ \ k\in[1,K].
\end{eqnarray}
The key difference from our previous work is the additional constraint ${\bf {Z}}_k^{(1)} = \overline{{\bf Z}}_k$, where $\overline{{\bf Z}}_k \in \mathbb{C}^{2FN\times 2}$ is a global variable that is available to {\em all} fusion centres and this is calculated at the federated averaging centre in Fig. \ref{block}. { We introduce Lagrange multipliers for each constraint, namely, ${\bf {Y}}_{kf_i}$ ($\in \mathbb{C}^{2N\times 2}$) for the constraint ${\bf {J}}_{kf_i}={\bf {B}}_{f_i} {\bf {Z}}_k^{(1)}$, and, ${\bf X}_{k}$ ($\in \mathbb{C}^{2FN\times 2}$) for the constraint ${\bf {Z}}_k^{(1)}=\overline{{\bf Z}}_k$.} To find a solution for (\ref{conscalib}) at fusion centre $1$, { we need to minimize the original cost and the cost due to the constraints} and we form the augmented Lagrangian as
\begin{eqnarray} \label{aug}
\lefteqn{
L(\{{\bf {J}}_{kf_i},{\bf {Z}}_k^{(1)},{\bf {Y}}_{kf_i},{\bf X}_{k}: \forall\ k,i\}) }\\\nonumber
&& =\sum_i g_{f_i}(\{{\bf J}_{kf_i}:\ \forall k\})\\\nonumber
&& + \sum_{i,k} \left( \| {\bf {Y}}_{kf_i}^H ({\bf {J}}_{kf_i}- {\bf {B}}_{f_i} {\bf {Z}}_k^{(1)})\| + \frac{\rho}{2} \| {\bf {J}}_{kf_i}- {\bf {B}}_{kf_i} {\bf {Z}}_k^{(1)} \|^2 \right) \\\nonumber
&&+\sum_k \left(\|{\bf X}_k^H\left( {\bf {Z}}_k^{(1)} - \overline{{\bf Z}}_k \right)\|+ \frac{\alpha}{2} \| {\bf {Z}}_k^{(1)} - \overline{{\bf Z}}_k \|^2 \right).
\end{eqnarray}
In (\ref{aug}), $\rho \in \mathbb{R}^{+}$ is the regularization factor for smoothness in frequency and $\alpha \in \mathbb{R}^{+}$ is the regularization factor for variable ${\bf {Z}}_k^{(1)}$ over all fusion centres. We use consensus alternating direction method of multipliers (ADMM) \citep{boyd2011} exactly as before \citep{DCAL,DMUX} to find solutions for ${\bf {J}}_{kf_i}$ and ${\bf {Y}}_{kf_i}$. The only difference is finding a solution for ${\bf X}_{k}$ and ${\bf {Z}}_k^{(1)}$. The gradient of $L(\{{\bf {J}}_{kf_i},{\bf {Z}}_k^{(1)},{\bf {Y}}_{kf_i},{\bf X}_{k}: \forall\ k,i\})$ with respect to ${\bf {Z}}_k^{(1)}$ is
\begin{eqnarray} \label{gradZ}
\lefteqn{
{2\times \rm grad}(L,{\bf {Z}}_k^{(1)})=}\\\nonumber
&& \sum_i {\bf {B}}_{f_i}^T\left(-{\bf {Y}}_{kf_i}+\rho\left(-{\bf {J}}_{kf_i}+{\bf {B}}_{f_i} {\bf {Z}}_k^{(1)}\right)\right)\\\nonumber
&& + {\bf X}_{k}+ \alpha \left({\bf {Z}}_k^{(1)} - \overline{{\bf Z}}_k \right)
\end{eqnarray}
and equating this to zero gives
\begin{eqnarray} \label{zsol}
\lefteqn{
{\bf {Z}}_k^{(1)}= }\\\nonumber
&& \left( \sum_i \rho {\bf {B}}_{f_i}^T {\bf {B}}_{f_i} + \alpha {\bf I}_{2FN} \right)^{\dagger}
\left(\sum_i {\bf {B}}_{f_i}^T \left({\bf {Y}}_{kf_i} + \rho {\bf {J}}_{kf_i}\right) + \alpha \overline{{\bf Z}}_k -{\bf X}_{k} \right).
\end{eqnarray}
The major difference in (\ref{zsol}) from our previous work is that the solution obtained for ${\bf {Z}}_k^{(1)}$ includes global information $\overline{{\bf Z}}_k$, which is directly fed into the solution by $\alpha\overline{{\bf Z}}_k$ and indirectly by the Lagrange multiplier ${\bf X}_{k}$. The global $\overline{{\bf Z}}_k$ is updated at regular intervals (not necessarily at each ADMM iteration) by federated averaging \citep{McMahan2016,Sava2019} -- i.e.,
\begin{equation} \label{fa}
\overline{{\bf Z}}_k = \frac{1}{D} \sum_{j=1}^{D} {\bf {Z}}_k^{(j)}
\end{equation}
where the $j$-th fusion centre sends ${\bf {Z}}_k^{(j)}$ to the federated averaging centre in Fig. \ref{block}. There is one caveat to keep in mind when finding the average as in (\ref{fa}) for a sky model with unpolarized sources (which is the most common scenario). First, note that the solutions of (\ref{V}) -- i.e., ${\bf {J}}_{kf_i}$ -- can have a unitary ambiguity \citep{interpolation}. In other words, if ${\bf {J}}_{kf_i}$ is a valid solution, so is ${\bf {J}}_{kf_i} {\bf U}$, where ${\bf U} \in \mathbb{C}^{2\times 2}$ is an unknown unitary matrix. Therefore, if ${\bf {Z}}_k^{(j)}$ is a valid solution for (\ref{zsol}) at the $j$-th fusion centre, then ${\bf {Z}}_k^{(j)} {\bf U}$ (where ${\bf U}\in \mathbb{C}^{2\times 2}$ is unitary) is also a valid solution. Hence, each ${\bf {Z}}_k^{(j)}$ in (\ref{fa}) carries its own unitary ambiguity, and we use an iterative scheme as proposed by \cite{interpolation} to find the average $\overline{{\bf Z}}_k$. Moreover, fusion centre $j$ gets back $\overline{{\bf Z}}_k$, which is projected onto the current value of ${\bf {Z}}_k^{(j)}$ by minimizing $\|\overline{{\bf Z}}_k {\bf U} - {\bf {Z}}_k^{(j)}\|$, where ${\bf U} \in \mathbb{C}^{2\times 2}$ is determined by solving the matrix Procrustes problem \citep{interpolation}.
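The unitary alignment step above can be sketched as follows. This is an illustrative version (not the cited algorithm itself): the Procrustes subproblem $\min_{\bf U} \|{\bf A}{\bf U}-{\bf B}\|$ is solved via an SVD, and the average is refined by repeatedly aligning each local solution to the running mean.

```python
import numpy as np

def procrustes_unitary(A, B):
    """Unitary U minimizing ||A U - B||_F via SVD of A^H B."""
    P, _, Qh = np.linalg.svd(A.conj().T @ B)
    return P @ Qh

def federated_average(Z_list, n_iter=10):
    """Average solutions that each carry a unitary ambiguity.

    Simplified stand-in for the iterative scheme cited in the text:
    align every Z_j to the running average, then re-average.
    """
    Zbar = Z_list[0].copy()
    for _ in range(n_iter):
        aligned = [Zj @ procrustes_unitary(Zj, Zbar) for Zj in Z_list]
        Zbar = sum(aligned) / len(Z_list)
    return Zbar
```

If the $Z_j$ differ only by right-multiplication with unitary matrices, this recovers a common representative up to one global unitary factor.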
We make several remarks regarding (\ref{zsol}) here:
\begin{itemize}
\item The scheme of local consensus optimization together with global federated averaging is already being used in other applications, see e.g., \citep{Sava2019}, and we can draw on this body of work for further enhancement of our algorithm.
\item The averaging (\ref{fa}) assumes that the datasets are equally distributed among the fusion centres and their associated compute agents. If this is not the case, a weighted average can be performed here. Moreover, a similar weighting scheme can be applied to handle missing data or already flagged data.
\item Consider the case where the data available at each fusion centre span a narrow bandwidth, or in other words, the columns of ${\bf {B}}_{f_i}$ for all $f_i$ local to a fusion centre span a narrow region (compared to the case where the full set of frequencies is used to evaluate the basis). In this case, solving (\ref{zsol}) will be highly ill-conditioned. By having $\alpha>0$, we can reduce this ill-conditioning. In other words, via $\overline{{\bf Z}}_k$, we can feed information available at other frequencies (or other fusion centres) to each fusion centre. In the extreme case, by making $\alpha \rightarrow \infty$, we can force all fusion centres to only use the federated average as the solution for (\ref{zsol}).
\item Instead of using (\ref{fa}) for finding $\overline{{\bf Z}}_k$, we can use other sources of information as well. For instance, we can rely on physical models for the beam shape and the ionosphere \citep{DistModel,Albert2020} to derive $\overline{{\bf Z}}_k$ and feed this to calibration using (\ref{zsol}).
\item The update (\ref{fa}) does not have to be performed at the same cadence as the update of (\ref{zsol}). Moreover, if one fusion centre does not receive an updated value for $\overline{{\bf Z}}_k$, the calibration can be carried out using an older value for $\overline{{\bf Z}}_k$ or, in the extreme case, by setting $\alpha=0$. When $\alpha=0$, we revert to our previous calibration schemes \citep{DCAL,DMUX}. This is useful when the data are stored in multiple data processing clusters. Each fusion centre is connected to its compute nodes via a fast and reliable local network, while the communication between each fusion centre and the federated averaging centre is through the slow and unreliable internet. Furthermore, we have to preserve privacy and security when communicating via the internet, and a specialized communication scheme between the fusion centres and the federated averaging centre can be used to achieve this.
\end{itemize}
\begin{figure*}
\begin{minipage}{0.98\linewidth}
\begin{center}
\input{scheme_federated.pdf_t}\\
\end{center}
\end{minipage}
\caption{Distributed stochastic calibration framework. Data are distributed across multiple networks. There are $D$ fusion centres and the first fusion centre is connected to $C$ compute agents that access the data stored locally to them. The total number of datasets (frequencies) accessed by compute agents connected to the first fusion centre is $P$. The frequencies of the data handled by these $P$ compute agents are given by $f_1,f_2,\ldots,f_P$ respectively. The $D$ fusion centres are connected to a higher level fusion centre where only averaging is performed.\label{block}}
\end{figure*}
We summarize the distributed stochastic calibration scheme in algorithm \ref{algSADMM}. There are essentially three iterative loops in algorithm \ref{algSADMM}. We try to minimize the number of epochs $E$ since we read the data from disk. We also try to maximize $M$, the number of mini-batches, because the size of the data read into memory is proportional to $1/M$. We also try to keep the maximum number of ADMM iterations $A$ as low as possible, to reduce both the number of times the data are read and the network bandwidth used. We caution, however, that the best values of $A$, $E$ and $M$ need to be determined to suit each situation, and they depend on many variables including the signal to noise ratio of the data, the number of constraints, the number of directions being calibrated $K$, the network bandwidth, and the memory of each compute node as well as the disk reading speed.
\begin{algorithm}
\caption{Distributed stochastic calibration}
\label{algSADMM}
\begin{algorithmic}[1]
\REQUIRE Number of ADMM iterations $A$, Number of mini-batches $M$ and Number of epochs $E$
\STATE Initialize ${\bf {Y}}_{kf_i}$,${\bf Z}_k^{(j)}$,${\bf X}_k$ to zero $\forall k,i,j$
\FOR{$a=1,\ldots,A$}
\STATE \COMMENT{In parallel at all compute agents and fusion centres}
\FOR{$e=1,\ldots,E$}
\FOR{$m=1,\ldots,M$}
\STATE \COMMENT{Using $m$-th mini-batch of data, $\forall k,i,j$}
\STATE Compute agents solve (\ref{aug}) for ${\bf {J}}_{kf_i}$
\STATE Fusion centres solve (\ref{zsol})
\STATE ${\bf {Y}}_{kf_i} \leftarrow {\bf {Y}}_{kf_i} + \rho \left({\bf {J}}_{kf_i}-{\bf {B}}_{f_i} {\bf {Z}}_k^{(j)}\right)$
\ENDFOR
\ENDFOR
\STATE Update federated average (\ref{fa})
\STATE ${\bf X}_k \leftarrow {\bf X}_k + \alpha \left( {\bf {Z}}_k^{(j)} - \overline{{\bf Z}}_k\right)$
\ENDFOR
\end{algorithmic}
\end{algorithm}
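The three-level loop structure of algorithm \ref{algSADMM} can be run end to end if the nonlinear per-direction solve for ${\bf {J}}_{kf_i}$ is replaced by a simple quadratic surrogate. The following toy (entirely our own simplification: real-valued arrays, one dataset per fusion centre, a quadratic data term $\frac{1}{2}\|{\bf V}_i-{\bf J}_i\|^2$ in place of the radio interferometric model, and no unitary ambiguity) illustrates the ordering of the compute-agent, fusion-centre and federated-averaging updates:

```python
import numpy as np

def toy_distributed_calibration(V, B, rho=1.0, alpha=1.0, A_iter=300):
    """Toy version of Algorithm 1 with a quadratic data term.

    V[j], B[j]: data and basis matrix held by fusion centre j.
    Returns the local J, local Z and the federated average Zbar.
    """
    D = len(V)
    d = B[0].shape[1]
    J = [np.zeros_like(Vj) for Vj in V]
    Y = [np.zeros_like(Vj) for Vj in V]
    Z = [np.zeros((d, V[0].shape[1])) for _ in range(D)]
    X = [np.zeros((d, V[0].shape[1])) for _ in range(D)]
    Zbar = np.zeros((d, V[0].shape[1]))
    for _ in range(A_iter):
        for j in range(D):
            # compute agent: closed-form minimizer of the quadratic surrogate
            J[j] = (V[j] - Y[j] + rho * B[j] @ Z[j]) / (1.0 + rho)
            # fusion centre: regularized LS update, cf. eq. (zsol)
            lhs = rho * B[j].T @ B[j] + alpha * np.eye(d)
            rhs = B[j].T @ (Y[j] + rho * J[j]) + alpha * Zbar - X[j]
            Z[j] = np.linalg.solve(lhs, rhs)
            # consensus dual update
            Y[j] = Y[j] + rho * (J[j] - B[j] @ Z[j])
        Zbar = sum(Z) / D  # federated averaging, cf. eq. (fa)
        for j in range(D):
            X[j] = X[j] + alpha * (Z[j] - Zbar)
    return J, Z, Zbar
```

With consistent data ${\bf V}_j = {\bf B}_j {\bf Z}_{\mathrm{true}}$ and full column rank bases, the local solutions and the federated average both converge to ${\bf Z}_{\mathrm{true}}$.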
In section \ref{sec:simul}, we test the performance of distributed stochastic calibration using simulated data. We also compare the performance of the stochastic LBFGS scheme with commonly used first order stochastic optimization algorithms.
\section{Simulations}\label{sec:simul}
The workhorse of our distributed stochastic calibration scheme is the stochastic LBFGS algorithm presented in \citep{DSW2019,escience2018}. In contrast, there is a multitude of gradient descent based stochastic optimization methods that are widely used in machine learning \citep{robbins1951,Adam}. Therefore, we first compare the performance of the stochastic LBFGS algorithm with a widely used gradient descent based optimization method, Adam \citep{Adam}, in the calibration of radio interferometric data. We have implemented the LBFGS algorithm in PyTorch \citep{paszke}\footnote{https://github.com/SarodYatawatta/calibration-pytorch-test}, a popular machine learning framework.
We simulate an interferometric array with $N=62$ stations, observing a point source of $1$ Jy at the phase centre ($K=1$). The full batch size is $T=10$ time slots at a single frequency ($B=1$). We generate Jones matrices in (\ref{V}) with complex circular Gaussian entries having zero mean and unit variance. Finally, we add additive white Gaussian noise to the corrupted data with a signal to noise ratio of $0.1$. For calibration, we minimize the cost (\ref{minicost}) for this dataset with several mini-batch sizes $M$. The smallest mini-batch size is $M=1$ time slot and the largest is $M=10=T$, which also corresponds to full batch mode of calibration. The number of mini-batches for a single epoch is $T/M$ and varies from $10$ to $1$, respectively. For LBFGS, we use a memory size of $7$ and $4$ iterations per mini-batch. For Adam, we use a learning rate of $0.1$. We measure the performance in terms of the robust cost $g_i({\bmath \theta})$, evaluated for each mini-batch number $i$, and normalized by $1/M$.
\begin{figure}
\begin{minipage}{0.98\linewidth}
\begin{minipage}{0.98\linewidth}
\centering
\centerline{\epsfig{figure=figures/timing_comparison,width=8.0cm}}
\vspace{0.1cm} \centerline{(a)}\smallskip
\end{minipage}\\
\begin{minipage}{0.98\linewidth}
\centering
\centerline{\epsfig{figure=figures/minibatch_comparison,width=8.0cm}}
\vspace{0.1cm} \centerline{(b)}\smallskip
\end{minipage}
\end{minipage}
\caption{Comparison of LBFGS with Adam for various mini-batch sizes $M$. (a) The reduction of cost with compute time. (b) The reduction of cost with mini-batch number. While Adam uses much less compute time to calibrate each mini-batch, it converges much more slowly. \label{adam_comp}}
\end{figure}
We show the results of the comparison between LBFGS and Adam in Fig. \ref{adam_comp}. In Fig. \ref{adam_comp} (a), we plot the reduction of the cost with computing time (measured using a single CPU) while in Fig. \ref{adam_comp} (b), we show the reduction of the cost with each mini-batch of data processed. We see that Adam runs much faster than LBFGS, processing more mini-batches of data within a given CPU time interval. However, the convergence (or the reduction of cost) of Adam is slower than that of LBFGS, illustrating their well-known first-order and second-order convergence rates \citep{Fletcher,Liu1989}. In particular, if we count the number of mini-batches required for each algorithm to reach convergence, we see in Fig. \ref{adam_comp} (b) that LBFGS needs far fewer mini-batches. As shown in Fig. \ref{block}, we need to read each mini-batch of data from disk, and the cost of reading data is much less for LBFGS, making it the preferred choice for stochastic calibration.
Having established the superiority of stochastic LBFGS for our particular use case, we test the performance of distributed stochastic calibration in the next simulation. In order to do this, we again simulate an interferometric array with $N=62$ stations (similar to LOFAR \citep{LOFAR}). Data are simulated over $8$ subbands uniformly distributed in the frequency range $115$ MHz to $185$ MHz, and each subband has $64$ channels, with each channel having a bandwidth of $3$ kHz. The bandwidth of each subband is $0.192$ MHz and the total bandwidth is therefore $1.536$ MHz. The total observation time is $20$ minutes, with data sampled every $1$ second. Therefore, the total number of datapoints is $1200$.
We simulate $K=2$ point sources in the sky (with flux densities $3$ Jy and $1.5$ Jy) and corrupt their signals with direction dependent systematic errors. The systematic errors are modeled as Jones matrices with complex circular Gaussian entries with zero mean and unit variance. Moreover, the systematic errors are randomly varied for every $10$ time samples and also varied smoothly over frequency (by multiplying them with low order polynomials in frequency). An additional $150$ weak sources (flux density in the range $0.01$ Jy to $0.1$ Jy) randomly positioned across a field of view of $7\times 7$ square degrees are also added to the simulation, but without any systematic errors (mainly to check the accuracy of calibration). Next, the total signal is multiplied by a random and smooth bandpass polynomial, per each subband. Additive white Gaussian noise with a signal to noise ratio of $0.1$ is added to this signal. Finally, radio frequency interference (RFI) is also added. Both broad band (long duration), low amplitude and narrow band (short duration), high amplitude RFI are randomly simulated and added for a randomly selected subset of baselines. We show a typical sample of RFI added to the data in Fig. \ref{XXamp} (a).
\begin{figure}
\begin{minipage}{0.98\linewidth}
\begin{minipage}{0.98\linewidth}
\centering
\centerline{\epsfig{figure=figures/xx_beforecal,width=8.0cm}}
\vspace{0.1cm} \centerline{(a)}\smallskip
\end{minipage}\\
\begin{minipage}{0.98\linewidth}
\centering
\centerline{\epsfig{figure=figures/xx_aftercal,width=8.0cm}}
\vspace{0.1cm} \centerline{(b)}\smallskip
\end{minipage}
\end{minipage}
\caption{Visibility amplitude of $XX$ correlation of one baseline, showing both narrow band high and broad band low RFI (a) before calibration, and (b) after calibration. \label{XXamp}}
\end{figure}
For consensus optimization, we construct a Bernstein basis with $F=3$ basis functions. The basis spans the full frequency range $[115,185]$ MHz. In order to calibrate this dataset, we use $D=8$ fusion centres (see Fig. \ref{block}) and each fusion centre works with $C=P=16$ compute agents. The original $64$ channels of each subband are averaged down to $B=16$ for obtaining a solution. We obtain a solution for every $T=10$ time samples using a mini-batch size of $5$ time samples (so $M=2$ mini-batches in total). We show the performance of calibration in terms of three criteria in Fig. \ref{convg}. We measure the primal residual $\| {\bf {J}}_{kf_i}- {\bf {B}}_{f_i} {\bf {Z}}_k^{(j)} \|$, the dual residual $\| \left({\bf {Z}}_k^{(j)}\right)^{new} - \left({\bf {Z}}_k^{(j)}\right)^{old} \|$ and the federated averaging residual $\| {\bf {Z}}_k^{(j)} - \overline{{\bf Z}}_k \|$ at each iteration (or mini-batch). We have normalized each quantity in Fig. \ref{convg} by the sizes of the matrices involved and find the average value for all $k,i,j$. We have $E=2$ and $M=2$ and because of this, the federated averaging residual is updated every $E\times M=4$ mini-batches. We have varied the regularization factor for federated averaging -- i.e., $\alpha=0.01,1,100$ -- while keeping the consensus penalty at $\rho=1.0$.
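The three monitored quantities are simple matrix norms. A minimal sketch (with our own size normalization, mirroring the normalization described above for a single $k,i,j$ triple) is:

```python
import numpy as np

def admm_residuals(J, B, Z_new, Z_old, Zbar):
    """Normalized primal, dual and federated-averaging residuals.

    J: local solution, B: basis matrix, Z_new/Z_old: consecutive
    fusion-centre solutions, Zbar: federated average.
    """
    primal = np.linalg.norm(J - B @ Z_new) / J.size
    dual = np.linalg.norm(Z_new - Z_old) / Z_new.size
    fed = np.linalg.norm(Z_new - Zbar) / Z_new.size
    return primal, dual, fed
```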
\begin{figure}
\begin{minipage}{0.98\linewidth}
\centering
\centerline{\epsfig{figure=figures/convergence_res,width=8.0cm}}
\end{minipage}
\caption{Convergence for three different values of $\alpha$. We show the dual residual, primal residual and federated averaging residual from top to bottom.\label{convg}}
\end{figure}
We make several observations from Fig. \ref{convg}. First, we see that the primal residual shows no improvement with iterations; this is because our basis functions with $F=3$ do not have enough freedom to completely describe the frequency behavior of the systematic errors. We remind the reader that in addition to the global variation within the full band of $[115,185]$ MHz, there is local variation within the $0.192$ MHz of each subband due to the bandpass shapes we have introduced. Therefore, the systematic errors have more variation than what is assumed by the consensus polynomials and the primal residual reflects this. A marked difference in performance is seen in the dual residual and the federated averaging residual. For $\alpha=0.01$, the regularization is too low for the federated averaging to come into effect and the federated averaging residual diverges. In other words, each fusion centre finds a solution for ${\bf {Z}}_k^{(j)}$ that is much different from the federated average. When $\alpha=100$, the federated averaging is forced upon each fusion centre and this gives poor performance, as seen in the dual and primal residuals. The best result is obtained at $\alpha=1$, when both the dual residual and the federated averaging residual go to a low value together; in this case, we can say that each fusion centre has a solution ${\bf {Z}}_k^{(j)}$ that is also globally accepted. This is what we want to achieve in terms of the physical origins of the systematic errors.
We calibrate the full dataset with the number of ADMM iterations set to $A=4$ and with $\alpha=1$. The total number of mini-batches used for each calibration run is therefore $A\times E\times M=4\times 2\times 2=16$, which is lower than in Fig. \ref{convg}. The solutions are initialized with the solution obtained for the averaged data of the first subband for the first $10$ time samples. We show the images of a small area in the sky surrounding the brightest source being calibrated ($3$ Jy) in Fig. \ref{sI}. We see that the contribution from this source is cleanly removed from the data and in Fig. \ref{sI} (b), only the weak sources and the RFI remain (see Fig. \ref{XXamp} (b)). This sky-subtracted data can be used for better RFI mitigation as in \citep{Wilensky2019}. Considering the computational effort, we use $D=8$ fusion centres, each working with a subband of data. Therefore, the federated averaging centre only has to deal with $8$ messages at a time. In contrast, in our previous distributed calibration software, either we need to use $D\times P=8 \times 16=128$ fusion centres \citep{DCAL} (high network traffic) or we need to multiplex data by a factor of $1/16$ \citep{DMUX} (slow convergence).
\begin{figure}
\begin{minipage}{0.98\linewidth}
\begin{minipage}{0.98\linewidth}
\centering
\centerline{\epsfig{figure=figures/sI_beforecal,width=8.0cm}}
\vspace{0.1cm} \centerline{(a)}\smallskip
\end{minipage}\\
\begin{minipage}{0.98\linewidth}
\centering
\centerline{\epsfig{figure=figures/sI_aftercal,width=8.0cm}}
\vspace{0.1cm} \centerline{(b)}\smallskip
\end{minipage}
\end{minipage}
\caption{Images showing the area around the strongest source $3$ Jy peak flux (a) before calibration, and (b) after calibration. Weak sources and artefacts due to RFI are still present after calibration. \label{sI}}
\end{figure}
Looking back at Fig. \ref{XXamp}, we see that the RFI is well preserved even after calibration. This is attributed to the robust cost function used in calibration \citep{Kaz3}, and this is also confirmed by \citep{sob2019}. Therefore, after stochastic calibration, better RFI mitigation can be achieved using techniques similar to \citep{Wilensky2019}.
\section{Conclusions}\label{sec:conc}
We have presented a distributed stochastic calibration scheme that minimizes the use of compute memory and network traffic for the calibration of large data volumes at their highest resolution. We have also highlighted the many applications of this calibration scheme in radio astronomy. Ready to use software based on this scheme is already available\footnote{http://sagecal.sourceforge.net/} and we will explore this further for various science cases in radio astronomy in future work.
\section*{Acknowledgments}
This work is supported by Netherlands eScience Center (project DIRAC, grant 27016G05). We thank the anonymous reviewer for the valuable comments.
\bibliographystyle{mnras}
\section{introduction}
\label{sec:intro}
Although electronic-structure calculations based on the density functional theory (DFT)\cite{bib:76,bib:77} have been successful by and large for quantitative explanations and predictions of the properties of molecules and solids,
they are known to have a tendency to fail in describing the properties of strongly correlated systems, even qualitatively.
To remedy such shortcomings of DFT,
various approaches have been proposed.
Some of these approaches are based on the Green's function (GF) theory,
including the $GW$ method.\cite{bib:GW1,bib:GW2,bib:GW3}
They often use the non-interacting states obtained in DFT calculations as reference states for the construction of interacting GFs.
On the other hand,
many sophisticated approaches based on the wave function theory have been developed for quantum chemistry calculations.
The coupled-cluster singles and doubles (CCSD) method\cite{Helgaker} is a widely accepted one
since it achieves moderate balance between its high accuracy and high computational cost.
Not only is the relation between $GW$ and CCSD methods theoretically interesting,
but also their quantitative comparison is worth examining\cite{bib:4535} from a practical viewpoint.
Photoelectron spectroscopy is one of the most active fields in experimental physics of today.
Measurements of the photoelectric effects in target materials make use of various kinds of techniques such as angle-resolved photoemission spectroscopy (ARPES) for clarifying the material properties.
The measured spectra of an interacting electronic system are often explained under a certain assumption via the one-particle GF.\cite{bib:4070,bib:4165,bib:pw_unfolding}
The clear understanding of the characteristics of GFs is thus important both for theoretical and practical studies in material science.
Mathematically speaking,
the quasiparticle and satellite peaks in photoelectron spectra represent nothing but the poles of one-particle GF of an interacting system.
Particularly, the distance between the peaks closest to zero frequency is the fundamental gap.
It has been demonstrated that there exists an analytically solvable model\cite{bib:4575} which helps to obtain transparent insights into interacting GFs.
Meanwhile, the GFs in the context of correlated electronic-structure calculations for
uniform electron gases\cite{bib:4033} and realistic systems have been drawing attention recently\cite{bib:4473,bib:4483,bib:4516,bib:4582},
which we deal with in the present study.
CCSD\cite{bib:4115} and subsequent GF calculations\cite{Nooijen92,Nooijen93,Nooijen95,bib:3947,bib:4275} are especially difficult for a periodic system due to their large computational cost, since a sufficiently large number of sampled $k$ points is needed.
This fact hinders one from performing detailed comparison between the band structures obtained by a Hartree--Fock (HF) or DFT calculation and
the spectra obtained from CCSD GF,
and the measured spectra.
Development of physically appropriate interpolation schemes for CCSD GFs is thus desirable for examining spectral properties of correlated systems,
which is precisely what we do in this study.
This paper is organized as follows.
In Sect. \ref{sec:method},
we review CCSD and GF calculations briefly and
explain the interpolation schemes.
In Sect. \ref{sec:comp_details},
we describe the details of our computation.
In Sect. \ref{sec:results},
we show the results for the target systems.
In Sect. \ref{sec:conclusions},
our conclusions are provided.
\section{method}
\label{sec:method}
\subsection{CCSD and GF for a periodic system}
The CC state for a reference state $| \Psi_0 \rangle$ is constructed by performing an exponentially parametrized transform as $| \Psi_{\mathrm{CC}} \rangle = e^{\hat{T}} | \Psi_0 \rangle$,
where $\hat{T}$ is a so-called cluster operator.
The normalization of our CCSD wave functions obeys the bi-variational formulation,\cite{Arponen83,Bi-vari1,Bi-vari2}
with which we calculate the CCSD one-particle GFs\cite{Nooijen92,Nooijen93,Nooijen95} in the recently proposed procedure\cite{bib:3947,bib:4275} as well as in our previous studies.\cite{bib:4473,bib:4483,bib:4516}
Here we review briefly the calculation of CCSD GF for a periodic system.
The GF in frequency domain is given by
\begin{gather}
G (\boldsymbol{k}, \omega)
=
G^{(\mathrm{h})} (\boldsymbol{k}, \omega)
+
G^{(\mathrm{e})} (\boldsymbol{k}, \omega)
,
\label{def_CCSD_GF}
\end{gather}
where
\begin{gather}
G_{p p'}^{(\mathrm{h})} (\boldsymbol{k}, \omega)
=
\langle \Psi_0 |
(1 + \hat{\Lambda})
\overline{a_{\boldsymbol{k} p}^\dagger}
\frac{1}{\omega + \overline{H}}
\overline{a}_{\boldsymbol{k} p'}
| \Psi_0 \rangle
\label{def_G_hole}
\end{gather}
and
\begin{gather}
G_{p p'}^{(\mathrm{e})} (\boldsymbol{k}, \omega)
=
\langle \Psi_0 |
(1 + \hat{\Lambda})
\overline{a}_{\boldsymbol{k} p}
\frac{1}{\omega - \overline{H}}
\overline{a_{\boldsymbol{k} p'}^\dagger}
| \Psi_0 \rangle
\label{def_G_elec}
\end{gather}
are the partial GFs from the hole and electron excitations,
respectively.
$\boldsymbol{k}$ is a wave vector and $\omega$ is a complex frequency.
$p$ is the composite index of a spatial orbital and a spin direction for an occupied or unoccupied single-electron state.
For the original Hamiltonian $\hat{H}$,
we defined the similarity transformed Hamiltonian
$
\overline{H}
\equiv
e^{-\hat{T}} \hat{H} e^{\hat{T}} - E_0
$
measured from the CCSD total energy $E_0$.
We also defined the transformed creation and annihilation operators
$
\overline{a_{\boldsymbol{k} p}^\dagger}
=
e^{-\hat{T}}
\hat{a}_{\boldsymbol{k} p}^{\dagger}
e^{\hat{T}}
$
and
$
\overline{a}_{\boldsymbol{k} p}
=
e^{-\hat{T}}
\hat{a}_{\boldsymbol{k} p}
e^{\hat{T}}
,
$
respectively.
$\hat{\Lambda}$ is the parametrized de-excitation operator determined in the $\Lambda$-CCSD calculation,\cite{bib:3947,bib:4275}
which has to be introduced since the CCSD operator $e^{\hat{T}}$ is not unitary.
In order to avoid the computational difficulty in treating the inverse matrix $(\omega \pm \overline{H})^{-1}$
in eqs. (\ref{def_G_hole}) and (\ref{def_G_elec}),
the parametrized operators
$\hat{X}_{\boldsymbol{k} p} (\omega)$
and
$\hat{Y}_{\boldsymbol{k} p} (\omega)$
are introduced so that\cite{bib:3947,bib:4275}
\begin{gather}
(\omega + \overline{H})
\hat{X}_{\boldsymbol{k} p} (\omega)
| \Psi_0 \rangle
=
\overline{a}_{\boldsymbol{k} p}
| \Psi_0 \rangle
\label{def_ip_eom}
\end{gather}
and
\begin{gather}
(\omega - \overline{H})
\hat{Y}_{\boldsymbol{k} p} (\omega)
| \Psi_0 \rangle
=
\overline{a_{\boldsymbol{k} p}^\dagger}
| \Psi_0 \rangle
.
\label{def_ea_eom}
\end{gather}
The linear equation for the non-Hermitian matrix in eq. (\ref{def_ip_eom}) is called the ionization potential (IP) equation-of-motion (EOM) CCSD equation,
while that in eq. (\ref{def_ea_eom}) is called the electron affinity (EA) EOM-CCSD equation.
After obtaining the parametrized operators,
we use them in eqs. (\ref{def_G_hole}) and (\ref{def_G_elec}) to get
\begin{gather}
G_{p p'}^{(\mathrm{h})} (\boldsymbol{k}, \omega)
=
\langle \Psi_0 |
(1 + \hat{\Lambda})
\overline{a_{\boldsymbol{k} p}^\dagger}
\hat{X}_{\boldsymbol{k} p'} (\omega)
| \Psi_0 \rangle
\label{G_hole_X}
\end{gather}
and
\begin{gather}
G_{p p'}^{(\mathrm{e})} (\boldsymbol{k}, \omega)
=
\langle \Psi_0 |
(1 + \hat{\Lambda})
\overline{a}_{\boldsymbol{k} p}
\hat{Y}_{\boldsymbol{k} p'} (\omega)
| \Psi_0 \rangle
.
\label{G_elec_Y}
\end{gather}
The $k$-resolved spectral function is defined via the GF as
\begin{gather}
A (\boldsymbol{k}, \omega)
=
-\frac{1}{\pi}
\mathrm{Im Tr} \,
G (\boldsymbol{k}, \omega + i \delta)
\label{def_spec_k}
\end{gather}
for a real $\omega$ with a small positive constant $\delta$ ensuring causality.
The spectral function calculated in this way reflects our correlated approach,
to be compared with the band structures obtained in mean-field-like approaches such as HF and DFT.
Before moving on to the description of our interpolation schemes,
it is noted here that
there exists an alternative to obtain correlated spectra or band structure for arbitrary $k$ points without resorting to interpolation.
Specifically,
usage of a large series of shifted regular $k$ meshes
enables one to perform EOM-CCSD calculations to get the excitation energies for an arbitrarily fine $k$ mesh,
as adopted by McClain et al.\cite{bib:4115}
This approach requires a large computational cost, although the accuracy is ensured by the EOM-CCSD framework itself.
\subsection{Wannier interpolation}
\subsubsection{Wannier orbitals}
Wannier orbitals (WOs)\cite{bib:5} and their variants in solids are analogues of Foster--Boys orbitals\cite{bib:4594,bib:4595} in molecular systems.
In particular, maximally localized WOs (MLWOs)\cite{bib:4596} are widely used not only for analyses of chemical bonds but also for accurate calculations of anomalous Hall conductivity and transport properties.
The generic expression of a WO is
\begin{gather}
w_{\boldsymbol{R} n} (\boldsymbol{r})
=
\frac{1}{N_k}
\sum_{\boldsymbol{k}, p}
e^{-i \boldsymbol{k} \cdot \boldsymbol{R}}
\psi_{\boldsymbol{k} p} (\boldsymbol{r})
U_{p n}^{(\boldsymbol{k})}
.
\label{def_Wannier}
\end{gather}
$\boldsymbol{R}$ is the lattice point where the unit cell containing the $n$th WO is located.
$U^{(\boldsymbol{k})}$ is a unitary matrix at $\boldsymbol{k}$ for the construction of localized orbitals from the extended Bloch orbitals $\psi_{\boldsymbol{k} p} (\boldsymbol{r})$.
When the transformation matrix $U^{(\boldsymbol{k})}$ is identity at each $\boldsymbol{k}$,
the normal WOs (NWOs)\cite{bib:5} are obtained.
When the matrices are determined so that the spread functional\cite{bib:369, bib:368} is minimized,
on the other hand,
the MLWOs are obtained.
\subsubsection{Direct interpolation}
The Bloch sum of the localized orbital in eq. (\ref{def_Wannier}) for a wave vector $\boldsymbol{k}$ is defined as
$
w_{\boldsymbol{k} n} (\boldsymbol{r})
=
\sum_{\boldsymbol{R}}
e^{i \boldsymbol{k} \cdot \boldsymbol{R}}
w_{\boldsymbol{R} n} (\boldsymbol{r})
,
$
which extends over the whole crystal.
The Bloch sums of the target bands allow one to transform the CCSD GF in the band representation,
which is also said to be in the Bloch gauge,
to the new one in the Wannier gauge as
\begin{gather}
G_{n n'} (\boldsymbol{k}, \omega)
=
\sum_{p, p'}
(U^{(\boldsymbol{k}) \dagger})_{n p}
G_{p p'} (\boldsymbol{k}, \omega)
U_{p' n'}^{(\boldsymbol{k})}
.
\label{GF_wan_from_band_repr}
\end{gather}
For the calculated GF at $N_k$ sampled $k$ points in the Brillouin zone (BZ),
we perform Fourier transformation as
\begin{gather}
\widetilde{G}_{n n'} (\boldsymbol{R}, \omega)
=
\frac{1}{N_k}
\sum_{\boldsymbol{k}}^{\mathrm{sampled}}
e^{-i \boldsymbol{k} \cdot \boldsymbol{R}}
G_{n n'} (\boldsymbol{k}, \omega)
,
\label{def_G_R_sampled}
\end{gather}
which is ideally equal to the exact Fourier transform $G_{n n'} (\boldsymbol{R}, \omega)$ in the limit of an infinite number of sampled $k$ points.
The real-space representation defined above enables us to obtain the GF for an arbitrary wave vector via inverse Fourier transformation as
\begin{gather}
\widetilde{G}_{n n'}^{\mathrm{d}} (\boldsymbol{k}, \omega)
=
\sum_{\boldsymbol{R}}
e^{i \boldsymbol{k} \cdot \boldsymbol{R} }
\widetilde{G}_{n n'} (\boldsymbol{R}, \omega)
,
\label{def_G_R_direct}
\end{gather}
which we call the direct interpolation hereafter.
It is clear from eq. (\ref{def_spec_k}) that
the spectral function $\widetilde{A}^{\mathrm{d}} (\boldsymbol{k}, \omega)$ calculated from direct interpolation does not depend on the matrices $U^{(\boldsymbol{k})}$ since they are unitary.
It is also clear from eq. (\ref{def_G_R_direct}) that
the interpolated spectral function integrated over an arbitrarily fine $k$ mesh is identical to the original spectra integrated over the sampled $k$ points:
$\widetilde{A}^{\mathrm{d}} (\omega) = A (\omega)$.
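The pair of transforms in eqs. (\ref{def_G_R_sampled}) and (\ref{def_G_R_direct}) can be sketched for a scalar one-band toy in one dimension (our own simplified model; the real calculation involves matrices in the Wannier gauge). Because the direct interpolation is a trigonometric interpolant, it is accurate whenever $\widetilde{G}(\boldsymbol{R},\omega)$ decays quickly with $|\boldsymbol{R}|$:

```python
import numpy as np

def wannier_direct_interpolation(G_sampled, ks_sampled, Rs, k_new):
    """Direct interpolation of a scalar GF at fixed omega, 1D sketch.

    G_sampled: GF values on a regular k grid; Rs: lattice vectors
    retained in the Fourier sums of eqs. (def_G_R_sampled)/(def_G_R_direct).
    """
    Nk = len(ks_sampled)
    # Fourier transform to real space over the sampled k points
    G_R = np.array([np.sum(np.exp(-1j * ks_sampled * R) * G_sampled) / Nk
                    for R in Rs])
    # inverse Fourier transform at an arbitrary wave vector
    return np.sum(np.exp(1j * np.asarray(Rs) * k_new) * G_R)
```

For a smooth model band $\varepsilon(k)=-2\cos k$ probed far from its poles, the real-space coefficients decay exponentially and the interpolated value agrees with the exact one to high accuracy.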
\subsubsection{Self-energy-mediated interpolation}
We cannot avoid being concerned about the reliability of $\widetilde{G}_{n n'} (\boldsymbol{R}, \omega)$ defined in eq. (\ref{def_G_R_sampled}) since
the number of sampled $k$ points has to be small in general due to
the large computational cost of CCSD and subsequent GF calculations.
To circumvent the difficulty in increasing the number of sampled $k$ points,
we propose another interpolation scheme for GFs here.
The self-energy $\Sigma$ is obtained via the Dyson equation
\begin{gather}
G^{-1} (\boldsymbol{k}, \omega)
=
G_0^{-1} (\boldsymbol{k}, \omega)
-
\Sigma (\boldsymbol{k}, \omega)
,
\label{Dyson_eq}
\end{gather}
where $G_0$ is the HF GF.
Substituting the CCSD GF in eq. (\ref{def_CCSD_GF}) into the matrix equation above,
we get the CCSD self-energy.
It is noted here that the CCSD self-energy does not contain the contributions from the HF self-energy diagrams,
which are already contained in $G_0$.\cite{stefanucci2013nonequilibrium}
The HF GF in the Bloch gauge is diagonal in reciprocal space,
whose component is given by
\begin{gather}
(G_0^{-1})_{p p'} (\boldsymbol{k}, \omega)
=
( \omega - \varepsilon_{\boldsymbol{k} p} )
\delta_{p p'}
,
\end{gather}
where $\varepsilon_{\boldsymbol{k} p}$ is the HF orbital energy.
The interpolation procedure is as follows.
We first calculate the CCSD self-energy in the Bloch gauge via eq. (\ref{Dyson_eq}),
which is then transformed into the Wannier gauge in the same way as in eq. (\ref{GF_wan_from_band_repr}).
We apply Fourier transformation to it using the sampled $k$ points to get $\widetilde{\Sigma}_{n n'} (\boldsymbol{R}, \omega)$ similarly to eq. (\ref{def_G_R_sampled}).
From this real-space representation,
we can interpolate the self-energy $\widetilde{\Sigma}_{n n'} (\boldsymbol{k}, \omega)$ for an arbitrary wave vector via inverse Fourier transformation,
which we plug into the Dyson equation to get the interpolated GF
\begin{gather}
\widetilde{G}^{\mathrm{sem}} (\boldsymbol{k}, \omega)
=
[
\widetilde{G}_0^{-1} (\boldsymbol{k}, \omega)
-
\widetilde{\Sigma} (\boldsymbol{k}, \omega)
]^{-1}
.
\end{gather}
We call this scheme the self-energy-mediated interpolation hereafter.
Since this scheme involves matrix inversion,
the resultant spectral function depends on the construction of WOs, because the unitary matrices $U^{(\boldsymbol{k})}$ depend on $\boldsymbol{k}$ in general.
An attempt at interpolating the $GW$ quasiparticle band structure using MLWOs was made by Hamann and Vanderbilt.\cite{bib:4684}
Their scheme uses the $GW$ quasiparticle wave functions and their orbital energies to obtain the $GW$ Hamiltonian in real space, in a manner computationally similar to our direct interpolation.
Their formalism for efficient interpolation of correlated band structure stems from the localized shapes of MLWOs.
The self-energy-mediated interpolation, on the other hand,
relies on the localized nature of self-energies,
as will be demonstrated later.
It will be interesting to examine the interpolation using the $GW$ self-energy in the future.
\section{computational details}
\label{sec:comp_details}
We adopt the STO-3G basis set for the Cartesian Gaussian-type basis functions\cite{Helgaker} of all the elements in the present study.
The Coulomb integrals between AOs are calculated efficiently.\cite{Libint1}
By transforming them using the results of the HF calculations for periodic systems,
we obtain the integrals between the Bloch orbitals\cite{bib:4024},
with which we perform the CCSD calculations by successive substitution.
We solve the IP-EOM-CCSD and EA-EOM-CCSD equations in eqs. (\ref{def_ip_eom}) and (\ref{def_ea_eom}), respectively,
by using the shifted BiCG method.\cite{bib:4514,bib:4515,bib:4512}
We set $\delta = 0.02$ Ht in eq. (\ref{def_spec_k}) throughout this study.
For the construction of MLWOs,
we calculate the overlaps between the cell-periodic parts of the Bloch orbitals as input to wannier90.\cite{bib:4587}
\section{results and discussion}
\label{sec:results}
\subsection{LiH chain}
\subsubsection{Band structure and CCSD GF}
For a LiH chain composed of equidistant atoms,
we first optimized the lattice constant via HF calculations using $N_k = 12 \times 1 \times 1$ sampled $k$ points.
We obtained the optimized lattice constant $a = 3.28$ \AA,
in reasonable agreement with previous studies.\cite{bib:4586,bib:4585}
We obtained a restricted HF (RHF) solution for this lattice constant and
adopted it as the reference state for the CCSD calculation.
We constructed the MLWOs from all 6 bands.
The MLWOs can be used for interpolation of the original bands.\cite{bib:369, bib:368}
The HF bands and their Wannier interpolation are plotted in Fig. \ref{Fig_LiH_bands},
where the original bands are accurately reproduced.
The flat valence band at $\omega = -10$ eV comes from the H 1$s$ orbital,
while the conduction bands are dispersive.
The CCSD spectral function $A (\boldsymbol{k}, \omega)$ is also shown in the figure.
We find clear correspondence between the HF band energies and the quasiparticle peaks in the CCSD spectra.
In addition, low intensities exist in the CCSD spectra,
known as the satellite peaks.\cite{bib:4473}
They are direct consequences of many-body effects taken into account by the correlated approach.
The quasiparticle peaks below (above) the Fermi level are located closer to $\omega = 0$ than the valence (conduction) HF band energies are,
a generic characteristic of correlation effects.
Since the system is spin unpolarized,
the spectral intensities are the same at an arbitrary $\boldsymbol{k}$ and $-\boldsymbol{k}$ due to time reversal symmetry.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{LiH_bands.eps}
\end{center}
\caption{
HF band structure of a LiH chain as circles and that obtained with the MLWOs as curves.
The spectral function $A (\boldsymbol{k}, \omega)$ calculated from the CCSD GF at 12 sampled $k$ points is also shown.
The chain extends in the $x$ direction.
}
\label{Fig_LiH_bands}
\end{figure}
\subsubsection{Direct interpolation}
The spectral function $\widetilde{A}^{\mathrm{d}} (\boldsymbol{k}, \omega)$ calculated from direct interpolation is shown in Fig. \ref{Fig_LiH_gf_direct} (a).
One immediately finds three clearly unfavorable features in the interpolated spectra.
First,
the quasiparticle peaks for the highest conduction band consist of spots separated by the distance $\Delta k_x$ between the neighboring sampled $k$ points.
Second,
there exist trains of specks at $\omega = 10$ and 5 eV, where each speck is separated by $\Delta k_x$ again.
The spectral intensities for some of the specks are,
even worse, unphysically negative.
Third,
the time reversal symmetry is not preserved in the spectra,
particularly for the trains of specks.
For the sampled frequencies in a range $-40$ eV $< \omega < 40$ eV,
the absolute values of diagonal components of
$\widetilde{G} (\boldsymbol{R}, \omega)$
in the region near the Fermi level
($-12$ eV $< \omega < 22$ eV) and
the outside region are plotted in Fig. \ref{Fig_LiH_gf_direct} (b).
Although those values tend to decrease for the frequencies near the Fermi level for both kinds of WOs,
their convergence with increasing $|\boldsymbol{R}|$ is slow.
In contrast,
the diagonal components for the other frequencies decrease rapidly enough already at $|\boldsymbol{R}|/a = 2$.
These observations indicate that the sampled $k$ points are too few for the direct interpolation near the Fermi level despite the fact that the HF bands are sufficiently convergent with respect to the $k$ points.
The unfavorable features of the direct-interpolated spectra enumerated above are numerical artifacts due to the insufficient number of sampled $k$ points.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{LiH_gf_direct.eps}
\end{center}
\caption{
(a)
Spectral function $\widetilde{A}^{\mathrm{d}} (\boldsymbol{k}, \omega)$ calculated from the direct interpolation of CCSD GF for a LiH chain.
(b)
The absolute values $|\widetilde{G}_{n n} (\boldsymbol{R}, \omega)|$ of diagonal components of the GFs as functions of $|\boldsymbol{R}|$.
Those obtained using the NWOs and MLWOs for the energy region near the Fermi level ($-12$ eV $< \omega < $ 22 eV) and the outside region are plotted.
}
\label{Fig_LiH_gf_direct}
\end{figure}
\subsubsection{Self-energy-mediated interpolation}
To circumvent the artifacts of the direct interpolation,
let us next try the self-energy-mediated interpolation.
We impose the time reversal symmetry condition on the spectral function from the self-energy-mediated interpolation as
\begin{gather}
\widetilde{A}^{\mathrm{sem}}_{\mathrm{TR}} (\boldsymbol{k}, \omega)
\equiv
\frac{
\widetilde{A}^{\mathrm{sem}} (\boldsymbol{k}, \omega)
+
\widetilde{A}^{\mathrm{sem}} (-\boldsymbol{k}, \omega)
}{2}
.
\label{def_time_rev_spec}
\end{gather}
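On a discrete $k$ grid, the symmetrization in eq. (\ref{def_time_rev_spec}) is a simple index mapping; a minimal sketch (hypothetical array names, not code from the present work):

```python
import numpy as np

def symmetrize_time_reversal(A_k, minus_k_index):
    """A_TR(k) = [A(k) + A(-k)] / 2 on a discrete k grid.

    A_k           : (N_k,) spectral function at one frequency.
    minus_k_index : (N_k,) index of the grid point -k for each k.
    """
    return 0.5 * (A_k + A_k[minus_k_index])
```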
The spectral functions calculated in this way
by using the NWOs and MLWOs are shown in Fig. \ref{Fig_LiH_gf_dyson} (a),
where the unfavorable features for the direct interpolation do not appear.
The spectra for the two kinds of WOs are almost indistinguishable from each other.
The absolute values of diagonal components of
$\widetilde{\Sigma} (\boldsymbol{R}, \omega)$
in the same regions as in Fig. \ref{Fig_LiH_gf_direct} (b) are plotted in Fig. \ref{Fig_LiH_gf_dyson} (b).
Those values decrease rapidly enough already at $|\boldsymbol{R}|/a = 1$ for all the frequencies.
This means that the number of sampled $k$ points is sufficient for the description of the variation in CCSD self-energy in reciprocal space,
and hence the self-energy-mediated interpolation of GF is reliable within the accuracy ensured by our preceding procedure of CCSD GF calculations.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{LiH_gf_dyson.eps}
\end{center}
\caption{
(a)
Spectral functions $\widetilde{A}_{\mathrm{TR}}^{\mathrm{sem}} (\boldsymbol{k}, \omega)$ calculated from the self-energy-mediated interpolation for a LiH chain by using the NWOs and MLWOs are shown in the upper and lower panels, respectively.
(b)
The absolute values $|\widetilde{\Sigma}_{n n} (\boldsymbol{R}, \omega)|$ of diagonal components of the self-energies as functions of $|\boldsymbol{R}|$.
}
\label{Fig_LiH_gf_dyson}
\end{figure}
The spectral functions integrated over $k$ points,
or equivalently the densities of states,
for the original CCSD GF and
the interpolated GFs using the WOs are shown in Fig. \ref{Fig_LiH_spec_integ} (a).
Those for the two kinds of WOs look indistinguishable,
in addition to which they almost coincide with the original spectra.
To see whether the self-energy-mediated interpolation using a smaller number of sampled $k$ points reproduces the original spectra,
we calculated the interpolated spectra for $N_k = 6$ and plotted them in Fig. \ref{Fig_LiH_spec_integ} (b).
The interpolated spectra from $N_k = 12$ and those from $N_k = 6$ look quite similar to each other,
implying the usefulness of our scheme for $k$-integrated spectra.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{LiH_spec_integ_new.eps}
\end{center}
\caption{
(a)
$k$-integrated spectral functions of a LiH chain for
the original CCSD GF at 12 sampled $k$ points and
the interpolated GFs using the WOs.
(b)
The original spectra and the self-energy-mediated interpolated ones using the NWOs for 12 sampled $k$ points.
The latter for 6 sampled $k$ points are also shown.
}
\label{Fig_LiH_spec_integ}
\end{figure}
\subsection{$trans$-polyacetylene}
\subsubsection{Band structure and CCSD GF}
For $trans$-polyacetylene,
we adopted the structural parameters provided by Teramae\cite{bib:4040} to construct the unit cell consisting of two C atoms and two H atoms,
where the bond alternation has occurred.\cite{bib:4566,bib:4565}
We obtained an RHF solution for this geometry using $N_k = 8 \times 1 \times 1$ sampled $k$ points and
adopted it as the reference state for the CCSD calculations.
Although it has been shown\cite{bib:4602},
by resorting to DFT calculations incorporating the zero-point vibrations of atoms,
that the band picture for this system is dubious,
we keep to the band picture since the main purpose of the present study is to propose the interpolation schemes.
We constructed the MLWOs from the 10 bands near the Fermi level.
The HF bands and their Wannier interpolation are plotted in Fig. \ref{Fig_C2H2_bands},
where the original bands are accurately reproduced.
The calculated band gap of 8.9 eV at X ($k_x = \pm \pi/a$) is in reasonable agreement with that obtained by Teramae\cite{bib:550} using the same basis set.
These calculated gaps are much larger than the experimental ones\cite{bib:4565,bib:4564} of 1--2 eV,
as is often the case with HF calculations.
The CCSD spectral function is also shown in the figure,
where the satellite peaks for $\Gamma$ ($k_x = 0$) have stronger intensities than for $k_x \ne 0$.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{C2H2_bands.eps}
\end{center}
\caption{
HF band structure of $trans$-polyacetylene as circles and that obtained with the MLWOs as curves.
The spectral function $A (\boldsymbol{k}, \omega)$ calculated from the CCSD GF at 8 sampled $k$ points is also shown.
The polymer extends in the $x$ direction.
$a$ is the lattice constant.
}
\label{Fig_C2H2_bands}
\end{figure}
\subsubsection{Direct interpolation}
The spectral function $\widetilde{A}^{\mathrm{d}} (\boldsymbol{k}, \omega)$ calculated from direct interpolation is shown in Fig. \ref{Fig_C2H2_gf_direct} (a),
where one finds unfavorable features similarly to the case of a LiH chain.
For the sampled frequencies in a range $-60$ eV $< \omega < 50$ eV,
the absolute values of diagonal components of
$\widetilde{G} (\boldsymbol{R}, \omega)$
in the region near the Fermi level
($-33$ eV $< \omega < $ $33$ eV) and
the outside region are plotted in Fig. \ref{Fig_C2H2_gf_direct} (b).
No clear tendency of decrease in those values is seen for the two kinds of WOs.
The numerical artifacts in the direct-interpolated spectra thus look more prominent than for a LiH chain.
In particular,
the interpolated satellite peaks for $\omega < -25$ eV can be unphysically negative,
as seen in Fig. \ref{Fig_C2H2_gf_direct} (a).
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{C2H2_gf_direct.eps}
\end{center}
\caption{
(a)
Spectral function $\widetilde{A}^{\mathrm{d}} (\boldsymbol{k}, \omega)$ calculated from the direct interpolation of CCSD GF for $trans$-polyacetylene.
(b)
The absolute values $|\widetilde{G}_{n n} (\boldsymbol{R}, \omega)|$ of diagonal components of the GFs as functions of $|\boldsymbol{R}|$.
Those obtained using the NWOs and MLWOs for the energy region near the Fermi level ($-33$ eV $< \omega < $ $33$ eV) and the outside region are plotted.
}
\label{Fig_C2H2_gf_direct}
\end{figure}
\subsubsection{Self-energy-mediated interpolation}
The spectral functions
$\widetilde{A}^{\mathrm{sem}}_{\mathrm{TR}} (\boldsymbol{k}, \omega)$
calculated via self-energy-mediated interpolation by using the NWOs and MLWOs are shown in Fig. \ref{Fig_C2H2_gf_dyson} (a).
Unphysical intensity does not appear in the interpolated spectra near the Fermi level.
The absolute values of diagonal components of
$\widetilde{\Sigma} (\boldsymbol{R}, \omega)$
in the same frequency regions as in Fig. \ref{Fig_C2H2_gf_direct} (b) are plotted in Fig. \ref{Fig_C2H2_gf_dyson} (b).
The diagonal components near the Fermi level for the NWOs are large for $|\boldsymbol{R}| = 0$ compared to $|\boldsymbol{R}| \ne 0$.
This is also the case for the MLWOs.
On the other hand,
there exist significant contributions from
$|\boldsymbol{R}| \ne 0$ for the frequencies far from the Fermi level in contrast to the case of a LiH chain.
The unphysical intensities are thus seen for $-25$ eV $< \omega <$ $60$ eV at $\Gamma$,
where the two kinds of WOs give slightly different spectra [see Fig. \ref{Fig_C2H2_gf_dyson} (a)].
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{C2H2_gf_dyson.eps}
\end{center}
\caption{
(a)
Spectral functions $\widetilde{A}_{\mathrm{TR}}^{\mathrm{sem}} (\boldsymbol{k}, \omega)$ calculated from the self-energy-mediated interpolation for $trans$-polyacetylene by using the NWOs and MLWOs are shown in the upper and lower panels, respectively.
(b)
The absolute values $|\widetilde{\Sigma}_{n n} (\boldsymbol{R}, \omega)|$ of diagonal components of the self-energies as functions of $|\boldsymbol{R}|$.
}
\label{Fig_C2H2_gf_dyson}
\end{figure}
The spectral functions integrated over $k$ points for the original CCSD GF and the interpolated GFs using the WOs are shown in Fig. \ref{Fig_C2H2_spec_integ} (a).
Those for the two kinds of WOs look indistinguishable even for $\omega < -25$ eV in contrast to the $k$-resolved spectra [see Fig. \ref{Fig_C2H2_gf_dyson} (a)].
Furthermore, negative intensities do not appear for those frequencies in the $k$-integrated spectra.
These observations imply that accurate interpolation of $k$-resolved spectra requires more sampled $k$ points than $k$-integrated spectra do.
To see whether the self-energy-mediated interpolation using a small number of sampled $k$ points allows one to access the $k$-integrated spectra which would be obtained for a larger number of $k$ points,
we calculated the interpolated spectra for $N_k = 6$ and plotted them in Fig. \ref{Fig_C2H2_spec_integ} (b).
The interpolated spectra from $N_k = 8$ and those from $N_k = 6$ look quite similar to each other,
indicative of a well-converged self-energy with respect to $N_k$.
On the other hand,
the peak locations of the original spectra for $-10$ eV $< \omega < 15$ eV differ slightly from those of the interpolated spectra,
implying slow convergence of the original GF.
These results corroborate the usefulness of the self-energy-mediated interpolation scheme, as in the LiH chain case.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{C2H2_spec_integ_new.eps}
\end{center}
\caption{
(a)
$k$-integrated spectral functions of $trans$-polyacetylene for
the original CCSD GF at 8 sampled $k$ points and
the interpolated GFs using the WOs.
(b)
The original spectra and the self-energy-mediated interpolated ones using the NWOs for 8 sampled $k$ points.
The latter for 6 sampled $k$ points are also shown.
}
\label{Fig_C2H2_spec_integ}
\end{figure}
It has been demonstrated that the self-energy-mediated interpolation is successful for our two systems at least near the Fermi level.
Our results are consistent with the often adopted assumption that the self-energy of an electronic system is more localized than the GF.
The dynamical mean-field theory (DMFT)\cite{PhysRevB.45.6479} and its application in electronic-structure calculations\cite{bib:3127} are based on this assumption and have been used successfully.
\section{conclusions}
\label{sec:conclusions}
We proposed two schemes for interpolation of the one-particle GF calculated within the CCSD method for a periodic system.
These schemes employ a transformation of representation from reciprocal to real space by using WOs, circumventing the huge cost of a large number of sampled $k$ points.
One of the schemes is the direct interpolation,
which obtains the GF straightforwardly by using Fourier transformation.
The other is the self-energy-mediated interpolation,
which obtains the GF via the Dyson equation.
We applied the schemes to two insulating systems,
a LiH chain and $trans$-polyacetylene,
and examined their validity in detail.
We found that the direct-interpolated GFs suffered from numerical artifacts stemming from slow convergence of CCSD GFs in real space.
The self-energy-mediated interpolation, on the other hand,
was found to provide more physically appropriate GFs
due to the localized nature of CCSD self-energies.
We should keep in mind that in a metallic system,
whose density matrix\cite{bib:4597,bib:4598} and GF\cite{bib:4604} decay only algebraically at zero temperature,
a large number of sampled $k$ points would be required for sufficiently convergent results.
Remembering the widely accepted assumption that the self-energy of an interacting system is more localized than the GF,
the self-energy-mediated interpolation is expected to be more suitable for generic systems than the direct interpolation.
Since our interpolation schemes are not restricted to the CCSD method,
they are applicable to any correlated method in quantum chemistry as long as it provides a way to obtain one-particle GFs.
Development of various correlated methods with GFs in solids is thus important for reliable explanations and predictions of their spectral shapes and excitation energies.
\begin{acknowledgments}
This research was supported by MEXT as Exploratory Challenge on Post-K computer (Frontiers of Basic Science: Challenging the Limits).
This research used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science through the HPCI System Research project (Project ID: hp180227).
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
Scattering theory has been an important tool in the mathematical and theoretical study of black hole solutions to the Einstein equations, which in vacuum take the form
\begin{align}\label{EVE}
R_{ab}[g]=0
\end{align}
(setting the cosmological constant to zero). Whereas there has been extensive work on scattering for scalar, electromagnetic, and fermionic fields on black hole backgrounds (see already \cite{DimockKayI}, \cite{BachelotAFMaxwell}, \cite{Nicolas}, \cite{DRSR14}, \cite{DaudeNicoleau}), in the case of the scattering of gravitational perturbations much of the historical literature has been concerned with solutions to equations governing fixed frequency modes (see \cite{Chandrasekhar}, \cite{HandlerFuttermanMatzner} for an extensive survey, and the very recent \cite{SRTdC}), and comparatively little has been said about scattering theory on black holes \textit{in physical space}. The aim of this work is to address this vacancy for the case of linearised gravitational perturbations around the Schwarzschild exterior, which in familiar coordinates has the metric \cite{Schwarzschild}:
\begin{align}\label{SchwMetric}
g=-\left(1-\frac{2M}{r}\right)dt^2+\left(1-\frac{2M}{r}\right)^{-1}dr^2+ r^2\left(d\theta^2+\sin^2\theta d\phi^2\right).
\end{align}
The subject of scattering theory is the study of perturbations evolved on scales that are large in comparison to a characteristic scale of the perturbed system. More concretely, scattering theory is relevant when the perturbations are meant to be asymptotically free from the effects of the target. In this picture, incoming and outgoing perturbations are approximated by solutions describing "free" propagation. A mathematical description of scattering hinges on an appropriate and rigorous formulation of these ideas, and much of the value of scattering theory lies in the identification of the correct candidates for spaces of "scattering states" that describe incoming and outgoing perturbations. In these terms, a satisfactory scattering theory must provide answers to the following questions:
\begin{enumerate}[I]
\item \textit{Existence of scattering states}: Is there an interesting class of initial data that evolve to solutions which can be associated with past/future scattering states?\label{QI}
\item \textit{Uniqueness of scattering states}: Is the above association injective? Do solutions that give rise to the same scattering state coincide?\label{QII}
\item \textit{Asymptotic completeness}: Does this association exhaust the class of initial data of interest?\label{QIII}
\end{enumerate}
Because of the nonlinear nature of the Einstein equations \bref{EVE}, the study of scattering in general relativity is dependent on a thorough understanding of the perturbative behaviour of the equations. As a first step, it is useful to understand the evolution of solutions to the linearised Einstein equations, which are obtained by formally expanding a family of solutions in some smallness parameter $\epsilon$ around some fixed background, e.g.~\bref{SchwMetric}, and keeping only leading order terms in $\epsilon$ in the equations \bref{EVE}. Studying the evolution of linear equations on black hole backgrounds has its own appeal, as black holes by their very nature are immune to "direct" observation and even their existence can only be inferred by examining their effects on the propagation of wave phenomena in spacetime. The linearised Einstein equations still inherit many of the features as well as the difficulties that plague the study of the nonlinear equations.\\
\indent A foundational breakthrough in the analysis of the linearised equations was discovered by Bardeen and Press \cite{Bardeen-Press} in the case of the Schwarzschild black hole \bref{SchwMetric} and Teukolsky \cite{TeukP74} in the case of the Kerr black hole \cite{Kerr}, who showed that by casting the equations of linearised gravity in the Newman--Penrose formalism, it is possible to identify gauge-invariant components of the curvature that obey 2nd order {\em decoupled} wave equations, which on the Schwarzschild spacetime take the forms
\begin{align}\label{wave equation +}
\Box_g \Omega^2\alpha +\frac{4}{r\Omega^2}\left(1-\frac{3M}{r}\right)\partial_u \Omega^2\alpha=V(r) \Omega^2\alpha,
\end{align}
\begin{align}\label{wave equation -}
\Box_g \Omega^2\underline\alpha -\frac{4}{r\Omega^2}\left(1-\frac{3M}{r}\right)\partial_v \Omega^2\underline\alpha=V(r) \Omega^2\underline\alpha.
\end{align}
Here, $\Box_g$ is the d'Alembertian operator of the Schwarzschild metric $g$, $\alpha, \underline\alpha$ are symmetric traceless $S^2$-tangent 2-tensor fields, $\Omega^2=1-\frac{2M}{r}$ and $V=\frac{2(3\Omega^2+1)}{r^2}$ (see already \Cref{Chandra1}). Equations \bref{wave equation -}, \bref{wave equation +} are known as the \textbf{Teukolsky equations of spin $\bm{+2}$ and $\bm{-2}$} respectively.\\
\indent In addition to the Teukolsky equations \bref{wave equation +}, \bref{wave equation -}, the quantities $\alpha, \underline\alpha$ satisfy a closed system of equations known as the Teukolsky--Starobinsky identities:
\begin{align}
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3 \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^3\alpha=2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2{\underline\alpha}+12M\partial_t\hspace{.5mm}r\Omega^2{\underline\alpha}, \label{eq:227intro1}\\
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4 \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^3{\underline\alpha}=2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2\alpha-12M\partial_t\hspace{.5mm}r\Omega^2\alpha.\label{eq:228intro1}
\end{align}
The purpose of this paper is to study the scattering theory of the Teukolsky equations \bref{wave equation +}, \bref{wave equation -} as a prelude to studying scattering for the full system of linearised Einstein equations. This is done by first developing a scattering theory for \bref{wave equation +}, \bref{wave equation -} in particular addressing points \ref{QI}, \ref{QII}, \ref{QIII} above, and then bridging this scattering theory to the full system of linearised Einstein equations by incorporating the constraints \bref{eq:227intro1} and \bref{eq:228intro1}. A complete treatment of the full system will appear in the forthcoming \cite{M2050}. \\
\indent To elaborate on the ideas involved we go through a quick survey of the history of the subject. In \Cref{RedshiftScalar} we review known scattering theory for the scalar wave equation highlighting the role of redshift as a feature of scattering on black hole backgrounds. \Cref{LinearisedGravity} is a survey of the difficulties encountered in the study of scattering for the (linearised) Einstein equations, and will motivate and introduce the main results. \Cref{IntroResults} contains a preliminary statement of the results of this paper. \Cref{subsection 1.4 outline} contains an outline of the structure of the paper.
\subsection{Scattering for the scalar wave equation and the redshift effect}\label{RedshiftScalar} It is clear that understanding scattering for the scalar wave equation
\begin{align}\label{wave equation}
\Box_{g} \phi=0
\end{align}
on a fixed Schwarzschild background \bref{SchwMetric} is a necessary prerequisite for our scattering problem, and already at this level we see many of the difficulties that characterise the evolution of perturbations to black holes. Much of the historical literature on scattering for \bref{wave equation} concerns the Schr\"odinger-like equation that results from a formal separation of \bref{wave equation} and governs the radial part. While this leads to important insights, it does not lead on its own to a satisfactory answer to points \ref{QI}, \ref{QII}, \ref{QIII} above. \\
\indent The first result on physical-space scattering for \bref{wave equation} on \bref{SchwMetric} goes back to Dimock and Kay \cite{DimockKayI}, who applied the Lax--Phillips scattering theory to the scalar wave equation on the Schwarzschild spacetime. In \cite{Friedlander}, Friedlander's use of the radiation field at null infinity to describe future scattering states initiated a shift away from the Lax--Phillips formalism towards a more geometric treatment of the notion of scattering states, and subsequent works have largely adhered to this point of view, see the discussion by Nicolas \cite{Nicolas}. The state of the art in this area is the work of Dafermos, Rodnianski and Shlapentokh-Rothman \cite{DRSR14}, where a complete understanding of scattering for the wave equation \bref{wave equation} on the Kerr exterior is laid out. The scattering problem for the scalar wave equation \bref{wave equation} on the extremal Reissner--Nordstr\"om background was definitively resolved in \cite{AAG19}. In the case of asymptotically de-Sitter black holes, we note the result \cite{HafnerGerardGeorgescu} on asymptotic completeness for the Klein--Gordon equation restricted to solutions of fixed azimuthal modes on a very slowly rotating Kerr--de-Sitter black hole. Scattering for \bref{wave equation} has also been considered on the interior of the Reissner--Nordstr\"om black hole by Kehle and Shlapentokh-Rothman \cite{KSR18}.\\
\indent What leads to the rich theory available to \bref{wave equation} is the fact that it comes with a natural Lagrangian structure with which we can associate conservation laws encoded in the energy-momentum tensor:
\begin{align}
T_{\mu\nu}[\phi]=\partial_\mu \phi \; \partial_\nu \phi-\frac{1}{2}g_{\mu\nu}\;\partial_\alpha\phi \;\partial^\alpha \phi,
\end{align}
which satisfies $\nabla_\mu T^\mu{}^\nu[\phi]=0$. Since the vector field $T:=\partial_t$ generates an isometry, classical scattering theory immediately suggests the class of solutions of finite $T$-energy, defined as the flux on a spacelike or null hypersurface of the quantity
\begin{align}
n^\mu J^T_\mu[\phi],
\end{align}
where $J^X[\phi]_\mu=T_{\mu\nu}[\phi]X^\nu$ and $n^\mu$ is the vector field normal to the hypersurface, as this flux is non-negative definite and conserved. Solutions to \bref{wave equation} arising from suitable Cauchy data have sufficiently tame asymptotics to induce smooth radiation fields on $\mathscr{I}^+$ and $\mathscr{H}^+$. The conservation of $T$-energy allows us to resolve the scattering problem by constructing an isomorphism between the space of Cauchy data of finite energy and the corresponding space of radiation fields. With this, the answer to the questions \ref{QI}, \ref{QII}, \ref{QIII} of scattering theory for equation \bref{wave equation} is in the affirmative. \\
\indent At the same time, the fact that the vector field $T$ becomes null on the event horizon points to a deficiency, since the $T$-energy density then loses control over some derivatives and the norm on the event horizon defined by the $T$-energy,
\begin{align}\label{horizon energy}
\int_{\mathscr{H}^+}J^T_\mu[\phi]n^\mu_{\mathscr{H}^+},
\end{align}
is degenerate. The energy density observed along a horizon-penetrating timelike curve is better described by $J^N_\mu[\phi]$ for a timelike vector field $N$, but such a vector field cannot be Killing everywhere. The flux of this quantity is therefore not conserved and new issues appear, paramount among which is the \textit{redshift effect}.\\
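To make this degeneracy explicit, note that along $\mathscr{H}^+$ one may take $n^\mu_{\mathscr{H}^+}=T^\mu$, and since $g(T,T)=0$ on the horizon, the energy density appearing in \bref{horizon energy} reduces to
\begin{align*}
J^T_\mu[\phi]\, T^\mu = T_{\mu\nu}[\phi]\, T^\mu T^\nu = (T\phi)^2-\frac{1}{2}\,g(T,T)\,\partial_\alpha\phi\,\partial^\alpha\phi=(T\phi)^2,
\end{align*}
so that \bref{horizon energy} controls only the derivative of $\phi$ along the null generators of $\mathscr{H}^+$, and none of the derivatives transversal to them.\\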
\indent An intuitive hint of the role played by the redshift effect is the exponential decay in frequency that affects signals originating near the event horizon by the time they reach late-time observers, which relates to the divergence of outgoing null geodesics near the event horizon towards the future. It turns out that this effect can be exploited to produce nondegenerate energies useful for evolution in the future direction, precisely by choosing a timelike $N$ to be a time-translation invariant vector field measuring the separation of null geodesics near the event horizon, see \cite{DR05}. In addition to using $N$ as a multiplier $X=N$, key to this method is the fact that commuting the wave equation \bref{wave equation} with such $N$ produces terms of lower order derivatives that come with a good sign when estimating the solution forwards. This can be traced to the positivity of the surface gravity; the fact that on $\mathscr{H}^+$, $\nabla_T T=\kappa T$ with $\kappa>0$. See \cite{DR08} for a detailed exposition.\\
\indent Unfortunately, when it comes to backwards evolution the technique described above does not work: the redshift effect in the forwards evolution problem turns into a deleterious blueshift effect when evolving towards the past, and it is not possible to use the energy associated with $N$ to bound the solution in the backwards direction. Indeed, it can be shown that there exists a large class of scattering data with finite $N$-energy on the future event horizon $\mathscr{H}^+$ whose $N$-energy blows up under backwards evolution, see \cite{DSR17}.\\
\indent Note that in the case of the Kerr exterior ($a\neq0$) there is no obvious analogue of the $T$-energy scattering theory, as the stationary Killing vector field becomes spacelike in the ergoregion and its flux therefore no longer has a definite sign. Superradiance thus features as an additional aspect of scattering theory. One cannot hope for a unitary map, but one can still hope for a bounded invertible map. In view of the above discussion, however, the $N$-energy space is not appropriate. Indeed, one of the difficulties is identifying the correct notion of energy. See \cite{DRSR14} for the detailed treatment.
\subsection{Linearised gravity and the Teukolsky equations}\label{LinearisedGravity}
The above discussion involves linear \textit{scalar} perturbations only, i.e.~solutions to \bref{wave equation}, and little is known about the scattering theory of the Einstein equations even when linearised, see \cite{Chandrasekhar} and \cite{HandlerFuttermanMatzner} for a survey. Indeed, a comprehensive study of scattering under the Einstein equations \bref{EVE} on black hole exteriors involves and subsumes major aspects of the study of black hole stability. To date, full nonlinear stability for an asymptotically flat spacetime has only been satisfactorily proven for Minkowski space, see \cite{Ch-K}, \cite{Lin-Rod} for instance. For asymptotically flat black holes, stability results against generic perturbations exist only for the linearised Einstein equations, see \cite{DHR16} for the case of the Schwarzschild spacetime, \cite{DHR18}, \cite{Ma}, \cite{AnderssonKerr} and \cite{HVH19} for the case of very slowly rotating Kerr black holes, and \cite{SRTdC} for the general subextremal case. For the case of asymptotically de-Sitter black holes, results concerning the nonlinear stability of black hole solutions with positive cosmological constant do exist, see \cite{Hintz2018}.
\subsubsection{The Bianchi equations and the lack of a Lagrangian structure}\label{subsubsection 1.2.1 no lagrangian}
In a spacetime satisfying the Einstein equations \bref{EVE} with a vanishing cosmological constant, the components of the Weyl curvature tensor satisfy the \textit{Bianchi equations}
\begin{align}\label{Bianchi}
\nabla^a W_{abcd}=0.
\end{align}
These equations, along with the equations defining the connection components, comprise the evolutionary content of the Einstein equations \bref{EVE}. Importantly, the Bel--Robinson tensor
\begin{align}
Q_{abcd}=W_{aecf} W_b{}^e{}_d{}^f + {}^*W_{aecf} {}^*W_b{}^e{}_d{}^f
\end{align}
acts as an energy-momentum tensor for the Bianchi equations. Upon linearising these equations against the background of Minkowski space, this structure survives in the linearised equations and makes it possible to estimate the curvature components using the vector field method in the same way that it was applied to study the scalar wave equation, as was done in \cite{Ch-K-linear}. In fact, the vector field method applied using the Bel--Robinson tensor was key to the proof of nonlinear stability of the Minkowski spacetime by Christodoulou and Klainerman in \cite{Ch-K}, and it is possible to use this strategy to study scattering for small perturbations of the Minkowski spacetime evolving according to the nonlinear Einstein equations \bref{EVE}.\\
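\indent The utility of $Q$ rests on two standard properties, in direct analogy with the scalar energy-momentum tensor: for a Weyl field satisfying \bref{Bianchi},
\begin{align*}
\nabla^a Q_{abcd}=0,\qquad\qquad Q(X_1,X_2,X_3,X_4)\geq 0
\end{align*}
for all future-directed causal vector fields $X_1,\ldots,X_4$, so that contracting $Q$ with Killing or suitably chosen causal multipliers yields conserved, non-negative energy fluxes; see \cite{Ch-K}.\\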
\indent Unfortunately, this structure is lost in the process of linearising around black holes, where the connection components couple to the curvature in a way that destroys the Lagrangian structure of the equations \bref{Bianchi}: in terms of a formal expansion of perturbed quantities of the form
\begin{align}
\bm{g}=g\;+\stackrel{\;\;\mbox{\scalebox{0.4}{(1)}}}{\epsilon g}, \qquad \bm{\Gamma}=\Gamma+\stackrel{\;\;\mbox{\scalebox{0.4}{(1)}}}{\epsilon\; \Gamma}, \qquad \bm{R}=R+\stackrel{\;\;\mbox{\scalebox{0.4}{(1)}}}{\epsilon R},
\end{align}
the linearised version of equations \bref{Bianchi} has the schematic form
\begin{align}\label{coupling}
\stackrel{\;\;\;\;\mbox{\scalebox{0.4}{(1)}}}{\nabla \; W}+\stackrel{\mbox{\scalebox{0.4}{(1)}}\;\;\;\;\;\;}{\Gamma\; W}=0.
\end{align}
Therefore, it is not possible to directly use the Bianchi equations alone to prove boundedness and decay results for curvature components independently of the connection components. See the discussion in \cite{DHR16}, \cite{DHR17}.
\subsubsection{Double null gauge}\label{subsubsection 1.2.2 double null gauge}
It is important to note that the formulation of the problem depends crucially on the choice of gauge. It turns out that working with a \textit{double null gauge} is particularly useful to manifest a special structure in the linearised Einstein equations that reveals an alternative method to control curvature. This gauge leads to a well-posed reduction of the linearised Einstein equations around Schwarzschild, arising from a well-posed reduction of the full Einstein equations (see \cite{DHR16} and \cite{Ch-K}).\\
\indent A double null gauge is a coordinate system $(\bm{u},\bm{v},\bm{\theta}^A)$ that foliates spacetime with two families of ingoing and outgoing null hypersurfaces. In this gauge we decompose the curvature and connection components in terms of $\mathcal{S}_{\bm{u},\bm{v}}$-tangent tensor fields, where $\mathcal{S}_{\bm{u},\bm{v}}$ is the compact 2-dimensional manifold where the null hypersurfaces of constant $\bm{u}, \bm{v}$ intersect (see already \Cref{section 2 preliminaries} and \Cref{Appendix B Double null guage}). On the exterior of the Schwarzschild spacetime, the Eddington--Finkelstein null coordinates $(u,v,\theta^A)$ provide an example of this gauge (where $\mathcal{S}_{u,v}$ are just standard spheres). \\
\indent For an example of the resulting equations, the linearised curvature components $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}_{AB}=\stackrel{\mbox{\scalebox{0.4}{(1)}}}{W}_{A4B4}$ and $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}_A=\stackrel{\mbox{\scalebox{0.4}{(1)}}}{W}_{A434}$ obey the transport equations
\begin{align}\label{example1}
\frac{1}{\Omega}\slashed{\nabla}_3 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}\;=-2r\slashed{\mathcal{D}}^*_2 \Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta} +\frac{6M}{r^2}\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}, \qquad\qquad \Omega\slashed{\nabla}_4 r^4\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}-2M r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}\;= r\slashed{div}\;r^3\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha},
\end{align}
where $\Omega^2=\left(1-\frac{2M}{r}\right)$, $\slashed{\nabla}_4,\slashed{\nabla}_3$ denote the projections of the null covariant derivatives to $\mathcal{S}_{u,v}$ and $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}$ denotes the linearised outgoing shear. The coupling to the connection components means we must simultaneously consider connection components such as $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat\chi}$, which satisfy transport equations of a similar form, for example:
\begin{align}\label{example2}
\Omega\slashed{\nabla}_4\; r\Omega \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}+\Big(1-\frac{4M}{r}\Big)\Omega \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}=-r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}.
\end{align}
\indent We note that in this formulation, we can see the presence of a \textit{blueshift} effect in the linearised Einstein equations by observing that the second equation of \bref{example1} above carries a lower order term with a sign that forces the solution to grow exponentially when evolved forwards in a neighbourhood of the horizon. This appears to be an essential feature of working with tensorial quantities decomposed using null frames.
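\indent Schematically, this can be seen by restricting the second equation of \bref{example1} to the event horizon: on $\mathscr{H}^+$ we have $r=2M$ and $\Omega\slashed{\nabla}_4$ acts as differentiation along $\partial_v$, so the homogeneous part of that equation reduces to
\begin{align*}
\partial_v\big(\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}\big)=\frac{1}{2M}\,\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta},
\end{align*}
whose solutions grow like $e^{\frac{v}{2M}}$ towards the future.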
\subsubsection{The Teukolsky equations}\label{subsubsection 1.2.3 Teukolsky}
A quick glance at \bref{example1}, \bref{example2} reveals that we can derive a decoupled equation for $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}$ alone by acting on the first equation of \bref{example1} with $\Omega\slashed{\nabla}_4$ and following through the remaining equations to discover that $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}$ obeys the $+2$ Teukolsky equation \bref{wave equation +}. The linearisation of the component $\bm{\underline\alpha}_{AB}=W_{A3B3}$ can be shown to obey \bref{wave equation -} by a similar logic; see \Cref{subsection 2.2 Linearised Einstein equations in double null gauge} for the full list of the linearised Einstein equations around the Schwarzschild background.\\
\indent The derivation of \bref{wave equation +}, \bref{wave equation -} by Bardeen and Press \cite{Bardeen-Press} for perturbations around Schwarzschild and their extension to the Kerr black holes by Teukolsky \cite{Teu73} (using the Newman--Penrose formalism) was a game changer in the study of linearised gravity. If one can estimate solutions to the Teukolsky equations (i.e.~equations \bref{wave equation +}, \bref{wave equation -} on Schwarzschild), one can hope to make use of the hierarchical nature of the linearised Einstein equations in double null gauge (as manifest in \bref{example1}, \bref{example2} for example) to estimate the remaining components. \\
\indent Unfortunately, having arrived at the decoupled wave equations \bref{wave equation +}, \bref{wave equation -} for the components $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$, one finds that the essential difficulty in dealing with the linearised Einstein equations is still inherited by the Teukolsky equations: taken in isolation, equations \bref{wave equation +}, \bref{wave equation -} also suffer from the lack of a variational principle, and neither has its own energy-momentum tensor. This is related to the first order null derivative term on the left hand side of \bref{wave equation +}, \bref{wave equation -}. These first order terms are reminiscent of the wave equation \bref{wave equation} when commuted with the redshift vector field $N$ (note in particular that the first order term in the $-2$ Teukolsky equation \bref{wave equation -} has a redshift sign near $\mathscr{H}^+$, while the $+2$ equation has a first order term with a blueshift sign near $\mathscr{H}^+$). This issue meant that the Teukolsky equations \bref{wave equation +}, \bref{wave equation -}, despite their decoupling, remained immune to known methods for a long time.
\subsubsection{Chandrasekhar-type transformations in physical space}\label{subsubsection 1.2.4 DHR}
In \cite{DHR16}, Dafermos, Holzegel and Rodnianski succeeded in deriving boundedness and decay estimates for \bref{wave equation +} and \bref{wave equation -}, and subsequently proved the linear stability of the Schwarzschild solution in double null gauge. Key to their work is the exploitation of a physical space version of a transformation due to Chandrasekhar \cite{Chandrasekhar}, obtained by commuting the equations with weighted derivatives in the null directions. This commutation removes the first order derivative terms and reduces the equations \bref{wave equation +}, \bref{wave equation -} to a familiar form:
\begin{align}\label{RWintro}
\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4\stackrel{\mbox{\scalebox{0.4}{(1)}}}\Psi-\Omega^2\slashed{\Delta}\stackrel{\mbox{\scalebox{0.4}{(1)}}}\Psi+V(r)\stackrel{\mbox{\scalebox{0.4}{(1)}}}\Psi=0,
\end{align}
where $V(r)=\frac{\Omega^2(3\Omega^2+1)}{r^2}$ and
\begin{align}\label{transport}
\stackrel{\mbox{\scalebox{0.4}{(1)}}}\Psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha.
\end{align}
The same applies to $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ by differentiating in the $4$-direction instead: we obtain a quantity $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\Psi}$ satisfying \bref{RWintro} via
\begin{align}\label{transport 2}
\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\Psi}=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}.
\end{align}
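\indent In practice, the second order relation \bref{transport} is best viewed as a pair of first order transport equations: introducing the intermediate quantity $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\psi\,=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\,r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$, the definition \bref{transport} is equivalent to
\begin{align*}
\Omega\slashed{\nabla}_3\left(r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha\right)=\frac{\Omega^2}{r^2}\stackrel{\mbox{\scalebox{0.4}{(1)}}}\psi,\qquad\qquad \Omega\slashed{\nabla}_3\stackrel{\mbox{\scalebox{0.4}{(1)}}}\psi\,=\frac{\Omega^2}{r^2}\stackrel{\mbox{\scalebox{0.4}{(1)}}}\Psi,
\end{align*}
so that control of $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\Psi$ can be integrated along the ingoing null direction to yield control of $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\psi$ and then of $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$.\\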
\indent Equation \bref{RWintro} is the well-known Regge--Wheeler equation, which first appeared in the context of the theory of metric perturbations studied by Regge and Wheeler \cite{ReggeWheeler}, Vishveshwara \cite{Vishveshwara}, and Zerilli \cite{Zerilli} to describe gauge invariant combinations of the metric perturbations. The Regge--Wheeler equation \bref{RWintro} has a very similar structure to the equation that governs the radiation field of the scalar wave equation \bref{wave equation}, and in particular the vector field method can be adapted to study \bref{RWintro}. This is what was done in \cite{DHR16} to obtain boundedness and decay estimates for solutions of \bref{RWintro}. These estimates for \bref{RWintro} can in turn be used to estimate $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ \textit{by regarding \bref{transport} and its $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ counterpart as transport equations for $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$}. For this to work, it was fundamental that a sufficiently strong decay statement be available for solutions of \bref{RWintro} with respect to a nondegenerate energy (i.e.~the analogue of the $N$-energy above).\\
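\indent To make this similarity concrete: a standard computation shows that if $\phi$ solves the scalar wave equation \bref{wave equation} on Schwarzschild, then the quantity $r\phi$ satisfies, in the notation above,
\begin{align*}
\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4(r\phi)-\Omega^2\slashed{\Delta}(r\phi)+\frac{2M\Omega^2}{r^3}(r\phi)=0,
\end{align*}
which has precisely the form \bref{RWintro}, with the potential $\frac{2M\Omega^2}{r^3}$ in place of $V(r)$.\\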
\indent Note that in the case of the Kerr spacetime $a\neq 0$, the strategy outlined above suffers from the fact that the analogues of \bref{RWintro} are coupled to $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ via $a$. Nevertheless, it is possible to apply the same strategy to obtain boundedness and decay results for solutions to the Teukolsky equations, see \cite{DHR18} and \cite{Ma} for the case of the very slowly rotating Kerr exterior $|a|\ll M$ and the very recent \cite{SRTdC} for the full subextremal range $|a|<M$. For the case of the extremal Kerr exterior $a=M$, see \cite{Rita2019}, \cite{Lucietti_2012}.\\
\indent The first preliminary goal of our work will be to analyse the Regge--Wheeler equation \bref{RWintro} from the point of view of scattering. The fact that the conservation of the $T$-energy leads to a scattering theory for the scalar wave equation \bref{wave equation} means one can expect to prove an analogous statement for the Regge--Wheeler equation using analogous methods. This will be the content of \textbf{Theorem 1} (see \Cref{subsubsection 1.3.1 scattering for RW}).
\subsubsection{Reconstructing curvature from the Regge--Wheeler equation}\label{subsubsection 1.2.5 RW}
\indent Starting from such a scattering theory for the Regge--Wheeler equation \bref{RWintro}, one can hope to apply the strategy used in \cite{DHR16} to construct a scattering theory for the Teukolsky equations \bref{wave equation +} and \bref{wave equation -} via the transport relations \bref{transport} and \bref{transport 2}. It is however far from clear that the transport equations \bref{transport}, \bref{transport 2} can lead to a suitable scattering theory, in particular one that could in turn lead to a scattering theory for the linearised Einstein equations. The central question we aim to address is whether the $T$-energy obtained via the Regge--Wheeler equation could define a Hilbert space of scattering states for solutions to \bref{wave equation +}, \bref{wave equation -}, for which the central questions of scattering theory (points \ref{QI}, \ref{QII}, \ref{QIII} above) could be answered. \\
\indent Adapting the strategy above to a scattering setting based on $T$-energies, we succeed in constructing such a scattering theory for the Teukolsky equations answering \ref{QI}, \ref{QII}, \ref{QIII} in the affirmative. This will lead to \textbf{Theorem 2} of this paper (see \Cref{subsubsection 1.3.2 scattering for teukolsky}).
\subsubsection{The Teukolsky--Starobinsky correspondence}\label{subsubsection 1.2.6 TS}
Finally, we treat what is known as the Teukolsky--Starobinsky correspondence: the study of the relationship between $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ using \bref{eq:227intro1}, \bref{eq:228intro1} and the Teukolsky equations \bref{wave equation +}, \bref{wave equation -}, independently of the remaining components of a solution to the linearised Einstein system.
The idea that knowing either $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}$ or $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ uniquely determines the other via \bref{eq:227intro1}, \bref{eq:228intro1} has permeated the literature on the Einstein equations since the appearance of the constraints in \cite{TeukP74}, \cite{StarC}, but little has been done in the way of a systematic study of the combined system consisting of the Teukolsky equations \bref{wave equation +}, \bref{wave equation -} and the constraints \bref{eq:227intro1}, \bref{eq:228intro1}, governing a pair $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$. \\
\indent The constraints \bref{eq:227intro1}, \bref{eq:228intro1} provide a bridge between the scattering theory we construct for equations \bref{wave equation +}, \bref{wave equation -} and the full linearised Einstein equations. This is because scattering for the linearised Einstein equations would involve scattering data for the metric components, from which data for only one of $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$ or $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ could be constructed from the scattering data for the metric on each component of the asymptotic boundary. One can hope to use the identities \bref{eq:227intro1}, \bref{eq:228intro1} to obtain scattering data for either $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$ or $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ out of the other, but it is entirely unclear whether we would obtain scattering data that are compatible with the scattering theory constructed here for \bref{wave equation +}, \bref{wave equation -}, or even whether the system consisting of \bref{wave equation +}, \bref{wave equation -}, \bref{eq:227intro1}, \bref{eq:228intro1} is well-posed. In the context of scattering, we are specifically interested in whether the operators involved on each side of the identities \bref{eq:227intro1}, \bref{eq:228intro1} are invertible on the spaces of scattering states, and we would like to know whether, given scattering data for $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ related via \bref{eq:227intro1}, \bref{eq:228intro1}, the ensuing solutions to \bref{wave equation +}, \bref{wave equation -} would in turn satisfy \bref{eq:227intro1}, \bref{eq:228intro1}.\\
\indent Interestingly, it turns out that the study of the constraints \bref{eq:227intro1}, \bref{eq:228intro1} is much more transparent when done via scattering rather than directly via the Cauchy problem, and combining this with asymptotic completeness will answer the question of well-posedness for the system \bref{wave equation +}, \bref{wave equation -}, \bref{eq:227intro1}, \bref{eq:228intro1}. We also find that it is only in the context where solutions to \bref{wave equation +}, \bref{wave equation -} are studied on the entirety of the exterior region that the constraints \bref{eq:227intro1}, \bref{eq:228intro1} are sufficient to determine $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$ completely from $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ and vice versa; scattering necessarily involves considering solutions globally on the exterior. These considerations are the subject of \textbf{Theorem 3}. \\
\indent A corollary to our main results is that one may formulate a scattering statement for a combined pair $(\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha,\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha})$ satisfying the Teukolsky equations \bref{wave equation +}, \bref{wave equation -} and the constraints \bref{eq:227intro1}, \bref{eq:228intro1} (this is \textbf{Corollary 1}, see \Cref{subsection 4.4 Corollary 1: mixed scattering}). One can then hope that such a scattering statement would provide a bridge towards scattering for the full linearised Einstein equations, taking into account \bref{example2} relating $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}$ to $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}$ and the counterpart equation relating $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ to $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\chi}}$. We remark at the end of this introduction on how to formally derive a conservation law at the level of the shears $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}$, $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\chi}}$ which excludes the possibility of superradiant reflection (see \bref{conservation law} of \Cref{subsubsection 1.3.4 corollary}). This will be treated in detail in the upcoming \cite{M2050} as part of a complete scattering theory for the linearised Einstein equations in double null gauge.
\subsection{Scattering maps}\label{IntroResults}
The following are preliminary statements of the results of this work, with detailed statements to follow in the body of the paper (see \cref{section 4 main theorems}).
\subsubsection{Scattering for the Regge--Wheeler equation}\label{subsubsection 1.3.1 scattering for RW}
We begin by stating the result for the Regge--Wheeler equation \bref{RWintro} (we omit the superscript $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{{}}$ in what follows). We show that a solution arising from Cauchy data with initially finite $T$-energy gives rise to a set of radiation fields in the limit towards $\mathscr{I}^+, \mathscr{H}^+$, from which the solution can be recovered. The choice of the Cauchy surface does not affect the fact that the flux of the $T$-energy defines a Hilbert space norm on Cauchy data. For the surface $\overline\Sigma=\{t=0\}$, this flux is given by
\begin{align}\label{RWfluxT}
\big\|(\Psi|_{\overline\Sigma},\slashed{\nabla}_{n_{\overline{\Sigma}}}\Psi|_{\overline\Sigma})\big\|^2_{\mathcal{E}^T_{\overline\Sigma}}=\int_{\overline\Sigma}dr\sin\theta d\theta d\phi\;|\slashed{\nabla}_{n_{\overline\Sigma}}\Psi|^2+\Omega^2|\slashed{\nabla}_r\Psi|^2+|\slashed{\nabla}\Psi|^2+\frac{3\Omega^2+1}{r^2}|\Psi|^2.
\end{align}
Conservation of the $T$-energy suggests Hilbert space norms on $\mathscr{I}^+, \mathscr{H}^+$:
\begin{align}\label{RWfluxIH}
\|\bm{\uppsi}_{\mathscr{I}^+}\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2=\int_{\mathscr{I}^+}du\sin\theta d\theta d\phi\; |\partial_u\bm{\uppsi}_{\mathscr{I}^+}|^2,\qquad\qquad \|\bm{\uppsi}_{\mathscr{H}^+}\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2=\int_{\mathscr{H}^+}dv\sin\theta d\theta d\phi\;|\partial_v\bm{\uppsi}_{\mathscr{H}^+}|^2.
\end{align}
The Hilbert spaces $\mathcal{E}^T_{\overline{\Sigma}}, \mathcal{E}^T_{\overline{\mathscr{H}^+}},\mathcal{E}^T_{\mathscr{I}^+}$ are defined to be the completions of smooth, compactly supported data under the norms defined in \bref{RWfluxT}, \bref{RWfluxIH}, and the spaces $\mathcal{E}^T_{\mathscr{H}^-}, \mathcal{E}^T_{\mathscr{I}^-}$ are defined analogously.
\begin{theorem*}\label{Theorem 1}
Forward evolution under the Regge--Wheeler equation \bref{RWintro} extends to a unitary Hilbert space isomorphism
\begin{align}
\mathscr{F}^+:\mathcal{E}^T_{\overline\Sigma} \longrightarrow \mathcal{E}^T_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^T_{\mathscr{I}^+}.
\end{align}
A similar statement holds for scattering towards $\mathscr{H}^-, \mathscr{I}^-$. As a corollary, we obtain the Hilbert space isomorphism
\begin{align}
\mathscr{S}:\mathcal{E}^T_{\mathscr{H}^-}\oplus\mathcal{E}^T_{\mathscr{I}^-}\longrightarrow\mathcal{E}^T_{\mathscr{H}^+}\oplus\mathcal{E}^T_{\mathscr{I}^+}.
\end{align}
\end{theorem*}
The precise statement of this result is contained in \Cref{forwardRW,,backwardRW,,RW isomorphisms} of \Cref{subsection 4.1 Theorem 1}.\\
\indent Note that Theorem 1 can be applied to the study of scattering for the linearised Einstein equations in the Regge--Wheeler gauge, see also the recent \cite{TruongConformal}.
\subsubsection{Scattering for the Teukolsky equations}\label{subsubsection 1.3.2 scattering for teukolsky}
Given $\alpha$ or $\underline\alpha$ solving the Teukolsky equations \bref{wave equation +}, \bref{wave equation -}, the weighted null derivatives $\Psi, \underline\Psi$ defined by \bref{transport}, \bref{transport 2} satisfy the Regge--Wheeler equation \bref{RWintro}, so we can try to use Theorem 1 to construct a scattering theory for $\alpha, \underline\alpha$ using the spaces of scattering states associated to \bref{RWintro}: \\
\indent Let $(\upalpha,\upalpha')$, $(\underline\upalpha,\underline\upalpha')$ be Cauchy data for \bref{wave equation +}, \bref{wave equation -} respectively on $\overline{\Sigma}$ and define
\begin{align}
\|(\upalpha,\upalpha')\|^2_{\mathcal{E}^{T,+2}_{\overline\Sigma}}:=\|(\Psi,\slashed{\nabla}_{n_{\overline\Sigma}}\Psi)\|^2_{\mathcal{E}^T_{\overline{\Sigma}}},\qquad\qquad\|(\underline\upalpha,\underline\upalpha')\|^2_{\mathcal{E}^{T,-2}_{\overline\Sigma}}:=\|(\underline\Psi,\slashed{\nabla}_{n_{\overline\Sigma}}\underline\Psi)\|^2_{\mathcal{E}^T_{\overline{\Sigma}}}.
\end{align}
The expressions $\|\;\|^2_{\mathcal{E}^{T,+2}_{\overline\Sigma}}, \|\;\|^2_{\mathcal{E}^{T,-2}_{\overline\Sigma}}$ do indeed turn out to be norms on smooth, compactly supported data sets on $\overline\Sigma$, and thus they define Hilbert space norms on the completions of such data. Note that the values on $\overline\Sigma$ of $\Psi,\underline\Psi$ and their derivatives can be computed locally, using the Teukolsky equations \bref{wave equation +}, \bref{wave equation -}, from higher order derivatives of the initial data $(\upalpha,\upalpha')$, $(\underline\upalpha,\underline\upalpha')$ on $\overline\Sigma$.\\
\indent As mentioned earlier, the energies defining the Hilbert spaces of scattering states for the Teukolsky equations stem from the $T$-energy associated to the Regge--Wheeler equations. Remarkably, on $\mathscr{I}^\pm, \mathscr{H}^\pm$, the radiation fields of $\Psi, \underline\Psi$ are related to those of $\alpha, \underline\alpha$ by tangential derivatives, and it is possible to find meaningful expressions for the corresponding norms on $\mathscr{I}^\pm, \overline{\mathscr{H}^\pm}$ directly in terms of the radiation fields of $\alpha, \underline\alpha$.
\begin{theorem*}\label{Theorem 2}
For the Teukolsky equations \bref{wave equation +}, \bref{wave equation -} of spins $\pm2$, evolution from smooth, compactly supported data on a Cauchy surface extends to unitary Hilbert space isomorphisms:
\begin{align}
{}^{(+2)}\mathscr{F}^{+}:\mathcal{E}^{T,+2}_{\overline\Sigma}\longrightarrow \mathcal{E}^{T,+2}_{\mathscr{I}^+}\oplus\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}},\qquad\qquad{}^{(-2)}\mathscr{F}^{+}:\mathcal{E}^{T,-2}_{\overline\Sigma}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{I}^+}\oplus\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}},\\
{}^{(+2)}\mathscr{F}^{-}:\mathcal{E}^{T,+2}_{\overline\Sigma}\longrightarrow \mathcal{E}^{T,+2}_{\mathscr{I}^-}\oplus\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}},\qquad\qquad{}^{(-2)}\mathscr{F}^{-}:\mathcal{E}^{T,-2}_{\overline\Sigma}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{I}^-}\oplus\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}.
\end{align}
The spaces of past/future scattering states $\mathcal{E}^{T,\pm2}_{\mathscr{I}^\pm},\mathcal{E}^{T,\pm2}_{\mathscr{H}^\pm}$ are the Hilbert spaces obtained by completing suitable smooth, compactly supported data on $\mathscr{I}^\pm, \mathscr{H}^\pm$ under the corresponding norms in the following:
\begin{changemargin}{-1cm}{2cm}
\begin{center}
\setstretch{1.5}
\begin{tikzpicture}[scale=0.6,on grid]
\node (I) at ( 0,0) {};
\path
(I) +(90:4) coordinate (Itop) coordinate[label=90:$i^+$]
+(-90:4) coordinate (Ibot) coordinate[label=-90:$i^-$]
+(180:4) coordinate (Ileft)
+(0:4) coordinate (Iright) coordinate[label=0:$i^0$]
;
\draw (Ileft) -- node[align=center,yshift=15,xshift=15]{$\Big\|(\mathring{\slashed{\Delta}}-2)(\mathring{\slashed{\Delta}}-4)\left(2M\int^{\infty}_v d\bar{v}e^{\frac{1}{2M}({v}-\bar{v})}\Omega^2\alpha\right)\Big\|^2_{L^2(\overline{\mathscr{H}^+})}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$\\$+\Big\|6M\partial_v\left(2M\int^{\infty}_v d\bar{v}e^{\frac{1}{2M}({v}-\bar{v})}\Omega^2\alpha\right)\Big\|_{L^2(\overline{\mathscr{H}^+})}^2\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\qquad$} node[rotate=45,below]{$\overline{\mathscr{H}^+}$} (Itop) ;
\draw (Ileft) -- node[yshift=-15,xshift=15]{$\Big\|2M\left(-2(2M\partial_u)+3(2M\partial_u)^2-(2M\partial_u)^3\right)2M\Omega^{-2}\alpha\Big\|^2_{L^2(\overline{\mathscr{H}^-})}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$} node[rotate=-45,above]{$\overline{\mathscr{H}^-}$} (Ibot) ;
\draw[dash dot dot] (Ibot) -- node[align=center][yshift=-10,xshift=-15]{$\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\Big\|6M\upalpha_{\mathscr{I}^-}\Big\|^2_{L^2(\mathscr{I}^-)}$\\[1mm] $\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\left\|(\mathring{\slashed{\Delta}}-2)(\mathring{\slashed{\Delta}}-4)\left(\int^{v}_{-\infty}\upalpha_{\mathscr{I}^-} d\bar{v}\right)\right\|^2_{L^2(\mathscr{I}^-)}$} node[rotate=45,above]{$\mathscr{I}^-$}(Iright) ;
\draw[dash dot dot] (Iright) -- node[yshift=10,xshift=-10]{$\qquad\qquad\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\qquad\left\|(\partial_u)^3\upalpha_{\mathscr{I}^+}\right\|^2_{L^2(\mathscr{I}^+)}$} node[rotate=-45,below]{$\mathscr{I}^+$}(Itop) ;
\filldraw[white] (Itop) circle (3pt);
\draw[black] (Itop) circle (3pt);
\filldraw[white] (Ibot) circle (3pt);
\draw[black] (Ibot) circle (3pt);
\draw[black] (Ileft) circle (3pt);
\filldraw[black] (Ileft) circle (3pt);
\filldraw[white] (Iright) circle (3pt);
\draw[black] (Iright) circle (3pt);
\end{tikzpicture}
\end{center}
\end{changemargin}
\begin{changemargin}{-1.4cm}{2cm}
\begin{center}
\begin{tikzpicture}[scale=0.6]
\node (I) at ( 0,0) {};
\path
(I) +(90:4) coordinate (Itop) coordinate[label=90:$i^+$]
+(-90:4) coordinate (Ibot) coordinate[label=-90:$i^-$]
+(180:4) coordinate (Ileft)
+(0:4) coordinate (Iright) coordinate[label=0:$i^0$]
;
\draw (Ileft) -- node[yshift=20,xshift=25]{$\Big\|2M\left(2(2M\partial_v)+3(2M\partial_v)^2+(2M\partial_v)^3\right)2M\Omega^{-2}\underline\alpha\Big\|^2_{L^2(\overline{\mathscr{H}^+})}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$} node[rotate=45,below]{$\overline{\mathscr{H}^+}$} (Itop) ;
\draw (Ileft) -- node[align=center][yshift=-10,xshift=10]{$\Big\|6M\partial_u\left(2M\int^{u}_{-\infty}d\bar{u}e^{\frac{1}{2M}(u-\bar{u})}\Omega^2\underline\alpha\right)\Big\|^2_{L^2(\overline{\mathscr{H}^-})}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$\\[1mm] $+\left\|(\mathring{\slashed{\Delta}}-2)(\mathring{\slashed{\Delta}}-4)\left(2M\int^{u}_{-\infty}d\bar{u}e^{\frac{1}{2M}(u-\bar{u})}\Omega^2\underline\alpha\right)\right\|^2_{L^2(\overline{\mathscr{H}^-})}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$} node[rotate=-45,above]{$\overline{\mathscr{H}^-}$} (Ibot) ;
\draw[dash dot dot] (Ibot) -- node[yshift=-12,xshift=-12]{$\qquad\qquad\;\;\;\;\;\;\;\;\;\;\;\;\;\;\qquad\qquad\left\|(\partial_v)^3\underline\upalpha_{\mathscr{I}^-}\right\|_{L^2(\mathscr{I}^-)}^2$} node[rotate=45,above]{$\mathscr{I}^-$}(Iright) ;
\draw[dash dot dot] (Iright) -- node[align=center][yshift=10,xshift=-20]{ $\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left\|(\mathring{\slashed{\Delta}}-2)(\mathring{\slashed{\Delta}}-4)\left(\int^{u}_{-\infty}\underline\upalpha_{\mathscr{I}^+} d\bar{u}\right)\right\|^2_{L^2(\mathscr{I}^+)}$ \\$\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\Big\|6M\underline\upalpha_{\mathscr{I}^+}\Big\|^2_{L^2(\mathscr{I}^+)}$} node[rotate=-45,below]{$\mathscr{I}^+$}(Itop) ;
\filldraw[white] (Itop) circle (3pt);
\draw[black] (Itop) circle (3pt);
\filldraw[white] (Ibot) circle (3pt);
\draw[black] (Ibot) circle (3pt);
\filldraw[white] (Iright) circle (3pt);
\draw[black] (Iright) circle (3pt);
\filldraw[black] (Ileft) circle (3pt);
\draw[black] (Ileft) circle (3pt);
\end{tikzpicture}
\end{center}
\end{changemargin}
The maps ${}^{(\pm2)}\mathscr{F}^{\pm}$ lead to the Hilbert-space isomorphisms
\begin{align}
\begin{split}
&\mathscr{S}^{+2}: \mathcal{E}^{T,+2}_{\mathscr{I}^+}\oplus\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\longrightarrow \mathcal{E}^{T,+2}_{\mathscr{I}^-}\oplus\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}},\\
&\mathscr{S}^{-2}: \mathcal{E}^{T,-2}_{\mathscr{I}^+}\oplus\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{I}^-}\oplus\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}.
\end{split}
\end{align}
\end{theorem*}
\begin{remark*}
The scattering maps of Theorem 2 answer the questions \ref{QI}, \ref{QII}, \ref{QIII} posed at the beginning of the introduction. In particular, asymptotic completeness holds in the sense that the spaces $\mathcal{E}^{T,\pm2}_{\overline{\Sigma}}$ contain all smooth, compactly supported Cauchy data for \bref{wave equation +}, \bref{wave equation -} as dense subspaces.
\end{remark*}
\begin{remark*}\label{introduction regular frame norm}
As the Eddington--Finkelstein coordinate system degenerates at the bifurcation sphere $\mathcal{B}$, it is necessary to use a regular coordinate system, such as the Kruskal coordinates $U=-e^{-\frac{u}{2M}}, V=e^{\frac{v}{2M}}$. In this coordinate system we see that $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{W}_{AVBV}\sim V^{-2}\Omega^2\alpha \sim U^2\Omega^{-2}\alpha$ and $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{W}_{AUBU}\sim V^{2}\Omega^{-2}\underline\alpha \sim U^{-2}\Omega^{2}\underline\alpha$ extend regularly to the bifurcation sphere. The integrands defining $\mathcal{E}^{T,\pm2}_{\overline{\mathscr{H}^\pm}}$ also extend regularly to the bifurcation sphere $\mathcal{B}$. For example,
\begin{align}\label{introduction regular frame expression H-}
&\left(-2(2M\partial_u)+3(2M\partial_u)^2-(2M\partial_u)^3\right)\Omega^{-2}\alpha=U\partial_U^3 \left(U^{2}\Omega^{-2}\alpha\right),
\end{align}
\begin{align}\label{introduction regular frame expression I-}
\int^{\infty}_v e^{\frac{1}{2M}({v}-\bar{v})}\Omega^2\alpha\;d\bar{v}=2MV\int_{V}^{\infty} \overline{V}^{-2}\Omega^2\alpha\; d\overline{V}.
\end{align}
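The first of these identities follows directly from the change of variables: since $U=-e^{-\frac{u}{2M}}$, we have $2M\partial_u=-U\partial_U$, so setting $D:=U\partial_U$ and $f:=\Omega^{-2}\alpha$, the left-hand side of \bref{introduction regular frame expression H-} reads $\left(2D+3D^2+D^3\right)f$, while the Leibniz rule gives
\begin{align*}
U\partial_U^3\left(U^{2}f\right)=U\left(U^2\partial_U^3f+6U\partial_U^2f+6\partial_Uf\right)=D^3f+3D^2f+2Df.
\end{align*}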
We take $L^2(\overline{\mathscr{H}^+})$ to be defined with respect to the measure $dv\,\sin\theta\, d\theta\, d\phi$, and we define $L^2({\mathscr{I}^+})$ via the measure $du\,\sin\theta\, d\theta\, d\phi$. Analogous statements apply to $\mathscr{I}^-, \overline{\mathscr{H}^-}$.\\
\indent The detailed statement of Theorem 2 is contained in \Cref{+2 future forward scattering,+2 future backward scattering,+2 past forward scattering,scatteringthm+2} of \Cref{subsubsection 4.2.1 scattering for the +2 equation}, and \Cref{-2 future forward scattering,-2 future backward scattering,-2 past forward scattering,scatteringthm-2} of \Cref{subsubsection 4.2.2 Scattering for the -2 equation}.
\end{remark*}
\subsubsection{Teukolsky--Starobinsky correspondence}\label{subsubsection 1.3.3 TS}
\indent Finally, concerning the Teukolsky--Starobinsky correspondence relating $\alpha, \underline\alpha$, we may summarise our result as follows:
\begin{theorem*}\label{Theorem 3}
The constraints \bref{eq:227intro1}, \bref{eq:228intro1} can be used to define unitary Hilbert space isomorphisms:
\begin{align}
\mathcal{TS}_{\mathscr{I}^+}:\mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow\mathcal{E}^{T,-2}_{\mathscr{I}^+},\qquad\qquad\mathcal{TS}_{\mathscr{H}^+}:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}},
\end{align}
\begin{align}
\mathcal{TS}=\mathcal{TS}_{\mathscr{H}^+}\oplus\mathcal{TS}_{\mathscr{I}^+}: \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+}.
\end{align}
Applying $\mathcal{TS}$ to scattering data, one can associate to a solution $\alpha$ of the $+2$ Teukolsky equation \bref{wave equation +} arising from smooth scattering data in $\mathcal{E}^{T,+2}_{\mathscr{I}^+}\oplus\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}$ a unique solution $\underline\alpha$ of the $-2$ Teukolsky equation \bref{wave equation -} with smooth scattering data in $\mathcal{E}^{T,-2}_{\mathscr{I}^+}\oplus\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}$ such that \bref{eq:227intro1}, \bref{eq:228intro1} are satisfied everywhere on the exterior region of Schwarzschild.
\end{theorem*}
The map $\mathcal{TS}_{\mathscr{I}^+}$ is realised by taking the limit of constraint \bref{eq:227intro1} near $\mathscr{I}^+$ and inverting either side of the constraint on smooth, compactly supported scattering data, which are by definition dense subsets of $\mathcal{E}^{T,\pm2}_{\mathscr{I}^+}$. The map $\mathcal{TS}_{\mathscr{H}^+}$ is obtained analogously by studying constraint \bref{eq:228intro1} near $\overline{\mathscr{H}^+}$. Note that in order to obtain a unique smooth radiation field $\upalpha_{\mathscr{H}^+}$ for the +2 Teukolsky equation \bref{wave equation +} on the event horizon starting from a radiation field $\underline\upalpha_{\mathscr{H}^+}$ for the $-2$ equation \bref{wave equation -}, it is necessary to specify $\underline\upalpha_{\mathscr{H}^+}$ on the entirety of $\overline{\mathscr{H}^+}$, and vice versa for $\mathscr{I}^+$. Thus the isomorphisms $\mathcal{TS}_{\mathscr{I}^+}$, $\mathcal{TS}_{\mathscr{H}^+}$ can only be defined on spaces of scattering data that determine solutions to \bref{wave equation +}, \bref{wave equation -} \textit{globally} on the Schwarzschild exterior.\\
\indent In particular, note that spacetimes of Robinson--Trautman type are excluded from our scattering theory, see \Cref{section 9 TS correspondence} and \Cref{Appendix A Robinson--Trautman}. The Robinson--Trautman spacetimes have the property that one of $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$ or $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ is non-trivial while the other vanishes, and as such they would furnish a counterexample to the Teukolsky--Starobinsky correspondence were it not properly formulated. We show that this possibility is eliminated when finite-energy scattering is considered globally on the entirety of the exterior of the Schwarzschild solution.\\
\indent The detailed statement of Theorem 3 is contained in \Cref{Theorem 3 detailed statement} of \Cref{subsection 4.3 the Teukolsky--Starobinsky identities}. See \Cref{section 9 TS correspondence} for the detailed treatment.
\subsubsection{A preview of scattering for the full linearised Einstein equations}\label{subsubsection 1.3.4 corollary}
Theorem 3, combined with Theorem 2, allows us to bridge the scattering theory we build for the Teukolsky equations to a scattering theory for the full system of linearised Einstein equations in double null gauge, via the following corollary:
\begin{corollary*}\label{Corollary 1}
Given a smooth, compactly supported $\upalpha_{\mathscr{I}^-}$ on $\mathscr{I}^-$ such that $\int_{-\infty}^\infty d\bar{v} \; \upalpha_{\mathscr{I}^-}=0$, and an $\underline\upalpha_{\mathscr{H}^-}$ such that $U^{-2}\underline\upalpha_{\mathscr{H}^-}$ is smooth, compactly supported on $\overline{\mathscr{H}^-}$, there exists a unique smooth pair $(\alpha, \underline\alpha)$ on the exterior region of Schwarzschild, satisfying equations \bref{wave equation +}, \bref{wave equation -} respectively, where $\alpha$ realises $\upalpha_{\mathscr{I}^-}$ as its radiation field on ${\mathscr{I}^-}$, $\underline\alpha$ realises $\underline\upalpha_{\mathscr{H}^-}$ as its radiation field on $\overline{\mathscr{H}^-}$, such that constraints \bref{eq:227intro1} and \bref{eq:228intro1} are satisfied. Moreover, $\alpha, \underline\alpha$ induce smooth radiation fields $\underline{\upalpha}_{\mathscr{I}^+}, \upalpha_{\mathscr{H}^+}$ in $\mathcal{E}^{T,-2}_{{\mathscr{I}^+}}, \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}$ respectively. This extends to a unitary Hilbert-space isomorphism:
\begin{align}
\mathscr{S}^{-2,+2}:\mathcal{E}^{T,+2}_{\mathscr{I}^-}\oplus\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{I}^+}\oplus\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}.
\end{align}
\end{corollary*}
\begin{center}
\begin{tikzpicture}[->,scale=0.7, arrow/.style={
color=black,
draw=blue,thick,
-latex,
font=\fontsize{8}{8}\selectfont},
]
\node (I) at ( 0,0) {};
\path
(I) +(90:4) coordinate (Itop) coordinate [label={$i^+$}]
+(180:4) coordinate (Ileft) coordinate [label=180:{$\mathcal{B}\;$}]
+(0:4) coordinate (Iright) coordinate [label=0:{$\;i^0$}]
+(270:4) coordinate (Ibot) coordinate [label=-90:{$i^-$}]
;
\draw[arrow] ($(Itop)+(-90:3.6cm)$) to [in=-25,out=90] ($(Itop)+(-135:2.5cm)$);
\draw[arrow] ($(Itop)+(-90:3.6cm)$) to [in=205,out=90]($(Itop)+(-45:2.5cm)$);
\draw[arrow] ($(Ibot)+(135:2.7cm)$) to [out=10,in=-90] ($(Ibot)+(90:3.6cm)$);
\draw[arrow] ($(Ibot)+(45:2.7cm)$) to [out=170,in=-90] ($(Ibot)+(90:3.6cm)$);
\draw (Ileft) -- node[yshift=4mm,xshift=-1mm]{$\upalpha_{\mathscr{H}^+}$} (Itop) ;
\draw[dash dot dot] (Iright) -- node[yshift=4mm,xshift=1.mm]{$\underline\upalpha_{\mathscr{I}^+}$}(Itop) ;
\node[draw] at ($(Itop)+(-90:4cm)$) {$(\alpha,\underline\alpha)$};
\draw[dash dot dot] (Iright) -- node[yshift=-4mm,xshift=1.mm]{$\upalpha_{\mathscr{I}^-}$}(Ibot) ;
\draw (Ileft) -- node[yshift=-4mm,xshift=-1mm]{$\underline\upalpha_{\mathscr{H}^-}$} (Ibot) ;
\filldraw[white] (Itop) circle (3pt);
\draw[black] (Itop) circle (3pt);
\filldraw[white] (Ibot) circle (3pt);
\draw[black] (Ibot) circle (3pt);
\filldraw[white] (Iright) circle (3pt);
\draw[black] (Iright) circle (3pt);
\filldraw[black] (Ileft) circle (3pt);
\draw[black] (Ileft) circle (3pt);
\end{tikzpicture}
\end{center}
\Cref{Corollary 1} is stated again as \Cref{corollary to be proven} of \Cref{subsection 4.4 Corollary 1: mixed scattering}. The proof is contained in \Cref{subsection 9.4 mixed scattering}.\\
\indent To apply this result to scattering for the linearised Einstein equations, the strategy will be to start from data for the metric on $\mathscr{H}^-, \mathscr{I}^-$ (or $\mathscr{H}^+, \mathscr{I}^+$), obtain data for the shears $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}$ and hence $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}$ on $\mathscr{H}^+$, $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline{\chi}}}$ and hence $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ on $\mathscr{I}^+$, then use \Cref{Corollary 1} to obtain scattering data and solutions to \cref{wave equation +} and \cref{wave equation -}, and conclude by constructing the remaining quantities using the linearised Bianchi and null structure equations. This will be the subject of a forthcoming sequel to this paper \cite{M2050}.\\
\indent We can give a preview of the scattering results of the full system: assume we have a solution to the linearised Einstein equations defined on the whole of the exterior region (see \Cref{subsection 2.2 Linearised Einstein equations in double null gauge} for a full list of equations), such that $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ induce radiation fields $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\upalpha}_{\mathscr{I}^+}\in\mathcal{E}^{T,-2}_{\mathscr{I}^+}$, $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\upalpha}_{\mathscr{I}^-}\in\mathcal{E}^{T,+2}_{\mathscr{I}^-}$, $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\upalpha}_{{\mathscr{H}^+}}\in\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}$, $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\upalpha}_{{\mathscr{H}^-}}\in\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}$. Using \bref{example2} and its counterpart in the 4-direction, we can assert that the radiation fields belonging to the linearised shears $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat\chi}, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\chi}}$ must satisfy
\begin{align}\label{boundedness shears}
\begin{split}
&\left\|\left(\mathring{\slashed{\Delta}}-2\right)\left(\mathring{\slashed{\Delta}}-4\right)\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{I}^+}\right\|_{L^2(\mathscr{I}^+)}^2+\left\|6M\partial_u\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{I}^+}\right\|_{L^2(\mathscr{I}^+)}^2\\
&\qquad\qquad +\left\|\left(\mathring{\slashed{\Delta}}-2\right)\left(\mathring{\slashed{\Delta}}-4\right)\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{H}^+}\right\|_{L^2(\overline{\mathscr{H}^+})}^2+\left\|6M\partial_v\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{H}^+}\right\|_{L^2(\overline{\mathscr{H}^+})}^2\\
&\qquad\qquad\qquad\qquad\qquad=\left\|\left(\mathring{\slashed{\Delta}}-2\right)\left(\mathring{\slashed{\Delta}}-4\right)\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{I}^-}\right\|_{L^2(\mathscr{I}^-)}^2+\left\|6M\partial_v\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{I}^-}\right\|_{L^2(\mathscr{I}^-)}^2\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\left\|\left(\mathring{\slashed{\Delta}}-2\right)\left(\mathring{\slashed{\Delta}}-4\right)\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{H}^-}\right\|_{L^2(\overline{\mathscr{H}^-})}^2+\left\|6M\partial_u\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{H}^-}\right\|_{L^2(\overline{\mathscr{H}^-})}^2.
\end{split}
\end{align}
\indent The fact that the time translation and angular momentum operators commute with $\Box_g$ means that we can project scattering data onto individual azimuthal modes and consider solutions in frequency space. Since $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\chi}}$ are supported on $\ell\geq2$, and in view of the unitarity of \bref{boundedness shears}, we can translate \bref{boundedness shears} in terms of fixed frequency, fixed azimuthal mode solutions into the following statement:
\begin{align}
\Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{H}^+,\;\omega,m,\ell}\Big\|_{L^2_\omega }^2+\Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{I}^+,\;\omega,m,\ell}\Big\|^2_{L^2_\omega }\;=\;\Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{H}^-,\;\omega,m,\ell}\Big\|_{L^2_\omega }^2+\Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{I}^-,\;\omega,m,\ell}\Big\|^2_{L^2_\omega }.
\end{align}
Resumming in $\ell_{m,\ell}^2$ and using Plancherel, we obtain the identity
\begin{align}\label{conservation law}
\Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{H}^+}\Big\|_{L^2(\overline{\mathscr{H}^+})}^2+\Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{I}^+}\Big\|_{L^2(\mathscr{I}^+)}^2\;=\; \Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{H}^-}\Big\|^2_{L^2(\overline{\mathscr{H}^-})}+\Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{I}^-}\Big\|^2_{L^2(\mathscr{I}^-)}.
\end{align}
The statement \bref{conservation law} above ties in with the work of Holzegel \cite{Holzegel_2016}, where a set of conservation laws is derived for the full system of linearised Einstein equations on the Schwarzschild exterior \bref{SchwMetric} (using purely physical-space methods).\\
\indent Note that in particular, for past scattering data that vanishes on $\overline{\mathscr{H}^-}$, the identity \bref{conservation law} has the interpretation that the energy of the gravitational radiation emitted to $\mathscr{I}^+$ is bounded \underline{with constant 1} by the incoming gravitational energy radiated from $\mathscr{I}^-$, i.e.~there is no superradiant amplification of reflected gravitational radiation on the Schwarzschild exterior.
\subsection{Outline of the paper}\label{subsection 1.4 outline}
This paper is organised as follows: We review the linearised Einstein equations in double null gauge around the Schwarzschild spacetime in \Cref{section 2 preliminaries}. In \Cref{TRW} we introduce the Teukolsky equations, the Regge--Wheeler equations and derive important identities connecting the equations. Detailed statements of the results of this work are presented in \Cref{section 4 main theorems}, and then the scattering theory of the Regge--Wheeler equations is studied in \Cref{section 5 scattering theory for RW}. We develop scattering for the Teukolsky equations by first working out the necessary estimates to understand the asymptotic behaviour in forward evolution for both equations in \Cref{section 6} and \Cref{section 7}. Backwards scattering for both equations is treated in \Cref{section 8 constructing the scattering maps}, followed by the study of the constraints \bref{eq:227intro1} and \bref{eq:228intro1} in \Cref{section 9 TS correspondence}. \Cref{Appendix A Robinson--Trautman} is concerned with Robinson--Trautman spacetimes, and \Cref{Appendix B Double null guage} is a brief review of the double null gauge.
\setcounter{tocdepth}{2}
\subsection*{Acknowledgements} The author would like to express his gratitude to his supervisors Mihalis Dafermos, Claude Warnick and Malcolm J. Perry for introducing him to the fascinating area of scattering theory in general relativity, and for their unwavering support throughout the undertaking of this project. The author would like to thank Dejan Gajic, Leonhard Kehrberger, Rita Teixeira da Costa and Yakov Shlapentokh-Rothman for stimulating discussions and helpful remarks. The author acknowledges support by the EPSRC grant EP/L016516/1.
\section{Preliminaries}\label{section 2 preliminaries}
\subsection{The Schwarzschild exterior in a double null gauge}\label{subsection 2.1 schwarzschild in dng}
\indent Denote by $\mathscr{M}$ the exterior of the maximally extended Schwarzschild spacetime. Using Kruskal coordinates, this is the manifold with corners
\begin{align}
\mathscr{M}=\{(U,V,\theta^A)\in(-\infty,0]\times [0,\infty)\times S^2\}
\end{align}
equipped with the metric
\begin{align}\label{metric Kruskal}
ds^2=-\frac{32M^3}{r(U,V)}e^{-\frac{r(U,V)}{2M}}dUdV+r(U,V)^2\gamma_{AB}d\theta^A d\theta^B.
\end{align}
The function $r(U,V)$ is determined by $-UV=\left(\frac{r}{2M}-1\right)e^{\frac{r}{2M}}$, $(\theta^A)$ is a coordinate system on $S^2$ and $\gamma_{AB}$ is the standard metric on the unit sphere $S^2$. The time-orientation of $\mathscr{M}$ is defined by the vector field $\partial_U+\partial_V$.
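\indent Note that $r(U,V)$ is indeed a well-defined smooth function on $\mathscr{M}$: since
\begin{align*}
\frac{d}{dr}\left[\left(\frac{r}{2M}-1\right)e^{\frac{r}{2M}}\right]=\frac{r}{4M^2}e^{\frac{r}{2M}}>0,
\end{align*}
the map $r\mapsto\left(\frac{r}{2M}-1\right)e^{\frac{r}{2M}}$ is a strictly increasing smooth bijection from $[2M,\infty)$ onto $[0,\infty)$, so the relation $-UV=\left(\frac{r}{2M}-1\right)e^{\frac{r}{2M}}$ determines $r$ smoothly in terms of $-UV\geq0$ by the inverse function theorem, with $r=2M$ exactly when $UV=0$.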
The boundary of $\mathscr{M}$ consists of the two null hypersurfaces
\begin{align}
\mathscr{H}^+&=\{0\}\times(0,\infty)\times S^2,\\
\mathscr{H}^-&=(-\infty,0)\times \{0\}\times S^2,
\end{align}
and the 2-sphere $\mathcal{B}$ where $\mathscr{H}^+$ and $\mathscr{H}^-$ bifurcate:
\begin{align}
\mathcal{B}=\{U,V=0\}\cong S^2 .
\end{align}
We define $\overline{\mathscr{H}^+}=\mathscr{H}^+\cup \mathcal{B}$, $\overline{\mathscr{H}^-}=\mathscr{H}^-\cup \mathcal{B}$. \\
\indent The interior of $\mathscr{M}$ can be covered with the familiar Schwarzschild coordinates $(t,r,\theta^A)$ and the metric takes the form \bref{SchwMetric}, i.e.
\begin{align}
ds^2=-\left(1-\frac{2M}{r}\right)dt^2+\left(1-\frac{2M}{r}\right)^{-1}dr^2+r^2\gamma_{AB}d\theta^Ad\theta^B.
\end{align}
Let $\Omega^2=\left(1-\frac{2M}{r}\right)$. It will be convenient to work instead in Eddington--Finkelstein coordinates
\begin{align}\label{EF null coordinates}
u=\frac{1}{2}(t-r_*),\qquad\qquad\qquad v=\frac{1}{2}(t+r_*),
\end{align}
where $r_*$ is defined up to a constant by $\frac{dr_*}{dr}=\frac{1}{\Omega^2}$. The coordinates $(u,v,\theta^A)$ also define a double null foliation (see Appendix B) of the interior of $\mathscr{M}$ since the metric takes the form
\begin{align}
ds^2=-4\left(1-\frac{2M}{r}\right)dudv+r(u,v)^2(d\theta^2+\sin^2\theta d\phi^2).
\end{align}
In particular the null frame defined by the coordinates \bref{EF null coordinates} is given by (see Appendix B):
\begin{align}
e_3=\frac{1}{\Omega}\partial_u,\qquad\qquad e_4=\frac{1}{\Omega}\partial_v.
\end{align}
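Note that this frame satisfies the standard double null normalisation: as $g(\partial_u,\partial_v)=-2\Omega^2$ by the form of the metric above,
\begin{align*}
g(e_3,e_3)=g(e_4,e_4)=0,\qquad\qquad g(e_3,e_4)=\frac{1}{\Omega^2}\,g(\partial_u,\partial_v)=-2.
\end{align*}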
We may relate $U,V$ to $u,v$ after fixing the residual freedom in defining $t,r_*$ by
\begin{align}\label{Kruskal}
U=-e^{-\frac{u}{2M}},\qquad\qquad V=e^{\frac{v}{2M}}.
\end{align}
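With the normalisation $r_*=r+2M\log\left(\frac{r}{2M}-1\right)$ (a standard choice of the constant in $r_*$, adopted here for concreteness), the choice \bref{Kruskal} is consistent with the definition of $r(U,V)$:
\begin{align*}
-UV=e^{\frac{v-u}{2M}}=e^{\frac{r_*}{2M}}=\left(\frac{r}{2M}-1\right)e^{\frac{r}{2M}}.
\end{align*}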
Note that the intersections of null hypersurfaces of constant $u,v$ are spheres with metric $\slashed{g}_{AB}:=r^2\gamma_{AB}$. We denote these spheres by $S^2_{u,v}$.\\
\indent The $(u,v)$-coordinate system degenerates on $\overline{\mathscr{H}^+}$ and $\overline{\mathscr{H}^-}$ where $u=\infty,v=-\infty$ respectively. To compensate for this we can use the Kruskal coordinates to introduce weighted quantities in the coordinates $(u,v,\theta^A)$ that are regular on $\mathscr{H}^\pm$. We note already at this stage that the regularity of $\partial_U,\partial_V$ on the event horizons implies that $\frac{1}{\Omega}e_3, \Omega e_4$ are regular on $\mathscr{H}^+$ and $\frac{1}{\Omega}e_4, \Omega e_3$ are regular on $\mathscr{H}^-$ (but not $\overline{\mathscr{H}^\pm}$, which include $\mathcal{B}$).\\
\indent We denote by $\mathscr{C}_{u^*}$ the outgoing null hypersurface of constant $u=u^*$, and similarly $\underline{\mathscr{C}}_{v^*}$ denotes the ingoing null hypersurface $v=v^*$; define $\mathscr{C}_{u^*}\cap[v_1,v_2]$ to be the subset of $\mathscr{C}_{u^*}$ for which $v\in[v_1,v_2]$, and $\underline{\mathscr{C}}_{v}\cap[u_1,u_2]$ to be the subset of $\underline{\mathscr{C}}_v$ for which $u\in[u_1,u_2]$. Let ${\Sigma}$ be the spacelike surface $\{t=0\}$ and let $\overline{\Sigma}=\Sigma\cup\mathcal{B}$ be the topological closure of $\Sigma$ in $\mathscr{M}$. $\overline{\Sigma}$ is a smooth Cauchy surface for $\mathscr{M}$ which connects $\mathcal{B}$ with ``spacelike infinity''; in Kruskal coordinates it is given by $\{U+V=0\}$. We also work with a spacelike hypersurface $\Sigma^*$
intersecting $\mathscr{H}^+$ to the future of $\mathcal{B}$, defined as follows: let
\begin{align}
t^*=t+2M\log\left(\frac{r}{2M}+1\right).
\end{align}
The function $t^*$ can be extended to $\mathscr{H}^\pm$ to define a smooth function on all of $\mathscr{M}$, and we define $\Sigma^*$ by
\begin{align}
\Sigma^*=\{t^*=0\}.
\end{align}
\noindent Note that $\Sigma^*$ intersects $\mathscr{H}^+$ at $v=0$ and asymptotes to spacelike infinity. Define $\mathscr{H}^+_{\geq 0}:=\mathscr{H}^+\cap J^+(\Sigma^*)$. We will occasionally use the notation $x:=1-\frac{1}{\Omega^2}$.
We denote the spacetime region bounded by $\mathscr{C}_{u_0}\cap[v_0,v_1], \mathscr{C}_{u_1}\cap[v_0,v_1], \underline{\mathscr{C}}_{v_0}\cap[u_0,u_1], \underline{\mathscr{C}}_{v_1}\cap[u_0,u_1]$ by $\mathscr{D}^{u_1,v_1}_{u_0,v_0}$. We also denote the spacetime region bounded by $\mathscr{C}_u,\underline{\mathscr{C}}_v, \Sigma^*$ by $\mathscr{D}^{u,v}_{\Sigma^*}$.
\begin{center}
\begin{tikzpicture}[scale=1]
\node (I) at ( 0,0) {$\mathscr{D}^{u_1,v_1}_{u_0,v_0}$};
\path
(I) +(90:4) coordinate (Itop) coordinate[label=90:$i^+$]
+(-90:4) coordinate (Ibot) coordinate[label=-90:$i^-$]
+(180:4) coordinate (Ileft)
+(0:4) coordinate (Iright) coordinate[label=0:$i^0$]
;
\path
(I) +(90:2) coordinate (Ictop)
+(-90:2) coordinate (Icbot)
+(180:2) coordinate (Icleft)
+(0:2) coordinate (Icright)
;
\draw (Ileft) -- node[rotate=45,below] {$u=\infty$} node[rotate=45,above]{$\mathscr{H}^+$} (Itop) ;
\draw (Ileft) -- node[rotate=-45,above] {$v=-\infty$} node[rotate=-45,below]{$\mathscr{H}^-$}(Ibot) ;
\draw[dash dot dot] (Ibot) -- node[rotate=45,above] {$u=-\infty$} node[rotate=45,below]{$\mathscr{I}^-$}(Iright) ;
\draw[dash dot dot] (Iright) -- node[rotate=-45,below] {$v=\infty$} node[rotate=-45,above]{$\mathscr{I}^+$}(Itop) ;
\draw(Icleft) --node[rotate=45,above] {$\mathscr{C}_{u_1}\cap[v_0,v_1]$} (Ictop);
\draw(Ictop) -- node[rotate=-45,above] {$\underline{\mathscr{C}}_{v_1}\cap[u_0,u_1]$}(Icright);
\draw(Icright) -- node[rotate=45,below] {$\mathscr{C}_{u_0}\cap[v_0,v_1]$}(Icbot);
\draw(Icbot) -- node[rotate=-45,below] {$\underline{\mathscr{C}}_{v_0}\cap[u_0,u_1]$}(Icleft);
\filldraw[white] (Itop) circle (3pt);
\draw[black] (Itop) circle (3pt);
\filldraw[white] (Ibot) circle (3pt);
\draw[black] (Ibot) circle (3pt);
\filldraw[white] (Iright) circle (3pt);
\draw[black] (Iright) circle (3pt);
\filldraw[black] (Ileft) circle (3pt);
\end{tikzpicture}
\end{center}
\begin{center}
\begin{tikzpicture}[scale=0.4]
\node (I) at ( 0,0) {};
\path
(I) +(90:4) coordinate (Itop) coordinate[label=90:$i^+$]
+(-90:4) coordinate (Ibot) coordinate[label=-90:$i^-$]
+(180:4) coordinate (Ileft)
+(0:4) coordinate (Iright) coordinate[label=0:$i^0$]
;
\draw (Ileft) -- node[rotate=45,above]{$\mathscr{H}^+_{\geq0}$} (Itop) ;
\draw (Ileft) -- (Ibot) ;
\draw[dash dot dot] (Ibot) -- (Iright) ;
\draw[dash dot dot] (Iright) -- node[rotate=-45,above]{$\mathscr{I}^+$}(Itop) ;
\draw ($(Ileft)+(45:1.2)$) to[out=-0, in=165, edge node={node [below] {$\Sigma^*$}}] ($(Iright)$);
\filldraw[white] (Itop) circle (3pt);
\draw[black] (Itop) circle (3pt);
\filldraw[white] (Ibot) circle (3pt);
\draw[black] (Ibot) circle (3pt);
\filldraw[white] (Iright) circle (3pt);
\draw[black] (Iright) circle (3pt);
\filldraw[black]($(Ileft)+(45:1.2)$) circle (3pt);
\end{tikzpicture}\hspace{2cm}\begin{tikzpicture}[scale=0.4]
\node (I) at ( 0,0) {};
\path
(I) +(90:4) coordinate (Itop) coordinate[label=90:$i^+$]
+(-90:4) coordinate (Ibot) coordinate[label=-90:$i^-$]
+(180:4) coordinate (Ileft)
+(0:4) coordinate (Iright) coordinate[label=0:$i^0$]
;
\draw (Ileft) -- node[rotate=45,above]{$\mathscr{H}^+$} (Itop) ;
\draw (Ileft) --(Ibot) ;
\draw[dash dot dot] (Ibot) -- (Iright) ;
\draw[dash dot dot] (Iright) -- node[rotate=-45,above]{$\mathscr{I}^+$}(Itop) ;
\draw ($(Ileft)$) to[out=0, in=180, edge node={node [above] {$\Sigma$}}] ($(Iright)$);
\filldraw[white] (Itop) circle (3pt);
\draw[black] (Itop) circle (3pt);
\filldraw[white] (Ibot) circle (3pt);
\draw[black] (Ibot) circle (3pt);
\filldraw[white] (Iright) circle (3pt);
\draw[black] (Iright) circle (3pt);
\filldraw[white] (Ileft) circle (3pt);
\draw[black] (Ileft) circle (3pt);
\end{tikzpicture}\hspace{2cm}\begin{tikzpicture}[scale=0.4]
\node (I) at ( 0,0) {};
\path
(I) +(90:4) coordinate (Itop) coordinate[label=90:$i^+$]
+(-90:4) coordinate (Ibot) coordinate[label=-90:$i^-$]
+(180:4) coordinate (Ileft) coordinate[label=180:$\mathcal{B}$]
+(0:4) coordinate (Iright) coordinate[label=0:$i^0$]
;
\draw (Ileft) -- node[rotate=45,above]{$\overline{\mathscr{H}^+}$} (Itop) ;
\draw (Ileft) --(Ibot) ;
\draw[dash dot dot] (Ibot) -- (Iright) ;
\draw[dash dot dot] (Iright) -- node[rotate=-45,above]{$\mathscr{I}^+$}(Itop) ;
\draw ($(Ileft)$) to[out=0, in=180, edge node={node [above] {$\overline{\Sigma}$}}] ($(Iright)$);
\filldraw[white] (Itop) circle (3pt);
\draw[black] (Itop) circle (3pt);
\filldraw[white] (Ibot) circle (3pt);
\draw[black] (Ibot) circle (3pt);
\filldraw[white] (Iright) circle (3pt);
\draw[black] (Iright) circle (3pt);
\filldraw[black] (Ileft) circle (3pt);
\draw[black] (Ileft) circle (3pt);
\end{tikzpicture}
\end{center}
\subsubsection*{Null infinity $\mathscr{I}^\pm$}
We define the notion of null infinity by directly attaching it as a boundary to $\mathscr{M}$. Define $\mathscr{I}^+,\mathscr{I}^-$ to be the manifolds
\begin{align}
\mathscr{I}^+,\mathscr{I}^-:=\mathbb{R}\times S^2
\end{align}
and define $\overline{\mathscr{M}}$ to be the extension
\begin{align}
\overline{\mathscr{M}}=\mathscr{M}\cup\mathscr{I}^+\cup\mathscr{I}^-.
\end{align}
For sufficiently large $R$ and any open set $\mathcal{O}\subset\mathbb{R}\times S^2$, declare the sets $\mathcal{O}^+_R=(R,\infty]\times\mathcal{O}$ to be open in $\overline{\mathscr{M}}$, identifying $\mathscr{I}^+$ with the points $(u,\infty,\theta,\phi)$. To the set $\mathcal{O}_R^+$ we assign the coordinate chart $(u,s,\theta,\phi)\in \mathbb{R}\times[0,1)\times S^2$ via the map
\begin{align}
(u,v,\theta,\phi)\longrightarrow(u,\frac{R}{v},\theta,\phi),
\end{align}
where $(u,v,\theta,\phi)$ are the Eddington--Finkelstein coordinates we defined earlier. In this topology each point of $\mathscr{I}^+$ arises as the unique limit $\lim_{v\longrightarrow\infty} (u,v,\theta,\phi)$, and we use the above charts to fix a coordinate system $(u,\theta,\phi)$ on $\mathscr{I}^+$. The same procedure defines an atlas attaching $\mathscr{I}^-$ as a boundary to $\overline{\mathscr{M}}$.
\subsubsection{$S^2_{u,v}$-projected connection and angular derivatives}\label{D1D2}
We will be working primarily with tensor fields that are everywhere tangential to the $S^2_{u,v}$ spheres foliating $\mathscr{M}$. By this we mean tensor fields of type $(k,l)$, $\digamma\in \mathcal{T}^{(k,l)}\mathscr{M}$ on $\mathscr{M}$, such that for any point $q=(u,v,\theta^A)\in\mathscr{M}$ we have $\digamma|_q\in \mathcal{T}^{(k,l)}_{(\theta^A)}S^2_{u,v}$. (Note that a vector $X^A\in \mathcal{T}_{(\theta^A)}S^2_{u,v}$ is canonically identified with a vector $X^a\in\mathcal{T}_q\mathscr{M}$ via the inclusion map, whereas we identify a 1-form $\eta_A\in\mathcal{T}^*_{(\theta^A)}S^2_{u,v}$ with an element of the cotangent bundle of $\mathscr{M}$ by declaring that $\eta(X)=0$ whenever $X$ is in the orthogonal complement of $\mathcal{T}S^2_{u,v}$ with respect to the spacetime metric \bref{metric Kruskal}.) We will refer to such tensor fields as ``$S^2_{u,v}$-tangent'' tensor fields in the following. It will also be convenient to work with an ``$S^2_{u,v}$-projected'' version of the covariant derivative belonging to the Levi-Civita connection of the metric \bref{SchwMetric}. We define these notions as follows:\\
\indent We denote by $\slashed{\nabla}_A$ (or sometimes simply $\slashed{\nabla}$) the covariant derivative on $S^2_{u,v}$ with the metric $\slashed{g}_{AB}$. Note that $r\slashed{\nabla}=\slashed{\nabla}_{\mathbb{S}^2}$, which we also denote by $\mathring{\slashed{\nabla}}$.\\
\indent For an $S^2_{u,v}$-tangent 1-form $\xi$, define $\slashed{\mathcal{D}}_1 \xi$ to be the pair of functions
\begin{align}
\slashed{\mathcal{D}}_1{\xi}=(\slashed{\text{div}}\xi,\slashed{\text{curl}}\xi),
\end{align}
where $\slashed{\text{div}}\xi=\slashed{\nabla}^A \xi_A$ and $\slashed{\text{curl}}\xi=\slashed{\epsilon}^{AB} \slashed{\nabla}_A\xi_B$. For an $S^2_{u,v}$-tangent symmetric 2-tensor $\Xi_{AB}$ we define $\slashed{\mathcal{D}}_2 \Xi$ to be the 1-form given by
\begin{align}
(\slashed{\mathcal{D}}_2 \Xi)_A=(\slashed{\text{div}}\,\Xi)_A=\slashed{\nabla}^B \Xi_{BA}.
\end{align}
\indent We define the operator $\slashed{\mathcal{D}}^*_1 $ to be the $L^2({S^2_{u,v}})$-dual to $\slashed{\mathcal{D}}_1$. For scalars $(f,g)$ the 1-form $\slashed{\mathcal{D}}^*_1(f,g)$ is given by
\begin{align}
\left(\slashed{\mathcal{D}}^*_1(f,g)\right)_A=-\slashed{\nabla}_A f +\slashed{\epsilon}_{AB}\slashed{\nabla}^B g.
\end{align}
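This duality may be checked directly by integrating by parts on the sphere (which has no boundary): for an $S^2_{u,v}$-tangent 1-form $\xi$ and scalars $(f,g)$,
\begin{align*}
\int_{S^2_{u,v}}\left(f\,\slashed{\text{div}}\xi+g\,\slashed{\text{curl}}\xi\right)=\int_{S^2_{u,v}}\xi^A\left(-\slashed{\nabla}_A f+\slashed{\epsilon}_{AB}\slashed{\nabla}^B g\right),
\end{align*}
i.e.~$\langle(f,g),\slashed{\mathcal{D}}_1\xi\rangle_{L^2(S^2_{u,v})}=\langle\slashed{\mathcal{D}}^*_1(f,g),\xi\rangle_{L^2(S^2_{u,v})}$.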
\indent Similarly we denote by $\slashed{\mathcal{D}}^*_2$ the $L^2(S^2_{u,v})$-dual to $\slashed{\mathcal{D}}_2$. For an $S^2_{u,v}$-tangent 1-form $\xi$ this is given by
\begin{align}
(\slashed{\mathcal{D}}^*_2\xi)_{AB}=-\frac{1}{2}\left(\slashed{\nabla}_A \xi_B+\slashed{\nabla}_B\xi_A-\slashed{g}_{AB}\slashed{\text{div}}\xi\right).
\end{align}
We also use the notation
\begin{align}
\begin{split}
\mathring{\slashed{\mathcal{D}}}_1:=r\slashed{\mathcal{D}}_1,\qquad\qquad\mathring{\slashed{\mathcal{D}}^*_1}:=r\slashed{\mathcal{D}}^*_1,\\
\mathring{\slashed{\mathcal{D}}}_2:=r\slashed{\mathcal{D}}_2,\qquad\qquad\mathring{\slashed{\mathcal{D}}^*_2}:=r\slashed{\mathcal{D}}^*_2.
\end{split}
\end{align}
For example, if $\xi$ is a 1-form on $S^2_{u,v}$ then
\begin{align}
\left(\mathring{\slashed{\mathcal{D}}^*_2}\xi\right)_{AB}=-\frac{1}{2}\left(\mathring{\slashed{\nabla}}_A \xi_B+\mathring{\slashed{\nabla}}_B\xi_A-\slashed{g}_{AB}\mathring{\slashed{\nabla}}_C\xi^C\right)
\end{align}
and so on. Let $\xi$ be an $S^2_{u,v}$-tangent tensor field. We denote by $D\xi$ and $\underline{D}\xi$ the projected Lie derivative of $\xi$ in the 3- and 4-directions respectively. In EF coordinates we have
\begin{align}
(D\xi)_{A_1 A_2...A_n}=\partial_u(\xi_{A_1 A_2 ... A_n}),\qquad (\underline{D}\xi)_{A_1 A_2...A_n}=\partial_v(\xi_{A_1 A_2 ... A_n}).
\end{align}
Similarly, we define $\slashed{\nabla}_3 \xi$ and $\slashed{\nabla}_4 \xi$ to be the projections of the covariant derivatives $\nabla_3 \xi$ and $\nabla_4 \xi$ to $S^2_{u,v}$.
\subsubsection{Elliptic estimates on $S^2_{u,v}$}\label{subsubsection 2.1.2 Elliptic estimates on S2}
For a $k$-covariant $S^2_{u,v}$-tangent tensor field $\theta$ on $\mathscr{M}$, define
\begin{align}
|\theta|_{S^2}=\sqrt{\slashed{g}^{A_1B_1}\slashed{g}^{A_2B_2}\cdots\slashed{g}^{A_kB_k}\,\theta_{A_1...A_k}\theta_{B_1...B_k}},\qquad |\theta|=r^{-k}|\theta|_{S^2}.
\end{align}
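As a concrete illustration of the two norms (ours, not part of the text): for the metric $\slashed{g}_{AB}$ itself, viewed as a 2-covariant ($k=2$) tensor, $|\slashed{g}|_{S^2}=\sqrt{2}$ pointwise, so the rescaled norm is $|\slashed{g}|=\sqrt{2}/r^2$. A minimal sympy check:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
# round metric on S^2_{u,v}: gslash = r^2 (dtheta^2 + sin^2(theta) dphi^2)
g = sp.Matrix([[r**2, 0], [0, r**2*sp.sin(th)**2]])
ginv = g.inv()

# |gslash|_{S^2}^2 = gslash^{AC} gslash^{BD} gslash_{AB} gslash_{CD}
norm2 = sum(ginv[a, c]*ginv[b, d]*g[a, b]*g[c, d]
            for a in range(2) for b in range(2)
            for c in range(2) for d in range(2))
norm_S2 = sp.sqrt(sp.simplify(norm2))
print(norm_S2)           # sqrt(2): independent of r
print(norm_S2 / r**2)    # the rescaled norm |gslash| for k = 2
```

The contraction collapses to $\slashed{g}^{CD}\slashed{g}_{CD}=2$, which is why the result carries no $r$-dependence before the $r^{-k}$ rescaling.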
The following is a summary of Section 4.4 of \cite{DHR16}. Given scalars $(f,g)$ we can define an $S^2_{u,v}$ 1-form by $\xi=r\slashed{\mathcal{D}}^*_1(f,g)$. In turn, given a 1-form $\xi$ we can define a symmetric traceless 2-form $\theta$ via $\theta=r\slashed{\mathcal{D}}^*_2\xi$. It turns out that these representations span the space of such $\xi$ and $\theta$:
\begin{proposition}
Let $\xi$ be an $S^2_{u,v}$-tangent 1-form. Then there exist scalars $f,g$ such that
\begin{align}
\xi=r\slashed{\mathcal{D}}^*_1(f,g).
\end{align}
Let $\Xi$ be an $S^2_{u,v}$-tangent symmetric traceless 2-tensor. Then there exist scalars $f,g$ such that
\begin{align}
\Xi=r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1(f,g).
\end{align}
\end{proposition}
Note that when decomposing $f,g$ into their spherical harmonic modes, the action of $\slashed{\mathcal{D}}^*_1$ annihilates their $\ell=0$ modes and the action of $\slashed{\mathcal{D}}^*_2$ annihilates their $\ell=1$ modes. Thus in the case of a 1-form, $f,g$ can be taken to have vanishing $\ell=0$ modes, in which case $f,g$ are unique. Similarly, for a symmetric traceless $S^2_{u,v}$ 2-tensor $\Xi$ there exists a unique pair $f,g$ with vanishing $\ell=0,1$ modes such that $\Xi$ is given by the expression above.
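As a concrete check that $\slashed{\mathcal{D}}^*_2$ annihilates the image of an $\ell=1$ mode (our illustration, not part of the text): with $f=\cos\theta\propto Y_{10}$ and $g=0$ on the unit sphere, the 1-form $\xi=\slashed{\mathcal{D}}^*_1(f,0)$ has $\xi_\theta=\sin\theta$, $\xi_\phi=0$, and every component of $\slashed{\mathcal{D}}^*_2\xi$ vanishes:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
A = ['th', 'ph']
var = {'th': th, 'ph': ph}

# unit round metric, its inverse, and its nonzero Christoffel symbols
g = {('th', 'th'): sp.Integer(1), ('th', 'ph'): sp.Integer(0),
     ('ph', 'th'): sp.Integer(0), ('ph', 'ph'): sp.sin(th)**2}
ginv = {('th', 'th'): sp.Integer(1), ('th', 'ph'): sp.Integer(0),
        ('ph', 'th'): sp.Integer(0), ('ph', 'ph'): 1/sp.sin(th)**2}
Gamma = {('th', 'ph', 'ph'): -sp.sin(th)*sp.cos(th),  # Gamma^theta_{phi phi}
         ('ph', 'th', 'ph'): sp.cos(th)/sp.sin(th),   # Gamma^phi_{theta phi}
         ('ph', 'ph', 'th'): sp.cos(th)/sp.sin(th)}   # Gamma^phi_{phi theta}

# xi = D1*(cos(theta), 0): xi_theta = -d/dtheta cos(theta) = sin(theta)
xi = {'th': sp.sin(th), 'ph': sp.Integer(0)}

def nabla(a, b):  # covariant derivative nabla_a xi_b on the unit sphere
    return sp.diff(xi[b], var[a]) - sum(Gamma.get((c, a, b), 0)*xi[c] for c in A)

div = sum(ginv[(a, b)]*nabla(a, b) for a in A for b in A)

# (D2* xi)_{ab} = -(1/2)(nabla_a xi_b + nabla_b xi_a - g_ab div xi)
D2star = {(a, b): sp.simplify(-(nabla(a, b) + nabla(b, a) - g[(a, b)]*div)/2)
          for a in A for b in A}
print(D2star)  # every component vanishes
```

The same computation with $f$ constant ($\ell=0$) trivially gives $\xi=0$, illustrating the first statement.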
\begin{remark}
The operators $\slashed{\mathcal{D}}_1,\slashed{\mathcal{D}}_2,\slashed{\mathcal{D}}^*_1,\slashed{\mathcal{D}}^*_2$ defined in \Cref{D1D2} can be combined to give
\begin{align}
\begin{split}
&-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2=\mathring{\slashed{\Delta}}-2\qquad\qquad-r^2\slashed{\mathcal{D}}^*_1\slashed{\mathcal{D}}_1=\mathring{\slashed{\Delta}}-1\\
&-2r^2\slashed{\mathcal{D}}_2\slashed{\mathcal{D}}^*_2=\mathring{\slashed{\Delta}}+1\qquad\qquad -r^2\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}^*_1=\mathring{\slashed{\Delta}}.
\end{split}
\end{align}
The operator $\mathring{\slashed\Delta}$ is the Laplacian on the unit 2-sphere $\mathbb{S}^2$.
\end{remark}
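As a sanity check (our illustration, not drawn from \cite{DHR16}), the last of these identities, $-r^2\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}^*_1=\mathring{\slashed{\Delta}}$, can be verified symbolically on the unit sphere ($r=1$, so $\slashed{\epsilon}_{\theta\phi}=\sin\theta$):

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
f = sp.Function('f')(th, ph)
g = sp.Function('g')(th, ph)

# Laplacian on the unit sphere
def lap(h):
    return (sp.diff(sp.sin(th)*sp.diff(h, th), th)/sp.sin(th)
            + sp.diff(h, ph, 2)/sp.sin(th)**2)

# xi = D1*(f,g): xi_A = -grad_A f + eps_{AB} grad^B g   (r = 1)
xi_th = -sp.diff(f, th) + sp.diff(g, ph)/sp.sin(th)
xi_ph = -sp.diff(f, ph) - sp.sin(th)*sp.diff(g, th)

# D1 xi = (div xi, curl xi) on the unit sphere
div = (sp.diff(sp.sin(th)*xi_th, th) + sp.diff(xi_ph/sp.sin(th), ph))/sp.sin(th)
curl = (sp.diff(xi_ph, th) - sp.diff(xi_th, ph))/sp.sin(th)

# -D1 D1* acts as the Laplacian on each of (f, g)
print(sp.simplify(-div - lap(f)))   # 0
print(sp.simplify(-curl - lap(g)))  # 0
```

The cross terms (the curl of a gradient and the divergence of a rotated gradient) cancel identically, leaving the Laplacian acting diagonally on the pair $(f,g)$.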
\begin{proposition}
Let $\Xi$ be a smooth symmetric traceless $S^2_{u,v}$ 2-tensor. We have the following identities:
\begin{align}
\int_{S^2_{u,v}}\sin\theta d\theta d\phi\left[ |\slashed{\nabla}\Xi|^2+2K|\Xi|^2\right]=2\int_{S^2_{u,v}} \sin\theta d\theta d\phi |\slashed{\mathcal{D}}_2 \Xi|^2,
\end{align}
\begin{align}
\int_{S^2_{u,v}}\sin\theta d\theta d\phi\left[\frac{1}{4}|\slashed{\Delta}\Xi|^2+K|\slashed{\nabla}\Xi|^2 +K^2|\Xi|^2\right]=\int_{S^2_{u,v}} \sin\theta d\theta d\phi |\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2 \Xi|^2,
\end{align}
where $K=\frac{1}{r^2}$ is the Gaussian curvature of $S^2_{u,v}$.
\end{proposition}
We also note the following Poincar\'e inequality:
\begin{proposition}\label{poincaresection}
Let $\Xi$ be a smooth symmetric traceless $S^2_{u,v}$ 2-tensor, then we have
\begin{align}\label{poincare}
2K\int_{S^2_{u,v}}\sin\theta d\theta d\phi|\Xi|^2\leq \int_{S^2_{u,v}} \sin\theta d\theta d\phi|\slashed{\nabla} \Xi|^2.
\end{align}
\end{proposition}
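Both statements can be checked on an explicit example (our verification sketch, on the unit sphere, where $r=1$ and $K=1$): the symmetric traceless $\ell=2$ tensor harmonic $\Xi_{\theta\theta}=3\sin^2\theta$, $\Xi_{\phi\phi}=-3\sin^4\theta$, $\Xi_{\theta\phi}=0$ satisfies the first identity of the elliptic proposition and saturates the Poincar\'e inequality \bref{poincare}:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
s, c = sp.sin(th), sp.cos(th)
A = [0, 1]  # 0 = theta, 1 = phi
ginv = [[1, 0], [0, 1/s**2]]
# nonzero Christoffels of the unit sphere: Gam[upper][lower][lower]
Gam = [[[0, 0], [0, -s*c]], [[0, c/s], [c/s, 0]]]
var = [th, ph]

# an l = 2 symmetric traceless tensor harmonic
Xi = [[3*s**2, 0], [0, -3*s**4]]

def nab(cc, a, b):  # covariant derivative nabla_c Xi_ab
    return (sp.diff(Xi[a][b], var[cc])
            - sum(Gam[d][cc][a]*Xi[d][b] + Gam[d][cc][b]*Xi[a][d] for d in A))

def sphere_int(h):  # integral over the unit sphere
    return sp.integrate(sp.integrate(h*s, (th, 0, sp.pi)), (ph, 0, 2*sp.pi))

norm2_Xi = sum(ginv[a][x]*ginv[b][y]*Xi[a][b]*Xi[x][y]
               for a in A for b in A for x in A for y in A)
norm2_nabXi = sum(ginv[cc][z]*ginv[a][x]*ginv[b][y]*nab(cc, a, b)*nab(z, x, y)
                  for cc in A for z in A for a in A for b in A for x in A for y in A)
divXi = [sum(ginv[b][cc]*nab(b, cc, a) for b in A for cc in A) for a in A]
norm2_div = sum(ginv[a][b]*divXi[a]*divXi[b] for a in A for b in A)

lhs = sphere_int(norm2_nabXi + 2*norm2_Xi)  # K = 1 on the unit sphere
rhs = 2*sphere_int(norm2_div)
print(sp.simplify(lhs - rhs))  # 0: first elliptic identity
print(sp.simplify(sphere_int(norm2_nabXi) - 2*sphere_int(norm2_Xi)))  # 0: Poincare saturated
```

Saturation at $\ell=2$ is expected: $\ell=2$ is the lowest mode supported by symmetric traceless 2-tensors, so it realises the sharp constant in \bref{poincare}.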
\begin{remark}
We will be using the notation
\begin{align}
\mathcal{A}_2:=-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2=\mathring{\slashed{\Delta}}-2.
\end{align}
\end{remark}
\subsubsection{Asymptotics of $S^2_{u,v}$-tensor fields}\label{subsubsection 2.1.3 Asymptotics of S2 tensor fields}
Let $\digamma$ be a $k$-covariant $S^2_{u,v}$-tangent tensor field on $\mathscr{M}$. We say that $\digamma$ converges to $F=F_{A_1A_2...A_k}(u,\theta^A)$ as $v\longrightarrow\infty$ if $r^{-k}\digamma\longrightarrow F$ in the norm $|\;\;|_{S^2}$. We may write
\begin{align}
\begin{split}
\left|\frac{1}{r^k}\digamma(u,v,\theta^A)-F(u,\theta^A)\right|_{S^2}&=\left|\int_{v}^\infty d\bar{v}\, \frac{d}{d\bar{v}}\frac{1}{r^k}\digamma \right|_{S^2}\leq \int_{v}^\infty d\bar{v} \left|\frac{d}{d\bar{v}}\frac{1}{r^k}\digamma\right|_{S^2}
\\&=\int_{v}^{\infty}d\bar{v}\left|r^k\frac{d}{d\bar{v}}\frac{1}{r^k}\digamma\right|=\int_{v}^\infty d\bar{v}\,|\Omega\slashed{\nabla}_4 \digamma|.
\end{split}
\end{align}
Therefore, if $\Omega\slashed{\nabla}_4\digamma$ belongs to $L^1_vL^2_{S^2_{u,v}}$ then $\digamma$ has a limit towards $\mathscr{I}^+$. It is easy to see that if $\{\digamma_n\}_{n=1}^\infty$ is a Cauchy sequence with respect to $|\;\;|$ then $\digamma_n$ converges in the sense of this definition. The above extends to tensors of rank $(k,\ell)$, where $r^{-k}$ is replaced by $r^{-k+\ell}$. Similar considerations apply when taking the limit towards $\mathscr{I}^-$.
In particular, for a symmetric tensor $\Psi$ of rank $(2,0)$, it will be simpler to work with $\Psi^{A}{}_B$. Note that $\Omega\slashed{\nabla}_4\Psi^A{}_B=\partial_v\Psi^A{}_B$, $\Omega\slashed{\nabla}_3\Psi^A{}_B=\partial_u \Psi^A{}_B$. Unless otherwise indicated, we work with $S^2_{u,v}$-tangent $(1,1)$-tensors throughout.
\subsection{Linearised Einstein equations in a double null gauge}\label{subsection 2.2 Linearised Einstein equations in double null gauge}
When linearising the Einstein equations \bref{EVE} against the Schwarzschild background in a double null gauge, the quantities governed by the resulting equations can be organised into a collection of $S^2_{u,v}$-tangent tensor fields:
\begin{itemize}
\item The linearised metric components
\begin{align}\label{linearised metric}
\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\slashed{g}}}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{b}\;,\;\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sqrt{\slashed{g}}}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\Omega}\;,
\end{align}
\item the linearised connection coefficients
\begin{align}\label{linearised connection}
\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\chi}}\;, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{(\Omega \tr\chi)}\;, \;\stackrel{\mbox{\scalebox{0.4}{(1)}}}{(\Omega \tr\underline\chi)}\;,\;\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\omega}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\omega}}\;,\;
\end{align}
\item the linearised curvature components
\begin{align}\label{linearised curvature}
\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}\;,\;\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\rho}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sigma}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{K}.
\end{align}
\end{itemize}
See Appendix B and \cite{DHR16} for the details of linearising the vacuum Einstein equations \bref{EVE} in a double null gauge. We now state the linearised vacuum Einstein equations around the Schwarzschild black hole in a double null gauge:
\begin{itemize}
\item The equations governing the linearised metric components \bref{linearised metric}:
\begin{align}
\partial_v \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sqrt{\slashed{g}}}\;=\;2(\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\Omega\tr\chi})-2\;\slashed{div}\stackrel{\mbox{\scalebox{0.4}{(1)}}}{b}&,\qquad\Omega\slashed{\nabla}_4 \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\slashed{g}}}\;=\;2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}+2\slashed{\mathcal{D}}^*_2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{b},\\
\partial_u \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sqrt{\slashed{g}}}\;=\;2(\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\Omega\tr\underline\chi})&,\qquad \Omega\slashed{\nabla}_3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\slashed{g}}}\;=\;2\Omega\underline{\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}}.\\
\partial_u\stackrel{\mbox{\scalebox{0.4}{(1)}}}{b}\;=\;&2\Omega^2(\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta}-\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta}),\\
\partial_v\left(\frac{\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\Omega}}{\Omega}\right)\;=\;\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\omega},\qquad\qquad\partial_u\left(\frac{\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\Omega}}{\Omega}\right)\;&=\;\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\omega},\qquad\qquad\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta}_A+\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\eta}}_A\;=\;2\slashed{\nabla}_A \left(\frac{\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\Omega}}{\Omega}\right).\label{omega omegabar eta etabar}
\end{align}
\item The equations governing the linearised connection coefficients \bref{linearised connection}:
\begin{equation} \label{start of full system}
\Omega\slashed\nabla_4\; r\overone{\Omega tr\underline\chi}=2\Omega^2\left(\slashed{div}\; r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta}+2r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\rho} -\frac{4M}{r^2}\frac{\stackrel{\mbox{\scalebox{0.45}{(1)}}}{\Omega}}{\Omega}\right)+\Omega^2\overone{\Omega tr\chi},
\end{equation}
\begin{equation}\label{D3TrChiBar}
\Omega\slashed\nabla_3\; r\overone{\Omega tr\chi}=2\Omega^2\left(\slashed{div}\; r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta}+2r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\rho}-\frac{4M}{r^2}\frac{\stackrel{\mbox{\scalebox{0.45}{(1)}}}{\Omega}}{\Omega}\right) -\Omega^2 \overone{\Omega tr\underline\chi},
\end{equation}
\begin{equation}\label{D4TrChi}
\Omega\slashed\nabla_4\frac{r^2}{\Omega^2}\overone{\Omega tr\chi}=4r\overset{\mbox{\scalebox{0.4}{(1)}}}{\omega},\qquad\qquad \Omega\slashed\nabla_3\frac{r^2}{\Omega^2}\overone{\Omega tr\underline\chi}=-4r\overset{\mbox{\scalebox{0.4}{(1)}}}{\underline\omega},
\end{equation}
\begin{equation}\label{D4Chihat}
\Omega\slashed\nabla_4\frac{r^2 \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}}{\Omega}=-r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha},\qquad\qquad \Omega\slashed\nabla_3\frac{r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\hat{\chi}}}}{\Omega}=-r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha},
\end{equation}
\begin{equation}\label{D3Chihat}
\Omega\slashed\nabla_3\; r\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}=-2r\slashed{\mathcal{D}}^*_2 \Omega^2 \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta}-\Omega^2 \left(\Omega \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\hat{\chi}}}\right),
\end{equation}
\begin{equation}\label{D4Chihatbar}
\Omega\slashed\nabla_4\;r\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\hat{\chi}}}=-2r\slashed{\mathcal{D}}^*_2\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta}+\Omega^2\left(\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}\right),
\end{equation}
\begin{equation}\label{D3etabar}
\Omega\slashed\nabla_3r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta}=r\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}-\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta},\qquad\qquad \Omega\slashed\nabla_4r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta}=-r\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}+\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta},
\end{equation}
\begin{equation}\label{D4etabar}
\Omega\slashed\nabla_4 r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta}=2r^2\slashed\nabla\overset{\mbox{\scalebox{0.4}{(1)}}}{\omega}+r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta},\qquad\qquad \Omega\slashed\nabla_3r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta}=2r^2\slashed\nabla\underline{\overset{\mbox{\scalebox{0.4}{(1)}}}{\omega}}-r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta},
\end{equation}
\item The equations governing the curvature components \bref{linearised curvature}:
\begin{equation}\label{Bianchi +2}
\Omega\slashed\nabla_3 \;r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}=-2r\slashed{\mathcal{D}}^*_2 \Omega^2 \Omega \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}+\frac{6M\Omega^2}{r^2}\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}},\qquad\quad \Omega\slashed\nabla_4\;r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}=2r\slashed{\mathcal{D}}^*_2\Omega^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}+\frac{6M\Omega^2}{r^2}\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\hat{\chi}}},\;
\end{equation}
\begin{equation}\label{Bianchi +1a}
\Omega\slashed\nabla_4 \frac{r^4\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}}{\Omega}=r\slashed{div}\;r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha},\qquad\qquad \Omega\slashed\nabla_3\frac{r^4\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}}{\Omega}=-r\slashed{div}\; r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha},
\end{equation}
\begin{equation}\label{Bianchi +1b}
\Omega\slashed\nabla_4r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}=r\slashed{\mathcal{D}}^*_1(r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\rho},r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sigma})+\frac{6M\Omega^2}{r}\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta},\qquad\quad\Omega\slashed\nabla_3 r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}=r\slashed{\mathcal{D}}^*_1(-r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\rho},r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sigma})-\frac{6M\Omega^2}{r}\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta},
\end{equation}
\begin{equation}\label{Bianchi 0}
\Omega\slashed\nabla_4\; r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\rho}=r\slashed{div}\;r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}+3M \overone{\Omega tr\chi},\qquad\qquad \Omega\slashed\nabla_3\;r^3 \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\rho}=-r\slashed{div}\;r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}+3M\overone{\Omega tr\underline\chi},
\end{equation}
\begin{equation}\label{Bianchi 0*}
\Omega\slashed\nabla_4\; r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sigma}=-r\slashed{curl} \;r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta},\qquad\qquad \Omega\slashed\nabla_3\; r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sigma}=-r\slashed{curl}\;r^2\Omega \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}.
\end{equation}
\end{itemize}
\begin{remark}\label{regular}
The degeneration of the Eddington--Finkelstein (EF) frame near $\overline{\mathscr{H}^+}$ carries over to a degeneration of the quantities governed by equations \bref{start of full system}--\bref{Bianchi 0*}, as these quantities were derived via the EF frame (see Appendix B). By switching to a regular frame, e.g.~the Kruskal frame, it can be shown that these quantities extend regularly to $\overline{\mathscr{H}^+}$ when supplied with the appropriate weights in $U,V$. In particular, note that
\begin{align}
\tilde{\alpha}=V^{-2}\Omega^2\alpha,\qquad\underline{\widetilde{\alpha}}=U^2\Omega^{-2}\underline\alpha,
\end{align}
extend regularly to $\overline{\mathscr{H}^+}$, including $\mathcal{B}$.
\end{remark}
\section{The Teukolsky equations, the Teukolsky--Starobinsky identities and the Regge--Wheeler equations}\label{TRW}
\subsection{The Teukolsky equations and their well-posedness}\label{Chandra1}
Let $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$, $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ belong to a solution of the linearised Einstein equations \bref{start of full system}--\bref{Bianchi 0*}. It turns out that the linearised fields $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$, $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ obey decoupled second-order hyperbolic equations, the well-known Teukolsky equations.\\
\indent Take the first equation of \bref{Bianchi +2} and multiply by $\frac{r^4}{\Omega^4}$:
\begin{equation}
\frac{r^4}{\Omega^4}\Omega\slashed\nabla_3\;r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha=-2r\slashed{\mathcal{D}}^*_2 \frac{r^4\stackrel{\mbox{\scalebox{0.4}{(1)}}}\beta}{\Omega}+6M\frac{r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}}{\Omega}.
\end{equation}
Now differentiate in the $\Omega e_4$ direction and multiply by $\frac{\Omega^2}{r^2}$ to obtain the \textbf{Spin +2 Teukolsky equation}:
\begin{equation}\label{T+2}
\frac{\Omega^2}{r^2}\Omega\slashed\nabla_4 \;\frac{r^4}{\Omega^4}\Omega \slashed\nabla_3\; r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha=-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha-\frac{6M}{r}r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha.
\end{equation}
We note that:
\begin{equation}
\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2=-\frac{1}{2}\slashed{\Delta}+\frac{1}{r^2},\qquad\qquad \Omega\slashed\nabla_4 \frac{r^2}{\Omega^2}=-\Omega\slashed\nabla_3 \frac{r^2}{\Omega^2}=r(x+2).
\end{equation}
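Both coefficient identities used here can be verified symbolically (our sketch, not part of the text): with $\Omega^2=1-2M/r$ and $\partial_v r=\Omega^2$ in EF coordinates one finds $\partial_v(r^2/\Omega^2)=r\,(3\Omega^2-1)/\Omega^2$, so the notation $r(x+2)$ corresponds to $x+2=(3\Omega^2-1)/\Omega^2$ (with $x$ as defined earlier in the paper; this identification is our inference), and likewise $-2-6M/r=3\Omega^2-5$, the zeroth-order coefficient appearing below:

```python
import sympy as sp

M, v = sp.symbols('M v', positive=True)
r = sp.Function('r')(v)
Om2 = 1 - 2*M/r  # Omega^2 on Schwarzschild

# In EF double-null coordinates dr/dv = Omega^2; substitute into d/dv (r^2/Omega^2)
expr = sp.diff(r**2/Om2, v).subs(sp.Derivative(r, v), Om2)
print(sp.simplify(expr - r*(3*Om2 - 1)/Om2))  # 0: r(x+2) = r(3*Omega^2-1)/Omega^2

# zeroth-order coefficient of the rewritten Teukolsky equation
print(sp.simplify((-2 - 6*M/r) - (3*Om2 - 5)))  # 0
```

The second identity is what turns $-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2-\frac{6M}{r}$ into $r^2\slashed{\Delta}+(3\Omega^2-5)$.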
We may rewrite the equation as:
\begin{equation}\label{T+2d}
-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4\;r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha+r^2\slashed\Delta\;r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha-2r(x+2)\Omega\slashed\nabla_3 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha+(3\Omega^2-5)r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha=0.
\end{equation}
\indent An analogous procedure produces the \textbf{Spin }$\bm{-2}$\textbf{ Teukolsky equation}
\begin{equation}\label{T-2}
\frac{\Omega^2}{r^2}\Omega\slashed\nabla_3 \;\frac{r^4}{\Omega^4}\Omega \slashed\nabla_4\; r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}=-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}-\frac{6M}{r}r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha},
\end{equation}
which we may rewrite as
\begin{equation}\label{T-2d}
-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4\;r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}+r^2\slashed\Delta\;r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}+2r(x+2)\Omega\slashed\nabla_4 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}+(3\Omega^2-5)r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}=0.
\end{equation}
We now state well-posedness theorems which are standard for linear second-order hyperbolic equations of the type that \cref{T+2}, \cref{T-2} fall under. Taking into account \Cref{regular}, we start with the future evolution of $\Omega^2\alpha$ and $\Omega^{-2}\underline\alpha$. \\
\indent Having derived the Teukolsky equations \bref{T+2}, \bref{T-2}, we can study these equations in isolation. Since the following theorems do not pertain to the linearised Einstein equations, we drop the superscript $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{{}}$.
\begin{proposition}\label{WP+2Sigma*}
Prescribe on $\Sigma^*$ a pair of smooth symmetric traceless $S^2_{u,v}$ 2-tensor fields $(\upalpha,\upalpha')$. Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor field $\Omega^2\alpha$ that satisfies \bref{T+2} on $J^+(\Sigma^*)$, with $\Omega^2\alpha|_{\Sigma^*}=\upalpha, \slashed{\nabla}_{n_{\Sigma^*}}\Omega^2\alpha|_{\Sigma^*}=\upalpha'$.
\end{proposition}
\begin{proposition}\label{WP-2Sigma*}
Prescribe on $\Sigma^*$ a pair of smooth symmetric traceless $S^2_{u,v}$ 2-tensor fields $(\underline\upalpha,\underline\upalpha')$. Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor field $\Omega^{-2}\underline\alpha$ that satisfies \bref{T-2} on $J^+(\Sigma^*)$, with $\Omega^{-2}\underline\alpha|_{\Sigma^*}=\underline\upalpha, \slashed{\nabla}_{n_{\Sigma^*}}\Omega^{-2}\underline\alpha|_{\Sigma^*}=\underline\upalpha'$.
\end{proposition}
The same applies with $\Sigma^*$ replaced by any other $\mathscr{H}^+$-penetrating spacelike hypersurface ending at $i^0$.\\
\indent The degeneration of the EF frame discussed in \Cref{regular} is inherited by \bref{T+2}, \bref{T-2}, and we must work with $\widetilde{\alpha}=V^{-2}\Omega^2\alpha, \widetilde{\underline\alpha}=U^2\Omega^{-2}\underline\alpha$ in order to study the Teukolsky equations with data on $\overline{\Sigma}$. The weighted quantities $\widetilde{\alpha}, \widetilde{\underline\alpha}$ satisfy the following equations:
\begin{align}\label{T+2B}
\frac{1}{\Omega^2}\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4 r\widetilde{\alpha}+\frac{1}{M}(4-3\Omega^2)\Omega\slashed{\nabla}_3 r\widetilde{\alpha}-\frac{1}{r}(3\Omega^2-5)\widetilde{\alpha}-\slashed{\Delta}r\widetilde{\alpha}=0,
\end{align}
\begin{align}\label{T-2B}
\frac{1}{\Omega^2}\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4 r\widetilde{\underline\alpha}-\frac{1}{M}(4-3\Omega^2)\Omega\slashed{\nabla}_4 r\widetilde{\underline\alpha}-\frac{1}{r}(3\Omega^2-5)\widetilde{\underline\alpha}-\slashed{\Delta}r\widetilde{\underline\alpha}=0.
\end{align}
Equations (\ref{T+2B}) and (\ref{T-2B}) do not degenerate near $\mathcal{B}$ and we can make the following well-posedness statement:
\begin{proposition}\label{WP+2Sigmabar}
Prescribe a pair of smooth symmetric traceless $S^2_{U,V}$ 2-tensor fields $(\widetilde{\upalpha},\widetilde{\upalpha}')$ on $\overline{\Sigma}$. Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor field $\Omega^2{\alpha}$ that satisfies (\ref{T+2}) on $ J^+(\overline{\Sigma})$ with $V^{-2}\Omega^2\alpha|_{\overline{\Sigma}}=\widetilde{\upalpha}$ and $\slashed{\nabla}_{n_{\overline{\Sigma}}}V^{-2}\Omega^2\alpha|_{\overline{\Sigma}}=\widetilde{\upalpha}'$.
\end{proposition}
\begin{proposition}\label{WP-2Sigmabar}
Prescribe a pair of smooth symmetric traceless $S^2_{U,V}$ 2-tensor fields $(\widetilde{\underline\upalpha},\widetilde{\underline\upalpha}')$ on $\overline{\Sigma}$. Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor field $\Omega^{-2}{\underline\alpha}$ that satisfies (\ref{T-2}) on $ J^+(\overline{\Sigma})$ with $U^{2}\Omega^{-2}\underline\alpha|_{\overline{\Sigma}}=\widetilde{\underline\upalpha}$ and $\slashed{\nabla}_{n_{\overline{\Sigma}}}U^{2}\Omega^{-2}\underline\alpha|_{\overline{\Sigma}}=\widetilde{\underline\upalpha}'$.
\end{proposition}
Analogous statements to the above apply to past development from $\overline{\Sigma}$ with $U,\Omega^2$ switching places with $V,\Omega^{-2}$ respectively.\\
\indent In developing backwards scattering we will use the following well-posedness statement for the past development of a mixed initial-characteristic value problem:
\begin{proposition}\label{WP+2backwards}
Let $u_+<\infty, v_+<v_*<\infty$. Let $\widetilde{\Sigma}$ be a spacelike hypersurface connecting $\mathscr{H}^+$ at $v_*$ to $\mathscr{I}^+$ at $u_+$ and let $\underline{\mathscr{C}}=\underline{\mathscr{C}}_{v_*}\cap J^-(\widetilde{\Sigma})\cap J^+(\overline\Sigma)$. Prescribe a pair of symmetric traceless $S^2_{u,v}$ 2-tensor fields:
\begin{itemize}
\item $\alpha_{{\mathscr{H}^+}}$ on ${\mathscr{H}^+}\cap\{v\leq v_+\}$ vanishing in a neighborhood of $\mathscr{H}^+\cap\{v=v_+\}$, such that $V^{-2}\alpha_{{\mathscr{H}^+}}$ extends smoothly to $\mathcal{B}$,
\item $\alpha_{0,in}$ on $\underline{\mathscr{C}}$ vanishing in a neighborhood of $\underline{\mathscr{C}}\cap\widetilde{\Sigma}$.
\end{itemize}
Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor $\alpha$ on $D^-\left(\overline{\mathscr{H}^+}\cup\widetilde{\Sigma}\cup\underline{\mathscr{C}}\right)\cap J^+(\overline{\Sigma})$ such that $V^{-2}\Omega^2\alpha|_{\overline{\mathscr{H}^+}}=V^{-2}\alpha_{{\mathscr{H}^+}}$, $\alpha|_{\underline{\mathscr{C}}}=\alpha_{0,in}$ and $\left(\Omega^2\alpha|_{\widetilde{\Sigma}},\slashed{\nabla}_{n_{\widetilde{\Sigma}}}\Omega^2\alpha|_{\widetilde{\Sigma}}\right)=(0,0)$.
\end{proposition}
\begin{proposition}\label{WP-2backwards}
Let $u_+<\infty, v_+<v_*<\infty$. Let $\widetilde{\Sigma}$ be a spacelike hypersurface connecting $\mathscr{H}^+$ at $v_+$ to $\mathscr{I}^+$ at $u_+$ and let $\underline{\mathscr{C}}=\underline{\mathscr{C}}_{v_*}\cap J^+(\widetilde{\Sigma})\cap\{t\geq0\}$. Prescribe a pair of symmetric traceless $S^2_{u,v}$ 2-tensor fields:
\begin{itemize}
\item $\underline\alpha_{{\mathscr{H}^+}}$ on ${\mathscr{H}^+}\cap\{v<v_+\}$ vanishing in a neighborhood of $\mathscr{H}^+\cap\{v=v_+\}$, such that $V^{2}\underline\alpha_{{\mathscr{H}^+}}$ extends smoothly to $\mathcal{B}$,
\item $\underline\alpha_{0,in}$ on $\underline{\mathscr{C}}$ vanishing in a neighborhood of $\underline{\mathscr{C}}\cap\mathscr{H}^+$.
\end{itemize}
Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor $\underline\alpha$ on $D^-\left(\overline{\mathscr{H}^+}\cup\widetilde{\Sigma}\cup\underline{\mathscr{C}}\right)\cap J^+(\overline{\Sigma})$ such that $V^{2}\Omega^{-2}\underline\alpha|_{\overline{\mathscr{H}^+}}=V^{2}\underline\alpha_{{\mathscr{H}^+}}$, $\underline\alpha|_{\underline{\mathscr{C}}}=\underline\alpha_{0,in}$ and $\left(\Omega^{-2}\underline\alpha|_{\widetilde{\Sigma}},\slashed{\nabla}_{n_{\widetilde{\Sigma}}}\Omega^{-2}\underline\alpha|_{\widetilde{\Sigma}}\right)=(0,0)$.
\end{proposition}
\begin{center}
\begin{tikzpicture}[scale=0.7]
\node (I) at ( 0,0) {};
\path
(I) +(90:4) coordinate (Itop) coordinate[label=90:$i^+$]
+(180:4) coordinate (Ileft) coordinate[label=180:$\mathcal{B}$]
+(0:4) coordinate (Iright) coordinate[label=0:$i^0$]
;
\draw (Ileft) -- node[yshift=1mm,above]{$v_+$} (Itop) ;
\draw[dash dot dot] (Iright) -- node[yshift=1mm,above]{$u_+$}(Itop) ;
\draw [line width=0.3mm]($(Ileft)$)--($(Ileft)+(45:3.5cm)$);
\draw [line width=0.3mm]($(Iright)+(180:0.5cm)$)--node[below,xshift=-0.2cm,yshift=0.15cm]{$\underline{\mathscr{C}}\;$}($(Iright)+(135:3.2cm)+(180:0.8cm)$);
\draw ($(Ileft)+(45:3.5cm)$) to[out=-25, in=205, edge node={node [below] {$\widetilde{\Sigma}$}}] ($(Iright)+(135:3.5cm)$);
\draw ($(Ileft)$) to[out=0, in=180, edge node={node [below] {$\overline{\Sigma}$}}] ($(Iright)$);
\filldraw[white] (Itop) circle (3pt);
\draw[black] (Itop) circle (3pt);
\filldraw[white] (Iright) circle (3pt);
\draw[black] (Iright) circle (3pt);
\filldraw[black] (Ileft) circle (3pt);
\draw[black] (Ileft) circle (3pt);
\end{tikzpicture}
\end{center}
We will also need
\begin{proposition}\label{backwards wellposedness +2}
Let $\tilde{\alpha}_{\mathscr{H}^+}$ be a smooth symmetric traceless $S^2_{\infty,v}$ 2-tensor on $\overline{\mathscr{H}^+}\cap J^-(\Sigma^*)$ and let $(\widetilde{\upalpha}_{\Sigma^*},\widetilde{\upalpha}_{\Sigma^*}')$ be a pair of smooth symmetric traceless $S^2_{u,v}$ 2-tensors on $\Sigma^*$. Then there exists a unique solution $\widetilde{\alpha}$ to \bref{T+2B} in $J^+(\overline{\Sigma})\cap\{t^*\leq 0\}$ such that $\widetilde{\alpha}|_{\overline{\mathscr{H}^+}}=\widetilde{\alpha}_{{\mathscr{H}^+}}$, $(\widetilde{\alpha}|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\widetilde{\alpha}|_{\Sigma^*})=(\widetilde{\upalpha}_{\Sigma^*},\widetilde{\upalpha}_{\Sigma^*}')$.
\end{proposition}
\begin{proposition}\label{backwards wellposedness -2}
An analogous statement to \Cref{backwards wellposedness +2} holds for \cref{T-2B}.
\end{proposition}
Analogous statements apply for the ``finite'' backwards scattering problem from the past of $\overline{\Sigma}$, with $U$ replacing $V$ and $\Omega^2$ switching places with $\Omega^{-2}$.
\begin{remark}[\textbf{Time inversion}]\label{time inversion}
Under the transformation $t\longrightarrow-t$ we have $u\longrightarrow -v$ and $v\longrightarrow -u$, and thus $\alpha(u,v,\theta^A)\longrightarrow\alpha(-v,-u,\theta^A)=:\raisebox{\depth}{\scalebox{1}[-1]{$\alpha$}}(u,v,\theta^A)$ and $\underline\alpha(u,v,\theta^A)\longrightarrow\underline\alpha(-v,-u,\theta^A)=:\underline{\raisebox{\depth}{\scalebox{1}[-1]{$\alpha$}}}(u,v,\theta^A)$.\\
\indent It is clear that $\raisebox{\depth}{\scalebox{1}[-1]{$\alpha$}}(u,v,\theta^A)$ satisfies the $-2$ Teukolsky equation, i.e.~the equation satisfied by $\underline\alpha$. Similarly, $\underline{\raisebox{\depth}{\scalebox{1}[-1]{$\alpha$}}}(u,v,\theta^A)$ satisfies the $+2$ Teukolsky equation, i.e.~the equation satisfied by $\alpha$. This observation means that the asymptotics of $\alpha$ towards the future are identical to those of $\underline\alpha$ towards the past, i.e.~determining the asymptotics of both $\underline\alpha$ and $\alpha$ towards the future is enough to determine the asymptotics of either $\alpha$ or $\underline\alpha$ in both the past and future directions. We will use this fact to obtain bijective scattering maps from studying the forward evolution of the fields $\alpha,\underline\alpha$. In particular, this prescription is sufficient to obtain well-posedness statements for the equations (\ref{T-2}) and (\ref{T+2}) for past development.
\end{remark}
\subsection{Derivation of the Teukolsky--Starobinsky identities}\label{derivation of the Teukolsky--Starobinsky identities}
We now return to the full system \bref{start of full system}--\bref{Bianchi 0*} to derive the Teukolsky--Starobinsky identities \bref{eq:227intro1}, \bref{eq:228intro1}.\\
\indent Let $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$ belong to a solution of the linearised Einstein equations. \Cref{Bianchi +2} implies:
\begin{align}
\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}=-2r\slashed{\mathcal{D}}^*_2r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}+6M\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}.
\end{align}
Using \bref{Bianchi +1a} and \bref{D3Chihat} we obtain
\begin{equation}
\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha=-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\left(-r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}\rho,r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}\sigma\right)+6M(r\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}-r\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\hat\chi}}).
\end{equation}
We now apply $\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3$ to both sides and use equations \bref{Bianchi 0}, \bref{Bianchi 0*}, \bref{D3Chihat} and the second equation of \bref{D4Chihat} to deduce
\begin{align}
\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^3r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha=-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\left(\frac{r^4\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}}{\Omega}\right)+6M\left[r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}^*_1\left(\frac{r^2}{\Omega^2}\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{f}},0\right)+ r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}-(3\Omega^2-1)\frac{r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\hat\chi}}}{\Omega}-2r\slashed{\mathcal{D}}_2^*r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\eta\right].
\end{align}
Now we apply $\Omega\slashed{\nabla}_3$ once again and use \bref{D3TrChiBar}, the second equation of \bref{D4Chihat} and the second equation of \bref{D4etabar}:
\begin{align}
\begin{split}
\Omega\slashed{\nabla}_3 \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^3r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha&=-2r^3\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1(-r\slashed{\mathcal{D}}_2r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha})+6M\Bigg[r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\left(-4r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\omega},0\right)-(3\Omega^2-1)r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}\\&\;\;+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha+6M\frac{r^2}{\Omega^2} \frac{r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\hat\chi}}}{\Omega}-(3\Omega^2-1)(-r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha})-2r\slashed{\mathcal{D}}^*_2(2r\slashed{\nabla}r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\omega}-r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta})\Bigg]
\\&=2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}+6M\frac{r^2}{\Omega^2}\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}.
\end{split}
\end{align}
Finally, we have
\begin{align}\label{eq:TS1}
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3 \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^3r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha=2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}+6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}.
\end{align}
\indent An entirely analogous procedure starting from the equation for $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ in \bref{Bianchi +2} leads to
\begin{align}\label{eq:TS2}
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4 \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^3r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}=2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}-6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}.
\end{align}
Equation \bref{eq:TS2} is the constraint \bref{eq:228intro1}.
\subsection{Physical-space Chandrasekhar transformations and the Regge--Wheeler equation}\label{Chandra}
The Regge--Wheeler equation for a symmetric traceless $S^2_{u,v}$ 2-tensor $\Psi$ is given by
\begin{align}\label{RW}
\Omega\slashed{\nabla}_4\Omega\slashed{\nabla}_3\Psi-\Omega^2\slashed{\Delta}\Psi+\frac{\Omega^2}{r^2}(3\Omega^2+1)\Psi=0.
\end{align}
\indent Suppose the field $\alpha$ satisfies the +2 Teukolsky equation. Define the following hierarchy of fields
\begin{align}\label{hier+}
\begin{split}
&r^3\Omega \psi:=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3 r\Omega^2\alpha,\\
&\Psi:=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3 r^3\Omega \psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2 r\Omega^2\alpha.
\end{split}
\end{align}
We have the following commutation relation:
\begin{align}\label{commutation relation}
\begin{split}
\Bigg[&-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4-(k+xk')r\Omega\slashed\nabla_3+a\Omega^2+bx+c\Bigg]\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3
\\&=\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\left[-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4-\left(k+2+x(k'+1)\right)r\Omega\slashed\nabla_3+(a+2k+2k')\Omega^2+bx+c-k-2k'\right]\\&+2M(a+2k+2k'),
\end{split}
\end{align}
where $a,b,c,k,k'$ are integers. We commute the operator $\left(\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\right)^2$ past the Regge--Wheeler operator:
\begin{align*}
\begin{split}
\left[-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4+r^2\slashed\Delta-3\Omega^2-1\right]\left(\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\right)^2=\Bigg\{\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3&\Bigg[-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4+r^2\slashed\Delta-(2+x)r\Omega\slashed\nabla_3\\
&-3\Omega^2-1\Bigg]-6M\Bigg\}\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3
\end{split}
\end{align*}
\begin{align}
\begin{split}
&=\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Bigg\{\left[-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4+r^2\slashed\Delta-(2+x)r\Omega\slashed\nabla_3-3\Omega^2-1\right]\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3-6M\Bigg\}
\\&=\left(\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\right)^2\Bigg\{\left[-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4+r^2\slashed\Delta-2(2+x)r\Omega\slashed\nabla_3+3\Omega^2-5\right]-6M+6M\Bigg\}.\label{commutator}
\end{split}
\end{align}
This shows that if $\alpha$ satisfies the +2 Teukolsky equation then $\Psi$ satisfies the Regge--Wheeler equation (\ref{RW}).\\ \indent Analogously, with the following hierarchy of fields
\begin{align}\label{hier-}
\begin{split}
&r^3\Omega \underline\psi:=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 r\Omega^2\underline\alpha,\\
&\underline\Psi:=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 r^3\Omega \underline\psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2 r\Omega^2\underline\alpha,
\end{split}
\end{align}
we have
\begin{align}\label{commutation relation 2}
\begin{split}
\Bigg[&-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4+(l+xl')r\Omega\slashed\nabla_4+a\Omega^2+bx+c\Bigg]\frac{r^2}{\Omega^2}\Omega\slashed\nabla_4
\\&=\frac{r^2}{\Omega^2}\Omega\slashed\nabla_4\left[-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4+\left(l+2+x(l'+1)\right)r\Omega\slashed\nabla_4+(a+2l+2l')\Omega^2+bx+c-l-2l'\right]\\&+6M(a+2l+2l'),
\end{split}
\end{align}
where $a,b,c,l,l'$ are integers. Thus, if $\underline\alpha$ satisfies the $-2$ Teukolsky equation then $\underline\Psi$ also satisfies the Regge--Wheeler equation.\\
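The radial bookkeeping in the commutation relation \bref{commutation relation} can be checked symbolically. The following sketch (using sympy) verifies the relation restricted to $t$-independent scalar functions with $b=0$, as in both applications above; the identifications $\Omega\slashed{\nabla}_3=-\Omega^2\partial_r$, $\Omega\slashed{\nabla}_4=\Omega^2\partial_r$ on such functions, and the form $x=-\frac{2M}{r-2M}$ read off from the computations above, are assumptions of this check rather than definitions taken from the text.

```python
import sympy as sp

# Consistency check of the commutation relation, restricted to
# t-independent scalar functions of r and to b = 0.
r, M = sp.symbols('r M', positive=True)
k, kp, a, c = sp.symbols('k kp a c')   # kp plays the role of k'
f = sp.Function('f')(r)

Om2 = 1 - 2*M/r            # Omega^2 on Schwarzschild
x = -2*M/(r - 2*M)         # assumed form of x, read off from the usage above

# On t-independent scalars: Omega nabla_3 = -Omega^2 d/dr, Omega nabla_4 = +Omega^2 d/dr.
D3 = lambda g: -Om2*sp.diff(g, r)
D4 = lambda g: Om2*sp.diff(g, r)

def P(g, k, kp, a, c):
    """The operator -(r^2/Omega^2) D3 D4 - (k + x k') r D3 + a Omega^2 + c."""
    return -r**2/Om2*D3(D4(g)) - (k + x*kp)*r*D3(g) + (a*Om2 + c)*g

Dg = r**2/Om2*D3(f)        # the operator (r^2/Omega^2) Omega nabla_3 applied to f
lhs = P(Dg, k, kp, a, c)
rhs = r**2/Om2*D3(P(f, k + 2, kp + 1, a + 2*k + 2*kp, c - k - 2*kp)) \
      + 2*M*(a + 2*k + 2*kp)*f

print(sp.simplify(sp.expand(lhs - rhs)))   # expect 0
```

In particular, the parameter shifts $k\to k+2$, $k'\to k'+1$, $a\to a+2k+2k'$, $c\to c-k-2k'$ and the leftover term $2M(a+2k+2k')$ all cancel identically in this restricted setting.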
\indent We state a standard well-posedness result for (\ref{RW}):
\begin{proposition}\label{RWwpCauchy}
For any pair $(\uppsi,\uppsi')$ of smooth symmetric traceless $S^2_r$ 2-tensor fields on $\Sigma^*$, there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor field $\Psi$ which solves \cref{RW} in $ J^+(\Sigma^*)$ such that $\Psi|_{\Sigma^*}=\uppsi$ and $\slashed{\nabla}_{n_{\Sigma^*}} \Psi|_{\Sigma^*}=\uppsi'$. The same applies when data are posed on $\Sigma$ or $\overline{\Sigma}$.
\end{proposition}
In contrast to the Teukolsky equations \bref{T+2}, \bref{T-2}, the Regge--Wheeler equation \bref{RW} does not suffer from additional regularity issues near $\mathcal{B}$, as can be seen by rewriting \cref{RW} in Kruskal coordinates:
\begin{align}
\slashed{\nabla}_U\slashed{\nabla}_V\Psi-\mathring{\slashed{\Delta}}\Psi+\frac{3\Omega^2+1}{r^2}\Psi=0.
\end{align}
If $\Psi$ arises via \bref{hier+} from a field $\alpha$ satisfying \bref{T+2}, then in terms of the rescaled field $\widetilde{\alpha}$ it is given by
\begin{align}
\Psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\alpha=\left(2Mr^2f(r)\slashed{\nabla}_U\right)^2r\tilde{\alpha}.
\end{align}
\begin{proposition}\label{RWwpSigmabar}
\Cref{RWwpCauchy} is valid replacing $\Sigma^*$ with $\overline{\Sigma}$ everywhere.
\end{proposition}
For backwards scattering we will need the following well-posedness statement:
\begin{proposition}\label{RWwpBackwards}
Let $u_+<\infty, v_+<v_*<\infty$. Let $\widetilde{\Sigma}$ be a spacelike hypersurface connecting $\mathscr{H}^+$ at $v=v_+$ to $\mathscr{I}^+$ at $u=u_+$ and let $\underline{\mathscr{C}}=\underline{\mathscr{C}}_{v_*}\cap J^+(\widetilde{\Sigma})\cap\{t\geq0\}$. Prescribe a pair of smooth symmetric traceless $S^2_{u,v}$ 2-tensor fields:
\begin{itemize}
\item $\Psi_{{\mathscr{H}^+}}$ on ${\overline{\mathscr{H}^+}}\cap\{v<v_+\}$ vanishing in a neighborhood of $\widetilde{\Sigma}$,
\item $\Psi_{0,in}$ on $\underline{\mathscr{C}}$ vanishing in a neighborhood of $\widetilde{\Sigma}$.
\end{itemize}
Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor $\Psi$ on $D^-\left(\overline{\mathscr{H}^+}\cup\widetilde{\Sigma}\cup\underline{\mathscr{C}}\right)\cap J^+(\overline{\Sigma})$ such that $\Psi|_{\overline{\mathscr{H}^+}}=\Psi_{{\mathscr{H}^+}}$, $\Psi|_{\underline{\mathscr{C}}}=\Psi_{0,in}$ and $\left(\Psi|_{\widetilde{\Sigma}},\slashed{\nabla}_{n_{\widetilde{\Sigma}}}\Psi|_{\widetilde{\Sigma}}\right)=(0,0)$.
\end{proposition}
We will also need
\begin{proposition}\label{RWwp local statement near B}
Let $(\uppsi,\uppsi')$ be smooth symmetric traceless $S^2_{u,v}$ 2-tensor fields on $\Sigma^*$ and let $\uppsi_{\mathscr{H}^+}$ be a smooth symmetric traceless $S^2_{\infty,v}$ 2-tensor field on $\overline{\mathscr{H}^+}\cap\{t^*\leq0\}$. Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor field $\Psi$ on $J^-(\Sigma^*)$ such that $\Psi|_{\overline{\mathscr{H}^+}\cap\{t^*\leq0\}}=\uppsi_{\mathscr{H}^+}$ and $\left(\Psi|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*}\right)=(\uppsi,\uppsi')$.
\end{proposition}
\begin{remark}\label{time inversion of RW}
Unlike the Teukolsky equations \bref{T+2}, \bref{T-2}, the Regge--Wheeler equation \bref{RW} is invariant under time inversion. If $\Psi(u,v)$ satisfies \bref{RW}, then $\raisebox{\depth}{\scalebox{1}[-1]{$\Psi$}}(u,v):=\Psi(-v,-u)$ also satisfies \bref{RW}.
\end{remark}
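The symmetry in \Cref{time inversion of RW} can be verified by a formal computation at the level of scalar components, assuming the convention that $r^*$ is proportional to $v-u$, so that $r$ and $\Omega$ are invariant under $\iota(u,v):=(-v,-u)$. Writing $\Psi\circ\iota$ for the time-inverted field, the chain rule gives
\begin{align*}
\Omega\slashed{\nabla}_3\left(\Psi\circ\iota\right)=-\left(\Omega\slashed{\nabla}_4\Psi\right)\circ\iota,\qquad\qquad \Omega\slashed{\nabla}_4\left(\Psi\circ\iota\right)=-\left(\Omega\slashed{\nabla}_3\Psi\right)\circ\iota,
\end{align*}
so that $\Omega\slashed{\nabla}_4\Omega\slashed{\nabla}_3\left(\Psi\circ\iota\right)=\left(\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4\Psi\right)\circ\iota$, while the remaining coefficients in \bref{RW} depend only on $r$ and are therefore $\iota$-invariant; since the two null derivatives commute at this level, $\Psi\circ\iota$ again solves \bref{RW}.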
\subsection{Further constraints among $\alpha,\Psi$ and $\underline\alpha,\underline\Psi$}\label{constraint derivation}
We can apply the same ideas as in \Cref{Chandra} to transform solutions of the Regge--Wheeler equation into solutions of the +2 Teukolsky equation. Let $\Psi$ satisfy \Cref{RW}; then, using \bref{commutation relation 2}, we can show that
\begin{align}\label{alpha to Psi}
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Psi
\end{align}
satisfies \Cref{T+2}. \\
\indent Now suppose $\alpha$ satisfies \Cref{T+2} and $\Psi$ is the solution to \Cref{RW} related to $\alpha$ by \Cref{hier+}. We can evaluate the expression \bref{alpha to Psi} using \Cref{T+2}: we apply $\Omega\slashed{\nabla}_4$ and substitute using the $+2$ equation only (we drop the superscript $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{}$):
\begin{align*}
\begin{split}
\Omega\slashed{\nabla}_4\Psi&=\Omega\slashed{\nabla}_4\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\alpha
\\&=r(x+2)\Omega\slashed{\nabla}_3\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Omega\slashed{\nabla}_3\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha
\\&=\frac{3\Omega^2-1}{r}\Psi+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha
\\&=\frac{3\Omega^2-1}{r}\Psi+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4\frac{r^4}{\Omega^4}\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha
\\&=\frac{3\Omega^2-1}{r}\Psi+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\left[\left(-\frac{\Omega^4}{r^4}r(x+2)\right)\frac{r^4}{\Omega^4}\Omega\slashed{\nabla}_3r\Omega^2\alpha\right]
\\&\;\;+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4\frac{r^4}{\Omega^4}\Omega\slashed{\nabla}_3r\Omega^2\alpha
\\&=\frac{3\Omega^2-1}{r}\Psi-\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\left[\frac{\Omega^2}{r^2}r(x+2)\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha\right]+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\mathcal{T}^{+2}_N r\Omega^2\alpha
\\&=-2(3\Omega^2-2)\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\mathcal{T}^{+2}_Nr\Omega^2\alpha\\
\end{split}
\end{align*}
\begin{align}\label{eq:d4Psi}
=-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha-6Mr\Omega^2\alpha-(3\Omega^2-1)\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha,
\end{align}
i.e.,
\begin{align}
\begin{split}
\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Psi=-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2 \frac{r^4}{\Omega^4}\Omega\slashed{\nabla}_3r\Omega^2\alpha-(3\Omega^2-1)\frac{r^4}{\Omega^4}\Omega\slashed{\nabla}_3r\Omega^2\alpha-6M\frac{r^2}{\Omega^2} r\Omega^2\alpha.
\end{split}
\end{align}
We act on both sides with $\Omega\slashed{\nabla}_4$ again:
\begin{align}
\begin{split}
\Omega\slashed{\nabla}_4 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 \Psi &=-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2\left[\frac{r^2}{\Omega^2}\left(-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2 r\Omega^2\alpha-\frac{6M}{r}r\Omega^2\alpha\right)\right]\qquad\qquad\qquad\qquad\qquad\\
&\;\;\;\;\;\;-6M\left[\frac{r^2}{\Omega^2}\left(\Omega\slashed{\nabla}_3+\Omega\slashed{\nabla}_4\right)r\Omega^2\alpha+r(x+2)r\Omega^2\alpha\right] \\&\qquad-\left[-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2-\frac{6M}{r}\right]\left[\frac{r^2}{\Omega^2}(3\Omega^2-1)r\Omega^2\alpha\right]
\\&=-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2\left[\frac{r^2}{\Omega^2}\left(-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2 r\Omega^2\alpha-2r\Omega^2\alpha\right)\right]-6M\left[\frac{r^2}{\Omega^2}\left(\Omega\slashed{\nabla}_3+\Omega\slashed{\nabla}_4\right)r\Omega^2\alpha\right].
\end{split}
\end{align}
We finally arrive at
\begin{align}\label{eq:d4d4Psi}
\begin{split}
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 \Psi&=-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2\left[-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2 r\Omega^2\alpha-2r\Omega^2\alpha\right]-6M\left[\left(\Omega\slashed{\nabla}_3+\Omega\slashed{\nabla}_4\right)r\Omega^2\alpha\right].
\end{split}
\end{align}
\indent We record the same for $\underline\Psi$: using only the Teukolsky equation \bref{T-2} we obtain the analogue of \bref{eq:d4Psi}
\begin{align}\label{eq:d3psibar}
\Omega\slashed{\nabla}_3\underline\Psi=-(3\Omega^2-1)\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4r\Omega^2\underline\alpha+6Mr\Omega^2\underline\alpha-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4r\Omega^2\underline\alpha,
\end{align}
and the analogue of \bref{eq:d4d4Psi}
\begin{align}\label{eq:d3d3psibar}
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\underline\Psi=+6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\underline\alpha+\left[-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2-2\right]\left(-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2r\Omega^2\underline\alpha\right).
\end{align}
In the remainder of this paper we focus exclusively on the Teukolsky equations \bref{T+2}, \bref{T-2}, the Teukolsky--Starobinsky identities \bref{eq:227}, \bref{eq:228} and the Regge--Wheeler equation \bref{RW}. In particular, we do not refer to the linearised Einstein equations \bref{start of full system}--\bref{Bianchi 0*} and as such, we drop the superscript $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{{}}$.\\
\indent Throughout this paper we will distinguish between solutions arising from data on $\Sigma^*, \Sigma$ or $\overline{\Sigma}$, and we subsequently construct separate scattering statements for each of these cases, in particular distinguishing between the spaces of scattering states on $\mathscr{H}^+_{\geq0}, \mathscr{H}^\pm$ and $\overline{\mathscr{H}^\pm}$. It will be easiest to work with data on $\Sigma^*$ first; the results for the remaining cases then follow easily.
\section{Main theorems}\label{section 4 main theorems}
We define in this section the spaces of scattering states and provide a precise statement of the results. In what follows, $L^2$ spaces on $\mathscr{I}^\pm, \mathscr{H}^+_{\geq0}, \mathscr{H}^\pm,\overline{\mathscr{H}^\pm}$ are defined with respect to the measures $du\sin\theta d\theta d\phi$, $dv\sin\theta d\theta d\phi$ induced by the Eddington--Finkelstein coordinates.
\begin{notation*}
For a spherically symmetric submanifold $\mathcal{S}$ of $\overline{\mathscr{M}}$, denote by $\Gamma(\mathcal{S})$ the space of smooth symmetric traceless $S^2_{u,v}$ 2-tensor fields on $\mathcal{S}$. The space of such fields that are compactly supported is denoted by $\Gamma_c (\mathcal{S})$. We use the same notation for smooth fields on $\mathscr{I}^\pm, \mathscr{H}^\pm,\overline{\mathscr{H}^\pm}$.
\end{notation*}
\noindent In particular, note that $A\in\Gamma(\Sigma^*)$ means that $A$ is smooth up to and including $\Sigma^*\cap\mathscr{H}^+$.
\subsection{Theorem 1: Scattering for the Regge--Wheeler equation}\label{subsection 4.1 Theorem 1}
\begin{defin}\label{RWscatteringsigma}
Let $(\uppsi,\uppsi')\in\Gamma_c (\Sigma^*)\oplus\Gamma_c(\Sigma^*)$ be smooth, compactly supported Cauchy data on $\Sigma^*$ for \bref{RW}. Define the space $\mathcal{E}^{T}_{\Sigma^*} $ to be the completion of $\Gamma_c (\Sigma^*)\oplus\Gamma_c(\Sigma^*)$ under the norm
\begin{align}\label{this22222}
\|(\uppsi,\uppsi')\|^2_{\mathcal{E}^T_{\Sigma^*}}=\int_{\Sigma^*} dr\sin\theta d\theta d\phi\; (2-\Omega^2)|\slashed{\nabla}_{T^*}\Psi|^2+\Omega^2|\slashed{\nabla}_{R}\Psi|^2+|\slashed{\nabla}\Psi|^2+\frac{3\Omega^2+1}{r^2}|\Psi|^2,
\end{align}
where $\Psi$ is smooth and satisfies $\Psi|_{\Sigma^*}=\uppsi, \slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*}=\uppsi'$. On $\Sigma$, the corresponding norm is
\begin{align}\label{this2222}
\|(\uppsi,\uppsi')\|^2_{\mathcal{E}^T_{\Sigma}}=\int_{\Sigma} dr\sin\theta d\theta d\phi \;|\slashed{\nabla}_{n_{\Sigma}} \Psi|^2+\Omega^2|\slashed{\nabla}_R\Psi|^2+|\slashed{\nabla}\Psi|^2+\frac{3\Omega^2+1}{r^2}|\Psi|^2.
\end{align}
Define the space $\mathcal{E}^T_{\Sigma}$ to be the completion of $\Gamma_{c}(\Sigma)\oplus\Gamma_c(\Sigma)$ under the norm \bref{this2222}. The space $\mathcal{E}^{T}_{\overline{\Sigma}}$ and the norm $\|\;\|_{\mathcal{E}^T_{\overline{\Sigma}}}$ are similarly defined.
\end{defin}
\begin{remark}\label{RW enough to be in space}
The kernel of $\|\;\;\|_{\mathcal{E}^T_{\Sigma^*}}$ has trivial intersection with $\Gamma(\Sigma^*)$. Moreover, for a smooth data set $(\uppsi,\uppsi')$ it suffices that $\|(\uppsi,\uppsi')\|_{\mathcal{E}^T_{\Sigma^*}}<\infty$ to conclude that $(\uppsi,\uppsi')\in\mathcal{E}^T_{\Sigma^*}$. Thus $\|\;\|_{\mathcal{E}^T_{\Sigma^*}}, \|\;\|_{\mathcal{E}^T_{\Sigma}}$ and $\|\;\|_{\mathcal{E}^T_{\overline\Sigma}}$ define normed spaces that can be extended to Hilbert spaces.
\end{remark}
\begin{defin}\label{RW def of rad at H}
Define the space $\mathcal{E}_{\mathscr{H}^+_{\geq0}}^{T}$ to be the completion of $\Gamma_c (\mathscr{H}^+_{\geq 0})$ under the norm
\begin{align}\label{RW def rad flux at H}
||\Psi||_{\mathcal{E}_{\mathscr{H}^+_{\geq 0}}^{T}}^2=\int_{\mathscr{H}^+_{\geq 0}}|\partial_v\Psi|^2\sin\theta d\theta d\phi dv.
\end{align}
The spaces $\mathcal{E}^T_{\mathscr{H}^+}$, $\mathcal{E}^T_{\overline{\mathscr{H}^+}}$ are analogously defined.
\end{defin}
\begin{remark}\label{Subspace of L2}
\begin{enumerate}
\item The energy $\|\;\|_{\mathcal{E}_{\mathscr{H}^+_{\geq0}}^{T}}$ indeed defines a norm on $\Gamma_c(\mathscr{H}^+_{\geq0})$, which thus extends to a Hilbert space $\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}$ when completed under $\|\;\|_{\mathcal{E}_{\mathscr{H}^+_{\geq0}}^{T}}$.
\item The space $\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}$ can be realised as the set of $\Psi_{\mathscr{H}^+}\in L^2_{loc}(\mathscr{H}^+_{\geq0})$ such that
\begin{itemize}
\item $\Omega\slashed{\nabla}_4\Psi_{\mathscr{H}^+}\in L^2(\mathscr{H}^+_{\geq0})$,
\item $\lim_{v\longrightarrow\infty} \|\Psi_{\mathscr{H}^+}\|_{L^2(S_{\infty,v}^2)}=0$.
\end{itemize}
Note that Hardy's inequality holds on elements of this space and we have
\begin{align}\label{weighted L2 statement}
\int_{\mathscr{H}^+_{\geq0}} dv \sin\theta d\theta d\phi \frac{|\Psi_{\mathscr{H}^+}|^2}{v^2+1}\lesssim\|\Psi_{\mathscr{H}^+}\|^2_{\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}}<\infty.
\end{align}
\end{enumerate}
\end{remark}
\begin{defin}
Define the space $\mathcal{E}^T_{\mathscr{I}^+}$ to be the completion of $\Gamma_c(\mathscr{I}^+)$ under the norm
\begin{align}
\|\Psi\|_{\mathcal{E}_{\mathscr{I}^+}^{T}}^2=\int_{\mathscr{I}^+}|\partial_u\Psi|^2\sin\theta d\theta d\phi du.
\end{align}
\end{defin}
\begin{defin}
Define the space $\mathcal{E}_{\mathscr{H}^-}^{T}$ to be the completion of $\Gamma_c (\mathscr{H}^-)$ under the norm
\begin{align}
||\Psi||_{\mathcal{E}_{\mathscr{H}^-}^{T}}^2=\int_{\mathscr{H}^-}|\partial_u\Psi|^2\sin\theta d\theta d\phi du.
\end{align}
The space $\mathcal{E}^T_{\overline{\mathscr{H}^-}}$ is similarly defined.
\end{defin}
\begin{defin}
Define the space $\mathcal{E}_{\mathscr{I}^-}^T$ to be the completion of $\Gamma_c(\mathscr{I}^-)$ under the norm
\begin{align}
\|\Psi\|_{\mathcal{E}^T_{\mathscr{I}^-}}^2=\int_{\mathscr{I}^-} |\partial_v \Psi|^2 dv\sin\theta d\theta d\phi.
\end{align}
\end{defin}
\begin{remark} Similar statements to \Cref{Subspace of L2} apply to the norms $\|\;\;\|_{\mathcal{E}^T_{\mathscr{H}^\pm}}, \|\;\;\|_{\mathcal{E}^T_{\overline{\mathscr{H}^\pm}}}, \|\;\;\|_{\mathcal{E}^T_{\mathscr{I}^\pm}}$; they are positive-definite on smooth, compactly supported data on the respective regions of $\overline{\mathscr{M}}$, thus they define normed spaces which extend to Hilbert spaces $\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}, \mathcal{E}^T_{\mathscr{H}^\pm}, \mathcal{E}^T_{\overline{\mathscr{H}^\pm}}, \mathcal{E}^T_{\mathscr{I}^\pm}$ upon completion. Elements of these spaces can be identified with tensor fields in $L^2_{loc}$ of the respective regions, for which a statement similar to \bref{weighted L2 statement} applies.
\end{remark}
\begin{thm}\label{forwardRW}
Let $(\uppsi,\uppsi')\in\Gamma_c(\Sigma^*)\times\Gamma_c(\Sigma^*)$. Then the corresponding unique solution $\Psi$ to \bref{RW} given by \Cref{RWwpCauchy} on $J^+(\Sigma^*)$ induces smooth radiation fields $(\bm{\uppsi}_{\mathscr{H}^+},\bm{\uppsi}_{\mathscr{I}^+})\in \Gamma(\mathscr{H}^+_{\geq0})\oplus\Gamma(\mathscr{I}^+)$ as in definitions \ref{RW future rad field scri} and \ref{RWonH},
with $\bm{\uppsi}_{\mathscr{I}^+}, \bm{\uppsi}_{\mathscr{H}^+}$ satisfying
\begin{align}
\left|\left|(\uppsi,\uppsi')\right|\right|_{\mathcal{E}^T_{\Sigma^*}}^2=\left|\left|\bm{\uppsi}_{\mathscr{I}^+}\right|\right|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\left|\left|\bm{\uppsi}_{\mathscr{H}^+}\right|\right|_{\mathcal{E}^T_{\mathscr{H}^+}}^2.
\end{align}
This extends to a map
\begin{align}
\mathscr{F^+}: \mathcal{E}^T_{\Sigma^*}\longrightarrow \mathcal{E}^T_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^T_{\mathscr{I}^+}.
\end{align}
Analogously, forward evolution from smooth compactly supported data on $\Sigma$ or $\overline{\Sigma}$ extends to the maps,
\begin{align}
\mathscr{F}^+:\mathcal{E}^T_\Sigma \longrightarrow \mathcal{E}^T_{\mathscr{{H}^+}} \oplus \mathcal{E}^T_{\mathscr{I}^+},\\
\mathscr{F}^+:\mathcal{E}^T_{\overline{\Sigma}} \longrightarrow \mathcal{E}^T_{\overline{\mathscr{{H}^+}}} \oplus \mathcal{E}^T_{\mathscr{I}^+}.
\end{align}
\end{thm}
\begin{thm}\label{backwardRW}
Let $\bm{\uppsi}_{\mathscr{I}^+}\in \Gamma_c (\mathscr{I}^+), \bm{\uppsi}_{\mathscr{H}^+} \in \Gamma_c (\mathscr{H}^+_{\geq0})$. Then there exists a unique solution $\Psi$ to \cref{RW} in $J^+(\Sigma^*)$ which is smooth, such that
\begin{align}
\lim_{v\longrightarrow\infty} \Psi(u,v,\theta^A)=\bm{\uppsi}_{\mathscr{I}^+},\qquad\qquad \Psi\big|_{\mathscr{H}^+_{\geq0}}=\bm{\uppsi}_{\mathscr{H}^+},
\end{align}
with $\left|\left|(\Psi|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*})\right|\right|_{\mathcal{E}^T_{\Sigma^*}}^2=\left|\left|\bm{\uppsi}_{\mathscr{I}^+}\right|\right|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\left|\left|\bm{\uppsi}_{\mathscr{H}^+}\right|\right|_{\mathcal{E}^T_{\mathscr{H}^+}}^2$.
This extends to a map
\begin{align}
\mathscr{B}^-: \mathcal{E}^T_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^T_{\mathscr{I}^+}\longrightarrow \mathcal{E}^T_{{\Sigma^*}} ,
\end{align}
which inverts the map $\mathscr{F}^+$ of \Cref{forwardRW}. Thus $\mathscr{F}^+, \mathscr{B}^-$ are unitary Hilbert space isomorphisms and
\begin{align}
\mathscr{B}^-\circ\mathscr{F}^+=\mathscr{F}^+\circ\mathscr{B}^-=Id.
\end{align}
Similar statements apply to produce maps
\begin{align}
\mathscr{B}^-: \mathcal{E}^T_{\mathscr{{H}^+}} \oplus \mathcal{E}^T_{\mathscr{I}^+}\longrightarrow \mathcal{E}^T_\Sigma,\\
\mathscr{B}^-: \mathcal{E}^T_{\overline{\mathscr{{H}^+}}} \oplus \mathcal{E}^T_{\mathscr{I}^+} \longrightarrow \mathcal{E}^T_{\overline{\Sigma}}.
\end{align}
\end{thm}
\begin{thm}\label{RW isomorphisms}
Analogously to \cref{forwardRW,backwardRW}, there exist bounded maps
\begin{align}
\mathscr{F}^-:\mathcal{E}^T_\Sigma\longrightarrow \mathcal{E}^T_{\mathscr{H}^-}\oplus \mathcal{E}^T_{\mathscr{I}^-},\qquad\qquad\qquad \mathscr{B}^+:\mathcal{E}^T_{\mathscr{H}^-}\oplus \mathcal{E}^T_{\mathscr{I}^-}\longrightarrow \mathcal{E}^T_\Sigma,
\end{align}
\begin{align}
\mathscr{F}^-:\mathcal{E}^T_{\overline{\Sigma}}\longrightarrow \mathcal{E}^T_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^T_{\mathscr{I}^-},\qquad\qquad\qquad \mathscr{B}^+:\mathcal{E}^T_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^T_{\mathscr{I}^-}\longrightarrow \mathcal{E}^T_{\overline\Sigma},
\end{align}
such that $\mathscr{F}^-\circ\mathscr{B}^+=\mathscr{B}^+\circ\mathscr{F}^-=Id$ on the respective domains. The maps
\begin{align}
\mathscr{S}=\mathscr{F}^+\circ\mathscr{B}^+:\mathcal{E}^T_{\mathscr{H}^-}\oplus \mathcal{E}^T_{\mathscr{I}^-}\longrightarrow \mathcal{E}^T_{\mathscr{H}^+}\oplus\mathcal{E}^T_{\mathscr{I}^+},\\
\mathscr{S}=\mathscr{F}^+\circ\mathscr{B}^+:\mathcal{E}^T_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^T_{\mathscr{I}^-}\longrightarrow \mathcal{E}^T_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^T_{\mathscr{I}^+}
\end{align}
constitute unitary Hilbert space isomorphisms with inverses
\begin{align}
\mathscr{S}^{-1}=\mathscr{F}^-\circ\mathscr{B}^-:\mathcal{E}^T_{\mathscr{H}^+}\oplus \mathcal{E}^T_{\mathscr{I}^+}\longrightarrow \mathcal{E}^T_{\mathscr{H}^-}\oplus\mathcal{E}^T_{\mathscr{I}^-},\\
\mathscr{S}^{-1}=\mathscr{F}^-\circ\mathscr{B}^-:\mathcal{E}^T_{\overline{\mathscr{H}^+}}\oplus \mathcal{E}^T_{\mathscr{I}^+}\longrightarrow \mathcal{E}^T_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^T_{\mathscr{I}^-}
\end{align}
on the respective domains.
\end{thm}
\begin{remark}\label{RW distinct spaces}
We emphasise that the spaces $\mathcal{E}^T_{\Sigma}$ and $\mathcal{E}^T_{\overline{\Sigma}}$ are different and $\mathcal{E}^T_{\Sigma}\subsetneq\mathcal{E}^T_{\overline{\Sigma}}$. Similarly, $\mathcal{E}^T_{\mathscr{H}^+}\subsetneq\mathcal{E}^T_{\overline{\mathscr{H}^+}}$. Our prescription in distinguishing between these spaces is consistent in the sense that elements of $\mathcal{E}^T_{\Sigma}$ are mapped into $\mathcal{E}^T_{\mathscr{H}^+}$ and vice versa. Our point of view is that the spaces $\mathcal{E}^T_{\overline{\Sigma}}, \mathcal{E}^T_{\overline{\mathscr{H}}^\pm}$ are the natural spaces to consider, since in these spaces scattering data are not restricted to vanish at the bifurcation sphere $\mathcal{B}$. It is however useful to have the statements involving $\mathcal{E}^T_{{\Sigma}}, \mathcal{E}^T_{{\mathscr{H}}^\pm}$. In particular, solutions arising from past scattering data identically vanishing on $\mathscr{H}^-$ will lie in these spaces.
\end{remark}
\subsection{Theorem 2: Scattering for the Teukolsky equations of spins $\pm2$}\label{subsection 4.2 scattering for the teukolsky equations of spins +,-2}
\subsubsection{Scattering for the +2 Teukolsky equation}\label{subsubsection 4.2.1 scattering for the +2 equation}
\begin{defin}\label{+2 norm on Sigma}
Let $(\upalpha,\upalpha')\in\Gamma_c(\Sigma^*)\oplus\Gamma_c(\Sigma^*)$ be Cauchy data for \bref{T+2} on $\Sigma^*$ giving rise to a solution $\alpha$.
Define the space $\mathcal{E}^{T,+2}_{\Sigma^*}$ to be the completion of $\Gamma_c(\Sigma^*)\oplus\Gamma_c(\Sigma^*)$ under the norm
\begin{align}
||(\upalpha,\upalpha')||_{\mathcal{E}^{T,+2}_{\Sigma^*}}^2=||(\Psi,\slashed{\nabla}_{n_{\Sigma^*}}\Psi)||_{\mathcal{E}^{T}_{\Sigma^*}}^2,
\end{align}
where $\Psi$ is the weighted second derivative $\Psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\alpha$ of $\alpha$. The spaces $\mathcal{E}^{T,+2}_{\Sigma}$, $\mathcal{E}^{T,+2}_{\overline{\Sigma}}$ are similarly defined.
\end{defin}
We immediately note the following:
\begin{proposition}\label{+2 norm on Sigma is coercive}
$\|\;\|_{\mathcal{E}^{T,+2}_{\Sigma}}$ indeed defines a norm on $\Gamma_c(\Sigma)\times\Gamma_c(\Sigma)$. Similar statements hold for $\|\;\|_{\mathcal{E}^{T,+2}_{\Sigma^*}}, \|\;\|_{\mathcal{E}^{T,+2}_{\overline{\Sigma}}}$.
\end{proposition}
\begin{proof}
It suffices to check that $\|(\upalpha,\upalpha')\|_{\mathcal{E}^{T,+2}_{\Sigma}}=0$ for a smooth, compactly supported pair $(\upalpha,\upalpha')$ implies that $(\upalpha,\upalpha')=(0,0)$. Let $\alpha$, $\Psi$ be as in \Cref{+2 norm on Sigma}. It is clear that $\Psi=0$, and (\ref{eq:d4d4Psi}) implies:
\begin{align}\label{424242}
\slashed{\nabla}_T\alpha=\frac{1}{12M}\mathcal{A}_2(\mathcal{A}_2-2)\alpha.
\end{align}
\Cref{eq:d4Psi} implies that on $\Sigma$
\begin{align}
\left(\mathcal{A}_2-2+\frac{6M}{r}\right)\left(\frac{1}{12M}\mathcal{A}_2(\mathcal{A}_2-2)-\slashed{\nabla}_{R^*}\right)r\Omega^2\alpha-6M\frac{\Omega^2}{r^2}r\Omega^2\alpha=0.
\end{align}
Take $F=\left(\mathcal{A}_2-2+\frac{6M}{r}\right)r\Omega^2\alpha$, then the above says $\slashed{\nabla}_{R^*}F=\frac{1}{12M} \mathcal{A}_2\left(\mathcal{A}_2-2\right)F-12M\frac{\Omega^2}{r^2}r\Omega^2\alpha$. We integrate over the region $R_0<r<R$ on $\Sigma$:
\begin{align}
\begin{split}
\|F\|^2_{S^2,r=R}=\|F\|^2_{S^2,r=R_0}+\int_{\Sigma\cap\{R_0<r<R\}} \frac{1}{6M}&\left\{|\mathcal{A}_2F|^2+2|\mathring{\slashed{\nabla}}F|^2+4|F|^2\right\}\\&+24M\frac{\Omega^2}{r^2}\left\{|\mathring{\slashed{\nabla}}r\Omega^2\alpha|^2+\left(4-\frac{6M}{r}\right)|r\Omega^2\alpha|^2\right\}.
\end{split}
\end{align}
This implies $\|F\|^2_{S^2,r=R}\geq \|F\|^2_{S^2,r=R_0}$ (notice that the integral on the right hand side remains positive by Poincar\'e's inequality). If the data are compactly supported then $F$ must vanish everywhere on $\Sigma$, and the vanishing of $F$ implies the vanishing of $\Omega^2\alpha$ for smooth $\alpha$ since the operator $\mathcal{A}_2-2+\frac{6M}{r}$ is uniformly elliptic on the set of symmetric, traceless 2-tensor fields on $S^2$. This in turn implies the vanishing of $\slashed{\nabla}_T\Omega^2\alpha$ by \bref{424242}. We can repeat this argument for data on $\Sigma^*, \overline{\Sigma}$.
\end{proof}
\begin{defin}
Define the space of future scattering states $\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq 0}}$ on $\mathscr{H}^+$ to be the completion of $\Gamma_c (\mathscr{H}^+_{\geq 0})$ under the norm
\begin{align}\label{+2 scattering norm on H+}
\begin{split}
&\|A\|^2_{\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}}=\left\|\mathcal{A}_2(\mathcal{A}_2-2)\left(\int_v^\infty d\bar{v}\; e^{\frac{1}{2M}(v-\bar{v})}A\right)\right\|^2_{L^2(\mathscr{H}^+_{\geq0})}+\left\|6M\partial_v \left(\int_v^\infty d\bar{v}\; e^{\frac{1}{2M}(v-\bar{v})}A\right)\right\|^2_{L^2(\mathscr{H}^+_{\geq0})}\\&+\int_{S^2}\sin\theta d\theta d\phi \left(\left|\mathring{\slashed{\Delta}}\int_{\bar{v}=0}^\infty d\bar{v}\; e^{-\frac{\bar{v}}{2M}}A\right|^2+6\left|\mathring{\slashed{\nabla}}\int_{\bar{v}=0}^\infty d\bar{v}\; e^{-\frac{\bar{v}}{2M}}A\right|^2+8\Big|\int_{\bar{v}=0}^\infty d\bar{v}\; e^{-\frac{\bar{v}}{2M}}A\Big|^2\right).
\end{split}
\end{align}
Define the space $\mathcal{E}^{T,+2}_{\mathscr{H}^+}$ to be the completion of $\Gamma_c(\mathscr{H}^+)$ under the norm
\begin{align}
\|A\|^2_{\mathcal{E}^{T,+2}_{\mathscr{H}^+}}=\left\|\mathcal{A}_2(\mathcal{A}_2-2)\left(\int_v^\infty d\bar{v}\; e^{\frac{1}{2M}(v-\bar{v})}A\right)\right\|^2_{L^2(\mathscr{H}^+)}+\left\|6M\partial_v\left(\int_v^\infty d\bar{v}\; e^{\frac{1}{2M}(v-\bar{v})}A\right)\right\|^2_{L^2(\mathscr{H}^+)}.
\end{align}
Define the space $\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}$ to be the completion of the space consisting of symmetric traceless $S^2_{\infty,v}$ 2-tensor fields $A$ on $\overline{\mathscr{H}^+}$ such that $V^{-2}A\in \Gamma_c \left(\overline{\mathscr{H}^+}\right)$, under the same norm above evaluated over $\overline{\mathscr{H}^+}$.
\end{defin}
\begin{remark}\label{+2 norm is norm on H+}
Let $A\in\Gamma_c(\mathscr{H}_{\geq0}^+)$. If $\|A\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}}=0$ then $\int_v^\infty d\bar{v}\; e^{\frac{1}{2M}(v-\bar{v})}A=0$ for all $v$, which implies that $A$ must vanish if it is smooth. Thus $\|\;\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}}$ defines a norm on $\Gamma_c(\mathscr{H}^+_{\geq0})$, which then extends to the Hilbert space $\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}$. The same applies to $\mathcal{E}^{T,+2}_{\mathscr{H}^+}$, $\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}$.
\end{remark}
\begin{defin}
Define the space of future scattering states $\mathcal{E}^{T,+2}_{\mathscr{I}^+}$ on $\mathscr{I}^+$ to be the completion of $\Gamma_c (\mathscr{I}^+)$ under the norm
\begin{align}
\|A\|_{\mathcal{E}_{\mathscr{I}^+}^{T,+2}}=\left|\left|\partial_u^3A\right|\right|_{L^2(\mathscr{I}^+)}.
\end{align}
\end{defin}
\begin{remark}\label{+2 norm is norm on scri+}
The energy $\|\;\|_{\mathcal{E}_{\mathscr{I}^+}^{T,+2}}$ indeed defines a norm on $\Gamma_c(\mathscr{I}^+)$, which thus extends to a Hilbert space $\mathcal{E}^{T,+2}_{\mathscr{I}^+}$ when completed under $\|\;\|_{\mathcal{E}_{\mathscr{I}^+}^{T,+2}}$.
We can identify $\mathcal{E}^{T,+2}_{\mathscr{I}^+}$ with the subspace of $L^2_{loc}(\mathscr{I}^+)$ whose elements $A$ satisfy
\begin{itemize}
\item $\partial_u^3 A\in L^2(\mathscr{I}^+)$,
\item $\lim_{u\longrightarrow\infty} \|A\|_{L^2(S^2)}=0$.
\end{itemize}
On this subset Hardy's inequality applies, and we have
\begin{align}
\int_{\mathscr{I}^+} du \sin\theta d\theta d\phi \frac{|A|^2}{u^6+1}\lesssim\|A\|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}} <\infty.
\end{align}
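Schematically, such an estimate follows by applying the one-dimensional Hardy inequality three times, using the vanishing of the $L^2(S^2)$-limits above (constants are not tracked):
\begin{align}
\int_{\mathscr{I}^+} du\sin\theta d\theta d\phi\, \frac{|A|^2}{u^6+1}\lesssim\int_{\mathscr{I}^+} du\sin\theta d\theta d\phi\, \frac{|\partial_u A|^2}{u^4+1}\lesssim\int_{\mathscr{I}^+} du\sin\theta d\theta d\phi\, \frac{|\partial_u^2 A|^2}{u^2+1}\lesssim\left\|\partial_u^3A\right\|^2_{L^2(\mathscr{I}^+)}.
\end{align}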
\end{remark}
\begin{defin}\label{+2 backwards scattering H}
Define the space of past scattering states $\mathcal{E}^{T,+2}_{\mathscr{H}^-}$ on $\mathscr{H}^-$ to be the completion of $\Gamma_c (\mathscr{H}^-)$ under the norm
\begin{align}\label{this 2424}
\|A\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^-}}=\left|\left|2(2M\partial_u)A-3(2M\partial_u)^2A+(2M\partial_u)^3A\right|\right|_{L^2(\mathscr{H}^-)}.
\end{align}
Define the space $\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}$ to be the closure of the space consisting of symmetric traceless $S^2_{u,-\infty}$ 2-tensor fields $A$ on $\overline{\mathscr{H}^-}$ such that $U^2A\in \Gamma_c \left(\overline{\mathscr{H}^-}\right)$, under the same norm above evaluated over $\overline{\mathscr{H}^-}$.
\end{defin}
\begin{remark}\label{+2 norm is norm on H-}
As mentioned in \Cref{introduction regular frame norm} of Section 1.3.2 of the introduction, the energy defined in \bref{this 2424} can be written using the Kruskal frame as
\begin{align}
\|A\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^-}}=\|U^{1/2}\partial_U^3U^2A\|_{L^2_UL^2(S^2)}.
\end{align}
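This can be checked directly. Assuming the convention $U=-e^{-\frac{u}{2M}}$ on $\mathscr{H}^-$, so that $D:=U\partial_U=-2M\partial_u$, the identity $U^3\partial_U^3=D(D-1)(D-2)$ gives
\begin{align}
\partial_U^3\left(U^2A\right)=U^{-1}(D+2)(D+1)D\,A=-U^{-1}\left[(2M\partial_u)^3-3(2M\partial_u)^2+2(2M\partial_u)\right]A,
\end{align}
and since $dU=-\frac{U}{2M}du$, the integrand $\left|U^{1/2}\partial_U^3U^2A\right|^2dU$ agrees with that of \bref{this 2424} up to an overall constant fixed by the choice of measures.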
This defines a norm on $\Gamma_c(\mathscr{H}^-)$, which then extends to the Hilbert space $\mathcal{E}^{T,+2}_{\mathscr{H}^-}$. The elements of $\mathcal{E}^{T,+2}_{\mathscr{H}^-}$ can be identified with the subspace of $L^2_{loc}(\mathscr{H}^-)$ whose elements $A$ satisfy
\begin{itemize}
\item $\partial_uA$, $\partial_u^2 A$, $\partial_u^3 A \in L^2(\mathscr{H}^-)$,
\item $\lim_{u\longrightarrow -\infty} \|A\|_{L^2(S^2)}=0$.
\end{itemize}
Hardy's inequality holds on this space, and we have
\begin{align}
\int_{\mathscr{H}^-} du \sin\theta d\theta d\phi \frac{|A|^2}{u^2+1}\lesssim\|A\|^2_{\mathcal{E}^{T,+2}_{\mathscr{H}^-}}<\infty.
\end{align}
\end{remark}
\begin{defin}\label{+2 backwards scattering scri}
Define the space of past scattering states $\mathcal{E}^{T,+2}_{\mathscr{I}^-}$ on $\mathscr{I}^-$ to be the completion of the space
\begin{align}
A\in\Gamma_c(\mathscr{I}^-): \int_{-\infty}^\infty dv\;A=0
\end{align}
under the norm
\begin{align}
\|A\|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^-}}=&\int_{\mathscr{I}^-} dv\sin\theta d\theta d\phi\left[ 6M|A|^2+\left|\mathcal{A}_2(\mathcal{A}_2-2)\int_{v}^\infty d\bar{v}\; A\right| ^2\right].
\end{align}
\end{defin}
\begin{remark}\label{+2 norm is norm on scri-}
Let $A\in\Gamma_c(\mathscr{I}^-)$. If $\|A\|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^-}}=0$ then $A=0$. Thus $\|\;\|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^-}}$ defines a norm on $\Gamma_c(\mathscr{I}^-)$ which then extends to the Hilbert space $\mathcal{E}^{T,+2}_{\mathscr{I}^-}$.
\end{remark}
\begin{thm}\label{+2 future forward scattering}
Forward evolution under the $+2$ Teukolsky equation \bref{T+2} from smooth, compactly supported data $(\upalpha,\upalpha')$ on $\Sigma^*$ gives rise to smooth radiation fields $(\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+})\in \mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^+}$ where
\begin{enumerate}
\item $\upalpha_{\mathscr{H}^+}=2M\Omega^{2}{\alpha}\big|_{\mathscr{H}^+} \in \Gamma(\mathscr{H}^+)$,
\item $\upalpha_{\mathscr{I}^+}=\lim_{v\longrightarrow \infty} r^5\alpha(u,v,\theta^A)$, with $\upalpha_{\mathscr{I}^+}\in \Gamma(\mathscr{I}^+)$,
\end{enumerate}
with $\upalpha_{\mathscr{I}^+}, \upalpha_{\mathscr{H}^+}$ satisfying
\begin{align}
\left|\left|(\upalpha,\upalpha')\right|\right|_{\mathcal{E}^{T,+2}_{\Sigma^*}}^2=\left|\left|\upalpha_{\mathscr{I}^+}\right|\right|_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}^2+\left|\left|\upalpha_{\mathscr{H}^+}\right|\right|_{\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}}^2.
\end{align}
This extends to a unitary map
\begin{align}
{}^{(+2)}\mathscr{F^+}: \mathcal{E}^{T,+2}_{\Sigma^*}\longrightarrow \mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^+}.
\end{align}
The same conclusions apply when replacing $\Sigma^*$ with $\Sigma$ and $\mathscr{H}^+_{\geq0}$ with $\mathscr{H}^+$, or when replacing with $\overline\Sigma$ and $\overline{\mathscr{H}^+}$. In the latter case, $(\upalpha,\upalpha')$ must be consistent with the well-posedness statement \Cref{WP+2Sigma*}, and consequently we obtain that $V^{-2}\upalpha_{{\mathscr{H}^+}}\in \Gamma(\overline{\mathscr{H}^+})$.
\end{thm}
\begin{thm}\label{+2 future backward scattering}
Let $\upalpha_{\mathscr{I}^+}\in \Gamma_c (\mathscr{I}^+), \upalpha_{\mathscr{H}^+} \in \Gamma_c (\mathscr{H}^+_{\geq0})$. Then there exists a unique smooth solution $\alpha$ to \bref{T+2} in $J^+(\Sigma^*)$ such that
\begin{align}
\lim_{v\longrightarrow\infty} r^5\alpha(u,v,\theta^A)=\upalpha_{\mathscr{I}^+},\qquad\qquad 2M\Omega^{2}\alpha\big|_{\mathscr{H}^+_{\geq0}}=\upalpha_{\mathscr{H}^+},
\end{align}
with $(\Omega^2\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Omega^2\alpha|_{\Sigma^*})\in \mathcal{E}^{T,+2}_{\Sigma^*} $ and $
\left\|(\Omega^2\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Omega^2\alpha|_{\Sigma^*})\right\|_{\mathcal{E}^{T,+2}_{\Sigma^*}}^2=\left\|\upalpha_{\mathscr{I}^+}\right\|_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}^2+\left\|\upalpha_{\mathscr{H}^+}\right\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}}^2$.
This extends to a unitary map
\begin{align}
{}^{(+2)}\mathscr{B}^-: \mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,+2}_{{\Sigma^*}},
\end{align}
which inverts the map ${}^{(+2)}\mathscr{F}^+$ of \Cref{+2 future forward scattering}
\begin{align}
{}^{(+2)}\mathscr{B}^-\circ{}^{(+2)}\mathscr{F}^+={}^{(+2)}\mathscr{F}^+\circ{}^{(+2)}\mathscr{B}^-=Id.
\end{align}
The same conclusions apply when replacing $\Sigma^*$ with $\Sigma$ and $\mathscr{H}^+_{\geq0}$ with $\mathscr{H}^+$, or when replacing with $\overline\Sigma$ and $\overline{\mathscr{H}^+}$. In the latter case, we require that $V^{-2}\upalpha_{{\mathscr{H}^+}}\in \Gamma(\overline{\mathscr{H}^+})$, and with that $(\Omega^2\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Omega^2\alpha|_{\Sigma^*})$ is consistent with \Cref{WP+2Sigma*}.
\end{thm}
\begin{thm}\label{+2 past forward scattering}
Evolution from $(\upalpha,\upalpha')\in\Gamma_c(\Sigma)\times\Gamma_c(\Sigma)$ to $J^-(\Sigma)$ gives rise to radiation fields on $\mathscr{H}^-,\mathscr{I}^-$ analogously to \Cref{+2 future forward scattering}, where the radiation fields are defined by
\begin{align}
\lim_{u\longrightarrow-\infty} r\alpha(u,v,\theta^A)=\upalpha_{\mathscr{I}^-},\qquad\qquad 2M\Omega^{-2}\alpha\big|_{\mathscr{H}^-}=\upalpha_{\mathscr{H}^-}.
\end{align}
This extends to a unitary map
\begin{align}
{}^{(+2)}\mathscr{F^-}: \mathcal{E}^{T,+2}_{\Sigma}\longrightarrow \mathcal{E}^{T,+2}_{\mathscr{H}^-}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^-},
\end{align}
with inverse ${}^{(+2)}\mathscr{B}^+:\mathcal{E}^{T,+2}_{\mathscr{H}^-}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T,+2}_{\Sigma}$. The same conclusions apply when replacing $\Sigma$ with $\overline{\Sigma}$ and $\mathscr{H}^-$ with $\overline{\mathscr{H}^-}$. In this case, we require that $(U^{2}\Omega^{-2}\upalpha,U^{2}\Omega^{-2}\upalpha')$ are smooth up to and including $\mathcal{B}$, and consequently we obtain that $U^{2}\upalpha_{{\mathscr{H}^-}}\in \Gamma(\overline{\mathscr{H}^-})$.
\end{thm}
\begin{thm}\label{scatteringthm+2}
The maps
\begin{align}
{}^{(+2)}\mathscr{S}^+&={}^{(+2)}\mathscr{F}^+\circ{}^{(+2)}\mathscr{B}^+:\mathcal{E}^{T,+2}_{\mathscr{H}^-}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T,+2}_{\mathscr{H}^+}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^+},\\
{}^{(+2)}\mathscr{S}^+&={}^{(+2)}\mathscr{F}^+\circ{}^{(+2)}\mathscr{B}^+:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^+}
\end{align}
constitute unitary Hilbert space isomorphisms with inverses
\begin{align}
{}^{(+2)}\mathscr{S}^-={}^{(+2)}\mathscr{F}^-\circ{}^{(+2)}\mathscr{B}^-:\mathcal{E}^{T,+2}_{\mathscr{H}^+}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,+2}_{\mathscr{H}^-}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^-}\\
{}^{(+2)}\mathscr{S}^-={}^{(+2)}\mathscr{F}^-\circ{}^{(+2)}\mathscr{B}^-:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^-}
\end{align}
on the respective domains.
\end{thm}
\subsubsection{Scattering for the $-2$ Teukolsky equation}\label{subsubsection 4.2.2 Scattering for the -2 equation}
\begin{defin}\label{-2 norm on Sigma*}
Let $(\underline\upalpha,\underline\upalpha')\in\Gamma_c(\Sigma^*)\oplus\Gamma_c(\Sigma^*)$ be Cauchy data for \bref{T-2} on $\Sigma^*$ giving rise to a solution $\underline\alpha$.
Define the space $\mathcal{E}^{T,-2}_{\Sigma^*}$ to be the completion of $\Gamma_c(\Sigma^*)\oplus\Gamma_c(\Sigma^*)$ under the norm
\begin{align}\label{equivnorm-2}
||(\underline\upalpha,\underline\upalpha')||_{\mathcal{E}^{T,-2}_{\Sigma^*}}^2=||(\underline\Psi,\slashed{\nabla}_{n_{\Sigma^*}}\underline\Psi)||_{\mathcal{E}^{T}_{\Sigma^*}}^2,
\end{align}
where $\underline\Psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2r\Omega^2\underline\alpha$ is the weighted second derivative of $\underline\alpha$. The spaces $\mathcal{E}^{T,-2}_{\Sigma}$, $\mathcal{E}^{T,-2}_{\overline{\Sigma}}$ are defined similarly.
\end{defin}
\begin{proposition}\label{-2 norm on Sigma is coercive}
$\|\;\|_{\mathcal{E}^{T,-2}_{\Sigma}}$ indeed defines a norm on $\Gamma_c(\Sigma)\times\Gamma_c(\Sigma)$.
\end{proposition}
\begin{proof}
It suffices to check that $\|(\underline\upalpha,\underline\upalpha')\|_{\mathcal{E}^{T,-2}_{\Sigma}}=0$ implies $(\underline\upalpha,\underline\upalpha')=(0,0)$. Let $\underline\alpha$ and $\underline\Psi$ be as in \Cref{-2 norm on Sigma*}. It is clear that $\|(\underline\upalpha,\underline\upalpha')\|_{\mathcal{E}^{T,-2}_{\Sigma}}=0$ implies $\underline\Psi=0$. \Cref{eq:d3d3psibar} implies that
\begin{align}
\slashed{\nabla}_T r\Omega^2\underline\alpha=-\frac{1}{12M}\mathcal{A}_2(\mathcal{A}_2-2)r\Omega^2\underline\alpha.
\end{align}
\Cref{eq:d3psibar} then gives us
\begin{align}\label{this2323}
\left[\mathcal{A}_2+2-\frac{6M}{r}\right]\left(\frac{1}{12M}\mathcal{A}_2(\mathcal{A}_2-2)-\slashed{\nabla}_{R^*}\right)r\Omega^2\underline\alpha+6M\frac{\Omega^2}{r^2}r\Omega^2\underline\alpha=0.
\end{align}
Let $\underline{F}=\left(\mathring{\slashed{\Delta}}-\frac{6M}{r}\right)r\Omega^2\underline\alpha$, then \bref{this2323} above implies
that $\slashed{\nabla}_{R^*}\underline{F}=\frac{1}{12M}\mathcal{A}_2(\mathcal{A}_2-2)\underline{F}$. The result follows similarly to \Cref{+2 norm on Sigma is coercive}.
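Schematically, and assuming (as elsewhere in this section) that the angular operator $\mathcal{A}_2(\mathcal{A}_2-2)$ commutes with $\slashed{\nabla}_{R^*}$, projecting onto one of its eigenspaces with eigenvalue $\lambda$ reduces the relation above to
\begin{align}
\slashed{\nabla}_{R^*}\underline{F}_\lambda=\frac{\lambda}{12M}\underline{F}_\lambda,
\end{align}
whose solutions are pure exponentials (or constants) in $R^*$; these are incompatible with the decay of $\underline{F}_\lambda$ unless $\underline{F}_\lambda\equiv0$.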
\end{proof}
\begin{defin}
Define the space of future scattering states $\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq 0}}$ on $\mathscr{H}^+_{\geq0}$ to be the completion of $\Gamma_c(\mathscr{H}^+_{\geq0})$ under the norm
\begin{align}
\|\underline{A}\|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}=(2M)^2\left|\left|2(2M\partial_v)\underline{A}+3(2M\partial_v)^2\underline{A}+(2M\partial_v)^3\underline{A}\right|\right|_{L^2(\mathscr{H}^+_{\geq0})}.
\end{align}
The space $\mathcal{E}^{T,-2}_{\mathscr{H}^+}$ is defined by the same norm taken over $\mathscr{H}^+$. Define $\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}$ to be the closure of the space consisting of symmetric traceless $S^2_{\infty,v}$ 2-tensor fields $\underline{A}$ on $\overline{\mathscr{H}^+}$ such that $V^{2}\underline{A}\in \Gamma_c \left(\overline{\mathscr{H}^+}\right)$, under the same norm above evaluated over $\overline{\mathscr{H}^+}$.
\end{defin}
\begin{remark}\label{-2 norm is norm on H+}
As with \Cref{+2 norm is norm on H-} on $\|\;\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^-}}$, the energy $\|\;\|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}$ indeed defines a norm on $\Gamma_c(\mathscr{H}^+_{\geq0})$, which then extends to the Hilbert space $\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}$. It is possible to represent the elements of $\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}$ as the subspace of $L^2_{loc}(\mathscr{H}^+_{\geq0})$ whose elements $\underline{A}$ satisfy
\begin{itemize}
\item $\partial_v \underline{A}$, $\partial_v^2 \underline{A}$, $\partial_v^3 \underline{A} \in L^2(\mathscr{H}^+_{\geq0})$,
\item $\lim_{v\longrightarrow \infty} \|\underline{A}\|_{L^2(S^2)}=0$.
\end{itemize}
Hardy's inequality holds on this space, and we have
\begin{align}
\int_{\mathscr{H}^+_{\geq0}} dv \sin\theta d\theta d\phi \frac{|\underline{A}|^2}{v^2+1}\lesssim\|\underline{A}\|^2_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}<\infty.
\end{align}
Similar statements apply to $\mathcal{E}^{T,-2}_{{\mathscr{H}^+}}$, $\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}$.
\end{remark}
\begin{defin}
Define the space of future scattering states $\mathcal{E}^{T,-2}_{\mathscr{I}^+}$ on $\mathscr{I}^+$ to be the completion of the space
\begin{align}
\underline{A}\in \Gamma_c (\mathscr{I}^+): \int_{-\infty}^{\infty} du\;\underline{A}=0
\end{align}
under the norm
\begin{align}\label{-2 tricky norm at scri}
\|\underline{A}\|^2_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}=&\int_{\mathscr{I}^+} du\sin\theta d\theta d\phi\left[ (6M)^2|\underline{A}|^2+\left|\mathcal{A}_2(\mathcal{A}_2-2)\int_{u}^\infty d\bar{u}\; \underline{A}\right| ^2\right].
\end{align}
\end{defin}
\begin{remark}\label{-2 norm on scri+}
As with $\|\;\|_{\mathcal{E}^{T,+2}_{\mathscr{I}^-}}$ and \Cref{+2 norm is norm on scri-}, the energy $\|\;\|_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}$ indeed defines a norm on $\Gamma_c({\mathscr{I}^+})$, which then extends to the Hilbert space $\mathcal{E}^{T,-2}_{{\mathscr{I}^+}}$.
\end{remark}
\begin{defin}
Define the space $\mathcal{E}^{T,-2}_{\mathscr{H}^-}$ to be the completion of $\Gamma_c(\mathscr{H}^-)$ under the norm
\begin{align}
\|\underline{A}\|^2_{\mathcal{E}^{T,-2}_{\mathscr{H}^-}}=\left\|\mathcal{A}_2(\mathcal{A}_2-2)\left(\int^u_{-\infty} d\bar{u}\; e^{\frac{1}{2M}(u-\bar{u})}\underline{A}\right)\right\|^2_{L^2(\mathscr{H}^-)}+\left\|6M\partial_u\left( \int^u_{-\infty} d\bar{u}\; e^{\frac{1}{2M}(u-\bar{u})}\underline{A}\right)\right\|^2_{L^2(\mathscr{H}^-)}.
\end{align}
Define the space $\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}$ to be the completion of the space consisting of symmetric traceless $S^2_{u,-\infty}$ 2-tensor fields $\underline{A}$ on $\overline{\mathscr{H}^-}$ such that $U^{-2}\underline{A}\in \Gamma_c \left(\overline{\mathscr{H}^-}\right)$, under the same norm above evaluated over $\overline{\mathscr{H}^-}$.
\end{defin}
\begin{remark}\label{-2 norm is norm on H-}
As with $\|\;\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^+}}$ and \Cref{+2 norm is norm on H+}, the energy $\|\;\|_{\mathcal{E}^{T,-2}_{\mathscr{H}^-}}$ indeed defines a norm on $\Gamma_c(\mathscr{H}^-)$, which then extends to the Hilbert space $\mathcal{E}^{T,-2}_{\mathscr{H}^-}$. The same applies to $\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}$.
\end{remark}
\begin{defin}\label{-2 norm on scri-}
Define the space of past scattering states $\mathcal{E}^{T,-2}_{\mathscr{I}^-}$ on $\mathscr{I}^-$ to be the completion of $\Gamma_c (\mathscr{I}^-)$ under the norm
\begin{align}
\|\underline{A}\|_{\mathcal{E}_{\mathscr{I}^-}^{T,-2}}=\left|\left|\partial_v^3\underline{A}\right|\right|_{L^2(\mathscr{I}^-)}.
\end{align}
\end{defin}
\begin{remark}\label{-2 norm is norm on scri-}
The energy $\|\;\|_{\mathcal{E}_{\mathscr{I}^-}^{T,-2}}$ indeed defines a norm on $\Gamma_c(\mathscr{I}^-)$, which thus extends to a Hilbert space $\mathcal{E}^{T,-2}_{\mathscr{I}^-}$ when completed under $\|\;\|_{\mathcal{E}_{\mathscr{I}^-}^{T,-2}}$.
We can identify $\mathcal{E}^{T,-2}_{\mathscr{I}^-}$ with the subspace of $L^2_{loc}(\mathscr{I}^-)$ whose elements $\underline{A}$ satisfy
\begin{itemize}
\item $\partial_v \underline{A}$, $\partial_v^2 \underline{A}$, $\partial_v^3 \underline{A}\in L^2(\mathscr{I}^-)$,
\item $\lim_{v\longrightarrow-\infty} \|\underline{A}\|_{L^2(S^2)}=0$.
\end{itemize}
On this subset Hardy's inequality applies, and we have
\begin{align}
\int_{\mathscr{I}^-} dv \sin\theta d\theta d\phi \frac{|\underline{A}|^2}{v^6+1}\lesssim\|\underline{A}\|^2_{\mathcal{E}^{T,-2}_{\mathscr{I}^-}} <\infty.
\end{align}
\end{remark}
\begin{thm}\label{-2 future forward scattering}
Forward evolution under the $-2$ Teukolsky equation \bref{T-2} from smooth, compactly supported data $(\underline\upalpha,\underline\upalpha')$ on $\Sigma^*$ gives rise to smooth radiation fields $(\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})\in \mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+}$ where
\begin{enumerate}
\item $\underline\upalpha_{\mathscr{H}^+}=2M\Omega^{-2}{\underline\alpha}\big|_{\mathscr{H}^+} \in \Gamma(\mathscr{H}^+)$,
\item $\underline\upalpha_{\mathscr{I}^+}=\lim_{v\longrightarrow \infty} r\underline\alpha(v,u,\theta^A)$, with $\underline\upalpha_{\mathscr{I}^+}\in \Gamma(\mathscr{I}^+)$,
\end{enumerate}
with $\underline\upalpha_{\mathscr{I}^+}, \underline\upalpha_{\mathscr{H}^+}$ satisfying
\begin{align}
\left|\left|(\underline\upalpha,\underline\upalpha')\right|\right|_{\mathcal{E}^{T,-2}_{\Sigma^*}}^2=\left|\left|\underline\upalpha_{\mathscr{I}^+}\right|\right|_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}^2+\left|\left|\underline\upalpha_{\mathscr{H}^+}\right|\right|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}^2.
\end{align}
This extends to a unitary map
\begin{align}
{}^{(-2)}\mathscr{F^+}: \mathcal{E}^{T,-2}_{\Sigma^*}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^+}.
\end{align}
The same conclusions apply when replacing $\Sigma^*$ with $\Sigma$ and $\mathscr{H}^+_{\geq0}$ with $\mathscr{H}^+$, or when replacing with $\overline\Sigma$ and $\overline{\mathscr{H}^+}$. In the latter case, $(\underline\upalpha,\underline\upalpha')$ must be consistent with the well-posedness statement \Cref{WP-2Sigma*}, and consequently we obtain that $V^{2}\underline\upalpha_{{\mathscr{H}^+}}\in \Gamma(\overline{\mathscr{H}^+})$.
\end{thm}
\begin{thm}\label{-2 future backward scattering}
Let $\underline\upalpha_{\mathscr{I}^+}\in \Gamma_c (\mathscr{I}^+), \underline\upalpha_{\mathscr{H}^+} \in \Gamma_c (\mathscr{H}^+_{\geq0})$ with $\int_{-\infty}^\infty d\bar{u}\; \underline\upalpha_{\mathscr{I}^+}=0$. Then there exists a unique smooth solution $\underline\alpha$ to \bref{T-2} in $J^+(\Sigma^*)$ such that
\begin{align}
\lim_{v\longrightarrow\infty} r\underline\alpha(u,v,\theta^A)=\underline\upalpha_{\mathscr{I}^+},\qquad\qquad 2M\Omega^{-2}\underline\alpha\big|_{\mathscr{H}^+_{\geq0}}=\underline\upalpha_{\mathscr{H}^+},
\end{align}
with $(\underline\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\underline\alpha|_{\Sigma^*})\in \mathcal{E}^{T,-2}_{\Sigma^*} $ and $
\left|\left|(\underline\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\underline\alpha|_{\Sigma^*})\right|\right|_{\mathcal{E}^{T,-2}_{\Sigma^*}}^2=\left|\left|\underline\upalpha_{\mathscr{I}^+}\right|\right|_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}^2+\left|\left|\underline\upalpha_{\mathscr{H}^+}\right|\right|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}^2$.
This extends to a unitary map
\begin{align}
{}^{(-2)}\mathscr{B}^-: \mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,-2}_{{\Sigma^*}},
\end{align}
which inverts the map ${}^{(-2)}\mathscr{F}^+$ of \Cref{-2 future forward scattering}
\begin{align}
{}^{(-2)}\mathscr{B}^-\circ{}^{(-2)}\mathscr{F}^+={}^{(-2)}\mathscr{F}^+\circ{}^{(-2)}\mathscr{B}^-=Id.
\end{align}
The same conclusions apply when replacing $\Sigma^*$ with $\Sigma$ and $\mathscr{H}^+_{\geq0}$ with $\mathscr{H}^+$, or when replacing with $\overline\Sigma$ and $\overline{\mathscr{H}^+}$. In the latter case, we require that $V^2\underline\upalpha_{{\mathscr{H}^+}}\in \Gamma(\overline{\mathscr{H}^+})$, and with that $(\underline\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\underline\alpha|_{\Sigma^*})$ is consistent with \Cref{WP-2Sigma*}.
\end{thm}
\begin{thm}\label{-2 past forward scattering}
Evolution from $(\underline\upalpha,\underline\upalpha')\in\Gamma_c(\Sigma)\times\Gamma_c(\Sigma)$ to $J^-(\Sigma)$ gives rise to radiation fields on $\mathscr{H}^-,\mathscr{I}^-$ analogously to \Cref{-2 future forward scattering}, where the radiation fields are defined by
\begin{align}
\lim_{u\longrightarrow-\infty} r^5\underline\alpha(u,v,\theta^A)=\underline\upalpha_{\mathscr{I}^-},\qquad\qquad 2M\Omega^{2}\underline\alpha\big|_{\mathscr{H}^-}=\underline\upalpha_{\mathscr{H}^-}.
\end{align}
This extends to a unitary map
\begin{align}
{}^{(-2)}\mathscr{F^-}: \mathcal{E}^{T,-2}_{\Sigma}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{H}^-}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^-}
\end{align}
with inverse ${}^{(-2)}\mathscr{B}^+:\mathcal{E}^{T,-2}_{\mathscr{H}^-}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T,-2}_{\Sigma}$. The same conclusions apply when replacing $\Sigma$ with $\overline{\Sigma}$ and $\mathscr{H}^-$ with $\overline{\mathscr{H}^-}$. In this case, we require that $(U^{-2}\Omega^2\underline\alpha,U^{-2}\Omega^2\underline\alpha')$ are smooth up to and including $\mathcal{B}$, and consequently we obtain that $U^{-2}\underline\upalpha_{{\mathscr{H}^-}}\in \Gamma(\overline{\mathscr{H}^-})$.
\end{thm}
\begin{thm}\label{scatteringthm-2}
The maps
\begin{align}
&{}^{(-2)}\mathscr{S}^+={}^{(-2)}\mathscr{F}^+\circ{}^{(-2)}\mathscr{B}^+:\mathcal{E}^{T,-2}_{\mathscr{H}^-}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{H}^+}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+},\\
&{}^{(-2)}\mathscr{S}^+={}^{(-2)}\mathscr{F}^+\circ{}^{(-2)}\mathscr{B}^+:\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+}
\end{align}
constitute unitary Hilbert space isomorphisms with inverses
\begin{align}
{}^{(-2)}\mathscr{S}^-={}^{(-2)}\mathscr{F}^-\circ{}^{(-2)}\mathscr{B}^-:\mathcal{E}^{T,-2}_{\mathscr{H}^+}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{H}^-}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^-}\\
{}^{(-2)}\mathscr{S}^-={}^{(-2)}\mathscr{F}^-\circ{}^{(-2)}\mathscr{B}^-:\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^-}
\end{align}
on the respective domains.
\end{thm}
\begin{remark}
We emphasise that the spaces $\mathcal{E}^{T,\pm2}_{\Sigma}$ and $\mathcal{E}^{T,\pm2}_{\overline{\Sigma}}$ are different and $\mathcal{E}^{T,\pm2}_{\Sigma}\subsetneq\mathcal{E}^{T,\pm2}_{\overline{\Sigma}}$. Similarly, $\mathcal{E}^{T,\pm2}_{\mathscr{H}^+}\subsetneq\mathcal{E}^{T,\pm2}_{\overline{\mathscr{H}^+}}$. Our prescription in distinguishing between these spaces is consistent in the sense that elements of $\mathcal{E}^{T,\pm2}_{\Sigma}$ are mapped into $\mathcal{E}^{T,\pm2}_{\mathscr{H}^+}$ and vice versa. As mentioned for the Regge--Wheeler equation \bref{RW} in \Cref{RW distinct spaces}, our point of view is that the spaces $\mathcal{E}^{T,\pm2}_{\overline{\Sigma}}, \mathcal{E}^{T,\pm2}_{\overline{\mathscr{H}}^\pm}$ are the more natural spaces to consider, but as we make the distinction between these spaces, we additionally face the issue that the inclusion of the bifurcation sphere $\mathcal{B}$ in the domains of the scattering data requires studying both the equations \bref{T+2}, \bref{T-2} and their unknowns in a different frame near $\mathcal{B}$.
\end{remark}
\subsection{Theorem 3: The Teukolsky--Starobinsky correspondence}\label{subsection 4.3 the Teukolsky--Starobinsky identities}
\begin{thm}\label{Theorem 3 detailed statement}
Let $\upalpha_{\mathscr{I}^+}\in\Gamma_c(\mathscr{I}^+)$. There exists a unique $\underline\upalpha_{\mathscr{I}^+}\in\Gamma(\mathscr{I}^+)$ such that $\|\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,+2}_{{\mathscr{I}^+}}}=\|\underline\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,-2}_{{\mathscr{I}^+}}}$ and
\begin{align}\label{constraint null infinity section 4}
\partial_u^4\upalpha_{\mathscr{I}^+}=\Big[2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}+6M\partial_u\Big]\underline\upalpha_{\mathscr{I}^+}.
\end{align}
An analogous statement applies starting from $\underline\upalpha_{\mathscr{I}^+}\in\Gamma_c(\mathscr{I}^+)$ to obtain $\upalpha_{\mathscr{I}^+}\in\Gamma(\mathscr{I}^+)$ with $\|\underline\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,-2}_{{\mathscr{I}^+}}}=\|\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,+2}_{{\mathscr{I}^+}}}$ satisfying \bref{constraint null infinity section 4}.\\
\indent Let $\underline\upalpha_{{\mathscr{H}^+}}$ be such that $V^2\underline\upalpha_{{\mathscr{H}^+}}\in\Gamma_c(\overline{\mathscr{H}^+})$. There exists a unique $\upalpha_{\mathscr{H}^+}\in\Gamma(\overline{\mathscr{H}^+})$ such that $\|\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}}=\|\underline\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}}$ and
\begin{align}\label{constraint horizon section 4}
\partial_V^4 V^2\underline\upalpha_{\mathscr{H}^+}=\Big[2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}-3V\partial_V-6\Big]V^{-2}\upalpha_{\mathscr{H}^+}.
\end{align}
An analogous statement applies starting from $\upalpha_{\mathscr{H}^+}$ such that $V^{-2}\upalpha_{\mathscr{H}^+}\in\Gamma_c(\overline{\mathscr{H}^+})$ to obtain $\underline\upalpha_{\mathscr{H}^+}\in\Gamma(\overline{\mathscr{H}^+})$ with $\|\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}}=\|\underline\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}}$ satisfying \bref{constraint horizon section 4}.\\
\indent The statements above give rise to unitary Hilbert space isomorphisms
\begin{align}
\mathcal{TS}_{\mathscr{I}^+}:\mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow\mathcal{E}^{T,-2}_{\mathscr{I}^+},\qquad\qquad\mathcal{TS}_{\mathscr{H}^+}:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}.
\end{align}
These combine to the unitary Hilbert space isomorphism
\begin{align}
\mathcal{TS}^+=\mathcal{TS}_{\mathscr{H}^+}\oplus\mathcal{TS}_{\mathscr{I}^+}: \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+}.
\end{align}
Let $\alpha$ be a solution to the $+2$ Teukolsky equation \bref{T+2} arising from scattering data $\upalpha_{\mathscr{I}^+}\in\Gamma_c(\mathscr{I}^+)$, and let $\upalpha_{\mathscr{H}^+}$ be such that $V^{-2}\upalpha_{\mathscr{H}^+}\in \Gamma_c(\overline{\mathscr{H}^+})$. Using $\mathcal{TS}_{\mathscr{I}^+}, \mathcal{TS}_{\mathscr{H}^+}$ we can find a unique set of smooth scattering data $\underline\upalpha_{\mathscr{I}^+}, \underline\upalpha_{\mathscr{H}^+}$ on $\mathscr{I}^+, \mathscr{H}^+$ with $V^2\underline\upalpha_{\mathscr{H}^+}$ regular on $\overline{\mathscr{H}^+}$, giving rise to a solution $\underline\alpha$ to the $-2$ Teukolsky equation \bref{T-2} such that the constraints
\begin{align}
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3 \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^3\alpha-2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2{\underline\alpha}-6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2{\underline\alpha}=0,\label{theorem constraint 1}\\
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4 \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^3{\underline\alpha}-2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2\alpha+6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\alpha=0\label{theorem constraint 2}
\end{align}
are satisfied by $\alpha, \underline\alpha$ on $\overline{\mathscr{M}}$. The data satisfy
\begin{align}\label{unitarity}
\|\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}^2=\|\underline\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}^2,\qquad\qquad \|\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}}^2=\|\underline\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}}^2.
\end{align}
\indent Analogously, let $\underline\alpha$ be a solution to the $-2$ Teukolsky equation \bref{T-2} arising from scattering data $\underline\upalpha_{\mathscr{I}^+}\in\Gamma_c(\mathscr{I}^+)$, $\underline\upalpha_{\mathscr{H}^+}$ be such that $V^{2}\underline\upalpha_{\mathscr{H}^+}\in \Gamma_c(\overline{\mathscr{H}^+})$. Then there exist unique smooth scattering data $\upalpha_{\mathscr{I}^+}, \upalpha_{\mathscr{H}^+}$ on $\mathscr{I}^+, \mathscr{H}^+$ with $V^{-2}\upalpha_{\mathscr{H}^+}$ regular on $\overline{\mathscr{H}^+}$, giving rise to a solution $\alpha$ to the +2 Teukolsky equation \bref{T+2} such that $\alpha, \underline\alpha$ satisfy the constraints \bref{theorem constraint 1}, \bref{theorem constraint 2}.\\
\indent An analogous statement applies to scattering from $\mathscr{I}^-, \mathscr{H}^-$ and we have the isomorphism
\begin{align}
\mathcal{TS}^-=\mathcal{TS}_{\mathscr{H}^-}\oplus\mathcal{TS}_{\mathscr{I}^-}: \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^-}.
\end{align}
\end{thm}
\subsection{Corollary 1: A mixed scattering statement for combined ($\alpha,\underline\alpha$)}\label{subsection 4.4 Corollary 1: mixed scattering}
Importantly, we have the following corollary:
\begin{corollary}\label{corollary to be proven}
Let $\upalpha_{\mathscr{I}^-}\in\Gamma_c(\mathscr{I}^-)$ and let $\underline\upalpha_{\mathscr{H}^-}$ be such that $U^{-2}\underline\upalpha_{\mathscr{H}^-}\in\Gamma_c(\overline{\mathscr{H}^-})$. Then there exists a unique smooth pair $(\alpha, \underline\alpha)$ on $\mathscr{M}$, such that $\alpha$ solves \bref{T+2}, $\underline\alpha$ solves \bref{T-2}, $\alpha, \underline\alpha$ satisfy \bref{theorem constraint 1}, \bref{theorem constraint 2}, $\underline\alpha$ realises $\underline\upalpha_{\mathscr{H}^-}$ as its radiation field on $\overline{\mathscr{H}^-}$, and $\alpha$ realises $\upalpha_{\mathscr{I}^-}$ as its radiation field on $\mathscr{I}^-$. Moreover, the radiation fields of $\alpha$ and $\underline\alpha$ on $\overline{\mathscr{H}^+}, \mathscr{I}^+$ are such that
\begin{align}
\|\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}}^2\;+\;\|\underline\upalpha_{\mathscr{I}^+}\|^2_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}=\|\upalpha_{\mathscr{I}^-}\|_{\mathcal{E}^{T,+2}_{{\mathscr{I}^-}}}^2\;+\;\|\underline\upalpha_{{\mathscr{H}^-}}\|^2_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}}.
\end{align}
This extends to a unitary Hilbert-space isomorphism
\begin{align}
\mathscr{S}^{-2,+2}:\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+}.
\end{align}
\end{corollary}
\section{Scattering theory of the Regge--Wheeler equation}\label{section 5 scattering theory for RW}
This section is devoted to proving \Cref{Theorem 1} in the introduction, whose detailed statement is contained in \Cref{forwardRW,,backwardRW,,RW isomorphisms}.\\
\indent We first study in \Cref{subsection 5.2 subsection Radiation fields} the behaviour of future radiation fields belonging to solutions of the Regge--Wheeler equation \bref{RW} that arise from smooth, compactly supported data on $\Sigma^*$, using the estimates gathered in \Cref{subsection 5.1 Basic integrated boundedness and decay estimates}; this justifies the definitions of radiation fields and spaces of scattering states made in \Cref{subsection 4.1 Theorem 1}. We then prove \Cref{forwardRW} (in \Cref{subsection 5.3 the forwards scattering map}) and \Cref{backwardRW,,RW isomorphisms} (in \Cref{subsubsection 5.4 the backwards scattering map}) for the case of data on $\Sigma^*$; most of what follows applies equally to $\Sigma$ and $\overline{\Sigma}$ unless otherwise stated. \Cref{subsection 5.5 auxiliary results} contains additional results on backwards scattering that will become important later on in the study of the Teukolsky--Starobinsky identities in \Cref{section 9 TS correspondence}.
\subsection{Basic integrated boundedness and decay estimates}\label{subsection 5.1 Basic integrated boundedness and decay estimates}
Here we collect basic boundedness and decay results for (\ref{RW}) proven in \cite{DHR16}. In what follows $(\uppsi,\uppsi')$ is a smooth data set for \cref{RW} as in \Cref{RWwpCauchy}.
\noindent $\bullet$ \emph{\textbf{Energy boundedness.}} Let $X=T:=\Omega e_3+\Omega e_4$, multiply (\ref{RW}) by $\slashed{\nabla}_X\Psi$ and integrate by parts over $S^2$ to obtain
\begin{align}\label{T derivative identity}
\Omega\slashed{\nabla}_3\left[|\Omega\slashed{\nabla}_4\Psi|^2+\Omega^2|\slashed\nabla \Psi|^2+V|\Psi|^2\right]+\Omega\slashed{\nabla}_4\left[|\Omega\slashed{\nabla}_3\Psi|^2+\Omega^2|\slashed\nabla \Psi|^2+V|\Psi|^2\right]\stackrel{S^2}{\equiv}0.
\end{align}
For an outgoing null hypersurface $\mathscr{N}$ define
\begin{align}
F_{\mathscr{N}}^T[\Psi]:=\int_{\mathscr{N}}\sin\theta d\theta d\phi dv\left[|\Omega\slashed\nabla_4\Psi|^2+\Omega^2|\slashed\nabla\Psi|^2+V|\Psi|^2\right].
\end{align}
Similarly for an ingoing null hypersurface $\underline{\mathscr{N}}$ we define
\begin{align}
\underline{F}_{\underline{\mathscr{N}}}^T[\Psi]:=\int_{\underline{\mathscr{N}}}\sin\theta d\theta d\phi du\left[|\Omega\slashed\nabla_3\Psi|^2+\Omega^2|\slashed\nabla\Psi|^2+V|\Psi|^2\right].
\end{align}
Denote $F_{u}^T[\Psi](v_0,v)=F_{\mathscr{C}_{u}\cap\{\bar{v}\in[v_0,v]\}}^T[\Psi]$, $\underline{F}_{v}^T[\Psi](u_0,u)=\underline{F}_{\underline{\mathscr{C}}_{v}\cap\{\bar{u}\in[u_0,u]\}}^T[\Psi]$. Integrating \bref{T derivative identity} over the region $\mathscr{D}^{u,v}_{u_0,v_0}$ yields
\begin{align}
F^T_u[\Psi](v_0,v)+\underline{F}^T_v[\Psi](u_0,u)= F^T_{u_0}[\Psi](v_0,v)+\underline{F}^T_{v_0}[\Psi](u_0,u).
\end{align}
Similarly, integrating \bref{T derivative identity} over $J^+(\Sigma^*)\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)$ yields
\begin{align}
F^T_u[\Psi](v_0,v)+\underline{F}^T_v[\Psi](u_0,u)=\mathbb{F}_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}[\Psi],
\end{align}
where $\mathbb{F}_{\Sigma^*}[\Psi]$ is given by
\begin{align}
\mathbb{F}_{\Sigma^*}[\Psi]= \int_{\Sigma^*}dr\sin\theta d\theta d\phi\; (2-\Omega^2)|\slashed{\nabla}_{T^*}\Psi|^2+\Omega^2|\slashed{\nabla}_R\Psi|^2+|\slashed{\nabla}\Psi|^2+(3\Omega^2+1)\frac{|\Psi|^2}{r^2},
\end{align}
and $\mathbb{F}_{\mathcal{U}}$ for a subset $\mathcal{U}\subset\Sigma^*$ is defined analogously.\\
\indent Integrating \bref{T derivative identity} over $J^+(\Sigma)\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)$ instead yields a similar identity:
\begin{align}
F^T_u[\Psi](v_0,v)+\underline{F}^T_v[\Psi](u_0,u)=\mathbb{F}_{\Sigma\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}[\Psi],
\end{align}
with
\begin{align}
\mathbb{F}^T_{\Sigma}[\Psi]=\int_{\Sigma} \sin\theta dr d\theta d\phi \;\frac{1}{\Omega^2}|\slashed{\nabla}_T\Psi|^2+\Omega^2|\slashed{\nabla}_R \Psi|^2+|\slashed{\nabla}\Psi|^2+(3\Omega^2+1)\frac{|\Psi|^2}{r^2},
\end{align}
and similarly for $\overline{\Sigma}$.\\
\indent All of the energies defined so far degenerate at $\overline{\mathscr{H}^+}$. For energies defined over hypersurfaces that do not intersect the bifurcation sphere $\mathcal{B}$, we can compensate for this degeneracy by modifying $X$ with a multiple of $\frac{1}{\Omega^2}T$ and repeating the procedure above as in \cite{DR08}, making crucial use of the positivity of the surface gravity of $\mathscr{H}^+$. We then obtain the so-called `redshift' estimates:
\begin{defin}
Define the following nondegenerate energies
\begin{align}
F_{\mathscr{N}}[\Psi]=\int_{\mathscr{N}}\sin\theta d\theta d\phi dv \left[|\Omega\slashed\nabla_4\Psi|^2+|\slashed\nabla\Psi|^2+\frac{1}{r^2}|\Psi|^2\right],
\end{align}
\begin{align}
\underline{F}_{\underline{\mathscr{N}}}[\Psi]=\int_{\underline{\mathscr{N}}}\sin\theta d\theta d\phi du\Omega^2 \left[|\Omega^{-1}\slashed\nabla_3\Psi|^2+|\slashed\nabla\Psi|^2+\frac{1}{r^2}|\Psi|^2\right],
\end{align}
\begin{align}
\mathbb{F}_{\Sigma^*}[\Psi]=\int_{\Sigma^*} \sin\theta dr d\theta d\phi\left[|\slashed{\nabla}_{T^*}\Psi|^2+|\slashed{\nabla}_R\Psi|^2+\frac{1}{r^2}|\Psi|^2+|\slashed{\nabla}\Psi|^2\right],
\end{align}
and their higher order versions
\begin{align}
F_{\mathscr{N}}^{n,T,\slashed{\nabla}}[\Psi]=\sum_{i+|\alpha|\leq n}F_{\mathscr{N}}[T^i(r\slashed{\nabla})^\alpha \Psi](v_0,v),
\end{align}
\begin{align}
\underline{F}_{\underline{\mathscr{N}}}^{n,T,\slashed{\nabla}}[\Psi]=\sum_{i+|\alpha|\leq n} \underline{F}_{\underline{\mathscr{N}}}[T^i(r\slashed{\nabla})^\alpha \Psi](u_0,u),
\end{align}
\begin{align}
F^{n}_{\mathscr{N}}[\Psi]=\sum_{i+j+|\alpha|\leq n} F_{\mathscr{N}}[(\Omega^{-1}\slashed{\nabla}_3 )^i(r\Omega\slashed{\nabla}_4)^j(r\slashed{\nabla})^\alpha\Psi](v_0,v),
\end{align}
\begin{align}
\underline{F}^{n}_{\underline{\mathscr{N}}}[\Psi]=\sum_{i+j+|\alpha|\leq n} \underline{F}_{\underline{\mathscr{N}}}[(\Omega^{-1}\slashed{\nabla}_3 )^i(r\Omega\slashed{\nabla}_4)^j(r\slashed{\nabla})^\alpha\Psi](u_0,u),
\end{align}
\begin{align}
\mathbb{F}^{n,T,\slashed{\nabla}}_{\Sigma^*}[\Psi]=\sum_{i+|\alpha|\leq n} \mathbb{F}_{\Sigma^*}[T^i(r\slashed{\nabla})^\alpha \Psi],
\end{align}
\begin{align}
\mathbb{F}^{n}_{\Sigma^*}[\Psi]=\sum_{i_1+i_2+|\alpha|\leq n}\mathbb{F}_{\Sigma^*}\left[\slashed{\nabla}_T^{i_1}\left(\Omega^{-1}\slashed{\nabla}_3\right)^{i_2}(r\slashed{\nabla})^\alpha\Psi\right].
\end{align}
\end{defin}
\begin{proposition}\label{RWredshift}
Let $\Psi$ be a solution to (\ref{RW}) arising from data as in \Cref{RWwpCauchy}. Then we have
\begin{align}
F_u[\Psi](v_0,\infty)+\underline{F}_v[\Psi](u_0,\infty)\lesssim \mathbb{F}_{\Sigma^*}[\Psi].
\end{align}
Similar statements hold for $F^{n,T,\slashed{\nabla}}_u[\Psi](v_0,v), \underline{F}^{n,T,\slashed{\nabla}}_v[\Psi](u_0,u), F^n_u[\Psi](v_0,v)$ and $\underline{F}^n_v[\Psi](u_0,u)$.
\end{proposition}
\noindent $\bullet$ \emph{\textbf{Integrated local energy decay}} We have the following Morawetz-type integrated decay estimate:
\begin{proposition}\label{RWILED}
Let $\Psi$ be a solution to (\ref{RW}) arising from data as in \Cref{RWwpCauchy}, let $\mathscr{D}_{\Sigma^*}^{u,v}= J^+(\Sigma^*)\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)$ and define
\begin{align}\label{RWILEDestimate}
\begin{split}
\mathbb{I}_{deg}^{u,v}[\Psi]= \int_{\mathscr{D}_{\Sigma^*}^{u,v}}d\bar{u}d\bar{v} \sin\theta &d\theta d\phi \Omega^2 \Bigg[\frac{1}{r^2}|\slashed{\nabla}_{R^*}\Psi|^2+\frac{1}{r^3}|\Psi|^2\\
&+\frac{1}{r}\left(1-\frac{3M}{r}\right)^2\left(|\slashed{\nabla}\Psi|^2+\frac{1}{r^2}|\Omega\slashed{\nabla}_4\Psi|^2+\frac{\Omega^2}{r^2}|\Omega^{-1}\slashed{\nabla}_3\Psi|^2\right)\Bigg].
\end{split}
\end{align}
Then we have
\begin{align*}
\begin{split}
\mathbb{I}_{deg}^{u,v}[\Psi]\lesssim \mathbb{F}_{\Sigma^*}[\Psi].
\end{split}
\end{align*}
A similar statement holds for
\begin{align}
\mathbb{I}_{deg}^{u,v,n,T,\slashed{\nabla}}[\Psi]=\sum_{i+|\alpha|\leq n} \mathbb{I}_{deg}^{u,v}[T^i(r\slashed{\nabla})^\alpha \Psi]
\end{align}
and
\begin{align}
\mathbb{I}_{deg}^{u,v,n}[\Psi]=\sum_{i+j+|\alpha|\leq n}\mathbb{I}_{deg}^{u,v}[(\Omega^{-1}\slashed{\nabla}_3)^i(r\Omega\slashed{\nabla}_4)^j(r\slashed{\nabla})^\alpha\Psi].
\end{align}
\end{proposition}
\noindent $\bullet$ \emph{\textbf{$r^p$-hierarchy of estimates near $\mathscr{I}^+$}} If we multiply (\ref{RW}) by $r^p\Omega^{-2k}\Omega\slashed{\nabla}_4\Psi$ and integrate by parts on $S^2$ we obtain the following
\begin{align}
\begin{split}
&\Omega\slashed\nabla_3\left[r^p\Omega^{-2k}|\Omega\slashed\nabla_4\Psi|^2\right]+\Omega\slashed\nabla_4\left[r^p\Omega^{-2k}(\Omega^2|\slashed\nabla\Psi|^2+V|\Psi|^2)\right]\\
&+r^{p-1}\Omega^{-2k}\Bigg\{(p+kx)|\Omega\slashed\nabla_4\Psi|^2-\left[\frac{4\Omega^2}{r^2}+V(p-3+x(k-1))\right]\Omega^2|\Psi|^2
\\&\qquad\qquad\qquad-(p-2+x(k-1))\Omega^4|\slashed\nabla\Psi|^2\Bigg\}\stackrel{S^2}{\equiv}0.
\end{split}
\end{align}
We can ensure that the bulk term is non-negative by taking $p=0,k=0$ or $p=2, 1\leq k\leq2$ or $p\in(0,2)$ and restricting to large enough $r$. Integrating in a region $\mathscr{D}^{u,v}_{\Sigma^*}\cap \{r>R\}$ yields (after averaging in $R$ and using \Cref{RWILED} to deal with the timelike boundary term)
\begin{proposition}\label{RWrp}
Let $\Psi$ be a solution to (\ref{RW}) arising from data as in \Cref{RWwpCauchy}, and define
\begin{align}
{\mathbb{I}_p}_{u_0,v_0}^{u,v}[\Psi]=\int_{\mathscr{D}_{\Sigma^*}^{u,v}\cap\{r>R\}} d\bar{u}d\bar{v}\sin\theta d\theta d\phi\; r^{p-1}\left[p|\Omega\slashed{\nabla}_4\Psi|^2+(2-p)|\slashed{\nabla}\Psi|^2+r^{-2}|\Psi|^2\right],
\end{align}
Then we have, for $p\in [0,2]$,
\begin{align}\label{RW rp estimate}
\begin{split}
\int_{\mathscr{C}_u\cap\{r>R\}} dv \sin\theta d\theta d\phi r^p |\Omega\slashed{\nabla}_4 \Psi|^2+ {\mathbb{I}_p}_{u_0,v_0}^{u,v}[\Psi]\lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*\cap\{r>R\}} r^p|\Omega\slashed{\nabla}_4\Psi|^2 dr\sin\theta d\theta d\phi.
\end{split}
\end{align}
A similar statement holds for
\begin{align}
{{\mathbb{I}_p}_{u_0,v_0}^{u,v}}^{n,T,\slashed{\nabla}}[\Psi]=\sum_{i+|\alpha|\leq n}{\mathbb{I}_p}_{u_0,v_0}^{u,v}[T^i (r\slashed{\nabla})^\alpha\Psi]
\end{align}
and for
\begin{align}
{{\mathbb{I}_p}_{u_0,v_0}^{u,v}}^{n,k}[\Psi]=\sum_{i+j+|\alpha|\leq n}{\mathbb{I}_p}_{u_0,v_0}^{u,v}[(\Omega^{-1}\slashed{\nabla}_3)^i(r^k\Omega\slashed{\nabla}_4)^j(r\slashed{\nabla})^\alpha\Psi]
\end{align}
if $0\leq k\leq2$.
\end{proposition}
We sketch how to establish higher order versions of the estimates of \Cref{RWrp}. Commuting with $r^h\Omega\slashed{\nabla}_4$ for $0\leq h \leq 2$ or with $r\slashed{\nabla}$ produces terms with favourable signs, and we can close the argument by appealing to Hardy and Poincar\'e estimates. Consider for example $\Phi^{(1)}:=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Psi$, which satisfies
\begin{align}\label{Phi 1 transport equation}
\begin{split}
\Omega\slashed{\nabla}_3 \Phi^{(1)}+\frac{3\Omega^2-1}{r}\Phi^{(1)}=\mathring{\slashed{\Delta}}\Psi-(3\Omega^2+1)\Psi.
\end{split}
\end{align}
Applying $\Omega\slashed{\nabla}_4$ and using \bref{RW} we obtain
\begin{align}\label{Phi 1 wave equation}
\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4\Phi^{(1)}+\frac{3\Omega^2-1}{r}\Omega\slashed{\nabla}_4\Phi^{(1)}-\frac{\Omega^2}{r^2}(3\Omega^2-5)\Phi^{(1)}-\Omega^2\slashed\Delta\Phi^{(1)}=-6M\frac{\Omega^2}{r^2}\Psi.
\end{align}
We see that the new $\Omega\slashed{\nabla}_4\Phi^{(1)}$ term has a good sign, so that when we multiply by $r^p\Omega^{-2k}\Omega\slashed{\nabla}_4\Phi^{(1)}$, integrate by parts over $S^2$ and use Cauchy--Schwarz we get:
\begin{align}
\begin{split}
&\Omega\slashed{\nabla}_3\left[r^p\Omega^{-2k}|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2\right]+\Omega\slashed{\nabla}_4\left[r^p\Omega^{-2k}\left(\Omega^2|{\slashed{\nabla}}\Phi^{(1)}|^2+(5-3\Omega^2)\frac{\Omega^2}{r^2}|\Phi^{(1)}|^2\right)\right]+{r^{p-1}}{\Omega^{-2(k-1)}}\times\\
&\Bigg\{(p+4+x(k+2)-\epsilon)|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+(p-2+x(k-1))\Omega^2|\slashed{\nabla}\Phi^{(1)}|^2+\left[\frac{6M}{r}+(5-3\Omega^2)(p-1+x(k-1))\right]\frac{\Omega^2}{r^2}|\Phi^{(1)}|^2\Bigg\}\\
&\stackrel{S^2}{\lesssim} r^{p-3}\Omega^{2(k-1)}|\Psi|^2,
\end{split}
\end{align}
where $\epsilon>0$ is sufficiently small. Integrating over $\mathscr{D}^{u,v}_{\Sigma^*}\cap\{r>R\}$ for large enough $R$ and using \Cref{RWrp} for $p\in[0,2]$ we get (using $d\omega=\sin\theta d\theta d\phi$):
\begin{align}
\begin{split}
&\int_{{\mathscr{C}}_u\cap\{r>R\}}d\bar{v}d\omega\; r^p|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+\int_{\mathscr{D}^{u,v}_{\Sigma^*}\cap\{r>R\}} d\bar{u}d\bar{v}d\omega\;r^{p-1}\left[(p+4)|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+(2-p)|\slashed{\nabla}\Phi^{(1)}|^2+r^{-2}|\Phi^{(1)}|^2\right]
\\ &+\int_{\mathscr{I}^+\cap\{\bar{u}\in[u_0,u]\}}d\bar{u}d\omega\; |\mathring{\slashed{\nabla}}\Phi^{(1)}|^2+2|\Phi^{(1)}|^2\lesssim \int_{\Sigma^*\cap\{r>R\}}drd\omega\; r^p|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+\int_{r=R}dtd\omega\; r^p\left[|\slashed{\nabla}\Phi^{(1)}|^2+r^{-2}|\Phi^{(1)}|^2\right]\\&+\int_{\mathscr{D}^{u,v}_{\Sigma^*}\cap\{r>R\}}d\bar{u}d\bar{v}d\omega\;r^{p-3}|\Psi|^2.
\end{split}
\end{align}
We control the second term by averaging in $R$ and appealing to \Cref{RWILED} commuted with $\Omega\slashed{\nabla}_4$, and we deal with the last term using the lower order estimate for $\Psi$ from \Cref{RWrp}. Thus
\begin{align}\label{RWrp k=1}
\begin{split}
&\int_{{\mathscr{C}}_u\cap\{r>R\}}d\bar{v}d\omega\; r^p|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+\int_{\mathscr{D}^{u,v}_{\Sigma^*}\cap\{r>R\}} d\bar{u}d\bar{v}d\omega\;r^{p-1}\left[(p+4)|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+(2-p)|\slashed{\nabla}\Phi^{(1)}|^2+r^{-2}|\Phi^{(1)}|^2\right]
\\ &+\int_{\mathscr{I}^+\cap\{\bar{u}\in[u_0,u]\}}d\bar{u}d\omega\; |\mathring{\slashed{\nabla}}\Phi^{(1)}|^2+|\Phi^{(1)}|^2 \lesssim \int_{\Sigma^*\cap\{r>R\}}d\bar{v}d\omega\; r^p\left[|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+|\Omega\slashed{\nabla}_4\Psi|^2\right]+\mathbb{F}^1_{\Sigma^*}[\Psi].
\end{split}
\end{align}
We can repeat this for $\Phi^{(2)}:=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2\Psi$ and obtain a similar estimate following the same steps:
\begin{align}\label{RWrp k=2}
\begin{split}
&\int_{{\mathscr{C}}_u\cap\{r>R\}} d\bar{v}d\omega\; r^p|\Omega\slashed{\nabla}_4\Phi^{(2)}|^2+\int_{\mathscr{D}^{u,v}_{\Sigma^*}\cap\{r>R\}} d\bar{u}d\bar{v}d\omega\; r^{p-1}\left[(p+8)|\Omega\slashed{\nabla}_4\Phi^{(2)}|^2+(2-p)|\slashed{\nabla}\Phi^{(2)}|^2+r^{-2}|\Phi^{(2)}|^2\right]
\\ &+\int_{\mathscr{I}^+\cap\{\bar{u}\in[u_0,u]\}}d\bar{u}d\omega\;|\mathring{\slashed{\nabla}}\Phi^{(2)}|^2-|\Phi^{(2)}|^2\lesssim \int_{\Sigma^*\cap\{r>R\}}d\bar{v}d\omega\; r^p\left[|\Omega\slashed{\nabla}_4\Phi^{(2)}|^2+|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+|\Omega\slashed{\nabla}_4\Psi|^2\right]+\mathbb{F}^2_{\Sigma^*}[\Psi].
\end{split}
\end{align}
Note that the integral on $\mathscr{I}^+$ on the left hand side is non-negative by Poincar\'e's inequality. See \cite{AAG16a}, \cite{AAG16b}, \cite{Mos18} for more on this method applied to the scalar wave equation.
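The sign of the $\mathscr{I}^+$ term in \bref{RWrp k=2} can be spelled out. For the symmetric traceless $S^2$-tensors considered here, which are supported on angular modes $\ell\geq2$, a Poincar\'e inequality on the unit sphere holds with constant at least $1$ (the sharp constant is in fact larger, cf.\ \cite{DHR16}), and a constant of $1$ already suffices:
\begin{align*}
\int_{S^2}d\omega\;|\mathring{\slashed{\nabla}}\Phi^{(2)}|^2\geq\int_{S^2}d\omega\;|\Phi^{(2)}|^2,\qquad\text{whence}\qquad \int_{\mathscr{I}^+\cap\{\bar{u}\in[u_0,u]\}}d\bar{u}d\omega\;\left[|\mathring{\slashed{\nabla}}\Phi^{(2)}|^2-|\Phi^{(2)}|^2\right]\geq0.
\end{align*}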
\subsection{Radiation fields}\label{subsection 5.2 subsection Radiation fields}
In this section we establish the properties of future radiation fields belonging to solutions that arise from smooth, compactly supported data on $\Sigma^*$.
\subsubsection{Radiation on $\mathscr{H}^+$}\label{subsubsection 5.2.1 radiation on H+}
\begin{defin}\label{RWonH}
Let $\Psi$ be a solution to (\ref{RW}) arising from smooth data $(\uppsi,\uppsi')$ on $\Sigma^*, \Sigma$ or $\overline{\Sigma}$ as in \Cref{RWwpCauchy}. The radiation field $\bm{\uppsi}_{\mathscr{H}^+}$ is defined to be the restriction of $\Psi$ to $\mathscr{H}^+_{\geq0}, \mathscr{H}^+$ or $\overline{\mathscr{H}^+}$ respectively.
\end{defin}
\begin{remark}
We will be using the same notation for the radiation field on $\mathscr{H}^+_{\geq0}, \mathscr{H}^+$ or $\overline{\mathscr{H}^+}$.
\end{remark}
As a corollary to \Cref{RWwpCauchy} we have
\begin{corollary}
The radiation field $\bm{\uppsi}_{\mathscr{H}^+}$ as in \Cref{RWonH} is smooth on $\mathscr{H}^+_{\geq0}$. The same applies to $(\Omega^{-1}\slashed{\nabla}_3)^k\Psi$ for arbitrary $k$.
\end{corollary}
The integrated local energy decay statement of \Cref{RWILED} gives a quick way to show slow decay for $\bm{\uppsi}_{\mathscr{H}^+}$ and its derivatives:
\begin{proposition}\label{RWdecayfixedR}
For a solution $\Psi$ to \cref{RW} arising from smooth data of compact support on $\Sigma^*$, $\left|\Psi|_{\{r=R\}}\right|$ decays as $t\longrightarrow\infty$.
\end{proposition}
\begin{proof}
Commuting \bref{RWILEDestimate} with $\mathcal{L}_T$ twice and using the redshift estimate of \Cref{RWredshift} give us for any $R<\infty$
\begin{align}
\int_{v_0}^\infty d\bar{v}\;\left[ \underline{F}_{\underline{\mathscr{C}}_v\cap\{r<R\}}[\Psi]+ \underline{F}_{\underline{\mathscr{C}}_v\cap\{r<R\}}[\slashed{\nabla}_T\Psi]\right]<\infty.
\end{align}
This in turn implies energy decay in a neighborhood of $\mathscr{H}^+$:
\begin{align*}
\lim_{v\longrightarrow\infty} \underline{F}_v[\Psi](u_{R},\infty)=0,
\end{align*}
where $v-u_R=R^*$. Commuting with $\Omega^{-1} e_3$ and $\mathcal{L}_{\Omega_i}$ and using \Cref{RWredshift} again gives
\begin{align*}
\lim_{v\longrightarrow \infty} \sup_{u\in[u_R,\infty]} |\Psi|_{v}=0.
\end{align*}
\end{proof}
\begin{remark}
The preceding argument works to show that $(\Omega^{-1}\slashed{\nabla}_3)^k\Psi$ decays on any hypersurface $r=R$. See also Section 8.2 of \cite{DRSR14}.
\end{remark}
\begin{proposition}
For a solution $\Psi$ to \cref{RW} arising from smooth data of compact support on $\Sigma^*$, the energy flux on $\mathscr{H}^+$ is equal to
\begin{align*}
F^T_{\mathscr{H}^+}=\int_{\mathscr{H}^+} |\partial_v\Psi|^2 dv \sin\theta d\theta d\phi.
\end{align*}
\end{proposition}
\begin{proof}
This follows from the regularity of $\Psi$ and its angular derivatives on $\mathscr{H}^+$ together with energy conservation.
\end{proof}
\subsubsection{Radiation on $\mathscr{I}^+$}\label{subsubsection 5.2.2 radiation field on I+}
An $r^p$-estimate like \Cref{RWrp} implies the existence of the radiation field on $\mathscr{I}^+$ as a ``soft'' corollary.
\begin{proposition}\label{RWradscri}
For a solution $\Psi$ to \cref{RW} arising from smooth data of compact support on $\Sigma^*$,
\begin{align}
\bm{\uppsi}_{\mathscr{I}^+}(u,\theta^A)=\lim_{v\longrightarrow\infty}\Psi(u,v,\theta^A)
\end{align}
exists and belongs to $\Gamma(\mathscr{I}^+)$. Moreover,
\begin{align}\label{RW limit of energy at null infinity}
\lim_{v\longrightarrow \infty}\int_{\underline{\mathscr{C}}_v\cap\{u\in[u_0,u_1]\}}dud\omega\; |\Omega\slashed{\nabla}_3\Psi|^2+\Omega^2|\slashed{\nabla}\Psi|^2+V|\Psi|^2=\int_{\mathscr{I}^+\cap \{u\in[u_0,u_1]\}}dud\omega\; |\partial_u\bm{\uppsi}_{\mathscr{I}^+}|^2.
\end{align}
\end{proposition}
\begin{proof}
Let $r_2>r_1>8M$, fix $u$ and set $v(r_2,u)\equiv v_2, v(r_1,u)\equiv v_1$. The Sobolev embedding on the sphere $W^{3,1}(S^2)\hookrightarrow L^\infty(S^2)$ and the fundamental theorem of calculus give us:
\begin{align}\label{first order}
\begin{split}
|\Psi(u,v_2,\theta,\phi)-\Psi(u,v_1,\theta,\phi)|^2\leq& B\left[\sum_{|\gamma|\leq 3} \int_{S^2}d\omega\; |\slashed{\mathcal{L}}^\gamma_{S^2} (\Psi(u,v_2,\theta,\phi)-\Psi(u,v_1,\theta,\phi))|\right]^2\\
&= B\left[\sum_{|\gamma|\leq 3} \int_{S^2}d\omega\int_{v_1}^{v_2}dv |\slashed{\mathcal{L}}^\gamma_{S^2} \Omega\slashed\nabla_4\Psi|\right]^2
\end{split}
\end{align}
Cauchy--Schwarz gives:
\begin{align}
|\Psi(u,v_2,\theta,\phi)-\Psi(u,v_1,\theta,\phi)|^2 \leq \frac{B}{r_1}\Bigg[\sum_{|\gamma|\leq 3} \int_{\mathscr{C}_u\cap\{v>v_1\}}dvd\omega\; r^2|\slashed{\mathcal{L}}^\gamma_{S^2}\Omega\slashed{\nabla}_4\Psi|^2\Bigg],
\end{align}
where $\slashed{\mathcal{L}}^\gamma_{S^2}=\mathcal{L}_{\Omega_1}^{\gamma_1}\mathcal{L}_{\Omega_2}^{\gamma_2}\mathcal{L}_{\Omega_3}^{\gamma_3}$ denotes Lie differentiation on $S^2$ with respect to its $\mathfrak{so}(3)$ algebra of Killing fields. This shows that $\Psi(u,v,\theta,\phi)$ converges in $L^\infty(\mathscr{I}^+\cap\{u>u_0\})$ for some $u_0>-\infty$ as $v\longrightarrow\infty$. Using higher order $r^p$-estimates we can repeat this argument to show
\begin{align}\label{second order}
\left|\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Psi(u,v_2,\theta,\phi)-\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Psi(u,v_1,\theta,\phi)\right|^2\lesssim \frac{1}{r_1}\Bigg[\sum_{|\gamma|\leq 3} \int_{\mathscr{C}_u\cap\{v>v_1\}} \left|r^2\slashed{\mathcal{L}}^\gamma_{S^2}\Omega\slashed{\nabla}_4\left(r^2\Omega\slashed{\nabla}_4\right)\Psi\right|^2dv\sin \theta d\theta d\phi\Bigg].
\end{align}
Commuting \bref{first order} with $T$ and $\Omega^i$ and using \bref{second order} gives that $\Psi|_{\mathscr{I}^+}=\lim_{v\longrightarrow \infty} \Psi(u,v,\theta,\phi)$ is differentiable on $\mathscr{I}^+$. We can repeat this argument with higher order $r^p$-estimates to find that $\bm{\uppsi}_{\mathscr{I}^+}$ is smooth and that $\lim_{v\longrightarrow\infty}(\Omega\slashed{\nabla}_3)^i(r\slashed{\nabla})^\gamma \Psi=\partial_u^i\mathring{\slashed{\nabla}}{}^\gamma\bm{\uppsi}_{\mathscr{I}^+}$ for any index $i$ and multiindex $\gamma$. The identity \bref{RW limit of energy at null infinity} follows immediately.
\end{proof}
In the following, define $\Phi^{(k)}:=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^k\Psi$.
\begin{corollary}\label{RW transverse derivatives converge}
Under the assumptions of \Cref{RWradscri}, the limit
\begin{align}
\bm{\upphi}^{(k)}_{\mathscr{I}^+}(u,\theta^A)=\lim_{v\longrightarrow\infty}\Phi^{(k)}(u,v,\theta^A)
\end{align}
exists and defines an element of $\Gamma(\mathscr{I}^+)$.
\end{corollary}
\begin{proof}
Let $R,u_0$ be such that $\Psi$ vanishes on $\mathscr{C}_{u}\cap\{r>R\}$ for $u\leq u_0$. We can integrate \bref{Phi 1 transport equation} from a point $(u_0,v,\theta^A)$ to $(u,v,\theta^A)$, where $r(u_0,v)>R$, to find
\begin{align}
\Phi^{(1)}(u,v,\theta^A)=\frac{r^2}{\Omega^2}(u,v)\int_{u_0}^u d\bar{u}\,\frac{\Omega^2}{r^2}(\bar{u},v)\left[\mathring{\slashed{\Delta}}\Psi-(3\Omega^2+1)\Psi\right](\bar{u},v,\theta^A).
\end{align}
The right hand side converges as $v\longrightarrow\infty$ by \Cref{RWradscri} and Lebesgue's bounded convergence theorem. An inductive argument works to show the same for higher order derivatives.
\end{proof}
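For the reader's convenience we verify that $\Omega^2/r^2$ is an integrating factor for \bref{Phi 1 transport equation}; this is a short computation assuming the standard normalisation $\Omega\slashed{\nabla}_3 r=-\Omega^2$. Since $3\Omega^2-1=2-6M/r$,
\begin{align*}
\Omega\slashed{\nabla}_3\left(\frac{\Omega^2}{r^2}\right)=-\Omega^2\frac{d}{dr}\left(\frac{1}{r^2}-\frac{2M}{r^3}\right)=\frac{\Omega^2}{r^3}\left(2-\frac{6M}{r}\right)=\frac{\Omega^2}{r^2}\cdot\frac{3\Omega^2-1}{r},
\end{align*}
so that \bref{Phi 1 transport equation} is equivalent to
\begin{align*}
\Omega\slashed{\nabla}_3\left(\frac{\Omega^2}{r^2}\Phi^{(1)}\right)=\frac{\Omega^2}{r^2}\left[\mathring{\slashed{\Delta}}\Psi-(3\Omega^2+1)\Psi\right],
\end{align*}
which integrates directly in $u$ along $\underline{\mathscr{C}}_v$.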
\begin{defin}\label{RW future rad field scri}
Let $\Psi$ be a solution to \cref{RW} arising from smooth data of compact support on $\Sigma^*, \Sigma$ or $\overline{\Sigma}$. The future radiation field on $\mathscr{I}^+$ is defined to be the limit of $\Psi$ towards $\mathscr{I}^+$
\begin{align*}
\bm{\uppsi}_{\mathscr{I}^+}(u,\theta,\phi)=\lim_{v\longrightarrow\infty}\Psi(u,v,\theta,\phi).
\end{align*}
\end{defin}
\begin{remark}
Note that a solution $\Psi$ arising from compactly supported data on $\overline{\Sigma}$ necessarily corresponds to compactly supported data on $\Sigma^*$.
\end{remark}
\noindent The $r^p$-estimates of \Cref{RWrp} further imply that $\bm{\uppsi}_{\mathscr{I}^+}$ decays as $u\longrightarrow\infty$:
\begin{proposition}\label{RWdecayscri}
Let $\Psi,\bm{\uppsi}_{\mathscr{I}^+}$ be as in \Cref{RWradscri}. Then $\bm{\uppsi}_{\mathscr{I}^+}$ decays along $\mathscr{I}^+$.
\end{proposition}
\begin{proof}
The fundamental theorem of calculus, Cauchy--Schwarz and a Hardy estimate give us:
\begin{align}
\begin{split}
\int_{S^2_{u,\infty}}|\Psi_{\mathscr{I}^+}|^2\leq&\int_{S^2_{u,v(r=R)}}|\Psi_{r=R}|^2+\int_{\mathscr{C}_u}\frac{1}{r^2}|\Psi|^2\times \int_{\mathscr{C}_u}r^2|\Omega\slashed{\nabla}_4\Psi|^2\\
\lesssim&\int_{S^2_{u,v(r=R)}}|\Psi_{r=R}|^2+\int_{\mathscr{C}_u}|\Omega\slashed{\nabla}_4\Psi|^2\times \int_{\mathscr{C}_u}r^2|\Omega\slashed{\nabla}_4\Psi|^2.
\end{split}
\end{align}
\Cref{RWrp} applied to $\Psi$ and $\slashed{\nabla}_T\Psi$ implies the decay of $\int_{\mathscr{C}_u\cap\{r>R\}}|\Omega\slashed{\nabla}_4\Psi|^2$ and the boundedness of $\int_{\mathscr{C}_u\cap\{r>R\}}r^2|\Omega\slashed{\nabla}_4\Psi|^2$, and the result follows considering \Cref{RWdecayfixedR}.
\end{proof}
We can in fact compute $\bm{\upphi}_{\mathscr{I}^+}^{(k)}$ from $\bm{\uppsi}_{\mathscr{I}^+}$ for $k=1,2$:
\begin{corollary}\label{Phi 1 forward}
For a solution $\Psi$ to \cref{RW} arising from smooth data of compact support on $\Sigma^*$, we have
\begin{align}
\bm{\upphi}^{(1)}_{\mathscr{I}^+}(u,\theta^A)=-\int_u^\infty d\bar{u}\left[\mathcal{A}_2-2\right]\bm{\uppsi}_{\mathscr{I}^+}(\bar{u},\theta^A).
\end{align}
\end{corollary}
\begin{proof}
Let $-\infty<u_1<u_2<\infty$ and let $v$ be such that $(u_1,v,\theta^A)\in J^+(\Sigma^*)$. We integrate \cref{Phi 1 transport equation} on $\underline{\mathscr{C}}_v$ between $u_1,u_2$ and use the fact that $\Phi^{(1)}$ has a finite limit $\bm{\upphi}^{(1)}_{\mathscr{I}^+}$ towards $\mathscr{I}^+$ to get
\begin{align}
\bm{\upphi}^{(1)}_{\mathscr{I}^+}(u_1,\theta^A)-\bm{\upphi}^{(1)}_{\mathscr{I}^+}(u_2,\theta^A)=-\int_{u_1}^{u_2}d\bar{u}\left[\mathcal{A}_2-2\right]\bm{\uppsi}_{\mathscr{I}^+}(\bar{u},\theta^A).
\end{align}
Since $\bm{\upphi}^{(1)}_{\mathscr{I}^+}$ is uniformly bounded, we have that $\left[\mathcal{A}_2-2\right]\bm{\uppsi}_{\mathscr{I}^+}$ is integrable over $\mathscr{I}^+$. The result follows since $\bm{\upphi}^{(1)}_{\mathscr{I}^+}(u,\theta^A)$ decays as $u\longrightarrow\infty$.
\end{proof}
\begin{lemma}
If $\Psi$ satisfies \bref{RW} then
\begin{align}\label{eq:191}
\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Psi=\left[\mathcal{A}_2(\mathcal{A}_2-2)-12M\slashed{\nabla}_T\right]\Psi.
\end{align}
\end{lemma}
\begin{proof}
Straightforward computation using \cref{RW}.
\end{proof}
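The computation can be organised as follows (a sketch, assuming the normalisations $\Omega\slashed{\nabla}_3 r=-\Omega^2$ and $\Omega\slashed{\nabla}_4 r=\Omega^2$): the weights conjugate the null derivatives into first order operators with a potential,
\begin{align*}
\frac{\Omega^2}{r^2}\,\Omega\slashed{\nabla}_4\,\frac{r^2}{\Omega^2}=\Omega\slashed{\nabla}_4+\frac{3\Omega^2-1}{r},\qquad \frac{r^2}{\Omega^2}\,\Omega\slashed{\nabla}_3\,\frac{\Omega^2}{r^2}=\Omega\slashed{\nabla}_3+\frac{3\Omega^2-1}{r}.
\end{align*}
Expanding the left hand side of \bref{eq:191} using these identities and repeatedly substituting \cref{RW} for $\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4\Psi$ eliminates all null derivatives in favour of angular operators and the single $\slashed{\nabla}_T$ term on the right hand side.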
Following the same steps as in the proof of \Cref{Phi 1 forward} we find
\begin{corollary}\label{Phi 2 forward}
Let $\Psi$ be a solution to \cref{RW} arising from smooth data of compact support on $\Sigma^*$. Then $\bm{\upphi}^{(2)}_{\mathscr{I}^+}(u,\theta^A)$ satisfies
\begin{align}
\bm{\upphi}^{(2)}_{\mathscr{I}^+}(u,\theta^A)=\int_{u}^\infty\int_{u_1}^\infty du_1 du_2 \left[\mathcal{A}_2(\mathcal{A}_2-2)-6M\partial_u\right]\bm{\uppsi}_{\mathscr{I}^+}(u_2,\theta^A).
\end{align}
\end{corollary}
\begin{corollary}
Let $\Psi$ be a solution to \cref{RW} arising from smooth data of compact support on $\Sigma^*$. Then
the radiation field $\bm{\uppsi}_{\mathscr{I}^+}$ satisfies
\begin{align}
\int_{-\infty}^\infty du_1\, \bm{\uppsi}_{\mathscr{I}^+}(u_1,\theta^A)= \int_{-\infty}^\infty \int_{u_1}^\infty du_1 du_2\, \bm{\uppsi}_{\mathscr{I}^+}(u_2,\theta^A)=0.
\end{align}
\end{corollary}
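A sketch of the argument, under the assumption (consistent with the setting here, where all fields are supported on angular modes $\ell\geq2$) that the angular operators $\mathcal{A}_2-2$ and $\mathcal{A}_2(\mathcal{A}_2-2)$ are invertible on such fields: since $\Psi$ vanishes near $\mathscr{I}^+\cap\{u\leq u_-\}$ for some $u_-$ when the data are compactly supported, letting $u\longrightarrow-\infty$ in \Cref{Phi 1 forward,,Phi 2 forward} gives
\begin{align*}
\left[\mathcal{A}_2-2\right]\int_{-\infty}^\infty du_1\,\bm{\uppsi}_{\mathscr{I}^+}(u_1,\theta^A)=0,\qquad \int_{-\infty}^\infty\int_{u_1}^\infty du_1 du_2\left[\mathcal{A}_2(\mathcal{A}_2-2)-6M\partial_{u_2}\right]\bm{\uppsi}_{\mathscr{I}^+}(u_2,\theta^A)=0.
\end{align*}
The $\partial_{u_2}$ term integrates, by the decay of $\bm{\uppsi}_{\mathscr{I}^+}$, to a multiple of $\int_{-\infty}^\infty du_1\,\bm{\uppsi}_{\mathscr{I}^+}(u_1,\theta^A)$, which vanishes by the first identity; inverting the angular operators then yields the corollary.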
\subsection{The forwards scattering map}\label{subsection 5.3 the forwards scattering map}
This section combines the results of \Cref{subsection 5.2 subsection Radiation fields} above to prove \Cref{forwardRW}.
\begin{proposition}\label{RWfcp}
Solutions to (\ref{RW}) arising from smooth data on $\Sigma^*$ of compact support give rise to smooth radiation fields $\bm{\uppsi}_{\mathscr{I}^+}\in\mathcal{E}_{\mathscr{I}^+}^{T}$ on $\mathscr{I}^+$ and $\bm{\uppsi}_{\mathscr{H}^+}\in\mathcal{E}_{\mathscr{H}^+_{\geq0}}^{T}$ on $\mathscr{H}^+_{\geq0}$, such that
\begin{align}\label{818181}
||\bm{\uppsi}_{\mathscr{I}^+}||_{\mathcal{E}^T_{\mathscr{I}^+}}^2+||\bm{\uppsi}_{\mathscr{H}^+}||_{\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}}^2=||(\Psi|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*}) ||_{\mathcal{E}^T_{\Sigma^*}}^2 .
\end{align}
\end{proposition}
\begin{proof}
For data of compact support, Propositions \ref{RWwpCauchy} and \ref{RWradscri} give us the existence of smooth radiation fields $\bm{\uppsi}_{\mathscr{I}^+}$ and $\bm{\uppsi}_{\mathscr{H}^+}$, and by Propositions \ref{RWdecayfixedR}, \ref{RWdecayscri}, $\bm{\uppsi}_{\mathscr{I}^+}$ decays towards $\mathscr{I}^+_+$ and $\bm{\uppsi}_{\mathscr{H}^+}$ decays towards $\mathscr{H}^+$. Let $R$ be sufficiently large and let $v_+,u_+$ be such that $v_+-u_+=R^*$, $v_++u_+>0$. A $T$-energy estimate on the region bounded by $\Sigma^*$, $\mathscr{H}^+_{\geq0}\cap\{v\leq v_+\}$, $\mathscr{I}^+\cap\{u\leq u_+\}$ and $\mathscr{C}_{u_+}\cap\{r\geq R\}, \underline{\mathscr{C}}_{v_+}\cap\{r\leq R\}$ gives
\begin{align}
\underline{F}^T_{v_+}[\Psi](u_+,\infty)+F^T_{u_+}[\Psi](v_+,\infty)+ \int_{\mathscr{H}^+_{\geq0}\cap \{v\leq v_+\}}dvd\omega\;|\partial_v\Psi|^2+\int_{\mathscr{I}^+\cap\{u\leq u_+\}}dud\omega\;|\partial_u\Psi|^2=||\Psi||_{\mathcal{E}^T_{\Sigma^*}}^2.
\end{align}
The integrated local energy decay statement of \Cref{RWILED} commuted with $\slashed{\nabla}_T$, along with the estimate \bref{RW rp estimate} of \Cref{RWrp} for $p=1$ commuted with $\slashed{\nabla}_T$, imply that $\underline{F}^T_{v_+}[\Psi](u_+,\infty)+F^T_{u_+}[\Psi](v_+,\infty)$ decay as $u_+\longrightarrow\infty$. This gives us that $\bm{\uppsi}_{\mathscr{I}^+}\in\mathcal{E}^T_{\mathscr{I}^+}$ and $\bm{\uppsi}_{\mathscr{H}^+}\in\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}$ and that $\bm{\uppsi}_{\mathscr{I}^+}, \bm{\uppsi}_{\mathscr{H}^+}$ satisfy \bref{818181}.
\end{proof}
\begin{corollary}\label{RWfcpSigma}
Solutions to (\ref{RW}) arising from data on ${\Sigma}$ of compact support give rise to smooth radiation fields in $\mathcal{E}_{\mathscr{I}^+}^{T}$ and $\mathcal{E}_{{\mathscr{H}^+}}^{T}$. Solutions to (\ref{RW}) arising from data on $\overline{\Sigma}$ of compact support give rise to smooth radiation fields in $\mathcal{E}_{\mathscr{I}^+}^{T}$ and $\mathcal{E}_{\overline{\mathscr{H}^+}}^{T}$.
\end{corollary}
\begin{proof}
The evolution of $\Psi$ on $ J^+({\Sigma^*})\cap J^-(\Sigma)$ can be handled locally. A $T$-energy estimate on $ J^+({\Sigma})\cap J^-(\Sigma^*)$ gives the result. An identical statement applies to $\overline{\Sigma}$.
\end{proof}
\Cref{RWfcp,,RWfcpSigma} allow us to define the forwards maps $\mathscr{F}^+$ from dense subspaces of $\mathcal{E}^{T}_{\Sigma^*}$, $\mathcal{E}^{T}_{\Sigma}$, $\mathcal{E}^{T}_{\overline{\Sigma}}$.
\begin{defin}
Let $(\uppsi,\uppsi')$ be a smooth data set to the Regge--Wheeler equation \bref{RW} on $\Sigma^*$ as in \Cref{RWwpCauchy}. Define the map $\mathscr{F}^+$ by
\begin{align}
\mathscr{F}^+:\Gamma_c(\Sigma^*)\times\Gamma_c(\Sigma^*)\longrightarrow \Gamma(\mathscr{H}^+_{\geq0})\times\Gamma(\mathscr{I}^+), \quad(\uppsi,\uppsi')\longmapsto (\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+}),
\end{align}
where $(\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+})$ are as in the proof of \Cref{RWfcp}.\\
The map $\mathscr{F}^+$ is defined analogously for data on $\Sigma, \overline{\Sigma}$:
\begin{align}
\mathscr{F}^+:\Gamma_c(\Sigma)\times\Gamma_c(\Sigma)\longrightarrow \Gamma(\mathscr{H}^+)\times\Gamma(\mathscr{I}^+), \quad(\uppsi,\uppsi')\longmapsto (\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+}),\\
\mathscr{F}^+:\Gamma_c(\overline{\Sigma})\times\Gamma_c(\overline{\Sigma})\longrightarrow \Gamma(\overline{\mathscr{H}^+})\times\Gamma(\mathscr{I}^+), \quad(\uppsi,\uppsi')\longmapsto (\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+}).
\end{align}
\end{defin}
The map $\mathscr{F}^+$ uniquely extends to the forward scattering map of \Cref{RWforwardmap}:
\begin{corollary} \label{RWforwardmap}
The map defined by the forward evolution of data in $\Gamma_c(\Sigma^*)\times\Gamma_c(\Sigma^*)$ as in \Cref{RWfcp} uniquely extends to a map
\begin{align}
\mathscr{F}^{+}: \mathcal{E}^{T}_{\Sigma^*} \longrightarrow \mathcal{E}_{\mathscr{H}^+_{\geq0}}^{T}\oplus \mathcal{E}_{\mathscr{I}^+}^{T},
\end{align}
which is bounded:
\begin{align}
||(\uppsi,\uppsi')||_{\mathcal{E}^{T}_{\Sigma^*}}^2=||\bm{\uppsi}_{\mathscr{H}^+}||_{\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}}^2+||\bm{\uppsi}_{\mathscr{I}^+}||_{\mathcal{E}^{T}_{\mathscr{I}^+}}^2 .
\end{align}
We similarly obtain bounded maps
\begin{align}
\mathscr{F}^{+}: \mathcal{E}^{T}_{\Sigma} \longrightarrow \mathcal{E}_{\mathscr{H}^+}^{T}\oplus \mathcal{E}_{\mathscr{I}^+}^{T},\\
\mathscr{F}^{+}: \mathcal{E}^{T}_{\overline{\Sigma}} \longrightarrow \mathcal{E}_{\overline{\mathscr{H}^+}}^{T}\oplus \mathcal{E}_{\mathscr{I}^+}^{T}.
\end{align}
The map $\mathscr{F}^+$ is injective on $\Gamma_c(\Sigma^*)\times\Gamma_c(\Sigma^*)$ and therefore extends to a unitary Hilbert-space isomorphism onto its image.
\end{corollary}
\subsection{The backwards scattering map}\label{subsubsection 5.4 the backwards scattering map}
This section contains the proof of \Cref{backwardRW,RW isomorphisms}. We define backwards evolution from data on the event horizon and null infinity in \Cref{RWbackwardsexistence}, and this defines the map $\mathscr{B}^-$ which inverts $\mathscr{F}^+$. \Cref{RW isomorphisms} follows immediately by \Cref{time inversion of RW}.\\
\indent We begin by constructing a solution to the equation on $ J^-(\mathscr{I}^+\cup\mathscr{H}^+_{\geq0})$ out of compactly supported future scattering data.
\begin{proposition}\label{RWbackwardsexistence}
Let $\bm{\uppsi}_{\mathscr{H}^+}\in\Gamma_c(\mathscr{H}^+_{\geq0})$ be supported on $v<v_+<\infty$ with $\|\bm{\uppsi}_{\mathscr{H}^+}\|_{\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}}<\infty$, and let $\bm{\uppsi}_{\mathscr{I}^+}\in\Gamma_c(\mathscr{I}^+)$ be supported on $u<u_+<\infty$ with $\|\bm{\uppsi}_{\mathscr{I}^+}\|_{\mathcal{E}^T_{\mathscr{I}^+}}<\infty$. Then there exists a unique smooth $\Psi$ defined on $ J^+(\Sigma^*)$ that satisfies \cref{RW} and realises $\bm{\uppsi}_{\mathscr{I}^+}$, $\bm{\uppsi}_{\mathscr{H}^+}$ as its radiation fields. Moreover, $(\Psi|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*})\in \mathcal{E}^T_{\Sigma^*}$.
\end{proposition}
\begin{proof}
Assume $\bm{\uppsi}_{\mathscr{H}^+}$ is supported on $\{(v,\theta^A), v\in[v_-,v_+]\}\subset\mathscr{H}^+_{\geq0}$ and $\bm{\uppsi}_{\mathscr{I}^+}$ is supported on $[u_-,u_+]$, with $-\infty<u_-,u_+,v_-,v_+<\infty$. Let $\widetilde{\Sigma}$ be a spacelike surface connecting $\mathscr{H}^+$ at a finite $v_*>v_+$ to $\mathscr{I}^+$ at a finite $u_*>u_+$. Fix $\mathcal{R}_{\mathscr{I}^+}>3M$ and let $v^\infty$ be sufficiently large so that $\underline{\mathscr{C}}_{v^\infty}\cap [u_-,u_+]\subset J^+(\Sigma^*)$ and $r(u,v^\infty)>\mathcal{R}_{\mathscr{I}^+}$ for $u\in[u_-,u_+]$. Denote by $\mathscr{D}$ the region bounded by $\mathscr{H}^+_{\geq 0}\cap\{v\in[v_-,v_*]\}$, $\widetilde{\Sigma}$, $\underline{\mathscr{C}}_{v^\infty}$, $\Sigma^*$ and $\mathscr{C}_{u_-}$.
We can find $\Psi$ that solves the ``finite'' backwards problem for \bref{RW} in $\mathscr{D}$ with the following data:
\begin{itemize}
\item $\bm{\uppsi}_{\mathscr{H}^+}$ on $\mathscr{H}^+\cap\{v\in[v_-,v_+]\}$,
\item $(0,0)$ on $\widetilde{\Sigma}$,
\item $\bm{\uppsi}_{\mathscr{I}^+}$ on $\underline{\mathscr{C}}_{v^\infty}$.
\end{itemize}
\noindent From \bref{RW} we derive
\begin{align}\label{RW first transverse derivative in the 3 direction}
\Omega\slashed{\nabla}_3\left[\frac{r^2}{\Omega^2}|\Omega\slashed{\nabla}_4\Psi|^2\right]+\frac{3\Omega^2-1}{r}\frac{r^2}{\Omega^2}|\Omega\slashed{\nabla}_4\Psi|^2=-\Omega\slashed{\nabla}_4\left[|\mathring{\slashed{\nabla}}\Psi|^2+(3\Omega^2+1)|\Psi|^2\right]+\frac{6M\Omega^2}{r^2}|\Psi|^2.
\end{align}
Let $\tilde{v}<v^\infty$ be large enough that $r(u,\tilde{v})>\mathcal{R}_{\mathscr{I}^+}$ for $u\in[u_-,u_+]$. For $\tilde{v}\leq v<v^\infty$ integrate \bref{RW first transverse derivative in the 3 direction} in the region $\mathscr{D}_{v}=\mathscr{D}\cap J^+(\underline{\mathscr{C}}_v)$ with measure $dudvd\omega$ to derive
\begin{align}
\begin{split}
\int_{\mathscr{C}_u\cap[v,v^\infty]}d\bar{v}d\omega\frac{r^2}{\Omega^2}|\Omega\slashed{\nabla}_4\Psi|^2\leq &\int_{u}^{u_+}d\bar{u}\int_{\mathscr{C}_{\bar{u}}\cap[v,v^\infty]}d\bar{v}d\omega\frac{2\Omega^2}{r}\frac{r^2}{\Omega^2}|\Omega\slashed{\nabla}_4\Psi|^2\\&+\|\Psi\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\|\Psi\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2+\int_{u_-}^{u_+}d\bar{u}\int_{S^2}d\omega\left[|\mathring{\slashed{\nabla}}\bm{\uppsi}_{\mathscr{I}^+}|^2_{S^2}+4|\bm{\uppsi}_{\mathscr{I}^+}|^2_{S^2}\right].
\end{split}
\end{align}
Applying Gr\"onwall's inequality to the above gives
\begin{align}\label{this}
\int_{\mathscr{C}_u\cap[v,v^\infty]}d\bar{v}d\omega\; r^2|\Omega\slashed{\nabla}_4\Psi|^2\leq\frac{r(u,v)^2}{r(u_+,v)^2}\left[\|\Psi\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\|\Psi\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2+\int_{[u_-,u_+]\times S^2}d\bar{u}d\omega\;|\mathring{\slashed{\nabla}}\bm{\uppsi}_{\mathscr{I}^+}|_{S^2}^2+4|\bm{\uppsi}_{\mathscr{I}^+}|_{S^2}^2\right].
\end{align}
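The multiplicative factor in \bref{this} can be traced to the explicit form of the Gr\"onwall exponential: assuming the normalisation $\partial_u r=-\Omega^2$ at fixed $v$ (consistent with the double null gauge used here), one computes
\begin{align*}
\exp\left(\int_u^{u_+}\frac{2\Omega^2}{r}\,d\bar{u}\right)=\exp\left(-2\int_u^{u_+}\frac{\partial_{\bar{u}}r}{r}\,d\bar{u}\right)=\frac{r(u,v)^2}{r(u_+,v)^2}.
\end{align*}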
Using \bref{this} we can modify the argument of \Cref{RWradscri} to conclude that for $v>\tilde{v}$
\begin{align}\label{ptwise infinity}
\begin{split}
\left|\Psi|_{(u,v)}-\bm{\uppsi}_{\mathscr{I}^+}\right|\;\lesssim_{M,u_-,\mathcal{R}_{\mathscr{I}^+}} \frac{1}{v}\Bigg[\sum_{|\gamma|\leq2}\int_{[u_-,u_+]\times S^2}d\bar{u}d\omega\;&\left[|\slashed{\mathcal{L}}_{\Omega_i}^\gamma\bm{\uppsi}_{\mathscr{I}^+}|_{S^2}^2+|\mathring{\slashed{\nabla}}\slashed{\mathcal{L}}^\gamma_{\Omega_i}\bm{\uppsi}_{\mathscr{I}^+}|_{S^2}^2+|\slashed{\mathcal{L}}^\gamma_{\Omega_i}\partial_u\bm{\uppsi}_{\mathscr{I}^+}|_{S^2}^2\right]\\&+\|\slashed{\mathcal{L}}^\gamma_{\Omega_i}\Psi\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2\Bigg].
\end{split}
\end{align}
Analogously, let $\tilde{u}$ be such that $\mathcal{R}_{\mathscr{H}^+}<r(\tilde{u},v)<3M$ for $v\in[v_-,v_+]$, where $\mathcal{R}_{\mathscr{H}^+}<3M$ is fixed. We can multiply the equation by $\frac{1}{\Omega^2}\Omega\slashed{\nabla}_3\Psi$ and integrate by parts over a region $\mathscr{D}_{u}=\mathscr{D}\cap J^+(\mathscr{C}_u)$ to get
\begin{align}
\begin{split}
\int_{\underline{\mathscr{C}}_v\cap[u,\infty]}dud\omega&\frac{1}{\Omega^2}|\Omega\slashed{\nabla}_3\Psi|^2+\int_{\mathscr{C}_u\cap[v,v_+]}dvd\omega\left[\frac{1}{r^2}|\slashed{\nabla}\Psi|^2+\frac{1}{r^2}|\Psi|^2\right]+\int_{\mathscr{D}_{u}}\Omega^2dudv\left[|\mathring{\slashed{\nabla}}\Psi|^2+|\Psi|^2\right]\\&\lesssim \int_{\mathscr{H}^+\cap[v,v_+]}dvd\omega\; \left[|\mathring{\slashed{\nabla}}\bm{\uppsi}_{\mathscr{H}^+}|^2+|\bm{\uppsi}_{\mathscr{H}^+}|^2\right]+\int_{v}^{v_+}d\bar{v}\int_{\underline{\mathscr{C}}_{\bar{v}}\cap[u,\infty]}d\bar{u}d\omega\;\frac{2M}{r^2}\frac{1}{\Omega^2}|\Omega\slashed{\nabla}_3\Psi|^2.
\end{split}
\end{align}
Gr\"onwall's inequality implies
\begin{align}\label{RW exponential backwards near H+}
\begin{split}
\int_{\underline{\mathscr{C}}_v\cap[u,\infty]}d\bar{u}d\omega\;\frac{1}{\Omega^2}|\Omega\slashed{\nabla}_3\Psi|^2&\lesssim e^{\frac{1}{2M}(v_+-v)}\left\{\int_{\mathscr{H}^+\cap[v,v_+]} \left[|\mathring{\slashed{\nabla}}\bm{\uppsi}_{\mathscr{H}^+}|^2+|\bm{\uppsi}_{\mathscr{H}^+}|^2\right]dvd\omega+\|\Psi\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\|\Psi\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2\right\}.
\end{split}
\end{align}
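The exponential factor in \bref{RW exponential backwards near H+} is a crude but sufficient bound on the Gr\"onwall exponential: since $r\geq 2M$ throughout the domain of outer communications, the kernel satisfies $\frac{2M}{r^2}\leq\frac{1}{2M}$, whence
\begin{align*}
\exp\left(\int_v^{v_+}\frac{2M}{r^2}\,d\bar{v}\right)\leq e^{\frac{1}{2M}(v_+-v)}.
\end{align*}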
In turn, this implies pointwise control of $\Psi$ near $\mathscr{H}^+$:
\begin{align}\label{ptwise horizon}
|\Psi(u,v,\theta^A)&-\bm{\uppsi}_{\mathscr{H}^+}(v,\theta^A)|^2\lesssim \int_{u}^\infty e^{\frac{v-\bar{u}}{2M}}d\bar{u}\times \int_{\underline{\mathscr{C}}_v\cap[u,\infty]}dud\omega\sum_{|\gamma|\leq2}\frac{1}{\Omega^2}\left|\slashed{\mathcal{L}}_{\Omega_i}^\gamma\Omega\slashed{\nabla}_3\Psi\right|^2\\&\lesssim_{M} r\Omega^2(u,v_+) \left[\sum_{|\gamma|\leq2}\int_{u_-}^{u_+}d\bar{u}\int_{S^2}d\omega\left[|\slashed{\mathcal{L}}_{\Omega_i}^\gamma\bm{\uppsi}_{\mathscr{I}^+}|^2+|\mathring{\slashed{\nabla}}\slashed{\mathcal{L}}^\gamma_{\Omega_i}\bm{\uppsi}_{\mathscr{I}^+}|^2+|\slashed{\mathcal{L}}^\gamma_{\Omega_i}\partial_u\bm{\uppsi}_{\mathscr{I}^+}|^2\right]+\|\slashed{\mathcal{L}}^\gamma_{\Omega_i}\Psi\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2\right].
\end{align}
In the region $\mathscr{D}\setminus (\mathscr{D}_{\tilde{u}}\cap\mathscr{D}_{\tilde{v}})$, $r$ is bounded and energy conservation is sufficient to control $\Psi$ in $L^\infty$. In conclusion, we find that $\Psi$ is controlled in $L^\infty(\mathscr{D})$.\\
\indent Let $\{v_n^\infty\}_{n=0}^\infty$ be a monotonically increasing sequence tending to $\infty$ with $v_0^\infty=v^\infty$, and define $\mathscr{D}_n$ in terms of $v_n^\infty$ analogously to $\mathscr{D}$. Denote $\underline{\mathscr{C}}_n=\underline{\mathscr{C}}_{v_n^\infty}\cap\{u\in[u_-,u_+]\}$. We can repeat the above on the region $\mathscr{D}_n$ with data $\bm{\uppsi}_{\mathscr{I}^+}$ on $\underline{\mathscr{C}}_{n}$ to obtain a sequence $\{\Psi_n\}_{n=0}^\infty$. $\Psi_n$ is bounded uniformly in $n$ in the region $\mathscr{D}_k$ for any $k<n$, and we can show uniform boundedness of the derivatives by commuting with $\slashed{\nabla}_T, \slashed{\nabla}_{\Omega_i}$ and using the equation to obtain higher order versions of the estimates above. By Arzel\`a--Ascoli we can extract a convergent subsequence in $C^k(\mathscr{D}_l)$ for any $k,l$ with a limit $\Psi$ that satisfies \bref{RW}. Note that this procedure can be used to uniquely define $\Psi$ everywhere on $ J^-(\widetilde{\Sigma})\cap J^+(\Sigma^*)$. Clearly, $\Psi|_{\mathscr{H}^+}=\bm{\uppsi}_{\mathscr{H}^+}$, and \bref{ptwise infinity} implies $\Psi\longrightarrow \bm{\uppsi}_{\mathscr{I}^+}$ towards $\mathscr{I}^+$.
Finally, a $T$-energy estimate implies that
\begin{align}\label{subunitarity of B-}
\|(\Psi|_{\Sigma^*}, \slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*})\|_{\mathcal{E}^T_{\Sigma^*}}^2\leq \|\bm{\uppsi}_{\mathscr{H}^+}\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2+\|\bm{\uppsi}_{\mathscr{I}^+}\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2,
\end{align}
so $(\Psi|_{\Sigma^*}, \slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*})\in \mathcal{E}^T_{\Sigma^*}$.
\end{proof}
\begin{defin}\label{RW definition of B-}
Let $\uppsi_{\mathscr{H}^+}, \uppsi_{\mathscr{I}^+}$ be as in \Cref{RWbackwardsexistence}. Define the map $\mathscr{B}^-$ by
\begin{align}
\mathscr{B}^-:\Gamma_c(\mathscr{H}^+_{\geq0})\times\Gamma_c(\mathscr{I}^+)\longrightarrow\Gamma(\Sigma^*)\times\Gamma(\Sigma^*), (\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+})\longrightarrow (\Psi|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*}),
\end{align}
where $\Psi$ is the solution to \bref{RW} arising from scattering data $(\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+})$ as in \Cref{RWbackwardsexistence}.
\end{defin}
\begin{corollary}\label{B- inverts F+}
The maps $\mathscr{F}^+$, $\mathscr{B}^-$ extend uniquely to unitary Hilbert space isomorphisms on their respective domains, such that $\mathscr{F}^+\circ\mathscr{B}^-=Id$, $\mathscr{B}^-\circ\mathscr{F}^+=Id$.
\end{corollary}
\begin{proof}
We will prove the statement for the maps defined on data on $\Sigma^*$. We already know that $\mathscr{F}^+$ is a unitary isomorphism onto its image and that $\mathscr{F}^+\left[\mathcal{E}^T_{\Sigma^*}\right]\subset\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T}_{\mathscr{I}^+}$.
Let $\uppsi_{\mathscr{H}^+}\in\Gamma_c(\mathscr{H}^+_{\geq0})$, $\uppsi_{\mathscr{I}^+}\in\Gamma_c(\mathscr{I}^+)$. \Cref{RWbackwardsexistence} yields a solution $\Psi$ on $J^+(\Sigma^*)$ to \cref{RW}. Since $\Psi$ realises $\uppsi_{\mathscr{I}^+}$, $\uppsi_{\mathscr{H}^+}$ as its radiation fields as in \Cref{RW future rad field scri,RWonH} and since $\mathscr{B}^-(\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+})\in\left[\Gamma(\Sigma^*)\times\Gamma(\Sigma^*)\right]\cap\mathcal{E}^T_{\Sigma^*}$ (see \Cref{RW enough to be in space}), we have that $\mathscr{F}^+\circ\mathscr{B}^-=Id$ on $\Gamma_c(\mathscr{H}^+_{\geq0})\times\Gamma_c(\mathscr{I}^+)$, which is dense in $\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T}_{\mathscr{I}^+}$. Therefore, since $\mathscr{F}^+\left[\mathcal{E}^T_{\Sigma^*}\right]$ is complete, we have that $\mathscr{F}^+\left[\mathcal{E}^T_{\Sigma^*}\right]=\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T}_{\mathscr{I}^+}$. The fact that $\mathscr{B}^-$ is bounded means that its unique extension to $\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T}_{\mathscr{I}^+}$ must be the inverse of $\mathscr{F}^+$, and we have that $\mathscr{B}^-\circ\mathscr{F}^+=Id_{\mathcal{E}^{T}_{\Sigma^*}}$.
\end{proof}
\begin{remark}\label{unitarity of B- is trivial}
Note that the proof of \Cref{RWbackwardsexistence} only establishes the boundedness of $\mathscr{B}^-$, but showing that $\mathscr{B}^-$ inverts $\mathscr{F}^+$, as was done in \Cref{B- inverts F+}, turns \bref{subunitarity of B-} into an equality:
\begin{align}\label{unitarity of B- formula}
\|\mathscr{B}^-(\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+})\|_{\mathcal{E}^T_{\Sigma^*}}^2=\|\uppsi_{\mathscr{H}^+}\|^2_{\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}}+\|\uppsi_{\mathscr{I}^+}\|^2_{\mathcal{E}^{T}_{\mathscr{I}^+}}.
\end{align}
\end{remark}
Since the region $J^+(\overline{\Sigma})\cap J^-(\Sigma^*)$ can be handled locally via \Cref{RWwp local statement near B}, \Cref{RWwpSigmabar} and $T$-energy conservation, we can immediately deduce the following:
\begin{corollary}
The map $\mathscr{B}^-$ can be defined on the following domains:
\begin{align}
\mathscr{B}^{-}:\mathcal{E}^{T}_{\mathscr{H}^+}\oplus \mathcal{E}^{T}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T}_{\Sigma},\\
\mathscr{B}^{-}:\mathcal{E}^{T}_{\overline{\mathscr{H}^+}}\oplus \mathcal{E}^{T}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T}_{\overline{\Sigma}},
\end{align}
and we have
\begin{align}
\mathscr{F}^{+}\circ\mathscr{B}^{-}=Id_{\mathcal{E}^T_{\mathscr{H}^+}\oplus\;\mathcal{E}^T_{\mathscr{I}^+}},\qquad
\mathscr{B}^{-}\circ\mathscr{F}^{+}=Id_{\mathcal{E}^T_{\Sigma}},\\
\mathscr{F}^{+}\circ\mathscr{B}^{-}=Id_{\mathcal{E}^T_{\overline{\mathscr{H}^+}}\oplus\;\mathcal{E}^T_{\mathscr{I}^+}},\qquad
\mathscr{B}^{-}\circ\mathscr{F}^{+}=Id_{\mathcal{E}^T_{\overline{\Sigma}}}.
\end{align}
\end{corollary}
We have just completed the proof of \Cref{backwardRW}.\\
\indent Since the Regge--Wheeler equation \bref{RW} is invariant under time inversion, the existence of the maps $\mathscr{F}^-, \mathscr{B}^+$ is immediate:
\begin{proposition}\label{RW past scattering}
Solutions to (\ref{RW}) arising from smooth data of compact support on $\Sigma$ (or $\overline{\Sigma}$) give rise to smooth radiation fields $\uppsi_{\mathscr{I}^-}\in\mathcal{E}_{\mathscr{I}^-}^{T}$ on $\mathscr{I}^-$ and $\uppsi_{\mathscr{H}^-}\in\mathcal{E}_{\mathscr{H}^-}^{T}$ (or $\mathcal{E}_{\overline{\mathscr{H}^-}}^{T}$) on $\mathscr{H}^-$ (or $\overline{\mathscr{H}^-}$), such that
\begin{align}\label{919191}
||\bm{\uppsi}_{\mathscr{I}^-}||_{\mathcal{E}^T_{\mathscr{I}^-}}^2+||\bm{\uppsi}_{\mathscr{H}^-}||_{\mathcal{E}^T_{\mathscr{H}^-}}^2=||(\Psi|_{\Sigma},\slashed{\nabla}_{n_{\Sigma}}\Psi|_{\Sigma}) ||_{\mathcal{E}^T_{\Sigma}}^2,\\
||\bm{\uppsi}_{\mathscr{I}^-}||_{\mathcal{E}^T_{\mathscr{I}^-}}^2+||\bm{\uppsi}_{\mathscr{H}^-}||_{\mathcal{E}^T_{\overline{\mathscr{H}^-}}}^2=||(\Psi|_{\overline{\Sigma}},\slashed{\nabla}_{n_{\overline{\Sigma}}}\Psi|_{\overline{\Sigma}}) ||_{\mathcal{E}^T_{\overline{\Sigma}}}^2.
\end{align}
As in the case of $\mathscr{F}^+$, there exist Hilbert space isomorphisms
\begin{align}
\mathscr{F}^{-}:\mathcal{E}^{T}_{\Sigma}\longrightarrow \mathcal{E}^{T}_{\mathscr{H}^-}\oplus \mathcal{E}^{T}_{\mathscr{I}^-},\\
\mathscr{F}^{-}:\mathcal{E}^{T}_{\overline{\Sigma}}\longrightarrow \mathcal{E}^{T}_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^{T}_{\mathscr{I}^-}.
\end{align}
Let $\bm{\uppsi}_{\mathscr{H}^-}\in\Gamma_c(\mathscr{H}^-)$ be supported on $u>u_+>-\infty$ with $\|\bm{\uppsi}_{\mathscr{H}^-}\|_{\mathcal{E}^T_{\mathscr{H}^-}}<\infty$, and let $\bm{\uppsi}_{\mathscr{I}^-}\in\Gamma_c(\mathscr{I}^-)$ be supported on $v>v_+>-\infty$ with $\|\bm{\uppsi}_{\mathscr{I}^-}\|_{\mathcal{E}^T_{\mathscr{I}^-}}<\infty$. Then there exists a unique smooth $\Psi$ defined on $ J^-(\Sigma)$ that satisfies \cref{RW} and realises $\bm{\uppsi}_{\mathscr{I}^-}$, $\bm{\uppsi}_{\mathscr{H}^-}$ as its radiation fields. Moreover, $(\Psi|_{\Sigma},\slashed{\nabla}_{n_{\Sigma}}\Psi|_{\Sigma})\in \mathcal{E}^T_{\Sigma}$ and \bref{919191} is satisfied. A similar statement applies in the case of compactly supported smooth scattering data on $\overline{\mathscr{H}^-}, \mathscr{I}^-$ mapping into $\mathcal{E}^T_{\overline{\Sigma}}$.\\
\indent Therefore, as in the case of $\mathscr{B}^-$, there exist Hilbert space isomorphisms
\begin{align}
\mathscr{B}^{+}:\mathcal{E}^{T}_{\mathscr{H}^-}\oplus \mathcal{E}^{T}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T}_{\Sigma},\\
\mathscr{B}^{+}:\mathcal{E}^{T}_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^{T}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T}_{\overline{\Sigma}},
\end{align}
which satisfy
\begin{align}
\mathscr{F}^{-}\circ\mathscr{B}^{+}=Id_{\mathcal{E}^T_{\mathscr{H}^-}\oplus\;\mathcal{E}^T_{\mathscr{I}^-}},\qquad
\mathscr{B}^{+}\circ\mathscr{F}^{-}=Id_{\mathcal{E}^T_{\Sigma}},\\
\mathscr{F}^{-}\circ\mathscr{B}^{+}=Id_{\mathcal{E}^T_{\overline{\mathscr{H}^-}}\oplus\;\mathcal{E}^T_{\mathscr{I}^-}},\qquad
\mathscr{B}^{+}\circ\mathscr{F}^{-}=Id_{\mathcal{E}^T_{\overline{\Sigma}}}.
\end{align}
\end{proposition}
With \Cref{RW past scattering}, \Cref{RW isomorphisms} is immediate.
\begin{remark}
It is possible to realise the map $\mathscr{S}$ by directly studying the future radiation fields on $\mathscr{I}^+$, $\overline{\mathscr{H}^+}$ of a solution to the Regge--Wheeler equation \bref{RW} arising all the way from past scattering data on $\mathscr{I}^-$, $\overline{\mathscr{H}^-}$, instead of obtaining it by formally composing $\mathscr{F}^+, \mathscr{B}^+$. The proof uses a subset of the ideas needed to prove \Cref{Corollary 1} of the introduction, so we will state the result here.
\end{remark}
\begin{proposition}
Given smooth, compactly supported past scattering data $(\uppsi_{\mathscr{H}^-},\uppsi_{\mathscr{I}^-})$ for the Regge--Wheeler equation \bref{RW}, there exists a unique solution $\Psi$ realising $\uppsi_{\mathscr{H}^-},\uppsi_{\mathscr{I}^-}$ as its radiation fields on $\overline{\mathscr{H}^-}, \mathscr{I}^-$ respectively. The solution $\Psi$ induces future radiation fields $(\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+})\in \mathcal{E}^{T}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T}_{{\mathscr{I}^+}}$ such that
\begin{align}
\|\uppsi_{{\mathscr{H}^-}}\|^2_{\mathcal{E}^{T}_{\overline{\mathscr{H}^-}}}+\|\uppsi_{{\mathscr{I}^-}}\|^2_{\mathcal{E}^{T}_{{\mathscr{I}^-}}}= \|\uppsi_{{\mathscr{H}^+}}\|^2_{\mathcal{E}^{T}_{\overline{\mathscr{H}^+}}}+\|\uppsi_{{\mathscr{I}^+}}\|^2_{\mathcal{E}^{T}_{{\mathscr{I}^+}}}.
\end{align}
The same result applies with scattering data restricted to $\mathcal{E}^{T}_{{\mathscr{H}^\pm}}$.
\end{proposition}
\subsection{Auxiliary results on backwards scattering}\label{subsection 5.5 auxiliary results}
\subsubsection{Radiation fields of transverse null derivative near $\mathscr{I}^+$}\label{subsubsection 5.5.1 convergence of transverse null derivative}
We can recover the formulae of \Cref{Phi 1 forward,Phi 2 forward} in backwards scattering from scattering data that is supported away from the future ends of $\mathscr{I}^+,\mathscr{H}^+$:
\begin{corollary}\label{Phi 1 backwards}
Let $(\bm{\uppsi}_{\mathscr{H}^+},\bm{\uppsi}_{\mathscr{I}^+})$ be smooth, compactly supported scattering data for \cref{RW} with corresponding solution $\Psi$. Then
\begin{align}
\lim_{v\longrightarrow\infty}\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Psi=\int^{u_+}_u d\bar{u}(\mathcal{A}_2-2)\bm{\uppsi}_{\mathscr{I}^+}.
\end{align}
\end{corollary}
\begin{proof}
In a similar fashion to \Cref{Phi 1 forward}, we integrate \bref{RW}, viewed as a transport equation in the $3$-direction for $\Phi^{(1)}=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Psi$, along a hypersurface $\underline{\mathscr{C}}_v$ from $u_+$ to $u$ to find
\begin{align}
\Phi^{(1)}=\frac{r^2}{\Omega^2}\int_u^{u_+}d\bar{u}\; \frac{\Omega^2}{r^2}\left[\mathring{\slashed{\Delta}}\Psi-(3\Omega^2+1)\Psi\right].
\end{align}
Repeating the argument leading to \Cref{RW transverse derivatives converge} gives the result:
\begin{align}
\bm{\upphi}^{(1)}_{\mathscr{I}^+}= \lim_{v\longrightarrow\infty}\Phi^{(1)}=\int_u^{u_+}d\bar{u}\left(\mathcal{A}_2-2\right)\bm{\uppsi}_{\mathscr{I}^+}.
\end{align}
\end{proof}
\Cref{Phi 2 forward} can also be recovered in backwards scattering for compactly supported data:
\begin{corollary}\label{Phi 2 backwards}
Let $\Psi$ be a solution to \cref{RW} arising from smooth, compactly supported scattering data $(\bm{\uppsi}_{\mathscr{H}^+},\bm{\uppsi}_{\mathscr{I}^+})$, then
\begin{align}
\begin{split}
\lim_{v\longrightarrow\infty}\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2\Psi&=\int_{u}^\infty\int_{u_1}^\infty du_1 du_2 \left[\mathcal{A}_2(\mathcal{A}_2-2)-6M\partial_u\right]\bm{\uppsi}_{\mathscr{I}^+}(u_2,\theta^A)\\
&=\int_u^{u_+}d\bar{u}(\bar{u}-u)\left[\mathcal{A}_2(\mathcal{A}_2-2)-6M\partial_u\right]\bm{\uppsi}_{\mathscr{I}^+}(\bar{u},\theta^A).
\end{split}
\end{align}
\end{corollary}
Note that we do not need compact support in the direction of $u\longrightarrow-\infty$ on $\mathscr{I}^+$ for the above results to hold:
\begin{corollary}
\Cref{Phi 1 backwards,Phi 2 backwards} hold if $\bm{\uppsi}_{\mathscr{I}^+}$ is supported on $(-\infty,u_+]$, provided $\|\bm{\uppsi}_{\mathscr{I}^+}\|_{\mathcal{E}^T_{\mathscr{I}^+}}<\infty$.
\end{corollary}
\subsubsection{Backwards $r^p$-estimates}\label{backwards rp estimates}
It is possible to use energy conservation to develop $r$-weighted estimates in the backwards direction that are uniform in $u$, provided $\bm{\uppsi}_{\mathscr{I}^+}$ is compactly supported in $u$. These estimates will help us show that $\mathscr{B}^-$ satisfies \bref{unitarity of B- formula} without reference to $\mathscr{F}^+$ or forwards scattering. We will also use them to show that $\Psi|_{\Sigma^*}\longrightarrow0$ towards $i^0$, and later to obtain similar statements for $\alpha,\underline\alpha$. These estimates first appeared in \cite{AAG19}.\\
\indent Let $u_-,u_+,v_-,v_+$ be as in the proof of \Cref{RWbackwardsexistence}, so that $\mathscr{C}_{u_+}\cap\{r>R\}$ is beyond the support of $\Psi$. Let $u<u_+$; then repeating the proof of \Cref{RWrp} in the region $\mathscr{D}_{u,v_+}^{u_+,\infty}$ for $p=1,2$ gives us (using $d\omega=\sin\theta d\theta d\phi$)
\begin{align}
\begin{split}
\int_{\mathscr{C}_u\cap\{v>v_+\}}dvd\omega\; r|\Omega\slashed{\nabla}_4\Psi|^2\lesssim& \int_{\mathscr{I}^+\cap\{u\in[u_-,u_+]\}}dud\omega\;r(|\slashed{\nabla}\Psi|^2+V|\Psi|^2)\\&+\int_{\mathscr{D}_{u,v_+}^{u_+,\infty}}dudvd\omega\; \left[|\Omega\slashed{\nabla}_4\Psi|^2+|\slashed{\nabla}\Psi|^2+V|\Psi|^2\right],
\end{split}
\end{align}
\begin{align}\label{136}
\int_{\mathscr{C}_u\cap\{v>v_+\}}dvd\omega\;r^2|\Omega\slashed{\nabla}_4\Psi|^2\lesssim \int_{\mathscr{I}^+\cap\{u\in[u_-,u_+]\}}dud\omega\; r^2(|\slashed{\nabla}\Psi|^2+V|\Psi|^2)+\int_{\mathscr{D}_{u,v_+}^{u_+,\infty}} dudv d\omega \;r|\Omega\slashed{\nabla}_4\Psi|^2.
\end{align}
We estimate the bulk terms on the right hand side as follows: an energy estimate applied in $\mathscr{D}_{u,v_+}^{u_+,\infty}$ gives, for all $u<u_+$:
\begin{align}\label{backwards p=1}
\int_{\mathscr{C}_u\cap\{v>v_+\}}dvd\omega\;\left[|\Omega\slashed{\nabla}_4\Psi|^2+|\slashed{\nabla}\Psi|^2+V|\Psi|^2\right]\leq \int_{\mathscr{I}^+\cap\{u\in[u_-,u_+]\}}dud\omega\; |\partial_u\Psi|^2.
\end{align}
Integrating in $u$ gives
\begin{align}\label{backwards p=1 integrated}
\int_{\mathscr{D}_{u,v_+}^{u_+,\infty}}dudvd\omega\;\left[|\Omega\slashed{\nabla}_4\Psi|^2+|\slashed{\nabla}\Psi|^2+V|\Psi|^2\right]&\leq \int_{u_-}^{u_+}du_1\int_{\mathscr{I}^+\cap\{u_2\in[u_1,u_+]\}}du_2d\omega\; |\partial_u\Psi|^2\\&=\int_{\mathscr{I}^+\cap\{u\in[u_-,u_+]\}}dud\omega\; (u_+-u)|\partial_u\Psi|^2,
\end{align}
where we have used that $\slashed{\nabla}_3\Psi=0$ at $u=u_+$, $v>v_+$. Returning to the above, we have
\begin{align}
\int_{\mathscr{C}_u\cap\{v>v_+\}}dvd\omega\;r|\Omega\slashed{\nabla}_4\Psi|^2\lesssim \int_{\mathscr{I}^+\cap\{u\in[u_-,u_+]\}}dud\omega\;r(|\slashed{\nabla}\Psi|^2+V|\Psi|^2)+(u_+-u)|\partial_u\Psi|^2.
\end{align}
Integrating once more in $u$ and substituting in (\ref{136}) gives us
\begin{align}\label{RWbackwardsboundedness}
\int_{\mathscr{C}_u\cap\{v>v_+\}}dvd\omega\;r^2|\Omega\slashed{\nabla}_4\Psi|^2\lesssim \int_{\mathscr{I}^+\cap\{u\in[u_-,u_+]\}}dud\omega\;r(u_+-u)(|\slashed{\nabla}\Psi|^2+V|\Psi|^2)+\frac{1}{2}(u-u_+)^2|\partial_u\Psi|^2.
\end{align}
We can integrate in $u$ once more:
\begin{align}\label{RWbackwardsdecay}
\int_{\mathscr{D}_{u,v_+}^{u_+,\infty}}dudvd\omega\;r^2|\Omega\slashed{\nabla}_4\Psi|^2\lesssim \int_{\mathscr{I}^+\cap\{u\in[u_-,u_+]\}}dud\omega\;\frac{1}{2}r(u-u_+)^2(|\slashed{\nabla}\Psi|^2+V|\Psi|^2)+\frac{1}{6}(u_+-u)^3|\partial_u\Psi|^2.
\end{align}
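The polynomial weights appearing in \bref{RWbackwardsboundedness} and \bref{RWbackwardsdecay} simply record the successive $u$-integrations, via the elementary identity (for $k=0,1,2$)
\begin{align*}
\int_u^{u_+}(u_+-\bar{u})^k\,d\bar{u}=\frac{(u_+-u)^{k+1}}{k+1},
\end{align*}
which produces the factors $(u_+-u)$, $\frac{1}{2}(u_+-u)^2$ and $\frac{1}{6}(u_+-u)^3$ in turn.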
Note that all of the bulk integrals above could be taken over $\mathscr{D}=\mathscr{D}_{u,v_+}^{u_+,\infty}\cup\{ J^-(\mathscr{C}_{u_-})\cap J^+(\Sigma^*)\}$ provided that $\partial_u\bm{\uppsi}_{\mathscr{I}^+}$ decays sufficiently fast that $u\longmapsto\int_{-\infty}^u dud\omega\; |\partial_u\bm{\uppsi}_{\mathscr{I}^+}|^2$ is integrable on $(-\infty,u_+]$.
The first application is to show that $\mathscr{B}^-$ is unitary:
\begin{proposition}\label{RW unitary backwards}
Let $\Psi$ arise from smooth scattering data $\bm{\uppsi}_{\mathscr{I}^+}\in \mathcal{E}^T_{\mathscr{I}^+}, \bm{\uppsi}_{\mathscr{H}^+}\in \mathcal{E}^T_{\mathscr{H}^+}$ as in \Cref{RWbackwardsexistence}. Assume that $\bm{\uppsi}_{\mathscr{I}^+}$ is supported on $u\leq u_+<\infty$, $\bm{\uppsi}_{\mathscr{H}^+}$ is supported on $v \leq v_+ < \infty$, and that $\int_{-\infty}^u dud\omega |\partial_u\bm{\uppsi}_{\mathscr{I}^+}|^2$ is integrable on $(-\infty, u_+]$. Then
\begin{align}
\lim_{u\longrightarrow-\infty} F^T_{\mathscr{C}_u\cap J^+(\Sigma^*)}[\Psi]=0.
\end{align}
\end{proposition}
\begin{proof}
The energy estimate
\begin{align}
\mathbb{F}_{\Sigma^*}^T[\Psi\cdot\theta_u]+F^T_{\mathscr{C}_u\cap J^+(\Sigma^*)}[\Psi]= \|\bm{\uppsi}_{\mathscr{H}^+}\|^2_{\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}}+\|\bm{\uppsi}_{\mathscr{I}^+}\|^2_{\mathcal{E}^T_{\mathscr{I}^+}}
\end{align}
implies that $F^T_{\mathscr{C}_u\cap J^+(\Sigma^*)}[\Psi]$ decays monotonically as $u\longrightarrow-\infty$ (here $\theta_u$ is the characteristic function of the subset $\Sigma^*\setminus J^-(\mathscr{C}_u)$ of $\Sigma^*$). Combining this with \bref{backwards p=1 integrated} gives the result.
\end{proof}
\begin{corollary}\label{RW unitary backwards corollary}
Let $\Psi$ be as in \Cref{RW unitary backwards}, then
\begin{align}
\|(\Psi|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*})\|_{\mathcal{E}^T_{\Sigma^*}}^2=\|\bm{\uppsi}_{\mathscr{H}^+}\|^2_{\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}}+\|\bm{\uppsi}_{\mathscr{I}^+}\|^2_{\mathcal{E}^T_{\mathscr{I}^+}}.
\end{align}
\end{corollary}
\indent In the following, we show that if $\bm{\uppsi}_{\mathscr{I}^+}$ is compactly supported on $\mathscr{I}^+$ then we have pointwise decay for $\Psi$ towards $i^0$:
\begin{proposition}\label{RW backwards decay at sigma}
Let $\Psi$ arise from scattering data $(\bm{\uppsi}_{\mathscr{I}^+},\bm{\uppsi}_{\mathscr{H}^+})\in\Gamma_c(\mathscr{I}^+)\times\Gamma_c(\mathscr{H}^+)$ as in \Cref{RWbackwardsexistence}. Then $\Psi|_{\Sigma^*}\longrightarrow 0$ as $r\longrightarrow \infty$.
\end{proposition}
\begin{proof}
For $R$ large enough, we can estimate
\begin{align}
\int_{S^2}d\omega \left|\Psi|_{\Sigma^*\cap\{r=R\}}-\bm{\uppsi}_{\mathscr{I}^+}\right|\lesssim \int_{v=\frac{1}{2}R^*}^{\infty} \int_{S^2}d\bar{v}d\omega\,|\Omega\slashed{\nabla}_4\Psi|\lesssim \frac{1}{\sqrt{R}}\left[\int_{\mathscr{C}_{-\frac{1}{2}R^*}\cap\{v>\frac{1}{2}R^*\}}d\bar{v}d\omega\, r^2|\Omega\slashed{\nabla}_4\Psi|^2\right]^{\frac{1}{2}}.
\end{align}
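The $R^{-1/2}$ gain in the last step is a Cauchy--Schwarz estimate in $v$: writing $|\Omega\slashed{\nabla}_4\Psi|=r^{-1}\cdot r|\Omega\slashed{\nabla}_4\Psi|$ and using that $r\gtrsim v$ in this region,
\begin{align*}
\int_{\frac{1}{2}R^*}^\infty d\bar{v}\,|\Omega\slashed{\nabla}_4\Psi|\leq\left(\int_{\frac{1}{2}R^*}^\infty \frac{d\bar{v}}{r^2}\right)^{\frac{1}{2}}\left(\int_{\frac{1}{2}R^*}^\infty d\bar{v}\,r^2|\Omega\slashed{\nabla}_4\Psi|^2\right)^{\frac{1}{2}}\lesssim\frac{1}{\sqrt{R}}\left(\int_{\frac{1}{2}R^*}^\infty d\bar{v}\,r^2|\Omega\slashed{\nabla}_4\Psi|^2\right)^{\frac{1}{2}}.
\end{align*}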
The result follows noting that $\bm{\uppsi}_{\mathscr{I}^+}$ is compactly supported and that the integral on the right hand side is bounded according to (\ref{RWbackwardsboundedness}).
\end{proof}
\begin{proposition} Let $\Psi$ arise from the backwards evolution of scattering data $(\bm{\uppsi}_{\mathscr{I}^+},\bm{\uppsi}_{\mathscr{H}^+})$ in $\Gamma_c (\mathscr{I}^+)\times \Gamma_c (\mathscr{H}^+_{\geq0})$ as in \Cref{RWbackwardsexistence}, then
\begin{align}
\lim_{R\longrightarrow\infty} \int_{\underline{\mathscr{C}}_{v=\frac{1}{2}R^*} \cap J^+(\Sigma^*)} \Psi= \int_{\mathscr{I}^+} \bm{\uppsi}_{\mathscr{I}^+}.
\end{align}
\end{proposition}
\begin{proof}
Assume the support of $\bm{\uppsi}_{\mathscr{I}^+}$ is contained in $\mathscr{I}^+\cap \{u\in[u_-,u_+]\}$, with $-\infty<u_-<u_+<\infty$. Let $R$ be such that $u|_{t=0,r=R}=-\frac{1}{2}R^*<u_-$, let $\tilde{v}=v(t=0,r=R)=\frac{1}{2}R^*$ and let $\tilde{u}>u_+$. We have
\begin{align}
\left|\int_{\underline{\mathscr{C}}_{v=\frac{1}{2}R^*} \cap J^+(\Sigma^*)} \Psi-\int_{\mathscr{I}^+} \bm{\uppsi}_{\mathscr{I}^+}\right|^2\leq \left[\int_{\mathscr{D}} |\Omega\slashed{\nabla}_4 \Psi|\right]^2\lesssim \frac{1}{{R}}\int_{\mathscr{D}} r^2|\Omega\slashed{\nabla}_4\Psi|^2,
\end{align}
where $\mathscr{D}= J^+(\Sigma^*\cap\{r\geq R\})\cap J^-(\mathscr{C}_{\tilde{u}})$. The result follows as (\ref{RWbackwardsdecay}) gives us that $\int_{\mathscr{D}} r^2|\Omega\slashed{\nabla}_4\Psi|^2<\infty$.
\end{proof}
\subsubsection{Backwards scattering for data of noncompact support}\label{subsubsection 5.5.3 backwards scattering data of noncompact support}
Estimates \bref{ptwise infinity} and \bref{ptwise horizon} are uniform in the future cutoffs of $\bm{\uppsi}_{\mathscr{I}^+}, \bm{\uppsi}_{\mathscr{H}^+}$ if the relevant fluxes on $\mathscr{I}^+, \mathscr{H}^+_{\geq0}$ are finite, in which case we can remove these cutoffs altogether and work with non-compactly supported scattering data. This follows by a simple modification of the argument leading to the limit $\Psi$ in the proof of \Cref{RWbackwardsexistence}.
\begin{proposition}\label{RW backwards noncompact}
The results of \Cref{RWbackwardsexistence} hold when $\bm{\uppsi}_{\mathscr{I}^+}, \bm{\uppsi}_{\mathscr{H}^+}$ are not compactly supported, provided
\begin{align}
&\int_{[u_-,\infty)\times S^2} du\sin\theta d\theta d\phi \sum_{|\gamma|\leq2}| \slashed{\mathcal{L}}^\gamma_{S^2}\partial_u\bm{\uppsi}_{\mathscr{I}^+}|^2+|\slashed{\mathcal{L}}^\gamma_{S^2}\bm{\uppsi}_{\mathscr{I}^+}|^2+|\slashed{\mathcal{L}}^\gamma_{S^2}\mathring{\slashed{\nabla}}\bm{\uppsi}_{\mathscr{I}^+}|^2 <\infty,\label{tthis_hypothesis_1}\\
&\int_{[v_-,\infty)\times S^2} dv\sin\theta d\theta d\phi\sum_{|\gamma|\leq2}| \slashed{\mathcal{L}}^\gamma_{S^2}\partial_v\bm{\uppsi}_{\mathscr{H}^+}|^2+|\slashed{\mathcal{L}}^\gamma_{S^2}\bm{\uppsi}_{\mathscr{H}^+}|^2+|\slashed{\mathcal{L}}^\gamma_{S^2}\mathring{\slashed{\nabla}}\bm{\uppsi}_{\mathscr{H}^+}|^2 <\infty.\label{thist_hypothesis_2}
\end{align}
Corollaries \ref{Phi 1 backwards} and \ref{Phi 2 backwards} also hold provided the fluxes of \bref{tthis_hypothesis_1}, \bref{thist_hypothesis_2} are finite with the sums running up to $|\gamma|\leq 4$.
\end{proposition}
\begin{proof}
Let $R>3M$ be fixed, let $\{u_{+,n}\}_{n=1}^\infty$ be a monotonically increasing sequence tending to $\infty$, and let $\{v_{+,n}\}_{n=1}^\infty$ be such that $v_{+,n}-u_{+,n}=R^*$. Let $\xi_n^u,\xi_n^v$ be smooth cutoff functions cutting off at $u_{+,n}$ and $v_{+,n}$ respectively. Using $\xi_n^u\bm{\uppsi}_{\mathscr{I}^+}, \xi_n^v\bm{\uppsi}_{\mathscr{H}^+}$ as scattering data, we can apply \Cref{RWbackwardsexistence} to obtain solutions $\Psi_n$ to \cref{RW}, each defined on $\mathscr{D}_n:=J^+(\Sigma^*)\cap\{\{u<u_{+,n}\}\cup\{v<v_{+,n}\}\}$. On $\mathscr{D}_k$, the sequence $\{\Psi_n\}$ for $n>k$ is bounded and equicontinuous, so repeating the argument of \Cref{RWbackwardsexistence} we can find a subsequence converging to $\Psi$ in the topology of compact convergence. The assumptions \bref{tthis_hypothesis_1}, \bref{thist_hypothesis_2} together with the estimates \bref{ptwise infinity}, \bref{ptwise horizon} imply that $\Psi\longrightarrow \bm{\uppsi}_{\mathscr{I}^+}$ towards $\mathscr{I}^+$ and $\Psi\longrightarrow \bm{\uppsi}_{\mathscr{H}^+}$ towards $\mathscr{H}^+$. The solution $\Psi$ can be extended to the future by repeating the above argument for each $\mathscr{D}_k$ as $k\longrightarrow\infty$. The remaining statements follow by analogous arguments.
\end{proof}
\section{Future asymptotics of the +2 Teukolsky equation}\label{section 6}
\Cref{section 6} is devoted to the study of future radiation fields induced by solutions to the $+2$ Teukolsky equation arising from smooth, compactly supported data on $\Sigma^*$, as was done for the Regge--Wheeler equation in \Cref{subsection 5.2 subsection Radiation fields}.\\
\indent We first gather the estimates we need in \Cref{T+2estimates}. We collect in \Cref{subsubsection 6.1.1 transport estimates} results from \cite{DHR16} estimating $\alpha$ from $\Psi$ defined via (\ref{hier+}) and the estimates of \Cref{subsection 5.1 Basic integrated boundedness and decay estimates} for $\Psi$. Building upon these estimates we then use the methods of \cite{DRrp} and \cite{AAG16a} to obtain $r$-weighted estimates for $\alpha, \psi$ in \Cref{rp+2}. We apply these results to study the future radiation fields and their fluxes in \Cref{+2 radiation}.
\subsection{Integrated boundedness and decay for $\alpha$ via $\Psi$}\label{T+2estimates}
We begin with the following basic proposition, already proven in \Cref{Chandra}:
\begin{proposition}\label{+2 implies RW}
Let $(\upalpha,\upalpha')$ be data on $\Sigma^*$, $\Sigma$ or $\overline{\Sigma}$ giving rise to a solution $\alpha$ to \cref{T+2} as in \Cref{WP+2Sigma*} or \Cref{WP+2Sigmabar} respectively. Then $\Psi$ defined via \bref{hier+} out of the solution $\alpha$ on $ J^+(\Sigma^*)$, $ J^+(\Sigma)$ or $ J^+(\overline{\Sigma})$ satisfies \cref{RW}.
\end{proposition}
\subsubsection{Transport estimates for $\alpha$}\label{subsubsection 6.1.1 transport estimates}
In what follows assume a small fixed $0<\epsilon<1/8$.
\begin{proposition}\label{psiILED}
Let $\alpha, \psi, \Psi$ be as in \bref{hier+} and \Cref{+2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds for sufficiently small $\epsilon>0$:\footnote{All integrals on $\underline{\mathscr{C}}_v$ here are taken with respect to the measure $\Omega^2\sin\theta\, dv\, d\theta\, d\phi$.}
\begin{align}\label{locallabel1}
\int_{\mathscr{C}_{u}\cap J^+(\Sigma^*)\cap J^-(\underline{\mathscr{C}}_v)}d\bar{v}d\omega\; r^{8-\epsilon}\Omega^2|\psi|^2+\int_{\mathscr{D}^{u,v}_{\Sigma^*}}d\bar{u}d\bar{v}d\omega\; r^{7-\epsilon}\Omega^4|\psi|^2 \lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}drd\omega\; r^{8-\epsilon}\Omega^2|\psi|^2.
\end{align}
\end{proposition}
\begin{proof}
Here we repeat the argument of Proposition 12.1.1 of \cite{DHR16}. Using the definition of $\psi$ in (\ref{hier+}) we can derive
\begin{align}
\partial_u \left[r^{6+n}\Omega^4|\psi|^2\right]+nr^{n+5}\Omega^4|\psi|^2=2r^{n-1}\frac{\Omega^2}{r^2}\Psi \cdot r^3\Omega\psi\leq \frac{1}{2}nr^{n+5}\Omega^4|\psi|^2+\frac{2}{n}r^{n-3}\Omega^2|\Psi|^2.
\end{align}
The result follows by integrating over $\mathscr{D}^{u,v}_{\Sigma^*}$ for $0<n<2$ and using \Cref{RWILED,,RWrp}.
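Schematically, the last step reads as follows (a sketch, taking $n=2-\epsilon$): absorbing the first term on the right hand side into the left hand side yields
\begin{align*}
\partial_u \left[r^{8-\epsilon}\Omega^4|\psi|^2\right]+\frac{n}{2}r^{7-\epsilon}\Omega^4|\psi|^2\leq\frac{2}{n}r^{-1-\epsilon}\Omega^2|\Psi|^2,
\end{align*}
and integrating in $\bar{u}$, $\bar{v}$ and over the spheres produces the flux term on $\mathscr{C}_u$, the bulk term and the data term on $\Sigma^*$ appearing in \bref{locallabel1}, while the resulting spacetime integral of $r^{-1-\epsilon}\Omega^2|\Psi|^2$ is controlled by $\mathbb{F}_{\Sigma^*}[\Psi]$ via \Cref{RWILED,,RWrp}.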
\end{proof}
\begin{proposition}\label{alphaILED}
Let $\alpha, \psi, \Psi$ be as in \bref{hier+} and \Cref{+2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds for sufficiently small $\epsilon>0$:
\begin{align}\label{locallabel2}
\begin{split}
\int_{\mathscr{C}_{u}\cap J^+(\Sigma^*)\cap J^-(\underline{\mathscr{C}}_v)}d\bar{v}d\omega\; r^{6-\epsilon}\Omega^4|\alpha|^2&+\int_{\mathscr{D}^{u,v}_{\Sigma^*}}d\bar{u}d\bar{v}d\omega\; r^{5-\epsilon}\Omega^6|\alpha|^2 \\ &\lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}drd\omega\; r^{8-\epsilon}\Omega^2|\psi|^2+r^{6-\epsilon}\Omega^4|\alpha|^2,
\end{split}
\end{align}
provided the right hand side is finite.
\end{proposition}
\begin{proof}
Similar to \Cref{psiILED}. See Propositions 12.1.2, 12.2.6 and 12.2.7 of \cite{DHR16}.
\end{proof}
\begin{proposition}\label{ILED alpha 2nd angular}
Let $\alpha, \psi, \Psi$ be as in \bref{hier+} and \Cref{+2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds for sufficiently small $\epsilon>0$:
\begin{align}\label{2ndderivativeofpsi}
\begin{split}
\int_{\mathscr{C}_{u}\cap J^+(\Sigma^*)\cap J^-(\underline{\mathscr{C}}_v)}d\bar{v}d\omega\; r^{8-\epsilon}|-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2&(r^3\Omega\psi)|^2+\int_{\mathscr{D}^{u,v}_{\Sigma^*}}d\bar{u}d\bar{v}d\omega\; \frac{\Omega^2}{r^3}\left(1-\frac{3M}{r}\right)^2 |-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2(r^3\Omega\psi)|^2 \\ &\lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}dr d\omega\; r^{8-\epsilon}\Omega^2|\psi|^2+r^{6-\epsilon}\Omega^4|\alpha|^2,
\end{split}
\end{align}
provided the right hand side is finite.
\end{proposition}
\begin{proof}
Control of $\psi,\alpha$ as in \Cref{psiILED,,alphaILED} allows us to directly control the flux of $-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2(r^3\Omega\psi)$ on $\mathscr{C}_u$ using (\ref{eq:d4Psi}) and the flux bound of \Cref{RWrp}, while the spacetime integral can be controlled via \Cref{RWILED}.
\end{proof}
Commuting (\ref{hier+}) with $r\slashed{\mathcal{D}}_2$ and using the flux bound of the previous proposition allows us to obtain an integrated decay statement for $r\slashed{\mathcal{D}}_2\psi$:
\begin{proposition}
Let $\alpha, \psi, \Psi$ be as in \bref{hier+} and \Cref{+2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds for sufficiently small $\epsilon>0$:
\begin{align}
\int_{\mathscr{D}^{u,v}_{\Sigma^*}} d\bar{u}d\bar{v} d\omega\; r^{7-\epsilon} \Omega^4|r\slashed{\mathcal{D}}_2\psi|^2 &\lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}drd\omega\;r^{8-\epsilon}\Omega^2|r\slashed{\mathcal{D}}_2\psi|^2+r^{6-\epsilon}\Omega^4|\alpha|^2,
\end{align}
provided the right hand side is finite.
\end{proposition}
Finally, commuting the equation for $\psi$ in (\ref{hier+}) with $\slashed{\nabla}_{R^*}$ gives us control over the remaining $\Omega\slashed{\nabla}_4\psi$ using the estimates for $\Psi$ and the nondegenerate control of $\slashed{\nabla}_{R^*} \psi$ in \Cref{RWILED}. We can optimise the weights near the event horizon and null infinity by commuting further with $\Omega^{-1}\slashed{\nabla}_3$ and $r\Omega\slashed{\nabla}_4$ respectively:
\begin{proposition}\label{ILED psi higherorder}
Let $\alpha, \psi, \Psi$ be as in \bref{hier+} and \Cref{+2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds for sufficiently small $\epsilon>0$:
\begin{align}
\begin{split}
\int_{\mathscr{C}_{u}\cap J^+(\Sigma^*)\cap J^-(\underline{\mathscr{C}}_v)} d\bar{v}d\omega\;r^{4-\epsilon} |\Omega\slashed{\nabla}_4(r^3\Omega&\psi)|^2 + \int_{\mathscr{D}^{u,v}_{\Sigma^*}}d\bar{u}d\bar{v}d\omega\;r^{7-\epsilon}\left[|\Omega^{-1}\slashed{\nabla}_3(\Omega\psi)|^2+|r\Omega\slashed{\nabla}_4(\Omega\psi)|^2\right]\\&\lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*\cap\ensuremath J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}dr d\omega\;r^{8-\epsilon}\left[|\Omega\psi|^2+|\Omega^{-1}\slashed{\nabla}_3\psi|^2+|r\Omega\slashed{\nabla}_4\psi|^2\right],
\end{split}
\end{align}
provided the right hand side is finite.
\end{proposition}
Similar estimates can be obtained for $\alpha$ by applying these ideas one more time to (\ref{hier+}), see section 12.3 of \cite{DHR16}.
\begin{proposition}\label{alphaILED higher order}
Let $\alpha, \psi, \Psi$ be as in \bref{hier+} and \Cref{+2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds for sufficiently small $\epsilon>0$:
\begin{align}
\begin{split}
&\int_{\mathscr{C}_{u}\cap J^+(\Sigma^*)\cap J^-(\underline{\mathscr{C}}_v)}d\bar{v}d\omega\;r^{6-\epsilon}\left[|r\slashed{\mathcal{D}}_2\Omega^2\alpha|^2+|\Omega^{-1}\slashed{\nabla}_3\Omega^2\alpha|^2+|r\Omega\slashed{\nabla}_4\Omega^2\alpha|^2\right]\\&+\int_{\mathscr{D}^{u,v}_{\Sigma^*}}d\bar{u}d\bar{v}d\omega\;r^{5-\epsilon}\left[|r\slashed{\mathcal{D}}_2\Omega^2\alpha|^2+|\Omega^{-1}\slashed{\nabla}_3\Omega^2\alpha|^2+|r\Omega\slashed{\nabla}_4\Omega^2\alpha|^2\right] \\ &\lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*\cap\ensuremath J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}drd\omega\;\Bigg\{r^{8-\epsilon}\left[|r\slashed{\mathcal{D}}_2\Omega\psi|^2+|\Omega^{-1}\slashed{\nabla}_3\Omega\psi|^2+|r\Omega\slashed{\nabla}_4\Omega\psi|^2\right]\\&+r^{6-\epsilon}\left[|r\slashed{\mathcal{D}}_2\Omega^2\alpha|^2+|\Omega^{-1}\slashed{\nabla}_3\Omega^2\alpha|^2+|r\Omega\slashed{\nabla}_4\Omega^2\alpha|^2\right]\Bigg\},
\end{split}
\end{align}
provided the right hand side is finite.
\end{proposition}
\subsubsection{An $r^p$-estimate for $\alpha,\psi$}\label{rp+2}
The structure of the $+2$ Teukolsky equation allows us to apply the method of \cite{DRrp} and \cite{AAG16a} to \Cref{T+2} in the same way it was applied in \Cref{subsection 5.1 Basic integrated boundedness and decay estimates}.
\begin{proposition}\label{T+2rp}
Let $\alpha$ be a solution to the +2 Teukolsky equation (\ref{T+2}). Then for $p\in[0,2]$, $u>u_0$ and $\mathscr{D}=\{(\bar{u},v,\theta,\phi): \bar{u}\in[u_0,u],\; r>R\}$ we have the following:
\begin{align}
\begin{split}
\int_{\mathscr{C}_{u}\cap\{r>R\}}d\bar{v}d\omega\;r^p|\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha|^2+\int_{\mathscr{D}}d\bar{u}d\bar{v}d\omega\; (p+8)r^{p-1}|\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha|^2+(2-p)r^{p-1}|\slashed{\nabla} r^5\Omega^{-2}\alpha|^2\\ \lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*}drd\omega\;r^{8-\epsilon}\Omega^2|\psi|^2+r^{6-\epsilon}\Omega^2|\alpha|^2+\int_{\Sigma^*\cap\{r>R\}}drd\omega\; r^p|\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha|^2.
\end{split}
\end{align}
\end{proposition}
\begin{proof}
Rewrite the +2 equation in terms of $r^5\Omega^{-2}\alpha$:
\begin{align}\label{+2 equation for radiation field}
\Omega\slashed{\nabla}_4\Omega\slashed{\nabla}_3 r^5\Omega^{-2}\alpha+2\frac{3\Omega^2-1}{r}\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha-\Omega^2\slashed{\Delta}r^5\Omega^{-2}\alpha-\frac{\Omega^2}{r^2}(15\Omega^2-13)r^5\Omega^{-2}\alpha=0.
\end{align}
Multiply by $r^p\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha$ and integrate by parts:
\begin{align}
\begin{split}
&\Omega\slashed{\nabla}_3\left[r^p|\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha|^2\right]+\Omega\slashed{\nabla}_4\left[r^p\Omega^2\left(|\slashed{\nabla} r^5\Omega^{-2}\alpha|^2-(15\Omega^2-13)\frac{1}{r^2}|r^5\Omega^{-2}\alpha|^2\right)\right]\\
&+\left\{4(3\Omega^2-1)+p\Omega^2\right\}r^{p-1}|\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha|^2+\left[2-p-\frac{2M}{r}\right]r^{p-1}\left|\slashed{\nabla} r^5\Omega^{-2}\alpha\right|^2\\
&-\left[\frac{2M}{r}(30\Omega^2-13)+(2-p)(15\Omega^4-13\Omega^2)\right]r^{p-3}\Omega^2|r^5\Omega^{-2}\alpha|^2=0.
\end{split}
\end{align}
Integrating in $\mathscr{D}$, the Poincar\'e inequality (\ref{poincare}) ensures that the leading order terms in the $\mathscr{I}^+$ flux term are positive, and we similarly use (\ref{poincare}) to absorb the last term in the previous equation into the term containing the angular derivative. Finally we can deal with the $r=R$ flux term by averaging over $R$ and using the integrated decay statement of \Cref{alphaILED}.
\end{proof}
Similarly, we have
\begin{proposition}\label{T+1rp}
Let $\psi$ arise from $\alpha$ according to (\ref{hier+}). Then we have
\begin{align}
\int_{\mathscr{C}_{u}\cap\{r>R\}}d\bar{v}d\omega\;r^p|\Omega\slashed{\nabla}_4 r^5\Omega^{-1}\psi|^2+\int_{\mathscr{D}}d\bar{u}d\bar{v}d\omega\; (p+4)r^{p-1}|\Omega\slashed{\nabla}_4 r^5\Omega^{-1}\psi|^2+(2-p)r^{p-1}|\slashed{\nabla} r^5\Omega^{-1}\psi|^2\\ \lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*}drd\omega\;r^{8-\epsilon}\Omega^2|\psi|^2+r^{6-\epsilon}\Omega^2|\alpha|^2+ \int_{\Sigma^*\cap\{r>R\}}drd\omega\;r^p|\Omega\slashed{\nabla}_4 r^5\Omega^{-1}\psi|^2.
\end{align}
\end{proposition}
\begin{proof}
Rewrite the definition of $\psi$ in terms of $r^5\Omega^{-1}\psi$ and differentiate via $\Omega\slashed{\nabla}_3$ to get
\begin{align}
\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4 r^5\Omega^{-1}\psi+\frac{3\Omega^2-1}{r}\Omega\slashed{\nabla}_4 r^5\Omega^{-1}\psi-\Omega^2\slashed{\Delta} r^5\Omega^{-1}\psi+\frac{\Omega^2}{r^2}(3\Omega^2-5)r^5\Omega^{-1}\psi=-12M^2\frac{\Omega^4}{r^4} r^5\Omega^{-2}\alpha.
\end{align}
We repeat the argument employed in \Cref{T+2rp} using Cauchy--Schwarz to estimate the $\alpha$ term on the right hand side.
\end{proof}
\begin{remark}\label{transversealphapsi}
We have similar statements to \Cref{T+2rp,,T+1rp} for $\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4$ derivatives of $r^5\Omega^{-1}\psi$ and $r^5\Omega^{-2}\alpha$.
\end{remark}
\subsection{Future radiation fields and fluxes}\label{+2 radiation}
In this section the notion of future radiation fields of solutions to the +2 Teukolsky equation \bref{T+2} is defined, and some of the properties of these radiation fields are studied, in particular obtaining their $\mathcal{E}^{T,+2}_{\mathscr{H}^+}$, $\mathcal{E}^{T,+2}_{\mathscr{I}^+}$ fluxes when they belong to solutions of \bref{T+2} arising from smooth data of compact support.
\subsubsection{Radiation on $\mathscr{H}^+$}\label{+2 radiation on H+}
\begin{defin}\label{+2 radiation alpha definition H}
Let $\alpha$ be a solution to (\ref{T+2}) arising from smooth data as in \Cref{WP+2Sigma*} or \Cref{WP+2Sigmabar}. The radiation field of $\alpha$ along $\mathscr{H}^+$, denoted $\upalpha_{\mathscr{H}^+}$, is defined to be the restriction of $2M\Omega^2\alpha$ to $\mathscr{H}^+$.
\end{defin}
\begin{remark}
We will use the same notation for the radiation field on $\mathscr{H}^+_{\geq0}, \mathscr{H}^+$ or $\overline{\mathscr{H}^+}$.
\end{remark}
As an easy consequence of the estimates of the previous section we have the following non-quantitative decay statements (all statements here apply equally on $\overline{\mathscr{H}^+}$):
\begin{corollary}\label{psi+2ptwisedecay}
For smooth data of compact support for the +2 Teukolsky equation on $\Sigma^*$, $\Sigma$ or $\overline{\Sigma}$, $\psi$ decays along any hypersurface $r=R$:
\begin{align}
\lim_{v\longrightarrow \infty} \left|\left|\Omega\psi\right|\right|_{L^2(S^2_{R})}=0.
\end{align}
\end{corollary}
\begin{proof}
\Cref{psiILED} applied to $\psi$ and $\slashed{\nabla}_T\psi$ implies
\begin{align}
\lim_{v\longrightarrow \infty} \int_{\underline{\mathscr{C}}_v\cap\{r\in[2M,R]\}} \Omega^2|\psi|^2 du \sin\theta d\theta d\phi =0 .
\end{align}
Repeating this for $\Omega^{-1}\slashed{\nabla}_3 \Omega \psi$ using \Cref{ILED psi higherorder} gives the result.
\end{proof}
The same works for $\alpha$ using Propositions \ref{alphaILED} and \ref{alphaILED higher order}:
\begin{corollary}\label{alpha+2ptwisedecay}
For smooth data of compact support on $\Sigma^*$, $\Sigma$ or $\overline{\Sigma}$, $\alpha$ decays along any hypersurface $r=R$:
\begin{align}
\lim_{v\longrightarrow \infty} \left|\left|\Omega^2\alpha\right|\right|_{L^2(S^2_{R})}=0.
\end{align}
\end{corollary}
Commuting with the Lie derivatives $\slashed{\mathcal{L}}_{\Omega_i}^\gamma$ along the angular Killing fields for $|\gamma|\leq2$ gives
\begin{corollary}\label{horizonpsidecay}
For smooth data of compact support for the +2 Teukolsky equation on $\Sigma^*$, $\Sigma$ or $\overline{\Sigma}$, $\Omega\psi|_{\mathscr{H}^+}$ and $\Omega^2\alpha|_{\mathscr{H}^+}$ decay towards $\mathscr{H}^+_+$.
\end{corollary}
\subsubsection{Radiation flux on $\mathscr{H}^+$}\label{+2 radiation flux on H+}
Assume $\alpha$ satisfies \bref{T+2} and arises from smooth, compactly supported data on $\Sigma^*$. The regularity of $\Psi$ implies that on $\mathscr{H}^+$, the radiation flux in terms of $\Psi$ is given by \bref{RW def rad flux at H}:
\begin{align}
\left\|\Psi\right\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2= \left\|\Omega\slashed{\nabla}_4\Psi\right\|^2_{L^2(\mathscr{H}^+)}.
\end{align}
Recall that if $\alpha$ satisfies the +2 Teukolsky equation \cref{T+2} then $\alpha, \Psi$ also satisfy (\ref{eq:d4Psi}) and (\ref{eq:d4d4Psi}):
\begin{align}\label{psi out of alpha}
\begin{split}
\Omega\slashed{\nabla}_4 \Psi=\mathcal{A}_2 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha-6Mr\Omega^2\alpha-(3\Omega^2-1)\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha,
\end{split}
\end{align}
\begin{align}\label{Psi out of alpha}
\begin{split}
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 \Psi&=\mathcal{A}_2(\mathcal{A}_2-2) r\Omega^2\alpha-6M\left(\Omega\slashed{\nabla}_3+\Omega\slashed{\nabla}_4\right)r\Omega^2\alpha.
\end{split}
\end{align}
We now compute the limits towards $\mathscr{H}^+$: the left hand side of (\ref{Psi out of alpha}) reads
\begin{align}
(\Omega\slashed{\nabla}_4)^2\Psi+\frac{3\Omega^2-1}{r}\Omega\slashed{\nabla}_4\Psi\longrightarrow \left[\partial_v-\frac{1}{2M}\right]\partial_v\bm{\uppsi}_{\mathscr{H}^+} \text{\;towards\;} \mathscr{H}^+.
\end{align}
Now the right hand side reads:
\begin{align}
\mathcal{A}_2\left[\mathcal{A}_2-2\right]\upalpha_{\mathscr{H}^+}-6M\partial_v\upalpha_{\mathscr{H}^+},
\end{align}
so we must determine $\partial_v\bm{\uppsi}_{\mathscr{H}^+}$ from the equation
\begin{align}\label{equation for Psi out of alpha on H}
\partial_v^2\bm{\uppsi}_{\mathscr{H}^+}-\frac{1}{2M}\partial_v\bm{\uppsi}_{\mathscr{H}^+}=\mathcal{A}_2\left[\mathcal{A}_2-2\right]\upalpha_{\mathscr{H}^+}-6M\partial_v\upalpha_{\mathscr{H}^+}.
\end{align}
In Kruskal coordinates, this reads
\begin{align}
\begin{split}
\frac{1}{(2M)^2}\partial_V^2\Psi&=\mathcal{A}_2(\mathcal{A}_2-2)V^{-2}\upalpha_{\mathscr{H}^+}-3V^{-1}\partial_V \upalpha_{\mathscr{H}^+}\\
&=\left[\mathcal{A}_2(\mathcal{A}_2-2)-6\right]V^{-2}\upalpha_{\mathscr{H}^+}-3V\partial_V V^{-2}\upalpha_{\mathscr{H}^+}.
\end{split}
\end{align}
With the condition that $\Psi,\Omega\slashed{\nabla}_4\Psi$ decay as $v\longrightarrow\infty$, we have
\begin{align}\label{eq:197}
-\frac{1}{(2M)^2}\partial_V\Psi=\int_V^\infty\left\{\left[\mathcal{A}_2(\mathcal{A}_2-2)-6\right]\bar{V}^{-2}\upalpha_{\mathscr{H}^+}-3\bar{V}\partial_{\bar{V}} \bar{V}^{-2}\upalpha_{\mathscr{H}^+}\right\}d\bar{V}.
\end{align}
Integrating again in $V$ and using the fact that $\upalpha_{\mathscr{H}^+}$ is compactly supported, we get:
\begin{align}\label{eq:198}
\frac{1}{(2M)^2}\Psi=\int_V^\infty (\bar{V}-V)\left\{\left[\mathcal{A}_2(\mathcal{A}_2-2)-6\right]\bar{V}^{-2}\upalpha_{\mathscr{H}^+}-3\bar{V}\partial_{\bar{V}} \bar{V}^{-2}\upalpha_{\mathscr{H}^+}\right\}d\bar{V}.
\end{align}
In Eddington-Finkelstein coordinates this reads
\begin{lemma}\label{flux+2horizon}
Let $\alpha$ be a solution to the +2 Teukolsky equation \bref{T+2} arising from data of compact support on $\mathscr{H}^+_{\geq0}$, and let $\Psi$ be the corresponding solution to the Regge--Wheeler equation arising from $\alpha$ via \bref{hier+}. Then the radiation field $\bm{\uppsi}_{\mathscr{H}^+}$ on $\mathscr{H}^+$ belonging to $\Psi$ is given by:
\begin{align}\label{eq:199}
\bm{\uppsi}_{\mathscr{H}^+}=2M\int_v^\infty \left[1-e^{\frac{1}{2M}(v-\bar{v})}\right]\left\{\mathcal{A}_2\left[\mathcal{A}_2-2\right]\upalpha_{\mathscr{H}^+}-6M\partial_{\bar{v}}\upalpha_{\mathscr{H}^+}\right\}d\bar{v},
\end{align}
\begin{align}\label{eq:200}
\partial_v \bm{\uppsi}_{\mathscr{H}^+}=\int^{\infty}_v e^{\frac{1}{2M}(v-\bar{v})}\left\{-\mathcal{A}_2\left[\mathcal{A}_2-2\right]\upalpha_{\mathscr{H}^+}+6M\partial_{\bar{v}}\upalpha_{\mathscr{H}^+}\right\} d\bar{v}.
\end{align}
\end{lemma}
Equations \bref{eq:197}--\bref{eq:200} are the expressions for the radiation field and flux at $\mathscr{H}^+$ that we are able to compute directly out of data there. Note that this applies equally to radiation on $\mathscr{H}^+_{\geq0}, \mathscr{H}^+$ or $\overline{\mathscr{H}^+}$.\\
\indent Now let $F_{\mathscr{H}^+}=\int_v^\infty e^{\frac{1}{2M}(v-\bar{v})} \upalpha_{\mathscr{H}^+}d\bar{v}$; then $\partial_v F_{\mathscr{H}^+}= \frac{1}{2M}F_{\mathscr{H}^+}-\upalpha_{\mathscr{H}^+}$, which implies
\begin{align}
-\partial_v \bm{\uppsi}_{\mathscr{H}^+}=\mathcal{A}_2(\mathcal{A}_2-2)F_{\mathscr{H}^+}-6M\partial_v F_{\mathscr{H}^+}.
\end{align}
Note that $F_{\mathscr{H}^+}$ decays towards the future end of $\mathscr{H}^+_{\geq0}$, since
\begin{align}
\lim_{v\longrightarrow\infty}F_{\mathscr{H}^+}=\lim_{v\longrightarrow\infty} \int_v^\infty e^{\frac{1}{2M}(v-\bar{v})} \upalpha_{\mathscr{H}^+}d\bar{v}=\lim_{v\longrightarrow\infty} 2M \upalpha_{\mathscr{H}^+}=0.
\end{align}
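This can also be seen directly (a short sketch): substituting $s=\bar{v}-v$ gives
\begin{align*}
F_{\mathscr{H}^+}(v,\theta^A)=\int_0^\infty e^{-\frac{s}{2M}}\upalpha_{\mathscr{H}^+}(v+s,\theta^A)\,ds,\qquad \left|F_{\mathscr{H}^+}(v,\theta^A)\right|\leq 2M\sup_{\bar{v}\geq v}\left|\upalpha_{\mathscr{H}^+}(\bar{v},\theta^A)\right|,
\end{align*}
so $F_{\mathscr{H}^+}$ vanishes once $v$ lies beyond the support of $\upalpha_{\mathscr{H}^+}$.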
Therefore, the $L^2(\mathscr{H}^+_{\geq0})$ norm of $\partial_v \bm{\uppsi}_{\mathscr{H}^+}$ is given by
\begin{align}\label{+2 norm on H+ beyond B}
\begin{split}
\left\|\partial_v \bm{\uppsi}_{\mathscr{H}^+}\right\|_{L^2(\mathscr{H}^+_{\geq0})}^2=&\left\|\mathcal{A}_2(\mathcal{A}_2-2)F_{\mathscr{H}^+}\right\|^2_{L^2(\mathscr{H}^+_{\geq0})}+\left\|6M\partial_v F_{\mathscr{H}^+}\right\|^2_{L^2(\mathscr{H}^+_{\geq0})}\\&+\int_{\Sigma^*\cap\mathscr{H}^+}\sin\theta d\theta d\phi \left(\left|\mathring{\slashed{\Delta}}F|_{\Sigma^*\cap\mathscr{H}^+}\right|^2+6\left|\mathring{\slashed{\nabla}}F|_{\Sigma^*\cap\mathscr{H}^+}\right|^2+8\Big|F|_{\Sigma^*\cap\mathscr{H}^+}\Big|^2\right).
\end{split}
\end{align}
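The boundary term in \bref{+2 norm on H+ beyond B} arises from the cross term in the expansion of the square (a sketch; here $L:=\mathcal{A}_2(\mathcal{A}_2-2)$ is self-adjoint on the spheres, commutes with $\partial_v$, and $F_{\mathscr{H}^+}$ vanishes towards the future):
\begin{align*}
-12M\int_{\mathscr{H}^+_{\geq0}}dv\sin\theta d\theta d\phi\; (LF_{\mathscr{H}^+})\cdot\partial_vF_{\mathscr{H}^+}=6M\int_{\Sigma^*\cap\mathscr{H}^+}\sin\theta d\theta d\phi\; (LF_{\mathscr{H}^+})\cdot F_{\mathscr{H}^+},
\end{align*}
and integrating by parts on $S^2$ and expanding $L$ yields the boundary integrand displayed above, with constants depending on the normalisation of $\mathcal{A}_2$.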
Starting from initial data on $\Sigma$ or $\overline{\Sigma}$ and repeating the computation leading to \bref{+2 norm on H+ beyond B}, the boundary term drops out since we then have
\begin{align}
\lim_{v\longrightarrow-\infty}F_{\mathscr{H}^+}=\lim_{v\longrightarrow-\infty} \int_v^\infty e^{\frac{1}{2M}(v-\bar{v})} \upalpha_{\mathscr{H}^+}d\bar{v}=\lim_{v\longrightarrow-\infty} 2M \upalpha_{\mathscr{H}^+}=0.
\end{align}
Therefore we have
\begin{align}\label{+2 norm on H+ up to B}
\begin{split}
\left\|\partial_v \bm{\uppsi}_{\mathscr{H}^+}\right\|_{L^2(\mathscr{H}^+)}^2=&\left\|\mathcal{A}_2(\mathcal{A}_2-2)F_{\mathscr{H}^+}\right\|^2_{L^2(\mathscr{H}^+)}+\left\|6M\partial_v F_{\mathscr{H}^+}\right\|^2_{L^2(\mathscr{H}^+)}.
\end{split}
\end{align}
and similarly
\begin{align}\label{+2 norm on overline H+ up to B}
\begin{split}
\left\|\partial_v \bm{\uppsi}_{{\mathscr{H}^+}}\right\|_{L^2(\overline{\mathscr{H}^+})}^2=&\left\|\mathcal{A}_2(\mathcal{A}_2-2)F_{\mathscr{H}^+}\right\|^2_{L^2(\overline{\mathscr{H}^+})}+\left\|6M\partial_v F_{\mathscr{H}^+}\right\|^2_{L^2(\overline{\mathscr{H}^+})}.
\end{split}
\end{align}
\subsubsection{Radiation on $\mathscr{I}^+$}\label{+2 radiation on scri+}
The estimates of \Cref{rp+2} lead us to define a radiation field for $\alpha$ in the same way as it was defined for $\Psi$:
\begin{corollary}\label{psi+2scri1}
For smooth data of compact support for $\alpha$ on $\Sigma$, $r^5\psi$ has a finite pointwise limit on $\mathscr{I}^+$ which defines a smooth field there.
\end{corollary}
\begin{proof}
We follow step by step the argument of \Cref{RWradscri} and use the estimates of \Cref{T+1rp}.
\end{proof}
Similarly, using \Cref{T+2rp} we have
\begin{corollary}\label{alpha+2scri}
For smooth data of compact support for $\alpha$ on $\Sigma$, $r^5\alpha$ has a finite pointwise limit on $\mathscr{I}^+$ which defines a smooth field there.
\end{corollary}
For computational convenience we define
\begin{defin}\label{+2 radiation alpha definition scri}
For a solution $\alpha$ of (\ref{T+2}) arising from smooth data of compact support on $\Sigma^*$ as in \Cref{WP+2Sigma*} or on $\Sigma, \overline{\Sigma}$ as in (\ref{WP+2Sigmabar}), the radiation field of $\alpha$ along $\mathscr{I}^+$ is defined to be the limit $\upalpha_{\mathscr{I}^+}(u,\theta^A)=\lim_{v\longrightarrow\infty} r^5\Omega^{-2}\alpha(u,v,\theta^A)$.\\
\indent Let $\psi$ be as in \bref{hier+}. We define $\psi_{\mathscr{I}^+}$ to be the limit of $r^5\Omega^{-1}\psi$ as $v\longrightarrow\infty$.
\end{defin}
Repeating the argument of \Cref{RWdecayscri} we have
\begin{proposition}\label{T+1+2scridecay}
For a solution $\alpha$ of (\ref{T+2}) arising from smooth data of compact support on $\Sigma^*$ as in \Cref{WP+2Sigma*} or on $\Sigma, \overline{\Sigma}$ as in (\ref{WP+2Sigmabar}), the radiation fields $\upalpha_{\mathscr{I}^+}$, $\psi_{\mathscr{I}^+}$ and $\bm{\uppsi}_{\mathscr{I}^+}$ decay along $\mathscr{I}^+$ as $u\longrightarrow \infty$.
\end{proposition}
\begin{remark}\label{psi+2scrialternative}
We can appeal to an alternative argument that gives the existence of the limits of $r^5\psi$ and $r^5\alpha$ at $\mathscr{I}^+$ without resorting to the hierarchy of $r^p$-estimates as follows:\\
\indent Let $u\geq u_0$. From \Cref{RWradscri} we know that $\Psi$ induces a smooth radiation field $\bm{\uppsi}_{\mathscr{I}^+}$ on $\mathscr{I}^+$. For large enough $v$ the definition of $\psi$ gives
\begin{align}
r^5\Omega^{-1}\psi=\frac{r^2}{\Omega^2}\Big|_u\int_{u_0}^u \frac{\Omega^2}{r^2}\Psi d\bar{u}.
\end{align}
Therefore
\begin{align}
\begin{split}
\Big|r^5\Omega^{-1}\psi\Big|_{(u,v)}&\leq \sup_{\bar{u}\in[u_0,u]}\left|\Psi|_{(\bar{u},v)}\right|\frac{r^2}{\Omega^2}\int_{u_0}^u\frac{\Omega^2}{r^2} d\bar{u}.
\end{split}
\end{align}
Note that $\frac{r^2}{\Omega^2}\int_{u_0}^u\frac{\Omega^2}{r^2}d\bar{u}$ is uniformly bounded in $v$ for finite $u_0,u$. Since $\Psi$ is also uniformly bounded in $v$ on $[u_0,u]$ we can conclude (say by Lebesgue's bounded convergence theorem) that the pointwise limit $\lim_{v\longrightarrow\infty} r^5\psi$
exists for any fixed $u$. Note now that (\ref{hier+}) also implies
\begin{align}\label{+2 Gronwall ingredient}
\Omega\slashed{\nabla}_3 r^5\Omega^{-1}\psi+\frac{3\Omega^2-1}{r} r^5\Omega^{-1}\psi=\Psi.
\end{align}
Then we have
\begin{align}
\Big|r^5\Omega^{-1}\psi\Big|_{u,v}\leq \int_{u_0}^u d\bar{u} \left|\Psi\right|+\int_{u_0}^ud\bar{u}\left(\frac{3\Omega^2-1}{r}\right)\left|r^5\Omega^{-1}\psi\right|.
\end{align}
We can apply Gr\"onwall's inequality to find:
\begin{align}\label{backwards estimate +2 Gronwall}
\Big|r^5\Omega^{-1}\psi\Big|_{u,v}\leq\int_{u_0}^ud\bar{u}\left|\Psi\right|\exp\left[\int_{u_0}^u \frac{3\Omega^2-1}{r} d\bar{u}\right]\lesssim\left(\int_{u_0}^u d\bar{u}\left|\Psi\right|\right)\left(\frac{r(u,v)}{r(u_0,v)}\right)^2.
\end{align}
Thus $r^5\Omega^{-1}\psi$ is uniformly bounded in $v$ on $[u_0,u]$.
Existence of the $\Omega\slashed{\nabla}_3$ derivatives of the limit of $r^5\psi$ is immediate. Repeating the argument for $r\slashed{\nabla} r^5\psi$ gives differentiability in the angular directions.\\
\indent The benefit of the preceding argument is that it allows for a characterisation of the radiation fields at null infinity that is local in $u$.
\end{remark}
\subsubsection{Radiation flux on $\mathscr{I}^+$}\label{+2 radiation flux on scri+}
The radiation flux on $\mathscr{I}^+$ is straightforward to write down, being already in a form that can be computed from the radiation field $\upalpha_{\mathscr{I}^+}$, given the uniform convergence of $r^5\alpha$, $r^5\psi$ and $\Psi$ towards $\mathscr{I}^+$:
\begin{align}\label{Psi out of alpha at scri}
\begin{split}
\bm{\uppsi}_{\mathscr{I}^+}&=(\partial_u)^2 \upalpha_{\mathscr{I}^+},\\
\partial_u\bm{\uppsi}_{\mathscr{I}^+}&=(\partial_u)^3\upalpha_{\mathscr{I}^+}.
\end{split}
\end{align}
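These identities can be obtained by letting $v\longrightarrow\infty$ in the transport relations of \bref{hier+} (a sketch, assuming the limits may be interchanged with $\partial_u$): in \bref{+2 Gronwall ingredient} we have $\Omega\longrightarrow1$, $\Omega\slashed{\nabla}_3\longrightarrow\partial_u$ and $\frac{3\Omega^2-1}{r}\longrightarrow0$ towards $\mathscr{I}^+$, so that
\begin{align*}
\partial_u\psi_{\mathscr{I}^+}=\bm{\uppsi}_{\mathscr{I}^+},
\end{align*}
while the analogous relation at the first level of the hierarchy gives $\partial_u\upalpha_{\mathscr{I}^+}=\psi_{\mathscr{I}^+}$, whence \bref{Psi out of alpha at scri} follows.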
\section{Future asymptotics of the $-2$ Teukolsky equation}\label{section 7}
\Cref{section 7} is devoted to the study of future radiation fields induced by solutions to the $-2$ Teukolsky equation arising from smooth, compactly supported data on $\Sigma^*$, as was done for the $+2$ Teukolsky equation in \Cref{section 6} and for the Regge--Wheeler equation in \Cref{subsection 5.2 subsection Radiation fields}.\\
\indent We first gather the estimates we need in \Cref{subsection 7.1 integrated boundedness and decay estimates for -2}, where we collect results from \cite{DHR16} estimating $\underline\alpha$ from $\underline\Psi$ defined via (\ref{hier-}) and the estimates of \Cref{subsection 5.1 Basic integrated boundedness and decay estimates} for $\underline\Psi$. We apply these results to study the future radiation fields and their fluxes in \Cref{subsection 7.2 future radiation fields and fluxes}. The estimates of \cite{DHR16} collected in \Cref{subsection 7.1 integrated boundedness and decay estimates for -2} will be sufficient to construct and estimate the radiation fields on $\mathscr{H}^+$ and $\mathscr{I}^+$.
\subsection{Integrated boundedness and decay for $\underline\alpha$ via $\underline\Psi$}\label{subsection 7.1 integrated boundedness and decay estimates for -2}
We begin with the following basic proposition, already proven in \Cref{Chandra}:
\begin{proposition}\label{-2 implies RW}
Let $(\underline\upalpha,\underline\upalpha')$ be data on $\Sigma^*$, $\Sigma$ or $\overline{\Sigma}$ giving rise to a solution $\underline\alpha$ to \cref{T-2} as in \Cref{WP-2Sigma*,,WP-2Sigmabar} respectively. Then $\underline\Psi$ defined via \bref{hier-} out of the solution $\underline\alpha$ on $ J^+(\Sigma^*)$, $ J^+(\Sigma)$ or $ J^+(\overline{\Sigma})$ satisfies \cref{RW}.
\end{proposition}
Throughout this section we focus on the case of data on $\Sigma^*$:
\begin{proposition}\label{-2psiILED}
Let $\underline\alpha$ be a solution to (\ref{T-2}) and $\underline\Psi, \underline\psi$ be as in (\ref{hier-}) and \Cref{-2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds:
\begin{align}
\begin{split}
\int_{\mathscr{D}^{u,v}_{\Sigma^*}} \Omega^2 d\bar{u}d\bar{v}d\omega\;r^{4}\Omega^{-2} |\underline\psi|^2+&\int_{\underline{\mathscr{C}}_v\cap J^+(\Sigma^*)\cap J^-(\mathscr{C}_u)}\Omega^2 d\bar{u}d\omega\;r^6\Omega^{-2}|\underline\psi|^2\\
&\lesssim \mathbb{F}_{\Sigma^*}[\underline\Psi]+\int_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}drd\omega\; r^6\Omega^{-2}|\underline\psi|^2.
\end{split}
\end{align}
\end{proposition}
\begin{proof}
The definition of $\underline\psi$ (\ref{hier-}) and Cauchy--Schwarz imply
\begin{align}
\partial_v [r^{6}\Omega^{-2}|\underline\psi|^2]+M r^{4}\Omega^{-2}|\underline\psi|^2\leq \frac{1}{Mr^2}|\underline\Psi|^2.
\end{align}
The result follows by integrating over $\mathscr{D}^{u,v}_{\Sigma^*}$.
\end{proof}
\begin{proposition}\label{-2alphaILED}
Let $\underline\alpha$ be a solution to (\ref{T-2}) and $\underline\Psi, \underline\psi$ be as in (\ref{hier-}) and \Cref{-2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds:
\begin{align}
\begin{split}
\int_{\mathscr{D}^{u,v}_{\Sigma^*}} \Omega^2 d\bar{u}d\bar{v}d\omega\;\Omega^{-4}|\underline\alpha|^2+&\int_{\underline{\mathscr{C}}_v\cap J^+(\Sigma^*)\cap J^-(\mathscr{C}_u)}\Omega^2 d\bar{u}d\omega\;r^2\Omega^{-4}|\underline\alpha|^2\\&\lesssim \mathbb{F}_{\Sigma^*}[\underline\Psi]+\int_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}drd\omega\;r^6\Omega^{-2}|\underline\psi|^2+ r^2\Omega^{-4}|\underline\alpha|^2.
\end{split}
\end{align}
\end{proposition}
\begin{proof}
Similar to \Cref{-2psiILED}. See Propositions 12.1.2, 12.2.6 and 12.2.7 of \cite{DHR16}.
\end{proof}
\begin{proposition}
Let $\underline\alpha$ be a solution to (\ref{T-2}) and $\underline\Psi, \underline\psi$ be as in (\ref{hier-}) and \Cref{-2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds:
\begin{align}\label{2ndderivativeofpsibar}
\begin{split}
\int_{\underline{\mathscr{C}}_v\cap J^+(\Sigma^*)\cap J^-(\mathscr{C}_u)}\Omega^2 d\bar{u}d\omega\;\left|-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2(r^3\Omega^{-1}\underline\psi)\right|^2&+\int_{\mathscr{D}^{u,v}_{{\Sigma^*}}} d\bar{u}d\bar{v}d\omega\;\frac{\Omega^2}{r^3}\left(1-\frac{3M}{r}\right)^2 \left|-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2(r^3\Omega\underline\psi)\right|^2 \\ &\lesssim \mathbb{F}_{{\Sigma^*}}[\underline\Psi]+\int_{{\Sigma^*}}drd\omega\;r^6\Omega^{-2}\left|\underline\psi\right|^2+r^2\Omega^{-4}\left|\underline\alpha\right|^2.
\end{split}
\end{align}
\end{proposition}
\begin{proposition}
Let $\underline\alpha$ be a solution to (\ref{T-2}) and $\underline\Psi, \underline\psi$ be as in (\ref{hier-}) and \Cref{-2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, and for sufficiently small $\epsilon>0$, the following estimate holds:
\begin{align}
\int_{\mathscr{D}^{u,v}_{{\Sigma^*}}} \Omega^2 d\bar{u}d\bar{v}d\omega\; r^{5-\epsilon} \Omega^{-2}|r\slashed{\mathcal{D}}_2\underline\psi|^2 &\lesssim \mathbb{F}_{{\Sigma^*}}[\underline\Psi]+\int_{{\Sigma^*}}drd\omega\;r^{6-\epsilon}\Omega^{-2}\left[|r\slashed{\mathcal{D}}_2\underline\psi|^2+|\underline\psi|^2\right]+r^{6-\epsilon}\Omega^{-4}|\underline\alpha|^2.
\end{align}
\end{proposition}
\begin{proposition}
Let $\underline\alpha$ be a solution to (\ref{T-2}) and $\underline\Psi, \underline\psi$ be as in (\ref{hier-}) and \Cref{-2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds:
\begin{align}
\begin{split}
\int_{\underline{\mathscr{C}}_v\cap J^+(\Sigma^*)\cap J^-(\mathscr{C}_u)}\Omega^2 d\bar{u}&d\omega\;r^6|\Omega^{-1}\slashed{\nabla}_3(\Omega^{-1}\underline\psi)|^2+\int_{\mathscr{D}^{u,v}_{\Sigma^*}}\Omega^2 d\bar{u}d\bar{v}d\omega\; r^4\left[|\Omega^{-1}\slashed{\nabla}_3(\Omega^{-1}\underline\psi)|^2+|r\Omega\slashed{\nabla}_4(\Omega^{-1}\underline\psi)|^2\right]\\& \lesssim \mathbb{F}_{{\Sigma^*}}[\underline\Psi]+\int_{{\Sigma^*}}drd\omega\;r^4\Omega^{-2}\left[|\underline\psi|^2+|r\slashed{\mathcal{D}}_2\underline\psi|^2+|\Omega^{-1}\slashed{\nabla}_3(\Omega^{-1}\underline\psi)|^2+|r\Omega\slashed{\nabla}_4(\Omega^{-1}\underline\psi)|^2\right].
\end{split}
\end{align}
\end{proposition}
\begin{proposition}
Let $\underline\alpha$ be a solution to (\ref{T-2}) and $\underline\Psi, \underline\psi$ be as in (\ref{hier-}) and \Cref{-2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds:
\begin{align}
\begin{split}
&\int_{\underline{\mathscr{C}}_v\cap J^+(\Sigma^*)\cap J^-(\mathscr{C}_u)}\Omega^2 d\bar{u}d\omega\;|r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2 r\Omega^{-2}\underline\alpha|^2+\int_{\mathscr{D}_{{\Sigma^*}}^{u,v}} \Omega^2 d\bar{u}d\bar{v}d\omega\;|r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2\Omega^{-2}\underline\alpha|^2
\\& \lesssim \mathbb{F}_{{\Sigma^*}}[\underline\Psi]+\int_{{\Sigma^*}}drd\omega\;r^4\Omega^{-2}\left[|\underline\psi|^2+|r\slashed{\mathcal{D}}_2\underline\psi|^2+|\Omega^{-1}\slashed{\nabla}_3(\Omega^{-1}\underline\psi)|^2+|r\Omega\slashed{\nabla}_4(\Omega^{-1}\underline\psi)|^2\right]+\int_{{\Sigma^*}} drd\omega\;|r\Omega^{-2}\underline\alpha|^2.
\end{split}
\end{align}
\end{proposition}
\begin{proposition}
Let $\underline\alpha$ be a solution to (\ref{T-2}) and $\underline\Psi, \underline\psi$ be as in (\ref{hier-}) and \Cref{-2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds for sufficiently small $\epsilon>0$:
\begin{align}
\begin{split}
&\int_{\underline{\mathscr{C}}_v\cap J^+(\Sigma^*)\cap J^-(\mathscr{C}_u)}\Omega^2 d\bar{u}d\omega\; \left[|r\Omega^{-2}\underline\alpha|^2+|r\slashed{\mathcal{D}}_2r\Omega^{-2}\underline\alpha|^2+|\Omega^{-1}\slashed{\nabla}_3 r\Omega^{-2}\underline\alpha|^2\right]\\&+\int_{\mathscr{D}_{\Sigma^*}^{u,v}}\Omega^2 d\bar{u}d\bar{v}d\omega\; \left[|\Omega^{-2}\underline\alpha|^2+|r\slashed{\mathcal{D}}_2\Omega^{-2}\underline\alpha|^2+|\Omega^{-1}\slashed{\nabla}_3 \Omega^{-2}\underline\alpha|^2\right]\\& \lesssim \mathbb{F}_{{\Sigma^*}}[\underline\Psi]+\int_{{\Sigma^*}}drd\omega\;r^6\left[|\Omega^{-1}\underline\psi|^2+|r\slashed{\mathcal{D}}_2\Omega^{-1}\underline\psi|^2+|\Omega^{-1}\slashed{\nabla}_3(\Omega^{-1}\underline\psi)|^2\right]\\& +\int_{{\Sigma^*}}drd\omega\; r^2\left[|\Omega^{-2}\underline\alpha|^2+|r\slashed{\mathcal{D}}_2\Omega^{-2}\underline\alpha|^2+|\Omega^{-1}\slashed{\nabla}_3\Omega^{-2}\underline\alpha|^2\right].
\end{split}
\end{align}
\end{proposition}
\subsection{Future radiation fields and fluxes}\label{subsection 7.2 future radiation fields and fluxes}
In this section we define the future radiation fields of solutions to the -2 Teukolsky equation \bref{T-2} and study some of their properties; in particular, we obtain their $\mathcal{E}^{T,-2}_{\mathscr{H}^+}$ and $\mathcal{E}^{T,-2}_{\mathscr{I}^+}$ fluxes when they belong to solutions of \bref{T-2} arising from smooth data of compact support.
\subsubsection{Radiation on $\mathscr{H}^+$}\label{-2 radiation on H+}
\begin{defin}\label{-2 radiation alpha definition H}
Let $\underline\alpha$ be a solution to \cref{T-2} arising from smooth data as in \Cref{WP-2Sigma*}. The radiation field of $\underline\alpha$ along $\mathscr{H}^+_{\geq0}$, denoted $\underline\upalpha_{\mathscr{H}^+}$, is defined to be the restriction of $2M\Omega^{-2}\underline\alpha$ to $\mathscr{H}^+_{\geq0}$.
\end{defin}
\begin{defin}\label{-2 radiation alpha definition open H}
Let $\underline\alpha$ be a solution to \cref{T-2} arising from smooth data which is compactly supported on $\Sigma$ according to \Cref{WP-2Sigmabar}. The radiation field of $\underline\alpha$ along $\mathscr{H}^+$, denoted $\underline\upalpha_{\mathscr{H}^+}$, is defined to be the restriction of $2M\Omega^{-2}\underline\alpha$ to $\mathscr{H}^+$.
\end{defin}
\begin{defin}\label{-2 radiation alpha definition overline H}
Let $\underline\alpha$ be a solution to \cref{T-2} arising from smooth data as in \Cref{WP-2Sigmabar}. The radiation field of $\underline\alpha$ along $\overline{\mathscr{H}^+}$, denoted $\underline\upalpha_{{\mathscr{H}^+}}$, is defined by $V^2\underline\upalpha_{{\mathscr{H}^+}}=2MV^2\Omega^{-2}\underline\alpha|_{{\mathscr{H}^+}}$.
\end{defin}
\begin{remark}
We will use the same notation for the radiation field on $\mathscr{H}^+_{\geq0}, \mathscr{H}^+$ or $\overline{\mathscr{H}^+}$.
\end{remark}
The following applies equally to radiation fields on $\mathscr{H}^+_{\geq0}$, $\mathscr{H}^+$ and $\overline{\mathscr{H}^+}$.
\begin{proposition}\label{-2 radiation ptwise decay H}
Assume that $\underline\alpha$ arises from data which is supported away from $i^0$. Then $\lim_{v\longrightarrow\infty}\underline{\bm{\uppsi}}_{\mathscr{H}^+}=0$.
\end{proposition}
\begin{proof}
The flux estimate of \Cref{-2psiILED} commuted with $\mathcal{L}_T$ implies
\begin{align}
\int_{v_0}^\infty d\bar{v}d\omega\; \left|\Omega^{-1}\underline\psi\right|^2+ \left|\slashed{\nabla}_T\Omega^{-1}\underline\psi\right|^2 <\infty.
\end{align}
This implies $\|\Omega^{-1}\underline\psi\|_{L^2(S^2_{\infty,v})}\longrightarrow0$ as $v\longrightarrow\infty$. A further Sobolev embedding on the sphere gives the result.
\end{proof}
Similarly, we have
\begin{proposition}
Assume that $\underline\alpha$ arises from data which is supported away from $i^0$. Then $\lim_{v\longrightarrow\infty}\underline\upalpha_{\mathscr{H}^+}=0$.
\end{proposition}
\subsubsection{Radiation flux on $\mathscr{H}^+$}\label{-2 radiation flux on H+}
Now we can calculate the radiation energies in terms of $\underline\alpha$. We want to rewrite
\begin{align}
\Omega\slashed{\nabla}_4\underline\Psi=\Omega\slashed{\nabla}_4\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2r\Omega^2\underline\alpha
\end{align}
in terms of $\Omega^{-2}\underline\alpha$ and $\Omega^{-1}\underline\psi$. We have for $\underline\psi$
\begin{align}
\begin{split}
r^3\Omega^{-1}\underline\psi&=\frac{r^2}{\Omega^4}\Omega\slashed{\nabla}_4 r\Omega^2\underline\alpha=\frac{r^2}{\Omega^4}\Omega\slashed{\nabla}_4 r\Omega^4 \Omega^{-2}\underline\alpha
\\&=r^2(2-\Omega^2)\Omega^{-2}\underline\alpha+r^3\Omega\slashed{\nabla}_4 \Omega^{-2}\underline\alpha.
\end{split}
\end{align}
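For the reader's convenience, we note that this computation (and those that follow) uses only the background identities
\begin{align}
\Omega\slashed{\nabla}_4 r=\Omega^2,\qquad \Omega\slashed{\nabla}_4\Omega^2=\frac{2M\Omega^2}{r^2},\qquad 2M=r(1-\Omega^2),
\end{align}
which follow from $\Omega^2=1-\frac{2M}{r}$; for instance, the coefficient above arises as $r^2\Omega^2+2\cdot2Mr=r^2\Omega^2+2r^2(1-\Omega^2)=r^2(2-\Omega^2)$.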
We can write for $\underline\Psi$
\begin{align}\label{Psi H+}
\begin{split}
\underline\Psi&=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 r^3\Omega\underline\psi=2M r^3\Omega^{-1}\underline\psi+r^2\Omega\slashed{\nabla}_4 r^3\Omega^{-1}\underline\psi
\\&=2r^3\Omega^{-2}\underline\alpha+r^4(3+\Omega^2)\Omega\slashed{\nabla}_4\Omega^{-2}\underline\alpha+r^5(\Omega\slashed{\nabla}_4)^2\Omega^{-2}\underline\alpha.
\end{split}
\end{align}
We can write for $\Omega\slashed{\nabla}_4\underline\Psi$
\begin{align}\label{nablav Psi H+}
\begin{split}
\Omega\slashed{\nabla}_4 \underline\Psi=&6r^2\Omega^2\Omega^{-2}\underline\alpha+r^3(2+13\Omega^2+3\Omega^4)\Omega\slashed{\nabla}_4\Omega^{-2}\underline\alpha
\\&+3r^4(1+2\Omega^2)(\Omega\slashed{\nabla}_4)^2\Omega^{-2}\underline\alpha+r^5(\Omega\slashed{\nabla}_4)^3\Omega^{-2}\underline\alpha.
\end{split}
\end{align}
At $\mathscr{H}^+$, \bref{Psi H+} and \bref{nablav Psi H+} become
\begin{align}\label{-2 Psi out of alpha H+}
\underline{\bm{\uppsi}}_{\mathscr{H}^+}=(2M)^2\left[2\underline\upalpha_{\mathscr{H}^+}+6M\partial_v\underline\upalpha_{\mathscr{H}^+}+(2M)^2\partial_v^2\underline\upalpha_{\mathscr{H}^+}\right],
\end{align}
\begin{align}\label{-2 expression on H is regular}
\Omega\slashed{\nabla}_4\underline{\bm{\uppsi}}_{\mathscr{H}^+}=(2M)\left[4M\partial_v\underline\upalpha_{\mathscr{H}^+}+3(2M)^2\partial_v^2\underline\upalpha_{\mathscr{H}^+}+(2M)^3\partial_v^3\underline\upalpha_{\mathscr{H}^+}\right].
\end{align}
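Here we used that on $\mathscr{H}^+$ we have $r=2M$, $\Omega^2=0$, $\Omega\slashed{\nabla}_4=\partial_v$ and, by \Cref{-2 radiation alpha definition H}, $\Omega^{-2}\underline\alpha|_{\mathscr{H}^+}=(2M)^{-1}\underline\upalpha_{\mathscr{H}^+}$; for example, \bref{Psi H+} restricts to
\begin{align}
\underline\Psi|_{\mathscr{H}^+}=2(2M)^3\,\Omega^{-2}\underline\alpha+3(2M)^4\,\partial_v\Omega^{-2}\underline\alpha+(2M)^5\,\partial_v^2\Omega^{-2}\underline\alpha,
\end{align}
which gives \bref{-2 Psi out of alpha H+}.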
\begin{remark}
On $\mathcal{E}^{T,-2}_{\mathscr{H}^+}$, the norm $\|\;\|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+}}$ is equal to
\begin{align}
\|A\|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+}}^2=\|2(2M\partial_v) A\|^2_{L^2(\mathscr{H}^+)}+\|3(2M\partial_v)^2 A\|^2_{L^2(\mathscr{H}^+)}+\|(2M\partial_v)^3 A\|^2_{L^2(\mathscr{H}^+)},
\end{align}
while for $\|\;\|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}$ we have
\begin{align}
\begin{split}
\|A\|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}^2&=\|2(2M\partial_v) A\|^2_{L^2(\mathscr{H}^+_{\geq0})}+\|3(2M\partial_v)^2 A\|^2_{L^2(\mathscr{H}^+_{\geq0})}+\|(2M\partial_v)^3 A\|^2_{L^2(\mathscr{H}^+_{\geq0})}\\
&-6\|(2M\partial_v)A\|_{L^2(S^2_{\infty,0})}^2-3\|(2M\partial_v)^2A\|_{L^2(S^2_{\infty,0})}^2.
\end{split}
\end{align}
If the same computation for $\|\;\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}}$ is done with terms expressed in the Eddington--Finkelstein coordinates, it produces boundary terms that are not regular near $\mathcal{B}$. The expression \bref{-2 expression on H is regular} for $\Omega\slashed{\nabla}_4\underline\Psi$ remains well-defined over $\mathscr{H}^+$ for data on $\overline{\Sigma}$ and has a finite limit at $\mathcal{B}$, as we can see by writing it in terms of the regular Kruskal coordinates:
\begin{align}
\|\underline\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}}^2=\|V^{1/2}\partial_V^3 V^{-2}\underline\upalpha_{\mathscr{H}^+}\|_{L^2_VL^2(S^2_{\infty,v})}^2.
\end{align}
For smooth initial data on $\overline{\Sigma}$, \Cref{WP-2Sigmabar} guarantees the continuity of $V^2\Omega^{-2}\underline\alpha$ in a neighborhood of $\mathcal{B}$, and in the backwards direction we can show the same with \Cref{backwards wellposedness -2} and \Cref{WP-2Sigma*}.
\end{remark}
\subsubsection{Radiation on $\mathscr{I}^+$}\label{-2 radiation asymptotics on scri+}
\begin{proposition}\label{-2psiscri}
Let $\underline\alpha$ be a solution to (\ref{T-2}) arising from smooth compactly supported data on $\Sigma^*$ and let $\underline\psi,\underline\Psi$ be as in (\ref{hier-}). Then $r^3\underline\psi$ has a uniform smooth limit towards $\mathscr{I}^+$.
\end{proposition}
\begin{proof}
We can integrate the definition of $\underline\Psi$ from (\ref{hier-}) from $r=R$ towards $\mathscr{I}^+$:
\begin{align}\label{above136 psi}
r^3\Omega\underline\psi|_{u,v}=r^3\Omega\underline\psi|_{u,v(u,R)}+\int_{v(u,R)}^v d\bar{v}\;\frac{\Omega^2}{r^2} \underline\Psi.
\end{align}
Note that Cauchy--Schwarz and Hardy's inequality applied to the integral term give
\begin{align}
\left[\int_{S^2}d\omega\int_{v(u,R)}^vd\bar{v} \left|\frac{\Omega^2}{r^2} \underline\Psi\right|\right]^2\leq \frac{1}{R} \int_{\mathscr{C}_u\cap\{r>R\}}d\bar{v}d\omega\;\frac{\Omega^2}{r^2} \left|\underline\Psi\right|^2\lesssim\frac{1}{R}\int_{\mathscr{C}_u\cap\{r>R\}}d\bar{v}d\omega\; |\Omega\slashed{\nabla}_4\underline\Psi|^2,
\end{align}
which is finite for data of compact support. We can repeat this estimate for $r\slashed{\nabla}\underline\Psi$ and conclude with a Sobolev embedding on the sphere that the integral on the right hand side of (\ref{above136 psi}) is bounded. The dominated convergence theorem gives the result. \Cref{RWrp} tells us that the convergence is uniform in $u$. Finally, we can repeat the argument having commuted with $\mathcal{L}_T, \mathcal{L}_{\Omega^i}$ to show that the limit is smooth.
\end{proof}
Similarly, \cref{eq:d3d3psibar} gives us
\begin{proposition}\label{-2 radiation at scri}
Let $\underline\alpha$ be a solution to (\ref{T-2}) arising from smooth compactly supported data on $\Sigma^*$ and let $\underline\psi$ be as in (\ref{hier-}). Then $r\underline\alpha$ has a uniform smooth limit $\underline{\upalpha}_{\mathscr{I}^+}$ towards $\mathscr{I}^+$.
\end{proposition}
\begin{proof}
We can again integrate the definition of $\underline\psi$ from (\ref{hier-}) from $r=R$ towards $\mathscr{I}^+$:
\begin{align}\label{above136 alpha}
r\Omega^2\underline\alpha|_{u,v}=r\Omega^2\underline\alpha|_{u,v(u,R)}+\int_{v(u,R)}^vd\bar{v} \frac{\Omega^2}{r^2}r^3\Omega\underline\psi.
\end{align}
Hardy's inequality gives us
\begin{align}
\int_{\mathscr{C}_u\cap\{r>R\}}d\bar{v}d\omega\;\frac{\Omega^2}{r^2}\left|r^3\Omega\underline\psi\right|^2\lesssim \int_{\mathscr{C}_u\cap\{r>R\}}d\bar{v}d\omega\;|\Omega\slashed{\nabla}_4 r^3\Omega\underline\psi|^2=\int_{\mathscr{C}_u\cap\{r>R\}} d\bar{v}d\omega\;\frac{\Omega^2}{r^2}|\underline\Psi|^2.
\end{align}
We can conclude using the above and repeating the proof of \Cref{-2psiscri}.
\end{proof}
\begin{remark}
In particular, $\slashed{\nabla}_T r\underline\alpha$ attains a limit towards $\mathscr{I}^+$ which is smooth and $\lim_{v\longrightarrow\infty} \slashed{\nabla}_T r\underline\alpha=\partial_u \underline\upalpha_{\mathscr{I}^+}$.
\end{remark}
\begin{remark}
Instead of resorting to commutation with $\mathcal{L}_T, \mathcal{L}_{\Omega^i}$ directly, one could employ the hierarchy of (\ref{eq:d3psibar}) and (\ref{eq:d3d3psibar}) to estimate the derivatives of $\underline\psi$ and $\underline\alpha$ one by one with a smaller loss of derivatives, see \cite{DHR16}.
\end{remark}
\begin{defin}\label{-2 definition radiation at scri}
For a solution $\underline\alpha$ of (\ref{T-2}) arising from smooth data of compact support on $\Sigma^*$ according to \Cref{WP-2Sigma*} or on $\Sigma, \overline{\Sigma}$ as in \Cref{WP-2Sigmabar}, the radiation field of $\underline\alpha$ along $\mathscr{I}^+$ is defined by $\underline\upalpha_{\mathscr{I}^+}(u,\theta^A)=\lim_{v\longrightarrow \infty} r\underline\alpha(u,v,\theta^A)$.
\end{defin}
\begin{proposition}\label{-2 psi ptwisedecay}
Let $\underline\alpha$ be a solution to (\ref{T-2}) arising from smooth compactly supported data on $\Sigma^*$ and let $\underline\psi$ be as in (\ref{hier-}). Then $\underline\psi|_{r=R}$ decays as $t\longrightarrow\infty$.
\end{proposition}
\begin{proof}
The estimate of \Cref{-2psiILED} applied to $r<R$ for some fixed $R<\infty$, commuted with $T$ gives
\begin{align}
\lim_{v\longrightarrow \infty} \int_{\underline{\mathscr{C}}_v\cap\{2M<r<R\}}dud\omega\; \left|\Omega^{-1}\underline\psi\right|^2=0.
\end{align}
Commuting with $\Omega^{-1}\slashed{\nabla}_3$ gives the result.
\end{proof}
\begin{corollary}\label{-2 alpha ptwisedecay}
Let $\underline\alpha$ be a solution to (\ref{T-2}) arising from smooth compactly supported data on $\Sigma^*$ and let $\underline\Psi$ be as in (\ref{hier-}). Then $\underline\alpha|_{r=R}$ decays as $t\longrightarrow\infty$.
\end{corollary}
\begin{proposition}\label{-2 psi radiation decay}
Let $\underline\alpha$ be a solution to (\ref{T-2}) arising from smooth compactly supported data on $\Sigma^*$ and let $\underline\psi$ be as in (\ref{hier-}). Then $\underline{\psi}_{\mathscr{I}^+}:=\lim_{v\longrightarrow\infty}r^3\underline\psi$ decays towards the future end of $\mathscr{I}^+$.
\end{proposition}
\begin{proof}
This follows from integrating (\ref{hier-}) between $r=R$ and $\mathscr{I}^+$:
\begin{align}
\int_{S^2}d\omega\left|r^3\Omega\underline{\psi}|_{(u,v)}-\underline{{\psi}}_{\mathscr{I}^+}\big|_{u}\right|^2\lesssim \frac{1}{R}\int_{\mathscr{C}_u\cap\{r>R\}}d\bar{v}d\omega\; |\Omega\slashed{\nabla}_4\underline\Psi|^2.
\end{align}
This decays as $u\longrightarrow\infty$ by energy conservation. \Cref{-2 psi ptwisedecay} gives the result.
\end{proof}
\begin{corollary}\label{-2 alpha radiation decay}
Let $\underline\alpha$ be a solution to (\ref{T-2}) arising from smooth compactly supported data on $\Sigma^*$ and let $\underline\psi$ be as in (\ref{hier-}). Then the radiation field $\underline\upalpha_{\mathscr{I}^+}$ of \Cref{-2 definition radiation at scri} decays towards $\mathscr{I}^+_+$.
\end{corollary}
\subsubsection{Radiation flux on $\mathscr{I}^+$}\label{-2 radiation flux on scri+}
We want to find the limit towards $\mathscr{I}^+$ of
\begin{align}\label{eq:116}
\Omega\slashed{\nabla}_3\underline\Psi=-(3\Omega^2-1)\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4r\Omega^2\underline\alpha+6Mr\Omega^2\underline\alpha-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4r\Omega^2\underline\alpha.
\end{align}
As $\underline\psi$ is related to the transverse derivative of $\underline\alpha$ near $\mathscr{I}^+$, we want to express $\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 r\Omega^2\underline\alpha$ in terms of quantities that can be constructed intrinsically on $\mathscr{I}^+$ from data. We do this by integrating the Teukolsky equation: recall \cref{eq:d3d3psibar}
\begin{align}\label{this22}
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\underline\Psi=6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\underline\alpha+\mathcal{A}_2(\mathcal{A}_2-2)r\Omega^2\underline\alpha.
\end{align}
The results of the previous section give us the asymptotics:
\begin{align}
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\underline\Psi=(\Omega\slashed{\nabla}_3)^2\underline\Psi-\left(\frac{3\Omega^2-1}{r}\right)\Omega\slashed{\nabla}_3 \underline\Psi\longrightarrow (\partial_u)^2\underline\Psi \text{\;\;towards\;} \mathscr{I}^+.
\end{align}
The right hand side gives:
\begin{align}
6M\partial_u \underline\upalpha_{\mathscr{I}^+}+\mathcal{A}_2\left(\mathcal{A}_2-2\right) \underline\upalpha_{\mathscr{I}^+},
\end{align}
whereas the left hand side becomes $\partial_u^2\underline{\bm{\uppsi}}_{\mathscr{I}^+}$. Thus, at $\mathscr{I}^+$, \bref{this22} becomes
\begin{align}
\partial_u^2\underline{\bm{\uppsi}}_{\mathscr{I}^+}=6M\partial_u \underline\upalpha_{\mathscr{I}^+}+\mathcal{A}_2\left(\mathcal{A}_2-2\right) \underline\upalpha_{\mathscr{I}^+}.
\end{align}
We can integrate along $\mathscr{I}^+$:
\begin{align}\label{202}
\partial_u \underline{\bm{\uppsi}}_{\mathscr{I}^+}|_u=\partial_u\underline{\bm{\uppsi}}_{\mathscr{I}^+}|_{u_0}-6M\underline\upalpha_{\mathscr{I}^+}|_{u_0}+6M\underline\upalpha_{\mathscr{I}^+}|_{u}+\mathcal{A}_2\left(\mathcal{A}_2-2\right)\int_{u_0}^u \underline\upalpha_{\mathscr{I}^+} d\bar{u}.
\end{align}
The fact that $\lim_{u\longrightarrow\infty} \partial_u \underline{\bm{\uppsi}}_{\mathscr{I}^+}=0=\lim_{u\longrightarrow\infty} \underline\upalpha_{\mathscr{I}^+}$ tells us that
\begin{align}
\mathcal{A}_2\left(\mathcal{A}_2-2\right)\int_{u_0}^\infty d\bar{u}\;\underline\upalpha_{\mathscr{I}^+}=-\partial_u\underline{\bm{\uppsi}}_{\mathscr{I}^+}|_{u_0}+6M\underline\upalpha_{\mathscr{I}^+}|_{u_0}.
\end{align}
For data of compact support on $\Sigma$, we can take $u_0$ such that the right hand side vanishes. Knowing that $\mathcal{A}_2, \mathcal{A}_2-2$ are uniformly elliptic, we must have
\begin{align}\label{-2 mean is 0}
\int_{u_0}^\infty d\bar{u}\;\underline\upalpha_{\mathscr{I}^+}=0.
\end{align}
We can integrate (\ref{202}) once more to find a useful expression for $\underline{\bm{\uppsi}}_{\mathscr{I}^+}$ that can be computed from data on $\mathscr{I}^+$:
\begin{align}\label{-2 Psi out of alpha on scri+}
\underline{\bm{\uppsi}}_{\mathscr{I}^+}(u,\theta^A)=6M\int_{u_0}^u d\bar{u}\underline\upalpha_{\mathscr{I}^+}+\mathcal{A}_2\left(\mathcal{A}_2-2\right)\int_{u_0}^u d\bar{u}(u-\bar{u})\underline\upalpha_{\mathscr{I}^+}.
\end{align}
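In passing from \bref{202} to \bref{-2 Psi out of alpha on scri+} we exchanged the order of integration in the double integral:
\begin{align}
\int_{u_0}^u du_1\int_{u_0}^{u_1}d\bar{u}\;\underline\upalpha_{\mathscr{I}^+}(\bar{u})=\int_{u_0}^u d\bar{u}\;(u-\bar{u})\,\underline\upalpha_{\mathscr{I}^+}(\bar{u}).
\end{align}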
Again, seeing that $\underline{\bm{\uppsi}}_{\mathscr{I}^+}$ decays towards $\mathscr{I}^+_+$, we have:
\begin{align}
\int_{u_0}^\infty\int_{u_1}^\infty du_1du_2\;\underline\upalpha_{\mathscr{I}^+}=\lim_{u\longrightarrow\infty}\int_{u_0}^u d\bar{u}\;(u-\bar{u})\underline\upalpha_{\mathscr{I}^+}=0.
\end{align}
We can rewrite $\underline{\bm{\uppsi}}_{\mathscr{I}^+}$ and $\partial_u\underline{\bm{\uppsi}}_{\mathscr{I}^+}$:
\begin{align}\label{formula for -2 RW in backwards direction}
\underline{\bm{\uppsi}}_{\mathscr{I}^+}=-6M\int_u^{\infty}d\bar{u} \underline\upalpha_{\mathscr{I}^+}-\mathcal{A}_2\left(\mathcal{A}_2-2\right)\int_u^{\infty}d\bar{u}(u-\bar{u})\underline\upalpha_{\mathscr{I}^+}.
\end{align}
\begin{align}\label{formula for partialu -2 RW in backwards direction}
\partial_u\underline{\bm{\uppsi}}_{\mathscr{I}^+}=-\mathcal{A}_2\left(\mathcal{A}_2-2\right)\int_u^{\infty} d\bar{u} \underline\upalpha_{\mathscr{I}^+}+6M\underline\upalpha_{\mathscr{I}^+}|_u.
\end{align}
Using \bref{-2 mean is 0}, we can recover \bref{-2 tricky norm at scri}
\begin{align}
\|\partial_u\underline{\bm{\uppsi}}_{\mathscr{I}^+}\|_{L^2(\mathscr{I}^+)}^2=\int_{\mathscr{I}^+} d{u}\sin\theta d\theta d\phi\left[ (6M)^2|\underline\upalpha_{\mathscr{I}^+}|^2+\left|\mathcal{A}_2(\mathcal{A}_2-2)\int_{u}^\infty d\bar{u}\; \underline\upalpha_{\mathscr{I}^+}\right| ^2\right].
\end{align}
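To see that no cross term arises in this computation, set $F(u)=\int_u^\infty d\bar{u}\;\underline\upalpha_{\mathscr{I}^+}$, so that $\underline\upalpha_{\mathscr{I}^+}=-\partial_u F$ and $F$ vanishes at both ends of $\mathscr{I}^+$ by \bref{-2 mean is 0} and the support of the data. Since $\mathcal{A}_2(\mathcal{A}_2-2)$ is self-adjoint on the sphere, the cross term coming from squaring \bref{formula for partialu -2 RW in backwards direction} is a total $u$-derivative:
\begin{align}
12M\int_{\mathscr{I}^+}dud\omega\;\langle\partial_u F,\mathcal{A}_2(\mathcal{A}_2-2)F\rangle=6M\int_{\mathscr{I}^+}dud\omega\;\partial_u\langle F,\mathcal{A}_2(\mathcal{A}_2-2)F\rangle=0.
\end{align}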
\begin{remark}The fact that $\int_{-\infty}^\infty du_1\; \underline{\bm{\uppsi}}_{\mathscr{I}^+}=\int_{-\infty}^\infty\int_{u_1}^\infty du_1 du_2 \;\underline{\bm{\uppsi}}_{\mathscr{I}^+}=0$ implies
\begin{align}
\int_{-\infty}^\infty\int_{u_1}^\infty\int_{u_2}^\infty du_1du_2du_3\;\underline\upalpha_{\mathscr{I}^+}=\int_{u_0}^\infty\int_{u_1}^\infty\int_{u_2}^\infty\int_{u_3}^\infty du_1du_2du_3du_4\;\underline\upalpha_{\mathscr{I}^+}=0.
\end{align}
\end{remark}
\section{Constructing the scattering maps for $\alpha, \underline\alpha$}\label{section 8 constructing the scattering maps}
We gather the results of Sections \ref{section 6} and \ref{section 7} to finally construct the scattering theory for the Teukolsky equations \bref{T+2}, \bref{T-2}. \Cref{subsection 8.1 future scattering +2} is devoted to the +2 Teukolsky equation \bref{T+2}, where \Cref{subsubsection 8.1.1 forwards scattering +2} handles forwards scattering and \Cref{subsubsection 8.1.2 backwards scattering +2} handles backwards scattering. \Cref{subsection 8.2 future scattering -2} is devoted to the -2 Teukolsky equation \bref{T-2}, where \Cref{subsubsection 8.2.1 forwards scattering -2} handles forwards scattering and \Cref{subsubsection 8.2.2 backwards scattering -2} handles backwards scattering. Taking into account \Cref{time inversion}, results concerning scattering towards the past are immediate and they are collected in \Cref{subsection 8.3 past scattering +2-2}.
\subsection{Future scattering for $\alpha$}\label{subsection 8.1 future scattering +2}
Forwards scattering for the +2 Teukolsky equation \bref{T+2} is worked out entirely analogously to the case of the Regge--Wheeler equation \bref{RW}, using the results of Section \ref{+2 radiation}.\\
\indent For backwards scattering, we make use of the transport equations \bref{hier+} and the backwards scattering theory of \Cref{subsection 5.2 subsection Radiation fields} for the Regge--Wheeler equation \bref{RW}, instead of directly appealing to a limiting argument that repeats the proof of \Cref{RWbackwardsexistence}. Throughout this process, the uniform $T$-energy estimates of $\Psi$ are vital in controlling the backwards evolution of $\alpha$, but we note here that it is possible to derive uniform, nondegenerate energy estimates for $\alpha$ near $\mathscr{H}^+$ in contrast with the case of $\Psi$. In this sense, $\alpha$ is ``red-shifted'' in the backwards direction.
\subsubsection{Forwards scattering for $\alpha$}\label{subsubsection 8.1.1 forwards scattering +2}
We put together the ingredients worked out in \Cref{+2 radiation} to construct the forwards scattering map.
\begin{proof}[Proof of \Cref{+2 future forward scattering}]
Let $\alpha$ be the solution to \cref{T+2} on $ J^+(\Sigma^*)$ arising out of a compactly supported data set $(\upalpha,\upalpha')$ on $\Sigma^*$ as in \Cref{WP+2Sigma*}. The radiation field $\upalpha_{\mathscr{H}^+}$ exists as in \Cref{+2 radiation alpha definition H}. \Cref{alpha+2ptwisedecay} applied for $R=2M$ says that $\upalpha_{\mathscr{H}^+}\longrightarrow 0$ towards $\mathscr{H}^+_+$. Let $\Psi$ be the solution to \cref{RW} associated to $\alpha$ via (\ref{hier+}). The fact that $(\Psi|_{\Sigma^*},\slashed{\nabla}_{T}\Psi|_{\Sigma^*})$ are compactly supported means that the results of \Cref{+2 radiation flux on H+} apply. In particular, we find that
\begin{align}
\left|\int_{v}^\infty d\bar{v}\; e^{\frac{1}{2M}(v-\bar{v})}\upalpha_{\mathscr{H}^+}(\bar{v},\theta^A)\right|\leq \frac{1}{2M} |\upalpha_{\mathscr{H}^+}(v,\theta^A)|,
\end{align}
and since $\|\uppsi_{\mathscr{H}^+}\|_{\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}}<\infty$, this shows that $\|\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}}<\infty$ and $\upalpha_{\mathscr{H}^+}\in \mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}$. Similarly by \Cref{alpha+2scri}, $r^5\alpha$ has a pointwise limit as $v\longrightarrow \infty$ which induces a smooth $\upalpha_{\mathscr{I}^+}$ on $\mathscr{I}^+$. \Cref{T+1+2scridecay} implies that $\upalpha_{\mathscr{I}^+}$ decays towards $\mathscr{I}^+_+$. As $\Psi_{\mathscr{I}^+}\in \mathcal{E}^T_{\mathscr{I}^+}$, we have that $\upalpha_{\mathscr{I}^+}\in\mathcal{E}^{T,+2}_{\mathscr{I}^+}$.
\end{proof}
\begin{corollary}\label{+2 future forward scattering Sigma Sigmabar}
Solutions to (\ref{T+2}) arising from data on ${\Sigma}$ of compact support give rise to smooth radiation fields in $\mathcal{E}_{\mathscr{I}^+}^{T,+2}$ and $\mathcal{E}_{{\mathscr{H}^+}}^{T,+2}$. Solutions to (\ref{T+2}) arising from data on $\overline{\Sigma}$ of compact support give rise to smooth radiation fields in $\mathcal{E}_{\mathscr{I}^+}^{T,+2}$ and $\mathcal{E}_{\overline{\mathscr{H}^+}}^{T,+2}$.
\end{corollary}
\begin{proof}
Identical to the proof of \Cref{RWfcpSigma} using \Cref{WP+2Sigmabar,,backwards wellposedness +2}.
\end{proof}
The proof of \Cref{+2 future forward scattering} above and \Cref{+2 future forward scattering Sigma Sigmabar} allow us to define the forwards maps ${}^{(+2)}\mathscr{F}^+$ from dense subspaces of $\mathcal{E}^{T,+2}_{\Sigma^*}$, $\mathcal{E}^{T,+2}_{\Sigma}$, $\mathcal{E}^{T,+2}_{\overline{\Sigma}}$.
\begin{defin}
Let $(\upalpha,\upalpha')$ be a smooth data set of compact support to the +2 Teukolsky equation \bref{T+2} on $\Sigma^*$ as in \Cref{WP+2Sigma*}. Define the map ${}^{(+2)}\mathscr{F}^+$ by
\begin{align}
{}^{(+2)}\mathscr{F}^+:\Gamma_c(\Sigma^*)\times\Gamma_c(\Sigma^*)\longrightarrow \Gamma(\mathscr{H}^+_{\geq0})\times\Gamma(\mathscr{I}^+), (\upalpha,\upalpha')\longmapsto (\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+}),
\end{align}
where $(\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+})$ are as in the proof of \Cref{+2 future forward scattering}.\\
\indent Using \Cref{+2 future forward scattering Sigma Sigmabar}, the map ${}^{(+2)}\mathscr{F}^+$ is defined analogously for data on $\Sigma, \overline{\Sigma}$:
\begin{align}
{}^{(+2)}\mathscr{F}^+:\Gamma_c(\Sigma)\times\Gamma_c(\Sigma)\longrightarrow \Gamma(\mathscr{H}^+)\times\Gamma(\mathscr{I}^+), (\upalpha,\upalpha')\longmapsto (\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+}),\\
{}^{(+2)}\mathscr{F}^+:\Gamma_c(\overline{\Sigma})\times\Gamma_c(\overline{\Sigma})\longrightarrow \Gamma(\overline{\mathscr{H}^+})\times\Gamma(\mathscr{I}^+), (\upalpha,\upalpha')\longmapsto (\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+}).
\end{align}
\end{defin}
\subsubsection{Backwards scattering for $\alpha$}\label{subsubsection 8.1.2 backwards scattering +2}
Now we construct the inverse ${}^{(+2)}\mathscr{B}^-$ of \Cref{+2 future backward scattering} on a dense subspace of $\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^+}$. The existence of a solution to the +2 Teukolsky equation \bref{T+2} out of compactly supported scattering data on $\mathscr{H}^+_{\geq0}, \mathscr{I}^+$ is shown in \Cref{+2 backwards existence}. Showing that this solution defines an element of $\mathcal{E}^{T,+2}_{\Sigma^*}$ is done in \Cref{+2 backwards inclusion 7/2}.
\begin{proposition}\label{+2 backwards existence}
For $\upalpha_{\mathscr{H}^+}\in\Gamma_c(\mathscr{H}^+_{\geq0})\cap\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}$ supported on $\mathscr{H}^+_{\geq0}\cap\{v<v_+\}$ for $v_+<\infty$, and $\upalpha_{\mathscr{I}^+}\in\Gamma_c(\mathscr{I}^+)\cap\mathcal{E}^{T,+2}_{\mathscr{I}^+}$ supported on $\mathscr{I}^+\cap\{u<u_+\}$ for $u_+<\infty$, there exists a unique solution $\alpha$ to \bref{T+2} in $J^+(\Sigma^*)$ that realises $\upalpha_{\mathscr{H}^+}$ and $\upalpha_{\mathscr{I}^+}$ as its radiation fields on $\mathscr{H}^+_{\geq0}, \mathscr{I}^+$.
\end{proposition}
\begin{proof}
Define
\begin{align}
{\psi}_{\mathscr{H}^+}&=\frac{1}{(2M)^3}\int_v^\infty d\bar{v}\; e^{\frac{1}{2M}(v-\bar{v})}(\mathcal{A}_2-3)\upalpha_{\mathscr{H}^+},\\
\bm{\uppsi}_{\mathscr{H}^+}&=2M\int_v^\infty d\bar{v} \left[e^{\frac{1}{2M}(v-\bar{v})}-1\right]\left\{\mathcal{A}_2\left[\mathcal{A}_2-2\right]\upalpha_{\mathscr{H}^+}-6M\partial_v\upalpha_{\mathscr{H}^+}\right\},\\
\psi_{\mathscr{I}^+}&=\partial_u\upalpha_{\mathscr{I}^+},\\
\bm{\uppsi}_{\mathscr{I}^+}&=\partial_u^2\upalpha_{\mathscr{I}^+}.
\end{align}
With scattering data $\bm{\uppsi}_{\mathscr{I}^+}, \bm{\uppsi}_{\mathscr{H}^+}$, there is a unique solution $\Psi$ to \cref{RW} on $J^+(\Sigma^*)$. Define $\psi, \alpha$ by
\begin{align}
r^3\Omega\psi=(2M)^3\psi_{\mathscr{H}^+}-\int_u^\infty\frac{\Omega^2}{r^2}\Psi d\bar{u},\qquad\qquad
r\Omega^2\alpha=\upalpha_{\mathscr{H}^+}-\int_u^\infty r{\Omega^3}\psi d\bar{u},
\end{align}
then $\psi, \alpha$ satisfy the transport relations \bref{hier+}:
\begin{align}
\Psi=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3 r^3\Omega\psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\alpha.
\end{align}
(note that we are working with $(1,1)$-tensor fields throughout). The boundedness of $F_v^T[\Psi](u,\infty)$ implies that $r\Omega^2\alpha\longrightarrow\upalpha_{\mathscr{H}^+}$ and $r^3\Omega\psi\longrightarrow(2M)^3{\psi}_{\mathscr{H}^+}$ as $u\longrightarrow\infty$.
Since $\Psi$ satisfies \cref{RW}, the commutation relation \bref{commutation relation} implies
\begin{align}
\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2\mathcal{T}^{+2}r\Omega^2\alpha=0.
\end{align}
where $\mathcal{T}^{+2}$ is the $+2$ Teukolsky operator. We have:
\begin{align}
\begin{split}
\mathcal{T}^{+2}r\Omega^2\alpha&=\frac{3\Omega^2-1}{r}r^3\Omega\psi+\Omega\slashed{\nabla}_4 r^3\Omega\psi-\left(\mathcal{A}_2-\frac{6M}{r}\right)r\Omega^2\alpha\\
\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\mathcal{T}^{+2}r\Omega^2\alpha&=-(\mathcal{A}_2-3\Omega^2+1)r^3\Omega\psi-\Omega\slashed{\nabla}_4\Psi+6Mr\Omega^2\alpha.
\end{split}
\end{align}
On $\mathscr{H}^+$ this evaluates to
\begin{align}
\mathcal{T}^{+2}r\Omega^2\alpha|_{\mathscr{H}^+}&=(2M)^3\left(\partial_v-\frac{1}{2M}\right)\psi_{\mathscr{H}^+}-(\mathcal{A}_2-3)\upalpha_{\mathscr{H}^+},\\
\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\mathcal{T}^{+2}r\Omega^2\alpha|_{\mathscr{H}^+}&=-(2M)^3(\mathcal{A}_2+1)\psi_{\mathscr{H}^+}+6M\upalpha_{\mathscr{H}^+}-\partial_v\bm{\uppsi}_{\mathscr{H}^+}.
\end{align}
It is clear that with our construction of initial data,
$\mathcal{T}^{+2}r\Omega^2\alpha|_{\mathscr{H}^+}= \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\mathcal{T}^{+2}r\Omega^2\alpha|_{\mathscr{H}^+}=0$; therefore $\alpha$ satisfies $\mathcal{T}^{+2}r\Omega^2\alpha=0$. Note that as $\Psi(u,v)$ vanishes for $u>u_+,v>v_+$, the same applies to $\alpha,\psi$. Let $\mathcal{R}>3M$; we can estimate $\psi(u,v)$ for $r(u_+,v)>\mathcal{R}$ by:
\begin{align}
|r^5\Omega\psi|\leq\int_u^{u_+}d\bar{u}\;\Omega^2|\Psi|+\int_u^{u_+}d\bar{u}\;\frac{2}{r}|r^5\Omega\psi|.
\end{align}
Gr\"onwall's inequality implies
\begin{align}
|r^5\Omega\psi|\lesssim \left(\frac{r(u,v)}{r(u_+,v)}\right)^2\int_u^{u_+}d\bar{u}\;|\Psi|.
\end{align}
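The ratio of area radii in the Gr\"onwall factor arises because, with $\partial_{\bar{u}}r=-\Omega^2$,
\begin{align}
\exp\left(\int_u^{u_+}d\bar{u}\;\frac{2\Omega^2}{r}\right)=\exp\left(2\log\frac{r(u,v)}{r(u_+,v)}\right)=\left(\frac{r(u,v)}{r(u_+,v)}\right)^2,
\end{align}
where we note that, by \bref{hier+}, the transport equation reads $\Omega\slashed{\nabla}_3(r^5\Omega\psi)=-\frac{2\Omega^2}{r}\,r^5\Omega\psi+\Omega^2\Psi$, so the coefficient $\frac{2}{r}$ in the preceding estimate may in fact be replaced by $\frac{2\Omega^2}{r}$.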
As $\Psi$ converges uniformly to $\bm{\uppsi}_{\mathscr{I}^+}$, this implies that $\partial_u r^5\Omega\psi$ converges uniformly to $\bm{\uppsi}_{\mathscr{I}^+}=\partial_u\psi_{\mathscr{I}^+}$, which in turn says that $r^5\Omega\psi$ converges to $\psi_{\mathscr{I}^+}$. An identical argument shows that $r^5\alpha$ converges to $\upalpha_{\mathscr{I}^+}$.
\end{proof}
In the following we explicitly show that $\alpha$ of \Cref{+2 backwards existence} defines a member of $\mathcal{E}^{T,+2}_{\mathscr{I}^+}$:
\begin{corollary}\label{+2 backwards inclusion 7/2}
Let $\upalpha_{\mathscr{H}^+}, \upalpha_{\mathscr{I}^+}$ be as in \Cref{+2 backwards existence}. Let $\alpha$ be the solution to \cref{T+2} arising from $\upalpha_{\mathscr{H}^+}, \upalpha_{\mathscr{I}^+}$. Then $(\Omega^2\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Omega^2\alpha|_{\Sigma^*})\in\mathcal{E}^{T,+2}_{\Sigma^*}$.
\end{corollary}
\begin{proof}
Let $\xi$ be a smooth cutoff function on $\mathbb{R}$ with $\xi(x)=1$ for $x\leq0$ and $\xi(x)=0$ for $x\geq1$, such that all derivatives of $\xi$ are uniformly bounded. Let $\{R_n\}_{n=1}^\infty$ be a sequence with $R_{1}$ large and $R_{n+1}=2R_n$, and define $\xi_n (r)=\xi\left(\frac{r-R_{n}}{R_{n+1}-R_n}\right)$. We want to show that the sequence $\alpha_n=\xi_n\alpha$ is such that $(\Omega^2\alpha_n,\slashed{\nabla}_{n_{\Sigma^*}}\Omega^2\alpha_n)$ converges to $(\Omega^2\alpha,\slashed{\nabla}_{n_{\Sigma^*}}\Omega^2\alpha)$ in $\mathcal{E}^{T,+2}_{\Sigma^*}$. Denoting by $\Psi_{n}=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\alpha_n$ the solution to the Regge--Wheeler equation arising from $\alpha_n$, we calculate
\begin{align}
\begin{split}
\Psi_n&=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2 r\Omega^2\alpha_n=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2\xi_n r\Omega^2\alpha\\
&=r^2(r^2\xi_n')'r\Omega^2\alpha-2r^2\xi_n'r^3\Omega\psi+\xi_n\Psi.
\end{split}
\end{align}
We know that $\xi_n\Psi\longrightarrow \Psi$ in $\mathcal{E}^{T}_{\Sigma^*}$ (see \Cref{RW enough to be in space}). Since $r^2\xi_n'\sim r$ and $r^2(r^2\xi_n')'\sim r^2$ on $[R_n,R_{n+1}]$, we can estimate the remainder via
\begin{align}
\begin{split}
\|\Psi_n-\xi_n\Psi\|_{\mathcal{E}^T_{\Sigma^*}}^2\lesssim\int_{R_n}^{R_{n+1}} dr\sin\theta d\theta d\phi\;& \left[|r^3\Omega\psi|^2+|\mathring{\slashed{\nabla}}r^3\Omega\psi|^2+|r\Omega\slashed{\nabla}_4 r^3 \Omega\psi|^2\right]\\&+\left[|r^3\Omega\alpha|^2+|\mathring{\slashed{\nabla}}r^3\Omega\alpha|^2+|r\Omega\slashed{\nabla}_4 r^3 \Omega\alpha|^2\right]\\&+\left[\frac{1}{r^2}(|\Psi|^2+|\mathring{\slashed{\nabla}}\Psi|^2)+|\Omega\slashed{\nabla}_4\Psi|^2\right].
\end{split}
\end{align}
The result follows if we can show that $r^{\frac{7}{2}}\Omega\psi|_{\Sigma^*}, r^{\frac{7}{2}}\Omega^2\alpha|_{\Sigma^*}, r^{\frac{3}{2}}\Omega\slashed{\nabla}_4 r^3\Omega\psi, r^{\frac{3}{2}}\Omega\slashed{\nabla}_4 r^3\Omega^2\alpha$ decay as $r\longrightarrow\infty$. Let $u<u'<u_-$ and take $r=r(u',v), R=r(u,v)$ and $(u,v,\theta^A):=(R,\theta^A)\in\Sigma^*$. We estimate $R^{\frac{7}{2}}\Omega\psi|_{\Sigma^*}$ by integrating the definition of $\Psi$ (\ref{hier+}):
\begin{align}
\begin{split}
\int_{S^2}R^{\frac{7}{2}}\Omega|\psi(R,\theta^A)|d\omega&\leq\sqrt{r}\int_u^{u'}d\bar{u}\int_{S^2}d\omega\; \frac{\Omega^2}{r^2}|\Psi|+\sqrt{R}r^{3}\Omega|\psi(u',v,\theta^A)|\\&\lesssim_{u'}\sqrt{r}\int_u^{u'}d\bar{u}\int_{S^2}d\omega\; \frac{\Omega^2}{r^2}|\Psi|+r^{\frac{7}{2}}\Omega|\psi(u',v,\theta^A)|\\&\lesssim_{u'}\sqrt{F^T_v[\Psi](u,u')}+r^{\frac{7}{2}}\Omega|\psi(u',v,\theta^A)|.
\end{split}
\end{align}
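In the last step, Cauchy--Schwarz gives (a sketch; the area of $S^2$ is absorbed into the implicit constant, and we assume the $T$-energy flux on $\underline{\mathscr{C}}_v$ controls the zeroth order term $\frac{\Omega^2}{r^2}|\Psi|^2$):
\begin{align*}
\sqrt{r}\int_u^{u'}d\bar{u}\int_{S^2}d\omega\,\frac{\Omega^2}{r^2}|\Psi|\lesssim\sqrt{r}\left(\int_u^{u'}\frac{\Omega^2}{r^2}\,d\bar{u}\right)^{\frac{1}{2}}\left(\int_u^{u'}d\bar{u}\int_{S^2}d\omega\,\frac{\Omega^2}{r^2}|\Psi|^2\right)^{\frac{1}{2}}\lesssim\sqrt{F^T_v[\Psi](u,u')},
\end{align*}
where we used $\partial_{\bar{u}}r=-\Omega^2$ to compute $\int_u^{u'}\frac{\Omega^2}{r^2}\,d\bar{u}=\frac{1}{r(u',v)}-\frac{1}{r(u,v)}\leq\frac{1}{r}$ with $r=r(u',v)$.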
We used Cauchy--Schwarz to get to the last step. The right hand side decays as $v\longrightarrow\infty$ since $F^T_v[\Psi](u,u')$ decays, $F^T_{u'}[\Psi](v,\infty)<\infty$ and $\bm{\uppsi}_{\mathscr{I}^+}$ vanishes for $u<u_-$, so that
\begin{align}
|r^{3}\Omega\psi(u',v,\theta^A)|_{L^2(S^2_{u',v})}\leq\int_v^\infty d\bar{v}\int_{S^2_{u',\bar{v}}} \frac{\Omega^2}{r^2}|\Psi|\leq\frac{1}{\sqrt{r(u',v)}}\sqrt{F^T_{u'}[\Psi](v,\infty)},
\end{align}
and commuting with $\slashed{\mathcal{L}}_{S^2}^\gamma$ for $|\gamma|\leq 3$ gives that $R^{\frac{7}{2}}\Omega\psi|_{\Sigma^*}$ decays as $R\longrightarrow\infty$. This can be repeated to show the same for $R^{\frac{7}{2}}\Omega^2\alpha|_{\Sigma^*}$. Furthermore, we have
\begin{align}
\begin{split}
\Omega\slashed{\nabla}_3 r\Omega\slashed{\nabla}_4 r^3\Omega\psi=-\frac{\Omega^2}{r} r\Omega\slashed{\nabla}_4 r^3\Omega\psi+(3\Omega^2-1)\frac{\Omega^2}{r^2}\Psi+\frac{\Omega^2}{r}\Omega\slashed{\nabla}_4 \Psi.
\end{split}
\end{align}
We estimate
\begin{align}
\left|r\Omega\slashed{\nabla}_4 r^3\Omega\psi|_{\Sigma^*}\right|\leq \left|r\Omega\slashed{\nabla}_4 r^3\Omega\psi(u',v,\theta^A)\right|+\int_u^{u'}d\bar{u}\left[\frac{\Omega^2}{r}|r\Omega\slashed{\nabla}_4 r^3\Omega\psi|+(3\Omega^2-1)\frac{\Omega^2}{r^2}|\Psi|+\frac{\Omega^2}{r}|\Omega\slashed{\nabla}_4\Psi|\right].
\end{align}
Gr\"onwall's inequality implies
\begin{align}
\left|r\Omega\slashed{\nabla}_4 r^3\Omega\psi|_{\Sigma^*}\right|\lesssim\frac{r(u',v)}{r(u,v)}\left[\left|r\Omega\slashed{\nabla}_4 r^3\Omega\psi(u,v,\theta^A)\right|+\frac{1}{\sqrt{R}}\sqrt{F^T_v[\Psi](u,u')}\right],
\end{align}
which in turn implies that $r^{\frac{3}{2}}\Omega\slashed{\nabla}_4 r^3\Omega\psi|_{\Sigma^*}\longrightarrow 0$ as $R\longrightarrow\infty$. The same can be repeated to show $r^{\frac{3}{2}}\Omega\slashed{\nabla}_4 r^3\Omega^2\alpha|_{\Sigma^*}\longrightarrow 0$ as $R\longrightarrow\infty$.
\end{proof}
\begin{defin}\label{+2 definition of B-}
Let $\upalpha_{\mathscr{H}^+}, \upalpha_{\mathscr{I}^+}$ be as in \Cref{+2 backwards existence}. Define the map ${}^{(+2)}\mathscr{B}^-$ by
\begin{align}
{}^{(+2)}\mathscr{B}^-:\Gamma_c(\mathscr{H}^+_{\geq0})\times\Gamma_c(\mathscr{I}^+)\longrightarrow\Gamma(\Sigma^*)\times\Gamma(\Sigma^*), (\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+})\longrightarrow (\Omega^2\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Omega^2\alpha|_{\Sigma^*}),
\end{align}
where $\alpha$ is the solution to \bref{T+2} arising from scattering data $(\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+})$ as in \Cref{+2 backwards existence}.
\end{defin}
\begin{corollary}\label{+2B- inverts +2F+}
The maps ${}^{(+2)}\mathscr{F}^+$, ${}^{(+2)}\mathscr{B}^-$ extend uniquely to unitary Hilbert space isomorphisms on their respective domains, such that ${}^{(+2)}\mathscr{F}^+\circ{}^{(+2)}\mathscr{B}^-=Id$, ${}^{(+2)}\mathscr{B}^-\circ{}^{(+2)}\mathscr{F}^+=Id$.
\end{corollary}
\begin{proof}
Identical to the proof of \Cref{B- inverts F+}.
\end{proof}
\begin{remark}\label{unitarity of +2B- is trivial}
As in the case of \Cref{unitarity of B- is trivial}, \Cref{+2B- inverts +2F+} implies
\begin{align}\label{unitarity of +2B- formula}
\|{}^{(+2)}\mathscr{B}^-(\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+})\|_{\mathcal{E}^{T,+2}_{\Sigma^*}}^2=\|\upalpha_{\mathscr{H}^+}\|^2_{\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}}+\|\upalpha_{\mathscr{I}^+}\|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}.
\end{align}
As in the case of \Cref{RW unitary backwards}, we can use the backwards $r^p$-estimates of \Cref{backwards rp estimates} to directly show \bref{unitarity of +2B- formula} without reference to the forwards map ${}^{(+2)}\mathscr{F}^+$.
\end{remark}
Since the region $J^+(\overline{\Sigma})\cap J^-(\Sigma^*)$ can be handled locally via \Cref{WP+2Sigmabar}, \Cref{backwards wellposedness +2} and $T$-energy conservation, we can immediately deduce the following:
\begin{corollary}
The map ${}^{(+2)}\mathscr{B}^-$ can be defined on the following domains:
\begin{align}
{}^{(+2)}\mathscr{B}^{-}:\mathcal{E}^{T,+2}_{\mathscr{H}^+}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,+2}_{\Sigma},\\
{}^{(+2)}\mathscr{B}^{-}:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,+2}_{\overline{\Sigma}},
\end{align}
and we have
\begin{align}
{}^{(+2)}\mathscr{F}^{+}\circ{}^{(+2)}\mathscr{B}^{-}=Id_{\mathcal{E}^{T,+2}_{\mathscr{H}^+}\oplus\;\mathcal{E}^{T,+2}_{\mathscr{I}^+}},\qquad
{}^{(+2)}\mathscr{B}^{-}\circ{}^{(+2)}\mathscr{F}^{+}=Id_{\mathcal{E}^{T,+2}_{\Sigma}},\\
{}^{(+2)}\mathscr{F}^{+}\circ{}^{(+2)}\mathscr{B}^{-}=Id_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus\;\mathcal{E}^{T,+2}_{\mathscr{I}^+}},\qquad
{}^{(+2)}\mathscr{B}^{-}\circ{}^{(+2)}\mathscr{F}^{+}=Id_{\mathcal{E}^{T,+2}_{\overline{\Sigma}}}.
\end{align}
\end{corollary}
This concludes the proof of \Cref{+2 future backward scattering}.
\begin{remark}[A nondegenerate estimate near $\mathscr{H}^+$]\label{nondegenerate estimate near H+}
Note that the transport hierarchy \bref{hier+} implies (integrating in the measure $du\sin\theta d\theta d\phi$)
\begin{align}
\begin{split}
\int_{\underline{\mathscr{C}}_v\cap[u,\infty)} \frac{1}{\Omega^2} |\Omega\slashed{\nabla}_3 r^3\Omega\psi|^2&= \int_{\underline{\mathscr{C}}_v\cap[u,\infty)} \frac{\Omega^2}{r^2}|\Psi|^2\leq \underline{F}_v^T[\Psi](u,\infty),\\
\int_{\underline{\mathscr{C}}_v\cap[u,\infty)}\frac{1}{\Omega^2}|\Omega\slashed{\nabla}_3 r\Omega^2\alpha|^2&\lesssim \frac{1}{(2M)^2} \int_{\underline{\mathscr{C}}_v\cap[u,\infty)}\frac{1}{r^2}|\Omega\slashed{\nabla}_3 r^3\Omega\psi|^2\lesssim \Omega^2(u,v) \underline{F}_v^T[\Psi](u,\infty).
\end{split}
\end{align}
These estimates hold uniformly in $v$, in contrast to \bref{RW exponential backwards near H+}. This can be traced to the sign of the first order term in
\begin{align}
\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4 r\Omega^2\alpha+\frac{2(3\Omega^2-1)}{r}\Omega\slashed{\nabla}_3 r\Omega^2\alpha-\Omega^2\slashed{\Delta} r\Omega^2 \alpha+\frac{6M\Omega^2}{r^2} r\Omega^2\alpha=0,
\end{align}
for $r<3M$.\\
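Indeed, since $\Omega^2=1-\frac{2M}{r}$, the coefficient of the first order term is
\begin{align*}
\frac{2(3\Omega^2-1)}{r}=\frac{2}{r}\left(2-\frac{6M}{r}\right),
\end{align*}
which is negative precisely when $r<3M$.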
\indent Near $\mathscr{I}^+$ we can use \bref{+2 equation for radiation field} and follow the same steps leading to \bref{this} to derive for $R>\mathcal{R}_{\mathscr{I}^+}$:
\begin{align}\label{this+2}
\int_{\mathscr{C}_u\cap\{r>R\}}r^2|\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha|^2\lesssim_{u_-,M}\left[\|\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}^2 +\|\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^+}}^2+\int_{\mathscr{I}^+\cap[u,u_+]}|\upalpha_{\mathscr{I}^+}|^2+|\mathring{\slashed{\nabla}}\upalpha_{\mathscr{I}^+}|^2\right].
\end{align}
With these estimates we can conclude as for the Regge--Wheeler equation:
\begin{corollary}\label{+2 noncompact}
The results of \Cref{+2 backwards existence} hold when $\upalpha_{\mathscr{H}^+}$, $\upalpha_{\mathscr{I}^+}$ are not compactly supported, provided
\begin{align}\label{noncompact estimate}
\sum_{|\gamma|\leq2}\|\slashed{\mathcal{L}}^\gamma_{S^2}\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^+}}^2+\|\slashed{\mathcal{L}}^\gamma_{S^2}\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}^2+\int_{\mathscr{I}^+}|\slashed{\mathcal{L}}^\gamma_{S^2}\upalpha_{\mathscr{I}^+}|^2+|\slashed{\mathcal{L}}^\gamma_{S^2}\mathring{\slashed{\nabla}}\upalpha_{\mathscr{I}^+}|^2<\infty.
\end{align}
\end{corollary}
\end{remark}
The results above can be extended to scattering from $\Sigma, \overline\Sigma$, since the region $J^+(\overline\Sigma)\cap J^-(\Sigma^*)$ can be handled locally with \Cref{WP+2Sigmabar} and \Cref{RWfcpSigma}.
\begin{corollary}
Let $\upalpha_{\mathscr{H}^+}\in\Gamma(\mathscr{H}^+)\cap\;\mathcal{E}^{T,+2}_{\mathscr{H}^+}$, $\upalpha_{\mathscr{I}^+}\in\Gamma (\mathscr{I}^+)\cap\;\mathcal{E}^{T,+2}_{\mathscr{I}^+}$, such that \bref{noncompact estimate} is satisfied. Then there exists a unique solution $\alpha$ to \cref{T+2} in $J^+({\Sigma})$ such that $\lim_{v\longrightarrow\infty}r^5\alpha=\upalpha_{\mathscr{I}^+}$, $2M\Omega^2\alpha\big|_{\mathscr{H}^+}=\upalpha_{\mathscr{H}^+}$. Moreover, $(\alpha\big|_{{\Sigma}},\slashed{\nabla}_{n_\Sigma}\alpha|_{{\Sigma}})\in \mathcal{E}^{T,+2}_{{\Sigma}}$ and
\begin{align}
\left\|\left(\alpha|_{{\Sigma}},\slashed{\nabla}_{n_\Sigma}\alpha|_{{\Sigma}}\right)\right\|^2_{\mathcal{E}^{T,+2}_{{\Sigma}}}=\left|\left|\upalpha_{\mathscr{I}^+}\right|\right|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}+\left|\left|\upalpha_{\mathscr{H}^+}\right|\right|^2_{\mathcal{E}^{T,+2}_{{\mathscr{H}^+}}}.
\end{align}
\end{corollary}
\begin{corollary}
Let $\upalpha_{\mathscr{H}^+}\in\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}$ be such that $V^{-2}\upalpha_{\mathscr{H}^+}\in \Gamma({\overline{\mathscr{H}^+}})$ and let $\upalpha_{\mathscr{I}^+}\in\Gamma(\mathscr{I}^+)\cap\;\mathcal{E}^{T,+2}_{\mathscr{I}^+}$. Then there exists a unique solution $\alpha$ to \cref{T+2} in $J^+(\overline{\Sigma})$ such that $\lim_{v\longrightarrow\infty}r^5\alpha=\upalpha_{\mathscr{I}^+}$, $2MV^{-2}\Omega^2\alpha\big|_{\mathscr{H}^+}=V^{-2}\upalpha_{\mathscr{H}^+}$. Moreover, $(\alpha\big|_{\overline{\Sigma}},\slashed{\nabla}_{n_{\overline{\Sigma}}}\alpha|_{\overline{\Sigma}})\in \mathcal{E}^{T,+2}_{\overline{\Sigma}}$ and
\begin{align}
\left\|\left(\alpha|_{\overline{\Sigma}},\slashed{\nabla}_{n_{\overline{\Sigma}}}\alpha|_{\overline{\Sigma}}\right)\right\|^2_{\mathcal{E}^{T,+2}_{\overline{\Sigma}}}=\left|\left|\upalpha_{\mathscr{I}^+}\right|\right|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}+\left|\left|\upalpha_{\mathscr{H}^+}\right|\right|^2_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}}.
\end{align}
\end{corollary}
\subsubsection{A pointwise estimate near $i^0$ in backwards scattering}\label{subsubsection 8.1.3 pointwise estimate near i0}
As an aside, if $\upalpha_{\mathscr{I}^+}$ is compactly supported we can use the backwards $r^p$-estimates of \Cref{backwards rp estimates} to obtain better decay for $\alpha,\psi$ towards $i^0$. We illustrate this point in what follows:
\begin{proposition}
Let $\alpha$ be the solution to (\ref{T+2}) arising from scattering data $\upalpha_{\mathscr{H}^+}\in \Gamma_c (\mathscr{H}^+_{\geq0}), \upalpha_{\mathscr{I}^+}\in \Gamma_c (\mathscr{I}^+)$ as in \Cref{+2 backwards existence}. Then $r^5\psi|_{\Sigma^*}, r^5\alpha|_{\Sigma^*}\longrightarrow 0$ as $r\longrightarrow\infty$. The same applies when $\Sigma^*$ is replaced by $\Sigma$ or $\overline\Sigma$.
\end{proposition}
\begin{proof}
Given that $\bm{\uppsi}_{\mathscr{I}^+}=\partial_u^2\upalpha_{\mathscr{I}^+}$ is compactly supported, we already know that $\Psi|_{\Sigma^*,r=R}\longrightarrow 0$ as $R\longrightarrow \infty$. We first work with $r^5\psi$, for which we can derive a similar estimate to (\ref{backwards estimate +2 Gronwall}): Let $u<u'<u_-$ and take $(u,v,\theta^A)\in\Sigma^*$, $v-u:=R^*$. Integrating \cref{+2 Gronwall ingredient} in $u$ on $\underline{\mathscr{C}}_v$ between $u,u'$, we obtain:
\begin{align}
\Big|r^5\Omega^{-1}\psi(u,v)-r^5\Omega^{-1}\psi(u',v)\Big|\leq\int_{u}^{u'}\left|\Psi\right|\exp\left[\int_{u}^{u'} \frac{3\Omega^2-1}{r} d\bar{u}\right]\lesssim\left[\int_{u}^{u'}\left|\Psi\right|\right]\left(\frac{r(u',v)}{r(u,v)}\right)^2.
\end{align}
We further compare $\int_{u}^{u'}d\bar{u}\left|\Psi\right|$ to $\int_{-\infty}^{u'}du \left|\bm{\uppsi}_{\mathscr{I}^+}\right|$ via the backwards $r^p$-estimates of \Cref{backwards rp estimates}:
\begin{align}
\left|\int_{u}^{u'}du \left|\Psi\right|-\int_{-\infty}^{u'}du \left|\bm{\uppsi}_{\mathscr{I}^+}\right|\right|^2\leq\left[ \int_{\mathscr{D}}du dv|\Omega\slashed{\nabla}_4\Psi|\right]^2\leq \frac{1}{\sqrt{R}}\int_{\mathscr{D}}du dv \;r^2|\Omega\slashed{\nabla}_4\Psi|^2,
\end{align}
where $\mathscr{D}=J^+(\Sigma^*)\cap J^+(\underline{\mathscr{C}}_{v})\cap J^-(\mathscr{C}_{u'})$. As in \Cref{backwards rp estimates}, we can bound the last integral by the right hand side of (\ref{RWbackwardsdecay}). As $R\longrightarrow\infty$, $\int_{u}^{u'}du \left|\Psi\right|\longrightarrow \int_{-\infty}^{u'}du \left|\bm{\uppsi}_{\mathscr{I}^+}\right|=0$. Consequently $\left|r^5\Omega^{-1}\psi(u,v)-r^5\Omega^{-1}\psi(u',v)\right|\ $ decays as $R\longrightarrow\infty$ and
\begin{align}
\lim_{R\longrightarrow\infty} r^5\psi|_{\Sigma^*,r=R}=0.
\end{align}
We can prove the same for $r^5\alpha|_{\Sigma^*,r=R}$ by repeating the above argument for $\int_{u_-}^{u_+} du\, (u-u_-)\Psi$ and noticing that $\int_{u_-}^{u_+} du\, (u-u_-)\bm{\uppsi}_{\mathscr{I}^+}$ also vanishes, since $\bm{\uppsi}_{\mathscr{I}^+}$ is the second derivative of compactly supported fields on $\mathscr{I}^+$.
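Explicitly, using $\bm{\uppsi}_{\mathscr{I}^+}=\partial_u^2\upalpha_{\mathscr{I}^+}$ and integrating by parts twice (all boundary terms vanish since $\upalpha_{\mathscr{I}^+}$ is supported in $[u_-,u_+]$, so that at $u=u_-$ the weight and $\upalpha_{\mathscr{I}^+}$ vanish, while at $u=u_+$ both $\upalpha_{\mathscr{I}^+}$ and $\partial_u\upalpha_{\mathscr{I}^+}$ vanish):
\begin{align*}
\int_{u_-}^{u_+}du\,(u-u_-)\bm{\uppsi}_{\mathscr{I}^+}=\Big[(u-u_-)\partial_u\upalpha_{\mathscr{I}^+}-\upalpha_{\mathscr{I}^+}\Big]_{u_-}^{u_+}=0.
\end{align*}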
\end{proof}
\subsection{Future scattering for $\underline\alpha$}\label{subsection 8.2 future scattering -2}
Forwards and backwards scattering for the $-2$ Teukolsky equation are worked out entirely analogously to the case of the $+2$ Teukolsky equation, using the scattering theory of the Regge--Wheeler equation and the results of \Cref{subsection 7.2 future radiation fields and fluxes}. In contrast to the $+2$ equation, the transport equation \bref{hier-} relating $\underline\alpha$ and $\underline\Psi$ is sufficient to obtain an estimate for the radiation field near $\mathscr{I}^+$ that is uniform in the future end of the support of $\underline\upalpha_{\mathscr{I}^+}$. Near $\mathscr{H}^+$, on the other hand, $\underline\alpha$ experiences an \textit{enhanced blueshift}, and scattering data must decay exponentially towards the future at a sufficiently fast rate in order for backwards scattering to produce a solution that is smooth near $\mathscr{H}^+$.
\subsubsection{Forwards scattering for $\underline\alpha$}\label{subsubsection 8.2.1 forwards scattering -2}
We put together the ingredients worked out in \Cref{subsection 7.2 future radiation fields and fluxes} to construct the forwards scattering map.
\begin{proof}[Proof of \Cref{-2 future forward scattering}]
Let $\underline\alpha$ be the solution to \cref{T-2} on $J^+(\Sigma^*)$ arising out of a compactly supported data set $(\underline\upalpha,\underline\upalpha')$ on $\Sigma^*$ as in \Cref{WP+2Sigma*}. \Cref{WP-2Sigma*} guarantees the existence of the radiation field $\underline\upalpha_{\mathscr{H}^+}$ as in \Cref{-2 radiation alpha definition H}. \Cref{-2 radiation ptwise decay H} says that $\underline\upalpha_{\mathscr{H}^+}\longrightarrow 0$ towards the future end of $\mathscr{H}^+$. Let $\underline\Psi$ be the solution to \cref{RW} associated to $\underline\alpha$ via \bref{hier-}. The fact that $(\underline\Psi|_{\Sigma^*},\slashed{\nabla}_{T}\underline\Psi|_{\Sigma^*})$ are compactly supported means that the results of \Cref{-2 radiation flux on H+} apply and $\underline\upalpha_{\mathscr{H}^+}\in \mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}$. Similarly, by \Cref{-2 radiation at scri}, $r\underline\alpha$ has a pointwise limit as $v\longrightarrow \infty$ which induces a smooth $\underline\upalpha_{\mathscr{I}^+}$ on $\mathscr{I}^+$. \Cref{-2 alpha radiation decay} implies that $\underline\upalpha_{\mathscr{I}^+}$ decays towards the future end of $\mathscr{I}^+$. As $\underline{\bm{\uppsi}}_{\mathscr{I}^+}\in \mathcal{E}^T_{\mathscr{I}^+}$, we have that
\begin{align}\label{-2 term in L2 on scri+}
\mathcal{A}_2(\mathcal{A}_2-2)\int_u^\infty d\bar{u}\,\underline\upalpha_{\mathscr{I}^+}-6M\underline\upalpha_{\mathscr{I}^+}\in L^2(\mathscr{I}^+).
\end{align}
The fact that $\underline\alpha$ arises from data of compact support means that \bref{-2 mean is 0} applies. This implies upon evaluating the $L^2(\mathscr{I}^+)$ norm of the left hand side of \bref{-2 term in L2 on scri+} that $\underline\upalpha_{\mathscr{I}^+}\in\mathcal{E}^{T,-2}_{\mathscr{I}^+}$.
\end{proof}
\begin{corollary}\label{-2 future forward scattering Sigma Sigmabar}
Solutions to (\ref{T-2}) arising from data on ${\Sigma}$ of compact support give rise to smooth radiation fields in $\mathcal{E}_{\mathscr{I}^+}^{T,-2}$ and $\mathcal{E}_{{\mathscr{H}^+}}^{T,-2}$. Solutions to (\ref{T-2}) arising from data on $\overline{\Sigma}$ of compact support give rise to smooth radiation fields in $\mathcal{E}_{\mathscr{I}^+}^{T,-2}$ and $\mathcal{E}_{\overline{\mathscr{H}^+}}^{T,-2}$.
\end{corollary}
\begin{proof}
Identical to the proof of \Cref{RWfcpSigma} using \Cref{WP-2Sigmabar,,backwards wellposedness -2}.
\end{proof}
The proof of \Cref{-2 future forward scattering} above and \Cref{-2 future forward scattering Sigma Sigmabar} allow us to define the forwards maps ${}^{(-2)}\mathscr{F}^+$ from dense subspaces of $\mathcal{E}^{T,-2}_{\Sigma^*}$, $\mathcal{E}^{T,-2}_{\Sigma}$, $\mathcal{E}^{T,-2}_{\overline{\Sigma}}$.
\begin{defin}
Let $(\underline\upalpha,\underline\upalpha')$ be a smooth data set of compact support for the $-2$ Teukolsky equation \bref{T-2} on $\Sigma^*$ as in \Cref{WP-2Sigma*}. Define the map ${}^{(-2)}\mathscr{F}^+$ by
\begin{align}
{}^{(-2)}\mathscr{F}^+:\Gamma_c(\Sigma^*)\times\Gamma_c(\Sigma^*)\longrightarrow \Gamma(\mathscr{H}^+_{\geq0})\times\Gamma(\mathscr{I}^+), (\underline\upalpha,\underline\upalpha')\longrightarrow (\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+}),
\end{align}
where $(\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})$ are as in the proof of \Cref{-2 future forward scattering}.\\
\indent Using \Cref{-2 future forward scattering Sigma Sigmabar}, the map ${}^{(-2)}\mathscr{F}^+$ is defined analogously for data on $\Sigma, \overline{\Sigma}$:
\begin{align}
{}^{(-2)}\mathscr{F}^+:\Gamma_c(\Sigma)\times\Gamma_c(\Sigma)\longrightarrow \Gamma(\mathscr{H}^+)\times\Gamma(\mathscr{I}^+), (\underline\upalpha,\underline\upalpha')\longrightarrow (\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+}),\\
{}^{(-2)}\mathscr{F}^+:\Gamma_c(\overline{\Sigma})\times\Gamma_c(\overline{\Sigma})\longrightarrow \Gamma(\overline{\mathscr{H}^+})\times\Gamma(\mathscr{I}^+), (\underline\upalpha,\underline\upalpha')\longrightarrow (\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+}).
\end{align}
\end{defin}
\subsubsection{Backwards scattering for $\underline\alpha$}\label{subsubsection 8.2.2 backwards scattering -2}
Now we construct the inverse ${}^{(-2)}\mathscr{B}^-$ of \Cref{-2 future backward scattering} on a dense subspace of $\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+}$. The existence of a solution to \bref{T-2} out of compactly supported scattering data on $\mathscr{H}^+_{\geq0}, \mathscr{I}^+$ is shown in \Cref{-2 backwards existence}. Showing that this solution defines an element of $\mathcal{E}^{T,-2}_{\Sigma^*}$ is done in \Cref{-2 backwards inclusion 7/2}.
\begin{proposition}\label{-2 backwards existence}
For $\underline\upalpha_{\mathscr{H}^+}\in\Gamma(\mathscr{H}^+_{\geq0})\cap\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}$ supported on $\mathscr{H}^+_{\geq0}\cap\{v<v_+\}$ for $v_+<\infty$, and $\underline\upalpha_{\mathscr{I}^+}\in\Gamma(\mathscr{I}^+)\cap\mathcal{E}^{T,-2}_{\mathscr{I}^+}$ supported on $\mathscr{I}^+\cap\{u<u_+\}$ for $u_+<\infty$, there exists a unique solution $\underline\alpha$ to \bref{T-2} in $J^+(\Sigma^*)$ that realises $\underline\upalpha_{\mathscr{H}^+}$ and $\underline\upalpha_{\mathscr{I}^+}$ as its radiation fields on $\mathscr{H}^+_{\geq0}$, $\mathscr{I}^+$ respectively.
\end{proposition}
\begin{remark}
The fact that $\underline\upalpha_{\mathscr{I}^+}\in\mathcal{E}^{T,-2}_{\mathscr{I}^+}$ automatically implies that $\int_{-\infty}^\infty d\bar{u}\; \underline\upalpha_{\mathscr{I}^+}=0$.
\end{remark}
\begin{proof}
Let $\widetilde{\Sigma}$ be a spacelike surface connecting $\mathscr{H}^+$ at a finite $v_*>v_+$ to $\mathscr{I}^+$ at a finite $u_*>u_+$. Denote by $\mathscr{D}$ the region bounded by $\mathscr{H}^+_{\geq 0}\cap\{v<v_+\}$, $\widetilde{\Sigma}$, $
\mathscr{I}^+\cap[u_-,u_+]$, $\Sigma^*$ and $\mathscr{C}_{u_-}$ for $u_->-\infty$. We define
\begin{align}
\underline{{\psi}}_{\mathscr{H}^+}&=\frac{2}{(2M)^2} \underline\upalpha_{\mathscr{H}^+} +\frac{1}{2M}\partial_v \underline\upalpha_{\mathscr{H}^+}, \label{backwards psibar H} \\
\underline{\bm{\uppsi}}_{\mathscr{H}^+}&=2(2M)^2\underline\upalpha_{\mathscr{H}^+}+2(2M)^3\partial_v \underline\upalpha_{\mathscr{H}^+}+(2M)^4\partial_v^2\underline\upalpha_{\mathscr{H}^+}, \label{backwards P sibar H}\\
\underline{{\psi}}_{\mathscr{I}^+}&=-\int_u^\infty d\bar{u}\; \mathcal{A}_2 \;\underline\upalpha_{\mathscr{I}^+},\label{backwards psibar I}\\
\underline{\bm{\uppsi}}_{\mathscr{I}^+}&=\int_u^\infty d\bar{u}\; (\bar{u}-u) \left[\mathcal{A}_2(\mathcal{A}_2-2)\underline\upalpha_{\mathscr{I}^+}+6M\partial_{\bar{u}} \underline\upalpha_{\mathscr{I}^+}\right]. \label{backwards P sibar I}
\end{align}
We can find a unique solution $\underline\Psi$ to \bref{RW} with radiation fields $\underline{\bm{\uppsi}}_{\mathscr{I}^+}$, $\underline{\bm{\uppsi}}_{\mathscr{H}^+}$. Let
\begin{align}
r^3\Omega\underline\psi=(2M)^3\underline{{\psi}}_{\mathscr{I}^+}-\int_v^\infty d\bar{v}\;\frac{\Omega^2}{r^2}\underline\Psi,\qquad\qquad r\Omega^2\underline\alpha=\underline\upalpha_{\mathscr{I}^+}-\int_v^\infty d\bar{v}\; r\Omega^3\underline\psi.
\end{align}
Then $\underline\psi, \underline\alpha$ satisfy:
\begin{align}
\underline\Psi=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 r^3\Omega\underline\psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2r\Omega^2\underline\alpha.
\end{align}
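Indeed, the two definitions above are precisely the versions of the transport relations
\begin{align*}
\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\, r^3\Omega\underline\psi=\underline\Psi,\qquad\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\, r\Omega^2\underline\alpha=r^3\Omega\underline\psi,
\end{align*}
integrated from $v=\infty$, and composing the two relations gives the display above.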
Moreover, we can see that $\lim_{v\longrightarrow\infty}r^3\Omega\underline\psi(u,v,\theta^A)=(2M)^3\underline{{\psi}}_{\mathscr{I}^+}(u,\theta^A)$ uniformly in $u$, as
\begin{align}
\int_{S^2}|r^3\Omega\underline\psi-(2M)^3\underline{{\psi}}_{\mathscr{I}^+}|^2=\int_{S^2}\left[\int_v^\infty \frac{\Omega^2}{r^2}\underline\Psi d\bar{v}\right]^2\lesssim \frac{1}{r}F^T_u[\underline\Psi](v,\infty),
\end{align}
and similarly $\lim_{v\longrightarrow\infty}r\Omega^2\underline\alpha(u,v,\theta^A)=\underline\upalpha_{\mathscr{I}^+}(u,\theta^A)$ uniformly in $u$. We can repeat the same for $\slashed{\nabla}_T,\mathring{\slashed{\nabla}}$-derivatives of $r\Omega^2\underline\alpha, r^3\Omega\underline\psi$, which immediately implies that $\partial_u r^3\Omega\underline\psi\longrightarrow \partial_u \underline{{\psi}}_{\mathscr{I}^+}$, $\partial_u r\Omega^2\underline\alpha\longrightarrow \partial_u \underline\upalpha_{\mathscr{I}^+}$ as $v\longrightarrow\infty$. \\
The commutation relation \bref{commutation relation 2} implies
\begin{align}
\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2\mathcal{T}^{-2}r\Omega^2\underline\alpha=0.
\end{align}
We find $\mathcal{T}^{-2}r\Omega^2\underline\alpha$ and $\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 \mathcal{T}^{-2}r\Omega^2\underline\alpha$:
\begin{align}
\mathcal{T}^{-2}r\Omega^2\underline\alpha&=\Omega\slashed{\nabla}_3 r^3\Omega\underline\psi-\frac{3\Omega^2-1}{r}r^3\Omega\underline\psi-\left(\mathcal{A}_2-\frac{6M}{r}\right)r\Omega^2\underline\alpha,\\
\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\mathcal{T}^{-2}r\Omega^2\underline\alpha&=\Omega\slashed{\nabla}_3\underline\Psi-\left[\mathcal{A}_2-(3\Omega^2-1)\right]r^3\Omega\underline\psi-6M r\Omega^2\underline\alpha.
\end{align}
It is not hard to see from \bref{backwards psibar H}, \bref{backwards psibar I}, \bref{backwards P sibar H}, \bref{backwards P sibar I}, that in the limit $v\longrightarrow\infty$, $\mathcal{T}^{-2}r\Omega^2\underline\alpha$ and $\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 \mathcal{T}^{-2}r\Omega^2\underline\alpha$ vanish. This implies that $\underline\alpha$ satisfies $\mathcal{T}^{-2}r\Omega^2{\underline\alpha}=0$ on $\mathscr{D}$. It is also clear that $\Omega^{-2}\underline\alpha|_{\mathscr{H}^+}=\underline\upalpha_{\mathscr{H}^+}$. Finally, we can repeat the above to extend $\underline\alpha$ to $J^+(\Sigma^*)\cap\{u\geq \tilde{u}\}$ for arbitrarily small $\tilde{u}$.
\end{proof}
\indent Note that energy conservation translates to the following $r$-weighted estimates that are uniform in $u$ as $u\longrightarrow -\infty$:
\begin{align}
\int_{{\mathscr{C}}_u} \frac{r^2}{\Omega^2}|\Omega\slashed{\nabla}_4 r^3\Omega\underline\psi|^2&\leq {F}^T_u[\underline\Psi](v,\infty),\label{334}\\
\int_{{\mathscr{C}}_u} \frac{r^2}{\Omega^2}|\Omega\slashed{\nabla}_4 r\Omega^2\underline\alpha|^2&\lesssim \int_{{\mathscr{C}}_u} |\Omega\slashed{\nabla}_4 r^3\Omega\underline\psi|^2 \lesssim \frac{1}{r^2}{F}^T_u[\underline\Psi](v,\infty).\label{335}
\end{align}
This can be traced to the good sign of the first order term in \cref{T-2} near $\mathscr{I}^+$ when evolving backwards, and similar estimates can in fact be derived directly from \cref{T-2}. We can deduce
\begin{proposition}\label{-2 backwards inclusion 7/2}
Let $\underline\upalpha_{\mathscr{H}^+}, \underline\upalpha_{\mathscr{I}^+}$ be as in \Cref{-2 backwards existence}. Let $\underline\alpha$ be the corresponding solution to \cref{T-2}. Then we have that $(\Omega^{-2}\underline\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Omega^{-2}\underline\alpha|_{\Sigma^*})\in\mathcal{E}^{T,-2}_{\Sigma^*}$.
\end{proposition}
\begin{proof}
Using \bref{334} and \bref{335}, the argument of \Cref{+2 backwards inclusion 7/2} shows that $\lim_{r\longrightarrow\infty}\left|r^{\frac{7}{2}}\underline\psi|_{\Sigma^*}\right|=\lim_{r\longrightarrow\infty}\left|r^{\frac{7}{2}}\underline\alpha|_{\Sigma^*}\right|=0$, and repeating the remaining steps of that proof gives the result.
\end{proof}
\begin{defin}\label{-2 definition of B-}
Let $\underline\upalpha_{\mathscr{H}^+}, \underline\upalpha_{\mathscr{I}^+}$ be as in \Cref{-2 backwards existence}. Define the map ${}^{(-2)}\mathscr{B}^-$ by
\begin{align}
{}^{(-2)}\mathscr{B}^-:\Gamma_c(\mathscr{H}^+_{\geq0})\times\Gamma_c(\mathscr{I}^+)\longrightarrow\Gamma(\Sigma^*)\times\Gamma(\Sigma^*), (\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})\longrightarrow (\Omega^{-2}\underline\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Omega^{-2}\underline\alpha|_{\Sigma^*}),
\end{align}
where $\underline\alpha$ is the solution to \bref{T-2} arising from scattering data $(\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})$ as in \Cref{-2 backwards existence}.
\end{defin}
\begin{corollary}\label{-2B- inverts -2F+}
The maps ${}^{(-2)}\mathscr{F}^+$, ${}^{(-2)}\mathscr{B}^-$ extend uniquely to unitary Hilbert space isomorphisms on their respective domains, such that ${}^{(-2)}\mathscr{F}^+\circ{}^{(-2)}\mathscr{B}^-=Id$, ${}^{(-2)}\mathscr{B}^-\circ{}^{(-2)}\mathscr{F}^+=Id$.
\end{corollary}
\begin{remark}\label{unitarity of -2B- is trivial}
As in the case of \Cref{unitarity of B- is trivial,,unitarity of +2B- is trivial}, \Cref{-2B- inverts -2F+} implies
\begin{align}\label{unitarity of -2B- formula}
\|{}^{(-2)}\mathscr{B}^-(\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})\|_{\mathcal{E}^{T,-2}_{\Sigma^*}}^2=\|\underline\upalpha_{\mathscr{H}^+}\|^2_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}+\|\underline\upalpha_{\mathscr{I}^+}\|^2_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}.
\end{align}
As in the case of \Cref{RW unitary backwards}, we can use the backwards $r^p$-estimates of \Cref{backwards rp estimates} to directly show \bref{unitarity of -2B- formula} without reference to the forwards map ${}^{(-2)}\mathscr{F}^+$.
\end{remark}
Since the region $J^+(\overline{\Sigma})\cap J^-(\Sigma^*)$ can be handled locally via \Cref{WP-2Sigmabar}, \Cref{backwards wellposedness -2} and $T$-energy conservation, we can immediately deduce the following:
\begin{corollary}
The map ${}^{(-2)}\mathscr{B}^-$ can be defined on the following domains:
\begin{align}
{}^{(-2)}\mathscr{B}^{-}:\mathcal{E}^{T,-2}_{\mathscr{H}^+}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,-2}_{\Sigma},\\
{}^{(-2)}\mathscr{B}^{-}:\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,-2}_{\overline{\Sigma}},
\end{align}
and we have
\begin{align}
{}^{(-2)}\mathscr{F}^{+}\circ{}^{(-2)}\mathscr{B}^{-}=Id_{\mathcal{E}^{T,-2}_{\mathscr{H}^+}\oplus\;\mathcal{E}^{T,-2}_{\mathscr{I}^+}},\qquad
{}^{(-2)}\mathscr{B}^{-}\circ{}^{(-2)}\mathscr{F}^{+}=Id_{\mathcal{E}^{T,-2}_{\Sigma}},\\
{}^{(-2)}\mathscr{F}^{+}\circ{}^{(-2)}\mathscr{B}^{-}=Id_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\oplus\;\mathcal{E}^{T,-2}_{\mathscr{I}^+}},\qquad
{}^{(-2)}\mathscr{B}^{-}\circ{}^{(-2)}\mathscr{F}^{+}=Id_{\mathcal{E}^{T,-2}_{\overline{\Sigma}}}.
\end{align}
\end{corollary}
This concludes the proof of \Cref{-2 future backward scattering}.
\subsubsection{Non-compact future scattering data and the blueshift effect}\label{subsubsection 8.3 blueshift -2}
\indent In contrast to \bref{334}, \bref{335} (and to the estimates of \Cref{nondegenerate estimate near H+}), estimates for $\Omega^{-2}\underline\alpha$ near $\mathscr{H}^+$ in the backwards direction suffer from an enhanced blueshift, which can be readily seen in the transport equations \bref{hier-}:
\begin{align}
\Omega\slashed{\nabla}_4 r^3\Omega^{-1}\underline\psi+\frac{2M}{r^2} r^3\Omega^{-1}\underline\psi=\frac{\underline\Psi}{r^2}.
\end{align}
For $r<\mathcal{R}_{\mathscr{H}^+}<3M$, we can derive
\begin{align}\label{this 3}
\begin{split}
&\int_{S^2_{u,v}}|r^3\Omega^{-1}\underline\psi - (2M)^3\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2\lesssim \underbrace{\int_{S^2_{u,v_+}}|r^3\Omega^{-1}\underline\psi - (2M)^3\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2}_{=0}\\
&+\frac{1}{M}\int_v^{v_+}d\bar{v}\int_{S^2_{u,\bar{v}}}|r^3\Omega^{-1}\underline\psi - (2M)^3\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2+\frac{1}{(2M)^2}\int_{v}^{v_+}d\bar{v}\int_{S^2_{u,\bar{v}}}|\underline\Psi-\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2.
\end{split}
\end{align}
Gr\"onwall's inequality and \bref{ptwise horizon} imply
\begin{align}
\begin{split}
\int_{S^2_{u,v}}|r^3\Omega^{-1}\underline\psi - (2M)^3\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2\lesssim_{v_+} e^{\frac{1}{M}(v_+-v)}\left[\|\underline{\bm{\uppsi}}_{\mathscr{I}^+}\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\|\underline{\bm{\uppsi}}_{\mathscr{H}^+}\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2+\int_{\mathscr{H}^+\cap[v,v_+]}|\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2+|\mathring{\slashed{\nabla}}\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2\right].
\end{split}
\end{align}
The equation
\begin{align}
\Omega\slashed{\nabla}_4 r\Omega^{-2}\underline\alpha +\frac{4M}{r^2}r\Omega^{-2}\underline\alpha=r\Omega^{-1}\underline\psi
\end{align}
implies a similar estimate with a worse exponential factor
\begin{align}
\begin{split}
\int_{S^2_{u,v}}|r\Omega^{-2}\underline\alpha - 2M\underline\upalpha_{\mathscr{H}^+}|^2\lesssim_{v_+} e^{\frac{2}{M}(v_+-v)}\left[\|\underline{\bm{\uppsi}}_{\mathscr{I}^+}\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\|\underline{\bm{\uppsi}}_{\mathscr{H}^+}\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2+\int_{\mathscr{H}^+\cap[v,v_+]}|\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2+|\mathring{\slashed{\nabla}}\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2\right].
\end{split}
\end{align}
\indent We conclude that the statement of the backwards existence theorem continues to hold when the scattering data are not compactly supported; however, the solution need not be smooth at $\mathscr{H}^+$ unless the data decay exponentially. When they do, smoothness follows by applying the following lemma to \bref{this 3}:
\begin{lemma}
Let $f(v)>0$ and assume
\begin{align}
f(v)\leq \Lambda \int_v^{v_+} f(\bar{v})\, d\bar{v} + e^{-Pv}
\end{align}
for all $v<v_+$. Then if $P>\Lambda$ we have
\begin{align}
f(v)< \frac{P}{P-\Lambda}e^{-Pv}.
\end{align}
\end{lemma}
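The lemma admits a quick numerical sanity check (not part of the argument): in the extremal case, where the hypothesis holds with equality, $f$ solves an ODE with a closed-form solution, against which the claimed bound can be verified directly. The parameters below are an arbitrary illustrative choice with $P>\Lambda$.

```python
import math

# Illustrative parameters with P > Lambda, as the lemma requires.
LAM, P, V_PLUS = 1.0, 3.0, 2.0

def f(v):
    # Extremal case: the hypothesis holds with equality, i.e.
    # f(v) = LAM * int_v^{V_PLUS} f(s) ds + exp(-P*v). Differentiating gives
    # f'(v) = -LAM*f(v) - P*exp(-P*v) with f(V_PLUS) = exp(-P*V_PLUS),
    # whose closed-form solution is:
    return (P/(P - LAM))*math.exp(-P*v) \
        - (LAM/(P - LAM))*math.exp((LAM - P)*V_PLUS)*math.exp(-LAM*v)

def bound(v):
    # the bound claimed by the lemma
    return (P/(P - LAM))*math.exp(-P*v)

def rhs(v, n=100000):
    # trapezoidal approximation of LAM * int_v^{V_PLUS} f(s) ds + exp(-P*v)
    h = (V_PLUS - v)/n
    s = 0.5*(f(v) + f(V_PLUS)) + sum(f(v + i*h) for i in range(1, n))
    return LAM*h*s + math.exp(-P*v)

for v in (-1.0, 0.0, 1.5):
    assert abs(f(v) - rhs(v)) < 1e-6   # hypothesis holds (with equality)
    assert f(v) < bound(v)             # conclusion of the lemma
print("lemma verified on the extremal solution")
```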
With this, we see that if $\underline\upalpha_{\mathscr{H}^+}$, $\underline\upalpha_{\mathscr{I}^+}$ decay exponentially at a rate faster than $\frac{1}{M}$ then we are guaranteed that
\begin{align}
\int_{S^2_{u,v}}|r\Omega^{-2}\underline\alpha - 2M\underline\upalpha_{\mathscr{H}^+}|^2\lesssim \left[\|\underline{\bm{\uppsi}}_{\mathscr{I}^+}\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\|\underline{\bm{\uppsi}}_{\mathscr{H}^+}\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2+\int_{\mathscr{H}^+\cap[v,v_+]}|\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2+|\mathring{\slashed{\nabla}}\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2\right].
\end{align}
\begin{corollary}\label{-2 noncompact}
Let $\underline\upalpha_{\mathscr{H}^+}$ be a smooth symmetric traceless $S^2_{\infty,v}$ 2-tensor field with domain $\mathscr{H}^+$ and let $\underline\upalpha_{\mathscr{I}^+}$ be a smooth symmetric traceless $S^2_{u,\infty}$ 2-tensor field with domain $\mathscr{I}^+$. Then there exists a unique $\underline\alpha$ that is smooth on the interior of $J^+(\Sigma^*)$, satisfies \bref{T-2} and realises $\underline\upalpha_{\mathscr{H}^+}, \underline\upalpha_{\mathscr{I}^+}$ as its radiation fields. If $\underline\upalpha_{\mathscr{H}^+}, \underline\upalpha_{\mathscr{I}^+}$ decay exponentially towards the future at a rate faster than $\frac{1}{M}$ then $\Omega^{-2}\underline\alpha$ is smooth up to and including $\mathscr{H}^+$.
\end{corollary}
Since the region $J^+(\overline\Sigma)\cap J^-(\Sigma^*)$ can be handled locally with \Cref{WP-2Sigmabar} and \Cref{RWfcpSigma},
the results above can be extended to scattering from $\Sigma, \overline\Sigma$.
\begin{corollary}
Let $\underline\upalpha_{\mathscr{H}^+}\in\Gamma(\mathscr{H}^+)\cap\;\mathcal{E}^{T,-2}_{\mathscr{H}^+}$, $\underline\upalpha_{\mathscr{I}^+}\in\Gamma (\mathscr{I}^+)\cap\;\mathcal{E}^{T,-2}_{\mathscr{I}^+}$. Assume $\underline\upalpha_{\mathscr{H}^+}$, $\underline\upalpha_{\mathscr{I}^+}$ decay exponentially at a rate faster than $\frac{1}{M}$. Then there exists a unique solution $\underline\alpha$ to \cref{T-2} in $J^+({\Sigma})$ such that $\lim_{v\longrightarrow\infty}r\underline\alpha=\underline{\upalpha}_{\mathscr{I}^+}$, $2M\Omega^{-2}\underline\alpha\big|_{\mathscr{H}^+}=\underline\upalpha_{\mathscr{H}^+}$. Moreover, $(\underline\alpha\big|_{{\Sigma}},\slashed{\nabla}_T\underline\alpha|_{{\Sigma}})\in \mathcal{E}^{T,-2}_{{\Sigma}}$ and
\begin{align}
\left\|\left(\underline\alpha|_{{\Sigma}},\slashed{\nabla}_{n_{\Sigma}}\underline\alpha|_{{\Sigma}}\right)\right\|^2_{\mathcal{E}^{T,-2}_{{\Sigma}}}=\left|\left|\underline\upalpha_{\mathscr{I}^+}\right|\right|^2_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}+\left|\left|\underline\upalpha_{\mathscr{H}^+}\right|\right|^2_{\mathcal{E}^{T,-2}_{{\mathscr{H}^+}}}.
\end{align}
\end{corollary}
\begin{corollary}
Let $\underline\upalpha_{\mathscr{H}^+}\in\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}$ be such that $V^{2}\underline\upalpha_{\mathscr{H}^+}\in \Gamma({\overline{\mathscr{H}^+}})$ and let $\underline\upalpha_{\mathscr{I}^+}\in\Gamma(\mathscr{I}^+)\cap\;\mathcal{E}^{T,-2}_{\mathscr{I}^+}$. Assume $\underline\upalpha_{\mathscr{H}^+}$, $\underline\upalpha_{\mathscr{I}^+}$ decay exponentially at a rate faster than $\frac{1}{M}$. Then there exists a unique solution $\underline\alpha$ to \cref{T-2} in $J^+(\overline{\Sigma})$ such that $\lim_{v\longrightarrow\infty}r\underline\alpha=\underline\upalpha_{\mathscr{I}^+}$, $2MV^{2}\Omega^{-2}\underline\alpha\big|_{\mathscr{H}^+}=V^{2}\underline\upalpha_{\mathscr{H}^+}$. Moreover, $(\underline\alpha\big|_{\overline{\Sigma}},\slashed{\nabla}_T\underline\alpha|_{\overline{\Sigma}})\in \mathcal{E}^{T,-2}_{\overline{\Sigma}}$ and
\begin{align}
\left\|\left(\underline\alpha|_{\overline{\Sigma}},\slashed{\nabla}_{n_{\overline{\Sigma}}}\underline\alpha|_{\overline{\Sigma}}\right)\right\|^2_{\mathcal{E}^{T,-2}_{\overline{\Sigma}}}=\left|\left|\underline\upalpha_{\mathscr{I}^+}\right|\right|^2_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}+\left|\left|\underline\upalpha_{\mathscr{H}^+}\right|\right|^2_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}}.
\end{align}
\end{corollary}
\subsection{Past scattering for $\alpha, \underline\alpha$}\label{subsection 8.3 past scattering +2-2}
Taking into account \Cref{time inversion}, \Cref{+2 past forward scattering,-2 past forward scattering} are immediate. We state the results regarding scattering on $J^-(\overline{\Sigma})$.
\begin{corollary}\label{past scattering of +2}
Given smooth data of compact support $(\upalpha,\upalpha')\in \mathcal{E}^{T,+2}_{\overline{\Sigma}}$, there exists a unique solution $\alpha$ to the +2 Teukolsky equation \bref{T+2} on $J^-(\overline{\Sigma})$ that induces smooth radiation fields
\begin{itemize}
\item $\upalpha_{\mathscr{I}^-}\in \mathcal{E}^{T,+2}_{\mathscr{I}^-}$ given by $\upalpha_{\mathscr{I}^-}(v,\theta^A)=\lim_{u\longrightarrow -\infty} r\alpha(u,v,\theta^A)$,
\item $\upalpha_{\mathscr{H}^-}\in \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}$ given by $U^{2}\upalpha_{\mathscr{H}^-}=2MU^2\Omega^{-2}\alpha|_{\mathscr{H}^-}$,
\end{itemize}
such that
\begin{align}\label{109091}
\left\|\left(\alpha|_{\overline{\Sigma}},\slashed{\nabla}_T\alpha|_{\overline{\Sigma}}\right)\right\|^2_{\mathcal{E}^{T,+2}_{\overline{\Sigma}}}=\left|\left|\upalpha_{\mathscr{I}^-}\right|\right|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^-}}+\left|\left|\upalpha_{\mathscr{H}^-}\right|\right|^2_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}}.
\end{align}
Let $\upalpha_{\mathscr{H}^-}\in\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}$ be such that $U^{2}\upalpha_{\mathscr{H}^-}\in \Gamma({\overline{\mathscr{H}^-}})$ and let $\upalpha_{\mathscr{I}^-}\in\Gamma(\mathscr{I}^-)\cap\;\mathcal{E}^{T,+2}_{\mathscr{I}^-}$. Assume $\upalpha_{\mathscr{H}^-}$, $\upalpha_{\mathscr{I}^-}$ decay exponentially at a rate faster than $\frac{1}{M}$. Then there exists a unique solution $\alpha$ to \cref{T+2} in $J^-(\overline{\Sigma})$ such that $\lim_{u\longrightarrow-\infty}r\alpha=\upalpha_{\mathscr{I}^-}$, $2MU^{2}\Omega^{-2}\alpha\big|_{\mathscr{H}^-}=U^{2}\upalpha_{\mathscr{H}^-}$. Moreover, $(\alpha\big|_{\overline{\Sigma}},\slashed{\nabla}_T\alpha|_{\overline{\Sigma}})\in \mathcal{E}^{T,+2}_{\overline{\Sigma}}$ and \bref{109091} is satisfied.\\
Therefore, as in the case of ${}^{(+2)}\mathscr{F}^+,{}^{(+2)}\mathscr{B}^-$ we can define the unitary isomorphisms
\begin{align}
{}^{(+2)}\mathscr{F}^-:\mathcal{E}^{T,+2}_{\overline\Sigma}\longrightarrow\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^-},\qquad\qquad {}^{(+2)}\mathscr{B}^+:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow\mathcal{E}^{T,+2}_{\overline\Sigma},
\end{align}
with
\begin{align}
{}^{(+2)}\mathscr{F}^-\circ {}^{(+2)}\mathscr{B}^+=Id_{\mathcal{E}^{T,+2}_{\overline{\Sigma}}},\qquad\qquad{}^{(+2)}\mathscr{B}^+\circ{}^{(+2)}\mathscr{F}^-=Id_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^-}}.
\end{align}
An identical statement holds with $\mathcal{E}^{T,+2}_{{\Sigma}}, \mathcal{E}^{T,+2}_{{\mathscr{H}^-}}$ instead.
\end{corollary}
\begin{corollary}\label{past scattering of -2}
Given smooth data of compact support $(\underline\upalpha,\underline\upalpha')\in \mathcal{E}^{T,-2}_{\overline{\Sigma}}$, there exists a unique solution $\underline\alpha$ to the -2 Teukolsky equation \bref{T-2} on $J^-(\overline\Sigma)$ that induces smooth radiation fields
\begin{itemize}
\item $\underline\upalpha_{\mathscr{I}^-}\in \mathcal{E}^{T,-2}_{\mathscr{I}^-}$ given by $\underline\upalpha_{\mathscr{I}^-}(v,\theta^A)=\lim_{u\longrightarrow -\infty} r^5\underline\alpha(u,v,\theta^A)$,
\item $\underline\upalpha_{{\mathscr{H}^-}}\in \mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}$ given by $U^{-2}\underline\upalpha_{\mathscr{H}^-}=2MU^{-2}\Omega^{2}\underline\alpha|_{\mathscr{H}^-}$,
\end{itemize}
such that
\begin{align}\label{190190190}
\left\|\left(\underline\alpha|_{\overline{\Sigma}},\slashed{\nabla}_T\underline\alpha|_{\overline{\Sigma}}\right)\right\|^2_{\mathcal{E}^{T,-2}_{\overline{\Sigma}}}=\left|\left|\underline\upalpha_{\mathscr{I}^-}\right|\right|^2_{\mathcal{E}^{T,-2}_{\mathscr{I}^-}}+\left|\left|\underline\upalpha_{\mathscr{H}^-}\right|\right|^2_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}}.
\end{align}
Let $\underline\upalpha_{\mathscr{H}^-}\in\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}$ be such that $U^{-2}\underline\upalpha_{\mathscr{H}^-}\in \Gamma({\overline{\mathscr{H}^-}})$ and let $\underline\upalpha_{\mathscr{I}^-}\in\Gamma(\mathscr{I}^-)\cap\;\mathcal{E}^{T,-2}_{\mathscr{I}^-}$. Then there exists a unique solution $\underline\alpha$ to \cref{T-2} in $J^-(\overline{\Sigma})$ such that $\lim_{u\longrightarrow-\infty}r^5\underline\alpha=\underline\upalpha_{\mathscr{I}^-}$, $2MU^{-2}\Omega^2\underline\alpha\big|_{\mathscr{H}^-}=U^{-2}\underline\upalpha_{\mathscr{H}^-}$. Moreover, $(\underline\alpha\big|_{\overline{\Sigma}},\slashed{\nabla}_T\underline\alpha|_{\overline{\Sigma}})\in \mathcal{E}^{T,-2}_{\overline{\Sigma}}$ and \bref{190190190} is satisfied. An identical statement holds with $\mathcal{E}^{T,-2}_{{\Sigma}}, \mathcal{E}^{T,-2}_{{\mathscr{H}^-}}$ instead.
\end{corollary}
Finally, note that using \Cref{past scattering of +2,past scattering of -2}, the proof of \Cref{scatteringthm+2} and \Cref{scatteringthm-2} is immediate.
\numberwithin{lemma}{section}
\numberwithin{proposition}{section}
\numberwithin{corollary}{section}
\numberwithin{remark}{section}
\section{Teukolsky--Starobinsky Correspondence}\label{section 9 TS correspondence}
We now turn to the proof of \Cref{Theorem 3} of the introduction, whose detailed statement is contained in \Cref{Theorem 3 detailed statement}. We start by stating in Section 9.1 some useful algebraic relations satisfied by the constraints \bref{eq:227intro1}, \bref{eq:228intro1}. We then study the constraints on scattering data in Section 9.2 to construct the maps $\mathcal{TS}_{\mathscr{H}^\pm}, \mathcal{TS}_{\mathscr{I}^\pm}$, and in Section 9.3 we use the results of Sections 9.1 and 9.2 to show that the constraints are propagated by solutions arising from scattering data consistent with them, culminating in the proof of \Cref{Corollary 1} of the introduction in Section 9.4.
\subsection{Some algebraic properties of the Teukolsky--Starobinsky identities}\label{subsection 9.1 algebraic properties of TS}
\indent Let $\alpha$ be a solution to the $+2$ Teukolsky equation and let $\Psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\alpha$; then the commutation relation \bref{commutation relation} implies that
\begin{align}\label{TS fact 1}
&\mathcal{T}^{-2}\left[ \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\Psi\right]=0.
\end{align}
Similarly, if $\underline\alpha$ satisfies the $-2$ Teukolsky equation and $\underline\Psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2r\Omega^2\underline\alpha$, \bref{commutation relation 2} implies
\begin{align}\label{TS fact 2}
&\mathcal{T}^{+2}\left[ \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\underline\Psi\right]=0.
\end{align}
Note that were $(\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha},\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha})$ to belong to a solution to the full system of equations (\ref{start of full system})-(\ref{Bianchi 0*}) then in fact we would have equations \bref{eq:TS1}, \bref{eq:TS2}:
\begin{align}
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\stackrel{\mbox{\scalebox{0.4}{(1)}}}\Psi-2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}-6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}=0, \label{eq:227}\\
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\Psi}-2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha+6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha=0.\label{eq:228}
\end{align}
\indent Combining \bref{TS fact 1} and \bref{TS fact 2} with the fact that $-2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2$ and $\slashed{\nabla}_T$ commute with both (\ref{T+2}) and (\ref{T-2}) leads to the following. Denote by $\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]$ the expression on the left hand side of (\ref{eq:227}) acting on $\alpha, \underline\alpha$, so that the constraint becomes
\begin{align}\label{TS constraint -}
\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]:=\frac{1}{r^3}\Omega\slashed{\nabla}_3 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\Psi-2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 {\underline\alpha}-6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]{\underline\alpha}=0.
\end{align}
Similarly denote by $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]$ the expression on the left hand side of (\ref{eq:228}) so that the constraint becomes
\begin{align}\label{TS constraint +}
\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]:=\frac{1}{r^3}\Omega\slashed{\nabla}_4 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\underline\Psi-2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 {\alpha}+6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]{\alpha}=0.
\end{align}
\begin{lemma}\label{propagation lemma}
For $\alpha$ satisfying the $+2$ Teukolsky equation (\ref{T+2}) and $\underline\alpha$ satisfying the $-2$ equation (\ref{T-2}), $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]$ also satisfies the $+2$ Teukolsky equation (\ref{T+2}) and $\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]$ satisfies the $-2$ equation \bref{T-2}.
\end{lemma}
This implies that if we impose both constraints \bref{eq:227},\bref{eq:228} on initial or scattering data for both the $+2$ and $-2$ Teukolsky equations then the constraints will be propagated by the solutions in evolution. More specifically, if we have scattering data for $\alpha, \underline\alpha$ such that the \textit{radiation fields} belonging to the quantities $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]$, $\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]$ (in the sense of the definitions stated in \Cref{+2 radiation} and \Cref{subsection 7.2 future radiation fields and fluxes}) are vanishing, then we must have that $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=0$, $\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]=0$ by \Cref{+2 future backward scattering} and \Cref{-2 future backward scattering}.\\
\indent We would like to know the extent to which data for $\alpha$, $\underline\alpha$ are constrained by \cref{TS constraint -} and \cref{TS constraint +}. Doing this for data on a Cauchy surface is complicated, but if we restrict to data consistent with the scattering theory developed so far in this paper then we can alternatively attempt to address this question for scattering data on $\mathscr{I}^+, \mathscr{H}^+$. This is the subject of the remainder of this section.\\
\indent To start with, we can show the following by a straightforward computation:
\begin{lemma}\label{not independent}
For $\alpha$ satisfying the $+2$ Teukolsky equation (\ref{T+2}) and $\underline\alpha$ satisfying the $-2$ Teukolsky equation (\ref{T-2})
\begin{align}\label{ ts- to parabolic ts+}
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^3 r\Omega^2\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]=-\left[2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2+12M\slashed{\nabla}_T\right]r\Omega^2\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha],
\end{align}
\begin{align}\label{ ts+ to parabolic ts-}
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^3 r\Omega^2\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=\left[2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2-12M\slashed{\nabla}_T\right]r\Omega^2\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha].
\end{align}
In other terms,
\begin{align}
\mathop{\mathbb{TS}^+}\left[\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha],-\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]\right]=0,\qquad\qquad\qquad\mathop{\mathbb{TS}^-}\left[-\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha],\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]\right]=0,
\end{align}
regardless of whether or not the constraints $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=0,\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]=0$ are satisfied.
\end{lemma}
\Cref{not independent} implies that \bref{eq:227}, \bref{eq:228} are not independent. We will use \Cref{not independent} in \Cref{subsection 9.3 propagating the identities} to show that imposing only one of the constraints on $\mathscr{I}^+$ and only the other constraint on $\overline{\mathscr{H}^+}$ is enough to propagate both constraints for the solutions $\alpha, \underline\alpha$.
\subsection{Inverting the identities on $\mathscr{I}^+, \overline{\mathscr{H}^+}$}\label{subsection 9.2 inverting the identities}
\subsubsection*{Constraint \bref{eq:227} at $\mathscr{I}^+$}
We know that there are dense subspaces of $\mathcal{E}^{T,+2}_{\overline{\Sigma}}, \mathcal{E}^{T,-2}_{\overline{\Sigma}}$ consisting of smooth data for \cref{T+2}, \cref{T-2} such that
\begin{align}
\lim_{v\longrightarrow\infty} r\Omega^2\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]=\partial_u^4\upalpha_{\mathscr{I}^+}-2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_2}\underline\upalpha_{\mathscr{I}^+}-6M\partial_u\underline\upalpha_{\mathscr{I}^+},
\end{align}
so we consider
\begin{align}\label{constraint null infinity}
\partial_u^4\upalpha_{\mathscr{I}^+}-2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_2}\underline\upalpha_{\mathscr{I}^+}-6M\partial_u\underline{\upalpha}_{\mathscr{I}^+}=0
\end{align}
as a constraint on scattering data $\underline\upalpha_{\mathscr{I}^+}, \upalpha_{\mathscr{I}^+}$ at $\mathscr{I}^+$. We now show the following: if $\upalpha_{\mathscr{I}^+}$ is smooth and compactly supported, then there is a unique $\underline\upalpha_{\mathscr{I}^+}$ that decays towards $\mathscr{I}^+_\pm$ and satisfies \bref{constraint null infinity}:
\begin{proposition}\label{alphabar out of alpha on scri}
Let $\upalpha_{\mathscr{I}^+}\in \Gamma_c(\mathscr{I}^+)$. Then there exists a unique smooth $\underline\upalpha_{\mathscr{I}^+}$ such that
\begin{align}\label{scalarise this}
\partial_u^4\upalpha_{\mathscr{I}^+}-2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_2}\underline\upalpha_{\mathscr{I}^+}-6M\partial_u\underline\upalpha_{\mathscr{I}^+}=0,
\end{align}
with $\underline\upalpha_{\mathscr{I}^+}\longrightarrow0$ as $u\longrightarrow \pm \infty$.
\end{proposition}
\begin{proof}
To make sense of (\ref{scalarise this}) we scalarise it: we associate to $\underline\upalpha_{\mathscr{I}^+}$ scalar fields $(\underline{f},\underline{g})$ on $\mathscr{M}$ with vanishing $\ell=0,1$ modes such that $\underline\upalpha_{\mathscr{I}^+}=r^2 \slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1(\underline{f},\underline{g})$. Similarly, we associate to $\upalpha_{\mathscr{I}^+}$ the two fields $(f,g)$ such that $\upalpha_{\mathscr{I}^+}=r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1(f,g)$. Define further $F=\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3)^3f$ and $G=\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3)^3g$. In the absence of $\ell=0,1$ modes, $r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1$ is injective and thus (\ref{eq:227}) becomes:
\begin{align}\label{eq:231}
\begin{split}
(F,G)&=2r^4\bar{\slashed{\mathcal{D}}_1}\slashed{\mathcal{D}}_2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1(\underline{f},\underline{g})+6M\Omega\slashed{\nabla}_3(\underline{f},\underline{g})
\\&=2r^4\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}_2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1 (\underline{f},-\underline{g})+6M\Omega\slashed{\nabla}_3(\underline{f},\underline{g}).
\end{split}
\end{align}
Note that $r^4\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}_2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1 = \frac{1}{2} r^2\slashed{\mathcal{D}}_1[-\mathring{\slashed{\Delta}}-1]\slashed{\mathcal{D}}^*_1$ and $r^2\slashed{\mathcal{D}}^*_1\slashed{\mathcal{D}}_1=-\mathring{\slashed{\Delta}}+1$, so $r^4\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}_2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1=\frac{1}{2} r^2\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}^*_1\{r^2\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}^*_1-2\}=\frac{1}{2}\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)$. Equation (\ref{eq:231}) becomes
\begin{align}\label{eq:232}
\partial_u\underline{f}-\frac{1}{6M}\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)\underline{f}=F,
\end{align}
\begin{align}\label{eq:233}
\partial_u \underline{g}+\frac{1}{6M}\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)\underline{g}=G.
\end{align}
Equations (\ref{eq:232}) and (\ref{eq:233}) are two $4^{th}$-order parabolic equations which are well-behaved in opposite directions in time; a unique smooth solution exists for (\ref{eq:232}) when evolving in the direction of increasing $u$, whereas (\ref{eq:233}) admits a unique smooth solution in the direction of decreasing $u$. Therefore, assuming the boundary condition $\underline{f}\longrightarrow 0$ as $u\longrightarrow -\infty$, we have a unique solution $\underline{f}$ to (\ref{eq:232}), and this solution decays as $u\longrightarrow\infty$. Similarly, there is a unique smooth $\underline{g}$ solving (\ref{eq:233}) with $\underline{g}\longrightarrow0$ as $u\longrightarrow \pm \infty$. Thus there is a unique smooth $\underline\upalpha_{\mathscr{I}^+}$ solving (\ref{scalarise this}) that decays towards $\mathscr{I}^+_\pm$.
\end{proof}
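The direction in which each of (\ref{eq:232}), (\ref{eq:233}) is well-behaved is governed, mode by mode, by the eigenvalues of $\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)$. As a sanity check (assuming the convention $\mathring{\slashed{\Delta}}Y_{\ell m}=-\ell(\ell+1)Y_{\ell m}$ on spherical harmonics), the following snippet confirms that these eigenvalues vanish precisely on the $\ell=0,1$ modes, which is why those modes are projected out before inverting, and are strictly positive for $\ell\geq2$:

```python
# On a spherical harmonic Y_{lm}, assuming ring-Laplacian Y_{lm} = -l(l+1) Y_{lm},
# the operator ring-Delta(ring-Delta + 2) acts by multiplication with
#   mu(l) = (-l(l+1)) * (-l(l+1) + 2) = (l-1) l (l+1) (l+2).
def mu(l):
    lap = -l*(l + 1)        # eigenvalue of the unit-sphere Laplacian
    return lap*(lap + 2)

# mu vanishes exactly on the l = 0, 1 modes and is strictly positive for l >= 2,
# so each fixed mode of the fourth-order "parabolic" equations is an ODE with a
# sign-definite coefficient.
assert mu(0) == 0 and mu(1) == 0
assert all(mu(l) == (l - 1)*l*(l + 1)*(l + 2) > 0 for l in range(2, 50))
print([mu(l) for l in range(5)])   # -> [0, 0, 24, 120, 360]
```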
\begin{corollary}\label{iterated integrals}
Let $\upalpha_{\mathscr{I}^+},\underline\upalpha_{\mathscr{I}^+}$ be as in \Cref{alphabar out of alpha on scri}, then
\begin{align}\label{-2 further constraint}
\begin{split}
\int_{-\infty}^\infty \underline\upalpha_{\mathscr{I}^+}\,du=0.
\end{split}
\end{align}
\end{corollary}
\begin{proof}
\cref{scalarise this} and the decay of $\upalpha_{\mathscr{I}^+},\underline\upalpha_{\mathscr{I}^+}$ imply
\begin{align}\label{energy r ts-}
\partial_u^3\upalpha_{\mathscr{I}^+}=2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2\int_{-\infty}^{u}d\bar{u}\; \underline\upalpha_{\mathscr{I}^+}+6M\underline\upalpha_{\mathscr{I}^+}.
\end{align}
Taking $u\longrightarrow \infty$ gives $2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2\int_{-\infty}^{\infty}d\bar{u}\; \underline\upalpha_{\mathscr{I}^+}=0$, which implies $\int_{-\infty}^{\infty}d\bar{u}\; \underline\upalpha_{\mathscr{I}^+}=0$ by the injectivity argument of \Cref{alphabar out of alpha on scri}.
\end{proof}
Conversely, we have the following, which follows immediately by inspecting \bref{constraint null infinity}:
\begin{proposition}\label{alpha out of alphabar on scri}
Given $\underline\upalpha_{\mathscr{I}^+}\in \Gamma_c(\mathscr{I}^+)$, there exists a unique $\upalpha_{\mathscr{I}^+}$ that is smooth and supported away from $\mathscr{I}^+_+$, such that \bref{constraint null infinity} is satisfied by $\upalpha_{\mathscr{I}^+}, \underline\upalpha_{\mathscr{I}^+}$. Furthermore, if $\int_{-\infty}^\infty du\; \underline\upalpha_{\mathscr{I}^+}=0$ then $\upalpha_{\mathscr{I}^+}\in\mathcal{E}^{T,+2}_{\mathscr{I}^+}$.
\end{proposition}\label{TS-2 on I+}
This completes the construction of the map $\mathcal{TS}_{\mathscr{I}^+}$:
\begin{corollary}\label{TS scri +}
\Cref{alphabar out of alpha on scri} defines the map
\begin{align}
\mathcal{TS}_{\mathscr{I}^+}:\mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{I}^+}.
\end{align}
The map $\mathcal{TS}_{\mathscr{I}^+}$ is surjective onto a dense subspace of $\mathcal{E}^{T,-2}_{\mathscr{I}^+}$ by \Cref{alpha out of alphabar on scri}. Therefore it extends to a unitary Hilbert-space isomorphism.
\end{corollary}
\begin{remark}
The argument leading to \cref{iterated integrals} can be used to show that
\begin{align}
\begin{split}
\int_{-\infty}^\infty \int_{-\infty}^{u_1} &\underline\upalpha_{\mathscr{I}^+}\,du_2\, du_1=\int_{-\infty}^\infty\int_{-\infty}^{u_1}\int_{-\infty}^{u_2}\underline\upalpha_{\mathscr{I}^+}\,du_3\, du_2\, du_1\\
&=\int_{-\infty}^\infty\int_{-\infty}^{u_1}\int_{-\infty}^{u_2}\int_{-\infty}^{u_3}\underline\upalpha_{\mathscr{I}^+}\,du_4\, du_3\, du_2\, du_1=0.
\end{split}
\end{align}
\end{remark}
\subsubsection*{Constraint \bref{eq:228} at $\overline{\mathscr{H}^+}$}
\indent Similar considerations apply to the constraint $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=0$, which in Kruskal coordinates reads
\begin{align}\label{constraint horizon}
\partial_V^4 V^2\underline\upalpha_{\mathscr{H}^+}=\Big[2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_2}-3V\partial_V-6\Big]V^{-2}\upalpha_{\mathscr{H}^+}.
\end{align}
\begin{proposition}\label{alphabar out of alpha on H}
Given $\upalpha_{\mathscr{H}^+}$ such that $V^{-2}\upalpha_{\mathscr{H}^+}\in\Gamma_c(\overline{\mathscr{H}^+})$, solving \bref{constraint horizon} as a transport equation for $V^2\underline\upalpha_{\mathscr{H}^+}$ with decay conditions towards $\mathscr{H}^+_+$:
\begin{align}
V^2\underline\upalpha_{\mathscr{H}^+}, \partial_V V^2\underline\upalpha_{\mathscr{H}^+}, \partial_V^2 V^2\underline\upalpha_{\mathscr{H}^+}, \partial_V^3 V^2\underline\upalpha_{\mathscr{H}^+} \longrightarrow 0 \text{ as } V \longrightarrow \infty,
\end{align}
gives a unique $\underline\upalpha_{\mathscr{H}^+}$ with $V^2\underline\upalpha_{\mathscr{H}^+}\in\Gamma_c(\overline{\mathscr{H}^+})$ such that $\underline\upalpha_{\mathscr{H}^+}, \upalpha_{\mathscr{H}^+}$ satisfy \bref{constraint horizon}.
\end{proposition}
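A minimal numerical illustration of the mechanism in \Cref{alphabar out of alpha on H} (a sketch, not part of the proof): with a hypothetical smooth source supported in $[1,2]$ standing in for the right-hand side of \bref{constraint horizon}, integrating the model transport equation $\partial_V^4 h=S$ backwards four times from a point $V_\infty$ beyond the support, with vanishing conditions there, produces the unique decaying solution, which can be checked against the equation with a finite-difference stencil:

```python
import math

def S(s):
    # hypothetical smooth bump supported in [1, 2], standing in for the
    # compactly supported right-hand side of the constraint
    return math.exp(-1.0/((s - 1.0)*(2.0 - s))) if 1.0 < s < 2.0 else 0.0

V_INF = 3.0  # any value beyond the support of S

def h(V, n=4000):
    # backwards solution of d^4 h / dV^4 = S with h, h', h'', h''' -> 0 at V_INF,
    # obtained by integrating four times:
    #   h(V) = (1/6) * int_V^{V_INF} (s - V)^3 S(s) ds
    step = (V_INF - V)/n
    tot = sum((i*step)**3*S(V + i*step) for i in range(1, n))  # endpoint terms vanish
    return step*tot/6.0

# verify the transport equation with a 5-point fourth-difference inside the support
eps, V = 0.02, 1.5
d4 = (h(V - 2*eps) - 4*h(V - eps) + 6*h(V) - 4*h(V + eps) + h(V + 2*eps))/eps**4
assert abs(d4 - S(V)) < 1e-2*S(V)
assert h(2.5) == 0.0   # the solution vanishes identically beyond the support
```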
Conversely, we have the following:
\begin{proposition}\label{alpha out of alphabar on H}
Let $\underline\upalpha_{\mathscr{H}^+}$ be such that $V^{2}\underline\upalpha_{\mathscr{H}^+}\in\Gamma_c(\overline{\mathscr{H}^+})$. Then there exists a unique $\upalpha_{\mathscr{H}^+}$ with $V^{-2}\upalpha_{\mathscr{H}^+}\in\Gamma(\overline{\mathscr{H}^+})$ such that \bref{constraint horizon} is satisfied and $V^{-2}\upalpha_{\mathscr{H}^+}\longrightarrow 0$ as $V\longrightarrow \infty$.
\end{proposition}
\begin{proof}
As in the proof of \Cref{alphabar out of alpha on scri}, we scalarise \bref{constraint horizon}: Let $V^2\underline\upalpha_{\mathscr{H}^+}=(2M)^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1(\underline{f},\underline{g})$, $V^{-2}\upalpha_{\mathscr{H}^+}=(2M)^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1({f},g)$ and let $\underline{F}=-\partial_V^4 \underline{f}, \underline{G}=-\partial_V^4 \underline{g}$. Then $f, g, \underline{F}, \underline{G}$ satisfy
\begin{align}
\underline{F}&=\left[3V\partial_V+6-\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)\right]f,\label{eq:444}\\
\underline{G}&=\left[3V\partial_V+6+\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)\right]g.\label{eq:555}
\end{align}
Equations \bref{eq:444}, \bref{eq:555} are degenerate at $V=0$. If $f,g$ satisfy \bref{eq:444} and \bref{eq:555} then at $V=0$ we must have
\begin{align}\label{elliptic}
\underline{F}|_{V=0}&=\left[6-\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)\right]f|_{V=0},\\
\underline{G}|_{V=0}&=\left[6+\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)\right]g|_{V=0}.
\end{align}
The above are elliptic identities that determine $(f,g)|_{V=0}$ from $\underline{F}|_{V=0}, \underline{G}|_{V=0}$. Denote $(f_0,g_0):=(f,g)|_{V=0}$.\\
\indent As was done in the proof of \Cref{alphabar out of alpha on scri}, we evolve \bref{eq:444} and \bref{eq:555} in opposite directions in $V$. Working with \bref{eq:444} is straightforward: let $V_\infty$ lie beyond the support of $\underline{F}$; then there is a unique $f$ satisfying \bref{eq:444} with $f|_{V_\infty}=0$, and we set $f$ to vanish for $V>V_\infty$.\\
\indent To find a solution to \bref{eq:555}, note that for $V_0>0$, there is a unique $g$ that satisfies \bref{eq:555} on $V\geq V_0$ with $g|_{V_0}=g_0$. Multiply \bref{eq:555} by $g$ and integrate by parts to get:
\begin{align}
\frac{3}{2}\left[g(V)^2-g(V_0)^2\right]+\int_{V_0}^V\frac{1}{\widetilde{V}}\left(6g^2+g\,\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)g\right)=\int_{V_0}^V\frac{1}{\widetilde{V}}g\cdot \underline{G}.
\end{align}
Poincar\'e's inequality and Cauchy--Schwarz imply:
\begin{align}\label{estimate}
g(V)^2+\int_{V_0}^V\frac{5}{\widetilde{V}}g^2\lesssim\int_{V_0}^V \underline{G}^2+g_0^2
\end{align}
We obtain similar estimates for $\partial_V g$ by commuting \bref{eq:555} with $\partial_V$. We can use \bref{estimate} commuted with $\partial_V, \mathring{\slashed{\nabla}}$ to conclude that, taking $V_0\longrightarrow 0$, we can find a $g$ that satisfies \bref{eq:555} with $g|_{V=0}=g_0$.
\end{proof}
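The limiting procedure $V_0\longrightarrow0$ at the end of the proof can be illustrated numerically on a single mode (a sketch under assumptions: the eigenvalue $\mu=(\ell-1)\ell(\ell+1)(\ell+2)=24$ at $\ell=2$, and a hypothetical smooth, rapidly decaying source in place of a compactly supported one):

```python
import math

MU = 24.0            # (l-1)l(l+1)(l+2) at l = 2, an illustrative eigenvalue
A = (6.0 + MU)/3.0   # the homogeneous solution of 3V g' + (6+MU) g = 0 is V^(-A)

def G(s):
    return math.exp(-s*s)   # hypothetical smooth, rapidly decaying source

def g(V, V0, n=20000):
    # integrating-factor solution of 3V g' + (6 + MU) g = G on [V0, V], with the
    # "regular" datum g(V0) = G(0)/(6 + MU) dictated by the V = 0 relation:
    #   g(V) = V^(-A) * ( V0^A g(V0) + (1/3) int_{V0}^{V} s^(A-1) G(s) ds )
    g0 = G(0.0)/(6.0 + MU)
    h = (V - V0)/n
    quad = 0.5*(V0**(A - 1)*G(V0) + V**(A - 1)*G(V))
    quad += sum((V0 + i*h)**(A - 1)*G(V0 + i*h) for i in range(1, n))
    return V**(-A)*(V0**A*g0 + h*quad/3.0)

# the limit V0 -> 0 exists: shrinking V0 barely changes the solution,
assert abs(g(1.0, 1e-2) - g(1.0, 1e-3)) < 1e-6
# and near V = 0 the solution attains the value forced by the algebraic relation
assert abs(g(0.05, 1e-4) - 1.0/30.0) < 1e-3
```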
\begin{remark}
Were we to apply the constraint \bref{eq:227} on a smaller portion of the future event horizon, we would need more data to specify $\underline\upalpha_{\mathscr{H}^+}$ completely. In considering the problem on the entirety of $\overline{\mathscr{H}^+}$ no such additional data are necessary, since \bref{elliptic} determines $f|_{\mathcal{B}}$ in terms of $\underline\upalpha_{\mathscr{H}^+}$.
\end{remark}
\begin{corollary}\label{TS+2 on H+}
\Cref{alphabar out of alpha on H} defines the map
\begin{align}
\mathcal{TS}_{\mathscr{H}^+}:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\longrightarrow \mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}.
\end{align}
By \Cref{alpha out of alphabar on H}, the image of $\mathcal{TS}_{\mathscr{H}^+}$ contains a dense subspace of $\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}$. Therefore it extends to a unitary Hilbert-space isomorphism.
\end{corollary}\label{TS on H-}
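The extension step relies only on the following standard Hilbert-space fact: a linear map that is norm-preserving on a dense subspace $D\subseteq H_1$ (the property supplied here by the corresponding energy identities) and has dense image in $H_2$ admits a unique continuous extension $\overline{T}\colon H_1\longrightarrow H_2$ with
\begin{align*}
\|\overline{T}x\|_{H_2}=\|x\|_{H_1}\qquad\text{for all }x\in H_1,
\end{align*}
so that the range of $\overline{T}$ is closed as well as dense, and $\overline{T}$ is a unitary isomorphism.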
We can analogously consider the constraints on $\overline{\mathscr{H}^-}, \mathscr{I}^-$. In light of \Cref{time inversion} we can immediately deduce the appropriate statements:
\begin{corollary}\label{TS-2 on H-}
Given $\upalpha_{\mathscr{H}^-}$ such that $U^2\upalpha_{\mathscr{H}^-}\in\Gamma_c(\overline{\mathscr{H}^-})$, there exists a unique solution $\underline\upalpha_{\mathscr{H}^-}$ to the equation
\begin{align}\label{TS-2 on H- equation}
\partial_U^4 U^2\upalpha_{\mathscr{H}^-}=\Big[2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}-3U\partial_U-6\Big]U^{-2}\underline\upalpha_{\mathscr{H}^-}
\end{align}
such that $U^{-2}\underline\upalpha_{\mathscr{H}^-}\in\Gamma(\overline{\mathscr{H}^-})$. The solution $\underline\upalpha_{\mathscr{H}^-}(u,\theta^A)$ and its $\partial_U,\mathring{\slashed{\nabla}}$ derivatives decay exponentially as $u\longrightarrow-\infty$ at a rate $\frac{4}{M}$.\\
\indent Given $\underline\upalpha_{\mathscr{H}^-}$ such that $U^{-2}\underline\upalpha_{\mathscr{H}^-}\in\Gamma_c(\overline{\mathscr{H}^-})$, there exists a unique solution $\upalpha_{\mathscr{H}^-}$ to \bref{TS-2 on H- equation} such that $U^2\upalpha_{\mathscr{H}^-}\in\Gamma_c(\overline{\mathscr{H}^-})$.\\
\indent As in \Cref{TS+2 on H+}, we can combine the statements above to define a unitary Hilbert-space isomorphism via \bref{TS-2 on H- equation}:
\begin{align}
\mathcal{TS}_{\mathscr{H}^-}:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}.
\end{align}
\end{corollary}
\begin{corollary}\label{TS on scri-}
Let $\upalpha_{\mathscr{I}^-}\in \Gamma_c(\mathscr{I}^-)$. Then there exists a unique smooth $\underline\upalpha_{\mathscr{I}^-}$ such that
\begin{align}\label{TS+2 on I-}
\partial_v^4\upalpha_{\mathscr{I}^-}-2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}\underline\upalpha_{\mathscr{I}^-}-6M\partial_v\underline\upalpha_{\mathscr{I}^-}=0,
\end{align}
with $\underline\upalpha_{\mathscr{I}^-}\longrightarrow0$ as $v\longrightarrow \pm \infty$. The solution $\underline\upalpha_{\mathscr{I}^-}$ and its derivatives decay exponentially as $v\longrightarrow\pm\infty$. \\
\indent Given $\underline\upalpha_{\mathscr{I}^-}$, there exists a unique solution $\upalpha_{\mathscr{I}^-}$ to \bref{TS+2 on I-} that is supported away from the past end of $\mathscr{I}^-$. Moreover, $\int_{-\infty}^\infty d\bar{v}\;\upalpha_{\mathscr{I}^-}=0$.\\
\indent As in \Cref{TS scri +}, the statements above can be combined to define via \bref{TS+2 on I-} a unitary Hilbert-space isomorphism:
\begin{align}
\mathcal{TS}_{\mathscr{I}^-}:\mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow\mathcal{E}^{T,-2}_{\mathscr{I}^-}.
\end{align}
\end{corollary}
\begin{corollary}\label{Both TS+ and TS-}
There exist Hilbert space isomorphisms
\begin{align}
&\mathcal{TS}^+:=\mathcal{TS}_{\mathscr{H}^+}\oplus\mathcal{TS}_{\mathscr{I}^+}:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+},\\
&\mathcal{TS}^-:=\mathcal{TS}_{\mathscr{H}^-}\oplus\mathcal{TS}_{\mathscr{I}^-}:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^-}.
\end{align}
\end{corollary}
\subsection{Propagating the identities}\label{subsection 9.3 propagating the identities}
\indent We can summarise the contents of the previous section as follows: given scattering data for either $\alpha$ or $\underline\alpha$ on $\mathscr{I}^+$ and $\overline{\mathscr{H}^+}$, there exists a unique scattering data set for the other that is consistent with \bref{constraint null infinity} and \bref{constraint horizon} and \cref{+2 noncompact,,-2 noncompact}.\\
\indent For $\alpha$ and $\underline\alpha$ arising from scattering data related by \bref{constraint null infinity} and \bref{constraint horizon}, if we can verify that
\begin{align}
\lim_{v\longrightarrow\infty} r^5\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]&=0,\\
V^2\Omega^{-2}\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]\Big|_{\overline{\mathscr{H}^+}}&=0,
\end{align}
then \Cref{propagation lemma} together with \Cref{+2 future backward scattering}, \Cref{-2 future backward scattering} imply that $\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]=\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=0$ everywhere. \\
\indent Assume we are given future scattering data with $(V^{-2}\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+})\in \Gamma_c(\overline{\mathscr{H}^+})\times\Gamma_c(\mathscr{I}^+)$ for the +2 Teukolsky equation \cref{T+2}. We can obtain $\underline\upalpha_{\mathscr{H}^+}$ that is supported away from $\mathscr{H}^+_+$ by solving \bref{constraint horizon} as a transport equation, and we can use \Cref{alphabar out of alpha on scri} to find a smooth $\underline\upalpha_{\mathscr{I}^+}$ that decays exponentially towards $\mathscr{I}^+_\pm$ at a rate faster than $\frac{4}{M}$. Therefore, there exists a unique solution $\underline\alpha$ that realises the scattering data $(\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})$ with $V^2\Omega^{-2}\underline\alpha$ smooth everywhere on $J^+(\overline\Sigma)$ up to and including $\overline{\mathscr{H}^+}$. In particular, since $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]\Big|_{\overline{\mathscr{H}^+}}=0$, \cref{ ts- to parabolic ts+} implies
\begin{align}
\partial_V^4 \left\{\partial_U^4V^{-2}\upalpha_{\mathscr{H}^+}+\Big(2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}-3V\partial_V-6\Big)V^2\underline\upalpha_{\mathscr{H}^+}\right\}=0.
\end{align}
Since $V^{-2}\upalpha_{\mathscr{H}^+}, V^2\underline\upalpha_{\mathscr{H}^+}$ and their derivatives decay as $v\longrightarrow\infty$, we conclude that $V^2\Omega^{-2}\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]\Big|_{\overline{\mathscr{H}^+}}=0$.\\
\indent Towards $\mathscr{I}^+$, $(\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})$ decay at a sufficiently fast rate that we can use \Cref{-2 noncompact}, \Cref{RW backwards noncompact}, and \Cref{Phi 2 backwards} to deduce
\begin{align}
\begin{split}
\lim_{u\longrightarrow\infty}\lim_{v\longrightarrow\infty}\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2\underline\Psi&=\lim_{u\longrightarrow\infty} \int_u^\infty (u-\bar{u})\left[\mathcal{A}_2(\mathcal{A}_2-2)-6M\partial_u\right]\underline{\bm{\uppsi}}_{\mathscr{I}^+}\\
&=\lim_{u\longrightarrow\infty}\int_u^\infty (u-\bar{u}) \left[\mathcal{A}_2^2(\mathcal{A}_2-2)^2-(6M\partial_u)^2\right]\underline\upalpha_{\mathscr{I}^+}=0.
\end{split}
\end{align}
We also have
\begin{align}
\lim_{u\longrightarrow\infty}\lim_{v\longrightarrow\infty}\partial_u^i\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2\underline\Psi=0
\end{align}
for $0\leq i\leq3$. Taking the limit of \bref{ ts+ to parabolic ts-} as $v\longrightarrow\infty$ implies
\begin{align}
\partial_u^4\left[\lim_{v\longrightarrow\infty}\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2\underline\Psi-\Big(2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}+6M\partial_u\Big)\underline\upalpha_{\mathscr{I}^+}\right]=0.
\end{align}
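The bracket in the last display is thus annihilated by $\partial_u^4$, so for each angular mode it is a polynomial of degree at most $3$ in $u$; granted the decay towards $\mathscr{I}^+_+$ established above, it must vanish identically, by the elementary observation
\begin{align*}
\partial_u^4 h=0\quad\text{and}\quad h\longrightarrow0\text{ as }u\longrightarrow\infty\qquad\Longrightarrow\qquad h\equiv0.
\end{align*}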
Altogether, we see that $\lim_{v\longrightarrow\infty} r^5\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=0$. We have shown:
\begin{proposition}
Assume $\alpha$ is a solution to \cref{T+2} arising from smooth scattering data $(\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+})$ such that $\upalpha_{\mathscr{I}^+} \in \Gamma_c({\mathscr{I}^+})$, $V^{-2}\upalpha_{\mathscr{H}^+}\in\Gamma_c(\overline{\mathscr{H}^+})$. There exists unique smooth scattering data $\underline\upalpha_{\mathscr{H}^+}\in\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}, \underline\upalpha_{\mathscr{I}^+}\in\mathcal{E}^{T,-2}_{\mathscr{I}^+}$ giving rise to a solution $\underline\alpha$ to \cref{T-2}. Moreover, $\alpha$ and $\underline\alpha$ satisfy $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]=0$ everywhere on $J^+(\overline\Sigma)$.
\end{proposition}
We can repeat the above arguments starting from smooth, compactly supported scattering data for the $-2$ equation to arrive at
\begin{proposition}\label{proof of corollary}
Assume $\underline\alpha$ is a solution to \cref{T-2} arising from smooth scattering data $(\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})$ such that $\underline\upalpha_{\mathscr{I}^+} \in \Gamma_c({\mathscr{I}^+})$, $V^2\underline\upalpha_{\mathscr{H}^+}\in\Gamma_c(\overline{\mathscr{H}^+})$. There exists unique smooth scattering data $\upalpha_{\mathscr{H}^+}\in\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}, \upalpha_{\mathscr{I}^+}\in\mathcal{E}^{T,+2}_{\mathscr{I}^+}$ giving rise to a solution $\alpha$ to \cref{T+2}. Moreover, $\alpha$ and $\underline\alpha$ satisfy $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]=0$ everywhere on $J^+(\overline\Sigma)$.
\end{proposition}
This concludes the proof of \Cref{Theorem 3 detailed statement}, i.e.~\Cref{Theorem 3} of the introduction.
\subsection{A mixed scattering theory: proof of Corollary 1}\label{subsection 9.4 mixed scattering}
We are in a position to prove Corollary 1 of the introduction, i.e.~\Cref{corollary to be proven} of \Cref{subsection 4.4 Corollary 1: mixed scattering}:
\begin{proof}[Proof of Corollary 1]
We will construct the map $\mathscr{S}^{+2,-2}$ only in the forward direction on a dense subset of $\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}\oplus\;\mathcal{E}^{T,+2}_{\mathscr{I}^-}$. Let $\upalpha_{\mathscr{I}^-}\in \Gamma_c({\mathscr{I}^-})$ with $\int_{-\infty}^\infty d\bar{v}\;\upalpha_{\mathscr{I}^-}=0$, and let $\underline\upalpha_{\mathscr{H}^-}$ be such that $U^{-2}\underline\upalpha_{\mathscr{H}^-}\in\Gamma_c(\overline{\mathscr{H}^-})$. The map $\mathcal{TS}^-$ of \Cref{Both TS+ and TS-} defines a scattering data set consisting of a smooth field $\underline\upalpha_{\mathscr{I}^-}$ on $\mathscr{I}^-$ which is supported away from the past end of $\mathscr{I}^-$, and a smooth field $\upalpha_{\mathscr{H}^-}$ on $\overline{\mathscr{H}^-}$ which is supported away from the past end of $\overline{\mathscr{H}^-}$.\\
\indent The map ${}^{(+2)}\mathscr{B}^-$ of \Cref{+2 past forward scattering} gives rise to a smooth solution $\alpha$ on $J^-(\overline{\Sigma})$ such that
\begin{align}\label{545454}
\left\|\left(\alpha|_{\overline{\Sigma}}, \slashed{\nabla}_{n_{\overline\Sigma}}\alpha|_{\overline{\Sigma}}\right)\right\|_{\mathcal{E}^{T,+2}_{\overline{\Sigma}}}^2=\left\|\upalpha_{\mathscr{H}^-}\right\|^2_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}}+\left\|\upalpha_{\mathscr{I}^-}\right\|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^-}},
\end{align}
and the map ${}^{(+2)}\mathscr{F}^{+}$ extends $\alpha$ to a smooth solution of \bref{T+2} on $J^+(\overline{\Sigma})$. Combining \bref{545454} with the fact that $\alpha|_{{\Sigma^*}}, \slashed{\nabla}_{n_{{\Sigma^*}}}\alpha|_{\Sigma^*}$ are smooth implies that the estimates of \Cref{psiILED,,alphaILED,,ILED psi higherorder,,alphaILED higher order} apply, and we can invoke \Cref{psi+2ptwisedecay,,alpha+2ptwisedecay,,horizonpsidecay} together with \Cref{WP+2Sigmabar} to conclude that $\alpha$ realises the image of ${}^{(+2)}\mathscr{F}^+$ on $\overline{\mathscr{H}^+}$ as its radiation field there.\\
\indent The scattering data set $(\underline\upalpha_{\mathscr{H}^-},\underline\upalpha_{\mathscr{I}^-})$ gives rise to a unique smooth solution $\underline\alpha$ according to \Cref{past scattering of -2}, which in particular realises $\underline\upalpha_{\mathscr{H}^-},\underline\upalpha_{\mathscr{I}^-}$ as its radiation fields on $\mathscr{H}^-$, $\mathscr{I}^-$ respectively. The quantity $\underline\Psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2r\Omega^2\underline\alpha$ satisfies the Regge--Wheeler equation \bref{RW} and induces a radiation field on $\mathscr{I}^-$ that is given by $\uppsi_{\mathscr{I}^-}=\partial_v^2\underline\upalpha_{\mathscr{I}^-}$. Note that in particular, $\partial_v\uppsi_{\mathscr{I}^-}$ vanishes whenever $\underline\upalpha_{\mathscr{I}^-}$ vanishes on $\mathscr{I}^-$.\\
\indent Assume the support of $\underline\upalpha_{\mathscr{I}^-}$ on $\mathscr{I}^-$ in $v$ is contained in $[v_-,v_+]$. Since $\underline\alpha$ arises from scattering data of compact support, we can follow the steps leading to estimate \bref{this+2}, taking into account \Cref{time inversion}, to obtain the following: let $R$ be sufficiently large; then
\begin{align}\label{this-2}
\begin{split}
\int_{\underline{\mathscr{C}}_v\cap\{r>R\}}d\bar{u}d\omega\; r^2|\Omega\slashed{\nabla}_3\underline\Psi|^2\lesssim_{v_+} R^2\Bigg[\|\underline\upalpha_{\mathscr{I}^-}&\|_{\mathcal{E}^{T,-2}_{\mathscr{I}^-}}^2+\|\underline\upalpha_{\mathscr{H}^-}\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}}^2\\&+\int_{[v_-,v_+]\times S^2}d\bar{v}d\omega\;|\mathring{\slashed{\nabla}}\partial_v^2{\underline\upalpha}_{\mathscr{I}^-}|_{S^2}^2+4|\partial_v^2{\underline\upalpha}_{\mathscr{I}^-}|_{S^2}^2\Bigg].
\end{split}
\end{align}
Let $v_1>v_+$; then we can use \bref{this-2} to show that $\sqrt{r}\,\Omega\slashed{\nabla}_4\underline\Psi|_{u,v}\longrightarrow 0$ as $u\longrightarrow-\infty$:
\begin{align}
\begin{split}
|\Omega\slashed{\nabla}_4\underline\Psi|&\leq \int_{-\infty}^u d\bar{u}\, |\Omega\slashed{\nabla}_4\Omega\slashed{\nabla}_3\underline\Psi|\lesssim\int_{-\infty}^u d\bar{u}\frac{1}{r^2}|\mathring{\slashed{\Delta}}\underline\Psi+\underline\Psi|\lesssim \frac{1}{\sqrt{r(u,v)}}\sqrt{\int_{-\infty}^u d\bar{u}\,\frac{1}{r^2}\left[|\mathring{\slashed{\Delta}}\underline\Psi|^2+|\mathring{\slashed{\nabla}}\underline\Psi|^2+|\underline\Psi|^2\right]}\\& \lesssim \frac{1}{\sqrt{r(u,v)}}\sqrt{\sum_{|\gamma|\leq2} F^T_v[\slashed{\mathcal{L}}_{\Omega^\gamma}\underline\Psi]}.
\end{split}
\end{align}
Here $\Omega^\gamma=\Omega_1^{\gamma_1}\Omega_2^{\gamma_2}\Omega_3^{\gamma_3}$ denotes Lie differentiation with respect to the $\mathfrak{so}(3)$ algebra of $S^2$ Killing fields. Now take $u_1<u_2$ and $v_2>v_1$ such that $(u_2,v_1,\theta^A)\in J^-(\overline\Sigma)$ and $r(u_2,v_1)>R$. We can repeat the procedure leading to \Cref{RWrp} in the region $\mathscr{D}^{u_2,v_2}_{u_1,v_1}$ to get, for $p\in[0,2]$:
\begin{align}\label{747474}
\begin{split}
&\int_{\mathscr{C}_{u_2}\cap[v_1,v_2]} d\bar{v}\sin\theta d\theta d\phi \;r^p|\Omega\slashed{\nabla}_4\underline\Psi|^2+\int_{\underline{\mathscr{C}}_{v_2}\cap[u_1,u_2]}d\bar{u}\sin\theta d\theta d\phi\;r^p\left[|\slashed{\nabla}\underline\Psi|^2+\frac{1}{r^2}|\underline\Psi|^2\right]\\&+ \int_{\mathscr{D}^{u_2,v_2}_{u_1,v_1}}d\bar{u}d\bar{v}\sin\theta d\theta d\phi\;r^{p-1}\left[p|\Omega\slashed{\nabla}_4\underline\Psi|^2+(2-p)|\slashed{\nabla}\underline\Psi|^2+r^{p-3}|\underline\Psi|^2\right]\\&\lesssim\int_{\mathscr{C}_{u_1}\cap[v_1,v_2]} d\bar{v}\sin\theta d\theta d\phi\; r^p|\Omega\slashed{\nabla}_4\underline\Psi|^2+\int_{\underline{\mathscr{C}}_{v_1}\cap[u_1,u_2]}d\bar{u}\sin\theta d\theta d\phi\;r^p\left[|\slashed{\nabla}\underline\Psi|^2+\frac{1}{r^2}|\underline\Psi|^2\right].
\end{split}
\end{align}
Set $p=1$ in \bref{747474}. Keeping $v_1,v_2$ fixed and taking $u_1\longrightarrow-\infty$, the first term on the right-hand side of \bref{747474} tends to zero. The remaining term can be estimated via \bref{this-2} and Hardy's inequality, using that $\underline\Psi$ and its angular derivatives converge pointwise towards $\mathscr{I}^-$. In conclusion we have
\begin{align}\label{rp estimate from past null infinity}
\begin{split}
&\int_{\mathscr{D}^{u_2,\infty}_{-\infty,v_1}}d\bar{u}d\bar{v}\sin\theta d\theta d\phi\;r^{p-1}\left[p|\Omega\slashed{\nabla}_4\underline\Psi|^2+(2-p)|\slashed{\nabla}\underline\Psi|^2+r^{p-3}|\underline\Psi|^2\right]\\
&\lesssim_{R}\sum_{|\gamma|\leq2} \Bigg[\|\slashed{\mathcal{L}}_{\Omega^\gamma}\underline\upalpha_{\mathscr{I}^-}\|_{\mathcal{E}^{T,-2}_{\mathscr{I}^-}}^2+\|\slashed{\mathcal{L}}_{\Omega^\gamma}\underline\upalpha_{\mathscr{H}^-}\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}}^2+\int_{[v_-,v_+]\times S^2}d\bar{v}d\omega\;|\mathring{\slashed{\nabla}}\partial_v^2{\slashed{\mathcal{L}}_{\Omega^\gamma}\underline\upalpha}_{\mathscr{I}^-}|_{S^2}^2+4|\partial_v^2{\slashed{\mathcal{L}}_{\Omega^\gamma}\underline\upalpha}_{\mathscr{I}^-}|_{S^2}^2\Bigg].
\end{split}
\end{align}
We can extend the region $\mathscr{D}^{u_2,\infty}_{-\infty,v_1}$ to obtain \bref{rp estimate from past null infinity} over the region $\mathscr{D}^{\infty,\infty}_{-\infty,v_1}\cap\{r>R\}$. In view of the monotonicity of the flux $F^T_u[\underline\Psi]$ restricted to $\{r>R\}$, this implies in particular that
\begin{align}
\lim_{u\longrightarrow\infty} \int_{\mathscr{C}_u\cap\{r>R\}}d\bar{v}\sin\theta d\theta d\phi \;\frac{\Omega^2}{r^2}|\underline\Psi|^2=0.
\end{align}
Now we show that $\underline\alpha$ induces a radiation field $\underline\upalpha_{\mathscr{I}^+}$ on $\mathscr{I}^+$ which is in $\mathcal{E}^{T,-2}_{\mathscr{I}^+}$. First, note that energy conservation is sufficient to show that $\underline\alpha, \underline\psi$ attain radiation fields on $\mathscr{I}^+$. Fix $u$ and take $v_2>v_1$:
\begin{align}\label{corollary to be proven, radiation field exists}
|r^3\Omega\underline\psi(u,v_2,\theta^A)-r^3\Omega\underline\psi(u,v_1,\theta^A)|\leq\int_{v_1}^{v_2} d\bar{v} \frac{\Omega^2}{r^2}|\underline\Psi|\leq\frac{1}{\sqrt{r(u,v_1)}}\;\sqrt{\int_{v_1}^{v_2} d\bar{v} \frac{\Omega^2}{r^2}|\underline\Psi|^2}.
\end{align}
By commuting with angular derivatives and using a Sobolev estimate as in the proof of \Cref{RWradscri}, this shows that for any sequence $\{v_n\}$ with $v_n\longrightarrow\infty$, $r^3\Omega\underline\psi(u,v_n,\theta^A)$ is a Cauchy sequence, and an identical argument yields the same for $\underline\alpha$. Denote the limit of $r^3\Omega\underline\psi$ near $\mathscr{I}^+$ by $\underline\psi_{\mathscr{I}^+}$.\\
\indent Since $r^5\underline\psi$ converges near $\mathscr{I}^-$, estimate \bref{corollary to be proven, radiation field exists} can easily be modified to show that $\underline\psi_{\mathscr{I}^+}$ decays towards the past end of $\mathscr{I}^+$. As for the future end of $\mathscr{I}^+$, we repeat the estimate \bref{corollary to be proven, radiation field exists}, estimating $\underline\psi_{\mathscr{I}^+}$ in terms of $\underline\psi$ along a hypersurface $\{r=R\}$ for a fixed $R$. Since $\underline\alpha$ is smooth and $\|(\underline\alpha|_{\overline{\Sigma}},\slashed{\nabla}_{n_{\overline{\Sigma}}}\underline\alpha|_{\overline{\Sigma}})\|_{\mathcal{E}^{T,-2}_{\overline{\Sigma}}}<\infty$, the results of \Cref{-2 radiation on H+} apply and we can deduce that $\underline\psi|_{r=R}$ decays as $t\longrightarrow\infty$, which shows that $\underline\psi_{\mathscr{I}^+}$ decays towards the future end of $\mathscr{I}^+$.\\
\indent We now show that $\int_{-\infty}^\infty d\bar{u} \;\underline\upalpha_{\mathscr{I}^+}=0$. Consider the $-2$ Teukolsky equation \bref{T-2}, which we write as follows:
\begin{align}
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3 r^5\Omega^{-1}\underline\psi=\left(\mathcal{A}_2-\frac{6M}{r}\right)r\Omega^2\underline\alpha.
\end{align}
Taking the limit towards $\mathscr{I}^+$, it can be shown that this produces
\begin{align}\label{858585}
\partial_u \underline\psi_{\mathscr{I}^+}=\mathcal{A}_2\;\underline\upalpha_{\mathscr{I}^+}.
\end{align}
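Concretely, integrating \bref{858585} in $u$ over all of $\mathscr{I}^+$ and using that $\underline\psi_{\mathscr{I}^+}$ decays towards both ends of $\mathscr{I}^+$ gives
\begin{align*}
\mathcal{A}_2\int_{-\infty}^{\infty}d\bar{u}\;\underline\upalpha_{\mathscr{I}^+}=\lim_{u\longrightarrow\infty}\underline\psi_{\mathscr{I}^+}-\lim_{u\longrightarrow-\infty}\underline\psi_{\mathscr{I}^+}=0,
\end{align*}
and granting that $\mathcal{A}_2$ has trivial kernel on the fields under consideration, the vanishing of $\int_{-\infty}^{\infty}d\bar{u}\;\underline\upalpha_{\mathscr{I}^+}$ follows.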
We can conclude by observing that $\underline\psi_{\mathscr{I}^+}$ decays towards both ends of $\mathscr{I}^+$. With this we can also conclude that $\underline\upalpha_{\mathscr{I}^+}\in \mathcal{E}^{T,-2}_{\mathscr{I}^+}$ and that
\begin{align}
\|\underline\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}^2+ \|\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}}^2= \|\upalpha_{\mathscr{I}^-}\|_{\mathcal{E}^{T,+2}_{\mathscr{I}^-}}^2+\|\underline\upalpha_{\mathscr{H}^-}\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}}^2.
\end{align}
\end{proof}
\begin{remark}
The result above subsumes a restricted map to scattering data in $\mathcal{E}^{T,-2}_{\mathscr{H}^-}$, $\mathcal{E}^{T,+2}_{\mathscr{H}^+}$, which leads to an isomorphism
\begin{align}
\mathscr{S}^{+2,-2}:\mathcal{E}^{T,+2}_{\mathscr{I}^-}\oplus\mathcal{E}^{T,-2}_{\mathscr{H}^-}\longrightarrow \mathcal{E}^{T,+2}_{\mathscr{H}^+}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^+}.
\end{align}
\end{remark}
Despite considerable progress in our understanding of the formation of
galaxies, the origin of the Hubble sequence remains a major unsolved
problem. The main morphological parameter that sets the
classification of galaxies in the Hubble diagram is the disk-to-bulge
ratio ($D/B$). Understanding the origin of the Hubble sequence is
thus intimately related to understanding the parameters and processes
that determine the ratio between the masses of disk and bulge.
In particular, we need to understand whether this ratio is imprinted
the initial conditions (`nature') or whether it results from
environmental processes such as mergers and impulsive collisions
(`nurture').
Here I suggest a simple inside-out formation scenario for the bulge (a
`nature'-variant) and investigate the differences in properties of the
proto-galaxies that result in different disk-to-bulge ratios. A more
detailed discussion on the background and ingredients of the models
can be found in van den Bosch (1998; hereafter vdB98).
\section{The formation scenario}
In the standard picture of galaxy formation, galaxies form through the
hierarchical clustering of dark matter and subsequent cooling of the
baryonic matter in the dark halo cores. Coupled with the notion of
angular momentum gain by tidal torques induced by nearby
proto-galaxies, this theory provides the background for a model for
the formation of galactic disks. In this model, the angular momentum
of the baryons is assumed to be conserved causing the baryons to
settle in a rapidly rotating disk (e.g., Fall \& Efstathiou 1980).
The turn-around, virialization, and subsequent cooling of the baryonic
matter of a proto-galaxy is an inside-out process. First, the innermost shells virialize and heat their baryonic material to the virial temperature. The cooling time of this dense, inner material is
very short, whereas its specific angular momentum is relatively low.
If the cooling time of the gas is shorter than the dynamical time, the
gas will condense in clumps that form stars, and this clumpiness is
likely to result in a bulge. Even if the low-angular momentum
material accumulates in a disk, the self-gravity of such a small,
compact disk makes it violently unstable, and transforms it into a
bar. Bars are efficient in transporting gas inwards, and can cause
vertical heating by means of a collective bending instability. Both
these processes ultimately lead to the dissolution of the bar: first the bar takes on a hotter, triaxial shape, and later it is transformed into a spheroidal bulge component. There is thus a natural tendency for the
inner, low angular momentum baryonic material to form a bulge
component rather than a disk. Because of the ongoing virialization,
subsequent shells of material cool and try to settle into a disk
structure at a radius determined by their angular momentum. If the
resulting disk is unstable, part of the material is transformed into
bulge material. This process of disk-bulge formation is
self-regulating in that the bulge grows until it is massive enough to
sustain the remaining gas in the form of a stable disk. I explore
this inside-out bulge formation scenario, by incorporating it into the
standard Fall \& Efstathiou theory for disk formation.
The {\it ansatz} for the models is the set of properties of dark halos,
which are assumed to follow the universal density profiles proposed by
Navarro, Frenk \& White (1997), and whose halo spin parameters,
$\lambda$, follow a log-normal distribution in concordance with both
numerical and analytical studies. I assume that only a certain
fraction, $\epsilon_{\rm gf}$, of the available baryons in a given
halo ultimately settles in the disk-bulge system. Two extreme
scenarios for this galaxy formation (in)efficiency are considered. In
the first scenario, which I call the `cooling'-scenario, only the
inner fraction $\epsilon_{\rm gf}$ of the baryonic mass is able to
cool and form the disk-bulge system: the outer parts of the halo,
where the density is lowest, but which contain the largest fraction of
the total angular momentum, never get to cool. In the second
scenario, referred to hereafter as the `feedback'-scenario, the
processes related to feedback and star formation are assumed to yield
equal probabilities, $\epsilon_{\rm gf}$, for each baryon in the dark
halo, independent of its initial radius or specific angular momentum,
to ultimately end up in the disk-bulge system. The values of
$\epsilon_{\rm gf}$ are normalized by fitting the model disks to the
zero-point of the observed Tully-Fisher relation. Recent observations
of high redshift spirals suggest that the zero-point of the
Tully-Fisher relation does not evolve with redshift. This implies that
the galaxy formation efficiency, $\epsilon_{\rm gf}$, was higher at
higher redshifts (see vdB98 for details). Disks are modeled as
exponentials with a scalelength proportional to $\lambda$ times the
virial radius of the halo (as in the disk-formation scenario of Fall
\& Efstathiou). The bulge mass is determined by requiring that the
disk is stable. Since the amount of self-gravity of the disk is
directly related to the amount of angular momentum of the gas, the
disk-to-bulge ratio in this scenario is mainly determined by the spin
parameter of the dark halo out of which the galaxy forms.
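For reference, the spin parameter used throughout is the standard dimensionless combination of a halo's angular momentum $J$, energy $E$, and mass $M$,
\[
\lambda \equiv \frac{J\,|E|^{1/2}}{G\,M^{5/2}},
\]
and the log-normal distribution assumed for it has the form
\[
p(\lambda)\,d\lambda = \frac{1}{\sigma_\lambda\sqrt{2\pi}}\,
\exp\left[-\frac{\ln^2(\lambda/\bar{\lambda})}{2\sigma_\lambda^2}\right]
\frac{d\lambda}{\lambda},
\]
with parameters of order $\bar{\lambda}\simeq 0.05$ and $\sigma_\lambda\simeq 0.5$, as suggested by numerical studies (the precise values adopted here are those of vdB98, and the numbers quoted are indicative only). The disk stability requirement invoked above is a criterion of the Efstathiou, Lake \& Negroponte type, demanding that $v_{\rm max}/\sqrt{G M_{\rm disk}/R_{\rm d}}$ exceed a threshold of order unity.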
\section{Clues to the formation of bulge-disk systems}
\begin{figure}
\epsfysize=7.9cm
\centerline{\epsfbox{fig1_vdb.ps}}
\caption{Results for an OCDM cosmology with $\Omega_0 = 0.3$. Plotted are the
logarithm of the spin parameter versus the logarithm of the
disk-to-bulge ratio. Solid circles correspond to disky
ellipticals, stars to S0s, open circles to HSB spirals, and
triangles to LSB spirals. The thick solid line is the stability
margin; halos below this line result in unstable disks. As can be
seen, real disks avoid this region, but stay relatively close to
the stability margin, in agreement with the self-regulating bulge
formation scenario proposed here. The dashed curves correspond to
the 1, 10, 50, 90, and 99 percent levels of the cumulative
distribution of the spin parameter. Upper panels correspond to the
cooling scenario, and lower panels to the feedback scenario.
Panels on the left correspond to $z = 0$, middle panels to $z =
1$, and panels on the right to $z = 3$.}
\label{f1}
\end{figure}
Constraints on the formation scenario envisioned above can be obtained
from a comparison of these disk-bulge-halo models with real galaxies.
From the literature I compiled a list of $\sim 200$ disk-bulge
systems, including a wide variety of galaxies: both high and low
surface brightness spirals (HSB and LSB respectively), S0, and disky
ellipticals (see vdB98 for details). After choosing a cosmology and a
formation redshift, $z$, I calculate, for each galaxy in this sample,
the spin parameter $\lambda$ of the dark halo which, for the
assumptions underlying the formation scenario proposed here, yields
the observed disk properties (scale-length and central surface
brightness). We thus use the formation scenario to link the {\it
disk} properties to those of the dark halo, and use the known
statistical properties of dark halos to discriminate between different
cosmogonies.
The main results are shown in Figure~1, where I plot the inferred
values of $\lambda$ versus the observed disk-to-bulge ratio for the
galaxies in the sample. The dashed curves outline the distribution
function of halo spin parameters of dark halos; it can thus be
inferred what the predicted distribution of disk-to-bulge ratios is
for galaxies that form at a given formation redshift. Results are
presented for an open cold dark matter (OCDM) model with $\Omega_0 =
0.3$ and no cosmological constant ($\Omega_{\Lambda} = 0$). These
results are virtually independent of the value of $\Omega_{\Lambda}$,
but depend strongly on $\Omega_0$, which sets the baryon mass fraction
of the Universe. Throughout, a universal baryon density of $\Omega_b
= 0.0125 \, h^{-2}$ is assumed, in agreement with nucleosynthesis
constraints. The inferred spin parameters are larger for higher
values of the assumed formation redshifts. This owes to the fact that
halos that virialize at higher redshifts are denser. Since the
scalelength of the disk is proportional to $\lambda$ times the virial
radius of the halo, higher formation redshifts imply larger spin
parameters in order to yield the observed disk scalelength. In the
cooling scenario, the probability that a certain halo yields a system
with a large disk-to-bulge ratio (e.g., a spiral) is rather small.
This is due to the fact that in this scenario most of the high angular
momentum material never gets to cool to become part of the disk. The
large observed fraction of spirals in the field renders this scenario
improbable. For the feedback cosmogony, however, a more promising
scenario unfolds: at high redshifts ($z \gtrsim 1$) the majority of halos
yields systems with relatively small disks (e.g., S0s), whereas
systems that form more recently are more disk-dominated (e.g.,
spirals). This difference owes to two effects. First of all, halos at
higher redshifts are denser, and secondly, the redshift independence
of the Tully-Fisher relation implies that $\epsilon_{\rm gf}$ was
higher at higher redshifts. Coupled to the notion that proto-galaxies
that collapse at high redshifts are preferentially found in overdense
regions such as clusters, this scenario thus automatically yields a
morphology-density relation, in which S0s are predominantly formed in
clusters of galaxies, whereas spirals are more confined to the field.
\section{Conclusions}
\begin{itemize}
\item Inside-out bulge formation is a natural by-product of the Fall
\& Efstathiou theory for disk formation.
\item Disk-bulge systems do not have bulges that are significantly
more massive than required by stability of the disk component. This
suggests a coupling between the formation of disk and bulge, and is
consistent with the self-regulating, inside-out bulge formation
scenario proposed here.
\item A comparison of the angular momenta of dark halos and spirals
suggests that the baryonic material that builds the disk cannot lose a significant fraction of its angular momentum. This argues against the `cooling scenario' envisioned here, in which most of the
angular momentum remains in the baryonic material in the outer parts
of the halo that never gets to cool.
\item If we live in a low-density Universe ($\Omega_0 \lesssim 0.3$), the
only efficient way to make spiral galaxies is by assuring that only
a relatively small fraction of the available baryons make it into
the galaxy, and furthermore that the probability that a certain
baryon becomes a constituent of the final galaxy has to be
independent of its specific angular momentum, as described by the
`feedback scenario'.
\item If more extended observations confirm that the zero-point of the
Tully-Fisher relation is independent of redshift, it implies that
the galaxy formation efficiency, $\epsilon_{\rm gf}$, was higher at
earlier times. Coupled with the notion that density perturbations
that collapse early are preferentially found in high density
environments such as clusters, the scenario presented here then
automatically predicts a morphology-density relation in which S0s
are most likely to be found in clusters.
\item A reasonable variation in formation redshift and halo angular
momentum can yield approximately one order of magnitude variation in
disk-to-bulge ratio, and the simple formation scenario proposed here
can account for both spirals and S0s. However, disky ellipticals
have bulges that are too large and disks that are too small to be
incorporated in this scenario. Apparently, their formation and/or
evolution involved processes that caused the baryons to lose a
significant amount of their angular momentum. Merging and impulsive
collisions (e.g., galaxy harassment) are likely to play a major role
for these systems.
\end{itemize}
\medskip
It thus seems that {\it both `nature' and `nurture' are responsible
for the formation of spheroids}, and that the Hubble sequence has a
hybrid origin.
\smallskip
\begin{acknowledgments}
Support for this work was provided by NASA through Hubble Fellowship
grant \# HF-01102.11-97.A awarded by the Space Telescope Science
Institute, which is operated by AURA for NASA under contract NAS
5-26555.
\end{acknowledgments}
\section{\label{sec:intro}Introduction}
Ionic liquid crystals (ILCs) are pure ionic systems, solely composed of cations \modifiedRed{($+$)} and
anions \modifiedRed{($-$)}.
\MODIFIED{Moreover, at least one of the ion species is characterized by a highly anisotropic
molecular shape~\cite{Binnemans2005}. This anisotropic shape is typically due to long alkyl-chains
\MODIFIEDtwo{which} are attached to charged moieties.
Although the alkyl-chains exhibit a rather strong flexibility, due to
microphase segregation of the charged parts and of the alkyl-chains, liquid-crystalline phases
are indeed observable \MODIFIEDtwo{among} ILCs~\cite{Bowlas1996,Binnemans2005,Goossens2016}.}
In the past decades various types of ILCs have been synthesized~\cite{Binnemans2005,Goossens2016}.
Different combinations of, e.g., (charged) imidazolium rings and alkyl-chains allow one to
tune not only the length of the ionic mesogens but also the location of their charges, i.e.,
the intra-molecular charge distribution. Thereby one is able to promote distinctive properties of ILCs,
for instance, a high thermal and high electrochemical stability, which might be beneficial for technological
applications~\cite{Gordon_et_al1998,Lee_et_al2003,Binnemans2005,Ster_et_al2007,Goossens2016}.
\MODIFIED{(We note that here the term ``mesogen'' refers to any kind of molecule \MODIFIEDtwo{which}
gives rise to the formation of mesophases, irrespective of the underlying microscopic mechanism.
\MODIFIEDtwo{Accordingly}, the aforementioned anisotropic molecules, which form mesophases via
microphase segregation, are considered to be mesogens.)}
\MODIFIED{A specific example of an ILC system, which has been studied, e.g., in
Refs.~\cite{Yamanaka_et_al2005,Wang_et_al2012}, is composed of cations with long alkyl-chains
attached (1-dodecyl-3-methylimidazolium) and significantly smaller anions (iodide).
For such an ILC system, one observes a liquid crystalline structure, in particular the
smectic-A phase $S_A$. (The $S_A$ phase is characterized by layers of particles which are well
aligned with the layer normal, and by a layer spacing of the order of the particle length.)
The layer structure of the large cations leads to a locally increased concentration of anions
in between the layers of cations~\cite{Yamanaka_et_al2005}. Thereby, the nanostructure of the
cations gives rise to ``pathways'' for the anions, which increase the conductivity measurable
in the direction parallel to the layers.}
Therefore this particular type of ILC system is a promising candidate for technological
applications, e.g., as electrolyte in dye-sensitized solar cells
(DSSCs)~\cite{Yamanaka_et_al2005,Yamanaka_et_al2007}.
While the complexity of the underlying interactions gives rise to these interesting properties of ILCs,
it is at the same time very challenging to study these systems theoretically or by simulation.
\MODIFIED{Previous theoretical studies~\cite{Kondrat_et_al2010,Bartsch2017} \MODIFIEDtwo{of} ILC systems
\MODIFIEDtwo{have been} able to reduce this complexity by considering a simplified description of
ILC systems, which incorporates, \MODIFIEDtwo{however}, the generic properties of ILCs.
They rely on an effective one-species description in which one of the
ion species (referred to as counterions) is not accounted for explicitly, but is incorporated as a continuous
background, giving rise to \MODIFIEDtwo{the} screening of the coions.
\MODIFIEDtwo{On the contrary, the coions} are modeled as ellipsoidal
particles. Thus, the anisotropic molecular shape, \MODIFIEDtwo{which} gives rise to the formation of
mesophases, and the (screened) electrostatic interaction are both incorporated by this approach.
Of course, this is a simplified representation of any realistic ionic liquid crystalline system.
\MODIFIEDtwo{However, it allows one} to study the interplay of the two key features, i.e.,
an anisotropic molecular shape and the presence of charges, which are \MODIFIEDtwo{omnipresent} in ILC systems.
Yet it should be noted that ILC systems \MODIFIEDtwo{exhibiting} a significant difference in size
of the cations and \MODIFIEDtwo{of the} anions (e.g., the aforementioned example of
1-dodecyl-3-methylimidazolium) might be candidates \MODIFIEDtwo{which} come closest to the present
theoretical representation of ILCs, as the size difference rationalizes in part the idea of
structureless point-like counterions.}
As a first step, \MODIFIED{such a model} allows one to study the phase behavior of ILC systems and thereby
\modifiedRed{to gain insight into} how molecular properties, e.g., the aspect ratio or the charge
distribution of the molecules, affect the phase behavior of such types of ILCs.
A comprehensive understanding of the relation between the underlying molecular properties and the resulting
phase behavior is, \modifiedRed{inter alia,} necessary for a systematic synthesis of ILCs,
which should meet specific material properties. Furthermore, theoretical \modifiedRed{guidance}
is beneficial for finding and exploring novel \modifiedRed{material properties which might occur}
in ILC systems. For instance, in Ref.~\cite{Bartsch2017} a new smectic-A structure ($S_{AW}$) has been
observed, which \modifiedRed{exhibits} an alternating layer structure. In between layers of
\modifiedGreen{elongated} particles, which prefer to be \modifiedGreen{oriented} parallel to the
layer normal, like in the ordinary $S_A$ phase, one observes secondary
layers in which the particles prefer to be \modifiedRed{oriented} perpendicular to it.
Due to this alternating structure the layer spacing of this new $S_{AW}$ phase is significantly
\textit{w}ider than that of the ordinary $S_A$ phase.
The $S_{AW}$ structure is stabilized by charges \modifiedRed{which} are located at the tips of
the molecules. This shows in an exemplary way how the combination of liquid-crystalline behavior
and electrostatics can lead to \modifiedRed{an} interesting and novel phenomenology.
The aim of the present investigation is to extend the analysis by studying spatially
inhomogeneous systems of ILCs. This is done by investigating how the structural and
orientational properties of ILC systems are affected by the presence of a free interface between
coexisting bulk states. Both smectic-A phases, $S_A$ and $S_{AW}$, observed in Ref.~\cite{Bartsch2017}
can be in coexistence with the isotropic liquid phase $L$. This is of intrinsic interest, because it
allows one to investigate interfaces which interpolate between a structured and orientationally ordered
(i.e., smectic) phase and an isotropic, homogeneous, and thus structure-less, fluid phase.
\MODIFIED{In particular, the transition in the structural and \MODIFIEDtwo{in the} orientational order
allows one to study the interplay of both properties while they build up at the interface.
Although there are theoretical analyses~\cite{Mederos1992,Somoza1995,Martinez-Raton1998,DeLasHeras2005,Wolfsheimer2006,Reich2007,Praetorius_et_al2013}
\MODIFIEDtwo{concerning} related types of free interfaces, in these studies the \MODIFIEDtwo{constituent}
particles are \MODIFIEDtwo{plain} liquid crystals without any charges.
\MODIFIEDtwo{On the other hand, there is a} vast number of theoretical studies on ionic fluids.
The thermodynamic behavior~\cite{Fisher1993,Fisher1994,Luijten2002,Kobelev2002} as well as
the structure~\cite{Stillinger1_1968,Stillinger2_1968,Lovett1968,Mitchell1977,Harnau2000} of these
types of fluids, in which long-ranged Coulomb interactions are present, have been intensively studied.
However, ionic systems are often analyzed assuming a simple geometry of the \MODIFIEDtwo{particles},
\MODIFIEDtwo{such as} a spherical shape of the particles like in the restricted primitive
model~\cite{Stell1976,Gillan1983,Dickman1999}.
In \MODIFIEDtwo{this} regard, the present \MODIFIEDtwo{study} attempts to \MODIFIEDtwo{analyze} the
\MODIFIEDtwo{aforementioned} type of interface between an isotropic and a smectic phase by
accounting for an anisotropic particle shape combined with the presence of charges.}
Moreover, different orientations between the interface normal and the smectic layer normal are possible.
In this context, an interesting question addresses the equilibrium tilt angle between the interface and
the smectic layer normal. This angle may provide insight into nucleation and growth phenomena which are
affected by the dependence of the interfacial tension on the orientation of the considered
structure~\cite{Wulff1901,Blanc2001}.
The present study is structured as follows:
In Sec.~\ref{sec:theory} the model and the employed density functional theory approach
are presented. Our results for the interfaces between the isotropic liquid $L$ and the considered
smectic-A phases $S_A$ or $S_{AW}$ are discussed in Sec.~\ref{sec:results}.
Finally, in Sec.~\ref{sec:summary} we summarize the results and draw our conclusions.
\section{\label{sec:theory}Model and methods}
This section presents in detail the molecular model of ILCs as employed here.
In particular, we discuss the intermolecular pair potential, which can be applied to a wide range of
ionic and liquid crystalline materials due to its flexibility provided by a large set of parameters.
This model is studied by (classical) density functional theory (DFT), which will be applied to
\modifiedRed{spatially} inhomogeneous systems, in particular free interfaces formed between coexisting
bulk phases. The methodological and technical details of the present DFT approach are described in
Sec.~\ref{sec:theory:DFT}.
\begin{figure}[!t]
\includegraphics[width=0.45\textwidth]{./Ellipsoids.ps}
\caption{
Cross-sectional view of two ILC molecules
in the plane spanned by the orientations $\vec\omega_i,~i=1,2$,
of their long axis.
The particles are treated as rigid prolate ellipsoids,
characterized by their length-to-breadth ratio \modifiedRed{$L/R\geq1$}.
Their orientations are fully described by the direction of
their long axis $\vec\omega_i, \modifiedRed{i=1,2}$;
$\vec{r}_{12}$ is the center-to-center distance vector.
The charges of the ILC molecules (blue dots) are
located on the long axis
at a distance $D$ from their geometrical center.
The counterions are not modeled explicitly,
but they are implicitly accounted for in terms of a background,
giving rise to the screening of the charges of the ILC molecules.
}
\label{fig:ellipsoids}
\end{figure}
\subsection{\label{sec:theory:model}Molecular model and pair potential}
We consider a coarse-grained description of the ILC molecules as rigid prolate ellipsoids of
length-to-breadth ratio \modifiedRed{$L/R\geq1$} (see Fig.~\ref{fig:ellipsoids}).
Thus, the orientation of a molecule is fully described by the direction $\vec\omega(\phi,\theta)$
of its long axis, where $\phi$ and $\theta$ denote the azimuthal and polar angle, respectively.
The two-body interaction potential consists of a hard core repulsive and an additional contribution
$U_\text{GB}+U_\text{es}$ beyond the contact distance $R\sigma$, the sum of which can
be attractive or repulsive:
\begin{equation}
U=
\begin{cases}
\infty
&,
|\vec{r}_{12}| < R\sigma(\vec{\hat r}_{12},\vec\omega_1,\vec\omega_2) \\
\begin{split}
U_\text{GB}(\vec{r}_{12},\vec\omega_1,\vec\omega_2)+\\
U_\text{es}(\vec{r}_{12},\vec\omega_1,\vec\omega_2)
\end{split}
&,
|\vec{r}_{12}| \geq R\sigma(\vec{\hat r}_{12},\vec\omega_1,\vec\omega_2),
\end{cases}
\label{eq:Pairpot}
\end{equation}
where $\vec{r}_{12}:=\vec{r}_2-\vec{r}_1$ denotes the center-to-center distance vector
between the two particles labeled as 1 and 2, and $\vec\omega_i$, $i=1,2$, are their orientations
\modifiedRed{with $|\vec\omega_i|=1$}.
The contact distance $R\sigma(\vec{\hat r}_{12},\vec\omega_1,\vec\omega_2)$ depends on the orientations
of both particles and \modifiedRed{on} the direction of the center-to-center distance vector,
which is expressed by the unit vector $\vec{\hat r}_{12}:=\vec{r}_{12}/|\vec{r}_{12}|$.
In Eq.~(\ref{eq:Pairpot}), we \modifiedRed{have} subdivided the contributions beyond the contact distance
$|\vec{r}_{12}|\geq R\sigma$ into two parts:
$U_\text{GB}(\vec{r}_{12},\vec\omega_1,\vec\omega_2)$ is the well-known Gay-Berne
potential~\cite{Berne1972,Gay_Berne1981}, which incorporates an attractive van der Waals-like interaction
between molecules and which can be understood as a generalization of the Lennard-Jones pair potential
\modifiedRed{between spherical particles} to ellipsoidal particles:
\begin{equation}
\begin{split}
& U_\text{GB}(\vec {r}_{12},\vec\omega_1,\vec\omega_2)
= 4\epsilon(\vec{\hat r}_{12},\vec\omega_1,\vec\omega_2)\\
& \times \left[ \left(1+\frac{|\vec{r}_{12}|}{R}-\sigma(\vec{\hat r}_{12},\vec\omega_1,\vec\omega_2) \right)^{-12} \right.\\
& \left.-~\left(1+\frac{|\vec{r}_{12}|}{R}-\sigma(\vec{\hat r}_{12},\vec\omega_1,\vec\omega_2) \right)^{-6} \right]\\
\end{split}
\label{eq:Pairpot_GB}
\end{equation}
with
\begin{equation}
\begin{split}
\sigma(\vec{\hat r}_{12},\vec\omega_1,\vec\omega_2)
& =\left[ 1-\frac{\chi}{2}\left(\frac{(\vec{\hat r}_{12}\cdot(\vec\omega_1+\vec\omega_2))^2}{1+\chi\vec\omega_1\cdot\vec\omega_2}\right.\right.\\
& \left. + \left.\frac{(\vec{\hat r}_{12}\cdot(\vec\omega_1-\vec\omega_2))^2}{1-\chi\vec\omega_1\cdot\vec\omega_2}\right)\right]^{-1/2}\\
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
\epsilon(\vec{\hat r}_{12},\vec\omega_1,\vec\omega_2)
& =\epsilon_0\left(1-(\chi\vec\omega_1\cdot\vec\omega_2)^2\right)^{-1/2}\\
& \times\left[ 1-\frac{\chi'}{2}\left(\frac{(\vec{\hat r}_{12}\cdot(\vec\omega_1+\vec\omega_2))^2}{1+\chi'\vec\omega_1\cdot\vec\omega_2}\right.\right.\\
& \left. + \left.\frac{(\vec{\hat r}_{12}\cdot(\vec\omega_1-\vec\omega_2))^2}{1-\chi'\vec\omega_1\cdot\vec\omega_2}\right)\right].\\
\end{split}
\label{eq:Pairpot_GB_epsilon}
\end{equation}
The contact distance $R\sigma(\vec{\hat r}_{12},\vec\omega_1,\vec\omega_2)$ and the direction- and
orientation-dependent interaction strength $\epsilon(\vec{\hat r}_{12},\vec\omega_1,\vec\omega_2)$ are both
parametrically dependent on the length-to-breadth ratio $L/R$ via the auxiliary function
$\chi=((L/R)^2-1)/((L/R)^2+1)$. Additionally, $\epsilon(\vec{\hat r}_{12},\vec\omega_1,\vec\omega_2)$
can be tuned via $\chi'=((\epsilon_R/\epsilon_L)^{1/2}-1)/((\epsilon_R/\epsilon_L)^{1/2}+1)$,
where $\epsilon_R/\epsilon_L$ is called the anisotropy parameter, defined in terms of the ratio of
$\epsilon_R$, which is the depth of the potential minimum for parallel particles positioned side by side
$(\vec{\hat r}_{12}\cdot\vec\omega_1=\vec{\hat r}_{12}\cdot\vec\omega_2=0)$, and $\epsilon_L$, which is
the depth of the potential minimum for parallel particles positioned end to end
$(\vec{\hat r}_{12}\cdot\vec\omega_1=\vec{\hat r}_{12}\cdot\vec\omega_2=1)$.
The energy scale of the Gay-Berne pair interaction is set by $\epsilon_0$.
Thus, the Gay-Berne pair potential has four independent free parameters:
$\epsilon_0, R, L/R$, and $\epsilon_R/\epsilon_L$. Note that in the case of spherical particles, i.e.,
for $L=R$, the Gay-Berne pair potential (Eq.~(\ref{eq:Pairpot_GB})) reduces to the well-known isotropic
Lennard-Jones pair potential iff, additionally, the Gay-Berne anisotropy parameter equals unity, i.e.,
$\epsilon_R/\epsilon_L=1$, because then $\sigma(\vec{\hat r}_{12},\vec\omega_1,\vec\omega_2)=1$ and
$\epsilon(\vec{\hat r}_{12},\vec\omega_1,\vec\omega_2)=\epsilon_0$.
The second contribution $U_\text{es}(\vec{r}_{12},\vec\omega_1,\vec\omega_2)$ in Eq.~(\ref{eq:Pairpot}) is
the \emph{e}lectro\emph{s}tatic repulsion of ILC molecules. Within the scope of the present study,
the counterions are not modeled explicitly. \modifiedRed{They} will be considered to be much smaller in size
than the ILC molecules such that they can be treated as a continuous background.
On the level of linear response, this background gives rise to the screening of the pure Coulomb potential
between two charged sites on a length scale given by the Debye screening length $\lambda_D$, such that
the effective electrostatic interaction of the ILC molecules is given by
\begin{equation}
\begin{split}
U_\text{es}(\vec {r}_{12},\vec\omega_1,\vec\omega_2)=~
& \gamma\left[\frac{\exp\left(-\frac{|\vec{r}_{12}+D(\vec\omega_1+\vec\omega_2)|}{\lambda_D}\right)}{|\vec{r}_{12}+D(\vec\omega_1+\vec\omega_2)|}\right.\\
& + \left.\frac{\exp\left(-\frac{|\vec{r}_{12}+D(\vec\omega_1-\vec\omega_2)|}{\lambda_D}\right)}{|\vec{r}_{12}+D(\vec\omega_1-\vec\omega_2)|}\right.\\
& + \left.\frac{\exp\left(-\frac{|\vec{r}_{12}-D(\vec\omega_1+\vec\omega_2)|}{\lambda_D}\right)}{|\vec{r}_{12}-D(\vec\omega_1+\vec\omega_2)|}\right.\\
& + \left.\frac{\exp\left(-\frac{|\vec{r}_{12}-D(\vec\omega_1-\vec\omega_2)|}{\lambda_D}\right)}{|\vec{r}_{12}-D(\vec\omega_1-\vec\omega_2)|}\right].
\end{split}
\label{eq:PairPot_ES}
\end{equation}
The charges $q$ are located symmetrically on \modifiedRed{the long axis of the ILC molecules}
at a distance $D$ from the geometrical center of the particles (compare Fig.~\ref{fig:ellipsoids}).
\MODIFIED{The prefactor $\gamma=q^2/(4\pi\epsilon)$ of dimension $[\text{energy}]\times[\text{length}]$
characterizes the electrostatic energy scale, where $\epsilon$ denotes the permittivity.}
In principle, the Debye screening length
\begin{equation}
\MODIFIED{\lambda_D\propto\sqrt{\frac{T}{\rho_\text{c}}}}
\label{eq:Debyelength}
\end{equation}
is a function of temperature $T$ and of the number density $\rho_\text{c}$ of the counterions. Thus,
it depends on the thermodynamic state of the fluid. However, in the present model $\lambda_D$ is taken
to be a constant parameter.
In order to compare results obtained within this model with data from actual physical systems,
one could measure the value of the Debye screening length experimentally and tune the model
parameter $\lambda_D$ accordingly.
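The screened electrostatic contribution of Eq.~(\ref{eq:PairPot_ES}) is a sum of four Yukawa terms, one per pair of charge sites. A short sketch (our own illustration; the function name and the default parameter values $\gamma/(R\epsilon_0)=0.25$, $D/R=1.8$, $\lambda_D/R=5$ merely mirror the example of Fig.~\ref{fig:pairpot}(d)):

```python
import numpy as np

def u_es(r12, w1, w2, gamma=0.25, D=1.8, lam=5.0):
    """Screened electrostatic repulsion: four Yukawa terms between the
    two charge sites (at +/- D along the long axis) on each molecule;
    lengths in units of R, energies in units of epsilon_0."""
    total = 0.0
    for s1 in (1.0, -1.0):        # site at +D*w1 or -D*w1 on particle 1
        for s2 in (1.0, -1.0):    # site at +D*w2 or -D*w2 on particle 2
            dist = np.linalg.norm(r12 + D*(s1*w1 + s2*w2))
            total += np.exp(-dist/lam)/dist
    return gamma*total
```

For $D=0$ the four terms coincide and the interaction reduces to that of two doubly charged point particles, $4\gamma\exp(-|\vec{r}_{12}|/\lambda_D)/|\vec{r}_{12}|$, matching Fig.~\ref{fig:pairpot}(c).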
\begin{figure}[!t]
\includegraphics[width=0.45\textwidth]{./GB_ES.ps}
\caption{
Contour-plots of the pair potential $U$
for $|\vec{r}_{12}|\geq R\sigma$ in the $x$-$z$-plane
for four cases of particles with fixed length-to-breadth ratio $L/R=4$ and fixed orientations.
In each panel the centers of both particles lie in the plane $y=0$.
In order to illustrate the orientations of the ellipsoids,
they have been included in the plots at contact
with relative direction $\vec{\hat r}_{12}=\vec{\hat x}$.
The set of points at contact in the $x$-$z$-plane is illustrated by the black curve, and the
centers of the particles are shown by small black dots.
Panel (a): uncharged liquid crystal with $\epsilon_R/\epsilon_L=2$.
Panel (b): uncharged liquid crystal with $\epsilon_R/\epsilon_L=4$.
\modifiedRed{With this choice} the anisotropy of the potential is increased slightly.
Panel (c): ILC with $\epsilon_R/\epsilon_L=2,D/R=0,\lambda_D/R=5,\gamma/(R\epsilon_0)=0.25$.
Panel (d): ILC with $\epsilon_R/\epsilon_L=2,D/R=1.8,\lambda_D/R=5,\gamma/(R\epsilon_0)=0.25$.
In (c) and (d) the loci of the charges are indicated as blue dots.
The salmon-colored area is the excluded volume for given orientations of
the two particles.
}
\label{fig:pairpot}
\end{figure}
In Fig.~\ref{fig:pairpot} we illustrate the full pair potential (Eq.~(\ref{eq:Pairpot}))
beyond the contact distance for certain choices of the parameters.
The two top panels, (a) and (b), show the pure Gay-Berne potential (uncharged liquid crystals), which is
predominantly attractive in the space outside the overlap volume (\modifiedRed{salmon}-colored area).
The shape of this overlap volume changes by varying the particle orientations as well as by changing the
length-to-breadth ratio $L/R$. However, these dependences are not apparent from Fig.~\ref{fig:pairpot},
\modifiedRed{because} $L/R=4$ and the particle orientations $\vec\omega_i$ are kept fixed for all panels.
In panel (b) the anisotropy parameter $\epsilon_R/\epsilon_L=4$ is chosen to be two times larger than for
panel (a) ($\epsilon_R/\epsilon_L=2$). Thus, the ratio of the well depth at the tails and at the sides is
increased. The two bottom panels, (c) and (d), show the same choices for the Gay-Berne parameters as for
panel (a), but the electrostatic repulsion of the charged groups on the molecules, illustrated by blue dots,
is included ($\gamma/(R\epsilon_0)=0.25$). In panel (c) the loci of the two charges of the particles coincide
at \modifiedRed{their centers} (i.e., $D/R=0$) while in panel (d) they are located near the tips ($D/R=1.8$).
For both cases with charge, the effective interaction range is significantly increased compared with the
uncharged case and is governed by the Debye screening length, chosen as $\lambda_D/R=5$.
\MODIFIED{It is worth mentioning that the present model cannot be considered \MODIFIEDtwo{as}
a quantitatively valid description of any realistic ionic liquid crystal system.
A screened electrostatic pair interaction of the Yukawa form (Eq.~(\ref{eq:PairPot_ES})) is the
extreme case of the effective pair potential between ions in a (dilute) electrolyte at high
temperatures. Nonetheless, for the purpose of the present theoretical study, which is concerned
with the basic microscopic mechanisms and the generic molecular properties present in ILC systems,
the employed model is appropriate as it incorporates the following key properties of ILCs:
First, a \MODIFIEDtwo{sufficiently} anisotropic shape (prolate) of the particles, i.e.,
they can be considered as (calamitic) mesogens.
In this context, \MODIFIEDtwo{an assessment} of the bulk phase behavior, depending on the
length-to-breadth ratio of the particles, is provided in Ref.~\cite{Bartsch2017}. In particular,
the relevance of a sufficiently anisotropic shape (i.e., $L/R>2$) \MODIFIEDtwo{for observing} genuine
smectic phases is discussed. Second, the ionic properties of ILCs are incorporated such that they
reflect the main feature of ionic fluids, i.e., the \emph{effective} interaction of the ionic compounds
via a \emph{screened} electrostatic pair interaction.
Although the chosen functional form given by Eq.~(\ref{eq:PairPot_ES}) cannot be considered \MODIFIEDtwo{as}
a quantitatively reliable representation, it still accounts for the fact that the actual ion-ion pair
interaction in an ionic fluid is indeed \MODIFIEDtwo{short-ranged}, rather than \MODIFIEDtwo{long-ranged},
as \MODIFIEDtwo{is the case} for the bare Coulomb interaction.}
\MODIFIED{In conclusion, Eq.~(\ref{eq:PairPot_ES}) is \MODIFIEDtwo{characterized} by an effective
interaction strength \MODIFIEDtwo{$\gamma/R$} (which will be numerically expressed as the relative
interaction strength $\gamma/(R\epsilon_0)$ compared to the interaction strength $\epsilon_0$ of the
Gay-Berne potential), an effective interaction range $\lambda_D$, and an effective location $D$ of
the charge sites \MODIFIEDtwo{inside} the coions.
In order to represent specific ILC molecules by a particular set of parameters
of the present model, one would tune the independent model parameters, i.e., $L/R$,
$\epsilon_R/\epsilon_L$, $\gamma/(R\epsilon_0)$, $D/R$, and $\lambda_D/R$ such that the resulting
total pair potential $U(\vec{r}_{12},\vec\omega_1,\vec\omega_2)/\epsilon_0$
(compare Eq.~(\ref{eq:Pairpot}) and Fig.~\ref{fig:pairpot}) resembles (qualitatively) the actual pair
potential of the considered ILC molecules. In \MODIFIEDtwo{this} regard, it is worth mentioning that
\MODIFIEDtwo{in principle} comparisons of our effective \MODIFIEDtwo{theory with} particle simulations
can be made, \MODIFIEDtwo{related} to the \MODIFIEDtwo{study} by Saielli et al.~\cite{Saielli2017},
who performed molecular dynamics (MD) simulations \MODIFIEDtwo{for} a mixture of (ellipsoidal) Gay-Berne
and (spherical) Lennard-Jones particles.
\MODIFIEDtwo{Additionally,} both species carry charges and therefore resemble cations
\MODIFIEDtwo{and anions, respectively}.
Our ad-hoc pair potential (Eq.~(\ref{eq:Pairpot})) of the coions can be compared \MODIFIEDtwo{with} the
effective interaction, which can be \MODIFIEDtwo{determined} as the logarithm of the particle-particle
distribution function of the elongated cations in the MD simulations.}
\MODIFIED{We note that the choices $L/R=4$ and $\epsilon_R/\epsilon_L=2$, which are used throughout
\MODIFIEDtwo{our analysis}, are comparable to \MODIFIEDtwo{those used in} previous studies
\MODIFIEDtwo{(see, e.g., Refs.~\cite{Berardi1993,Kondrat_et_al2010,Bartsch2017,Saielli2017}) for}
similar kinds of particles.
While these values of the Gay-Berne parameters give rise to the formation of smectic phases,
the occurrence of nematic phases is typically observed for much larger values of $L/R$ and
$\epsilon_R/\epsilon_L$~\cite{Bates1999}.}
\subsection{\label{sec:theory:DFT}Density functional theory}
\begin{figure}[!t]
\includegraphics[width=0.45\textwidth]{./interface.eps}
\caption{
Sketch of the \modifiedGreen{interface structure} under consideration.
Consider a planar interface, illustrated by the horizontal red line, between the isotropic
\modifiedRed{bulk} liquid $L$, \modifiedRed{imposed} as the boundary condition at $z\rightarrow-\infty$,
and the smectic-A phase $S_A$ (or $S_{AW}$), \modifiedRed{imposed as} the boundary condition at
$z\rightarrow+\infty$. Thus, the interface normal
(red vertical arrow) points \modifiedRed{into the} $z$-direction. At the top, the tails of four
layers of particles \modifiedRed{of the (ordinary) $S_A$ phase are visible}, which are well aligned
with the smectic layer normal $\vec{\hat n}:=\sin(\alpha)\,\vec{\hat x}+\cos(\alpha)\,\vec{\hat z}$.
In the bulk $S_A$ phase, the system is periodic in the direction of the smectic layer normal
$\vec{\hat n}$ with periodicity $d$ which is a multiple of the smectic layer spacing.
\modifiedBlue{For the $S_A$ phase $d$ turns out to be two times the distance between neighboring
smectic layers \modifiedGreen{(see Sec.~\ref{sec:theory:DFT} below Eq.~(\ref{eq:layernormal}))}}.
Thus, for a given tilt angle $\alpha$ between the interface normal and the smectic layer normal
$\vec{\hat n}$, the system is periodic in $x$-direction with periodicity $d_x=d/\sin(\alpha)$.
Note that the interface \modifiedGreen{structure} is \modifiedRed{translationally invariant in the}
$y$-direction for all angles $0\leq\alpha\leq\pi/2$. For $\alpha=0$ the system
\modifiedRed{exhibits in addition} translational invariance in \modifiedRed{the} $x$-direction.
}
\label{fig:interface_sketch}
\end{figure}
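The tilt geometry of Fig.~\ref{fig:interface_sketch} fixes the in-plane periodicity: for a bulk periodicity $d$ along the layer normal and a tilt angle $\alpha$, the structure repeats along the interface with period $d_x=d/\sin(\alpha)$. A small sketch (function name and the $\alpha=0$ convention are our own):

```python
import numpy as np

def in_plane_period(d, alpha):
    """In-plane periodicity d_x = d/sin(alpha) of a smectic whose layer
    normal is tilted by alpha with respect to the interface normal."""
    if np.isclose(alpha, 0.0):
        # layer normal parallel to the interface normal: the structure
        # is translationally invariant in the x-direction
        return np.inf
    return d/np.sin(alpha)
```

For $\alpha=\pi/2$ the layers stand perpendicular to the interface and $d_x=d$; as $\alpha\to0$ the period diverges, recovering the translational invariance noted in the caption.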
The degrees of freedom of the particles (compare Sec.~\ref{sec:theory:model}) are fully described by the
positions $\vec{r}$ of their centers and the orientations $\vec\omega$ of their long axes.
Thus, within density functional theory, an appropriate variational grand potential functional
$\beta\Omega[\rho]$ of position- and orientation-dependent number density profiles $\rho(\vec{r},\vec\omega)$
has to be found; \modifiedRed{the equilibrium density profile minimizes the functional.}
The grand potential functional for uniaxial particles,
in the absence of external fields, can generically be expressed as
\begin{equation}
\begin{split}
\beta\Omega\left[\rho\right]=
& \Int{\mathcal{V}}{3}{r}\Int{\mathcal{S}}{2}{\omega}
\rho(\vec{r},\vec\omega)
\left[\ln\left(4\pi\Lambda^3\rho(\vec{r},\vec\omega)\right)\right.\\
&-\left.\left(1+\beta\mu\right)\right]
+\beta\mathcal{F}\left[\rho\right],
\end{split}
\label{eq:Omega}
\end{equation}
where the integration domains $\mathcal{V}$ and $\mathcal{S}$ denote the system volume
and the full solid angle, respectively.
The first term in Eq.~(\ref{eq:Omega}) is the purely entropic free energy contribution
of non-interacting uniaxial particles,
where $\beta=1/(k_BT)$ denotes the inverse thermal energy,
$\mu$ the chemical potential, and $\Lambda$ the thermal de Broglie wavelength.
The last term is the excess free energy $\beta\mathcal{F}\left[\rho\right]$ in units of $k_BT$,
which incorporates the effects of the inter-particle interactions.
Minimizing Eq.~(\ref{eq:Omega}) leads to the Euler-Lagrange equation,
which implicitly determines the equilibrium density profile $\rho(\vec{r},\vec\omega)$:
\begin{equation}
\rho(\vec{r},\vec\omega)=\frac{
\exp\left[\beta\mu+c^{(1)}\left(\vec{r},\vec\omega,[\rho]\right)\right]
}{4\pi\Lambda^3},
\label{eq:ELG}
\end{equation}
where
\begin{equation}
c^{(1)}\left(\vec{r},\vec\omega,[\rho]\right)=
-\frac{\delta\beta\mathcal{F}[\rho]}{\delta\rho}
\label{eq:Dir1CorrFunc}
\end{equation}
is the one-particle direct correlation function.
It is fully determined by the excess free energy functional $\beta\mathcal{F}[\rho]$.
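In practice, Eq.~(\ref{eq:ELG}) is solved self-consistently, e.g., by damped Picard iteration. The sketch below is our own illustration, not the numerical scheme of the original work: it solves a scalar toy version of the fixed-point problem $\rho=\exp(\beta\mu+c^{(1)}[\rho])$ with a hypothetical mean-field $c^{(1)}$, in units where $4\pi\Lambda^3=1$.

```python
import numpy as np

def picard_solve(c1, beta_mu, rho0, alpha=0.1, tol=1e-12, maxiter=100000):
    """Damped Picard iteration for rho = exp(beta_mu + c1(rho)).
    alpha is the mixing parameter; units chosen so 4*pi*Lambda^3 = 1."""
    rho = rho0
    for _ in range(maxiter):
        rho_new = np.exp(beta_mu + c1(rho))
        if abs(rho_new - rho) < tol:
            return rho_new
        # mix old and new profiles to stabilize the iteration
        rho = (1.0 - alpha)*rho + alpha*rho_new
    return rho
```

The same damped update, applied pointwise to $\rho(\vec{r},\vec\omega)$ with the actual functional $c^{(1)}(\vec{r},\vec\omega,[\rho])$, is a standard route to the equilibrium profiles discussed below.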
The excess free energy functional is the characterizing quantity of the underlying many-body problem.
\modifiedRed{However, in general it is} not known exactly, so that one has to \modifiedRed{adopt} appropriate
approximations \modifiedRed{of} it. Following the approach of our previous \modifiedRed{study}~\cite{Bartsch2017}
\modifiedRed{concerning} the bulk phase behavior of \modifiedRed{ILCs, in the spirit of Ref.~\cite{Tarazona1985}
a weighted density expression for} $\beta\mathcal{F}[\rho]$ is considered:
\begin{equation}
\beta\mathcal{F}[\rho]=
\frac{1}{2}\Int{\mathcal{V}}{3}{r}\Int{\mathcal{S}}{2}{\omega}
\rho(\vec{r},\vec\omega)
\beta\psi\left(\vec{r},\vec\omega,[\bar\rho]\right),
\label{eq:F_WDA}
\end{equation}
where $\beta\psi(\vec{r},\vec\omega,[\bar\rho])$ denotes the effective one-particle potential.
It is a functional of the so-called projected density $\bar\rho(\vec{r},\vec\omega)$:
\begin{equation}
\begin{split}
& \bar\rho(\vec{r},\vec\omega,[\rho]) = \frac{1}{4\pi}\bigg[
Q_0(\vec{r},[\rho])+
Q_1(\vec{r},[\rho])\cos\left(2\pi(\vec{r}\cdot\vec{\hat n})/d\right)\\
&+Q_2(\vec{r},[\rho])\cos\left(4\pi(\vec{r}\cdot\vec{\hat n})/d\right)+5P_2(\vec{\omega}\cdot\vec{\hat n})
\bigg(Q_3(\vec{r},[\rho])\\
&+Q_4(\vec{r},[\rho])\cos\left(2\pi(\vec{r}\cdot\vec{\hat n})/d\right)
+Q_5(\vec{r},[\rho])\cos\left(4\pi(\vec{r}\cdot\vec{\hat n})/d\right)\bigg)\bigg],
\end{split}
\label{eq:WeightedDensity}
\end{equation}
where $P_2(y)=(3y^2-1)/2$ is the Legendre polynomial of degree $2$.
\modifiedRed{We point out} that $\bar\rho(\vec{r},\vec\omega)$ represents an expansion of the density
profile $\rho(\vec{r},\vec\omega)$ in terms of a second-order \modifiedRed{Fourier series in position and a
second-order Legendre series in orientation}.
Thus, the coefficients $Q_i(\vec{r})$ are the corresponding expansion coefficients,
which will be defined below.
\MODIFIED{It is worth mentioning that, although the projected density $\bar\rho(\vec{r},\vec\omega)$
might take negative values, this does not imply unphysical behavior because the actual density
$\rho(\vec{r},\vec\omega)$ is \MODIFIEDtwo{determined from} the Euler-Lagrange equation
(i.e., Eq.~(8)) and thus is strictly positive.}
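As a consistency check of Eq.~(\ref{eq:WeightedDensity}), one can verify numerically that the orientational average of $\bar\rho$ receives no contribution from the $P_2$ terms, because $\int_{\mathcal{S}}\mathrm{d}^2\omega\,P_2(\vec\omega\cdot\vec{\hat n})=0$. The following Python sketch implements this check; all coefficient values are purely illustrative and are not taken from the actual calculation:

```python
import numpy as np

def projected_density(rn, wn, Q, d):
    # Eq. (WeightedDensity): rn = r·n, wn = ω·n, Q = (Q0, ..., Q5)
    P2 = 0.5 * (3.0 * wn**2 - 1.0)
    c1 = np.cos(2.0 * np.pi * rn / d)
    c2 = np.cos(4.0 * np.pi * rn / d)
    return (Q[0] + Q[1] * c1 + Q[2] * c2
            + 5.0 * P2 * (Q[3] + Q[4] * c1 + Q[5] * c2)) / (4.0 * np.pi)

def trapezoid(y, x):
    # simple composite trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

Q = (0.6, 0.2, 0.05, 0.3, 0.1, 0.02)   # illustrative coefficients only
d, rn = 4.3, 1.1                        # illustrative periodicity and position
wn = np.linspace(-1.0, 1.0, 20001)      # ω·n on the unit sphere
# orientational average: ∫dω = 2π ∫ d(ω·n), using rotational symmetry around n
avg = 2.0 * np.pi * trapezoid(projected_density(rn, wn, Q, d), wn)
# only the purely positional part of Eq. (WeightedDensity) should survive
positional = (Q[0] + Q[1] * np.cos(2.0 * np.pi * rn / d)
              + Q[2] * np.cos(4.0 * np.pi * rn / d))
```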
The following three types of bulk phases can be studied within
this \modifiedRed{particular} framework~\cite{Bartsch2017}:
First, isotropic liquids with $Q_0=\text{const}_0$ and $Q_i=0$ for $i>0$.
Second, nematic liquids with $Q_i=\text{const}_i$, if $i=0,3$, and $Q_i=0$ otherwise.
Third, smectic-A phases with $Q_i=\text{const}_i$ for $i\in\{0,\cdots,5\}$.
While \modifiedRed{for isotropic and nematic liquids} the system is translationally invariant
in all spatial directions, in \modifiedRed{the} case of smectic-A phases the system is periodic in the
direction of the smectic layer normal $\vec{\hat n}$ with periodicity $d$,
which is a multiple of the smectic layer spacing.
For smectic-A phases the director is parallel to the smectic layer normal $\vec{\hat n}$ and therefore
\modifiedRed{the occurrence of} rotationally symmetric distributions of \modifiedRed{the} orientations
$\vec\omega$ around $\vec{\hat n}$, incorporated by the dependence
\modifiedRed{on $\vec\omega\cdot\vec{\hat n}$} in Eq.~(\ref{eq:WeightedDensity}), are plausible.
\modifiedRed{We note} that, due to the mirror symmetry of the smectic layers around their centers,
the \modifiedGreen{odd} Fourier modes in the projected density
$\bar\rho(\vec{r},\vec\omega)$ vanish for bulk smectic-A phases if the coordinate system is chosen such
that the origin is located \modifiedRed{at} the center of one of the smectic layers.
This is a direct consequence of the underlying point symmetry of the particles
considered here \modifiedRed{(see Fig.~\ref{fig:ellipsoids})}. Considering additional terms,
corresponding to the \modifiedGreen{odd} modes in the second-order Fourier expansion of the density
$\rho(\vec{r},\vec\omega)$, would only give rise to a shift of the location of the bulk smectic layers.
Although for systems \modifiedRed{with interfaces} the \modifiedGreen{odd} modes in general do not vanish,
here we neglect these contributions completely. The implications of additionally considering
the \modifiedGreen{odd} terms (up to second order) \modifiedRed{are} discussed in Appendix~\ref{sec:appendix:UnevenModes}.
Both approaches are weighted-density-like approximations \modifiedRed{of} the exact free energy functional.
A priori, it is not obvious which one leads to better results, because considering more terms of
the Fourier series leads only to a more \modifiedRed{accurate} representation of $\rho(\vec{r},\vec\omega)$
by the projected density $\bar\rho(\vec{r},\vec\omega)$.
However, this does not imply that the resulting free energy functional $\beta\mathcal{F}[\bar\rho]$
is closer to its exact \modifiedRed{form}, because independent of the choice for
$\bar\rho(\vec{r},\vec\omega)$ it relies on the Parsons-Lee approach for its reference part and on the
so-called modified mean-field approximation for the excess part \modifiedRed{(see below)}.
Nevertheless, our approach of considering \modifiedRed{in Eq.~(\ref{eq:WeightedDensity})} only the even
modes up to \modifiedGreen{second order} captures the three types of bulk phases relevant for
\modifiedRed{the present} study, namely $L$, $N$, \modifiedRed{as well as} the smectic-A phases
$S_A$ and $S_{AW}$, in the same way as the full second-order Fourier expansion.
The effective one-particle potential $\beta\psi(\vec{r},\vec\omega)$
consists of two contributions. The first one incorporates the hard-core interactions via
the well-studied Parsons-Lee functional~\cite{Parsons1979,Lee1987},
\begin{align}
&\beta\psi_\text{PL}(\vec{r},\vec\omega,[\bar\rho]) =
-\Int{\mathcal{V}}{3}{r'}\Int{\mathcal{S}}{2}{\omega'}
\bar\rho(\vec{r}',\vec\omega')\nonumber\\
&\times\frac{\mathcal{J}(Q_0(\vec{r}))+\mathcal{J}(Q_0(\vec{r}'))}{2}
f_M(\vec{r}-\vec{r}',\vec\omega,\vec\omega'),
\label{eq:Eff1Pot_PL}
\end{align}
where $f_M(\vec{r}-\vec{r}',\vec\omega,\vec\omega')$ is the Mayer f-function~\cite{Hansen1976}
of the hard core pair interaction potential and $\mathcal{J}(Q_0)$ modifies the corresponding
original Onsager free energy functional (i.e., the second-order virial approximation) such that
the Carnahan-Starling equation of state~\cite{Lee1987} is reproduced for spheres, i.e.,
\modifiedRed{for} $L=R$~\cite{Onsager1949,VanRoij2005}:
\begin{equation}
\mathcal{J}(Q_0)=\frac{1-\frac{3}{4}\eta_0(Q_0)}{(1-\eta_0(Q_0))^2},
\label{eq:ScalingFunctionJ}
\end{equation}
where \modifiedRed{$\eta_0(Q_0)=Q_0\,LR^2\pi/6$ (for $Q_0$ see Eq.~(\ref{eq:WeightedDensity}))}
denotes the mean packing fraction within one (bulk) smectic layer.
It is proportional to the coefficient $Q_0$, which gives the mean density within such a smectic layer
(see below). The original Onsager functional is recovered \modifiedRed{in Eq.~(\ref{eq:Eff1Pot_PL})} by
replacing $\mathcal{J}(Q_0)$ by $Q_0$.
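The limits of the scaling function in Eq.~(\ref{eq:ScalingFunctionJ}) can be probed with a short Python sketch (the geometry $L/R=4$, with $R$ as length unit, is merely illustrative): $\mathcal{J}\rightarrow1$ for vanishing packing fraction, and $\mathcal{J}$ grows with increasing $\eta_0$:

```python
import numpy as np

def eta0(Q0, L, R):
    # mean packing fraction within one (bulk) smectic layer: η0 = Q0 L R² π / 6
    return Q0 * L * R**2 * np.pi / 6.0

def J(Q0, L, R):
    # Parsons-Lee scaling function, Eq. (ScalingFunctionJ)
    e = eta0(Q0, L, R)
    return (1.0 - 0.75 * e) / (1.0 - e)**2

L, R = 4.0, 1.0                  # illustrative geometry (R sets the length unit)
Q0_half = 3.0 / (4.0 * np.pi)    # chosen such that η0 = 1/2 for this geometry
```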
The second contribution to the effective one-particle potential $\beta\psi[\bar\rho]$
takes into account the interactions beyond the contact distance
(see the case $|\vec{r}_{12}|\geq R\sigma$ in Eq.~(\ref{eq:Pairpot}))
within the modified mean-field approximation~\cite{Teixeira1991}, \modifiedRed{which is}
a variant of the extended random phase approximation (ERPA)~\cite{Evans1979}:
\begin{align}
&\beta\psi_\text{ERPA}(\vec{r},\vec\omega,[\bar\rho]) =
\Int{\mathcal{V}}{3}{r'}\Int{\mathcal{S}}{2}{\omega'}
\bar\rho(\vec{r}',\vec\omega')\nonumber\\
&\times\beta U(\vec{r}-\vec{r}',\vec\omega,\vec\omega')
(1+f_M(\vec{r}-\vec{r}',\vec\omega,\vec\omega')).
\label{eq:Eff1Pot_ERPA}
\end{align}
The present \modifiedRed{study} is devoted to the \modifiedRed{analysis} of free interfaces which are
formed between coexisting bulk phases. In particular, \modifiedRed{the} planar interfaces between the
isotropic liquid $L$ and \modifiedRed{the} two different types of smectic-A phases ($S_A$ or $S_{AW}$,
see Sec.~\ref{sec:results}) will be considered, \modifiedRed{for which} the interface normal
\modifiedRed{is expected to} be parallel to the $z$-direction (see Fig.~\ref{fig:interface_sketch}).
Due to the isotropy of the liquid phase $L$, the direction of the smectic layer normal
\modifiedRed{
\begin{equation}
\vec{\hat n}(\alpha):=\sin(\alpha)\,\vec{\hat x}+\cos(\alpha)\,\vec{\hat z}
\label{eq:layernormal}
\end{equation}
}
can be chosen to lie in the $x$-$z$-plane. Its orientation is \modifiedRed{fully} determined by the tilt
angle $\alpha$. For $\alpha=0$ the smectic layer normal $\vec{\hat n}=\vec{\hat z}$ points
\modifiedRed{into the} $z$-direction, like the interface normal, while for $\alpha=\pi/2$ it points
\modifiedRed{into} the $x$-direction and \modifiedRed{thus} it is perpendicular to the interface normal.
The \modifiedRed{interfacial} systems considered here are \modifiedRed{translationally} invariant in the
$y$-direction and show a periodic structure in the $x$-direction with a periodicity $d_x=d/\sin(\alpha)$
\modifiedRed{(compare Fig.~\ref{fig:interface_sketch})} where $d$ is a multiple of the smectic layer spacing.
\modifiedBlue{
(We note that the value of $d$ is determined by the corresponding bulk density \modifiedGreen{distribution}
which minimizes the grand potential functional, i.e., maximizes the bulk pressure
(\modifiedGreen{see} Sec.~2.2.2 in Ref.~\cite{Bartsch2017}). It turns out that for the $S_{AW}$ phase $d$
equals the smectic layer spacing, while for the $S_A$ phase it equals two times the layer spacing, because
for the $S_A$ phase one obtains bulk solutions $\rho^{(0)}(\vec{r},\vec\omega)$ with $Q_1=Q_4=0$
\modifiedGreen{(cf. Eqs.~(\ref{eq:WeightedDensity}), (\ref{eq:ExpansionCoeffs_interface}), and
(\ref{eq:ExpansionCoeffs2_interface}))}. Thus the periodicity $d$ along the layer normal $\vec{\hat n}$
is \modifiedGreen{twice} the smectic layer spacing, i.e., twice the distance between neighboring layers.)
}
\modifiedRed{For} $\alpha=0$, $d_x$ diverges and the system is \modifiedRed{translationally} invariant in
the $x$-direction, too.
As mentioned above, the coefficients $Q_i(\vec{r})$ in Eq.~(\ref{eq:WeightedDensity})
arise from expanding $\rho(\vec{r},\vec\omega)$ in second-order Legendre
and Fourier series~\cite{Bartsch2017}:
\begin{align}
Q_i(\vec{r},[\rho])&=\frac{1}{\mathcal{V}_d}
\Int{\mathcal{V}}{3}{r'}\Int{\mathcal{S}}{2}{\omega'}
\rho(\vec{r}',\vec\omega')w_i(\vec{r},\vec{r}',\vec\omega')
\label{eq:ExpansionCoeffs_interface}
\end{align}
with
\begin{align}
w_0&= \mathcal{T}(\vec{r}-\vec{r}'),\nonumber\\
w_1&=2\mathcal{T}(\vec{r}-\vec{r}')\cos\left(2\pi(\vec{r}'\cdot\vec{\hat n})/d\right),\nonumber\\
w_2&=2\mathcal{T}(\vec{r}-\vec{r}')\cos\left(4\pi(\vec{r}'\cdot\vec{\hat n})/d\right),\nonumber\\
w_3&= \mathcal{T}(\vec{r}-\vec{r}')P_2(\vec\omega'\cdot\vec{\hat n}),\nonumber\\
w_4&=2\mathcal{T}(\vec{r}-\vec{r}')P_2(\vec\omega'\cdot\vec{\hat n})\cos\left(2\pi(\vec{r}'\cdot\vec{\hat n})/d\right),\nonumber\\
w_5&=2\mathcal{T}(\vec{r}-\vec{r}')P_2(\vec\omega'\cdot\vec{\hat n})\cos\left(4\pi(\vec{r}'\cdot\vec{\hat n})/d\right)
\label{eq:ExpansionCoeffs2_interface}
\end{align}
where
\begin{equation}
\mathcal{T}(\vec{r}-\vec{r}')=
\begin{cases}
1,&\vec{r}-\vec{r}'\in\mathcal{V}_d\\
0,&\text{else}.
\end{cases}
\label{eq:cut-off-function}
\end{equation}
$\mathcal{T}(\vec{r}-\vec{r}')$ is a cut-off function which defines the integration domain
$\mathcal{V}_d:=\Int{\mathcal{V}}{3}{r'}\mathcal{T}(\vec{r}-\vec{r}')$ around position $\vec{r}$.
For $0<\alpha\leq\pi/2$ the considered interfaces between \modifiedRed{the} isotropic liquid $L$ and
\modifiedRed{the} smectic-A phases $S_A$ or $S_{AW}$ \modifiedRed{exhibit} periodic structures in the
$x$-direction with periodicity $d_x=d/\sin(\alpha)$. Here, $\mathcal{V}_d$ \modifiedRed{is a} slice of
length $d_x$ in $x$-direction \modifiedRed{with a} vanishing extension in $z$-direction
\modifiedRed{centered} at position $\vec{r}$, i.e.,
$\mathcal{T}(\vec{r}-\vec{r}')=\Theta(d_x/2-|x-x'|)\delta(z-z')$ where $\Theta(x)$ and $\delta(x)$ are the
Heaviside step function and the Dirac delta function, respectively.
\modifiedBlue{
The index $d$ of the integration domain $\mathcal{V}_d$ indicates that $\mathcal{V}_d$ corresponds to
a region which is specified by the periodicity $d$.
}
Due to the translational invariance in
$y$-direction the extension of the integration domain $\mathcal{V}_d$ can be chosen arbitrarily in the
$y$-direction. \modifiedRed{Due to} the periodicity of $\rho(\vec{r},\vec\omega)$ in the $x$-direction,
this choice of the integration domain $\mathcal{V}_d$ leads to coefficients $Q_i(z)$
(Eq.~(\ref{eq:ExpansionCoeffs_interface})) which depend only on $z$, i.e., on the coordinate parallel
to the interface normal.
For $0<\alpha\leq\pi/2$ one could also consider an integration domain which has a non-vanishing extent
in $z$-direction. \modifiedRed{However,} such a choice has at least two disadvantages:
First, unlike $d_x$, which corresponds to the \modifiedRed{periodicity of the system in $x$-direction,
for $0<\alpha\leq\pi/2$ there is no} obvious choice \modifiedMagenta{for the extent of $\mathcal{V}_d$}
parallel to the interface normal.
Additionally, there is no unique choice for the geometrical \modifiedRed{shape} of the integration domains;
besides using a rectangular form, one could also use any other (\modifiedRed{two}-dimensional) geometrical
object as integration domain $\mathcal{V}_d$.
In this sense the slice of length $d_x$ perpendicular to the interface normal is a simple
but also consistent choice.
Second, this choice \modifiedRed{renders} the evaluation numerically less demanding,
because it requires only a one-dimensional integration
(\modifiedRed{exploiting} the translational invariance in $y$-direction),
instead of evaluating a two-dimensional integral.
\modifiedRed{We note} that an infinite extent of the integration domain parallel to the interface normal
leads to coefficients $Q_i$ \modifiedRed{which} are independent of the position $\vec{r}$ and therefore
cannot be used to obtain interface profiles.
If $\alpha=0$, i.e., the smectic layer normal $\vec{\hat n}=\vec{\hat z}$ is parallel to the interface
normal, $d_x$ diverges and the system is \modifiedRed{translationally} invariant in $x$- and
$y$-direction. \modifiedRed{In this case}, the integration domain $\mathcal{V}_d$ \modifiedRed{has an
extent of} length $d$ in $z$-direction, i.e., $\mathcal{T}(\vec{r}-\vec{r}')=\Theta(d/2-|z-z'|)$ with
arbitrary \modifiedRed{extent} in \modifiedRed{the} lateral dimensions $x$ and $y$.
\modifiedRed{As before}, the coefficients $Q_i(z)$ depend only on the $z$-coordinate.
It is worth mentioning that for all tilt angles $0\leq\alpha\leq\pi/2$ the correct (constant) bulk values
of the coefficients $Q_i$ are \modifiedRed{recovered}, although for $0<\alpha<\pi/2$ the orientation of the
integration domain $\mathcal{V}_d$ (recall that $\mathcal{V}_d$ is a slice of \modifiedRed{width} $d_x$ in
$x$-direction for all $\alpha\in(0,\pi/2]$) changes with respect to the direction of the smectic layer normal
$\vec{\hat n}(\alpha)$. However, because the integration domain \modifiedRed{covers} a full period $d_x$ in
$x$-direction, it gives the same bulk values for the \modifiedRed{coefficients} $Q_i$ as an
integration domain oriented parallel to the smectic layer normal $\vec{\hat n}$,
which is the case for $\alpha=0$ and $\pi/2$.
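As a one-dimensional illustration of Eqs.~(\ref{eq:ExpansionCoeffs_interface})-(\ref{eq:cut-off-function}) for $\alpha=0$, the following Python sketch uses a hypothetical, orientation-integrated bulk-like profile $n(z)=A_0+A_1\cos(2\pi z/d)$ and recovers the constant bulk coefficients $Q_0=A_0$ and $Q_1=A_1$ independently of the window position, in line with the statement above:

```python
import numpy as np

def trapezoid(y, x):
    # simple composite trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def Q01(n, z, zc, d):
    # Q0(zc) and Q1(zc) for α = 0: window of length d centered at zc;
    # the orientation integral is assumed to be already carried out in n(z)
    m = np.abs(z - zc) <= 0.5 * d
    zz, nn = z[m], n[m]
    Q0 = trapezoid(nn, zz) / d
    Q1 = trapezoid(2.0 * nn * np.cos(2.0 * np.pi * zz / d), zz) / d
    return Q0, Q1

# hypothetical bulk-like smectic profile n(z) = A0 + A1 cos(2π z / d)
d, A0, A1 = 4.3, 0.6, 0.2
z = np.linspace(-3.0 * d, 3.0 * d, 60001)
n = A0 + A1 * np.cos(2.0 * np.pi * z / d)
Q0, Q1 = Q01(n, z, 0.7, d)   # window centered at an arbitrary position zc
```

Because the window covers exactly one period, the extracted coefficients do not depend on the chosen center $z_c$.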
Finally, the one-particle direct correlation function $c^{(1)}\left(\vec{r},\vec\omega,[\rho]\right)$
can be derived by considering Eq.~(\ref{eq:Dir1CorrFunc}) \modifiedRed{which} leads to the following
(modified) expression for $c^{(1)}\left(\vec{r},\vec\omega,[\rho]\right)$
(compare Eq.~(21) in Ref.~\cite{Bartsch2017}):
\begin{equation}
\begin{split}
&c^{(1)}\left(\vec{r},\vec\omega,[\rho]\right)=
-\beta\psi(\vec{r},\vec\omega,[\bar\rho])+\\
&\frac{1}{2\mathcal{V}_d}\Int{\mathcal{V}}{3}{r'}\Int{\mathcal{S}}{2}{\omega'}
\bar\rho(\vec{r}',\vec\omega')\partial_{Q_0}\mathcal{J}(Q_0(\vec{r'}))
\mathcal{T}(\vec{r}-\vec{r}')\times\\
&\Int{\mathcal{V}}{3}{r''}\Int{\mathcal{S}}{2}{\omega''}
\bar\rho(\vec{r}'',\vec\omega'')f_M(\vec{r}'-\vec{r}'',\vec\omega',\vec\omega'').
\end{split}
\label{eq:Calc_c1_Approx_interface}
\end{equation}
\MODIFIED{
\MODIFIEDtwo{We note} that in Eq.~(\ref{eq:Calc_c1_Approx_interface})
$\frac{\delta Q_0(\vec{r}')}{\delta\bar\rho(\vec{r},\vec\omega)}$ has been replaced by
$\frac{\delta Q_0(\vec{r}')}{\delta\rho(\vec{r},\vec\omega)}=
\frac{\mathcal{T}(\vec{r}-\vec{r}')}{\mathcal{V}_d}$.
This replacement, i.e., \MODIFIEDtwo{the equation}
$\frac{\delta Q_0(\vec{r}')}{\delta\bar\rho(\vec{r},\vec\omega)}=
\frac{\delta Q_0(\vec{r}')}{\delta\rho(\vec{r},\vec\omega)}$, is \MODIFIEDtwo{valid exactly} only
for bulk phases.
In general, these two functional derivatives are related via
$\frac{\delta Q_0(\vec{r}')}{\delta\bar\rho(\vec{r},\vec\omega)}=
\Int{\mathcal{V}}{3}{r''}\Int{\mathcal{S}}{2}{\omega''}
\frac{\delta Q_0(\vec{r}')}{\delta\rho(\vec{r}'',\vec\omega'')}
\frac{\delta\rho(\vec{r}'',\vec\omega'')}{\delta\bar\rho(\vec{r},\vec\omega)}$, which, however,
cannot be calculated analytically.
\MODIFIEDtwo{Determining $\frac{\delta\rho(\vec{r}'',\,\vec\omega'')}{\delta\bar\rho(\vec{r},\,\vec\omega)}$}
requires the functional derivative of the Euler-Lagrange equation (i.e., Eq.~(\ref{eq:ELG}))
which would \MODIFIEDtwo{in turn} produce terms containing
$\frac{\delta Q_0(\vec{r}')}{\delta\bar\rho(\vec{r},\vec\omega)}$.
Nevertheless, the derivation of Eq.~(\ref{eq:Calc_c1_Approx_interface})
(following from Eq.~(21) in Ref.~\cite{Bartsch2017}) incorporates a modification
of the exact one-particle direct correlation \MODIFIEDtwo{function}
such that the density profile $\rho(\vec{r},\vec\omega)$ is replaced by the
projected density $\bar\rho(\vec{r},\vec\omega)$. In this respect, replacing
$\frac{\delta Q_0(\vec{r}')}{\delta\bar\rho(\vec{r},\vec\omega)}$ by
$\frac{\delta Q_0(\vec{r}')}{\delta\rho(\vec{r},\vec\omega)}=
\frac{\mathcal{T}(\vec{r}-\vec{r}')}{\mathcal{V}_d}$ (which follows from
Eqs.~(\ref{eq:ExpansionCoeffs_interface})-(\ref{eq:cut-off-function})) is consistent with our approach,
as it also implies \MODIFIEDtwo{an} exchange of $\rho(\vec{r},\vec\omega)$ and
$\bar\rho(\vec{r},\vec\omega)$.
\MODIFIEDtwo{Moreover, the exchange yields} the correct bulk limit of the interface profile
$\rho(\vec{r},\vec\omega)$ at the boundaries, i.e., $z\rightarrow\pm\infty$.}
\modifiedRed{Equation~(\ref{eq:ELG}) has been} solved numerically (utilizing a Picard scheme with retardation)
\modifiedRed{by} using Eq.~(\ref{eq:Calc_c1_Approx_interface}) \modifiedRed{as well as} the (constant) bulk
values of the coefficients $Q_{i,L}=Q_i(z\rightarrow-\infty)$ in the isotropic liquid phase $L$
and $Q_{i,S}=Q_i(z\rightarrow\infty)$ in the smectic-A phase ($S_A$ or $S_{AW}$) at coexistence
$(T,\mu)=(T_\text{coex},\mu_\text{coex})$. The structural properties and the orientational order
at the free interface are analyzed in terms of the interface profiles of the packing fraction
\modifiedRed{
\begin{equation}
\eta(\vec{r})=\frac{\pi}{6}LR^2n(\vec{r})
=\frac{\pi}{6}LR^2\Int{\mathcal{S}}{2}{\omega}\rho(\vec{r},\vec\omega)
\label{eq:numberdensity}
\end{equation}
}
\modifiedRed{with the number density $n(\vec{r}):=\Int{\mathcal{S}}{2}{\omega}\rho(\vec{r},\vec\omega)$,}
and \modifiedGreen{in terms of} the orientational order parameter
\modifiedRed{
\begin{equation}
S_2(\vec{r}):=\Int{\mathcal{S}}{2}{\omega}f(\vec{r},\vec\omega)P_2(\vec\omega\cdot\vec{\hat n});
\label{eq:oriorderparameter}
\end{equation}
}
$f(\vec{r},\vec\omega):=\rho(\vec{r},\vec\omega)/n(\vec{r})$
\modifiedRed{describes the orientational distribution.}
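The Picard scheme with retardation mentioned above is, generically, an under-relaxed fixed-point iteration. The following Python sketch demonstrates the principle on a toy fixed-point problem (not the actual Euler-Lagrange equation~(\ref{eq:ELG}); the mixing parameter is illustrative):

```python
import numpy as np

def picard(f, x0, mix=0.1, tol=1e-12, itmax=100000):
    # under-relaxed ("retarded") Picard iteration for the fixed point x = f(x):
    # x_new = (1 - mix) * x_old + mix * f(x_old)
    x = x0
    for _ in range(itmax):
        x_new = (1.0 - mix) * x + mix * f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Picard iteration did not converge")

# toy fixed-point problem x = cos(x); the DFT analog iterates the
# Euler-Lagrange equation for the density profile in the same manner
x_star = picard(np.cos, x0=0.0, mix=0.3)
```

The retardation (under-relaxation) damps the update and thereby stabilizes the iteration, at the cost of a slower, linear rate of convergence.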
\subsection{\label{sec:theory:gibbs_div_surface}Gibbs dividing surface}
The position \modifiedRed{$z_\eta$} of the interface is determined from the density profile $\rho(\vec{r},\vec\omega)$
\modifiedRed{by adopting the notion of the} \textit{Gibbs dividing surface}~\cite{Hansen1976}:
\begin{equation}
h_\eta(z_\eta):=\int_{-\infty}^{z_\eta}dz'(\eta_0(z')-\eta_{0,L})+
\int_{z_\eta}^{\infty} dz'(\eta_0(z')-\eta_{0,S_A})=0,
\label{eq:gibbs_dividing_surface}
\end{equation}
where $\eta_0(z)=Q_0(z)\,LR^2\pi/6$ is the mean packing fraction at position $z$.
\modifiedRed{The quantities} $\eta_{0,L}=\eta_0(z\rightarrow-\infty)$ and
$\eta_{0,S_A}=\eta_0(z\rightarrow\infty)$ are the bulk values of $\eta_0(z)$ in the isotropic liquid
phase $L$ and the smectic-A phase $S_A$ (or $S_{AW}$), respectively. The interface position $z_\eta$ in
Eq.~(\ref{eq:gibbs_dividing_surface}) corresponds to the location of a step-like profile
\modifiedRed{such that the number of particles in excess and in deficit of the bulk values is the same
on both sides of the interface.} Taking the derivative of the left-hand side $h_\eta(z)$ of
Eq.~(\ref{eq:gibbs_dividing_surface}) with respect to $z$
leads to $h_\eta'(z)=\eta_{0,S_A}-\eta_{0,L}$,
which is a constant.
Therefore $h_\eta(z)=(\eta_{0,S_A}-\eta_{0,L})z+h_\eta(0)$ is a linear function
and one has to evaluate $h_\eta(0)$ only once \modifiedRed{in order} to obtain
\begin{equation}
z_\eta =-h_\eta (0)/(\eta_{0,S_A}-\eta_{0,L}),
\label{eq:gibbs_dividing_surface_evaluation_eta}
\end{equation}
using Eq.~(\ref{eq:gibbs_dividing_surface}), i.e., $h_\eta(z_\eta)=0$.
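The closed-form evaluation in Eq.~(\ref{eq:gibbs_dividing_surface_evaluation_eta}) can be illustrated numerically with a hypothetical tanh-shaped mean packing fraction profile (all parameter values are illustrative); for such an antisymmetric step centered at $z_0$ the construction returns $z_\eta=z_0$:

```python
import numpy as np

def trapezoid(y, x):
    # simple composite trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def z_gibbs(z, eta0_profile, eta_L, eta_S):
    # z_eta = -h_eta(0) / (eta_S - eta_L), cf. the evaluation formula above
    h0 = (trapezoid(np.where(z < 0.0, eta0_profile - eta_L, 0.0), z)
          + trapezoid(np.where(z >= 0.0, eta0_profile - eta_S, 0.0), z))
    return -h0 / (eta_S - eta_L)

# hypothetical smooth interface profile: tanh step of width w centered at z0
eta_L, eta_S, z0, w = 0.25, 0.45, 1.2, 0.8
z = np.linspace(-30.0, 30.0, 120001)
profile = 0.5 * (eta_L + eta_S) + 0.5 * (eta_S - eta_L) * np.tanh((z - z0) / w)
z_eta = z_gibbs(z, profile, eta_L, eta_S)   # equals z0 for this symmetric step
```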
While $z_\eta$ can be interpreted as the location of the
\textit{transition in the structure} from the isotropic liquid $L$ to the
smectic-A phase $S_A$ (or $S_{AW}$),
replacing $\eta$ by $S_2$ in Eq.~(\ref{eq:gibbs_dividing_surface}) defines a position
\begin{equation}
z_{S_2}=-h_{S_2}(0)/(S_{20,S_A}-S_{20,L}),
\label{eq:gibbs_dividing_surface_evaluation_s2}
\end{equation}
which corresponds to the \textit{transition in the orientational order}
from one phase to the other.
Note that, instead of using the mean packing fraction $\eta_0$ or
the mean orientational order parameter $S_{20}$ in Eq.~(\ref{eq:gibbs_dividing_surface})
\modifiedRed{for determining the interface positions,} one could, in principle, also use the profiles
$\eta(\vec{r})$ and $S_2(\vec{r})$ directly. However, the disadvantage of this \modifiedRed{latter}
approach is that \modifiedRed{in the smectic-A bulk phase $S_A$ (or $S_{AW}$)} the profiles
$\eta(\vec{r})$ and $S_2(\vec{r})$ are still functions of the position $\vec{r}$ (\modifiedRed{via}
the projection $\vec{r}\cdot\vec{\hat n}$ \modifiedRed{onto} the layer normal $\vec{\hat n}$).
\modifiedRed{Typically, this prevents the use of the corresponding generalizations of
Eqs.~(\ref{eq:gibbs_dividing_surface_evaluation_eta}) and (\ref{eq:gibbs_dividing_surface_evaluation_s2})
for determining $z_\eta$ and $z_{S_2}$.} Instead, one \modifiedRed{has to} solve
Eq.~(\ref{eq:gibbs_dividing_surface}) numerically, which requires many iterations depending on the
desired accuracy.
Nevertheless, \modifiedRed{in} the particular case $\alpha=\pi/2$ the interface normal and the smectic
layer normal are perpendicular. \modifiedRed{Due} to the translational invariance of \modifiedRed{the}
smectic phases perpendicular to their layer normal, here the packing fraction profile
$\eta(\vec{r})$ and the orientational order parameter profile $S_2(\vec{r})$ do
not depend on $z$ in the smectic bulk, i.e., for $z\rightarrow\infty$, but \modifiedRed{they depend}
only on the $x$-coordinate.
$\tilde z_\eta(x)$ and $\tilde z_{S_2}(x)$, analogously to $z_\eta$ and $z_{S_2}$:
\begin{align}
\tilde h_m(\tilde z_m(x)):=&
\int_{-\infty}^{\tilde z_m(x)}dz'(m(\vec{r}')-m_{L})+\nonumber\\&
\int_{\tilde z_m(x)}^{\infty} dz'(m(\vec{r}')-m_{S_A}(x))=0,\nonumber\\
\tilde z_m(x)=&-\tilde h_m(0)/(m_{S_A}(x)-m_{L}),
\label{eq:gibbs_dividing_surface_evaluation_contours}
\end{align}
where $m\in\{\eta,S_2\}$.
\subsection{\label{sec:theory:interface_tension}Interfacial tension}
The interfacial tension $\Gamma$ is a measure of the \modifiedRed{excess} amount of work needed to
form an interface between coexisting bulk phases~\cite{Hansen1976}. \modifiedRed{Accordingly}, it can
be calculated by \modifiedRed{determining} the increase in the grand potential $\beta\Omega[\rho]$ of
the interface system \modifiedRed{in excess of} the bulk grand potential
$\beta\Omega_0:=-\beta p\mathcal{V}$ which is given by the bulk pressure $p$ (see Eq.~(26) in
Ref.~\cite{Bartsch2017}) times the system volume $\mathcal{V}$:
\begin{equation}
\Gamma^*(\alpha):=\beta\Gamma(\alpha)=
\frac{\beta\Omega([\rho],\alpha)+\beta p_\text{coex}\mathcal{V}}{A},
\label{eq:interface_tension}
\end{equation}
where $A$ is the cross-sectional area of the system in the directions perpendicular to the interface normal.
\modifiedBlue{Hence, $\Gamma^*(\alpha)$ has the dimension $1/\text{area}$.}
The pressure $p_\text{coex}:=p(T_\text{coex},\mu_\text{coex},d)$ at coexistence
$(T,\mu)=(T_\text{coex},\mu_\text{coex})$ is the same in the isotropic liquid $L$ and the smectic-A
phase $S_A$ or $S_{AW}$ with the equilibrium layer spacing $d$.
\modifiedRed{The equilibrium tilt angle $\alpha_\text{eq}$ minimizes} the interfacial tension,
i.e., $\Gamma^*(\alpha_\text{eq})=\min_\alpha\Gamma^*(\alpha)$ (see Sec.~\ref{sec:results:tilt}).
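In practice $\Gamma^*(\alpha)$ is available only at a discrete set of tilt angles. A minimal Python sketch (with synthetic, purely illustrative data for $\Gamma^*$) locates $\alpha_\text{eq}$ by a three-point parabolic interpolation around the discrete minimum:

```python
import numpy as np

def alpha_eq(alpha, gamma):
    # locate the minimizing tilt angle from Γ*(α) sampled on a uniform grid
    i = int(np.argmin(gamma))
    if i == 0 or i == len(gamma) - 1:
        return float(alpha[i])              # minimum at the boundary of the grid
    g0, g1, g2 = gamma[i - 1], gamma[i], gamma[i + 1]
    denom = g0 - 2.0 * g1 + g2
    if denom <= 0.0:                        # degenerate curvature: keep grid point
        return float(alpha[i])
    # vertex of the parabola through the three neighboring samples
    return float(alpha[i] + 0.5 * (alpha[i + 1] - alpha[i]) * (g0 - g2) / denom)

# synthetic interfacial tension data with a known minimum at α = 0.6
alpha = np.linspace(0.0, np.pi / 2.0, 31)
gamma = 0.05 + 0.3 * (alpha - 0.6)**2
a_eq = alpha_eq(alpha, gamma)
```

For data that are locally quadratic around the minimum, the interpolation is exact; otherwise it refines the grid resolution by roughly one order of magnitude.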
\section{\label{sec:results}Results}
In this section \modifiedRed{we present} results for free interfaces formed between the isotropic
liquid $L$ and \modifiedRed{the} smectic-A phase $S_A$ or $S_{AW}$.
The discussion focuses on two kinds of ionic liquid crystals (ILCs) which
are described by the pair interaction potential $U(\vec{r}_{12},\vec\omega_1,\vec\omega_2)$
\modifiedRed{(Eq.~(\ref{eq:Pairpot}))}, introduced in Sec.~\ref{sec:theory:model}:
First, ILCs with charges in the center, i.e., $D=0$ (see Fig.~\ref{fig:ellipsoids} and
Eq.~(\ref{eq:PairPot_ES})), and second, ILCs with charges at the tips, i.e., $D/R=1.8$.
In particular the structural and orientational properties of the interface are discussed in terms of
the packing fraction profile $\eta(\vec{r})$ and the orientational order parameter profile
$S_2(\vec{r})$ for \modifiedRed{various} relative orientations between the interface normal and
the smectic layer normal, i.e., for different tilt angles $\alpha$
(see Fig.~\ref{fig:interface_sketch}). All results presented \modifiedRed{here have been} obtained via
the density functional approach described in Sec.~\ref{sec:theory:DFT}.
\subsection{\label{sec:results:para}Interface normal parallel to the smectic layer normal ($\alpha=0$)}
\begin{figure}[!t]
\includegraphics[width=0.45\textwidth]{./Phasediagrams1.ps}
\caption{
Bulk phase diagrams for (a) ionic liquid crystals with
$L/R=4,\epsilon_R/\epsilon_L=2,\gamma/(R\epsilon_0)=0.045,\lambda_D/R=5$, and $D=0$
and (b) with
$L/R=4,\epsilon_R/\epsilon_L=2,\gamma/(R\epsilon_0)=0.045,\lambda_D/R=5$, and $D/R=1.8$.
For $D=0$, \modifiedRed{i.e., the charges \modifiedGreen{being} concentrated in the center of the molecules,}
solely a first-order phase transition from the isotropic liquid phase $L$ to the ordinary smectic-A
phase $S_A$ occurs at sufficiently high mean packing fractions $\eta_0$.
The ordinary smectic-A phase $S_A$ is characterized by a layer structure with smectic layer spacing
$d/R\approx4.3\gtrsim L/R=4$ comparable \modifiedRed{with} the particle length $L$. The particles
in the layers are well aligned with the layer normal $\vec{\hat n}$.
In panel (b), i.e., for $D/R=1.8$ \modifiedRed{(the charges \modifiedGreen{being} located at the tips
of the molecules)}, another smectic-A structure, referred to as \modifiedRed{the} $S_{AW}$ phase can
be observed at low reduced temperatures $T^*$.
The $S_{AW}$ phase \modifiedRed{exhibits} an alternating structure,
\modifiedRed{consisting of primary} layers of particles being parallel to the layer normal
and secondary layers \modifiedRed{in which} the particles prefer to be perpendicular to it.
This leads to an increased layer spacing $d/R\geq7.5$. The black dotted line in panel (b) marks
the triple point at $T^*\approx1.23$ \modifiedRed{at} which the isotropic liquid $L$, the ordinary
smectic-A \modifiedRed{phase} $S_A$, and the $S_{AW}$ phase are in three-phase coexistence.
A detailed description of the structural properties of the smectic-A phases $S_A$ and $S_{AW}$,
including illustrations of their microstructure, is provided in Ref.~\cite{Bartsch2017}.
\modifiedRed{The black dots ($\bullet$) in panel (a)
and the red dots (\textcolor{red}{$\bullet$}) in panel (b) mark the coexisting
bulk states at the reduced temperatures $T^*=1.3$ and $0.9$, respectively, imposed as boundary conditions
for the free interfaces shown in Figs.~\ref{fig:if_ilc_l-sa_para} and
\ref{fig:if_ilc_l-saw_para}-\ref{fig:if_ilc_perp_SAW}.}
}
\label{fig:pd_ilc}
\end{figure}
\begin{figure}[!t]
\includegraphics[width=0.45\textwidth]{./profiles_para_l-sa.ps}
\caption{
The $L$-$S_A$-interface profile of the packing fraction $\eta(z)$, panel (a), and the
orientational order parameter $S_2(z)$, panel (b), are shown for an ionic liquid crystal with
$L/R=4,\epsilon_R/\epsilon_L=2,\gamma/(R\epsilon_0)=0.045,\lambda_D/R=5$, and $D=0$, i.e.,
\modifiedRed{the} charges \modifiedRed{are concentrated} in the center of the molecules.
The free interface between the isotropic liquid $L$ (\modifiedRed{imposed} as boundary condition
for $z\rightarrow-\infty$) and the ordinary smectic-A phase $S_A$
(\modifiedRed{i.e.,} $z\rightarrow\infty$) is considered for \modifiedRed{the} reduced temperature
$T^*=1.3$. The corresponding coexisting bulk states are marked by the black dots ($\bullet$)
in the phase diagram in Fig.~\ref{fig:pd_ilc}(a). \modifiedRed{The} tilt angle \modifiedRed{is}
$\alpha=0$, i.e., the smectic layer normal $\vec{\hat n}=\vec{\hat z}$ is parallel
to the interface normal (see Fig.~\ref{fig:interface_sketch}). For $z/R>0$ the last layers of the
$S_A$ phase \modifiedRed{are visible, in which} the particles are still well aligned with the
$z$-axis, indicated by large values of the orientational order parameter $S_2(z/R)>0.8$ within
these layers. For $z/R<0$ the layer structure \modifiedRed{of the density dies out} rapidly and the
orientational order vanishes as well. Ultimately, the isotropic bulk limit will be approached for
$z\rightarrow-\infty$. However, already for $z/R<-10$ the profiles \modifiedGreen{have effectively reached}
their bulk limits in the isotropic liquid $L$. The black dashed lines refer to the interface positions
$z_\eta$ and $z_{S_2}$, \modifiedRed{respectively,} calculated via Eqs.~(\ref{eq:gibbs_dividing_surface_evaluation_eta})
and (\ref{eq:gibbs_dividing_surface_evaluation_s2}). The \modifiedRed{difference}
$(z_\eta-z_{S_2})/R\approx2.45-1.66=0.79$ \modifiedRed{between} the two interface positions is
considerably smaller than the smectic layer spacing $d/R\approx4.28\gtrsim L/R=4$.
\modifiedRed{Therefore} the orientational order of the $S_A$ phase vanishes within the last smectic
layer while approaching the isotropic liquid $L$.
}
\label{fig:if_ilc_l-sa_para}
\end{figure}
\begin{figure}[!t]
\includegraphics[width=0.45\textwidth]{./gibbs.ps}
\caption{
The \modifiedRed{difference} $(z_\eta-z_{S_2})/R$ between the Gibbs dividing surface position
$z_\eta$ \modifiedRed{(Eq.~(\ref{eq:gibbs_dividing_surface_evaluation_eta}))}, and the surface
position $z_{S_2}$ \modifiedRed{(Eq.~(\ref{eq:gibbs_dividing_surface_evaluation_s2})), which
corresponds} to the transition \modifiedRed{of} the orientational order at the interface, are
shown for \modifiedRed{three cases}. First, an ordinary (uncharged) liquid crystal (OLC; blue curve);
second, \modifiedRed{ILCs} with charges in \modifiedRed{their} center, i.e., $D=0$ (violet curve);
and, third, \modifiedRed{ILCs} with charges at the tips, i.e., $D/R=1.8$ (green curve).
Here, the smectic layer normal $\vec{\hat n}=\vec{\hat z}$ is parallel to the interface normal, i.e.,
$\alpha=0$. In all cases studied, the \modifiedRed{differences} $(z_\eta-z_{S_2})/R$ are smaller than
the smectic layer spacing $d\gtrsim L$, which \modifiedRed{for the $S_A$ phase} is comparable to the
particle length $L/R=4$. Thus, the loss of orientational order occurs within the last smectic layer
before approaching the isotropic liquid $L$. The inset shows data for the $L$-$S_{AW}$-interface,
\modifiedRed{which are accessible for $D/R=1.8$} at sufficiently low temperatures $T^*$.
Although the \modifiedRed{difference} $(z_\eta-z_{S_2})/R$ is enlarged for $0.7<T^*\leq0.9$,
it is still considerably smaller than the layer spacing $d/R\approx7.5$ and decreases
\modifiedRed{rapidly upon} decreasing \modifiedRed{the} temperature $T^*$. Hence, for $\alpha=0$,
the orientational order of the smectic-A phase, either $S_A$ or $S_{AW}$, vanishes
\modifiedRed{directly} with the \modifiedRed{disappearance} of the layer structure at the interface.
}
\label{fig:gibbs_parallel}
\end{figure}
\begin{figure}[!t]
\includegraphics[width=0.45\textwidth]{./profiles_para_l-saw.ps}
\caption{
\modifiedRed{For $\alpha=0$,} the $L$-$S_{AW}$-interface profiles $\eta(z)$ and $S_2(z)$
\modifiedRed{are shown} for \modifiedRed{ILCs} with charges at the tips
($L/R=4,\epsilon_R/\epsilon_L=2,\gamma/(R\epsilon_0)=0.045,\lambda_D/R=5$, and $D/R=1.8$)
at \modifiedRed{the} reduced temperature $T^*=0.9$ (see the red dots (\textcolor{red}{$\bullet$})
in Fig.~\ref{fig:pd_ilc}(b)).
For $z\rightarrow-\infty$ the isotropic liquid bulk $L$ is approached \modifiedRed{whereas} for
$z\rightarrow\infty$ the $S_{AW}$ bulk \modifiedRed{is attained}. The \modifiedRed{difference}
$(z_\eta-z_{S_2})/R\approx6.31-3.72=2.59$ \modifiedRed{between} the two interface positions is
larger \modifiedRed{than the one of} the $L$-$S_A$-interface
(compare Figs.~\ref{fig:if_ilc_l-sa_para} and \ref{fig:gibbs_parallel}) but it is still smaller than
the smectic layer spacing $d/R=7.5$. Therefore the orientational order of the $S_{AW}$ phase also
vanishes within the range of the last smectic layer at the interface.
}
\label{fig:if_ilc_l-saw_para}
\end{figure}
First, \modifiedRed{we consider the case that} the interface normal \modifiedRed{is} parallel to
\modifiedRed{the normal of the smectic layers}, i.e., $\alpha=0$ (see Fig.~\ref{fig:interface_sketch}).
Both point \modifiedRed{into the} $z$-direction, and due to translational invariance in \modifiedRed{the
$x$- and $y$-directions}, the packing fraction $\eta(z)$ and the orientational order parameter $S_2(z)$
are functions solely of the spatial coordinate $z$. For the case of an ionic liquid crystal with
$L/R=4,\epsilon_R/\epsilon_L=2,\gamma/(R\epsilon_0)=0.045,\lambda_D/R=5$, and $D=0$, i.e., the charges
are localized in the center of the molecule, the bulk phase behavior is shown in the
$T^*$-$\eta_0$-phase diagram of Fig.~\ref{fig:pd_ilc}(a), where $T^*=kT/\epsilon_0$ and
$\eta_0=Q_0\,LR^2\pi/6$ are the reduced temperature and the mean packing fraction, respectively.
\modifiedRed{Within} the considered temperature range $T^*\in[0.9,1.65]$ solely a first-order phase
transition from the isotropic liquid phase $L$ to the ordinary smectic-A phase $S_A$ occurs.
The $S_A$ phase is characterized by a layer structure with a smectic layer spacing $d/R\approx4.3$,
which is comparable to the particle length $L/R=4$. Within the smectic layers the particles
are well aligned with the smectic layer normal $\vec{\hat n}$. The blue lines in
Fig.~\ref{fig:pd_ilc}(a) correspond to $L$-$S_A$-coexistence and the light blue area in between the
coexistence lines represents the two-phase region.
The $L$-$S_A$-interface is shown in Fig.~\ref{fig:if_ilc_l-sa_para} for $T^*=1.3$.
\modifiedRed{In the phase diagram in Fig.~\ref{fig:pd_ilc}(a)} the corresponding \modifiedRed{two}
coexisting bulk states are marked by black dots ($\bullet$).
\modifiedRed{Panels (a) and (b) show} the packing fraction profile $\eta(z)$ along the interface normal
and the orientational order parameter profile $S_2(z)$\modifiedRed{, respectively}. The black dashed
vertical line in panel (a) marks the position $z_\eta$ of the \modifiedRed{Gibbs} dividing surface,
which is defined by Eq.~(\ref{eq:gibbs_dividing_surface_evaluation_eta}).
Correspondingly, the black dashed vertical line in panel (b)
marks the position $z_{S_2}$ (Eq.~(\ref{eq:gibbs_dividing_surface_evaluation_s2})).
Apparently, the two interface positions $z_\eta$ and $z_{S_2}$, which are related to the interfacial
transition in the structure and in the orientational order, respectively,
\modifiedRed{differ from each other}. In Fig.~\ref{fig:gibbs_parallel}, these \modifiedRed{differences}
$z_\eta-z_{S_2}$ are plotted as a function of \modifiedRed{the} reduced temperature $T^*$ for three
different kinds of liquid-crystalline systems. The violet curve corresponds to \modifiedRed{ILCs}
with \modifiedRed{all charges concentrated} in the molecular centers, i.e., $D=0$, while the green
curve shows data points for $D/R=1.8$. The blue curve corresponds to a system of ordinary (uncharged)
liquid crystals (OLCs) described by $L/R=4$, $\epsilon_R/\epsilon_L=2$, and $\gamma/(R\epsilon_0)=0$.
\modifiedRed{The phase diagram for OLCs is not shown here; it is presented} in Fig.~4(a) of
Ref.~\cite{Bartsch2017}. Within the considered temperature ranges, \modifiedRed{in} all three cases
the \modifiedRed{differences} are at most as large as \modifiedRed{the} particle diameter $R$,
which \modifiedRed{in turn} \modifiedGreen{is} much smaller than the smectic layer spacing
$d/R\approx4.3$; the latter is comparable to the particle length $L$, because the particles
\modifiedRed{within} the smectic layers are well aligned with the $z$-direction, as indicated by
$S_2(z)>0.8$ in the centers of the smectic layers.
Thus, \modifiedRed{the small size of the differences shows that in these cases} the transition in the
orientational order and \modifiedRed{in the} fluid structure go along with each other. As soon as the
smectic layer structure \modifiedRed{dies out}, the orientational order vanishes as well.
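The dividing-surface construction used throughout this section can be illustrated with a short numerical sketch. The following Python snippet is an illustration only, not the code used for the present calculations: it implements an equimolar-type Gibbs dividing surface for a one-dimensional profile that attains its bulk values at the domain boundaries. The grid, the tanh-shaped test profile, and the function name are assumptions made for the example; whether this coincides in every detail with Eqs.~(\ref{eq:gibbs_dividing_surface_evaluation_eta}) and (\ref{eq:gibbs_dividing_surface_evaluation_s2}) is left aside — the snippet only illustrates the principle.

```python
import numpy as np

def dividing_surface(z, f):
    """Equimolar-type Gibbs dividing surface of a 1d profile f(z)
    interpolating between its bulk values f_L = f(z[0]) and
    f_S = f(z[-1]): z0 is chosen such that the sharp-kink profile
    (f_L for z < z0, f_S for z > z0) has the same integral as f."""
    f_L, f_S = f[0], f[-1]
    integral = np.sum((f[1:] + f[:-1]) * np.diff(z)) / 2.0  # trapezoidal rule
    # f_L*(z0 - z[0]) + f_S*(z[-1] - z0) = integral  =>  solve for z0
    return (f_S * z[-1] - f_L * z[0] - integral) / (f_S - f_L)

# hypothetical test profile: tanh-shaped interface centered at z/R = 2
z = np.linspace(-40.0, 40.0, 4001)
eta = 0.1 + 0.3 * 0.5 * (1.0 + np.tanh((z - 2.0) / 1.5))
z_eta = dividing_surface(z, eta)  # close to 2 for this symmetric profile
```

Applied separately to $\eta(z)$ and $S_2(z)$, such a construction yields the two positions $z_\eta$ and $z_{S_2}$ whose difference is discussed above.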
While for ILCs with charges in \modifiedRed{their} center, \modifiedRed{within the considered
temperature range}, only $L$-$S_A$-coexistence is observable (see Fig.~\ref{fig:pd_ilc}(a)),
\modifiedGreen{for} ILCs with the charges at the tips, such as \modifiedGreen{in the case}
$L/R=4,\epsilon_R/\epsilon_L=2,\gamma/(R\epsilon_0)=0.045,\lambda_D/R=5$, and $D/R=1.8$,
the bulk phase behavior changes significantly at low temperatures\modifiedGreen{, i.e., for} $T^*<1.23$.
The bulk phase diagram in Fig.~\ref{fig:pd_ilc}(b) shows that \modifiedRed{in this case} the distinct
smectic-A phase $S_{AW}$ occurs for intermediate mean packing fraction $\eta_0$.
The $S_{AW}$ phase is characterized by an alternating layer structure of smectic layers
with a majority of particles being oriented parallel to the smectic layer normal $\vec{\hat n}$
and a minority of particles localized in secondary layers which prefer orientations
perpendicular to the smectic layer normal.
Due to \modifiedRed{this} alternating layer structure the smectic layer spacing $d/R\approx7.5$
is increased for the $S_{AW}$ phase.
A detailed discussion of the structural and orientational properties of this new and peculiar
smectic-A phase, in particular \modifiedRed{concerning} the bulk density and \modifiedRed{the}
orientational order parameters profiles, is given in Ref.~\cite{Bartsch2017}.
In Fig.~\ref{fig:if_ilc_l-saw_para} the $L$-$S_{AW}$-interface profiles $\eta(z)$
and $S_2(z)$ are shown for $\alpha=0$ and $T^*=0.9$. \modifiedRed{In the phase diagram in
Fig.~\ref{fig:pd_ilc}(b)} the corresponding coexisting bulk states are marked by red dots
(\textcolor{red}{$\bullet$}). \modifiedRed{On} the right hand side of Fig.~\ref{fig:if_ilc_l-saw_para}
the alternating layer structure of the bulk $S_{AW}$ phase is evident. In the main layers the majority
of \modifiedRed{the} particles ($\eta(z)>2$) has orientations parallel to the $z$-axis ($S_2(z)>0.8$)
and in the secondary layers, \modifiedRed{formed by fewer particles} ($\eta(z)\approx0.6$),
the particles prefer orientations perpendicular to the $z$-axis ($S_2(z)<0$).
For the $L$-$S_{AW}$-interface the \modifiedRed{difference $(z_\eta-z_{S_2})/R\approx2.6$ of} the two
interface positions is increased compared to the $L$-$S_A$-interface (see Fig.~\ref{fig:gibbs_parallel}),
because the smectic layer spacing $d/R\geq7.5$ in the $S_{AW}$ phase \modifiedRed{is enlarged, too.}
\modifiedRed{As before, the orientational order directly} vanishes with the \modifiedRed{disappearance}
of the layer structure. Furthermore, the inset in Fig.~\ref{fig:gibbs_parallel} shows that
$(z_\eta-z_{S_2})/R$ decreases \modifiedRed{upon} lowering the temperature. \modifiedBlue{Thus the
difference \modifiedGreen{$z_\eta-z_{S_2}$} becomes smaller relative to the layer spacing $d$,
such that the direct vanishing of the orientational order \modifiedGreen{associated} with the
disappearance of the layer structure is observable for the whole temperature range considered here}.
\subsection{\label{sec:results:perp}Interface normal perpendicular to the smectic layer normal ($\alpha=\pi/2$)}
\begin{figure}[!t]
\includegraphics[width=0.45\textwidth]{./profiles_perp_SA.ps}
\caption{
The $L$-$S_A$-interface profiles $\eta(x,z)$, panel (a), and $S_2(x,z)$, panel (b),
are shown for $T^*=1.3$ (see the black dots ($\bullet$) in Fig.~\ref{fig:pd_ilc}(a))
and $\alpha=\pi/2$. \modifiedRed{Accordingly}, the smectic layer normal $\vec{\hat n}=\vec{\hat x}$
and the interface normal (parallel to the $z$-axis) are perpendicular.
Here, \modifiedRed{ILCs} with charges \modifiedRed{at} the center are considered,
described by the parameter set
$L/R=4,\epsilon_R/\epsilon_L=2,\gamma/(R\epsilon_0)=0.045,\lambda_D/R=5$, and $D=0$.
For $z\rightarrow-\infty$ the isotropic \modifiedRed{bulk} liquid $L$ and
for $z\rightarrow\infty$ the bulk of the $S_A$ phase is approached.
The \modifiedRed{decaying} red stripes at the \modifiedRed{upper part of these} plots show the
tails of the smectic layers located at $x/R\approx0,\pm d/R,\pm2d/R$
where $d/R\approx4.28$ is the smectic layer spacing.
The black dashed lines mark the interface positions $z_\eta$ and $z_{S_2}$
calculated via Eqs.~(\ref{eq:gibbs_dividing_surface_evaluation_eta}) and
(\ref{eq:gibbs_dividing_surface_evaluation_s2}), while the white dotted lines
mark the interface contours $\tilde z_\eta(x)$ and $\tilde z_{S_2}(x)$
calculated via Eq.~(\ref{eq:gibbs_dividing_surface_evaluation_contours}).
The \modifiedRed{difference} $(z_\eta-z_{S_2})/R\approx0.58-(-1.51)=2.09$ is larger than the particle
diameter $R$, \modifiedRed{which is} the relevant geometrical property of the particles at this
interface, because for $\alpha=\pi/2$ the particles in the $S_A$ layers are well aligned with the
$x$-axis and therefore they are oriented perpendicular to the direction of the interface normal.
The orientational order of the smectic-A phase persists up to a few particle diameters into the
liquid phase, unlike the case $\alpha=0$, \modifiedRed{in which the disappearance} of the layer
structure causes a \modifiedRed{direct} vanishing of the orientational order within the last layer
(see Figs.~\ref{fig:if_ilc_l-sa_para}-\ref{fig:if_ilc_l-saw_para}).
}
\label{fig:if_ilc_perp_SA}
\end{figure}
\begin{figure}[!t]
\includegraphics[width=0.45\textwidth]{./profiles_perp_SAW.ps}
\caption{
The interface profiles $\eta(x,z)$ and $S_2(x,z)$ for $T^*=0.9$ and $\alpha=\pi/2$.
Here the $L$-$S_{AW}$ interface (see the red dots (\textcolor{red}{$\bullet$})
in Fig.~\ref{fig:pd_ilc}(b)) for an \modifiedRed{ILC} with \modifiedRed{the} charges at the tips
($L/R=4,\epsilon_R/\epsilon_L=2,\gamma/(R\epsilon_0)=0.045,\lambda_D/R=5$, and $D/R=1.8$)
is considered. The thin red areas in panel (a) for lateral positions $x/R=0,\pm d/R=\pm7.5$
show the tails of the smectic layers where the particles prefer \modifiedRed{an orientation parallel}
to the smectic layer normal $\vec{\hat n}=\vec{\hat x}$. \modifiedRed{This is indicated by the} large
value of $S_2(x,z)>0.8$ within these layers. \modifiedRed{In panel (a)} the secondary layers of the
$S_{AW}$ phase are shown as light blue areas located at $x/R=\pm d/(2R)=\pm3.75$.
\modifiedRed{There}, the orientational order parameter $S_2(x,z)$, \modifiedRed{shown in} panel (b),
is negative. The black dashed lines mark the interface positions $z_\eta$ and $z_{S_2}$
calculated via Eqs.~(\ref{eq:gibbs_dividing_surface_evaluation_eta})
and (\ref{eq:gibbs_dividing_surface_evaluation_s2}), while the white dotted lines mark the interface
contours $\tilde z_\eta(x)$ and $\tilde z_{S_2}(x)$, which have been calculated via
Eq.~(\ref{eq:gibbs_dividing_surface_evaluation_contours}).
The \modifiedRed{differences} $(z_\eta-z_{S_2})/R\approx1.0-(-2.3)=3.3$ and
$(\tilde z_\eta(x)-\tilde z_{S_2}(x))/R\approx0.81-(-1.83)=2.64$ \modifiedRed{at the} lateral positions
$x/R\approx0,\pm7.5$ \modifiedRed{exhibit} a persisting orientational order for the main layers,
similar to the findings \modifiedRed{for} the $L$-$S_A$ interface
(compare Fig.~\ref{fig:if_ilc_perp_SA}).
Interestingly, at the secondary layers ($x/R=\pm d/(2R)=\pm3.75$) the orientational order
vanishes \modifiedRed{ahead of the disappearance} of the layer structure, i.e.,
$\tilde z_{S_2}(x/R=\pm3.75)/R\approx3.39>-0.34\approx\tilde z_{\eta}(x/R=\pm3.75)/R$.
\modifiedRed{In order to} guide the eye, the magenta dots (\textcolor{Magenta}{$\bullet$}) mark the
positions $(x/R,\tilde z_{\eta}/R)\approx(3.75,-0.34)$ and $(x/R,\tilde z_{S_2}/R)\approx(3.75,3.39)$.
}
\label{fig:if_ilc_perp_SAW}
\end{figure}
For $\alpha=\pi/2$ the interface normal and the smectic layer normal are perpendicular to each other.
The smectic layer normal points \modifiedRed{into} the $x$-direction and the interface normal
\modifiedRed{into} the $z$-direction (see Fig.~\ref{fig:interface_sketch}).
The associated $L$-$S_A$-interface at $T^*=1.3$ for an \modifiedRed{ILC system} with
\modifiedRed{the charges concentrated at the center}, described by the parameter set
$L/R=4,\epsilon_R/\epsilon_L=2,\gamma/(R\epsilon_0)=0.045,\lambda_D/R=5$, and $D=0$,
is shown in Fig.~\ref{fig:if_ilc_perp_SA}. The corresponding bulk phases are given by the state
points marked by black dots ($\bullet$) in the phase diagram in Fig.~\ref{fig:pd_ilc}(a).
Panel (a) shows the packing fraction $\eta(x,z)$ and (b) the orientational order parameter $S_2(x,z)$.
The red areas at the top of Fig.~\ref{fig:if_ilc_perp_SA}(a) show the tails of four smectic layers of
the $S_A$ phase located at \modifiedRed{$x/R=\pm d/(2R)\approx\pm2.14$ and
$x/R=\pm3d/(2R)\approx\pm6.42$} where $d/R\approx4.28$ is the smectic layer spacing.
The particles are well aligned with the smectic layer normal $\vec{\hat n}=\vec{\hat x}$
indicated by large values of the orientational order parameter $S_2(x,z)>0.8$ in the layers.
The black dashed lines in Fig.~\ref{fig:if_ilc_perp_SA} show the interface positions $z_\eta$
and $z_{S_2}$ calculated \modifiedRed{from} Eqs.~(\ref{eq:gibbs_dividing_surface_evaluation_eta})
and (\ref{eq:gibbs_dividing_surface_evaluation_s2}), while the white dotted lines show the interface
contours $\tilde z_\eta(x)$ and $\tilde z_{S_2}(x)$ obtained \modifiedRed{from}
Eq.~(\ref{eq:gibbs_dividing_surface_evaluation_contours}). The contour lines
$\tilde z_\eta(x)$ and $\tilde z_{S_2}(x)$ at the centers of the tails of the smectic layers, e.g.,
at $x/R\approx2.14$, are very close to $z_\eta$ and $z_{S_2}$, respectively.
This suggests that the two distinct definitions of the interface positions, i.e.,
using either Eqs.~(\ref{eq:gibbs_dividing_surface_evaluation_eta}) and
(\ref{eq:gibbs_dividing_surface_evaluation_s2}) or
Eq.~(\ref{eq:gibbs_dividing_surface_evaluation_contours}), are consistent with each other,
because the majority of the particles in the smectic phase are located close to the centers
of the smectic layers. \modifiedRed{In Fig.~\ref{fig:if_ilc_perp_SA}(a)} the packing fraction
interface contour $\tilde z_\eta(x)$ \modifiedRed{exhibits} discontinuities for lateral positions
$\check{x}$ at which the smectic bulk packing fraction
$\eta_{S_A}(\check{x}):=\eta(\check{x},z\rightarrow\infty)$ takes the same value
$\eta_L=\eta(\check{x},z\rightarrow-\infty)$ as in the isotropic liquid $L$, i.e.,
$\eta_{S_A}(\check{x})=\eta_L$. Thus, the numerical calculation of the Gibbs dividing surface
via Eq.~(\ref{eq:gibbs_dividing_surface_evaluation_contours}) leads to a divergence due to the
vanishing denominator. This can be considered \modifiedRed{as} an artifact, which, however, occurs
only at the particular lateral positions $\check{x}$.
Nevertheless, the benefit of considering $\tilde z_\eta(x)$ and $\tilde z_{S_2}(x)$
as interface positions is their \modifiedRed{dependence} on the lateral coordinate $x$.
In particular, for the case of the $L$-$S_{AW}$-interface it is necessary to consider
$\tilde z_\eta(x)$ and $\tilde z_{S_2}(x)$ in order to study the interface
at the \modifiedRed{main layers and at} the secondary layers separately (see below).
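The laterally resolved contours and the divergence at the special positions $\check{x}$ can likewise be sketched numerically. In the following Python illustration, the synthetic profile, the grids, and the threshold `tol` are assumptions made for the example and are not taken from the present DFT data. The one-dimensional dividing-surface formula is applied column by column, and columns whose two bulk values nearly coincide are masked, since there the denominator vanishes:

```python
import numpy as np

def contour(z, f_xz, tol=1e-3):
    """Column-wise dividing surface z~(x) of a 2d profile f(x, z).
    Columns whose bulk values at z[0] and z[-1] differ by less than
    tol are masked (np.nan): there the denominator vanishes and the
    construction diverges, as at the positions x-check in the text."""
    z_tilde = np.full(f_xz.shape[0], np.nan)
    for i, f in enumerate(f_xz):
        f_L, f_S = f[0], f[-1]
        if abs(f_S - f_L) < tol:
            continue  # divergent column -> leave masked
        integral = np.sum((f[1:] + f[:-1]) * np.diff(z)) / 2.0
        z_tilde[i] = (f_S * z[-1] - f_L * z[0] - integral) / (f_S - f_L)
    return z_tilde

# synthetic example: interface at z = 0, laterally modulated bulk value
x = np.linspace(-7.5, 7.5, 61)
z = np.linspace(-30.0, 30.0, 3001)
eta_S = 0.25 + 0.15 * np.cos(2.0 * np.pi * x / 7.5)  # modulated smectic bulk
eta_L = 0.1                                          # isotropic bulk value
prof = eta_L + (eta_S[:, None] - eta_L) * 0.5 * (1.0 + np.tanh(z[None, :] / 1.5))
zt = contour(z, prof)
```

For this choice the two bulk values coincide at $x/R=\pm3.75$, mimicking the positions $\check{x}$; there the contour is masked, while elsewhere it returns the interface position $\tilde z\approx0$.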
Interestingly, if the layer normal and the interface normal are perpendicular, one observes a
significant \modifiedRed{difference} $(z_\eta-z_{S_2})/R\approx0.72-(-1.76)=2.48$ \modifiedRed{between}
the interface position $z_\eta$, corresponding to the structural transition, and $z_{S_2}$
corresponding to the transition in the orientational order between the coexisting phases.
Hence, the alignment of the particles with the $x$-axis persists a few particle diameters
deeper into the liquid phase $L$ than the layer structure of the $S_A$ phase is maintained --
unlike in the case $\alpha=0$, \modifiedRed{in which} the smectic layer normal is parallel
to the interface normal and the orientational order \modifiedRed{directly} vanishes when
the smectic layers \modifiedRed{disappear} (see Sec.~\ref{sec:results:para}).
\modifiedRed{We note} that the vanishing of the orientational \modifiedRed{order} significantly after
the structural transition \modifiedRed{associated with} the density profile (\modifiedRed{upon}
approaching the interface from the orientationally ordered phase) has already been observed
previously~\cite{Praetorius_et_al2013} \modifiedRed{in} the case of the interface between an
isotropic liquid and a plastic-triangular crystal (PTC).
For the type of \modifiedRed{ILCs} with the charges at the tips, at low temperatures the new wide
smectic-A phase $S_{AW}$ can be observed (see Fig.~\ref{fig:pd_ilc}(b)). It is characterized by an
alternating structure of layers in which the particles are \modifiedRed{predominantly} parallel to
the layer normal $\vec{\hat n}=\vec{\hat x}$ (like in the $S_A$ phase) and layers of particles
\modifiedRed{which} are preferentially perpendicular to the layer normal.
The free interface formed between the isotropic liquid $L$ and the $S_{AW}$ phase
for $T^*=0.9$ and $\alpha=\pi/2$ is shown in Fig.~\ref{fig:if_ilc_perp_SAW}.
The red regions in Fig.~\ref{fig:if_ilc_perp_SAW}(a) show the layers of particles
(at $x=0$ and $x/R\approx\pm d/R=\pm7.5$) being parallel to the layer normal,
while in between (at $x/R\approx\pm d/(2R)=\pm3.75$) in light blue color the secondary layers are
visible. The dark blue color at $x/R\approx\pm d/(2R)=\pm3.75$ in panel (b) shows that
the orientational order parameter $S_2(x,z)$ \modifiedRed{is negative} at the location of the
secondary layers, because \modifiedRed{there} the particles are preferentially perpendicular
to the layer normal. The interface at the parallel layers behaves very much like the $L$-$S_A$
interface, as can be inferred from the (white) interface contours
$\tilde z_\eta(x/R=0,\pm7.5)/R\approx0.81$ and $\tilde z_{S_2}(x/R=0,\pm7.5)/R\approx-1.83$
which show that the orientational ordering of the $S_{AW}$ phase persists into
the liquid phase $L$ for a few particle diameters.
This is also apparent from the interface positions $z_\eta/R\approx1.0$ and $z_{S_2}/R\approx-2.3$,
depicted by the black dashed lines in Fig.~\ref{fig:if_ilc_perp_SAW}.
Conversely, at lateral positions $x/R\approx\pm d/(2R)=\pm3.75$ associated \modifiedRed{with} the centers
of the intermediate layers, it turns out that the orientational order \modifiedRed{undergoes the
transition} before the layer structure vanishes if one approaches the interface from the $S_{AW}$ side
($\tilde z_{S_2}(x/R=\pm3.75)/R\approx3.39$ and $\tilde z_\eta(x/R=\pm3.75)/R\approx-0.34$;
in order to guide the eye the magenta dots (\textcolor{Magenta}{$\bullet$})
in Fig.~\ref{fig:if_ilc_perp_SAW} mark these positions). \modifiedGreen{This behavior} is
\modifiedRed{opposite to the above one and is} presumably related to the fact that the secondary
layers consist of particles \modifiedRed{being preferentially} perpendicular to the layer normal;
unlike the particles in the main layers of the $S_{AW}$ phase or the particles in the $S_A$ layers,
these particles do not align with the layer normal $\vec{\hat n}=\vec{\hat x}$.
Instead, they avoid an orientation parallel to it.
While the transition across the $L$-$S_A$ interface -- from alignment with the layer normal towards
an isotropic orientational distribution -- results in an \textit{increase} \modifiedRed{of} the
effective particle diameter in the $y$- and $z$-direction, for the secondary $S_{AW}$ layers
the effective diameter is \textit{decreased} from the $S_{AW}$ phase towards the isotropic liquid $L$.
\modifiedRed{In Fig.~\ref{fig:if_ilc_perp_SAW} there are discontinuities in the (white) interface
contour lines $\tilde z_\eta(x)$ and $\tilde z_{S_2}(x)$, as in Fig.~\ref{fig:if_ilc_perp_SA}.
These discontinuities occur at lateral positions} $\check{x}$ at which the packing fraction
$\eta(\check{x},z\rightarrow\pm\infty)$ or the orientational order parameter
$S_2(\check{x},z\rightarrow\pm\infty)$ take the same value in the isotropic bulk, i.e.,
for $z\rightarrow-\infty$, \modifiedRed{as} in the $S_{AW}$ bulk, i.e., for $z\rightarrow\infty$.
\subsection{\label{sec:results:asymp}Asymptotic behavior}
\begin{figure}[!t]
\includegraphics[width=0.45\textwidth]{./profiles_asymp.ps}
\caption{
$L$-$S_A$ interface profiles of $\eta(x,z)$ and $S_2(x,z)$ for $T^*=10$ and $\alpha=\pi/2$.
\modifiedRed{Accordingly}, the smectic layer normal $\vec{\hat n}=\vec{\hat x}$ and the interface
normal (parallel to the $z$-axis) are perpendicular. Panels (a) and (b) show the logarithmic
deviations $\ln|\eta(x,z)-\eta_L|$ and $\ln|S_2(x,z)-S_{2,L}|$ of the packing fraction
and the orientational order parameter from their bulk values in the isotropic liquid $L$
for an \modifiedRed{ILC} with \modifiedRed{the} charges \modifiedRed{concentrated at the center of}
the molecule, i.e., for $D=0$. Panels (c) and (d) show $\ln|\eta(x,z)-\eta_L|$ and
$\ln|S_2(x,z)-S_{2,L}|$ for an \modifiedRed{ILC} with \modifiedRed{the} charges at the tips, i.e.,
for $D/R=1.8$. Note that on the base of each plot the interface profiles $\eta(x,z)$ and $S_2(x,z)$
are shown in order to elucidate the \modifiedGreen{viewing} angle on the interface.
\modifiedGreen{The local height of the manifold above the base corresponds to the given color code.}
Interestingly, for $D=0$ the periodic structure is still apparent \modifiedRed{even} far away
from the $L$-$S_A$-interface, i.e., $z/R<-20$, unlike the case $D/R=1.8$, for which the profiles
are rather flat in lateral direction $x$.
This can be related to the strong localization of charges \modifiedRed{at} the
centers of the smectic layers for $D=0$, pronouncing the periodic structure, while for $D/R=1.8$
the charge sites are spread and less localized along the $x$-direction.
}
\label{fig:if_ilc_perp_asymp_liq}
\end{figure}
\begin{figure}[!t]
\includegraphics[width=0.42\textwidth]{./profiles_asymp_plane.ps}
\caption{
The same \modifiedGreen{quantities as shown in} Fig.~\ref{fig:if_ilc_perp_asymp_liq}.
Panels (a) and (b) correspond to the case $D=0$ presenting $\ln|\eta(x,z)-\eta_L|$ and
$\ln|S_2(x,z)-S_{2,L}|$, respectively, \modifiedGreen{whereas panels (c) and (d) correspond to
the case $D/R=1.8$.}
\modifiedGreen{However, here} the direction of view is parallel to the $x$-axis
\modifiedGreen{so that the manifold from Fig.~\ref{fig:if_ilc_perp_asymp_liq} is projected
onto the plane spanned by the vertical axis and the $z$ axis}.
Away from the interface, i.e., \modifiedRed{for} $z/R<-20$, the decay length for
$\ln|\eta(x,z)-\eta_L|$ can be identified as the Debye screening length $\lambda_D/R=5$
for both cases (a) $D=0$ and (c) $D/R=1.8$. From the inset in panel (a), \modifiedRed{which shows}
$\ln|\eta(x,z)-\eta_L|$ for the corresponding (uncharged) ordinary liquid crystal with
$L/R=4$ and $\epsilon_R/\epsilon_L=2$, it is apparent that the contributions due to the
Gay-Berne potential (the asymptotics of which is \modifiedGreen{indicated} by the blue line) and
\modifiedRed{due to} the hard-core interaction (the asymptotics of which is depicted by the black line)
are much weaker than the (screened) electrostatic contribution and do not play a role
\modifiedRed{within the} range of $\ln|\eta(x,z)-\eta_L|$ \modifiedRed{considered here}.
\modifiedBlue{(In order to guide the eye, the blue and black lines are also shown in the main plots.
Apparently, in (a) and (c) the blue and black lines are far below the respective profiles.)}
However, for $\ln|S_2(x,z)-S_{2,L}|$, i.e., \modifiedRed{for} panels (b) and (d), one observes
\modifiedGreen{crossovers} -- indicated by the intersection of the \modifiedBlue{orange} and blue lines
at \modifiedRed{$z/R\approx-67$} \modifiedGreen{in (b)} and $z/R\approx-45$ \modifiedGreen{in (d)}
(compare the red arrows in the respective plots) -- from the electrostatic regime towards the decay
governed by the Gay-Berne contribution with decay length $\xi_\text{GB}/R\approx10$.
\modifiedGreen{Such crossovers occur}
\modifiedBlue{within the considered range $z/R\in[-80,0]$}, because for the orientational
order parameter the amplitude of the decay, due to the Gay-Berne interaction, is larger than for
the packing fraction \modifiedBlue{(compare the intersections of the blue lines with the ordinates
in panels (a) and (b))}.
\modifiedRed{Due to the hard-core interaction,} for $z/R>-20$ the decay length
$\xi_\text{PL}/R\approx1.9$ (\modifiedRed{Parsons-Lee}, black lines) is \modifiedRed{visible}
for the ordinary liquid crystal in the insets of (a) and (b) as well as for
$\ln|S_2(x,z)-S_{2,L}|$ of the two considered \modifiedRed{ILCs}.
(Due to the small amplitudes of the hard-core contributions \modifiedRed{to} $\ln|\eta(x,z)-\eta_L|$,
for the \modifiedRed{ILC considered here}, this decay \modifiedRed{has not been observed}.)
In order to confirm that the decay length $\xi_\text{PL}/R\approx1.9$ is indeed due to the hard-core
interaction, the insets of \modifiedRed{the} panels (c) and (d) show $\ln|\eta(x,z)-\eta_L|$
and $\ln|S_2(x,z)-S_{2,L}|$ of the pure hard-core system ($\beta\psi:=\beta\psi_\text{PL}$).
Interestingly, $\ln|\eta(x,z)-\eta_L|$ and $\ln|S_2(x,z)-S_{2,L}|$ \modifiedGreen{behave} very
\modifiedGreen{similarly} close to the interface, i.e., $z/R>-10$, for all three kinds of systems
studied here.
This suggests that the structure and \modifiedRed{the} orientational properties close to the interface
are governed by the hard-core interaction which enters into the present DFT approach
(see Secs.~\ref{sec:theory:model} and \ref{sec:theory:DFT}).
}
\label{fig:if_ilc_perp_asymp_liq_plane}
\end{figure}
In this section \modifiedRed{we discuss how} the interface profiles of the packing fraction
$\eta(\vec{r})$ and the orientational order parameter $S_2(\vec{r})$ \modifiedRed{attain} their
respective values $\eta_{L}$ and $S_{2,L}$ in the bulk \modifiedRed{liquid} $L$.
In Fig.~\ref{fig:if_ilc_perp_asymp_liq} the asymptotic behavior is discussed
in terms of $\ln|\eta(x,z)-\eta_L|$ and $\ln|S_2(x,z)-S_{2,L}|$ for $\alpha=\pi/2$ and $T^*=10$,
considering \modifiedRed{ILCs} with charges in the center, i.e.,
$D=0$ (panels (a) and (b)), and with charges at the tips, i.e., $D/R=1.8$ (panels (c) and (d)).
In order to elucidate the viewing angle on these 3-dimensional logarithmic plots,
\modifiedRed{the interface profiles $\eta(x,z)$ and $S_{2}(x,z)$ are shown in addition
as contour plots (see Fig.~\ref{fig:if_ilc_perp_SA}) at the base of the respective plot.}
Interestingly, while for $D=0$ the periodic structure of the profiles $\eta(x,z)$ and $S_2(x,z)$
in $x$-direction is \modifiedRed{clearly apparent also} in the decays $\ln|\eta(x,z)-\eta_L|$
and $\ln|S_2(x,z)-S_{2,L}|$ far away from the $L$-$S_A$-interface ($z/R<-20$ in
Figs.~\ref{fig:if_ilc_perp_asymp_liq}(a) and (b)), for $D/R=1.8$ (panels (c) and (d))
the decays vary only little as a function of $x$.
This distinct behavior can be a signature of the respective molecular charge distributions:
if the charges are \modifiedRed{localized at} the centers of the molecules, due to the
layer structure in the $S_A$ phase \modifiedRed{the} charges are also localized \modifiedRed{at}
the centers of the smectic layers, whereas for $D/R=1.8$ the charges are less localized along the
lateral direction $x$. Close to the interface ($z/R>-20$) the structure is very similar
\modifiedRed{in} both cases and, as will be discussed later, it is the hard-core repulsion
which is the dominant contribution here.
Turning the view parallel to the $x$-axis, one obtains projected representations of the logarithmic
plots in Fig.~\ref{fig:if_ilc_perp_asymp_liq}, which are shown in
Fig.~\ref{fig:if_ilc_perp_asymp_liq_plane} keeping the order of panels \modifiedRed{as} in
Fig.~\ref{fig:if_ilc_perp_asymp_liq}.
\modifiedBlue{Hence, Figs.~\ref{fig:if_ilc_perp_asymp_liq_plane}(a) and (b) correspond to the case
$D=0$ presenting $\ln|\eta(x,z)-\eta_L|$ and $\ln|S_2(x,z)-S_{2,L}|$, respectively.
Similarly, Figs.~\ref{fig:if_ilc_perp_asymp_liq_plane}(c) and (d) show the case $D/R=1.8$.}
\modifiedBlue{In both cases, at large distances, i.e., $z/R<-20$, the decay of the density profiles
is dominated by the electrostatic contribution $U_\text{es}$ to the total interaction potential $U$
(see \modifiedGreen{Figs.}~\ref{fig:if_ilc_perp_asymp_liq_plane}(a) and (c)). Accordingly,}
\modifiedRed{the decay of the envelope} is determined by the Debye screening length $\lambda_D/R=5$,
highlighted by the \modifiedBlue{orange} lines in Fig.~\ref{fig:if_ilc_perp_asymp_liq_plane}.
It is worth mentioning that a DFT study~\cite{Lu_Evans_DaGama1985} of the asymptotic behavior of the
liquid-vapor interface \modifiedRed{has yielded, unlike the present findings,} a decay length $l_b$
larger than the Debye screening length $\lambda_D$ for a hard sphere system with additional Yukawa
interaction. While in the present study the Yukawa potential is purely repulsive,
in Ref.~\cite{Lu_Evans_DaGama1985} \modifiedRed{using} an attractive Yukawa potential is indispensable,
because a sufficiently strong attraction is needed for liquid-vapor coexistence \modifiedRed{to occur}.
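That the envelope inherits the Debye screening length can be made plausible by a half-space integration of a schematic screened pair potential $\propto r_{12}^{-1}\exp(-r_{12}/\lambda_D)$, acting between a test particle at distance $z$ from the interface and a coexisting phase of uniform number density $\rho$ (the lateral layer modulation is ignored in this simple estimate):
\begin{align*}
V(z) \;\propto\; \rho\int_z^{\infty}\!\mathrm{d}z'\int_0^{\infty}\!\mathrm{d}s\,
2\pi s\,\frac{\exp\!\left(-\sqrt{s^2+z'^2}/\lambda_D\right)}{\sqrt{s^2+z'^2}}
\;=\; 2\pi\rho\lambda_D\int_z^{\infty}\!\mathrm{d}z'\,e^{-z'/\lambda_D}
\;=\; 2\pi\rho\lambda_D^{2}\,e^{-z/\lambda_D},
\end{align*}
so that the total potential, and hence the leading decay of the profile envelope, falls off $\propto\exp(-z/\lambda_D)$.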
Interestingly, the \modifiedRed{asymptotic behavior} of the orientational order parameter at far distances,
i.e., \modifiedRed{for} $z/R<-60$, \modifiedRed{differs} from the electrostatic decay and another
regime (\modifiedBlue{highlighted by blue lines in Fig.~\ref{fig:if_ilc_perp_asymp_liq_plane}})
with \modifiedRed{a} larger decay length $\xi_\text{GB}/R\approx10$ sets in.
This longer-ranged decay is due to the Gay-Berne interaction $U_\text{GB}$\modifiedRed{, which}
is verified by calculating the interface profile for an ordinary liquid crystal \modifiedBlue{(OLC)}
without charges (compare the insets of panels (a) and (b) of Fig.~\ref{fig:if_ilc_perp_asymp_liq_plane}).
\modifiedBlue{For the OLC, at far distances, i.e., $z/R<-30$, the same large decay length
$\xi_\text{GB}/R\approx10$ is observed. However, the amplitudes of the decay of the packing fraction
and \modifiedGreen{of} the orientational order parameter differ significantly.
(The blue line in panel (a) intersects the ordinate at $\ln|\eta-\eta_L|\approx-25$,
whereas the blue line in (b) intersects the ordinate at $\ln|S_2-S_{2,L}|\approx-20$.)}
\modifiedBlue{For $D=0$,} it turns out that \modifiedRed{for the orientational order parameter}
the \modifiedGreen{crossover} from the electrostatic decay towards the Gay-Berne decay
\modifiedRed{occurs at $z/R\approx-67$} \modifiedBlue{(this position is marked by the red arrow
in Fig.~\ref{fig:if_ilc_perp_asymp_liq_plane}(b)), whereas for the case
$D/R=1.8$ the \modifiedGreen{crossover} occurs at $z/R\approx-45$ (see the red arrow in
Fig.~\ref{fig:if_ilc_perp_asymp_liq_plane}(d))}.
Ultimately, the larger Gay-Berne decay length $\xi_\text{GB}/R\approx10$ will also \modifiedRed{become}
apparent in the decay profile of the packing fraction. However, due to the
\modifiedBlue{smaller amplitude of the Gay-Berne decay of the density compared \modifiedGreen{with}
the decay of the orientational order parameter (compare the insets in
Figs.~\ref{fig:if_ilc_perp_asymp_liq_plane}(a) and (b))}, \modifiedRed{in \modifiedGreen{the present}
case the \modifiedGreen{crossover} occurs} further away from the interface
(in Fig.~\ref{fig:if_ilc_perp_asymp_liq_plane}(a) the intersection of the
\modifiedBlue{orange} line and the blue line \modifiedBlue{is located at $z/R\approx-121$}
\modifiedRed{(not visible)} and in \modifiedRed{Fig.~\ref{fig:if_ilc_perp_asymp_liq_plane}(c)} at
\modifiedBlue{$z/R\approx-97$} \modifiedBlue{(also not visible)}).
\modifiedBlue{However, at very far distances $z/R<-80$, the magnitudes $\ln|\eta-\eta_L|\lesssim-25$ are
very small and cannot be resolved numerically. For this reason,
\modifiedGreen{in Figs.~\ref{fig:if_ilc_perp_asymp_liq_plane}(a) and (c) crossovers} from the electrostatic
regime to the Gay-Berne regime are not shown.}
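The location of such a crossover follows from equating the two exponential envelopes. The following Python sketch uses illustrative amplitudes chosen for the example, not values fitted to the present data:

```python
import math

def crossover(lam_short, xi_long, ln_amp_short, ln_amp_long):
    """Position z* at which amp_long*exp(z/xi_long) overtakes
    amp_short*exp(z/lam_short) when moving towards z -> -infinity;
    the amplitudes are passed as logarithms."""
    return (ln_amp_long - ln_amp_short) / (1.0 / lam_short - 1.0 / xi_long)

# illustrative numbers (not fitted): Debye length 5 and Gay-Berne length 10
# (in units of R), with a Gay-Berne amplitude smaller by a factor exp(-6.7)
z_star = crossover(5.0, 10.0, 0.0, -6.7)  # approximately -67
```

For $z$ below the crossover position the envelope with the larger decay length dominates, consistent with the Gay-Berne regime taking over far from the interface.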
\modifiedGreen{We note} that, although the Gay-Berne potential $U_\text{GB}$ decays \modifiedRed{algebraically}
$\propto (r_{12}/R)^{-6}$ (see Eq.~(\ref{eq:Pairpot_GB})), here the Gay-Berne decay is exponential,
because solving the Euler-Lagrange equation \modifiedRed{in} Eq.~(\ref{eq:ELG}) requires the
evaluation of the ERPA contribution $\beta\psi_\text{ERPA}$ of the effective one particle potential
$\beta\psi$ (see Eqs.~(\ref{eq:Eff1Pot_ERPA}) and (\ref{eq:Calc_c1_Approx_interface})).
The numerical calculation of this integral (which extends over the whole volume $\mathcal{V}$
\modifiedGreen{of the system}) requires a truncation of the integral at a cut-off distance,
which leads to an exponential decay of this contribution, instead of a power law decay
\modifiedGreen{$\propto (z/R)^{-3}$~\cite{DeGennes1981,Barker1982,Lu_Evans_DaGama1985}},
as it \modifiedGreen{is} expected for the full Gay-Berne potential $U_\text{GB}$.
(The exponent $3$ arises because the asymptotic behavior of an interfacial density profile,
generated by long-ranged forces, varies in proportion to the corresponding (total) potential,
\clearpage
which acts on a test particle at a distance $z$ from the interface and which is due to
the pair interaction between the particles in one of the two coexisting phases
(which are separated by the considered interface) and the test particle. Thus, via an integration of the
Gay-Berne pair interaction, which decays $\propto (r_{12}/R)^{-6}$, over a half-space,
one obtains the corresponding total potential decaying
$\propto (z/R)^{-3}$~\cite{DeGennes1981,Barker1982,Dietrich1991}.)
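The half-space integration behind the exponent $3$ can be made explicit. As a sketch, consider a test particle at a distance $z>0$ from the planar boundary of a half-space filled with a uniform number density $\rho$ of particles which interact with the test particle via the orientation-averaged far-field form $u(r_{12})=-C\,(R/r_{12})^{6}$, where the amplitude $C$ is a placeholder for the angle-averaged Gay-Berne strength. In cylindrical coordinates $(s,z')$ one has
\begin{align*}
V(z) &= \rho\int_{z}^{\infty}\mathrm{d}z'\int_{0}^{\infty}2\pi s\,\mathrm{d}s\;
        u\!\left(\sqrt{s^{2}+z'^{2}}\right)
      = -\pi C\rho R^{6}\int_{z}^{\infty}\frac{\mathrm{d}z'}{2\,z'^{4}}
      = -\frac{\pi C\rho R^{6}}{6\,z^{3}}\;\propto\;(z/R)^{-3},
\end{align*}
where $\int_{0}^{\infty}2\pi s\,\mathrm{d}s\,(s^{2}+z'^{2})^{-3}=\pi/(2z'^{4})$ has been used.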
\MODIFIED{\MODIFIEDtwo{For $z/R\rightarrow-\infty$ the algebraic} decay of the Gay-Berne
\MODIFIEDtwo{interaction} potential always dominates the exponential decay due
to the \MODIFIEDtwo{screened} electrostatic interaction, \MODIFIEDtwo{independent} of the relative
strength of the electrostatic and the Gay-Berne interaction potential.
A \MODIFIEDtwo{variation of their} relative strength $\gamma/(R\epsilon_0)$
would only lead to a \MODIFIEDtwo{shift of} the location of the \MODIFIEDtwo{corresponding} crossovers
in the density and \MODIFIEDtwo{the} order parameter profiles (\MODIFIEDtwo{see} the red arrows in
Fig.~\ref{fig:if_ilc_perp_asymp_liq_plane}) \MODIFIEDtwo{caused} by altering the amplitudes of the
respective decays \MODIFIEDtwo{of the two interactions}.}
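To illustrate this amplitude dependence, one can equate two exponential decays with the decay lengths quoted above ($\lambda_D/R=5$ and $\xi_\text{GB}/R\approx10$). The following Python snippet is an illustrative estimate only; the function name and the amplitude ratios are hypothetical inputs, not values extracted from the present calculation:

```python
import math

def crossover_distance(xi_short, xi_long, amp_ratio):
    """Distance |z|/R at which A_s*exp(-|z|/xi_short) = A_l*exp(-|z|/xi_long).

    xi_short < xi_long are the two decay lengths (in units of R) and
    amp_ratio = A_s/A_l > 1 is a hypothetical ratio of the decay amplitudes.
    """
    return math.log(amp_ratio) / (1.0 / xi_short - 1.0 / xi_long)

# decay lengths quoted in the text: lambda_D/R = 5 and xi_GB/R ~ 10;
# a larger short-range (electrostatic) amplitude pushes the crossover outwards
for ratio in (1e1, 1e2, 1e3):
    print(ratio, crossover_distance(5.0, 10.0, ratio))
```

For amplitude ratios of order $10^{2}$--$10^{3}$ this estimate places the crossover at $|z|/R\approx46$--$69$, i.e., of the same order as the positions discussed above; rescaling the amplitudes (e.g., by changing the relative interaction strength) shifts the crossover only logarithmically.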
Close to the interface, i.e., \modifiedRed{for} $-20<z/R<-5$, \modifiedRed{in the insets of
Fig.~\ref{fig:if_ilc_perp_asymp_liq_plane} one can observe} an exponential decay with a decay
length $\xi_\text{PL}/R\approx1.9$ (depicted by the black lines) which arises from the pure
hard-core Parsons-Lee contribution $\beta\psi_\text{PL}$. Thus $\xi_\text{PL}$ can be identified
as the isotropic-liquid bulk correlation length of the pure hard-core system.
Interestingly, while the hard-core correlation length $\xi_\text{PL}$ is observable
in \modifiedRed{OLCs -- within \modifiedGreen{both}} the $\eta$ and \modifiedGreen{the} $S_2$ profiles
(\modifiedBlue{at distances $z/R\in[-20,-5]$ the respective decays closely follow the black lines
which depict the hard-core decay in the insets of Figs.~\ref{fig:if_ilc_perp_asymp_liq_plane}(a) and (b)})
\modifiedRed{--, for ILCs} this decay is visible \modifiedGreen{only} \modifiedRed{within} the $S_2$ profile.
Only for the $S_2$ profile is the amplitude of the hard-core decay large enough that the
hard-core correlation length $\xi_\text{PL}$ is observable before the electrostatic decay becomes
\modifiedRed{dominant}. The insets in \modifiedGreen{Figs.}~\ref{fig:if_ilc_perp_asymp_liq_plane}(c) and (d)
show the interface profiles calculated for the pure hard-core system ($\beta\psi:=\beta\psi_\text{PL}$)
in order to verify that the decay close to the interface, i.e., \modifiedRed{for} $-20<z/R<-5$,
is governed by the hard-core interaction.
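The decay lengths quoted in this section can be read off as inverse slopes of the logarithmic profiles. The following Python sketch demonstrates the procedure on a synthetic single-exponential profile; the bulk value and the amplitude are made-up illustrative numbers:

```python
import numpy as np

# synthetic single-exponential decay toward the liquid bulk value
xi_true = 1.9                  # hard-core decay length in units of R
eta_L = 0.3                    # hypothetical liquid bulk packing fraction
amp = 0.05                     # hypothetical decay amplitude
z = np.linspace(-20.0, -5.0, 100)
eta = eta_L + amp * np.exp(z / xi_true)   # decays to eta_L for z -> -infinity

# the slope of ln|eta - eta_L| versus z is the inverse decay length
slope, _ = np.polyfit(z, np.log(np.abs(eta - eta_L)), 1)
xi_fit = 1.0 / slope
print(xi_fit)   # recovers ~1.9
```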
Finally, it is worth mentioning that \modifiedRed{for all cases shown in
Fig.~\ref{fig:if_ilc_perp_asymp_liq_plane},} the \modifiedRed{structural} and orientational properties
close to the interface, i.e., \modifiedRed{for} $z/R>-10$, agree very well. Thus,
it is the hard-core interaction which determines the structural and orientational
properties close to the interface, while the electrostatic and the Gay-Berne contributions
\modifiedBlue{dominate further away from the interface}. At intermediate distances electrostatics
dominates the decay of the interface profiles \modifiedRed{whereas} far away from the interface
ultimately the attractive Gay-Berne interaction \modifiedRed{dominates}. Furthermore,
the \modifiedRed{positions} of \modifiedGreen{the crossovers} between these regimes are distinct for the
packing fraction profile and the orientational order parameter profile.
\subsection{\label{sec:results:tilt}Tilted interfaces}
\begin{figure}[!t]
\includegraphics[width=0.45\textwidth]{./profiles_tilted_SA.ps}
\caption{
The $L$-$S_A$ interface profiles $\eta(x,z)$ \modifiedGreen{(Eq.~(\ref{eq:numberdensity}))} and $S_2(x,z)$
\modifiedGreen{(Eq.~(\ref{eq:oriorderparameter}))} for $\alpha=\pi/4$
and $T^*=1.3$ are shown. Here, an \modifiedRed{ILC} with charges \modifiedRed{localized at its}
center is considered ($L/R=4,\epsilon_R/\epsilon_L=2,\gamma/(R\epsilon_0)=0.045,\lambda_D/R=5$,
and $D=0$). For $z\rightarrow-\infty$ the isotropic liquid bulk $L$ is approached and for
$z\rightarrow\infty$ the bulk of the $S_A$ phase \modifiedRed{is attained}, i.e., the interface
normal is parallel to the $z$-axis. The red stripes at the top of the contour plots show the tails
of the smectic layers. The black dashed lines mark the interface positions $z_\eta/R\approx5.56$
and $z_{S_2}/R\approx2.79$ calculated via Eqs.~(\ref{eq:gibbs_dividing_surface_evaluation_eta}) and
(\ref{eq:gibbs_dividing_surface_evaluation_s2}). Similar to the case $\alpha=\pi/2$
\modifiedRed{(see Fig.~\ref{fig:if_ilc_perp_SA}), to a certain extent the orientational order
persists} into the liquid phase $L$.
}
\label{fig:if_ilc_tilt_SA}
\end{figure}
\begin{figure}[!t]
\includegraphics[width=0.45\textwidth]{./if_tens.ps}
\caption{
The (reduced) interfacial tension $\Gamma^*(\alpha)$ \modifiedRed{(Eq.~(\ref{eq:interface_tension}),
black line)} and the distance $z_\eta-z_{S_2}$ between the transition in the \modifiedRed{structural}
and the orientational order \modifiedRed{(orange line)} as function
of the tilt angle $\alpha$. In panel (a) the $L$-$S_A$ interface at $T^*=1$ is considered
for \modifiedRed{ILCs} with \modifiedRed{their} charges \modifiedRed{localized at} the center
\modifiedBlue{($L/R=4,\epsilon_R/\epsilon_L=2,\gamma/(R\epsilon_0)=0.045,\lambda_D/R=5$, and
$D=0$; \modifiedGreen{see} Fig.~\ref{fig:pd_ilc}(a))}.
There are two minima: the global minimum at the equilibrium tilt angle $\alpha_\text{eq}=0$
(\modifiedRed{i.e.,} interface normal and smectic layer normal are parallel) and a local minimum at
$\alpha=\pi/2$ which shows that the \modifiedRed{orthogonal} orientation of the smectic layer normal
and the interface normal is \modifiedRed{a metastable configuration}. The increase \modifiedRed{of}
the interfacial tension \modifiedBlue{below} $\alpha=\pi/2$ is accompanied by an increase
\modifiedRed{of} the distance $z_\eta-z_{S_2}$. \modifiedRed{This suggests that maintaining
to a certain extent the local orientational order in the isotropic liquid beyond the smectic layers
costs free energy.}
\modifiedBlue{
For technical reasons we did not study small tilt angles $\alpha>0$. \modifiedGreen{Hence we} cannot
comment on the functional form of $\Gamma^*(\alpha)$ for $0<\alpha<\pi/6$ in the case $D/R=0$ or for
$0<\alpha<\pi/4$ in the case $D/R=1.8$.
This is \modifiedGreen{indicated} by connecting the data points at $\alpha=0$ and $\pi/6$ by dashed
lines in (a) (see the discussion in the main text of Sec.~\ref{sec:results:tilt}).}
In panel (b) the $L$-$S_{AW}$ interface, \modifiedBlue{which is accessible for ILCs with their
charges at the tips ($L/R=4,\epsilon_R/\epsilon_L=2,\gamma/(R\epsilon_0)=0.045,\lambda_D/R=5$, and
$D/R=1.8$)}, is considered for $T^*=0.9$ \modifiedRed{(see Fig.~\ref{fig:pd_ilc}(b)).
Also in this case} the equilibrium tilt angle $\alpha_\text{eq}=0$ corresponds to the parallel
orientation of the interface normal and \modifiedRed{the} layer normal.
\modifiedRed{Below} $\alpha=\pi/2$, \modifiedRed{as function of $\alpha$} the interfacial tension
is rather flat, \modifiedRed{taking the} value $\Gamma^*\approx0.07$.
Thus, for the $L$-$S_{AW}$ interface the perpendicular orientation of the interface normal and
\modifiedRed{of} the smectic layer normal \modifiedRed{corresponds to a labile configuration.}
\modifiedBlue{(Analogously to panel (a), the data points at $\alpha=0$ and $\pi/4$ in (b) are
connected by a dashed line.)} \modifiedRed{We note that} $\Gamma^*(\alpha)$ is symmetric around
$\alpha=\pi/2$, due to the mirror-symmetry of the particles.
}
\label{fig:if_tension}
\end{figure}
\begin{figure}[!t]
\includegraphics[width=0.45\textwidth]{./profiles_tilted_SAW.ps}
\caption{
Same as Fig.~\ref{fig:if_ilc_tilt_SA}. Here, the $L$-$S_{AW}$ interface profiles $\eta(x,z)$
\modifiedGreen{(Eq.~(\ref{eq:numberdensity}))} and $S_2(x,z)$
\modifiedGreen{(Eq.~(\ref{eq:oriorderparameter}))} \modifiedRed{are shown} for $\alpha=\pi/3$
and $T^*=0.9$. \modifiedRed{To this end}, an ionic liquid crystal with charges at the tips is considered
($L/R=4,\epsilon_R/\epsilon_L=2,\gamma/(R\epsilon_0)=0.045,\lambda_D/R=5$, and $D/R=1.8$).
For $z\rightarrow-\infty$ the isotropic liquid bulk $L$ is approached and for $z\rightarrow\infty$
the bulk of the $S_{AW}$ phase is attained, i.e., the interface normal is parallel to the $z$-axis.
The transition in the structure occurs at $z_\eta/R\approx2.28$ and the transition in the
orientational order \modifiedRed{does so} at $z_{S_2}/R\approx-0.68$.
}
\label{fig:if_ilc_tilt_SAW}
\end{figure}
In this section \modifiedRed{we discuss} the dependence of the structural and orientational properties
of the liquid-smectic-interface on the tilt angle $\alpha$. In Fig.~\ref{fig:if_ilc_tilt_SA} the
$L$-$S_A$-interface profiles $\eta(x,z)$ and $S_2(x,z)$ are shown for \modifiedRed{the} reduced
temperature $T^*=1.3$ (see the black dots (\textcolor{black}{$\bullet$}) in Fig.~\ref{fig:pd_ilc}(a))
and $\alpha=\pi/4$. Here, \modifiedRed{we consider} the case of \modifiedRed{ILCs with the charges
localized in the center} ($L/R=4,\epsilon_R/\epsilon_L=2,\gamma/(R\epsilon_0)=0.045,\lambda_D/R=5$,
and $D=0$). Like in the case $\alpha=\pi/2$ \modifiedRed{(see Sec.~\ref{sec:results:perp})}, i.e.,
the interface normal and \modifiedRed{the} smectic layer normal $\vec{\hat n}=\vec{\hat x}$ are
perpendicular, a persisting orientational order can be observed at the interface:
The structural transition \modifiedRed{occurs} at $z_\eta/R\approx5.56$, whereas the transition in the
orientational order between the two phases \modifiedRed{takes place} at $z_{S_2}/R\approx2.79$ which
is a few diameters deeper in the isotropic liquid.
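The interface positions entering $z_\eta-z_{S_2}$ follow from Eqs.~(\ref{eq:gibbs_dividing_surface_evaluation_eta}) and (\ref{eq:gibbs_dividing_surface_evaluation_s2}). As an illustration of the underlying idea, the following Python sketch implements a generic equal-area (Gibbs dividing surface) construction for a smooth profile on a finite interval; the tanh profile and its parameters are illustrative and not taken from the actual calculation:

```python
import numpy as np

def dividing_surface(z, f, f_minus, f_plus):
    """Equal-area (Gibbs) dividing surface of a profile f(z) on [z[0], z[-1]]:
    the position z0 at which a sharp step between f_minus and f_plus encloses
    the same area as the smooth profile f."""
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))  # trapezoidal rule
    return (f_plus * z[-1] - f_minus * z[0] - integral) / (f_plus - f_minus)

# illustrative tanh profile between hypothetical liquid and smectic bulk values
z = np.linspace(-30.0, 30.0, 2001)
eta_L, eta_S, z0, w = 0.30, 0.45, 2.79, 2.0
eta = 0.5 * (eta_L + eta_S) + 0.5 * (eta_S - eta_L) * np.tanh((z - z0) / w)
print(dividing_surface(z, eta, eta_L, eta_S))   # ~2.79
```

For a symmetric tanh profile the construction returns the inflection position, here $z_0/R=2.79$.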
In Fig.~\ref{fig:if_tension} the interfacial tension $\Gamma^*(\alpha)$ given by
Eq.~(\ref{eq:interface_tension}) and the distance $z_\eta-z_{S_2}$ between the interface
\modifiedRed{positions} associated with the mean packing fraction $\eta_0(x)$ and the mean
orientational order parameter $S_{20}(x)$ are shown as function of the tilt angle $\alpha$.
In \modifiedRed{Fig.~\ref{fig:if_tension}(a) the case of the $L$-$S_A$-interface for ILCs with the
charges at their center is considered for $T^*=1$}. Both the interfacial tension $\Gamma^*(\alpha)$
(black dots, $\bullet$) and the distance $z_\eta-z_{S_2}$ (orange dots, \textcolor{orange}{$\bullet$})
\modifiedRed{exhibit} a global minimum at $\alpha=0$ and a second, local minimum at $\alpha=\pi/2$.
Thus, the equilibrium tilt angle $\alpha_\text{eq}=0$ corresponds to
\modifiedRed{the configuration in which} the interface normal and the smectic layer normal
\modifiedRed{$\vec{\hat n}=\vec{\hat z}$ are parallel}, whereas the \modifiedRed{corresponding}
perpendicular orientation $\alpha=\pi/2$ is metastable.
This increase in the interfacial tension $\Gamma^*$ \modifiedRed{below} $\alpha=\pi/2$ suggests that
the \modifiedRed{configuration, in which} the interface normal and the layer normal
\modifiedRed{are orthogonal}, should be observable without \modifiedRed{resorting to} any external
stabilizing field which could be \modifiedRed{provided, e.g.,} by a suitably structured substrate.
\modifiedRed{This} metastability of the tilt angle $\alpha=\pi/2$ \modifiedRed{can} be checked
\modifiedRed{also} via computer simulations. Interestingly, the increase \modifiedRed{of} the
interfacial tension \modifiedRed{below} $\alpha=\pi/2$ is accompanied by an increase in the distance
$z_\eta-z_{S_2}$, suggesting that maintaining the local orientational order
\modifiedRed{in the isotropic liquid} beyond the smectic layers costs \modifiedRed{free} energy.
Consistently, in the case $\alpha_\text{eq}=0$, for which the orientational order vanishes
\modifiedRed{directly} with the \modifiedRed{disappearance} of the smectic layers, the
\modifiedRed{cost in free energy} is lowest. Apparently, \modifiedRed{for $\alpha=0$} the interfacial
tension $\Gamma^*(\alpha=0)\approx0.006$ is significantly smaller \modifiedRed{than for} all other
angles $\alpha$ shown in Fig.~\ref{fig:if_tension}(a).
\modifiedBlue{For technical reasons we did not study small tilt angles $\alpha>0$ and hence cannot
comment on the functional form of $\Gamma^*(\alpha)$ for $0<\alpha<\pi/6$ in the case $D/R=0$
or for $0<\alpha<\pi/4$ in the case $D/R=1.8$.
This is \modifiedGreen{indicated} by connecting the data points at $\alpha=0$ and $\pi/6$ by dashed lines.
(For the same reason, in (b) the data points at $\alpha=0$ and $\pi/4$ are connected by dashed lines.)}
It has been pointed out in Sec.~\ref{sec:theory:DFT} that, due to the \modifiedGreen{crossover}
\modifiedRed{at the tilt angle $\alpha=0$} from a periodic system towards
\modifiedRed{one which is translationally} invariant in lateral direction $x$, the integration domain
$\mathcal{V}_d$ for evaluating the coefficients $Q_i(\vec{r})$
\modifiedRed{(see Eq.~(\ref{eq:ExpansionCoeffs_interface}))} is not continuously evolving
\modifiedBlue{at $\alpha=0$}.
For $\alpha>0$ it is \modifiedRed{a} slice of length $d_x=d/\sin(\alpha)$ in
$x$-direction, while \modifiedRed{for $\alpha=0$} it is the subsystem of length $d$ in $z$-direction at
position $\vec{r}$. (\modifiedGreen{For} $\alpha=0$ the extent in $x$- and $y$-direction is arbitrary due
to the translational invariance in lateral direction.) In order to describe a continuous
\modifiedRed{variation of} the interfacial tension $\Gamma^*(\alpha)$ for all tilt angles
$\alpha\in[0,\pi/2]$, one \modifiedGreen{thus} needs to consider a different approach,
\modifiedBlue{which does not rely on a projected density and thereby on the direction of the bulk
smectic layer normal $\vec{\hat n}$ throughout the whole \modifiedGreen{interface structure}.}
\modifiedRed{Nonetheless, our above approach still allows one} to compare the interfacial
tension $\Gamma^*(\alpha)$ for the extreme cases $\alpha=0$ and $\pi/2$, \modifiedRed{thus predicting}
which one \modifiedRed{of the two} is preferred.
Furthermore, \modifiedRed{our approach provides an understanding} \modifiedGreen{of} the local increase in
$\Gamma^*(\alpha)$ \modifiedRed{below} $\alpha=\pi/2$, as \modifiedRed{one observes} an increasing
distance $z_\eta-z_{S_2}$ between the transition in the \modifiedRed{structural} and the orientational
order at the interface.
Figure~\ref{fig:if_tension}(b) shows data for the $L$-$S_{AW}$-interface at $T^*=0.9$ for ILCs with
charges \modifiedRed{located} at the tips. Around $\alpha=\pi/2$ the interfacial tension
(black \modifiedRed{squares}, {\tiny$\blacksquare$}) is a rather flat function of $\alpha$
\modifiedRed{taking values around} $\Gamma^*\approx0.07$. The slight variations in
$\Gamma^*$ for $\alpha\in[\pi/4,\pi/2]$ might be \modifiedRed{caused by} the numerical evaluation
of Eq.~(\ref{eq:ELG}) which has to be done separately for each tilt angle $\alpha$.
\modifiedRed{Consistently}, the distance $z_\eta-z_{S_2}$ (orange \modifiedRed{squares},
{\tiny\textcolor{orange}{$\blacksquare$}}) does not vary much \modifiedRed{as function of} the tilt
angle $\alpha$. \modifiedRed{As above}, the equilibrium tilt angle $\alpha_\text{eq}=0$ corresponds
to the \modifiedRed{configuration in which} the interface normal and the smectic layer normal
\modifiedRed{$\vec{\hat n}=\vec{\hat z}$ are parallel}.
Finally, in Fig.~\ref{fig:if_ilc_tilt_SAW}, \modifiedRed{we show} the contour plot of the
$L$-$S_{AW}$-interface for $\alpha=\pi/3$ and $T^*=0.9$ \modifiedGreen{for} an \modifiedRed{ILC} system
with $D/R=1.8$, \modifiedRed{illustrating} the structure of this \modifiedRed{type} of interface.
\section{\label{sec:summary}Summary and conclusions}
Free interfaces in systems composed of ionic liquid crystals (ILCs) have been studied within
density functional theory (see Sec.~\ref{sec:theory:DFT}). In particular, the discussion
\modifiedRed{has been} focused on two kinds of ionic liquid crystals:
\modifiedRed{first}, ILCs with \modifiedRed{the} charges \modifiedRed{localized at} the center of the
molecules, i.e., $D=0$ (see Figs.~\ref{fig:ellipsoids} and \ref{fig:pairpot}), and, second, ILCs with
\modifiedRed{the} charges at the tips of the molecules, i.e., $D/R=1.8$. All other model parameters,
i.e., $L/R=4,\epsilon_R/\epsilon_L=2,\gamma/(R\epsilon_0)=0.045,\lambda_D/R=5$, are identical in both
cases. \modifiedRed{Therefore} the two kinds differ solely by the charge distribution
\modifiedRed{within the molecules}.
For $D=0$ coexistence between the isotropic liquid $L$ and the ordinary smectic-A phase $S_A$
can be observed at \modifiedRed{a} sufficiently large mean packing fraction $\eta_0$
\modifiedRed{(see Fig.~\ref{fig:pd_ilc}(a))}. The $S_A$ phase is characterized by a layered
structure in the direction of the smectic layer normal $\vec{\hat n}$ with \modifiedRed{a}
smectic layer spacing $d\approx L$ comparable to the particle length $L$.
Within the smectic layers the particles are well aligned with the smectic layer normal.
The phase behavior of ILCs is altered by varying the molecular charge distribution,
as can be \modifiedRed{inferred from comparing the case $D=0$ (i.e., charges at the center) and}
$D/R=1.8$ \modifiedRed{(i.e., charges at the tips, see Fig.~\ref{fig:pd_ilc}(b))}.
At sufficiently low \modifiedRed{temperatures} a new smectic-A phase \modifiedRed{has been} observed,
which is referred to as \modifiedRed{the} $S_{AW}$ phase~\cite{Bartsch2017}.
The $S_{AW}$ phase shows an alternating structure of layers with \modifiedRed{the} majority of
\modifiedRed{the} particles being oriented parallel to the smectic layer normal $\vec{\hat n}$ and
\modifiedRed{the} minority of \modifiedRed{the} particles localized in secondary layers which prefer
orientations perpendicular to $\vec{\hat n}$. Due to the alternating layer structure, the smectic
layer spacing $d/R\approx7.5$ \modifiedRed{in the $S_{AW}$ phase} is increased
\modifiedRed{compared with the spacing in the $S_A$ phase}.
For a parallel orientation of the smectic layer normal $\vec{\hat n}=\vec{\hat z}$ and the
$L$-$S_A$-interface normal, i.e., \modifiedRed{for} $\alpha=0$ (see Fig.~\ref{fig:interface_sketch}),
it turns out that the interface locations $z_\eta$ and $z_{S_2}$, associated with the transition in
\modifiedRed{the structural} and in the orientational order, \modifiedRed{respectively}, are very
close to each other (see Fig.~\ref{fig:if_ilc_l-sa_para}). In fact, Fig.~\ref{fig:gibbs_parallel}
shows that for the whole temperature range considered here, the \modifiedRed{difference}
$z_\eta-z_{S_2}<d$ in the two interface positions is smaller than the smectic layer spacing $d$.
Hence, \modifiedRed{for $\alpha=0$} the orientational order vanishes within the last smectic layer
at the $L$-$S_A$-interface. \modifiedRed{Concerning the interface positions},
Fig.~\ref{fig:gibbs_parallel} demonstrates that \modifiedRed{ILCs} with $D/R=1.8$ and ordinary
(uncharged) liquid crystals with $L/R=4$ and $\epsilon_R/\epsilon_L=2$ exhibit qualitatively the
same results. Considering the $L$-$S_{AW}$-interface \modifiedRed{(see Fig.~\ref{fig:if_ilc_l-saw_para})}
one observes an increase in $z_\eta-z_{S_2}$, but it \modifiedRed{remains significantly smaller} than
the smectic layer spacing $d/R\approx7.5$. Thus, for $\alpha=0$ it turns out that the loss of
orientational order coincides with the \modifiedRed{disappearance} of the layer structure of the
respective smectic-A phase at the interface towards the isotropic liquid.
\modifiedRed{This holds for all parameter values studied here.}
Interestingly, the case $\alpha=\pi/2$, i.e., changing the relative orientation of the
smectic layer normal $\vec{\hat n}=\vec{\hat x}$ and the interface normal such that they
are perpendicular to each other, leads to qualitative changes in the interfacial properties:
\modifiedRed{a} periodic structure of the interface in lateral direction $x$ can be observed, which is
a direct consequence of the periodicity in the bulk smectic-A phase with the smectic layer spacing $d$
(see Figs.~\ref{fig:interface_sketch}, \ref{fig:if_ilc_perp_SA}, and \ref{fig:if_ilc_perp_SAW}).
For the $L$-$S_A$-interface \modifiedRed{(see Fig.~\ref{fig:if_ilc_perp_SA})} one observes considerable
\modifiedRed{differences} $(z_\eta-z_{S_2})/R\gtrsim2$ \modifiedRed{between} the interface positions.
Thus, the (nearly) parallel orientations of particles in the $S_A$ layers persist a few particle
diameters $R$ into the liquid phase $L$, unlike the case $\alpha=0$, \modifiedRed{for which} the
orientational order vanishes \modifiedRed{directly} with the breakdown of the $S_A$ layer structure
at the interface, i.e., within the last smectic layer.
Due to the periodicity in (lateral) $x$-direction, in the case $\alpha=\pi/2$ one indeed observes
a qualitative change in the structure \modifiedRed{of} the $L$-$S_{AW}$-interface compared to the
$L$-$S_A$-interface. At the tails of the $S_{AW}$ main layers the interface also features an
orientational order \modifiedRed{which} continues further into the liquid phase $L$ than the layer
structure ($(\tilde z_\eta(x)-\tilde z_{S_2}(x))/R\approx2.6$), whereas for the secondary layers it is the
layer structure that persists deeper into the $L$ phase than the orientational order
($(\tilde z_\eta(x)-\tilde z_{S_2}(x))/R\approx-3.73$). The \modifiedRed{opposite} behavior at the
main, respectively secondary, layers is presumably driven by the orientational properties of the
respective kinds of layers: \modifiedRed{in} the main layers the particles are well aligned with
the smectic layer normal $\vec{\hat n}=\vec{\hat x}$ and therefore show an effective diameter in
the $y$-$z$-plane \modifiedRed{which} is comparable to the particle diameter $R$. However, in the
secondary layers (here \modifiedRed{with} $S_2(x,z)<0$) the particles avoid orientations parallel
to the $x$-axis, giving rise to a considerably larger effective radius.
\modifiedRed{Upon approaching the liquid phase $L$, this effective radius \textit{increases} for
the main layers of the $S_{AW}$ phase, whereas it \textit{decreases} for the secondary
layers.}
In Sec.~\ref{sec:results:asymp} the asymptotic behavior of the interface profiles has been studied.
In particular, in Figs.~\ref{fig:if_ilc_perp_asymp_liq} and \ref{fig:if_ilc_perp_asymp_liq_plane}
the $L$-$S_A$-interface for $\alpha=\pi/2$ \modifiedRed{has been} considered for the two ILC systems
with $D/R=0$ and $1.8$. For $D=0$, i.e., \modifiedRed{with the charges being localized at} the center,
the periodic structure of the interface is apparent from \modifiedRed{the quantities}
$\ln|\eta(x,z)-\eta_L|$ and $\ln|S_2(x,z)-S_{2,L}|$, showing the logarithmic deviations of the profiles
$\eta(x,z)$ and $S_2(x,z)$ from their respective liquid bulk values $\eta_L$ and $S_{2,L}$
(Figs.~\ref{fig:if_ilc_perp_asymp_liq_plane}(a) and (b)), \modifiedRed{which can be resolved}
even at far distances $z/R<-20$ from the $L$-$S_A$-interface. Conversely, for $D/R=1.8$, i.e.,
\modifiedRed{the charges being fixed} at the tips, \modifiedRed{far from the interface}
$\ln|\eta(x,z)-\eta_L|$ and $\ln|S_2(x,z)-S_{2,L}|$ vary only marginally as function of the lateral
coordinate $x$. While for $D=0$ the charges are strongly localized \modifiedRed{at} the centers of the
smectic layers, \modifiedRed{thus} promoting the periodic structure, for $D/R=1.8$ the charges are less
localized and more distributed along the $x$-direction.
The asymptotic decays \modifiedRed{of the interface profiles} towards the isotropic liquid $L$
show an interesting and rich behavior. \modifiedRed{We have found} three distinct \modifiedRed{spatial}
regimes, \modifiedRed{which are} associated with the three contributions to the underlying pair
potential \modifiedRed{(see Eq.~(\ref{eq:Pairpot}))}. Although the presence of charges is the
distinctive \modifiedRed{feature} of \modifiedRed{ILCs}, the (screened) electrostatic contribution
\modifiedRed{to the interaction (Eq.~(\ref{eq:PairPot_ES})) governs} the asymptotic decay only at
intermediate distances from the interface \modifiedRed{(see Fig.~\ref{fig:if_ilc_perp_asymp_liq_plane})}.
\modifiedRed{In this regime}, the decay length is given by the Debye screening length,
\modifiedRed{here} $\lambda_D/R=5$. Ultimately, it is the attractive Gay-Berne contribution
\modifiedRed{to the interaction (Eq.~(\ref{eq:Pairpot_GB})) which} dominates the \modifiedRed{outermost}
asymptotic behavior; \modifiedRed{for the system studied here} a rather large decay length
$\xi_\text{GB}/R\approx10$ is observed, which \modifiedRed{is due to} the truncated power law decay
of the GB potential. Close to the interface, the hard-core interaction, \modifiedRed{which leads to the
Parsons-Lee contribution to the DFT expression (Eq.~(\ref{eq:Eff1Pot_PL})), dominates} the profiles
$\eta(x,z)$ and $S_2(x,z)$. The corresponding decay length $\xi_\text{PL}/R\approx1.9$ is comparable to
the particle diameter $R$. \modifiedBlue{This is \modifiedGreen{plausible}, because for the case
considered here the tilt angle is $\alpha=\pi/2$, i.e., the smectic layer normal is perpendicular
to the interface normal, and thus the particles in the $S_A$ layers are oriented preferentially
perpendicular to the interface normal as well}.
Interestingly, the \modifiedGreen{crossovers} between these three different regimes occur at
distances \modifiedRed{characteristic} for the packing fraction $\eta(x,z)$ and the orientational
order parameter $S_2(x,z)$. While \modifiedRed{for both types of ILCs considered in
Fig.~\ref{fig:if_ilc_perp_asymp_liq_plane}} all three decay lengths $\xi_\text{PL}$, $\xi_\text{GB}$,
and $\lambda_D$ are apparent from $\ln|S_2(x,z)-S_{2,L}|$, \modifiedRed{from} $\ln|\eta(x,z)-\eta_L|$
only the decay length $\lambda_D$ \modifiedRed{can be inferred within} the considered range $z/R>-80$.
This \modifiedRed{situation} is caused by the relative magnitudes of the respective decay amplitudes:
for the packing fraction profile the decay amplitudes due to the Gay-Berne and the hard-core
interaction are too small, compared to the \modifiedRed{corresponding} amplitude due to the
electrostatic \modifiedRed{interaction}, to be observable.
\modifiedRed{Since the structural} and orientational properties directly at the interface
\modifiedRed{position} are determined by the hard-core interaction, i.e., the Parsons-Lee contribution
$\beta\psi_\text{PL}$ (Eq.~(\ref{eq:Eff1Pot_PL})), to the effective one-particle potential $\beta\psi$,
\modifiedRed{close to the interface} the profiles for ordinary liquid crystals (OLCs) and ILCs
\modifiedRed{with} the same length-to-breadth ratio $L/R$ are very similar. In particular, this includes
the interface positions $z_\eta$ and $z_{S_2}$ \modifiedRed{(see Fig.~\ref{fig:gibbs_parallel})}
associated with the transition in the \modifiedRed{structural and orientational order, respectively.}
Nevertheless the asymptotic behavior, as discussed above, is distinct for the different kinds of
particles (hard ellipsoids, OLCs, and ILCs) and shows a rich phenomenology, specifically for ILCs,
due to the crossovers between the distinct \modifiedRed{spatial} regimes corresponding to the
\modifiedRed{various} contributions to the pair potential. Additionally, the bulk phase behavior is
crucially affected by the type of particles, because the \modifiedRed{phase $S_{AW}$} is observed
only for the ILCs with charges at the tips.
Finally, the dependence of the structural and orientational properties of liquid-smectic interfaces
on the tilt angle $\alpha$ between the interface normal and the smectic layer normal has been discussed.
For the $L$-$S_A$-interface \modifiedRed{(see Fig.~\ref{fig:if_tension}(a))},
it turns out that the parallel orientation of the interface normal and \modifiedRed{of the} smectic
layer normal is the \modifiedRed{one \modifiedGreen{in} thermal equilibrium}, i.e., $\alpha_\text{eq}=0$.
The perpendicular orientation $\alpha=\pi/2$ is metastable. Interestingly, the increase in the
interfacial tension \modifiedRed{below} $\alpha=\pi/2$ is accompanied by an increase in the distance
$z_\eta-z_{S_2}$, suggesting that maintaining the local orientational order beyond the smectic layers
\modifiedRed{towards the isotropic liquid costs free energy}. Consistently, in the case
$\alpha_\text{eq}=0$, for which the orientational order vanishes \modifiedRed{directly} with the
\modifiedRed{disappearance} of the smectic layers, \modifiedRed{the cost of free energy for forming
the interface is lowest}. For the $L$-$S_{AW}$-interface \modifiedRed{(see Fig.~\ref{fig:if_tension}(b))}
again the equilibrium tilt angle $\alpha_\text{eq}=0$ corresponds to the parallel orientation of the
interface and smectic layer normal. However, \modifiedRed{in this case, around $\alpha=\pi/2$,}
the interfacial tension $\Gamma^*(\alpha)$ varies only weakly so that here the
perpendicular orientation is labile.
\MODIFIED{Additional contributions to the surface tensions might arise from elastic deformations
of the director field, i.e., spatial variations of the director $\vec{\hat n}:=\vec{\hat n}(\vec{r})$,
or deviations from a rotationally symmetric distribution of particle orientations around the director,
i.e., $f(\vec{r},\vec\omega)\neq f(\vec{r},\vec{\hat n}\cdot\vec\omega)$. These contributions are
neglected by our approach. Elastic effects can be considered \MODIFIEDtwo{through an} explicit dependence of
the free energy functional on the director field $\vec{\hat n}(\vec{r})$, \MODIFIEDtwo{i.e.,} via an elastic energy
contribution~\cite{DeGennes1974}. Alternatively, giving up the assumption of a rotationally
symmetric distribution of orientations around a particular axis (and thereby enforcing a prescribed
homogeneous director field) would also allow \MODIFIEDtwo{one} to study the deformations of the director
field. However, incorporating these effects would lead to a drastic increase \MODIFIEDtwo{of} the
computational effort.}
Lastly, \modifiedGreen{we emphasize} that \modifiedRed{although} here \modifiedRed{we have focused}
solely on free interfaces between coexisting bulk phases of \modifiedRed{ILCs}, the DFT framework in
Sec.~\ref{sec:theory:DFT} can be \modifiedRed{extended} to inhomogeneous systems of ILCs
exposed\modifiedRed{, e.g.,} to external fields or \modifiedRed{ILC-electrolytes} in contact with
an electrode.
\section{Introduction}\label{PaperIIIsec:Introduction}
The presence of extended X-ray emitting coronae associated with present-day massive disc galaxies is a generic prediction of cold dark matter (CDM) galaxy formation theory (e.g., \citealt{White91,Benson10}). The coronae serve as a reservoir of both diffuse, metal-poor gas accreted from the intergalactic medium (IGM), and metal-rich gas that is either ejected from galaxies by energetic feedback or stripped from infalling satellites. Two mechanisms are thought to be chiefly responsible for establishing the X-ray emission properties of coronae: feedback injected by stellar wind/supernovae (SNe) (e.g., \citealt{Strickland00a}) and active galactic nuclei (AGN), and the shock heating and adiabatic compression of gas accreted from the IGM (e.g., \citealt{White78}).
Although galactic coronae have been detected around many disc galaxies, their origin remains uncertain. For example, the coronal luminosity is observed to correlate well with many tracers of feedback activity [e.g., the infrared (IR) or radio luminosities, \citealt{Strickland04b,Grimes05,Tullmann06b,Li08,Li13b}], and the heavy element abundance of the X-ray luminous gas is also indicative of enrichment by SNe (e.g., \citealt{Martin02,Ji09,Li09,Bregman13,Li13b}). These are often regarded as direct evidence for the feedback scenario (but see \citealt{Crain13} for an alternative interpretation). Furthermore, non-gravitational heating is also indicated by the change of slope of the scaling relations between the coronal luminosity/temperature and the galaxy mass in different mass ranges (e.g., \citealt{Ponman99,Ponman03,OSullivan03}). The uncertainty is driven by the difficulty of detecting and characterizing the diffuse thermal X-ray emission at large galactocentric radii (e.g., \citealt{Benson00,Rasmussen09}). In recent years, with either stacking of large numbers of X-ray observations of various types of galaxies \citep{Anderson13} or moderately deep single-pointing X-ray observations of massive disc galaxies \citep{Anderson11,Dai12,Bogdan13a,Bogdan13b}, X-ray emission from the putative coronae on large scales (at radii up to $\sim50\rm~kpc$) has been reported around several galaxies that are not particularly active in star formation (SF). These coronae are considered to be most likely produced by the accretion of intergalactic gas. However, such detections are still rare and only concern galaxies within a narrow range of stellar mass (see Table~\ref{table:GalaxyPara}). Comparison with X-ray measurements of galaxies in a broad mass range is thus particularly desirable.
In our previous studies \citep{Li13a,Li13b} (hereafter, Papers~I and II), we investigated the galactic coronae associated with a sample of \emph{Chandra}-observed nearby highly-inclined disc galaxies. We found a tight correlation between the coronal luminosity and the SN energy input rate; the abundance ratio of the coronal gas is also consistent with a combined contribution from core-collapse and Type~Ia SNe. These results can naively be interpreted as qualitative evidence that galactic coronae are mainly established by SN feedback. However, our sample only includes low and intermediate mass galaxies (typically with stellar mass $M_*\lesssim 2\times10^{11}\rm~M_\odot$), which may not be massive enough for accreted gas to contribute a dominant fraction of the detected X-ray flux. In fact, recent cosmological hydrodynamical simulations invoking both accretion and feedback predict a coronal luminosity range that is broadly consistent with X-ray observations \citep{Crain10a}. These simulations trace the dynamics of the coronal gas, a large fraction of which is quasi-hydrostatic or even infalling. \citet{Crain13} further showed that the high ($\sim$solar) metallicities inferred from the X-ray spectroscopy are not in conflict with models in which the bulk (by mass) of the hot circumgalactic medium (CGM) is established by accretion. The soft X-ray emission is dominated by collisionally-excited metal ions deposited by feedback, whilst the majority of the accreted gas is metal poor and is comparatively radiatively inefficient. The hot CGM thus appears to be typically metal-poor in a mass-weighted sense, but its X-ray luminosity-weighted metallicity is often close to solar. Therefore, accretion of intergalactic gas may still be important, especially in those massive galaxies not included in our \emph{Chandra} sample. In Paper~II, we also compared our measurements to X-ray observations of massive elliptical galaxies, which have significantly steeper $L_X-M_*$ relations.
The formation of such galaxies may, however, be subject to more violent mergers and the corresponding starbursting processes, potentially complicating the interpretation of observations of their coronae. Furthermore, the typically rich, clustered environment of massive elliptical galaxies also potentially contaminates the measurement of coronal properties associated with individual galaxies. Therefore, comparisons with X-ray measurements of elliptical galaxies do not offer the cleanest means by which to place direct constraints on disc galaxy formation models.
We herein conduct a quantitative comparison of our X-ray measurements of disc galaxies to the X-ray measurements of several isolated massive disc galaxies from the literature (\S\ref{PaperIIIsubsec:MassiveGalaxy}), as well as the results of hydrodynamical simulations invoking both accretion and feedback (i.e., the GIMIC simulations, see \S\ref{PaperIIIsubsec:Simulations}), in order to further explore the origin of coronae around disc galaxies and place direct constraints on galaxy formation models. This paper is organized as follows. In \S\ref{PaperIIIsec:data}, we describe our procedure for correcting coronal luminosity measurements to ensure data homogenization between the galaxies selected from our original \emph{Chandra} sample, galaxies from the literature, and the GIMIC simulations. We compare the observational results to an analytical model as well as the GIMIC simulations in \S\ref{PaperIIIsec:Comparison}, and further discuss the scientific implications in \S\ref{PaperIIIsec:Discussion}. Our results and conclusions are summarized in \S\ref{PaperIIIsec:Summary}.
\section{Data Homogenization of Observations and Simulations}\label{PaperIIIsec:data}
In order to compare measurements from different observations and simulations in a uniform fashion, corrections to the X-ray measurements are necessary to ensure data homogenization. We aim to compare coronal luminosities measured in a consistent manner, i.e., in the same band and within the same radial range.
\subsection{X-ray Measurements from the Chandra Sample}\label{PaperIIIsubsec:OurSample}
Here, we consider only field galaxies from our original \emph{Chandra} sample (Paper~I), because X-ray measurements of the clustered galaxies are potentially contaminated by the intracluster medium, and also because the massive disc galaxies whose measurements we collate from the literature (\S\ref{PaperIIIsubsec:MassiveGalaxy}) and the selected galaxies from the GIMIC simulations (\S\ref{PaperIIIsubsec:Simulations}) are in the field. We also select galaxies with published rotation velocity measurements (inclination-corrected, $v_{rot}$, as listed in Table~\ref{table:GalaxyPara}), which are used for estimating the halo masses and radii (see below). One galaxy, NGC~3384, has extremely low $v_{rot}$ ($\sim17\rm~km~s^{-1}$); this is likely a result of interaction with its companions, and hence an inaccurate reflection of the depth of its gravitational potential (Paper~I). This galaxy is thus excluded from the current sample. In total, 30 galaxies are selected and their key parameters are summarized in Table~\ref{table:GalaxyPara}.
\begin{deluxetable}{lcccccc}
\centering
\tiny
\tabletypesize{\tiny}
\tablecaption{Parameters of Galaxies}
\tablewidth{0pt}
\tablehead{
\colhead{Name} & \colhead{$\log M_*$} & \colhead{SFR} & \colhead{$v_{rot}$} & \colhead{$\log M_{200}$} & \colhead{$r_{200}$} & \colhead{$\log L_X$} \\
& ($\rm M_\odot$) & ($\rm M_\odot yr^{-1}$) & ($\rm km~s^{-1}$) & ($\rm M_\odot$) & (kpc) & ($\rm erg/s$)
}
\startdata
IC2560 & 10.03 & $2.05\pm0.32$ & $196\pm3$ & $12.10\pm0.02$ & 225 & $39.70\pm0.03$ \\
M82 & 10.30 & $7.70\pm0.46$ & $100\pm10$ & $11.16\pm0.14$ & 108 & $39.56\pm0.002$ \\
NGC0024 & 9.18 & $0.11_{-0.06}^{+0.01}$ & $93\pm1$ & $11.07\pm0.02$ & 101 & $37.81\pm0.08$ \\
NGC0520 & 10.57 & $11.74\pm1.60$ & $72\pm2$ & $10.70\pm0.05$ & 76 & $39.11_{-0.14}^{+0.08}$ \\
NGC0660 & 10.47 & $7.13\pm0.90$ & $140\pm3$ & $11.64\pm0.03$ & 157 & $38.35_{-0.08}^{+0.11}$ \\
NGC0891 & 10.69 & $2.46\pm0.34$ & $212\pm5$ & $12.22\pm0.03$ & 245 & $38.83\pm0.01$ \\
NGC1023 & 10.83 & - & $112\pm5$ & $11.33\pm0.07$ & 124 & $37.84_{-0.11}^{+0.09}$ \\
NGC1482 & 10.36 & $6.53\pm0.78$ & $121\pm8$ & $11.43\pm0.10$ & 133 & $39.33\pm0.04$ \\
NGC1808 & 10.58 & $7.78\pm0.62$ & $122\pm5$ & $11.44\pm0.06$ & 135 & $38.50_{-0.05}^{+0.04}$ \\
NGC2787 & 10.52 & $0.26_{-0.23}^{+0.003}$ & $181\pm13$ & $12.00\pm0.10$ & 207 & $36.86_{-0.33}^{+0.45}$ \\
NGC2841 & 10.99 & $0.49_{-0.14}^{+0.05}$ & $318\pm9$ & $12.79\pm0.04$ & 379 & $38.01_{-0.07}^{+0.06}$ \\
NGC3079 & 10.47 & $6.06\pm0.58$ & $210\pm5$ & $12.20\pm0.03$ & 242 & $39.63\pm0.02$ \\
NGC3115 & 10.83 & - & $107\pm5$ & $11.26\pm0.07$ & 117 & $36.21_{-0.20}^{+0.13}$ \\
NGC3198 & 10.02 & $0.62_{-0.12}^{+0.06}$ & $148\pm4$ & $11.71\pm0.04$ & 166 & $38.80\pm0.05$ \\
NGC3521 & 10.85 & $2.13_{-0.40}^{+0.24}$ & $233\pm6$ & $12.35\pm0.04$ & 271 & $38.87\pm0.03$ \\
NGC3556 & 10.21 & $1.57\pm0.15$ & $153\pm2$ & $11.76\pm0.03$ & 172 & $38.36_{-0.09}^{+0.08}$ \\
NGC3628 & 10.83 & $4.83\pm0.54$ & $215\pm3$ & $12.23\pm0.02$ & 248 & $39.47_{-0.04}^{+0.03}$ \\
NGC3955 & 10.11 & $2.08\pm0.27$ & 86 & 10.95 & 93 & $38.65_{-0.25}^{+0.15}$ \\
NGC4244 & 8.95 & $0.02_{-0.01}^{+0.001}$ & $89\pm2$ & $11.00\pm0.03$ & 96 & $37.94\pm0.05$ \\
NGC4594 & 11.19 & $0.28_{-0.10}^{+0.03}$ & $358\pm10$ & $12.95\pm0.04$ & 430 & $38.92_{-0.03}^{+0.02}$ \\
NGC4631 & 10.01 & $1.67\pm0.20$ & $138\pm3$ & $11.62\pm0.04$ & 155 & $39.10\pm0.02$ \\
NGC4666 & 10.61 & $4.04\pm0.45$ & $193\pm2$ & $12.08\pm0.02$ & 221 & $39.08_{-0.19}^{+0.07}$ \\
NGC5102 & 9.18 & $0.01_{-0.005}^{+0.0005}$ & 90 & 11.01 & 97 & $36.75\pm0.09$ \\
NGC5170 & 10.67 & $0.56_{-0.29}^{+0.04}$ & 245 & 12.41 & 285 & $38.88\pm0.11$ \\
NGC5253 & 8.69 & $0.34\pm0.03$ & $38\pm1$ & $9.81\pm0.06$ & 38 & $37.64\pm0.03$ \\
NGC6503 & 9.50 & $0.15\pm0.01$ & $66\pm1$ & $10.60\pm0.04$ & 70 & $37.48_{-0.06}^{+0.05}$ \\
NGC6764 & 10.00 & $2.73\pm0.25$ & $139\pm3$ & $11.63\pm0.04$ & 156 & $38.05_{-0.18}^{+0.13}$ \\
NGC7090 & 9.17 & $0.14_{-0.03}^{+0.01}$ & 102 & 11.19 & 111 & $36.40_{-0.38}^{+0.32}$ \\
NGC7582 & 10.83 & $12.84\pm1.69$ & $194\pm3$ & $12.10\pm0.02$ & 223 & $38.49_{-0.08}^{+0.07}$ \\
NGC7814 & 10.85 & - & $230\pm8$ & $12.33\pm0.05$ & 268 & $37.95\pm0.07$ \\
\hline
NGC266 & 11.30 & 2.4 & $868\pm50$ & 12.90 & 410 & $40.47\pm0.40$ \\
NGC1961 & 11.62 & 15.5 & $447\pm14$ & 13.08 & 470 & $40.70\pm1.81$ \\
NGC6753 & 11.51 & 11.8 & $395\pm21$ & 13.00 & 440 & $40.88\pm1.42$ \\
UGC12591 & 11.91 & 4.8 & $488\pm12$ & 13.38 & 601 & $40.53\pm0.36$
\enddata
\tablecomments{\scriptsize Parameters of galaxies from our \emph{Chandra} sample (Paper~I; those above the solid line) and the literature (those below the solid line): $M_*$ is the stellar mass estimated using the K-band luminosity from the 2MASS extended source catalogue \citep{Skrutskie06}; SFR is the star formation rate estimated in the same way as Paper~I using the \emph{IRAS} data; $v_{rot}$ is the maximum rotation velocity corrected for inclination, obtained from the HyperLeda database (http://leda.univ-lyon1.fr/); $M_{200}$ is the dark matter halo mass defined as the mass within a sphere with a mean density of $200\rho_{crit}$ (with a radius of $r_{200}$), where $\rho_{crit}$ is the critical density of the Universe. The extinction-corrected coronal luminosity ($L_X$) is measured in 0.5-2~keV and within $0.01-0.1~r_{200}$, as detailed in the text.
}\label{table:GalaxyPara}
\end{deluxetable}
We first estimate the virial mass and radius of each dark matter halo. We represent these quantities by $M_{200}$ and $r_{200}$, respectively: the total mass and radius of the halo within which the mean density is $200\rho_{crit}$, where $\rho_{crit}$ is the critical density of the Universe [we use the Hubble constant from the \emph{WMAP} nine-year data \citep{Bennett13} to calculate $\rho_{crit}$]. We estimate $M_{200}$ using the scaling relation applicable to dark matter halos from \citet{Navarro97} ($M_{200}\propto v_{rot}^{3.23}$), and compute $r_{200}$ from the definition of $M_{200}$: $M_{200}=200\rho_{crit}\frac{4\pi}{3}r_{200}^3$. The resulting $M_{200}$ and $r_{200}$ of the sample galaxies are listed in Table~\ref{table:GalaxyPara}.
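These halo mass and radius estimates can be sketched as follows. The normalization of the $M_{200}\propto v_{rot}^{3.23}$ scaling used below is a hypothetical value calibrated against the entries of Table~\ref{table:GalaxyPara}, not a constant quoted in the text, and the Hubble constant is the \emph{WMAP} nine-year value ($\sim69.3\rm~km~s^{-1}~Mpc^{-1}$):

```python
import math

# Cosmology and unit constants (SI).
H0_SI = 69.3 * 1.0e3 / 3.086e22                      # WMAP9 H0, in s^-1
G_SI = 6.674e-11                                     # m^3 kg^-1 s^-2
M_SUN = 1.989e30                                     # kg
KPC = 3.086e19                                       # m
RHO_CRIT = 3.0 * H0_SI**2 / (8.0 * math.pi * G_SI)   # kg m^-3

def log_m200_from_vrot(v_rot_kms, log_norm=4.71):
    """log10(M200/Msun) from the NFW-like scaling M200 ~ v_rot^3.23.

    log_norm is a hypothetical normalization calibrated here against the
    Table 1 entries (e.g., NGC 891: 212 km/s -> log M200 = 12.22).
    """
    return 3.23 * math.log10(v_rot_kms) + log_norm

def r200_kpc(log_m200):
    """r200 from the definition M200 = 200 * rho_crit * (4 pi / 3) * r200^3."""
    m = 10.0**log_m200 * M_SUN
    r_cubed = m / (200.0 * RHO_CRIT * 4.0 * math.pi / 3.0)
    return r_cubed**(1.0 / 3.0) / KPC
```

With these definitions, NGC~891 ($v_{rot}=212\rm~km~s^{-1}$) yields $\log M_{200}\simeq12.22$ and $r_{200}\simeq245$~kpc, matching Table~\ref{table:GalaxyPara}.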
For consistency, we measure the coronal properties within a radial range that scales with the galaxy mass or $r_{200}$. The lower limit of the range should be large enough to exclude the X-ray emission from the galactic disc, while the upper limit should match or not significantly exceed the scale accessed by the \emph{Chandra} observations. For the typical coronal size of our \emph{Chandra} sample (Paper~I), we choose this range to be $0.01-0.1r_{200}$. This range also covers a significant fraction of the X-ray emission of coronae in hydrodynamical simulations (e.g., \citealt{Crain10a,Crain13}).
We renormalize the coronal luminosity, originally measured within a typical vertical range of $\pm 5h_{exp}$ ($h_{exp}$ is the exponential scale height of the 0.5-1.5~keV diffuse X-ray intensity profile) and radial range of $\pm D_{25}/4$ ($D_{25}$ is the B-band diameter of the projected major axis at the isophotal level of $25\rm~mag~arcsec^{-2}$), by extrapolating the intensity profile with an exponential model characterized by the diffuse X-ray vertical and radial scale heights of the sample galaxies (Paper~I). The exponential model used to fit the intensity profiles may inaccurately describe the X-ray intensity distribution at large radii, but the choice of $0.1r_{200}$ as the upper limit of the radial range (typically not exceeding $\pm 5h_{exp}$ and $\pm D_{25}/4$) minimizes the uncertainty resulting from inaccurate exponential fits. The renormalized coronal luminosities of the sample galaxies are listed in Table~\ref{table:GalaxyPara}, with the original statistical errors from the coronal luminosity measurements (Paper~I).
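A minimal sketch of this renormalization, assuming a separable exponential intensity model; the actual procedure of Paper~I fits the measured vertical and radial profiles, so the functions and parameters below are illustrative:

```python
import math

def enclosed_fraction(limit, scale):
    """Fraction of a 1D exponential profile exp(-|x|/h) within |x| < limit."""
    return 1.0 - math.exp(-limit / scale)

def renormalize_lx(lx_old, z_old, r_old, z_new, r_new, h_z, h_r):
    """Rescale a luminosity measured within (|z| < z_old, |r| < r_old) to the
    ranges (|z| < z_new, |r| < r_new), assuming a separable exponential
    intensity model with vertical and radial scale heights h_z and h_r.
    A sketch of the extrapolation idea, not the exact Paper I procedure."""
    f_old = enclosed_fraction(z_old, h_z) * enclosed_fraction(r_old, h_r)
    f_new = enclosed_fraction(z_new, h_z) * enclosed_fraction(r_new, h_r)
    return lx_old * f_new / f_old
```

Because an exponential profile converges rapidly, extrapolating to a modestly larger aperture changes the enclosed luminosity only mildly, which is why the choice of $0.1r_{200}$ as the outer limit keeps the extrapolation uncertainty small.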
\subsection{Massive Disc Galaxies from the Literature}\label{PaperIIIsubsec:MassiveGalaxy}
Our \emph{Chandra} survey does not include galaxies with $M_*\gtrsim 2\times10^{11}\rm~M_\odot$. These massive disc galaxies are rare in nature, and consequently no such galaxy satisfies the selection criteria established in Paper~I. Several recent studies, however, report the detection of large-scale diffuse X-ray emission associated with massive disc galaxies in the local Universe (e.g., \citealt{Anderson11} for \emph{Chandra} observations of NGC~1961, \citealt{Dai12} for \emph{XMM-Newton} observations of UGC~12591, \citealt{Bogdan13a} for \emph{XMM-Newton} observations of NGC~1961 and NGC~6753, and \citealt{Bogdan13b} for \emph{ROSAT} and \emph{Chandra} observations of NGC~266). We supplement our sample with these measurements, applying appropriate corrections to their coronal luminosities to ensure a uniform comparison.
$M_{200}$ and $r_{200}$ of NGC~1961, NGC~6753, and UGC~12591 are directly taken from the above-cited references or estimated from the inclination-corrected rotation velocities, in the same way as for our \emph{Chandra} sample galaxies (\S\ref{PaperIIIsubsec:OurSample}). However, the inclination angle of NGC~266 ($\sim14.5^\circ$) is too low to enable a reliable estimate of the rotation velocity (the apparent maximum rotation velocity of gas is $\sim218\rm~km~s^{-1}$, cf.\ the inclination-corrected value of $\sim868\rm~km~s^{-1}$). We therefore use the halo mass estimated from the stellar mass of the galaxy, as adopted in \citet{Bogdan13b}.
For NGC~266, NGC~1961 (for which the results from \citealt{Bogdan13a} are used throughout the present paper), and NGC~6753, the coronal luminosities are measured in the 0.5-2~keV band, the same as for our \emph{Chandra} sample. For UGC~12591, however, the original coronal luminosity from \citet{Dai12} is measured in the 0.6-1.4~keV band. We therefore adopt the spectral model described in \citet{Dai12} ($kT=0.64\rm~keV$, 0.5 solar abundance, and Galactic foreground absorption column density) to estimate the luminosity in the 0.5-2~keV band; this correction factor is only of order $\sim10\%$. We also adopt a redshift-independent distance of $134\rm~Mpc$ to UGC~12591 (from NED) instead of the fiducial 100~Mpc distance taken in \citet{Dai12}. Relevant galaxy parameters ($M_*$, SFR, and $L_X$) are also corrected for this revised distance estimate.
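The distance correction can be illustrated directly; the $\sim10\%$ band correction, by contrast, requires folding the quoted spectral model through a plasma emission code and is not reproduced here:

```python
def distance_corrected_luminosity(lx_old, d_old_mpc, d_new_mpc):
    """Rescale a luminosity for a revised distance: the measured flux is
    fixed, so the inferred luminosity scales as the distance squared."""
    return lx_old * (d_new_mpc / d_old_mpc) ** 2

# Revising the UGC 12591 distance from 100 Mpc to 134 Mpc increases the
# inferred luminosity (and stellar mass and SFR) by (134/100)^2 ~ 1.8.
factor = distance_corrected_luminosity(1.0, 100.0, 134.0)
```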
The largest correction factor is for the radial range adopted in measuring the coronal luminosity. According to the cited references, this range is $0.05-0.15r_{200}$ for NGC~266 \citep{Bogdan13b}, NGC~1961, and NGC~6753 \citep{Bogdan13a}, but is within 50~kpc for UGC~12591 \citep{Dai12}. We use the model ($\beta$-model or modified $\beta$-model) adopted in fitting the radial intensity profiles, as described in the references, to renormalize the coronal luminosity to our adopted $0.01-0.1r_{200}$ range. However, the published \emph{ROSAT} and \emph{Chandra} observations of NGC~266 are insufficiently deep for such a radial profile analysis. We therefore use the ratio of the count rates within the $0.01-0.1r_{200}$ and $0.05-0.15r_{200}$ ranges (after subtracting the contribution from the scattered photons of the central low-luminosity AGN) to apply a rough correction (\'{A}kos Bogd\'{a}n, private communication). We also subtract the estimated contributions from stellar sources [bright low-mass X-ray binaries (LMXBs), \citealt{Gilfanov04}; faint LMXBs, cataclysmic variables (CVs) and coronal active binaries (ABs), \citealt{Revnivtsev08}; high-mass X-ray binaries (HMXBs), \citealt{Mineo12}] within $0.01-0.1r_{200}$ of NGC~266, NGC~1961, and NGC~6753, by scaling from their stellar masses and SFRs. These stellar source contributions are not accounted for in the original references. The final corrected coronal luminosities are listed in Table~\ref{table:GalaxyPara}.
\subsection{Numerical Simulations}\label{PaperIIIsubsec:Simulations}
We use the results from the Galaxies-Intergalactic Medium Interaction Calculation (GIMIC; \citealt{Crain09}), a suite of hydrodynamic resimulations of regions drawn from the Millennium simulation \citep{Springel05b}. The GIMIC simulations are performed with a variant of the TreePM-SPH code GADGET3, a substantial upgrade of GADGET2 \citep{Springel05a}. GIMIC follows the evolution of five representative, roughly spherical regions with different overdensities drawn from the dark matter Millennium simulation \citep{Springel05b}. The GIMIC simulations have intermediate ($m_{gas}=1.16\times10^7 h^{-1}\rm M_\odot$) and high ($m_{gas}=1.45\times10^6 h^{-1}\rm M_\odot$) resolution, compared to the ``low'' resolution of the original Millennium simulation, in which the collisionless particles, representing a composite of baryonic and dark matter, have a mass of $8.6\times10^8 h^{-1}\rm M_\odot$. In the present paper, we use the intermediate-resolution simulations, all of which are run to $z=0$. \citet{Crain10a,Crain13} demonstrate that the properties explored here are numerically converged in the regimes of interest, and that the resolution is sufficient to enable morphological classification of $\sim L^\star$ galaxies.
Metals typically dominate the specific emissivity of astrophysical plasmas (e.g., \citealt{Smith01,Wiersma09}). Therefore, the accurate prediction of coronal X-ray luminosities requires that simulations, besides modelling the gravitational and thermodynamical evolution of gas, must track the nucleosynthesis of metal species by stellar populations, and the macroscopic transport of this material throughout the cosmological growth and assembly of galaxies (and groups and clusters of galaxies). These processes are modelled in GIMIC, and have been shown to play a key role in establishing the amplitude and radial profile of the luminosity and metallicity of coronae \citep{Crain13}. One shortcoming of GIMIC, however, is that it does not model the evolution of black holes or the feedback effects associated with them. Therefore, we caution that the stellar mass and the radiative cooling (and hence the coronal X-ray luminosity) of the most massive galaxies in the simulated sample (where AGN feedback is expected to be non-negligible) may be inadequately modelled (see \S\ref{PaperIIIsubsec:AGNfeedback} for further discussion).
We adopt the same disc galaxy sample as \citet{Crain10a}, selecting central galaxies within friends-of-friends (FoF) halos, and focussing on $\sim L^\star$ disc galaxies by requiring a disc-to-total stellar mass ratio $D/T>0.3$ and a stellar mass in the range $10^{10}<M_*<10^{11.7}\rm M_\odot$. \citet{Crain10a} considered only isolated systems by excluding galaxies that are interacting or are members of galaxy groups and clusters. As per \citet{Crain10a}, the X-ray luminosity of a corona is computed by summing the luminosities of all gas particles bound to the corresponding subhalo. The cooling function is computed using the Astrophysical Plasma Emission Code (APEC, v1.3.1, \citealt{Smith01}) under the assumption that the gas is in collisional ionization equilibrium. As is generally the case for cosmological simulations, the adopted resolution is insufficient to model the formation of individual stars within a multi-phase interstellar medium. The simulations therefore impose a polytropic equation of state on the high-density ($n_H>0.1\rm~cm^{-3}$) gas that is subject to thermo-gravitational instability (e.g. \citealt{Schaye04,Schaye08}). This gas (mostly in the disc) is assigned a temperature that is below the minimum considered by the APEC cooling tables; therefore, by construction, the computed luminosity comprises only extraplanar emission. We herein calculate the coronal luminosity within $0.01-0.1r_{200}$, which is dominated by hot gas and less affected by the multi-phase gas in the disc. To renormalize the original coronal X-ray luminosities from \citet{Crain10a}, which were measured for the entire halo, to the $0.01-0.1r_{200}$ range, we compute the average stacked radial intensity profiles in five halo mass bins [$\log (M_{200}/M_\odot)=11.75-12.00, 12.00-12.25, 12.25-12.50, 12.50-12.75, 12.75-13.00$], and then calculate the luminosity fraction enclosed within $0.01-0.1r_{200}$. This fraction is typically $\sim(20-50)\%$ of the total coronal luminosity.
We then use this enclosed fraction to compute the 0.5-2~keV luminosity within $0.01-0.1r_{200}$ from the original total coronal luminosity. The SFR (stellar mass) of a simulated galaxy is estimated by summing the SFRs (stellar masses) of all gas (star) particles bound to the galaxy's subhalo. The rotation velocity, $v_{rot}$, is obtained from the maximum of the halo circular velocity curve, assuming spherical symmetry.
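The $v_{rot}$ definition for the simulated galaxies can be sketched as the maximum of the spherically averaged circular velocity curve; the particle-handling details below are illustrative assumptions, not the actual GIMIC analysis code:

```python
import math

G_KPC = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_rot_from_halo(radii_kpc, masses_msun):
    """Maximum of the circular velocity curve v_c(r) = sqrt(G*M(<r)/r),
    built from particle radii (kpc) and masses (Msun) under spherical
    symmetry (a sketch of the v_rot definition used for GIMIC galaxies)."""
    pairs = sorted(zip(radii_kpc, masses_msun))
    m_enc = 0.0
    v_max = 0.0
    for r, m in pairs:
        m_enc += m
        if r > 0.0:
            v_max = max(v_max, math.sqrt(G_KPC * m_enc / r))
    return v_max
```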
\section{Comparison between Observation and Theory}\label{PaperIIIsec:Comparison}
\subsection{Analytical model involving only gravitational heating}\label{PaperIIIsubsec:analytical}
Before comparing our X-ray measurements to results from GIMIC, it is instructive to consider a simple analytical model involving only the gravitational energy release of the accreted gas (i.e., no feedback) as a reference for the comparison. The inaccuracy of this simple accretion-only model \citep{Benson00}, as compared to GIMIC, was highlighted in \citet{Crain10a}. In this subsection, we adopt the uniformly corrected coronal luminosities from Papers~I and II, without the further corrections described in \S\ref{PaperIIIsubsec:OurSample}, in order to include the X-ray emission from the galactic disc. Quantitative comparisons with X-ray measurements of massive disc galaxies and the results from GIMIC will be presented in the following sections.
\begin{figure}[!h]
\begin{center}
\epsfig{figure=fig01.eps,width=1.0\textwidth,angle=0, clip=}
\caption{Comparison of the X-ray measurements to the predictions from an analytical model as described in \S\ref{PaperIIIsubsec:analytical}. (a) The measured coronal luminosities of our \emph{Chandra} sample ($L_X$) plotted against the coronal luminosity expected from the accretion-only model ($L_{X,acc}$). The solid line shows the best-fit linear relation (Eq.~\ref{equi:AccretionModelObservation}), while the dashed line marks $L_X=L_{X,acc}$. All the X-ray measurements are obtained from our \emph{Chandra} observations (Papers~I and II) without the further corrections described in \S\ref{PaperIIIsubsec:OurSample}; different colors and symbols denote various star formation, environmental, and morphological subclasses of the galaxies as classified in Paper~I. (b) The radiative cooling rate of the X-ray emitting coronal gas ($\dot{M}_{cool}$) vs. the SFR of the galaxies. The solid line shows the best-fit linear relation (Eq.~\ref{equi:RcoolSFR}), while the dashed line shows where $\dot{M}_{cool}=\rm SFR$. Symbols are the same as those in panel~(a).}\label{fig:analyticalmodel}
\end{center}
\end{figure}
We estimate the expected coronal X-ray luminosity ($L_{X,acc}$), in the absence of any effects of feedback on the mass, structure and emissivity of the hot CGM, following the procedure detailed in \citet{Benson00}. Assuming the accretion rate $\dot{M}_{acc}$ can be connected to the SFR by the poorly constrained SF efficiency ($\zeta={\rm~SFR}/\dot{M}_{acc}$; e.g., \citealt{Martin06,Dave11}), $L_{X,acc}$ can be predicted with the observational parameters $v_{rot}$ and SFR as:
\begin{equation}\label{equi:ModelLX}
L_{X,acc} \sim 4.5\,{\rm SFR}\,v_{rot}^2/\zeta.
\end{equation}
Assuming $\zeta\sim1$, we then compare $L_{X,acc}$ to the observed X-ray luminosity from our measurements (Fig.~\ref{fig:analyticalmodel}a) and obtain a best-fit linear relation of (see \S3.1 of Paper~II for the method of fitting):
\begin{equation}\label{equi:AccretionModelObservation}
L_X=10^{-(1.63\pm0.08)}L_{X,acc}.
\end{equation}
This relation is only qualitative, showing the general correlation, and the large inconsistency, between the gravitational energy input and the X-ray luminosity. The coefficient is, clearly, far below unity ($\sim2\%$), consistent with conclusions reached previously (e.g., \citealt{White91,Benson00,Toft02}). Reducing $\zeta$ to a value significantly below unity (as suggested by many works; e.g., \citealt{Dave11}) further increases this discrepancy, while small changes to the coefficient in Eq.~\ref{equi:ModelLX} are insufficient to compensate for it (e.g., \citealt{White91} adopted a value of 2.5). The small photometric region of our sample galaxies also seems insufficient to explain this large discrepancy, because the inner region of a halo often dominates the X-ray emission owing to the high density and metallicity of the hot gas there \citep{Crain13}.
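For concreteness, Eq.~\ref{equi:ModelLX} can be evaluated numerically. The unit convention assumed here (SFR converted to $\rm g~s^{-1}$ and $v_{rot}$ to $\rm cm~s^{-1}$, so that $L_{X,acc}$ comes out in $\rm erg~s^{-1}$) is our assumption rather than one stated explicitly in the text:

```python
M_SUN_G = 1.989e33   # solar mass in g
YR_S = 3.156e7       # year in s
KM_CM = 1.0e5        # km in cm

def l_x_acc(sfr_msun_yr, v_rot_kms, zeta=1.0):
    """L_X,acc ~ 4.5 * SFR * v_rot^2 / zeta in erg/s, with SFR converted
    to g/s and v_rot to cm/s (an assumed unit convention; see lead-in)."""
    sfr_cgs = sfr_msun_yr * M_SUN_G / YR_S
    v_cgs = v_rot_kms * KM_CM
    return 4.5 * sfr_cgs * v_cgs ** 2 / zeta

# The best-fit relation then implies the observed coronal luminosity is
# only ~10**(-1.63), i.e. ~2%, of this prediction.
```

Under this convention, a galaxy with ${\rm SFR}=1\rm~M_\odot~yr^{-1}$ and $v_{rot}=200\rm~km~s^{-1}$ gives $L_{X,acc}\sim10^{41}\rm~erg~s^{-1}$, two orders of magnitude above typical observed coronal luminosities in Table~\ref{table:GalaxyPara}.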
We further highlight the oversimplicity of the pure accretion model by comparing the halo gas radiative cooling rate ($\dot{M}_{cool}$; Paper~I) to the SFR inferred from the IR luminosity. As shown in Fig.~\ref{fig:analyticalmodel}b, the $\dot{M}_{cool}-{\rm SFR}$ relation can be characterized with a linear function:
\begin{equation}\label{equi:RcoolSFR}
\dot{M}_{cool}=11.4\,(<29.5)\%~{\rm SFR},
\end{equation}
indicating that the radiative cooling of the X-ray emitting corona accounts for only a small fraction of the current SFR. This is a remarkably different situation from that observed in many massive elliptical galaxies, for which the radiative cooling rate is often far greater than the current SFR (e.g., \citealt{Mathews03}).
The above analytical model assumes that the disc SF is replenished only by the accretion of gas that has cooled via X-ray emission in the vicinity of galactic discs. In reality, other fueling mechanisms, such as accretion without cooling via soft X-ray emission, or the recycling of gas from existing stellar populations, are likely efficient enough to replenish the cool gas consumed in SF. In particular, \citet{Leitner11} found that the recycled gas from stellar mass loss can provide most or all of the fuel required to sustain the current level of SF in late-type galaxies. In addition, gas accretion is thought to be bimodal, with maximum past temperatures either of order the virial temperature or $\lesssim10^5\rm~K$ (e.g., \citealt{Keres05,vandeVoort11}). In general, cold-mode accretion (at a low temperature) is believed to dominate the growth of low-mass galaxies, with baryonic masses typically $\lesssim10^{10.3}\rm~M_\odot$ \citep{Keres05}. Furthermore, \citet{vandeVoort11} found that for accretion onto galaxies (as opposed to accretion onto halos), the cold mode is always significant, and the majority of stars present in halos of any mass at any redshift were formed from gas accreted in the cold mode.
The discrepancy between the X-ray radiative cooling rate and the current SFR (Eq.~\ref{equi:RcoolSFR}) apparently implies an SF efficiency $\zeta$ greater than unity, which further indicates the existence of additional gas sources as discussed above. Even if we account for these gas sources (i.e., by substituting Eq.~\ref{equi:RcoolSFR} into Eq.~\ref{equi:ModelLX}), Eq.~\ref{equi:ModelLX} still tends to over-predict the coronal luminosity by a factor of five. It is thus clear that the pure accretion model \emph{cannot} reproduce the observed X-ray luminosity, as previously pointed out by \citet{Crain10a} and others (e.g., \citealt{Benson00}).
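The factor-of-five over-prediction follows from simple arithmetic with Eqs.~\ref{equi:AccretionModelObservation} and \ref{equi:RcoolSFR}:

```python
# If the accretion rate equals the X-ray cooling rate,
# Mdot_acc = 0.114 * SFR, then zeta = SFR / Mdot_acc ~ 8.8 and the
# prediction of Eq. (ModelLX) shrinks by that same factor relative to
# zeta = 1, i.e. L_pred = 0.114 * L_X,acc(zeta = 1).
cooling_fraction = 0.114          # Mdot_cool = 11.4% of the SFR
observed_ratio = 10.0 ** (-1.63)  # best fit: L_X / L_X,acc(zeta = 1)

# Remaining over-prediction of the pure accretion model:
overprediction = cooling_fraction / observed_ratio  # ~ 5
```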
\subsection{Comparison between Observations and Simulations}\label{PaperIIIsubsec:compareGIMIC}
We next compare the X-ray measurements of our \emph{Chandra} sample and the massive disc galaxies from the literature (\S\ref{PaperIIIsubsec:MassiveGalaxy}) to the results from the GIMIC simulations, which explicitly model both the accretion of gas in a cosmological framework and the feedback that results from the formation and evolution of stars. Here we are careful to compare the coronal luminosity ($L_X$) of the observed and simulated galaxies within the same X-ray bandpass and the same radial aperture size (\S\ref{PaperIIIsec:data}). A quantitative and uniform characterization of the thermal and chemical states (temperature and metal abundance) of the coronal gas is significantly more complicated. Readers interested in such issues may refer to \citet{Crain13} or our discussions in Papers~I and II.
As shown in Fig.~\ref{fig:compareGIMIC}, $L_X$ of the GIMIC galaxies better matches the X-ray observations than the simple accretion model predictions of \S\ref{PaperIIIsubsec:analytical}. In fact, GIMIC reproduces not only the coronal X-ray emission in $0.01-0.1r_{200}$ of $L^\star$ galaxies, but the scatter in $L_X$ at a given $v_{rot}$ or $M_*$ as well (Fig.~\ref{fig:compareGIMIC}a,c). The substantial reduction of the coronal luminosity in the GIMIC simulations as compared to the pure accretion model is due to the mass reduction of hot gas within a dark matter halo, as a result of the selective cooling and dropout of low entropy gas (e.g., via SF) and galactic feedback \citep{Crain10a}.
\begin{figure}[!h]
\begin{center}
\epsfig{figure=fig02.eps,width=1.0\textwidth,angle=0, clip=}
\caption{Comparisons of the X-ray observations to the results from the GIMIC simulations. (a) $L_X$ vs. the inclination-corrected maximum gas rotation velocity $v_{rot}$. (b) $L_X$ vs. $M_{200}$. (c) $L_X$ vs. $M_*$. (d) $L_X$ vs. SFR. Black and red symbols are the same as those in Fig.~\ref{fig:analyticalmodel}; blue boxes denote the massive disc galaxies listed in Table~\ref{table:GalaxyPara}; green dots denote the simulated galaxies from GIMIC. The solid lines in panels~(b) and (c) are the linear relations fitted to the non-starburst field galaxies of our \emph{Chandra} sample (the black symbols), while that in panel~(d) is fitted to both the starburst and non-starburst galaxies of our \emph{Chandra} sample (the black and red symbols). The green lines in panels (b) and (d) are the power laws fitted to the GIMIC galaxies (in panel d, we only use the galaxies with ${\rm SFR}>0.5\rm~M_\odot~yr^{-1}$).}\label{fig:compareGIMIC}
\end{center}
\end{figure}
The observed and simulated galaxies generally overlap with each other in the $L_X-v_{rot}$ plane (Fig.~\ref{fig:compareGIMIC}a). The most significant outlier is NGC~266, for which the inclination correction of the rotation velocity is poorly constrained (\S\ref{PaperIIIsubsec:MassiveGalaxy}). Furthermore, starburst galaxies in our \emph{Chandra} sample (the red data points) also appear to be systematically more X-ray luminous (at a given $v_{rot}$) than non-starburst ones, as well as the GIMIC galaxies.
The dark matter halo dominates the potential on large scales in a galaxy. \citet{Kim13} have recently found a tighter correlation of the coronal luminosity with the total mass (than with the stellar mass) for a sample of early-type galaxies. This result is qualitatively consistent with the $L_X-v_{rot}$ correlation shown in Fig.~\ref{fig:compareGIMIC}a, although for different types of galaxies. This consistency indicates, as is clearly implied by GIMIC, that the halo mass may be a primary factor in the retention of hot gas in general. Therefore, we further convert the rotation velocity to the dark matter halo mass for each of our observed sample galaxies, using the $M_{200}-v_{rot}$ relation of \citet{Navarro97} (\S\ref{PaperIIIsec:data}) and plot $L_X$ vs. $M_{200}$ in Fig.~\ref{fig:compareGIMIC}b. This halo mass estimation differs from those using the stellar mass-halo mass relation (SMHMR) obtained from the abundance matching technique (e.g., \citealt{Behroozi10}), especially at the high mass end where the large halo mass in SMHMR is mostly for groups/clusters of galaxies (Fig.~\ref{fig:M200Scaling}a).
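As a rough, illustrative sketch of this velocity-to-halo-mass conversion (not the \citet{Navarro97} fit actually used here, but the cruder assumption $v_{rot}\approx v_{200}$, for which $M_{200}=v_{200}^3/(10\,G\,H_0)$ follows from the definition of $r_{200}$):

```python
# Illustrative only: assumes v_rot ~ v_200, a cruder approximation than
# the Navarro et al. (1997) relation actually used in the text.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
H0 = 70e3 / 3.086e22   # Hubble constant in s^-1 (70 km/s/Mpc assumed)
MSUN = 1.989e30        # solar mass in kg

def m200_from_vrot(v_rot_kms):
    """Halo mass (Msun) from rotation velocity, assuming v_rot ~ v_200.

    M_200 = v_200^3 / (10 G H0), from the definition of r_200 (mean
    enclosed density = 200 rho_crit) and v_200^2 = G M_200 / r_200.
    """
    v = v_rot_kms * 1e3                    # km/s -> m/s
    return v**3 / (10.0 * G * H0) / MSUN

# An L* disc with v_rot ~ 200 km/s maps to M_200 of a few 1e12 Msun.
print(f"{m200_from_vrot(200.0):.2e}")
```

Because $M_{200}\propto v^3$ in any such scaling, modest inclination errors in $v_{rot}$ (as for NGC~266) translate into large halo-mass uncertainties.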
Since the starburst galaxies in our \emph{Chandra} sample and the four massive disc galaxies from the literature appear to be significantly more X-ray luminous than the non-starburst galaxies in our sample, we fit only the low-mass non-starburst galaxies to obtain a baseline $L_X-M_{200}$ relation, which can be characterized as a linear function:
\begin{equation}\label{equi:LXM200nonstarburst}
\frac{L_X}{10^{38}{\rm ergs~s^{-1}}}=(1.8\pm0.8)\frac{M_{200}}{{\rm 10^{12}M_\odot}}.
\end{equation}
In contrast, the average $L_X/M_{200}$ values of starburst and massive disc galaxies are $23.9_{-8.3}^{+14.6}$ and $34.2_{-10.5}^{+11.9}\times10^{26}\rm~ergs~s^{-1}~M_\odot^{-1}$, or about 13 and 19 times higher than that predicted by Eq.~\ref{equi:LXM200nonstarburst}. The GIMIC galaxies, however, exhibit a much steeper $L_X-M_{200}$ relation, although they are broadly consistent with the observed $L_X-v_{rot}$ relation (Fig.~\ref{fig:compareGIMIC}a). The $L_X-M_{200}$ relation of the GIMIC galaxies not only has a much steeper slope ($4.12\pm0.05$), but also a much lower scatter ($rms=0.37\pm0.02\rm~dex$) than that of the observed galaxies ($rms=0.84\pm0.12\rm~dex$ for the low-mass non-starburst galaxies).
The $L_X-M_*$ correlation is poor for low-mass non-starburst galaxies, but can still be generally described with a linear function:
\begin{equation}\label{equi:LXMstarnonstarburst}
\frac{L_X}{10^{38}{\rm ergs~s^{-1}}}=(0.6\pm0.3)\frac{M_*}{{\rm 10^{10}M_\odot}}.
\end{equation}
Similarly, the average $L_X/M_*$ values of starburst and massive disc galaxies are much higher, i.e., $4.1_{-1.5}^{+2.4}$ and $11.5_{-3.4}^{+4.9}\times10^{28}\rm~ergs~s^{-1}~M_\odot^{-1}$, or about 7 and 19 times higher than that predicted by Eq.~\ref{equi:LXMstarnonstarburst}.
In contrast to the $L_X-M_{200}$ and $L_X-M_*$ relations, both the starburst and non-starburst galaxies in our \emph{Chandra} sample can be well described with an identical $L_X-{\rm SFR}$ relation (Fig.~\ref{fig:compareGIMIC}d), which can also be characterized with a linear function:
\begin{equation}\label{equi:LXSFR}
\frac{L_X}{10^{38}{\rm ergs~s^{-1}}}=(2.1\pm0.9)\frac{{\rm SFR}}{{\rm M_\odot~yr^{-1}}}.
\end{equation}
The coefficient here is only $\sim15\%$ of that for the $L_X-{\rm SFR}$ relation of the entire corona measured by our \emph{Chandra} observations (including the contribution from the inner region; Paper~II), but the $L_X-{\rm SFR}$ correlation is still good [Spearman's rank order coefficient (Paper~II) $r_s=0.62\pm0.13$]. The massive disc galaxies again show a large departure from the fitted relation, although they have SFRs comparable to the low-mass starburst galaxies. The observed galaxies with low SFRs (e.g., $\lesssim0.2\rm~M_\odot~yr^{-1}$) are typically dwarfs, which are excluded from the GIMIC sample. Besides these dwarf galaxies, the GIMIC galaxies still show significant differences from the observed sample in the $L_X-{\rm SFR}$ plane, i.e., the fitted $L_X-{\rm SFR}$ index of the GIMIC galaxies ($2.1\pm0.1$; for galaxies with ${\rm SFR}>0.5\rm~M_\odot~yr^{-1}$; the green solid line in Fig.~\ref{fig:compareGIMIC}d) is clearly larger than that of the observed sample (linear, Eq.~\ref{equi:LXSFR}). Furthermore, there are also some GIMIC galaxies with high coronal luminosities ($L_X\sim 10^{40}\rm~ergs~s^{-1}$) but just moderate SFRs ($\sim0.3\rm~M_\odot~yr^{-1}$), which are not seen in our observed galaxy sample. This is potentially an indication that massive galaxies are not being adequately quenched by feedback in the simulations.
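For quick consistency checks, the three linear relations fitted above can be collected in a small helper (the coefficients are the best-fit values quoted in the text, uncertainties omitted); the example reproduces the quoted factor of $\sim$13 by which starburst galaxies exceed the $L_X-M_{200}$ baseline:

```python
# Best-fit linear relations quoted above (L_X in ergs/s; errors omitted).
def lx_from_m200(m200_msun):
    """Baseline L_X-M_200 relation for low-mass non-starburst galaxies."""
    return 1.8e38 * (m200_msun / 1e12)

def lx_from_mstar(mstar_msun):
    """Baseline L_X-M_* relation for low-mass non-starburst galaxies."""
    return 0.6e38 * (mstar_msun / 1e10)

def lx_from_sfr(sfr_msun_yr):
    """L_X-SFR relation fitted to starburst and non-starburst galaxies."""
    return 2.1e38 * sfr_msun_yr

# Quoted mean L_X/M_200 of starbursts (23.9e26 ergs/s/Msun) over the
# baseline ratio 1.8e38 / 1e12 = 1.8e26: a factor of ~13, as in the text.
excess = 23.9e26 / (1.8e38 / 1e12)
print(round(excess, 1))   # prints 13.3
```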
\section{Discussion}\label{PaperIIIsec:Discussion}
\subsection{Effects of Missing AGN Feedback}\label{PaperIIIsubsec:AGNfeedback}
As introduced in \S\ref{PaperIIIsubsec:Simulations}, the GIMIC simulations do not include AGN feedback. The absence of AGN feedback in GIMIC may result in the over-prediction of the number density of \emph{massive} disc galaxies \citep{Crain09,Crain10a}. The detailed operation of AGN feedback on galaxy-wide scales remains ill-understood, but is expected to be important in galaxies with massive spheroids, where the energy output could heat and eject gas, reducing the emissivity in the central regions (e.g., \citealt{McCarthy10,McNamara12} and references therein). We have adopted a high-mass cut of $M_*\sim10^{11.7}\rm~M_\odot$ to the GIMIC galaxies to minimize the effect of missing AGN feedback (\S\ref{PaperIIIsubsec:Simulations}). Generally, this AGN effect is less important than Type~Ia SNe feedback for galaxies with $M_*\lesssim10^{11}\rm~M_\odot$ (e.g., \citealt{David06}), and should not qualitatively affect the comparison between observation and simulation in the mass range of our \emph{Chandra} sample. For galaxies with $M_*\gtrsim10^{11}\rm~M_\odot$, however, as shown in Fig.~\ref{fig:M200Scaling}a, the stellar masses at fixed $M_{200}$ are inconsistent with estimates derived from observational techniques \citep{Leauthaud12a}. This over-prediction of $M_*$ is most likely a result of the enhanced radiative cooling and SF in these massive galaxies due to the absence of AGN feedback in GIMIC. Correspondingly, this missing AGN feedback may also explain the high luminosity of coronal X-ray emission at a given $M_{200}$, for $M_{200}\gtrsim {\rm a~few}\times10^{12}\rm~M_\odot$ (Fig.~\ref{fig:compareGIMIC}b).
\begin{figure}[!h]
\begin{center}
\epsfig{figure=fig03.eps,width=1.0\textwidth,angle=0, clip=}
\caption{(a) $M_{200}$ vs. $M_*$. (b) $M_{200}$ vs. $v_{rot}$. Symbols are the same as those in Fig.~\ref{fig:compareGIMIC}. The solid curve in panel (a) is the SMHMR from \citet{Leauthaud12a} [for the model $SIG\_MOD2$ (with the modeling of the stellar mass measurement errors) and the lowest redshift bin ($z=0.22-0.48$)]. The solid line in panel (b) marks the $M_{200}-v_{rot}$ relation of \citet{Navarro97}, which is used to compute $M_{200}$ of most of the observed galaxies, except for NGC~266.}\label{fig:M200Scaling}
\end{center}
\end{figure}
To first order, the rotation velocity of galaxies is determined by the mass and concentration of their parent halo. Whilst the concentration of halos modeled in collisionless simulations is known to follow a tight relation (e.g., \citealt{Navarro97,Bullock01}), the co-evolution of baryons and dark matter in more realistic simulations induces systematic deviations and scatter in this relation (often described as a ``back reaction'', e.g., \citealt{Duffy10}). As shown in Fig.~\ref{fig:M200Scaling}b, low-mass galaxies ($M_{200}\lesssim (2-3)\times10^{12}\rm~M_\odot$) from GIMIC exhibit a larger scatter in $v_{rot}$ than more massive counterparts. The low-$v_{rot}$ boundary of low-mass galaxies is roughly consistent with the relation of \citet{Navarro97} (derived from collisionless simulations), which is the relation used here to estimate $M_{200}$ of all the observed galaxies except NGC~266. On the other hand, the high-$v_{rot}$ boundary of low-mass galaxies seems consistent with high-mass ones. In contrast, high-mass galaxies with $M_{200}\gtrsim (2-3)\times10^{12}\rm~M_\odot$ have clearly lower scatter in $v_{rot}$, and no galaxies have $v_{rot}$ as low as predicted by the \citet{Navarro97} relation. In the absence of AGN feedback, the halo of a massive galaxy will tend to over-cool and over-contract, resulting in a greater rotation velocity. Therefore, the absence of AGN feedback in GIMIC, particularly for high-mass galaxies, likely results in an artificially high baryonic matter density at small radii, and thus a higher dark matter density and maximum rotation velocity.
\subsection{Role of SF in the Coronal X-ray Emission}\label{PaperIIIsubsec:CoolHotGasInteraction}
Our sample is selected based on the available \emph{Chandra} data. Archival observations are often proposed for active starburst galaxies, which tend to have luminous X-ray halos. Among the 30 galaxies used in the present paper, $\sim40\%$ are classified as ``starburst'' ($\sim35\%$ if one includes the four massive disc galaxies; Table~\ref{table:GalaxyPara}), while in the GIMIC \textit{sample} (not necessarily the simulation taken as a whole), this fraction is only $\sim9\%$ (defining a simulated galaxy with ${\rm SFR}/M_*\gtrsim0.1\rm~Gyr^{-1}$ as a starburst, broadly consistent with the definition for the observed galaxies; Fig.~\ref{fig:SFproperties}a). This low fraction is likely a consequence of excluding interacting galaxies from the GIMIC sample. We thus separated starburst and non-starburst galaxies in the above comparisons (\S\ref{PaperIIIsubsec:compareGIMIC}).
The coronal luminosity discrepancy between the observed starburst and the GIMIC galaxies may be partly attributed to this difference in the sample selection. However, at small radii, the X-ray intensity always drops for GIMIC galaxies \citep{Crain10a,Crain13}, but increases for the observed ones (Paper~I). As presented in \S\ref{PaperIIIsubsec:compareGIMIC}, the coronal luminosity of our \emph{Chandra} sample galaxies in $0.01-0.1r_{200}$ is on average $\sim15\%$ of those adopted in Papers~I and II, while the corrected coronal luminosity of the GIMIC galaxies in $0.01-0.1r_{200}$ is typically $\sim(20-50)\%$ of the luminosity in the entire halo (\S\ref{PaperIIIsubsec:Simulations}). Considering that the X-ray luminosity adopted in Papers~I and II is also measured in the inner region, we conclude that GIMIC under-predicts the coronal X-ray emission of low-mass galaxies, at least at small radii, although the choice of the $r=0.01-0.1r_{200}$ photometry region minimizes this discrepancy. In fact, this under-prediction may also exist at $0.01-0.1r_{200}$, as indicated by the steeper slope of the $L_X-{\rm SFR}$ relation for the GIMIC galaxies (with respect to the observed galaxies in our \emph{Chandra} sample; Fig.~\ref{fig:compareGIMIC}d). We consider the SFR range of $0.5-5\rm~M_\odot~yr^{-1}$, because galaxies with larger SFRs are likely massive ones in GIMIC and starbursts in our \emph{Chandra} sample, while those with a lower SFR in GIMIC show too large a scatter in $L_X$, and many of them are in fact massive galaxies (see below; Fig.~\ref{fig:compareGIMIC}d). We find an average coronal luminosity of $\log (L_X/{\rm ergs~s^{-1}})=38.89_{-0.16}^{+0.15}$ for our observed galaxies, compared to $\log (L_X/{\rm ergs~s^{-1}})=38.02_{-0.10}^{+0.09}$ for the simulated ones.
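To make the size of this gap explicit, the difference between the two quoted averages can be converted from dex to a linear factor (a trivial arithmetic check, nothing more):

```python
# Mean log L_X for galaxies with SFR = 0.5-5 Msun/yr, quoted above.
obs_log_lx = 38.89   # observed Chandra sample
sim_log_lx = 38.02   # GIMIC simulated galaxies

gap_dex = obs_log_lx - sim_log_lx   # 0.87 dex
ratio = 10.0 ** gap_dex             # observed-to-simulated luminosity ratio
print(f"{gap_dex:.2f} dex -> factor {ratio:.1f}")
```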
\begin{figure}[!h]
\begin{center}
\epsfig{figure=fig04.eps,width=1.0\textwidth,angle=0, clip=}
\caption{SF properties of the sample galaxies. (a) $L_X$ vs. the specific SFR (SFR per unit stellar mass, ${\rm SFR}/M_*$). (b) SFR vs. $M_*$. (c) SFR vs. $M_{200}$. Symbols are the same as those in Fig.~\ref{fig:compareGIMIC}.}\label{fig:SFproperties}
\end{center}
\end{figure}
The most intuitive explanation for this under-prediction of X-ray luminosity around $L^\star$ non-starburst field galaxies may lie in the physical and/or numerical ingredients adopted in GIMIC. Many physical assumptions and parameters, as well as the adopted numerical methods (e.g., \citealt{Scannapieco12}), may affect the simulated galaxy properties, e.g., the adopted feedback implementation, metal line cooling, stellar initial mass function, and the assumed cosmology (e.g., \citealt{Haas13a,Haas13b}). In particular, the choice of feedback parameters plays a key role in reproducing the observed SF properties of galaxies (e.g., \citealt{Schaye10,Scannapieco12,Vogelsberger13}), and possibly the related halo gas cooling and X-ray emission. As suggested by many cosmological simulations, there exists a fundamental problem that the specific SFRs of galaxies (particularly low-mass galaxies) at low redshift are significantly under-predicted (e.g., \citealt{Weinmann12}). It has been suggested that this discrepancy arises because both semi-analytic and hydrodynamical simulations couple the growth of galaxies too strongly to the growth of their host dark matter haloes. Since haloes assemble early, simulated galaxies do so too, and thus form significantly earlier than is observed in nature (e.g., \citealt{Weinmann12}). Similarly, GIMIC also under-predicts the SFR of low-mass galaxies \citep{Crain09}. This under-prediction of current SF activity may affect the X-ray emission, e.g., via the related stellar feedback and cool-hot gas interaction, but the net effect is not yet well understood. For example, there exists a small number of low-SFR, high-$L_X$ galaxies in the GIMIC simulations (Fig.~\ref{fig:compareGIMIC}d). As indicated in Fig.~\ref{fig:SFproperties}b,c, these galaxies are mostly massive ones, apparently post-starbursts whose SF has already ceased while the radiative cooling of the coronal gas is still strong.
Some observed galaxies (e.g., NGC~2787, NGC~2841, NGC~4594, and NGC~5170) may have similar SF properties, i.e., with low SFR but high $M_*$. The SF activities in these galaxies have probably been quenched, e.g., via morphological quenching or other mechanisms (e.g., \citealt{Martig09,Li09,Li11} and references therein). However, the diffuse X-ray luminosities of these galaxies are much lower than those of other massive galaxies with higher SFR (e.g., the massive disc galaxies from the literature; Fig.~\ref{fig:compareGIMIC}), apparently inconsistent with what is predicted by GIMIC.
The above comparisons clearly demonstrate that X-ray observations can place important constraints on galaxy formation theory, especially its feedback implementation. Improved feedback prescriptions, especially in low-mass galaxies, may need to be considered in next generation simulations to correctly reproduce both the current SFR and the coronal X-ray luminosity. In particular, GIMIC adopted constant kinetic feedback parameters (initial wind velocity of $600\rm~km~s^{-1}$ and mass loading factor of $\dot{m}_{wind}/\dot{m}_*=4$), which were chosen so that the global SFR density matches observational estimates. Although the adopted efficiency (the wind energy accounts for $\sim80\%$ of the SN energy) and mass loading factor are roughly in agreement with observational estimates [e.g., for M82 by \citet{Strickland09}], it is likely that the macroscopic properties of outflows correlate with the detailed structure of the ISM (e.g., \citealt{Efstathiou00,Creasey13}) and, by extension, the properties of galaxies. The simplicity of the adopted physical and/or numerical ingredients in GIMIC is a possible reason that the SF and X-ray properties of the simulated galaxies are too strongly coupled with the dark matter halo mass of the galaxy (e.g., Fig.~\ref{fig:compareGIMIC}b). It will be interesting to compare the X-ray scaling relations of GIMIC galaxies with those of galaxies formed in next-generation hydrodynamical simulations adopting feedback schemes calibrated to reproduce the stellar mass function of local galaxies.
Another potential explanation for the under-prediction of soft X-ray emission is the adoption of a single-phase ISM in GIMIC. The gas is assigned an average temperature within the numerical resolution, which is typically insufficient to resolve a single massive SF region. In these SF regions, cold gas often dominates the total gas mass, while hot gas dominates the X-ray emission. Therefore, averaging the thermal state of the gas results in a temperature too low to emit X-rays efficiently, and thus a significant under-prediction of the X-ray emission. This effect is most significant near a galactic disc, where the cool and hot gases mix strongly, and the X-ray emission can be enhanced via enhanced radiative cooling of mass-loaded cool gas and charge exchange (e.g., \citealt{Li11,Liu11,Liu12}). This effect may also help to explain the declining X-ray intensity profile toward small radii in GIMIC \citep{Crain10a,Crain13}, which is not typically observed in real galaxies.
As a general conclusion, GIMIC likely under-predicts the role of SF in the X-ray emission of low-mass galaxies. This under-prediction has two origins: the under-prediction of the SFR for low-mass galaxies, due to the overly strong coupling of the growth of galaxies to the growth of their host dark matter halos, and the under-prediction of X-ray emission from SF, due to the assumption of a single-phase ISM.
\subsection{Differences between Low and High Mass Galaxies}\label{PaperIIIsubsec:DifferenceLowHighMass}
Although the $L_X-M_{200}$ or $L_X-M_*$ slopes themselves are not well constrained due to the large scatter of the observational data, it is clear that the four massive disc galaxies (NGC~266, NGC~1961, NGC~6753, UGC~12591) are more X-ray luminous than the non-starburst galaxies below a typical transition mass of $M_*\sim2\times10^{11}\rm~M_\odot$ or $M_{200}\sim10^{13}\rm~M_\odot$ (Fig.~\ref{fig:compareGIMIC}b,c). As argued in previous studies (e.g., \citealt{Bogdan13a,Bogdan13b}), this high X-ray luminosity is most likely a signature of the accretion of intergalactic gas.
There are several possible mechanisms which may produce the high X-ray luminosity of these massive disc galaxies. \emph{Firstly}, the deeper gravitational potential, and possibly the stronger surrounding thermal and ram-pressure (e.g., \citealt{DallaVecchia08,Lu11}), may help the galaxy to retain more hot gas within its halo, resulting in an increasing baryon fraction (especially the hot gas fraction) with the halo mass (e.g., \citealt{Crain10a,Dai12}). \emph{Secondly}, the change of the $L_X/M_*$ ratio could also be a result of the change of the accretion mode. Numerical simulations have confirmed a transition from cold- to hot-mode accretion as the (baryonic) mass of a galaxy grows above $\sim10^{10.3}\rm~M_\odot$ (e.g., \citealt{Keres05,Keres09,Crain10a,vandeVoort11}). Above this transition mass, a larger fraction of gas can be gravitationally heated to an X-ray emitting temperature in the galaxy vicinity. \emph{Finally}, the dynamical state of the coronal gas may also affect its overall X-ray emissivity by changing its radial distribution \citep{Ciotti91,OSullivan03}. A massive galaxy with a corona that tends to stay in a hydrostatic or even inflow state can have a steeper density profile (higher gas density in the center), enhancing the X-ray emission. Conversely, a galaxy with its corona in a subsonic outflow state tends to have a flatter density profile and may be less luminous in the galactic inner regions (e.g., \citealt{Tang09a}). In all cases, the dark matter halo mass appears to be a key parameter in determining the coronal luminosity and it is likely that the accretion of intergalactic gas plays an increasingly important role in producing the galactic coronae above the transition mass.
\section{Summary and Prospects}\label{PaperIIIsec:Summary}
In order to study the coronae around disc galaxies, we have constructed a \emph{Chandra} database of 53 nearby highly-inclined disc galaxies (Paper~I), and have conducted a correlation analysis of their coronal and other multi-wavelength properties (Paper~II). In this paper, we have compared our results to the predictions from a simple analytical model considering only the accretion of intergalactic gas (in the absence of feedback), the measurements of several massive disc galaxies from the literature, as well as results from recent cosmological hydrodynamical simulations invoking both accretion and feedback (the GIMIC simulations). Our main results and conclusions are set out below.
The observed X-ray emission in the vicinity of galactic discs (those explored in our \emph{Chandra} observations without further corrections in \S\ref{PaperIIIsubsec:OurSample}) is much less ($\sim2\%$) than predicted by a simple analytical model considering only the accretion of intergalactic gas. Furthermore, the radiative cooling rate of coronal gas in the same radial aperture is typically too low ($\lesssim10\%$ of the SFR) to be the primary source of gas to maintain the on-going SFR in the galaxies. The GIMIC simulations more accurately reproduce the observed X-ray scaling relations. They broadly reproduce the luminosity range of the coronal X-ray emission in $0.01-0.1r_{200}$ of $L^\star$ galaxies, including the scatter in $L_X$ at a given $v_{rot}$ or $M_*$. However, the $L_X-M_{200}$ relation of simulated galaxies differs from that inferred from observations, both in terms of the slope and scatter.
For the observed galaxies, low-mass starbursts appear to be more X-ray luminous than more quiescent counterparts at a given galaxy mass. There is a tight $L_X-{\rm SFR}$ correlation, even for coronae at large radii of $r=0.01-0.1r_{200}$, similar to that of the X-ray emission at smaller radii. The overall trends of the observed and simulated galaxies on the $L_X-{\rm SFR}$ plot show little similarity, indicating that GIMIC potentially under-predicts the role of SF in producing the X-ray emission. There are two possible origins of this under-prediction: the under-prediction of the SFR for low-mass galaxies, due to the overly strong coupling of the growth of galaxies to the growth of their host dark matter halos, and the under-prediction of X-ray emission from SF, due to the assumption of a single-phase ISM.
Coronal X-ray luminosity increases much faster with galaxy mass above a typical transition mass of $M_*\sim2\times10^{11}\rm~M_\odot$ or $M_{200}\sim10^{13}\rm~M_\odot$. Below this mass, the $L_X-M_{200}$ and $L_X-M_*$ relations of the observed galaxies can both be well characterized with a linear scaling relation. The higher $L_X/M_*$ or $L_X/M_{200}$ ratio of galaxies above the transition mass indicates that a massive disc galaxy tends to have a massive dark matter halo, to be dominated by hot-mode accretion, and/or to host a corona most likely in a hydrostatic state. The accretion of intergalactic gas likely plays an increasingly important role in producing the galactic coronae with increasing galaxy mass.
The above results demonstrate that X-ray observations can place important constraints on galaxy formation theory, especially the astrophysics of accretion and feedback. Although the simulations have made great progress in reproducing the observed X-ray properties of galaxies, some significant discrepancies remain and are quite suggestive. The absence of AGN feedback in GIMIC leads to an over-prediction of the number of massive galaxies, as well as simultaneous over-predictions of $v_{rot}$, $M_*$, and $L_X$ at a given $M_{200}$ for massive galaxies. In addition, GIMIC may couple the galaxy and coronal properties ($v_{rot}$, $M_*$, SFR, $L_X$) too strongly to the host dark matter halo mass, which is one of the major origins of the discrepancies between the observations and simulations. Furthermore, the adoption of constant feedback parameters (mass loading and injection velocity) is likely too simplistic. It is also important to study the X-ray properties of low-mass galaxies, which are not massive enough to retain hot coronae and whose X-ray emission mostly arises from stellar feedback and the related cool-hot gas interaction, which remain poorly understood in current numerical simulations.
The present paper is based on the archival observations and X-ray measurements from the literature. Most of these observations are too shallow to probe the faint X-ray emission from the outskirts of the galactic halos. In addition, the sample is neither large nor uniform. In particular, very few galaxies are observed in X-rays around or above the transition mass. We have also focused on the \emph{X-ray luminosities} of the galactic coronae. Most of the archival observations are too shallow to yield valuable constraints on hot gas abundance, and hence on gas density and total baryon content of the galaxies (e.g., \citealt{Bregman07}). Therefore, deeper X-ray observations of more galaxies spanning the entire galaxy mass range are highly desirable in order to put tighter constraints on the galaxy formation models and further improve our understanding of the global properties of galactic coronae.
\acknowledgements
The authors thank \'Akos Bogd\'an for helping us to compute the X-ray luminosity of NGC~266 in $0.01-0.1r_{200}$, Ying Zu and \'Akos Bogd\'an for many helpful discussions, the anonymous referee for many useful suggestions that led to improvements of the paper, as well as the referees of Papers~I and II whose constructive comments and suggestions led to the write-up of the present paper. Jiang-Tao Li acknowledges the financial support from CNES and the support from NSFC grant 11233001.
\scriptsize
Today, when users post their photos to social media sites, they often annotate the photos with text descriptions and short expressions, and classify the photos or albums with keyword tags, such as ``sydney opera house'' or ``trip to sydney''. We assume that such a tagging mechanism, together with the data processing performed by the social media platform, helps to quickly locate similar content for digital photos. The similarity, whether in color, pattern, theme or a combination of them, should contribute to better compressibility with the aforementioned compression tools. We perform an evaluation to assess the correlation between tags, similarity and compressibility.
To validate our hypothesis, we design a set of experiments on publicly available data sets to explore the relationship between image similarity and the compression ratio of the associated photo groups. We use an open source application programming interface (API), Flickr4Java ~\cite{flickr4java}, to download photos from Flickr. To reduce the number of photos that are not relevant to the tags, we choose ``relevance'' as the sorting method. The results returned by the Flickr platform are sorted by the API in descending order of relevance to the tag theme.
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.5\textwidth, height=0.75in]{tag.png}}
\caption{Tag selection for photo groups}
\label{fig:tag}
\end{figure}
We select twelve tags for photo search and create photo groups according to their tags, as listed in Fig.~\ref{fig:tag}. We also attempt to use multiple tags, delimited by commas. When available, photos in the original size are downloaded; otherwise, large, medium or small images are downloaded. So the sizes of the images vary, depending on the download authorization levels set by the image owners. We create subgroups consisting of the 100, 50 and 20 most relevant photos for each tag/group, respectively. We also create a comparison subgroup of SIFT-picked photos from the Top-100 subgroup; the method is explained later in this section.
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.5\textwidth, height=1.5in]{flow.png}}
\caption{Work flow in the experiment: how photo images are processed to assess the compression results}
\label{fig:flow}
\end{figure}
All the photos downloaded from Flickr are in JPEG format. As shown in Fig.~\ref{fig:flow}, all JPEG images are first decompressed into PNM-format raw image files using djpeg ~\cite{djpeg} for all subgroups. Then, we concatenate all raw image files into one single big file. Finally, we apply two compression tools, rzip v2.1 ~\cite{kolivas2008long} and 7-Zip v15.14 ~\cite{pavlov20137zip}, to perform the compression. The compressed files are in the .rzip and .7z formats, respectively. By doing so, we are able to check the inter-file compressibility by leveraging the intra-file optimization in the compression tools. We define the compression factor (CF) below as the size of the original file $S_{old}$ divided by the size of the new (compressed) file $S_{new}$. The higher the CF, the better the compression result. Since we run this experiment to explore the potential of compressibility, we do not consider the decompression phase in which photo images are to be restored from the single file. The execution time of concatenation and compression is not examined either.
\begin{displaymath}
CF = \frac{S_{old}}{S_{new}}
\end{displaymath}
For comparison purposes, we use VLFeat v0.9.20 ~\cite{vedaldi08vlfeat} to extract all SIFT local features from the Top-100 image groups. Then, we use code from ~\cite{solem2012programming} to compare the features from any two images and obtain the number of shared ones. The number of shared features represents the similarity between the two photos: the more features they share, the more similar the two images are. The threshold value for shared features is set to 10 throughout our experiment to eliminate less relevant pairs. Identifying all images that share more than 10 features with each other is a high-dimensional computational problem. To simplify the computation, we reduce the problem to finding the cluster with the largest number of photos. This is a trade-off between similarity and computational complexity. By doing so, we are able to obtain a group of similar photos more quickly. We visualize the cluster selection process to make it easy to understand. Each image represents a node $n_{i}$ and the group is a set of nodes, namely $N = \{n_0,n_1,n_2,n_3,...,n_t\}$ where $t = 99$. If two images $n_{i}$ and $n_{j}$ share at least ten local features, an edge $e_{ij}$ is established between $n_{i}$ and $n_{j}$. As a result, a diagram like Fig.~\ref{fig:sift} is generated. In Fig.~\ref{fig:sift}, there are four clusters in total. We select the first one, as it is the largest cluster, with seven members. The other smaller clusters are disregarded. Consequently, the images from the largest cluster are selected as ``SIFT-picked images''. The compression procedure illustrated in Fig.~\ref{fig:flow} is repeated on these images, in addition to the Top-100, Top-50 and Top-20 image groups.
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.5\textwidth, height=1.5in]{sift.png}}
\caption{An example of choosing the largest cluster from the SIFT results}
\label{fig:sift}
\end{figure}
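The cluster selection described above amounts to finding the largest connected component of the match graph. A self-contained sketch (the match counts here are synthetic stand-ins for the VLFeat/SIFT pair-matching output):

```python
def largest_cluster(n, shared, threshold=10):
    """Return the largest set of images connected by edges whose
    shared-feature counts are >= threshold.

    `shared` maps unordered index pairs (i, j) to the number of SIFT
    features the two images have in common.
    """
    # Build adjacency lists from the pairs passing the threshold.
    adj = {i: set() for i in range(n)}
    for (i, j), count in shared.items():
        if count >= threshold:
            adj[i].add(j)
            adj[j].add(i)
    # Graph search for connected components; keep the largest one.
    seen, best = set(), set()
    for start in range(n):
        if start in seen or not adj[start]:
            continue
        comp, queue = set(), [start]
        while queue:
            node = queue.pop()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(adj[node] - comp)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

# Synthetic example: images 0-1-2 form one cluster, 3-4 another;
# pair (5, 6) shares too few features to pass the threshold of 10.
shared = {(0, 1): 15, (1, 2): 12, (3, 4): 30, (5, 6): 4}
print(sorted(largest_cluster(7, shared)))   # prints [0, 1, 2]
```

In our setting $n=100$ and `shared` would hold the shared-feature counts for all image pairs in a Top-100 group.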
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.5\textwidth]{ss1.png}}
\caption{Example thumbnails from Top-100, Top-50, Top-20 and SIFT-picked similar photos subgroups with the same tag}
\label{fig:ss1}
\end{figure}
We create 12 groups of photos, each with the top 100 photos for the given tags. We also create Top-50 and Top-20 photo groups for comparison purposes. Fig.~\ref{fig:ss1} shows some thumbnails: (a) from Top-100; (b) from Top-50; (c) from Top-20; (d) from SIFT-picked Top-100 images for the tag ``thebigben''. We denote the photo groups $g_{i}$ where $i = \{1,2...12\}$. We also create two mixed groups from the 1,200 photos by random selection, named m1 and m2. Two more groups are then created by randomly downloading photos from Flickr, named r1 and r2, respectively.
\section{Background}
To achieve better compressibility, the LZMA algorithm used in 7z ~\cite{pavlov20137zip} employs a larger sliding window. The compression program finds redundant strings within a window of a certain length; with a greater window size, the chance of hitting redundant strings is greater, and thus the compression results are better. Similarly, rzip ~\cite{kolivas2008long} looks for identical content over a longer distance throughout the file. It uses hash values of fixed-size chunks for this check, which allows better intra-file deduplication.
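A toy analogue of this long-range matching (a deliberate simplification, not rzip's actual algorithm): hash every fixed-size chunk of the concatenated stream and measure how much of it repeats, no matter how far apart the copies sit.

```python
import hashlib

def duplicate_fraction(data, chunk_size=4096):
    """Fraction of fixed-size chunks whose content was seen earlier
    anywhere in the stream -- a crude proxy for the redundancy that a
    long-range matcher such as rzip can exploit."""
    seen, dupes, total = set(), 0, 0
    for off in range(0, len(data), chunk_size):
        digest = hashlib.sha1(data[off:off + chunk_size]).digest()
        total += 1
        if digest in seen:
            dupes += 1
        seen.add(digest)
    return dupes / total if total else 0.0

# A 1 MiB pseudo-image built from 256 distinct 4 KiB chunks, stored
# twice far apart in the stream: half of all chunks are duplicates,
# even though a small sliding window would never see the repetition.
img = b"".join(i.to_bytes(4, "big") * 1024 for i in range(256))
print(duplicate_fraction(img + img))   # prints 0.5
```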
These techniques do not guarantee good storage efficiency at the system level unless the system can feed the compression tools with the right set of data: in our case, similar photos. Digital images are expressed as pixel values composed of basic colors such as red, green and blue, often denoted as R, G, B values respectively. In this study, we aim to analyze the storage efficiency for raw images; all images are represented as a set of pixels with R, G, B values. When speaking of the ``similarity'' of photos, we may refer to the color, the pattern, the content or even the theme, which makes things quite complicated. Some objects that a human views as similar are regarded as totally different by a computer because their binary values are not equal: for instance, two images with the same pattern, one in red and one in blue, where the red one stores (1, 0, 0) for all pixels while the blue one stores (0, 0, 1).
We do not focus on pixel-level similarity detection, as it is finer-grained and too complex; its computational cost may make it unviable as a pre-processing step just to feed the compression tools. We therefore ask whether there is a good way to identify the basic similarity of digital photos. Local features have long been used to distinguish two images; methods such as SIFT \cite{lowe2004distinctive} are invariant to scale and rotation and have been widely used in image processing.
\section{Conclusion and Future Work}
In this paper, we have employed data sets and the tagging system from Flickr for an empirical study. We observed storage efficiency results from two content-based similarity detection approaches for raw digital images. The results showed that, with the help of similarity, the compression factor can be improved significantly, by up to 26\%. The insights obtained from the study may help direct future system design. Our future work includes measuring and optimizing the time efficiency of the aforementioned similarity detection approaches. We also expect new storage system designs that pre-process raw images and exploit inter-file content-based similarity, achieving greater storage efficiency.
\section{Introduction}
With the rapid growth of data volume, the efforts to optimize spatial and temporal efficiency have never stopped. Compression and deduplication are two well-known technologies for saving storage space. Studies~\cite{aronovich2009design, yokoo1997data} find that applying deduplication and compression techniques to similar data helps achieve better results. On the other hand, as more data is stored in distributed environments because of scale, the placement of data becomes important. If similar data is placed on the same node, or on a small number of nodes, read performance can be significantly better than with a highly fragmented placement. In a backup system, reducing fragmentation helps improve the performance of data restore~\cite{fu2015design}. Therefore, for a large-scale storage system, the benefit of using similarity to determine data placement is twofold: first, it helps deduplication or compression save more storage space; second, it enables quick search, sorting and read operations.
In addition, researchers have found that general compression or deduplication methods may not work well for all workloads and data sets. In recent years, workload-aware deduplication and compression techniques have been proposed~\cite{lin2015metadata, dewakar2015storage}. Instead of just checking bit-wise similarity, examining content to put data into similar groups can improve storage efficiency more significantly. The program in~\cite{shi2014photo} used local feature detection to help compress photo albums sharing many similar contents.
One of the most common use cases for cloud storage is uploading and sharing personal digital photos, via social media or image repositories. Photos uploaded by one user, often by albums, are more likely to be similar in content. Raw images are increasingly popular among professional photographers, photo hobbyists, healthcare IT professionals and scientific researchers. However, more effort to optimize storage efficiency for raw images while preserving visual similarity is needed, which may be complementary to JPEG encoding and compression.
To this end, we propose exploiting content-based similarity detection for raw images; the similarity can be utilized for better spatial and temporal storage efficiency. We present our observations and insights from two approaches (one based on photo tags, the other on local feature extraction) for exploring compressibility. We do not intend to propose a specific storage system design here; instead, we would like to share our findings and inspire further work to substantiate the methods and optimize the performance for real-life raw image workloads.
The main contributions of this paper are:
\begin{itemize}
\setlength{\itemsep}{1pt}
\item We set up and perform empirical studies on two content-based similarity detection approaches to compress similar raw images.
\item We analyze the results with statistical views and gain insights for future design.
\item We discuss technical limitations and challenges.
\end{itemize}
\subsection*{Abstract}
To improve temporal and spatial storage efficiency, researchers have intensively studied various techniques, including compression and deduplication. Through our evaluation, we find that methods such as photo tags or local features help to identify content-based similarity between raw images. The images can then be compressed more efficiently to get better storage space savings. Furthermore, storing similar raw images together reduces fragmentation and enables rapid data sorting, searching and retrieval in a distributed, large-scale environment. In this paper, we evaluate compressibility by designing experiments and observing the results. We find that, on a statistical basis, the higher the similarity among photos, the better the compression results. This research provides clues for future large-scale storage system design.
\input{introduction}
\input{background}
\input{approach}
\input{results}
\input{related}
\input{discussion}
{\footnotesize \bibliographystyle{acm}
\section{Related Work}
Recently, to embrace the big data era, the research community has shifted its focus from general storage efficiency techniques to application- and data-aware specialized methods with pre-processing capabilities. For example, some exploited the separation of metadata from data in tar files~\cite{lin2015metadata}: by moving metadata to a different location in the file, the deduplication ratio is improved significantly. Conventional wisdom states that video data is difficult to deduplicate. In~\cite{dewakar2015storage}, variations such as captions, resolutions and web optimization are evaluated with different deduplication techniques; the results show that with pre-processing, video files can be effectively deduplicated. In addition, migratory compression~\cite{lin2014migratory} has been proposed to reorder binary sections before feeding data to compression tools, achieving better intra-file compressibility at the cost of performance and restore effort. A recent study~\cite{shi2014photo} utilized local features rather than individual pixel values to analyze the similarity between photos from the same album, achieving better compression results.
\section{Evaluation}
All evaluation results are obtained from a workstation equipped with one Intel Core i5 processor with 8GB RAM and 2TB disk space. Our data set includes 1,600 photos with the total size of 817MB acquired using the methods described in Section 3.
\begin{figure*}[t]
\centering
\subfigure[Results with rzip]{
\includegraphics[width=0.9\textwidth, height=0.19\textwidth]{rzip.pdf}
}
\hspace{0.05\textwidth}
\subfigure[Results with 7z]{
\includegraphics[width=0.9\textwidth, height=0.19\textwidth]{7zip.pdf}
}
\caption{Comparison of compression ratio for Flickr tagged Top-100, Top-50, Top-20 and SIFT-picked subgroups. g1 to g12 represent the tagged groups. }
\label{fig:results_1}
\end{figure*}
The CF for all twelve groups is shown in Fig.~\ref{fig:results_1}. First, we examine the results for the Top-100 and Top-50 subgroups. Among the twelve groups, four (g1, g5, g9 and g10) see a higher CF with Top-50 than with Top-100. For the other groups, the CF results are either very close between Top-50 and Top-100, or lower for Top-50. The results from the two compressors are quite consistent. Next, we compare the Top-100 and Top-20 subgroups. This time, more than half of the groups see a significantly higher CF with Top-20, about 10\% on average and up to 26\%. Only one group, g6, gets a lower CF with its Top-20 subgroup; for the rest, almost equal CF results are observed. For both compression tools, the SIFT-picked photos yield a higher CF than Top-100, Top-50 and Top-20 for ten groups out of twelve (about an additional 10\% compared to Top-20) and an almost equal CF for the remaining two. Overall, the results from the two compression tools are quite close; the exception is g8, for which rzip achieves a much greater CF with Top-20. From these results, we can see that on a statistical basis, the CF with SIFT-picked is better than with Top-20, which is better than Top-50, followed by Top-100. The more relevant (similar) the images are, the higher the CF that can be expected.
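For reference, the compression factor (CF) used throughout this section, and the relative improvement quoted above, can be computed as follows (the numeric values in the example are illustrative, not measurements from our data set):

```python
def cf(original_bytes: int, compressed_bytes: int) -> float:
    """Compression factor: original size over compressed size."""
    return original_bytes / compressed_bytes

def improvement(cf_base: float, cf_new: float) -> float:
    """Relative CF gain in percent between two subgroup results."""
    return 100.0 * (cf_new - cf_base) / cf_base

# Illustrative numbers only: a subgroup whose CF rises from 2.5 to 3.15
# shows the kind of 26% improvement reported above.
print(round(improvement(2.5, 3.15)))  # 26
```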
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.45\textwidth, height=1.3in]{mean.pdf}}
\caption{Mean CF from Top-100, Top-50 and Top-20 subgroups vs. mixed and random data sets using rzip. t100 stands for the mean from Top-100, t50 from Top-50 and t20 from Top-20. r1 and r2 represent the randomly downloaded photo groups; m1 and m2 represent the two groups mixed from the g1 through g12 repository}
\label{fig:mean}
\end{figure}
It is interesting to analyze what factors may impact the correlation between tags and CF. We find that where the CF improvements are more distinctive, such as g1 (thebigben), g3 (tajmahal), g7 (milfordsound) and g8 (oriental pearl), the relevant objects are symbolic and easy to identify; multiple tags do not make a significant difference. In contrast, g6 (pizza, pepperoni) does not have a concrete pattern. Moreover, the SIFT-picked image set for g6 includes only two images, i.e., only two images share at least ten SIFT local features, reflecting the diversity of the images in g6; we regard g6 as an anomaly. According to Fig.~\ref{fig:mean}, the mean CF for all Top-100 subgroups (rzip) is 2.76, while the CFs of m1 and m2 are slightly lower (2.65 and 2.74). Mixed images from the same pool yield a lower CF as the relevance within the group goes down. For the random groups, we actually see a different pattern when a group contains fewer photos (20 photos vs. 100). We explain this by the compression tool mechanism: when data is randomly organized, the CF is determined by the hit rate of identical content in the compressor dictionary. The bigger the data pool is, the more likely the new incoming data gets a hit, thus yielding a higher CF. Based on the results above, we believe Flickr tags help users to get more relevant images.
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.42\textwidth, height=1.3in]{siftres.pdf}}
\caption{The maximum, mean, 2nd minimum and minimum of CF from the Top-100, Top-20 and SIFT-picked subgroups using rzip, giving a statistical view of the storage efficiency of these groups.}
\label{fig:siftres}
\end{figure}
Fig.~\ref{fig:siftres} shows the maximum, mean and minimum CF across the twelve groups we test with Top-100, Top-20 and SIFT-picked selection. As previously discussed, g6 is an anomaly and its results represent the minimum for all three selection groups, so we add the 2nd-minimum CF data to gain more insight. We find that, statistically, the CF from SIFT-picked is 10-15\% better than the CF from Top-20, which outperforms the CF from Top-100 by around 15\%. By checking the thumbnails illustrated in Fig.~\ref{fig:ss1}, we find that both SIFT-picked and Top-20 gather more similar photos than Top-100 does. Extracting SIFT local features from 100 photos takes 15-20 minutes, while sorting photos by tag relevance takes almost no computing time on the client side. The SIFT approach is more accurate than tags at the cost of extra computation. The huge amount of ``sorting'' work has already been accomplished by the users when photos are uploaded, or by the platform's back-end programs using unknown algorithms. Therefore, tags can be used as an efficient approach to similarity detection, grouping and data placement.
In summary, we found a correlation between photo tags and compressibility that helps to improve spatial storage efficiency. A few limitations of tag selection were discovered: the correlation is higher when the tags refer to a specific and distinctive object, and the mechanism of the tagging algorithm may also affect the correlation level; it is a matter of how accurate the pre-processing can be. The comparison with SIFT local feature extraction shows that there is room for improvement; ideally, tag relevance may achieve storage efficiency results close to the SIFT approach. More importantly, what we have discussed is complementary to what JPEG has done for digital image compression.
\section{Preface}
One of the characteristic features of a quantum mechanical system is the presence of zero-point fluctuations. That is, even in the ground state, physical quantities are characterized, apart from their average values, by fluctuations, which are usually estimated by the mean square deviations. It is enough to mention that the vacuum fluctuations of the electromagnetic field are responsible for a number of well-known phenomena. For instance, they stimulate the spontaneous emission of atoms \cite{Weisskopf:1930au}; another manifestation is the Casimir force \cite{Casimir:1948dh}; and the Lamb shift can also be explained by means of them \cite{Welton:1948zz}. In general, it is hard to estimate the rate of zero-point fluctuations in quantum field theory, as it turns out to be a divergent quantity. Alternatively, one could try to use various Gedankenexperiments for estimating the order of magnitude of the fluctuations of a given physical quantity. Such Gedankenmessungen usually account for the unavoidable disturbances caused by the interaction during the measuring process. One may recall a well-known example of this sort of discussion concerning the electromagnetic field \cite{1933KDVS...12....3B, Bohr:1950zza}. In contrast to other fields, the metric that describes the gravitational field determines at the same time the background space-time. Thus, one may consider the measurement of the gravitational field by means of the motion of test particles \cite{Bronstein:2012zz, 1954RMxF....3..176A, Regge:1958wr, Peres:1960zz} (as is the case with the electromagnetic field \cite{1933KDVS...12....3B, Bohr:1950zza}), or one may discuss the measurement of space-time characteristics like curvature \cite{Osborne:1949zz, 1960PhRv..120..643W, 1961JMP.....2..207W} and space-time intervals \cite{Wigner:1957ep, Salecker:1957be}.
While there are no objections to the statement that quantum fluctuations prevent one from measuring a position with greater accuracy than the Planck length, $l_P \approx 10^{-33}$cm \cite{Mead:1964zz}, there is still controversy about the rate of length fluctuations \cite{Karolyhazy:1966zz, Diosi:1989hy, Ng:1993jb, Diosi:1995tq, Adler:1999if, Ng:1999se, Baez:2002ra, Ng:2002up}. Karolyhazy supplemented the discussion of Salecker and Wigner \cite{Wigner:1957ep, Salecker:1957be} by noting that the minimum size of a clock is set by its Schwarzschild radius, and found that a length $l$ cannot be measured with greater accuracy than $\delta l \gtrsim l_P^{2/3}l^{1/3}$ \cite{Karolyhazy:1966zz}. This result was criticized by devising new Gedankenexperiments \cite{Diosi:1989hy, Diosi:1995tq, Adler:1999if, Baez:2002ra} and supported again in a series of papers \cite{Ng:1993jb, Ng:1999se, Ng:2002up}. We are not going to discuss the counterexamples and their refutations; instead we shall argue that the bound $\delta l \gtrsim l_P$, which is considered by some authors to be the proper one, can readily be achieved by taking into account the effect of self-gravity in the Salecker-Wigner-Karolyhazy Gedankenexperiment.
\section{Salecker, Wigner, Karolyhazy}
In order to demonstrate the principal limitations on space-time measurement due to quantum and gravitational effects, Salecker and Wigner proposed the following Gedankenexperiment \cite{Wigner:1957ep, Salecker:1957be}. Clocks are placed at the points the distance between which is being measured (a clock can be viewed as a spherical mirror inside which light is bouncing), and by measuring the time a light signal takes from one clock to another we estimate the distance between those points. A clock is characterized by some mass $m$ and radius $r_c$. Because of the clock's size, the points are marked with precision $\simeq r_c$. In addition, clocks are subject to quantum fluctuations, $\delta p \simeq 1/r_c$, which give a fluctuation velocity $\delta v \simeq 1/mr_c$. Thus, the total uncertainty in measuring the length $l=t$ (we use $\hbar = c =1$ units) takes the form \begin{equation} \delta l \gtrsim r_c + l\delta v \simeq r_c + \frac{l}{mr_c}~. \nonumber \end{equation} Minimizing this expression with respect to $r_c$, one gets
\begin{equation}\label{optimal}r_c \simeq \sqrt{\frac{l}{m}}~,~~~~\delta l \simeq \sqrt{\frac{l}{m}}~.\end{equation} It seems that at the expense of mass we can always make $\delta l$ as small as we want. But, as noticed by Karolyhazy, gravity brings new insight into the problem \cite{Karolyhazy:1966zz, Ng:1993jb}. Namely, the clock is characterized by the Schwarzschild radius $r_g \simeq l_P^2m$ and, to avoid gravitational collapse, the size of the clock should be greater than its Schwarzschild radius
\begin{equation} l_P^2m \lesssim \sqrt{\frac{l}{m}}~. \nonumber \end{equation} It gives an upper limit on $m$ \begin{equation}
m \lesssim l^{1/3}l_P^{-4/3}~, \nonumber
\end{equation} and puts a lower bound on $\delta l$
\begin{equation}\label{lengthuncertainty} \delta l_{min} \simeq l^{1/3}l_P^{2/3}~.\end{equation}
Let us note that the above discussion has been carried out without paying any attention to self-gravitational effects. However, one has to draw attention to the fact that the optimal measurement in the Salecker-Wigner-Karolyhazy Gedankenexperiment is done by a clock whose characteristics are very close to those of a black hole \cite{Barrow:1996gs}. If we bear in mind that this means the wave-function describing the clock is shrunk to its Schwarzschild radius, we are driven to the conclusion that the gravitational attraction becomes very strong and may drastically affect the wave-packet expansion. We discuss this matter in the next section.
\section{Suppression of Wellenpaket expansion due to self-gravity}
In the above discussion the clock (as a whole) is treated as a free quantum mechanical object/body described by the Gau\ss sche Wellenpaket
\begin{equation} \psi(t,\,r) = \frac{e^{-r^2/4a^2}}{\left(2\pi\right)^{3/4}} \left[ r_c\left(1 +\frac{it}{2mr_c^2} \right)\right]^{-3/2} ~, \nonumber \end{equation} where \begin{equation} a^2 = r_c^2\left(1 +\frac{it}{2mr_c^2} \right) ~. \nonumber
\end{equation} From this wave-packet one finds
\begin{equation}\label{linear spread} \delta l(t) \, \simeq \, \sqrt{r_c^2 + \frac{t^2}{4m^2r_c^2}} \,\gtrsim \, r_c + \frac{t}{4mr_c} ~. \end{equation}
Taking now into account the self-gravity of the Gau\ss ian wave-packet, its dynamics gets modified. Since gravity prevents expansion, on general grounds one concludes that the value of $\delta l$ should be smaller than the expression \eqref{linear spread}. To get a qualitative picture, let us denote by $r_{wp}$ the radius of the wave-packet. Without gravity \begin{equation}\label{evgaussian} r_{wp}(t) \simeq \sqrt{r_c^2 + \frac{t^2}{4m^2r_c^2}}~,~~~ r_{wp}(0) = r_c~,~~~\dot{r}_{wp}(0) = 0 ~.\end{equation} The quantum mechanical acceleration responsible for this expansion has the form \begin{equation}\label{Beschleunigung} \ddot{r}_{wp}(t) = \frac{1}{4m^2\left(r_c^2 + \frac{t^2}{4m^2r_c^2} \right)^{3/2}} = \frac{1}{4m^2 r_{wp}^3}~.\end{equation} One can derive the results (\ref{optimal}, \ref{lengthuncertainty}) immediately from Eq.(\ref{evgaussian}). Minimizing $r_{wp}(t)$ with respect to $r_c$ one gets \[ r_c = \sqrt{\frac{t}{2m}}~.\] After substituting it into Eq.(\ref{evgaussian}) one finds \[r_{wp}(t) = \sqrt{\frac{t}{m}}~. \] On the other hand, the gravitational acceleration that prevents expansion of the wave-packet looks like \cite{Carlip:2008zf} \[ a_g = \frac{l_P^2m}{r_{wp}^2} ~.\] So the net acceleration takes the form
\[a = \frac{1}{4m^2 r_{wp}^3} - \frac{l_P^2m}{r_{wp}^2} ~.\] Thus, we have to solve the equation
\begin{eqnarray}\label{dziritadi}
\ddot{r}_{wp} = \frac{1}{4m^2 r_{wp}^3} - \frac{l_P^2m}{r_{wp}^2} ~,~~\Rightarrow ~~ \frac{\dot{r}_{wp}^2}{2} + \frac{1}{8m^2 r_{wp}^2} - \frac{l_P^2m}{r_{wp}} = \mbox{const.}\equiv A~.\end{eqnarray} As $r_{wp}(0) = r_c,\,\dot{r}_{wp}(0) =0$, one finds
\[ A = \frac{1}{8m^2 r_c^2} - \frac{l_P^2m}{r_c}~. \] The solution can be written in the form
\begin{equation} \int\limits_{r_c}^{r_{wp}}\frac{dx}{\sqrt{2A + \frac{2l_P^2m}{x} - \frac{1}{4m^2 x^2}}} = t~. \end{equation} A typical form of the potential governing the dynamics of $r_{wp}$ is shown in Fig.\ref{surati}. It has a minimum at
\begin{equation}\label{gravopt} r_c = \frac{1}{4l_P^2m^3}~, \end{equation} corresponding to a state of stable equilibrium. A Gau\ss ian wave-packet having this radius in the initial state neither contracts nor expands in the course of time. From Eq.(\ref{gravopt}) one sees that the larger the mass, the smaller the clock size. However, there is an upper bound on the mass set by the Schwarzschild radius,
\[ m_{max} \simeq \frac{r_c}{l_P^2}~, \] which together with Eq.(\ref{gravopt}) yields \[ r_c \simeq l_P~,~~~~\Rightarrow ~~~~\delta l \simeq l_P~.\]
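As a sanity check on Eq.(\ref{gravopt}), one can integrate Eq.(\ref{dziritadi}) numerically. The following Python sketch (an illustration of ours, written in units with $\hbar=c=1$ and, for simplicity, $l_P=m=1$) verifies that a wave-packet starting at rest at $r_c = 1/(4l_P^2m^3)$ neither expands nor contracts:

```python
def accel(r, m=1.0, lp=1.0):
    """Net acceleration of the wave-packet breadth (hbar = c = 1):
    quantum spreading, 1/(4 m^2 r^3), minus self-gravity, lp^2 m / r^2."""
    return 1.0 / (4.0 * m**2 * r**3) - lp**2 * m / r**2

def evolve(r0, steps=20000, dt=1e-3, m=1.0, lp=1.0):
    """Leapfrog integration of r'' = accel(r), starting from rest at r0."""
    r, v = r0, 0.0
    a = accel(r, m, lp)
    for _ in range(steps):
        v += 0.5 * dt * a
        r += dt * v
        a = accel(r, m, lp)
        v += 0.5 * dt * a
    return r

r_eq = 0.25  # 1/(4 lp^2 m^3) with m = lp = 1
print(accel(r_eq) == 0.0)               # True: no net force at the equilibrium radius
print(abs(evolve(r_eq) - r_eq) < 1e-9)  # True: the packet stays put
```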
\begin{figure}[h]
\centering
\begin{tikzpicture}
\draw[->] (0,0) -- (5,0) node[pos=0.95, blue, above] {$r_{wp}$};
\draw[->] (0,-1.5) -- (0,3) node[blue, pos=0.65, sloped, above] {$1 / 8m^2 r_{wp}^2 - l_P^2m / r_{wp}$};
\draw[scale=0.8,domain=0.754:5.5,smooth,variable=\x,brown, thick] plot ({\x},{1/(\x*\x) - 2.65/\x});
\draw[scale=0.8,domain=0.28:0.754,smooth,variable=\x,brown, thick] plot ({\x},{1/(\x*\x) - 2.65/\x});
\end{tikzpicture}
\caption{The potential: $1 / 8m^2 r_{wp}^2 - l_P^2m / r_{wp}$.} \label{surati}
\end{figure}
It seems likely that one will arrive at the same result by solving the Schr\"odinger-Newton system \cite{Carlip:2008zf, Giulini:2011uw, vanMeter:2011xr}
\begin{eqnarray}\label{schrnewt} i\partial_t \psi = -\frac{1}{2m}\triangle \psi -m\varphi\psi~,~~~~\triangle \varphi = 4\pi l_P^2 m \left|\psi\right|^2 ~,\end{eqnarray} with the initial state given by the Gaussian wave-packet \[ \psi(t=0,\,r) = \frac{e^{-r^2/4r_c^2}}{\left(2\pi r_c^2\right)^{3/4}} ~.\]
It is worth noting that, apart from the effect discussed above, self-gravity also implies a reduction of the clock's mass. As this observation is significant for all discussions concerning space-time measurements, let us now confine our attention to this problem.
\section{Reduction of mass due to self-gravity}
\label{masisdefekti}
According to the papers \cite{Arnowitt:1962hi, Arnowitt:1960zz, Misner:1963zz, Duff:1973zz, Duff:1973ji}, we can safely say that self-gravity affects the clock's mass. The conclusion reached in the papers \cite{Arnowitt:1962hi, Arnowitt:1960zz, Misner:1963zz} implies the modification of the clock mass in the following way
\begin{eqnarray}\label{renormalized}
m \,=\, m_c \,+\, \frac{l_P^2m^2_c}{2r_c} ~~\Rightarrow ~~ m_c \,=\, l_P^{-2}\left(\sqrt{r_c^2 +2l_P^2r_cm} \,-\, r_c\right) ~,
\end{eqnarray} where $m$ is to be identified with the mass in the absence of gravity, $l_P\to 0$. It is plain to see that $m_c$ is always positive. Duff, in his expository paper \cite{Duff:1973ji}, points out that this is not the proper conclusion and suggests the corrected version in the form
\begin{eqnarray}\label{newtonian}
m_c \,=\, m \left(1\,-\, \frac{l_P^2m}{2r_c}\right) ~.
\end{eqnarray} The source of this mistake is well explained in \cite{Duff:1973ji}; however, we will not dwell on the details. Instead we point out that Eq.\eqref{newtonian} itself is very suggestive of the speculation (see \cite{Arnowitt:1962hi}) that leads to Eq.\eqref{renormalized}. Namely, one can interpret Eq.\eqref{newtonian} as the correction to the mass due to self-gravity in the framework of Newtonian gravity. However, one may claim that in general relativity it is the total mass that interacts gravitationally and not just the mass $m$. In this way one arrives at Eq.\eqref{renormalized}. We shall consider both expressions separately.
Let us assume that the reader has no objections with regard to Eq.\eqref{renormalized} and pose the question of how to operate with these two masses in the Gedankenexperiment discussed above. Before proceeding further, we have to make a few remarks to clarify Eq.\eqref{renormalized}: $m_c$ is the mass that enters the exterior Schwarzschild solution and hence determines the Schwarzschild radius. In addition, one has to require $r_c > l_P^2m/2$ in order for the solution to exist \cite{Arnowitt:1962hi, Duff:1973ji}. Thus, we demand that $r_c > l_P^2m/2$ and $r_c > 2l_P^2m_c$.
To carry the idea further, let us note that in Salecker-Wigner-Karolyhazy Gedankenexperiment the clock is described by the wave-function whose breadth is given by Eq.\eqref{evgaussian}. Therefore, $r_{wp}$ plays the role of the radius of clock-mass distribution and, accordingly, one has to replace $r_c$ in Eqs.(\ref{renormalized}, \ref{newtonian}) by this expression (recall that $r_{wp}(0)=r_c$)
\begin{eqnarray}\label{clock-mass-1}
&& m_c \,=\, l_P^{-2}\left(\sqrt{r^2_{wp} +2l_P^2r_{wp}m} \,-\, r_{wp}\right) ~, \\ \label{clock-mass-2}
&& m_c \,=\, m \left(1\,-\, \frac{l_P^2m}{2r_{wp}}\right) ~.
\end{eqnarray} $m_c$ is the mass determining the gravitational field that affects the dynamics of the wave-packet. The Eq.\eqref{dziritadi} gets modified as
\begin{eqnarray}\label{entwicklung}
\frac{\dot{r}_{wp}^2}{2} + \frac{1}{8m^2 r_{wp}^2} - \frac{l_P^2m_c}{r_{wp}} = \mbox{const.}~ \end{eqnarray} In view of Eq.\eqref{clock-mass-1}, the one-dimensional potential governing the time evolution of $r_{wp}$ in Eq.\eqref{entwicklung} takes the form
\begin{eqnarray}
\frac{1}{8m^2r^2_{wp}} \,-\, \sqrt{1 \,+\, \frac{2l_P^2m}{r_{{wp}}}} \,+\, 1 ~. \nonumber
\end{eqnarray} It has the same qualitative behavior as the potential depicted in Fig.\ref{surati}. It has a minimum at the point determined by the equation
\begin{eqnarray}
r_c \,=\, \frac{1}{4l_P^2m^3}\sqrt{1 \,+\, \frac{2l_P^2m}{r_c}} ~. \nonumber
\end{eqnarray} Now $r_c$ is greater than the solution \eqref{gravopt}. From this equation it readily follows that for $m\simeq l_P^{-1}$ the minimum occurs at $r_c \simeq l_P$.
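The fixed point of this equation is easy to obtain numerically. The following Python sketch (illustrative only, with $l_P=m=1$) iterates the equation and confirms that the resulting $r_c$ exceeds the uncorrected value \eqref{gravopt}:

```python
def equilibrium_radius(m=1.0, lp=1.0, iters=200):
    """Fixed-point iteration for r = sqrt(1 + 2 lp^2 m / r) / (4 lp^2 m^3),
    the equilibrium condition with the mass-reduction correction included."""
    r = 1.0 / (4.0 * lp**2 * m**3)  # start from the uncorrected radius
    for _ in range(iters):
        r = (1.0 + 2.0 * lp**2 * m / r) ** 0.5 / (4.0 * lp**2 * m**3)
    return r

r_plain = 0.25                 # uncorrected value 1/(4 lp^2 m^3) for m = lp = 1
r_corr = equilibrium_radius()  # corrected value, roughly 0.54 in these units
print(r_corr > r_plain)        # True: the corrected equilibrium radius is larger
```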
Now let us turn to the Eq.\eqref{clock-mass-2}. In this case the potential governing the dynamics of $r_{wp}$ reads
\begin{eqnarray}
\left(\frac{1}{8m^2} \,+\, \frac{l_P^4m^2}{2}\right)\frac{1}{r^2_{wp}} \,-\, \frac{l_P^2m}{r_{wp}} ~. \nonumber
\end{eqnarray} Hence
\begin{eqnarray}
r_c \,=\, \frac{1}{4l_P^2m^3} \,+\,l_P^2m ~, \nonumber
\end{eqnarray} and again $r_c \simeq l_P$ for $m\simeq l_P^{-1}$.
\section{Concluding remarks}
The results can be summarized as follows. Salecker and Wigner found that one can always choose the size of the clock in such a way that the total uncertainty in length measurement is minimized to $\delta l \simeq \sqrt{l/m}$. One can also read this result in the following way: if there is a clock of size $r_c$ and mass $m$, then the maximum distance which can be measured by this clock with accuracy $r_c$ is given by $r_c \simeq \sqrt{l_{max}/m}$ \cite{Barrow:1996gs} (see Eq.\eqref{optimal}). Their discussion uses the finiteness of the speed of light and the quantum mechanical expansion of the wave packet describing the clock, with no mention of the effects of general relativity. Further insight into this Gedankenexperiment was obtained by Karolyhazy, who noted that the minimum size of the clock is set by the Schwarzschild radius, so that one cannot measure a length with greater accuracy than $\delta l \simeq l_P^{2/3}l^{1/3}$. This rate of length fluctuations is certainly much larger than $\delta l \simeq l_P$, thus lending extra interest to the issue from the standpoint of experimental signatures. It should be noted, however, that such a clock is very close to a black hole, and one naturally expects strong gravitational effects that will essentially affect the wave packet dynamics. We have seen that self-gravity prevents the expansion of the wave-packet and thus reduces the uncertainty in length measurement to $\delta l \simeq l_P$. One more point of importance related to self-gravity is the mass reduction. In view of the discussion presented in section \ref{masisdefekti}, we see that it does not change the conclusion made in the previous section but, in any case, it would be desirable to provide a numerical study of the Schr\"odinger-Newton equation taking into account the effect of the mass reduction due to self-gravity. For this purpose one could use the basic idea underlying the Schr\"odinger-Newton equation \eqref{schrnewt} as a guide.
This system makes use of the Schr\"odinger equation in a background gravitational field, which in turn is created by the mass distribution $m|\psi(t, \mathbf{r})|^2$. But the self-gravitational mass reduction implies that the gravitational field for an external observer, $r \gtrsim r_{wp}$, is sourced by the reduced mass $m_c$. Hence, one has to make the following replacement in Eq.\eqref{schrnewt}
\begin{eqnarray}
m|\psi(t, \mathbf{r})|^2 \, \to \, m_c|\psi(t, \mathbf{r})|^2 ~. \nonumber
\end{eqnarray} From Eqs.(\ref{clock-mass-1}, \ref{clock-mass-2}) it is obvious that as far as $r_{wp} \gg l_P^2m$ - the corrections are negligibly small.
In closing this section, we want to draw attention to the fact that the modification of the Schr\"odinger-Newton system by replacing $m$ with the gravitating mass, see Eqs.(\ref{clock-mass-1}, \ref{clock-mass-2}), implies a dependence of the equation on the wave-packet breadth. The modification of the Schr\"odinger equation due to quantum fluctuations of the background space suggested in \cite{Maziashvili:2016kad, Maziashvili:2018wae} is of a similar nature. To stress our point of view once more, a physically meaningful incorporation of $l_P$ into quantum mechanics should be expressed by some function of the ratio $l_P/r_{wp}$ rather than by a function of $l_P \langle p\rangle$, where $\langle p\rangle$ stands for the average momentum. Otherwise one may obtain evidently misleading results \cite{Maziashvili:2016kad}.
\begin{acknowledgments}
Author is indebted to Avtandil Achelashvili and Zurab Kepuladze for useful discussions.
\end{acknowledgments}
\section{Introduction}
2RXP J130159.6-635806, first discovered by the \textit{ROSAT}
observatory, was later rediscovered in hard X-rays by the
\textit{INTEGRAL}/IBIS telescope and designated with the name IGR\,J13020-6359
\citep{2006ApJ...636..765B,2006AstL...32..145R}. The first
comprehensive analysis of the temporal and spectral X-ray properties of this
source was done by \cite{masha2005} using data from
the \textit{ASCA}, \textit{BeppoSAX}, \textit{INTEGRAL} and
\textit{XMM-Newton} observatories. In particular, \textit{XMM-Newton} data
showed coherent pulsations with a period of around 700~s. Joint spectral analysis of
\textit{XMM-Newton} and \textit{INTEGRAL} data demonstrated that the
spectral shape is very typical for accretion-powered X-ray pulsars
(namely, an absorbed power law with a high-energy cut-off).
Based on 2MASS archival data, \cite{masha2005} proposed that the
binary companion to 2RXP J130159.6-635806 is a Be star at a distance
of 4--7~kpc. This suggestion was later confirmed by
\cite{2013A&A...560A.108C}, who reported the presence of emission
lines of He\,I $\lambda$2.0594 $\mu$m and Br(7--4) $\lambda$2.1663
$\mu$m, which are typical for a Be star. The spectral type of the
optical counterpart was determined to be B0.5Ve. The orbital period
of the binary remains unknown.
X-ray pulsars in binary systems with Be companions (BeXRPs) typically
manifest themselves as transient sources through either Type I (periodic
flares related to the periastron passage) or Type II outbursts (powerful
rare transient events), or a combination of both
\citep[e.g.,][]{2011Ap&SS.332....1R}. 2RXP\,J130159.6-635806
shows several differences from a standard transient BeXRP. Specifically, it
has a relatively low persistent flux, a long pulse period, and it does not
demonstrate either Type I or Type II outbursts. \cite{masha2005}, however,
did report some variability of its X-ray flux.
Therefore, there are substantial reasons to consider 2RXP\,J130159.6-635806
as a member of the subclass of persistent BeXRPs \citep{reig1999}. So far,
only a few members of this relatively small category of objects have been
studied in detail: 4U~0352+309/X~Persei, RX~J0146.9+6121/LS~I~+61~235,
RX~J0440.9+4431, and RX~J1037.5-564 \citep{haberl1998,reig1999}.
In this paper, we present results of a comprehensive analysis of the
temporal and spectral properties of 2RXP\,J130159.6-635806 in a broad
energy range, finding some properties that are very unusual for BeXRPs.
All errors are quoted at the 90\% confidence level unless otherwise stated.
\section{Observations}
\begin{figure*}[ht]
\includegraphics[scale=0.93,bb=47 273 555 517,clip]{fig01.eps}
\caption{Exposure-corrected FPMA (left) and FPMB (right) images of
2RXP~J130159.6-635806 for the 4th observation in $3-78$~keV\ band.
The images have been smoothed by a Gaussian kernel with 3 pixel
width ($1 {\rm\ pix}=3''$). Each image is color-coded in logarithmic
scale. The color bar on the right shows the units of the images
expressed in $10^{-4}$~cts~pix$^{-1}$~s$^{-1}$. The solid green
circle ($120''$ radius) and the yellow shape denote regions for the
source and background extraction, respectively. The dashed green
circle demonstrates an angular distance of $200''$ from the
target. The position of the nearby bright source PSR\,B1259-63\ is
indicated.}
\label{fig:map}
\end{figure*}
2RXP~J130159.6-635806\ was initially serendipitously observed with \textit{Nuclear
Spectroscopic Telescope Array (NuSTAR)} \citep{fiona2013} during
observations of the gamma-ray binary PSR\,B1259-63, with three data sets taken
in 2014 May-June (Chernyakova et al., in prep.). In one observation,
2RXP~J130159.6-635806\ appears in the corner of \textit{NuSTAR}'s $\sim 13' \times 13'$ field
of view (FOV), and, in the other two, the source is at the extreme edge of
the FOV. Despite the large off-axis angles, the \textit{NuSTAR}\ data were
successfully used to extract coherent pulsations. This motivated the
\textit{NuSTAR} team to trigger an on-axis 30~ks observation of 2RXP~J130159.6-635806\
in order to obtain high-quality data for spectral and timing
analysis. Table~\ref{tab:log} lists the \textit{NuSTAR} observations
used in this work.
\begin{table*}
\begin{center}
\caption{{\it NuSTAR} observations\label{tab:log}}
\begin{tabular}{cclccccccccc}
\tableline\tableline
Seq. & Obs. & \multicolumn{1}{c}{Start Time}
& Exp. &
\multicolumn{2}{c}{Net Count Rate\tablenotemark{a}} & Period\tablenotemark{b} \\
Num. & ID & \multicolumn{1}{c}{[ UTC ]} & [ ks ] &
\multicolumn{2}{c}{ [ counts s$^{-1}$ ] } & [ s ] \\
\tableline
1 & 30002017004 & 2014-05-04 10:01 & 33.3 &
$(1.49\pm0.02)\times10^{-1}$ & $(2.94\pm0.03)\times10^{-1}$ & $643.68 \pm 0.02$\\
2 & 30002017008 & 2014-06-02 19:21 & 26.4 & $(2.00\pm0.12)\times10^{-2}$ &
$(7.41\pm0.20)\times10^{-2}$ & $643.64 \pm 0.30$ \\
3 & 30002017010 & 2014-06-14 17:21 & 29.1 &
$(7.05\pm0.19)\times10^{-2}$ & $(1.62\pm0.03)\times10^{-1}$ & $643.14 \pm 0.18$ \\
4 & 30001032002 & 2014-06-24 00:06 & 31.6 & $1.342\pm0.007$ &
$1.239\pm0.007$ & $642.90 \pm 0.01$ \\
\tableline
\end{tabular}
\tablenotetext{a}{Net count rate in $3-78$~keV\ band for FPMA and FPMB
extracted from a circular region with a radius of $120''$.}
\tablenotetext{b}{Measured pulsation period for 2RXP~J130159.6-635806.}
\tablecomments{The target for the first three observations was PSR\,B1259-63,
which placed 2RXP~J130159.6-635806\ offset by $9.55'$ from the optical axis. The 4th
observation was taken with 2RXP~J130159.6-635806\ on-axis.}
\end{center}
\end{table*}
\textit{NuSTAR} carries two co-aligned identical X-ray telescopes
operating in a wide energy band from 3 to 79 keV with an angular
resolution of $18''$ (FWHM) and a half-power diameter (HPD) of
$58''$. A spectral resolution of $400$~eV (FWHM) at $10$~keV is
provided by independent focal planes for each telescope, usually
referred to as focal plane modules A and B (FPMA and FPMB).
{\it NuSTAR} data can have systematic positional offsets as high as $10''$.
Prior to extraction, we therefore corrected the
world coordinate system (WCS) of the event files for the four observations
to match the PSR\,B1259-63\ and 2RXP~J130159.6-635806\ centroid positions based on cataloged coordinates.
Since the \textit{NuSTAR}\ PSF has wide wings \citep{fiona2013,an2014}, we
investigated how the surface brightness of the source changes with
radius in order to determine regions where the source dominates over
the background. We found that 2RXP~J130159.6-635806\ is well above background within
$120''$ \citep[$\sim92\%$ of PSF enclosed energy; see,
e.g.,][]{an2014}, and that the background count rate can be robustly
measured at radii $\geq$ $200''$ from the source. Taking this into
consideration, we defined the corresponding extraction regions shown
in Fig.~\ref{fig:map} for the 4th (on-axis) observation. The other
three observations, for which 2RXP~J130159.6-635806\ is far off-axis, have been
treated similarly. Following {\it NuSTAR} recommended standard
practice, we chose the background regions to be on the same detector
chip as the source.
\section{Timing analysis}
2RXP~J130159.6-635806\ is a known source of coherent X-ray pulsations at a period of
$\sim 700$~s with an average spin-up rate of $\dot\nu \simeq
2\times10^{-13}$~Hz~s$^{-1}$ \citep{masha2005}. We performed timing
analysis of the \textit{NuSTAR} data using the {\sc xronos} package
\citep[epoch-folding tool {\it efsearch};][]{1983ApJ...266..160L} after
barycentering the data with {\it barycorr}. For each \textit{NuSTAR}\
observation, the pulse period and its uncertainty were calculated
following the procedure described by \cite{2013AstL...39..375B}.
Namely, a large number ($10^4$) of source
light curves were simulated, the pulse period of each one was
determined with efsearch, and the distribution of the corresponding
pulse periods was constructed. The mean value of this distribution and
its standard deviation were taken as the pulse period and its
$1\sigma$ uncertainty, correspondingly. Table~\ref{tab:log} lists
period results derived from the FPMA and FPMB combined light
curves. The inset of Fig.~\ref{fig:period} shows the evolution of the
spin period as a function of time.
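The Monte Carlo procedure above can be sketched in a few lines. The snippet below is a simplified stand-in for the actual {\sc efsearch} analysis: it uses a synthetic light curve, a vectorized epoch-folding statistic, and a scaled-down number of trials, all of which are assumptions for illustration only.

```python
import numpy as np

def fold_chi2(t, rate, err_sigma, period, nbins=30):
    """Epoch-folding statistic: chi^2 of the folded profile against a
    constant; it peaks when the trial period matches the true one.
    Assumes (for simplicity) a uniform per-point error err_sigma."""
    idx = ((t / period) % 1.0 * nbins).astype(int)
    n = np.bincount(idx, minlength=nbins)
    mean_b = np.bincount(idx, weights=rate, minlength=nbins) / n
    return np.sum((mean_b - rate.mean()) ** 2 / (err_sigma ** 2 / n))

def best_period(t, rate, err_sigma, trial_periods):
    chi2 = [fold_chi2(t, rate, err_sigma, p) for p in trial_periods]
    return trial_periods[int(np.argmax(chi2))]

# Synthetic 60-ks light curve pulsed at 643 s (all numbers illustrative)
rng = np.random.default_rng(0)
t = np.arange(0.0, 60000.0, 10.0)
sigma = 0.05
obs = 1.0 + 0.3 * np.sin(2 * np.pi * t / 643.0) \
      + rng.normal(0.0, sigma, t.size)

# Monte Carlo uncertainty: re-simulate the light curve within its errors
# many times (10^4 in the text; far fewer here for speed) and take the
# mean and spread of the recovered best periods
trials = np.linspace(640.0, 646.0, 61)
periods = [best_period(t, obs + rng.normal(0.0, sigma, t.size),
                       sigma, trials)
           for _ in range(50)]
P_best, P_err = np.mean(periods), np.std(periods)
```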
As seen from Table~\ref{tab:log} and Fig.~\ref{fig:period}, all four
\textit{NuSTAR}\ datasets are suitable for pulsation detection. It is also
quite evident that periods recorded over the time span of 50 days are not
consistent with each other, clearly showing a decrease in the
period. We utilized the 1st and 4th \textit{NuSTAR}\ observations, which have
the most accurate period measurements and also span the full duration
of the {\it NuSTAR} coverage, to measure a period derivative of $\dot
P = -0.0154(5)$~s/day, equivalent to $-1.78(6)\times10^{-7}$
s~s$^{-1}$, or $\dot\nu \simeq 4.3\times10^{-13}$~Hz~s$^{-1}$. This is
in agreement with the \cite{masha2005} spin-up measurement of the
second interval of their data, after the `break' at MJD $\sim51900$
($\dot\nu \simeq 4\times10^{-13}$~Hz~s$^{-1}$). This is quite
remarkable since there is almost a decade between the period
measurements.
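As a consistency check, the quoted rates follow directly from the two tabulated periods (values from Table~\ref{tab:log}; the elapsed time is computed from the listed start epochs):

```python
# Period measurements from Table 1 (NuSTAR observations 1 and 4)
P1, P4 = 643.68, 642.90          # s
dt_days = 50.59                  # 2014-05-04 10:01 -> 2014-06-24 00:06

Pdot_per_day = (P4 - P1) / dt_days     # ~ -0.0154 s/day
Pdot = Pdot_per_day / 86400.0          # ~ -1.78e-7 s/s
P_mean = 0.5 * (P1 + P4)
nudot = -Pdot / P_mean ** 2            # ~ 4.3e-13 Hz/s, since nu = 1/P
```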
\subsection{Pulse period long-term evolution}
\begin{figure}
\vbox{
\includegraphics[width=\columnwidth,bb=20 280 515 675,clip]
{fig02.ps}}
\caption{Evolution of the pulse period as a function of time. Blue
circles show period measurements published by \cite{masha2005} using
\textit{ASCA}, \textit{BeppoSAX} and \textit{XMM-Newton} data;
values obtained in this work are shown by black crosses and red
points for data from \textit{XMM-Newton} and \textit{NuSTAR},
respectively. Dashed lines represent linear fits to the period
evolution with two different spin-up rates (see text for the
details).}\label{fig:period}
\end{figure}
2RXP~J130159.6-635806\ regularly fell into the FOV of various X-ray telescopes thanks
to extensive observational campaigns dedicated to PSR\,B1259-63\, which is
located only $9.55'$ away. This allows us to investigate the long-term
evolution of the pulse period. We analyzed the
\textit{XMM-Newton} \citep{jansen2001} archival data from 2007-2011
using the procedures described by \cite{masha2005} and Science
Analysis Software (SAS) version 14.0.0. The list of selected
\textit{XMM-Newton} observations with corresponding period
measurements are shown in Table~\ref{tab:xmm}.
\begin{table}
\begin{center}
\caption{\textit{XMM-Newton} observations\label{tab:xmm}}
\begin{tabular}{cccccccccccc}
\tableline\tableline
Obs. & Start Time
& Exp. & Period \\
ID & [ UTC ] & [ ks ] & [ s ] \\
\tableline
0504550501 & 2007-07-08 12:01 & 14.1 & $681.20\pm0.15$ \\
0504550601 & 2007-07-16 19:59 & 55.3 & $680.06\pm0.10$ \\
0504550701 & 2007-08-17 08:38 & 11.4 & $680.87\pm0.20$ \\
0653640401 & 2011-01-06 17:37 & 19.9 & $665.65\pm0.20$ \\
0653640501 & 2011-02-02 18:59 & 26.3 & $666.50\pm0.20$ \\
0653640601 & 2011-03-04 05:59 & 12.9 & $664.95\pm0.20$ \\
\tableline
\end{tabular}
\end{center}
\end{table}
Fig.~\ref{fig:period} presents the long-term evolution of the period,
showing that the 2RXP~J130159.6-635806\ neutron star has undergone very strong and
steady spin-up during the last 20 years. The first available value of
spin period, measured in 1994, is 735~s \citep{masha2005}. The
most recent measurements by \textit{NuSTAR}, from 2014, show the period to be
around 643~s (Table~\ref{tab:log}). This means that during the last
$\sim20$ years the spin period decreased by $\sim92$ s, corresponding
to a mean spin-up rate of
$\sim1.4\times10^{-7}$~s~s$^{-1}$. Fig.~\ref{fig:period} shows a
change in the average spin-up rate, first reported by
\cite{masha2005}.
We approximated the long-term period evolution with a linear function
with one change in slope. A fit shows that the break occurred at MJD
$51300\pm217$ (mid 1999) with a spin-up rate before and after the
break of $(4.3\pm2.7)\times10^{-8}$ s s$^{-1}$ and
$(1.774\pm0.003)\times10^{-7}$ s s$^{-1}$, respectively. As seen from
the fit parameters, the spin-up rate becomes significantly higher
after the break. As noted above, the inset of Fig.~\ref{fig:period}
shows that the \textit{NuSTAR}\ data points fit the long-term spin-up rate
with remarkably high precision.
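A broken-linear fit of this kind can be sketched as follows. The data here are synthetic period measurements mimicking the observed trend (epochs, scatter, and slope values are illustrative assumptions, not the actual measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_line(t, t_b, P_b, s1, s2):
    """Piecewise-linear period history: slope s1 (s/day) before the
    break epoch t_b (MJD), slope s2 after; P_b is the period at t_b."""
    return np.where(t < t_b,
                    P_b + s1 * (t - t_b),
                    P_b + s2 * (t - t_b))

# Synthetic history: spin-up of ~4.3e-8 s/s (~3.7e-3 s/day) before
# MJD 51300 and ~1.77e-7 s/s (~1.53e-2 s/day) after it
rng = np.random.default_rng(1)
t = np.linspace(49500.0, 56800.0, 25)
P = broken_line(t, 51300.0, 728.0, -3.7e-3, -1.53e-2) \
    + rng.normal(0.0, 0.1, t.size)

popt, pcov = curve_fit(broken_line, t, P,
                       p0=[51000.0, 730.0, -4e-3, -1.4e-2])
t_break, P_break, slope1, slope2 = popt
```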
Similar behavior was observed for the X-ray pulsar GX~1+4, which
showed steady spin up for more than a decade \citep[see,
e.g.,][]{1997ApJS..113..367B,2012A&A...537A..66G}. However, GX~1+4
belongs to the subclass of accreting X-ray pulsars known as symbiotic
X-ray binaries (SyXBs). For BeXRPs, persistent sources
typically demonstrate pulse periods that are relatively stable.
Examples include the population of Be systems in the Small Magellanic
Cloud \citep{klus2014} and the well-known low luminosity Galactic
system X~Persei \citep{lut2012}. Transient BeXRPs show strong spin-up
during Type I and Type II outbursts \citep{1997ApJS..113..367B} with
significant spin-down episodes in between \citep[see,
e.g.,][]{postnov2015}. Therefore, 2RXP J130159.6-635806 is a unique
source among the BeXRPs because it demonstrates steady and high
long-term spin-up with a relatively low and stable luminosity.
\subsection{Pulse profile and pulsed fraction}
\begin{figure}
\vbox{
\includegraphics[width=\columnwidth,bb=90 140 500 710,clip]
{fig03.ps}}
\caption{\textit{NuSTAR} 2RXP~J130159.6-635806\ pulse profiles in
three energy bands (3--10, 10--20 and 20--40 keV), normalized by the
mean flux. The lower panel shows two hardness ratios, (10--20 keV)/(3--10 keV)
and (20--40 keV)/(10--20 keV) in blue and red,
respectively. The profiles are shown twice for clarity.}\label{fig:pprof}
\end{figure}
Pulsar pulse profiles and their evolution with luminosity and
energy band depend on the geometrical and physical properties of the
emitting regions in the vicinity of the neutron star. In
Fig.~\ref{fig:pprof}, the \textit{NuSTAR} pulse profiles of 2RXP~J130159.6-635806\ are shown in three different energy
bands: 3--10, 10--20 and 20--40 keV. The lower panel shows ``soft''
((10--20)/(3--10) keV) and ``hard'' ((20--40)/(10--20) keV) hardness
ratios.
At all energies, the pulse profile can roughly be divided into one
main peak at phases 0.0--0.5 and two smaller peaks at phases
0.5--0.75 and 0.75--1.0. The main feature that changes with energy
is the depth of the minimum at phase 0.0. As shown in the lower
panel of Fig.~\ref{fig:pprof}, while the ``hard'' hardness
ratio is almost constant, the ``soft'' hardness ratio shows a maximum at phase
0.0 due to an increase in the depth of the minimum at 3--10 keV. Such
behavior is caused by differences in the source
spectrum with pulse phase (see Section 4).
\begin{figure}
\vbox{
\includegraphics[width=\columnwidth,bb=20 275 515 675,clip]
{fig04.ps}}
\caption{The pulsed fraction (red circles) and relative $RMS$
(blue squares) of 2RXP\,J130159.6-635806 obtained with \textit{NuSTAR}
as a function of energy.}\label{fig:ppfrac}
\end{figure}
Fig.~\ref{fig:ppfrac} shows the pulsed fraction as a function of
energy. The pulsed fraction is defined as
$\mathrm{PF}=(I_\mathrm{max}-I_\mathrm{min})/(I_\mathrm{max}+I_\mathrm{min})$,
where $I_\mathrm{max}$ and $I_\mathrm{min}$ are the maximum and
minimum intensities in the pulse profile, respectively. Defined in
this way, the pulsed fraction is very high (about 80\%). An
alternative way to characterize the pulsed fraction is the relative
root mean square ($RMS$), which can be calculated using the following
equation:
\begin{equation}\label{rms}
RMS=\frac{\Big(\frac{1}{N}\sum_{i=1}^N(P_i-<P>)^2\Big)^{\frac{1}{2}}}{<P>},
\end{equation}
where $P_i$ is the background-corrected count rate in a given bin of
the pulse profile, $<P>$ is the count rate averaged over the pulse
period, and $N$ is the total number of phase bins in the profile ($N=30$
in our analysis). The $RMS$ deviation obtained in this way reflects
the variability of the source pulse profile in a manner that is not
sensitive to outliers like the narrow features seen in the profile of
2RXP~J130159.6-635806. Therefore, this quantity has a value of around 30\%, which is
much lower than the classically determined pulsed fraction and is also
independent of the energy band (see Fig.~\ref{fig:ppfrac}).
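Both quantities are straightforward to compute from a folded profile. The toy profile below (a sinusoid with one narrow dip, an assumed illustration rather than the measured profile) shows why a narrow feature inflates PF but barely moves the $RMS$:

```python
import numpy as np

def pulsed_fraction(profile):
    """PF = (Imax - Imin) / (Imax + Imin) from a folded pulse profile."""
    p = np.asarray(profile, dtype=float)
    return (p.max() - p.min()) / (p.max() + p.min())

def rms_fraction(profile):
    """Relative RMS of the profile (Eq. 1): the standard deviation of
    the phase bins divided by the mean count rate."""
    p = np.asarray(profile, dtype=float)
    return np.sqrt(np.mean((p - p.mean()) ** 2)) / p.mean()

# Sinusoidal profile with one narrow dip, N = 30 bins as in the text
phase = np.linspace(0.0, 1.0, 30, endpoint=False)
prof = 1.0 + 0.3 * np.sin(2 * np.pi * phase)
prof[0] = 0.2   # the narrow dip drives PF up but barely moves the RMS

pf = pulsed_fraction(prof)    # ~ 0.73, set almost entirely by the dip
rms = rms_fraction(prof)      # ~ 0.26, dominated by the broad sinusoid
```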
It is interesting to note that in contrast to the majority of X-ray
pulsars \citep{lut09}, 2RXP~J130159.6-635806\ does not show an increase in the pulsed
fraction at higher energies. Such uncharacteristic behavior was
previously observed for another persistent BeXRP -- RX\,J0440.9+4431
\citep{2012MNRAS.421.2407T}. On the other hand, Fig.~\ref{fig:pprof}
shows that the pulsed fraction increases somewhat with energy if
$I_\mathrm{min}$ is defined from the pulse plateau rather than
from the pulse minimum. In other words, the peak-to-plateau difference
slightly grows with energy.
\subsection{Power Spectrum}
\label{section:powerspec}
The observed 20-year strong and steady spin-up of 2RXP~J130159.6-635806\ reveals the
existence of a long-term accelerating torque, which indicates that the
binary interaction must lead to regular accretion over a decade-long
time scale, although this could certainly be episodic (e.g., at
periastron passages). The torque can be transferred by matter
accreted from either the disc around the neutron star or a stellar
wind from the optical counterpart. Unfortunately, there is no
strong observational evidence allowing us to distinguish between
these two different accretion channels. In both scenarios, this
process is defined mainly by the mass accretion rate and the
magnetic field strength \citep[see, e.g.,][]{1979ApJ...234..296G}.
\begin{figure}
\includegraphics[width=\columnwidth,bb=55 275 550 675,clip]{fig05.ps}
\caption{The noise power spectrum of 2RXP~J130159.6-635806\ obtained with the
\textit{NuSTAR} data in observation 30001032002 (red
histogram; the 4th observation). The solid line shows a broken power law model with the fitted
value of the break frequency, 0.0066 Hz (vertical dotted line). The power law slope
above the break is fixed to --2. The position of the pulse frequency
(0.0015 Hz) is shown by the vertical dashed line.}\label{fig:pds}
\end{figure}
Due to the unknown distance of 2RXP~J130159.6-635806, the luminosity and mass
accretion rate are highly uncertain; the magnetic field is also
unknown since no cyclotron line is found in the energy spectrum
(Sect.~\ref{section:spec}). However, some qualitative conclusions
about the interaction between the accretion disk and the neutron star
magnetosphere can be made from the noise power spectrum of the X-ray
pulsar.
According to the ``perturbation propagation'' model, stochastic
variations of viscous stresses in the accretion disc cause variations
of the mass accretion rate
\citep{1997MNRAS.292..679L,2001MNRAS.321..759C}. This, in turn,
results in a specific shape of the Power Density Spectrum (PDS) of the
emerging light curve. Namely, it will appear as a power law with slope
--1 to --1.5 (but the exact value is not well established for X-ray
pulsars) up to the break frequency, which is the highest frequency
that can be generated in the accretion disk
\citep{1997MNRAS.292..679L}.
In the case of a highly magnetized neutron star, this maximal
frequency is limited by the Keplerian frequency at the magnetospheric
radius, above which one can expect a cutoff in the source PDS
\citep{2009A&A...507.1211R}. If the source stays in spin equilibrium
(corotation), the cutoff frequency will coincide with the spin
frequency of the pulsar, whereas, in the case of spin-up (increased
mass accretion rate), the magnetosphere will be squeezed and
additional noise will be generated at higher frequencies. If the mass
accretion rate is known (i.e., the distance to the source is known),
this property of the PDS can be used to estimate the magnetic field
strength of the neutron star
\citep{2009A&A...507.1211R,2012MNRAS.421.2407T,2014A&A...561A..96D}.
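The geometry behind this argument can be illustrated numerically. The sketch below assumes a canonical $1.4\,M_\odot$ neutron star and uses the measured spin and break frequencies to compare the corotation radius with the Keplerian radius implied by the break:

```python
import numpy as np

G = 6.674e-8              # gravitational constant, cgs
M_NS = 1.4 * 1.989e33     # assumed neutron-star mass, g

def kepler_radius(freq):
    """Radius (cm) where the Keplerian orbital frequency equals freq (Hz)."""
    return (G * M_NS / (2.0 * np.pi * freq) ** 2) ** (1.0 / 3.0)

P_spin = 643.0            # s  -> spin frequency ~0.0016 Hz
f_break = 0.0066          # Hz, the measured PDS break

r_co = kepler_radius(1.0 / P_spin)   # corotation radius, ~1.2e10 cm
r_br = kepler_radius(f_break)        # break radius, ~4.8e9 cm

# The break radius lies well inside the corotation radius, i.e. the
# magnetosphere is squeezed inward, as argued in the text.
```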
The appearance of the break in the PDS does not necessarily indicate
that accretion is from a disk. There are wind-accreting sources in
spin equilibrium also showing a break in their PDSs around the pulse
frequency \citep{1993ApJ...411L..79H}. However, the evolution of the
PDS shape as a function of mass accretion rate in such systems is not
well studied.
In Fig.~\ref{fig:pds}, we show the PDS of 2RXP~J130159.6-635806\ obtained with the
\textit{NuSTAR} data in the 4th observation after subtracting the
pulse profile folded with the measured period from the light
curve. The solid line represents the fitting model in the form of a
broken power law. The measured break frequency is 0.0066~Hz (shown by
dotted line), and it is clear that it is shifted towards higher
frequencies relative to the spin frequency in this observation
(0.0015~Hz; shown by dashed vertical line). The power-law slope above
the break frequency is fixed at --2 \citep{2009A&A...507.1211R}. The
best-fit value of the slope below the break is $-0.33$. Given the
steady persistent luminosity, we conclude that the spin-up observed
during the last $\sim$$20$ years is caused by a long-term mass
accretion rate that is high enough to squeeze the magnetosphere inside
the corotation radius. This finding confirms the uniqueness of 2RXP~J130159.6-635806\
among the other X-ray pulsars in binary systems with Be companions.
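The broken power-law fit can be sketched as follows. The data here are a synthetic PDS built with the quoted break frequency and slopes; the actual fit was performed on the \textit{NuSTAR}\ light curve, so the noise model and numbers are purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_pl(f, norm, f_break, slope_lo, slope_hi=-2.0):
    """Broken power law; the slope above the break is fixed to -2,
    as in the fit described in the text."""
    return np.where(f < f_break,
                    norm * (f / f_break) ** slope_lo,
                    norm * (f / f_break) ** slope_hi)

# Synthetic PDS mimicking the measured shape (illustrative numbers)
rng = np.random.default_rng(2)
f = np.logspace(-3.5, -1.0, 80)
pds = broken_pl(f, 1.0, 0.0066, -0.33) * rng.lognormal(0.0, 0.1, f.size)

# Fit in log space so all frequencies carry comparable weight
log_model = lambda f, n, fb, s: np.log(broken_pl(f, n, fb, s))
popt, _ = curve_fit(log_model, f, np.log(pds),
                    p0=[1.0, 5e-3, -0.5],
                    bounds=([0.1, 1e-3, -2.0], [10.0, 1e-1, 0.0]))
norm_fit, f_break_fit, slope_fit = popt
```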
\section{Spectral analysis}
\label{section:spec}
We used {\it nuproducts}, a part of the NuSTARDAS package, to extract
source and background spectra and to generate \textit{NuSTAR}\ response matrix
(RMF) and effective area (ARF) files for a point source. In our
analysis we utilized the most recent calibration database (CALDB),
version 20150316. The extracted spectra were then grouped to have more
than 20 counts per bin using the {\it grppha} tool from the HEAsoft
6.15.1 package. We fit the \textit{NuSTAR}\ spectra using {\sc xspec} version
12.8.1 \citep{xspec}.
\subsection{Pulse phase-averaged spectroscopy}
\label{section:spe}
According to measurements from the \textit{ASCA} and
\textit{XMM-Newton} observatories, the spectrum of 2RXP~J130159.6-635806\ is
characterized by a moderate absorption value $N_{\rm
H}=(2.48\pm0.07)\times10^{22}$ cm$^{-2}$, which is stable over a
dozen years \citep{masha2005}. This value was obtained by
approximating the source spectra as a power law model modified by
interstellar absorption ({\sc wabs} model in the {\sc XSPEC} package).
Note that it is just slightly higher than the value of the
interstellar hydrogen absorption $(1.7-1.9)\times 10^{22}$ cm$^{-2}$
determined by \cite{dickey1990} in the direction of 2RXP~J130159.6-635806.
\begin{figure}
\vbox{
\includegraphics[width=\columnwidth,bb=48 225 548 690,clip]
{fig06.ps}}
\caption{ {\it Top panel (a):} Pulse phase-averaged spectrum of 2RXP~J130159.6-635806\
obtained with \textit{NuSTAR} during the 4th observation. Red points
and blue crosses show module A and B data, respectively. The black
line represents the best fit by the `highecut' model with an iron
emission line at 6.4~keV. {\it Bottom panels (b,c,d)} show
corresponding residuals from the models `cutoff', `highecut', and
`highecut' with the 6.4~keV line, respectively (see text and
Table~\ref{tab:spectr}).}\label{fig:spectr_onaxis}
\end{figure}
In the first three \textit{NuSTAR}\ observations 2RXP~J130159.6-635806\ serendipitously
appeared highly offset from the optical axis, and at the large
off-axis angles the absolute flux measurements can be affected by
inaccurate PSF positioning and subsequent inappropriate weighting in
the spectrum extraction procedure. This leads to large systematics for
the extracted spectra. Therefore, we restrict detailed analysis of
the source spectrum to the data obtained in the 4th (on-axis)
observation, which provides high-quality data.
In general, the spectrum of 2RXP~J130159.6-635806\ has a shape which is typical for
accreting pulsars in binary systems, showing a hard spectrum with an
exponential cutoff at high energies. Fig.~\ref{fig:spectr_onaxis} (a)
presents the phase-averaged spectrum approximated with the most
suitable spectral model determined below. We initially modeled data
with a cutoff power-law model ({\sc cutoffpl} in the {\sc XSPEC}
package)
\begin{equation}
AE^{-\Gamma}e^{-\frac{E}{E_{\rm fold}}},
\end{equation}
where $\Gamma$ is the photon index, $E_{\rm fold}$ is the
characteristic energy of the cutoff (e.g., the folding energy), and
$A$ is a normalization. This model was modified by interstellar
absorption in the form of the {\sc wabs} model. As the working energy
range of the {\it NuSTAR} observatory begins at 3 keV, it is not very
sensitive to measuring low absorption columns. Therefore, in the
following analysis, the interstellar absorption was fixed to the value
$N_{\rm H}=2.48\times10^{22}$ cm$^{-2}$ measured by
\citet{masha2005}. Note that we simultaneously fitted spectra obtained
by both {\it NuSTAR} modules. To take into account the uncertainty in
their relative calibrations, which may be even more of a concern for the
observations where the source is highly offset from the optical axis,
we added a cross-calibration constant between the modules. The
fitting parameters for different data sets are shown in
Table~\ref{tab:spectr}.
The `cutoff' model approximates the source spectrum relatively well
with $\chi^{2}=992.18$ for 921 degrees-of-freedom (\textit{dof}; see
Table~\ref{tab:spectr}). Nevertheless, a wave-like structure is
clearly seen in the residual panel Fig.~\ref{fig:spectr_onaxis} (b).
To improve the quality of the fit, we applied another continuum model
in the form of a power-law multiplied by a high-energy cutoff ({\sc
highecut} model in the {\sc XSPEC} package; \citealt{white83}).
This model can be written as:
\begin{equation}
AE^{-\Gamma}\times \left\{ \begin{array}{ll}
1 & \mbox{($E \leq E_{\rm cut}$)}\\
e^{-(E-E_{\rm cut})/E_{\rm fold}} & \mbox{($E > E_{\rm cut}$),} \end{array} \right.
\end{equation}
where $E_{\rm cut}$ is the energy where the cutoff starts. As in the
previous case, we fixed the interstellar absorption value. Residuals
of modeling the source spectrum with `highecut' are presented in
Fig.~\ref{fig:spectr_onaxis} (c), and best fit parameters are listed
in Table~\ref{tab:spectr}. The `highecut' model significantly improves
the quality of the fit ($\chi^{2}=929.72$ for 920 \textit{dof}). The
high-energy cutoff value $E_{\rm cut}$ is found to be
$6.48_{-0.15}^{+0.22}$~keV, which is significantly lower than the
value of $\sim 25$~keV reported by \cite{masha2005} using simultaneous
\textit{XMM-Newton} and \textit{INTEGRAL} data. The apparent
discrepancy is probably due to the lower statistical quality of the
\textit{INTEGRAL} data and the gap between the energy bands covered by
the \textit{XMM-Newton} and \textit{INTEGRAL} observatories. An
additional possible problem is that the cutoff energy $E_{\rm
cut}=6.48$ keV is very close to the energy of the iron fluorescent
line at 6.4 keV. The simplistic `highecut' model might hide the
presence of the emission line in the source spectrum. Observed
deviations of the measured spectrum from the model near this energy
argue in favor of this possibility.
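For reference, the two continuum shapes can be written as simple functions. This is a minimal sketch with the normalization set to unity and absorption omitted; the parameter values in the usage example are taken from the `HI' fit of the 4th observation in Table~\ref{tab:spectr}:

```python
import numpy as np

def cutoffpl(E, A, Gamma, E_fold):
    """'cutoffpl' continuum: A * E**(-Gamma) * exp(-E / E_fold)."""
    return A * E ** (-Gamma) * np.exp(-E / E_fold)

def powerlaw_highecut(E, A, Gamma, E_cut, E_fold):
    """Power law times the 'highecut' factor: unity below E_cut,
    an exponential roll-off above it (White et al. 1983)."""
    pl = A * E ** (-Gamma)
    return np.where(E <= E_cut,
                    pl,
                    pl * np.exp(-(E - E_cut) / E_fold))

# Evaluate with the best-fit `HI' parameters of the 4th observation:
# Gamma = 1.32, E_cut = 6.48 keV, E_fold = 16.78 keV
E = np.array([3.0, 6.48, 10.0, 30.0, 60.0])   # keV
model = powerlaw_highecut(E, 1.0, 1.32, 6.48, 16.78)
```

Note that the `highecut' factor is continuous at $E_{\rm cut}$ but has a kink there, which is one reason a cutoff energy close to 6.4 keV can mimic or hide a narrow line.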
To investigate this issue, we added a Gaussian emission line to the
`highecut' model, fixing its energy to 6.4 keV and width to $0.1$~keV,
allowing its normalization to be a free parameter. This resulted in
an additional improvement of the fit to $\chi^{2}=916.97$ for 919
\textit{dof} for a normalization of $(2.31\pm0.70)\times10^{-5}$ ph
cm$^{-2}$ s$^{-1}$. The corresponding equivalent width of the line is
$EW=44_{-32}^{+48}$ eV ($3\sigma$ error). We determined the
significance of the line using the {\sc XSPEC} script {\it simftest}
with $4\times10^4$ trials, and found that the probability of the null
hypothesis (i.e., that no line is required by the data) is
$3\times10^{-3}$, which corresponds to a $\sim3\sigma$ line detection,
assuming a normal distribution.
line are shown in Fig.~\ref{fig:spectr_onaxis} (d). The model itself,
together with spectral data points, is shown in
Fig.~\ref{fig:spectr_onaxis} (a).
\begin{table*}
\begin{center}
\caption{Parameters for the 2RXP~J130159.6-635806\ phase-averaged spectral analysis
based on \textit{NuSTAR}\ observations.\label{tab:spectr}}
\begin{tabular}{ccllccccccccc}
\tableline\tableline
\multicolumn{1}{c}{Seq.} &
\multicolumn{1}{c}{Model\tablenotemark{a}} &
\multicolumn{1}{c}{Const\tablenotemark{b}} &
\multicolumn{1}{c}{Photon} &
\multicolumn{1}{c}{$E_{\rm cut}$} &
\multicolumn{1}{c}{$E_{\rm fold}$} &
\multicolumn{2}{c}{Flux$^{\rm c}_{2-10{\rm\ keV}}$} &
$\chi_{\nu}^2$ (dof) \\
Num.& & &\multicolumn{1}{c}{index} & \multicolumn{1}{c}{[ keV ]} & \multicolumn{1}{c}{[ keV ]}
& FPMA & FPMB & & & \\
\tableline
1 & HI & $1.24\pm0.02$ & $1.40_{-0.14}^{+0.07}$ & $7.47_{-1.55}^{+0.86}$ & $17.43_{-1.29}^{+1.79}$ & $2.16_{-0.38}^{+0.12}$ & $2.68_{-0.48}^{+0.13}$ & 1.06 (458) \\
2 & HI & $0.61\pm0.04$ & $1.37$ (frozen) & $5.72_{-1.90}^{+0.65}$ & $15.49_{-1.73}^{+2.62}$ & $3.29_{-1.47}^{+0.26}$ & $2.01_{-0.95}^{+0.04}$ & 0.92 (124) \\
3 & HI & $0.92\pm0.03$ & $1.37$ (frozen) & $5.68_{-0.62}^{+0.53}$ & $17.48_{-1.35}^{+1.45}$ & $2.11_{-0.13}^{+0.08}$ & $1.95_{-0.10}^{+0.06}$ & 0.81 (264) \\
4 & CU & $1.00\pm0.01$ & $1.04\pm0.03$ & & $12.62\pm0.48$ & $3.79_{-0.10}^{+0.05}$ & $3.79_{-0.11}^{+0.05}$ & 1.07 (921) \\
& HI & $1.00\pm0.01$ & $1.32\pm0.03$ & $6.48_{-0.15}^{+0.22}$ & $16.78_{-0.56}^{+0.68}$ & $3.79_{-0.10}^{+0.05}$ & $3.79_{-0.11}^{+0.05}$ & 1.01 (920) \\[0.5mm]
& HI+Fe\tablenotemark{d} & $1.00\pm0.01$ & $1.37\pm0.04$ & $6.94_{-0.42}^{+0.50}$ & $17.96\pm0.87$ & $3.79_{-0.10}^{+0.05}$ & $3.79_{-0.11}^{+0.05}$ & 1.00 (919) \\[0.5mm]
\tableline
\end{tabular}
\tablenotetext{a}{The {\sc xspec} spectral model used. `CU': {\sc
wabs*cutoffpl}, `HI': {\sc wabs*powerlaw*highecut}, `HI+Fe': {\sc
wabs(powerlaw*highecut+gau)}.} \tablenotetext{b}{Constant factor
of the FPMB spectrum relative to FPMA.} \tablenotetext{c}{The
absorbed flux in units of $10^{-11}$erg~s$^{-1}$~cm$^{-2}$.} \tablenotetext{d}{The
corresponding 6.4 keV iron line parameters are: $\sigma=0.1$~keV
(fixed), normalization $(2.31\pm0.70)\times10^{-5}$ ph cm$^{-2}$
s$^{-1}$, and line equivalent width $EW=44_{-32}^{+48}$ eV
($3\sigma$ error).}
\end{center}
\end{table*}
Considering the moderate spectral resolution of the
\textit{NuSTAR}\ observatory and the low significance of the detected iron
line, we investigated the possibility of the presence of an iron
fluorescence line in the \textit{XMM-Newton} spectrum of 2RXP~J130159.6-635806. As
mentioned above, the X-ray pulsar 2RXP~J130159.6-635806\ was observed with
\textit{XMM-Newton} many times during programs studying PSR\,B1259-63.
For our purposes, we chose the two observations with the longest
exposures -- ObsID 0092820301 ($\sim$41.2 ksec) and ObsID 0504550601
($\sim$55.3 ksec). As for the \textit{NuSTAR}\ observations, we modeled the
spectra of 2RXP~J130159.6-635806\ including the Gaussian line at 6.4 keV. For both of
the \textit{XMM-Newton} observations, we did not find a significant
improvement in the fits when the iron line was added and obtained a
conservative upper limit ($3\sigma$) for the equivalent width of the
iron line of 110 eV, which is consistent with the \textit{NuSTAR}\ results.
Despite the fact that the first three \textit{NuSTAR}\ observations were made
at large offset angles, and the corresponding statistics are
significantly lower than for the 4th observation, some useful spectral
information can still be extracted. For these observations we used the
same `highecut' model. Due to low statistics and a poor fit, we fixed
the power-law slope in the 2nd and 3rd data sets to the value
$\Gamma=1.37$ measured in the 4th observation. As seen from Table~\ref{tab:spectr},
where the best fit parameters are listed, the principal parameters --
power-law slope ($\Gamma$), cut-off energy ($E_{\rm cut}$), and
folding energy ($E_{\rm fold}$) -- of the first three high-offset
observations are in general agreement with those of the 4th (on-axis)
observation modeled with `highecut'.
The estimated $2-10$~keV flux of 2RXP~J130159.6-635806\ is about
$3\times10^{-11}$~erg~s$^{-1}$~cm$^{-2}$\ during our observations, which is in
agreement with the value of $(2-3)\times10^{-11}$~erg~s$^{-1}$~cm$^{-2}$\ measured by
\cite{masha2005}. An absence of strong transient activity from 2RXP~J130159.6-635806\
on long time scales is confirmed with the RXTE/ASM and Swift/BAT
instruments. Note that \cite{masha2005} reported on a flaring episode,
but the flux of the flare was relatively low, rising only by a factor
of a few (up to $\sim10^{-10}$~erg~s$^{-1}$~cm$^{-2}$).
Finally, in order to check for the possible presence of a cyclotron
absorption line in the source spectrum, we modified the best-fit model
{\sc (wabs*(powerlaw*highecut+gau))} by including an absorption feature in the
form of a Lorentzian optical depth profile \citep[{\sc cyclabs} model
in {\sc XSPEC};][]{1990Natur.346..250M}. The search procedure was
performed following the prescription from
\cite{2005AstL...31...88T}. Namely, we varied the energy of the line
over the range between 5 and 50 keV with 3-keV steps and the line
width between 2 and 12 keV with 2-keV steps, leaving the line depth as
a free parameter. We did not find strong evidence for a cyclotron
line in the spectrum of 2RXP~J130159.6-635806 (no trials gave a significance higher
than $\sim2\sigma$).
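For concreteness, the grid over which the {\sc cyclabs} parameters were stepped can be sketched as follows (a schematic in Python; the actual fits were performed in {\sc XSPEC}, and no fitting is emulated here):

```python
from itertools import product

# Grid of trial cyclotron-line parameters as described in the text:
# line energy from 5 to 50 keV in 3-keV steps, line width from 2 to
# 12 keV in 2-keV steps; the line depth is left free at each fit.
energies = range(5, 51, 3)   # keV: 5, 8, ..., 50
widths = range(2, 13, 2)     # keV: 2, 4, ..., 12

# Each (energy, width) pair corresponds to one trial fit in XSPEC
# with the cyclabs component added to the best-fit continuum model.
trials = list(product(energies, widths))
print(len(trials))           # 96 trial fits in total
```

None of these trials yielded a detection above the $\sim2\sigma$ level.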
\subsection{Pulse phase-resolved spectroscopy}
\label{section:phase}
\begin{figure}
\vbox{
\includegraphics[width=\columnwidth, bb=55 220 548 691,clip]
{fig07.ps}}
\caption{Parameters of the best fit model {\sc (wabs*(powerlaw*highecut+gau))}
as a function of pulse phase in the 4th (on-axis) observation. {\it Top:}
the black histogram shows the pulse profile in the entire energy
range (duplicated in the other two panels). {\it Middle and bottom,
respectively:} cutoff energy ($E_{\rm cut}$) and folding energy
($E_{\rm fold}$).}\label{fig:spectr_resol}
\end{figure}
In order to study the evolution of the source spectrum over the pulse
period, we performed pulse phase-resolved spectroscopy using the data
from the 4th observation. The period was divided into 10 phase bins
with zero phase coinciding with the main minimum in the pulse profile.
According to the pulse phase-averaged spectral analysis, the fitting
model was chosen in the form of {\sc
(wabs*(powerlaw*highecut+gau))}. However, due to much lower
statistics, the photon index was fixed at the value from the averaged
spectrum ($\Gamma=1.37$). We chose to fix this parameter because it was
virtually constant over the pulse in our preliminary analysis of the
same data. The $N_{\rm H}$ value is not well constrained by the
\textit{NuSTAR}\ data; however we checked that it is consistent with being
constant in phase, and we fixed it as well.
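The phase binning itself is standard; a minimal sketch (the reference epoch \texttt{t0} below is a placeholder, not a measured quantity):

```python
from math import floor

def phase_bin(t, t0, period, nbins=10):
    """Assign an event time t to one of nbins pulse-phase bins, with
    phase zero tied to the reference epoch t0 (the main minimum of the
    pulse profile in the analysis above)."""
    phase = ((t - t0) / period) % 1.0    # pulse phase in [0, 1)
    return int(floor(phase * nbins))

# Illustrative values only: P ~ 640 s as measured for the source;
# t0 is a hypothetical epoch of the pulse minimum.
P, t0 = 640.0, 0.0
print(phase_bin(t0 + 0.05 * P, t0, P))   # early in the pulse -> bin 0
print(phase_bin(t0 + 0.95 * P, t0, P))   # just before the next minimum -> bin 9
```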
The results are shown in Fig.~\ref{fig:spectr_resol}. The average
pulse profile of 2RXP~J130159.6-635806\ across the entire \textit{NuSTAR} energy range
is presented in the upper panel. The lower panels demonstrate the
behavior of the two free spectral parameters: the cut-off energy
($E_{\rm cut}$) and the folding energy ($E_{\rm fold}$). The cut-off
energy is quite stable over the pulse except at the pulse minimum
where its value increases by approximately a factor of two. The
folding energy demonstrates an apparent correlation with the pulse
intensity, which is probably caused by an increase in spectral hardness
around the pulse maximum. Such behavior of the spectral parameters
over the pulse can explain the corresponding behavior of the hardness
ratios constructed from the pulse profiles in different energy bands,
previously shown in Fig.~\ref{fig:pprof}. The $(20-40)/(10-20)$~keV
hardness ratio demonstrates a correlation with the pulse intensity, and
the $(10-20)/(3-10)$~keV ratio peaks at the pulse minimum, right at the
phase where $E_{\rm cut}$ shifts from $\sim7$ to
$\sim16$~keV. Finally, it is necessary to note that the observed
behavior of spectral parameters with the pulse phase can be caused by
both physical and artificial reasons (in particular, limitations of
the available data, the adopted spectral model, and the choice of which
model parameters were fixed). More data are required to confidently
constrain all the spectral parameters and trace their behavior with
the pulse phase.
\section{Summary}
We summarize the results of spectral and timing analysis of
serendipitous and dedicated \textit{NuSTAR}\ observations of the accreting
X-ray pulsar 2RXP~J130159.6-635806\ in 2014 May-June.
\begin{itemize}
\item The source demonstrates strong pulsations with a period of
$\sim640$~s. The $\sim80\%$ pulsed fraction is measured to be
constant with energy up to 40~keV.
\item The pulse profile is virtually independent of energy and can
roughly be divided into one main peak at phases 0.0--0.5 and two smaller
peaks at phases 0.5--0.75 and 0.75--1.0. The only feature that is
changing with energy is the depth of the main minimum at phase 0.
\item The measured period shows a significant change over the $\sim50$~day
time span of the \textit{NuSTAR}\ observations, which is consistent with a
spin-up rate of $\dot\nu \simeq 4.3\times10^{-13}$~Hz~s$^{-1}$. This
rate is in remarkable agreement with measurements made by
\cite{masha2005} almost a decade ago.
\item Combining the results of \cite{masha2005}, the \textit{XMM-Newton}
data taken in 2007 and 2011, and the \textit{NuSTAR}\ observations, we show a
long-term spin-up trend of the source during the last 20 years. During the
last 15 years, the source has undergone strong and steady spin-up at
the rate of $\dot P = (1.774\pm0.003)\times10^{-7}$ s s$^{-1}$.
\item The power density spectrum of the source shows a clear break at
0.0066~Hz, which is higher than the spin frequency of
0.0015~Hz. This fact, together with the steady persistent luminosity
of the source, implies that the spin-up observed during the last
20~years is likely caused by a long-term mass accretion rate high enough
to squeeze the magnetosphere inside the corotational radius, which
makes 2RXP~J130159.6-635806\ unique among the other X-ray pulsars in binary systems
with Be companions.
\item The phase-averaged spectrum of the source has a typical shape
for accreting neutron stars in binary systems, in particular, for
X-ray pulsars, and demonstrates an exponential cutoff at high
energies. Our best-fit model contains an absorbed power-law with
$\Gamma\simeq1.4$ modified by a high-energy spectral drop with a
cut-off energy of $E_{\rm cut}\simeq7$~keV and a folding energy of
$E_{\rm fold}\simeq18$~keV. The spectrum also shows $\sim3\sigma$
evidence for an iron 6.4~keV emission line, with an improvement in
the fit when it is included.
\item The observed flux corresponds to an {\it unabsorbed} luminosity
in the range $\sim(8-26)\times10^{34}$~erg~s$^{-1}$, assuming a source
distance between 4 and 7 kpc \citep{masha2005}. These luminosity
values imply that the source is a member of the subclass of
persistent low luminosity Be systems \citep{2011Ap&SS.332....1R}.
\item The phase-resolved spectroscopy reveals some changes in the source
spectrum with pulse phase. The cut-off energy is very stable over the
pulse except at zero phase, where its value increases by a factor of
two. The apparent correlation of the folding energy with the pulse
intensity is attributed to the change in hardness of the source
spectrum with pulse phase.
\end{itemize}
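As a consistency check on the numbers summarized above (the relation $|\dot P| = \dot\nu/\nu^2 = \dot\nu P^2$ is standard; the agreement is only approximate, since $\dot\nu$ and $\dot P$ were measured over different time baselines):

```python
# Numbers from the summary above
P = 640.0                # pulse period, s
nu_dot = 4.3e-13         # spin-up rate from the NuSTAR observations, Hz/s

# |P_dot| = nu_dot / nu^2 = nu_dot * P^2
P_dot = nu_dot * P ** 2
print(P_dot)             # about 1.76e-7 s/s, close to the 1.774e-7 quoted above

# Spin frequency vs. the 0.0066 Hz break in the power density spectrum
nu_spin = 1.0 / P
print(nu_spin)           # about 1.6e-3 Hz, i.e. the 0.0015 Hz quoted above
```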
\acknowledgments
This research has made use of data obtained with \textit{NuSTAR}, a project
led by Caltech, funded by NASA and managed by NASA/JPL, and has
utilized the NUSTARDAS software package, jointly developed by the ASDC
(Italy) and Caltech (USA). This research has also made use of data
obtained with XMM-Newton, an ESA science mission with instruments and
contributions directly funded by ESA Member States. AL and ST
acknowledge support from Russian Science Foundation (grant
14-12-01287).
{\it Facilities:} \facility{NuSTAR}, \facility{XMM-Newton}.
\section{Introduction}
Recent advances in computer power are leading to
demands for extending the frontier of
control technologies to cover wider classes of systems.
Stimulated by such a trend, this paper focuses on
the class of discrete-time linear systems whose state transitions are
determined randomly,
also called discrete-time
linear random dynamical systems in the field of analytical dynamics \cite{Arnold-book}.
Randomness is fairly common in various kinds of phenomena
(e.g., packet interarrival times in networked
systems \cite{Paxson-IEEETN95} and failure occurrences in distributed systems \cite{Finkelstein-book}),
and discarding the information about it in modeling
might lead to the situation where the controllers designed with
the resulting models do not achieve the expected
performance for the original objectives.
Hence, if the randomness behind the real objects is essential, it
should also be modeled and exploited in controller synthesis.
Behaviors and properties of random dynamical systems have been
extensively studied, e.g., in \cite{Yu-PRL90,Arnold-DSS98,Wang-JDE12}.
However, studies dealing with such systems in control
problems are still rare.
Our ultimate goal is to develop a versatile practical framework for
controlling such systems by restricting our attention to the
linear case.
In particular, we aim at developing a systematic analysis
and synthesis approach based on linear matrix inequality (LMI) conditions,
as in the existing studies on deterministic linear systems \cite{Boyd-book,Scherer-TAC97,Svariable-Ebihara-book}.
As a step toward such a goal, this paper
first shows the equivalence of some stability notions and
derives the Lyapunov inequality condition for
stability of discrete-time linear random dynamical systems under
some key assumptions.
Our Lyapunov inequality characterizes stability of
such systems in a necessary and sufficient sense
and will be a basis for further advanced analysis and synthesis using LMIs.
The state transition of our discrete-time linear random dynamical
systems can
be seen as
determined by an underlying stochastic process, and we assume in this paper
that the process is independent and identically distributed (i.i.d.)
with respect to the discrete time (hence the process naturally becomes
stationary and ergodic \cite{Knill-book}); this assumption will play
a key role in showing the above stability equivalence.
This system class contains various kinds of linear
stochastic systems studied in the literature.
For example, systems with state-multiplicative noise \cite{Boyd-book,Gershon-Auto08}
and switched systems \cite{Geromel-IJC06} with i.i.d.\ switching signals (which
correspond to Markov jump systems \cite{Costa-book} with transition probability uniform
in all current modes) belong to this class.
Hence, the results in this paper can be seen as a generalization of
those for such particular systems,
and could serve as a central point for bridging and unifying
the associated existing results.
For reference, we briefly summarize the technical aspect of the contributions
in this paper through the comparison with the closely related earlier
studies \cite{Costa-TAC14,Hosoe-TAC18}.
In \cite{Costa-TAC14}, a necessary and sufficient stability condition is
shown for discrete-time linear systems with stochastic dynamics
determined by a stationary Markov process.
Since i.i.d.~processes are a special case of stationary Markov
processes, one might consider that our results could be covered by those for
Markov jump systems.
However, this is not true because the above earlier results are derived
with the assumption that the maximum
singular value of the random coefficient matrix
(depending on the Markov process)
is essentially bounded,
which makes it impossible to deal with random coefficient matrices
involving, e.g., normally distributed elements.
This paper will use a milder assumption for this part (see
Assumption~\ref{as:bound} introduced later).
Hence, our results cannot generally be covered by those in
\cite{Costa-TAC14} (and vice versa).
In the authors' earlier study \cite{Hosoe-TAC18}, a sufficient
stability condition was shown
as a part of the contributions for essentially the same class of
stochastic systems as in the present paper.
However, only exponential stability was dealt with and the necessity
assertion of the condition was not discussed even for
that stability notion.
This paper will complement this earlier study
from several viewpoints.
The contents of this paper are as follows.
In Section~\ref{sc:sys}, the stochastic system to be dealt with in
this paper is described, and three stability notions are
introduced:
asymptotic stability, exponential stability \cite{Kozin-Auto69} and quadratic stability.
Then, the equivalence of those stability notions
is proved in Section~\ref{sc:equiv}.
Next, the Lyapunov inequality is derived in Section~\ref{sc:lyap}
as a necessary and sufficient condition for quadratic stability.
Since our Lyapunov
inequality will involve decision variables contained in the operation of
expectation, we will also provide ideas for solving the inequality as a
standard LMI involving no expectation operation.
The stabilization feedback synthesis based on the Lyapunov inequality is
further discussed in Section~\ref{sc:syn}.
Finally, two numerical examples are provided for our
stability analysis and synthesis in Section~\ref{sc:example}.
The first example has the role of demonstrating that the Lyapunov
inequality indeed gives a necessary and sufficient condition
not only for quadratic stability but for exponential stability, as is
theoretically indicated.
On the other hand, the second example is provided for showing the
potential of our approach by tackling a challenging problem;
more specifically,
we consider stabilizing the discrete-time system obtained through
discretizing a continuous-time deterministic linear system with a
randomly time-varying sampling interval, which is inspired by the studies
on aperiodic sampling \cite{Montestruque-TAC04,Hetel-Auto17} (related,
e.g., to packet interarrival times in networked systems).
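To fix ideas about this second example, the discretization with a random sampling interval can be sketched for a scalar plant (the plant pole, feedback gain, and interval distribution below are illustrative choices, not the ones used in Section~\ref{sc:example}):

```python
import math
import random

# Scalar continuous-time plant xdot = a*x + b*u under zero-order-hold
# state feedback u = -f*x, sampled at i.i.d. random intervals h_k.
# Exact discretization gives x_{k+1} = A(h_k) x_k with
# A(h) = e^{a h} - f*b*(e^{a h} - 1)/a.
a, b, f = 0.5, 1.0, 0.8        # illustrative values; open loop is unstable

def A(h):
    return math.exp(a * h) - f * b * (math.exp(a * h) - 1.0) / a

# h_k drawn uniformly from a two-point set (illustrative distribution)
intervals = (0.1, 0.5)

# In the scalar i.i.d. case, second-moment stability reduces to
# E[A(h)^2] < 1 (cf. E[x_{k+1}^2] = E[A(h_k)^2] E[x_k^2]).
second_moment_gain = sum(A(h) ** 2 for h in intervals) / len(intervals)
print(second_moment_gain)      # about 0.81 < 1, so mean-square stable

random.seed(0)
x = 1.0
for _ in range(200):
    x = A(random.choice(intervals)) * x
print(abs(x))                  # a sample path decaying toward zero
```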
We use the following notation in this paper.
${\bf R}$, ${\bf R}_+$ and ${\bf N}_0$ denote the set of real numbers,
that of positive real numbers
and that of non-negative integers, respectively.
${\bf R}^n$ and ${\bf R}^{m\times n}$ denote the set of $n$-dimensional real
column vectors and that of $m\times n$ real matrices, respectively.
${\bf S}^{n\times n}$ and ${\bf S}^{n\times n}_{+}$ denote
the set of $n\times n$ symmetric matrices and that of
$n\times n$ positive definite matrices, respectively.
$\sigma_{\rm max}(\cdot)$ and $\sigma_{\rm min}(\cdot)$
denote the maximum and minimum singular values of the matrix $(\cdot)$, respectively.
$||(\cdot)||$ denotes the Euclidean norm of
the vector $(\cdot)$.
${\rm row}(\cdot)$ denotes the vectorization of the matrix $(\cdot)$ in the row
direction, i.e., ${\rm row}(\cdot)=[{\rm row}_1(\cdot),\ldots,{\rm
row}_m(\cdot)]$ where $m$ is the number of rows of the matrix and ${\rm row}_i(\cdot)$ denotes the $i$th row.
$\otimes$ denotes the Kronecker product.
${\rm diag}(\cdot)$ denotes the (block-)diagonal matrix.
$E[(\cdot)]$
denotes the expectation of the random variable $(\cdot)$; this notation
is also used for the expectation of the random matrix $(\cdot)$.
If $s$ is a random variable obeying the distribution ${D}$,
then we represent it as $s \sim {D}$.
\section{Stability of Discrete-Time Linear Systems with Stochastic
Dynamics}
\label{sc:sys}
\subsection{Discrete-Time Linear Systems with Stochastic
Dynamics}
Let us consider the $Z$-dimensional discrete-time stochastic process
$\xi$, which is the sequence of $Z$-dimensional
random vectors $\xi_k$ with respect to the discrete time $k\in {\bf
N}_0$, and make the following key assumption on it.
\begin{assumption}
\label{as:iid}
$\xi_k$ is independent and identically distributed (i.i.d.) with
respect to $k\in {\bf N}_0$.
\end{assumption}
This assumption naturally makes $\xi$ stationary and ergodic \cite{Knill-book}.
For this stochastic process $\xi$, we denote
the cumulative distribution
function of $\xi_k$ and the corresponding support by
${\cal F}(\xi_k)$ and ${\boldsymbol {\mit\Xi}}$, respectively.
By definition, ${\boldsymbol {\mit\Xi}} \subset {\bf R}^Z$,
and ${\boldsymbol {\mit\Xi}}$ corresponds to the set of values
that $\xi_k$ can take.
Let us further consider the discrete-time linear system
\begin{equation}
x_{k+1} = A(\xi_k) x_k,
\label{eq:fr-sys}
\end{equation}
where $x_k \in {\bf R}^n$, $A:{\boldsymbol {\mit\Xi}} \rightarrow
{\bf R}^{n\times n}$, and the initial state $x_0$ is assumed to be deterministic.
Since $A(\xi_k)$ is a random matrix (while $A(\cdot)$ itself is a
deterministic mapping), the dynamics
of the above system is stochastic.
To ensure mathematical rigor throughout this paper,
we make the following assumption on
the coefficient matrix $A(\xi_k)$ of the system.
\begin{assumption}
\label{as:bound}
The squares of elements of
$A(\xi_k)$ are all Lebesgue integrable, i.e.,
\begin{align}
&
E[A_{ij}(\xi_k)^2]<\infty\ \ (\forall i, j = 1,\ldots,n),
\label{eq:as-bound}
\end{align}
where $A_{ij}(\xi_k)$ denotes the $(i,j)$-entry of $A(\xi_k)$.
\end{assumption}
In this paper, we say that the expectation of a random variable is
well-defined, if the random variable is Lebesgue integrable; hence,
$E[A_{ij}(\xi_k)^2]$ satisfying (\ref{eq:as-bound}) is said to be
well-defined.
This term is also used for the expectation of a random matrix
when its elements are all Lebesgue integrable.
The aim of this paper is to develop a theoretical basis of stability analysis
and synthesis for system (\ref{eq:fr-sys}) with $\xi$ satisfying
Assumptions~\ref{as:iid} and \ref{as:bound}.
Since we have introduced no essential restrictions on ${\cal F}(\cdot)$ and
$A(\cdot)$, this system covers a wide class of
discrete-time linear systems with stochastic dynamics;
indeed, system (\ref{eq:fr-sys}) is the most general for representing the discrete-time
linear finite-dimensional systems with stochastic dynamics (without
additive inputs) under Assumptions~\ref{as:iid} and \ref{as:bound}.
Assumption~\ref{as:bound} is hardly restrictive from a practical
viewpoint, and hence,
the only essential restriction on the system is Assumption~\ref{as:iid}, which plays a
crucial role throughout this paper.
\subsection{Stability Notions}
\label{ssc:stab}
We next introduce three stability notions for
system (\ref{eq:fr-sys}) with $\xi$ satisfying
Assumptions~\ref{as:iid} and \ref{as:bound}.
The first and second notions are asymptotic stability and exponential
stability \cite{Kozin-Auto69} defined as follows.
\begin{definition}[Asymptotic Stability]
\label{df:asym}
The system (\ref{eq:fr-sys}) with $\xi$ satisfying
Assumptions~\ref{as:iid} and \ref{as:bound} is said to be stable in the second moment
if for each positive $\epsilon$, there exists
$\delta=\delta(\epsilon)$ such that
\begin{align}
&
\|x_0\|^2 \leq \delta(\epsilon) \Rightarrow
E[\|x_k\|^2] \leq \epsilon\ \ (\forall k
\in {\bf N}_0).\label{eq:stab-def}
\end{align}
In addition, the
system is said to be
asymptotically stable
in the second moment if the system is stable in the second moment and
\begin{align}
&
E[\|x_k\|^2] \rightarrow 0\ \ {\rm as}\ \ k\rightarrow \infty\ \ (\forall x_0 \in {\bf R}^n).\label{eq:asy-def}
\end{align}
\end{definition}
\begin{definition}[Exponential Stability]
\label{df:expo}
The system (\ref{eq:fr-sys}) with $\xi$ satisfying
Assumptions~\ref{as:iid} and \ref{as:bound} is said to be exponentially stable
in the second moment if there exist $a\in {\bf R}_+$ and $\lambda \in
(0,1)$ such that
\begin{align}
&
\sqrt{E[||x_k||^2]} \leq a ||x_0|| \lambda^k\ \ \ (\forall k \in {\bf
N}_0, \forall x_0 \in {\bf R}^n).
\label{eq:exp-def}
\end{align}
\end{definition}
The second-moment asymptotic (resp.\ exponential) stability defined above is also
called asymptotic (resp.\ exponential) mean square stability \cite{Kozin-Auto69},
and is widely used in the field of stochastic systems control.
In Definition~\ref{df:expo},
$\lambda$ is an upper bound of the convergence rate
with respect to
the sequence
$\left(\sqrt{E[||x_k||^2]}\right)_{k \in {\bf N}_0}$.
Compared to the above two notions, the following third notion might be
less standard in the field of stochastic systems control but is closely
related to our main arguments.
\begin{definition}[Quadratic Stability]
\label{df:quad}
The system (\ref{eq:fr-sys}) with $\xi$ satisfying
Assumptions~\ref{as:iid} and \ref{as:bound} is said to be quadratically stable
if there exist $P\in {\bf S}^{n\times n}_+$ and $\lambda\in (0,1)$ such
that
\begin{align}
&
E[x_{k+1}^T P x_{k+1}]\leq \lambda^2 E[x_k^T P x_k]\ \ \ (\forall k \in {\bf
N}_0, \forall x_0 \in {\bf R}^n).
\label{eq:quad-def}
\end{align}
\end{definition}
In the above definition of quadratic stability, $V(x_k)=E[x_k^T P x_k]$ is the quadratic Lyapunov function
described with the Lyapunov matrix $P$,
and (\ref{eq:quad-def}) requires the existence of a Lyapunov function (i.e.,
$P$) that decays no slower than
the rate $\lambda^2\ (<1)$, as is the
case with deterministic systems \cite{Boyd-book}.
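These notions can be illustrated on a scalar instance of (\ref{eq:fr-sys}) (the distribution of $\xi_k$ below is an arbitrary illustrative choice): under Assumption~\ref{as:iid}, independence of $\xi_k$ and $x_k$ gives the exact recursion $E[x_{k+1}^2]=E[\xi_k^2]E[x_k^2]$, so the second moment decays geometrically whenever $E[\xi_k^2]<1$, even though individual realizations of $A(\xi_k)$ may exceed one in magnitude.

```python
import random

random.seed(1)

# Scalar instance of x_{k+1} = A(xi_k) x_k with A(xi) = xi, where
# xi_k is i.i.d. and takes the values 1.2 or 0.2 with probability 1/2.
vals = (1.2, 0.2)
m2 = sum(v * v for v in vals) / len(vals)   # E[xi_k^2] = 0.74 < 1

def mc_second_moment(k, n_paths=20000, x0=1.0):
    """Monte Carlo estimate of E[x_k^2]."""
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(k):
            x *= random.choice(vals)
        total += x * x
    return total / n_paths

# The exact second moment is E[x_k^2] = (0.74)^k x_0^2, so the system is
# stable in the second moment although some realizations of A(xi_k)
# exceed one in absolute value.
k = 5
print(m2 ** k)                 # exact value, about 0.222
print(mc_second_moment(k))     # Monte Carlo estimate, close to the above
```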
Here, to ensure mathematical rigor, we show that
Assumptions~\ref{as:iid} and \ref{as:bound} lead to
the well-definedness of the expectations referred to in the above
definitions.
As a step for this end, we first note two facts.
The first fact is that
if $s_1\leq s_2$ (resp.\ $s_1< s_2$) for each
sample of the pair of the two random variables $s_1$ and $s_2$,
then $E[s_1]\leq E[s_2]$ (resp.\ $E[s_1]< E[s_2]$);
since this fact is almost trivial, we use it throughout this paper
without any specific notes.
Compared to the first fact, the second fact might not be trivial and we
would like to summarize it as in the following lemma,
which can be shown with the Cauchy-Schwarz inequality.
\begin{lemma}
\label{lm:product-bound}
If the expectations of $s_1^2$ and
$s_2^2$ are well-defined for the random variables $s_1$ and $s_2$, then the expectation of $s_1 s_2$ also is.
\end{lemma}
Then, by using the above two facts,
we can obtain the following result:
for the random vector $s_1$ and the square random matrix $S_2$ (of the
compatible size) such that
$s_1$ and $S_2$ are independent of each other and $E[S_2]$ is
well-defined, the expectation $E[s_1^T S_2 s_1]\ (=E[s_1^T E[S_2] s_1])$ is well-defined
if $E[\|s_1\|^2]$ is.
Hence, by taking $s_1=x_k$ and $S_2=A(\xi_k)^T A(\xi_k)$
under Assumptions~\ref{as:iid} and \ref{as:bound},
we can show that
if $E[\|x_k\|^2]$ is well-defined, then $E[\|x_{k+1}\|^2]$ also is;
the well-definedness of $E[S_2]=E[A(\xi_k)^T A(\xi_k)]$ can be
ensured by Lemma~\ref{lm:product-bound} under Assumption~\ref{as:bound}.
A recursive use of this result leads to the well-definedness of
$E[\|x_k\|^2]$ for every $k\in {\bf N}_0$.
The well-definedness of $E[x_k^T P x_k]$ can be ensured in a similar fashion.
Hence, the expectations in Definitions~\ref{df:asym} through
\ref{df:quad} are all well-defined under Assumptions~\ref{as:iid} and
\ref{as:bound}.
\section{Equivalence of Three Stability Notions}
\label{sc:equiv}
Three stability notions were introduced in the preceding section:
asymptotic stability, exponential stability and quadratic stability.
Since quadratic stability is usually introduced as a notion
related with
deterministic time-invariant Lyapunov matrices
(as in Definition~\ref{df:quad}),
it is not equivalent to asymptotic stability and
exponential stability, in general.
For example, in the case of deterministic linear time-varying systems,
such equivalence is known to fail \cite{Daafouz-SCL01}.
Hence, one might be concerned about the possibility of a similar
situation for the system (\ref{eq:fr-sys}) since it can be viewed
as a deterministic linear time-varying system when we discard
the information about
the underlying randomness.
However, we can actually
establish equivalence of
all these notions also for the stochastic system (\ref{eq:fr-sys})
(and the present stability definitions), {\it
provided that Assumptions~\ref{as:iid} and \ref{as:bound} are satisfied}, as in the
case with deterministic linear time-invariant (LTI) systems.
Showing this non-trivial equivalence is one of the main
results in this paper.
\subsection{Equivalence between Asymptotic Stability and Exponential Stability}
We first give the proof
of the following theorem
about equivalence between asymptotic stability and exponential stability
(similar equivalence is known to hold for deterministic linear
systems \cite{Vidyasagar-book}).
\begin{theorem}
\label{th:eqv-asym-expo}
Suppose $\xi$ satisfies Assumption~\ref{as:iid} and $A(\xi_k)$ satisfies
Assumption~\ref{as:bound}.
The following two conditions are equivalent.
\begin{enumerate}
\item
The system (\ref{eq:fr-sys}) is asymptotically stable in the second moment.
\item
The system (\ref{eq:fr-sys}) is exponentially stable in the second moment.
\end{enumerate}
\end{theorem}
\begin{IEEEproof}
2$\Rightarrow$1:
It follows from (\ref{eq:exp-def}) and $0<\lambda<1$ that
\begin{align}
&
E[\|x_k\|^2]\leq a^2 \|x_0\|^2\ \ (\forall k \in {\bf
N}_0).
\label{eq:proof-th1-a2x2}
\end{align}
This leads us to (\ref{eq:stab-def}) with
$\delta(\epsilon)=\epsilon/a^2$, which
means the second-moment stability of the system.
In addition, (\ref{eq:asy-def}) readily follows from
(\ref{eq:exp-def}) since $0<\lambda<1$.
Hence, by definition, the system is asymptotically stable in the second moment.
\medskip
1$\Rightarrow$2:
The linearity of the system (\ref{eq:fr-sys}), which is frequently used in
this part of the proof, is not explicitly invoked at each step so as not to make the arguments verbose.
We first introduce the decomposition
\begin{align}
&
x_0=\beta \sum_{i=1}^n a_i \sigma_i e^{(i)}
\end{align}
with the scalars $\beta$,
$a_i\geq 0\ (i=1,\ldots,n)$ satisfying $\sum_{i=1}^n a_i=1$,
the integers $\sigma_i\in \{-1,1\}\ (i=1,\ldots,n)$
and the standard basis vectors $e^{(i)}\ (i=1,\ldots,n)$ for the
$n$-dimensional Euclidean space.
By definition, we have
\begin{align}
\|x_0\|^2
&=
\beta^2(a_1^2+\ldots +a_n^2)
\geq
\beta^2/n.\label{eq:th1-pf-x0low}
\end{align}
Associated with this decomposition of $x_0$, we can also decompose the corresponding
$x_k$ as
\begin{align}
&
x_k=\beta \sum_{i=1}^n a_i \sigma_i x^{(i)}_k,
\end{align}
where $x^{(i)}_k$ is the state at $k$ for the initial state
$x_0=e^{(i)}$.
It follows from (\ref{eq:asy-def}) that
there exists $K\in {\bf N}_0$ such that
\begin{align}
&
E[\|x^{(i)}_k\|^2]\leq 1/(2n^2)\ \ (i=1,\ldots,n; \forall k\geq K).
\end{align}
Then, we have
\begin{align}
E[\|x_k\|^2]
&=
\beta^2
E\left[
\left\|\sum_{i=1}^n a_i \sigma_i x^{(i)}_k\right\|^2\right] \notag \\
&\leq
\beta^2
E\left[
\sum_{i=1}^n a_i \|\sigma_i x^{(i)}_k\|^2\right] \notag \\
&=
\beta^2 \sum_{i=1}^n a_i E[\|x^{(i)}_k\|^2] \notag \\
&\leq
\beta^2/(2n)\ \ (\forall k \geq K),\label{eq:th1-pf-xkup}
\end{align}
where the first inequality follows from Jensen's inequality.
Hence, it follows from (\ref{eq:th1-pf-x0low}) and (\ref{eq:th1-pf-xkup})
that
\begin{align}
&
E[\|x_K\|^2]\leq\|x_0\|^2/2 \label{eq:proof-th1-rec-orig}
\end{align}
for the same $K$.
Since this inequality holds regardless of $x_0 \in {\bf R}^n$
and since $\xi$ satisfies Assumption~\ref{as:iid} (in particular,
because of the stationarity of $\xi$ that it implies),
we further have
\begin{align}
&
E[\|x_{k+K}\|^2]\leq E[\|x_k\|^2]/2\ \ (\forall k\in {\bf N}_0).
\label{eq:proof-th1-rec}
\end{align}
For each $k\in {\bf N}_0$, take $j$ and $c$ such that
$k=c+jK\ (0\leq c<K)$.
Then, a recursive use of (\ref{eq:proof-th1-rec}) leads to
\begin{align}
E[\|x_k\|^2]&=E[\|x_{c+jK}\|^2] \notag\\
&\leq
E[\|x_{c}\|^2]/2^j
\notag \\
&=
2^{c/K}E[\|x_{c}\|^2](2^{-1/K})^k
\notag \\
&\leq 2E[\|x_{c}\|^2](2^{-1/K})^k \ \ (\forall k
\in {\bf N}_0).
\label{eq:proof-th1-expo}
\end{align}
Since Assumptions~\ref{as:iid} and \ref{as:bound} ensure that
$E[\|x_{c}\|^2]$ is well-defined for every $c \in [0,K)$ (from the
arguments in the preceding subsection),
there exists a bounded positive scalar $\alpha_K$
such that
\begin{align}
&
E[\|x_{c}\|^2] \leq \alpha_K \|x_0\|^2 \ \ (\forall c \in [0,K)).
\end{align}
This, together with (\ref{eq:proof-th1-expo}), leads us to
\begin{align}
E[\|x_k\|^2]
&\leq 2\alpha_K \|x_{0}\|^2(2^{-1/K})^k \ \ (\forall k
\in {\bf N}_0),
\label{eq:proof-th1-expo-fin}
\end{align}
which implies that (\ref{eq:exp-def}) holds with
$a=\sqrt{2\alpha_K}$ and $\lambda=2^{-1/(2K)}$, where indeed $a>0$ and
$0<\lambda<1$.
Hence, by definition, the system is exponentially stable in the second moment.
This completes the proof.
\end{IEEEproof}
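The key step in the part ``1$\Rightarrow$2'' -- turning the $K$-step halving property (\ref{eq:proof-th1-rec}) into an explicit exponential envelope -- can be checked numerically on a scalar example with $E[x_k^2]=0.74^k$ (an illustrative choice); taking the square root of (\ref{eq:proof-th1-expo-fin}) gives $\sqrt{E[x_k^2]}\leq\sqrt{2\alpha_K}\,(2^{-1/(2K)})^k\,|x_0|$.

```python
import math

# Scalar example: E[x_k^2] = m2_gain**k for x_0 = 1 (illustrative value)
m2_gain = 0.74

# Smallest K with E[x_{k+K}^2] <= E[x_k^2]/2, i.e. m2_gain**K <= 1/2,
# mirroring the K-step halving property in the proof:
K = 1
while m2_gain ** K > 0.5:
    K += 1
print(K)                       # K = 3, since 0.74**3 ~ 0.405 <= 1/2

# alpha_K bounds E[x_c^2]/x_0^2 for 0 <= c < K; the moments here are
# non-increasing, so alpha_K = 1 suffices.
alpha_K = 1.0
a = math.sqrt(2 * alpha_K)
lam = 2.0 ** (-1.0 / (2 * K))

# The exponential envelope extracted by the proof dominates the true decay:
for k in range(100):
    assert math.sqrt(m2_gain ** k) <= a * lam ** k
print(lam)                     # about 0.891, vs. true rate sqrt(0.74) ~ 0.860
```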
Note that the above proof actually showed that the system is
exponentially stable if and only if (\ref{eq:asy-def}) holds; in other
words, (\ref{eq:stab-def}) was not used in the part ``1$\Rightarrow$2''.
This readily leads us to the following corollary as an
implicit result about asymptotic stability.
\begin{corollary}
\label{cr:asym}
Suppose $\xi$ satisfies Assumption~\ref{as:iid} and $A(\xi_k)$ satisfies
Assumption~\ref{as:bound}.
The
system (\ref{eq:fr-sys}) is
asymptotically stable
in the second moment if and only if (\ref{eq:asy-def}) holds.
\end{corollary}
The property described with (\ref{eq:asy-def}) is called attractivity \cite{Vidyasagar-book}.
Although asymptotic stability is usually defined
not only with attractivity but also with stability (as in
Definition~\ref{df:asym}),
it is known
in the deterministic systems case
that asymptotic stability can be ensured by attractivity alone if the
system is linear and time-invariant.
Hence, the above corollary corresponds to a stochastic counterpart of
this conventional result because of Assumption~\ref{as:iid} (although system (\ref{eq:fr-sys}) itself is not time-invariant).
\subsection{Equivalence between Exponential Stability and Quadratic
Stability}
The remaining issue in this section is to show the following theorem.
\begin{theorem}
\label{th:eqv-expo-quad}
Suppose $\xi$ satisfies Assumption~\ref{as:iid} and $A(\xi_k)$ satisfies
Assumption~\ref{as:bound}.
The following two conditions are equivalent.
\begin{enumerate}
\item
The system (\ref{eq:fr-sys}) is exponentially stable in the second moment.
\item
The system (\ref{eq:fr-sys}) is quadratically stable.
\end{enumerate}
\end{theorem}
\begin{IEEEproof}
2$\Rightarrow$1:
A recursive use of (\ref{eq:quad-def})
leads to
\begin{align}
&
E[x_k^T P x_k] \leq \lambda^{2k}x_0^T P x_0\ \ (\forall k\in {\bf N}_0,
\forall x_0\in {\bf R}^n).
\end{align}
For the left-hand side of this inequality,
\begin{align}
&
\sigma_{\min}(P)E[\|x_k\|^2]\leq E[x_k^T P x_k],
\end{align}
while for the right-hand side,
\begin{align}
&
\lambda^{2k}x_0^T P x_0 \leq \sigma_{\max}(P)\|x_0\|^2 \lambda^{2k}.
\end{align}
Hence, we have (\ref{eq:exp-def}) with
$a=\sqrt{\sigma_{\max}(P)/\sigma_{\min}(P)}$ and the same $\lambda$,
which means by definition that
the system is exponentially stable in the second moment.
\medskip
1$\Rightarrow$2:
Take a positive $\epsilon$ such that
$\lambda_\epsilon:=\lambda+\epsilon<1$ and define
\begin{align}
&
\Gamma^{k_2}_{k_1}:=
\begin{cases}
I & (k_2=k_1-1)\\
(A(\xi_{k_2})/\lambda_\epsilon)\ldots
(A(\xi_{k_1})/\lambda_\epsilon)
& (k_2\geq k_1)
\end{cases}
\end{align}
for non-negative integers $k_1$ and $k_2\ (\geq k_1-1)$.
Then, (\ref{eq:exp-def}) can be rewritten as
\begin{align}
&
x_0^T E[(\Gamma^{k-1}_{0})^T \Gamma^{k-1}_{0}]x_0 \leq x_0^T
(a^2(\lambda^2/\lambda^2_\epsilon)^k I)x_0\notag \\
&
(\forall k\in {\bf N}_0,
\forall x_0\in {\bf R}^n),
\label{eq:th2-pf-rw}
\end{align}
where the well-definedness of the expectation in the left-hand side is ensured under
Assumptions~\ref{as:iid} and \ref{as:bound} (in essentially the same manner as
Subsection~\ref{ssc:stab}).
Since $\xi$ satisfies Assumption~\ref{as:iid}, the above inequality leads to
\begin{align}
&
E[(\Gamma^{k_2}_{k_1})^T \Gamma^{k_2}_{k_1}] \leq
a^2(\lambda^2/\lambda^2_\epsilon)^{k_2-k_1+1} I\notag \\
&
(\forall k_1, k_2\in {\bf N}_0\ {\rm s.t.}\ k_2\geq k_1).
\label{eq:th2-pf-rw-any}
\end{align}
We next define
\begin{align}
&
P^K_k
:=
\lambda_\epsilon^{-2}I+
\lambda_\epsilon^{-2}(\Gamma^{k}_{k})^T \Gamma^{k}_{k}+
\ldots
+
\lambda_\epsilon^{-2}(\Gamma^{K}_{k})^T \Gamma^{K}_{k}
\label{eq:def-PKk}
\end{align}
for $k$ and $K\in {\bf N}_0$ such that $K\geq k\geq 0$. Then, it satisfies
\begin{align}
&
\lambda^2_\epsilon P^K_k - A(\xi_k)^T P^K_{k+1} A(\xi_k) =I\notag \\
&
(\forall k, K\in {\bf N}_0\ {\rm s.t.}\ K>k\geq 0),
\end{align}
and it follows from (\ref{eq:fr-sys}) that
\begin{align}
&
\lambda^2_\epsilon E[x_{k}^T E[P^K_k] x_{k}] - E[x_{k+1}^T E[P^K_{k+1}]
x_{k+1}] \geq 0.
\label{eq:th2-pf-quad-K}
\end{align}
On the other hand,
(\ref{eq:def-PKk}) also implies that
the sequence of
\begin{align}
&
E[P^K_k]
=
\lambda_\epsilon^{-2}I+
\lambda_\epsilon^{-2}E[(\Gamma^{k}_{k})^T \Gamma^{k}_{k}]+
\ldots
+
\lambda_\epsilon^{-2}E[(\Gamma^{K}_{k})^T \Gamma^{K}_{k}]
\end{align}
with respect to $K\ (\geq k)$ for each fixed $k$ is
monotonically non-decreasing under the
partial order induced by positive semidefiniteness
(i.e., $E[P^K_k]\leq E[P^{K+1}_k]$).
In addition,
it follows from
(\ref{eq:th2-pf-rw-any}) (and $a\geq 1$) that
\begin{align}
&
E[P^K_k]
\leq
\lambda_\epsilon^{-2}
a^2
(1+(\lambda^2/\lambda^2_\epsilon)+\ldots+(\lambda^2/\lambda^2_\epsilon)^{K-k+1})I,
\end{align}
whose right-hand side converges to a constant matrix as $K\rightarrow\infty$.
Hence, this sequence
also converges to a constant matrix as
$K\rightarrow\infty$.
Since this constant matrix does not depend on $k$
because of Assumption~\ref{as:iid}, we
denote it by $P$, which is
obviously positive definite.
Then,
letting $K\rightarrow \infty$ in
(\ref{eq:th2-pf-quad-K}) leads to
\begin{align}
&
\lambda^2_\epsilon E[x_{k}^T P x_{k}] - E[x_{k+1}^T P x_{k+1}] \geq 0,
\end{align}
which holds for every $k\in {\bf N}_0$.
Hence,
(\ref{eq:quad-def}) with $\lambda$ replaced by
$\lambda_\epsilon\ (<1)$ is satisfied,
which means by definition
that the system is quadratically stable.
This completes the proof.
\end{IEEEproof}
As stated at the beginning of this section,
no equivalence similar to that
in the above theorem holds
for the usual deterministic linear time-varying systems.
Hence, this equivalence cannot be obtained without appropriately
accounting for the randomness behind our system, i.e., without
stability definitions that treat the system as a stochastic system.
In particular, Assumption~\ref{as:iid} played a crucial role in
establishing this equivalence.
To see this, let us temporarily consider a Markov chain $\xi$ that is not
i.i.d.\ (and hence fails to satisfy
Assumption~\ref{as:iid}) together with the associated system (\ref{eq:fr-sys}),
which can be seen as a so-called Markov jump linear system \cite{Costa-book}.
Then, as is well known, the necessary and sufficient condition for
exponential stability of such a system can be described only with
mode-dependent Lyapunov matrices; this is true even when the Markov chain behind the
system is time-homogeneous (i.e., stationary) and ergodic.
Hence, the quadratic stability defined with a constant Lyapunov matrix
cannot be equivalent to exponential stability in such a case.
This in turn implies that assuming $\xi$ is stationary and ergodic is
insufficient for showing the
equivalence between quadratic stability and exponential stability,
and thus, Assumption~\ref{as:iid} is indeed essential.
In addition, we note that deterministic LTI systems can be seen as a
special case of our systems with $\xi$ satisfying
Assumptions~\ref{as:iid} and \ref{as:bound}, obtained by restricting the
distribution of $\xi_k$ to one concentrated at a single value.
Since the definitions of stability in this paper immediately reduce to those for
deterministic LTI systems in that case, and since their equivalence is known to
hold, our results can be seen as a stochastic extension of such
conventional results.
\section{Stability Analysis Based on Lyapunov Inequality}
\label{sc:lyap}
Theorems~\ref{th:eqv-asym-expo} and \ref{th:eqv-expo-quad}
in the preceding section
showed the complete equivalence of the three stability
notions defined in Section~\ref{sc:sys} for system (\ref{eq:fr-sys})
under Assumptions~\ref{as:iid} and \ref{as:bound}.
Since the definition of quadratic stability is, unlike the other two,
expected to be compatible with
the analysis based on Lyapunov inequalities,
we deal with this stability notion and discuss the corresponding Lyapunov
inequality in this section.
\subsection{Lyapunov Inequality for Quadratic Stability}
We first show the following theorem, which gives key inequality
conditions for stability analysis.
\begin{theorem}
\label{th:lyap}
Suppose $\xi$ satisfies Assumption~\ref{as:iid} and $A(\xi_k)$
satisfies Assumption~\ref{as:bound}.
The following three conditions are equivalent.
\begin{enumerate}
\item
The system (\ref{eq:fr-sys}) is quadratically stable.
\item
There exist $P\in {\bf S}^{n\times n}_+$ and $\lambda\in (0,1)$ such
that
\begin{align}
&
E[\lambda^2 P - A(\xi_0)^T P A(\xi_0)]\geq 0.\label{eq:lyap-lambda}
\end{align}
\item
There exists $P\in {\bf S}^{n\times n}_+$ such
that
\begin{align}
&
E[P - A(\xi_0)^T P A(\xi_0)]> 0.\label{eq:lyap}
\end{align}
\end{enumerate}
\end{theorem}
\begin{IEEEproof}
1$\Rightarrow$2:
Taking $k=0$ in inequality (\ref{eq:quad-def}) implies
\begin{align}
x_0^T E[\lambda^2 P - A(\xi_0)^T P A(\xi_0)] x_0 \geq 0\ \ (\forall
x_0\in {\bf R}^n),
\end{align}
which is nothing but (\ref{eq:lyap-lambda}).
\medskip
2$\Rightarrow$1:
Since $\xi$ satisfies Assumption~\ref{as:iid},
(\ref{eq:lyap-lambda}) implies
\begin{align}
&
E[\lambda^2 P - A(\xi_k)^T P A(\xi_k)]\geq 0 \ \ \ (\forall k \in {\bf
N}_0).
\label{eq:lyapunov-exp-allk}
\end{align}
Since $x_k$ and $A(\xi_k)$ are independent of each other, this further
implies
\begin{align}
&
E[x_k^T(\lambda^2 P - A(\xi_k)^T P A(\xi_k))x_k]\geq 0 \ \ \ (\forall k \in {\bf
N}_0),
\end{align}
which is nothing but (\ref{eq:quad-def}).
\medskip
2$\Leftrightarrow$3:
Adding $(1-\lambda^2)P>0$ to (\ref{eq:lyap-lambda}) immediately
leads to (\ref{eq:lyap}). The opposite assertion is obvious.
\end{IEEEproof}
If $A(\xi_0)$ is deterministic, then (\ref{eq:lyap}) obviously
reduces to the usual Lyapunov inequality for deterministic linear
systems.
Hence, (\ref{eq:lyap}) is a natural extension of the usual Lyapunov
inequality to the case of stochastic systems.
The well-definedness of the expectation in the Lyapunov inequality
(\ref{eq:lyap}) is
ensured under Assumption~\ref{as:bound}.
In addition,
the proof of the equivalence between 1) and 2)
of the above theorem implies that, for each $\lambda$ and every $P>0$,
(\ref{eq:lyap-lambda}) holds if and only if (\ref{eq:quad-def})
holds.
This implies that the decay rate of
the Lyapunov function in the definition of quadratic stability can be
evaluated in a necessary and sufficient sense through
(\ref{eq:lyap-lambda}).
This, together with the proof of Theorem~\ref{th:eqv-expo-quad}, further implies that
we can also evaluate the convergence rate of the sequence
$(\sqrt{E[\|x_k\|^2]})_{k\in {\bf N}_0}$ (i.e., the minimal $\lambda$ satisfying
(\ref{eq:exp-def})) through (\ref{eq:lyap-lambda}).
Hence, the alternative representation (\ref{eq:lyap-lambda}) of the
Lyapunov inequality is also useful.
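As a quick sanity check of Theorem~\ref{th:lyap} in the scalar case $x_{k+1}=A(\xi_k)x_k$ with $A(\xi_k)=a_0+\xi_0$, condition (\ref{eq:lyap}) reduces to $P(1-E[A(\xi_0)^2])>0$, so quadratic stability holds if and only if $E[A(\xi_0)^2]=a_0^2+\sigma^2<1$ when $\xi_0\sim N(0,\sigma^2)$. The following Python sketch (an illustration only, not part of the paper, whose computations used MATLAB; the values $a_0=0.8$ and $\sigma=0.3$ are hypothetical) verifies this reduction with a Monte Carlo estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
a0, sigma = 0.8, 0.3                # hypothetical scalar system parameters
xi = rng.normal(0.0, sigma, 10**6)  # i.i.d. samples of xi_0 ~ N(0, sigma^2)

# Monte Carlo estimate of E[A(xi_0)^2]; the closed form is a0^2 + sigma^2
EA2_mc = np.mean((a0 + xi) ** 2)
EA2 = a0**2 + sigma**2

# scalar Lyapunov inequality with P = 1: E[P - A^2 P] > 0  <=>  E[A^2] < 1
quad_stable = EA2_mc < 1.0
lam_min = np.sqrt(EA2)  # minimal lambda in (eq:lyap-lambda), scalar case
```

Here $E[A(\xi_0)^2]=0.73<1$, so the scalar system is quadratically stable with minimal decay rate $\lambda=\sqrt{0.73}\approx 0.854$.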
\subsection{Connections to Relevant Results}
Since our system description covers a wide class of discrete-time linear
systems with stochastic dynamics, the associated results can be seen as
a generalization of some existing results.
For instance, the following cases are relevant to our study.
{\it Case of Systems with State-Multiplicative Noise}:
Let us consider the $Z$-dimensional stochastic process $\xi$
satisfying Assumption~\ref{as:iid} and
\begin{align}
& E[\xi_0]=0,\ \ E[\xi_0 \xi_0^T]={\rm diag}(v_1,\ldots,v_Z),
\end{align}
where $v_i \in {\bf R}_+\ (i=1,\ldots,Z)$ are given constants.
For $\xi_k=[\xi_{1k},\ldots,\xi_{Zk}]^T$,
let us further consider the system (\ref{eq:fr-sys}) with
\begin{align}
&
A(\xi_k)=A_0 + \sum_{i=1}^Z A_i \xi_{ik},
\end{align}
where $A_i\in {\bf R}^{n\times n}\ (i=0,\ldots,Z)$ are given constant matrices.
This class of stochastic systems is called systems with
state-multiplicative noise; obviously, this class is a special case of
our systems.
Hence, it readily follows from Theorem~\ref{th:lyap} that the
system is quadratically (i.e., exponentially) stable if and only if
there exists $P\in {\bf S}^{n\times n}_+$ such that
\begin{align}
&
P-A_0^TPA_0-\sum_{i=1}^Z v_i A_i^T P A_i>0.
\end{align}
This LMI condition is nothing but that in Chapter~9 of \cite{Boyd-book}.
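This LMI can be checked numerically through the linear operator $\mathcal{L}(P)=A_0^TPA_0+\sum_{i=1}^Z v_iA_i^TPA_i$: it is a standard fact (not proved in this paper) that a $P>0$ with $P-\mathcal{L}(P)>0$ exists if and only if the matrix representation of $\mathcal{L}$ has spectral radius less than one, in which case the $P$ solving $P-\mathcal{L}(P)=I$ is obtained by a linear solve. A Python sketch with hypothetical data ($n=2$, $Z=1$; illustration only):

```python
import numpy as np

# hypothetical system data
A0 = np.array([[0.5, 0.2], [-0.1, 0.4]])
A1 = np.array([[0.3, 0.0], [0.1, 0.2]])
v1 = 0.5
n = 2

def L(P):
    # L(P) = E[A(xi_0)^T P A(xi_0)] = A0^T P A0 + v1 * A1^T P A1
    return A0.T @ P @ A0 + v1 * (A1.T @ P @ A1)

# matrix of the operator L acting on vec(P), built column by column
M = np.column_stack([L(E.reshape(n, n)).reshape(-1) for E in np.eye(n * n)])

rho = max(abs(np.linalg.eigvals(M)))  # spectral radius of L
vecP = np.linalg.solve(np.eye(n * n) - M, np.eye(n).reshape(-1))
P = vecP.reshape(n, n)
P = 0.5 * (P + P.T)  # symmetrize against round-off
```

Since `rho < 1` here, the returned `P` is positive definite and satisfies $P-\mathcal{L}(P)=I>0$, i.e., the LMI above is feasible.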
{\it Case of Switched Systems with i.i.d.\ Switching Signal}:
Let us next consider the $1$-dimensional stochastic process $\xi$
satisfying Assumption~\ref{as:iid} and
\begin{align}
&
\xi_k \sim D(d,p),\ d=[1,2,\ldots,S],\ p=[p_1,p_2,\ldots,p_S],
\end{align}
where $D(d,p)$ denotes the discrete distribution such that the
event $\xi_k=i$ occurs with probability $p_i$ for each $i=1,\ldots,S$.
Let us further consider the system (\ref{eq:fr-sys}) with
\begin{align}
&
A(\xi_k)= A_{[\xi_k]},
\end{align}
where $A_{[i]}\in {\bf R}^{n\times n}\ (i=1,\ldots,S)$ are given
constant matrices.
We see that the value $A_{[\xi_k]}$ is switched in accordance with the
i.i.d.\ switching signal $\xi$, and hence, the above system is a
switched system with an i.i.d.\ switching signal.
Since this system is also a special case of our systems, we can see that
the system is quadratically stable if and only if
there exists $P\in {\bf S}^{n\times n}_+$ such that
\begin{align}
&
P-\sum_{i=1}^S p_i A_{[i]}^T P A_{[i]}>0.
\end{align}
This LMI condition is nothing but that in Chapter~3 of \cite{Costa-book} (see Corollary~3.26).
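The same kind of numerical check applies here: for an i.i.d.\ switching signal, feasibility of the LMI above is equivalent (a known fact, cf.\ \cite{Costa-book}) to the matrix of the operator $\mathcal{L}(P)=\sum_{i=1}^S p_iA_{[i]}^TPA_{[i]}$ having spectral radius below one. A Python sketch with hypothetical mode matrices ($S=2$; illustration only):

```python
import numpy as np

# hypothetical switched system with two modes
A = [np.array([[0.6, 0.3], [0.0, 0.5]]),
     np.array([[0.4, -0.2], [0.1, 0.3]])]
p = [0.7, 0.3]
n = 2

def L(P):
    # L(P) = E[A_{[xi_0]}^T P A_{[xi_0]}] = sum_i p_i A_i^T P A_i
    return sum(pi * Ai.T @ P @ Ai for pi, Ai in zip(p, A))

M = np.column_stack([L(E.reshape(n, n)).reshape(-1) for E in np.eye(n * n)])
rho = max(abs(np.linalg.eigvals(M)))  # < 1 iff a Lyapunov P > 0 exists
P = np.linalg.solve(np.eye(n * n) - M, np.eye(n).reshape(-1)).reshape(n, n)
P = 0.5 * (P + P.T)
```

Again `P` solves $P-\sum_i p_iA_{[i]}^TPA_{[i]}=I$, so the LMI holds strictly.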
\subsection{LMI Optimization}
\label{ssc:lmi-op}
We next discuss how to solve the Lyapunov inequality (\ref{eq:lyap-lambda}) or (\ref{eq:lyap})
for stability analysis of system (\ref{eq:fr-sys}).
As in the preceding subsection, our Lyapunov inequality
readily reduces to standard LMIs involving only given deterministic
scalars and matrices for some specific classes of systems.
In the general case, however,
the form of
inequalities (\ref{eq:lyap-lambda}) and (\ref{eq:lyap}), in which
the decision variable $P$ is contained in the expectation operation,
makes them nontrivial to solve.
This issue can be resolved as follows.
Let us first define
\begin{align}
&
A_{\rm e}(\xi_0):={\rm row}(A(\xi_0))^T {\rm row}(A(\xi_0)),
\label{eq:equiv-rep-vecvec}
\end{align}%
whose elements
cover all the second degree products of the elements of $A(\xi_0)$.
$E[A_{\rm e}(\xi_0)]$ is well-defined by
Lemma~\ref{lm:product-bound} under
Assumption~\ref{as:bound} and becomes a positive semidefinite matrix.
Let us further take $\bar{A}\ (\in{\bf R}^{n^2\times n^2})$ such that
\begin{align}
&
\bar{A}^T \bar{A}=E[A_{\rm e}(\xi_0)],
\label{eq:equiv-rep-decom}
\end{align}
and introduce the following partitioning of $\bar{A}$.
\begin{align}
&
\bar{A}=:
\left[\bar{A}_1, \bar{A}_2, \ldots, \bar{A}_n\right] \ \
(\bar{A}_i \in {\bf R}^{n^2\times n}\ (i=1,\ldots,n))
\end{align}
Then, for
\begin{align}
\bar{A}^\prime :=[\bar{A}_1^T, \bar{A}_2^T, \ldots,
\bar{A}_n^T]^T \in {\bf R}^{n^3\times n},
\end{align}
the matrix
\begin{align}
&
(\bar{A}^\prime)^T
(P\otimes I_{n^2})
\bar{A}^\prime
\label{eq:equiv-rep1}
\end{align}
with the decision variable $P$ can be confirmed
to coincide with $E[A(\xi_0)^T P A(\xi_0)]$.
In addition, another representation of $E[A(\xi_0)^T P A(\xi_0)]$ can be
also given by
\begin{align}
&
\bar{A}_{\rm e2}(I_n\otimes {\rm row}(P)^T)
\label{eq:equiv-rep2}
\end{align}%
for $\bar{A}_{\rm e2}=E[A_{\rm e2}(\xi_0)]\in {\bf R}^{n\times n^3}$,
where
\begin{align}
&
A_{\rm e2}(\xi_0):=
\begin{bmatrix}
{\rm row}(a_1 a_1^T) & \cdots & {\rm row}(a_n a_1^T) \\
\vdots & \ddots & \vdots \\
{\rm row}(a_1 a_n^T) & \cdots & {\rm row}(a_n a_n^T)
\end{bmatrix}
\end{align}%
under the partitioning $A(\xi_0)=:[a_1, a_2, \ldots, a_n]$ ($\xi_0$
is omitted in the column random vectors for notation simplicity).
Although (\ref{eq:equiv-rep1}) has a form compatible with the extension
toward stabilization synthesis discussed in the next section, (\ref{eq:equiv-rep2})
has the advantage that we do not need to decompose matrices as in
(\ref{eq:equiv-rep-decom}).
The above arguments can be summarized by the following lemma.
\begin{lemma}
\label{lm:lyap-lmi}
For any given $P$, the expectation
$E[A(\xi_0)^T P A(\xi_0)]$ coincides with both
(\ref{eq:equiv-rep1}) and (\ref{eq:equiv-rep2}).
\end{lemma}
The important point here is that in both (\ref{eq:equiv-rep1}) and
(\ref{eq:equiv-rep2}),
the decision variable $P$ has been taken out of the expectation operation.
Hence, once we calculate
$\bar{A}^\prime$ in (\ref{eq:equiv-rep1}) or
$\bar{A}_{\rm e2}$ in (\ref{eq:equiv-rep2}), we can
solve (\ref{eq:lyap-lambda}) and (\ref{eq:lyap}) as standard
linear matrix inequalities (LMIs).
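The equivalence in Lemma~\ref{lm:lyap-lmi} can be verified numerically. The Python sketch below (an illustration only; the paper's computations used MATLAB, all matrices are hypothetical, and ${\rm row}(\cdot)$ is assumed to denote row-wise vectorization) builds $E[A_{\rm e}(\xi_0)]$ for a two-point distribution of $A(\xi_0)$, takes a square root as in (\ref{eq:equiv-rep-decom}), forms $\bar{A}^\prime$, and checks that (\ref{eq:equiv-rep1}) coincides with $E[A(\xi_0)^T P A(\xi_0)]$.

```python
import numpy as np

n = 2
# hypothetical two-point distribution for A(xi_0), each with probability 1/2
A1 = np.array([[0.3, 0.8], [0.5, -0.2]])
A2 = np.array([[0.1, -0.4], [0.6, 0.2]])
P = np.array([[2.0, 0.3], [0.3, 1.5]])  # any fixed P > 0

row = lambda A: A.reshape(1, -1)  # row() assumed to be row-wise vectorization

# E[A_e(xi_0)] = E[row(A)^T row(A)], cf. (eq:equiv-rep-vecvec)
Ae = 0.5 * (row(A1).T @ row(A1) + row(A2).T @ row(A2))

# square root Abar with Abar^T Abar = E[A_e], via eigendecomposition
w, V = np.linalg.eigh(Ae)
Abar = np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

# partition Abar = [Abar_1, ..., Abar_n] and stack into Abar' (n^3 x n)
Abar_prime = np.vstack([Abar[:, i * n:(i + 1) * n] for i in range(n)])

lhs = Abar_prime.T @ np.kron(P, np.eye(n * n)) @ Abar_prime  # (eq:equiv-rep1)
rhs = 0.5 * (A1.T @ P @ A1 + A2.T @ P @ A2)                  # E[A^T P A]
```

The two $2\times 2$ matrices agree up to floating-point round-off, with $P$ appearing only outside the expectation on the left-hand side.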
\section{Stabilization State Feedback Synthesis Based on Lyapunov Inequality}
\label{sc:syn}
In this section, we discuss stabilization
state feedback synthesis based on the Lyapunov inequality condition
derived in the preceding section.
\subsection{Problem of Stabilization State Feedback Synthesis}
We first state the synthesis problem to be tackled in this section.
Let us consider the $Z$-dimensional process
$\xi$ satisfying Assumption~\ref{as:iid} and the associated system
\begin{align}
&
x_{k+1} = A_{\rm op}(\xi_k) x_k + B_{\rm op}(\xi_k) u_k,\label{eq:open-sys}
\end{align}
where $x_k \in {\bf R}^n$, $u_k \in {\bf R}^{m}$,
$A_{\rm op}:{\boldsymbol {\mit\Xi}} \rightarrow {\bf R}^{n\times n}$ and
$B_{{\rm op}}:{\boldsymbol {\mit\Xi}} \rightarrow {\bf R}^{n\times m}$.
On the coefficient matrices of the above system, we make the following
assumption similar to Assumption~\ref{as:bound}.
\begin{assumption}
\label{as:inf-syn}
The squares of elements of
$A_{\rm op}(\xi_k)$ and $B_{\rm op}(\xi_k)$ are all Lebesgue integrable.
\end{assumption}
Let us consider the state feedback
\begin{align}
&
u_k=F x_k \label{eq:state-feedback}
\end{align}
with the static time-invariant gain $F\in {\bf R}^{m\times n}$.
The closed-loop system can be described by
(\ref{eq:fr-sys}) with
\begin{align}
&
A(\xi_k)=A_{\rm op}(\xi_k)+B_{\rm op}(\xi_k)F.
\label{eq:closed-loop}
\end{align}
Note that if $A_{\rm op}(\xi_k)$ and $B_{\rm op}(\xi_k)$ satisfy Assumption~\ref{as:inf-syn}
then the above $A(\xi_k)$ also satisfies Assumption~\ref{as:bound} (for
each fixed $F$) by Lemma~\ref{lm:product-bound}.
This section studies the synthesis problem of
$F$ such that the closed-loop system is quadratically stable.
\subsection{LMI for Synthesis}
For a given $F\in {\bf R}^{m\times n}$, it readily follows from
Theorem~\ref{th:lyap} that
the closed-loop system is quadratically stable if and only if
there exists $P\in {\bf S}^{n\times n}_+$ such that
\begin{align}
&
E[P-(A_{\rm op}(\xi_0)+B_{\rm op}(\xi_0)F)^T P (A_{\rm op}(\xi_0)+B_{\rm
op}(\xi_0)F)]\!>\!0.
\label{eq:lyap-syn}
\end{align}
Hence, our synthesis problem reduces to that of
searching for $F$ such that there exists $P>0$ satisfying the
above inequality.
Since the inequality not only involves the expectation operation but is also
nonlinear in the decision variables $P$ and $F$,
it is more difficult to deal with than the analysis inequality (\ref{eq:lyap}).
Fortunately, however, a technique similar to (\ref{eq:equiv-rep1}) can
indeed lead us to an alternative representation of (\ref{eq:lyap-syn})
that is compatible with the Schur complement technique
\cite{Boyd-book}.
To see this, let us first define
\begin{align}
G_{\rm e}(\xi_0)
:=&
[{\rm row}(A_{\rm op}(\xi_0)), {\rm row}(B_{\rm op}(\xi_0))]^T
\notag \\
&\cdot [{\rm row}(A_{\rm op}(\xi_0)), {\rm row}(B_{\rm op}(\xi_0))],
\label{eq:equiv-rep-vecvec-syn}
\end{align}%
whose elements cover all the second order products of the elements of
$[A_{\rm op}(\xi_0), B_{\rm op}(\xi_0)]$.
$E[G_{\rm e}(\xi_0)]$ is well-defined under
Assumption~\ref{as:inf-syn} and becomes a positive semidefinite matrix.
Let us further take $\bar{G}\ (\in {\bf R}^{(n+m)n\times(n+m)n})$ such
that
\begin{align}
&
\bar{G}^T \bar{G}=E[G_{\rm e}(\xi_0)],
\label{eq:equiv-rep-decom-syn}
\end{align}
and introduce the following partitioning of $\bar{G}$.
\begin{align}
&
\bar{G}=:
\left[\bar{G}_{A1}, \ldots, \bar{G}_{An}, \bar{G}_{B1}, \ldots,
\bar{G}_{Bn}\right] \notag\\
&
(\bar{G}_{Ai} \in {\bf R}^{(n+m)n\times n}, \bar{G}_{Bi} \in {\bf
R}^{(n+m)n\times m}\ (i=1,\ldots,n))
\end{align}
Then, for
\begin{align}
&
\bar{G}^\prime_A :=[\bar{G}_{A1}^T, \ldots,
\bar{G}_{An}^T]^T \in {\bf R}^{(n+m)n^2 \times n},\\
&
\bar{G}^\prime_B :=[\bar{G}_{B1}^T, \ldots,
\bar{G}_{Bn}^T]^T \in {\bf R}^{(n+m)n^2 \times m},
\label{eq:def-GpB}
\end{align}
the matrix
\begin{align}
&
\left(\bar{G}^\prime_{A}+\bar{G}^\prime_{B} F\right)^T (P\otimes I_{(n+m)n})
\left(\bar{G}^\prime_{A}+\bar{G}^\prime_{B} F\right)
\label{eq:equiv-rep-syn}
\end{align}
with the decision variables $P$ and $F$ can be confirmed
to coincide with
$E[(A_{\rm op}(\xi_0)+B_{\rm
op}(\xi_0)F)^T P (A_{\rm op}(\xi_0)+B_{\rm
op}(\xi_0)F)]$.
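This coincidence can likewise be checked numerically. The Python sketch below (illustrative only; the matrices and the gain $F$ are hypothetical, and ${\rm row}(\cdot)$ is assumed to be row-wise vectorization) uses a two-point distribution for $(A_{\rm op}(\xi_0), B_{\rm op}(\xi_0))$.

```python
import numpy as np

n, m = 2, 1
# hypothetical two-point distribution, each with probability 1/2
A1, B1 = np.array([[0.3, 0.8], [0.5, -0.2]]), np.array([[0.0], [1.0]])
A2, B2 = np.array([[0.1, -0.4], [0.6, 0.2]]), np.array([[0.5], [0.3]])
P = np.array([[2.0, 0.3], [0.3, 1.5]])  # any fixed P > 0
F = np.array([[0.2, -0.3]])             # hypothetical gain

rowAB = lambda A, B: np.hstack([A.reshape(1, -1), B.reshape(1, -1)])

# E[G_e(xi_0)], cf. (eq:equiv-rep-vecvec-syn)
Ge = 0.5 * (rowAB(A1, B1).T @ rowAB(A1, B1)
            + rowAB(A2, B2).T @ rowAB(A2, B2))

w, V = np.linalg.eigh(Ge)
Gbar = np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T  # Gbar^T Gbar = E[G_e]

# partition and stack as in (eq:def-GpB)
GpA = np.vstack([Gbar[:, i * n:(i + 1) * n] for i in range(n)])
GpB = np.vstack([Gbar[:, n * n + i * m:n * n + (i + 1) * m] for i in range(n)])

cl = GpA + GpB @ F
lhs = cl.T @ np.kron(P, np.eye((n + m) * n)) @ cl  # (eq:equiv-rep-syn)
rhs = 0.5 * sum((Ai + Bi @ F).T @ P @ (Ai + Bi @ F)
                for Ai, Bi in [(A1, B1), (A2, B2)])
```

Again the two sides agree up to round-off, with $P$ and $F$ kept outside the expectation, which is what makes the Schur-complement step possible.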
Hence, once we calculate $\bar{G}^\prime_{A}$ and
$\bar{G}^\prime_{B}$,
the inequality condition
(\ref{eq:lyap-syn}) can be dealt with as a standard
matrix inequality; in particular, the resulting inequality has a
form compatible with the Schur complement technique.
Since $P\otimes I_{(n+m)n} >0$ for $P\in {\bf S}^{n\times n}_+$,
the above arguments lead us to the following lemma.
\begin{lemma}
\label{lm:extension}
For given $P\in {\bf S}^{n\times n}_+$ and
$F\in {\bf R}^{m\times n}$, (\ref{eq:lyap-syn}) holds
if and only if
\begin{align}
&
\begin{bmatrix}
P& \ast\\
(P\otimes I_{(n+m)n})
\left(\bar{G}^\prime_{A}+\bar{G}^\prime_{B} F\right) &
P\otimes I_{(n+m)n}
\end{bmatrix}>0,
\label{eq:extension}
\end{align}
where $\ast$ denotes the transpose of the lower left block in the matrix.
\end{lemma}
This lemma, together with
the congruence transformation with ${\rm diag}(X, X\otimes I_{(n+m)n})$
for $X=P^{-1}$ and the change of variables $Y=FX$,
further leads us to the following theorem about the synthesis.
\begin{theorem}
\label{th:syn}
Suppose $\xi$ satisfies Assumption~\ref{as:iid} and $A_{\rm op}(\xi_k)$
and $B_{\rm op}(\xi_k)$
satisfy Assumption~\ref{as:inf-syn}.
There exists a gain $F$ such that the closed-loop system
(\ref{eq:fr-sys}) with (\ref{eq:closed-loop}) is quadratically stable
if and only if there exist $X\in {\bf S}^{n\times n}_+$ and
$Y\in {\bf R}^{m\times n}$ satisfying
\begin{align}
&
\begin{bmatrix}
X& \ast\\
\bar{G}^\prime_{A}X+\bar{G}^\prime_{B} Y &
X\otimes I_{(n+m)n}
\end{bmatrix}>0 \label{eq:lmi-syn}
\end{align}
for $\bar{G}^\prime_{A}$ and $\bar{G}^\prime_{B}$ defined by (\ref{eq:equiv-rep-vecvec-syn})--(\ref{eq:def-GpB}).
In particular, $F=YX^{-1}$ is one such stabilization gain.
\end{theorem}
Although the above theorem is derived from (\ref{eq:lyap}) without
$\lambda$, the same technique can be applied also to
(\ref{eq:lyap-lambda}) with $\lambda$, which leads to the following
corollary.
\begin{corollary}
\label{cr:syn}
Suppose $\xi$ satisfies Assumption~\ref{as:iid} and $A_{\rm op}(\xi_k)$
and $B_{\rm op}(\xi_k)$
satisfy Assumption~\ref{as:inf-syn}.
There exists a gain $F$ such that the corresponding closed-loop system
is quadratically stable if and only if
there exist $X\in {\bf S}^{n\times n}_+$,
$Y\in {\bf R}^{m\times n}$
and $\lambda\in (0,1)$
satisfying
\begin{align}
&
\begin{bmatrix}
\lambda^2 X& \ast\\
\bar{G}^\prime_{A}X+\bar{G}^\prime_{B} Y &
X\otimes I_{(n+m)n}
\end{bmatrix}\geq0.\label{eq:lmi-syn-lambda}
\end{align}
In particular, $F=YX^{-1}$ is one such stabilization gain.
\end{corollary}
If we aim not only at stabilizing the closed-loop system but also at
minimizing $\lambda$ in (\ref{eq:quad-def})
(which corresponds to the convergence rate related to the definition of exponential
stability by Theorem~\ref{th:eqv-expo-quad}), this corollary will play an important role.
\section{Numerical Examples}
\label{sc:example}
This section is devoted to numerical examples.
We first numerically demonstrate with a simple
example that the Lyapunov inequality (\ref{eq:lyap-lambda}) gives a necessary and
sufficient condition for quadratic stability (and thus exponential
stability) of system (\ref{eq:fr-sys})
as indicated by Theorems~\ref{th:eqv-expo-quad} and \ref{th:lyap}.
Then, we provide a more challenging example for motivating our study,
in which the stabilization state feedback is designed for the discrete-time system obtained through discretizing a
continuous-time deterministic linear system with a randomly time-varying
sampling interval.
\subsection{Demonstration of Strictness in Stability Analysis Based on
Lyapunov Inequality}
Let us consider the $2$-dimensional stochastic process $\xi$
that satisfies Assumption~\ref{as:iid} and is given by the sequence of
$\xi_k=[\xi_{1k}, \xi_{2k}]^T,\
\xi_{1k}\sim N(\mu,\sigma^2)\ (\mu=0, \sigma=0.2),\
\xi_{2k}\sim U(\underline{d},\overline{d})\ (\underline{d}=-0.5, \overline{d}=0.5)$,
where $N(\mu,\sigma^2)$ and $U(\underline{d},\overline{d})$ respectively
denote the normal distribution with mean $\mu$
and standard deviation $\sigma$ and
the continuous uniform distribution with minimum $\underline{d}$ and
maximum $\overline{d}$.
Let us further consider the stochastic system (\ref{eq:fr-sys}) with
%
\begin{align}
&
A(\xi_k)=
\begin{bmatrix}
0.3+\xi_{2k} & 0.8+\xi_{1k} & -0.5\\
0.5 & 0.3+\xi_{1k}\xi_{2k} & -1.2+(\xi_{1k})^2\\
-0.2 & 0.8 & 0.6
\end{bmatrix}.
\end{align}
Through numerical
stability analysis of this system, we discuss the strictness of
our Lyapunov inequality condition.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{decay}
\caption{Time response of estimate of $\sqrt{E[||x_k||^2]}$ calculated with
$10^5$ sample
paths of $\xi$.}
\label{fig:mscs17_si}
\end{figure}
We first search for the minimal $\lambda$ such that there exists $P>0$
satisfying (\ref{eq:lyap-lambda}).
As stated in Lemma~\ref{lm:lyap-lmi},
the matrix (\ref{eq:equiv-rep1}) is an alternative representation of
the expectation $E[A(\xi_0)^T P
A(\xi_0)]$ in (\ref{eq:lyap-lambda}).
Hence, once we calculate $\bar{A}^{\prime}$ in (\ref{eq:equiv-rep1})
for the above $A(\xi_0)$, it readily follows that we can
solve (\ref{eq:lyap-lambda}) as an LMI for each fixed $\lambda$.
This enables us to achieve
the aforementioned minimization through a bisection method with
respect to $\lambda$; the resulting minimum is expected to
correspond to the convergence rate with respect to
$\left(\sqrt{E[||x_k||^2]}\right)_{k \in {\bf N}_0}$
by Theorems~\ref{th:eqv-expo-quad} and
\ref{th:lyap}.
We computed $\bar{A}^{\prime}$ with MATLAB and the
Symbolic Math Toolbox,
and minimized $\lambda$ with MATLAB,
YALMIP \cite{YALMIP} and SDPT3 \cite{SDPT3}.
The resulting minimal $\lambda$ was 0.9219$\ (<1)$,
which implies
exponential stability of the system
by Theorems~\ref{th:eqv-expo-quad} and
\ref{th:lyap}.
We next confirm that the above minimal $\lambda$ indeed corresponds to
the convergence rate with respect to
$\left(\sqrt{E[||x_k||^2]}\right)_{k \in {\bf N}_0}$, by
calculating its estimate $\lambda_{\rm est}$ with $N_{\rm s}$ sample paths of $\xi$.
For $N_{\rm s}=10^5$, such sample-based estimation
of $\sqrt{E[||x_k||^2]}$ provided us with the time response shown in
Fig.~\ref{fig:mscs17_si},
where we took $x_0=[1,0,0]^T$ as the initial state of the system.
As we can see from the figure,
the estimate of
$\sqrt{E[||x_k||^2]}$ decays with an almost constant rate
after the elapse of sufficient time.
The decay rate $\lambda_{\rm est}$ obtained from the data at $k=50$ and $100$ in
Fig.~\ref{fig:mscs17_si} was 0.9213
(similar results were obtained
regardless of the initial state under the same sample of $\xi$).
Since this value is close to the above minimization result,
it numerically suggests that
the minimal $\lambda$ obtained with the Lyapunov
inequality corresponds to the true convergence rate.
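The sample-based estimation above can be reproduced at a smaller scale. The Python sketch below (illustrative only; the paper used MATLAB with $N_{\rm s}=10^5$ paths and $k$ up to $100$, while here fewer paths and steps are used to keep the computation light) simulates the system and estimates the decay rate from two time points.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 20000, 60  # fewer paths/steps than the paper's 10^5 and 100
x = np.tile(np.array([1.0, 0.0, 0.0]), (N, 1))  # x_0 = [1, 0, 0]^T

norms = []
for k in range(K + 1):
    norms.append(np.sqrt(np.mean(np.sum(x ** 2, axis=1))))  # est. of sqrt(E||x_k||^2)
    xi1 = rng.normal(0.0, 0.2, N)     # xi_{1k} ~ N(0, 0.2^2)
    xi2 = rng.uniform(-0.5, 0.5, N)   # xi_{2k} ~ U(-0.5, 0.5)
    A = np.empty((N, 3, 3))
    A[:, 0, 0] = 0.3 + xi2
    A[:, 0, 1] = 0.8 + xi1
    A[:, 0, 2] = -0.5
    A[:, 1, 0] = 0.5
    A[:, 1, 1] = 0.3 + xi1 * xi2
    A[:, 1, 2] = -1.2 + xi1 ** 2
    A[:, 2] = [-0.2, 0.8, 0.6]
    x = np.einsum('nij,nj->ni', A, x)  # x_{k+1} = A(xi_k) x_k, per path

lam_est = (norms[60] / norms[30]) ** (1.0 / 30.0)  # decay rate between k=30, 60
```

With these (smaller) sample sizes the estimate `lam_est` lands near the value $0.92$ reported above, though with more sampling noise than the paper's $10^5$-path experiment.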
\subsection{Stabilization of Discrete-Time System Obtained under
Randomly Time-Varying Sampling Interval}
\begin{figure}[t]
\setlength{\unitlength}{1cm}
\centering
\scalebox{1}{\input{sdsystem}}
\caption{Sampled-data system with sampler and hold running under
randomly time-varying sampling interval.}
\label{fig:sdsystem}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{sdsim_x1}
\includegraphics[width=0.85\linewidth]{sdsim_x2}
\includegraphics[width=0.85\linewidth]{sdsim_x3}
\includegraphics[width=0.85\linewidth]{sdsim_u}
\caption{Overlays of responses of continuous-time
$x_{\rm c}$ and $u_{\rm c}$
generated with 100 sample paths of $\xi$
and the initial state $x_{\rm c}(t_0)=[1,0,0]^T$.}
\label{fig:sdsim}
\end{figure}
Let us next consider the sampled-data system, shown in Fig.~\ref{fig:sdsystem},
consisting of
the continuous-time deterministic linear unstable system
$P_{\rm c}$ given by
\begin{align}
&
\dot{x}_{\rm c}=A_{\rm c}x_{\rm c} + B_{\rm c}u_{\rm c},\ \
A_{\rm c}=
\begin{bmatrix}
-4 & 3 & -8 \\
3 & 7 & -6 \\
0 & 8 & -2
\end{bmatrix},\ \
B_{\rm c}=
\begin{bmatrix}
0 \\ 0 \\ 1
\end{bmatrix},
\end{align}
the static time-invariant
state feedback gain $F$ to be designed,
and the sampler ${\cal S}$ and the zero-order hold ${\cal H}$ running
with the sampling instants $t_k\ (k\in {\bf N}_0)$, where
\begin{align}
&
t_0=0,\ \ t_{k+1}-t_k>0,\ \ \lim_{k\rightarrow \infty}t_k=\infty.
\end{align}
The relation between the continuous-time signals and
the discrete-time signals in Fig.~\ref{fig:sdsystem} is described as follows.
\begin{align}
&
x_k=x_{\rm c}(t_k),\ \
u_{\rm c}(t)=u_k\ \
(t\in [t_k, t_{k+1}); k\in {\bf N}_0)
\end{align}
For such a sampled-data system, we assume that
the sampling interval $h_k=t_{k+1}-t_k$ is randomly time-varying
(i.e., the
random case of aperiodic sampling
\cite{Montestruque-TAC04,Hetel-Auto17})
and given by
$h_k=h(\xi_k)=0.01+\xi_k$ with the $1$-dimensional stochastic process
$\xi$ that satisfies Assumption~\ref{as:iid} and
$\xi_k \sim {\rm Exp}(\nu)\ (\nu=20)$, where
${\rm Exp}(\nu)$ denotes the exponential distribution with expectation $1/\nu$.
In this subsection, we consider designing $F$ stabilizing this
sampled-data stochastic system;
if we focus only on the signal values at the sampling
instants, this synthesis problem reduces to that of designing a state
feedback (\ref{eq:state-feedback}) stabilizing the discrete-time stochastic system (\ref{eq:open-sys}) with the random coefficients
\begin{align}
&
A_{\rm op}(\xi_k)=
e^{A_{\rm c}h(\xi_k)},\ \
B_{\rm op}(\xi_k)=
\int_0^{h(\xi_k)}e^{A_{\rm c}t}B_{\rm c}dt.
\end{align}
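For each realized interval $h(\xi_k)$, these random coefficients can be computed with a single matrix exponential via the standard augmented-matrix identity (a well-known trick, used here in place of the second-order Pad\'{e} approximation employed in the computations below). A Python/SciPy sketch, with the illustrative value $h=0.06$ (the mean of $h_k$, since $E[\xi_k]=1/20$):

```python
import numpy as np
from scipy.linalg import expm

Ac = np.array([[-4.0, 3.0, -8.0],
               [3.0, 7.0, -6.0],
               [0.0, 8.0, -2.0]])
Bc = np.array([[0.0], [0.0], [1.0]])

def discretize(h):
    # expm([[Ac, Bc], [0, 0]] * h) = [[A_op, B_op], [0, I]], so one call
    # yields A_op = e^{Ac h} and B_op = int_0^h e^{Ac t} Bc dt simultaneously
    M = np.zeros((4, 4))
    M[:3, :3], M[:3, 3:] = Ac, Bc
    E = expm(M * h)
    return E[:3, :3], E[:3, 3:]

Ad, Bd = discretize(0.06)  # h = 0.01 + E[xi_k] = 0.06, illustrative only
```

In a simulation, `discretize(0.01 + xi_k)` would be called per step with the sampled $\xi_k$.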
For the above discrete-time system, we searched for
a solution of (\ref{eq:lmi-syn-lambda}) minimizing $\lambda$
with MATLAB, Symbolic Math Toolbox, YALMIP and SDPT3,
where matrix exponentials were dealt with through the second-order
Pad\'{e} approximation in the computation.
Then, we obtained the gain
$F=\left[2.9242, 4.9123, -10.0501\right]$
with $\lambda=0.9193$, which implies the stability of the corresponding
discrete-time closed-loop system by Corollary~\ref{cr:syn}.
Since our control approach is developed for discrete-time stochastic
systems,
it immediately ensures only the convergence of the state
of the sampled-data system (in the stochastic sense) at the sampling instants.
Fortunately, however, the responses of the continuous-time signals $x_{\rm c}$
and $u_{\rm c}$ in the present sampled-data
system (with the above $F$) indeed converged to zero in the authors' simulations;
Fig.~\ref{fig:sdsim} shows the overlays of
the responses of $x_{\rm c}$ and $u_{\rm c}$
generated with 100 sample paths of $\xi$
and the initial state $x_{\rm c}(t_0)=[1,0,0]^T$.
Since there is virtually no limitation on the class of continuous-time
linear systems (and that of
the distributions of $h_k$) in the above synthesis,
other synthesis problems could also be solved in a similar fashion.
This suggests strong potential of the proposed approach.
\section{Conclusion}
In this paper, we first showed that asymptotic stability, exponential
stability and quadratic stability are all equivalent for
discrete-time linear systems with stochastic dynamics
under the assumption that the underlying process $\xi$ has an
i.i.d.\ property.
Then, we discussed a Lyapunov inequality that can characterize
stability in a necessary and sufficient sense.
Our Lyapunov inequality
readily reduces to standard LMIs for some subclasses of stochastic systems.
In the general case, however,
the original form of the inequality seemed unsuitable
for numerical stability analysis because it must be solved for decision variables
contained in the expectation operation.
Hence, we also provided an idea to solve the inequality as a standard LMI
even in the general case; this idea was also used in the extension of
the Lyapunov inequality condition toward stabilization state feedback
synthesis.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{\centerline{\sc References}}
\addtolength{\textwidth}{2cm} \addtolength{\hoffset}{-1cm}
\begin{document}
\title
[LOWER ORDER TERMS]
{Lower order terms for the moments of symplectic and orthogonal families of $L$-functions}
\author[Goulden]{Ian P. Goulden}\email{ipgoulden@uwaterloo.ca}
\author[Huynh]{Duc Khiem Huynh}\email{dkhuynhms@gmail.com}
\author[Rishikesh]{Rishikesh}\email{rishikes@gmail.com}
\author[Rubinstein]{Michael O. Rubinstein}
\thanks{Support for work on this paper was provided by the
National Science Foundation under award DMS-0757627 (FRG grant),
and two NSERC Discovery Grants.}
\email{mrubinst@uwaterloo.ca}
\address{Department of Pure Mathematics, University of Waterloo, Waterloo, ON, N2L 3G1, Canada}
\date{\today}
\thispagestyle{empty} \vspace{.5cm}
\begin{abstract}
We derive formulas for the terms in the conjectured asymptotic expansions of
the moments, at the central point, of quadratic Dirichlet $L$-functions,
$L(1/2,\chi_d)$, and also of the $L$-functions associated to quadratic twists
of an elliptic curve over $\mathbb{Q}$. In so doing, we are led to study determinants
of binomial coefficients of the form $\det \left( \binom{2k-i-\lambda_{k-i+1}}{2k-2j}\right)$.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
In this paper we describe formulas, derived from conjectures of Conrey, Farmer,
Keating, Rubinstein, and Snaith \cite{CFKRS}, for the moments of quadratic
Dirichlet $L$-functions at the central point, and the moments of $L$-functions
associated to quadratic twists of an elliptic curve.
We are motivated to study moments in these two families of $L$-functions
because of their apparent connection to the moments of characteristic
polynomials of unitary symplectic and orthogonal matrices.
Montgomery was the first to discover a link between an $L$-function and
characteristic polynomials of unitary matrices~\cite{M}. He
computed, with restrictions on the allowed test functions, the limiting pair
correlation of the zeros of the Riemann zeta function, and found that it coincides with
the average pair correlation of the eigenvalues of large random (according to Haar measure)
unitary matrices that had been computed earlier by Dyson~\cite{Dy}. Odlyzko
later confirmed this agreement numerically, without restriction~\cite{O}.
Rudnick and Sarnak generalized Montgomery's result to higher correlations and
to any primitive $L$-function~\cite{RS}.
Katz and Sarnak then made precise connections between
various families of $L$-functions and matrices from specific classical compact
groups, based on results linking the density of zeros of $L$-functions and of
analogous zeta functions over function fields to the eigenvalue densities of
random matrices in the classical compact groups~\cite{KS}~\cite{KS2}. For
instance, their work showed a statistical connection between the zeros of
quadratic Dirichlet $L$-functions and eigenvalues of unitary symplectic
matrices, and between the zeros of $L$-functions of quadratic twists of an
elliptic curve and eigenvalues of orthogonal matrices. The papers~\cite{R}
and~\cite{R2} provided further theoretical and numerical support for the
relevance of these matrix groups to our two families of $L$-functions.
Subsequently, Keating and Snaith were able to predict the leading term in the
asymptotics for the moments of the Riemann zeta function on the critical line by carrying
out an analogous computation for random unitary matrices~\cite{KeS}. In a
companion paper~\cite{KeS2}, they also conjectured the leading term in the
asymptotics for the moments in our two families of $L$-functions by computing the moments of
the characteristic polynomials of random unitary symplectic and even orthogonal
matrices. See also the paper of Conrey and Farmer~\cite{CF} which contains some
arithmetic information needed for the Keating and Snaith approach to moments.
The method of Keating and Snaith for predicting moments of $L$-functions relies
on computations in random matrix theory (for example, Weyl's integration
formula and the Selberg integral) and some guesswork to make the heuristic leap
to number theoretic moments. It also has the drawback of requiring, as input,
the relevant classical compact group as predicted by Katz and Sarnak.
On the other hand, the approach of Conrey, Farmer, Keating,
Rubinstein, and Snaith (CFKRS), referred to above, does not rely on random matrix
theory to derive, heuristically, the moments of various families of $L$-functions. Their method
is based strictly on number theoretic techniques involving the approximate
functional equation, the traditional equation that is used to study moments of
$L$-functions~\cite{T}~\cite{J}. While random matrix theory is not
needed in their approach, the formulas that their heuristic approach yields for
$L$-functions have provable analogues in random matrix theory. CFKRS were also
able to make progress by introducing `shifts' into the moments, a strategy that
was inspired by Motohashi's evaluation of the fourth moment of the zeta
function~\cite{Mot} and also by an analogous problem in random matrix theory.
Their method, therefore, produces an answer that can be compared against
various moment computations in random matrix theory, and, instead of using the
predictions of Katz and Sarnak, it provides evidence for them. Furthermore, the
conjectured formulas of CFKRS go well beyond the leading asymptotic of Keating
and Snaith, providing, implicitly, a full asymptotic expansion for a variety of
$L$-function moment problems. Because their conjectured formulas provide a full
asymptotic expansion, one can test them numerically by comparing the predicted
moments against those computed from $L$-function data. See for instance~\cite{CFKRS}
\cite{CFKRS2} \cite{AR} \cite{RY}.
Our goal is to turn the implicit formulas of CFKRS
into asymptotic expansions with explicitly given coefficients. We elaborate
on the CFKRS formulas, for the family of quadratic Dirichlet $L$-functions, in
Section~\ref{sec:CFKRS quadratic} and for quadratic elliptic curve
$L$-functions in Section~\ref{sec:elliptic}.
Besides the approaches of Keating and Snaith and of CFKRS, two additional
methods have yielded interesting results for the moments of $L$-functions.
Gonek, Hughes, and Keating~\cite{GHK}, and Bui and Keating~\cite{BK} use the
explicit formula for an $L$-function to realize the $L$-function as a hybrid
between partial Hadamard and Euler products. They assume statistical
independence between these two products and study the moments of the partial
Euler product using number theoretic heuristics. The moments of the partial
Hadamard product are studied by modeling the zeros of the Hadamard product
based on the predicted classical compact group. Their approach therefore
suffers from the same disadvantage as the Keating and Snaith method: it requires the
predictions of Katz and Sarnak as input. The main advantage of their method
over the Keating and Snaith method is that it explains, rather than guesses,
the appearance of an `arithmetic factor' in moment formulas for $L$-functions.
Another disadvantage is that it only seems to correctly predict the leading
asymptotic for the $L$-function moments that they consider, and thus only
agrees with the CFKRS prediction to leading order. Presumably this is because
their assumptions are too strong, for example the statistical independence
between the partial Hadamard and Euler products, and their use of matrix
eigenvalues to model the partial Hadamard product.
Another method for studying moments of $L$-functions has been developed by
Diaconu, Goldfeld, and Hoffstein~\cite{GH}~\cite{DGH} and uses the theory of
multiple Dirichlet series. It has the advantage of proving asymptotic formulas
for some $L$-function moments, for example the first three moments of quadratic
Dirichlet $L$-functions at the central point. However, it has the disadvantage
of involving an elaborate sieving process (in the case of quadratic
characters), that makes it unwieldy for producing explicit formulas for the
asymptotic expansion. Interestingly, their method predicts the existence of
additional lower order terms of smaller magnitude that go beyond those of the
asymptotic expansion of CFKRS. See the paper of DGH~\cite{DGH}, as well as those of
Zhang~\cite{Z} and of Alderson and Rubinstein~\cite{AR}, for discussions and
computations regarding these additional lower order terms.
\subsection{The CFKRS conjecture for $L(1/2,\chi_d)$}
\label{sec:CFKRS quadratic}
We begin by describing the CFKRS conjecture for quadratic Dirichlet $L$-functions.
Let $D$ be a squarefree integer, $D\neq 0,1$, and let $K= \mathbb{Q}(\sqrt{D})$
be the corresponding quadratic field. The fundamental discriminant $d$ of $K$ equals
$D$ if $D \equiv 1 \pmod 4$, and $4D$ if $D \equiv 2,3 \pmod 4$.
Let $\chi_d(n)$ be the Kronecker symbol $\kr{d}{n}$,
and $L\pr{s,\chi_d}$ the quadratic Dirichlet $L$-function given by the
Dirichlet series
\begin{equation}
L\pr{s,\chi_d} = \sum_{n=1}^\infty \fr{\chi_d(n)}{n^s}, \qquad \Re(s)>0,
\end{equation}
satisfying the functional equation
\begin{equation}
L\pr{s,\chi_d} = \lr{d}^{\fr{1}{2} - s} X(s,a) L\pr{1-s, \chi_d},
\end{equation}
where
\begin{equation}
\label{eq:X}
X(s,a) =
\pi^{s-\fr{1}{2}} \fr{\Gamma\pr{\fr{1-s+ a}{2}}}{\Gamma\pr{\fr{s+a}{2}}}, \qquad a =
\begin{cases} 0 \quad &\text{if $d> 0$,} \\ 1 \quad &\text{if $d<0$.} \end{cases}
\end{equation}
Let $S(X)$ denote the set of fundamental discriminants with $|d|<X$. The Gamma factor
in the functional equation for $L(s,\chi_d)$ depends on whether $d<0$ or $d>0$. Thus, define
further
\begin{eqnarray}
S_+(X) &=& \{ d \in S(X) : d>0\} \notag \\
S_-(X) &=& \{ d \in S(X) : d<0\},
\end{eqnarray}
to be, respectively, the sets of positive and negative fundamental discriminants with $|d|<X$.
CFKRS conjectured \cite{CFKRS} the asymptotic expansion:
\begin{equation}
\label{eq: moment asympt}
\sum_{d \in S_\pm(X)} L\pr{1/2,\chi_d}^k \sim
\frac{3}{\pi^2} X \mathcal Q_\pm(k,\log X),
\end{equation}
where $\mathcal Q_+(k,x)$ and $\mathcal Q_-(k,x)$ are polynomials of degree
$k(k+1)/2$ in $x$ that we will describe below. The fraction $3/\pi^2$ accounts for the
density of positive (respectively negative) fundamental discriminants amongst the integers.
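For concreteness, this density statement can be checked numerically. The following Python sketch (helper names are ours, and the cutoff $X=10^5$ is ad hoc) enumerates fundamental discriminants exactly as defined above and compares the counts $|S_\pm(X)|/X$ with $3/\pi^2 \approx 0.30396$.

```python
from math import pi

def squarefree_flags(n):
    """sf[m] == True iff m is squarefree, for 0 <= m <= n."""
    sf = [True] * (n + 1)
    sf[0] = False
    p = 2
    while p * p <= n:
        for m in range(p * p, n + 1, p * p):
            sf[m] = False
        p += 1
    return sf

def fundamental_discriminants(X):
    """All fundamental discriminants d with |d| < X: d = D if D = 1 (mod 4),
    and d = 4D if D = 2,3 (mod 4), for squarefree D != 0,1.  Note that
    Python's % operator returns a nonnegative residue, also for negative D."""
    sf = squarefree_flags(X)
    ds = []
    for D in list(range(2, X)) + list(range(-1, -X, -1)):
        if sf[abs(D)]:
            d = D if D % 4 == 1 else 4 * D
            if abs(d) < X:
                ds.append(d)
    return ds

X = 10**5
ds = fundamental_discriminants(X)
S_plus = sum(1 for d in ds if d > 0)
S_minus = len(ds) - S_plus
print(S_plus / X, S_minus / X, 3 / pi**2)  # both ratios approach 3/pi^2
```

At $X=10^5$ the two ratios already agree with $3/\pi^2$ to within a few parts in a thousand.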
The polynomial $\mathcal Q_\pm(k,\log X)$ is expressed
in terms of a more fundamental polynomial $Q_\pm(k,x)$ of the same degree
that captures the moments locally:
\begin{equation}
\mathcal Q_\pm(k,\log{X}) = \frac{1}{X} \int_1^X Q_\pm(k,\log{t}) dt.
\end{equation}
One of the main achievements of CFKRS was to give a general recipe/heuristic
for producing formulas for moments of various families of $L$-functions. Their
formula (see Conjecture 1.5.3 in \cite{CFKRS}) for the polynomial
$Q_\pm(k,x)$ is given implicitly in terms of a $k$-fold multivariate residue:
\begin{equation}
\label{eq:Qkresidue}
Q_\pm(k,x)= \frac{(-1)^{k(k-1)/2} 2^k}{k!} \frac {1}{(2\pi i)^k}
\oint \cdots \oint
\frac{G_\pm(z_1,\ldots,z_k)\Delta(z_1^2,\ldots,z_k^2)^2}{\prod_{j=1}^k z_j^{2k-1}}
e^{\frac x 2 \sum_{j=1}^k z_j} \, dz_1\ldots dz_k,
\end{equation}
where $\Delta(w_1,\dotsc,w_k)$ is the Vandermonde determinant
\begin{equation}
\label{eq:1}
\Delta(w_1,\dotsc,w_k) = \det(w_i^{j-1})_{k\times k}
=\prod_{1\leq i < j \leq k} (w_j -w_i),
\end{equation}
and
\begin{equation}
\label{eq:Gdefinition}
G_\pm(z_1,\ldots,z_k) = A_k(z_1,\ldots,z_k) \prod_{j=1}^k X(\frac 1 2 +z_j,
a)^{-1/2} \prod_{1\leq i\leq j \leq k} \zeta(1+z_i+z_j).
\end{equation}
Here, $a=0$ for $G_+$ and $a=1$ for $G_-$, $X(s,a)$ is given in~\eqref{eq:X},
and $A_k$ equals the Euler product, absolutely convergent in a neighbourhood
of $(z_1,\ldots,z_k)=(0,\ldots,0)$, defined by
\begin{multline}
\label{eq:Ak}
\index{$A_k$}A_k(z_1, \ldots, z_k) = \prod_p \prod_{1\leq i\leq j \leq k}
\left(1-\frac{1}{p^{1+z_i+z_j}} \right)\\
\times \left(\frac 1 2 \left( \prod_{j=1}^{k}\left(1-\frac 1
{p^{\frac 1 2 +z_j}} \right)^{-1} +
\prod_{j=1}^{k}\left(1+\frac 1{p^{\frac 1 2 +z_j}}
\right)^{-1} \right) +\frac 1 p \right) \left(1+\frac 1 p
\right)^{-1}.
\end{multline}
One advantage of equation~\eqref{eq:Qkresidue} is that it allows one to easily
see that $Q_\pm(k,x)$ is a polynomial of degree $k(k+1)/2$ in $x$. That is
because the denominator of the multivariate residue picks up terms in the
numerator involving $\prod_{j=1}^k z_j^{2k-2}$, which is of degree $2k(k-1)$.
Now, the factor $\Delta(z_1^2,\ldots,z_k^2)^2$ is a homogeneous polynomial,
also of degree $2k(k-1)$. However, the factor $G_\pm(z_1,\ldots,z_k)$ cancels
$k(k+1)/2$ of the factors of the Vandermonde, because each $\zeta(1+z_i+z_j)$
has a Laurent expansion that begins $1/(z_i+z_j)$ coming from the pole at $s=1$
of $\zeta(s)$. Therefore, in considering the multivariate Taylor expansion of
the numerator about $z_1=\ldots=z_k=0$, we only need to take terms in the
series
\begin{equation}
\label{eq:exp}
\exp\left(\frac x 2 \sum_{j=1}^k z_j\right) = \sum_{n=0}^\infty \frac{x^n}{2^nn!} (z_1+\ldots +z_k)^n
\end{equation}
up to $n=k(k+1)/2$. Hence, in the $x$ aspect, the $k$-fold
residue only involves terms up to $x^{k(k+1)/2}$.
Equation~\eqref{eq:Qkresidue} has the disadvantage of expressing $Q_\pm(k,x)$ implicitly.
Let us therefore write
\begin{equation}
\label{eq:Qexpansion}
Q_\pm(k,x) = c_\pm(0,k) x^{k(k+1)/2} + c_\pm(1,k) x^{k(k+1)/2 - 1} + \ldots + c_\pm(k(k+1)/2,k).
\end{equation}
Our main result, described in the following theorem, gives explicit formulas for the
coefficients $c_{\pm}(r,k)$. We first define
\begin{equation}
\label{eq:a_k}
a_k := A_k(0,\ldots,0) =
\prod_p
\frac{\left(1 - \frac{1}{p}\right)^{\frac{k(k+1)}{2}}}
{1 + \frac{1}{p}}
\left(
\frac{
\left( 1 - \frac{1}{\sqrt{p}}\right)^{-k}
+
\left(1 + \frac{1}{\sqrt{p}}\right)^{-k}
}{2} +
\frac{1}{p}
\right).
\end{equation}
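The constant $a_k$ can be approximated by truncating the Euler product; the sketch below (truncation points and tolerances are ad hoc choices of ours) also checks, for $k=1$, that the $p$-factor collapses to $1-1/(p(p+1))$, which a short algebraic simplification of~\eqref{eq:a_k} gives.

```python
from sympy import primerange

def a_k(k, P=10**5):
    """Truncation over primes p < P of the Euler product defining a_k."""
    prod = 1.0
    for p in primerange(2, P):
        p = int(p)
        rp = p**-0.5
        prod *= (1 - 1/p)**(k*(k+1)//2) / (1 + 1/p) \
                * (((1 - rp)**-k + (1 + rp)**-k) / 2 + 1/p)
    return prod

# For k = 1 a short calculation collapses the p-factor to 1 - 1/(p(p+1)).
check = 1.0
for p in primerange(2, 10**5):
    p = int(p)
    check *= 1 - 1/(p*(p+1))
print(a_k(1), check)
```

The tail of the product decays quickly, so truncating at $p<10^5$ already fixes several decimal digits.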
\begin{theorem}\label{thm:maintheorem}
In \reff{eq:Qexpansion}, the leading coefficients $c_\pm(0,k)$ of $Q_+(k,x)$ and
$Q_-(k,x)$ are both equal to
\begin{equation}
\label{eq:146}
\frac{ a_k}{ 2^{k} }
\prod_{j=0}^{k-1}\frac{(2j)!}{(k+j)!}=:
c(0,k),
\end{equation}
and, for given $r\geq 1$, we have
\begin{equation}
\label{eq:147}
\index{$c_\pm(r,k)$}c_\pm(r,k) = c(0,k) \sum_{|\lambda|=r} b^\pm_\lambda(k ) N_\lambda(k),
\end{equation}
where $N_\lambda(k)$ is a polynomial in $k$ of degree at most $2|\lambda|$, defined in~\eqref{eq:N},
$a_k$ is defined in~\eqref{eq:a_k}, and the
$b^{\pm}_\lambda(k)$'s are the Taylor coefficients of a holomorphic function, defined in
\eqref{eq:powerseries} and \eqref{eq:145}.
The sum is over all partitions $\lambda$ of $r$, i.e. with $\sum_i \lambda_i = r$ and
$\lambda_1\geq \lambda_2\geq \cdots > 0$.
\end{theorem}
We remark that formula~\eqref{eq:146} for the leading term agrees with the
prediction of Keating and Snaith. See (34), (45), and (47) of Keating and Snaith
\cite{KeS2} (replacing $\log{D}$ by $x$ in their equation (45)). Their
derivation is heuristic and based on the Selberg integral. Compare also to the
leading term of equation (1.5.17) of~\cite{CFKRS}, with $N=x/2$ in that
equation. To verify the agreement between these, one can check, inductively,
that:
\begin{equation}
\label{eq:comparison to KS}
\frac{1}{2^k}\prod_{j=0}^{k-1}\frac{(2j)!}{(k+j)!}
= \frac{1}{2^{k(k+1)/2}} \prod_{j=1}^k \frac{1}{(2j-1)!!}
= \prod_{j=1}^k \frac{j!}{(2j)!}.
\end{equation}
Note that~\eqref{eq:147} is analogous to formula (1.16) of~\cite{CFKRS} which
provides a formula for the coefficients of the moment polynomials of the
Riemann zeta function. See also Dehaye's paper~\cite{D}, also for the Riemann zeta
function, where he gives a combinatorial formula for the analogue of our
polynomial $N_\lambda(k)$.
We now work out examples for $r=1$ and $r=2$.
Table~\ref{tab:N_lambda} provides $N_{(1)}(k) = k(k+1)$,
$N_{(1,1)}(k)=\frac 1 2 k (k-1)(k+1)(k+2)$, and $N_{(2)}(k)=0$. Thus,
\begin{align}
\label{eq:c1}
c_\pm(1,k) &= c(0,k) b^\pm_{(1)}(k) N_{(1)}(k) \notag \\
&= \frac{a_k}{2^k}\prod_{j=0}^{k-1}\frac{(2j)!}{(k+j)!}\ k(k+1) b^\pm_{(1)}(k),
\end{align}
and
\begin{align}
c_\pm(2,k) &= c(0,k)\left( b^\pm_{(1,1)}(k)N_{(1,1)}(k) +
b^\pm_{(2)}(k)N_{(2)}(k)\right) \notag \\
&=\frac{a_k}{2^k}\prod_{j=0}^{k-1}\frac{(2j)!}{(k+j)!}
\times \frac 1 2 k(k-1)(k+1)(k+2) \ b^\pm_{(1,1)}(k).
\end{align}
Let
\begin{equation}
\label{eq:zeta gammas}
\zeta(1+s) = \frac{1}{s} + \sum_{n=0}^\infty (-1)^n \gamma_n \frac{s^n}{n!},
\end{equation}
be the Laurent expansion about 0 of $\zeta(1+s)$ ($\gamma_0$ is Euler's constant),
and define
\begin{equation}
\label{eq:f}
f_j(p) :=
\frac{(-1)^{j}(p^{1/2}-1)^{-j-k} + (p^{1/2}+1)^{-j-k}}
{(p^{1/2}-1)^{-k}+(p^{1/2}+1)^{-k}+ 2 p^{-1-k/2}}.
\end{equation}
Formulas for the coefficients $b_{(1)}^\pm(k)$ and $b_{(1,1)}^\pm(k)$ can be derived
using the method described in Section~\ref{sec:b}, and are given by
\begin{equation}
\label{eq:b1}
b_{(1)}^\pm(k) = -\frac 1 2 \log \pi + \frac{1}{2} \frac{\Gamma'}{\Gamma} \left(1/4+a/2 \right)
+ (k+1)\gamma_0
+\sum_p
\left(
\frac{(k+1)}{p-1} + f_1(p)
\right) \log{p},
\end{equation}
where $a=0$ for $b^+_{(1)}$, and $a=1$ for $b^-_{(1)}$, and
\begin{multline}
\label{eq:b11}
b^\pm_{(1,1)}(k) = b^\pm_{(1)}(k)^2
-\gamma_0^2-2\gamma_1
-\sum_{p}
\left(
\frac{p}{(p-1)^2}+f_1(p)^2-f_2(p)
\right) \log(p)^2.
\end{multline}
In Section~\ref{sec:reform-probl} we derive
formula~\eqref{eq:146} for the leading coefficient of $Q_{\pm}(k,x)$. Our tools
are then applied, in Section~\ref{sec:further-lower-order}, to the general term
$c_\pm(r,k)$, where we obtain a formula for $N_\lambda(k)$ expressed as a sum
of determinants of the form:
\begin{equation}
\label{eq:i1}
D_\lambda(k)=
\det \left( \binom{2k-i-\lambda_{k-i+1}}{2k-2j}\right)_{1\leq i,j\leq k},
\end{equation}
where $\lambda=(\lambda_1,\dotsc,\lambda_m)$ is a partition with length
$l(\lambda)\leq k$ (see Section~\ref{sec:symmetric} for definitions).
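The determinant~\eqref{eq:i1} is easy to evaluate directly for small $\lambda$ and $k$. In the sketch below (helper name ours) we pad $\lambda$ with zeros to length $k$, which is how we read the index $\lambda_{k-i+1}$ in~\eqref{eq:i1}; sympy's \texttt{binomial} returns $0$ when the lower index is negative, which matches the convention needed here.

```python
from sympy import Matrix, binomial

def D_lambda(lam, k):
    """The determinant (eq:i1); lambda is padded with zeros to length k so
    that lambda_{k-i+1} always makes sense."""
    lam = list(lam) + [0] * (k - len(lam))
    return Matrix(k, k, lambda i, j:
                  binomial(2*k - (i+1) - lam[k - (i+1)], 2*k - 2*(j+1))).det()

print([D_lambda([], k) for k in range(1, 6)])   # powers 2^{binom(k,2)}
```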
In Section~\ref{sec:determinant} we derive some
interesting formulas for these determinants. To describe our formulas,
let $y=(y_1,\dotsc,y_m)$.
We define the {\em coefficient operator} $[y^\beta]$ on the set of formal
multivariate Taylor or Laurent series in $y$, which picks the coefficient of
the monomial $y^\beta$ in the series. More precisely, if
\begin{equation}
\label{eq:i3}
f(y_1,\dotsc,y_m)=\sum_{r_1,\dotsc,r_m \in \mathbb{Z}}
a_{r_1,\dotsc,r_m}y_1^{r_1}\dots y_m^{r_m},
\end{equation}
define
\begin{equation}
\label{eq:i55}
[y_1^{u_1}\dots
y_m^{u_m}]f =a_{u_1,\dotsc,u_m}.
\end{equation}
We prove the following Theorem.
\begin{theorem}
\label{thm:i2} Let $\lambda$ be a partition and $\mu$ be the
conjugate partition. Let $m=l(\lambda)$, and $n=l(\mu) = \lambda_1$.
For $k\geq \max(l(\lambda),\lambda_1)$, we have
\begin{multline}
\label{eq:i4}
D_\lambda(k)= 2^{\binom{k}{2} -|\lambda|}\\
\times [y_1^{\lambda_1+m-1}\dots
y_m^{\lambda_m}]\left( \prod_{1\leq i < j \leq m} (y_i
-y_j)(1-y_i-y_j) \prod_{l=1}^m(1-y_l)^{-k-m}
\right),
\end{multline}
and also
\begin{multline}
\label{eq:i6}
D_\lambda(k)= 2^{\binom{k}{2} -|\lambda|}\\
\times [z_1^{\mu_1+n-1}\dots z_n^{\mu_n}]
\left( \prod_{1\leq i < j \leq n} (z_i -z_j)(1+z_i+z_j) \prod_{l=1}^n(1+2z_l)(1+z_l)^{k-n}
\right).
\end{multline}
\end{theorem}
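The theorem can be tested on small cases by implementing the coefficient extraction in~\eqref{eq:i4} with truncated binomial series and comparing against the determinant~\eqref{eq:i1}; the sketch below (helper names ours; $\lambda$ is padded with zeros to length $k$ in the determinant, as we read the indexing in~\eqref{eq:i1}) does this for a few partitions.

```python
from sympy import symbols, binomial, Matrix, expand, Rational

def D_det(lam, k):
    lam = list(lam) + [0] * (k - len(lam))
    return Matrix(k, k, lambda i, j:
                  binomial(2*k - (i+1) - lam[k - (i+1)], 2*k - 2*(j+1))).det()

def D_series(lam, k):
    """Right-hand side of (eq:i4): coefficient extraction in m = l(lambda)
    variables, with each (1 - y_l)^{-k-m} expanded just far enough."""
    m = len(lam)
    y = symbols('y1:%d' % (m + 1))
    N = lam[0] + m                      # the highest exponent we ever need
    f = 1
    for i in range(m):
        for j in range(i + 1, m):
            f *= (y[i] - y[j]) * (1 - y[i] - y[j])
    for l in range(m):
        f *= sum(binomial(k + m - 1 + r, r) * y[l]**r for r in range(N + 1))
    f = expand(f)
    for i in range(m):                  # pick out the monomial y_1^{lam_1+m-1}...y_m^{lam_m}
        f = f.coeff(y[i], lam[i] + m - 1 - i)
    return Rational(2)**(k*(k-1)//2 - sum(lam)) * f

for lam in [(1,), (2,), (1, 1), (2, 1)]:
    k0 = max(len(lam), lam[0])
    for k in (k0, k0 + 1):
        assert D_det(lam, k) == D_series(lam, k)
print("(eq:i4) agrees with (eq:i1) on these cases")
```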
\begin{corollary}
\label{cor:1}
Let $\lambda$ be a partition, with $l(\lambda)=m$. There is a polynomial $P_\lambda(k)$,
integer valued at integers,
of degree $|\lambda|$ such that for $k\geq \max(l(\lambda),\lambda_1)$,
\begin{equation}
\label{eq:ii1}
D_\lambda(k)= 2^{\binom k 2 - |\lambda|} \times \ P_\lambda(k).
\end{equation}
The leading coefficient of $P_\lambda(k)$ is
\begin{equation}
\label{eq:2}
\frac{
\prod_{1\leq i < j \leq m } (\lambda_i-\lambda_j-i+j)
}
{
\prod_{1\leq i \leq m }(\lambda_i + m - i)!
}
= \chi^\lambda(1)/|\lambda|!,
\end{equation}
where $\chi^\lambda(1)$ is the degree of the irreducible representation of the symmetric group
$S_{|\lambda|}$ indexed by $\lambda$.
In particular,
\begin{equation}
\label{eq:ii2}
D_0(k)= 2^{\binom k 2},
\end{equation}
where $D_0(k)$ is the determinant associated to the empty partition.
\end{corollary}
Table \ref{tab:plambda} gives a list of the polynomials $P_\lambda(k)$
for partitions up to weight $7$. Observe, in the table, that
$P_\lambda(k)$ often has many linear factors. This fact plays a role in our formula
for $N_\lambda(k)$, so we encode it in the following corollary.
\begin{corollary}
\label{cor:2}
Let $\lambda$ be a partition. Then $P_\lambda(k)$ is divisible by
\begin{equation}
\label{eq:divisibility}
(k-\lambda_1)(k-\lambda_1-1)\ldots(k-l(\lambda)+1)
\times (k + \lambda_1)(k + \lambda_1 - 1) \ldots (k + l( \lambda )),
\end{equation}
where we take the first product to be 1 if $\lambda_1\geq l(\lambda)$,
and the second product to be 1 if $\lambda_1 < l( \lambda )$.
\end{corollary}
Finally, in Section~\ref{sec:elliptic} we discuss the application of our
techniques to the related problem of the moments of the $L$-functions associated
to quadratic twists of an elliptic curve.
\subsection{Symmetric function theory}
\label{sec:symmetric}
We collect here some definitions and results from the theory of symmetric functions
that we use in our paper.
The details can be found in \cite[Chapter 1]{macdonald_symmetric_1995}. We have
used the notations of \cite{macdonald_symmetric_1995}.
A {\em partition} $\lambda$ is a sequence of nonnegative integers
$(\lambda_1,\lambda_2,\dotsc)$ such that
\begin{equation} \label{eq:i30}
\lambda_1\geq \lambda_2 \geq \cdots,
\end{equation} and only finitely many of the $\lambda_i$ are nonzero.
The {\em length} of the partition $\lambda$ is defined to be the number of nonzero
$\lambda_i$. We denote it by $l(\lambda)$. The {\em weight} of a
partition $\lambda$, denoted by $|\lambda|$ is
\begin{equation}
\label{eq:i31}
|\lambda|=\sum_{i\geq 1} \lambda_i.
\end{equation}
The {\em diagram} of a partition is the set of points
\begin{equation}
\label{eq:i32}
\{(i,j) \, | \, 1\leq i \leq l(\lambda) ,\ 1\leq j \leq \lambda_i\}.
\end{equation}
The \emph{conjugate partition} $\lambda'$ of a partition $\lambda$ is the
partition whose diagram is
\begin{equation}
\label{eq:i33}
\{(i,j)\, |\, (j,i) \textrm{ is in the diagram of } \lambda\}.
\end{equation}
Equivalently, the conjugate partition of $\lambda$ is a partition
$\lambda'=(\lambda_1',\lambda_2',\dotsc)$ where
\begin{equation}
\label{eq:i34}
\lambda_i' = \# \{ j \,|\, \lambda_j \geq i\}.
\end{equation}
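A minimal sketch of~\eqref{eq:i34} in code (the helper name is ours); note that conjugation is an involution and preserves the weight:

```python
def conjugate(lam):
    """Conjugate partition via (eq:i34): lambda'_i = #{ j : lambda_j >= i }."""
    if not lam:
        return []
    return [sum(1 for part in lam if part >= i) for i in range(1, lam[0] + 1)]

print(conjugate([4, 2, 1]))   # [3, 2, 1, 1]
```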
The symmetric group $S_n$ acts on the polynomial ring $\mathbb
Z[x_1,\dotsc,x_n]$ by permuting the independent variables
$x_1,\dotsc,x_n$. The \emph{ring of symmetric polynomials}
in $n$-variables, $\Lambda_n$, is the set of polynomials in $\mathbb Z[x_1,\dotsc,x_n]$ which are
invariant under this action of $S_n$. The ring $\Lambda_n$ is a
graded ring:
\begin{equation}
\label{eq:i35}
\Lambda_n = \bigoplus_{k\geq 0} \Lambda^k_n,
\end{equation}
where $\Lambda_n^k$ is the set of homogeneous symmetric polynomials of
degree $k$.
For $m>n$, there is a ring homomorphism
\begin{equation}
\label{eq:i36}
\rho_{m,n}: \mathbb Z[x_1,\dotsc,x_m] \to \mathbb Z[x_1,\dotsc,x_n],
\end{equation}
where $\rho_{m,n}(x_i)=x_i$ for $i\leq n$, and $\rho_{m,n}(x_i)=0$ for
$i>n$. This restricts to a map
\begin{equation}
\label{eq:i37}
\rho_{m,n}: \Lambda^k_m \to \Lambda_n^k.
\end{equation}
The maps given by \eqref{eq:i37} define an inverse system. Let
\begin{equation}
\label{eq:i38}
\Lambda^k = \varprojlim \Lambda^k_n,
\end{equation}
and
\begin{equation}
\label{eq:i39}
\Lambda=\bigoplus_{k\geq 0} \Lambda^k.
\end{equation}
The ring $\Lambda$ is called the \emph{ring of symmetric functions}. This is
a graded ring. The definition of $\Lambda$ gives us maps
\begin{equation}
\label{eq:i40}
\rho_n: \Lambda \to \Lambda_n.
\end{equation}
In this paper, we shall use four $\mathbb Z$-bases, parametrized by
partitions, of the ring $\Lambda$: the monomial symmetric functions
($m_\lambda$), elementary symmetric functions ($e_\lambda$), complete
symmetric functions ($h_\lambda$) and the Schur symmetric functions
($s_\lambda$). In addition, we shall be using power symmetric
functions ($p_\lambda$). The power symmetric functions form a $\mathbb
Q$ basis of $\Lambda\otimes_{\mathbb Z}\mathbb Q$. We shall use the
same symbols to denote their image under $\rho_n$ in $\Lambda_n$.
Given $\alpha=(\alpha_1,\dotsc,\alpha_n)$, we write $x^\alpha$ to
denote $x_1^{\alpha_1}\cdots x_n^{\alpha_n}$. Let
$\lambda$ be a partition of length less than or equal to $n$. We
define the \emph{monomial symmetric function} $m_\lambda$ by its image
under $\rho_n$ for every $n$. If $n\geq l(\lambda)$, then
\begin{equation}
\label{eq:i41}
m_\lambda(x_1,\dotsc,x_n)= \sum_\alpha x^\alpha,
\end{equation}
where $\alpha$ ranges over all distinct permutations of
$(\lambda_1,\dotsc,\lambda_n)$. If $l(\lambda)>n$, then
$m_\lambda(x_1,\dotsc,x_n)=0$. For the only partition of $0$, the empty
partition, we define $m_0=1$.
Let $r\geq 0$ be an integer. The \emph{elementary symmetric function} $e_r
\in \Lambda$ is given by
\begin{equation}
\label{eq:i42}
e_r=\sum_{1\leq i_1< i_2<\cdots<i_r} x_{i_1}\dots x_{i_r} = m_{(1,\ldots,1)},
\end{equation}
and $e_0=1$; in $m_{(1,\ldots,1)}$ the partition consists of $r$ ones.
For a partition $\lambda$, we define
\begin{equation}
\label{eq:i43}
e_\lambda=e_{\lambda_1}e_{\lambda_2}\dots.
\end{equation}
The generating function for $e_r$ is
\begin{equation}
\label{eq:i44}
E(t)=\sum_{r\geq 0} e_rt^r = \prod_{i\geq 1}(1+x_i t).
\end{equation}
Let $r\geq 0$ be an integer. The \emph{complete symmetric function}
$h_r$ is defined to be
\begin{equation}
\label{eq:i45}
h_r=\sum_{|\lambda|=r} m_\lambda.
\end{equation}
Given a partition $\lambda$, we define
\begin{equation}
\label{eq:i46}
h_\lambda=h_{\lambda_1}h_{\lambda_2}\dots.
\end{equation}
The generating function for $h_r$ is
\begin{equation}
\label{eq:i47}
H(t)=\sum_{r\geq 0} h_r t^r = \prod_{i\geq 1} (1-x_it)^{-1}.
\end{equation}
Equations \eqref{eq:i44} and \eqref{eq:i47} give us the identity,
\begin{equation}
\label{eq:i48}
H(t) E(-t)=1.
\end{equation}
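Comparing coefficients of $t^n$, identity~\eqref{eq:i48} is equivalent to $\sum_{r=0}^{n}(-1)^r e_r h_{n-r}=0$ for every $n\geq 1$, which can be checked at a concrete point (the point and the degree bound below are arbitrary):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

x = (2, 3, 5)   # any concrete point will do

def e(r):       # elementary symmetric polynomial e_r(x); vanishes for r > 3
    return sum(prod(c) for c in combinations(x, r))

def h(r):       # complete symmetric polynomial h_r(x)
    return sum(prod(c) for c in combinations_with_replacement(x, r))

# H(t)E(-t) = 1 is equivalent to: sum_{r=0}^{n} (-1)^r e_r h_{n-r} = 0, n >= 1.
for n in range(1, 7):
    assert sum((-1)**r * e(r) * h(n - r) for r in range(n + 1)) == 0
print("H(t)E(-t) = 1 verified through degree 6 at x =", x)
```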
For $r\geq 1$, the \emph{power symmetric function} $p_r$ is defined as
\begin{equation}
\label{eq:i49}
p_r = \sum_{i\geq 1} x_i^r = m_{(r)}.
\end{equation}
For a partition $\lambda$, we define
\begin{equation}
\label{eq:i50}
p_\lambda=p_{\lambda_1}p_{\lambda_2}\dots.
\end{equation}
Let $(\alpha_1,\dotsc,\alpha_n)\in \mathbb N^n$. We define
$a_\alpha\in \mathbb Z[x_1,\dotsc,x_n]$ by
\begin{equation}
\label{eq:i51}
a_\alpha(x_1,\dotsc,x_n) = \det (x_i^{\alpha_j})_{1\leq i,j \leq n}.
\end{equation}
Clearly $a_\alpha$ is skew-symmetric; that is, for $w\in S_n$,
$w(a_\alpha) = {\text{sgn}}(w) a_\alpha$, where ${\text{sgn}}(w)$ is
the sign of the permutation $w$. Let $\delta_n$ be the partition
\begin{equation}
\delta_n=(n-1,n-2,\dotsc,1,0).
\end{equation}
For a partition $\lambda$ of length less than or equal to $n$, we append 0's as
necessary to $\lambda$ to create an $n$-tuple, and define
\begin{equation}
\label{eq:i52}
s_\lambda(x_1,\dotsc,x_n) =
\frac{a_{\scriptscriptstyle{\delta_n\! +\!
\lambda}}(x_1,\dotsc,x_n)}{a_{\scriptscriptstyle{\delta_n}}(x_1,\dotsc,x_n)}.
\end{equation}
This is a polynomial. Since $s_\lambda(x_1,\dotsc,x_n)$ is a ratio of
skew-symmetric polynomials, it is a symmetric polynomial. These
symmetric polynomials are called \emph{Schur symmetric polynomials.}
For $m>n$, $\rho_{m,n}\left(s_\lambda(x_1,\dotsc,x_m)\right)
=s_\lambda(x_1,\dotsc,x_n)$, hence they are represented by a
function $s_\lambda\in \Lambda$.
Using the definitions of $a_\lambda$ and $s_\lambda$, it is easy to
check that
\begin{equation}
\label{eq:i64}
a_{\delta_n}(x_1,\dotsc,x_n)= \prod_{1\leq i < j \leq n} (x_i -x_j),
\end{equation}
and
\begin{equation}
\label{eq:i68}
s_{{\delta_n}}(x_1,\dotsc,x_n)= \prod_{1\leq i < j \leq n}(x_i +x_j).
\end{equation}
Let $\lambda$ be a partition and $\lambda'$ be the conjugate
partition. Then, for $n\geq l(\lambda)$
\cite[p.41]{macdonald_symmetric_1995}
\begin{equation}
\label{eq:i53}
s_\lambda=\det\left(h_{\lambda_i-i+j}\right)_{1\leq i,j\leq n},
\end{equation}
and for $m \geq l(\lambda')$
\begin{equation}
\label{eq:i54}
s_\lambda=\det\left( e_{\lambda_i'-i+j}\right)_{1\leq i,j\leq m}.
\end{equation}
Identity \eqref{eq:i53} is called the Jacobi-Trudi identity, and
\eqref{eq:i54} is called the dual Jacobi-Trudi identity. Schur symmetric
functions satisfy \cite[p.63]{macdonald_symmetric_1995}
\begin{equation}
\label{eq:i11}
\prod_{i,j\geq 1} (1-x_iy_j)^{-1} = \sum_\lambda s_\lambda(x) s_\lambda(y),
\end{equation}
and
\begin{equation}
\label{eq:i56}
\prod_{i,j\geq 1} (1+x_iy_j) = \sum_\lambda s_\lambda(x) s_{\lambda'}(y).
\end{equation}
The sum in \eqref{eq:i11} and \eqref{eq:i56} is over all partitions
$\lambda$. Identity \eqref{eq:i11} is called the Cauchy identity,
and \eqref{eq:i56} is called the dual Cauchy identity.
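For $\lambda=(2,1)$ and $n=3$, the bialternant~\eqref{eq:i52}, the Jacobi-Trudi determinant~\eqref{eq:i53}, and, since $(2,1,0)=\delta_3$, the product formula~\eqref{eq:i68} can all be compared symbolically; a sketch (helper names ours; $h_r=0$ for $r<0$ is the standard convention needed in~\eqref{eq:i53}):

```python
from sympy import symbols, Matrix, cancel, expand
from itertools import combinations, combinations_with_replacement

xs = symbols('x1:4')           # n = 3 variables
lam = (2, 1, 0)                # the partition (2,1), which is also delta_3
delta = (2, 1, 0)

def mul(seq):
    r = 1
    for t in seq:
        r *= t
    return r

def alternant(alpha):
    return Matrix(3, 3, lambda i, j: xs[i]**alpha[j]).det()

# Bialternant definition (eq:i52)
s_bialt = cancel(alternant(tuple(l + d for l, d in zip(lam, delta)))
                 / alternant(delta))

def h(r):                      # complete symmetric polynomial; h_r = 0 for r < 0
    return 0 if r < 0 else sum(mul(c) for c in combinations_with_replacement(xs, r))

# Jacobi-Trudi (eq:i53) and the product formula (eq:i68)
s_jt = Matrix(3, 3, lambda i, j: h(lam[i] - (i + 1) + (j + 1))).det()
prod_form = mul([xs[i] + xs[j] for i, j in combinations(range(3), 2)])

assert expand(s_bialt - s_jt) == 0
assert expand(s_bialt - prod_form) == 0
print(expand(s_bialt))
```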
There is a \emph{fundamental involution} $\omega$, a ring
automorphism, defined on the ring of symmetric functions:
\begin{equation}
\label{eq:i61}
\omega(e_r)=h_r.
\end{equation}
Using \eqref{eq:i48}, we can prove that
\begin{equation}
\label{eq:i62}
\omega(h_r)=e_r.
\end{equation}
We also have
\begin{equation}
\label{eq:i63}
\omega(s_\lambda)=s_{\lambda'}, \ \textrm{and}\ \omega(p_n)=(-1)^{n-1}p_n.
\end{equation}
\section{Terms of the asymptotic expansion}
\label{sec:reform-probl}
We begin by rewriting the integrand on the right hand side of~\eqref{eq:Qkresidue}
as a ratio of a holomorphic function and a
monomial. The function $G_\pm(z_1,\dotsc,z_k)$ in \reff{eq:Gdefinition} has a
pole in each variable $z_j$ at $(0,\dotsc,0)$ coming from the product of the
zeta functions.
These poles are eliminated by a portion of the
Vandermonde determinants. Note that
\begin{equation}
\label{eq:148}
\Delta(z_1^2,\ldots,z_k^2)^2=
\left(\prod_{1\leq i \leq j \leq k} (z_i+z_j) \right)
\frac{\Delta(z_1,\dotsc,z_k)
\Delta(z_1^2,\dotsc,z_k^2)}{2^k \prod_{j=1}^k z_j}.
\end{equation}
Specifically each factor $(z_i+z_j)$ occurring here cancels a pole coming
from $\zeta(1+z_i+z_j)$. We obtain \eqref{eq:148} by observing
\begin{align} \nonumber
\Delta(z_1^2,\ldots,z_k^2) &= \prod_{1\leq i<j\leq k}(z_j^2 - z_i^2)
= \Delta(z_1,\ldots,z_k)\prod_{1\leq i<j\leq k} (z_i+z_j)\\ \label{eq:233}
& = \Delta(z_1,\ldots,z_k)\frac{\prod_{1\leq i\leq j\leq k} (z_i+z_j)}{2^k \prod_{j=1}^k z_j}.
\end{align}
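The algebraic identity~\eqref{eq:148} can be verified symbolically for small $k$; a sketch for $k=3$ (helper names ours):

```python
from sympy import symbols, simplify

k = 3
z = symbols('z1:4')

def mul(seq):
    r = 1
    for t in seq:
        r *= t
    return r

def vand(w):  # Delta(w) = prod_{i<j} (w_j - w_i), as in (eq:1)
    return mul([w[j] - w[i] for i in range(k) for j in range(i + 1, k)])

lhs = vand([t**2 for t in z])**2
rhs = (mul([z[i] + z[j] for i in range(k) for j in range(i, k)])
       * vand(z) * vand([t**2 for t in z]) / (2**k * mul(z)))
assert simplify(lhs - rhs) == 0
print("identity (eq:148) checks out for k = 3")
```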
Substituting \eqref{eq:148} into \reff{eq:Qkresidue}, we have
\begin{multline}
\label{eq:Qk}
Q_{\pm}(k,x)=
\frac{(-1)^{k(k-1)/2}}{k!} \frac {1}{(2\pi i)^k} \oint \cdots
\oint A_k(z_1,\ldots,z_k) \\ \prod_{j=1}^k X(\tfrac 1 2 +z_j,a)^{-\frac
1 2} \prod_{1\leq i \leq j \leq k} (z_i+z_j) \zeta(1+z_i+z_j) \\
\frac{\Delta(z_1,\ldots,z_k)\Delta(z_1^2,\ldots,z_k^2)}{\prod_{j=1}^k
z_j^{2k-1} \prod_{j=1}^k z_j}\, \exp{\bigg(\frac x 2 \sum_{j=1}^k z_j\bigg)} \,
dz_1\ldots dz_k.
\end{multline}
Now the integrand is written as a ratio of a function which is holomorphic
in a neighbourhood of $(0,\dotsc,0)$ and a monomial.
Recall that $a_k=A_k(0,\ldots,0)$.
Let $z=(z_1,\ldots,z_k)$ and \index{$m_\lambda$}$m_\lambda(z)$ be the monomial symmetric
polynomial defined in \eqref{eq:i41}. Let
\begin{equation}
\label{eq:powerseries}
\sum_{i=0}^\infty \sum_{|\lambda|=i}\index{$b^\pm_\lambda(k)$}b^\pm_\lambda(k) m_\lambda(z)
\end{equation}
be the power series expansion of
\begin{equation}
\label{eq:145}
\frac{1}{a_k} A_k(z_1,\ldots,z_k) \prod_{j=1}^k X(\tfrac 1 2 +z_j,a)^{-\frac
1 2} \prod_{1\leq i \leq j \leq k} (z_i+z_j) \zeta(1+z_i+z_j).
\end{equation}
Here, the coefficients $b^+_{\lambda}$ are associated to the case $a=0$, and
the $b^-_{\lambda}$ to the case $a=1$, as in~\eqref{eq:X}.
In \eqref{eq:powerseries}, the inner sum is over all partitions $\lambda$ with
$\lambda_1+\ldots+\lambda_k=i$ and
$\lambda_1\geq \lambda_2\geq \cdots \geq \lambda_k \geq 0$.
We divide the expression by $a_k$ to ensure that the constant term in
the power series is $1$. We shall calculate the Taylor series of
\eqref{eq:145} by calculating the Taylor series of its logarithm. This
calculation is simpler if the constant term is $1$, i.e. $b^\pm_{\bf 0}(k)=1$ in
\eqref{eq:powerseries}.
So \reff{eq:Qk} becomes
\begin{multline}\label{eq:sumofintegrals}
Q_{\pm}(k,x) = \frac{(-1)^{k(k-1)/2}}{k!} \frac {a_k}{(2\pi i)^k}
\sum_{i=0}^\infty \sum_{|\lambda|=i}b^\pm_\lambda(k)\oint \cdots\oint m_\lambda
(z_1,\dotsc, z_k) \\ \frac{\Delta(z_1,\dotsc,z_k)\Delta(z_1^2,\dotsc,z_k^2)}
{\prod_{j=1}^k z_j^{2k}} \exp{\left(\frac x 2 \sum_{j=1}^k z_j\right)}\,
dz_1\dots dz_k.
\end{multline}
Only finitely many integrals in the sum \eqref{eq:sumofintegrals}
are nonzero. Each of the integrals in \eqref{eq:sumofintegrals} picks up the
coefficient of $z_1^{2k-1}\dots z_k^{2k-1}$ in the Taylor expansion
of the numerator of the corresponding integrand. If $\deg m_\lambda(z_1,\dotsc,z_k)+\deg
\Delta(z_1,\dotsc,z_k) + \deg \Delta(z_1^2,\dotsc,z_k^2) > \deg
(z_1^{2k-1}\dots z_k^{2k-1})$,
that is $|\lambda| > k(k+1)/2$,
then in the Taylor expansion of the numerator of
\eqref{eq:sumofintegrals} the coefficient of $z_1^{2k-1}\dots
z_k^{2k-1}$ is $0$.
Given $k$, and a $\lambda$ in the sum
\eqref{eq:sumofintegrals}, the coefficient of the
monomial $z_1^{2k-1}\dots z_k^{2k-1}$ in the Taylor expansion of the
numerator of the integrand is a constant, depending on $\lambda$ and $k$,
times $x^{\frac{k(k+1)}{2}-|\lambda|}$.
\subsection{The leading term}
\label{sec:leading-term}
In this section, we shall calculate the leading coefficient of $Q_{\pm}(k,x)$,
i.e. the coefficient $c_\pm(0,k)$ of $x^{\frac{k(k+1)}{2}}$. The calculation will
also provide insight into how to calculate the lower order terms of $Q_{\pm}(k,x)$.
The leading coefficient is the same for $Q_+(k,x)$ and for $Q_-(k,x)$, and is given
in the following proposition.
\begin{proposition}
\label{prop:3}
The leading coefficient $c_{\pm}(0,k)$ of $Q_{\pm}(k,x)$ in \reff{eq:Qexpansion} is
\begin{equation}
\frac{ a_k}{ 2^{k} }
\prod_{j=0}^{k-1}\frac{(2j)!}{(k+j)!}.
\end{equation}
\end{proposition}
The leading term in~\reff{eq:Qexpansion} corresponds to the $i=0$ term
of \eqref{eq:sumofintegrals}. In this case there is only one integral
within the inner summation sign, giving
\begin{equation}
\label{eq:75}
c_\pm(0,k)x^{k(k+1)/2} = \frac{(-1)^{\frac{k(k-1)}2}}{k!(2\pi i)^k}a_k \oint\cdots\oint
\frac{\Delta(z_1,\ldots,z_k)\Delta(z_1^2,\ldots, z_k^2)}{
\prod_{j=1}^{k}z_j^{2k}} \exp({\tfrac x 2 \sum_{j=1}^k z_j}) dz_1\ldots dz_k.
\end{equation}
Substituting $u_j=x z_j/2$, simplifying, and then relabeling $u_j$ with $z_j$, we obtain
\begin{equation}
\label{eq:183}
c_\pm(0,k)x^{k(k+1)/2}= \frac{(-1)^{\frac{k(k-1)}{2}}}{k!(2\pi i)^k} a_k \left(\frac x 2
\right)^{\frac {k(k+1)}{2}} \oint\cdots\oint
\frac {\Delta(z_1,\ldots,z_k)\Delta(z_1^2,\ldots,z_k^2)} {\prod_{j=1}^k z_j^{2k}} \exp({\sum_{j=1}^k z_j}) \,
dz_1\cdots dz_k.
\end{equation}
The presence of the Vandermonde determinants prevents us from separating the integrals.
However, we apply the following trick to move the Vandermonde determinants outside the integral.
Introduce new variables $x_1,\dotsc,x_k$ and
consider the more general integral
\begin{align}
I(x_1,\ldots,x_k):=
\frac{1}{(2\pi i)^k}
\oint\cdots\oint
\frac {\Delta(z_1,\ldots,z_k)\Delta(z_1^2,\ldots,z_k^2)}
{\prod_{j=1}^k z_j^{2k}} \label{eq:Ixdefinition}
\exp({\sum_{j=1}^k x_j z_j}) \,
dz_1\cdots dz_k.
\end{align}
Thus, the evaluation of $c_{\pm}(0,k)$ boils down to determining $I(1,\ldots,1)$.
Next, we introduce a partial differential operator that allows us to move the
Vandermonde determinants outside the integral.
Note that for a polynomial $P(x_1,\dotsc,x_k)$ in $k$ variables, we have
\begin{equation}
\label{eq:185}
P\left(\frac{\partial}{\partial x_1},\dotsc,
\frac{\partial}{\partial x_k} \right) \exp\left(
\sum_{j=1}^kx_j z_j \right) = P(z_1,\dotsc,z_k) \exp\left(
\sum_{j=1}^kx_j z_j \right).
\end{equation}
We set
\begin{equation}
\label{eq:246}
q(z_1,\ldots,z_k) := \Delta(z_1,\ldots,z_k)\Delta(z_1^2,\ldots,z_k^2).
\end{equation}
Then \eqref{eq:Ixdefinition} equals
\begin{equation}
\label{eq:186}
\frac{1}{(2\pi i)^k}
\oint\cdots\oint
q\left(\frac{\partial}{\partial
x_1},\dotsc,\frac{\partial}{\partial x_k} \right)\frac{
\exp({\sum_{j=1}^k x_j z_j}) }
{\prod_{j=1}^k z_j^{2k}} \,
dz_1\cdots dz_k.
\end{equation}
Pulling the differential operator outside the integral (Leibniz's
rule) we conclude
that \eqref{eq:186} equals
\begin{equation}
\label{eq:187}
q\left(\frac{\partial}{\partial x_1},\dotsc,\frac{\partial}{\partial x_k} \right)
\frac{1}{(2\pi i)^k}
\oint\cdots\oint
\frac{
\exp({\sum_{j=1}^k x_j z_j}) }
{\prod_{j=1}^k z_j^{2k}} \,
dz_1\cdots dz_k.
\end{equation}
The multiple integral in~\eqref{eq:187} can be written as a product of integrals
in one variable,
\begin{equation}
\label{eq:188}
q\left(\frac{\partial}{\partial x_1},\dotsc,\frac{\partial}{\partial x_k} \right)
\frac{1}{(2\pi i)^k}
\prod_{j=1}^k \oint \frac{\exp({x_jz_j})}{ z_j^{2k}} dz_j .
\end{equation}
Each integral in the above product can be
evaluated by expanding $\exp(x_jz_j)=\sum_{n=0}^\infty (x_jz_j)^n/n!$;
the contour integral extracts the coefficient of $z_j^{2k-1}$, namely
$x_j^{2k-1}/(2k-1)!$, and thus \eqref{eq:188} equals
\begin{equation}
\label{eq:189}
q\left(\frac{\partial}{\partial x_1},\ldots,\frac{\partial}{\partial
x_k}\right) \prod_{i=1}^k \frac{x_i^{2k-1}}{(2k-1)!}.
\end{equation}
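As an aside, the residue computation above is straightforward to confirm numerically: parametrizing the unit circle and applying the trapezoidal rule, which converges very rapidly for smooth periodic integrands, recovers $x_j^{2k-1}/(2k-1)!$. The following sketch (with function names of our own choosing, purely for illustration) checks this for small $k$.

```python
import cmath
from math import factorial, pi

def contour_coefficient(x, k, n=4000):
    """Approximate the contour integral of exp(x z)/z^(2k), divided by
    2*pi*i, over the unit circle, using the trapezoidal rule (which is
    extremely accurate for smooth periodic integrands)."""
    total = 0j
    for m in range(n):
        z = cmath.exp(2j * pi * m / n)
        # dz = i z dtheta, so the sample's contribution to the
        # normalized integral is f(z) * z / n
        total += cmath.exp(x * z) / z ** (2 * k) * z / n
    return total

# The residue computation predicts x^{2k-1}/(2k-1)!.
for k in (1, 2, 3):
    assert abs(contour_coefficient(1.0, k) - 1.0 / factorial(2 * k - 1)) < 1e-9
```

Only the residue at $z_j=0$ matters, so any contour enclosing the origin gives the same value; the unit circle is chosen for convenience.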
We have turned our computation of $c_\pm(0,k)$ into the question of determining
the result of applying $q(\frac{\partial}{\partial
x_1},\dotsc,\frac{\partial}{\partial x_k})$ to $\prod_{i=1}^k
\frac{x_i^{2k-1}}{(2k-1)!}$, and finding the value of the resulting polynomial
at $(1,\dotsc,1)$. This calculation is done in Lemma \ref{lem:8}. The proof of
Lemma \ref{lem:8} uses Lemmas \ref{lem:10}, and \ref{lem:9}.
Lemma~\ref{lem:10}, a variant of Lemma 2.1 in \cite{CFKRS2}, gives a formula
for applying the differential operator $ \Delta\left(\frac{\partial^2}{\partial
x^2_1},\ldots,\frac{\partial^2}{\partial x^2_k}\right)$ to a product of
functions.
\begin{lemma}
\label{lem:10}
Let $f$ be a $(2k-2)$-times differentiable function of one variable. Then
\begin{equation}
\Delta\left(\frac{\partial^2}{\partial
x_1^2},\ldots,\frac{\partial^2}{\partial x_k^2}\right) \prod_{i=1}^k f(x_i) =
\left| f^{(2j-2)}(x_i) \right|_{k\times k}.
\end{equation}
\end{lemma}
\begin{proof}
Write
\begin{equation}
\Delta\left(\frac{\partial^2}{\partial
x^2_1},\ldots,\frac{\partial^2}{\partial x^2_k}\right) =
\left| \frac{\partial^{2j-2}}{\partial x_i^{2j-2}} \right|_{k\times k}.
\end{equation}
Applying this to $\prod_{i=1}^k f(x_i)$, and noticing that $x_i$ only appears
in the $i$-th row of the determinant, we can move $f(x_i)$ into that row,
which yields $\left| f^{(2j-2)}(x_i) \right|_{k\times k}$.
\end{proof}
Lemma \ref{lem:9} gives a formula for applying a product of
differentials to a determinant of functions.
\begin{lemma} \label{lem:9}
Let $f_1(x),\dotsc,f_k(x)$ be smooth functions of one
variable. Then
\begin{equation}
\label{eq:79}
\frac{\partial^{n_1}}{\partial x_1^{n_1}} \dots
\frac{\partial^{n_k}}{\partial x_k^{n_k}}
\begin{vmatrix}
f_1(x_1)& \hdotsfor{2} & f_k(x_1)\\
\vdots & \ddots & & \vdots \\
\vdots & &\ddots & \vdots \\
f_1(x_k)&\hdotsfor{2}& f_k(x_k)
\end{vmatrix} = \begin{vmatrix}
f_1^{(n_1)}(x_1)& \hdotsfor{2} & f_k^{(n_1)}(x_1)\\
\vdots & \ddots & & \vdots \\
\vdots & &\ddots & \vdots \\
f_1^{(n_k)}(x_k)&\hdotsfor{2}& f_k^{(n_k)}(x_k)
\end{vmatrix}.
\end{equation}
\end{lemma}
\begin{proof}
The variable $x_m$ appears only in the $m$-th row of the determinant on the
left hand side of \eqref{eq:79}. Expanding the determinant, each term contains
exactly one entry from the $m$-th row, so $\partial^{n_m}/\partial x_m^{n_m}$
differentiates that entry $n_m$ times and leaves the other rows unchanged.
Doing this for each $m=1,\dotsc,k$ gives the right hand side.
\end{proof}
\begin{lemma}
\label{lem:8}Let $q(z_1,\ldots,z_k)= \Delta(z_1,\ldots,z_k)
\Delta(z_1^2,\ldots,z_k^2)$, then
\begin{equation}
\label{eq:155}
q\left(\frac{\partial}{\partial x_1},\dotsc,\frac{\partial}{\partial
x_k}\right) \prod_{i=1}^k \frac{x_i^{2k-1}}{(2k-1)!}
\end{equation}
evaluated at $(x_1,\dotsc, x_k)=(1, \dotsc ,1)$ is
\begin{equation}
\label{eq:190}
(-1)^{\frac{k(k-1)}{2}} \times k! \left(\prod_{j=0}^{k-1}
\frac{(2j)!}{(k+j)!}\right) 2^{\frac{k(k-1)}{2}} .
\end{equation}
\end{lemma}
\begin{proof} To prove the Lemma, we relate the value of
\eqref{eq:155} evaluated at $(x_1,\dotsc, x_k)=(1,\dotsc,1)$ to a
determinant of a matrix whose entries are binomial
coefficients. We then use an identity for binomial coefficients to
rewrite the determinant as a product of two determinants, and
evaluate each of them separately.
Applying Lemma \ref{lem:10}, we can deduce that
\begin{align}
\Delta\left(\frac{\partial}{\partial
x_1},\ldots,\frac{\partial}{\partial x_k}\right)
\Delta\left(\frac{\partial^2}{\partial
x_1^2},\ldots,\frac{\partial^2}{\partial
x_k^2}\right)\prod_{j=1}^k f(x_j)
\end{align}
equals
\begin{equation}
\label{eq:259}
\Delta\left(\frac{\partial}{\partial x_1},\ldots,\frac{\partial}{\partial
x_k}\right) \left| f^{(2(j-1))}(x_i) \right|_{k\times k}.
\end{equation}
Expanding the Vandermonde determinant of partial differential
operators, we obtain
\begin{equation}
\label{eq:74}
\sum_{\mu \in S_k} {\text{sgn}}( \mu) \frac{\partial^{\mu_1-1}}{\partial
x_1^{\mu_1-1}}\dots\frac{\partial^{\mu_k-1}}{\partial x_k^{\mu_k-1}}
\begin{vmatrix}
f(x_1) & f^{(2)}(x_1) &\cdots & f^{(2(k-1))}(x_1) \\
f(x_2) & f^{(2)}(x_2) &\cdots & f^{(2(k-1))}(x_2) \\
\vdots & \vdots & \ddots & \vdots \\
f(x_k) & f^{(2)}(x_k) &\cdots & f^{(2(k-1))}(x_k)
\end{vmatrix},
\end{equation}
where $(\mu_1,\dotsc,\mu_k)$ is the image of $(1,\dotsc,k)$ under the permutation $\mu$. Applying
Lemma \ref{lem:9}, we can see that \eqref{eq:74} equals
\begin{equation}
\label{eq:73}
\sum_{\mu \in S_k} {\text{sgn}}( \mu)
\begin{vmatrix}
f^{(\mu_1 -1)}(x_1) & f^{(\mu_1 +1)}(x_1) &\cdots & f^{(\mu_1 -1 +2(k-1))}(x_1) \\
f^{(\mu_2 -1)}(x_2) & f^{(\mu_2 +1)}(x_2) &\cdots & f^{(\mu_2 -1 +2(k-1))}(x_2) \\
\vdots & \vdots & \ddots & \vdots \\
f^{(\mu_k - 1)}(x_k) & f^{(\mu_k + 1)}(x_k) &\cdots & f^{(\mu_k - 1 +2(k-1))}(x_k)
\end{vmatrix}.
\end{equation}
Let \index{$f(x)$}$f(x)=\frac{x^{2k-1}}{(2k-1)!}$.
Expression \eqref{eq:73} evaluated at $(x_1,\dotsc,x_k)=(1,\dotsc,1)$ is
\begin{equation}
\label{eq:239}
\sum_{\mu \in S_k} {\text{sgn}}( \mu)
\begin{vmatrix}
\frac{1}{(2k-\mu_1)!} &\frac{1}{(2k-\mu_1-2)!} &\cdots & \frac{1}{(-\mu_1+2)!}\\
\frac{1}{(2k-\mu_2)!} &\frac{1}{(2k-\mu_2-2)!} &\cdots & \frac{1}{(-\mu_2+2)!}\\
\vdots & \vdots & \ddots & \vdots \\
\frac{1}{(2k-\mu_k)!} &\frac{1}{(2k-\mu_k-2)!} &\cdots & \frac{1}{(-\mu_k+2)!}
\end{vmatrix}.
\end{equation}
Rearranging the rows to cancel the effect of $\mu$ (this introduces another ${\text{sgn}}(\mu)$ in
front of the determinant), we find that \eqref{eq:239} equals
\begin{equation}
\label{eq:213}
k!
\begin{vmatrix}
\frac {1} {(2k-1)!} & \frac {1}{(2k-3)!} &\cdots & \frac{1}{1!} \\
\frac {1} {(2k-2)!} & \frac {1}{(2k-4)!} &\cdots & \frac{1}{0!} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{1}{k!} & \frac {1}{(k-2)!} &\cdots & 0
\end{vmatrix}.
\end{equation}
We can convert the determinant \eqref{eq:213} into a determinant of
matrices whose entries are binomial coefficients. Multiplying
the $j^\textrm{th}$ column by $\frac 1{(2(j-1))!} $ and the
$i^\textrm{th}$ row by $(2k-i)!$, we see that \eqref{eq:213} equals
\begin{equation}
\label{eq:159}
k! \frac{0! 2! \cdots (2k-2)!}{(2k-1)! (2k-2)! \cdots k!}
\begin{vmatrix}
\binom{2k-1}{0} & \binom{2k-1}{2} & \cdots & \binom{2k-1}{2k-2}\\
\binom{2k-2}{0} & \binom{2k-2}{2} & \cdots & \binom{2k-2}{2k-2}\\
\vdots & \vdots & \ddots & \vdots \\
\binom{k}{0} & \binom{k}{2} &\cdots &\binom{k}{2k-2}
\end{vmatrix}.
\end{equation}
The determinant in \eqref{eq:159} is
\begin{equation}
\label{eq:143}
\left|\binom{2k-i}{2j-2} \right|_{k\times k}.
\end{equation}
In Section~\ref{sec:determinant} we study this determinant. From \eqref{eq:i1}
and Corollary~\ref{cor:1}, the determinant of this matrix equals $(-2)^{\binom
k 2}$. The factor $(-1)^{\binom k 2}$ arises because the columns of $D_0(k)$
in~\eqref{eq:i1} are in the reverse order from those of the above
determinant.
\end{proof}
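As an independent sanity check on Lemma~\ref{lem:8}, one can, for small $k$, expand $q$ explicitly as a polynomial, apply the resulting differential operator to $\prod_i x_i^{2k-1}/(2k-1)!$ monomial by monomial, and compare the value at $(1,\dotsc,1)$ with \eqref{eq:190} in exact rational arithmetic. A sketch (function names ours, for illustration only):

```python
from fractions import Fraction
from math import factorial

def poly_mul(p, q, k):
    """Multiply two polynomials in k variables, stored as dicts mapping
    exponent tuples to integer coefficients."""
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(e1[i] + e2[i] for i in range(k))
            r[e] = r.get(e, 0) + c1 * c2
    return r

def vandermonde(k, power):
    """prod_{i<j} (z_j^power - z_i^power) as an exponent dict."""
    p = {tuple([0] * k): 1}
    for i in range(k):
        for j in range(i + 1, k):
            ej = [0] * k; ej[j] = power
            ei = [0] * k; ei[i] = power
            p = poly_mul(p, {tuple(ej): 1, tuple(ei): -1}, k)
    return p

def lemma8_lhs(k):
    """q(d/dx_1,...,d/dx_k) applied to prod_i x_i^{2k-1}/(2k-1)!,
    evaluated at x_1 = ... = x_k = 1, in exact arithmetic."""
    q = poly_mul(vandermonde(k, 1), vandermonde(k, 2), k)
    total = Fraction(0)
    for exps, c in q.items():
        term = Fraction(c)
        for a in exps:
            if a > 2 * k - 1:       # the derivative kills this monomial
                term = Fraction(0)
                break
            term /= factorial(2 * k - 1 - a)
        total += term
    return total

def lemma8_rhs(k):
    """The closed form asserted in the lemma."""
    pr = Fraction(factorial(k))
    for j in range(k):
        pr *= Fraction(factorial(2 * j), factorial(k + j))
    return (-1) ** (k * (k - 1) // 2) * 2 ** (k * (k - 1) // 2) * pr

for k in range(1, 5):
    assert lemma8_lhs(k) == lemma8_rhs(k)
```

For $k=2$, both sides equal $-2/3$, matching a direct hand expansion of $q(z_1,z_2)=(z_2-z_1)(z_2^2-z_1^2)$.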
Applying Lemma \ref{lem:8} to (\ref{eq:189}), we find that the
leading term is:
\begin{align}
\label{eq:161}
\notag & \frac{a_k}{k!} \left(\frac x 2 \right)^{\frac{k(k+1)}{2}} \left(k!
\frac{0! 2!\cdots (2k-2)!}{(2k-1)!\cdots k!} \right) 2^{\frac{k(k-1)}{2}}\\
=& \frac{a_k}{2^k}
\prod_{j=0}^{k-1} \frac{(2j)!}{(k+j)!}
x^{k(k+1)/2}.
\end{align}
Hence the coefficient of the leading term is
\begin{equation}
\frac{ a_k}{ 2^{k} }
\prod_{j=0}^{k-1}\frac{(2j)!}{(k+j)!}.
\end{equation}
This proves Proposition \ref{prop:3}.
\subsection{Further lower order terms}
\label{sec:further-lower-order}
In this section we consider a general term occurring in the
sum of integrals \eqref{eq:sumofintegrals}. Let $\lambda$ be
a partition. We shall calculate
\begin{multline}
\label{eq:194}
\frac{(-1)^{k(k-1)/2} 2^k}{k!} \frac {a_k}{(2\pi i)^k}
b^\pm_\lambda(k)\oint \cdots\oint
m_\lambda (z_1,\dotsc, z_k) \\
\frac{\Delta(z_1,\dotsc,z_k)\Delta(z_1^2,\dotsc,z_k^2)}
{2^k\prod_{j=1}^k z_j^{2k}}\exp\left(\frac x 2 \sum_{j=1}^kz_j \right) dz_1\dots dz_k.
\end{multline}
Modifying the approach of the previous section to incorporate
the extra factor $m_\lambda(z_1,\dotsc,z_k)$, we define
\begin{equation}
q_\lambda(z_1,\dotsc , z_k) =m_\lambda (z_1,\dotsc,z_k)
\Delta(z_1,\dotsc,z_k) \Delta(z_1^2,\dotsc,z_k^2) .
\end{equation}
Following the same steps as in the evaluation of the leading term,
expression \eqref{eq:194} becomes
\begin{equation}\label{generalpartition}
\frac{(-1)^{\frac{k(k-1)}2} a_k b^\pm_\lambda(k)}{k!}\left(\frac x
2\right)^{\frac{k(k+1)}2-|\lambda|} \left( q_\lambda\left(\frac{\partial}{\partial
x_1},\dotsc,\frac{\partial}{\partial x_k} \right) \prod_{j=1}^k
\frac{x_j^{2k-1}} {(2k-1)!}\right)_{\textrm{evaluated at } {x_j=1.}}
\end{equation}
This section is devoted to calculating \eqref{generalpartition}.
Let $f(x)={x^{2k-1}}/{(2k-1)!} $. Let
\index{$\lvert\lambda\rvert$}$|\lambda|=\sum_i \lambda_i$,
and let \index{$l(\lambda)$}$l(\lambda)$ denote the length of $\lambda$, i.e.\
the number of non-zero parts of $\lambda$, so that
$\lambda_j=0$ for $j>l(\lambda)$.
Let $m_j(\lambda)$ be the number of $j$'s in the partition $\lambda$, so that
$|\lambda| = m_1(\lambda)+2m_2(\lambda)+3m_3(\lambda)+\ldots$.
There are
$\binom{k}{l(\lambda)}\binom{l(\lambda)}{m_1(\lambda),m_2(\lambda),\dotsc}$
monomials in $m_\lambda(x_1,\dotsc,x_k)$ (\cite{Sta99}, 7.8). Here
\index{$\binom{l(\lambda)}{m_1(\lambda),m_2(\lambda),\dotsc}$}
$\binom{l(\lambda)}{m_1(\lambda),m_2(\lambda),\dotsc}$
is the multinomial coefficient. Since we are working with symmetric functions,
it is enough to compute \eqref{eq:194}, i.e.
\eqref{generalpartition}, for one
monomial of $m_\lambda\left(\frac{\partial}{\partial
x_1},\dots,\frac{\partial}{\partial x_k}\right)$. Therefore,
\begin{equation}\label{eq:72}
q_\lambda\left(\frac{\partial}{\partial x_1},\dotsc,\frac{\partial}{\partial
x_k} \right) \prod_{j=1}^k f(x_j)
\Bigg\vert_{\textrm{ evaluated at } x_j=1}
\end{equation}
equals
\begin{equation}
\label{eq:254}
\binom{k}{l(\lambda)}
{l(\lambda) \choose m_1(\lambda),m_2(\lambda),\dots} \frac{\partial^{|\lambda|}}{\partial
x_1^{\lambda_1}\dots\partial x_{l(\lambda)}^{\lambda_{l(\lambda)}}}\Delta\left(
\frac{\partial }{\partial x_1},\dots,
\frac{\partial }{\partial x_k} \right)\Delta \left(
\frac{\partial^2 }{\partial x_1^2},\dots,
\frac{\partial^2 }{\partial x_k^2} \right)\prod_{j=1}^k f(x_j)
\end{equation}
evaluated at $(x_1,\dotsc,x_k)=(1,\dotsc,1)$. We already have the
expression for the effect of Vandermonde determinant operators in
\eqref{eq:73}. Therefore by Lemma~\ref{lem:9}, the expression
\eqref{eq:254} equals
\begin{equation}
\label{eq:156}\binom{k}{l(\lambda)}
{l(\lambda) \choose m_1(\lambda),m_2(\lambda),\dots} \frac{\partial^{|\lambda|}}{\partial
x_1^{\lambda_1}\dots\partial x_{l(\lambda)}^{\lambda_{l(\lambda)}}} \sum_{\mu \in S_k}
{\text{sgn}}(\mu) \det \left(f^{(\mu_i -1 +2(j-1))}(x_i) \right).
\end{equation}
Carrying out the differentiations and setting each $x_i=1$, expression \eqref{eq:156} equals
\begin{equation}
\label{eq:201}
\binom{k}{l(\lambda)}
{l(\lambda) \choose m_1(\lambda),m_2(\lambda),\dots}
\sum_{\mu \in S_k} {\text{sgn}}(\mu) \det \left(f^{(\mu_i -1 +2(j-1)+
\lambda_i)}(1) \right).
\end{equation}
In each summand of \eqref{eq:201}, rearrange the rows so as to
reverse the effect of $\mu$. We get
\begin{equation}
\label{eq:157} \binom{k}{l(\lambda)} {l(\lambda) \choose m_1(\lambda),m_2(\lambda),\dots}
\sum_{\nu \in S_k} \det \left(f^{(i -1 +2(j-1)+
\lambda_{\nu_i})}(1) \right).
\end{equation}
Here $\nu$ is $\mu^{-1}$. The expression \eqref{eq:157} is
\begin{equation}
\label{eq:160}\binom{k}{l(\lambda)} {l(\lambda) \choose m_1(\lambda),m_2(\lambda),\dots}
\sum_{\nu \in S_k} \det \left(
\frac{1}{\left(2k-1 -(i-1)-2(j-1)- \lambda_{\nu_i
}\right)!} \right)_{ij}.
\end{equation}
Each determinant inside the sum is of the form
\begin{equation}
\label{eq:165}
\det \left( \frac{1}{\left(2k-1 -(i-1)-2(j-1)- d_i \right)!}
\right)_{ij},
\end{equation}
and $\sum d_i =|\lambda|$. In Proposition~\ref{prop:5}, we determine a
necessary condition for the determinant \eqref{eq:165} to be non
zero. This condition will imply that a large portion of terms in
\eqref{eq:160} are zero.
\begin{prop}
\label{prop:5}
Consider the determinant
\begin{equation}
\label{eq:231}
\det \left( \frac{1}{\left(2k-1 -(i-1)-2(j-1)- d_i \right)!}
\right)_{ij}.
\end{equation}
Assume that $\sum_{i=1}^k d_i =|\lambda|$, with $d_i \in \mathbb{Z}_{\geq 0}$.
The determinant \eqref{eq:231} is zero if any of
$d_1,\dotsc,d_{k-|\lambda|}$ is non-zero.
\end{prop}
\begin{proof} Let $u$ be a number between $1$ and $k$
such that $d_u$ is non zero. The $u^\textrm{th}$ row in the matrix is
\begin{equation}
\label{eq:166}
\left(
\frac{1}{\left(2k-1 -(u-1)-2(j-1)- d_u \right)!} \right)_{1\leq j
\leq k}.
\end{equation}
Now look at the row which is $d_u$ rows below the row $u$ in the matrix
\eqref{eq:231}. Let this be row $v$ where $v=u+d_u$. Row $v$,
\begin{equation}
\label{eq:167}
\left(
\frac{1}{\left(2k-1 -(v-1)-2(j-1)- d_v \right)!} \right)_{1\leq j
\leq k},
\end{equation}
is identical to row $u$ if $d_v$ is zero. This gives a
necessary condition for the matrix to have a non-zero determinant:
for every $u$ such that $d_u\neq 0$, either $d_{u+d_u}$ is also non-zero or $
u+d_u >k$. Following this cascading process, we see that if we start at or
above row $k-|\lambda|$, that is if $d_u \neq 0$ for some $u\leq
k-|\lambda|$, then the chain cannot pass row $k$, since the $d_i$ sum to
$|\lambda|$. Hence we will have two identical rows. We conclude that
the determinants in \eqref{eq:231} can be non-zero only when $d_u =0$ for
$1\leq u \leq k-|\lambda|$.
\end{proof}
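Proposition~\ref{prop:5} is easy to test in exact arithmetic, using the convention $1/n!=0$ for negative $n$. A sketch (function names ours, for illustration only):

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def inv_factorial(n):
    """1/n!, with the convention that 1/n! = 0 for negative integers n."""
    return Fraction(1, factorial(n)) if n >= 0 else Fraction(0)

def det(m):
    """Determinant via the Leibniz (permutation) expansion, exact arithmetic."""
    k = len(m)
    total = Fraction(0)
    for perm in permutations(range(k)):
        sign = 1
        for a in range(k):
            for b in range(a + 1, k):
                if perm[a] > perm[b]:
                    sign = -sign
        term = Fraction(sign)
        for a in range(k):
            term *= m[a][perm[a]]
        total += term
    return total

def prop5_det(k, d):
    """The determinant of the proposition, with 0-based i, j in place
    of i-1, j-1."""
    return det([[inv_factorial(2 * k - 1 - i - 2 * j - d[i])
                 for j in range(k)] for i in range(k)])

# k = 3, |lambda| = 1: the determinant must vanish whenever d_u != 0
# for some u <= k - |lambda| = 2.
assert prop5_det(3, (1, 0, 0)) == 0
assert prop5_det(3, (0, 1, 0)) == 0
# The allowed shift d = (0, 0, 1) does give a non-zero determinant.
assert prop5_det(3, (0, 0, 1)) != 0
```

In the first two cases two rows of the matrix coincide, exactly as in the cascading argument of the proof.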
The above proposition tells us that, in~\eqref{eq:76}, all the action takes
place in the last $|\lambda|$ rows. Thus, let
${\bf u}=(u_k,\dotsc,u_1) $ be a permutation of $(\lambda_1,\dotsc,\lambda_k)$.
Notice that we have reversed the order of the subscripts on the $u$'s, starting
at $k$ and ending at $1$. Applying the above proposition, we shall assume
$u_i=0$ for $i>|\lambda|$. Note, however, that some of the $u_i$'s, with $i\leq
|\lambda|$ can also equal 0. For a given permutation {\bf u}, let $i({\bf u})$ be the
smallest positive integer such that $u_i=0$ for all $i>i({\bf u})$. Thus,
$i({\bf u}) \leq |\lambda|$.
Next, any two permutations that have identical non-zero $u_i$'s, i.e.
that differ only in how the $0$'s are placed, produce the same determinant. For any given way of selecting
where the non-zero $\lambda_i$'s go, there are $(k-l(\lambda))!$ ways to move around the
remaining zero-valued $\lambda_i$'s. Furthermore, permuting identical
non-zero $\lambda_i$'s also produces the same determinant. For a given permutation,
there are $m_1(\lambda)! m_2(\lambda)! \ldots$ ways to move around the identical non-zero
$\lambda_i$'s. Using the fact that
\begin{equation}
\label{eq:simplify multinomial}
\binom{k}{l(\lambda)}{l(\lambda)\choose m_1(\lambda),m_2(\lambda),\dotsc}
(k-l(\lambda))! m_1(\lambda)! m_2(\lambda)! \ldots = k!,
\end{equation}
and taking into account the above two paragraphs, expression
\eqref{eq:160} can thus be written as
\begin{equation}\label{eq:76}
k!
\sum_{\bf u}{^{'}} \left|
\begin{array}{cccccc}
\frac{1}{(2k-1)!} & \frac{1}{(2k-3)!} &\hdotsfor{3} & \frac{1}{1!} \\
\frac{1}{(2k-2)!} & \frac{1}{(2k-4)!} & \hdotsfor{3} & \frac{1}{0!}\\
\vdots & \vdots & & & & \vdots \\
\frac{1}{(k+i({\bf u}))!} & \frac{1}{(k+i({\bf u})-2)!} && & &\vdots\\
& && & & \\
\hdashline\\
\frac{1}{(k+i({\bf u})-1-u_{i({\bf u})})!} & \frac{1}{(k+i({\bf u})-3-u_{i({\bf u})})!} & & & &\vdots \\
\vdots & \vdots & & && \vdots \\
\frac{1}{(k-u_1)!} & \frac{1}{(k-u_1-2)!} &\hdotsfor{3}&0
\end{array}
\right|.
\end{equation}
There are $k-i({\bf u})$ rows above the
horizontal dashed line and $i({\bf u})$ rows below it. The sum is
over {\em distinct} permutations $(u_k,\dotsc,u_1)$ of
$(\lambda_1,\dotsc,\lambda_k)$, satisfying $u_i=0$ for $i>|\lambda|$. Note
that, in order for a given permutation $\bf u$ to appear in the sum, we require
that $k \geq i({\bf u})$.
We may also reduce the number of terms in the sum by excluding matrices where
two or more rows of the matrix are identical. The $'$ on the sum indicates that
such terms have been excluded from the sum.
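The bookkeeping that turns \eqref{eq:160} into \eqref{eq:76}, including the identity \eqref{eq:simplify multinomial}, can be checked numerically for small $k$. In the sketch below (function names ours) we compare \eqref{eq:160}, summed over all of $S_k$, with $k!$ times the sum over all distinct arrangements of the parts of $\lambda$; by Proposition~\ref{prop:5} the arrangements excluded from \eqref{eq:76} contribute $0$, so the two sums agree.

```python
from fractions import Fraction
from itertools import permutations
from math import comb, factorial

def inv_factorial(n):
    """1/n!, with 1/n! = 0 for negative n."""
    return Fraction(1, factorial(n)) if n >= 0 else Fraction(0)

def det(m):
    """Determinant via the Leibniz expansion, in exact arithmetic."""
    k = len(m)
    total = Fraction(0)
    for perm in permutations(range(k)):
        sign = 1
        for a in range(k):
            for b in range(a + 1, k):
                if perm[a] > perm[b]:
                    sign = -sign
        term = Fraction(sign)
        for a in range(k):
            term *= m[a][perm[a]]
        total += term
    return total

def shifted_det(k, d):
    """The determinant with row shifts d_1,...,d_k (0-based i, j)."""
    return det([[inv_factorial(2 * k - 1 - i - 2 * j - d[i])
                 for j in range(k)] for i in range(k)])

def eq160_sum(k, lam):
    """The multinomial prefactor times the sum over all of S_k."""
    lam = tuple(lam) + (0,) * (k - len(lam))
    mults = {}
    for part in lam:
        if part:
            mults[part] = mults.get(part, 0) + 1
    l = sum(mults.values())
    multinom = factorial(l)
    for m in mults.values():
        multinom //= factorial(m)
    total = Fraction(0)
    for nu in permutations(range(k)):
        total += shifted_det(k, tuple(lam[nu[i]] for i in range(k)))
    return comb(k, l) * multinom * total

def eq76_sum(k, lam):
    """k! times the sum over distinct arrangements of the parts."""
    lam = tuple(lam) + (0,) * (k - len(lam))
    total = Fraction(0)
    for d in set(permutations(lam)):
        total += shifted_det(k, d)
    return factorial(k) * total

for lam in [(1,), (2,), (1, 1), (2, 1)]:
    assert eq160_sum(3, lam) == eq76_sum(3, lam)
```

Each distinct arrangement of the padded partition is counted $(k-l(\lambda))!\,m_1(\lambda)!\,m_2(\lambda)!\cdots$ times in the sum over $S_k$, which is exactly what \eqref{eq:simplify multinomial} converts into the factor $k!$.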
Now consider one specific term in the sum \eqref{eq:76}. As in the calculation
of the leading coefficient, multiply its $i^{\mathrm{th}}$ row by $(2k-i)!$ and
its $j^{\mathrm{th}}$ column by $1/(2(j-1))!$. This enables us to write
the determinant in a term of \eqref{eq:76} as a product of a known quantity
and a determinant of binomial coefficients,
\begin{multline}\label{eq:78}
\frac{\prod_{j=1}^k (2(j-1))! }{\prod_{i=1}^k (2k-i)!}
\times
\begin{vmatrix}
{2k-1 \choose 0} & {2k-1\choose 2} & \hdotsfor{2} & {2k-1\choose 2k-2} \\
{2k-2\choose 0} & {2k-2\choose 2} & \hdotsfor{2} & {2k-2\choose 2k-2} \\
\vdots & \vdots & \ddots & & \vdots \\
{k+i({\bf u}) \choose 0} & {k+i({\bf u}) \choose 2} & & \ddots & {k+i({\bf u})\choose 2k-2} \\
{k+i({\bf u})-1-u_{i({\bf u})} \choose 0} & {k+i({\bf u})-1-u_{i({\bf u})} \choose 2} & \hdotsfor{2}&
{k+i({\bf u})-1-u_{i({\bf u})}\choose 2k-2}\\
\vdots & \vdots & \ddots & & \vdots \\
{k-u_1\choose 0} & {k-u_1\choose 2} &\hdotsfor{2} & {k-u_1\choose
2k-2}
\end{vmatrix}\\
\times (k+i({\bf u})-1)_{u_{i({\bf u})}} (k+i({\bf u})-2)_{u_{i({\bf u})-1}}\dotsm(k)_{u_1}.
\end{multline}
Here \index{$(x)_n$}$(x)_n$ is the falling factorial
$x(x-1)\dots(x-n+1)$. The last factor, the product of falling
factorials, is a polynomial of degree $|\lambda|$ in $k$. Expressions~\eqref{eq:76}
and~\eqref{eq:78} for the lower terms are the analogue of \eqref{eq:159} for
the leading coefficient. The difference is the presence of
$u_1,\dotsc,u_{i({\bf u})}$ in the determinant and the appearance of the product
of falling factorials. The latter are accounted for by the fact that the
$(2k-i)!$ is not entirely cancelled by the numerator of the binomial
coefficients in the last $i({\bf u})$ rows.
We study the above determinant in the next section. To apply the formulas of
that section, we require $u_1 \geq u_2 \geq \ldots$, which does not typically
hold for the terms in the sum of~\eqref{eq:76}. However, by swapping adjacent
rows, we can arrange that these inequalities hold. More precisely, say that
$u_m < u_{m+1}$. We can assume that, in fact, $u_m +2 \leq u_{m+1}$ since if
$u_m+1 = u_{m+1}$ then the $m$-th and $m+1$-st rows from the bottom of the
matrix coincide, and such terms are excluded from~\eqref{eq:76} since the
determinant in such cases is $0$.
Consider what happens when we swap the $m$-th and $m+1$-st rows from the
bottom. The binomial coefficient $k+m-1-u_m \choose 2j-2$ gets switched with
$k+m-u_{m+1} \choose 2j-2$ at a cost of a sign change to the determinant. On
the other hand, the new determinant is of the same form, but
with $\bf u$ replaced by $\bf u'$,
where $u'_j=u_j$ for all $j$, except for $u'_m = u_{m+1}-1$ and $u'_{m+1} =
u_m+1$. Thus we have reversed the inequality, i.e. $u'_m \geq u'_{m+1}$. Notice also
that this swapping also satisfies $\sum u'_j = \sum u_j = |\lambda|$.
Therefore, continuing in this fashion, any given determinant in the sum
in~\eqref{eq:78} is equal, up to a power of $-1$, to the same
kind of determinant but with ${\bf u}$ replaced
by, say, $\alpha({\bf u})$,
where $\alpha$ is a partition of $|\lambda|$, i.e. with $\alpha_1 \geq \alpha_2
\geq \ldots \geq 0$. Let the power of $-1$ introduced by
the row swaps that take ${\bf u}$ to $\alpha({\bf u})$ be denoted by $n({\bf
u})$. Thus, a given determinant in~\eqref{eq:78} is equal, on performing the
row swaps, to
\begin{equation}
(-1)^{n({\bf u})+ {k \choose 2}} D_{\alpha({\bf u})}(k),
\label{eq:one term}
\end{equation}
where $D$ is the determinant defined in~\eqref{eq:i1}. The extra $(-1)^{\binom
k 2}$ arises because the columns of $D$ in~\eqref{eq:i1} are in the reverse
order from the determinants in~\eqref{eq:78}.
Therefore,
returning to equations~\eqref{eq:sumofintegrals},~\eqref{generalpartition},
and~\eqref{eq:194}, we have, on simplifying, that the coefficient $c_{\pm}(r,k)$ of
$x^{\frac{k(k+1)}2-r}$ in $Q_{\pm}(k,x)$ can be expressed as:
\begin{multline}
\label{eq:formula for c(r,k)}
\frac{a_k}{2^{\frac{k(k+1)}2-r}}
\prod_{j=0}^{k-1}\frac{(2j)!}{(k+j)!}
\\
\times
\sum_{|\lambda|=r}
b^\pm_\lambda(k)
\sum_{\bf u}{^{'}} (-1)^{n({\bf u})} D_{\alpha({\bf u})}(k)
\times (k+i({\bf u})-1)_{u_{i({\bf u})}} (k+i({\bf u})-2)_{u_{i({\bf u})-1}}\dotsm(k)_{u_1}.
\end{multline}
In Theorem~\ref{thm:i2} and Corollary~\ref{cor:1} we show that, for $k\geq
\max(l(\alpha), \alpha_1)$,
\begin{equation}
\label{eq: D alpha}
D_\alpha(k) = 2^{ {k\choose 2}-r} P_{\alpha}(k),
\end{equation}
where $P_{\alpha}(k)$ is a polynomial in $k$ of degree $|\alpha|$. Theorem~\ref{thm:i2}
also gives a formula for determining the polynomials $P$.
Hence
\begin{equation}
\label{eq:formula for c(r,k) b}
c_\pm(r,k) =
\left(
\frac{a_k}{2^k}
\prod_{j=0}^{k-1}\frac{(2j)!}{(k+j)!}
\right) \sum_{|\lambda|=r}
b^\pm_\lambda(k) N_\lambda(k),
\end{equation}
where
\begin{equation}
\label{eq:N}
N_\lambda(k) =
\sum_{\bf u}{^{'}} (-1)^{n({\bf u})} P_{\alpha({\bf u})}(k)
\times (k+i({\bf u})-1)_{u_{i({\bf u})}} (k+i({\bf u})-2)_{u_{i({\bf u})-1}}\dotsm(k)_{u_1}.
\end{equation}
The sum is
over distinct permutations ${\bf u}=(u_k,\ldots,u_1)$ formed from the partition $\lambda$ by
appending $0$'s if necessary. Furthermore, we have restricted to permutations such
that $i({\bf u}) \leq |\lambda|$, and have also excluded $\bf u$ where the
corresponding matrix has any identical rows. Finally, for a given $\bf u$ to
appear in the sum we have assumed that $i({\bf u}) \leq k$, and to apply~\eqref{eq: D alpha},
we also required that $k\geq \max(l(\alpha({\bf u})), \alpha({\bf u})_1)$.
We show that the latter assumption can
be removed. First note that $i({\bf u})=l(\alpha({\bf u}))$, because our
swapping procedure that replaces a given $\bf u$ with $\bf u'$
has $i({\bf u'})=i({\bf u})$. Next, Corollary~\ref{cor:2} tells us that
$P_\alpha(k)$ vanishes for $\alpha_1 \leq k \leq l(\alpha)-1$. Furthermore,
\begin{equation}
\label{eq:falling}
(k+i({\bf u})-1)_{u_{i({\bf u})}} (k+i({\bf u})-2)_{u_{i({\bf u})-1}}\dotsm(k)_{u_1}
\end{equation}
vanishes if $0 \leq k < \alpha_1$ as can be seen by examining the factor associated
to $\alpha_1$: let $u_j$ be the term that, under our swapping procedure, gets swapped
down to $\alpha_1$. The corresponding falling factorial is $(k+j-1)_{u_j}$.
But $\alpha_1 = u_j - (j-1)$, because $u_j$ gets moved down $j-1$
rows to the bottom row. Therefore,
\begin{equation}
(k+j-1)_{u_j} = (k+j-1)_{\alpha_1+j-1} = (k+j-1) (k+j-2) \dots (k-\alpha_1+1)
\end{equation}
which is divisible by
\begin{equation}
\label{eq:divisibility part ii}
k (k-1) \dots (k-\alpha_1+1).
\end{equation}
Thus, we have shown that
\begin{equation}
\label{eq:N term}
P_{\alpha({\bf u})}(k)
\times (k+i({\bf u})-1)_{u_{i({\bf u})}} (k+i({\bf u})-2)_{u_{i({\bf u})-1}}\dotsm(k)_{u_1}
\end{equation}
vanishes for $0 \leq k < \max(l(\alpha({\bf u})),\alpha({\bf u})_1)$.
We can, therefore, ignore, in~\eqref{eq:N},
the condition that $k\geq \max(l(\alpha({\bf u})), \alpha({\bf u})_1)$, since, in including terms with
$k< \max(l(\alpha({\bf u})), \alpha({\bf u})_1)$,
the corresponding summand in~\eqref{eq:N} vanishes.
Hence, $N_\lambda(k)$ is given by a sum over a {\em fixed}, i.e. depending only on $\lambda$
but not on $k$, number of terms $\bf u$. Each term is a polynomial of degree $2|\lambda|$
in $k$, thus $N_\lambda(k)$ is a polynomial in $k$ of degree $\leq 2|\lambda|$.
This completes the proof of Theorem \ref{thm:maintheorem}.
As an example, we compute $N_{(2,1,1)}(k)$ using \eqref{eq:N}. In this case, the
sum \eqref{eq:N} is over the 12 distinct permutations of $(2,1,1,0)$. We can truncate
${\bf u}$ at 4 entries because $|\lambda|=4$, and $i({\bf u}) \leq |\lambda|$.
Of these 12 permutations, only $(2,1,1,0)$, $(0,2,1,1)$, $(1,0,2,1)$,
and $(1,1,0,2)$ give non zero determinants. The sign $(-1)^{n({\bf u})}$ is 1 for
$(2,1,1,0)$, and -1 for the rest. We have
\begin{align}
\label{eq:3}
&N_{(2,1,1)}(k)=P_{(2,1,1)}\, (k)_2 (k+1)_1 (k+2)_1 - P_{(1,1,1,1)}\,
(k+1)_2 (k+2)_1 (k+3)_1 \notag \\
\notag & {}\quad - P_{(1,1,1,1)}\, (k)_1 (k+2)_2 (k+3)_1 - P_{(1,1,1,1)}\, (k)_1
(k+1)_1 (k+3)_2 \\
=& \frac{1}{8} (k-2)(k+3)(k^2+k-4)\times k(k-1) (k+1)(k+2)
-\frac{1}{24}(k-3)(k-2)(k-1)(k+4)\notag \\&\times(
(k+1)k(k+2)(k+3)+k(k+2)(k+1)(k+3)+k(k+1)(k+3)(k+2)
) \notag \\
= &k (k - 1) (k - 2) (k + 3) (k + 2) (k + 1).
\end{align}
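The arithmetic in \eqref{eq:3} can be confirmed in exact rational arithmetic, using the explicit polynomials $P_{(2,1,1)}$ and $P_{(1,1,1,1)}$ appearing in the display; since both sides have degree at most $2|\lambda|=8$, agreement at more than eight values of $k$ establishes the polynomial identity. A sketch (function names ours):

```python
from fractions import Fraction

def falling(x, n):
    """Falling factorial (x)_n = x(x-1)...(x-n+1)."""
    out = Fraction(1)
    for t in range(n):
        out *= x - t
    return out

# The polynomials P appearing in the display above.
def P_211(k):
    return Fraction(1, 8) * (k - 2) * (k + 3) * (k * k + k - 4)

def P_1111(k):
    return Fraction(1, 24) * (k - 3) * (k - 2) * (k - 1) * (k + 4)

def N_211(k):
    return (P_211(k) * falling(k, 2) * falling(k + 1, 1) * falling(k + 2, 1)
            - P_1111(k) * falling(k + 1, 2) * falling(k + 2, 1) * falling(k + 3, 1)
            - P_1111(k) * falling(k, 1) * falling(k + 2, 2) * falling(k + 3, 1)
            - P_1111(k) * falling(k, 1) * falling(k + 1, 1) * falling(k + 3, 2))

# Agreement at 12 points proves equality of the two degree-<=8 polynomials.
for k in range(12):
    assert N_211(k) == k * (k - 1) * (k - 2) * (k + 3) * (k + 2) * (k + 1)
```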
Having shown that $N_\lambda(k)$ is a polynomial in $k$ of degree $\leq
2|\lambda|$, we can determine it either using formula~\eqref{eq:N} and
the formulas in Theorem~\ref{thm:i2} for the polynomials $P$,
or else by evaluating~\eqref{eq:76} for $2|\lambda|+1$ values of $k$
and applying polynomial interpolation. More specifically,
we can work back from~\eqref{eq:76} to~\eqref{eq:194}, and divide
by $\frac{a_k}{2^k} \prod_{j=0}^{k-1}\frac{(2j)!}{(k+j)!}$ to get the formula
\begin{equation}
\label{eq:N direct}
N_\lambda(k) =
\left(\frac{-1}{2}\right)^{k(k-1)/2} 2^{|\lambda|}
\prod_{j=0}^{k-1}\frac{(k+j)!}{(2j)!}
\sum_{\bf u}{^{'}} \left|
\begin{array}{cccccc}
\frac{1}{(2k-1)!} & \frac{1}{(2k-3)!} &\hdotsfor{3} & \frac{1}{1!} \\
\frac{1}{(2k-2)!} & \frac{1}{(2k-4)!} & \hdotsfor{3} & \frac{1}{0!}\\
\vdots & \vdots & & & & \vdots \\
\frac{1}{(k+i({\bf u}))!} & \frac{1}{(k+i({\bf u})-2)!} && & &\vdots\\
& && & & \\
\hdashline\\
\frac{1}{(k+i({\bf u})-1-u_{i({\bf u})})!} & \frac{1}{(k+i({\bf u})-3-u_{i({\bf u})})!} & & & &\vdots \\
\vdots & \vdots & & && \vdots \\
\frac{1}{(k-u_1)!} & \frac{1}{(k-u_1-2)!} &\hdotsfor{3}&0
\end{array}
\right|.
\end{equation}
This formula can be used for a specific choice of $\lambda$ and several values of $k$ to create a table
of values of $N_\lambda(k)$ to which polynomial interpolation can be applied. Table~\ref{tab:N_lambda}
lists the polynomials $N_\lambda(k)$ for all $|\lambda|\leq 7$.
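As a spot check of \eqref{eq:N direct} against the table: for $\lambda=(1)$ the only admissible permutation is ${\bf u}=(0,\ldots,0,1)$, i.e.\ $u_1=1$, and the formula should reproduce $N_{(1)}(k)=r_{(1)}(k)\,(k+1)=k(k+1)$. A sketch in exact arithmetic (function names ours):

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def inv_factorial(n):
    """1/n!, with 1/n! = 0 for negative n."""
    return Fraction(1, factorial(n)) if n >= 0 else Fraction(0)

def det(m):
    """Determinant via the Leibniz expansion, in exact arithmetic."""
    k = len(m)
    total = Fraction(0)
    for perm in permutations(range(k)):
        sign = 1
        for a in range(k):
            for b in range(a + 1, k):
                if perm[a] > perm[b]:
                    sign = -sign
        term = Fraction(sign)
        for a in range(k):
            term *= m[a][perm[a]]
        total += term
    return total

def N_1(k):
    """N_{(1)}(k) from the determinant formula: only the bottom row
    (i = k) is shifted, by u_1 = 1."""
    shift = [0] * k
    shift[k - 1] = 1
    m = [[inv_factorial(2 * k - (i + 1) - 2 * j - shift[i]) for j in range(k)]
         for i in range(k)]
    pref = Fraction(-1, 2) ** (k * (k - 1) // 2) * 2  # 2^{|lambda|}, |lambda| = 1
    for j in range(k):
        pref *= Fraction(factorial(k + j), factorial(2 * j))
    return pref * det(m)

# Table entry: N_{(1)}(k) = r_{(1)}(k) * (k+1) = k(k+1).
for k in range(1, 5):
    assert N_1(k) == k * (k + 1)
```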
\begin{table}[h!]
\centerline{
\begin{tabular}{|c|c|c|}
\hline
$\lambda$ & $N_\lambda(k)/r_\lambda(k)$ & $r_\lambda(k)$ \\
\hline
$[1]$ & $k+1$ & $(k)_1$ \\ \hline
$[1, 1]$ & $(k+2)(k+1)$ & $(k)_2/2$ \\ \hline
$[2]$ & $0$ & $(k)_1$ \\ \hline
$[1, 1, 1]$ & $(k+3)(k+2)(k+1)$ & $(k)_3/6$ \\ \hline
$[2, 1]$ & $(k+2)(k+1)$ & $(k)_2$ \\ \hline
$[3]$ & $-(k-1)(k+2)(k+1)$ & $(k)_1$ \\ \hline
$[1, 1, 1, 1]$ & $(k+4)(k+3)(k+2)(k+1)$ & $(k)_4/24$ \\ \hline
$[2, 1, 1]$ & $2(k+3)(k+2)(k+1)$ & $(k)_3/2$ \\ \hline
$[2, 2]$ & $0$ & $(k)_2/2$ \\ \hline
$[3, 1]$ & $-(k-2)(k+3)(k+2)(k+1)$ & $(k)_2$ \\ \hline
$[4]$ & $0$ & $(k)_1$ \\ \hline
$[1, 1, 1, 1, 1]$ & $(k+5)(k+4)(k+3)(k+2)(k+1)$ & $(k)_5/120$ \\ \hline
$[2, 1, 1, 1]$ & $3(k+4)(k+3)(k+2)(k+1)$ & $(k)_4/6$ \\ \hline
$[2, 2, 1]$ & $4(k+3)(k+2)(k+1)$ & $(k)_3/2$ \\ \hline
$[3, 1, 1]$ & $-(k-3)(k+4)(k+3)(k+2)(k+1)$ & $(k)_3/2$ \\ \hline
$[3, 2]$ & $-2(k-2)(k+3)(k+2)(k+1)$ & $(k)_2$ \\ \hline
$[4, 1]$ & $-2(k-2)(k+3)(k+2)(k+1)$ & $(k)_2$ \\ \hline
$[5]$ & $2(k-1)(k-2)(k+3)(k+2)(k+1)$ & $(k)_1$ \\ \hline
$[1, 1, 1, 1, 1, 1]$ & $(k+6)(k+5)(k+4)(k+3)(k+2)(k+1)$ & $(k)_6/720$ \\ \hline
$[2, 1, 1, 1, 1]$ & $4(k+5)(k+4)(k+3)(k+2)(k+1)$ & $(k)_5/24$ \\ \hline
$[2, 2, 1, 1]$ & $10(k+4)(k+3)(k+2)(k+1)$ & $(k)_4/4$ \\ \hline
$[2, 2, 2]$ & $0$ & $(k)_3/6$ \\ \hline
$[3, 1, 1, 1]$ & $-(k-4)(k+5)(k+4)(k+3)(k+2)(k+1)$ & $(k)_4/6$ \\ \hline
$[3, 2, 1]$ & $-(k+3)(k+2)(k+1)(3k^2+3k-40)$ & $(k)_3$ \\ \hline
$[3, 3]$ & $(k-2)(k-4)(k+5)(k+3)(k+2)(k+1)$ & $(k)_2/2$ \\ \hline
$[4, 1, 1]$ & $-4(k+3)(k+2)(k+1)(k^2+k-10)$ & $(k)_3/2$ \\ \hline
$[4, 2]$ & $0$ & $(k)_2$ \\ \hline
$[5, 1]$ & $2(k-2)(k+3)(k+2)(k+1)(k^2+k-10)$ & $(k)_2$ \\ \hline
$[6]$ & $0$ & $(k)_1$ \\ \hline
$[1, 1, 1, 1, 1, 1, 1]$ & $(k+7)(k+6)(k+5)(k+4)(k+3)(k+2)(k+1)$ & $(k)_7/5040$ \\ \hline
$[2, 1, 1, 1, 1, 1]$ & $5(k+6)(k+5)(k+4)(k+3)(k+2)(k+1)$ & $(k)_6/120$ \\ \hline
$[2, 2, 1, 1, 1]$ & $18(k+5)(k+4)(k+3)(k+2)(k+1)$ & $(k)_5/12$ \\ \hline
$[2, 2, 2, 1]$ & $30(k+4)(k+3)(k+2)(k+1)$ & $(k)_4/6$ \\ \hline
$[3, 1, 1, 1, 1]$ & $-(k-5)(k+6)(k+5)(k+4)(k+3)(k+2)(k+1)$ & $(k)_5/24$ \\ \hline
$[3, 2, 1, 1]$ & $-2(k+4)(k+3)(k+2)(k+1)(2k^2+2k-45)$ & $(k)_4/2$ \\ \hline
$[3, 2, 2]$ & $-10(k-3)(k+4)(k+3)(k+2)(k+1)$ & $(k)_3/2$ \\ \hline
$[3, 3, 1]$ & $(k-3)(k-5)(k+6)(k+4)(k+3)(k+2)(k+1)$ & $(k)_3/2$ \\ \hline
$[4, 1, 1, 1]$ & $-6(k+4)(k+3)(k+2)(k+1)(k^2+k-15)$ & $(k)_4/6$ \\ \hline
$[4, 2, 1]$ & $-10(k-3)(k+4)(k+3)(k+2)(k+1)$ & $(k)_3$ \\ \hline
$[4, 3]$ & $5(k-2)(k-3)(k+4)(k+3)(k+2)(k+1)$ & $(k)_2$ \\ \hline
$[5, 1, 1]$ & $2(k-3)(k+4)(k+3)(k+2)(k+1)(k^2+k-15)$ & $(k)_3/2$ \\ \hline
$[5, 2]$ & $5(k-2)(k-3)(k+4)(k+3)(k+2)(k+1)$ & $(k)_2$ \\ \hline
$[6, 1]$ & $5(k-2)(k-3)(k+4)(k+3)(k+2)(k+1)$ & $(k)_2$ \\ \hline
$[7]$ & $-5(k-1)(k-2)(k-3)(k+4)(k+3)(k+2)(k+1)$ & $(k)_1$ \\ \hline
\end{tabular}
}
\caption[$N_\lambda(k)$]
{We display the polynomials $N_\lambda(k)$, for all $|\lambda| \leq 7$.
Because each monomial of $m_\lambda(z)$ contributes the same
to~\eqref{eq:sumofintegrals}, $N_\lambda(k)$ has, as a factor, the polynomial:
$r_\lambda(k):=\binom{k}{l(\lambda)}{l(\lambda)\choose
m_1(\lambda),m_2(\lambda),\dotsc} = (k)_{l(\lambda)}/(m_1(\lambda)!
m_2(\lambda)!\ldots)$, where $(k)_m = k(k-1)\ldots(k-m+1)$. Therefore, rather
than display $N_\lambda(k)$, here we list $N_\lambda(k)/r_\lambda(k)$.
}\label{tab:N_lambda}
\end{table}
\section{The coefficients $b^\pm_\lambda(k)$}
\label{sec:b}
In order to compute the multivariate Taylor expansion of~\eqref{eq:multivariateM},
we consider the series expansion of its logarithm. We first examine the
arithmetic product, and let
\begin{equation}
\label{eq:log A}
\log(A_k(z_1, \ldots, z_k)/a_k)
=: \sum_{r=1}^\infty \sum_{|\lambda| = r} B_\lambda(k) m_\lambda(z).
\end{equation}
We start the sum at $r=1$ because the division by $a_k$ makes the constant term 0.
Now, the left hand side is symmetric in the $z_i$'s, and we can find $B_\lambda(k)$ by
applying
\begin{equation}
\label{eq:diff}
\frac{1}{\lambda_1! \lambda_2! \ldots \lambda_l!}
\frac{\partial^{\lambda_1}}{\partial z_1^{\lambda_1}}
\frac{\partial^{\lambda_2}}{\partial z_2^{\lambda_2}}
\ldots
\frac{\partial^{\lambda_l}}{\partial z_l^{\lambda_l}},
\end{equation}
where $l=l(\lambda)$,
and setting $z_1 = \ldots = z_k = 0$. Since the partial derivatives do
not involve $z_{l+1},\ldots,z_k$ we can set these to 0 before the differentiation.
Thus, by~\eqref{eq:Ak}, $B_\lambda(k)$ is equal to~\eqref{eq:diff} applied to
\begin{multline}
\label{eq:log A l}
-\log(a_k)
+ \sum_p
\sum_{1\leq i\leq j \leq l} \log \left(1-\frac{1}{p^{1+z_i+z_j}} \right)
+
\sum_{1\leq i \leq l} (k-l) \log \left(1-\frac{1}{p^{1+z_i}} \right) \\
+
\log
\left(\frac 1 2
\left(
\prod_{j=1}^{l}
\left(1-\frac 1 {p^{\frac 1 2 +z_j}} \right)^{-1}
\left(1-\frac 1 {p^{\frac 1 2 }} \right)^{l-k}
+
\prod_{j=1}^{l}
\left(1+\frac 1 {p^{\frac 1 2 +z_j}} \right)^{-1}
\left(1+\frac 1 {p^{\frac 1 2 }} \right)^{l-k}
\right)
+\frac 1 p
\right)
- \log \left(1+\frac 1 p \right),
\end{multline}
evaluated at $z_1=\ldots=z_l=0$. Likewise, we can find the coefficients of
the expansions
\begin{equation}
\label{eq:log X}
-\frac{1}{2} \sum_{j=1}^k \log X(1/2+z_j,a)
=: \sum_{r=1}^\infty \sum_{|\lambda| = r} f^{\pm}_\lambda(k) m_\lambda(z),
\end{equation}
where $a=1$ for $f^-$ and $0$ for $f^+$, and of
\begin{equation}
\label{eq:log zeta}
\sum_{1\leq i\leq j \leq k} \log \left(\zeta(1+z_i+z_j)(z_i+z_j)\right)
=: \sum_{r=1}^\infty \sum_{|\lambda| = r} g_\lambda(k) m_\lambda(z),
\end{equation}
by applying~\eqref{eq:diff}, at $z_1=\ldots=z_l=0$, to
\begin{equation}
\label{eq:log X l}
-\frac{1}{2} \sum_{j=1}^l \log X(1/2+z_j,a),
\end{equation}
and to
\begin{equation}
\label{eq:log zeta l}
\sum_{1\leq i\leq j \leq l} \log \left(\zeta(1+z_i+z_j)(z_i+z_j)\right)
+\sum_{1\leq i \leq l} (k-l)\log \left(\zeta(1+z_i)z_i\right),
\end{equation}
respectively.
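Because extracting a coefficient $B_\lambda(k)$ only requires the restriction to $l(\lambda)$ variables, the procedure can be prototyped on a toy symmetric function. The following Python sketch (a hypothetical example; the actual computation uses the Euler products above) reads off an $m_\lambda$-coefficient by first setting the trailing variables to $0$ and then taking the $z^\lambda$-coefficient, which for polynomials is exactly the normalized derivative~\eqref{eq:diff} at $0$:

```python
# Toy symmetric function in k = 3 variables:
#   F = (1+z1)(1+z2)(1+z3) = 1 + m_(1) + m_(1,1) + m_(1,1,1),
# so the m_(1,1)-coefficient is 1.  Polynomials are sparse dicts
# {exponent tuple: coefficient}; the z^lambda coefficient equals
# (1/lambda!) * d^lambda/dz^lambda evaluated at 0.

def pmul(A, B):
    # multiply sparse polynomials stored as {exponent-tuple: coefficient}
    C = {}
    for ea, ca in A.items():
        for eb, cb in B.items():
            e = tuple(x + y for x, y in zip(ea, eb))
            C[e] = C.get(e, 0) + ca * cb
    return C

k = 3
F = {(0,) * k: 1}
for i in range(k):
    F = pmul(F, {(0,) * k: 1,
                 tuple(int(j == i) for j in range(k)): 1})  # times (1 + z_{i+1})

def restrict(F, l):
    # set z_{l+1}, ..., z_k to 0 before differentiating, as in the text
    return {e[:l]: c for e, c in F.items() if not any(e[l:])}

lam = (1, 1)                            # a partition with l(lambda) = 2
B = restrict(F, len(lam)).get(lam, 0)   # = B_(1,1) for this toy F
```
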
Next, by composing the three series expansions~\eqref{eq:log A}, \eqref{eq:log
X}, \eqref{eq:log zeta} with the series for the exponential function, we can
derive formulas for the coefficients $b^{\pm}_\lambda(k)$. Example formulas,
for $b_{(1)}^\pm(k)$ and $b_{(1,1)}^\pm(k)$, are displayed in the introduction.
To obtain numerical approximations to $b^{\pm}_\lambda(k)$ for specific choices
of $k$ and $\lambda$ one needs to compute infinite sums over primes where the
summand is a rational function of $p^{1/2}$ times $\log(p)^{|\lambda|}$. This
can be achieved to high precision using Möbius inversion as described, in the
context of the moment polynomials of the Riemann zeta function, in Section 4.1
of~\cite{CFKRS2}. In this fashion, and using~\eqref{eq:147}, we computed the
values of $c_{\pm}(r,k)$, for $r\leq 10$ and $k\leq 9$, given in
Tables~\ref{tab:cminusrk} and \ref{tab:cplusrk}.
\section{Determinant of a matrix of binomial coefficients}
\label{sec:determinant}
\begin{proof}[Proof of Theorem \ref{thm:i2}]
We shall first prove \eqref{eq:i6}, and use it to prove \eqref{eq:i4}. \\
{\em Proof of \eqref{eq:i6}}:
\\
For a $k$-tuple $(\alpha_1,\dotsc,\alpha_k)$ and $x = (x_1, \dots ,x_k)$, let
$x^\alpha$ denote the monomial $x_1^{\alpha_1}\dots x_k^{\alpha_k}$. For a
partition $\lambda$ of length less than or equal to $k$, $x^\lambda$ can be
defined by appending zeros after the positive elements of $\lambda$ to make it
a $k$-tuple.
Reversing the rows of the matrix in \eqref{eq:i1}, we see that
\begin{equation}
\label{eq:i5}
D_\lambda(k)=(-1)^{\binom k 2}\det \left( \binom{k+i-1-\lambda_{i}}{2k-2j}\right)_{1\leq i,j\leq k}.
\end{equation}
The $(i,j)$th entry of the matrix in \eqref{eq:i5} can be written using the
coefficient operator defined in \eqref{eq:i55}. Let $x=(x_1,\dotsc,x_k)$. Then
\begin{equation}
\label{eq:i2}
D_\lambda(k) =(-1)^{\binom k 2} \det \left(
[x_j^{2k-2j}](1+x_j)^{k+i-1-\lambda_{i}} \right)_{1\leq i,j \leq k}.
\end{equation}
Noticing that column $j$ only involves $x_j$, we can move all the $[x_j^{2k-2j}]$
in front of the determinant to get
\begin{align}
\notag & (-1)^{\binom k 2} [x^{2\delta_k}] \det \left(
(1+x_j)^{k+i-1-\lambda_{i}}\right)_{1\leq i,j \leq k} \\ &=(-1)^{\binom k
2} [x^{2\delta_k}] \det \left( (1+x_j)^{-(k-i+\lambda_i)} \right)_{1\leq i,j \leq k}
\prod_{l=1}^k (1+x_l)^{2k-1}.
\end{align}
The determinant in \eqref{eq:i2} can be written in terms of
$a_{\delta_k}$ and $s_\lambda$ defined in \eqref{eq:i51} and \eqref{eq:i52},
\begin{multline}
\label{eq:i9}
D_\lambda(k)=(-1)^{\binom k 2}
[x^{2\delta_k}] a_{\lambda+\delta_k} (\tfrac
1{1+x_1}, \dotsc, \tfrac 1 {1+x_k} ) \prod_{l=1}^k (1+x_l)^{2k-1}\\
= (-1)^{\binom k 2}[x^{2\delta_k}] a_{\delta_k} (\tfrac
1{1+x_1}, \dotsc, \tfrac 1 {1+x_k} )s_{\lambda} (\tfrac
1{1+x_1}, \dotsc, \tfrac 1 {1+x_k} ) \prod_{l=1}^k (1+x_l)^{2k-1}.
\end{multline}
But \eqref{eq:i64} gives $a_{\delta_k}(x_1,\dotsc,x_k)$ explicitly. Hence
\begin{align}
a_{\delta_k} (\tfrac
1{1+x_1}, \dotsc, \tfrac 1 {1+x_k} ) &= \prod_{1\leq i < j \leq k}
\left( \frac{1}{1+x_i} -\frac{1}{1+x_j}\right)
=\prod_{1\leq i < j \leq k } \frac{(1+x_j) - (1+x_i)}{(1+x_j)(1+x_i)} \notag \\
\label{eq:i10}
& =\frac{\prod_{1\leq i < j \leq k } (x_j-x_i)}{\prod_{j=1}^{k}
(1+x_j)^{k-1}}
= \frac{(-1)^{\binom k 2} a_{\delta_k}(x)}{
\prod_{j=1}^{k}(1+x_j)^{k-1} }.
\end{align}
Using \eqref{eq:i10} in \eqref{eq:i9}, we have
\begin{equation}
\label{eq:i13}
D_\lambda(k) = [x^{2\delta_k}] a_{\delta_k}(x_1,\dotsc,x_k)
s_\lambda(\tfrac{1}{1+x_1},\dotsc,\tfrac{1}{1+x_k})
\prod_{l=1}^k (1+x_l)^{k}.
\end{equation}
We shall now express $s_\lambda$
as a coefficient in a polynomial which is easier to work with. The
dual Jacobi-Trudi identity,
\eqref{eq:i54}, gives
\begin{equation}
\label{eq:i15}
s_\lambda = \det \left(e_{\mu_i-i +j}\right)_{1\leq i,j\leq
n}
\end{equation}
where $(\mu_1,\ldots,\mu_n)$ is the conjugate
partition of $\lambda$, and $n=l(\mu)$.
Expanding the determinant, we get
\begin{equation}
\label{eq:i16}
s_\lambda = \sum_{\sigma\in S_n}{\text{sgn}}(\sigma) \prod_{i=1}^n
e_{\mu_i - i +\sigma(i)}(x).
\end{equation}
From \eqref{eq:i44}, we have
$e_r=[t^r]E(t)$. We rewrite \eqref{eq:i16} using
this notation. Let $t=(t_1,\dotsc,t_n)$. Then
\begin{align}
\nonumber
s_\lambda(x) &= \sum_{\sigma\in S_n} {\text{sgn}}(\sigma) [t_1^{\mu_1-1+\sigma(1)}\dots
t_n^{\mu_n -n +\sigma(n)}]\prod_{i=1}^nE(t_i)\\
\nonumber &= [t_1^{\mu_1}\dots
t_n^{\mu_n}]\prod_{i=1}^nE(t_i)
\sum_{\sigma\in S_n} \left({\text{sgn}}(\sigma)
\prod_{i=1}^n t_i^{i-\sigma(i)}\right).
\end{align}
Next, pull out $\prod t_i^i$ from the sum, and multiply and divide by $\prod t_i^n$, to get
\begin{align}
&[t_1^{\mu_1}\dots
t_n^{\mu_n}]\prod_{i=1}^n E(t_i)\prod_{1\leq i < j \leq n}(t_i
-t_j) \prod_{l=1}^n t_l^{l-n} \notag \\
\nonumber &= [t^{\mu+\delta_n}] \prod_{i=1}^n E(t_i) \prod_{1\leq i < j \leq n }(t_i
-t_j)
\\
\label{eq:i20}
& = [t^{\mu+\delta_n}]a_{\delta_n}(t) \prod_{i=1}^n E(t_i).
\end{align}
Here we have also used the Vandermonde determinant (up to $(-1)^{n\choose2}$):
\begin{equation}
\sum_{\sigma\in S_n} {\text{sgn}}(\sigma)
\prod_{i=1}^n t_i^{n-\sigma(i)} = \prod_{1\leq i < j \leq n }(t_i -t_j).
\end{equation}
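As a quick numerical sanity check, the identity above (a polynomial identity in the $t_i$) can be verified at random integer points; the sketch below assumes nothing beyond the displayed formula:

```python
import random
from itertools import permutations
from math import prod

def sgn(p):
    # sign of a permutation via inversion count
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def lhs(t):
    # sum over sigma of sgn(sigma) * prod_i t_i^(n - sigma(i)), with sigma(i) = p[i] + 1
    n = len(t)
    return sum(sgn(p) * prod(t[i] ** (n - 1 - p[i]) for i in range(n))
               for p in permutations(range(n)))

def rhs(t):
    n = len(t)
    return prod(t[i] - t[j] for i in range(n) for j in range(i + 1, n))

random.seed(1)
for _ in range(5):
    t = [random.randint(-9, 9) for _ in range(4)]
    assert lhs(t) == rhs(t)
```
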
We have expressed $s_\lambda(x)$ as a coefficient in a
polynomial. Substituting \eqref{eq:i20} for $s_\lambda$ in
\eqref{eq:i13}, and using
the product form of $E(t)$, \eqref{eq:i44}, we have
\begin{align}
\label{eq:i21}
\nonumber D_\lambda(k)& = [t^{\mu+\delta_n} x^{2\delta_k}] a_{\delta_k}(x)
a_{\delta_n}(t)
\left[\prod_{i=1}^n \left( \prod_{l=1}^k (1+
\tfrac{t_i}{1+x_l})\right)
\right]\prod_{l=1}^k (1+x_l)^k\\
\nonumber &=[t^{\mu+\delta_n} x^{2\delta_k}] a_{\delta_k}(x) a_{\delta_n}(t)
\prod_{l=1}^k\left( (1+x_l)^{k-n} \prod_{i=1}^n (1+x_l+t_i)\right)
\\
& =[t^{\mu+\delta_n} x^{2\delta_k}] a_{\delta_k}(x) a_{\delta_n}(t)
\prod_{l=1}^k\left( (1+x_l)^{k-n} \prod_{i=1}^n
(1+\tfrac{x_l}{1+t_i})\right) \prod_{i=1}^n(1+t_i)^k.
\end{align}
Applying the dual Cauchy identity \eqref{eq:i56} to the double product on the rhs above
gives
\begin{equation}
\label{eq:i12}
\prod_{l=1}^k\left( (1+x_l)^{k-n} \prod_{i=1}^n
(1+\tfrac{x_l}{1+t_i})\right) = \sum_\lambda
s_\lambda(x_1,\dotsc,x_k)s_{\lambda'}(1,\dotsc,1,\tfrac{1}
{1+t_1},\dotsc,\tfrac{1}{1+t_n}).
\end{equation}
The number of $1$'s in the second factor on the right hand side of
\eqref{eq:i12} is $k-n$.
Recall from \eqref{eq:i52} that $a_{\delta}s_\lambda=a_{\delta+\lambda}$. Hence \eqref{eq:i21} equals
\begin{equation}
\label{eq:i14}
[t^{\mu+\delta_n}][x^{2\delta_k}] a_{\delta_n}(t) \prod_{i=1}^n
(1+t_i)^k \sum_\lambda a_{\lambda\! +\! \delta_k}(x_1,\dotsc,x_k)
s_{\lambda'}(1,\dotsc,1, \tfrac {1}{1+t_1},\dotsc, \tfrac
{1}{1+t_n}).
\end{equation}
The monomial $x^{2\delta_k}$ occurs in the sum in \eqref{eq:i14} only
when $\lambda=\delta_k$. The coefficient of
$x^{2\delta_k}$ in $a_{2\delta_k}(x_1,\dotsc,x_k)$ is 1. Simplifying
\eqref{eq:i14}, we have
\begin{equation}
\label{eq:i24}
D_\lambda(k)=
[t^{\mu+\delta_n}] a_{\delta_n}(t) \prod_{i=1}^n (1+t_i)^k
s_{\delta_k}(1,\dotsc,1, \tfrac {1}{1+t_1},\dotsc, \tfrac
{1}{1+t_n}).
\end{equation}
Note that we have used $\delta_k' =\delta_k$. Applying the formula for
$s_{\delta_k}$ in~\eqref{eq:i68}, we have
\begin{equation}
\label{eq:s explicit}
s_{\delta_k}(1,\dotsc,1, \tfrac {1}{1+t_1},\dotsc, \tfrac
{1}{1+t_n})=
2^{\binom{k-n}2}
\prod_{i=1}^n\left( 1+\frac 1{1+t_i} \right)^{k-n}
\prod_{1\leq i < j \leq n}\left( \frac
1 {1+t_i}+ \frac{1}{1+t_j}\right).
\end{equation}
The $2^{\binom{k-n}2}$ comes from pairing, in applying~\eqref{eq:i68}, the $k-n$ 1's. The middle factor arises
from matching each $1/(1+t_i)$ with $k-n$ 1's, and the last factor from matching all pairs of
distinct $1/(1+t_i), 1/(1+t_j)$. Substituting~\eqref{eq:s explicit}
into~\eqref{eq:i24}, and collecting the powers of $(1+t_i)$ gives
\begin{equation}
D_\lambda(k) =[t^{\mu+\delta_n}] a_{\delta_n}(t)\ 2^{\binom{k-n}{2}}
\left(\prod_{i=1}^n(1+t_i)(2+t_i)^{k-n} \right)
\prod_{1\leq i < j \leq n}(2+t_i+t_j).
\end{equation}
Substituting $z_i=t_i/2$, and collecting powers of $2$ (note that
${k-n \choose 2} +(k-n)n + {n \choose 2}= {k \choose 2}$), we get
\begin{multline}
\label{eq:i29}
D_\lambda(k)= [z^{\mu+\delta_n}] a_{\delta_n}(z) 2^{\binom k 2 + \binom n 2 -
|\mu+\delta_n|} \prod_{i=1}^n(1+2z_i)(1+z_i)^{k-n} \\
\times \prod_{1\leq i < j \leq n} (1+z_i+z_j).
\end{multline}
Here we have also used $a_{\delta_n}(t) = a_{\delta_n}(z) 2^{n \choose 2}$.
Since $|\delta_n|=\binom n 2$, and $|\mu|=|\lambda|$, this proves \eqref{eq:i6}.\\
{\em Proof of \eqref{eq:i4}}:\\
We now use~\eqref{eq:i6} to prove \eqref{eq:i4}. As above, let $z=(z_1,\dotsc,z_n)$.
Since Schur symmetric functions form a $\mathbb Z$-basis for the ring
of symmetric functions, the coefficient of $s_\gamma$ of a symmetric function
$F$ is well defined. We denote this coefficient by $[s_\gamma]F$.
For a symmetric polynomial $F(z)$ in $n$-variables, and a partition $\gamma$
with length at most $n$, we have
\begin{equation}
\label{eq:i7}
[s_\gamma(z)] F(z) = [z^{\gamma+\delta_n}] a_{\delta_n}(z) F(z).
\end{equation}
This can be seen by writing $F(z)$ in terms of our Schur basis
\begin{equation}
F(z) = \sum_\gamma v_\gamma s_\gamma(z).
\end{equation}
We wish to find the coefficient $v_\gamma$.
Multiplying by $a_{\delta_n}(z)$ and using~\eqref{eq:i52} gives
\begin{equation}
a_{\delta_n}(z) F(z) = \sum_\gamma v_\gamma a_{ \gamma + \delta_n }(z).
\end{equation}
Now, the monomials in $a_{ \gamma + \delta_n }(z)$
are all distinct, and distinct from the monomials in
$a_{\gamma' + \delta_n }$ for any different partition $\gamma'$ of length at most
$n$. Furthermore, $z^{\gamma+\delta_n}$ appears in $a_{ \gamma + \delta_n }(z)$
with coefficient $1$, coming from the main diagonal of~\eqref{eq:i51}. Thus,
$v_\gamma$ is equal to the coefficient of $z^{\gamma+\delta_n}$ in
$a_{\delta_n}(z) F(z)$.
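Identity~\eqref{eq:i7} can be illustrated with a toy example (hypothetical, for $n=2$): since $s_{(2)}=m_{(2)}+m_{(1,1)}$ and $s_{(1,1)}=m_{(1,1)}$, the polynomial $F=m_{(2)}+5\,m_{(1,1)}$ equals $s_{(2)}+4\,s_{(1,1)}$, and the coefficient $4$ is recovered as the $z_1^2z_2$-coefficient of $a_{\delta_2}F$:

```python
def pmul(A, B):
    # sparse bivariate polynomials stored as {(e1, e2): coefficient}
    C = {}
    for ea, ca in A.items():
        for eb, cb in B.items():
            e = (ea[0] + eb[0], ea[1] + eb[1])
            C[e] = C.get(e, 0) + ca * cb
    return C

# F = m_(2) + 5*m_(1,1) = z1^2 + z2^2 + 5*z1*z2 = s_(2) + 4*s_(1,1)
F = {(2, 0): 1, (0, 2): 1, (1, 1): 5}
a_delta = {(1, 0): 1, (0, 1): -1}   # a_{delta_2} = z1 - z2
G = pmul(a_delta, F)
coeff = G.get((2, 1), 0)            # gamma + delta_2 = (1,1) + (1,0) = (2,1)
```
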
Therefore we can rewrite \eqref{eq:i6} as
\begin{equation}
\label{eq:i59}
D_\lambda(k)= 2^{\binom k 2 - |\lambda|}
[s_{\mu}(z)] \prod_{1\leq i< j \leq n}
\left(\frac{1+z_i+z_j}{(1+z_i)(1+z_j)}\right)
\prod_{i=1}^n (1+2z_i)(1+z_i)^{k-1}.
\end{equation}
We shall work with the ring of symmetric functions $\Lambda$ instead of the ring
of symmetric polynomials in $n$ variables $\Lambda_n$. The right hand
side of \eqref{eq:i59} equals
\begin{equation}
\label{eq:i60}
2^{\binom k 2-|\lambda|} [s_\mu(z)] \prod_{ 1\leq i < j}
\left(\frac{1+z_i+z_j}{(1+z_i)(1+z_j)}\right) \prod_{i\geq 1} (1+2z_i)(1+z_i)^{k-1}.
\end{equation}
Note that in \eqref{eq:i60}, we are looking at elements in the ring of
symmetric functions, $\Lambda$, i.e. as a product involving a countable number of
variables $z_1,z_2,\ldots$,
whereas in \eqref{eq:i59}, we were
considering the elements in the ring of symmetric polynomials in $n$
variables, $\Lambda_n$.
Applying $\omega$, and using \eqref{eq:i63} we obtain
\begin{equation}
D_\lambda(k) =2^{\binom k 2 - |\lambda|} [s_\lambda(y)]\ \omega\! \left(
\prod_{1\leq i<j} \! \left( 1 -\frac{y_iy_j}{(1+y_i)(1+y_j)}\right)\!
\prod_{i\geq 1 }(1+2y_i)(1+y_i)^{k-1}
\right).
\end{equation}
We use the fact that $\exp (\log(1+u))=1+u$ to write the argument of
$\omega$ as a formal power series:
\begin{multline}
\notag
D_\lambda(k) =2^{\binom k 2 -|\lambda|} [s_\lambda(y)]\ \omega\! \left(
\exp\sum_{a\geq 1}\frac 1 a\left( -\sum_{1\leq i<j} y_i^ay_j^a
(1+y_i)^{-a}(1+y_j)^{-a} \right.\right.\\ \left. \left.
- (-2)^a \sum_{i\geq 1}y_i^a -(k-1)
(-1)^a\sum_{i\geq 1} y_i^a\right) \right)
\end{multline}
\begin{multline}
\label{eq:i67}
\phantom{D_\lambda(k)} =2^{\binom k 2 -|\lambda|} [s_\lambda(y)]\
\omega\! \left(
\exp\sum_{a\geq 1}\frac 1 a\left( - \sum_{b,c\geq 0}
\binom{-a}{b}\binom{-a}{c}
\sum_{1\leq i<j} y_i^{a+b}y_j^{a+c} \right.\right.\\ \left.\left.
- (-2)^a \sum_{i\geq 1}y_i^a -(k-1)
(-1)^a\sum_{i\geq 1} y_i^a\right)\right).
\end{multline}
We can rewrite the argument of $\omega$ in \eqref{eq:i67} using power
symmetric functions;
\begin{multline}
D_\lambda(k) =2^{\binom k 2 -|\lambda|} [s_\lambda(y)]\ \omega\! \left(
\exp\sum_{a\geq 1}\frac 1 a\left( -\sum_{b,c\geq 0}
\right.\right.\\ \left.\left. \phantom{\sum_A}
\binom{-a}{b}\binom{-a}{c}\tfrac 1 2 (p_{a+b}p_{a+c} -p_{2a+b+c})
- (-2)^a p_a -(k-1)
(-1)^a p_a\right)\right).
\end{multline}
In \eqref{eq:i63} we have seen that $\omega(p_a) = (-1)^{a-1}p_a$. This gives
\begin{multline}
D_\lambda(k) =2^{\binom k 2 -|\lambda|} [s_\lambda(y)]
\exp\sum_{a\geq 1}\frac 1 a\left( -\sum_{b,c\geq 0}
\binom{-a}{b}\binom{-a}{c}\right. \\ \left. \phantom{\sum_A}
(-1)^{2a+b+c}\tfrac 1 2 (p_{a+b}p_{a+c} +p_{2a+b+c})
+2^a p_a +(k-1)
p_a\right)
\end{multline}
\begin{equation}
\label{eq:i72}
\phantom{D_\lambda(k) a } =2^{\binom k 2 -|\lambda|}
[s_\lambda(y)]\prod_{1\leq i\leq j} \left(
1-\frac{y_iy_j}{(1-y_i)(1-y_j)} \right) \prod_{i\geq 1}
(1-2y_i)^{-1} (1-y_i)^{-k+1}.
\end{equation}
If we isolate factors corresponding to $i=j$ in the first product, we
are able to cancel some factors in the second product. Simplifying, we
get
\begin{equation}
\label{eq:i75}
D_\lambda(k) = 2^{\binom k 2 -|\lambda|}
[s_\lambda(y)]\prod_{1\leq i< j} \left(
1-\frac{y_iy_j}{(1-y_i)(1-y_j)} \right) \prod_{i\geq 1} (1-y_i)^{-k-1}.
\end{equation}
To calculate the coefficient of $s_\lambda$ in \eqref{eq:i75}, we only
have to look at the projection in any $\Lambda_m$ such
that $m$ is greater than or equal to $l(\lambda)$. We can choose it to
be equal to $l(\lambda)$ (also equals $\mu_1$). Let $m=l(\lambda)$. Then
\begin{equation}
D_\lambda(k)
=
2^{\binom k 2 -|\lambda|} [s_\lambda(y)]\prod_{1\leq i < j\leq m}
\left( 1-\frac{y_iy_j}{(1-y_i)(1-y_j)} \right)
\prod_{i=1}^m (1-y_i)^{-k-1},
\end{equation}
which is equal to
\begin{equation}
\label{eq:i74}
2^{\binom k 2 -|\lambda|} [s_\lambda(y)]\prod_{1\leq i < j\leq m}
(1-y_i-y_j) \prod_{i=1}^m
(1-y_i)^{-k-m}.
\end{equation}
Another application of \eqref{eq:i7} proves \eqref{eq:i4}.
\end{proof}
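As a numerical sanity check on \eqref{eq:i6}, one can compare the determinant \eqref{eq:i5} against $2^{\binom k2-|\lambda|}P_\lambda(k)$ for small cases; the sketch below takes $P_{(1)}(k)=k+1$ from Table~\ref{tab:plambda}:

```python
from math import comb, prod
from itertools import permutations

def sgn(p):
    # sign of a permutation via inversion count
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def det(M):
    n = len(M)
    return sum(sgn(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def D(lam, k):
    # D_lambda(k) via the reversed-row determinant (eq:i5); needs k >= l(lam)
    lam = list(lam) + [0] * (k - len(lam))
    M = [[comb(k + i - 1 - lam[i - 1], 2 * k - 2 * j)
          if k + i - 1 - lam[i - 1] >= 0 else 0
          for j in range(1, k + 1)] for i in range(1, k + 1)]
    return (-1) ** comb(k, 2) * det(M)

# D_lambda(k) = 2^(C(k,2) - |lambda|) * P_lambda(k), with P_(1)(k) = k + 1:
check2 = D((1,), 2) == 2 ** (comb(2, 2) - 1) * 3   # k = 2
check3 = D((1,), 3) == 2 ** (comb(3, 2) - 1) * 4   # k = 3
```
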
\begin{proof}[Proof of Corollary~\ref{cor:1}]
It is immediate from \eqref{eq:i4} or \eqref{eq:i6} that $P_\lambda(k)$ is a
polynomial in $k$ of degree at most $|\lambda|$ that takes integer values at
integers. We will show that it is in fact of degree $|\lambda|$ and determine
its leading coefficient.
From \eqref{eq:i4}, the highest power of $k$ occurs when we pick as many
powers of $y_i$ as possible from the last product. This happens when none
of the $y_i$ are picked from $(1-y_i-y_j)$. Note that
\begin{equation}
\label{eq:binomial -}
(1-y)^{-k-m} = 1 +(k+m) y +\frac{(k+m)(k+m+1)}{2!} y^2 + \ldots
= \sum_{j=0}^\infty (k+m+j-1)_{j}\, y^j/j!.
\end{equation}
The coefficient of the highest power of $k$ that appears in
the $j$-th term of this Taylor series is $1/j!$. Thus, the coefficient of $k^{|\lambda|}$
in $P_\lambda(k)$ equals
\begin{align}
\nonumber & [y_1^{\lambda_1+m-1}\dots y_m^{\lambda_m}] \left( \prod_{1\leq i <
j \leq m}(y_i -y_j)\right) \exp(y_1+\dots+y_m)\\
\nonumber =& [y_1^{\lambda_1+m-1}\dots y_m^{\lambda_m}] \sum_{\sigma\in
S_m}{\text{sgn}}(\sigma) \left(\prod_{i=1}^m y_i^{m-\sigma(i)}
e^{y_i}\right)\\
\nonumber
=& \sum_{\sigma\in S_m}{\text{sgn}}(\sigma) \prod_{i=1}^m \frac{1}{(\lambda_i-i+\sigma(i))!}
= \det(1/(\lambda_i-i+j)!)_{m\times m} \\
\label{eq:leading}
=&
\frac{
\prod_{1\leq i < j \leq m } (\lambda_i-\lambda_j-i+j)
}
{
\prod_{1\leq i \leq m }(\lambda_i + m - i)!
} = \frac{\chi^\lambda(1)}{|\lambda|!},
\end{align}
where $\chi^\lambda(1)$ is the degree of the irreducible representation
of $S_{|\lambda|}$ indexed by $\lambda$.
See example $6$ in Chapter I.7 of~\cite{macdonald_symmetric_1995} for
the last two equalities.
\end{proof}
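The determinant $\det(1/(\lambda_i-i+j)!)$ can be evaluated in exact rational arithmetic; the sketch below checks that it reproduces the leading coefficients $1/3$ and $1/2$ of $P_{[2,1]}(k)$ and $P_{[1,1]}(k)$ in Table~\ref{tab:plambda}:

```python
from fractions import Fraction
from math import factorial, prod
from itertools import permutations

def sgn(p):
    # sign of a permutation via inversion count
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def leading_coeff(lam):
    # det( 1/(lam_i - i + j)! )_{m x m}, with the convention 1/n! = 0 for n < 0
    m = len(lam)
    def entry(i, j):   # i, j are 1-based
        n = lam[i - 1] - i + j
        return Fraction(1, factorial(n)) if n >= 0 else Fraction(0)
    return sum(sgn(p) * prod(entry(i + 1, p[i] + 1) for i in range(m))
               for p in permutations(range(m)))
```
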
\begin{proof}[Proof of Corollary~\ref{cor:2}]
We use equation~\eqref{eq:i6} which gives a formula for $P_\lambda(k)$. As part
of the process of identifying the coefficient of $z_1^{\mu_1+n-1}\dots z_n^{\mu_n}$ in
that formula, we focus on the coefficient of $z_1^{\mu_1+n-1}$. Now, $\mu_1 = l(\lambda)$,
and $n=\lambda_1$, hence $\mu_1+n-1 = l(\lambda)+\lambda_1-1$.
When we expand~\eqref{eq:i6}, some of the powers of $z_1$
come from the factor $(1+z_1)^{k-\lambda_1}$, and the rest from
\begin{equation}
\label{eq:z_1 poly}
\prod_{1 < j \leq \lambda_1} (z_1 -z_j)(1+z_1+z_j) (1+2z_1).
\end{equation}
Consider the terms arising from taking a $z_1^j$ from the above. Notice
that~\eqref{eq:z_1 poly} is a polynomial in $z_1$ of degree $2\lambda_1-1$, and thus
$0 \leq j \leq 2\lambda_1-1$. The remaining $l(\lambda)+\lambda_1-1-j$
powers of $z_1$ come from expanding $(1+z_1)^{k-\lambda_1}$ using the binomial theorem,
so that the term associated with a particular choice of $j$ is divisible by
\begin{equation}
{k-\lambda_1 \choose l(\lambda)+\lambda_1-1-j}
= \frac{ (k-\lambda_1)(k-\lambda_1-1)\ldots(k-2 \lambda_1-l(\lambda)+2+j) }
{(l(\lambda)+\lambda_1-1-j)!}.
\end{equation}
For all $0 \leq j \leq 2\lambda_1-1$, this is divisible by
\begin{equation}
\label{eq:z_1 coeff}
(k-\lambda_1)(k-\lambda_1-1)\ldots(k-l(\lambda)+1).
\end{equation}
The coefficient of $z_1^{l(\lambda)+\lambda_1-1}=z_1^{\mu_1+n-1}$ in the
expression in~\eqref{eq:i6} is therefore divisible by~\eqref{eq:z_1 coeff}.
Thus, so is the coefficient of $z_1^{\mu_1+n-1}\dots z_n^{\mu_n}$.
The same analysis applied to~\eqref{eq:i4}, using~\eqref{eq:binomial -}, gives
that $P_\lambda(k)$ is divisible, when $l(\lambda)\leq \lambda_1$, by
\begin{equation}
(k + l( \lambda )) \ldots (k + \lambda_1-1)(k + \lambda_1).
\end{equation}
\end{proof}
\section{Family of quadratic twists of elliptic curve L-functions}
\label{sec:elliptic}
Here we adapt our techniques to the family of $L$-functions associated to the
quadratic twists of an elliptic curve over $\mathbb{Q}$. To keep things explicit,
we focus on the elliptic curve of conductor 11:
\begin{equation}
\label{eq:E11}
E_{11a}: y^2+y =x^3-x^2.
\end{equation}
The $L$-function of $E_{11a}$ is given by an Euler product of the form
\begin{equation}
L_{11}(s)=
\frac{1}{1-11^{-s-1/2 }}
\prod_{p \neq 11}
\frac{1}{1-a(p)p^{-s-1/2 }+p^{-2s}},
\end{equation}
which can be expanded into the Dirichlet series
\begin{equation}
\sum_{n=1}^\infty \frac{a(n)}{n^{1/2 +s}}.
\end{equation}
The Dirichlet series above is absolutely convergent for $\Re(s)>1$.
The coefficients $a(n)$ can be obtained from the Fourier expansion of
the cusp form of weight two and level 11 given by
\begin{equation}
\sum_{n=1}^\infty a(n) q^n=
q \prod_{n=1}^\infty (1-q^n)^2 (1-q^{11n})^2,
\end{equation}
or, alternatively, by counting points on $E_{11a}$ over the finite fields $\mathbb{F}_p$, $p$ prime.
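The two descriptions of $a(n)$ can be compared directly. The sketch below (assuming the Weierstrass model $y^2+y=x^3-x^2$ for the isogeny class of conductor $11$) expands the eta product by naive truncated polynomial multiplication and checks it against point counts over $\mathbb{F}_p$:

```python
# q-expansion of q * prod_n (1-q^n)^2 (1-q^{11n})^2, truncated at q^N
N = 20

def mul_trunc(A, B, N):
    C = [0] * (N + 1)
    for i, ai in enumerate(A):
        if ai:
            for j, bj in enumerate(B):
                if bj and i + j <= N:
                    C[i + j] += ai * bj
    return C

f = [0] * (N + 1)
f[0] = 1
for n in range(1, N + 1):
    for m in (n, 11 * n):
        if m <= N:
            g = [0] * (N + 1)
            g[0], g[m] = 1, -1
            f = mul_trunc(mul_trunc(f, g, N), g, N)   # times (1 - q^m)^2
a = [0] + f[:N]   # multiply by q: a[n] is the n-th Fourier coefficient

def a_p(p):
    # a_p = p + 1 - #E(F_p) for E: y^2 + y = x^3 - x^2 (good reduction, p != 11)
    pts = 1   # the point at infinity
    for x in range(p):
        for y in range(p):
            if (y * y + y - (x ** 3 - x ** 2)) % p == 0:
                pts += 1
    return p + 1 - pts
```
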
The function $L_{11}(s)$ has analytic continuation to all of $\mathbb{C}$ and
satisfies the functional equation
\begin{equation}
L_{11}(s) = X(s) L_{11}(1-s),
\end{equation}
where
\begin{equation}
X(s) = \frac{\Gamma(3/2-s)}{\Gamma(s+1/2)}
\left(\frac{2\pi}{11^{1/2}}\right)^{2s-1}.
\end{equation}
The $L$-function associated to a quadratic twist of $E_{11a}$ has
a Dirichlet series of the form
\begin{equation}
L_{11}(s,\chi_d)=\sum_{n=1}^\infty \frac{a(n)}{n^{1/2 +s}} \chi_d(n),
\end{equation}
where $d$ is a fundamental discriminant which we further assume satisfies $(d,11)=1$.
$L_{11}(s,\chi_d)$ satisfies the functional equation
\begin{equation}
L_{11}(s,\chi_d) = \chi_d(-11)
|d|^{1-2s} X(s)
L_{11}(1-s,\chi_d).
\end{equation}
When considering the moments of $L_{11}(1/2 ,\chi_d)$ we should restrict to
$L_{11}(s,\chi_d)$ with an even functional equation, i.e. $\chi_d(-11)=1$;
otherwise $L_{11}(1/2,\chi_d)$ is trivially equal to $0$.
In~\cite{CFKRS}, $d$ was also restricted to being negative
since it allowed them to exploit a theorem of Kohnen and Zagier~\cite{KZ}
to easily gather numerical data for $L_{11}(1/2 ,\chi_d)$ with which to check their
conjecture.
When $d<0$, $\chi_d(-1)=-1$; hence, in order to have an even
functional equation, we require $\chi_d(11)=-1$, i.e.
$d\equiv 2,6,7,8,10 \pmod{11}$. CFKRS conjectured, see Section~5.3 of~\cite{CFKRS}, the
asymptotic expansion:
\begin{equation}
\sum_{d \in S_-(X) \atop d \equiv 2,6,7,8,10 \bmod 11}
L_{11}(1/2,\chi_d)^k \sim \frac{15}{11\pi^2}
\frac{1}{X} \int_1^X \Upsilon_k(\log t) dt.
\label{eqn:L11 CFKRS}
\end{equation}
The extra factor of $5/11$ on the rhs, compared to~\eqref{eq: moment asympt},
reflects the fact that the sum on the left
is over 5 out of 11 possible residue classes mod 11.
Here, $\Upsilon_k$ is the polynomial of degree $k(k-1)/2$ given by
the $k$-fold residue
\begin{equation}
\label{eq:Upsilon}
\Upsilon_k(x)=\frac{(-1)^{k(k-1)/2}2^k}{k!} \frac{1}{(2\pi i)^{k}}
\oint \cdots \oint \frac{R_{11}(z_1,
\dots,z_{k})\Delta(z_1^2,\dots,z_{k}^2)^2} {\displaystyle
\prod_{j=1}^{k} z_j^{2k-1}} e^{x \sum_{j=1}^{k}z_j}\,dz_1\dots
dz_{k} ,
\end{equation}
where
\begin{equation}
\label{eq:R}
R_{11}(z_1,\dots,z_k)=A_k(z_1,\dots,z_k)
\prod_{j=1}^k X(1/2+z_j)^{-1/2}
\prod_{1\le i < j\le k}\zeta(1+z_i+z_j),
\end{equation}
and, overloading the notation of Section~\ref{sec:CFKRS quadratic},
$A_k$ is the Euler product which is absolutely convergent for
$\sum_{j=1}^k |z_j|<\frac12 $,
\begin{equation}
A_k(z_1,\dots,z_k) =
\prod_p R_{11,p}(z_1,\ldots,z_k)
\prod_{1\le i < j \le k}
\left(1-\frac{1}{p^{1+z_i+z_j}}\right)
\end{equation}
with, for $p \neq 11$,
\begin{equation}
R_{11,p} =
\left(1+\frac 1 p\right)^{-1}
\left(\frac 1 p +\frac{1}{2}
\left(
\prod_{j=1}^k
\frac{1}{1-a(p)p^{-1-z_j}+p^{-1-2z_j}}
+ \prod_{j=1}^k
\frac{1}{1+a(p)p^{-1-z_j}+p^{-1-2z_j}}
\right)
\right)
\end{equation}
and
\begin{equation}
R_{11,11} =
\prod_{j=1}^k \frac{1}{1+11^{-1-z_j}}.
\end{equation}
Note that, although here we are working with the specific elliptic curve
$E_{11a}$, CFKRS' recipe provides a similar conjecture for the quadratic twists of any elliptic
curve over $\mathbb{Q}$. For many examples, see the paper~\cite{CPRW}. The only difference
is in the conductor, in the local factors of $A_k$ for the primes dividing the
conductor, and in the allowed residue classes (and modulus) for $d$.
As noted above, $\Upsilon_k(x)$ is a polynomial of degree $k(k-1)/2$ given by the
$k$-fold residue \reff{eq:Upsilon}. The degree works out smaller compared to
$Q_{\pm}(k,x)$ because the product of zetas in~\eqref{eq:R} involves fewer
zetas: the product over $i<j$ has only ${k \choose 2}$ factors. Therefore,
we can write
\begin{equation} \label{eq:Upsilonexpansion}
\Upsilon_k(x) = c_0(k) x^{k(k-1)/2} + c_1(k) x^{k(k-1)/2 - 1} + \ldots + c_{k(k-1)/2}(k).
\end{equation}
Also note that the exponential in~\eqref{eq:Upsilon} has an $x$ rather than $x/2$. This will impact
the powers of 2 that enter into our formulas for the coefficients $c_r(k)$.
To address the poles coming from the zeta-product $\prod \zeta(1+z_i+z_j)$ we absorb into it
the factors $(z_i+z_j)$ of $\Delta(z_1^2, \ldots, z_k^2) = \prod_{1 \leq i < j \leq k} (z_j - z_i)(z_j+z_i)$.
Thus,
\begin{align} \nonumber
\Upsilon_k (x) & = \frac{(-1)^{k(k-1)/2}2^k}{k!} \frac{1}{(2\pi i)^k} \oint
\ldots \oint A_k(z_1, \ldots, z_k) \prod_{j=1}^k X(1/2+z_j)^{-1/2}\\ & \times
\prod_{1 \leq i < j \leq k} \zeta(1 + z_i + z_j) (z_i + z_j)
\label{eq:NewUpsilonOld} \frac{\Delta(z_1, \ldots, z_k) \Delta(z_1^2, \ldots,
z_k^2)}{\prod_{j=1}^kz_j^{2k - 1}} \exp(x \sum_{j=1}^k z_j)dz_1 \ldots dz_k.
\end{align}
We overload notation again and set
\begin{equation}
a_k:= A_k(0, \ldots, 0)
\end{equation}
and expand
\begin{equation}
\label{eq:multivariateM} \frac{1}{a_k} A_k(z_1, \ldots, z_k) \prod_{j=1}^k
X(1/2+z_j)^{-1/2}\prod_{1 \leq i < j \leq k} \zeta(1 + z_i + z_j) (z_i + z_j)
=: \sum_{j=0}^\infty \sum_{|\lambda| = j} b_\lambda(k) m_\lambda(z),
\end{equation}
where, as before, $m_\lambda(z)$ is the monomial symmetric function for the partition
$\lambda$. The lhs above is holomorphic in a neighbourhood of $z_1=\ldots=z_k=0$,
because the poles from the zeta-product $\prod \zeta(1 + z_i + z_j)$ are
canceled by the product $\prod (z_i + z_j)$. We normalize by $a_k$ so that
the first coefficient is 1.
So \reff{eq:NewUpsilonOld} becomes
\begin{align} \nonumber
\Upsilon_k (x) &= \frac{(-1)^{k(k-1)/2}2^k}{k!} \frac{a_k}{(2\pi i)^k} \oint
\ldots \oint \sum_{j=0}^\infty \sum_{|\lambda| = j} b_\lambda(k) m_\lambda(z) \\
& \times \frac{\Delta(z_1, \ldots, z_k) \Delta(z_1^2, \ldots,
z_k^2)}{\prod_{j=1}^kz_j^{2k - 1}} \exp(x \sum_{j=1}^k z_j)dz_1 \ldots dz_k.
\label{eq:NewUpsilon}
\end{align}
Comparing to equation~\eqref{eq:sumofintegrals} we notice three differences:
the extra $2^k$ in front of the integral, the $2k-1$ powers of each $z_j$,
rather than $2k$ powers, in the denominator, and the $x$ rather than $x/2$ in
the exponential. The first two differences are accounted for by the fact that the
product over zetas in~\eqref{eq:Qk} includes $i=j$, and this introduces,
from~\eqref{eq:233}, an extra $2z_j$, for each $j$. Therefore, proceeding as
in Section~\ref{sec:further-lower-order}, we get:
\begin{align} \nonumber
c_r(k) &= \frac{(-1)^{k(k-1)/2}2^k}{k!} \frac{a_k}{(2\pi i)^k} \oint
\ldots \oint \sum_{|\lambda| = r} b_\lambda(k) m_\lambda(z) \\
\label{eq:beforepullingq} & \times \frac{\Delta(z_1, \ldots, z_k)
\Delta(z_1^2, \ldots, z_k^2)}{\prod_{j=1}^k z_j^{2k - 1}}
\exp(\sum_{j=1}^k z_j)dz_1 \ldots dz_k.
\end{align}
Analogously to~\eqref{eq:formula for c(r,k)}, we have
\begin{multline}
\label{eq:formula for c(r,k) E}
c_r(k) =
2^k a_k
\prod_{j=0}^{k-1}\frac{(2j)!}{(k+j-1)!}
\\
\times
\sum_{|\lambda|=r}
b_\lambda(k)
\sum_{\bf u}{^{'}} (-1)^{n({\bf u})} E_{\alpha({\bf u})}(k)
\times (k+i({\bf u})-2)_{u_{i({\bf u})}} (k+i({\bf u})-3)_{u_{i({\bf u})-1}}\dotsm(k-1)_{u_1},
\end{multline}
where, for a partition $\alpha$,
\begin{equation}
\label{eq:E_alpha}
E_\alpha(k)=\det \left( \binom{2k-i-1-\alpha_{k-i+1}}{2k-2j}\right)_{1\leq i,j\leq k}.
\end{equation}
The equation for $c_r(k)$ differs from~\eqref{eq:formula for c(r,k)} in the power of $2$
that appears, and also some of the factorials have an extra $-1$ in them. The latter
comes from the one missing $z_j$ in the denominator of~\eqref{eq:beforepullingq} compared
to~\eqref{eq:Qk}.
Notice that $E_\alpha(k)$ is very similar to $D_\alpha(k)$. The only difference is the
extra $-1$ in the binomial coefficient. We can relate the two determinants by taking advantage
of the entries in the first column of the matrix for $E_\alpha(k)$, which are all 0 except
for the $1,1$ entry. Assume, for now, that $k\geq \max(l(\alpha)+1,\alpha_1)$
(so, in particular, $\alpha_k=0$).
Expanding along the first column, and then reindexing $i,j$ with $i+1,j+1$:
\begin{align}
E_\alpha(k) &=
\det \left( \binom{2k-i-1-\alpha_{k-i+1}}{2k-2j}\right)_{2\leq i,j\leq k} \notag \\
&=
\det \left( \binom{2(k-1)-i-\alpha_{(k-1)-i+1}}{2(k-1)-2j}\right)_{1\leq i,j\leq k-1} \notag \\
\label{eq:E vs D}
&= D_\alpha(k-1) = 2^{ {k-1 \choose 2} - |\alpha|}\, P_\alpha(k-1).
\end{align}
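The column expansion relating $E_\alpha(k)$ to $D_\alpha(k-1)$ can be confirmed numerically for small cases (a sketch; both determinants are evaluated directly from their definitions):

```python
from math import comb, prod
from itertools import permutations

def sgn(p):
    # sign of a permutation via inversion count
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

def det(M):
    n = len(M)
    return sum(sgn(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def bino(n, r):
    # binomial with the convention C(n, r) = 0 for n < 0
    return comb(n, r) if n >= 0 else 0

def E(alpha, k):
    # E_alpha(k) = det( C(2k - i - 1 - alpha_{k-i+1}, 2k - 2j) ), eq. (E_alpha)
    al = list(alpha) + [0] * (k - len(alpha))
    return det([[bino(2 * k - i - 1 - al[k - i], 2 * k - 2 * j)
                 for j in range(1, k + 1)] for i in range(1, k + 1)])

def D(lam, k):
    # D_lambda(k) via eq. (i5)
    lm = list(lam) + [0] * (k - len(lam))
    return (-1) ** comb(k, 2) * det([[bino(k + i - 1 - lm[i - 1], 2 * k - 2 * j)
                                      for j in range(1, k + 1)]
                                     for i in range(1, k + 1)])
```
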
Also note that, while we have assumed $k>l(\alpha)$, Corollary~\ref{cor:2} tells us
that $P_\alpha(k-1)$, and hence the rhs of~\eqref{eq:E vs D}, vanishes for
$\alpha_1+1 \leq k \leq l(\alpha)$. Furthermore, by the same
method as was used around~\eqref{eq:divisibility part ii},
\begin{equation}
(k+i({\bf u})-2)_{u_{i({\bf u})}} (k+i({\bf u})-3)_{u_{i({\bf u})-1}}\dotsm(k-1)_{u_1}
\end{equation}
is divisible by $(k-1)\ldots (k-\alpha_1)$, and thus vanishes for $1 \leq k \leq \alpha_1$.
Therefore, writing
\begin{multline}
\label{eq:formula for c(r,k) E b}
c_r(k) =
2^{k+{k-1\choose 2}-r} a_k \prod_{j=0}^{k-1}\frac{(2j)!}{(k+j-1)!}
\\
\times
\sum_{|\lambda|=r}
b_\lambda(k)
\sum_{\bf u}{^{'}} (-1)^{n({\bf u})} P_{\alpha({\bf u})}(k-1)
\times (k+i({\bf u})-2)_{u_{i({\bf u})}} (k+i({\bf u})-3)_{u_{i({\bf u})-1}}\dotsm(k-1)_{u_1},
\end{multline}
we can replace the requirement that $k\geq \max(l(\alpha)+1,\alpha_1)$ with $k>0$. Finally,
using~\eqref{eq:N}, we get, for $k>0$,
\begin{equation}
\label{eq:formula for c(r,k) E c}
c_r(k) =
2^{k+{k-1\choose 2}-r} a_k
\prod_{j=0}^{k-1}\frac{(2j)!}{(k+j-1)!}
\sum_{|\lambda|=r}
b_\lambda(k) N_\lambda(k-1).
\end{equation}
For example, the $r=0$ term equals
\begin{equation}
\label{eq:c0 E}
c_0(k) =
2^{k+{k-1 \choose 2}} a_k \prod_{j=0}^{k-1}\frac{(2j)!}{(k+j-1)!}.
\end{equation}
One can verify inductively that this matches the leading term as described in (1.5.26)
of~\cite{CFKRS}:
\begin{equation}
\label{eq:check c0}
2^{k+{k-1 \choose 2}} \prod_{j=0}^{k-1}\frac{(2j)!}{(k+j-1)!} =
2^{(k+1)k/2} \prod_{j=0}^{k-1}\frac{j!}{(2j)!}.
\end{equation}
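The identity can also be checked directly in exact rational arithmetic for small $k$ (a sketch using \texttt{fractions.Fraction}):

```python
from fractions import Fraction
from math import comb, factorial

def lhs(k):
    # 2^(k + C(k-1,2)) * prod_{j=0}^{k-1} (2j)! / (k+j-1)!
    v = Fraction(2) ** (k + comb(k - 1, 2))
    for j in range(k):
        v *= Fraction(factorial(2 * j), factorial(k + j - 1))
    return v

def rhs(k):
    # 2^((k+1)k/2) * prod_{j=0}^{k-1} j! / (2j)!
    v = Fraction(2) ** ((k + 1) * k // 2)
    for j in range(k):
        v *= Fraction(factorial(j), factorial(2 * j))
    return v

ok = all(lhs(k) == rhs(k) for k in range(1, 9))
```
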
Note also that the Taylor coefficients $b_\lambda(k)$, and $a_k$ itself,
depend on the underlying elliptic curve $E_{11a}$ and its $a(p)$'s. While we
can derive similar formulas for $b_\lambda(k)$ as for quadratic Dirichlet $L$-functions (see the
examples~\eqref{eq:b1},~\eqref{eq:b11}), in order to
accelerate their numerical evaluation we would need to use the symmetric power
$L$-functions associated to the $L$-function $L_{11}(s)$. Acceleration for
$b_\lambda(k)$ will be discussed in a forthcoming paper of Alderson and
Rubinstein~\cite{AR2}.
\begin{table}
\centering
\begin{tabular}{|r|c|}
\hline
Partition & $P_\lambda(k)$ \\
\hline
$ [1] $ & $ k + 1 $ \\
$ [2] $ & $ (1/2) (k + 1) (k + 2) $ \\
$ [1, 1] $ & $ (1/2) (k - 1) (k + 2) $ \\
$ [3] $ & $ (1/6) (k + 1) (k + 2) (k + 3) $ \\
$ [2, 1] $ & $ (1/3) (k + 2) (k^2 + k - 3) $ \\
$ [1, 1, 1] $ & $ (1/6) (k - 2) (k - 1) (k + 3) $ \\
$ [4] $ & $ (1/24) (k + 1) (k + 2) (k + 3) (k + 4) $ \\
$ [3, 1] $ & $ (1/8) (k + 2) (k + 3) (k^2 + k - 4) $ \\
$ [2, 2] $ & $ (1/12) (k - 2) (k + 1) (k + 2) (k + 3) $ \\
$ [2, 1, 1] $ & $ (1/8) (k - 2) (k + 3) (k^2 + k - 4) $ \\
$ [1, 1, 1, 1] $ & $ (1/24) (k - 3) (k - 2) (k - 1) (k + 4) $ \\
$ [5] $ & $ (1/120) (k + 1) (k + 2) (k + 3) (k + 4) (k + 5) $ \\
$ [4, 1] $ & $ (1/30) (k + 2) (k + 3) (k + 4) (k^2 + k - 5) $ \\
$ [3, 2] $ & $ (1/24) (k + 1) (k + 2) (k + 3) (k^2 + k - 8) $ \\
$ [3, 1, 1] $ & $ (1/20) (k + 3) (k^4 + 2k^3 - 11k^2 - 12k + 40) $ \\
$ [2, 2, 1] $ & $ (1/24) (k - 2) (k + 1) (k + 3) (k^2 + k - 8) $ \\
$ [2, 1, 1, 1] $ & $ (1/30) (k - 3) (k - 2) (k + 4) (k^2 + k - 5) $ \\
$ [1, 1, 1, 1, 1] $ & $ (1/120) (k - 4) (k - 3) (k - 2) (k - 1) (k + 5) $ \\
$ [6] $ & $ (1/720) (k + 1) (k + 2) (k + 3) (k + 4) (k + 5) (k + 6) $ \\
$ [5, 1] $ & $ (1/144) (k - 2) (k + 2) (k + 4) (k + 5) (k + 3)^2 $ \\
$ [4, 2] $ & $ (1/80) (k + 1) (k + 2) (k + 3) (k + 4) (k^2 + k - 10) $ \\
$ [4, 1, 1] $ & $ (1/72) (k + 3) (k + 4) (k^4 + 2k^3 - 13k^2 - 14k + 60) $ \\
$ [3, 3] $ & $ (1/144) (k - 3) (k + 1) (k + 3) (k + 4) (k + 2)^2 $ \\
$ [3, 2, 1] $ & $ (1/45) (k + 1) (k + 3) (k^4 + 2k^3 - 16k^2 - 17k + 75) $ \\
$ [3, 1, 1, 1] $ & $ (1/72) (k - 3) (k + 4) (k^4 + 2k^3 - 13k^2 - 14k + 60) $ \\
$ [2, 2, 2] $ & $ (1/144) (k - 3) (k - 2) (k - 1) (k + 2) (k + 3) (k + 4) $ \\
$ [2, 2, 1, 1] $ & $ (1/80) (k - 3) (k - 2) (k + 1) (k + 4) (k^2 + k - 10) $ \\
$ [2, 1, 1, 1, 1] $ & $ (1/144) (k - 4) (k - 3) (k + 3) (k + 5) (k - 2)^2 $ \\
$ [1, 1, 1, 1, 1, 1] $ & $ (1/720) (k - 5) (k - 4) (k - 3) (k - 2) (k - 1) (k + 6) $ \\
$ [7] $ & $ (1/5040) (k + 1) (k + 2) (k + 3) (k + 4) (k + 5) (k + 6) (k + 7) $ \\
$ [6, 1] $ & $ (1/840) (k + 2) (k + 3) (k + 4) (k + 5) (k + 6) (k^2 + k - 7) $ \\
$ [5, 2] $ & $ (1/360) (k - 3) (k + 1) (k + 2) (k + 3) (k + 5) (k + 4)^2 $ \\
$ [5, 1, 1] $ & $ (1/336) (k + 3) (k + 4) (k + 5) (k^4 + 2k^3 - 15k^2 - 16k + 84) $ \\
$ [4, 3] $ & $ (1/360) (k + 1) (k + 3) (k + 4) (k + 2)^2 (k^2 + k - 15) $ \\
$ [4, 2, 1] $ & $ (1/144) (k + 1) (k + 3) (k + 4) (k^4 + 2k^3 - 19k^2 - 20k + 108) $ \\
$ [4, 1, 1, 1] $ & $ (1/252) (k + 4) (k^6 + 3k^5 - 26k^4 - 57k^3 + 277k^2 + 306k - 1260) $ \\
$ [3, 3, 1] $ & $ (1/240) (k - 3) (k + 1) (k + 2) (k + 3) (k + 4) (k^2 + k - 10) $ \\
$ [3, 2, 2] $ & $ (1/240) (k - 3) (k - 1) (k + 2) (k + 3) (k + 4) (k^2 + k - 10) $ \\
$ [3, 2, 1, 1] $ & $ (1/144) (k - 3) (k + 1) (k + 4) (k^4 + 2k^3 - 19k^2 - 20k + 108) $ \\
$ [3, 1, 1, 1, 1] $ & $ (1/336) (k - 4) (k - 3) (k + 5) (k^4 + 2k^3 - 15k^2 - 16k + 84) $ \\
$ [2, 2, 2, 1] $ & $ (1/360) (k - 3) (k - 2) (k - 1) (k + 2) (k + 4) (k^2 + k - 15) $ \\
$ [2, 2, 1, 1, 1] $ & $ (1/360) (k - 4) (k - 2) (k + 1) (k + 4) (k + 5) (k - 3)^2 $ \\
$ [2, 1, 1, 1, 1, 1] $ & $ (1/840) (k - 5) (k - 4) (k - 3) (k - 2) (k + 6) (k^2 + k - 7) $ \\
$ [1, 1, 1, 1, 1, 1, 1] $ & $ (1/5040) (k - 6) (k - 5) (k - 4) (k - 3) (k - 2) (k - 1) (k + 7) $ \\
\hline
\end{tabular}
\caption{Table of $P_\lambda(k)$}
\label{tab:plambda}
\end{table}
\begin{table}
\begin{tabular}[t]{cc c c}
\toprule[2pt]
$r$ & $ c_-(r,1)$ & $ c_-(r,2)$& $ c_-(r,3)$ \\
\midrule
\begin{tabular}[t]{r}
0\\ 1\\ 2\\ 3\\ 4\\ 5\\ 6
\end{tabular} &
\begin{tabular}[t]{c}
3.522211004995827732e-01
\\
6.175500336140218316e-01
\end{tabular} &
\begin{tabular}[t]{c}
1.238375103096108452e-02
\\
1.807468351186638511e-01
\\
3.658991414081511628e-01
\\
-1.398953902867718369e-01
\end{tabular} &
\begin{tabular}[t]{c}
1.528376099282021425e-05
\\
8.968276397996084726e-04
\\
1.701420175947633562e-02
\\
1.093281830681910732e-01
\\
1.358556940901993748e-01
\\
-2.329509111366616925e-01
\\
4.735303837788046866e-01
\end{tabular}\\
\bottomrule[2pt]\\
$r$& $ c_-(r,4)$ & $ c_-(r,5)$& $ c_-(r,6)$ \\
\midrule \\
\begin{tabular}[t]{r}
0\\ 1\\ 2\\ 3\\ 4\\ 5\\ 6\\7 \\ 8 \\ 9 \\ 10
\end{tabular} &
\begin{tabular}[t]{c}
3.158268332443340154e-10
\\
5.062201340608140133e-08
\\
3.252070477914552180e-06
\\
1.065078255299183117e-04
\\
1.865791348720969960e-03
\\
1.658674128885722146e-02
\\
5.985999910494527870e-02
\\
5.231179842747744717e-03
\\
-1.097356193524353096e-01
\\
5.581253300381869842e-01
\\
1.918594095122517496e-01
\end{tabular}
&
\begin{tabular}[t]{c}
6.712517611066278238e-17
\\
2.341233253582258184e-14
\\
3.571169234103129887e-12
\\
3.127118490785452708e-10
\\
1.734617312939144360e-08
\\
6.342941105701246722e-07
\\
1.541064437383931078e-05
\\
2.441498848686470880e-04
\\
2.390928284573956911e-03
\\
1.275610736275904766e-02
\\
2.430382016767882944e-02
\end{tabular} &
\begin{tabular}[t]{c}
1.036004645427003276e-25
\\
6.796814066740219201e-23
\\
2.037808336505920108e-20
\\
3.698051408075659748e-18
\\
4.534838798273249707e-16
\\
3.972866885083416336e-14
\\
2.563279107875100164e-12
\\
1.237229229636910631e-10
\\
4.491515829566301398e-09
\\
1.222154548508955419e-07
\\
2.461203700713661380e-06
\end{tabular}\\
\bottomrule[2pt]
$r$& $ c_-(r,7)$ & $c_-(r,8)$& $ c_-(r,9)$ \\
\midrule\\
\begin{tabular}[t]{r}
0\\1\\2\\3\\4\\5\\6\\7\\8\\9\\10
\end{tabular}
&
\begin{tabular}[t]{c}
8.864927187204894781e-37
\\
9.894437508330137269e-34
\\
5.176293026015439716e-31
\\
1.686724585610585967e-28
\\
3.837267516078630273e-26
\\
6.474635477336820480e-24
\\
8.402114103039537077e-22
\\
8.581764459399681586e-20
\\
7.002464589632248733e-18
\\
4.607034349981096374e-16
\\
2.455973970379903840e-14
\end{tabular} &
\begin{tabular}[t]{c}
3.372009502181036150e-50
\\
5.951191608649093822e-47
\\
5.002043249634522587e-44
\\
2.664702289380503418e-41
\\
1.010164553397544484e-38
\\
2.900498887294046119e-36
\\
6.555588245821587108e-34
\\
1.196609980002393296e-31
\\
1.795828629692653400e-29
\\
2.244368542496810519e-27
\\
2.357312576663548340e-25
\end{tabular} &
\begin{tabular}[t]{c}
4.727735796587526113e-66
\\
1.248019487993274422e-62
\\
1.585820955757896443e-59
\\
1.291823649274241834e-56
\\
7.580660624239738211e-54
\\
3.413900516458523702e-51
\\
1.227404779731471396e-48
\\
3.618608212113140382e-46
\\
8.916974338520402569e-44
\\
1.862786263819570034e-41
\\
3.334524507937658586e-39
\end{tabular}\\
\bottomrule[2pt]
\end{tabular}
\caption[$c_-(r,k)$]
{The coefficients $ c_-(r,k)$ of $Q_-(k)$.
}\label{tab:cminusrk}
\end{table}
\begin{table}
\centering
\begin{tabular}{cccc}
\toprule[2pt]\\
$r$ & $ c_+(r,1)$ & $ c_+(r,2)$& $ c_+(r,3)$ \\
\midrule\\
\begin{tabular}[t]{r}
0\\ 1\\ 2\\ 3\\ 4\\ 5\\ 6
\end{tabular}
&
\begin{tabular}[t]{c}
3.522211004995827732e-01
\\
-4.889851881547797041e-01
\end{tabular} &
\begin{tabular}[t]{c}
1.238375103096108452e-02
\\
6.403273133040673915e-02
\\
-4.030985462971436450e-01
\\
8.784723252866324383e-01
\end{tabular} &
\begin{tabular}[t]{c}
1.528376099282021425e-05
\\
6.087355322740111135e-04
\\
5.189536257221761054e-03
\\
-2.070416696161206729e-02
\\
-4.836560144295628388e-02
\\
6.305676273169569246e-01
\\
-1.231149543676485214
\end{tabular}\\
\bottomrule[2pt]\\
$r$ & $c_+(r,4)$ & $ c_+(r,5)$& $c_+(r,6)$ \\
\midrule\\
\begin{tabular}[t]{r}
0\\ 1\\ 2\\ 3\\ 4\\ 5\\ 6\\ 7 \\ 8 \\ 9 \\ 10
\end{tabular}
&
\begin{tabular}[t]{c}
3.158268332443340154e-10
\\
4.070002081481211197e-08
\\
1.961035634727995841e-06
\\
4.187933734218812260e-05
\\
3.233832982317403053e-04
\\
-7.264209058002128044e-04
\\
-9.741303115420443803e-03
\\
6.254058547607513341e-02
\\
5.338039400180279170e-02
\\
-1.125787514381924481e+00
\\
2.125417457224375362
\end{tabular} &
\begin{tabular}[t]{c}
6.712517611066278238e-17
\\
2.024913313371989448e-14
\\
2.611003455556346309e-12
\\
1.870888923760240058e-10
\\
8.086250862410257040e-09
\\
2.126496335543600159e-07
\\
3.194157049041922835e-06
\\
2.120198748289444789e-05
\\
-3.390055513847315853e-05
\\
-7.750613901748660065e-04
\\
3.339978554290242568e-03
\end{tabular} &
\begin{tabular}[t]{c}
1.036004645427003276e-25
\\
6.113326104276961713e-23
\\
1.632224321325099403e-20
\\
2.605311255686981285e-18
\\
2.766415183453526818e-16
\\
2.056437432501927988e-14
\\
1.095709499896029594e-12
\\
4.206172871179562219e-11
\\
1.149109718292255815e-09
\\
2.154509460431619112e-08
\\
2.543371224701971233e-07
\end{tabular} \\
\bottomrule[2pt]\\
$r$ & $ c_+(r,7)$ & $ c_+(r,8)$& $ c_+(r,9)$ \\
\midrule\\
\begin{tabular}[t]{r}
0 \\1\\ 2\\ 3\\ 4\\ 5\\ 6\\ 7 \\ 8 \\ 9 \\ 10
\end{tabular}
&
\begin{tabular}[t]{c}
8.864927187204894781e-37
\\
9.114637784804059894e-34
\\
4.370089613567423486e-31
\\
1.297363094463138851e-28
\\
2.670392092372496088e-26
\\
4.043466811338890795e-24
\\
4.663148139710778893e-22
\\
4.183154331210266578e-20
\\
2.954857264190019988e-18
\\
1.652770327042906306e-16
\\
7.319238365079051443e-15
\end{tabular} &
\begin{tabular}[t]{c}
3.372009502181036150e-50
\\
5.569826318573164385e-47
\\
4.368642207198861832e-44
\\
2.164658555649376388e-41
\\
7.604817314362535383e-39
\\
2.015327809331532264e-36
\\
4.184593239584908611e-34
\\
6.980465161514108456e-32
\\
9.516651650236242059e-30
\\
1.073015400698217206e-27
\\
1.008662233782716849e-25
\end{tabular} &
\begin{tabular}[t]{c}
4.727735796587526113e-66
\\
1.181182697783246367e-62
\\
1.417926553457661234e-59
\\
1.089051480593133551e-56
\\
6.012641112088390226e-54
\\
2.541594397695401893e-51
\\
8.555207141044511720e-49
\\
2.354807833463352272e-46
\\
5.400892227120418237e-44
\\
1.046573394851932219e-41
\\
1.731269798305270612e-39
\end{tabular}\\
\bottomrule[2pt]
\end{tabular}
\caption{The coefficients $c_+(r,k)$ of $Q_+(r,k)$.}
\label{tab:cplusrk}
\end{table}
\vspace*{20mm}
\section{Introduction}
Sasaki manifolds have gained their prominence in physics and in algebraic geometry and Riemannian geometry \cite{BG}.
There has been tremendous work on Sasaki geometry in the last two decades, in particular on Sasaki-Einstein manifolds; see \cite{BG, GMSW, BGK, FOW, MSY, HeSun, CS} and references therein.
On the other hand, Sasaki geometry is an odd-dimensional analogue of K\"ahler geometry, and almost all results in K\"ahler geometry have their counterparts in Sasaki geometry. Calabi's extremal metrics \cite{Ca1, Ca2} (and csck metrics) have played a very important role in K\"ahler geometry, and the notion has a direct adaptation to the Sasaki setting \cite{BGS1}.
In 1997, S. K. Donaldson \cite{Don97} proposed an extremely fruitful program to approach the existence of csck (extremal) metrics on a compact K\"ahler manifold with a fixed K\"ahler class.
Donaldson's program has also been extended to the Sasaki setting; see \cite{GZ1, he14} for example.
A major problem in K\"ahler geometry is to characterize exactly when a K\"ahler class contains a csck (extremal) metric. The analytic part of the existence of csck is to solve a fourth order, highly nonlinear elliptic equation, the scalar curvature type equation, which is regarded as a very hard problem in the field. Recently Chen and Cheng \cite{CC1, CC2, CC3} solved a major conjecture: the existence of csck is equivalent to well-studied conditions such as the properness of Mabuchi's $K$-energy, or geodesic stability. The first named author \cite{he182} proved the following counterpart in the Sasaki setting.
\begin{thm}[\cite{he182}]\label{cscs}There exists a Sasaki metric with constant scalar curvature if and only if the ${\mathcal K}$-energy is reduced proper with respect to $\text{Aut}_0(\xi, J)$, the identity component of the automorphism group which preserves the Reeb vector field and the transverse complex structure.
\end{thm}
The proof of Theorem \ref{cscs} is an adaptation to the Sasaki setting of the recent breakthrough of Chen-Cheng \cite{CC3} on the existence of csck in the K\"ahler setting. Technically the arguments consist of two major parts: a priori estimates for a nonlinear PDE, and pluripotential theory.
Building on previous developments of pluripotential theory, T. Darvas \cite{D1, D2} has developed a profound theory of the geometric structure of the space of K\"ahler potentials. Among other things, he introduced a Finsler metric $d_1$ and proved very effective estimates for the distance function $d_1$ in terms of well-studied energy functionals such as Aubin's $I$-functional. Darvas's results turn out to be very useful for understanding the geometric structure of the space of K\"ahler potentials, in particular in the study of csck \cite{DR, BDL2, CC3}.
In this paper we extend many results of pluripotential theory on K\"ahler manifolds, notably those in \cite{GZ, D1, D2}, to the Sasaki setting. These results play an important role in the proof of Theorem \ref{cscs}. To prove them, we need to exploit the geometric structures of Sasaki manifolds, in particular the K\"ahler cone structure and the transverse K\"ahler structure.
Let $(M, g)$ be a compact Riemannian manifold of dimension $2n+1$. Sasaki manifolds have very rich geometric structures and admit many equivalent descriptions. Probably the most straightforward formulation is the following: its metric cone \[X=\mathbb{R}_{+}\times M, \qquad g_X=dr^2+r^2 g,\] is a K\"ahler cone. Hence there exists a complex structure $J$ on $X$ such that $(g_X, J)$ defines a K\"ahler structure. We identify $M$ with its natural embedding $M\rightarrow \{r=1\}\subset X$.
The $1$-form $\eta$ is given by $\eta=J(r^{-1}dr)$ and it defines a contact structure on $M$. The vector field $\xi:=J(r\partial_r)$ is a nowhere vanishing, holomorphic Killing vector field and it is called the \emph{Reeb vector field} when it is restricted on $M$. The integral curves of $\xi$ are geodesics, and give rise to a foliation on $M$, called the \emph{Reeb foliation}. Then there is a K\"ahler structure on the local leaf space of the Reeb foliation, called the \emph{transverse K\"ahler structure}. A standard example of a Sasaki manifold is the odd dimensional round sphere $S^{2n+1}$. The corresponding K\"ahler cone is $\mathbb{C}^{n+1}\backslash\{0\}$ with the flat metric and its transverse K\"ahler structure descends to $\mathbb{CP}^n$ with the Fubini-Study metric.
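To fix conventions, let us record this standard example explicitly. On $X=\mathbb{C}^{n+1}\backslash\{0\}$ with $r^2=\sum_{i=0}^{n}|z_i|^2$, and with $\eta=2d^c\log r=\sqrt{-1}(\bar\partial-\partial)\log r$ as in Section 2, one computes
\[
r\partial_r=\sum_{i=0}^{n}\left(z_i\partial_{z_i}+\bar z_i\partial_{\bar z_i}\right),\qquad
\xi=J(r\partial_r)=\sqrt{-1}\sum_{i=0}^{n}\left(z_i\partial_{z_i}-\bar z_i\partial_{\bar z_i}\right),
\]
\[
\eta=\frac{\sqrt{-1}}{2r^2}\sum_{i=0}^{n}\left(z_i\,d\bar z_i-\bar z_i\,dz_i\right),
\]
and a direct check gives $\eta(\xi)=1$. The flow of $\xi$ is the Hopf action $z\mapsto e^{\sqrt{-1}t}z$, whose quotient recovers $\mathbb{CP}^n$; in particular this Sasaki structure is regular.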
We can also formulate Sasaki geometry, in particular the transverse K\"ahler structure via its contact bundle $\mathcal D=\text{Ker}(\eta)\subset TM$. The complex structure $J$ on the cone descends to the contact bundle via $\Phi:=J|_\mathcal D$. The Sasaki metric can be written as follows,
\begin{equation*}
g=\eta\otimes\eta+g^T,
\end{equation*}
where $g^T$ is the transverse K\"ahler metric, given by $g^T:=2^{-1}d\eta(\Phi\otimes \mathbb{I})$. The transverse K\"ahler form is denoted by $\omega^T=2^{-1}d\eta$.
We shall study the transverse K\"ahler geometry of Sasaki metrics, with the Reeb vector field $\xi$ and transverse complex structure (equivalently the complex structure $J$ on the cone) both fixed.
This means that we fix the basic K\"ahler class $[\omega^T]$ with $\omega^T=2^{-1}d\eta$ and study the Sasaki structures induced by the space of transverse K\"ahler potentials,
\begin{equation*}
{\mathcal H}=\{\phi\in C_B^\infty(M): \omega_\phi=\omega^T+dd^c_B \phi>0\},
\end{equation*}
where $C_B^\infty(M)$ is the space of smooth \emph{basic functions}. The main result of this paper is the following.
\begin{thm}\label{pluri01}$({\mathcal E}_p(M, \xi, \omega^T), d_p)$ is a complete geodesic metric space for $p\in [1, \infty)$, which is the metric completion of $({\mathcal H}, d_p)$. For any $u, v\in {\mathcal E}_p$,
$d_p(u, v)$ is realized by a unique finite energy geodesic in ${\mathcal E}_p$ connecting $u$ and $v$.
There exists a uniform constant $C=C(n, p)>1$ such that
\begin{equation*}\label{d01}
C^{-1} I_p(u, v)\leq d_p(u, v)\leq CI_p(u, v),
\end{equation*}
where the energy functional $I_p$ is given by
\[
I_p(u, v)=\|u-v\|_{p, u}+\|u-v\|_{p, v}.
\]
Moreover, we have
\begin{equation*}\label{d02}
d_p(u, \frac{u+v}{2})\leq C d_p(u, v).
\end{equation*}
\end{thm}
We refer to Section 3 for the notions ${\mathcal E}_p$ and $d_p$. Theorem \ref{pluri01} is the counterpart in the Sasaki setting of the main results of \cite{D2}.
An important notion in the study of csck is the convexity of the ${\mathcal K}$-energy along $C^{1, \bar 1}$ geodesics \cite{BB} (see also \cite{CLP}), which was generalized to the Sasaki setting in \cite{JZ, VC}. Given the results above, one can then extend the ${\mathcal K}$-energy to the ${\mathcal E}_1$-class and keep its convexity along finite energy geodesics, as in \cite{BDL}. Moreover, this allows one to define precisely the properness of the ${\mathcal K}$-energy in terms of the distance $d_1$. One can then prove Theorem \ref{cscs} using a priori estimates for the scalar curvature type equation together with the properness assumption, where the effective estimates of $d_1$ in Theorem \ref{pluri01} play an important role; for details see \cite{he182}.
Along the way to proving Theorem \ref{pluri01}, it is necessary to extend results of \cite{GZ, D4} to the Sasaki setting. Certainly the essential ideas already lie in the K\"ahler setting, and T. Darvas' lecture notes \cite{D4} are an excellent reference. On the other hand, we should emphasize that in the Sasaki setting there are many new difficulties when the Reeb foliation is irregular. We have to utilize the K\"ahler cone structure and the transverse K\"ahler structure in an effective way. For example, one can use Type-I deformations to approximate an irregular structure by quasi-regular structures; such an approximation is very useful at times. We also construct explicit holomorphic charts on the K\"ahler cone out of its transverse K\"ahler structure; see Lemma \ref{chart}. This very explicit relation between the holomorphic charts and the foliation charts of the transverse K\"ahler structure seems to appear in the literature for the first time, to the authors' knowledge. This explicit construction of holomorphic charts builds a very direct relation between plurisubharmonic functions on the cone and (transverse) plurisubharmonic functions on the manifold, and it plays an important role in our arguments. \\
We organize the paper as follows. In Section 2 we introduce basic notations and concepts of Sasaki geometry. In Section 3 we study the geometric structure of the space of transverse K\"ahler potentials using the geodesic equation and pluripotential theory. In Section 4 we prove the main theorem. We include a brief discussion of Sasaki-extremal metrics in Section 5. The Appendix (Section 6) covers various topics in pluripotential theory, including the complex Monge-Ampere operator and various energy functionals on ${\mathcal E}_1$; there we prove various results which are stated in \cite{he182}[Section 2.2].\\
{\bf Acknowledgement:} The first named author wants to thank Prof. Xiuxiong Chen for his encouragement. The first named author is also grateful to T. Darvas for his enlightening influence in pluripotential theory, which made it possible for us to extend the relevant results of pluripotential theory to the Sasaki setting. The first named author is supported in part by an NSF grant, award no. 1611797. The second named author wants to thank Prof. Xiangyu Zhou and Prof. Yueping Jiang for their help and encouragement. He is partially supported by NSFC 11701164.
\numberwithin{equation}{section}
\numberwithin{thm}{section}
\section{Preliminary on Sasaki geometry}
A good reference on Sasaki geometry is the monograph \cite{BG} by Boyer-Galicki.
Let $M$ be a compact differentiable manifold of dimension $2n+1$ $(n\geq 1)$. A Sasaki structure on $M$ is defined to be a K\"ahler cone structure on $X=M\times \mathbb{R}_{+}$, i.e. a K\"ahler metric $(g_X, J)$ on $X$ of the form $$g_X=dr^2+r^2g,$$ where $r>0$ is a coordinate on $\mathbb{R}_{+}$ and $g$ is a Riemannian metric on $M$. We call $(X, g_X, J)$ the \emph{K\"ahler cone} of $M$. We also identify $M$ with the link $\{r=1\}$ in $X$ if there is no ambiguity. Because of the cone structure, the K\"ahler form on $X$ can be expressed as
$$\omega_X=\frac{1}{2}\sqrt{-1}\partial\overline{\partial} r^2=\frac{1}{2}dd^c r^2.$$
We denote by $r\partial_r$ the homothetic vector field on the cone, which is easily seen to be a real holomorphic vector field.
A tensor $\alpha$ on $X$ is said to be of homothetic degree $k$ if
$${\mathcal L}_{r\partial_r} \alpha=k\alpha.$$
In particular, $\omega_X$ and $g_X$ have homothetic degree two, while $J$ and $r\partial_r$ have homothetic degree zero.
We define the \emph{Reeb vector field} $$\xi=J(r\partial_r).$$
Then $\xi$ is a holomorphic Killing field on $X$ with homothetic degree zero. Let $\eta$ be the dual one-form to $\xi$:
\[\eta(\cdot)=r^{-2}g_X(\xi, \cdot)=2d^c \log r=\sqrt{-1} (\overline{\partial}-\partial)\log r\ .\]
We also use $(\xi, \eta)$ to denote their restrictions to $(M, g)$. Then we have
\begin{itemize}
\item $\eta$ is a contact form on $M$, and $\xi$ is a Killing vector field on $M$ which we also call the Reeb vector field;
\item $\eta(\xi)=1, \iota_{\xi} d\eta(\cdot)=d\eta (\xi, \cdot)=0$;
\item the integral curves of $\xi$ are geodesics.
\end{itemize}
The Reeb vector field $\xi$ defines a foliation ${\mathcal F}_\xi$ of $M$ by geodesics. There is a classification of Sasaki structures according to the global property of the leaves. If all the leaves are compact, then $\xi$ generates a circle action on $M$, and the Sasaki structure is called {\it quasi-regular}. In general this action is only locally free, and we get a polarized orbifold structure on the leaf space. If the circle action is globally free, then the Sasaki structure is called {\it regular}, and the leaf space is a polarized K\"ahler manifold. If $\xi$ has a non-compact leaf the Sasaki structure is called {\it irregular}.
One can also understand Sasaki structure through contact metric structure.
There is an orthogonal decomposition of the tangent bundle \[TM=L\xi\oplus \mathcal D,\] where $L\xi$ is the trivial line bundle generated by $\xi$, and $\mathcal D=\text{Ker} (\eta)$.
The metric $g$ and the contact form $\eta$ determine a $(1,1)$ tensor field $\Phi$ on $M$ by
\[
g(Y, Z)=\frac{1}{2} d\eta(Y, \Phi Z), Y, Z\in \Gamma(\mathcal D).
\]
$\Phi$ restricts to an almost complex structure on $\mathcal D$: \[\Phi^2=-\mathbb{I}+\eta\otimes \xi. \]
Since both $g$ and $\eta$ are invariant under $\xi$, there is a well-defined K\"ahler structure $(g^T, \omega^T, J^T)$ on the local leaf space of the Reeb foliation. We call this a \emph{transverse K\"ahler structure}. In the quasi-regular case, this is the same as the K\"ahler structure on the quotient. Clearly
$\omega^T=2^{-1}d\eta. $
The superscript $T$ is used to denote both the transverse geometric quantity and the corresponding quantity on the bundle $\mathcal D$. For example we have on $M$
$$g=\eta\otimes \eta+g^T.$$
From the above discussion it is not hard to see that there is an intrinsic formulation of a Sasaki structure as a compatible integrable pair $(\eta, \Phi)$, where $\eta$ is a contact one form and $\Phi$ is an almost CR structure on $\mathcal D=\text{Ker}\, \eta$. Here ``compatible" means first that
$d\eta(\Phi U, \Phi V)=d\eta(U, V)$ for any $U, V\in \mathcal D$, and that $d\eta(U, \Phi U)>0$ for any nonzero $U\in \mathcal D$. Further we require $\mathcal L_{\xi}\Phi=0$, where $\xi$ is the unique vector field with $\eta(\xi)=1$ and $d\eta(\xi, \cdot)=0$.
$\Phi$ induces a splitting $$\mathcal D\otimes \mathbb{C}=\mathcal D^{1,0}\oplus \mathcal D^{0,1}, $$
with $\overline{\mathcal D^{1,0}}=\mathcal D^{0,1}$.
``Integrable" means that $[\mathcal D^{0,1}, \mathcal D^{0,1}]\subset \mathcal D^{0,1}$. This is equivalent to that the induced almost complex structure on the local leaf space of the foliation by $\xi$ is integrable. For more discussions on this, see \cite{BG} Chapter 6.
\begin{defn} A $p$-form $\theta$ on $M$ is called basic if
\[
\iota_\xi \theta=0, L_\xi \theta=0.
\]
Let $\Lambda^p_B$ be the bundle of basic $p$-forms and $\Omega^p_B=\Gamma(M, \Lambda^p_B)$ the set of sections of $\Lambda^p_B$.
\end{defn}
The exterior differential preserves basic forms. We set $d_B=d|_{\Omega^p_B}$.
Thus the subalgebra $\Omega_{B}({\mathcal F}_\xi)$ forms a subcomplex of the de Rham complex, and its cohomology ring $H^{*}_{B}({\mathcal F}_\xi)$ is called the {\it basic cohomology ring}. When $(M, \xi, \eta, g)$ is a Sasaki structure, there is a natural splitting of $\Lambda^p_B\otimes \mathbb{C}$ such that
\[
\Lambda^p_B\otimes \mathbb{C}=\oplus \Lambda^{i, j}_B,
\]
where $\Lambda^{i, j}_B$ is the bundle of type $(i, j)$ basic forms. We thus have the well defined operators
\[
\begin{split}
\partial_B: \Omega^{i, j}_B\rightarrow \Omega^{i+1, j}_B,\\
\bar\partial_B: \Omega^{i, j}_B\rightarrow \Omega^{i, j+1}_B.
\end{split}
\]
Then we have $d_B=\partial_B+\bar \partial_B$.
Set $d^c_B=\frac{1}{2}\sqrt{-1}\left(\bar \partial_B-\partial_B\right).$ It is clear that
\[
d_Bd_B^c=\sqrt{-1}\partial_B\bar\partial_B, d_B^2=(d_B^c)^2=0.
\]
We shall recall the transverse complex (K\"ahler) structure in local coordinates. Let $\{U_\alpha\}$ be an open covering of $M$ and $\pi_\alpha: U_\alpha\rightarrow V_\alpha\subset \mathbb{C}^n$ submersions
such that
\[
\pi_\alpha\circ \pi^{-1}_\beta: \pi_\beta(U_\alpha\cap U_\beta)\rightarrow \pi_\alpha (U_\alpha\cap U_\beta)
\]
is biholomorphic when $U_\alpha\cap U_\beta$ is not empty. One can choose local coordinate charts $(z_1, \cdots, z_n)$ on $V_\alpha$ and local coordinate charts $(x, z_1, \cdots, z_n)$ on $U_\alpha\subset M$ such that $\xi=\partial_x$, where we use the notations
\[
\partial_x=\frac{\partial}{\partial x}, \partial_i=\frac{\partial}{\partial z_i}, \bar \partial_{ j}=\partial_{\bar j}=\frac{\partial}{\partial \bar z_{ j}}=\frac{\partial}{\partial z_{\bar j}}.
\]
The map $\pi_\alpha: (x, z_1, \cdots, z_n)\rightarrow (z_1, \cdots, z_n)$ is then the natural projection. There is an isomorphism, for any $p\in U_\alpha$,
\[
d\pi_\alpha: \mathcal D_p\rightarrow T_{\pi_\alpha(p)}V_\alpha.
\]
Hence the restriction of $g$ to $\mathcal D$ gives a Hermitian metric $g^T_\alpha$ on $V_\alpha$, since $\xi$ generates isometries of $g$.
One can verify that there is a well-defined K\"ahler metric $g_\alpha^T$ on each $V_\alpha$ and that
\[
\pi_\alpha\circ \pi^{-1}_\beta: \pi_\beta(U_\alpha\cap U_\beta)\rightarrow \pi_\alpha (U_\alpha\cap U_\beta)
\]
gives an isometry of the K\"ahler manifolds $(V_\alpha, g^T_\alpha)$. The collection of K\"ahler metrics $\{g^T_\alpha\}$ on $\{V_\alpha\}$ can be used as an alternative definition of the transverse K\"ahler metric. The (local) transverse holomorphic (K\"ahler) structure is essential for us and we shall use these charts extensively. We summarize as follows.
\begin{defn}[Local foliation charts]
We can choose the open covering $\{U_\alpha\}$ of $M$ such that each $U_\alpha$ has a local product structure, determined by the foliation structure and the transverse complex structure. That is, there are charts
\begin{equation*}
\Psi_\alpha: U_\alpha\rightarrow W_\alpha\subset \mathbb{R}\times \mathbb{C}^n,
\end{equation*}
where $W_\alpha= (-\delta, \delta) \times V_\alpha$. For a point $p\in W_\alpha$ we write $p=(x, z_1, \cdots, z_n)$ with $\xi=\partial_x$, and $V_\alpha= B_r(0)\subset \mathbb{C}^n$ for some $r>0$. We assume that $\delta, r$ are sufficiently small, depending only on $(M, \xi, \eta, g)$, and that $\omega^T_\alpha$ is uniformly equivalent to the Euclidean metric on each $V_\alpha=B_r\subset \mathbb{C}^n$,
\[
\frac{1}{2}\delta_{i\bar j}\leq \omega^T_\alpha\leq 2\delta_{i\bar j}.
\]
\end{defn}
In Sasaki geometry it is often most convenient to work with these charts when we need to consider the Sasaki structure locally.
By choosing $\delta, r$ small enough, we may assume that each $U_\alpha$ is contained in a geodesic normal neighborhood of its ``center" $\Psi_\alpha^{-1}(0, 0, \cdots, 0)$. We call these charts \emph{foliation charts}.
The existence of foliation charts is well-known in the subject; see \cite{GKN}. In particular, any Sasaki metric $g$ can be locally expressed in terms of a real function of $2n$ variables. Given a foliation chart $W_\alpha=(-\delta, \delta)\times V_\alpha$ with coordinates $(x, z_1, \cdots, z_n)$, there exists locally a strictly plurisubharmonic function $h: V_\alpha\rightarrow \mathbb{R}$ such that the Sasaki structure reads
\begin{equation}\label{foliation02}
\begin{split}
&\xi=\partial_x; \; \eta=dx-\sqrt{-1} \sum_i(h_i dz^i-h_{\bar i} dz^{\bar i})\\
&\omega^T=\sqrt{-1} h_{i\bar j} dz^i\wedge dz^{\bar j};\; g=\eta\otimes \eta+2h_{i\bar j} dz^i\otimes dz^{\bar j}
\end{split}
\end{equation}
If we consider a Sasaki structure induced by a transverse K\"ahler potential $\phi$, then locally we have $h\rightarrow h+\phi$. In particular, we have
\[\eta_\phi=\eta+\sqrt{-1}(\bar\partial-\partial) \phi, \omega_\phi=\omega^T+\sqrt{-1}\partial\bar\partial\phi.\]
We shall also use holomorphic charts on its K\"ahler cone $X$. There exist indeed holomorphic charts on the K\"ahler cone $X$ which are closely related to foliation charts on $M$. This seems to be much less well-known and we shall describe them now.
\begin{lemma}[Holomorphic coordinates on the K\"ahler cone]\label{chart}Suppose a Sasaki structure is locally generated by a plurisubharmonic function $h: V_\alpha\rightarrow \mathbb{R}$ in foliation charts on $M$. Then the following gives local holomorphic coordinates on its K\"ahler cone $X$: for $w=(w_0, \cdots, w_n)\in \tilde U_\alpha\subset \mathbb{C}\times V_\alpha$,
\begin{equation}\label{holo01}
w_0=\log r-h(z, \bar z)+\sqrt{-1}x, w_i=z_i, i=1, \cdots, n, z=(z_1, \cdots, z_n)
\end{equation}
The holomorphic structure $J$ is given by the holomorphic coordinates $w=(w_0, \cdots, w_n)$,
\begin{equation}\label{holo02}
J\frac{\partial}{\partial w_i}=\sqrt{-1}\frac{\partial }{\partial w_i}, i=0, \cdots, n.
\end{equation}
\end{lemma}
\begin{proof}Given \eqref{foliation02}, it is straightforward to check that \eqref{holo01} gives a holomorphic chart satisfying \eqref{holo02}.
\end{proof}
\begin{rmk}These holomorphic charts will be very useful for us later, in particular when we consider plurisubharmonic functions on $X$ and transverse plurisubharmonic functions on $M$.
The explicit holomorphic charts given above seem to appear in the literature for the first time, to our knowledge, while the foliation charts are well-known.
\end{rmk}
When the Reeb vector field $\xi$ is irregular, the local foliation charts satisfy the cocycle condition, but they do not give a manifold (or orbifold) structure on the quotient $M/{\mathcal F}_\xi$.
We shall recall the \emph{Type-I deformation} defined in \cite{BGM}. Let $(M, \xi_0, \eta_0, g_0)$ be a compact Sasaki manifold and denote its automorphism group by $\text{Aut}(M, \xi_0, \eta_0, g_0)$. We fix a torus \[T\subset \text{Aut}(M, \xi_0, \eta_0, g_0)\; \text{such that}\; \xi_0\in \mathfrak{t}=\text{Lie algebra}(T).\]
\begin{defn}[Type-I deformation]\label{type-01}
Let $(M, \xi_0, \eta_0, g_0)$ be a $T$-invariant Sasaki structure. For any $\xi\in \mathfrak{t}$ such that $\eta_0(\xi)>0$, we define a
new Sasaki structure on $M$ explicitly as
\begin{equation*}\label{e-2-7}
\eta=\frac{\eta_0}{\eta_0(\xi)}, \Phi=\Phi_0-\Phi_0\xi\otimes \eta, g=\eta\otimes \eta+\frac{1}{2}d\eta(\mathbb{I}\otimes \Phi).
\end{equation*}
\end{defn}
Note that under a Type-I deformation the essential change is the Reeb vector field, $\xi_0 \leftrightarrow \xi$, and the construction can be reversed.
\section{The space of transverse K\"ahler potentials}
In this section we consider the space of transverse K\"ahler potentials on a compact Sasaki manifold through its transverse K\"ahler structure.
It turns out to be necessary to consider these objects not only from the point of view of PDE, but also from that of pluripotential theory.
Geometric pluripotential theory on K\"ahler manifolds turns out to be one crucial piece in the proof of the properness conjecture \cite{BDL2, CC3}. We refer to \cite{GZ, D4} and references therein for details of pluripotential theory. We need to extend these results to Sasaki manifolds; this forms a crucial piece of the existence theory for cscs on Sasaki manifolds as well, see \cite{he182} for details. We start with the basic notion of quasiplurisubharmonic functions on Sasaki manifolds.
\subsection{The quasiplurisubharmonic functions on Sasaki manifolds}
Denote ${\mathcal H}=\{\phi \in C^\infty_B(M): \omega_\phi=\omega^T+\sqrt{-1}\partial_B\bar\partial _B \phi>0\}$,
the space of transverse K\"ahler potentials on a Sasaki manifold $(M, \xi, \eta, g)$. Each $\phi\in {\mathcal H}$ defines a new Sasaki structure $(M, \xi, \eta_\phi, g_{\eta_\phi})$ as follows,
\begin{equation*}
\eta_\phi=\eta+2d^c_B\phi, \omega_\phi=\omega^T+\sqrt{-1}\partial_B\bar\partial_B \phi, g_{\eta_\phi}=\eta_\phi\otimes \eta_\phi+\omega_\phi
\end{equation*}
The most relevant results in pluripotential theory for us lie in \cite{GZ}, \cite{BBEGZ}[Section 2] and \cite{D4}. Part of them has been carried out by van Coevering \cite{VC}[Section 2], including the Monge-Ampere operator and weak convergence, with the main focus on $L^\infty$ and $C^0$ potentials. We shall need most of the results on the energy classes ${\mathcal E}$ and ${\mathcal E}_p$ (defined below). \\
Given a Sasaki structure $(M, \xi, \eta, g)$, we recall the following definition,
\begin{defn}An $L^1$, upper semicontinuous (usc) function $u: M\rightarrow \mathbb{R}\cup \{-\infty\}$ is called transverse $\omega^T$-plurisubharmonic (TPSH for short) if $u$ is invariant under the Reeb flow and $u$ is $\omega^T$-plurisubharmonic on each local foliation chart $V_\alpha$, that is, $\omega^T_\alpha+\sqrt{-1}\partial\bar\partial u\geq 0$ as a positive closed $(1, 1)$-current on $V_\alpha$. \end{defn}
It is apparent that the definition above does not depend on the choice of foliation charts. Indeed, $u$ is invariant along the flow of $\xi$, and we extend $u$ trivially in the cone direction to a function on the cone. Using the holomorphic structure on the cone (see Lemma \ref{chart}), $u$ is TPSH if and only if $\omega^T+\sqrt{-1}\partial\bar \partial u\geq 0$ is a closed, positive $(1, 1)$-current on $X$.
We use the notation,
\begin{equation*}
\text{PSH}(M, \xi, \omega^T)=\{u\in L^1(M), u\; \text{is usc and invariant under the Reeb flow}; \omega_u\geq 0\}
\end{equation*}
One of the cornerstones of Bedford-Taylor theory \cite{BT1} is to associate a complex Monge-Ampere measure to a bounded psh function. Their construction generalizes to bounded K\"ahler potentials in a straightforward manner \cite{GZ} and has a direct adaptation to the Sasaki setting. We refer to \cite{VC}[Section 2] and Section \ref{CMA001} for the definition of the complex Monge-Ampere measures $\omega_u^n\wedge \eta$ for $u\in \text{PSH}(M,\xi,\omega^T)\cap L^{\infty}$ on Sasaki manifolds.
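For the reader's convenience we sketch the shape of the inductive definition: for $u\in \text{PSH}(M,\xi,\omega^T)\cap L^{\infty}$, a basic test function $f$ and $1\leq k\leq n$, one sets
\[
\int_M f\,\omega_u^{k}\wedge(\omega^T)^{n-k}\wedge\eta:=\int_M f\,\omega_u^{k-1}\wedge(\omega^T)^{n-k+1}\wedge\eta+\int_M u\, d_Bd_B^c f\wedge\omega_u^{k-1}\wedge(\omega^T)^{n-k}\wedge\eta.
\]
The integration by parts implicit in the last term is justified since every basic form of degree $2n+1$ vanishes, so Stokes' theorem produces no extra terms; by Reeb invariance, testing against basic functions determines the measure.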
\begin{prop}\label{weakcon0}
Suppose that the sequence $u_j \in \text{PSH}(M,\xi,\omega^T)\cap L^{\infty}$ decreases to $u \in \text{PSH}(M,\xi,\omega^T)\cap L^{\infty}$. Then for $k=1, \cdots, n$, we have the following weak convergences of complex Monge-Ampere measures, \begin{equation}\label{measure001}\omega_{u_j}^k\wedge (\omega^T)^{n-k} \wedge \eta \rightarrow \omega_u^k\wedge(\omega^T)^{n-k} \wedge \eta\end{equation}
\end{prop}
\begin{proof}By applying a partition of unity subordinate to a covering by foliation charts, it suffices to show that for $f\in C^\infty$ supported in a foliation chart $W_\alpha=(-\delta, \delta)\times V_\alpha$,
\begin{equation}\label{measure002}
\int_M f \omega_{u_j}^k\wedge (\omega^T)^{n-k} \wedge \eta\rightarrow \int_M f \omega_{u}^k\wedge (\omega^T)^{n-k} \wedge \eta
\end{equation}
We emphasize that $f$ is not a basic function in general. The weak convergence in the K\"ahler setting implies that for each $x\in (-\delta, \delta)$,
\[
\int_{V_\alpha} f(x, z, \bar z) \omega_{u_j}^k\wedge (\omega^T)^{n-k}\rightarrow \int_{V_\alpha} f(x, z, \bar z) \omega_{u}^k\wedge (\omega^T)^{n-k} .
\]
Note that for each $x$, $f(x, \cdot)$ is supported on $V_\alpha$. Integrating with respect to $dx$ leads to \eqref{measure002}, since on $W_\alpha$, $\omega_{u}^k\wedge (\omega^T)^{n-k} \wedge \eta=\omega_{u}^k\wedge (\omega^T)^{n-k} \wedge dx$ as a product measure.
\end{proof}
The following Bedford-Taylor identity in the Sasaki setting will be used repeatedly.
\begin{prop}\label{measure1}For $u, v\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$,
\begin{equation}\label{si1}
\chi_{\{u>v\}}\omega^n_{\max(u, v)}\wedge \eta=\chi_{\{u>v\}} \omega^n_u\wedge \eta.
\end{equation}
\end{prop}
\begin{proof}We only need to prove this in foliation charts. Recall that each foliation chart $W_\alpha=(-\delta, \delta)\times V_\alpha$, with $V_\alpha=B_r(0)\subset \mathbb{C}^n$, gives the local transverse complex structure. For a point $p\in W_\alpha$, we write $p=(x, z)$ with $\xi=\partial_x$. Given $u\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$, it defines a Monge-Amp\`ere measure $\omega_u^n$ on $V_\alpha$. Since both $u$ and $v$ are basic functions, $u, v$ are independent of $x$ in $W_\alpha$; hence $W_\alpha\cap \{u>v\}=(-\delta, \delta)\times \{z\in V_\alpha: u>v\}.$ Note that $\omega_u^n\wedge \eta$ is invariant along the Reeb direction, and it coincides with the product measure $dx\wedge \omega_u^n$ on $W_\alpha=(-\delta, \delta)\times V_\alpha$.
On each $W_\alpha$, we have
\[
\begin{split}
&\chi_{\{(x, z)\in W_\alpha: u>v\}}\omega^n_{\max(u, v)}\wedge \eta=\chi_{\{z\in V_\alpha: u>v\}}\omega^n_{\max(u, v)}\wedge dx\\
&\chi_{\{(x, z)\in W_\alpha: u>v\}} \omega^n_u\wedge \eta=\chi_{\{z\in V_\alpha: u>v\}}\omega^n_{u}\wedge dx.
\end{split}
\]
To prove \eqref{si1}, it suffices to show that
\[
\chi_{\{z\in V_\alpha: u>v\}}\omega^n_{\max(u, v)}=\chi_{\{z\in V_\alpha: u>v\}}\omega^n_{u}.
\]
This is just the Bedford-Taylor identity \cite{BT1}.
\end{proof}
It is possible to generalize the Bedford-Taylor construction to a much larger class on a compact K\"ahler manifold, see Guedj-Zeriahi \cite{GZ}. The reference \cite{D4}[Section 2] is sufficient for our purpose. These constructions in the K\"ahler setting have a direct extension to the Sasaki setting, where Proposition \ref{measure1} plays an important role. First we prove the following well-known result in pluripotential theory.
\begin{prop}\label{upperbound01}There exists $C=C(M, g)$ such that for any $u\in \text{PSH}(M, \xi, \omega^T)$,
\begin{equation*}
\sup_M u\leq \frac{1}{\text{Vol}(M)}\int_M u d\mu_g+C
\end{equation*}
\end{prop}
\begin{proof}When $u$ is $C^2$ this is obvious from the fact that $\Delta_g u+n\geq 0$. In general we can prove this using the sub-mean value property of plurisubharmonic functions, similarly to \cite{D4}[Lemma 3.45]. In this proof we can use either foliation charts on $M$ or the K\"ahler cone structure on $X=C(M)$. We use foliation charts in this argument.
We assume $\sup_M u=0$ and want to show that the integral of $u$ is uniformly bounded below.
We cover $M$ by nested foliation charts $U_k\subset W_k\subset M$ such that there exist diffeomorphisms $\varphi_k: B(0, 4)\times (-2\delta, 2\delta)\rightarrow W_k$ with $\varphi_k(B(0, 1)\times (-\delta, \delta))=U_k$, where $\delta$ is a fixed positive constant and $B(0, 1)\subset B(0, 4)\subset \mathbb{C}^n$ are Euclidean balls in $\mathbb{C}^n$. We write points as $(z, x)\in B(0, 4)\times (-2\delta, 2\delta)$, where $z\in B(0, 4)$ represents the transverse holomorphic coordinates and $x\in (-2\delta, 2\delta)$ represents the Reeb direction (i.e. $\xi=\partial_x$).
On each $W_k$, there exists a function $\psi_k=\psi_k(z)$ such that $\omega^T=\sqrt{-1}\partial_z\bar\partial_z \psi_k$.
Note that we only need to show that there exists a uniformly bounded constant $C>0$ such that
\[
\int_{U_k} u\, d\mu_g\geq -C, \quad k\in \{1, \cdots, N\}.
\]
Since $u$ is basic, we have
\[
\int_{B(0, 1)\times (-\delta, \delta)} u\circ \varphi_k d\mu_{x, z}=2\delta \int_{B(0, 1)} u\circ \varphi_k(z, x_0) d\mu_z, x_0\in (-\delta, \delta)
\]
where $d\mu_{x, z}$ and $d\mu_z$ are the Euclidean measures on $\mathbb{C}^n\times \mathbb{R}$ and $\mathbb{C}^n$ respectively. Hence we only need to show that
\begin{equation}\label{step0}
\int_{B(0, 1)} u\circ \varphi_k(z, x_0) d\mu_z\geq -C, k\in \{1, \cdots, N\}
\end{equation}
Note that by our construction, $(\psi_k+u)\circ \varphi_k$ is independent of $x$ and is plurisubharmonic on $B(0, 4)$ for each $k$. As $u$ is usc, its supremum is realized at some point $p_1\in M$ such that $u\leq u(p_1)=0$. Since $\{U_k\}_k$ covers $M$, we can assume $p_1\in U_1$ with the coordinate $\varphi_1(z_1, x_1)=p_1$ for some $(z_1, x_1)\in B(0, 1)\times (-\delta, \delta)$. Since $u$ is basic, it is independent of the $x$-coordinate, so we can also take $x_1=0$. Since $B(z_1, 2)\subset B(0, 4)$, we have the following sub-mean value property for $(\psi_1+u)\circ \varphi_1$,
\[
\psi_1\circ \varphi_1(z_1, 0)=(\psi_1+u)\circ \varphi_1(z_1, 0)\leq \frac{1}{\mu(B(z_1, 2))}\int_{B(z_1, 2)} (\psi_1+u)\circ \varphi_1(z, 0) d\mu_z
\]
Since $u\leq 0$ and $B(0, 1)\subset B(z_1, 2)$, there exists $C_1>0$, independent of $u$, such that
\begin{equation}\label{step01}
\int_{B(0, 1)} u\circ \varphi_1 d\mu_z\geq -C_1.
\end{equation}
Since $M$ is connected and $\{U_k\}_k$ covers $M$, we can assume that $U_1$ intersects $U_2$. We can choose $r_2>0$ such that $\varphi_2(B(z_2, r_2)\times (\delta_1, \delta_2))\subset U_1\cap U_2$ for some $B(z_2, r_2)\subset B(0, 4)$ and $-\delta<\delta_1<\delta_2<\delta$.
Since $u\leq 0$, it follows that there exists $\tilde C_1>0$, independent of $u$ ($\tilde C_1$ depends only on $C_1$, $r_2$ and $\psi_2$), such that
\[
\frac{1}{\mu(B(z_2, r_2))}\int_{B(z_2, r_2)} (u+\psi_2) \circ \varphi_2 d\mu_z\geq -\tilde C_1.
\]
Since $(u+\psi_2) \circ \varphi_2$ is plurisubharmonic in $B(0, 4)$, we can obtain that
\[
\frac{1}{\mu(B(z_2, 2))}\int_{B(z_2, 2)} (u+\psi_2) \circ \varphi_2 d\mu_z\geq \frac{1}{\mu(B(z_2, r_2))}\int_{B(z_2, r_2)} (u+\psi_2) \circ \varphi_2 d\mu_z\geq -\tilde C_1.
\]
Since $u\leq 0$ and $B(0, 1)\subset B(z_2, 2)$, we obtain for some $C_2>0$
\[
\int_{B(0, 1)} u\circ \varphi_2 d\mu_z\geq -C_2
\]
We continue this process, considering a member of the cover that intersects $U_1\cup U_2$, say $U_3$. After at most $N-2$ steps we obtain \eqref{step0}.
\end{proof}
As a direct consequence, we have the following (see \cite{Demailly}[Proposition I.5.9]),
\begin{prop}\label{compactness001}
The set ${\mathcal C}=\{u\in \text{PSH}(M, \xi, \omega^T): \sup_M u\leq C\}$ is bounded in $L^1$ and precompact in the $L^1(d\mu_g)$ topology.
\end{prop}
\begin{proof}By the above we know that if $\sup_M u$ is bounded above then $\int_M |u|\,d\mu_g$ is uniformly bounded. It follows from the Montel property of subharmonic and plurisubharmonic functions \cite{Demailly}[Proposition I.4.21, Proposition I.5.9] that ${\mathcal C}$ is precompact with respect to the $L^1(d\mu_g)$ topology. Note that in the Sasaki setting we apply the compactness of plurisubharmonic functions locally, on nested foliation charts $U_k\subset W_k$ as above, so that ${\mathcal C}$ is precompact in the $L^1$ topology in each $U_k$. After passing to a subsequence if necessary, we then get compactness of ${\mathcal C}$ with respect to the $L^1(d\mu_g)$ topology.
\end{proof}
Let $v\in \text{PSH}(M, \xi, \omega^T)$.
For $h\in \mathbb{R}$, we denote $v_h=\max\{v, -h\}$ to be the \emph{canonical cutoffs} of $v$. By Proposition \ref{upperbound01}, $v_h\in L^\infty$. It is evident that $v_h$ is invariant under the Reeb flow and hence $v_h\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$. If $h_1<h_2$, then Proposition \ref{measure1} implies that
\[
\chi_{\{v>-h_1\}}\omega^n_{v_{h_1}}\wedge \eta=\chi_{\{v>-h_1\}} \omega^n_{v_{h_2}}\wedge \eta\leq \chi_{\{v>-h_2\}} \omega^n_{v_{h_2}}\wedge \eta
\]
Hence $\chi_{\{v>-h\}}\omega^n_{v_{h}}\wedge \eta$ is an increasing family of Borel measures on $M$ with respect to $h$. This leads to the following definition,
\begin{defn}
We define
\begin{equation}\label{measure2}
\omega^n_v\wedge \eta:=\lim_{h\rightarrow \infty}\chi_{\{v>-h\}}\omega^n_{v_{h}}\wedge \eta
\end{equation}
\end{defn}
We emphasize that by the definition above, we have for any Borel set $B\subset M$,
\begin{equation}\label{measure21}
\int_B \omega^n_v\wedge \eta=\lim_{h\rightarrow \infty}\int_B \chi_{\{v>-h\}}\omega^n_{v_{h}}\wedge \eta
\end{equation}
Hence the convergence in \eqref{measure2} is a stronger notion than the weak convergence of measures.
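As an illustration of the distinction (a standard measure-theoretic example, not specific to the Sasaki setting): on $\mathbb{R}$ the Dirac measures $\delta_{1/k}$ converge weakly to $\delta_0$, yet
\[
\delta_{1/k}(\{0\})=0\not\rightarrow 1=\delta_0(\{0\}),
\]
so weak convergence alone does not control the masses of Borel sets, while the setwise convergence \eqref{measure21} does.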
To proceed, we need the following approximation of TPSH functions. Our proof uses the K\"ahler cone structure and builds on Blocki-Kolodziej \cite{BK}.
\begin{lemma}\label{BK}Given $u\in \text{PSH}(M, \xi, \omega^T)$, there exists a sequence $\{u_k\}_k\subset {\mathcal H}$ decreasing to $u$.
\end{lemma}
\begin{proof}
First we assume that $u$ has zero Lelong number.
Recall $X$ is the K\"ahler cone and we identify $M$ with the link $\{r=1\}\subset X$. For $u\in \text{PSH}(M, \xi, \omega^T)$, we extend $u$ to be a function on $X$ such that $u(r, p)=u(p)$, for any $r>0$.
We recall that $\omega^T=\frac{1}{2}d\eta=dd^c(\log r)=\sqrt{-1}\partial\bar\partial (\log r)$. Hence for $u\in \text{PSH}(M, \xi, \omega^T)$, we have the following,
\[
\sqrt{-1}\partial\bar \partial (\log r+u)\geq 0
\]
In other words, $v=u+\log r$ is a plurisubharmonic function on $X$. This is transparent in foliation charts and the corresponding holomorphic charts as in Lemma \ref{chart}.
Let $h_\alpha$ be a local potential for $\omega^T$ in a foliation chart $V_\alpha$, and we write $h=h(w_1, \bar w_1, \cdots, w_n, \bar w_n)$ in the holomorphic chart on the cone; then
$\log r=h_\alpha+\text{Re}(w_0)$. Denote by $\omega_X$ the K\"ahler form on $X$. Since $u$ has zero Lelong number, applying Blocki-Kolodziej \cite{BK}[Theorem 2], we obtain a sequence of functions $v_k$ decreasing to $u$ in $k$, such that on $X^{'}\subset X$,
\begin{equation}\label{approx001}
\sqrt{-1}\partial\bar \partial (v_k)+\omega^T+k^{-1}\omega_X\geq 0, X^{'}=\left\{2^{-1}\leq r\leq 2\right\}
\end{equation}
We can assume in addition that $v_k$ is invariant under the flow of $\xi$, by averaging with respect to the torus action generated by $\xi\in \text{Aut}(\xi, \eta, g)$.
We define a basic function $u_k$ on $M$ by setting $r=1$: $u_k=v_k|_{r=1}$.
We now choose holomorphic charts $\tilde U_\alpha$ as in Lemma \ref{chart} to cover $X^{'}$. We write the function in a holomorphic chart as \[v_k=v_k(\text{Re}(w_0), x, w_1, \bar w_1, \cdots, w_n, \bar w_n).\] We recall the relation between the holomorphic charts and the foliation charts,
\begin{equation}\label{chart02}
w_0=\log(r)+\sqrt{-1}x-h_\alpha(z, \bar z), w_i=z_i, i=1, \cdots, n.
\end{equation}
Note we assume that $v_k$ is invariant under the flow of $\xi$, hence $v_k$ is independent of $x=\text{Im}(w_0)$.
We write $v_k$ as follows, using \eqref{chart02},
\[
v_k(\text{Re}(w_0), w_1, \bar w_1, \cdots, w_n, \bar w_n)=v_k(\log r-h(z, \bar z), z, \bar z)
\]
Locally this gives
\begin{equation}\label{basic01}
u_k(z, \bar z)= v_k(-h_\alpha(z, \bar z), z, \bar z).
\end{equation}
The tangent space $T_pX$ is given by, in terms of the coordinates $(r, x, z_1, \cdots, z_n)$,
\[
T_pX\otimes \mathbb{C}=\text{span}\left\{\frac{\partial}{\partial r}, r^{-1}\frac{\partial }{\partial x}, X_i=\frac{\partial}{\partial z_i}+\sqrt{-1} h_i \frac{\partial }{\partial x}, \bar X_j=\frac{\partial}{\partial \bar z_j}-\sqrt{-1} h_{\bar j} \frac{\partial }{\partial x} \right\}
\]
Note that the contact bundle $\mathcal D_p=\text{span}\{X_i, X_{\bar i}, i=1, \cdots, n\}$.
For $p\in M\subset X$, we can assume that $h(z, \bar z)=\partial h=\bar\partial h=0, h_{i\bar j}=\delta_{i\bar j}$ at $p$, and hence \[T_pX=T_pM\oplus \left\{\frac{\partial}{\partial r}\right\}=\text{span}\left\{\frac{\partial}{\partial z_i}, \frac{\partial}{\partial \bar z_j}, r^{-1}\frac{\partial}{\partial x}, \frac{\partial }{\partial r}\right\}\]
By \eqref{approx001}, we compute (at $p$),
\begin{equation}
\left(\sqrt{-1}\partial\bar\partial v_k+\omega^T+k^{-1}\omega_X\right)\left(\frac{\partial}{\partial z_i}, -\sqrt{-1}\frac{\partial}{\partial \bar z_i}\right)=-\partial_t v_k+1+k^{-1}+(v_k)_{i\bar i}\geq 0,
\end{equation}
where $t$ stands for the first argument of $v_k$.
This is equivalent to the following: on $M$ we have
\[
\sqrt{-1}\partial_B\bar \partial_B u_k+(1+k^{-1})\omega^T\geq 0.
\]
It is clear that $u_k$ converges to $u$, decreasing in $k$. Without loss of generality, we can assume that $u\leq -1$ and $u_k\leq 0$. It follows that $k(k+2)^{-1}u_k\in {\mathcal H}$ and $k(k+2)^{-1}u_k$ converges to $u$, decreasing in $k$. This completes the proof when $u$ has zero Lelong number.
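For the reader's convenience, the rescaling can be checked directly: with $c_k=k(k+2)^{-1}$, the inequality above gives
\[
\omega^T+\sqrt{-1}\partial_B\bar\partial_B (c_k u_k)\geq \omega^T-c_k(1+k^{-1})\omega^T=\frac{1}{k+2}\,\omega^T>0,
\]
so $c_k u_k$ is strictly $\omega^T$-plurisubharmonic; moreover $c_{k+1}u_{k+1}\leq c_{k+1}u_k\leq c_k u_k$, since $u_{k+1}\leq u_k\leq 0$ and $c_k$ is increasing in $k$.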
Now suppose $u\in \text{PSH}(M, \xi, \omega^T)$. We consider the canonical cutoffs $u_j=\max\{u, -j\}\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$. By the above we know that for each $j$, there exists a sequence of smooth functions $\{v_j^k\}_k\subset {\mathcal H}$ which decreases to $u_j$. By adding a small constant $k^{-1}$ to each $v^k_j$, we can assume that $\{v^k_j\}_k$ strictly decreases (for each $j$). Then for each $k$, we can find $k_{j+1}$ such that
\begin{equation}\label{selection01}v_{j+1}^{k_{j+1}}<v^k_j.\end{equation}
Indeed, we consider the open sets $U^l:=\{x\in M: v_{j+1}^l<v^k_j\}$. Clearly $\{U^l\}_l$ is an increasing sequence of open sets such that $\cup_l U^l=M$, since \[\lim_{l\rightarrow \infty} v_{j+1}^l=u_{j+1}\leq u_j<v^k_j.\] Since $M$ is compact, there exists $k_{j+1}$ such that $U^{k_{j+1}}=M$. By \eqref{selection01}, we can inductively find a sequence $\{v_{j}^{k_j}\}_j$ such that $v_j^{k_j}\searrow u$. This completes the proof.
\end{proof}
\begin{rmk}The K\"ahler cone structure, in particular the relation between holomorphic charts and foliation charts as in Lemma \ref{chart} play a very important role in Sasaki setting. If the Reeb vector field is irregular, the approximation from transverse K\"ahler structure can produce local approximation. But it seems to be hard to patch such a local construction together when the Reeb vector field is irregular. Instead we do approximation on the K\"ahler cone.
We shall mention that in \eqref{selection01}, the assumption that each sequence $\{v_j^k\}_k$ strictly decreases is necessary. For example, we can take $u=1$ over $[0, 1]$, $v=0$ over $[0, 1)$ and $v(1)=1$. We can choose $u_k=1$ for each $k$, and $v_k(x)=x^k+k^{-1}$. Then $v\leq u$ and $\{u_k\}_k$ decreases to $u$ and $v_k$ (strictly) decreases to $v$. But for $\{u_k\}_k$ and $\{v_k\}_k$, \eqref{selection01} does not hold: given $u_k$, there does not exist $l$ such that $v_{l}\leq u_k$ since $v_{l}(1)>1$ for all $l$.
\end{rmk}
As a direct consequence, we have the following (just as in the K\"ahler setting),
\begin{prop}\label{volume}
For $u\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$,
\begin{equation}\label{approx0}
\text{Vol}(M):=\int_M \omega^n_u\wedge \eta=\int_M (\omega^T)^n\wedge \eta
\end{equation}
\end{prop}
\begin{proof}
By Lemma \ref{BK}, we can choose a smooth sequence $u_k$ decreasing to $u$. It then follows from Bedford-Taylor theory (see Proposition \ref{weakcon0}) that $\omega_{u_k}^n\wedge \eta$ converges to $\omega_u^n\wedge \eta$ weakly, and we obtain \eqref{approx0}.
\end{proof}
It is then clear that, given \eqref{measure2}, in general we only have $\int_M \omega^n_v\wedge \eta\leq \text{Vol}(M)$ for $v\in \text{PSH}(M, \xi, \omega^T)$.
\begin{defn}We define the full-mass elements in $\text{PSH}(M, \xi, \omega^T)$ as
\begin{equation}
{\mathcal E}(M, \xi, \omega^T):=\{v: v\in \text{PSH}(M, \xi, \omega^T)\; \text{such that}\; \int_M \omega^n_v\wedge \eta=\text{Vol}(M)\}
\end{equation}
\end{defn}
As in K\"ahler case, many of the properties that hold for bounded TPSH functions hold for elements of ${\mathcal E}(M, \xi, \omega^T)$ as well. We include the \emph{comparison principle, monotonicity property} and \emph{generalized Bedford-Taylor identity} as follows. These properties are proved in \cite{GZ} for K\"ahler setting. Given \eqref{si1} and \eqref{approx0}, our proof follows almost identical as in K\"ahler setting (see \cite{GZ}[Theorem 1.5, Proposition 1.6, Corollary 1.7]). Nevertheless we include the details.
\begin{prop}[Comparison principle]Suppose $u, v\in {\mathcal E}(M, \xi, \omega^T)$. Then
\begin{equation}\label{cp01}
\int_{\{v<u\}}\omega_u^n\wedge \eta\leq \int_{\{v<u\}} \omega_v^n \wedge \eta.
\end{equation}
\end{prop}
\begin{proof}First we show \eqref{cp01} for $u, v$ bounded. Using \eqref{si1} we write
\begin{equation*}
\begin{split}
\int_{\{v<u\}}\omega_u^n\wedge\eta=&\int_{\{v<u\}}\omega^n_{\max\{u, v\}}\wedge \eta=\int_M \omega^n_{\max\{u, v\}}\wedge \eta-\int_{\{u\leq v\}}\omega^n_{\max\{u, v\}}\wedge \eta\\
\leq&\int_M \omega^n_{\max\{u, v\}}\wedge \eta-\int_{\{u<v\}}\omega^n_{\max\{u, v\}}\wedge \eta\\
\leq&\text{Vol}(M)-\int_{\{u<v\}}\omega^n_{\max\{u, v\}}\wedge \eta.
\end{split}
\end{equation*}
Using Proposition \ref{volume} and Proposition \ref{measure1} we write the above as
\begin{equation*}
\int_{\{v<u\}}\omega_u^n\wedge\eta\leq \int_{M}\omega^n_v\wedge \eta-\int_{\{u<v\}}\omega^n_v\wedge \eta\leq\int_{\{v\leq u\}}\omega^n_v\wedge \eta
\end{equation*}
Replacing $v$ by $v+\epsilon$, we have
\begin{equation*}
\int_{\{v+\epsilon<u\}}\omega_u^n\wedge\eta\leq \int_{\{v+\epsilon \leq u\}}\omega^n_v\wedge \eta
\end{equation*}
We get \eqref{cp01} for bounded potentials by letting $\epsilon\rightarrow 0$,
noting that \[\{v<u\}=\cup_{\epsilon>0}\{v+\epsilon<u\}=\cup_{\epsilon>0}\{v+\epsilon\leq u\}.\]
In general, let $u_k=\max\{u, -k\}, v_{l}=\max\{v, -l\}, k, l\in \mathbb{N}$ be the canonical cutoffs of $u, v$ respectively. We apply \eqref{cp01} to these to get
\begin{equation*}
\int_{\{v_l<u_k\}} \omega^n_{u_k}\wedge \eta\leq \int_{\{v_l<u_k\}} \omega^n_{v_l}\wedge \eta.
\end{equation*}
Together with the inclusions
$
\{v_l<u\}\subset \{v_l< u_k\}\subset\{v<u_k\}
$
we have
\begin{equation}\label{cp02}
\int_{\{v_l<u\}} \omega^n_{u_k}\wedge \eta\leq \int_{\{v<u_k\}} \omega^n_{v_l}\wedge \eta.
\end{equation}
Letting $l\rightarrow \infty$ and using the definition \eqref{measure2} on $\omega^n_{v_l}\wedge \eta$, \eqref{cp02} gives
\[
\int_{\{v<u\}} \omega^n_{u_k}\wedge \eta\leq \int_{\{v<u_k\}} \omega^n_{v}\wedge \eta.
\]
Letting $k\rightarrow \infty$ and using the definition \eqref{measure2} on $\omega^n_{u_k}\wedge \eta$, we get
\[
\int_{\{v<u\}} \omega^n_{u}\wedge \eta\leq \int_{\{v\leq u\}} \omega^n_{v}\wedge \eta.
\]
Then replacing $v$ by $v+\epsilon$ in the above inequality, we can argue as in the bounded case; taking the limit $\epsilon\rightarrow 0$ yields \eqref{cp01}.
\end{proof}
\begin{prop}[Monotonicity property]\label{mp01}Suppose $u\in {\mathcal E}(M, \xi, \omega^T)$ and $v\in \text{PSH}(M, \xi, \omega^T)$. If $u\leq v$ then $v\in {\mathcal E}(M, \xi, \omega^T)$.
\end{prop}
\begin{proof}This is proved in \cite{GZ}[Proposition 1.6] in the K\"ahler setting and our argument is almost identical. First we show that $\psi=v/2\in {\mathcal E}(M, \xi, \omega^T)$. We can assume that $u\leq v<-2$, hence $\psi<-1$. This normalization gives the following inclusions for the canonical cutoffs $u_j, v_j, \psi_j$,
\[
\{\psi\leq -j\}=\{\psi_j\leq -j\}\subset \{u_{2j}<\psi_j-j+1\}\subset \{u_{2j}\leq -j\}
\]
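These inclusions can be verified directly. If $\psi\leq -j$ then $u\leq v=2\psi\leq -2j$, so
\[
u_{2j}=-2j<\psi_j-j+1=-2j+1;
\]
on the other hand, since $\psi<-1$ we have $\psi_j\leq -1$, hence on $\{u_{2j}<\psi_j-j+1\}$ we get $u_{2j}<-j+1\cdot 0\leq \psi_j-j+1\leq -j$, that is $u_{2j}\leq -j$.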
By the comparison principle \eqref{cp01} and the inclusions above, we have
\[
\int_{\{\psi_j\leq -j\}}\omega_{\psi_j}^n\wedge \eta\leq \int_{ \{u_{2j}<\psi_j-j+1\}}\omega_{\psi_j}^n\wedge \eta\leq \int_{ \{u_{2j}<\psi_j-j+1\}}\omega_{u_{2j}}^n\wedge \eta\leq \int_{\{u_{2j}\leq -j\}}\omega_{u_{2j}}^n\wedge \eta.
\]
Note that we have
\[
\int_{\{u_{2j}\leq -j\}}\omega_{u_{2j}}^n\wedge \eta=\text{Vol}(M)-\int_{\{u_{2j}>-j\}} \omega_{u_{2j}}^n\wedge \eta.
\]
Applying Proposition \ref{measure1} to $\max\{u_{2j}, -j\}=u_{j}$ on the set $\{u_{2j}>-j\}=\{u_j>-j\}$, we have
\[
\int_{\{u_{2j}>-j\}} \omega_{u_{2j}}^n\wedge \eta=\int_{\{u_{j}>-j\}} \omega_{u_{j}}^n\wedge \eta.
\]
It then follows that
\[
\int_{\{u_{2j}\leq -j\}}\omega_{u_{2j}}^n\wedge \eta=\int_{\{u_j\leq -j\}}\omega_{u_{j}}^n\wedge \eta=\int_{\{u\leq -j\}}\omega_{u_{j}}^n\wedge \eta.
\]
By definition of $u\in {\mathcal E}(M, \xi, \omega^T)$, it follows that, as $j\rightarrow \infty$,
\[
\int_{\{\psi_j\leq -j\}}\omega_{\psi_j}^n\wedge \eta\leq \int_{\{u\leq -j\}}\omega_{u_{j}}^n\wedge \eta\rightarrow 0.
\]
Hence $\psi=v/2\in {\mathcal E}(M, \xi, \omega^T)$. To show that $v\in {\mathcal E}(M, \xi, \omega^T)$, we observe that $\{v\leq -2j\}=\{\psi\leq -j\}$ and $\omega_{\psi_j}\geq \omega_{v_{2j}}/2$, hence
\[
\int_{\{v\leq -2j\}} \omega^n_{v_{2j}}\wedge \eta\leq 2^n \int_{\{v\leq -2j\}} \omega_{\psi_j}^n\wedge \eta\leq 2^n\int_{\{\psi\leq -j\}}\omega_{\psi_j}^n\wedge \eta.
\]
By letting $j\rightarrow \infty$, we can then conclude that $v\in {\mathcal E}(M, \xi, \omega^T)$.
\end{proof}
\begin{prop}[Generalized Bedford-Taylor identity]\label{GBTI}Let $u\in {\mathcal E}(M, \xi, \omega^T)$ and $v\in \text{PSH}(M, \xi, \omega^T)$. Then $\max\{u, v\}\in {\mathcal E}(M, \xi, \omega^T)$ and
\begin{equation}\label{si2}
\chi_{\{u>v\}}\omega^n_{\max(u, v)}\wedge \eta=\chi_{\{u>v\}} \omega^n_u\wedge \eta.
\end{equation}
\end{prop}
\begin{proof}Our argument is identical to the K\"ahler setting; see \cite{GZ}[Corollary 1.7] and \cite{D4}[Lemma 2.5]. Proposition \ref{mp01} implies that $w:=\max\{u, v\}\in {\mathcal E}(M, \xi, \omega^T)$. Now observe that $\max\{u_j, v_{j+1}\}=\max\{u, v, -j\}=w_j$. Since the cutoffs are bounded we have
\begin{equation}\label{measure31}
\chi_{\{u_j>v_{j+1}\}}\omega^n_{w_j}\wedge \eta=\chi_{\{u_j>v_{j+1}\}}\omega^n_{u_j}\wedge \eta
\end{equation}
By \eqref{measure21}, we know that $\chi_{\{u>v\}}\omega^n_{u_j}\wedge \eta\rightarrow \chi_{\{u>v\}}\omega^n_{u}\wedge \eta$ and $\chi_{\{u>v\}}\omega^n_{w_j}\wedge \eta\rightarrow \chi_{\{u>v\}}\omega^n_{w}\wedge \eta$ as $j\rightarrow \infty$ (we also use the fact that $u, w\in {\mathcal E}(M, \xi, \omega^T)$). Since
\[
\{u>v\}\subset \{u_j>v_{j+1}\}\;\text{and}\; \{u_j>v_{j+1}\}\backslash \{u>v\}\subset \{u\leq -j\},
\]
it follows that
\[
0\leq (\chi_{\{u_j>v_{j+1}\}}-\chi_{\{u>v\}}) \omega^n_{u_j}\wedge \eta\leq \chi_{\{u\leq -j\}} \omega^n_{u_j}\wedge \eta\rightarrow 0.
\]
Similarly since
\[
\{u_j>v_{j+1}\}\backslash \{u>v\}\subset \{w\leq -j\}
\]
we also obtain that
\[
0\leq (\chi_{\{u_j>v_{j+1}\}}-\chi_{\{u>v\}}) \omega^n_{w_j}\wedge \eta\leq \chi_{\{w\leq -j\}} \omega^n_{w_j}\wedge \eta\rightarrow 0.
\]
Taking the limit in \eqref{measure31} together with the limit facts above, we get the desired result.
\end{proof}
Next we introduce finite energy classes on Sasaki manifolds, following \cite{GZ}.
By considering Young weights $\chi\in {\mathcal W}^+_p$ (see \cite{D4}[Chapter 1] for a short introduction to Young weights), one can introduce various finite energy subclasses of ${\mathcal E}(M, \xi, \omega^T)$,
\[
{\mathcal E}_\chi(M, \xi, \omega^T):=\{u\in {\mathcal E}(M, \xi, \omega^T)\; \text{s. t.}\; E_\chi(u)<\infty\},
\]
where $E_\chi$ is the $\chi$-energy defined by
\[
E_\chi(u):=\int_M \chi(u)\omega^n_u\wedge \eta.
\]
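For instance, for the model weight $\chi(t)=|t|^p/p$ the $\chi$-energy takes the explicit form
\[
E_\chi(u)=\frac{1}{p}\int_M |u|^p\,\omega^n_u\wedge \eta,
\]
so finiteness of $E_\chi(u)$ amounts to the integrability of $|u|^p$ against the Monge-Amp\`ere measure of $u$.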
Of special importance are the weights $\chi^p(t)=|t|^p/p$ and the associated classes ${\mathcal E}_p(M, \xi, \omega^T)$. For these weights it is clear that ${\mathcal E}_p(M, \xi, \omega^T) \subset {\mathcal E}_1(M, \xi, \omega^T)$ for $p \geq 1$ (by H\"older's inequality, since the measures $\omega_u^n\wedge \eta$ have total mass $\text{Vol}(M)$). We will need the following straightforward fact,
\begin{prop}
For any $u\in {\mathcal E}_1(M, \xi, \omega^T)$, $u$ has Lelong number zero at every point.
\end{prop}
\begin{proof}This is straightforward. We can assume $\sup_M u=0$. For $u\in {\mathcal E}_1(M, \xi, \omega^T)$, we have
\[
\int_M (-u) \omega^n_u\wedge \eta<\infty.
\]
We consider locally $(0, 0)\in W_\alpha=(-\delta, \delta)\times V_\alpha$ in a foliation chart. Then we have
\[
2\delta \int_{V_{\alpha}} (-u) \omega_u^n<\int_M (-u) \omega^n_u\wedge \eta<\infty.
\]
This implies that $u$ has Lelong number zero at $(0, 0)$.
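This is consistent with the standard local model: the function $\log|z|$ on a ball in $\mathbb{C}^n$ has Lelong number $1$ at the origin, while
\[
(dd^c\log|z|)^n=c_n\,\delta_0
\]
for a positive constant $c_n$, so that $\int (-\log|z|)\,(dd^c\log|z|)^n=+\infty$; a positive Lelong number is thus incompatible with the finiteness above.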
\end{proof}
The following result implies that to test membership in ${\mathcal E}_\chi(M, \xi, \omega^T)$ it is enough to test the finiteness condition $E_\chi(u)<\infty$ on canonical cutoffs.
\begin{prop}\label{cte}Suppose $u\in {\mathcal E}(M, \xi, \omega^T)$ with canonical cutoffs $\{u_k\}_{k\in \mathbb{N}}$. If $h: \mathbb{R}_+\rightarrow \mathbb{R}_+$ is continuous and increasing, then
\[
\int_M h(|u|)\omega_u^n\wedge \eta<\infty\Longleftrightarrow\limsup_{k\rightarrow \infty}\int_M h(|u_k|)\omega_{u_k}^n\wedge \eta<\infty.
\]
Moreover, if the above condition holds, then
\[
\int_M h(|u|)\omega_u^n\wedge \eta=\lim_{k\rightarrow \infty}\int_M h(|u_k|)\omega_{u_k}^n\wedge \eta
\]
\end{prop}
\begin{proof} Without loss of generality we can assume that $u \leq 0$.
If $\limsup_{k\rightarrow \infty}\int_M h(|u_k|)\omega_{u_k}^n\wedge \eta<\infty$, we obtain that
the sequence of Radon measures $h(|u_k|)\omega^n_{u_k} \wedge \eta$ is weakly compact. Hence there exists a subsequence $h(|u_{k_j}|)\omega_{u_{k_j}}^n \wedge \eta $ converging weakly to a Radon measure $\mu$. Since $h(|u_{k_j}|)$ is an increasing sequence of lower semicontinuous functions converging to $h(|u|)$ and $\omega_{u_{k_j}}^n \wedge \eta \xrightarrow{w} \omega_u^n \wedge \eta$, this yields that $h(|u|)\omega_u^n \wedge \eta \leq \mu$ as measures. In particular $\int_M h(|u|)\,\omega_u^n \wedge \eta \leq \mu(M)<\infty$.
Now assume $\int_M h(|u|)\omega_{u}^n \wedge \eta < \infty$. If $\lim\limits_{t \rightarrow +\infty}h(t)=+\infty$, we have
\begin{equation*}
\lim_{k \rightarrow \infty} \int_{\{u \leq -k\}} h(|u|)\omega_u^n \wedge \eta=\lim_{l \rightarrow +\infty} \int_{\{h(|u|) >l\}} h(|u|) \omega_u^n \wedge \eta=0
\end{equation*}
It follows from Proposition \ref{volume} and the generalized Bedford-Taylor identity (Proposition \ref{GBTI}) that
\begin{equation*}
\int_{\{u \leq -k\}} \omega_{u_k}^n \wedge \eta =\int_{\{u \leq -k\}} \omega_u^n \wedge \eta
\end{equation*}
Then we have
\begin{equation*}
\begin{split}
|\int_M h(|u_k|)\omega_{u_k}^n \wedge \eta-\int_M h(|u|)\omega_u^n \wedge \eta| & \leq \int_{\{u \leq -k\}}h(k)\omega_{u_k}^n \wedge \eta+\int_{\{u \leq -k\}} h(|u|)\omega_u^n \wedge \eta \\
&=h(k) \int_{\{u \leq -k\}} \omega_u^n \wedge \eta +\int_{\{u \leq -k\}} h(|u|) \omega_u^n \wedge \eta \\
&\leq 2 \int_{\{u \leq -k\}} h(|u|) \omega_u^n \wedge \eta
\end{split}
\end{equation*}
It follows that $\int_M h(|u_k|)\omega_{u_k}^n\wedge \eta$ is bounded and $\int_M h(|u|)\omega_u^n\wedge \eta=\lim_{k\rightarrow \infty}\int_M h(|u_k|)\omega_{u_k}^n\wedge \eta$.
If $\lim\limits_{t \rightarrow +\infty} h(t)=L<\infty$, it follows from Proposition \ref{volume} that $\int_M h(|u_k|)\omega_{u_k}^n\wedge \eta$ is bounded. Moreover for any $\epsilon>0$ there exists $N>0$ such that $0<L-h(t) <\epsilon$ for all $t>N$. Then for $k >N$ we have
\begin{equation*}
\begin{split}
|\int_M h(|u_k|)\omega_{u_k}^n \wedge \eta-\int_M h(|u|)\omega_u^n \wedge \eta| &=|\int_M (L-h(|u_k|))\omega_{u_k}^n \wedge \eta-\int_M (L-h(|u|))\omega_u^n \wedge \eta| \\
&=|\int_{\{u \leq -k\}} (L-h(|u_k|))\omega_{u_k}^n \wedge \eta-\int_{\{u\leq-k\}}(L-h(|u|))\omega_u^n \wedge \eta| \\
&\leq 2\epsilon
\end{split}
\end{equation*}
That is $\int_M h(|u|)\omega_u^n\wedge \eta=\lim_{k\rightarrow \infty}\int_M h(|u_k|)\omega_{u_k}^n\wedge \eta$.
\end{proof}
With the proposition above, we can then prove the so-called \emph{fundamental estimate}.
\begin{prop}[Fundamental estimate]\label{fe}
Suppose $\chi\in {\mathcal W}^+_p$ and $u, v\in {\mathcal E}_\chi(M, \xi, \omega^T)$ such that $u\leq v\leq 0$. Then
\begin{equation}\label{fe01}
E_\chi(v)\leq (p+1)^n E_\chi(u)
\end{equation}
\end{prop}
\begin{proof}
First of all we assume that $u, v \in \text{PSH}(M,\xi,\omega^T) \cap L^{\infty}$. For $0 \leq j \leq n-1$ we have
\begin{equation*}
\int_M \chi(u)\omega_v^{j+1} \wedge \omega_u^{n-j-1}\wedge \eta =\int_M \chi(u)\omega^T \wedge \omega_v^j \wedge \omega_u^{n-j-1} \wedge \eta+\int_M \sqrt{-1}\chi(u) \partial_B\overline{\partial}_B v \wedge \omega_v^j \wedge \omega_u^{n-j-1} \wedge \eta
\end{equation*}
Recall that $\chi'(l) \leq 0$ for $l<0$. Using integration by parts we have
\begin{equation*}
\begin{split}
\int_M \chi(u)\omega^T \wedge \omega_v^j \wedge \omega_u^{n-j-1} \wedge \eta &=\int_M \chi(u) \omega_v^j \wedge \omega_u^{n-j} \wedge \eta-\int_M \sqrt{-1}\chi(u)\partial_B\overline{\partial}_B u \wedge \omega_v^j \wedge \omega_u^{n-j-1} \wedge \eta \\
&=\int_M \chi(u) \omega_v^j \wedge \omega_u^{n-j} \wedge \eta+\int_M \sqrt{-1} \chi'(u)\partial_B u \wedge \overline{\partial}_B u \wedge \omega_v^j \wedge \omega_u^{n-j-1} \wedge \eta\\
&\leq \int_M \chi(u) \omega_v^j \wedge \omega_u^{n-j} \wedge \eta
\end{split}
\end{equation*}
Recall that $\chi'(l) \leq 0$ for $l<0$ and $l\chi'(l) \leq p \chi(l)$ for $l \geq 0$. Using integration by parts repeatedly we have
\begin{equation*}
\begin{split}
&\int_M \sqrt{-1}\chi(u) \partial_B\overline{\partial}_B v \wedge \omega_v^j \wedge \omega_u^{n-j-1} \wedge \eta \\
&=\int_M \sqrt{-1} v\chi^{''}(u) \partial_B u \wedge \overline{\partial}_B u \wedge \omega_v^j \wedge \omega_u^{n-j-1} \wedge \eta +\int_M \sqrt{-1}v\chi'(u)\partial_B \overline{\partial}_B u \wedge \omega_v^j \wedge \omega_u^{n-j-1} \wedge \eta \\
&\leq \int_M \sqrt{-1}v\chi'(u)\partial_B \overline{\partial}_B u \wedge \omega_v^j \wedge \omega_u^{n-j-1} \wedge \eta \\
&\leq \int_M v\chi'(u) \omega_v^j \wedge \omega_u^{n-j} \wedge \eta =\int_M |v|\chi'(|u|) \omega_v^j\wedge \omega_u^{n-j} \wedge \eta \\
&\leq \int_M |u| \chi'(|u|) \omega_v^j \wedge \omega_u^{n-j} \wedge \eta \leq p \int_M \chi(|u|) \omega_v^j \wedge \omega_u^{n-j} \wedge \eta
\end{split}
\end{equation*}
Combining the inequalities above we obtain
\begin{equation*}
\int_M \chi(u)\omega_v^{j+1} \wedge \omega_u^{n-j-1}\wedge \eta \leq (p+1) \int_M \chi(u) \omega_v^j \wedge \omega_u^{n-j} \wedge \eta
\end{equation*}
It follows that
\begin{equation*}
E_{\chi}(v) \leq \int_M \chi(u) \omega_v^n \wedge \eta \leq (p+1)^n E_{\chi}(u)
\end{equation*}
In the general case $u, v \in {\mathcal E}_{\chi}(M,\xi,\omega^T)$, we have $E_{\chi}(v_k) \leq (p+1)^n E_{\chi}(u_k)$ for the canonical cutoffs $u_k, v_k$. It follows from Proposition \ref{cte} that $E_{\chi}(v) \leq (p+1)^n E_{\chi}(u)$.
\end{proof}
As a direct consequence, we obtain the \emph{monotonicity property} for ${\mathcal E}_\chi(M, \xi, \omega^T)$.
\begin{prop}\label{oder}Suppose $u\in {\mathcal E}_\chi(M, \xi, \omega^T)$ and $v\in \text{PSH}(M, \xi, \omega^T)$. If $u\leq v$, then $v\in {\mathcal E}_\chi(M, \xi, \omega^T)$.
\end{prop}
\begin{proof}
Without loss of generality we can assume that $u \leq v \leq 0$. Proposition \ref{mp01} implies that $v \in {\mathcal E}(M,\xi,\omega^T)$. We have $u \leq v_k$ for the canonical cutoffs $v_k$ of $v$, so $E_{\chi}(v_k) \leq (p+1)^nE_{\chi}(u)$ by Proposition \ref{fe}. It follows from Proposition \ref{cte} that $E_{\chi}(v) \leq (p+1)^n E_{\chi}(u)$ and hence $v \in {\mathcal E}_{\chi}(M,\xi,\omega^T)$.
\end{proof}
We also have the following,
\begin{prop}\label{mixedenergy}Suppose $u, v\in {\mathcal E}_\chi(M, \xi, \omega^T)$ for $\chi\in {\mathcal W}^+_p$. If $u, v\leq 0$, then
\[
\int_M \chi(u)\omega_v^n\wedge \eta\leq p 2^p(E_\chi(u)+E_\chi(v))
\]
\end{prop}
\begin{proof}
Take $\tilde{\chi}(t)=\chi(t)+\delta |t| \in {\mathcal W}^+_p$ for $\delta>0$. For $t>0$ it is obvious that $\tilde{\chi}(t),\tilde{\chi}'(t)>0$. Recall that $\epsilon^p\tilde{\chi}(t) \leq \tilde{\chi}(\epsilon t)$ and $t\tilde{\chi}'(t) \leq p\tilde{\chi}(t)$ for $\tilde{\chi} \in {\mathcal W}_p^+$ and $0 < \epsilon <1$; hence $\tilde{\chi}(2t) \leq 2^p \tilde{\chi}(t)$. It follows from the convexity of $\tilde{\chi}$ that $\frac{\tilde{\chi}(t)}{t} \leq \tilde{\chi}'(t)$. Then
\begin{equation*}
\tilde{\chi}'(2t)= \frac{1}{2}\frac{2t \tilde{\chi}'(2t)}{\tilde{\chi}(2t)} \frac{\tilde{\chi}(2t)}{\tilde{\chi}(t)} \frac{\tilde{\chi}(t)}{t} \leq p2^{p-1} \tilde{\chi}'(t)
\end{equation*}
Letting $\delta \rightarrow 0$ gives $\chi'(2t) \leq p 2^{p-1}\chi'(t)$ for $t>0$.
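As a sanity check (ours, not needed for the argument), for the model weight $\chi(t)=t^p$ the inequality can be verified directly:

```latex
% model weight: \chi(t) = t^p, so \chi'(t) = p t^{p-1}
\[
\chi'(2t) = p\,(2t)^{p-1} = 2^{p-1}\, p\, t^{p-1} = 2^{p-1}\chi'(t) \leq p\, 2^{p-1}\chi'(t),
\]
% so for this weight the bound even holds with the smaller constant 2^{p-1}
```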
Using the generalized Bedford-Taylor identity and the inclusion $\{ |u|>2t\} \subset \{ u <v-t\} \cup \{ v < -t\}$, we have
\begin{equation*}
\begin{split}
\int_M \chi(u) \omega_v^n \wedge \eta &=\int_0^{\infty} \chi'(t) \omega_v^n \wedge \eta\{|u|>t\}dt \\
&\leq p2^p \int_0^{\infty} \chi'(t) \omega_v^n \wedge \eta\{|u|>2t\}dt \\
&\leq p2^p(\int_0^{\infty} \chi'(t) \omega_v^n \wedge \eta \{u<v-t\}dt +\int_0^{\infty} \chi'(t) \omega_v^n \wedge \eta \{v<-t\} dt) \\
&\leq p2^p(\int_0^{\infty} \chi'(t) \omega_u^n \wedge \eta \{u<v-t\}dt +E_{\chi}(v)) \\
& \leq p2^p(\int_0^{\infty} \chi'(t) \omega_u^n \wedge \eta \{u<-t\}dt +E_{\chi}(v)) \\
&=p2^p(E_{\chi}(u)+E_{\chi}(v))
\end{split}
\end{equation*}
\end{proof}
\begin{prop}\label{weightchange}Suppose $u\in {\mathcal E}_\chi(M, \xi, \omega^T), \chi\in {\mathcal W}^+_p$. Then there exists $\tilde \chi\in {\mathcal W}^{+}_{2p+1}$ such that $\chi(t)\leq \tilde \chi(t), \chi(t)/\tilde \chi(t)\rightarrow 0$ as $t\rightarrow \infty$ and $u\in {\mathcal E}_{\tilde \chi}(M, \xi, \omega^T)$
\end{prop}
\begin{proof}
Take $\chi_0=\chi$. Recalling that $\lim\limits_{t \rightarrow \infty} \chi_0(t)=\infty$ and $u \in {\mathcal E}_{\chi}(M,\xi,\omega^T)$, we have
\begin{equation*}
\lim_{t \rightarrow \infty} \int_{\{|u| > t\}} \chi(|u|)\omega_u^n\wedge\eta =\lim_{s \rightarrow \infty} \int_{\{\chi(u) > s\}} \chi(|u|) \omega_u^n\wedge \eta=0
\end{equation*}
Then one can choose $t_1>0$ such that $\int_{\{|u|>t_1\}} \chi(|u|)\omega_u^n\wedge\eta<\frac{1}{2^2}$. We define $\chi_1:\mathbb{R}^+ \rightarrow \mathbb{R}^+$ by the formula:
\begin{equation*}
\chi_1(t)=
\begin{cases}
\chi_0(t) & \text{if} \quad t\leq t_1 \\
\chi_0(t_1)+2(\chi_0(t)-\chi_0(t_1)) &\text{if} \quad t>t_1.
\end{cases}
\end{equation*}
Then it is easy to check that
\begin{enumerate}
\item $\chi_0(t) \leq \chi_1(t)$;
\item $\lim\limits_{t \rightarrow \infty} \frac{\chi_1(t)}{\chi_0(t)}=2$;
\item $E_{\chi_1}(u) \leq E_{\chi_0}(u)+\frac{1}{2}$;
\item $\sup\limits_{t>0} \frac{|t\chi'_1(t)|}{|\chi_1(t)|} \leq \sup\limits_{t>0}\frac{2|t\chi'_0(t)|}{|\chi_0(t)|}<2p+1$;
\item $\lim\limits_{t \rightarrow \infty} \frac{t\chi'_1(t)}{\chi_1(t)} \leq p$
\end{enumerate}
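For instance, property (3) can be verified as follows (a short computation of ours): since $\chi_1=\chi_0$ on $[0, t_1]$ and $\chi_1(t)-\chi_0(t)=\chi_0(t)-\chi_0(t_1)\leq \chi_0(t)$ for $t>t_1$, the choice of $t_1$ gives

```latex
\[
E_{\chi_1}(u)-E_{\chi_0}(u)
=\int_{\{|u|>t_1\}} \bigl(\chi_1(|u|)-\chi_0(|u|)\bigr)\,\omega_u^n\wedge\eta
\leq \int_{\{|u|>t_1\}} \chi_0(|u|)\,\omega_u^n\wedge\eta
< \frac{1}{2^2} \leq \frac{1}{2}.
\]
```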
These properties imply that for $t_2>t_1$ big enough, the function $\chi_2:\mathbb{R}^+ \rightarrow \mathbb{R}^+$
\begin{equation*}
\chi_2(t)=
\begin{cases}
\chi_1(t) & \text{if} \quad t\leq t_2 \\
\chi_1(t_2)+2(\chi_1(t)-\chi_1(t_2)) &\text{if} \quad t>t_2.
\end{cases}
\end{equation*}
satisfies
\begin{enumerate}
\item $\chi_1(t) \leq \chi_2(t)$;
\item $\lim\limits_{t\rightarrow \infty} \frac{\chi_2(t)}{\chi_1(t)}=2$;
\item $E_{\chi_2}(u) \leq E_{\chi_1}(u)+\frac{1}{2^2}$;
\item $\sup\limits_{t>0} \frac{|t\chi'_2(t)|}{|\chi_2(t)|} <2p+1$;
\item $\lim\limits_{t \rightarrow \infty} \frac{t\chi'_2(t)}{\chi_2(t)} \leq p$
\end{enumerate}
Continuing the above construction we can obtain an increasing sequence $\{\chi_k\}_k$ and the limit weight $\tilde{\chi}(t)=\lim\limits_{k \rightarrow \infty} \chi_k(t)$ will satisfy the requirements of the Proposition.
\end{proof}
\begin{prop}\label{weakcon3}
Assume that $\{\psi_k\}_{k\in \mathbb{N}}, \{\phi_k\}_{k\in \mathbb{N}}, \{v_k\}_{k\in \mathbb{N}}\subset {\mathcal E}_\chi(M, \xi, \omega^T)$ decrease (or increase a.e.) to $\psi, \phi, v\in {\mathcal E}_\chi(M, \xi, \omega^T)$ respectively. Suppose
\begin{enumerate}
\item $\psi_k\leq \phi_k$ and $\psi_k\leq v_k$.
\item $h: \mathbb{R}\rightarrow \mathbb{R}$ is continuous with $\limsup_{|l|\rightarrow \infty}|h(l)|/\chi(l)\leq C$ for some $C\geq 0$.
\end{enumerate}
Then we have the weak convergence of
\[
h(\phi_k-\psi_k)\omega_{v_k}^n\wedge \eta\rightarrow h(\phi-\psi)\omega^n_v\wedge \eta.
\]
\end{prop}
\begin{proof}Without loss of generality one can assume all the functions $\phi_k,\phi,\psi_k,\psi, v, v_k$ are negative. We will only prove the Proposition for decreasing sequences; the case of increasing sequences can be proved similarly.
First of all we suppose that the functions involved are uniformly bounded below, that is, there exists $L>1$ such that $-L \leq \phi_k,\phi,\psi_k,\psi, v_k, v \leq 0$. Given $\epsilon>0$, it follows from Theorem \ref{quasicontinuity} that there exists an open subset $O_{\epsilon} \subset M$ such that $\text{cap}(O_{\epsilon})<\epsilon$ and $\phi_k,\phi,\psi_k,\psi, v_k, v$ are continuous on $M-O_{\epsilon}$. Then $\phi_k \rightarrow \phi$ and $\psi_k \rightarrow \psi$ uniformly on $M-O_{\epsilon}$. Hence there exists $N$ such that for $k>N$ we have $|h(\phi_k-\psi_k)-h(\phi-\psi)|<\epsilon$ on $M-O_{\epsilon}$ and the term
\begin{equation*}
\int_M h(\phi_k-\psi_k)\omega_{v_k}^n \wedge \eta-\int_M h(\phi-\psi)\omega_{v_k}^n \wedge \eta=(\int_{O_{\epsilon}}+\int_{M-O_{\epsilon}}) [h(\phi_k-\psi_k)-h(\phi-\psi)] \omega_{v_k}^n \wedge \eta
\end{equation*}
is bounded by $2\epsilon L^n \max\limits_{0 \leq l \leq L}|h(l)| +\epsilon$. Hence
\begin{equation}\label{weakcon1(1)}
\int_M h(\phi_k-\psi_k) \omega_{v_k}^n \wedge \eta-\int_M h(\phi-\psi)\omega_{v_k}^n \wedge \eta \rightarrow 0
\end{equation}
Given $\epsilon>0$, it follows from Theorem \ref{quasicontinuity} that there exists an open subset $\tilde{O}_{\epsilon}$ such that $\text{cap}(\tilde{O}_{\epsilon}) <\epsilon$ and $\phi,\psi$ are continuous on $M-\tilde{O}_{\epsilon}$. By Tietze's extension theorem the function $h(\phi-\psi)|_{M-\tilde{O}_{\epsilon}}$ can be extended to a continuous function $\alpha$ on $M$ bounded by $\max\limits_{0\leq l\leq L}|h(l)|$. By Proposition \ref{weakcon0} we have $\omega_{v_k}^n\wedge\eta \rightarrow \omega_v^n\wedge\eta$ weakly. Then there exists $N$ such that for $k>N$ we have $|\int_M \alpha\omega_{v_k}^n\wedge\eta-\int_M\alpha\omega_v^n\wedge\eta|<\epsilon$ and the term
\begin{equation*}
\begin{split}
&\quad \int_M h(\phi-\psi)\omega_{v_k}^n\wedge\eta-\int_M h(\phi-\psi)\omega_v^n\wedge\eta \\
&= \int_{\tilde{O}_{\epsilon}} (h(\phi-\psi)-\alpha)\omega_{v_k}^n\wedge\eta-\int_{\tilde{O}_{\epsilon}}(h(\phi-\psi)-\alpha)\omega_v^n\wedge\eta+(\int_M \alpha \omega_{v_k}^n\wedge\eta-\int_M\alpha\omega_v^n\wedge\eta)
\end{split}
\end{equation*}
is bounded by $4\epsilon L^n \max\limits_{0\leq l\leq L}|h(l)|+\epsilon$. Hence
\begin{equation}\label{weakcon1(2)}
\int_M h(\phi-\psi)\omega_{v_k}^n\wedge \eta-\int_Mh(\phi-\psi)\omega_v^n \wedge \eta \rightarrow 0
\end{equation}
It follows from \eqref{weakcon1(1)} and \eqref{weakcon1(2)} that $h(\phi_k-\psi_k)\omega_{v_k}^n\wedge\eta \rightarrow h(\phi-\psi)\omega_v^n\wedge\eta$.
Now consider the general case when $\phi_k,\phi,\psi_k,\psi, v_k, v$ are unbounded. Let $\phi_k^l,\phi^l,\psi_k^l,\psi^l, v_k^l, v^l$ be the canonical cutoffs of the corresponding potentials; then we only have to show that
\begin{equation}\label{weakcon1(3)}
\int_M h(\phi_k-\psi_k)\omega_{v_k}^n\wedge\eta-\int_Mh(\phi_k^l-\psi_k^l)\omega_{v_k^l}^n\wedge\eta \rightarrow 0
\end{equation}
and
\begin{equation}\label{weakcon1(4)}
\int_Mh(\phi-\psi)\omega_v^n\wedge\eta-\int_Mh(\phi^l-\psi^l)\omega_{v^l}^n\wedge\eta \rightarrow 0
\end{equation}
as $l\rightarrow \infty$ uniformly with respect to $k$.
By Proposition \ref{weightchange} there exists $\tilde{\chi} \in {\mathcal W}_{2p+1}^+$ such that $\chi \leq \tilde{\chi},\lim\limits_{t \rightarrow \infty} \frac{\chi(t)}{\tilde{\chi}(t)}=0$ and $\psi \in {\mathcal E}_{\tilde{\chi}}(M,\xi,\omega^T)$. Then $\psi_k,\phi_k,\phi, v_k, v \in {\mathcal E}_{\tilde{\chi}}(M,\xi,\omega^T)$ according to Proposition \ref{oder}.
Recall that there exists $L>0$ such that $\chi(L) \geq 1$ and $|h(t)| \leq (C+1)\chi(t)$ for $|t|>L$. Take $\tilde{C}= \max\{C+1,\frac{\max\limits_{0 \leq l \leq L}|h(l)|}{\chi(L)}\}$, then we have
\begin{equation*}
|h(l_1-l_2)| \leq \tilde{C} \chi(l_2)
\end{equation*}
for $l_2 \leq -L$ and $l_2 \leq l_1 \leq 0$.
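This estimate follows by splitting into two cases (our verification; here, as elsewhere in this section, the weight is evaluated at $|l_2|$ since $l_2$ is negative). Note that $l_2\leq l_1\leq 0$ gives $0\leq l_1-l_2\leq |l_2|$:

```latex
% Case 1: l_1 - l_2 > L. By the choice of L and the monotonicity of \chi,
%   |h(l_1-l_2)| \le (C+1)\chi(l_1-l_2) \le (C+1)\chi(|l_2|) \le \tilde{C}\chi(|l_2|).
% Case 2: l_1 - l_2 \le L. Then, using \chi(L) \ge 1 and |l_2| \ge L,
\[
|h(l_1-l_2)| \leq \max_{0\leq l\leq L}|h(l)|
= \frac{\max_{0\leq l\leq L}|h(l)|}{\chi(L)}\,\chi(L)
\leq \tilde{C}\,\chi(|l_2|).
\]
```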
Using the generalized Bedford-Taylor identity, the fundamental estimate and Proposition \ref{mixedenergy}, we have
\begin{equation*}
\begin{split}
&\quad |\int_M h(\phi_k-\psi_k)\omega_{v_k}^n\wedge\eta-\int_Mh(\phi_k^l-\psi_k^l)\omega_{v_k^l}^n\wedge\eta| \\
&=|\int_{\{\psi_k \leq -l\}} h(\phi_k-\psi_k)\omega_{v_k}^n\wedge\eta-\int_{\{\psi_k \leq -l\}}h(\phi_k^l-\psi_k^l)\omega_{v_k^l}^n\wedge\eta| \\
&\leq \int_{\{\psi_k \leq -l\}} |h(\phi_k-\psi_k)|\omega_{v_k}^n\wedge\eta+\int_{\{\psi_k \leq -l\}}|h(\phi_k^l-\psi_k^l)|\omega_{v_k^l}^n\wedge\eta \\
&\leq \tilde{C}(\int_{\{\psi_k \leq -l\}} \chi(\psi_k)\omega_{v_k}^n\wedge\eta+\int_{\{\psi_k \leq -l\}} \chi(\psi_k^l)\omega_{v_k^l}^n\wedge\eta) \\
&\leq \tilde{C}\sup_{s\leq-l}\frac{\chi(s)}{\tilde{\chi}(s)} (\int_{\{\psi_k \leq -l\}} \tilde{\chi}(\psi_k)\omega_{v_k}^n\wedge\eta+\int_{\{\psi_k \leq -l\}} \tilde{\chi}(\psi_k^l)\omega_{v_k^l}^n\wedge\eta) \\
&\leq \tilde{C}\sup_{s \leq -l}\frac{\chi(s)}{\tilde{\chi}(s)} (\int_M \tilde{\chi}(\psi_k)\omega_{v_k}^n\wedge\eta+\int_M \tilde{\chi}(\psi_k^l)\omega_{v_k^l}^n\wedge\eta) \\
&\leq (2p+1)2^{2p+1}\tilde{C}\sup_{s\leq-l}\frac{\chi(s)}{\tilde{\chi}(s)}(E_{\tilde{\chi}}(\psi_k)+E_{\tilde{\chi}}(v_k)+E_{\tilde{\chi}}(\psi_k^l)+E_{\tilde{\chi}}(v_k^l)) \\
&\leq 4(2p+1)(2p+2)^n2^{2p+1}\tilde{C}E_{\tilde{\chi}}(\psi)\sup_{s\leq-l}\frac{\chi(s)}{\tilde{\chi}(s)}
\end{split}
\end{equation*}
for $l>L$, and the statement \eqref{weakcon1(3)} follows.
We also have
\begin{equation*}
\begin{split}
&\quad |\int_Mh(\phi-\psi)\omega_v^n\wedge\eta-\int_Mh(\phi^l-\psi^l)\omega_{v^l}^n\wedge\eta| \\
&=|\int_{\{\psi\leq-l\}}h(\phi-\psi)\omega_v^n\wedge\eta-\int_{\{\psi\leq-l\}}h(\phi^l-\psi^l)\omega_{v^l}^n\wedge\eta| \\
&\leq \int_{\{\psi\leq-l\}}|h(\phi-\psi)|\omega_v^n\wedge\eta+\int_{\{\psi\leq-l\}}|h(\phi^l-\psi^l)|\omega_{v^l}^n\wedge\eta \\
&\leq \tilde{C} (\int_{\{\psi\leq-l\}}\chi(\psi)\omega_v^n\wedge\eta+\int_{\{\psi\leq-l\}}\chi(\psi^l)\omega_{v^l}^n\wedge\eta) \\
&\leq \tilde{C} \sup_{s\leq-l} \frac{\chi(s)}{\tilde{\chi}(s)}(\int_{\{\psi\leq-l\}}\tilde{\chi}(\psi)\omega_v^n\wedge\eta+\int_{\{\psi\leq-l\}}\tilde{\chi}(\psi^l)\omega_{v^l}^n\wedge\eta) \\
&\leq \tilde{C} \sup_{s\leq-l} \frac{\chi(s)}{\tilde{\chi}(s)}(\int_M\tilde{\chi}(\psi)\omega_v^n\wedge\eta+\int_M\tilde{\chi}(\psi^l)\omega_{v^l}^n\wedge\eta) \\
&\leq (2p+1)2^{2p+1}\tilde{C}\sup_{s\leq-l}\frac{\chi(s)}{\tilde{\chi}(s)}(E_{\tilde{\chi}}(\psi)+E_{\tilde{\chi}}(v)+E_{\tilde{\chi}}(\psi^l)+E_{\tilde{\chi}}(v^l)) \\
&\leq 4(2p+1)(2p+2)^n2^{2p+1}\tilde{C}E_{\tilde{\chi}}(\psi)\sup_{s\leq-l}\frac{\chi(s)}{\tilde{\chi}(s)}
\end{split}
\end{equation*}
for $l>L$, and the statement \eqref{weakcon1(4)} follows. This completes the proof.
\end{proof}
\begin{prop}\label{boundedenergy}Suppose $\chi\in {\mathcal W}^+_p$ and $\{u_k\}_{k\in \mathbb{N}}\subset {\mathcal E}_\chi(M, \xi, \omega^T)$ is a decreasing sequence converging to $u\in \text{PSH}(M, \xi, \omega^T)$. If $\sup_k E_\chi(u_k)<\infty$ then $u\in {\mathcal E}_\chi(M, \xi, \omega^T)$ and
\[
E_\chi(u)=\lim_{k\rightarrow \infty} E_\chi(u_k).
\]
\end{prop}
\begin{proof}
Without loss of generality we assume that $u_1\leq0$. The canonical cutoffs $u_k^l=\max\{u_k,-l\}$ decrease to the canonical cutoff $u^l=\max\{u,-l\}$ as $k\rightarrow\infty$. As $-l \leq u^l \leq u_k^l \leq 0$, Proposition \ref{weakcon3} and the fundamental estimate imply that
\begin{equation*}
E_{\chi}(u^l) =\lim_{k \rightarrow \infty} E_{\chi}(u_k^l) \leq (p+1)^n \sup_k E_{\chi}(u_k)
\end{equation*}
By Proposition \ref{cte}, $u \in {\mathcal E}_{\chi}(M,\xi,\omega^T)$. Applying the previous Proposition in the case $\psi_k=v_k=u_k,\phi_k=0$ gives that $E_{\chi}(u)=\lim\limits_{k \rightarrow \infty} E_{\chi}(u_k)$.
\end{proof}
A very important notion in pluripotential theory is the \emph{envelope construction}, which we shall describe below. In our setting on a compact Sasaki manifold, given a usc function $f: M\rightarrow [-\infty, \infty)$ which is invariant under the Reeb flow, we consider the envelope
\begin{equation}\label{envelop01}
P(f):=\sup\{u\in \text{PSH}(M, \xi, \omega^T)\; \text{such that}\; u\leq f\}.
\end{equation}
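Two elementary properties of the envelope follow immediately from the definition \eqref{envelop01}: it is monotone, and it commutes with adding constants.

```latex
\[
f\leq g \;\Longrightarrow\; P(f)\leq P(g),
\qquad
P(f+c)=P(f)+c \quad \text{for every constant } c\in\mathbb{R};
\]
% the second identity holds because u \le f + c if and only if u - c \le f,
% and u belongs to PSH(M, \xi, \omega^T) if and only if u - c does
```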
As in K\"ahler setting, we have the following
\begin{prop}The envelop construction $P(f)\in \text{PSH}(M, \xi, \omega^T)$.
\end{prop}
\begin{proof}This statement is local in nature, hence we only need to argue in foliation charts $W_\alpha=(-\delta, \delta)\times V_\alpha$, where $V_\alpha\subset \mathbb{C}^n$ gives a transverse holomorphic chart. Since $P(f)$ is invariant under the Reeb flow, so is its usc regularization $P(f)^{*}$, and $P(f)^*$ is $\omega^T_\alpha$-psh on each $V_\alpha$, see \cite{BN}[Theorem 1.2.3 (viii)]. Since $f$ is usc, we have $P(f)^*\leq f^*=f$. Hence $P(f)^*$ is a candidate in the definition of $P(f)$, which gives $P(f)^*\leq P(f)$. This implies that $P(f)=P(f)^*$ and $P(f)\in \text{PSH}(M, \xi, \omega^T)$.
\end{proof}
We also introduce the notion of the \emph{rooftop envelope}: for usc functions $\{f_1, \cdots, f_n\}$ which are invariant under the Reeb flow,
\[
P(f_1, \cdots, f_n):=P(\min\{f_1, \cdots, f_n\}).
\]
We have the following,
\begin{thm}\label{rooftop101}Given $f\in C^\infty_B$, we have the following estimate
\[
\|P(f)\|_{C^{1, \bar 1}}\leq C(M, \omega^T, g, \|f\|_{C^{1, \bar 1}}).
\]
Moreover, if
$u_1, \cdots, u_k\in {\mathcal H}_\Delta$, where we use the notation
\[
{\mathcal H}_\Delta=\{u\in \text{PSH}(M, \xi, \omega^T): \|u\|_{C^{1, \bar 1}}<\infty\}
\]
then $P(u_1, \cdots, u_k)\in {\mathcal H}_\Delta$.
\end{thm}
We shall prove Theorem \ref{rooftop101} in the Appendix.
The following result is essential for the rooftop envelope $P(u_0, u_1)$: on the non-contact set $\Gamma:=\{P(u_0, u_1)<\min (u_0, u_1)\}$, we have $\omega_{P(u_0, u_1)}^n\wedge \eta=0$.
\begin{lemma}For $u_0, u_1\in {\mathcal H}_\Delta$, we have on $\Gamma$,
\begin{equation}\omega_{P(u_0, u_1)}^n\wedge \eta=0
\end{equation}
\end{lemma}
\begin{proof}
First we assume $\xi$ is regular or quasiregular; then the proof follows as in the K\"ahler setting, and we sketch it briefly. We consider the quotient K\"ahler manifold (orbifold) $(Z=M/{\mathcal F}_\xi, \omega_Z)$ such that $\omega^T=\pi^*\omega_Z$, where $\pi: M\rightarrow Z$ is the natural quotient map. The functions $u_0, u_1$ and $P(u_0, u_1)$ are all basic, hence they descend to functions on $Z$, which we still denote by $u_0, u_1$ and $P(u_0, u_1)$. We only need to show that $(\omega_Z+\sqrt{-1}\partial\bar \partial P(u_0, u_1))^n=0$ on $\Gamma_Z:=\{z\in Z: P(u_0, u_1)<\min (u_0, u_1)\}$. Note that $\Gamma_Z=\pi(\Gamma)$. This follows directly from \cite{BT1}[Corollary 9.2].
Now we deal with the case when $\xi$ is irregular. We need to use a Type-I deformation to approximate $(M, \xi, \eta, g, \Phi)$, as in Theorem \ref{type101}. Denote by $T^k$ the torus in $\text{Aut}(\xi, \eta, g)$ with Lie algebra $\mathfrak{t}$. Take $\rho_i\in \mathfrak{t}$ such that $\rho_i\rightarrow 0$ (the convergence is smooth with respect to a fixed metric $g$) and such that $\xi_i=\xi+\rho_i$ is quasiregular. Consider the Type-I deformation $(M, \xi_i, \eta_i, g_i, \Phi_i)$ as in Definition \ref{type-01}. Given $u_0, u_1\in {\mathcal H}_\Delta$, we know that $P(u_0, u_1)\in {\mathcal H}_\Delta$ (see Theorem \ref{rooftop101}); by Lemma \ref{type1}, there exists $\epsilon_i\rightarrow 0$ such that $(1-\epsilon_i) u_0, (1-\epsilon_i)u_1, (1-\epsilon_i)P(u_0, u_1)\in \text{PSH}(M, \xi_i, \omega^T_i)$.
Define
\begin{equation}\label{rooftop102}
P_i=P_i((1-\epsilon_i)u_0, (1-\epsilon_i) u_1)=\sup\{v\in \text{PSH}(M, \xi_i, \omega^T_i), v\leq (1-\epsilon_i)u_0, (1-\epsilon_i) u_1\}.
\end{equation}
Since $(1-\epsilon_i)P(u_0, u_1)\in \text{PSH}(M, \xi_i, \omega^T_i)$ and $(1-\epsilon_i)P(u_0, u_1)\leq (1-\epsilon_i)u_0, (1-\epsilon_i) u_1$, we have $(1-\epsilon_i)P(u_0, u_1)\leq P_i$. On the other hand, applying Lemma \ref{type1} again, there exists $\varepsilon_i\rightarrow 0$ such that $(1-\varepsilon_i)P_i\in \text{PSH}(M, \xi, \omega^T)$. It follows that
\[
(1-\varepsilon_i) P_i\leq P(u_0, u_1)\leq P_i (1-\epsilon_i)^{-1}
\]
By Theorem \ref{rooftop101}, we know that $|d\Phi d P_i|$ is uniformly bounded and hence $P_i\rightarrow P(u_0, u_1)$ in $C^{1, \alpha}$. For any compact subset $ K\subset \Gamma=\{P(u_0, u_1)<\min (u_0, u_1)\}$, we can choose $i$ sufficiently large such that $P_i<\min\{(1-\epsilon_i)u_0, (1-\epsilon_i) u_1\}$ on $K$. Since $\xi_i$ is quasiregular, by \eqref{rooftop102} we can then get that
\[(\omega_i^T+\frac{1}{2}d\Phi_i d P_i)^n \wedge \eta_i=0, \;\text{on}\; K.\]
Taking $i\rightarrow \infty$, by Lemma \ref{measure100}, we get that
\[(\omega^T+\frac{1}{2}d\Phi d P(u_0, u_1))^n \wedge \eta=0, \;\text{on}\; K.\]
This completes the proof.
\end{proof}
As a consequence, we get a volume partition formula for $\omega^n_{P(u_0, u_1)}\wedge\eta$ as follows,
\begin{lemma}\label{decomposition}
For $u_0, u_1\in {\mathcal H}_\Delta$, denote $\Lambda_{u_0}=\{P(u_0, u_1)=u_0\}$ and $\Lambda_{u_1}=\{P(u_0, u_1)=u_1\}$. Then we have the following
\begin{equation}
\omega^n_{P(u_0, u_1)}\wedge \eta=\chi_{\Lambda_{u_0}} \omega^n_{u_0}\wedge \eta+\chi_{\Lambda_{u_1}\backslash \Lambda_{u_0}} \omega^n_{u_1}\wedge \eta.
\end{equation}
\end{lemma}
\begin{proof}
The previous Lemma implies that the measure $\omega^n_{P(u_0,u_1)}\wedge \eta$ is supported on the set $\Lambda_{u_0} \cup \Lambda_{u_1}$. It follows from Theorem \ref{rooftop101} that $P(u_0,u_1)$ has bounded Laplacian, hence all second order partial derivatives of $P(u_0,u_1)$ are in $L^p(M)$ for all $p>1$. Then the second order partial derivatives of $P(u_0,u_1)$ and $u_0$ coincide a.e. on $\Lambda_{u_0}$, and those of $P(u_0,u_1)$ and $u_1$ coincide a.e. on $\Lambda_{u_1}$. Recalling the definition of the Monge-Amp\`ere operator for functions in $W^{2,n}$, we can write:
\begin{equation*}
\omega^n_{P(u_0, u_1)}\wedge \eta=\chi_{\Lambda_{u_0}} \omega^n_{u_0}\wedge \eta+\chi_{\Lambda_{u_1}\backslash \Lambda_{u_0}} \omega^n_{u_1}\wedge \eta.
\end{equation*}
\end{proof}
\begin{lemma}
Suppose $\chi\in {\mathcal W}^+_p$ and $u_0, u_1\in {\mathcal E}_\chi(M, \xi, \omega^T)$. Then $P(u_0, u_1)\in {\mathcal E}_\chi(M, \xi, \omega^T)$. If $u_0, u_1\leq 0$, then the following estimates hold
\begin{equation}
E_\chi(P(u_0, u_1))\leq (p+1)^n(E_\chi(u_0)+E_\chi(u_1)).
\end{equation}
\end{lemma}
\begin{proof}
Without loss of generality we can assume $u_0,u_1<0$. It follows from Lemma \ref{BK} that there exist negative transverse K\"ahler potentials $u_0^k, u_1^k \in \mathcal{H}$ decreasing to $u_0,u_1$ respectively. By Theorem \ref{rooftop101}, the rooftop envelopes $P(u_0^k, u_1^k) \in \mathcal{H}_{\triangle}$ decrease to $P(u_0,u_1)$. By Lemma \ref{decomposition} we have the following inequality:
\begin{equation*}
\omega^n_{P(u_0^k,u_1^k)}\wedge \eta \leq \chi_{\Lambda_{u_0^k}} \omega^n_{u_0^k} \wedge \eta +\chi_{\Lambda_{u_1^k}} \omega^n_{u_1^k} \wedge \eta
\end{equation*}
Then
\begin{equation*}
\begin{split}
E_{\chi}(P(u^k_0,u^k_1)) &=\int_M \chi(P(u^k_0,u^k_1)) \omega^n_{P(u^k_0,u^k_1)} \wedge \eta \\
&\leq \int_{P(u^k_0,u^k_1)=u^k_0} \chi(u_0^k)\omega^n_{u_0^k} \wedge \eta+\int_{P(u_0^k,u_1^k)=u_1^k} \chi(u_1^k) \omega^n_{u_1^k} \wedge \eta \\
&\leq E_{\chi}(u_0^k)+E_{\chi}(u_1^k) \\
&\leq (p+1)^n(E_{\chi}(u_0)+E_{\chi}(u_1))
\end{split}
\end{equation*}
By Proposition \ref{boundedenergy} we have $P(u_0,u_1) \in {\mathcal E}_{\chi}(M,\xi,\omega^T)$ and the required inequality holds.\end{proof}
As a corollary we know that ${\mathcal E}_\chi(M, \xi, \omega^T)$ is convex,
\begin{cor}\label{convex1}
If $u_0, u_1\in {\mathcal E}_\chi(M, \xi, \omega^T)$, then $tu_0+(1-t)u_1\in {\mathcal E}_\chi(M, \xi, \omega^T)$ for any $t\in [0, 1]$.
\end{cor}
\begin{proof}
By the previous Lemma we have $P(u_0,u_1) \in {\mathcal E}_\chi(M, \xi, \omega^T)$. Notice that $P(u_0,u_1) \leq tu_0+(1-t)u_1$ for $t \in [0,1]$, then the monotonicity property of ${\mathcal E}_\chi(M, \xi, \omega^T)$ implies that $tu_0+(1-t)u_1 \in {\mathcal E}_\chi(M, \xi, \omega^T)$.\end{proof}
\begin{lemma}
Let $U \subset M$ be a Borel set with $(\omega^T)^n\wedge\eta(U)>0$ and let $u \in {\mathcal E}_1(M, \xi, \omega^T)$. Then there exists $\varphi \in {\mathcal E}_1(M,\xi, \omega^T)$ with $\varphi \leq u$ and $\omega_{\varphi}^n\wedge\eta(U) >0$.
\end{lemma}
\begin{proof}
Without loss of generality we can assume that $u<0$. Then we can choose a sequence $u_k \in {\mathcal H}$ decreasing to $u$ with $u_k <0$. For a constant $\tau>0$, we have $\{P(u_k+\tau, 0)=u_k+\tau\} \subset \{ u_k \leq -\tau\}$. It follows from Lemma \ref{decomposition} that
\[
\omega_{P(u_k+\tau,0)}^n\wedge\eta \leq \chi_{\{u_k \leq -\tau\}} \omega_{u_k}^n\wedge\eta+ (\omega^T)^n\wedge\eta \leq -\frac{u_k}{\tau}\omega_{u_k}^n\wedge\eta+(\omega^T)^n\wedge\eta
\]
The sequence $P(u_k+\tau, 0) \in {\mathcal E}_1(M,\xi,\omega^T)$ decreases to $P(u+\tau, 0) \in {\mathcal E}_1(M, \xi, \omega^T)$. It follows from Proposition \ref{weakcon3} that
\[
\omega_{P(u+\tau,0)}^n\wedge\eta \leq -\frac{u}{\tau}\omega_u^n\wedge\eta+(\omega^T)^n\wedge\eta
\]
Hence we have
\[
\omega_{P(u+\tau,0)}^n\wedge\eta(M-U) \leq \frac{1}{\tau} \int_{M-U}|u|\omega_u^n\wedge\eta+(\omega^T)^n\wedge\eta(M-U) \leq \frac{1}{\tau}\int_M|u|\omega_u^n\wedge\eta+(\omega^T)^n\wedge\eta(M-U)
\]
It follows from $\omega_{P(u+\tau,0)}^n\wedge\eta(M)=(\omega^T)^n\wedge\eta(M)$ that
\[
\omega_{P(u+\tau,0)}^n\wedge\eta(U) \geq (\omega^T)^n\wedge\eta(U)- \frac{1}{\tau}\int_M|u|\omega_u^n\wedge\eta
\]
and $\omega_{P(u+\tau,0)}^n\wedge\eta(U)>0$ for $\tau$ big enough. Then $\varphi=P(u+\tau, 0) -\tau$ is the potential required.
\end{proof}
\begin{lemma}[The domination principle]\label{domination2}
If $u, v\in {\mathcal E}_1(M,\xi,\omega^T)$ and $u \leq v$ almost everywhere with respect to the measure $\omega_v^n\wedge\eta$, then $u\leq v$.
\end{lemma}
\begin{proof}
We only have to prove $u \leq v$ almost everywhere with respect to $(\omega^T)^n\wedge\eta$ for $u, v<0$.
Suppose that $(\omega^T)^n\wedge\eta(\{u>v\}) >0$. The previous Lemma implies that there exists $\varphi \in {\mathcal E}_1(M, \xi, \omega^T)$ with $\varphi \leq u$ and $\omega_{\varphi}^n\wedge\eta(\{u>v\}) >0$. It follows from Corollary \ref{convex1} that $t \varphi +(1-t) u \in {\mathcal E}_1(M, \xi, \omega^T)$ for $t \in [0,1]$. Using the fact that $\omega_{t\varphi+(1-t)u}^n\wedge\eta \geq t^n \omega_{\varphi}^n\wedge\eta$, the comparison principle \eqref{cp01} and the inclusion $\{v < t\varphi+(1-t)u\} \subset \{v < u\}$, we have
\begin{equation*}
\begin{split}
t^n\int_{\{v < t\varphi +(1-t)u\}} \omega_{\varphi}^n\wedge\eta &\leq \int_{\{v<t\varphi+(1-t)u\}} \omega_{t\varphi+(1-t)u}^n\wedge\eta \\
&\leq \int_{\{ v < t\varphi+(1-t)u\}} \omega_v^n\wedge\eta \\
& \leq \int_{\{v < u\}} \omega_v^n\wedge\eta \\
&=0
\end{split}
\end{equation*}
and $\omega_{\varphi}^n\wedge\eta(\{v < t\varphi+(1-t)u\})=0$ for $t \in (0,1]$. Then
\[
\omega_{\varphi}^n\wedge\eta(\{v<u\})=\lim_{k\rightarrow\infty}\omega_{\varphi}^n\wedge\eta(\{v<\frac{1}{k}\varphi+(1-\frac{1}{k})u\})=0
\]
This leads to a contradiction.
\end{proof}
\subsection{The space of transverse K\"ahler potentials and $({\mathcal H}, d_2)$}
The Riemannian structure on ${\mathcal H}$ has been studied extensively, notably by Guan-Zhang \cite{GZ1}. Guan-Zhang proved that for any two points $\phi_1, \phi_2\in {\mathcal H}$, there exists a unique $C^{1, \bar 1}_B$ geodesic which realizes the distance in $({\mathcal H}, d_2)$, and that $({\mathcal H}, d_2)$ is a metric space. The Riemannian structure plays a central role, as in Chen's result \cite{chen01} in the K\"ahler setting.
We shall recall these results.
For $\psi_1,\psi_2 \in T_{\phi} \mathcal{H}=C_B^{\infty}(M)$, define an $L^2$ inner product on this tangent space
\begin{equation*}
(\psi_1,\psi_2)_{\phi}=\int_{M} \psi_1\psi_2 d\mu_{\phi}
\end{equation*}
and the length $||\psi||_{2, \phi}$ of a vector $\psi \in T_{\phi} {\mathcal H}$ is
\begin{equation*}
||\psi||_{2, \phi}=\left(\int_{M} \psi^2 d\mu_{\phi}\right)^{\frac{1}{2}},
\end{equation*}
where we use the notation
\begin{equation}
d\mu_\phi=\omega_\phi^n\wedge \eta_\phi=\omega^n_\phi \wedge \eta.
\end{equation}
For a smooth path $\phi_t \in {\mathcal H}$, the length of the path is defined to be
\begin{equation*}
l(\phi_t)=\int_0^1 ||\dot{\phi}_t||_{2, \phi_t}dt
\end{equation*}
This is a direct adaptation of Mabuchi's metric \cite{M1} on the space of K\"ahler potentials to the Sasaki setting.
The Levi-Civita connection $\nabla$ is torsion free and satisfies
\begin{equation*}
\frac{d}{dt}(u_t, v_t)_{\phi_t}=(\nabla_{\dot{\phi}_t} u_t, v_t)_{\phi_t}+(u_t,\nabla_{\dot{\phi}_t} v_t)_{\phi_t}
\end{equation*}
for any smooth vector fields $u_t, v_t$ along the path $\phi_t$ in $\mathcal{H}_{\omega^T}$.
Let $u_t\in C^\infty_B(M)$ be a smooth vector field along a smooth curve $\phi_t$ in $\mathcal{H}$; then
\begin{equation}
\nabla_{\dot{\phi}_t} u_t=\dot{u}_t-\frac{1}{4}<\nabla \dot{\phi}_t,\nabla u_t>_{{\phi_t}}
\end{equation}
The geodesic equation can be written as
\begin{equation}\label{cma01}
\nabla_{\dot \phi_t}(\dot\phi_t)=\ddot{\phi}_t-\frac{1}{4}|\nabla \dot{\phi}_t|^2_{{\phi_t}}=0
\end{equation}
Given $\phi_0, \phi_1\in {\mathcal H}$, to solve the geodesic equation, Guan-Zhang \cite{GZ1} introduced the following perturbation equation, for a path $\phi_t: M\times [0, 1]\rightarrow \mathbb{R}$,
\begin{equation}\label{cma02}
\left\{\begin{matrix}
\left(\ddot{\phi}_t-\frac{1}{4}|\nabla \dot{\phi}_t|^2_{\omega_{\phi_t}}\right)\omega_{\phi_t}^n \wedge \eta=\epsilon (\omega^T)^n\wedge \eta \;\text{on}\; M\times (0, 1)\\
\phi|_{t=0}=\phi_0\\
\phi|_{t=1}=\phi_1
\end{matrix}
\right.
\end{equation}
Define a function $\psi$ on $M\times [1, 3/2]$, regarded as a subset of the cone $X$, by
\[
\psi(\cdot, r)=\phi_{t}(\cdot)+4\log r,\;\;t=2r-2
\]
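With the change of variable $t=2r-2$, the boundary values of $\psi$ are (a direct substitution):

```latex
\[
\psi(\cdot, 1)=\phi_0+4\log 1=\phi_0,
\qquad
\psi(\cdot, 3/2)=\phi_1+4\log(3/2),
\]
% r = 1 corresponds to t = 0 and r = 3/2 corresponds to t = 1
```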
Define a $(1, 1)$-form by
\[
\Omega_\psi=\omega_X+\frac{r^2}{2}\sqrt{-1}\left(\partial\bar\partial \psi-\frac{\partial \psi}{\partial r}\partial\bar\partial r\right)
\]
Guan-Zhang wrote an equivalent form of \eqref{cma02} in terms of a complex Monge-Amp\`ere equation for $\psi$ of the following form (with $f=r^2, \epsilon\in(0, 1] $),
\begin{equation}\label{cma03}
\begin{split}
&(\Omega_\psi)^{n+1}=\epsilon f (\omega_X)^{n+1}, M\times \left(1, \frac{3}{2}\right)\\
&\psi|_{M\times \{r=1\}}=\phi_0, \quad \psi|_{M\times \{r=3/2\}}=\phi_1+4\log(3/2)
\end{split}
\end{equation}
Guan-Zhang proved the following results regarding \eqref{cma03},
\begin{thm}[Guan-Zhang]Fix a Sasaki structure $(M, \xi, \eta, g)$ on a compact manifold $M$. For any positive basic function $f$ and any two points $\phi_0, \phi_1\in {\mathcal H}$, there exists a unique smooth solution $\psi$ of \eqref{cma03}, satisfying the following estimates: $\psi$ is basic and there exists a constant $C>0$, depending only on $\|f^{\frac{1}{n}}\|_{C^{2}(M\times [1, \frac{3}{2}])}, \|\phi_0\|_{C^{2, 1}}, \|\phi_1\|_{C^{2, 1}}$, such that
\begin{equation}\label{cma05}
\|\psi\|_{C^{2}_w}:= \|\psi\|_{C^1}+\sup |\Delta \psi| \leq C.
\end{equation}
Denote the corresponding solution of \eqref{cma02} by $\phi_t^\epsilon$; then $\phi^\epsilon_t$ is called a (smooth) $\epsilon$-geodesic connecting $\phi_0, \phi_1$, satisfying
\begin{equation}\label{cma04}
\|\phi^\epsilon_t\|_{C^1}+\sup(\ddot\phi^\epsilon+|\nabla \dot\phi^\epsilon_t|_g+\Delta_g \phi^\epsilon_t)\leq C
\end{equation}
When $\epsilon \rightarrow 0$, there exists a unique (weak $C^2_w$) limit $\phi_t$ of $\phi^\epsilon_t: M\times [0, 1]\rightarrow \mathbb{R}$ connecting $\phi_0, \phi_1$ such that $\Omega_{\phi^\epsilon_t+4\log r}$ is positive. The latter is equivalent to \[\omega_{\phi^\epsilon_t}>0, \quad \ddot{\phi}^\epsilon_t-\frac{1}{4}|\nabla \dot{\phi}^\epsilon_t|^2_{\omega_{\phi^\epsilon_t}}>0.\]
As a consequence, $({\mathcal H}, d_2)$ is a metric space.
\end{thm}
\begin{rmk}The constant $1/4$ appears in the geodesic equation
\[
\ddot{\phi}_t-\frac{1}{4}|\nabla \dot{\phi}_t|^2_{\omega_{\phi_t}}=0.
\]
This constant is insignificant. In the K\"ahler setting, some authors write the constant as $1/2$ and some as $1$, depending on whether the gradient $\nabla$ is interpreted as real or complex; the two conventions differ by a factor of $2$. The constant $1/4$ appears in the Sasaki setting in \cite{GZ1} since the authors use the real gradient and the space of Sasaki potentials (transverse K\"ahler potentials) defined as
\[
\{\phi: d\eta+\sqrt{-1}\partial_B\bar\partial_B \phi>0\}.
\]
In the following, we shall write the geodesic equation as
\[
\ddot{\phi}_t-|\nabla \dot{\phi}_t|^2_{\omega_{\phi_t}}=0,
\]
where we use the complex gradient, and our space of transverse K\"ahler potentials is
\[
{\mathcal H}=\{\phi\in C^\infty_B(M): \omega^T+\sqrt{-1}\partial_B\bar\partial_B\phi>0\}.
\]
\end{rmk}
To prove $({\mathcal H}, d_2)$ is a metric space, Guan-Zhang \cite{GZ1}[Lemma 14, proof of Theorem 2] proved the following triangle inequality,
\begin{lemma}[Guan-Zhang]\label{triangle01}Let $\psi(s): [0, 1]\rightarrow {\mathcal H}$ be a smooth curve, $\phi\in {\mathcal H}\backslash \psi([0, 1])$. Fix $\epsilon\in (0, 1]$. Let $u^{\epsilon}\in C^\infty_B([0, 1]\times [0, 1]\times M)$ be the function such that $u^\epsilon_t(\cdot, s)$ is the $\epsilon$-geodesic connecting $\phi$ and $\psi_s$, for $t\in [0, 1]$. Then the following estimate holds,
\begin{equation}
l(u^{\epsilon}_t(\cdot, 0))\leq l(\psi)+l(u^\epsilon_t(\cdot, 1))+\epsilon C,
\end{equation}
where $C=C(\phi, \psi, g)$ is a uniform constant, independent of $\epsilon$.
\end{lemma}
There are several estimates which are not explicitly stated or not proved in \cite{GZ1}. We include them here since we shall need them later.
Regarding \eqref{cma02}, first we have the following comparison principle,
\begin{lemma}
Suppose we have two solutions $\varphi, \phi$ with boundary data $\varphi_0, \varphi_1$ and $\phi_0, \phi_1$ respectively,
\begin{equation} \left(\ddot{\phi}_t-|\nabla \dot{\phi}_t|^2_{\omega_{\phi_t}}\right)\omega_\phi^n \wedge \eta=\epsilon (\omega^T)^n\wedge \eta=\left(\ddot{\varphi}_t-|\nabla \dot{\varphi}_t|^2_{\omega_{\varphi_t}}\right)\omega_\varphi^n \wedge \eta,
\end{equation}
then we have the following
\begin{equation}\label{comparison01}
\max |\phi-\varphi|\leq \max |\phi_0-\varphi_0|+\max |\phi_1-\varphi_1|.
\end{equation}
\end{lemma}
\begin{proof}This is a standard comparison principle. We sketch the proof for completeness. Denote the operator
\[
F(D^2\phi)=\log \det \begin{pmatrix} \ddot \phi & \nabla \dot \phi\\
(\nabla \dot \phi)^t& g_{i\bar j}^T+\phi_{i\bar j}
\end{pmatrix}-\log \det(g^T_{i\bar j})=\log \left(\ddot{\phi}_t-|\nabla \dot{\phi}_t|^2_{\omega_{\phi_t}}\right)+\log\frac{ \det(g_{i\bar j}^T+\phi_{i\bar j})}{\det(g_{i\bar j}^T)}
\]
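The last equality is the Schur complement formula for the determinant of a block matrix with scalar upper-left entry:

```latex
\[
\det\begin{pmatrix} a & b \\ b^{t} & D \end{pmatrix}
= \bigl(a - b\, D^{-1} b^{t}\bigr)\det D,
\]
% applied with a = \ddot\phi, b = \nabla\dot\phi and D = (g^T_{i\bar j} + \phi_{i\bar j}),
% for which b D^{-1} b^t = |\nabla\dot\phi|^2_{\omega_\phi}
```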
The $\epsilon$-geodesic equation can be written as $F(D^2\phi)=\log\epsilon$. Now suppose $F(D^2\phi)=F(D^2\varphi)=\log\epsilon$; we claim \eqref{comparison01} holds. Otherwise, at some interior point
\[\phi-\varphi>\max |\phi_0-\varphi_0|+\max |\phi_1-\varphi_1|. \]
Hence for $a>0$ small enough, $\phi-\varphi+at(t-1)$ attains its maximum at an interior point $p$. Denote $v=\phi+at(t-1)$. Then on one hand,
\[
F(D^2v)>F(D^2\phi)=\log\epsilon
\]
On the other hand, at $p$ we have $D^2 v\leq D^2 \varphi$. By the concavity of $F$, we have at $p$,
\[
F(D^2v)-F(D^2\varphi)\leq {\mathcal L}_{F_\varphi} (v-\varphi)\leq 0,
\]
where ${\mathcal L}_{F_\varphi}$ is the linearized operator of $F$ at $\varphi$. This is a contradiction.
\end{proof}
One can actually be more precise about the estimate \eqref{cma04} (and \eqref{cma05}). For simplicity, we state the result for \eqref{cma02},
\begin{lemma}\label{cma10}The $\epsilon$-geodesic $\phi^\epsilon_t$ connecting $\phi_0, \phi_1\in {\mathcal H}$ satisfies the following estimate,
\begin{equation}\label{cma06}
\max |\dot\phi^\epsilon_t|\leq \max |\phi_1-\phi_0|+C \max |\nabla(\phi_1-\phi_0)|^2_g+\epsilon,
\end{equation}
where $C$ depends only on $\phi_0, \phi_1$. Moreover, we have
\begin{equation}\label{cma07}
|\nabla \phi^\epsilon_t|_g+\sup \Delta_g\phi^\epsilon_t\leq C(\|\phi_0\|_{C^1}, \|\phi_1\|_{C^1}, \sup \Delta_g\phi_0, \sup\Delta_g \phi_1, g)
\end{equation}
\end{lemma}
\begin{proof}The first estimate follows from $\ddot \phi^\epsilon_t>0$ and the following $C^0$ estimate \eqref{cma08}, which can be proved similarly using the concavity of $F$.
First there exists $a>0$ such that
\begin{equation}\label{cma08}
at(t-1)+(1-t)\phi_0+t\phi_1
\leq \phi_t^\epsilon\leq (1-t)\phi_0+t\phi_1
\end{equation}
The right-hand side is a direct consequence of $\ddot \phi^\epsilon_t>0$, while the left-hand side can be argued as follows. Denote $U^a=at(t-1)+(1-t)\phi_0+t\phi_1$; then $\phi^\epsilon_t$ agrees with $U^a$ on the boundary. Hence if $\phi^\epsilon_t<U^a$ somewhere, then $\phi^\epsilon_t-U^a$ attains its minimum at some interior point $p$. At $p$, we have $D^2 \phi^\epsilon\geq D^2 U^a$. By the concavity of $F$, we get (at $p$)
\[
0\leq {\mathcal L}_{F_{\phi^\epsilon}} (\phi^\epsilon_t-U^a)\leq F(D^2\phi^\epsilon_t)-F(D^2 U^a).
\]
That is, $F(D^2 U^a)\leq \log \epsilon$. This is a contradiction when $a>0$ is sufficiently large. Indeed, a direct computation shows that if $a\geq C\max |\nabla(\phi_1-\phi_0)|^2+\epsilon$, then $F(D^2 U^a)>\log \epsilon$. Hence for such a choice of $a$, \eqref{cma08} holds. By convexity in the $t$-direction, we know that
\[
\dot\phi^\epsilon_t (\cdot, 0)\leq \dot \phi^\epsilon_t\leq \dot \phi^\epsilon_t(\cdot, 1)
\]
It is straightforward to check that
\[-a+ \phi_1-\phi_0 \leq \dot\phi^\epsilon_t (\cdot, 0) \leq \phi_1-\phi_0\leq \dot \phi^\epsilon_t(\cdot, 1)\leq a+\phi_1-\phi_0\]
Hence \eqref{cma06} follows. The gradient estimate on $|\nabla \phi^\epsilon_t|$ is given by \cite{GZ1}[Proposition 2]. The estimate on $\Delta _g \phi^\epsilon_t$, depending only on $\phi_0, \phi_1$ up to second order derivatives, was proved in the K\"ahler setting by the first named author \cite{he12}[Theorem 1.1] (for $\epsilon=0$, it was proved earlier in \cite{BD} using pluripotential theory). The method in \cite{he12} is to deal with the equation \eqref{cma02} directly, and it can be carried over word for word to prove the interior estimate of $\Delta_g \phi^\epsilon$ (since in the Sasaki setting, this estimate only involves the transverse K\"ahler structure and basic functions). We skip the details.
\end{proof}
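We sketch the direct computation behind the choice of $a$ in \eqref{cma08}. Since $\ddot U^a=2a$ and $\nabla \dot U^a=\nabla(\phi_1-\phi_0)$, while the transverse Hessian $U^a_{i\bar j}=(1-t)(\phi_0)_{i\bar j}+t(\phi_1)_{i\bar j}$ is independent of $a$ and $\omega^T_{U^a}>0$, we have
\[
F(D^2U^a)=\log\left(2a-|\nabla(\phi_1-\phi_0)|^2_{\omega_{U^a}}\right)+\log\frac{\det(g^T_{i\bar j}+U^a_{i\bar j})}{\det(g^T_{i\bar j})}\geq \log\left(2a-|\nabla(\phi_1-\phi_0)|^2_{\omega_{U^a}}\right)-C_0,
\]
where $C_0$ depends only on $\phi_0, \phi_1$. Hence $F(D^2U^a)>\log\epsilon$ once $2a>|\nabla(\phi_1-\phi_0)|^2_{\omega_{U^a}}+e^{C_0}\epsilon$, which holds whenever $a\geq C\max|\nabla(\phi_1-\phi_0)|^2+\epsilon$ for a suitable $C=C(\phi_0, \phi_1, g)$.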
By taking $\epsilon \rightarrow 0$, we have the following,
\begin{lemma}\label{zero01}Let $\phi$ be the weak geodesic connecting $\phi_0, \phi_1\in {\mathcal H}$. Then for some positive constant $C=C(M, g, \|\phi_0\|_{C^2}, \|\phi_1\|_{C^2})$, we have
\[
|\dot \phi|\leq \max |\phi_1-\phi_0|+C \max |\nabla\phi_1-\nabla \phi_0|_g^2
\]
As a consequence, if $\phi_0\rightarrow \phi_1$ in ${\mathcal H}$, then $d_2(\phi_0, \phi_1)\rightarrow 0$.
\end{lemma}
\begin{rmk}One can get a much sharper estimate,
\[
|\dot \phi|\leq \max |\phi_1-\phi_0|
\]
using the uniqueness and comparison for the generalized solutions of complex Monge-Ampere in the sense of Bedford-Taylor, see \cite{D4}[Lemma 3.5] for K\"ahler setting. We shall prove this sharper version below.
\end{rmk}
Using Lemma \ref{triangle01} and Lemma \ref{zero01}, it follows that the distance function $d_2(\phi_0, \phi_1)$ is realized by the weak geodesic $\phi$ connecting $\phi_0, \phi_1$. In particular,
\begin{lemma}\label{distance}
Given $\phi_0, \phi_1\in {\mathcal H}$, we have,
\begin{equation}\label{distance01}
d_2(\phi_0, \phi_1)=\|\dot\phi_t\|_{2, \phi_t}, \quad \forall t\in [0, 1].
\end{equation}
\end{lemma}
\begin{proof}Let $\phi^\epsilon_t$ be the $\epsilon$-geodesic connecting $\phi_0, \phi_1$. Then we compute
\begin{equation}\label{dis03}
\begin{split}
\frac{d}{dt}\int_M |\dot\phi^\epsilon_t|^2 (\omega_{\phi^\epsilon_t})^n\wedge \eta=&2\int_M \dot \phi^\epsilon_t (\ddot \phi^\epsilon_t-|\nabla \dot\phi^\epsilon_t|^2_{\phi_t^\epsilon}) (\omega_{\phi^\epsilon_t})^n\wedge \eta\\
=&2\epsilon \int_M \dot\phi^\epsilon_t (\omega^T)^n\wedge \eta
\end{split}
\end{equation}
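Here the first equality uses $\frac{d}{dt}(\omega_{\phi^\epsilon_t})^n=\Delta_{\phi^\epsilon_t}\dot\phi^\epsilon_t\,(\omega_{\phi^\epsilon_t})^n$ together with the integration by parts
\[
\int_M |\dot\phi^\epsilon_t|^2\Delta_{\phi^\epsilon_t}\dot\phi^\epsilon_t\, (\omega_{\phi^\epsilon_t})^n\wedge \eta=-2\int_M \dot\phi^\epsilon_t\,|\nabla \dot\phi^\epsilon_t|^2_{\phi^\epsilon_t}\, (\omega_{\phi^\epsilon_t})^n\wedge \eta,
\]
exactly as in the K\"ahler setting.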
Since $|\dot \phi^\epsilon_t|$ is uniformly bounded, letting $\epsilon\rightarrow 0$, we get that
\[
\frac{d}{dt}\int_M |\dot\phi_t|^2 (\omega_{\phi_t})^n\wedge \eta=0.
\]
This proves \eqref{distance01}. In particular, if $\phi_0\neq \phi_1$, then $\dot\phi_t$ is not identically zero for any $t$. Moreover, if $\epsilon$ is small enough, depending on $\phi_0, \phi_1$, then $\dot\phi^\epsilon_t$ is not identically zero for any $t\in [0, 1]$. This follows from \eqref{dis03}, together with the observation that $\int_M |\dot\phi^\epsilon_t|^2 (\omega_{\phi^\epsilon_t})^n\wedge \eta$ has a positive lower bound for any $t$ (say $l(\phi^\epsilon_t)/2$) if $\epsilon$ is sufficiently small.
\end{proof}
We also have the following
\begin{thm}[Guan-Zhang, Theorem 2]\label{triangle100}For $u, v, w\in {\mathcal H}$,
\[
d_2(u, w)\leq d_2(u, v)+d_2(v, w).
\]
\end{thm}
\subsection{The Orlicz-Finsler geometry on Sasaki manifolds}
The Orlicz-Finsler geometry on the space of K\"ahler potentials was introduced by T. Darvas \cite{D2} and has played an important role in problems regarding cscK and Calabi's extremal metrics in K\"ahler geometry. In particular, the Finsler metric $d_1$ will play an important role; it is used to define the properness of the ${\mathcal K}$-energy.
In this section we discuss the Orlicz-Finsler geometry on Sasaki geometry.
We prove the following theorem, which is the counterpart of Darvas's \cite{D2}[Theorem 1] in the Sasaki setting.
\begin{thm}\label{darvas1}If $\chi\in {\mathcal W}_p^{+}, p\geq 1$, then $({\mathcal H}, d_\chi)$ is a metric space and for any $u_0, u_1\in {\mathcal H}$, the $C^{1, \bar 1}_B$ geodesic $t\rightarrow u_t$ connecting $u_0, u_1$ satisfies
\begin{equation}
d_\chi(u_0, u_1)=\|\dot u_t\|_{\chi, u_t}, t\in [0, 1].
\end{equation}
\end{thm}
Theorem \ref{darvas1} generalizes the result for $d_2$ to general smooth Young weights.
This important result in T. Darvas's theory says that the same $C^{1, \bar 1}_B$ geodesic (with respect to $d_2$) is ``length minimizing" for all $d_\chi$ metric structures, and this holds in the Sasaki setting.
The proof of Theorem \ref{darvas1} follows Darvas's proof \cite{D4}[Theorem 3.4] closely, with minor modifications adapted to the Sasaki setting. The main point is that only the transverse K\"ahler structure is involved, and hence this is essentially the same as in the K\"ahler setting. We include the details for completeness.
Following T. Darvas (see \cite{D4}[Chapter 3]), we define the Orlicz-Finsler length of $v\in T_u{\mathcal H}=C^\infty_B(M)$ for any weight $\chi\in {\mathcal W}^+_p$:
\begin{equation}
\|v\|_{\chi, u}=\inf\left\{r>0: \frac{1}{\text{Vol}(M)}\int_M \chi\left(\frac{v}{r}\right)\omega_u^n\wedge \eta\leq \chi(1)\right\}
\end{equation}
For simplicity, we shall assume $\text{Vol}(M)=1$ in this section.
Given a smooth curve $\gamma: t\in [0, 1]\rightarrow {\mathcal H}$, its length is computed by the formula
\begin{equation}
l_\chi(\gamma_t)=\int_0^1\|\dot\gamma_t\|_{\chi, \gamma_t}dt
\end{equation}
Furthermore, the distance $d_\chi(u_0, u_1)$ between $u_0, u_1\in {\mathcal H}$ is the infimum of the $l_\chi$-length of smooth curves joining $u_0$ and $u_1$:
\begin{equation}\label{dis10}
d_{\chi}(u_0, u_1)=\inf\{l_\chi(\gamma_t): \gamma_t\;\text{is a smooth curve with}\; \gamma_0=u_0, \gamma_1=u_1\}.
\end{equation}
First we have the following,
\begin{prop}\label{derivativeoflength}
Suppose $\chi \in {\mathcal W}_p^+ \cap C^{\infty}(\mathbb{R})$. For a smooth curve $u_t (t \in [0,1])$ in ${\mathcal H}$ and a vector field $f_t \in C_B^{\infty}(M)$ along this curve with $f_t \not\equiv 0$, we have
\begin{equation}
\frac{d}{dt} ||f_t||_{\chi, u_t}=\frac{\int_M \chi'(\frac{f_t}{||f_t||_{\chi,u_t}})\nabla_{\dot{u}_t} f_t d\mu_{u_t}}{\int_M\chi'(\frac{f_t}{||f_t||_{\chi, u_t}}) \frac{f_t}{||f_t||_{\chi, u_t}}d\mu_{u_t}}
\end{equation}
\end{prop}
\begin{proof}This works as in \cite{D2}[Proposition 3.1] word for word. We skip the details.
\end{proof}
\begin{lemma}\label{kai length estimate}
Suppose $\chi \in \mathcal{W}_p^+ \cap C^{\infty}(\mathbb{R})$ and $u_0,u_1 \in \mathcal{H}$, $u_0 \neq u_1$. Then the $\epsilon$-geodesic $[0,1] \owns t \rightarrow u_{t}^{\epsilon} \in \mathcal{H} $ connecting $u_0,u_1$ satisfies the following estimate:
\begin{equation}
\int_M \chi(\dot{u}_t^{\epsilon})\omega_{u_t^{\epsilon}}^n \wedge \eta \geq \max(\int_M \chi(\min(u_1-u_0,0)) \omega_{u_0}^n \wedge \eta, \int_M \chi(\min(u_0-u_1,0)) \omega_{u_1}^n \wedge \eta)-\epsilon C
\end{equation}
for all $t \in [0,1]$, where $C:=C(\chi,||u_0||_{C^2(M)},||u_1||_{C^2(M)})$.
\end{lemma}
\begin{proof}This follows exactly as in K\"ahler setting \cite{D4}[Lemma 3.8], by a direct computation and the convexity of $\chi$.
\end{proof}
\begin{lemma}\label{derivativeofsmoothgeodesics}
Suppose $\chi \in \mathcal{W}_p^+ \cap C^{\infty}(\mathbb{R})$ and $u_0,u_1 \in \mathcal{H}$, $u_0 \neq u_1$. Then there exists a constant $\epsilon_0$ depending on $u_0, u_1$ such that for all $\epsilon \in (0, \epsilon_0]$ the $\epsilon$-geodesic $[0,1] \owns t \rightarrow u_{t}^{\epsilon} \in \mathcal{H} $ connecting $u_0,u_1$ satisfies:
\begin{equation}
\frac{d}{dt} ||\dot{u}_t^{\epsilon}||_{\chi, u_t^{\epsilon}}=\epsilon \frac{\int_M \chi'(\frac{\dot{u}_t^{\epsilon}}{||\dot{u}_t^{\epsilon}||_{\chi,u_t^{\epsilon}}})(\omega^T)^n \wedge \eta}{\int_M \frac{\dot{u}_t^{\epsilon}}{||\dot{u}_t^{\epsilon}||_{\chi,u_t^{\epsilon}}} \chi'(\frac{\dot{u}_t^{\epsilon}}{||\dot{u}_t^{\epsilon}||_{\chi,u_t^{\epsilon}}}) \omega_{u_t^{\epsilon}}^n \wedge \eta_{u_t^{\epsilon}} },\quad t \in [0,1].
\end{equation}
\end{lemma}
\begin{proof}
If we choose $\epsilon_0>0$ sufficiently small, then $\dot u^\epsilon_t$ is not identically zero for any $t\in [0, 1]$ and $\epsilon \in (0, \epsilon_0]$, given $u_0\neq u_1$; see Lemma \ref{distance}. Then the result follows from Proposition \ref{derivativeoflength}.
\end{proof}
We have the following, similar to Lemma \ref{distance} (for $d_2$),
\begin{prop}\label{tangentestimate}
Suppose $\chi \in \mathcal{W}_p^+ \cap C^{\infty}(\mathbb{R})$ and $u_0,u_1 \in \mathcal{H}$, $u_0 \neq u_1$. Then there exists $\epsilon_0>0$ such that for any $\epsilon \in (0,\epsilon_0]$ the $\epsilon$-geodesic $[0,1] \owns t \rightarrow u_{t}^{\epsilon} \in \mathcal{H} $ connecting $u_0,u_1$ satisfies
\begin{itemize}
\item[(i)] $||\dot{u}_t^{\epsilon}||_{\chi, u_t^{\epsilon}}>R_0, \; t \in [0,1]$;
\item[(ii)] $|\frac{d}{dt} ||\dot{u}_t^{\epsilon}||_{\chi, u_t^{\epsilon}}| \leq \epsilon R_1, \; t \in [0,1]$,
\end{itemize}
where $\epsilon_0,R_0,R_1$ depend on upper bounds for $||u_0||_{C^2(M)},||u_1||_{C^2(M)}$ and lower bounds for $||\chi(u_1-u_0)||_{L^1((\omega^T)^n \wedge \eta)}$, $\frac{\omega_{u_0}^n \wedge \eta_{u_0}}{ (\omega^T)^n\wedge \eta}$ and $\frac{\omega_{u_1}^n \wedge \eta_{u_1}}{ (\omega^T)^n\wedge \eta}$.
\end{prop}
\begin{proof}
\begin{itemize}
\item[(i)] Recall the equation $(1.11)$ in \cite{D4}
\begin{equation*}
||f||_{\chi,\mu} \geq \min\{\frac{\int_{\Omega} \chi(f)d\mu}{\chi(1)} ,(\frac{\int_{\Omega} \chi(f)d\mu}{\chi(1)})^{\frac{1}{p}}\}
\end{equation*}
Combined with Lemma \ref{kai length estimate}, the estimate in (i) follows immediately.
\item[(ii)] Choose $\epsilon_0$ small so that Lemma \ref{derivativeofsmoothgeodesics} applies. Recall the Young identity
\begin{equation*}
\chi(a)+\chi^*(\chi'(a))=a\chi'(a),\quad a \in \mathbb{R},\; \chi'(a) \in \partial \chi(a).
\end{equation*}
Then we have
\begin{equation}
\begin{split}
|\frac{d}{dt} ||\dot{u}_t^{\epsilon}||_{\chi, u_t^{\epsilon}}| &=\epsilon \frac{|\int_M \chi'(\frac{\dot{u}_t^{\epsilon}}{||\dot{u}_t^{\epsilon}||_{\chi,u_t^{\epsilon}}})(\omega^T)^n \wedge \eta|}{\int_M \frac{\dot{u}_t^{\epsilon}}{||\dot{u}_t^{\epsilon}||_{\chi,u_t^{\epsilon}}} \chi'(\frac{\dot{u}_t^{\epsilon}}{||\dot{u}_t^{\epsilon}||_{\chi,u_t^{\epsilon}}}) \omega_{u_t^{\epsilon}}^n \wedge \eta_{u_t^{\epsilon}} } \\
&=\epsilon \frac{|\int_M \chi'(\frac{\dot{u}_t^{\epsilon}}{||\dot{u}_t^{\epsilon}||_{\chi,u_t^{\epsilon}}})(\omega^T)^n \wedge \eta|}{\chi(1)+\int_M \chi^*(\chi'(\frac{\dot{u}_t^{\epsilon}}{||\dot{u}_t^{\epsilon}||_{\chi,u_t^{\epsilon}}})) \omega_{u_t^{\epsilon}}^n \wedge \eta_{u_t^{\epsilon}} } \\
&\leq \frac{\epsilon}{\chi(1)} |\int_M \chi'(\frac{\dot{u}_t^{\epsilon}}{||\dot{u}_t^{\epsilon}||_{\chi,u_t^{\epsilon}}})(\omega^T)^n \wedge \eta|
\end{split}
\end{equation}
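The second equality above follows from the Young identity together with the normalization built into the Orlicz norm: since the norm is attained, $\int_M \chi\big(\dot{u}_t^{\epsilon}/||\dot{u}_t^{\epsilon}||_{\chi, u_t^{\epsilon}}\big)\, \omega_{u_t^{\epsilon}}^n \wedge \eta_{u_t^{\epsilon}}=\chi(1)$, hence
\[
\int_M \frac{\dot{u}_t^{\epsilon}}{||\dot{u}_t^{\epsilon}||_{\chi, u_t^{\epsilon}}}\, \chi'\Big(\frac{\dot{u}_t^{\epsilon}}{||\dot{u}_t^{\epsilon}||_{\chi, u_t^{\epsilon}}}\Big)\, \omega_{u_t^{\epsilon}}^n \wedge \eta_{u_t^{\epsilon}}=\chi(1)+\int_M \chi^*\Big(\chi'\Big(\frac{\dot{u}_t^{\epsilon}}{||\dot{u}_t^{\epsilon}||_{\chi, u_t^{\epsilon}}}\Big)\Big)\, \omega_{u_t^{\epsilon}}^n \wedge \eta_{u_t^{\epsilon}}.
\]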
Then the estimate in (ii) follows from (i) and the fact that $\dot{u}_t^{\epsilon}$ is uniformly bounded in terms of $||u_0||_{C^2(M)},||u_1||_{C^2(M)}$.
\end{itemize}
\end{proof}
Next we prove the triangle inequality, as in Lemma \ref{triangle01} for $d_2$ and \cite{D2}[Proposition 3.4] in the K\"ahler setting,
\begin{prop}\label{triangleinequality}
Suppose $\chi \in \mathcal{W}_p^+ \cap C^{\infty}(\mathbb{R})$, $\psi_s \in \mathcal{H}$ is a smooth curve, $\phi \in \mathcal{H}\backslash \psi([0,1])$ and $\epsilon>0$. Let $u^{\epsilon} \in C^{\infty}([0,1] \times [0,1] \times M)$ be the smooth function for which $t \rightarrow u_t^{\epsilon}(\cdot,s)=u^{\epsilon}(t,s,\cdot)$ is the $\epsilon$-geodesic connecting $\phi$ and $\psi_s$. There exists $\epsilon_0(\phi,\psi)>0$ such that for any $\epsilon \in (0,\epsilon_0)$ the following holds:
\begin{equation*}
l_{\chi}(u_t^{\epsilon}(\cdot,0)) \leq l_{\chi}(\psi_s)+l_{\chi}(u_t^{\epsilon}(\cdot,1)) +\epsilon R
\end{equation*}
for some $R(\phi,\psi,\chi,\epsilon_0)>0$ independent of $\epsilon$.
\end{prop}
\begin{proof}
Fix $s \in [0,1]$. By Proposition \ref{derivativeoflength} and Proposition \ref{tangentestimate}, there exists a constant $\epsilon_0(\phi,\psi)>0$ such that for $\epsilon \in (0,\epsilon_0)$
\begin{equation*}
\begin{split}
\frac{d}{ds}l_{\chi}(u_t(\cdot,s)) &=\int_0^1 \frac{d}{ds}||\dot{u}(t,s,\cdot)||_{\chi,u(t,s,\cdot)}dt \\
& =\int_0^1 \frac{\int_M \chi'(\frac{\dot{u}}{||\dot{u}||_{\chi,u}})\nabla_{\frac{du}{ds}} \dot{u} d\mu_{u_t}}{\int_M\chi'(\frac{\dot{u}}{||\dot{u}||_{\chi,u}}) \frac{\dot{u}}{||\dot{u}||_{\chi,u}}d\mu_{u_t}}dt \\
&=\int_0^1 \frac{\int_M \chi'(\frac{\dot{u}}{||\dot{u}||_{\chi,u}})\nabla_{\frac{du}{ds}} \dot{u} d\mu_{u_t}}{\chi(1)+\int_M\chi^*(\chi'(\frac{\dot{u}}{||\dot{u}||_{\chi,u}}) )d\mu_{u_t}}dt \\
&=\int_0^1 \frac{\frac{d}{dt}\int_M \chi'(\frac{\dot{u}}{||\dot{u}||_{\chi,u}})\frac{du}{ds} d\mu_{u_t}-\int_M\frac{du}{ds}\nabla_{\dot{u}}(\chi'(\frac{\dot{u}}{||\dot{u}||_{\chi,u}}))d\mu_{u_t}}{\chi(1)+\int_M\chi^*(\chi'(\frac{\dot{u}}{||\dot{u}||_{\chi,u}}) )d\mu_{u_t}}dt
\end{split}
\end{equation*}
Moreover we have
\begin{equation}
\nabla_{\dot{u}}(\chi'(\frac{\dot{u}}{||\dot{u}||_{\chi,u}}))d\mu_{u_t}=\chi''(\frac{\dot{u}}{||\dot{u}||_{\chi,u}})(\frac{\nabla_{\dot{u}}\dot{u}}{||\dot{u}||_{\chi,u}}-\frac{\dot{u}}{||\dot{u}||_{\chi,u}^2}\frac{d}{dt}||\dot{u}||_{\chi,u})d\mu_{u_t}
\end{equation}
It follows from Proposition \ref{tangentestimate} that $||\dot{u}||_{\chi,u}$ is uniformly bounded away from zero, and both $\nabla_{\dot{u}}\dot{u}d\mu_{u_t}$ and $\frac{d}{dt} ||\dot{u}||_{\chi, u}$ are bounded by terms of the form $\epsilon R$, where $R$ is uniformly bounded.
Moreover, $\dot{u},\frac{du}{ds}$ are uniformly bounded independently of $\epsilon$ \cite{GZ1}[Lemma 14]. Hence
\begin{equation*}
\frac{d}{ds}l_{\chi}(u_t(\cdot,s))=\int_0^1 \frac{\frac{d}{dt}\int_M \chi'(\frac{\dot{u}}{||\dot{u}||_{\chi,u}})\frac{du}{ds} d\mu_{u_t}}{\chi(1)+\int_M\chi^*(\chi'(\frac{\dot{u}}{||\dot{u}||_{\chi,u}}) )d\mu_{u_t}}dt +\epsilon R
\end{equation*}
where $R$ is uniformly bounded independently of $\epsilon$.
Since $(\chi^*)'(\chi'(l))=l$ for $l \in \mathbb{R}$, the expression
\begin{equation*}
\frac{d}{dt} (\chi(1)+\int_M\chi^*(\chi'(\frac{\dot{u}}{||\dot{u}||_{\chi,u}}) )d\mu_{u_t})=\int_M \frac{\dot{u}}{||\dot{u}||_{\chi,u}} \chi''(\frac{\dot{u}}{||\dot{u}||_{\chi,u}}) \nabla_{\dot{u}}(\frac{\dot{u}}{||\dot{u}||_{\chi, u}})d\mu_{u_t}
\end{equation*}
is a term of type $\epsilon R$. Hence we can write
\begin{equation*}
\begin{split}
\frac{d}{ds}l_{\chi}(u_t(\cdot,s)) &=\int_0^1 \frac{d}{dt}\frac{\int_M \chi'(\frac{\dot{u}}{||\dot{u}||_{\chi,u}})\frac{du}{ds} d\mu_{u_t}}{\chi(1)+\int_M\chi^*(\chi'(\frac{\dot{u}}{||\dot{u}||_{\chi,u}}) )d\mu_{u_t}}dt +\epsilon R \\
&= \frac{\int_M \chi'(\frac{\dot{u}(1,s)}{||\dot{u}(1,s)||_{\chi,\psi}})\frac{d\psi}{ds} d\mu_{\psi}}{\chi(1)+\int_M\chi^*(\chi'(\frac{\dot{u}(1,s)}{||\dot{u}(1,s)||_{\chi,\psi}}) )d\mu_{\psi}}+\epsilon R \\
&\geq -||\frac{d\psi}{ds}||_{\chi,\psi}-\epsilon R
\end{split}
\end{equation*}
where in the second line the boundary term at $t=0$ vanishes since $u(0,s,\cdot)=\phi$ is independent of $s$, and the last line follows from the Young inequality
\begin{equation*}
\chi(a)+\chi^*(b) \geq ab ,\quad a,b \in \mathbb{R}.
\end{equation*}
The integration of the above inequality with respect to $s \in [0,1]$ yields the desired inequality.
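Explicitly,
\[
l_{\chi}(u_t^{\epsilon}(\cdot,1))-l_{\chi}(u_t^{\epsilon}(\cdot,0))=\int_0^1 \frac{d}{ds}\,l_{\chi}(u_t^{\epsilon}(\cdot,s))\,ds\geq -\int_0^1\Big\|\frac{d\psi}{ds}\Big\|_{\chi,\psi_s}\,ds-\epsilon R=-l_{\chi}(\psi_s)-\epsilon R,
\]
where, as above, $R$ denotes a quantity uniformly bounded independently of $\epsilon$ (possibly changing from line to line); rearranging gives the stated inequality.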
\end{proof}
Now we are ready to prove Theorem \ref{darvas1}. The proof closely follows Darvas's argument in the K\"ahler setting \cite{D2}[Section 3].
\begin{proof}
First we show that for $u_0,u_1 \in \mathcal{H}$ and the weak $C^{1,\overline{1}}$-geodesic $u_t$ connecting $u_0,u_1$
\begin{equation}
d_{\chi}(u_0,u_1)=l_{\chi}(u_t)
\end{equation}
We assume $u_0 \neq u_1$, and first assume $\chi \in C^{\infty}(\mathbb{R})$. Recall that, by Guan-Zhang \cite{GZ1}, the $\epsilon$-geodesics $u_t^{\epsilon}$ connecting $u_0,u_1$ converge to the weak $C^{1,\overline{1}}_B$ geodesic $u_t$ in $C^{1,\alpha}$. Hence $\dot{u}_t^{\epsilon}$ converges uniformly to $\dot{u}_t$.
\begin{clm}
$||\dot{u}_t^{\epsilon}||_{\chi, u_t^{\epsilon}} \rightarrow ||\dot{u}_t||_{\chi, u_t}$ as $\epsilon \rightarrow 0$.
\end{clm}
Since $\dot{u}_t^{\epsilon}$ is uniformly bounded in terms of $||u_0||_{C^2(M)},||u_1||_{C^2(M)}$, by the estimate (i) in Proposition \ref{tangentestimate} there exist constants $0<C_1<C_2$ such that for sufficiently small $\epsilon>0$
\begin{equation*}
C_1 \leq ||\dot{u}_t^{\epsilon}||_{\chi, u_t^{\epsilon}} \leq C_2
\end{equation*}
Then the claim follows immediately if we can prove that the only cluster point of $\{||\dot{u}_t^{\epsilon}||_{\chi, u_t^{\epsilon}}\}_{\epsilon}$ is $||\dot{u}_t||_{\chi, u_t}$. Take a cluster point $N$; after passing to a subsequence, we can assume that $||\dot{u}_t^{\epsilon}||_{\chi, u_t^{\epsilon}}\rightarrow N$ as $\epsilon \rightarrow 0$. Then $\frac{\dot{u}_t^{\epsilon}}{||\dot{u}_t^{\epsilon}||_{\chi, u_t^{\epsilon}}}$ converges to $\frac{\dot{u}_t}{N}$ uniformly.
Moreover, we have $\omega_{u_t^{\epsilon}}^n \wedge \eta_{u_t^{\epsilon}}$ converges to $\omega_{u_t}^n \wedge \eta_{u_t}$ weakly. Hence
\begin{equation*}
\chi(1)=\int_M \chi(\frac{\dot{u}_t^{\epsilon}}{||\dot{u}_t^{\epsilon}||_{\chi, u_t^{\epsilon}}}) \omega_{u_t^{\epsilon}}^n \wedge \eta_{u_t^{\epsilon}} \rightarrow \int_M \chi(\frac{\dot{u}_t}{N}) \omega_{u_t}^n \wedge \eta_{u_t}
\end{equation*}
Recall $||f||_{\chi,\mu} =\alpha>0$ if and only if $\int_{\Omega} \chi(\frac{f}{\alpha})d\mu=\chi(1)$. Hence $N=||\dot{u}_t||_{\chi, u_t}$.
Then it follows from the dominated convergence theorem that
\begin{equation}\label{smoothapproximation}
\lim\limits_{\epsilon \rightarrow 0} l_{\chi}(u_t^{\epsilon}) =l_{\chi}(u_t)
\end{equation}
and $d_{\chi}(u_0,u_1) \leq l_{\chi}(u_t)$.
Next we show that
\begin{equation}
l_{\chi}(\phi_t) \geq l_{\chi}(u_t)
\end{equation}
for all smooth curves $\phi_t$ in $\mathcal{H}$ connecting $u_0,u_1$.
We can assume that $u_1 \notin \phi([0,1))$ and take $h \in [0,1)$. Applying Proposition \ref{triangleinequality} with $u_1$ in place of $\phi$ and the curve $\phi|_{[0,h]}$ (reparametrized on $[0,1]$) in place of $\psi_s$, and letting $\epsilon \rightarrow 0$, we obtain
\begin{equation*}
l_{\chi}(u_{t}) \leq l_{\chi}(\phi_t|_{[0,h]})+l_{\chi}(w_t^h)
\end{equation*}
where $u_{t}$ is the $C^{1,\bar 1}$ geodesic connecting $u_1, u_0$ and $w_t^h$ is the $C^{1,\bar 1}$ geodesic connecting $u_1,\phi_h$.
By Lemma \ref{cma10}, $l_{\chi}(w_t^h) \rightarrow 0$ as $h \rightarrow 1$. Hence $l_{\chi}(\phi_t) \geq l_{\chi}(u_t)$.
For the general weight $\chi \in \mathcal{W}_p^+$,
we need an approximation argument as in \cite{D2}[Proposition 2.4]. There exists a sequence $ \chi_k \in \mathcal{W}_{p_k}^+ \cap C^{\infty}(\mathbb{R})$ such that $\chi_k$ converges to $\chi$ uniformly on compact subsets. Then we have
\begin{equation*}
\int_0^1 ||\dot{\phi}_t||_{\chi_k,\phi_t} dt=l_{\chi_k}(\phi_t) \geq l_{\chi_k}(u_t)=\int_0^1 ||\dot{u}_t||_{\chi_k,u_t} dt
\end{equation*}
and $||\dot{\phi}_t||_{\chi_k,\phi_t} \rightarrow ||\dot{\phi}_t||_{\chi,\phi_t}$, $||\dot{u}_t||_{\chi_k,u_t} \rightarrow ||\dot{u}_t||_{\chi, u_t}$. Moreover, $\dot{u}_t,\dot{\phi}_t$ are uniformly bounded. By the dominated convergence theorem, $l_{\chi}(\phi_t) \geq l_{\chi}(u_t)$.
Recall that $l_{\chi}(u_t)=\int_0^1 ||\dot{u}_t||_{\chi,u_t} dt$; by Lemma \ref{arcparameter} below, we have
\begin{equation*}
d_\chi(u_0, u_1)=\|\dot u_t\|_{\chi, u_t}, \quad t\in [0, 1].\end{equation*}
Suppose $u_0 \neq u_1 \in \mathcal{H}$. Letting $\epsilon \rightarrow 0$ in the estimate of Lemma \ref{kai length estimate}, we obtain $\dot{u}_0 \not\equiv 0$ and $d_{\chi}(u_0,u_1)=||\dot{u}_0||_{\chi,u_0} >0$. This implies that $(\mathcal{H},d_{\chi})$ is a metric space.
\end{proof}
\begin{lemma}\label{arcparameter}
Let $u_t$ be the weak $C^{1, \bar 1}_B$-geodesic connecting $u_0,u_1$. Then for any $\chi \in \mathcal{W}_p^+$ and $t_0, t_1 \in [0,1]$ the following holds
\begin{equation}
d_\chi(u_0, u_1)=||\dot{u}_{t_0}||_{\chi, u_{t_0}} =||\dot{u}_{t_1}||_{\chi, u_{t_1}}
\end{equation}
\end{lemma}
\begin{proof}
It has been shown that for the $\epsilon$-geodesics $u^{\epsilon}_t$ joining $u_0,u_1$, we have
\begin{equation*}
||\dot{u}_{t_0}^{\epsilon}||_{\chi, u_{t_0}^{\epsilon}} \rightarrow ||\dot{u}_{t_0}||_{\chi, u_{t_0}}, ||\dot{u}_{t_1}^{\epsilon}||_{\chi, u_{t_1}^{\epsilon}} \rightarrow ||\dot{u}_{t_1}||_{\chi, u_{t_1}}\end{equation*}
as $\epsilon \rightarrow 0$.
Proposition \ref{tangentestimate} implies that
\begin{equation*}
| ||\dot{u}_{t_0}^{\epsilon}||_{\chi, u_{t_0}^{\epsilon}}- ||\dot{u}_{t_1}^{\epsilon}||_{\chi, u_{t_1}^{\epsilon}}| \leq |t_0-t_1|\epsilon R_1
\end{equation*}
Then taking $\epsilon \rightarrow 0$ we have $||\dot{u}_{t_0}||_{\chi, u_{t_0}} =||\dot{u}_{t_1}||_{\chi, u_{t_1}}$.
\end{proof}
Finally, we have the following triangle inequality,
\begin{lemma}\label{triangle200}For $u, v, w\in {\mathcal H}$, $\chi\in {\mathcal W}^+_p, p\geq 1$,
\[
d_\chi(u, w)\leq d_\chi(u, v)+d_\chi(v, w).
\]
\end{lemma}
\section{The metric space $({\mathcal E}_p(M, \xi, \omega^T), d_p)$}
In this section we prove Theorem \ref{pluri01}. We shall follow the K\"ahler setting closely as in \cite{D2}[Section 4], but we shall only consider the $d_p$ distance. Given $u_0, u_1\in {\mathcal E}_p(M, \xi, \omega^T), p\geq 1$, by Lemma \ref{BK} there exist decreasing sequences $u_0^k, u_1^k\in {\mathcal H}$ such that $u_0^k\searrow u_0$ and $u_1^k\searrow u_1$. We shall prove that the following formula for the distance $d_p$ is well-defined,
\begin{equation}\label{distance10}
d_p(u_0, u_1)=\lim_{k\rightarrow \infty} d_p(u_0^k, u_1^k)
\end{equation}
and the definition in \eqref{distance10} coincides with \eqref{dis10} (we only consider $\chi(l)=|l|^p/p$).
We have the following
\begin{thm}\label{distance11}$({\mathcal E}_p, d_p)$ is a geodesic metric space extending $({\mathcal H}, d_p)$.
\end{thm}
We start with the notion of generalized solutions of complex Monge-Ampere in the sense of Bedford-Taylor in the Sasaki setting, which was considered by van Coevering in \cite{VC}, who adapted the complex Monge-Ampere operator for basic functions in $\text{PSH}(M, \xi, \omega^T)\cap L^\infty$ to the Sasaki setting. van Coevering discussed in particular weak solutions in $\text{PSH}(M, \xi, \omega^T)\cap C^0(M)$ \cite{VC}[Section 2.4].
Let $S=[0, 1]\times S^1$ be the cylinder and $N=M\times S$. Then $N$ is a manifold of dimension $2n+3$ with boundary, and $N$ carries a transverse holomorphic structure, simply the product of the transverse holomorphic structure on $M$ and the holomorphic structure on $S$. A path $\phi: [0, 1]\rightarrow C^\infty_B(M)$ corresponds to an $S^1$-invariant function $\Phi$ on $N$, with $\Phi(\cdot, w)=\phi_{\text{Re}(w)}$.
If $\phi_t$ is a smooth path in ${\mathcal H}$ then a direct computation gives,
\begin{equation}\label{vc01}
(\pi^*\omega^T+\sqrt{-1}\partial_B\bar\partial_B \Phi)^{n+1}=c_m(\ddot \phi-|\nabla\dot\phi|^2_{\omega^T_{\phi_t}}) (\omega^T_{\phi_t})^n\wedge{dw\wedge d\bar w}
\end{equation}
Note that this choice of complexification (see \eqref{vc01}) differs from the choice of Guan-Zhang \eqref{cma03}. It seems that \eqref{vc01} is more natural for discussing weak solutions. By \eqref{vc01}, a smooth geodesic then corresponds to a solution of the homogeneous complex Monge-Ampere equation for a basic function $\Phi: N\rightarrow \mathbb{R}$,
\[
(\pi^*\omega^T+\sqrt{-1}\partial_B\bar\partial_B \Phi)^{n+1}\wedge \eta=0.
\]
We define a \emph{weak geodesic} between $u_0, u_1\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$ as follows: it is a function $\Phi(\cdot, w)=\Phi(\cdot, t)\in \text{PSH}({N}^{\circ}, \xi, \pi^*\omega^T)\cap L^\infty$ $(t=\text{Re}(w))$ satisfying
\begin{equation}\label{bounded01}
\begin{cases}
(\pi^*\omega^T+\sqrt{-1}\partial_B\bar\partial_B \Phi)^{n+1}\wedge \eta=0\\
\lim_{t\rightarrow 0}\Phi(\cdot, t)=u_0,\; \lim_{t\rightarrow 1}\Phi(\cdot, t)=u_1
\end{cases}
\end{equation}
We have the following \emph{strong maximum principle}, see \cite{VC}[Theorem 2.5.3], \cite{Blocki12}[Theorem 21] and \cite{D4}[Theorem 3.2].
\begin{lemma}\label{strong01}
Let $u, v\in \text{PSH}({N}^{\circ}, \xi, \pi^*\omega^T)\cap L^\infty(N)$. Suppose that
\[
(\pi^*\omega^T+\sqrt{-1}\partial_B\bar\partial_B u)^{n+1}\wedge \eta\leq (\pi^*\omega^T+\sqrt{-1}\partial_B\bar\partial_B v)^{n+1}\wedge \eta
\]
and $\lim_{x\rightarrow \partial N} (u-v)(x)\geq 0$, then $u\geq v$ on $N$.
\end{lemma}
\begin{proof} Our proof is similar to the K\"ahler case, see \cite{D4}[Theorem 3.2]. Fix $\epsilon>0$ and set $v_\epsilon:=\max\{u, v-\epsilon\}\in \text{PSH}(N^\circ, \xi, \pi^*\omega^T)\cap L^\infty$. Then $v_\epsilon=u$ near the boundary $\partial N=M\times S^1\times \{t=0\}\cup M\times S^1\times \{t=1\}$. Since $\epsilon>0$ is arbitrary, it is enough to show that $u=v_\epsilon$ on $N$.
We write $N=M\times S$ and $\omega_u=\pi^*\omega^T+dd^c_B u$ etc.
Note that on each foliation chart $W_\alpha=(-\delta, \delta)\times V_\alpha$ of $M$, we have the following inequality on $V_\alpha\times S$ for complex Monge-Ampere measure \cite{BN}[Theorem 2.2.10]
\[
\omega_{v_\epsilon}^{n+1}\geq \chi_{\{u\geq v-\epsilon\}\cap V_\alpha} \omega_u^{n+1}+ \chi_{\{u< v-\epsilon\}\cap V_\alpha} \omega_v^{n+1}\geq \omega_u^{n+1}
\]
It follows that on $N$, we have
\[
\omega_{v_\epsilon}^{n+1}\wedge \eta\geq \omega_u^{n+1}\wedge \eta
\]
Then we have the following,
\begin{equation}\label{i001}
0\leq \int_N (v_\epsilon-u)(\omega_{v_\epsilon}^{n+1}-\omega_u^{n+1})\wedge \eta
\end{equation}
Using integration by parts, we obtain that
\[
\int_N d(u-v_\epsilon)\wedge d^c_B(u-v_\epsilon)\wedge \omega_u^k\wedge \omega_{v_\epsilon}^{n-k}\wedge \eta=0, 0\leq k\leq n.
\]
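Indeed, writing $\omega_{v_\epsilon}^{n+1}-\omega_u^{n+1}=dd^c_B(v_\epsilon-u)\wedge \sum_{k=0}^n \omega_u^k\wedge\omega_{v_\epsilon}^{n-k}$ in \eqref{i001} and integrating by parts (there is no boundary contribution since $v_\epsilon=u$ near $\partial N$, and no contribution from $d\eta$ since the resulting basic forms exceed the transverse dimension), we get
\[
0\leq \int_N (v_\epsilon-u)\, dd^c_B(v_\epsilon-u)\wedge\sum_{k=0}^{n}\omega_u^k\wedge\omega_{v_\epsilon}^{n-k}\wedge\eta=-\sum_{k=0}^{n}\int_N d(v_\epsilon-u)\wedge d^c_B(v_\epsilon-u)\wedge \omega_u^k\wedge\omega_{v_\epsilon}^{n-k}\wedge\eta\leq 0,
\]
and since each summand on the right-hand side is nonpositive, each must vanish.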
By an induction argument as in \cite{D4}[Theorem 3.2], we can prove that
\[
\int_N d(u-v_\epsilon)\wedge d^c_B(u-v_\epsilon)\wedge \omega_u^k\wedge (\pi^*\omega^T)^{n-k}\wedge \eta=0, 0\leq k\leq n.
\]
For $k=n$, this shows that
\[
\int_{M\times S} d(u-v_\epsilon)\wedge d^c_B(u-v_\epsilon)\wedge (\pi^*\omega^T)^n\wedge \eta=0.
\]
Writing $\rho=u-v_\epsilon$, this reads
\[\int_{M\times S} |\partial_t\rho|^2 dt\wedge ds \wedge (\pi^*\omega^T)^n\wedge \eta=0\]
Hence $\partial_t\rho=0$. Since $\rho=0$ near the boundary $\partial N=M\times S^1\times \{t=0\}\cup M\times S^1\times \{t=1\}$, this shows that $\rho=0$. This completes the proof.
\end{proof}
\begin{rmk}One can certainly formulate a general version of the comparison principle as in \cite{D4}[Theorem 3.2]. But one would certainly need a (transverse) K\"ahler form. Note that $\pi^*\omega^T$ is not transverse K\"ahler (it is zero along the $S$-direction). Here we use the product structure of $N$.
\end{rmk}
With this maximum principle for bounded transverse plurisubharmonic functions, we have the following,
\begin{lemma}\label{strong02}Given $u_0, u_1\in {\mathcal H}$, let $u_t: [0, 1]\rightarrow {\mathcal H}$ be the unique $C^{1, \bar 1}_B$ geodesic connecting $u_0, u_1$. Then we have the following,
\[
\|\dot u_t\|_{C^0}\leq \|u_0-u_1\|_{C^0}, \forall t\in [0, 1].
\]
\end{lemma}
\begin{proof} Note that this gives a much sharper estimate than Lemma \ref{zero01}. The proof follows the K\"ahler setting \cite{D4}[Lemma 3.5]. Denote $C=\max|u_0- u_1|$. By the convexity of $u$ in the $t$-variable, we know that
\[
\dot u_0\leq \dot u_t\leq \dot u_1.
\]
Note that $v_t=u_0-Ct$ is a smooth geodesic connecting $u_0$ and $u_0-C$. Hence its complexification gives a solution to \eqref{bounded01}. By Lemma \ref{strong01}, we know that $v_t\leq u_t$ for $t\in [0, 1]$, since $u_0-C\leq u_1$. It follows that $-C\leq \dot u_0$. Similarly one can prove that $\dot u_1\leq C$, by considering $\tilde v_t=u_1-C(1-t)$.
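For clarity, the passage from the comparison $v_t\leq u_t$ (with $v_0=u_0$) to the derivative bound can be written out:
\[
\dot u_0=\lim_{t\rightarrow 0^+}\frac{u_t-u_0}{t}\geq \lim_{t\rightarrow 0^+}\frac{v_t-v_0}{t}=-C.
\]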
\end{proof}
\begin{rmk}The upper envelop construction was used to construct bounded weak geodesic segment in K\"ahler setting by Berndtsson \cite{Berndtsson}, where he proved that Lemma \ref{strong02} holds for $u_0, u_1\in \text{PSH}(M, \omega)$ (when $(M, \omega)$ is K\"ahler). A direct adaption to Sasaki setting using Lemma \ref{strong01} would lead to an extension of Berndtsson's result to Sasaki setting.
\end{rmk}
In general, $\Phi(\cdot, w)\in \text{PSH}(N^\circ, \xi, \pi^*\omega^T)$ will be called a \emph{weak subgeodesic} if $\Phi(\cdot, w)=\Phi(\cdot, \text{Re}(w))$, i.e. $\Phi$ depends only on $t=\text{Re}(w)$ in the $S$-direction. For $u_0, u_1\in \text{PSH}(M, \xi, \omega^T)$, we define
\begin{equation}\label{weakgeodesic}
u=\sup\{ \Phi: \Phi(\cdot, t)\in \text{PSH}(N^\circ, \xi, \pi^*\omega^T), \lim_{t\rightarrow 0, 1}\Phi(\cdot, t)\leq u_{0, 1}\}
\end{equation}
We have the following,
\begin{prop} $u\in \text{PSH}(N^\circ, \xi, \pi^*\omega^T)$. Denote $u_t=u(\cdot, t)$. We refer to $t\rightarrow u_t$ as the weak geodesic segment connecting $u_0, u_1$.
\end{prop}
\begin{proof}Note that the upper semicontinuous regularization $u^*$ is basic, and $u^*\in \text{PSH}(N^\circ, \xi, \pi^*\omega^T)$. Since each candidate $\Phi$ is convex in the $t$-direction, it follows that $\Phi(\cdot, t)\leq (1-t)u_0+tu_1$. Hence $u_t\leq (1-t)u_0+t u_1$. It follows that
\[
u^*\leq (1-t)u_0+tu_1.
\]
In particular $u^*$ satisfies the boundary conditions in \eqref{weakgeodesic}, so $u^*$ is itself a candidate and $u^*\leq u$ by definition. Since $u\leq u^*$ always holds, it follows that $u^*=u$.
\end{proof}
\begin{prop}\label{coincide4}
If $u_0, u_1 \in \text{PSH}(M, \xi, \omega^T)\cap L^{\infty}(M)$, let $u$ be defined by $(\ref{weakgeodesic})$ and let $u_t=u(\cdot, t)$ be the weak geodesic. Let $C$ be a constant with $C\geq ||u_1-u_0||_{L^{\infty}(M)}$.
\begin{enumerate}
\item We have
\begin{equation}\label{convex4}
\max(u_0 -Ct, u_1-C(1-t)) \leq u_t \leq (1-t) u_0+tu_1
\end{equation}
\item $u_t \in \text{PSH}(M, \xi, \omega^T) \cap L^{\infty}(M)$ and $u$ is the unique solution of (\ref{bounded01}).
\item $u_t$ is uniformly Lipschitz continuous with respect to $t$:
\[
|u_t-u_s| \leq C|s-t|.
\]
for $s, t \in [0,1]$.
\item The derivatives $\dot{u}_0, \dot{u}_1$ exist and
\[
|\dot{u}_0|\leq C, \quad |\dot{u}_1| \leq C.
\]
\end{enumerate}
\end{prop}
\begin{proof}
\begin{enumerate}
\item It is obvious that $u_0-Ct, u_1-C(1-t)$ are weak subgeodesics. It follows from the definition of $u_t$ $(\ref{weakgeodesic})$ that
\[
\max(u_0-Ct, u_1-C(1-t)) \leq u_t
\]
The other half of the inequality comes from the convexity of $u_t$ with respect to $t$.
\item By the inequality $(\ref{convex4})$ we have $u_t \in \text{PSH}(M, \xi, \omega^T) \cap L^{\infty}(M)$ and $\lim\limits_{t \rightarrow 0,1} u_t=u_{0,1}$. Then $u \in \text{PSH}({N}^{\circ}, \xi, \pi^*\omega^T)\cap L^\infty$. Using the classical Perron-Bremermann argument we have $(\pi^*\omega^T+\sqrt{-1}\partial_B\overline{\partial}_B u)^{n+1}\wedge\eta=0$. Hence $u$ is a solution of $(\ref{bounded01})$. The uniqueness of the solution of $(\ref{bounded01})$ follows from the strong maximum principle.
\item If one of $s, t$ equals $0$ or $1$, the required inequality is a direct consequence of $(\ref{convex4})$. If $0<s<t<1$, by the convexity of $u_t$ with respect to $t$ we have
\[
\frac{t-s}{s}(u_s-u_0) \leq u_t-u_s \leq \frac{t-s}{1-s}(u_1-u_s)
\]
and the inequality follows from the endpoint cases proved above.
\item By the convexity of $u_t$ we have
\[
\frac{u_{t_1}-u_0}{t_1} \leq \frac{u_{t_2}-u_0}{t_2}
\]
for $0<t_1 <t_2$. These quantities are uniformly bounded by $C$. Hence $\dot{u}_0$ exists and $|\dot{u}_0| \leq C$. The case of $\dot{u}_1$ follows by a similar argument.
\end{enumerate}
\end{proof}
\begin{rmk}
If $u_0, u_1 \in {\mathcal H}_{\triangle}$, the weak geodesic $u_t$ coincides with the $C_B^{1,\bar1}$ geodesic.
\end{rmk}
\begin{prop}\label{homweakgeodesic}
Let $u_0^k,u_1^k\in\text{PSH}(M,\xi,\omega^T)$ be sequences decreasing to $u_0,u_1\in \text{PSH}(M,\xi,\omega^T)$ respectively, and let $u_t^k,u_t \in \text{PSH}(M,\xi,\omega^T)$ be the weak geodesics connecting $u_0^k,u_1^k$ and $u_0,u_1$ respectively. Then
\begin{enumerate}
\item $u_t^k$ decreases to $u_t$ for $t \in [0,1]$;
\item For any $t_1,t_2\in[0,1]$, $[0,1] \ni t \rightarrow u_{(1-t)t_1+tt_2} \in \text{PSH}(M, \xi, \omega^T)$ is the weak geodesic connecting $u_{t_1}$ and $u_{t_2}$.
\end{enumerate}
\end{prop}
\begin{proof}
\begin{enumerate}
\item By the definition of $u^k_t$ (\ref{weakgeodesic}) it is obvious that $\{u_t^k\}_{k\in\mathbb{N}}$ is decreasing and $v_t := \lim\limits_{k\rightarrow\infty} u_t^k \in \text{PSH}(M, \xi, \omega^T)$. Again by the definition of $u_t^k, u_t$ (\ref{weakgeodesic}) we have $u^k_t \geq u_t$, hence $v_t \geq u_t$.
Recall that $u_t^k$ is convex with respect to $t$. Then $u_t^k \leq (1-t)u_0^k+t u_1^k$ and $v_t \leq (1-t)u_0+t u_1$. It follows from the definition of $u_t$ (\ref{weakgeodesic}) that $v_t \leq u_t$.
Consequently the sequence $u_t^k$ decreases to $u_t$ for $t \in [0,1]$.
\item Recall that $u_0, u_1$ are the decreasing limits of their canonical cutoffs; it follows from part (1) that we only have to prove the proposition for $u_0, u_1 \in L^{\infty}(M)$. Let $v_t:=u_{(1-t)t_1+tt_2}$ be the path connecting $u_{t_1}, u_{t_2}$. By Proposition \ref{coincide4} we have $\lim\limits_{t\rightarrow0,1} v_t=u_{t_1,t_2}$ and $\Phi(\cdot,t)=v_t$ is a solution of the equation $(\ref{bounded01})$ with initial data $u_{t_1},u_{t_2}$. Then it follows from Proposition \ref{coincide4}(2) that $v_t=u_{(1-t)t_1+tt_2}$ is the weak geodesic connecting $u_{t_1}, u_{t_2}$.
\end{enumerate}
\end{proof}
\begin{lemma}[Rooftop formula]\label{rf}Suppose $u_0, u_1\in \text{PSH}(M, \xi, \omega^T)$ and $t\rightarrow u_t$ is the weak geodesic segment connecting $u_0, u_1$. Then
\begin{equation}
\inf_{t\in [0, 1]} (u_t-t\tau)=P(u_0, u_1-\tau), \quad \tau\in \mathbb{R}.
\end{equation}
Moreover, for any $\tau\in \mathbb{R}$, we have
\begin{equation}
\{\dot u_0\geq \tau\}=\{P(u_0, u_1-\tau)=u_0\}.
\end{equation}
If $u_0, u_1\in {\mathcal E}_p(M, \xi, \omega^T)$, then $u_t\in {\mathcal E}_p(M, \xi, \omega^T)$.
\end{lemma}
\begin{proof}First note that $t\rightarrow v_t=u_t-\tau t$ is the weak geodesic connecting $u_0, u_1-\tau$, hence the proof can be reduced to the particular case $\tau=0$.
By definition $P(u_0, u_1)\leq u_0, u_1$. As a result, the constant weak subgeodesic $t\rightarrow h_t:=P(u_0, u_1)$ is a candidate for definition of $u_t$, hence $h_t\leq u_t, t\in [0, 1]$. It follows that $P(u_0, u_1)\leq \inf_{t\in [0, 1]} u_{t}$.
For the other direction, we use the Kiselman minimum principle \cite{Demailly}[Chapter I, Theorem 7.5], which asserts that $w:=\inf_{t\in [0, 1]} u_t\in \text{PSH}(M, \xi, \omega^T)$ (note that $u_t$ is a genuine plurisubharmonic function on foliation charts for each $t$, and $u_t$ is convex in the $t$-variable; hence the Kiselman minimum principle applies, as in the K\"ahler setting). Since $u_t\leq (1-t)u_0+t u_1$, $w$ is a candidate for $P(u_0, u_1)$ and hence $w\leq P(u_0, u_1)$. This completes the proof.
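For the reader's convenience, we also record how the second identity follows from the first. By convexity of $t\rightarrow u_t(x)$, the difference quotients $t^{-1}(u_t(x)-u_0(x))$ are non-decreasing in $t$, hence
\[
\dot u_0(x)\geq \tau \Longleftrightarrow u_t(x)-t\tau\geq u_0(x) \text{ for all } t\in [0, 1] \Longleftrightarrow \inf_{t\in [0, 1]}(u_t(x)-t\tau)=u_0(x),
\]
where the last equivalence uses that the infimum is always $\leq u_0(x)$ (take $t=0$). Combined with the first identity, this gives $\{\dot u_0\geq \tau\}=\{P(u_0, u_1-\tau)=u_0\}$.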
\end{proof}
Now we prove Theorem \ref{distance11}, through a series of propositions and lemmas, following \cite{D2}[Section 4] (and in particular \cite{D4}[Section 3]).
\begin{lemma}\label{dis100}Suppose $u, v\in {\mathcal H}$ with $u\leq v$. We have
\begin{equation}\label{distance12}
\max\left\{\frac{1}{2^{n+p}}\int_M |u-v|^p\omega_u^n\wedge \eta, \int_M |u-v|^p\omega_v^n\wedge\eta\right\}\leq d_p(u, v)^p\leq \int_M |u-v|^p\omega_u^n\wedge\eta
\end{equation}
\end{lemma}
\begin{proof}Let $w_t: [0, 1]\rightarrow {\mathcal H}$ be the $C^{1, \bar 1}_B$ geodesic connecting $u$ and $v$. By Theorem \ref{darvas1}, we have
\begin{equation}\label{distance13}
d_p(u, v)^p=\int_M |\dot w_0|^p\omega_u^n\wedge \eta=\int_M |\dot w_1|^p\omega_v^n\wedge \eta
\end{equation}
By Lemma \ref{strong01}, we have $u\leq w_t$ given $u\leq v$. Since $w_t$ is convex in $t$, it follows that
\begin{equation}\label{distance14}
0\leq \dot w_0\leq v-u\leq \dot w_1.
\end{equation}
It then follows that, by \eqref{distance13} and \eqref{distance14},
\begin{equation}\label{distance15}
\int_M |u-v|^p\omega_v^n\wedge\eta\leq d_p(u, v)^p\leq \int_M |v-u|^p \omega_u^n\wedge \eta.
\end{equation}
Next we use $\omega_u^n\wedge \eta\leq 2^n \omega_{(\frac{u+v}{2})}^n\wedge \eta$ to obtain that
\begin{equation*}
2^{-n}\int_M |u-v|^p\omega_u^n\wedge \eta\leq \int_M |u-v|^p \omega_{(\frac{u+v}{2})}^n\wedge\eta
\end{equation*}
We write the righthand side above as follows and apply \eqref{distance15} for $u\leq (u+v)/2$ to obtain,
\[
2^{-p}\int_M |u-v|^p \omega_{(\frac{u+v}{2})}^n\wedge\eta= \int_M \left|u-\frac{u+v}{2}\right|^p \omega_{(\frac{u+v}{2})}^n\wedge\eta\leq d_p\left(u, \frac{u+v}{2}\right)^p
\]
The lemma below implies that $d_p(u, (u+v)/2)\leq d_p(u, v)$, completing the proof.
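Explicitly, chaining the last two displayed estimates with $d_p(u, \frac{u+v}{2})\leq d_p(u, v)$ gives the remaining lower bound:
\[
\frac{1}{2^{n+p}}\int_M |u-v|^p\omega_u^n\wedge \eta\leq \frac{1}{2^{p}}\int_M |u-v|^p \omega_{(\frac{u+v}{2})}^n\wedge\eta\leq d_p\left(u, \frac{u+v}{2}\right)^p\leq d_p(u, v)^p.
\]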
\end{proof}
\begin{lemma}\label{order1}Suppose $u, v, w\in {\mathcal H}$ and $u\leq v\leq w$. Then we have, \[d_p(u, v)\leq d_p(u, w), d_p(v, w)\leq d_p(u, w)\]
\end{lemma}
\begin{proof}Let $\alpha_t, \beta_t$ be the $C^{1, \bar 1}_B$ geodesic segments connecting $u, v$ and $u, w$ respectively. Since $u\leq v\leq w$, by Lemma \ref{strong01} we have
$u\leq \alpha_t\leq v$ and $u\leq \beta_t\leq w$; moreover, $\alpha_t\leq \beta_t$. Since $\alpha_0=\beta_0$, this gives that $0\leq \dot \alpha_0\leq \dot \beta_0$. Theorem \ref{darvas1} then implies that $d_p(u, v)\leq d_p(u, w)$. Similarly we can prove $d_p(v, w)\leq d_p(u, w)$.
\end{proof}
Next we prove that the distance formula \eqref{distance10} is well-defined and agrees with the original definition \eqref{dis10}.
\begin{lemma}Given $u_0, u_1\in {\mathcal E}_p(M, \xi, \omega^T)$, the limit \eqref{distance10} is finite and independent of the approximating sequences $u_0^k, u_1^k\in {\mathcal H}$.
\end{lemma}
\begin{proof}
First we show that, given $u\in {\mathcal E}_p(M, \xi, \omega^T)$ and a decreasing sequence $\{u_k\}_{k\in \mathbb{N}}\subset {\mathcal H}$ converging to $u$, we have
$d_p(u_l, u_k)\rightarrow 0$ as $l, k\rightarrow \infty$. We can assume that $l\leq k$ and hence $u_k\leq u_l$. Lemma \ref{dis100} then implies that
\[
d_p(u_l, u_k)^p\leq \int_M |u_l-u_k|^p\omega_{u_k}^n\wedge \eta.
\]
Clearly we have $u-u_l\leq u_k-u_l\leq 0$ and $u-u_l, u_k-u_l\in {\mathcal E}_p(M, \xi, \omega_{u_l})$. Hence applying Proposition \ref{fe} for the class ${\mathcal E}_p(M, \xi, \omega_{u_l})$, we obtain that
\begin{equation}\label{dis102}
d_p(u_l, u_k)^p\leq \int_M |u_l-u_k|^p\omega_{u_k}^n\wedge \eta\leq (p+1)^n\int_M |u-u_l|^p\omega_{u_l}^n\wedge \eta.
\end{equation}
As $u_l$ decreases to $u\in {\mathcal E}_p(M, \xi, \omega^T)$, the monotone convergence theorem implies that the righthand side above converges to zero as $l\rightarrow\infty$, hence $d_p(u_l, u_k)\rightarrow 0$ as $l, k\rightarrow\infty$.
Now by Lemma \ref{triangle200}, we know that
\[
|d_p(u_0^l, u_1^l)-d_p(u_0^k, u_1^k)|\leq d_p(u_0^l, u_0^k)+d_p(u_1^l, u_1^k)\rightarrow 0, l, k\rightarrow \infty.
\]
This proves that the limit \eqref{distance10} exists and is finite.
Next we show that the limit is independent of the choice of approximating sequences. Let $v_0^l, v_1^l$ be other approximating sequences. Certainly we can assume the sequences are strictly decreasing, by adding small constants if necessary. Fix $k$ and consider the sequence $\{\max (u_0^{k+1}, v_0^j)\}_{j\in \mathbb{N}}$, which decreases pointwise to $u_0^{k+1}$. By Dini's lemma, the convergence is uniform (for fixed $k$) and hence we can choose $j_k$ sufficiently large such that $v^j_0<u^k_0$ for $j\geq j_k$. Repeating the argument we can assume $v_1^{j}<u^k_1$ for $j\geq j_k$. By the triangle inequality again, we have
\[
|d_p(v_0^j, v_1^j)-d_p(u_0^k, u_1^k)|\leq d_p(v_0^j, u_0^k)+d_p(v_1^j, u_1^k), j\geq j_k
\]
By \eqref{dis102} we know that if $k$ is sufficiently large, $d_p(v_0^j, u_0^k)+d_p(v_1^j, u_1^k)$ is sufficiently small. Hence the distance $d_p(u_0, u_1)$ is independent of the choice of approximating sequence.
\end{proof}
We choose decreasing sequences $\{u_0^k\}_{k\in \mathbb{N}}, \{u_1^k\}_{k\in \mathbb{N}}\subset {\mathcal H}$ such that $u_0^k\searrow u_0, u^k_1\searrow u_1$. We connect $u_0^k, u_1^k$ by the unique $C^{1, \bar 1}$ geodesic segment $u^k_t$. By Lemma \ref{strong01}, it follows that $u^k_t$ decreases in $k$. Hence the limit $\lim_{k\rightarrow \infty} u^k_t$ exists. Using Dini's lemma as above, one can show that the limit does not depend on the choice of approximating sequences. Indeed, the limit coincides with the weak geodesic segment defined above,
\[
u_t=\lim_{k\rightarrow \infty} u^k_t
\]
\begin{lemma}We have that $t\rightarrow u_t$ is a $d_p$-geodesic in the sense that \[d_p(u_{t_1}, u_{t_2})=|t_1-t_2|d_p(u_0, u_1), \quad t_1, t_2\in [0, 1].\]
\end{lemma}
\begin{proof}
Let $\{u_0^k\}_k,\{u_1^k\}_k \in {\mathcal H}$ be sequences strictly decreasing to $u_0,u_1$ respectively and $u_t^k \in {\mathcal H}_{\triangle}$ the $C^{1,\bar 1}$ geodesic connecting $u_0^k,u_1^k$. By Theorem \ref{darvas1} we have
\begin{equation*}
d_p(u_0,u_1)^p=\lim_{k\rightarrow\infty}d_p(u_0^k,u_1^k)^p=\lim_{k\rightarrow\infty}\int_M |\dot{u}_0^k|^p\omega_{u_0^k}^n\wedge\eta
\end{equation*}
For $l \in (0,1)$ the strong maximum principle Lemma \ref{strong01} implies that $u_l^k$ strictly decreases to $u_l$. Then one can choose a sequence $\{w_l^k\}_k \in {\mathcal H}$ such that
\begin{enumerate}
\item $u_l^{k} \leq w_l^k \leq u_l^{k-1}$;
\item For the $C^{1,\bar 1}$ geodesic $v_t^k$ connecting $u_0^k$ and $w_l^k$ with $v_0^k=u_0^k, v_1^k=w_l^k$ we have
\begin{equation*}
\left|\int_M|\dot{v}_0^k|^p \omega_{u_0^k}^n\wedge\eta-l^p\int_M|\dot{u}_0^k|^p\omega_{u_0^k}^n\wedge\eta\right|<\frac{1}{k}
\end{equation*}
\end{enumerate}
In fact there exists a sequence $\varphi^j \in {\mathcal H}$ decreasing to $u_l^k$. By Dini's lemma $\varphi^j$ converges to $u_l^k$ uniformly. It follows from Lemma \ref{tangentapp04} that for $j$ big enough, $w_l^k=\varphi^j$ will satisfy our requirements.
Then we have
\begin{equation*}
d_p(u_0,u_l)^p=\lim_{k\rightarrow\infty}d_p(u_0^k, w_l^k)^p=\lim_{k\rightarrow\infty}\int_M|\dot{v}_0^k|^p\omega_{u_0^k}^n\wedge\eta=l^pd_p(u_0,u_1)^p
\end{equation*}
Hence $d_p(u_0,u_l)=ld_p(u_0,u_1)$ for $l \in [0,1]$.
Without loss of generality we assume that $0 \leq t_1\leq t_2\leq 1$. By Proposition \ref{homweakgeodesic}, $h_t=u_{(1-t)t_2}$ is the weak geodesic connecting $u_{t_2}$ and $u_0$. It follows from the results above that
\[
d_p(u_{t_2},u_{t_1})=(1-\frac{t_1}{t_2})d_p(u_{t_2},u_0)=(t_2-t_1)d_p(u_1,u_0)
\]
This completes the proof.
\end{proof}
\begin{lemma}\label{tangentapp04}
Let $u_0,u_1\in \text{PSH}(M,\xi,\omega^T)\cap L^{\infty}$. Let $\{u_1^k\}_{k\in \mathbb{N}} \subset \text{PSH}(M,\xi,\omega^T)\cap L^{\infty}$ be a sequence decreasing to $u_1$, and let $u_t, u_t^k \in \text{PSH}(M,\xi,\omega^T)\cap L^{\infty}$ be the weak geodesics connecting $u_0,u_1$ and $u_0, u_1^k$ respectively. Then
\[
\lim_{k \rightarrow \infty} \int_M |\dot{u}_0^k|^p\omega_{u_0}^n\wedge\eta=\int_M |\dot{u}_0|^p\omega_{u_0}^n\wedge\eta
\]
\end{lemma}
\begin{proof}
Set $C=\max(||u_1^1-u_0||_{L^{\infty}}, ||u_1-u_0||_{L^{\infty}})$. It follows from Proposition \ref{coincide4} that $||\dot{u}_0||_{L^{\infty}} \leq C, ||\dot{u}_0^k||_{L^{\infty}} \leq C$. By Proposition \ref{homweakgeodesic} the sequence $\{u_t^k\}_{k \in \mathbb{N}}$ decreases to $u_t$, hence the sequence $\{\dot{u}_0^k\}_{k\in\mathbb{N}}$ is decreasing with $\dot{u}_0^k \geq \dot{u}_0$.
Moreover, $\dot{u}_0^k$ decreases to $\dot{u}_0$. If this were not true, we could find $x_0 \in M$ and $a \in \mathbb{R}$ such that $\dot{u}_0^k(x_0) > a > \dot{u}_0(x_0)$ for all $k$. Then there exists $0<t_0<1$ such that $u_t^k(x_0) \geq u_0(x_0)+at > u_t(x_0)$ for $t \in (0, t_0]$. This contradicts the fact that $u_t^k$ decreases to $u_t$.
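The convexity step can be spelled out: since the geodesics $u_t^k, u_t$ share the endpoint $u_0$ and $t\rightarrow u_t^k(x_0)$ is convex, we have
\[
u_t^k(x_0)\geq u_0(x_0)+t\,\dot{u}_0^k(x_0)\geq u_0(x_0)+at, \quad t\in [0, 1],
\]
while $\dot{u}_0(x_0)<a$ forces $u_t(x_0)<u_0(x_0)+at$ for all sufficiently small $t>0$.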
Then the Lemma follows from Lebesgue's dominated convergence theorem.\end{proof}
The following Pythagorean formula plays an essential role in Darvas's results \cite{D1, D2}, and we have its analogue in the Sasaki setting:
\begin{thm}[Pythagorean formula]\label{Pythagorean}
Given $u_0, u_1\in {\mathcal E}_p(M, \xi, \omega^T)$, we have $P(u_0, u_1)\in {\mathcal E}_p(M, \xi, \omega^T)$ and
\begin{equation}
d_p(u_0, u_1)^p=d_p(u_0, P(u_0, u_1))^p+d_p(u_1, P(u_0, u_1))^p.
\end{equation}
\end{thm}
\begin{proof}
First we assume that $u_0,u_1\in{\mathcal H}$. It follows from Theorem \ref{rooftop101} that $P(u_0,u_1) \in {\mathcal H}_{\triangle}$. Let $u_t$ be the $C_B^{1,\bar1}$ geodesic connecting $u_0,u_1$.
Let $v_t$ be the weak geodesic connecting $P(u_0,u_1),u_1$. It follows from the strong maximum principle that $P(u_0,u_1) \leq v_t$ for $t \in [0,1]$. Hence we have $\dot{v}_0 \geq 0$. By Lemma \ref{distancef2}, Lemma \ref{rf}, the definition of rooftop and Lemma \ref{decomposition} we have
\begin{equation*}
\begin{split}
d_p(P(u_0,u_1),u_1)^p &=\int_M|\dot{v}_0|^p\omega_{P(u_0,u_1)}^n\wedge\eta \\
&=\int_{\{\dot{v}_0>0\}} |\dot{v}_0|^p\omega_{P(u_0,u_1)}^n\wedge\eta \\
&=p\int_0^{\infty}s^{p-1}\omega_{P(u_0,u_1)}^n\wedge\eta(\{\dot{v}_0 \geq s\}) ds\\
&=p\int_0^{\infty}s^{p-1}\omega_{P(u_0,u_1)}^n\wedge\eta(\{P(P(u_0,u_1),u_1-s)=P(u_0,u_1)\})ds\\
&=p\int_0^{\infty}s^{p-1}\omega_{P(u_0,u_1)}^n\wedge\eta(\{P(u_0,u_1-s)=P(u_0,u_1)\})ds \\
&=p\int_0^{\infty}s^{p-1}\omega_{u_0}^n\wedge\eta(\{P(u_0,u_1-s)=P(u_0,u_1)=u_0\})ds\\
&=p\int_0^{\infty}s^{p-1}\omega_{u_0}^n\wedge\eta(\{P(u_0,u_1-s)=u_0\})ds \\
&=p\int_0^{\infty}s^{p-1}\omega_{u_0}^n\wedge\eta(\{\dot{u}_0 \geq s\})ds \\
&=\int_{\{\dot{u}_0>0\}} |\dot{u}_0|^p\omega_{u_0}^n\wedge\eta
\end{split}
\end{equation*}
By a similar argument we also have
\begin{equation*}
d_p(u_0,P(u_0,u_1))^p =\int_{\{\dot{u}_0<0\}}|\dot{u}_0|^p\omega_{u_0}^n\wedge\eta
\end{equation*}
Now using Theorem \ref{darvas1} we have
\begin{equation*}
\begin{split}
d_p(u_0,u_1)^p &=\int_M |\dot{u}_0|^p\omega_{u_0}^n\wedge\eta\\
&=\int_{\{\dot{u}_0<0\}}|\dot{u}_0|^p\omega_{u_0}^n\wedge\eta+\int_{\{\dot{u}_0>0\}}|\dot{u}_0|^p\omega_{u_0}^n\wedge\eta\\
&=d_p(u_0,P(u_0,u_1))^p+d_p(P(u_0,u_1),u_1)^p
\end{split}
\end{equation*}
and the Pythagorean formula holds for smooth potentials $u_0,u_1\in {\mathcal H}$.
For the general case we can choose sequences $\{u_0^k\}_{k\in \mathbb{N}}, \{u_1^k\}_{k\in\mathbb{N}} \in {\mathcal H}$ decreases to $u_0,u_1$ respectively. Then the sequence $P(u_0^k, u_1^k) \in {\mathcal H}_{\triangle}$ decreases to $P(u_0,u_1)$ and the Pythagorean formula follows from Lemma \ref{approximation4}.
\end{proof}
\begin{lemma}\label{distancef2}
Let $u_t$ be the weak geodesic connecting $u_0,u_1 \in {\mathcal H}_{\triangle}$. Then the following holds:
\[
d_p(u_0,u_1)^p=\int_M |\dot{u}_0|^p\omega_{u_0}^n\wedge\eta=\int_M|\dot{u}_1|^p\omega_{u_1}^n\wedge\eta
\]
\end{lemma}
\begin{proof}
Note that $v_t=u_{1-t}$ is the weak geodesic connecting $u_1,u_0$. By Lemma \ref{rf} we have
\begin{equation*}
\begin{split}
\{P(u_0+s,u_1)<u_1\} &=M-\{P(u_0+s,u_1)=u_1\} \\
&=M-\{\dot{v}_0 \geq -s\} \\
&=\{\dot{u}_1 >s\}
\end{split}
\end{equation*}
Recall that $\omega_{u_1}^n\wedge\eta$ has finite total measure $\text{Vol}(M)$; hence, for all but countably many $s \in \mathbb{R}$, we have $\omega_{u_1}^n\wedge\eta(\{u_0=u_1-s\})=0$ and $\omega_{u_1}^n\wedge\eta(\{\dot{u}_1 \geq s\})=\omega_{u_1}^n\wedge\eta(\{\dot{u}_1>s\})$. For such $s$, it follows from Lemma \ref{decomposition} that
\begin{equation*}
\omega_{P(u_0,u_1-s)}^n\wedge\eta=\chi_{\{P(u_0,u_1-s)=u_0\}} \omega_{u_0}^n\wedge\eta+\chi_{\{P(u_0,u_1-s)=u_1-s\}} \omega_{u_1}^n\wedge\eta
\end{equation*}
and
\begin{equation*}
\text{Vol}(M)=\omega_{u_0}^n\wedge\eta(\{P(u_0,u_1-s)=u_0\})+\omega_{u_1}^n\wedge\eta(\{P(u_0,u_1-s)=u_1-s\})
\end{equation*}
It follows from Lemma \ref{rf}, the definition of rooftop envelope that
\begin{equation*}
\begin{split}
\int_{\{\dot{u}_0>0\}} |\dot{u}_0|^p\omega_{u_0}^n\wedge\eta &= p\int_0^{\infty} s^{p-1} \omega_{u_0}^n\wedge\eta(\{\dot{u}_0 \geq s\})ds \\
&= p\int_0^{\infty} s^{p-1} \omega_{u_0}^n\wedge\eta(\{P(u_0,u_1-s)=u_0\})ds \\
&=p\int_0^{\infty} s^{p-1}(\text{Vol}(M)-\omega_{u_1}^n\wedge\eta(\{P(u_0,u_1-s)=u_1-s\}))ds \\
&=p\int_0^{\infty} s^{p-1}\omega_{u_1}^n\wedge\eta(\{P(u_0,u_1-s) <u_1-s\})ds \\
&=p\int_0^{\infty} s^{p-1}\omega_{u_1}^n\wedge\eta(\{P(u_0+s,u_1) < u_1\})ds \\
&=p\int_0^{\infty} s^{p-1}\omega_{u_1}^n\wedge\eta(\{\dot{u}_1> s\}) ds \\
&= p\int_0^{\infty} s^{p-1}\omega_{u_1}^n\wedge\eta(\{\dot{u}_1 \geq s\}) ds \\
&=\int_{\{\dot{u}_1>0\}} |\dot{u}_1|^p\omega_{u_1}^n\wedge\eta
\end{split}
\end{equation*}
A similar argument gives that
\begin{equation*}
\int_{\{\dot{u}_0<0\}} |\dot{u}_0|^p\omega_{u_0}^n\wedge\eta=\int_{\{\dot{u}_1<0\}} |\dot{u}_1|^p\omega_{u_1}^n\wedge\eta
\end{equation*}
It follows that
\begin{equation*}
\int_M |\dot{u}_0|^p\omega_{u_0}^n\wedge\eta=\int_M |\dot{u}_1|^p\omega_{u_1}^n\wedge\eta
\end{equation*}
Now choose sequences $\{u_0^k\}_{k\in\mathbb{N}}, \{u_1^k\}_{k\in\mathbb{N}} \subset {\mathcal H}$ decreasing to $u_0,u_1$ respectively. Let $u_t^{kl},u_t$ be the $C_B^{1,\bar1}$ geodesics connecting $u_0^k, u_1^l$ and $u_0,u_1$ respectively. Let $u_t^k$ be the $C_B^{1,\bar1}$ geodesic connecting $u_0^k, u_1$.
It follows from Lemma \ref{approximation4}, Lemma \ref{tangentapp04} and the above results that
\begin{equation*}
d_p(u_0^k,u_1)^p=\lim_{l\rightarrow\infty}d_p(u_0^k,u_1^l)^p=\lim_{l\rightarrow\infty}\int_M |\dot{u}_0^{kl}|^p\omega_{u_0^k}^n\wedge\eta=\int_M|\dot{u}_0^k|^p\omega_{u_0^k}^n\wedge\eta=\int_M |\dot{u}_1^k|^p\omega_{u_1}^n\wedge\eta
\end{equation*}
Then using Lemma \ref{approximation4}, Proposition \ref{homweakgeodesic} and Lemma \ref{tangentapp04}, we have
\[
d_p(u_0,u_1)^p=\lim_{k\rightarrow\infty}d_p(u_0^k,u_1)^p=\lim_{k\rightarrow\infty}\int_M|\dot{u}_1^k|^p\omega_{u_1}^n\wedge\eta=\int_M|\dot{u}_1|^p\omega_{u_1}^n\wedge\eta
\]
This completes the proof.
\end{proof}
\begin{lemma}\label{order4}
Assume that $u, v\in {\mathcal E}_p(M,\xi,\omega^T)$ with $u\leq v$. Then we have
\begin{equation*}
\max\left(\frac{1}{2^{n+p}}\int_M |v-u|^p\omega_u^n\wedge\eta,\int_M|u-v|^p\omega_v^n\wedge\eta\right) \leq d_p(u, v)^p \leq \int_M |v-u|^p \omega_u^n\wedge\eta
\end{equation*}
\end{lemma}
\begin{proof}
First we choose $u_k, w_k \in {\mathcal H}$ strictly decreasing to $u, v$ respectively. Then $\max(u_k, w_k) \in \text{PSH}(M,\xi,\omega^T)$ are continuous and strictly decrease to $v$. By Dini's lemma there exists $v_k \in {\mathcal H}$ such that $\max(u_{k-1},w_{k-1}) \geq v_k \geq \max(u_k, w_k)$. Then $v_k$ decreases to $v$ and $u_k \leq v_k$. It follows from Lemma \ref{dis100} that
\begin{equation*}
\max\left(\frac{1}{2^{n+p}}\int_M |v_k-u_k|^p\omega_{u_k}^n\wedge\eta,\int_M|u_k-v_k|^p\omega_{v_k}^n\wedge\eta\right) \leq d_p(u_k, v_k)^p \leq \int_M |v_k-u_k|^p \omega_{u_k}^n\wedge\eta
\end{equation*}
By Proposition \ref{weakcon3} the required inequality follows as $k \rightarrow \infty$.
\end{proof}
\begin{lemma}\label{approximation4}
If the sequences $\{u_k\}_{k\in \mathbb{N}} , \{v_k\}_{k \in \mathbb{N}}\subset {\mathcal E}_p(M,\xi,\omega^T)$ decrease (increase) to $u ,v\in {\mathcal E}_p(M,\xi,\omega^T)$ respectively, then $d_p(u_k, v_k) \rightarrow d_p(u, v)$ as $k \rightarrow \infty$. In particular, $d_p(u_k,u) \rightarrow 0$.
\end{lemma}
\begin{proof}
If the sequence $\{u_k\}_{k\in \mathbb{N}}$ is decreasing, using the triangle inequality and Lemma \ref{order4} we have
\begin{equation*}
\begin{split}
|d_p(u_k, v_k)-d_p(u, v)| &\leq d_p(u_k, u)+d_p(v, v_k) \\
&\leq (\int_M|u_k-u|^p\omega_u^n\wedge\eta)^{\frac{1}{p}}+(\int_M|v_k-v|^p\omega_v^n\wedge\eta)^{\frac{1}{p}}
\end{split}
\end{equation*}
and the Lemma follows from Lemma \ref{weakcon3}.
If the sequence $\{u_k\}_{k\in\mathbb{N}}$ is increasing, using the triangle inequality and Lemma \ref{order4} we have
\begin{equation*}
\begin{split}
|d_p(u_k, v_k)-d_p(u, v)| &\leq d_p(u_k,u)+d_p(v, v_k) \\
&\leq (\int_M|u_k-u|^p\omega_{u_k}^n\wedge\eta)^{\frac{1}{p}}+(\int_M|v_k-v|^p\omega_{v_k}^n\wedge\eta)^{\frac{1}{p}}
\end{split}
\end{equation*}
and the Lemma follows from Lemma \ref{weakcon3}.
\end{proof}
\begin{lemma}Suppose $u_0, u_1\in {\mathcal E}_p(M, \xi, \omega^T)$. Then there exists a constant $C=C(n,p)>0$ such that
\[
d_p\left(u_0, \frac{u_0+u_1}{2}\right)^p\leq Cd_p(u_0, u_1)^p
\]
\end{lemma}
\begin{proof}
It is obvious that $ P(u_0,u_1) \leq P(u_0,\frac{u_0+u_1}{2}) \leq u_0$ and $P(u_0,u_1) \leq P(u_0,\frac{u_0+u_1}{2}) \leq \frac{u_0+u_1}{2}$. By the Pythagorean formula (Theorem \ref{Pythagorean}), Lemma \ref{order1} and Lemma \ref{order4} we have
\begin{equation*}
\begin{split}
d_p(u_0,\frac{u_0+u_1}{2})^p &=d_p(u_0,P(u_0,\frac{u_0+u_1}{2}))^p+d_p(\frac{u_0+u_1}{2},P(u_0,\frac{u_0+u_1}{2}))^p \\
&\leq d_p(u_0,P(u_0,u_1))^p+d_p(\frac{u_0+u_1}{2},P(u_0,u_1))^p \\
&\leq \int_M |u_0-P(u_0,u_1)|^p\omega_{P(u_0,u_1)}^n\wedge\eta+\int_M\left|\frac{u_0+u_1}{2}-P(u_0,u_1)\right|^p\omega_{P(u_0,u_1)}^n\wedge\eta \\
&\leq 2\left(\int_M|u_0-P(u_0,u_1)|^p\omega_{P(u_0,u_1)}^n\wedge\eta+\int_M|u_1-P(u_0,u_1)|^p\omega_{P(u_0,u_1)}^n\wedge\eta\right) \\
&\leq 2^{n+p+1}(d_p(u_0,P(u_0,u_1))^p+d_p(u_1,P(u_0,u_1))^p) \\
&=2^{n+p+1}d_p(u_0,u_1)^p
\end{split}
\end{equation*}
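The passage from the third to the fourth line above uses the convexity of $s\rightarrow s^p$:
\[
\left|\frac{u_0+u_1}{2}-P(u_0,u_1)\right|^p\leq \frac{1}{2}\left(|u_0-P(u_0,u_1)|^p+|u_1-P(u_0,u_1)|^p\right),
\]
while the next line follows from the lower bound of Lemma \ref{order4} applied to $P(u_0,u_1)\leq u_0$ and $P(u_0,u_1)\leq u_1$.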
This completes the proof.
\end{proof}
\begin{thm}\label{comparison04}For any $u_0, u_1\in {\mathcal E}_p(M, \xi, \omega^T)$ we have
\begin{equation}
C^{-1}d_p(u_0, u_1)^p\leq \int_M |u_0-u_1|^p(\omega^n_{u_0}\wedge \eta+\omega^n_{u_1}\wedge \eta)\leq Cd_p(u_0, u_1)^p.
\end{equation}
\end{thm}
\begin{proof}
Using the triangle inequality, the elementary inequality $(a+b)^p \leq 2^{p-1}(a^p+b^p)$ for $a, b\geq 0$, and Lemma \ref{order4}, we have:
\begin{equation*}
\begin{split}
d_p(u_0,u_1)^p & \leq (d_p(u_0,\max(u_0,u_1))+d_p(u_1,\max(u_0,u_1)))^p \\
&\leq 2^{p-1}(d_p(u_0,\max(u_0,u_1))^p+d_p(u_1,\max(u_0,u_1))^p) \\
& \leq 2^{p-1}(\int_M |u_0-\max(u_0,u_1)|^p\omega_{u_0}^n\wedge\eta+\int_M|u_1-\max(u_0,u_1)|^p\omega_{u_1}^n\wedge\eta) \\
& = 2^{p-1}(\int_{\{u_0 <u_1\}} |u_0-u_1|^p\omega_{u_0}^n\wedge\eta+\int_{\{u_1<u_0\}} |u_1-u_0|^p\omega_{u_1}^n\wedge\eta) \\
&\leq 2^{p-1}\int_M |u_0-u_1|^p(\omega_{u_0}^n\wedge\eta+\omega_{u_1}^n\wedge\eta)
\end{split}
\end{equation*}
By the previous lemma, the Pythagorean formula and Lemma \ref{order4}, there exists a constant $C$ such that
\begin{equation*}
\begin{split}
Cd_p(u_0,u_1)^p & \geq d_p(u_0,\frac{u_0+u_1}{2})^p \\
& \geq d_p(u_0,P(u_0,\frac{u_0+u_1}{2}))^p \\
& \geq \int_M|u_0-P(u_0,\frac{u_0+u_1}{2})|^p\omega_{u_0}^n\wedge\eta
\end{split}
\end{equation*}
Similarly we also have:
\begin{equation*}
\begin{split}
Cd_p(u_0,u_1)^p &\geq d_p(u_0,\frac{u_0+u_1}{2})^p \\
& \geq d_p(\frac{u_0+u_1}{2}, P(u_0,\frac{u_0+u_1}{2}))^p \\
& \geq \int_M |\frac{u_0+u_1}{2}-P(u_0,\frac{u_0+u_1}{2})|^p\omega_{\frac{u_0+u_1}{2}}^n\wedge\eta \\
& \geq \frac{1}{2^n} \int_M |\frac{u_0+u_1}{2}-P(u_0,\frac{u_0+u_1}{2})|^p\omega_{u_0}^n\wedge\eta
\end{split}
\end{equation*}
Hence, by the elementary inequality $|a-b|^p \leq a^p+b^p$ for $a, b\geq 0$, we have:
\begin{equation*}
\begin{split}
(2^n+1)Cd_p(u_0,u_1)^p &\geq \int_M(|u_0-P(u_0,\frac{u_0+u_1}{2})|^p+|\frac{u_0+u_1}{2}-P(u_0,\frac{u_0+u_1}{2})|^p)\omega^n_{u_0}\wedge\eta \\
& \geq \frac{1}{2^p} \int_M|u_0-u_1|^p\omega_{u_0}^n\wedge\eta
\end{split}
\end{equation*}
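The last step in the display above is the elementary pointwise estimate: writing $a=u_0-P(u_0,\frac{u_0+u_1}{2})\geq 0$ and $b=\frac{u_0+u_1}{2}-P(u_0,\frac{u_0+u_1}{2})\geq 0$, we have $u_0-u_1=2(a-b)$ and
\[
|u_0-u_1|^p=2^p|a-b|^p\leq 2^p\max(a,b)^p\leq 2^p(a^p+b^p).
\]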
By symmetry of $u_0,u_1$ we also have:
\begin{equation*}
(2^n+1)Cd_p(u_0,u_1)^p \geq \frac{1}{2^p} \int_M |u_0-u_1|^p\omega_{u_1}^n\wedge\eta
\end{equation*}
Adding the last two inequalities we obtain:
\begin{equation*}
2^p(2^n+1)C d_p(u_0,u_1)^p \geq \int_M |u_0-u_1|^p(\omega_{u_0}^n\wedge\eta+\omega_{u_1}^n\wedge\eta)
\end{equation*}
This completes the proof.
\end{proof}
\begin{lemma}\label{close4}
Let $\{u_k\}_{k\in\mathbb{N}} \subset {\mathcal E}_p(M,\xi,\omega^T)$ be a $d_p$-bounded sequence decreasing (increasing) to $u$. Then $u \in {\mathcal E}_p(M,\xi,\omega^T)$ and $d_p(u_k,u)\rightarrow 0$.
\end{lemma}
\begin{proof}
If $\{u_k\}_{k\in\mathbb{N}}$ is decreasing, we can assume that $u_k <0$. It follows from Lemma \ref{order4} that
\[
\max\left(\frac{1}{2^{n+p}}\int_M|u_k|^p\omega_{u_k}^n\wedge\eta, \int_M |u_k|^p(\omega^T)^n\wedge\eta\right) \leq d_p(u_k,0)^p
\]
is uniformly bounded. Since $\int_M |u_k|^p(\omega^T)^n\wedge\eta$ is uniformly bounded, the monotone convergence theorem and the dominated convergence theorem imply that $u_k \rightarrow u $ in $L_{loc}^1$ and $u \in \text{PSH}(M, \xi, \omega^T)$. Since $E_p(u_k)= \int_M |u_k|^p\omega_{u_k}^n\wedge\eta$ is uniformly bounded, it follows from Proposition \ref{boundedenergy} and Lemma \ref{approximation4} that $u \in {\mathcal E}_p(M, \xi, \omega^T)$ and $d_p(u_k, u) \rightarrow 0$.
If $\{u_k\}_{k\in \mathbb{N}}$ is increasing, it follows from Theorem \ref{comparison04} that there exists a constant $C$ such that
\[
\int_M|u_k|^p(\omega_{u_k}^n\wedge\eta+(\omega^T)^n\wedge\eta) \leq Cd_p(u_k, 0)^p
\]
is uniformly bounded. By Proposition \ref{compactness001} we have $u_k \rightarrow u$ in $L^1$ for some $u \in \text{PSH}(M,\xi,\omega^T)$. By Proposition \ref{boundedenergy} and Lemma \ref{approximation4} we have $u \in {\mathcal E}_p(M, \xi, \omega^T)$ and $d_p(u_k, u) \rightarrow 0$.
\end{proof}
\begin{prop}\label{rooftop04}Given $u_0, u_1, v\in {\mathcal E}_p(M, \xi, \omega^T)$,
\[
d_p(P(u_0, v), P(u_1, v))\leq d_p(u_0, u_1)
\]
\end{prop}
\begin{proof}
By Theorem \ref{rooftop101} and Lemma \ref{approximation4} we only have to prove the inequality for $u_0, u_1, v \in {\mathcal H}_{\triangle}$. In this case $P(u_0, v), P(u_1, v) \in {\mathcal H}_{\triangle}$ according to Theorem \ref{rooftop101}.
First we assume that $u_0 \leq u_1$. Let $u_t, v_t$ be the $C_B^{1,\bar1}$ geodesics connecting $u_0,u_1$ and $P(u_0,v), P(u_1, v)$ respectively. Then $P(u_0, v) \leq P(u_1,v) \leq v$ and the strong maximum principle implies that $P(u_0, v) \leq v_t \leq v$. Hence for $x \in \{P(u_0,v)=v\}$, $v_t(x)$ is independent of $t$ and $\dot{v}_0(x)=0$. Then we have
\[
\int_{\{P(u_0, v)=v\}} |\dot{v}_0|^p\omega_v^n\wedge\eta=0.
\]
Since $P(u_0,v) \leq P(u_1,v)$, $P(u_0,v)\leq u_0$ and $P(u_1, v) \leq u_1$, the strong maximum principle implies that $P(u_0, v) \leq v_t \leq u_t$ for $t \in [0,1]$ and $\dot{v}_0 \geq 0$. Moreover for $x \in \{P(u_0,v)=u_0\}$ we have
\[
\dot{v}_0(x) =\lim_{t\rightarrow 0+} \frac{v_t(x)-v_0(x)}{t} \leq \lim_{t\rightarrow 0+}\frac{u_t(x)-u_0(x)}{t}=\dot{u}_0(x).
\]
Then it follows from Lemma \ref{distancef2}, Lemma \ref{decomposition} that
\begin{equation*}
\begin{split}
d_p(P(u_0,v), P(u_1, v))^p &= \int_M |\dot{v}_0|^p\omega_{P(u_0,v)}^n\wedge\eta \\
&\leq \int_{\{P(u_0,v)=u_0\}}|\dot{v}_0|^p\omega_{u_0}^n\wedge\eta+\int_{\{P(u_0, v)=v\}} |\dot{v}_0|^p\omega_v^n\wedge\eta \\
&\leq \int_{\{P(u_0,v)=u_0\}} |\dot{u}_0|^p\omega_{u_0}^n\wedge\eta \\
&\leq \int_M |\dot{u}_0|^p\omega_{u_0}^n\wedge\eta \\
&=d_p(u_0, u_1)^p.
\end{split}
\end{equation*}
For the general case, using the Pythagorean formula we have
\begin{equation*}
\begin{split}
d_p(P(u_0,v), P(u_1, v))^p &=d_p(P(u_0,v),P(u_0,u_1,v))^p +d_p(P(u_1, v), P(u_0, u_1, v))^p \\
&=d_p(P(u_0,v), P(P(u_0,u_1), v))^p+ d_p(P(u_1,v), P(P(u_0,u_1), v))^p \\
&\leq d_p(u_0, P(u_0,u_1))^p+ d_p(u_1, P(u_0,u_1))^p \\
&=d_p(u_0, u_1)^p.
\end{split}
\end{equation*}
This completes the proof.
\end{proof}
\begin{prop}$({\mathcal E}_p(M, \xi, \omega^T), d_p)$ is a complete metric space.
\end{prop}
\begin{proof}
First we show that $({\mathcal E}_p(M,\xi,\omega^T), d_p)$ is a metric space. The symmetry of $d_p$ is obvious and the triangle inequality follows from Lemma \ref{triangle200}. We only have to check the non-degeneracy of $d_p$. Suppose $w_1,w_2 \in {\mathcal E}_p(M,\xi,\omega^T)$ and $d_p(w_1,w_2)=0$. It follows from the Pythagorean formula that $d_p(w_1,P(w_1,w_2))=0$ and $d_p(P(w_1,w_2),w_2)=0$. Then Lemma \ref{order4} implies that $w_1=P(w_1,w_2)=w_2$ almost everywhere with respect to the measure $\omega_{P(w_1,w_2)}^n\wedge\eta$. Then the domination principle (Lemma \ref{domination2}) implies that $w_1=P(w_1,w_2)=w_2$. Hence $({\mathcal E}_p(M, \xi, \omega^T),d_p)$ is a metric space.
Then we show that the metric space $({\mathcal E}_p(M, \xi, \omega^T), d_p)$ is complete. Suppose $\{u_k\}_{k\in\mathbb{N}} \subset {\mathcal E}_p(M, \xi, \omega^T)$ is a $d_p$ Cauchy sequence. We will prove that there exists $u \in {\mathcal E}_p(M, \xi, \omega^T)$ such that $d_p(u_k, u) \rightarrow 0$.
Without loss of generality we can assume that
\begin{equation*}
d_p(u_k,u_{k+1}) \leq \frac{1}{2^k}
\end{equation*}
for $k \in \mathbb{N}$. Denote by $u_k^l=P(u_k, u_{k+1},..., u_{k+l})$ for $k, l \in \mathbb{N}$ and $u_k^0=u_k$. It follows from the definition of rooftop envelope and Proposition \ref{rooftop04} that
\begin{equation*}
d_p(u_k^l,u_k^{l+1})=d_p(P(u_k^l, u_{k+l}), P(u_k^l,u_{k+l+1})) \leq d_p(u_{k+l}, u_{k+l+1}) \leq \frac{1}{2^{k+l}}
\end{equation*}
and the sequence $\{u_k^l\}_{l \in \mathbb{N}} \subset {\mathcal E}_p(M, \xi, \omega^T)$ is $d_p$-bounded and decreasing. According to Lemma \ref{close4}, $\tilde{u}_k =\lim\limits_{l\rightarrow\infty} u_k^l \in {\mathcal E}_p(M, \xi, \omega^T)$ and $d_p(u_k^l, \tilde{u}_k) \rightarrow 0$ as $l \rightarrow \infty$. Moreover $u_k^{l+1} \leq u_{k+1}^l$ implies that $\tilde{u}_k \leq \tilde{u}_{k+1}$, so $\{\tilde{u}_k\}_{k\in\mathbb{N}}$ is an increasing sequence in ${\mathcal E}_p(M, \xi, \omega^T)$.
It follows from Lemma \ref{approximation4}, the definition of rooftop envelope and Proposition \ref{rooftop04} that
\begin{equation*}
\begin{split}
d_p(\tilde{u}_k, \tilde{u}_{k+1}) &=\lim_{l\rightarrow \infty} d_p(u_k^{l+1}, u_{k+1}^l) \\
&=\lim_{l\rightarrow\infty} d_p(P(u_{k+1}^l, u_k), P(u_{k+1}^l, u_{k+1})) \\
&\leq \lim_{l\rightarrow\infty} d_p(u_k, u_{k+1}) \\
&\leq \frac{1}{2^k}
\end{split}
\end{equation*}
and the sequence $\{\tilde{u}_k\}_{k\in\mathbb{N}} \subset {\mathcal E}_p(M, \xi, \omega^T)$ is $d_p$-bounded and increasing. By Lemma \ref{close4} $u=\lim\limits_{k \rightarrow\infty} \tilde{u}_k \in {\mathcal E}_p(M, \xi, \omega^T)$ and $\lim\limits_{k\rightarrow\infty}d_p(\tilde{u}_k, u)=0$.
Moreover by Proposition \ref{rooftop04} we have
\begin{equation*}
\begin{split}
d_p(u_k^l, u_k) =d_p(P(u_k,u_{k+1}^{l-1}),P(u_k,u_k)) \leq d_p(u_{k+1}^{l-1}, u_k) \leq d_p(u_{k+1}^{l-1},u_{k+1}) +d_p(u_k,u_{k+1})
\end{split}
\end{equation*}
and
\begin{equation*}
d_p(u_k^l,u_k) \leq d_p(u_{k+l}^0,u_{k+l})+\sum_{j=1}^{l}d_p(u_{k+j-1},u_{k+j})=\sum_{j=1}^ld_p(u_{k+j-1},u_{k+j})
\end{equation*}
It follows from Lemma \ref{approximation4} that
\begin{equation*}
d_p(\tilde{u}_k,u_k) \leq \sum_{j=1}^{\infty}\frac{1}{2^{k+j-1}}=\frac{1}{2^{k-1}}
\end{equation*}
By the triangle inequality
\begin{equation*}
d_p(u_k,u) \leq d_p(\tilde{u}_k,u_k)+d_p(\tilde{u}_k,u)
\end{equation*}
we have $d_p(u_k ,u) \rightarrow 0$. This completes the proof.
\end{proof}
\section{Sasaki-extremal metric}
We give a brief discussion of the existence of Sasaki-extremal metrics and the properness of the modified ${\mathcal K}$-energy. Calabi's extremal metric was extended to the Sasaki setting by Boyer-Galicki-Simanca \cite{BGS1}. A Sasaki metric is called Sasaki-extremal if its transverse K\"ahler metric is extremal in the sense of Calabi \cite{Ca1}. As in the K\"ahler setting, given the a priori estimates \cite{he182} and the pluripotential theory developed in this paper, we have the following,
\begin{thm}\label{extremal}A compact Sasaki manifold $(M, \xi, \eta, g)$ admits a Sasaki-extremal metric in the transverse K\"ahler class $[\omega^T]$ if and only if the modified ${\mathcal K}$-energy is reduced proper.
\end{thm}
We recall some basic notions \cite{Fut, M2, Ca1, FM, BGS1}.
We use $\text{Aut}_0(\xi, J)$ to denote the subgroup of the diffeomorphism group of $M$ which preserves both $\xi$ and the transverse holomorphic structure. Its Lie algebra is the Lie algebra of all \emph{Hamiltonian holomorphic vector fields} in the sense of \cite{FOW}[Definition 4.4].
First one can define Sasaki-Futaki invariant as follows, given $X\in \mathfrak{aut}$, the Lie algebra of $\text{Aut}_0(\xi, J)$,
\begin{equation}\label{fut01}
{\mathcal F}_X(\omega^T)=\int_M X(f) (\omega^T)^n\wedge \eta,
\end{equation}
where $f$ is the potential of the transverse scalar curvature, defined by
\[
\Delta f=R^T-\underline {R}.
\]
The first step is to verify that \eqref{fut01} does not depend on the particular choice of transverse K\"ahler form in $[\omega^T]$ (see \cite{BGS1}[Proposition 5.1]). We are interested in the reduced part $\mathfrak{h}_0$ of $\mathfrak{aut}$, which consists of \emph{Hamiltonian holomorphic vector fields} $Y$ such that $\eta(Y)$ has a nonempty zero set. When $(M, \xi, \eta, g)$ is Sasaki-extremal, then, similarly to Calabi's decomposition, we have \cite{BGS1}[Theorem 4.8] the decomposition
\[
\mathfrak{h}=\mathfrak{a}\oplus \mathfrak{h}_0,
\]
where $\mathfrak{a}$ consists of parallel vector fields of the transverse K\"ahler metric $g^T$. Moreover the reduced part $\mathfrak{h}_0$ has the decomposition
\[
\mathfrak{h}_0=\mathfrak{z}_0\oplus J\mathfrak{z}_0\oplus(\oplus_{\lambda>0}\mathfrak{h}^\lambda),
\]
where $\mathfrak{z}_0=\text{aut}(\xi, \eta, g)/\{\xi\}$ and
\[
\mathfrak{h}^\lambda=\{Y\in \mathfrak{h}: {\mathcal L}_{X} Y=\lambda Y\},
\]
where $X:=(\bar\partial R)^{\#}$ is the dual vector field, which is the extremal vector field in $\mathfrak{h}_0$. In general, we can define the Futaki-Mabuchi bilinear form \cite{FM} on $\mathfrak{h}_0$ as in the K\"ahler setting (in the Sasaki setting this is well-defined on $\mathfrak{aut}$ since every Hamiltonian vector field has a potential, simply given by $\eta(Y)$; for example, $\xi$ has potential $1$). Given $Y, Z\in \mathfrak{aut}$, define
\begin{equation}\label{fm01}
B(Y, Z)=\int_M \eta(Y) \eta(Z) (\omega^T)^n\wedge \eta.
\end{equation}
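As a simple illustration of \eqref{fm01} (our remark, using only the fact recorded above that $\xi$ has Hamiltonian potential $\eta(\xi)=1$), the Reeb field pairs with itself to the total volume:
\[
B(\xi, \xi)=\int_M \eta(\xi)^2\, (\omega^T)^n\wedge \eta=\int_M (\omega^T)^n\wedge \eta.
\]
In particular $B$ is nondegenerate in the direction of $\xi$.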
It is straightforward to check that \eqref{fm01} remains unchanged if $\eta\rightarrow \eta+d^c_B\phi$ for $\phi\in {\mathcal H}$. If we restrict to the \emph{real Hamiltonian holomorphic vector fields}, those $Y$ such that $\eta(Y)$ is real, then there exists a unique vector field $V$ such that
\begin{equation}\label{fut03}
{\mathcal F}_{\text{Re}(Y)}=B(\text{Re}(Y), V)
\end{equation}
We call such $V$ and its corresponding $X=V-\sqrt{-1}JV$ \emph{the extremal vector field}. As in the K\"ahler setting, for $JV$-invariant metrics in ${\mathcal H}$, we define the modified ${\mathcal K}$-energy \cite{Guan, Simanca} by
\begin{equation}
\delta {\mathcal K}_V=-\int_M \delta \phi (R_\phi-\underline{R}-\eta_\phi(V)) \omega^n_\phi\wedge \eta.
\end{equation}
Let
$\text{Aut}_0(\xi, J, V)$ be the subgroup of $\text{Aut}_0(\xi, J)$ which commutes with the flow of $JV$.
\begin{prop}The ${\mathcal K}_V$ energy is invariant under the action of $\text{Aut}_0(\xi, J, V)$.
\end{prop}
\begin{proof}The proof is similar to the K\"ahler setting \cite{he18}[Lemma 2.1] and follows tautologically from the Futaki invariant and the definition of the extremal vector field through the Futaki-Mabuchi bilinear form. We fix a background transverse K\"ahler structure $\omega^T$ which is $JV$-invariant.
For $\sigma\in \text{Aut}_0(\xi, J, V)$, let $\sigma_t$ be the one-parameter subgroup generated by the flow of $Y_\mathbb{R}:=\text{Re}(Y)$ for some $Y\in \mathfrak{aut}$. Since $Y$ commutes with $V$, $\sigma_t^* \omega_0$ is $JV$-invariant whenever $\omega_0\in [\omega^T]$ is. We compute
\[
\begin{split}
\frac{d}{dt}{\mathcal K}_V(\sigma_t^*\omega_0)=&-\int_M\sigma_t^*\left(\eta_0(Y_\mathbb{R}) (R_0-\underline R-\eta_0(V))\omega_0^n\wedge \eta_0\right)\\
=&-\int_M \eta_0(Y_\mathbb{R}) (R_0-\underline{R})\omega_0^n\wedge \eta_0+\int_M \eta_0(Y_\mathbb{R}) \eta_0 (V)\omega_0^n\wedge \eta_0
\end{split}
\]
The right-hand side is zero by \eqref{fut03}.
\end{proof}
We define the distance $d_1$ modulo the action of the group $G_0:=\text{Aut}_0(\xi, J, V)$. Fix a compact subgroup $K$ of $G_0$ such that $K$ contains the flow of $JV$ (and $\xi$ of course).
Denote \[{\mathcal H}^K_0=\{\phi\in {\mathcal H}_0: \phi\; \text{is invariant under the action of}\; K\}.\]
Note that $G_0$ acts on ${\mathcal H}_0$ through $\omega_\phi\rightarrow \sigma^*\omega_\phi=\omega^T+\sqrt{-1}\partial_B\bar\partial_B \sigma[\phi]$. Given any $\phi, \psi\in {\mathcal H}_0$, we can consider the distance modulo $G_0$ as follows \cite{CPZ}
\[
d_{1, G_0}(\phi, \psi)=\inf_{\sigma_1, \sigma_2\in G_0} d_1(\sigma_1[\phi], \sigma_2[\psi])=\inf_{\sigma\in G_0}d_1(\phi, \sigma[\psi]).
\]
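The second equality above can be sketched as follows, assuming (as in the K\"ahler setting \cite{CPZ}) that $G_0$ acts on $({\mathcal H}_0, d_1)$ by isometries: for any $\sigma_1, \sigma_2\in G_0$,
\[
d_1(\sigma_1[\phi], \sigma_2[\psi])=d_1(\phi, (\sigma_1^{-1}\sigma_2)[\psi]),
\]
so the infimum over two group elements reduces to the infimum over the single element $\sigma=\sigma_1^{-1}\sigma_2$.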
\begin{defn}We say ${\mathcal K}_V$ is reduced proper for $K$-invariant metrics with respect to $d_{1, G_0}$ if the following conditions hold:
\begin{enumerate}
\item ${\mathcal K}_V$ is bounded below over ${\mathcal H}^K$.
\item There exist constants $C, D>0$ such that for all $\phi\in {\mathcal H}^K$
\[
{\mathcal K}_V(\phi)\geq C d_{1, G_0}(0, \phi)-D.
\]
\end{enumerate}
\end{defn}
To prove Theorem \ref{extremal}, we proceed exactly as in \cite{he18} and consider the modified version of Chen's continuity path \cite{chen15}: for a $K$-invariant transverse K\"ahler metric $\omega^T$,
\begin{equation}\label{se001}
t(R_\phi-\underline{R}-\eta_\phi(V))+(1-t)(\Lambda_{\omega_\phi}\omega^T-n)=0
\end{equation}
Given a priori estimates as in \cite{he182} and the pluripotential theory on Sasaki manifolds developed in this paper, we can then follow \cite{he18, he182} to prove Theorem \ref{extremal}. Since the argument is almost identical, we only sketch the process and skip the details.
\begin{enumerate}
\item The openness of \eqref{se001} is proved as in \cite{he18}[Theorem 3.4]; note that we assume the transverse K\"ahler metrics and potentials are $K$-invariant.
\item For $0<t<1$, the lower bound of ${\mathcal K}_V$ over ${\mathcal H}^K$ implies that the distance $d(0, \phi_t)$ is uniformly bounded by a constant of order $C((1-t)^{-1}+1)$, where $\phi_t$ is the solution of \eqref{se001} at $t$. This, together with the fact that $\phi_t$ minimizes $t{\mathcal K}_V+(1-t)\mathbb{J}$, gives a uniform upper bound on the entropy $H(\phi_t)$ (depending on $(1-t)^{-1}$). Hence the estimates
in \cite{he182}[Theorem 2] apply to give the solution for any $t<1$.
\item Choose an increasing sequence $t_i\rightarrow 1$. First, using the properness assumption, we can find $\sigma_i\in G_0$ such that $\psi_i:=\sigma_i[\phi_{t_i}]$ ($\omega_{\psi_i}=\sigma_i^{*}\omega_{\phi_{t_i}}$) satisfies that $d(0, \psi_i)$ is uniformly bounded above. Then $\psi_i$ satisfies a scalar curvature type equation
\[
\begin{split}
&\omega_{\psi_i}^n=e^{F_i}(\omega^T)^n\\
&\Delta_{\psi_i} F_i=h_i+\text{tr}_{\psi_i}(Ric(\omega^T)-\frac{1-t_i}{t_i}\omega_i)
\end{split}
\]
where $h_i$ is uniformly bounded and $\omega_i=\sigma_i^{*}(\omega^T)$. One can use \cite{he182}[Theorem 3] and arguments as in \cite{he18}[Theorem 3.5] to conclude the convergence of $\psi_i, F_i$ to a smooth Sasaki-extremal structure.
\end{enumerate}
\section{Appendix}
\subsection{Approximation through Type-I deformation and regularity of the rooftop envelope}
Using Type-I deformation, we can obtain
the following approximation of an irregular Sasaki structure $(M, \xi, \eta, g)$, which will be important for us; see \cite{Ruk} and in particular \cite{BG}[Theorem 7.1.10] for the approximation.
Suppose $\xi$ is irregular; then the closure of the Reeb flow generates a torus of isometries in $\text{Aut}(M, \xi, \eta, g)$. Let $T^k\subset \text{Aut}(M, \xi, \eta, g)$ ($k\geq 2$) be the torus generated by $\xi$ and denote by $\mathfrak{t}$ its Lie algebra. We can then choose $\rho_i\rightarrow 0, \rho_i\in \mathfrak{t}$ such that $\xi_i=\xi+\rho_i$ is quasiregular. Define
\begin{equation}\label{approx}
\eta_i=\frac{\eta}{1+\eta(\rho_i)}, \Phi_i=\Phi-\frac{1}{1+\eta(\rho_i)}\Phi\rho_i\otimes \eta, \omega_i^T=\frac{1}{2}d\eta_i, g_i=\eta_i\otimes \eta_i+\omega_i^T(\mathbb{I}\otimes \Phi_i),
\end{equation}
where $\Phi$ is the $(1, 1)$ tensor field defined on the contact bundle $\mathcal D=\text{Ker}(\eta)$.
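As a consistency check (a direct computation from \eqref{approx}, using $\eta(\xi)=1$ and that $1+\eta(\rho_i)>0$ for $\rho_i$ small), the deformed form $\eta_i$ is indeed a contact form normalized for the new Reeb field $\xi_i$:
\[
\eta_i(\xi_i)=\frac{\eta(\xi+\rho_i)}{1+\eta(\rho_i)}=\frac{1+\eta(\rho_i)}{1+\eta(\rho_i)}=1.
\]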
We recall the following,
\begin{thm}[Approximation of irregular Sasaki structure]\label{type101}Let $(M, \xi, \eta, g)$ be an irregular Sasaki structure on a compact manifold $M$. Then we can choose $\rho_i\rightarrow 0$ such that $\xi_i$ is quasiregular and \eqref{approx} defines a quasi-regular Sasaki structure which is invariant under the action of $T^k$, the torus generated by $\xi$ in $\text{Aut}(M, \xi, \eta, g)$.
\end{thm}
\begin{lemma}\label{type1}Let $(M, \xi, \eta, g)$ be a Sasaki structure on a compact manifold $M$. Consider a torus $T\subset \text{Aut}(M, \xi, \eta, g)$ and $\rho_i\in \mathfrak{t}$ sufficiently small.
Set $\xi_i=\xi+\rho_i$ and consider the two Sasaki structures $(\xi, \eta, \Phi, g)\leftrightarrow (\xi_i, \eta_i, \Phi_i, g_i)$ related via Type-I deformation. Then we have the following.
Suppose $u$ is $T$-invariant and $u\in \text{PSH}(M, \xi, \omega^T)$ with $|d \Phi du|\leq C_0$. Then for $\rho_i$ sufficiently small, there exist positive constants $\epsilon_i\rightarrow 0$ (as $\rho_i\rightarrow 0$) such that
\begin{equation}\label{approx100}
(1-\epsilon_i) u\in \text{PSH}(M, \xi_i, \omega_i^T)
\end{equation}
Similarly, suppose $u\in \text{PSH}(M, \xi_i, \omega_i^T)$ with $|d \Phi du|\leq C_0$; then there exist positive constants $\epsilon_i \rightarrow 0$ as $i\rightarrow \infty$ such that
\begin{equation}\label{approx101}
(1-\epsilon_i) u\in \text{PSH}(M, \xi, \omega^T)
\end{equation}
\end{lemma}
\begin{proof}Since $u$ is $T$-invariant, $u$ is a basic function with respect to both $\xi$ and $\xi_i$.
We write
\[
\omega_i^T+\sqrt{-1}\partial^i_B\bar\partial^i_B u
=\omega^T_i+\frac{1}{2} d\Phi_i d u.\]
Using \eqref{approx}, we compute
\begin{equation}\label{approx1005}
\begin{split}
\omega^T_i+\frac{1}{2} d\Phi_i d u=&\frac{\omega^T}{1+\eta(\rho_i)}+\eta\wedge d\left(\frac{1-du(\Phi \rho_i)}{1+\eta(\rho_i)}\right)+\frac{1}{2}d\Phi du+2\omega^T \frac{du(\Phi \rho_i)}{1+\eta(\rho_i)}\\
=&\frac{1+2du(\Phi \rho_i)}{1+\eta(\rho_i)}\omega^T+\frac{1}{2}d\Phi du+\eta\wedge d\left(\frac{1-du(\Phi \rho_i)}{1+\eta(\rho_i)}\right)\\
=&\omega^T+\frac{1}{2}d\Phi du+\left(\frac{1+2du(\Phi \rho_i)}{1+\eta(\rho_i)}-1\right)\omega^T+\eta\wedge d\left(\frac{1-du(\Phi \rho_i)}{1+\eta(\rho_i)}\right)
\end{split}
\end{equation}
If $|d\Phi d u|\leq C_0$, then \eqref{approx1005} implies that $|d\Phi_i d u|\leq C_1$ (and vice versa). Moreover, when $\rho_i\rightarrow 0$, \[\frac{1+2du(\Phi \rho_i)}{1+\eta(\rho_i)}\rightarrow 1,\;\;\; d\left(\frac{1-du(\Phi \rho_i)}{1+\eta(\rho_i)}\right)\rightarrow 0.\] We can then choose $\epsilon_i\rightarrow 0$ as $\rho_i\rightarrow 0$, such that
\[
\omega^T_i+\frac{1}{2} d\Phi_i d (u(1-\epsilon_i))\geq 0.
\]
This proves \eqref{approx100}.
Note that, given the relation between $\Phi$ and $\Phi_i$, $|d \Phi du|\leq C_0$ implies that $|d\Phi_i d u|$ is uniformly bounded (we assume $\rho_i$ is uniformly small in the smooth topology). Interchanging the roles of $\xi$ and $\xi_i$ proves \eqref{approx101}.
\end{proof}
\begin{rmk}
Note that the complex structure on the cone remains unchanged under Type-I deformation \cite{HeSun}[Lemma 2.2]. The transverse holomorphic structure is changed since the Reeb foliation is changed; on the other hand, the contact bundle $\mathcal D$ remains unchanged. Note that $(\mathcal D, \Phi)$ and $(\mathcal D, \Phi_i)$ can be identified with the transverse holomorphic tangent bundles $T^{1, 0}({\mathcal F}_\xi)$ and $T^{1, 0}({\mathcal F}_{\xi_i})$ (the foliations are different). Since the term $\eta\wedge d\left(\frac{1-du(\Phi \rho_i)}{1+\eta(\rho_i)}\right)$ vanishes on $\mathcal D$ and $\left(\frac{1+2du(\Phi \rho_i)}{1+\eta(\rho_i)}-1\right)\omega^T$ involves only $du$, the above statement holds if we only assume that $|du|$ is uniformly bounded. Since we shall not need this, we skip the argument. However, it seems that an assumption like $|du|\leq C$ is necessary, and we are not able to extend this to $\text{PSH}(M, \xi, \omega^T)$.
\end{rmk}
As above we fix a torus $T\subset \text{Aut}(M, \xi, \eta, g)$ and consider $\rho_i\in \mathfrak{t}$ sufficiently small. Let $\xi_i=\xi+\rho_i$ and let $(\xi_i, \eta_i, g_i, \Phi_i)$ be the Type-I deformation of $(\xi, \eta, g, \Phi)$.
\begin{lemma}\label{measure100}Let $\rho_i\rightarrow 0$. Suppose a sequence of $T$-invariant functions $u_i\in \text{PSH}(M, \xi_i, \omega_i^T)$ with $|d\Phi d u_i|_{\omega^T}\leq C_0$ converges to $u\in \text{PSH}(M, \xi, \omega^T)$. Then $|d\Phi du|_{\omega^T}\leq C_0$ and we have the following weak convergence of measures
\[
(\omega_i^T+\frac{1}{2}d\Phi_i d u_i)^n\wedge \eta_i\rightarrow (\omega^T+\frac{1}{2}d\Phi d u)^n\wedge \eta
\]
\end{lemma}
\begin{proof}By \eqref{approx1005} and $|d\Phi d u_i|_{\omega^T}\leq C_0$, $\omega_i^T+\frac{1}{2}d\Phi_i d u_i$ and $\omega^T+\frac{1}{2}d\Phi d u_i$ differ by a term with small $L^\infty$ norm, hence we only need to prove that
\[(\omega^T+\frac{1}{2}d\Phi d u_i)^n\wedge \eta_i\rightarrow (\omega^T+\frac{1}{2}d\Phi d u)^n\wedge \eta.\]
Note that $\eta_i=\eta/(1+\eta(\rho_i))$ converges smoothly to $\eta$, then the above follows from the weak convergence of $(\omega^T+\frac{1}{2}d\Phi d u_i)^n\wedge \eta$.
\end{proof}
Next we give a proof of Theorem \ref{rooftop101} in the Sasaki setting, regarding the regularity of the envelope construction.
\begin{thm}Given $f\in C^\infty_B(M)$,
we have the following estimate
\[
\|P(f)\|_{C^{1, \bar 1}}\leq C(M, \omega^T, g, \|f\|_{C^{1, \bar 1}}).
\]
Moreover, if
$u_1, \cdots, u_k\in {\mathcal H}_\Delta$, where we use the notation
\[
{\mathcal H}_\Delta=\{u\in \text{PSH}(M, \xi, \omega^T): \|u\|_{C^{1, \bar 1}}<\infty\}
\]
then $P(u_1, \cdots, u_k)\in {\mathcal H}_\Delta$.
\end{thm}
\begin{proof}
The first statement was proved by Berman-Demailly \cite{BD} in the K\"ahler setting; we follow \cite{D4}[Theorem A.7], which adapts directly to the Sasaki setting. Consider the following complex Monge-Ampere equation on Sasaki manifolds,
\[
\omega_{u_\beta}^n\wedge \eta= e^{\beta(u_\beta-f)}(\omega^T)^n\wedge\eta.
\]
Since all quantities are basic and only the transverse K\"ahler structure is involved, the argument in the K\"ahler setting adapts directly; see \cite{D4}[Theorem A.7], and we skip the details. For the second statement, first note that it suffices to show that if $u_0, u_1\in {\mathcal H}_\Delta$, then $P(u_0, u_1)\in {\mathcal H}_\Delta$. Let $u_t$ be the geodesic segment connecting $u_0, u_1$; then by Lemma \ref{cma10}, we know that $u_t\in {\mathcal H}_\Delta$ (see \cite{BD} and \cite{he12} for the K\"ahler setting). We already know that $P(u_0, u_1)=\inf_{t\in [0, 1]} u_t$; then by \cite{DR1}[Proposition 4.4] (applied in each foliation chart), $\Delta u_t$ is uniformly bounded. This shows that $P(u_0, u_1)\in {\mathcal H}_\Delta$.
\end{proof}
More generally, one can obtain results as in \cite{DR1}: $P(f_1, \cdots, f_n)\in C^{1, \bar 1}_B$ given $f_1, \cdots, f_n\in C^{1, \bar 1}_B$. The point is that given two functions $f_1, f_2$, the function $h=\min \{f_1, f_2\}$ satisfies $\Delta h\leq \max\{\Delta f_1, \Delta f_2\}$ in the viscosity sense, writing $h=\frac{f_1+f_2}{2}-\frac{|f_1-f_2|}{2}$. The argument as in \cite{D4}[Theorem A.7] applies, using the maximum principle in the viscosity sense. Since we do not need this, we shall skip the details.
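The viscosity inequality for the minimum can be sketched as follows (a formal computation at points where $f_1\neq f_2$; the general case is handled in the viscosity sense): from $h=\frac{f_1+f_2}{2}-\frac{|f_1-f_2|}{2}$ and the viscosity inequality $\Delta |f_1-f_2|\geq \operatorname{sgn}(f_1-f_2)\,\Delta(f_1-f_2)$,
\[
\Delta h\leq \frac{\Delta f_1+\Delta f_2}{2}-\operatorname{sgn}(f_1-f_2)\,\frac{\Delta f_1-\Delta f_2}{2}
=\begin{cases} \Delta f_2, & f_1>f_2,\\ \Delta f_1, & f_1<f_2,\end{cases}
\]
which is bounded by $\max\{\Delta f_1, \Delta f_2\}$.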
\subsection{Complex Monge-Ampere operator and intrinsic capacity on compact Sasaki manifolds}\label{CMA001}
We discuss briefly the Bedford-Taylor theory on Sasaki manifolds; for details on the complex Monge-Ampere operator, see Bedford-Taylor \cite{BT1}.
We also extend the intrinsic Monge-Ampere capacity to the Sasaki setting; see \cite{GZ01} for the K\"ahler setting.
Given a Sasaki structure, there is a splitting of the tangent bundle $TM=L\xi\oplus \mathcal D$, where $L\xi$ is the line bundle generated by $\xi$ and $\mathcal D=\text{Ker}(\eta)$, with $\Phi: \mathcal D\rightarrow \mathcal D$ inducing a splitting $\mathcal D\otimes \mathbb{C}=\mathcal D^{1, 0}\oplus \mathcal D^{0, 1}$. Hence
the subbundle $\Lambda^{2p}(\mathcal D^*)$ of $\Lambda^{2p}M$ is well-defined, and $\Phi$ induces a splitting which assigns a bidegree to forms in $\Lambda^{2p}(\mathcal D^*)$. Note that we have the following,
\[\Lambda^{2p}(\mathcal D^*)=\{\theta: \theta\in \Lambda^{2p}M, \iota_\xi \theta=0\}.\]
We do not assume that $\theta\in \Lambda^{2p}(\mathcal D^*)$ is basic. That is, the coefficients of $\theta$ might not be invariant under the Reeb flow. A simple observation shows that if $\theta\in \Lambda^{2p}(\mathcal D^*)$ is closed, $d\theta=0$, then $\theta$ is basic (by Cartan's formula, ${\mathcal L}_\xi \theta=\iota_\xi d\theta+d\iota_\xi \theta=0$ since $\iota_\xi \theta=0$). Hence a closed $2p$-form in $\Lambda^{2p}(\mathcal D^*)$ is basic and can be regarded as a \emph{transverse} closed $2p$-form, defined as in \cite{VC}. In general $d\Lambda^{2p}(\mathcal D^*)$ is not contained in $\Lambda^{2p+1}(\mathcal D^*)$.
Next we give a very brief discussion of \emph{transverse} positive closed currents of bidegree $(p, p)$ on $M$, $0\leq p\leq n$; see \cite{VC} for a similar treatment.
We simply treat them as closed differential forms of bidegree $(p, p)$ in $\Lambda^{2p}(\mathcal D^*)$ with measurable coefficients which are invariant under the Reeb flow. The total variation of such a current $T$ is controlled by
\[
\|T\|:=\int_M T\wedge (\omega^T)^{n-p}\wedge \eta.
\]
Given $\phi\in \text{PSH}(M, \xi, \omega^T)$, we write $\phi\in L^1(T)$ if $\phi$ is integrable with respect to the measure $T\wedge (\omega^T)^{n-p}\wedge \eta$. In this case, the current $\phi T$ is well-defined and we write
\[
\begin{split}
&\omega_\phi\wedge T:=\omega^T\wedge T+dd^c_B(\phi T)\\
&\omega_\phi\wedge T\wedge (\omega^T)^{n-p-1}\wedge \eta= T\wedge (\omega^T)^{n-p}\wedge \eta+dd^c_B(\phi T)\wedge (\omega^T)^{n-p-1}\wedge \eta.
\end{split}
\]
Positivity is a local notion and we simply think of $T$ as a positive closed $(p, p)$-form on each foliation chart. Hence $\omega_\phi\wedge T$ is also a transverse closed positive $(p+1, p+1)$-form. Note that we think of transverse positive closed currents of bidegree $(p, p)$ as linear functionals on $\Lambda^{n-p, n-p}(\mathcal D^*)$; hence the test forms are of bidegree $(n-p, n-p)$. A main point is that the test forms are not restricted to basic forms. In other words, given such a current $T$ and $\gamma\in \Lambda^{n-p, n-p}(\mathcal D^*)$, we have the pairing
\[
\gamma\rightarrow \int_M \gamma\wedge T\wedge \eta.
\]
When $\phi\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$, it follows that $\phi\in L^1(T)$ for any transverse positive closed current $T$ of bidegree $(p, p)$, and hence one can define inductively $\omega_\phi^k\wedge (\omega^T)^{n-k}$; in particular, this leads to the definition of the transverse complex Monge-Ampere operator $\omega_\phi^n$ of bidegree $(n, n)$.
Moreover, the cocycle condition on transverse holomorphic structure ensures that $\omega_\phi^k\wedge (\omega^T)^{n-k}$ is well-defined on $M$.
In particular $\omega_\phi^n\wedge \eta$ defines a positive Borel measure on $M$.
It is more convenient to consider this construction locally in foliation charts $W_\alpha=(-\delta, \delta)\times V_\alpha$.
By taking test forms $\gamma\in \Lambda^{n-p, n-p}(\mathcal D^*)$ with compact support, we can consider $T\wedge \eta$ on a foliation chart for a transverse positive closed $(p, p)$ current $T$. In particular this gives a local description of the complex Monge-Ampere measures $\omega_\phi^k\wedge (\omega^T)^{n-k}\wedge \eta$.
By taking test functions $f$ supported in a foliation chart, the measure $\omega_\phi^k\wedge (\omega^T)^{n-k}\wedge \eta$ for each $k$ is regarded as the product measure $\omega_\phi^k\wedge (\omega^T)^{n-k}\wedge dx$ on $W_\alpha$, where $\xi=\partial_x$ is the Reeb direction. Note that $\omega_\phi^k\wedge (\omega^T)^{n-k}$ is defined on $V_\alpha$ in the usual way as in the K\"ahler setting, and the cocycle condition on the transverse holomorphic structure ensures that $\omega_\phi^k\wedge (\omega^T)^{n-k}$ is well-defined as a transverse positive closed current of bidegree $(n, n)$. On each foliation chart, we have $\omega_\phi^k\wedge (\omega^T)^{n-k}\wedge\eta=\omega_\phi^k\wedge (\omega^T)^{n-k}\wedge dx$ as a product measure. This coincides with the local description given by van Coevering \cite{VC}[Section 2].
Moreover, when $u, v\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$, $du\wedge d^c_B v\wedge T$ can also be defined, where $T$ is a transverse closed positive current of bidegree $(n-1, n-1)$.
By the polarization formula we only need to define $du\wedge d^c_B u\wedge T$. By adding a positive constant if necessary, we assume $u\geq 0$. Then we define
\begin{equation}
du\wedge d^c_B u\wedge T:=\frac{1}{2} dd^c_B (u^2)\wedge T- udd^c_B u\wedge T.
\end{equation}
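When $u$ is smooth, this definition agrees with the naive one: by the Leibniz rule,
\[
\frac{1}{2} dd^c_B (u^2)=\frac{1}{2}d\left(2u\, d^c_B u\right)=du\wedge d^c_B u+u\, dd^c_B u,
\]
so $\frac{1}{2} dd^c_B (u^2)\wedge T-u\,dd^c_B u\wedge T=du\wedge d^c_B u\wedge T$, as expected.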
In particular, $du\wedge d^c_B u\wedge T$ is positive if $T$ is a transverse closed positive current of bidegree $(n-1, n-1)$. We can then define $du\wedge d^c_B u\wedge T\wedge \eta$ as a positive
Borel measure. Using the polarization formula, we have the following Cauchy-Schwarz inequality, for $u, v\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$,
\begin{equation}\label{cs01}
|\int_M du\wedge d^c_B v\wedge T\wedge \eta|^2\leq \left(\int_M du\wedge d^c_B u \wedge T\wedge \eta\right)\left(\int_M dv\wedge d^c_B v \wedge T\wedge \eta\right)
\end{equation}
We also record the following Stokes' theorem in the Sasaki setting; its proof follows the Bedford-Taylor theory as in the K\"ahler setting via approximation (Lemma \ref{BK}); see \cite{VC}[Theorem 2.3.1, Proposition 2.3.2].
\begin{lemma}Let $u, v, \phi\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$, then for each $0\leq k\leq n-1$, we have
\begin{equation}
\begin{split}
\int_M u dd^c_B v\wedge \omega_\phi^{k}\wedge (\omega^T)^{n-k-1}\wedge \eta=&\int_M v dd^c_B u \wedge \omega_\phi^{k}\wedge (\omega^T)^{n-k-1}\wedge \eta\\
=&-\int_M d u\wedge d^c_B v \wedge \omega_\phi^{k}\wedge (\omega^T)^{n-k-1}\wedge \eta
\end{split}
\end{equation}
\end{lemma}
We record a basic inequality in the Sasaki setting, usually referred to as the Chern-Levine-Nirenberg inequality.
\begin{prop}[Chern-Levine-Nirenberg inequalities]\label{CLN} Let $T$ be a positive closed current
of bidegree $(p, p)$ on $M$ and $\phi\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$. Then $\|\omega_\phi\wedge T\|=\|T\|$. Moreover, if $\psi\in \text{PSH}(M, \xi, \omega^T)\cap L^1(T)$, then $\psi\in L^1(\omega_\phi\wedge T)$ and
\begin{equation}\label{cln01}
\|\psi\|_{L^1(T\wedge \omega_\phi)}\leq \|\psi\|_{L^1(T)}+(2\max\{\sup \psi, 0\}+\sup \phi-\inf \phi) \|T\|.
\end{equation}
\end{prop}
\begin{proof}By Stokes' theorem, we have $\int_M dd^c_B (\phi T)\wedge (\omega^T)^{n-p-1}\wedge \eta=0$, hence
\[
\|\omega_\phi\wedge T\|=\int_M \omega^T\wedge T\wedge (\omega^{T})^{n-p-1}\wedge \eta=\|T\|.
\]
To prove \eqref{cln01}, we first assume $\psi\leq 0, \phi\geq 0$. By assumption, $\psi\in L^1(T)$, then
\[
\|\psi\|_{L^1(T\wedge \omega_\phi)}:=\int_M -\psi T\wedge \omega_\phi \wedge (\omega^T)^{n-p-1}\wedge \eta =\|\psi\|_{L^1(T)}+\int_M -\psi dd^c_B(\phi T)\wedge (\omega^T)^{n-p-1}\wedge \eta
\]
By Stokes' theorem we compute
\[
\begin{split}
\int_M -\psi dd^c_B(\phi T)\wedge (\omega^T)^{n-p-1}\wedge \eta=&\int_M dd^c_B(-\psi) \wedge \phi T\wedge (\omega^T)^{n-p-1}\wedge \eta\\
\leq &\int_M \phi T\wedge (\omega^T)^{n-p}\wedge \eta\\
\leq &\sup_M \phi \int_M T\wedge (\omega^T)^{n-p}\wedge \eta=(\sup_M\phi) \|T\|.
\end{split}
\]
Now suppose $\sup \psi>0$.
Replacing $\phi$ by $\phi-\inf\phi$, we compute
\[
\|\psi\|_{L^1(T\wedge \omega_\phi)}\leq \int_M (2\sup \psi-\psi)T\wedge \omega_\phi \wedge (\omega^T)^{n-p-1}\wedge \eta
\]
The same argument as above leads to \eqref{cln01} for the general case.
\end{proof}
For a Borel subset $E$ on a Sasaki manifold $(M,\xi,\omega^T)$, we define the capacity as
\begin{equation*}
\text{cap}_{\omega^T}(E):=\sup\{\int_E \omega_{\varphi}^n \wedge \eta: \varphi \in \text{PSH}(M,\xi,\omega^T), 0 \leq \varphi \leq 1 \}
\end{equation*}
It is obvious that $\text{cap}_{\omega^T}(\cup_{k=1}^{\infty}E_k)\leq\sum\limits_{k=1}^{\infty}\text{cap}_{\omega^T}(E_k)$ for a sequence of Borel sets $E_k$.
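For orientation, we note the normalization (an immediate consequence of Proposition \ref{CLN}, since $\|\omega_\varphi\wedge T\|=\|T\|$ applied inductively gives $\int_M \omega_\varphi^n\wedge\eta=\int_M(\omega^T)^n\wedge\eta$ for every competitor $\varphi$): taking $E=M$,
\[
\text{cap}_{\omega^T}(M)=\int_M (\omega^T)^n\wedge \eta.
\]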
We have the following,
\begin{prop}\label{capacity0}Let $\phi\in \text{PSH}(M, \xi, \omega^T)$ with $0\leq \phi\leq 1$ and $\psi\in \text{PSH}(M, \xi, \omega^T)$ such that $\psi\leq 0$. Then
\begin{equation}\label{capacity001}
\int_M -\psi \omega_\phi^n\wedge \eta\leq \int_M (-\psi)(\omega^T)^n\wedge \eta+n \int_M (\omega^T)^n\wedge \eta
\end{equation}
\end{prop}
\begin{proof}We only need to prove \eqref{capacity001} for the canonical cutoffs $\psi_k=\max\{\psi, -k\}$ ($-\psi_k$ increases to $-\psi$, so we can apply the monotone convergence theorem). We have the following \[
\begin{split}
\int_M -\psi_k\omega_\phi^n\wedge \eta=&\int_M -\psi_k\omega_\phi^{n-1}\wedge (\omega^T+\sqrt{-1}\partial_B\bar\partial_B \phi)\wedge \eta\\
=&\int_M -\psi_k\omega_\phi^{n-1}\wedge \omega^T\wedge \eta+\int_M -\psi_k \omega_\phi^{n-1}\wedge\sqrt{-1}\partial_B\bar\partial_B \phi\wedge \eta\\
=&\int_M -\psi_k\omega_\phi^{n-1}\wedge \omega^T\wedge \eta+\int_M \phi \omega_\phi^{n-1}\wedge (-\sqrt{-1}\partial_B\bar\partial_B \psi_k)\wedge \eta\\
\leq& \int_M -\psi_k\omega_\phi^{n-1}\wedge \omega^T\wedge \eta+\int_M (\omega_\phi)^{n-1}\wedge \omega^T\wedge \eta\\
\leq& \int_M -\psi_k\omega_\phi^{n-1}\wedge \omega^T\wedge \eta+\int_M (\omega^T)^{n}\wedge \eta
\end{split}
\]
We can then proceed inductively to obtain \eqref{capacity001}. Note that the argument above is a special case of \eqref{cln01}.
\end{proof}
\begin{prop}\label{capacity1}
Suppose that $u \in \text{PSH}(M,\xi,\omega^T)$ and $u\leq0$. Then for $t>0$ we have
\begin{equation*}
\text{cap}_{\omega^T}(\{u <-t\}) \leq \frac{1}{t}(\int_M (-u) (\omega^T)^n\wedge\eta+n\int_M(\omega^T)^n\wedge\eta)
\end{equation*}
\end{prop}
\begin{proof}This is a direct consequence of Proposition \ref{capacity0}. Denote $K_t=\{u<-t\}$; then for any $\phi\in \text{PSH}(M,\xi,\omega^T)$ with $0\leq \phi\leq 1$,
\[
\begin{split}
\int_{K_t}\omega_\phi^n\wedge \eta\leq& \frac{1}{t}\int_M -u\, \omega_\phi^n\wedge \eta\\
\leq& \frac{1}{t}\left(\int_M -u\, (\omega^T)^n\wedge \eta+n \int_M (\omega^T)^n\wedge \eta\right).
\end{split}
\]
Taking the supremum over all such $\phi$ gives the result.
\end{proof}
\begin{prop}\label{capacity2}
Suppose that $u_k, u \in \text{PSH}(M,\xi,\omega^T) \cap L^{\infty}$ and $u_k$ decreases to $u$. Then for $\delta>0$ we have
\begin{equation*}
\text{cap}_{\omega^T}(\{u_k>u+\delta\}) \rightarrow 0 \quad \text{as}\; k\rightarrow \infty.
\end{equation*}
\end{prop}
\begin{proof}This proceeds exactly as in \cite{GZ01}[Proposition 3.7]. We sketch the argument briefly. We assume $\text{Vol}(M)=1$ for simplicity. Fix $\delta>0$ and $\phi\in \text{PSH}(M, \xi, \omega^T)$ such that $0\leq \phi\leq 1$. We have
\[
\int_{\{u_k>u+\delta\}} \omega_\phi^n\wedge \eta\leq \delta^{-1}\int_M (u_k-u) \omega_\phi^n\wedge \eta
\]
By Stokes' theorem, we write
\begin{equation*}
\begin{split}
\int_M (u_k-u) \omega_\phi^n\wedge \eta=&\int_M (u_k-u)\, \omega^T\wedge \omega_\phi^{n-1}\wedge \eta+\int_M (u_k-u)\, dd^c_B \phi\wedge \omega_\phi^{n-1}\wedge \eta\\
=&\int_M (u_k-u)\, \omega^T\wedge \omega_\phi^{n-1}\wedge \eta-\int_M d(u_k-u) \wedge d^c_B \phi\wedge \omega_\phi^{n-1}\wedge \eta
\end{split}
\end{equation*}
By the Cauchy-Schwarz inequality, setting $f_k=u_k-u\geq 0$,
\[
|\int_M d(u_k-u) \wedge d^c_B \phi\wedge \omega_\phi^{n-1}\wedge \eta|^2\leq \int_M d f_k\wedge d^c_B f_k\wedge \omega_\phi^{n-1}\wedge \eta \int_M d \phi\wedge d^c_B \phi\wedge \omega_\phi^{n-1}\wedge \eta
\]
We compute
\[
\int_M d \phi\wedge d^c_B \phi\wedge \omega_\phi^{n-1}\wedge \eta=\int_M \phi (-dd^c_B\phi)\wedge \omega_\phi^{n-1}\wedge \eta\leq \int_M \phi\, \omega^T\wedge \omega_\phi^{n-1}\wedge \eta\leq 1
\]
Similarly, we compute
\[
\int_M d f_k\wedge d^c_B f_k\wedge \omega_\phi^{n-1}\wedge \eta=\int_M f_k (dd^c_B u-dd^c_B u_k)\wedge \omega_\phi^{n-1}\wedge \eta\leq \int_M f_k \omega_u\wedge \omega_\phi^{n-1}\wedge \eta.
\]
Combining these, we obtain
\[
\int_M (u_k-u) \omega_\phi^n\wedge \eta\leq \int_M (u_k-u)\, \omega^T\wedge \omega_\phi^{n-1}\wedge \eta+ (\int_M (u_k-u) \omega_u\wedge \omega_\phi^{n-1}\wedge \eta)^{1/2}.
\]
Suppose $u_k-u\leq c_0$ for a fixed positive constant $c_0\geq 1$. Then we have
\[
\int_M (u_k-u) \omega_\phi^n\wedge \eta\leq \sqrt{c_0} (\int_M (u_k-u)\, \omega^T\wedge \omega_\phi^{n-1}\wedge \eta)^{1/2}+ (\int_M (u_k-u) \omega_u\wedge \omega_\phi^{n-1}\wedge \eta)^{1/2}.
\]
Hence we have
\[
\int_M (u_k-u) \omega_\phi^n\wedge \eta\leq \sqrt{2c_0} (\int_M (u_k-u)\, (\omega^T+\omega_u)\wedge \omega_\phi^{n-1}\wedge \eta)^{1/2}.
\]
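The last combination uses the elementary inequality $\sqrt{a}+\sqrt{b}\leq \sqrt{2(a+b)}$ for $a, b\geq 0$ (Cauchy-Schwarz in $\mathbb{R}^2$), together with $\sqrt{c_0}\geq 1$:
\[
\sqrt{c_0}\, a^{1/2}+b^{1/2}\leq \sqrt{c_0}\,(a^{1/2}+b^{1/2})\leq \sqrt{2c_0}\,(a+b)^{1/2}.
\]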
We can proceed inductively by replacing $\omega_\phi$ by $\omega^T+\omega_u$ to obtain
\[
\int_M (u_k-u) \omega_\phi^n\wedge \eta\leq (\sqrt{2c_0})^n (\int_M (u_k-u)\, (\omega^T+\omega_u)^{n}\wedge \eta)^{1/2^n}
\]
The dominated convergence theorem implies that the right-hand side tends to zero, independently of $\phi$. This completes the proof.
\end{proof}
As a consequence, we have the following,
\begin{thm}\label{quasicontinuity}
Let $u \in \text{PSH}(M,\xi,\omega^T)$. Then for any $\epsilon>0$ there exists an open subset $O_{\epsilon} \subset M$ such that $\text{cap}_{\omega^T}(O_{\epsilon}) < \epsilon$ and $u$ is continuous on $M-O_{\epsilon}$.
\end{thm}
\begin{proof}
By Proposition \ref{capacity1} there exists $t_0>0$ such that $\text{cap}_{\omega^T}(O_0) <\frac{\epsilon}{2}$ for the open subset $O_0=\{u<-t_0\}$. Take the cutoff $u_{t_0}=\max\{u,-t_0\} \in \text{PSH}(M,\xi,\omega^T)$ and choose a sequence $u_k \in {\mathcal H}$ decreasing to $u$; then $v_k:=\max\{u_k,-t_0\}$ decreases to $u_{t_0}$. By Proposition \ref{capacity2} we can choose a subsequence $v_{k_j}$ such that
$\text{cap}_{\omega^T}(O_j) < \frac{\epsilon}{2^{j+1}}$ for the open subset $O_j=\{v_{k_j}>u_{t_0}+\frac{1}{j}\}$. Then for the open subset $O_{\epsilon}=\cup_{j=0}^{\infty}O_j$ we have $\text{cap}_{\omega^T}(O_{\epsilon})<\epsilon$. Moreover $v_{k_j}$ converges uniformly to $u_{t_0}$ on $M-O_{\epsilon}$, hence $u_{t_0}$ is continuous on $M-O_{\epsilon}$; since $u=u_{t_0}$ on $M-O_0\supset M-O_{\epsilon}$, $u$ is continuous on $M-O_{\epsilon}$.
\end{proof}
\begin{rmk}The discussions above are taken from the K\"ahler setting \cite{GZ01}[Section 3]. Note that in \eqref{cln01} it is necessary to replace $\sup \psi$ by $\max\{\sup \psi, 0\}$ (similarly one needs to replace $\sup_X\psi$ by $\max\{\sup_X \psi, 0\}$ in \cite{GZ01}[Proposition 3.1]).\end{rmk}
We also need the following uniqueness result in the Sasaki setting; see \cite{GZ}[Theorem 3.3].
\begin{thm}Suppose $u, v\in {\mathcal E}_1(M, \xi, \omega^T)$ such that
\[
\omega_u^n\wedge \eta=\omega_v^n\wedge \eta,
\]
then $u-v=\text{const}$.
\end{thm}
\begin{proof}This follows exactly as in \cite{GZ}[Theorem 3.3] and we sketch the argument. The first step is to show that for $u\in {\mathcal E}_1(M, \xi, \omega^T)$ and its canonical cutoffs $u_j=\max\{u, -j\}$, the gradients $\nabla u_j\in L^2(d\mu_g)$ have uniformly bounded $L^2$ norm (see \cite{GZ}[Proposition 3.2]). We can assume that $u\leq 0$ and hence $u_j\leq 0$. For $\phi\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$ with $\phi\leq 0$ and any basic positive closed current $T$ of type $(n-1, n-1)$, we have
\[
\int_M (-\phi)\omega^T\wedge T\wedge \eta=\int_M (-\phi) (\omega_\phi-dd^c_B\phi)\wedge T\wedge \eta=\int_M (-\phi)\omega_\phi\wedge T\wedge \eta-\int_M d\phi\wedge d^c_B \phi\wedge T\wedge \eta\leq \int_M (-\phi)\omega_\phi\wedge T\wedge \eta
\]
An inductive argument applied with $T=\omega_\phi^k\wedge (\omega^T)^{n-k-1}$ gives
\begin{equation}\label{el2}
0\leq \int_M d\phi\wedge d^c_B \phi\wedge T\wedge \eta\leq \int_M (-\phi)\omega_\phi^n\wedge \eta.
\end{equation}
Taking $\phi=u_j$ in \eqref{el2} and noting that the right-hand side is uniformly bounded, we see that $\nabla u_j$ is uniformly bounded in $L^2(d\mu_g)$; hence $\nabla u\in L^2(d\mu_g)$.
We assume that $-1\leq u, v\leq 0$ and $\text{Vol}(M)=1$ (in the bounded case we may normalize, since $u/A\in \text{PSH}(M, \xi, \omega^T)$ for $A\geq 1$). Set $f=(u-v)/2$ and $h=(u+v)/2$. We need to establish that $\nabla f=0$ by showing that $\int_M df\wedge d^c_B f\wedge (\omega^T)^{n-1}\wedge \eta=0$. If we assume $u, v$ are bounded, then we have
\begin{equation}\label{uni01}
\int_M df\wedge d^c_B f\wedge \omega^{n-1}_h\wedge \eta\leq \sum \int_M df\wedge d^c_B f\wedge \omega_u^k\wedge \omega^{n-1-k}_v\wedge \eta=-\int_M \frac{f}{2}(\omega_u^n-\omega_v^n)\wedge \eta,
\end{equation}
where we use the fact that $dd^c_B f=(\omega_u-\omega_v)/2$.
We shall also establish the following a priori bound, when $u, v$ are bounded,
\begin{equation}\label{uni02}
\int_M df\wedge d^c_B f\wedge (\omega^T)^{n-1}\wedge \eta \leq 3^n \left( \int_M df\wedge d^c_B f\wedge \omega^{n-1}_h\wedge \eta\right)^{1/2^{n-1}}.
\end{equation}
We apply \eqref{uni01} and \eqref{uni02} to the canonical cutoffs $u_j, v_j$ (writing $f_j, h_j$ correspondingly and using Proposition \ref{weakcon3}) to obtain \[
\lim \int_M df_j\wedge d^c_B f_j\wedge (\omega^T)^{n-1}\wedge \eta=0
\]
We can then conclude that
\[
\int_M df\wedge d^c_B f\wedge (\omega^T)^{n-1}\wedge \eta=0.
\]
This implies that $u-v$ is a constant. To establish \eqref{uni02}, we need several observations as follows.
First observe that for $l=n-2, \cdots, 0$,
\[
\int_M (-h)\omega^{2+l}_h\wedge (\omega^T)^{n-2-l}\wedge \eta\leq \int_M (-h)(\omega^T)^n\wedge \eta\leq 1,
\]
where the last inequality follows from $-h\leq 1$ and the normalization of the volume.
We can then apply the following inequality inductively with $T=\omega_h^l\wedge (\omega^T)^{n-l-2}$, $l=0, \cdots, n-2$:
\begin{equation}\label{uni03}
\int_M df\wedge d^c_B f\wedge \omega^T\wedge T\wedge \eta\leq 3 \left(\int_Mdf\wedge d^c_B f\wedge \omega_h\wedge T\wedge \eta\right)^{1/2},
\end{equation}
which proves \eqref{uni02}. Now we establish \eqref{uni03}. We write
\[
df\wedge d^c_B f\wedge \omega^T=df\wedge d^c_B f\wedge \omega_h-df\wedge d^c_B f\wedge dd^c_B h
\]
hence we obtain, integrating by parts,
\[
\int_M df\wedge d^c_B f\wedge \omega^T\wedge T\wedge \eta=\int_M df\wedge d^c_B f\wedge \omega_h\wedge T\wedge \eta+\int_M df\wedge d^c_Bh\wedge \frac{\omega_u-\omega_v}{2}\wedge T\wedge \eta
\]
By the Cauchy--Schwarz inequality, we have
\[
|\int_M df\wedge d^c_Bh\wedge\omega_u\wedge T\wedge \eta|^2\leq 4 \int_M df\wedge d^c_Bf\wedge\omega_h\wedge T\wedge \eta \int_M dh\wedge d^c_Bh\wedge\omega_h\wedge T\wedge \eta
\]
We can get a similar control
\[
|\int_M df\wedge d^c_Bh\wedge\omega_v\wedge T\wedge \eta|^2\leq 4 \int_M df\wedge d^c_Bf\wedge\omega_h\wedge T\wedge \eta \int_M dh\wedge d^c_Bh\wedge\omega_h\wedge T\wedge \eta
\]
Clearly we have the following (using $h\leq 0$):
\[
\int_M dh\wedge d^c_Bh\wedge\omega_h\wedge T\wedge \eta\leq \int_M (-h) \omega_h^2\wedge T\wedge \eta\leq 1.
\]
Combining these estimates, we conclude that
\[
\int_M df\wedge d^c_B f\wedge \omega^T\wedge T\wedge \eta\leq \int_M df\wedge d^c_B f\wedge \omega_h\wedge T\wedge \eta+2\left( \int_M df\wedge d^c_Bf\wedge\omega_h\wedge T\wedge \eta\right)^{1/2}.
\]
The last observation is that
\[
\int_M df\wedge d^c_B f\wedge \omega_h\wedge T\wedge \eta=\frac{1}{4}\int_M (u-v)(\omega_v-\omega_u)\wedge \omega_h\wedge T\wedge \eta\leq \int_M (-h) \omega_h^2\wedge T\wedge \eta\leq 1.
\]
This completes the proof of \eqref{uni03} by combining the two inequalities above.
\end{proof}
\subsection{Functionals in finite energy class ${\mathcal E}_1$ and compactness}
We briefly discuss well-known functionals in K\"ahler geometry and their properties over the finite energy class ${\mathcal E}_1$; see \cite{D4}[Section 3.8].
The energy functionals include the Monge--Amp\`ere energy $\mathbb{I}$ and Aubin's $I$-functional on ${\mathcal E}_1$; see \cite{Aubin, BBGZ10, BBGZ13, BBEGZ, D4} for the K\"ahler setting. These results have a direct adaptation in the Sasaki setting.
Recall Aubin's $I$-functional in the Sasaki setting: for $u, v\in {\mathcal H}$,
\begin{equation}\label{aubin01}
I(u, v):=I(\omega_u, \omega_v)=\frac{1}{n!}\int_M (v-u) (\omega_u^n-\omega_v^n)\wedge \eta.
\end{equation}
We also recall the $J$-functional
\begin{equation}
J(u, v):=J(\omega_u, \omega_v)=\frac{1}{n!}\int_M (v-u) \omega_u^n\wedge \eta-\mathbb{I}_{\omega_u}(v),
\end{equation}
where the $\mathbb{I}_{\omega_u}(v)$-functional is given by
\begin{equation}
\mathbb{I}_{\omega_u}(v)=\frac{1}{(n+1)!}\int_M (v-u)\sum_{k=0}^n \omega_u^k\wedge \omega_v^{n-k}\wedge \eta.
\end{equation}
We define the $\mathbb{I}$-functional (with the base $\omega^T$) on ${\mathcal H}$,
\begin{equation}\label{maenergy}
\mathbb{I}_{\omega^T}(u)=\frac{1}{(n+1)!}\int_M u\sum_{k=0}^n \omega_u^k\wedge \omega^{n-k}_T\wedge \eta.
\end{equation}
The $\mathbb{I}$-functional is also called the Monge-Amp\`ere energy, since if $t\rightarrow v_t\in {\mathcal H}$ is smooth, then we have (as in K\"ahler setting),
\begin{equation}\label{maderivative}
\frac{d}{dt}\mathbb{I}(v_t)=\frac{1}{n!}\int_M \dot v_t \omega_{v_t}^n \wedge \eta
\end{equation}
We mention that $I$ is symmetric in $u, v$ but $J$ is not. Both $I$ and $J$ are defined on the metric level, independent of the choice of normalization of the potentials $u, v$, while $\mathbb{I}_{\omega_u}(v)$ depends on the normalization of $u, v$.
When $u, v$ are bounded, Bedford--Taylor theory allows one to integrate by parts, and the $I$-functional takes the form
\begin{equation}\label{i04}
I(\omega_u, \omega_v)=\frac{1}{n!}\sum_{j=0}^{n-1} \int_M d(u-v)\wedge d^c_B(u-v)\wedge \omega_u^j\wedge \omega_v^{n-1-j}\wedge \eta
\end{equation}
Hence $I$ is nonnegative.
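For the reader's convenience, the identity behind \eqref{i04} (stated here without the normalizing factor) is the elementary factorization
\[
\omega_u^n-\omega_v^n=dd^c_B(u-v)\wedge \sum_{j=0}^{n-1}\omega_u^j\wedge \omega_v^{n-1-j},
\]
so that, integrating by parts,
\[
\int_M (v-u)(\omega_u^n-\omega_v^n)\wedge \eta=\sum_{j=0}^{n-1}\int_M d(u-v)\wedge d^c_B(u-v)\wedge \omega_u^j\wedge \omega_v^{n-1-j}\wedge \eta\geq 0.
\]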
We need more information about the $\mathbb{I}$-functional; see \cite{D4}[Section 3.7] for the K\"ahler setting. These properties in the Sasaki setting follow in a rather straightforward way from the pluripotential theory extended to the Sasaki setting. We include these facts here for completeness.
\begin{prop}
Given $u, v\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$, the following cocycle condition holds
\begin{equation}\label{cocycle}
\mathbb{I}(u)-\mathbb{I}(v)=\frac{1}{(n+1)!}\sum_{k=0}^n\int_M (u-v)\omega_u^k\wedge \omega_v^{n-k}\wedge \eta=\mathbb{I}_{\omega_v}(u).
\end{equation}
Moreover, we have $\mathbb{I}(u)$ is concave in $u$ in the sense that,
\begin{equation}\label{maenergy01}
\frac{1}{n!}\int_M (u-v)\omega_u^n\wedge \eta\leq \mathbb{I}(u)-\mathbb{I}(v)\leq \frac{1}{n!}\int_M (u-v)\omega_v^n\wedge \eta.
\end{equation}
As a direct consequence, if $u, v\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$ satisfy $u\geq v$, then $\mathbb{I}(u)\geq \mathbb{I}(v)$.
\end{prop}
\begin{proof}This follows almost identically as in \cite{D4}[Proposition 3.8], given the pluripotential theory established in the Sasaki setting in this paper. We sketch the argument.
When $u, v\in {\mathcal H}$, this follows exactly as in the K\"ahler setting, by taking $h_t=(1-t)u+tv$ and then using \eqref{maderivative} to compute directly. When $u, v\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$, we use $u_k, v_k\in {\mathcal H}$ decreasing to $u, v$ (Lemma \ref{BK}) respectively. Using Bedford--Taylor's theorem in the Sasaki setting \cite{VC}[Theorem 2.3.1], we proceed exactly as in the K\"ahler setting to conclude that $\mathbb{I}(u_k)\rightarrow \mathbb{I}(u)$ etc. For the estimate \eqref{maenergy01}, we compute
\[
\begin{split}
\int_M (u-v)\omega^k_u\wedge \omega_v^{n-k}\wedge \eta=&\int_M (u-v)\omega^{k-1}_u\wedge \omega_v^{n-k+1}\wedge \eta\\&+\int_M (u-v)\sqrt{-1}\partial\bar\partial (u-v)\wedge \omega_u^{k-1}\wedge \omega_v^{n-k}\wedge \eta\\
=&\int_M (u-v)\omega^{k-1}_u\wedge \omega_v^{n-k+1}\wedge \eta\\
&-\int_M \sqrt{-1}\partial(u-v)\wedge\bar\partial (u-v)\wedge \omega_u^{k-1}\wedge \omega_v^{n-k}\wedge \eta\\
\leq & \int_M (u-v)\omega^{k-1}_u\wedge \omega_v^{n-k+1}\wedge \eta
\end{split}
\]
Using the estimate inductively for the terms in \eqref{cocycle} leads to \eqref{maenergy01}. Clearly $\mathbb{I}(u)$ is concave in $u$ given \eqref{maenergy01}.
\end{proof}
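In the smooth case, the coefficient $\frac{1}{(n+1)!}$ in \eqref{cocycle} can be made explicit through a Beta integral: with $h_t=(1-t)v+tu$, \eqref{maderivative} gives
\[
\mathbb{I}(u)-\mathbb{I}(v)=\frac{1}{n!}\int_0^1\int_M (u-v)\,\omega_{h_t}^n\wedge \eta\, dt, \qquad \omega_{h_t}^n=\sum_{k=0}^n\binom{n}{k}t^k(1-t)^{n-k}\omega_u^k\wedge \omega_v^{n-k},
\]
and since $\int_0^1 t^k(1-t)^{n-k}dt=\frac{k!(n-k)!}{(n+1)!}$, each term carries the weight $\binom{n}{k}\frac{k!(n-k)!}{(n+1)!}=\frac{1}{n+1}$, which yields \eqref{cocycle}.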
The monotonicity property allows one to define $\mathbb{I}(u)$ for $u\in \text{PSH}(M, \xi, \omega^T)$ through a limit process, using the canonical cutoffs $u_k=\max\{u, -k\}$:
\[
\mathbb{I}(u)=\lim_{k\rightarrow \infty}\mathbb{I}(\max\{u, -k\}).
\]
Though the above limit is well-defined, it may equal $-\infty$. It turns out that $\mathbb{I}(u)$ is finite exactly on ${\mathcal E}_1(M, \xi, \omega^T)$.
We record some further properties of $\mathbb{I}(u)$ for $u\in {\mathcal E}_1(M, \xi, \omega^T)$. The proofs are almost identical and we shall skip the details, see \cite{D4}[Proposition 3.40, 3.42, 3.43; Lemma 3.41].
\begin{prop}\label{darvascontinuity01}Let $u\in \text{PSH}(M, \xi, \omega^T)$. Then $-\infty<\mathbb{I}(u)$ if and only if $u\in {\mathcal E}_1(M, \xi, \omega^T)$. Moreover,
\begin{equation}
|\mathbb{I}(u_0)-\mathbb{I}(u_1)|\leq d_1(u_0, u_1), \quad u_0, u_1\in {\mathcal E}_1(M, \xi, \omega^T).
\end{equation}
\end{prop}
\begin{prop}Suppose $u_0, u_1\in {\mathcal E}_1(M, \xi, \omega^T)$ and let $t\rightarrow u_t$ be the finite energy geodesic connecting $u_0, u_1$. Then $t\rightarrow \mathbb{I}(u_t)$ is linear in $t$. We also have the following distance formula:
\[
d_1(u_0, u_1)=\mathbb{I}(u_0)+\mathbb{I}(u_1)-2\mathbb{I}(P(u_0, u_1))
\]
In particular, $d_1(u_0, u_1)=\mathbb{I}(u_0)-\mathbb{I}(u_1)$ if $u_0\geq u_1$.
\end{prop}
We have the following (see \cite{D4}[Lemma 3.47])
\begin{lemma}\label{i02}Suppose $u, u^j, v, v^j\in {\mathcal E}_1(M, \xi, \omega^T)$ and $u^j\searrow u$ and $v^j\searrow v$. Then the following hold:
\begin{equation}\label{i01}
I(u, v)=I(u, \max{\{u, v\}})+I(\max{\{u, v\}}, v)
\end{equation}
Moreover, $\lim_{j\rightarrow \infty} I(u^j, v^j)=I(u, v)$.
\end{lemma}
\begin{proof}By Proposition \ref{GBTI}, we have \[\chi_{\{v>u\}}\omega^n_{\max \{u, v\}}\wedge \eta=\chi_{\{v>u\}}\omega^n_v\wedge \eta.\]
Hence it follows that
\[
I(u, \max{\{u, v\}})=\frac{1}{n!}\int_{\{v>u\}} (u-v)(\omega_v^n-\omega_u^n)\wedge \eta
\]
Interchanging $u\leftrightarrow v$, we get $I(v, \max{\{u, v\}})=\frac{1}{n!}\int_{\{u>v\}} (u-v)(\omega_v^n-\omega_u^n)\wedge \eta$. This proves \eqref{i01}. We write
\[
I(u^j, v^j)=I(u^j, \max{\{u^j, v^j\}})+I(v^j, \max{\{u^j, v^j\}})
\]
Since $u^j, v^j\leq \max\{u^j, v^j\}$, we can apply Proposition \ref{weakcon3} to conclude $I(u^j, \max{\{u^j, v^j\}})\rightarrow I(u, \max{\{u, v\}})$ and $I(v^j, \max{\{u^j, v^j\}})\rightarrow I(v, \max{\{u, v\}})$, using the formula \eqref{aubin01}. This completes the proof.
\end{proof}
We have the following well-known inequalities,
\begin{prop}\label{i00}For $u, v\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$, we have
\[
\frac{1}{n+1}I(u, v)\leq J(u, v)\leq \frac{n}{n+1} I(u, v)
\]
Moreover, $J(u, v)$ is convex in $v$ since $\mathbb{I}_{\omega_u}(v)$ is concave in $v$.
\end{prop}
\begin{proof} This is well-known, by direct computation \cite{PG}[Proposition 4.2.1] for $u, v\in {\mathcal H}$. A direct approximation argument using Lemma \ref{BK} shows that this generalizes to $u, v\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$.
\end{proof}
The functionals $I, J, \mathbb{I}$ are well-defined for $u, v\in {\mathcal E}_1(M, \xi, \omega^T)$ (see Proposition \ref{boundedenergy}). Note that \eqref{maenergy01} and
Proposition \ref{i00} both hold in ${\mathcal E}_1(M, \xi, \omega^T)$ (see \cite{BBGZ10, BBGZ13} for the K\"ahler setting).
This follows by an approximation argument applying Proposition \ref{weakcon3}.
Next we prove the following, as a direct adaptation of \cite{BBEGZ}[Theorem 1.8]:
\begin{lemma}\label{bbegz01}There exists a positive constant $C=C(n)$ such that for all $u, v, w\in {\mathcal E}_1(M, \xi, \omega^T)$,
\begin{equation}\label{i03}
I(u, v)\leq C(I(u, w)+I(v, w))
\end{equation}
\end{lemma}
\begin{proof}
With Lemma \ref{i02}, we only need to argue that \eqref{i03} holds for bounded potentials, with $u, v, w$ replaced by the canonical cutoffs $u_k, v_k, w_k$. The proof follows exactly as in \cite{BBEGZ}[Theorem 1.8, Lemma 1.9], and we include it for completeness.
For $u, v, \psi\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$, set
\[
\|d(u-v)\|_\psi:=\left(\int_M d(u-v)\wedge d^c_B(u-v)\wedge \omega_\psi^{n-1}\wedge \eta\right)^{\frac{1}{2}}
\]
Using \eqref{i04}, it is straightforward to see that
\begin{equation}
\label{i05}
\|d(u-v)\|_{\frac{u+v}{2}}^2\leq I(u, v)\leq 2^{n-1}\|d(u-v)\|_{\frac{u+v}{2}}^2.
\end{equation}
We need the following estimate: there exists a constant $C=C(n)$ such that for $u, v, \psi\in \text{PSH}(M, \xi, \omega^T)\cap L^\infty$ (see \cite{BBEGZ}[Lemma 1.9]),
\begin{equation}\label{i06}
\|d(u-v)\|^2_\psi\leq C I(u, v)^{1/2^{n-1}}\left(I(u, \psi)^{1-1/2^{n-1}}+I(v, \psi)^{1-1/2^{n-1}}\right)
\end{equation}
With \eqref{i06} we prove \eqref{i03}. Taking $\phi=\frac{u+v}{2}$, the triangle inequality gives,
\[
\|d(u-v)\|_\phi\leq \|d(u-w)\|_\phi+\|d(v-w)\|_\phi.
\]
Using \eqref{i05} and \eqref{i06} we have
\[
\begin{split}
I(u, v)\leq 2^{n-1} \|d(u-v)\|_{\phi}^2\leq& C (\|d(u-w)\|_\phi^2+\|d(v-w)\|_\phi^2)\\
\leq& CI(u, w)^{1/2^{n-1}}\left(I(u, \phi)^{1-1/2^{n-1}}+I(w, \phi)^{1-1/2^{n-1}}\right)\\
&+CI(v, w)^{1/2^{n-1}}\left(I(v, \phi)^{1-1/2^{n-1}}+I(w, \phi)^{1-1/2^{n-1}}\right)
\end{split}
\]
By Proposition \ref{i00}, we have
\[
I(u, \phi)\leq nI(u, v), I(v, \phi)\leq nI(v, u), I(w, \phi)\leq n (I(w, u)+I(w, v))
\]
It follows that
\[
I(u, v)\leq C(I(u, w)^{\frac{1}{2^{n-1}}}+I(v, w)^{\frac{1}{2^{n-1}}}) (I(u, v)^{1-1/2^{n-1}}+I(u, w)^{1-1/2^{n-1}}+I(v, w)^{1-1/2^{n-1}})
\]
We assume $I(u, v)\geq \max\{I(u, w), I(v, w)\}$ (otherwise we are done). Hence it follows
\[
I(u, v)^{1/2^{n-1}}\leq C (I(u, w)^{\frac{1}{2^{n-1}}}+I(v, w)^{\frac{1}{2^{n-1}}})
\]
This is sufficient to prove that
\[
I(u, v)\leq C (I(u, w)+I(v, w))
\]
Now we establish \eqref{i06} (see \cite{BBEGZ}[Lemma 1.9]). First observe that
\[
\|d(u-v)\|_\psi\leq \|d(u-\psi)\|_\psi+\|d(v-\psi)\|_{\psi}\leq I(u, \psi)^{1/2}+I(v, \psi)^{1/2}
\]
Hence we have
\[
\|d(u-v)\|_\psi^2\leq 2(I(u, \psi)+I(v, \psi))
\]
Hence if $I(u, v)\geq I(u, \psi)+I(v, \psi)$, clearly we have
\begin{equation}
\|d(u-v)\|_\psi^2\leq 2(I(u, \psi)+I(v, \psi))\leq C I(u, v)^{1/2^{n-1}}\left( I(u, \psi)^{1-\frac{1}{2^{n-1}}}+I(v, \psi)^{1-\frac{1}{2^{n-1}}}\right)
\end{equation}
Now we suppose $I(u, v)\leq I(u, \psi)+I(v, \psi)$.
Taking $\phi=\frac{u+v}{2}$, we consider
\[
b_p:=\int_M d(u-v)\wedge d^c_B(u-v)\wedge \omega^p_\psi\wedge \omega^{n-p-1}_\phi\wedge \eta.
\]
By \eqref{i05}, $b_0\leq I(u, v)$ and $b_{n-1}=\|d(u-v)\|_{\psi}^2$. We claim that, for $p=0, \cdots, n-2$,
\begin{equation}\label{i07}
b_{p+1}\leq b_p+4\sqrt{b_p I(\psi, \phi)}
\end{equation}
We compute
\[
\begin{split}
b_{p+1}-b_p=& \int_M d(u-v)\wedge d^c_B(u-v)\wedge dd^c_B (\psi-\phi)\wedge\omega^p_\psi\wedge \omega^{n-p-2}_\phi\wedge \eta\\
=&-\int_M d(u-v)\wedge dd^c_B(u-v)\wedge d^c_B (\psi-\phi)\wedge\omega^p_\psi\wedge \omega^{n-p-2}_\phi\wedge \eta\\
=&-\int_M d(u-v)\wedge (\omega_u-\omega_v)\wedge d^c_B (\psi-\phi)\wedge\omega^p_\psi\wedge \omega^{n-p-2}_\phi\wedge \eta
\end{split}
\]
Using Cauchy-Schwarz inequality, we compute
\[
\begin{split}
&\left|\int_M d(u-v)\wedge \omega_u\wedge d^c_B (\psi-\phi)\wedge\omega^p_\psi\wedge \omega^{n-p-2}_\phi\wedge \eta\right|\leq \left(\int_M d(u-v)\wedge d^c_B(u-v)\wedge \omega_u \wedge\omega^p_\psi\wedge \omega^{n-p-2}_\phi\wedge \eta\right)^{1/2}\\
&\;\;\;\;\quad\times \left(\int_M d(\psi-\phi)\wedge d^c_B(\psi-\phi)\wedge \omega_u \wedge\omega^p_\psi\wedge \omega^{n-p-2}_\phi\wedge \eta\right)^{1/2}\leq 2 \sqrt{b_p I(\psi, \phi)},
\end{split}
\]
where we have used that $\omega_u\leq 2\omega_\phi$ and \eqref{i04}.
We can get the same estimate for
\[
\left|\int_M d(u-v)\wedge \omega_v\wedge d^c_B (\psi-\phi)\wedge\omega^p_\psi\wedge \omega^{n-p-2}_\phi\wedge \eta\right|.
\]
This establishes \eqref{i07}. By Proposition \ref{i00}, we know that
\[
I(\psi, \phi)\leq (n+1)J(\psi, \phi)\leq \frac{n}{2}(I(\psi, u)+I(\psi, v))
\]
Denote $a=I(\psi, u)+I(\psi, v)$. We write \eqref{i07} as
\[
b_{p+1}\leq b_p+4\sqrt{b_p a}, \quad p=0, \cdots, n-2.
\]
Note that $b_0\leq I(u, v)\leq a$; hence it is evident that $b_p\leq C a$. It follows that, for $p=0, \cdots, n-2$, \[b_{p+1}\leq C \sqrt{b_p a}.\]
A direct computation gives that,
\[
b_{n-1}\leq C b_0^{1/2^{n-1}} a^{1-\frac{1}{2^{n-1}}}
\]
This completes the proof.
\end{proof}
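The exponents in the last step can be checked numerically. The following is a sanity check only (not part of the proof), assuming the model recursion $b_{p+1}=C\sqrt{b_p a}$; unrolling it from $b_0$ gives the closed form $b_{n-1}=C^{2(1-2^{1-n})}b_0^{1/2^{n-1}}a^{1-1/2^{n-1}}$, which is of the shape used above.

```python
# Sanity check of the iteration exponents (illustration only, not part of the proof):
# iterate the model recursion b_{p+1} = C * sqrt(b_p * a) for p = 0, ..., n-2
# and compare against the closed form obtained by unrolling it.
def iterate(b0, a, C, n):
    b = b0
    for _ in range(n - 1):
        b = C * (b * a) ** 0.5
    return b

def closed_form(b0, a, C, n):
    theta = 1.0 / 2 ** (n - 1)  # the exponent 1/2^(n-1) appearing on b0
    return C ** (2 * (1 - theta)) * b0 ** theta * a ** (1 - theta)

# e.g. iterate(0.3, 2.0, 5.0, 6) agrees with closed_form(0.3, 2.0, 5.0, 6)
```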
More generally, we have the following \cite{D4}[Proposition 3.48]
\begin{prop}\label{darvascontinuity02}Suppose $C>0$ and $\phi, \psi, u, v\in {\mathcal E}_1(M, \xi, \omega^T)$ satisfy
\[
-C\leq \mathbb{I}(\phi), \mathbb{I}(\psi), \mathbb{I}(u), \mathbb{I}(v), \sup_M \phi, \sup_M \psi, \sup_M u, \sup_M v\leq C
\]
Then there exists a continuous function $f_C:\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}$ depending only on $C$ with $f_C(0)=0$ such that
\begin{equation}
\begin{split}
&\left|\int_M \phi(\omega_u^n-\omega_v^n)\wedge \eta\right|\leq f_C(I(u, v))\\
&\left|\int_M (u-v)(\omega_\phi^n-\omega_\psi^n)\wedge \eta\right|\leq f_C(I(u, v))
\end{split}
\end{equation}
\end{prop}
\begin{proof}The proof is similar in spirit to Lemma \ref{bbegz01} and follows almost identically as in the K\"ahler setting; see \cite{D4}[Proposition 3.48]. Hence we skip the details.
\end{proof}
As a consequence, we have the following \cite{D4}[Theorem 3.46]
\begin{thm}\label{d101}Suppose $u_k, u\in {\mathcal E}_1(M, \xi, \omega^T)$. Then the following hold:
\begin{enumerate}
\item $d_1(u_k, u)\rightarrow 0$ if and only if $\int_M |u_k-u|\omega^n_T\wedge \eta\rightarrow 0$ and $\mathbb{I}(u_k)\rightarrow \mathbb{I}(u)$.
\item If $d_1(u_k, u)\rightarrow 0$, then $\omega^n_{u_k}\wedge \eta\rightarrow \omega_u^n\wedge \eta$ weakly and $\int_M |u_k-u|\omega^n_v\wedge \eta\rightarrow 0$ for $v\in {\mathcal E}_1(M, \xi, \omega^T)$.
\end{enumerate}
\end{thm}
\begin{proof}If $d_1(u_k, u)\rightarrow 0$, then Proposition \ref{darvascontinuity01} and Proposition \ref{darvascontinuity02} imply (1) and (2). For the reverse direction in (1), it follows almost identically as in the K\"ahler setting (see \cite{D4}[Proposition 3.52]), using Proposition \ref{darvascontinuity02} and an approximation argument. We sketch the process. First we have
\[
\int_M u_k \omega_u^n\wedge \eta\rightarrow \int_M u \omega_u^n\wedge \eta
\]
One then argues that
\[
I(u, u_k)\leq (n+1)\left(\mathbb{I}(u_k)-\mathbb{I}(u)-\int_M (u-u_k)\omega_u^n\wedge \eta\right)
\]
Hence this shows that $I(u, u_k)\rightarrow 0$. Using Proposition \ref{darvascontinuity02} and Lemma \ref{i02}, one can then show
\[
\int_M |u_k-u|\omega_u^n\wedge \eta, \int_M |u_k-u|\omega_{u_k}^n\wedge \eta \rightarrow 0, k\rightarrow \infty.
\]
This gives the desired convergence $d_1(u_k, u)\rightarrow 0$.
\end{proof}
As an application of results established above, we have the following compactness result in Sasaki setting, following \cite{D4}[Theorem 4.45].
\begin{thm}Let $u_j\in {\mathcal E}_1(M, \xi, \omega^T)$ be a $d_1$-bounded sequence whose entropy satisfies
\[\sup_jH(u_j)<\infty.\] Then $\{u_j\}$ contains a $d_1$-convergent subsequence.
\end{thm}
\begin{proof}We sketch the proof for completeness; for details see \cite{D4}[Theorem 4.45]. First, $d_1$-boundedness implies that $\mathbb{I}(u_j)$ and $\sup_M u_j$ are both bounded. Together with Proposition \ref{compactness001}, this implies that a $d_1$-bounded set is precompact in $L^1$. That is, there exists $u\in {\mathcal E}_1(M, \xi, \omega^T)$ such that, after passing to a subsequence,
\[
\int_M |u_k-u|(\omega^T)^n\wedge \eta\rightarrow 0.
\]
Moreover, we have (see \cite{D4}[Proposition 4.14, Corollary 4.15])
\[
\limsup \mathbb{I}(u_k)\leq \mathbb{I}(u).
\]
Since all elements in ${\mathcal E}_1(M, \xi, \omega^T)$ have zero Lelong number, we apply Zeriahi's uniform version of Skoda's integrability
theorem \cite{Zeriahi} (applied in each foliation chart) to obtain: for any $p\geq 1$, there exists $C=C(p)$ such that
\[
\int_M e^{-p u_j} (\omega^T)^n\wedge \eta\leq C.
\]
Since $\sup u_j\leq C$, we have
\[
\int_M e^{p |u_j|} (\omega^T)^n\wedge \eta\leq C.
\]
Now we need to use the assumption that $H(u_j)$ is uniformly bounded above. We proceed as in the proof of \cite{D4}[Theorem 4.45] to conclude
\begin{equation*}\label{compact05}
\int_M |u_j-u|\omega_{u_j}^n\wedge \eta\rightarrow 0.
\end{equation*}
By \eqref{maenergy01} (which holds in ${\mathcal E}_1$) we can then conclude that $\liminf \mathbb{I}(u_j)\geq \mathbb{I}(u)$. This gives $\lim \mathbb{I}(u_j)=\mathbb{I}(u)$. Hence $d_1(u_j, u)\rightarrow 0$, as a consequence of Theorem \ref{d101}.
\end{proof}
Finally we discuss the extension of the ${\mathcal K}$-energy; see \cite{BDL}[Theorem 1.2] for the K\"ahler setting.
\begin{thm}The ${\mathcal K}$-energy can be extended to a functional ${\mathcal K}: {\mathcal E}_1\rightarrow \mathbb{R}\cup\{+\infty\}$.
This extension is the greatest $d_1$-lsc extension of the ${\mathcal K}$-energy on ${\mathcal H}$. Moreover, the ${\mathcal K}$-energy is convex along the finite energy geodesics of ${\mathcal E}_1$.
\end{thm}
\begin{proof}As in K\"ahler setting \cite{chen00}, we can write the ${\mathcal K}$-energy as the following,
\begin{equation*}
{\mathcal K}(\phi)=H(\phi)+\mathbb{J}_{\omega^T, -Ric}(\phi)
\end{equation*}
where $H(\phi)$ is the entropy part and $\mathbb{J}_{-Ric}$ is the energy part, given respectively by
\[
\begin{split}
H(\phi)=&\int_M \log \frac{\omega_\phi^n\wedge \eta}{\omega^n_T\wedge \eta} dv_\phi\\
\mathbb{J}_{-Ric}(\phi)=&\frac{n\underline{R}}{(n+1)!}\int_M \phi \sum_{k=0}^n \omega^k_T\wedge \omega_\phi^{n-k}\wedge \eta-\frac{1}{n!}\int_M \phi \sum_{k=0}^{n-1}Ric\wedge \omega^k_T\wedge \omega_\phi^{n-1-k}\wedge \eta
\end{split}
\]
As a direct consequence of this formula, ${\mathcal K}(\phi)$ is well-defined for $\phi\in {\mathcal H}_\Delta$. More importantly, for $\phi_0, \phi_1\in {\mathcal H}$ with $\phi_t\in {\mathcal H}_\Delta$ the geodesic connecting $\phi_0, \phi_1$, the function ${\mathcal K}(\phi_t)$ is convex in $t\in [0, 1]$.
Now we extend $H(\phi)$ and $\mathbb{J}_{-Ric}$ to ${\mathcal E}_1$ separately. As in \cite{BDL}, the extension of $\mathbb{J}_{-Ric}$ to ${\mathcal E}_1$ is $d_1$-continuous. Moreover, since $d_1(u_k, u)\rightarrow 0$ implies that $\omega_{u_k}^n\wedge \eta\rightarrow \omega_u^n\wedge \eta$ weakly (Theorem \ref{d101}), the extension of $\phi\rightarrow H(\phi)$ to ${\mathcal E}_1$ is $d_1$-lsc. By \cite{he182}[Lemma 5.4], the extension of ${\mathcal K}$ is the greatest lsc extension. Finally, the convexity of the extended ${\mathcal K}$-energy along finite energy geodesic segments follows exactly as in \cite{BDL}[Theorem 4.7].
\end{proof}
\section{Introduction}
The thermal conductivity of Si is dominated by phonon transport and has a relatively high value of $\kappa_l=148~\mathrm{W/mK}$. Such high conductivity is beneficial for some applications such as heat management in electronic devices~\cite{Goodson95}, but unwanted for other applications such as thermoelectricity. Low-dimensional Si materials, such as nanowires, ultra-thin layers, and nanoporous Si, on the other hand, have demonstrated record low thermal conductivities of $\kappa_l=1-2~\mathrm{W/mK}$, reaching the amorphous limit~\cite{Boukai08,Hochbaum08,Chen01,Chen08b,Wu02,Tang10,Yu10,Hopkins10,Song04}. Heat in Si is carried by phonons with wavelengths from a few nanometers to a few micrometers~\cite{Zebarjadi12}, and boundary scattering is very effective in suppressing the propagation of low-frequency (long-wavelength) phonons in Si~\cite{Ponomareva07,Yang08,Wang09b,Liangraksa11,Oh12,Martin09,Liu05,Liu06}.
Although the two-order-of-magnitude reduction in the thermal conductivity is attributed to boundary scattering, an additional reduction can be achieved from changes in the phonon mode structure due to geometrical confinement. Indeed, the phonon dispersion undergoes strong modifications in nanostructures~\cite{Lazarenkova02,Broido04,Duchemin11,Ramayya08}. The thermal conductivity in bulk Si is isotropic; in low-dimensional materials, however, the choice of geometrical features such as surface orientation, transport orientation, and confinement length scale (i.e. thickness or diameter) can result in different phonon modes. These differences in the phonon modes affect the phonon group velocities and the scattering processes, and introduce variations in the thermal conductance~\cite{Aksamija10,Turney10}. The proper choice of structure geometries can, therefore, lead to different thermal properties and can allow design optimization for the applications of interest.
Very few studies on the effect of geometrical features such as the surface orientation, transport orientation, and layer thickness on the thermal conductivity of ultra-thin-body layers (UTBs), however, can be found in the literature. These mostly focus on layers of larger thicknesses of tens of nanometers, or employ bulk Si phonon dispersions, whose validity is debatable for layers with thicknesses down to a few nanometers. Aksamija et al. have theoretically discussed the effects of confinement and orientation of thin Si membranes using the bulk phonon dispersion and Boltzmann transport theory~\cite{Aksamija10}. That work elucidated the importance of geometry, and indicated that there can indeed be a factor of two difference in the thermal conductivity once a proper channel is chosen (the $\{110\}$ versus the $\{100\}$ surfaces in that case). In order to properly understand how modifications in the phonon mode structure affect the thermal transport of ultra-thin layers in the sub-$10~\mathrm{nm}$ thickness scale, however, a model that goes beyond the bulk dispersion, and properly captures the effect of confinement on the phonon modes, is required. The importance of the complete phonon dispersion details in addressing thermal transport in nanostructures has been stressed in several publications~\cite{Mingo03,Tian11}. Experimental data could only be explained once these details were taken into consideration~\cite{Mingo03b}.
In this work, we employ the modified valence force field (MVFF) method~\cite{Sui93} to address the effects of structural confinement and transport orientation on the phonon dispersion, group velocity, and ballistic thermal conductance of Si thin layers of thickness from $1~\mathrm{nm}$ to $16~\mathrm{nm}$. For a complete study we investigate various surface and transport orientations. We consider the $\{100\}$, $\{110\}$, $\{111\}$ and $\{112\}$ surface orientations, and for each of these surfaces we calculate the thermal conductance as a function of the transport orientation. We find that the variation in the thermal conductance between channels of different geometries can be up to a factor of two. This is true for the choice of different surfaces, but also of different transport orientations within the same surface. This observation is only weakly dependent upon the layer thickness. The $\{110\}/\textless110\textgreater$ channel exhibits the highest and the $\{112\}/\textless111\textgreater$ channel the lowest thermal conductance, $\sim 50\%$ lower. We further show that any variations observed are a consequence of the phonon group velocities, which are anisotropic, whereas the density of phonon modes does not show strong anisotropy. We provide explanations for the group velocity behavior through features of the phonon modes.
The paper is organized as follows: in section~\ref{s:Approach} we describe the MVFF method for the calculation of the phonon bandstructure, and the Landauer method for phonon transport calculations. In section~\ref{s:Results} we present the results, and in section~\ref{s:Analysis} we provide explanations and discussions. Finally, in section~\ref{s:Conclusions} we conclude.
\section{Approach}
\label{s:Approach}
For bulk studies, the model most frequently employed for phonon dispersion calculations is the Debye model, in which the phonon dispersion is described by three acoustic branches, one longitudinal and two transverse. More sophisticated models that can describe the full phonon dispersions of bulk materials as well as nanostructures are the valence force field method (the Keating model~\cite{Keating66}), the Tersoff inter-atomic potential model~\cite{Tersoff89}, and the adiabatic bond charge model~\cite{Hepplestone11}, as well as first-principles calculations. In this work, for the calculation of the phononic bandstructure we employ the modified valence force field method~\cite{Sui93}, which is an extension of the Keating model. In this method the interatomic potential is modeled by the following bond deformations: bond stretching, bond bending, cross bond stretching, cross bond bending-stretching, and coplanar bond bending interactions~\cite{Sui93}. The model accurately captures the bulk Si phonon spectrum as well as the effects of confinement~\cite{Paul10}.
In the MVFF method, the total potential energy of the system is defined as~\cite{Paul10}:
\begin{equation}
U\approx \frac{1}{2}\sum_{i\in N_{\mathrm{A}}} \left[ \sum_{j\in nn_i} U_{\mathrm{bs}}^{ij} + \sum_{j,k\in nn_i}^{j\neq k} \left (U_{\mathrm{bb}}^{jik}+U_{\mathrm{bs-bs}}^{jik}+U_{\mathrm{bs-bb}}^{jik}\right) +\sum_{j,k,l\in COP_i}^{j\neq k\neq l} U_{\mathrm{bb-bb}}^{jikl} \right]
\label{e:MVFFPotential}
\end{equation}
where $N_{\mathrm{A}}$, $nn_i$, and $COP_i$ are the number of atoms in the system, the number of the nearest neighbors of a specific atom $i$, and the coplanar atom groups for atom $i$, respectively. $U_{\mathrm{bs}}$, $U_{\mathrm{bb}}$, $U_{\mathrm{bs-bs}}$, $U_{\mathrm{bs-bb}}$, and $U_{\mathrm{bb-bb}}$ are the bond stretching, bond bending, cross bond stretching, cross bond bending stretching, and coplanar bond bending interactions, respectively. The terms $U_{\mathrm{bs-bs}}$, $U_{\mathrm{bs-bb}}$, and $U_{\mathrm{bb-bb}}$ are an addition to the usual Keating model~\cite{Keating66}, which can only capture the Si phononic bandstructure in a limited part of the Brillouin zone. As indicated in Ref.~\cite{Paul10} the introduction of these additional terms provides a more accurate description of the entire Brillouin zone.
In this formalism we assume that the total potential energy is zero when all the atoms are located in their equilibrium positions. Under the harmonic approximation, the motion of atoms can be described by a dynamic matrix as~\cite{Karamitaheri12}:
\begin{equation}
D=\left[ D_{3\times 3}^{ij} \right]= \left[ \frac{1}{\sqrt{M_iM_j}}\times \left \{ \begin{array}{lll} D_{ij} & {} & ,i\neq j \\ {} & {} & {} \\-\displaystyle \sum _{l\neq i}D_{il} & {} & ,i=j \end{array} \right. \right]
\end{equation}
where the dynamic matrix component between atoms $i$ and $j$ is given by~\cite{Paul10}:
\begin{equation}
D_{ij}=\left[ \begin{array}{ccc} D_{xx}^{ij} & D_{xy}^{ij} & D_{xz}^{ij} \\ D_{yx}^{ij} & D_{yy}^{ij} & D_{yz}^{ij} \\ D_{zx}^{ij} & D_{zy}^{ij} & D_{zz}^{ij} \end{array} \right]
\end{equation}
and
\begin{equation}
D_{mn}^{ij}=\frac{\partial^2 U_{\mathrm{elastic}}}{\partial r_m^i \partial r_n^j},~~~~~~ i,j\in N_{\mathrm{A}}~\mathrm{and}~m,n\in [x,y,z]
\end{equation}
is the second derivative of the potential energy with respect to the displacements of atoms $i$ and $j$ along the $m$-axis and the $n$-axis, respectively. $U_{\mathrm{elastic}}$ is the potential associated with the motion of only the two atoms $i$ and $j$, whereas the other atoms are considered frozen (unlike $U$, which is the potential when all atoms are allowed to move out of their equilibrium positions). To compute this: 1) We start with $U$ from Eq.~\ref{e:MVFFPotential}. 2) We fix the positions of all atoms except atoms $i$ and $j$. 3) We compute the inter-atomic potential due to all bond deformations that result from the interaction between these two atoms, and sum them up to obtain $U_{\mathrm{elastic}}$. All other inter-atomic potential terms, which result from interactions due to atom $i$ alone or atom $j$ alone, are not considered, since their double derivatives with respect to $\partial^2/\partial r_m^i \partial r_n^j$ vanish.
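The procedure above can be sketched numerically. The following is a minimal illustration under simplifying assumptions (a single harmonic bond between two atoms in one dimension, rather than the full MVFF potential of Eq.~\ref{e:MVFFPotential}); the force constants $D^{ij}_{mn}$ are obtained as central finite differences of the elastic potential:

```python
import numpy as np

# Toy elastic potential: one harmonic bond between atoms 0 and 1 in 1D,
# U_elastic = 0.5 * k0 * (x1 - x0 - a0)^2, with equilibrium bond length a0.
k0, a0 = 3.0, 1.0

def U_elastic(x):
    return 0.5 * k0 * (x[1] - x[0] - a0) ** 2

def d2U(f, x, i, j, h=1e-4):
    """Central finite difference approximation of d^2 f / dx_i dx_j at x."""
    def val(di, dj):
        y = np.array(x, dtype=float)
        y[i] += di
        y[j] += dj
        return f(y)
    return (val(h, h) - val(h, -h) - val(-h, h) + val(-h, -h)) / (4.0 * h * h)

xeq = [0.0, a0]                   # equilibrium positions
D01 = d2U(U_elastic, xeq, 0, 1)   # inter-atomic force constant, equals -k0
D00 = d2U(U_elastic, xeq, 0, 0)   # on-site term, equals +k0 (the negative sum of
                                  # the off-site terms, as in the diagonal rule above)
```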
After setting up the dynamic matrix, the following eigenvalue problem is solved for the calculation of the phononic dispersion:
\begin{equation}
\det\left[D+\sum_l D_l~\exp{\left({i{\bf \overrightarrow{q}}\cdot\Delta {\bf \overrightarrow{R}}_l}\right )}-\omega ^2({q})I\right]=0
\label{e:PhEigen}
\end{equation}
where $D_l$ is the dynamic matrix representing the interaction between the unit cell and its neighboring unit cells separated by ${\Delta \bf \overrightarrow{R}}_l$~\cite{Karamitaheri12}. Using the phononic dispersion, the phonon density of states (DOS) and the ballistic transmission (number of modes at given energy) are calculated by~\cite{Datta05Book}:
\begin{equation}
DOS(\omega)=\sum_{\alpha}DOS_{\alpha}(\omega)=\sum_{\alpha}\sum_{q}\delta\left( \omega - \omega_{\alpha}(q)\right)
\end{equation}
and
\begin{equation}
\overline{T}_{\mathrm{ph}}(\omega)=M(\omega)=\frac{h}{2}\sum_{\alpha,q}\delta \left( \omega - \omega_{\alpha}(q) \right) v_{g,\alpha}(q)\Big\vert_{\parallel}
\end{equation}
where $v_{g,\alpha}(q)\Big\vert_{\parallel}$ is the parallel component of the group velocity $v_{g,\alpha}(q)=\frac{\partial \omega_{\alpha}(q)}{\partial q}$ along the transport orientation. In the expressions above, at a specific frequency $\omega$ the sum runs over all phonon modes ($\alpha$) and all phonon momenta ($q$) of the two-dimensional momentum space.
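As a minimal concrete instance of the eigenvalue problem above, the sketch below evaluates it for a one-dimensional monatomic chain with nearest-neighbor springs, for which the unit-cell and neighbor dynamical matrices reduce to scalars and the dispersion is known in closed form; all parameter values are illustrative.

```python
import numpy as np

# 1D monatomic chain: spring constant k, mass M, lattice constant a;
# the unit-cell block D and the neighbor blocks D_l reduce to scalars
k, M, a = 1.0, 1.0, 1.0
D0 = 2.0 * k / M                      # unit-cell block D
D_l = {+1: -k / M, -1: -k / M}        # neighbor blocks at Delta R_l = l*a

qs = np.linspace(-np.pi / a, np.pi / a, 2001)
omega = np.empty_like(qs)
for i, q in enumerate(qs):
    Dq = D0 + sum(Dl * np.exp(1j * q * l * a) for l, Dl in D_l.items())
    # single branch: the eigenvalue is Dq itself; clip tiny negative roundoff
    omega[i] = np.sqrt(max(Dq.real, 0.0))

# analytic dispersion of the chain for comparison
exact = 2.0 * np.sqrt(k / M) * np.abs(np.sin(qs * a / 2.0))
```

The numerically extracted branch reproduces the textbook result $\omega(q)=2\sqrt{k/M}\,|\sin(qa/2)|$ to machine precision.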
Once the transmission is obtained, the ballistic lattice thermal conductance is calculated within the framework of the Landauer theory as~\cite{Landauer57,Jeong12}:
\begin{equation}
\kappa_l=\frac{1}{h}\int_{0}^{+\infty}\overline{T}_\mathrm{ph}(\omega)\hbar\omega\left(\frac{\partial n(\omega)}{\partial T}\right)\ d(\hbar\omega)
\label{e:Conductance}
\end{equation}
where $n(\omega)=(\mathrm{e}^{\hbar \omega/k_\mathrm{B}T}-1)^{-1}$ is the Bose-Einstein distribution function. Alternatively, the energy integral in Eq.~\ref{e:Conductance} can be transformed into a summation over $q$-space where the thermal conductance is evaluated as:
\begin{equation}
\kappa_l=\sum_{\alpha,q}\kappa_{l,\alpha}(q)
\label{e:Conductance2}
\end{equation}
where the $q$- and $\alpha$-dependent thermal conductance is defined as:
\begin{equation}
\kappa_{l,\alpha}(q)=\frac{1}{2}\frac{2\pi}{\Delta q_{\parallel}}\frac{2\pi}{\Delta q_{\perp}} v_{g,\alpha}(q)\Big\vert_{\parallel} \hbar \omega_{\alpha}(q) \frac{\partial n\left( \omega_{\alpha}(q) \right)}{\partial T}
\end{equation}
In this work, both Eq.~\ref{e:Conductance} and Eq.~\ref{e:Conductance2} are employed depending on whether we compute $\omega$- or $q$-dependent data.
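The Landauer integral above is straightforward to evaluate numerically. The sketch below does so for a single ballistic mode with unit transmission up to a cutoff energy; at temperatures far below the cutoff the result should approach the universal quantum of thermal conductance $\pi^2 k_{\mathrm{B}}^2 T/(3h)$, which serves as a sanity check. The cutoff value is an assumption loosely based on the $\sim 65~\mathrm{meV}$ Si phonon band edge.

```python
import numpy as np

kB = 1.380649e-23          # J/K
h = 6.62607015e-34         # J s

def kappa_ballistic(T, E_max, n_pts=20000):
    """Landauer conductance for one mode with unit transmission up to E_max;
    E = hbar*omega is the integration variable."""
    E = np.linspace(1e-9 * E_max, E_max, n_pts)
    x = E / (kB * T)
    # dn/dT written with exp(-x) for numerical stability at large x
    dn_dT = (E / (kB * T**2)) * np.exp(-x) / (1.0 - np.exp(-x))**2
    integrand = 1.0 * E * dn_dT        # transmission T_ph = 1
    return 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(E)) / h

T = 1.0                                # K, chosen so that kB*T << E_max
E_max = 65e-3 * 1.602176634e-19        # ~65 meV cutoff (illustrative)
kappa = kappa_ballistic(T, E_max)
kappa_quantum = np.pi**2 * kB**2 * T / (3.0 * h)  # quantum of thermal conductance
```

In this regime the numerical integral reproduces the conductance quantum, confirming the normalization of the expression.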
We note that in this work we calculate the ballistic thermal \emph{conductance} of the thin layers, not the \emph{conductivity} which assumes diffusion of phonons after undergoing all relevant scattering mechanisms. Our intention in this work is to specifically investigate the influence of the confined phonon bandstructure on the anisotropy of the phonon transport.
\section{Results}
\label{s:Results}
Figure~\ref{f:Figure1} shows the geometrical cross sections of the thin layers considered. These are the $\{100\}$, $\{110\}$, $\{111\}$, and $\{112\}$ surface orientations. In all cases, we consider the $x$-axis to be the $\textless110\textgreater$ orientation, and define the angle $\theta$ of the transport direction counter-clockwise from the $x$-axis. Below we present a complete analysis by calculating the phononic properties and thermal conductance as a function of the angle $\theta$ for all the surface orientations mentioned. We also vary the layer thickness $H$ from $1~\mathrm{nm}$ to $16~\mathrm{nm}$. We calculate the phononic dispersion, density of states, ballistic transmission, and effective group velocity of the different structures.
Figure~\ref{f:Figure2} shows the transmission functions for the four layer surface orientations of interest along two particular transport orientations for each case, which, as we show below, provide the lowest and the highest thermal conductance for that particular surface. The layer thickness in all cases is $2~\mathrm{nm}$. In the case of the thin layer with $\{100\}$ surface orientation in Fig.~\ref{f:Figure2}-a, we consider the $\{100\}/\textless110\textgreater$ and the $\{100\}/\textless100\textgreater$ transport channels. The transmissions of the two channels are almost the same, indicating negligible anisotropy. In the case of the thin layer with $\{111\}$ surface orientation in Fig.~\ref{f:Figure2}-c, we consider the $\{111\}/\textless110\textgreater$ and the $\{111\}/\textless112\textgreater$ transport channels. Again in this case, the transmissions are almost the same.
The transmission functions of the thin layers with $\{110\}$ and $\{112\}$ surfaces, on the other hand, are orientation dependent. For the $\{110\}$ surface thin layers in Fig.~\ref{f:Figure2}-b, the $\{110\}/\textless110\textgreater$ channel (blue line) shows the highest transmission function, and the $\{110\}/\textless100\textgreater$ channel (red-dotted line) the lowest. An even larger difference is observed in the case of the $\{112\}$ surface thin layers in Fig.~\ref{f:Figure2}-d. The highest transmission is observed for the $\{112\}/\textless110\textgreater$ channel (blue line), and the lowest for the $\{112\}/\textless111\textgreater$ channel (red-dotted line). The difference in the transmission of the channels along different transport orientations is largest for energies between $10$ and $30~\mathrm{meV}$ for both the $\{110\}$ and the $\{112\}$ thin layers.
Using the transmission functions extracted from the bandstructures, we calculate the ballistic lattice thermal conductance using the Landauer formula for the thin layers with the four different surface orientations of interest. We calculate the thermal conductance as a function of the transport orientation by varying the angle $\theta$ from 0 to $\pi$. The thermal conductances for all cases shown in Fig.~\ref{f:Figure3} are calculated at room temperature. We calculate the conductance of thin layers for thicknesses of 1, 2, 4, 8 and $16~\mathrm{nm}$. With symbols we denote the high symmetry orientations using the Miller index notation, i.e. $\textless110\textgreater$ - circle, $\textless111\textgreater$ - star, $\textless112\textgreater$ - triangle, and $\textless100\textgreater$ - square. We mark these orientations on the $16~\mathrm{nm}$ thin layer result in Fig.~\ref{f:Figure3}. In all cases, the conductance increases linearly as the thickness increases because the thicker layers contain more phonon modes that contribute to the thermal conductance. With regards to anisotropy, for the thin layers with $\{100\}$ surface in Fig.~\ref{f:Figure3}-a, the conductance has a maximum along the $\textless100\textgreater$ direction (square) and a minimum along the $\textless110\textgreater$ direction (circle), although the difference is small (only $\sim 5\%$). Interestingly, this observation is the same for all thicknesses considered. The conductance of the channels with $\{110\}$ surface is shown in Fig.~\ref{f:Figure3}-b. The conductance is highest in the $\textless110\textgreater$ transport orientation ($\theta=0$, circle), and is lowest for the $\textless100\textgreater$ channels ($\theta=\pi/2$, square). The variation between the maximum and the minimum in this case, however, is $\sim 30\%$ for the $1~\mathrm{nm}$ thin layer, and decreases to $20\%$ for the $16~\mathrm{nm}$ layer. The conductance of channels with $\{111\}$ surface is shown in Fig.~\ref{f:Figure3}-c. 
The conductance in this case also peaks along the $\textless110\textgreater$ direction (circle) and is lowest along the $\textless112\textgreater$ direction (triangle). The variation of the conductance with transport orientation in this case is negligible for the thinner layers, but increases to $\sim10\%$ in the $16~\mathrm{nm}$ case. The thermal conductance for channels with $\{112\}$ surface is shown in Fig.~\ref{f:Figure3}-d. The maximum and minimum conductance is observed along $\textless110\textgreater$ (circle) and $\textless111\textgreater$ (star), respectively. Channels with this surface exhibit the largest variation in thermal conductance compared to the other surfaces. The difference varies from $\sim 40\%$ for the $1~\mathrm{nm}$ layers to $\sim 30\%$ for the $16~\mathrm{nm}$ layers. Overall, considering all surfaces and transport orientations, the maximum thermal conductance is observed for the $\{110\}/\textless110\textgreater$ channels, and the minimum for the $\{112\}/\textless111\textgreater$ channels. Interestingly, however, regardless of surface orientation, the thermal conductance is high in the $\textless110\textgreater$ direction. This agrees well with previous works on silicon nanowires, where it is reported that the $\textless110\textgreater$ oriented nanowires have the highest thermal conductance~\cite{Markussen08,Paul11,Karamitaheri13}. A similar conclusion was found for thin layers of larger sizes~\cite{Aksamija10}. As we explain below, the phonon dispersions along the $\textless110\textgreater$ orientations are more dispersive compared to other orientations, which yields higher group velocities and, therefore, the highest thermal conductance.
Figure~\ref{f:Figure4} shows the thermal conductance of the $H=2~\mathrm{nm}$ layers as a function of temperature. For every surface orientation we show two transport orientations, the one with the maximum and the one with the minimum conductance (as in Fig.~\ref{f:Figure2}). The conductance increases with temperature as expected from a ballistic quantity, and starts to saturate around $300~\mathrm{K}$. The reason is that the thermal conductance in Eq.~\ref{e:Conductance} can be also expressed as:
\begin{equation}
\kappa_l=\frac{k_{\mathrm{B}}^2T\pi^2}{3h} \int_{0}^{+\infty}\overline{T}_\mathrm{ph}(\omega) W_{\mathrm{ph}}(\hbar \omega) d(\hbar \omega)
\end{equation}
where
\begin{equation}
W_{\mathrm{ph}}=\frac{3}{\pi^2}\left( \frac{\hbar \omega}{k_{\mathrm{B}}T} \right)^2 \left(-\frac{\partial n}{\partial (\hbar \omega)}\right)
\end{equation}
is the so-called phononic window function~\cite{Jeong12}. The phonon energy spectrum of Si extends up to $\sim 65~\mathrm{meV}$, and for sufficiently high temperatures the phononic window function is nearly constant within the entire $\sim 65~\mathrm{meV}$ energy range, as also shown in Ref.~\cite{Jeong12}. This causes the thermal conductance to saturate. Figure~\ref{f:Figure4} shows that the $\{110\}/\textless110\textgreater$ channel has the largest conductance, and the $\{112\}/\textless111\textgreater$ channel the smallest in the entire temperature range. The conductances of the other channels lie in between and do not deviate significantly from one another. The same trend is observed for the $H=16~\mathrm{nm}$ channels (inset of Fig.~\ref{f:Figure4}), although the spread is smaller. Below, we provide explanations for this geometry dependence in terms of the phonon bandstructure, by extracting the phonon density of states and the effective group velocity.
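A quick numerical check of the window function: with the sign convention that makes $W_{\mathrm{ph}}$ positive, it is normalized to integrate to unity over energy. The sketch below verifies this; the temperature and the integration grid are illustrative choices.

```python
import numpy as np

kB = 1.380649e-23  # J/K

def W_ph(E, T):
    """Phononic window function of energy E = hbar*omega (units 1/J)."""
    x = E / (kB * T)
    # -dn/dE, written with exp(-x) so it underflows safely at large x
    minus_dn_dE = np.exp(-x) / (kB * T * (1.0 - np.exp(-x))**2)
    return (3.0 / np.pi**2) * x**2 * minus_dn_dE

T = 300.0                                      # K (illustrative)
E = np.linspace(1e-9, 60.0, 200000) * kB * T   # energies up to 60*kB*T
W = W_ph(E, T)
norm = 0.5 * np.sum((W[1:] + W[:-1]) * np.diff(E))  # trapezoid rule
```

The trapezoid-rule integral `norm` comes out equal to one, since $\int_0^{\infty} x^2 e^x/(e^x-1)^2\,dx = \pi^2/3$ exactly cancels the $3/\pi^2$ prefactor.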
\section{Analysis}
\label{s:Analysis}
The ballistic thermal conductance in the Landauer formalism is determined by the product of the density of states and the group velocity. In Fig.~\ref{f:Figure5} we plot the density of states for thin layers of thickness $H=2~\mathrm{nm}$ and the four different surface orientations of interest. Although some differences are observed for the different surface orientations, especially in the low frequency range, the overall values and trends are very similar. The inset of Fig.~\ref{f:Figure5} shows the density of states for layers of thickness $H=16~\mathrm{nm}$. In this case a much smaller variation is observed, as expected, since the phonon density of states depends to first order on the number of atoms, and layers of the same thickness contain a similar number of atoms. At smaller thicknesses the different arrangement of atoms can result in slightly different numbers of atoms for different surfaces, but as the thickness increases the crystal becomes more uniform and any variations are eliminated. In general, of course, the arrangement of atoms, the coupling between them, and the type of interactions they have can also influence their density of states. But as we show in Fig.~\ref{f:Figure5}, such effects
are only important on the density of states at very thin sizes, i.e. $H=2~\mathrm{nm}$,
and even then, they are small. We note that also in the case of Si nanowires, our previous
work has demonstrated a similar result, namely that even for nanowires with cross section
sizes down to $H=6~\mathrm{nm}$, the density of states is orientation independent~\cite{Karamitaheri13}. From this we conclude that the variation in the thermal conductance and transmission does not originate from the difference in the density of states.
In Fig.~\ref{f:Figure6} we plot the second quantity that influences the transmission and conductance, which is related to the velocity of the phonon states. We define the effective group velocity at a specific energy $E=\hbar \omega$ as the weighted average of the velocities of the phonon states, with the weighting factor being the density of states:
\begin{equation}
\langle\langle v_g(\omega)\rangle\rangle=\frac{\displaystyle{\sum_{\alpha,q}}v_{g,\alpha}(q)\Big\vert_{\parallel} \delta\left( \omega -\omega_{\alpha}(q)\right)}{\displaystyle{\sum_{\alpha,q}}\delta\left( \omega -\omega_{\alpha}(q)\right)}
\label{e:Veff}
\end{equation}
The velocity of a phonon is in general a function of the subband index, the frequency, and the wavenumber $q$. The quantity in Eq.~\ref{e:Veff} averages over the subband index and the wavenumber and thus provides a quantity that depends only on frequency (or energy). Similar ``effective'' quantities have also been used in thermal conductivity calculations in various other works~\cite{Zou01,Mingo03,Jeong10}. However, in our actual calculations we utilize all the information of the phonon spectrum. This quantity is orientation-dependent, in contrast to the density of states, and indicates how dispersive the modes are. The velocity is calculated along the transport direction. The density of states times the effective group velocity is proportional to the transmission function. Therefore, the differences in the transmission functions should be seen in the effective group velocities of the channels, since the density of states is the same for all channels of the same thickness. Figure~\ref{f:Figure6} shows the effective group velocities of the channels considered. Figures~\ref{f:Figure6}-a and~\ref{f:Figure6}-c show the effective group velocities of thin layers with $\{100\}$ and $\{111\}$ surfaces along the two different orientations with the lowest and highest thermal conductance for each surface. The two different cases for each surface are almost identical, as in the case of the transmission functions in Fig.~\ref{f:Figure2}-a and~\ref{f:Figure2}-c. Figures~\ref{f:Figure6}-b and~\ref{f:Figure6}-d show the effective group velocities for channels with $\{110\}$ and $\{112\}$ surfaces, respectively. A variation is observed for the different channels, which causes the difference in the transmission functions shown earlier in Fig.~\ref{f:Figure2}-b and~\ref{f:Figure2}-d.
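The DOS-weighted average of Eq.~\ref{e:Veff} can be illustrated on the one-dimensional chain dispersion, for which the effective group velocity should approach the sound velocity at low frequency and drop toward zero at the zone edge. The sketch below approximates the delta functions by histogram binning in frequency; all parameters are illustrative.

```python
import numpy as np

# 1D chain dispersion on the q>0 half of the zone (illustrative):
# omega(q) = 2*sqrt(k/M)*sin(q a/2)
k, M, a = 1.0, 1.0, 1.0
q = np.linspace(1e-4, np.pi / a, 20000)
omega = 2.0 * np.sqrt(k / M) * np.sin(q * a / 2.0)
v_g = np.gradient(omega, q)              # group velocity along transport

# approximate the delta functions by frequency binning:
# in each bin, average v_g over the states that fall in it
bins = np.linspace(0.0, 2.0, 41)
idx = np.digitize(omega, bins)
v_eff = np.array([v_g[idx == i].mean() if np.any(idx == i) else 0.0
                  for i in range(1, len(bins))])
```

The lowest bin recovers the sound velocity $a\sqrt{k/M}$, and the effective velocity falls toward the flat zone-edge region, mirroring how flatter (less dispersive) modes suppress transmission.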
The anisotropy (or isotropy) of the effective group velocity originates from the phonon bandstructure. In Fig.~\ref{f:Figure7} we show contour plots of the phonon bandstructure at $E=\hbar \omega=10~\mathrm{meV}$ for all eight channels considered in Fig.~\ref{f:Figure2} and Fig.~\ref{f:Figure6} for layer thickness $H=2~\mathrm{nm}$. This is an energy value at which the most significant differences for the channels with $\{110\}$ and $\{112\}$ surfaces appear. It turns out that what we present for this energy is a good indicator of the anisotropic behavior of the entire energy spectrum, most of which contributes to thermal conductance at room temperature. The lines represent the different modes at that energy, whereas the colormap indicates the contribution of each $q$-state to the ballistic thermal conductance at room temperature in the transport orientation of the specific channel of interest, as indicated by the arrow in each case. Elongation of contour lines along a specific direction provides high phonon group velocities in the perpendicular direction, and consequently high thermal conductance. This is very similar to the low effective mass and high velocities of carriers in an ellipsoidal band along the direction of the short axis in the case of electronic transport. Figures~\ref{f:Figure7}-a and~\ref{f:Figure7}-b show the energy contours for the $\{100\}$ surface in the $\textless110\textgreater$ and $\textless100\textgreater$ transport orientations, respectively (indicated by the arrow). Despite the square shape of the contour, which indicates that there is different symmetry in the two orientations of interest, the contours are elongated similarly in both directions, which results in a similar thermal conductance for both channels. This is the case for almost the entire energy spectrum (although at higher energies we have many more modes and more complex contour shapes). 
In the case of the $\{111\}$ surface in Fig.~\ref{f:Figure7}-e and~\ref{f:Figure7}-f, a highly symmetric contour provides very similar transmission functions and thermal conductances along the $\textless110\textgreater$ and $\textless112\textgreater$ directions. The largest differences in the thermal conductance are observed for thin layers with $\{110\}$ and $\{112\}$ surfaces in Fig.~\ref{f:Figure7}-c,~\ref{f:Figure7}-d and Fig.~\ref{f:Figure7}-g,~\ref{f:Figure7}-h, respectively. In both cases, the contours at energy $E=10~\mathrm{meV}$ are clearly elongated along the vertical axis. This results in a larger phonon group velocity along the horizontal axis, and finally a higher thermal conductance, as also indicated by the colormap. This is especially evident for the $\{112\}$ surface, where the contour of the $\textless110\textgreater$ channel in Fig.~\ref{f:Figure7}-g is colored much closer to red (higher conductance value) than the $\textless111\textgreater$ channel in Fig.~\ref{f:Figure7}-h, indicating much larger phonon group velocities. This causes the thermal transmission and conductance of the $\{112\}/\textless110\textgreater$ channel to be higher than that of the $\{112\}/\textless111\textgreater$ channel shown in Fig.~\ref{f:Figure2}-d and Fig.~\ref{f:Figure3}-d.
Figure~\ref{f:Figure7} explains the origin of the anisotropy in the ballistic thermal conductance of thin layers with $2~\mathrm{nm}$ thickness. We point out, however, that such effects also hold for all the thicknesses we examine, e.g. up to $H=16~\mathrm{nm}$. The anisotropic behavior depends weakly on the layer thickness. The ballistic conductance increases linearly as the layer thickness increases, due to the increased number of atoms which results in a larger number of phonon modes, but the anisotropy does not change significantly. This is illustrated in Fig.~\ref{f:Figure8}-a, which shows the ballistic thermal conductance for each of the four surface orientations examined, normalized by the thickness of the layer. For each surface we only consider the direction showing the maximum conductance. We observe that in all cases the normalized conductance is constant, even down to a thickness of $H\sim 5~\mathrm{nm}$. Below $H\sim 5~\mathrm{nm}$, variations of the order of $\sim 10-20\%$ are observed for all channels. From this, it follows that other than the reduction in the size of the phonon spectrum with thickness scaling, no significant changes in the shape of the phonon structure are observed, at least none significant enough to introduce changes in the thermal conductance. This is also supported by Fig.~\ref{f:Figure8}-b, which depicts the ratio of the maximum to the minimum thermal conductance that can be achieved for each surface. Similarly to Fig.~\ref{f:Figure8}-a, the anisotropy does not change with layer thickness even down to $H\sim 5~\mathrm{nm}$. Again, below $H\sim 5~\mathrm{nm}$, differences of the order of $10-20\%$ can be observed.
The observed anisotropy is a function not only of thickness, but also of temperature. Figure~\ref{f:Figure9} shows the ratio of the maximum to the minimum ballistic thermal conductance for the four surface orientations of interest, again by choosing the appropriate transport orientations. Figure~\ref{f:Figure9}-a and~\ref{f:Figure9}-b show results for $H=2~\mathrm{nm}$ and $H=16~\mathrm{nm}$, respectively. The maximum anisotropy (up to $60\%$) is observed for the $\{112\}$ surface, followed by the $\{110\}$ surface (up to $30\%$), whereas the $\{111\}$ and $\{100\}$ surfaces are more or less isotropic (the ratio stays $\sim 1$). This holds for most of the temperature range we examine, even down to $100~\mathrm{K}$. Below $100~\mathrm{K}$, the ratio approaches unity in all cases, because at this temperature the main contribution to thermal conductance comes from the acoustic branches at low energy, which are more isotropic. This is clearly observed in the low energy range of the transmissions in Fig.~\ref{f:Figure2}, in which the transmission is nearly isotropic.
Finally, we need to mention that this work focused on the influence of bandstructure on the anisotropic behavior of the thermal transport properties of ultra-thin Si layers. Thus, we employed accurate phonon bandstructures, but utilized a rather simplified ballistic transport formalism, which ignores the effects of phonon scattering. Our intent is to provide a qualitative indication of the anisotropic behavior of phonon transport in thin layers. Employing atomistic phonon bandstructures and a fully diffusive transport formalism that accounts for the energy, momentum, and bandstructure dependence of each scattering event would be computationally very expensive, and will be the topic of subsequent studies. Our results, however, point out that a factor of two variation in phonon transport can be achieved once the channel geometry is optimized. These findings agree qualitatively well with diffusive phonon transport calculations that indicate the superiority of the thermal conductivity of the $\{110\}/\textless110\textgreater$ channel over other geometries, and the low thermal conductance of the $\{111\}/\textless110\textgreater$ and $\{112\}/\textless111\textgreater$ channels~\cite{Aksamija10}. They also agree with calculations for Si NWs, which indicate the
beneficial $\textless 110\textgreater$ transport orientation for heat transport, compared to other orientations~\cite{Markussen08,Paul11,Karamitaheri13}. When it comes to comparing to experimental results, however, unfortunately we could not identify any works in the literature that perform systematic thermal conductivity measurements in such ultra-thin layers ($H<16~\mathrm{nm}$) and in various confinement and transport orientations. Most experimental works on thermal conductivity consider relatively thick layers, with thicknesses of the order of tens to hundreds of nanometers, and primarily $\{100\}$ surfaces. In thicker layers the phonon modes are almost bulk-like, and one cannot observe the anisotropic phonon confinement effects that lead to bandstructure modifications and conductance variations. In addition, the influence of the various scattering mechanisms makes the thermal conductivity of thicker layers more isotropic, and hides the effects of bandstructure anisotropy (which ballistic simulations fully capture).
Our findings, however, are useful in understanding phonon transport in ultra-thin Si layers, and with regards to applications, could provide guidance in either maximizing heat transport as in the case of thermal management, or minimizing heat transport as in the case of thermoelectrics. For example, for electronic applications, we mention that for $p$-type nanoelectronic channels, transport in the $\{110\}/\textless110\textgreater$ orientation is beneficial compared to other orientations~\cite{NeoAPL11,NeoNL09}. This is also the case for the power factor in the case of thermoelectric devices~\cite{NeoJAP12}. In the former case, however, for electronic devices large thermal conductivity is necessary in order to remove the heat from the device, otherwise the mobility is degraded. The large thermal conductivity of the $\{110\}/\textless110\textgreater$ channel, therefore, could be advantageous for $p$-type electronic devices. In the latter case, for thermoelectric devices channels with low thermal conductivity are needed in order to reduce losses and increase thermoelectric efficiency. The large thermal conductivity of the $\{110\}/\textless110\textgreater$ channel, therefore, could counteract the benefit of its larger power factor, and this channel might not be the optimal for thermoelectric $p$-type Si devices.
\section{Conclusions}
\label{s:Conclusions}
The ballistic thermal conductance and its dependence on surface and transport orientations in ultra-thin silicon layers from $1~\mathrm{nm}$ to $16~\mathrm{nm}$ in thickness are investigated using the modified valence force field method and the Landauer formalism. The ballistic conductance of thin layers with $\{100\}$ and $\{111\}$ surface orientations is almost isotropic for all transport orientations. An anisotropy with transport orientation of the order of $60\%$ and $40\%$ is observed for $\{112\}$ and $\{110\}$ channels, respectively, due to the asymmetry in their phonon mode structure. In terms of absolute values, the $\{110\}/\textless110\textgreater$ channel has the highest thermal conductance, and the $\{112\}/\textless111\textgreater$ channel the lowest (almost $50\%$ lower). Interestingly, for all surfaces, the $\textless110\textgreater$ transport orientation shows the highest conductance. We finally show that these observations are largely independent of layer thickness and temperature: the anisotropy of transport for each surface is observed for temperatures above $100~\mathrm{K}$, whereas for lower temperatures the anisotropy is reduced. Our results can be useful in understanding the contribution of the phonon dispersion to the thermal conductivity of ultra-thin Si layers, as well as in the design of efficient thermal management and thermoelectric devices.
\section*{Acknowledgments}
This work was supported by the European Commission, grant 263306 (NanoHiTEC).
\section{Introduction}
\par It is now the beginning of the decade in which we expect incredible progress in the field of particle dark matter detection. Through a variety of indirect searches, direct searches, and collider searches around the world there has already been a series of exciting developments in particle dark matter searches over the course of the past several years. As progress in this field will continue to accelerate over the next several years, the impetus to understand how to best interpret the results in the larger context of cosmology and high energy physics will become stronger.
\par It has now been over four years since the highly-successful launch of the Fermi Large Area Telescope (LAT), which is sensitive to photons in the energy band of $100$ MeV to $300$ GeV~\citep{Atwood:2009ez}. The Fermi-LAT has expanded our window into the high energy universe, providing a wealth of information about pulsars, active galactic nuclei, and diffuse gamma-ray radiation on the scale of the Milky Way and beyond. In the area of particle dark matter, as I will discuss in more detail below, the Fermi-LAT has proven to be the first experiment to gain sensitivity to the thermal relic scale of the dark matter annihilation cross section of $\langle \sigma v \rangle \simeq 3 \times 10^{-26}$ cm$^3$ s$^{-1}$. Because of the LAT, for the first time we are now able to explore the regime of cosmologically-motivated weakly interacting massive particle (WIMP) dark matter.
\par Complementing the indirect results from the Fermi-LAT, direct dark matter searches, such as XENON100~\citep{2012arXiv1207.5988X} and the Cryogenic Dark Matter Search (CDMS)~\citep{Ahmed:2009zw}, are continually improving the sensitivity to the WIMP-nucleon interaction rate. These experiments are now beginning to reach into the well-motivated theoretical regime of Higgs-mediated dark matter interactions, and are now beginning to constrain well-motivated regimes of the supersymmetric model parameter space. As these searches, and many others, continue to improve over the next several years, it is certainly fair to say that either WIMPs will be detected, or there will be a paradigm shift in our strategy regarding searches for particle dark matter.
\par In this contribution I will discuss how our modern understanding of dark matter in the Galaxy, satellite galaxies, and dark matter subhalos impact our interpretation of the results from both indirect and direct dark matter searches. For the indirect searches, I focus primarily on the Fermi-LAT results, highlighting what we are now able to robustly extract about the nature of WIMP dark matter from studies of a variety of astrophysical environments. During the discussion of direct searches, I review our observational understanding of the local dark matter distribution, how it affects modern results, and methods for extracting the properties of the WIMP in the future, even though our knowledge of the dark matter distribution in the Galaxy is still not as precise as we ultimately desire.
\section{Indirect Searches}
\par Examination of the all-sky gamma-ray map in the Fermi-LAT band reveals that the largest source of gamma-rays results from diffuse emission in the Galactic plane due to cosmic ray interactions with the interstellar medium. Gamma-rays are produced via neutral pion decay, bremsstrahlung, and inverse Compton emission~\citep{Abdo:2009mr}. In addition to this diffuse emission, there are 1873 known point sources in the two-year source catalog~\citep{2012ApJS..199...31N}. The majority of the identified point sources are active galactic nuclei, while a smaller fraction of them are supernova remnants, globular clusters, and star-forming galaxies such as M31 and the Magellanic Clouds. Of the detected point sources, less than one percent of them are firmly identified in other wavebands, while approximately 60\% of them are reliably associated with sources at other wavelengths. Because the angular resolution of the LAT is approximately $0.1-1$ degree, depending on energy, the association with sources at other wavelengths is done via spectral or timing information. Interestingly, over 30\% of the Fermi-LAT sources are unidentified in other wavelengths.
\par Extraction of a potential dark matter signal from the Fermi-LAT data requires an understanding of both point source emission and diffuse emission over more extended regions. To date, searches for dark matter have been undertaken from a variety of sources within the Galaxy and beyond, including dwarf spheroidal (satellite) galaxies, dark matter subhalos that are not associated with stars, and clusters of galaxies. There have also been analyses of diffuse emission from Galactic and extragalactic sources, and searches for gamma-ray lines from the region of the Galactic center and in the diffuse halo. Each of these analyses provides a unique potential signature of dark matter, as well as unique sets of systematics that must be understood.
\par At this stage, the dwarf spheroidal (dSph) results are the most robust dark matter results that have been obtained by the Fermi-LAT. The dSph analysis relies on a unique confluence of results from the fields of Galactic optical astronomy and gamma-ray astronomy, and the searches are now sophisticated to the point where they may be viewed as particle dark matter experiments in the sky. For these reasons, I will review the methodology for obtaining the dSph results, as well as how these results are expected to improve in the future with more Fermi-LAT data, and data from ground based Air Cherenkov Telescopes (ACTs). I will then compare these results to searches for dark matter annihilation from other sources with the LAT.
\subsection{Satellite galaxies}
\par There are now nearly two dozen galaxies that are classified as satellites of the Milky Way (see Ref.~\citep{McConnachie:2012vd} for a recent review of their properties). More than half of these have been identified in the Sloan Digital Sky Survey (SDSS), so the majority of these objects are concentrated towards the North Galactic Cap. The luminosities of these systems vary between a few hundred solar luminosities and tens of millions of solar luminosities. Combining the stellar kinematics of these galaxies with their measured luminosities, they have dark-to-luminous mass ratios of anywhere from tens to even thousands to one~\citep{Strigari:2007at,Strigari:2008ib}. The dSphs that are utilized in the Fermi-LAT analysis contain no detectable interstellar gas, so there is no expected gamma-ray emission from conventional astrophysical sources. Any observed intrinsic emission from these systems would be due to dark matter.
\par Detailed analysis of the kinematics of these systems over the past several years has revealed that the mass within their approximate half-light radii is well-constrained by the observed data~\citep{Strigari:2007vn,Walker:2009zp,Wolf:2009tu}. The half-light radii for these systems vary from a few tens of parsecs for the faintest of them to a few hundred parsecs for the brightest. The angular scales subtended by their respective half-light radii are then in the range $0.1-1$ degrees. This is an important scale, because it implies that both the integral over their density and their density-squared distributions are well constrained on this angular scale. Assuming that the central density cusp is less steep than approximately $r^{-1.5}$, the integral over the density-squared is insensitive to the presence of a core or a cusp in the central density of dark matter. Typical uncertainties on the log of the integral of the density-squared, which is the relevant quantity for dark matter annihilation, are approximately 10\%, depending in detail on the number of kinematic tracers used to derive the dark matter distribution. The uncertainties are found to be log-normal to a good approximation.
\par With the integral of the density-squared of the dark matter distributions, $\int \rho^2 dV$, determined from the stellar kinematics, in combination with a model for the WIMP, its annihilation cross section, and its corresponding spectrum of gamma-ray radiation, the flux of gamma-rays can be precisely predicted for the dSphs with high quality kinematic data~\citep{Strigari:2006rd,Strigari:2007at,Martinez:2009jh,Charbonnier:2011ft}. The dSphs with the largest flux are found to be Ursa Major II (32 kpc), Segue 1 (25 kpc), Draco (80 kpc), and Ursa Minor (66 kpc). The former two were identified in the SDSS, and still have fractional uncertainties in $\int \rho^2 dV$ of approximately 50\% because they have relatively small samples of stars from which kinematic information can be extracted~\citep{Simon:2007dq,Simon:2010ek}. On the other hand, Draco and Ursa Minor have hundreds of stars associated with them, so their fractional uncertainties are of the order 10\%.
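As a rough numerical sketch of how $\int \rho^2 dV$ enters such predictions, the following integrates the density-squared of an NFW-like profile over a sphere. The scale density, scale radius, and integration boundary are placeholder values chosen for illustration, not fitted parameters for any real dSph.

```python
import numpy as np

# Placeholder NFW-like profile parameters (illustrative only, not a fit
# to any real dwarf spheroidal).
rho_s = 2.0            # GeV cm^-3, assumed scale density
r_s = 0.5              # kpc, assumed scale radius
r_max = 2.0            # kpc, assumed integration boundary
KPC_IN_CM = 3.086e21   # centimeters per kiloparsec

def rho_nfw(r):
    """NFW profile: rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

# int rho^2 dV = 4 pi int_0^{r_max} rho(r)^2 r^2 dr, via the trapezoidal rule.
r = np.linspace(1e-6, r_max, 200_000)
integrand = rho_nfw(r) ** 2 * r ** 2
dr = r[1] - r[0]
J = 4.0 * np.pi * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dr
J *= KPC_IN_CM ** 3    # convert kpc^3 -> cm^3, giving GeV^2 cm^-3
```

For the NFW profile this integral has a closed form, which makes it easy to validate the numerical result; in a real analysis the profile parameters (and their uncertainties) would come from the stellar-kinematic fits described above.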
\par In the two-year Fermi-LAT data, examination of the dSphs reveals no excess above the known backgrounds from diffuse Galactic emission, extragalactic emission, and nearby point sources. A joint likelihood with 10 dSphs finds that at the 95\% c.l., WIMPs with mass in the range 10-25 GeV that annihilate to $b \bar b$ and $\tau \bar \tau$ are ruled out~\citep{Ackermann:2011wa,GeringerSameth:2011iw}. This result is additionally robust to the presence of dark matter substructure, which is expected to be unimportant for the dSphs~\citep{Springel:2008cc,Martinez:2009jh}. This result implies that thermal relic WIMPs that annihilate predominantly through s-wave (velocity independent) interactions in this mass range are ruled out. The significance of this result cannot be overstated, because it is the first time that thermal relic dark matter has been robustly probed via astrophysical observations. In addition to this lack of continuum emission, no lines have yet been found in any of these sources~\citep{GeringerSameth:2012sr}.
\par The present dSph limits lose sensitivity at WIMP masses of approximately 1 TeV. Above this mass scale, ACTs, which have energy thresholds of approximately $100$ GeV and better angular resolution than the Fermi-LAT, are able to complement Fermi-LAT searches~\citep{Essig:2009jx,Essig:2010em}. There are now several ACTs that have studied nearby dSphs~\citep{Aleksic:2011jx,Aliu:2012ga}. Because the exposure of ACTs on the dSphs is significantly less than the Fermi-LAT exposure, the limits on the annihilation cross section are weaker, typically about two to three orders of magnitude above the thermal relic WIMP scale.
\subsection{Dark subhalos}
\par Cold dark matter theory predicts that, in addition to the dark matter subhalos that host the dSphs, there may be many orders of magnitude more that do not have any visible stars associated with them (e.g.~\citep{Strigari:2010un,BoylanKolchin:2011de}). These subhalos may in some cases be bright enough to be identified by the Fermi-LAT through dark matter annihilation. Extracting this signal represents a significant challenge, because our knowledge of the subhalo distribution in the Galaxy (if they even exist) is incomplete and depends on how the cold dark matter mass function is extrapolated to lower mass scales.
\par In order to get an idea of whether it is now possible to detect dark subhalos with Fermi-LAT, as a first pass, one can ask which of the unidentified point sources mentioned above have a gamma-ray spectrum that is consistent with WIMP annihilation~\citep{Buckley:2010vg}. Even if these sources are consistent with a WIMP spectrum, there are two issues that make a clean interpretation of a WIMP signal difficult. First, there is significant contamination from astrophysical sources whose spectra mimic the high-energy exponential cut-off that is characteristic of a WIMP spectrum. Second, it is highly unlikely that hundreds of sources would be visible in the present LAT data---N-body and gamma-ray simulations predict that, for a thermal relic cross section, the number of visible satellites should be no more than a few at most, accounting for detection efficiency cuts~\citep{Ackermann:2012nb}.
\par Though in principle emission may be detected from an extended dark matter subhalo that is nearby (i.e., within a few kpc of the Solar neighborhood), it is more probable that a dark subhalo that is detected will have a mass greater than approximately $10^7$ M$_\odot$. This is because the mass function of subhalos is $dN/dM \propto M^{-1.9}$, implying that the majority of the total mass in subhalos is locked up in the most massive objects. Accounting for detection efficiency and for the fact that some of the sources may be extended, a search in the one-year Fermi-LAT data has revealed that there are no conclusively-detected dark matter subhalos~\citep{Ackermann:2012nb}. Initial promising candidates were eventually correlated with astrophysical sources at other wavelengths. For a $100$ GeV mass WIMP, this null detection corresponds to a limit on the annihilation cross section approximately two orders of magnitude greater than the thermal relic scale.
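The claim that most of the subhalo mass is locked up in the most massive objects follows directly from integrating the mass function: the mass-weighted integrand $M\,dN/dM \propto M^{-0.9}$ integrates to a quantity growing as $M^{0.1}$, dominated by the upper mass limit. A quick numerical check, with purely illustrative mass bounds:

```python
# Fraction of the total subhalo mass locked in subhalos above a cut mass,
# assuming dN/dM ∝ M^(-1.9). The mass range and cut below are illustrative
# choices, not values taken from any specific simulation.
alpha = 1.9
m_min, m_max = 1e4, 1e10   # assumed subhalo mass range, solar masses
m_cut = 1e7                # mass cut quoted in the text, solar masses

def cumulative_mass(lo, hi):
    """Integral of M * dN/dM ∝ M^(1-alpha) from lo to hi, up to a constant."""
    e = 2.0 - alpha          # exponent after integrating M^(1-alpha)
    return (hi**e - lo**e) / e

frac_above_cut = cumulative_mass(m_cut, m_max) / cumulative_mass(m_min, m_max)
# With these assumed bounds, roughly two thirds of the total subhalo mass
# sits in subhalos above 1e7 solar masses.
```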
\subsection{Galactic center}
\par In contrast to the dSph analysis, it is not yet possible to make reliable predictions for the flux of gamma-rays from the Galactic center, because there are no direct empirical constraints on the dark matter distribution in this region. Kinematic data are consistent with both cored and cusped dark matter profiles. Further, the diffuse emission from neutral pion decay, bremsstrahlung, and inverse Compton scattering is not known to the precision that is required to accurately subtract out the gamma-ray emission that traces back to cosmic rays. In spite of this lack of understanding of cosmic ray induced emission, some recent studies have hinted that large scale features in the diffuse gamma-ray emission towards the Galactic center are consistent with what is nominally expected from a WIMP induced gamma-ray spectrum~\citep{Hooper:2010mq,Abazajian:2012pn}. Further, there have been more recent suggestions of a line feature near the Galactic center~\citep{Weniger:2012tx}. More will be learned from Fermi-LAT data, as well as from HESS-II data, about these potential features in the coming years.
\subsection{Diffuse searches}
\par If the dark matter has the thermal relic cross section and has a significant s-wave component, it may also be identified over larger angular scales in the Galactic halo. However, extraction of this signal is difficult at present, because it relies on marginalizing over a large number of parameters that describe the diffuse emission from cosmic rays~\citep{Ackermann:2012rg}. Further, there is uncertainty in the scale of the annihilation cross section that these results are testing due to the uncertainty in both the local dark matter distribution and the total mass of the Milky Way. The case is similar for indirect dark matter searches that rely on extracting a dark matter signal from the isotropic background~\citep{Abdo:2010dk}, which has an observed power law spectrum with a spectral index of $2.4$~\citep{Abdo:2010nz}. The predictions for the extragalactic radiation component from dark matter annihilation are particularly difficult due to the unknown contribution to the flux from dark matter substructure in galaxies. Complementing these results from gamma-ray searches, it is also interesting to note that searches for diffuse neutrinos are now able to place limits on the annihilation cross section into neutrinos, but the sensitivity is a few orders of magnitude worse than that of the gamma-ray searches~\citep{Abbasi:2011eq}.
\subsection{Galaxy clusters}
\par Several nearby galaxy clusters are promising targets for indirect dark matter searches. For the nearest clusters, because of the appropriate mass and distance factors, the emission is predicted to be similar to that from satellite galaxies. Also, several nearby clusters have well-determined mass profiles, so it is possible to make detailed predictions for the expected gamma-ray flux in a manner similar to the case of the dSphs. However, unlike the dSphs, there is expected to be significant gamma-ray emission from clusters on the scales probed by the cluster gas distribution~\citep{Pinzke:2010st}; understanding this signal in more detail is required to extract a WIMP signal.
\par There have been no conclusive detections of gamma-ray emission from clusters with the Fermi-LAT to date~\citep{Ackermann:2010rg}. For a smooth dark matter mass distribution, at $10$ GeV this non-detection constrains the annihilation cross section $\langle \sigma v \rangle$ to less than approximately $10^{-25}$ cm$^3$ s$^{-1}$ for annihilation into $b \bar b$~\citep{Ando:2012vu,Han:2012uw}. However, a more precise determination is difficult due to the uncertainty in the component of the annihilation that results from halo substructure. Indeed, for extrapolation down to Earth mass scales, the emission from substructure may increase the smooth flux by about three orders of magnitude~\citep{Gao:2011rf}. The vast majority of this emission is expected from the outer regions of the cluster, where the dominant component of the substructure is distributed. Understanding the nature of this extended emission, and separating it out from the less extended emission that traces the gas distribution, will be the most important aspect of gamma-ray cluster analyses going forward.
\section{Direct Dark Matter Searches}
\par As highlighted above, both spin-independent and spin-dependent direct dark matter searches are now reaching the sensitivity to probe the theoretically well-motivated Higgs-mediated dark matter interactions. The best modern limits on WIMP spin-independent interactions over the entire mass range of $10-1000$ GeV now come from the XENON100 experiment~\citep{2012arXiv1207.5988X}, which has a maximal sensitivity to an elastic scattering cross section of $\sim 10^{-45}$ cm$^2$ at $50$ GeV. As these limits continue to improve, and optimistically close in on a confirmed detection, it will be increasingly important to understand and separate the three components of systematic uncertainty that go into determining the dark matter properties. Very broadly, these systematics can be classified as those that arise from the experimental backgrounds in the analysis, those that arise from theoretical uncertainties in the prediction for the WIMP-nucleon cross section, and those that arise from our uncertainty in the mass distribution and the velocity distribution of the dark matter. When a WIMP detection is confirmed, the third of these will likely be the most important systematic impacting the determination of WIMP properties. For the remainder of this section, I highlight different theoretical and observational aspects of this final systematic, focusing in particular on how it affects modern results, and how it can be dealt with more rigorously in the future.
\subsection{Local dark matter}
\par The local dark matter density is determined from measurements of the local distribution of stars and their kinematic information. However, extraction of the local dark matter density is rendered difficult because it appears that the dark matter is subdominant to the various baryonic matter components in the Solar neighborhood. Summing up the contributions from low-mass stars, as well as gas from various temperature phases of the interstellar medium, the total mass density of local baryonic matter is approximately $0.1$ M$_\odot$ pc$^{-3}$~\citep{Holmberg:1998xu}. Recent analysis of the kinematics of bright stars finds that the local dark matter density may be up to three times larger than the canonical value of $0.3$ GeV cm$^{-3}$, though these results still systematically depend on the inputs of the analysis and the specific stellar population that is utilized~\citep{Garbari:2011dh,Garbari:2012ff}. Note that even these estimates are still below the local baryonic material by up to a factor of several, and they still carry both significant systematic and statistical uncertainties. Various other analyses that add in constraints from the total Galactic potential find that this uncertainty is reduced, and the mean central value for the dark matter density is slightly larger~\citep{Catena:2009mf,McMillan:2011wd}. However, it is still important to note that these latter estimates are sensitive to the shape and the scale radius of the Milky Way dark matter halo.
\par The local kinematic measurements above primarily measure the ``smooth'' distribution of dark matter in the Solar neighborhood. However, from theoretical predictions of dark matter halo formation in cold dark matter cosmology, the distribution of dark matter in the Galactic halo is not smooth, so in principle it is possible that the Sun resides in a significant local over- or under-density of dark matter, which may affect the implied constraint on the WIMP elastic scattering cross section. Numerical simulations~\citep{Vogelsberger:2008qb}, as well as analytic models~\citep{Kamionkowski:2008vw}, have begun to address this issue, finding that the probability for the Sun to reside in a significant over-density or under-density is small, of order $10^{-4}$\%. Further, there are predictions of a dark matter disk in the Galaxy, which may have its origin in the accretion of a massive satellite galaxy that was dragged into the disk by dynamical friction~\citep{Read:2008}. However, analysis of stellar kinematics that extend out beyond a few kiloparsecs places strong limits on a dark matter disk component in the Galaxy~\citep{Bidin:2010rj}.
\subsection{WIMP Velocity distribution}
\par The WIMP-nucleon scattering event rate depends in a more phenomenologically interesting manner on the velocity distribution of WIMPs in the halo. Although we have measurements of the distribution of stellar velocities in the disk and in the extended stellar halo of the Milky Way, the only methods that we have available to study the dark matter velocity distribution are theoretical modeling, numerical simulations, or, most ideally, direct detection of WIMPs themselves. The mean WIMP event rate scales as $\int d^3 \vec v \, f(\vec v)/v$, where $f(\vec v)$ is the velocity distribution. This scaling can be simply understood by noting that direct detection experiments are sensitive to the mean WIMP velocity, $\int d^3 \vec v \, v f(\vec v)$, and at low energies the WIMP-nucleon cross section scales as $1/v^2$. Direct dark matter searches typically assume the so-called standard halo model (SHM) for $f(\vec v)$ to interpret their results in terms of a WIMP mass and cross section. The SHM is an isotropic Maxwellian distribution with a cut-off imposed at the local Galactic escape velocity~\citep{Lewin:1995rx}. Translated into position space, the SHM velocity distribution corresponds to an isothermal dark matter density profile.
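As a sketch of the velocity integral just described, the following computes the mean inverse speed (and, for comparison, the mean speed) for the SHM: an isotropic Maxwellian speed distribution truncated at the escape speed. The parameter values are conventional round numbers assumed for illustration, not measurements quoted in this text.

```python
import numpy as np

# SHM-like truncated Maxwellian speed distribution. Parameter values are
# conventional assumptions (dispersion parameter and local escape speed).
v0 = 220.0     # km/s, assumed velocity dispersion parameter
v_esc = 544.0  # km/s, assumed local escape speed

v = np.linspace(1e-3, v_esc, 100_000)
dv = v[1] - v[0]
f_speed = v**2 * np.exp(-(v / v0) ** 2)       # unnormalized speed distribution

norm = np.sum(f_speed) * dv
mean_inv_speed = np.sum(f_speed / v) * dv / norm   # <1/v>, in (km/s)^-1
mean_speed = np.sum(f_speed * v) * dv / norm       # <v>, in km/s
```

For the untruncated Maxwellian, $\langle 1/v \rangle = 2/(\sqrt{\pi}\,v_0)$ and $\langle v \rangle = 2 v_0/\sqrt{\pi}$; the escape-speed cutoff shifts both only slightly, which is why the tail of the distribution matters most for light WIMPs near threshold rather than for the bulk rate.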
\par Although the SHM is useful for calibrating results from different direct detection experiments, it is now becoming clear that the SHM is not the appropriate description of the velocity distribution of dark matter halos in N-body simulations. Indeed, the highest resolution N-body simulations of Milky Way-mass halos find that the velocity distribution differs from the SHM in several important and interesting ways~\citep{Vogelsberger:2008qb,Kuhlen:2009vh}. The peak of the distribution is broader than is expected from the SHM. Though the physical origin of this is unclear, it may be a reflection of the different dispersions of the different velocity components. Distinct features in the velocity distribution are present due to individual subhalos, and broader, more extended features are apparent out in the power law tail of the distribution that reflect features in energy space~\citep{Vogelsberger:2008qb}. Finally, and probably most critical for the purposes of direct dark matter detection, the extreme high velocity tail of the distribution appears to be suppressed relative to the SHM.
\par Although the dark matter velocity distribution is generated from a combination of complicated physical processes that include violent relaxation, phase mixing, and smooth accretion, and the distribution function is certainly neither spherical nor isotropic, it is worthwhile to consider whether the features of the simulated velocity distributions can be understood in the context of simplified theoretical models of the distribution function. In order to gain the best physical intuition, the simplest assumptions to make are that the dark matter velocity distribution is spherical and isotropic, and that the system is isolated and in equilibrium. If the dark matter density profile falls off in the outer region as a power law $r^{-\gamma}$, then by taking the limit of the energy distribution as the binding energy approaches zero it is possible to show that the tail of the velocity distribution will also fall off with a power law index $k$ such that $k = \gamma - 3/2$~\citep{Little:1987}. Note that in this terminology, the commonly used Navarro-Frenk-White profile has $r^{-3}$. This result generally implies that $k$ is determined by the shape of the potential in the outer region, where the potential is controlled by the outer slope of the density profile.
\par Numerical solutions for the velocity distribution show that the aforementioned relation between the density profile and the velocity distribution is appropriate for particles within a few percent of the tail of the distribution~\citep{Lisanti:2010qx}. Though the resolution of N-body simulations is not at the level required to fully probe this relation, there are some indications that the power law tail of the velocity distribution is steeper than is predicted by the SHM for the highest resolution halos~\citep{Lisanti:2010qx}. Understanding these properties of the tail of the distribution is particularly important for WIMPs in the mass regime of approximately $10$ GeV; a WIMP at this mass scale may be able to reconcile signals reported in a couple of direct detection experiments with sensitivity to low energy nuclear recoils~\citep{Bernabei:2010mq,Aalseth:2010vx}.
\par The N-body simulations discussed above provide us with highly precise estimates of the velocity distribution in a small number of halos that were re-simulated from halos in larger cosmological volumes. In order to more robustly test the trends that have been seen, it is necessary to study the velocity distribution of a larger sample of Milky Way-mass halos. Naturally, when extracting simulated halos from a larger cosmological volume (and not re-simulating them at higher resolution, as in the case of the simulated halos discussed above), resolution is an important issue that prohibits a robust determination of the velocity distribution for individual halos. As a concrete example, typical Milky Way-mass halos extracted from large scale simulations~\citep{BoylanKolchin:2009an,Klypin:2010qw} have particle masses about five orders of magnitude larger than those in Refs.~\citep{Vogelsberger:2008qb,Kuhlen:2009vh}. However, what is lost in resolution can be gained by using a larger sample of halos over a wider halo mass range. Using a large sample of dark matter halos from Milky Way-mass to cluster mass scales~\citep{Wu:2012wu}, and stacking the resulting velocity distributions, Mao et al.~\citep{Mao:2012hf} find that the broad properties of the velocity distribution translate more globally in cold dark matter cosmologies. In the Mao et al. study, the stacked velocity distribution is well-described by the following functional form,
\begin{equation}
f(v) \propto \exp(-|v|/v_0)\left(v_{\rm esc}-|v|\right)^p,
\label{eq:universal_VDF}
\end{equation}
where $v_0$ and $p$ are fitting parameters that depend on the radial position relative to the scale radius. Due to the behavior of the exponential, this distribution has a wider peak than the SHM, which is a better description of what is found in the simulated distributions. It also better characterizes the power law fall-off of the distribution at high velocities (note that $p \ne k$, where $k$ is defined above as the index of the asymptotic tail of the distribution). Though the theoretical origin of this distribution is not clear, much as the origin of the universal dark matter density profiles found in cold dark matter simulations is unknown, it may be directly pointing towards global trends in the respective distributions of the different principal components of the velocity distribution. For low mass WIMPs and low threshold detectors, the distribution in Equation~\ref{eq:universal_VDF} implies significant deviations in the WIMP event rate relative to the SHM distribution.
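A small sketch of this comparison follows, contrasting the high-speed tail of a distribution of the fitted functional form against the truncated-Maxwellian SHM. The power-law factor is taken as $(v_{\rm esc}-|v|)^p$ so that it is real and vanishes at the escape speed; all parameter values ($v_0$, $p$, $v_{\rm esc}$) are illustrative placeholders, not the fitted values from the simulations.

```python
import numpy as np

# Compare the tail of the fitted-form speed distribution against the SHM.
# All parameters below are illustrative placeholders.
v_esc = 544.0   # km/s, assumed escape speed
v0_shm = 220.0  # km/s, SHM dispersion parameter
v0_fit = 130.0  # km/s, assumed exponential scale of the fitted form
p = 2.5         # assumed power-law index

v = np.linspace(1e-3, v_esc - 1e-3, 50_000)
dv = v[1] - v[0]

def normalize(f):
    return f / (np.sum(f) * dv)

# Speed distributions: multiply the 1-D forms by v^2 for the speed measure.
f_shm = normalize(v**2 * np.exp(-(v / v0_shm) ** 2))
f_fit = normalize(v**2 * np.exp(-v / v0_fit) * (v_esc - v) ** p)

# Fraction of particles in the high-speed tail (above 0.8 v_esc):
tail_shm = np.sum(f_shm[v > 0.8 * v_esc]) * dv
tail_fit = np.sum(f_fit[v > 0.8 * v_esc]) * dv
```

With these placeholder parameters the fitted form puts noticeably less weight in the extreme tail than the hard-truncated Maxwellian, mirroring the tail suppression seen in the simulations, which is the feature most relevant for light WIMPs near detector thresholds.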
\par Though in the discussion above I have motivated the effects of the variations in the WIMP velocity distribution on the mean WIMP-nucleon event rate, to which modern experiments are most sensitive, it is certainly true that the velocity distribution also strongly affects different signatures of WIMPs in underground detectors. These include the annual modulation signal, and also further into the future, the signal in directional dark matter detectors. The annual modulation signal is a sensitive function of both the minimal velocity to scatter a nucleus above the detector threshold, and the anisotropy of velocities in the halo~\citep{Fornengo:2003fm}. In addition, directional dark matter detectors may be able to directly determine the anisotropy of the velocity distribution, in addition to the WIMP mass, though present theoretical estimates indicate that the number of events required to cleanly extract the signal is substantial~\citep{Lee:2008jp,Alves:2012ay}.
\subsection{Extracting WIMP properties}
\par Direct dark matter searches are clearly in the midst of the ``discovery'' phase, attempting to extract the WIMP signal from background sources. The ultimate goal of direct dark matter searches is of course not only to detect the dark matter, but also to extract information on its properties such as the mass and the cross section. In addition to examination of the particle properties, it will be interesting to determine if we can use the detection of WIMPs to understand properties of the Galactic halo. Examples of these properties are the local density and velocity distribution, as well as any more detailed features in these distributions that may result from substructures or streams. Understanding the extent to which it is possible to extract both particle and astrophysical properties will certainly require comprehensive modeling of both of these components.
\par Some recent theoretical work has focused on understanding whether different direct detection experiments are consistent with a dark matter signal, given that they have different values for their thresholds and different backgrounds to deal with~\citep{Fox:2010bz,Gondolo:2012rs}. Formalisms like these will need to be further expanded upon when looking forward towards the potential ``detection'' phase of direct detection. Ultimately, a rigorous statistical formalism is required to model the data using input from both the astrophysical and the particle physics components. Initial studies that focused on extracting WIMP properties in direct detection experiments did so using phenomenological models for the velocity distribution~\citep{Green:2008rd}. Due to the larger WIMP event rate and the lack of form factor suppression in the cross section, these studies brought to light the important result that lower mass WIMPs ($< 100$ GeV) are much better constrained in direct detection experiments than larger mass WIMPs ($>100$ GeV).
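The weaker constraint on heavy WIMPs has a simple kinematic origin that can be illustrated numerically: the recoil spectrum depends on the WIMP mass mainly through the WIMP-nucleus reduced mass, which saturates at the nuclear mass once $m_\chi \gg m_N$. The sketch below uses a schematic xenon nuclear mass; the specific masses compared are arbitrary illustrative choices.

```python
# Why heavy WIMP masses are hard to distinguish in direct detection: the
# recoil spectrum depends on m_chi mainly through the reduced mass mu,
# which saturates at the nuclear mass m_N for m_chi >> m_N.
m_N = 122.0  # GeV, approximate xenon nuclear mass (schematic)

def reduced_mass(m_chi):
    return m_chi * m_N / (m_chi + m_N)

mus = {m: reduced_mass(m) for m in (50, 100, 1000, 5000)}

# Doubling a light mass changes mu by ~55%; raising a heavy mass fivefold
# changes mu by only ~10%, so heavy-mass spectra are nearly degenerate.
low_mass_change = mus[100] / mus[50]       # ≈ 1.55
high_mass_change = mus[5000] / mus[1000]   # ≈ 1.10
```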
\par Strigari and Trotta~\citep{Strigari:2009zb} extended the aforementioned analyses by developing a Bayesian method for extracting WIMP properties, including uncertainties in a wide range of Galactic model parameters. These parameters include the local dark matter density, the circular velocity, the dark matter halo mass, as well as the properties of the Milky Way disk. For a one ton-year exposure with liquid xenon, they find that the mass of a fiducial $50$ GeV WIMP can be determined to a precision of less than approximately 25\%. For WIMPs heavier than $100$ GeV, the mean event rate distribution will only be able to provide a lower bound on the WIMP mass. Combining different detector targets holds promise for further improving the determination of the WIMP mass and cross section~\citep{Pato:2010zk}, in particular using experiments with very different nuclear masses. Even including uncertainties in Galactic model parameters, determination of the WIMP mass and cross section is unbiased when properly marginalizing over the uncertainties in the Galactic model parameters. This result also holds when using parameterizations of the velocity distribution that are closely related to the SHM, though this result on the bias of the parameter reconstruction clearly needs to be explored further for different models of the velocity distribution.
\subsection{Astrophysical backgrounds}
\par Finally, on the topic of direct dark matter searches, it is interesting to point out that ultimately direct dark matter searches will not be zero background experiments. Indeed, this reflects the fact that the predecessors of modern direct dark matter detection experiments were developed for the purposes of solar neutrino detection~\citep{Cabrera:1984rr}. Due to the vector interactions with neutrons in the nucleus, for neutrinos with energies of less than approximately $10$ MeV, the neutrino-nucleus cross section is enhanced by a factor of the square of the mass number. This enhancement is similar to the coherent enhancement of spin-independent WIMP-nucleon interactions. At the lowest detectable recoil energies of $\sim 3$ keV in liquid xenon, solar neutrinos will provide a background for experiments at approximately the 1 ton scale~\citep{Monroe:2007xp}. Larger mass detectors at the scale of approximately $20-100$ tons will be sensitive to atmospheric and diffuse supernova neutrinos over all recoil energy ranges of interest~\citep{Strigari:2009bq}. For a $100$ GeV WIMP, this corresponds to a spin independent cross section of approximately $10^{-48}$ cm$^2$. Though it will make extraction of the WIMP signal more difficult at this scale, it likely will not be an ``irreducible'' background, because the energy spectrum of WIMPs is distinct from each of the neutrino signals. For a large enough sample of events, it will in principle be possible to do spectral analyses of each of the different sources. And of course even farther into the future, it would clearly be possible to distinguish the WIMP signal from the isotropically-distributed atmospheric and supernova signals using dark matter detectors with directional sensitivity.
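The coherent enhancement quoted above is simple arithmetic, but it is the reason heavy targets see these neutrino backgrounds first. A minimal illustration, using approximate mass numbers for two common targets (and noting, as a hedge, that for vector couplings the scaling is more precisely with the neutron number than the full mass number):

```python
# Coherent enhancement of the low-energy neutrino-nucleus cross section:
# roughly proportional to the square of the mass number A. Target mass
# numbers below are approximate/illustrative.
targets = {"Ge": 73, "Xe": 131}
enhancement = {name: A**2 for name, A in targets.items()}

# Per nucleus, xenon gains (131/73)^2 ≈ 3.2 over germanium from coherence alone.
ratio = enhancement["Xe"] / enhancement["Ge"]
```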
\section{Going forward}
\par We are now right at the beginning of the experimental era in particle dark matter searches. Though there have been hints of detections, at different levels of plausibility, it is important to bear in mind that we are only now reaching the sensitivity, through both direct and indirect detection methods, to probe the most well-motivated models from cosmology and particle physics. At this stage, the Fermi-LAT dSph analysis has been the first experiment to achieve robust sensitivity to thermal relic WIMP dark matter, doing so in the mass regime $10-25$ GeV. Direct dark matter searches are now just approaching the theoretically motivated Higgs-mediated regime, and are now probing well-motivated regimes of the supersymmetric parameter space.
\par Over the course of the next several years, the sensitivities of direct and indirect searches will continue to improve, which will certainly improve the modern limits, and hopefully even reveal interesting signals. From the point of view of gamma-ray studies, at least seven years of Fermi-LAT data is expected, clearly statistically improving the two year results from the Fermi-LAT discussed above. However, there are also reasons to believe that the sensitivity will improve more rapidly than is expected from photon counts alone. This is particularly true from the perspective of the dSph searches. First, through galaxy surveys that are now coming online, such as Pan-STARRS~\footnote{http://pan-starrs.ifa.hawaii.edu/public/}, the Dark Energy Survey (DES)~\footnote{http://www.darkenergysurvey.org/}, and even further into the future the Large Synoptic Survey Telescope (LSST)~\footnote{http://www.lsst.org/lsst/}, we are certain to discover more faint satellite galaxies around the Milky Way, in a manner similar to the methods used by the Sloan Digital Sky Survey (SDSS) to discover nearly a dozen new ultra-faint satellites. In the southern sky in particular, only a handful of satellites are now known, and these are only the brightest objects, which have been known for nearly a century since their discovery on photographic plates. As new objects are detected, kinematic analysis of their constituent stars will provide their dark matter masses, and adding these objects to the Fermi-LAT analysis will certainly improve the limits on particle dark matter. It is even possible that new surveys will find a massive, nearby satellite, similar to the SDSS discovery of the satellite Segue 1.
In addition to the potential for detection of new satellites that can be added to the analysis, with an extended mission lifetime for Fermi-LAT, a newly cleaned analysis of the gamma-ray data, and a better understanding of the background, it will be possible to obtain more information from the gamma-ray data. These data will be of particular interest over the next several years, when extended source models for the satellite galaxies will be used in the Fermi-LAT analysis pipeline. With this combination of improvements, it will certainly be possible to reach the thermal relic cross section scale for dark matter masses of $100$ GeV over the next several years. Finally, the proposed Cherenkov Telescope Array (CTA)~\footnote{http://www.cta-observatory.org/}, which is expected to begin operation near 2017, will extend thermal relic dark matter limits to much higher masses, beyond the TeV scale, with better angular resolution than Fermi-LAT. Combining results from the Fermi-LAT and CTA, within the next decade we will fully cover the thermal relic dark matter parameter space up to and above the TeV mass scale.
\begin{theacknowledgments}
I would like to especially thank Barbara Szczerbinska for organizing the CETUP* workshop, Bhaskar Dutta for organizing the dark matter program, as well
as all the participants of the dark matter program for stimulating discussions. I thank Yao-Yuan Mao for discussions on topics that I cover in these proceedings.
\end{theacknowledgments}
\section{Introduction}
With the increasing expansion of sensor networks and Internet of Things (IoT) deployments, allocation of radio spectrum is becoming more complex and dynamic, and poses further challenges in managing radio resources and enabling new applications \cite{b1}. To better capture spectrum usage patterns and improve the efficiency of resource management, radio maps can play more important roles in modern wireless communication systems. A radio map is generally characterized by the power spectral density (PSD) over geographical locations, frequencies and time \cite{b2}. Providing rich and useful information regarding spectrum activities and propagation channels, radio maps
describe the detailed PSD distribution and help develop spectrum management applications \cite{b3}. Usually, a high-resolution radio map should be constructed from sparser measurements \cite{b4}.
One major challenge lies in reconstructing more complete radio maps from partial observations.
General construction of radio maps utilizes either model-based methods or model-free methods \cite{b2}. Model-based methods
assume certain signal propagation models
to express the received PSD as a combination of the transmitted PSD from active transmitters. For example, an interpolation method
\cite{b5} proposes to utilize the log-distance path loss model (LDPL) for Wi-Fi radio map reconstruction. In \cite{b4}, another model-based method introduces
the use of thin-plate spline kernels. Different from model-based approaches, model-free methods do not rely on specific signal propagation models but favor neighborhood information. Typical examples include inverse distance weighted (IDW) interpolation \cite{b6}, Kriging-based interpolation \cite{b7} and Radial Basis
Functions (RBF) interpolation \cite{b8}. In addition, graph-based approaches, such as graph signal processing \cite{b9} and label propagation \cite{b10}, can also assist
radio map reconstruction. Beyond interpolation-based methods, machine learning has also attracted significant attention in radio map reconstruction owing to its
ability to utilize hidden data features \cite{b11,b12,b13}.
\begin{figure}[t]
\centering
\subfigure[]{
\label{saml1}
\includegraphics[height=2.2cm]{rm.jpg}}
\hspace{1cm}
\subfigure[]{
\label{same1}
\includegraphics[height=2.2cm]{rm_sam.jpg}}\\
\vspace{-3mm}
\subfigure[]{
\label{samb11}
\includegraphics[height=2.2cm]{apl_rm.jpg}}
\hspace{1cm}
\subfigure[]{
\label{samb21}
\includegraphics[height=2.2cm,width=2.7cm]{apl_rm_sam.jpg}}
\caption{Examples of Radio Maps: (a) and (b) show the power spectral density and the sensor (receivers, represented by dots) placement for a general large-scale radio map; (c) and (d) show the spectrum distribution (Watts) and the missing observations for restricted areas (marked in yellow) of a small-scale radio map (e.g., several street blocks), which covers only a small part of the large-scale radio map. Note that the coordinates here index the PSD grid. Usually, a small-scale radio map has higher resolution and covers a smaller area than a large-scale one.
}
\vspace{-6mm}
\label{samex1}
\end{figure}
Presently, most existing approaches focus on constructing radio maps from sparse observations, where sensors are spread
over a given region as shown in
Fig.~\ref{same1}. However, in
cases involving inaccessible, restricted, or protected areas, radio measurements are not available, leading to missing observations of certain regions or blocks. Radio map construction for such restricted areas is more challenging and does not lend itself to traditional radio map construction methods.
First, unlike large-scale radio maps, missing observations of the power spectrum covering restricted areas occur in relatively small regions, such as the example of Fig. \ref{samb21}. The PSD distribution in these small-scale regions tends not to follow well-known propagation models but is more sensitive to small-scale environmental features, which makes the implementation of model-based methods more difficult.
Secondly, since available measurement
samples are uneven and observations of
some entire segments are missing,
interpolation methods are ineffective
without accurate and reliable
neighborhood information,
especially for restricted regions.
Last but not least, for practical
reasons, observed data are usually limited, providing insufficient training samples for learning-based approaches.
In this work, to capture the spectrum power distribution from limited and uneven observations, we propose an exemplar-based approach using radio propagation priority to reconstruct radio map in restricted
or inaccessible areas. The main contributions of our work are summarized as follows:
\begin{itemize}
\item Through exploring the pattern of spectrum power from observed data and integrating radio propagation models, we introduce propagation model-based priority to define directions of data filling for missing regions.
\item By analyzing correlations from observed signals, we propose to estimate
missing radio PSD values based on exemplar copying and dictionary learning, respectively.
\end{itemize}
We compare our proposed methods with traditional radio map constructions
by testing over an Applied Physics Laboratory (APL) dataset
from Johns Hopkins University (JHU). Our
test results demonstrate the effectiveness
of the proposed radio map reconstruction method for restricted/inaccessible areas.
\begin{figure}[t]
\centering
\includegraphics[width=2.8in]{fig1.png}
\vspace{-2mm}
\caption{Illustration of Objective Scenarios: the restricted/inaccessible area $\mathbf{Z}_p$ is marked in yellow with area size $M\times N$ in a small-scale radio map $U(\mathbf{Z})\in\mathbb{R}^{P \times Q}$.}
\label{exm}
\vspace{-6mm}
\end{figure}
\section{Problem Description}\label{sysm}
Our model considers a wireless network coverage of
a rectangular area with one transmitter.
All radio observations are arranged on a regular grid and are located in the rectangular area $\mathbf{Z}$ of size $P\times Q$ within the network coverage, denoted by ${U}(\mathbf{Z})\in\mathbb{R}^{P\times Q}$, as shown in Fig. \ref{exm}.
Here, $P$ and $Q$ are the grid dimensions.
Each observation in ${U}(\mathbf{Z})$ is characterized by 2-dimensional (2D) coordinates $Z_i=(X_i,Y_i)$ and the corresponding radio spectrum power $e_i=U(Z_i)$. The restricted/inaccessible area $\mathbf{Z}_p$ with size $M\times N$ is located within $\mathbf{Z}$, marked
in yellow in Fig. \ref{exm}, where $M\leq P, N\leq Q$. No observation within $\mathbf{Z}_p$ is available. Compared to traditional
radio map reconstruction problems, the small-scale radio map has higher resolution (e.g., accurate to 1 meter) and a smaller area, which makes it more sensitive to the nearby environment, such as buildings, trees and roads. Moreover, we only have limited and unbalanced observations around restricted/inaccessible areas.
Our goal is to estimate $U(\mathbf{Z}_p)$ in the restricted/inaccessible area $\mathbf{Z}_p$ from other observed samples in area $\mathbf{Z}$.
Although we only consider one transmitter here, our framework can be directly extended to multiple transmitters by combining all the transmitted PSDs. For convenience, we focus on the one-transmitter case in this work and leave a more detailed analysis for future work.
Note that our objective here is similar to the image inpainting \cite{b14} problem in computer vision. However, traditional image inpainting only concerns pixel values and ignores the wireless communication context. Hence, it is ineffective for capturing the spectrum power distribution in radio scenarios.
Besides observed values, we also consider the radio propagation model to assist radio map reconstruction for restricted/inaccessible areas. More analysis and comparison will be discussed in Section \ref{exp}.
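As a concrete illustration of this setup (a minimal NumPy sketch with hypothetical names, not part of the APL pipeline), the grid $U(\mathbf{Z})$ and the restricted block $\mathbf{Z}_p$ can be represented as a 2D array plus a boolean mask:

```python
import numpy as np

def make_restricted_mask(P, Q, top, left, M, N):
    """Boolean mask for a P x Q grid; True marks the restricted M x N block."""
    mask = np.zeros((P, Q), dtype=bool)
    mask[top:top + M, left:left + N] = True
    return mask

# Toy example: a 10 x 12 grid with a 3 x 4 restricted block at (2, 5).
U = np.random.default_rng(0).random((10, 12))   # stand-in for the observed PSD
mask = make_restricted_mask(10, 12, 2, 5, 3, 4)
U_obs = np.where(mask, np.nan, U)               # NaN marks missing observations
```

The reconstruction task is then to estimate the NaN entries of `U_obs` from the surrounding observed values.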
\section{Exemplar-Based Radio Map Reconstruction}
In this section, we introduce an exemplar-based radio map reconstruction using radio propagation priority.
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{chart.jpg}
\vspace{-8mm}
\caption{Scheme of Proposed Method}
\label{sch}
\vspace{-3mm}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[]{
\label{exem1}
\includegraphics[height=2.1cm]{exm1.png}}
\hspace{1.5cm}
\subfigure[]{
\label{exem2}
\includegraphics[height=2.1cm]{exm2.png}}
\vspace{-2mm}
\caption{Illustration of Filling Process: a) Select a patch $\Psi_q$ in the boundary $\delta \Omega$; b) Estimate the missing values in $\Psi_q$ and regenerate $\tilde\Psi_q$}
\label{exeme}
\vspace{-5mm}
\end{figure}
\subsection{Overview of the Proposed Method}
To fill a region based on
surrounding observations, one intuitive way is to estimate the missing values patch (block) by patch (block) from boundaries between observed and target (restricted area) regions to the center of the restricted/inaccessible area.
In this work, we follow a similar scheme to reconstruct the radio map from observations as shown in Fig. \ref{sch}. To estimate radio power in restricted areas, we start from a small selected patch centered at the boundaries shown as Fig. \ref{exeme}. Next, we estimate the missing values for this selected patch and update the boundary. Through patch-by-patch estimation of the missing values, we can obtain the reconstructed radio map for the whole restricted/inaccessible area. The general steps are described as follows.
\begin{itemize}
\item Step 1: Extract the boundary $\delta \Omega$ between observed region $\Phi$ and target region $\Omega$ (initialized as $\mathbf{Z}_p$) in $\mathbf{Z}$;
\item Step 2: Given a patch $\Psi_p$ with size $n\times n$ centered at point $p$ located at boundaries, i.e., $p\in\delta \Omega$, calculate the priority of the patch as $P(p)$ based on the texture properties of observations and radio propagation features;
\item Step 3: Order all patches $\Psi_p$ centered at $\delta \Omega$ by $P(p)$ and select the one with highest priority as $\Psi_q$;
\item Step 4: Select exemplars from observed region for $\Psi_q$ and estimate the missing values in $\Psi_q$;
\item Step 5: Update $\Phi$ and $\Omega$;
\item Step 6: Update the confidence term in the priority;
\item Step 7: Repeat Step 1-6 until all the missing values in the restricted/inaccessible area $\mathbf{Z}_p$ are estimated.
\end{itemize}
From the steps above, the key issues in the proposed method are how to define priority $P(p)$ to determine the filling direction, and how to estimate the missing values from exemplars. We will discuss more details in Section \ref{detailpro}.
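The outer loop of Steps 1--7 can be sketched as follows. This is an illustrative simplification: the whole front is swept each round, and each boundary cell is filled with the mean of its observed 4-neighbours as a placeholder for the priority-ordered exemplar estimate of Steps 2--4; function names are hypothetical.

```python
import numpy as np

def boundary(miss):
    """Step 1: missing cells with at least one observed 4-neighbour (delta-Omega)."""
    obs = ~miss
    nb = np.zeros_like(miss)
    nb[1:, :] |= obs[:-1, :]; nb[:-1, :] |= obs[1:, :]
    nb[:, 1:] |= obs[:, :-1]; nb[:, :-1] |= obs[:, 1:]
    return miss & nb

def fill_front_to_center(U, miss):
    """Steps 2-7, simplified: fill every boundary cell with the mean of its
    observed 4-neighbours, then shrink the target region and repeat."""
    U, miss = U.copy(), miss.copy()
    while miss.any():
        front = np.argwhere(boundary(miss))
        for i, j in front:
            vals = [U[a, b]
                    for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= a < U.shape[0] and 0 <= b < U.shape[1] and not miss[a, b]]
            U[i, j] = np.mean(vals)
        miss[tuple(front.T)] = False   # Step 5: update observed/target regions
    return U
```

In the full method, the per-cell mean is replaced by the exemplar-based patch estimate and the sweep order is governed by the priority $P(p)$.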
\subsection{Details in the Proposed Method} \label{detailpro}
This part introduces definition of priority based on radio propagation and two approaches to estimate the missing radio map values.
\subsubsection{Definition of Priority}
To find a suitable direction of filling the missing region, we expect to propagate the key information in texture and radio spectrum with larger certainty. Thus, we define the priority of patch selection as follows:
\begin{figure}[t]
\centering
\subfigure[Data term.]{
\label{pp1}
\includegraphics[height=2.5cm]{pp1.png}}
\hspace{3mm}
\subfigure[Radio term.]{
\label{pp2}
\includegraphics[height=2.5cm]{pp2.png}}
\hspace{1mm}
\subfigure[Block term.]{
\label{pp3}
\includegraphics[height=2.5cm]{pp3.png}}
\vspace{-2mm}
\caption{Illustration of Calculating Priority}
\label{pp}
\vspace{-5mm}
\end{figure}
\begin{equation}
P(p)=C(p)\cdot D(p)\cdot B(p)\cdot L(p),
\end{equation}
where the confidence term $C(p)$ together with data term $D(p)$ contain radio map pattern information (texture), whereas radio propagation term $L(p)$ together with block term $B(p)$ describe radio propagation properties. More specifically:
\begin{itemize}
\item $C(p)$: The confidence term $C(p)$ describes the confidence level of the PSD within $\Psi_p$. If there are more points from the observed region, the corresponding patch has a higher confidence value. Suppose that there are $n\times n$ points in $\Psi_p$. The confidence term is calculated as
\begin{equation}
C(p)=\frac{\sum_{v \in (\Psi_p\cap \Phi)}C(v)}{n\times n},
\end{equation}
where $C(v)$ is initialized as $C(v)=1$ for $v\in \Phi$; otherwise, $C(v)=0$. For each iteration, confidence term $C(u)$ for a newly-filled point $u$ in $\tilde \Psi_q$ is updated by $C(u)=C(q)$ before the next iteration at Step 6.
\item $D(p)$: $D(p)$ is the data term describing the gradients of the texture. Suppose that the normal of the boundary at $p$ is $\mathbf{n}_p$, and the orthogonal direction of the texture gradient at $p$ is $\mathbf{s}_p=\nabla T_p^{\perp}$, where $T_p$ is the power level around $p$ and $^{\perp}$ is the orthogonal operator. The data term is defined as
\begin{equation}
D(p)=\frac{|\mathbf{s}_p \cdot\mathbf{n}_p|}{\alpha},
\end{equation}
where $\cdot$ is the inner product, and $\alpha$ is a normalization factor (e.g., $\alpha=1$ if $\mathbf{n}_p$ and $\mathbf{s}_p$ are unit vectors). The data term describes the intensity of radio map texture hitting the boundaries.
\item $L(p)$: The radio propagation term describes the relationship between the PSD at $p$ and the transmitter at location $t$. In model-based approaches, signal power is a function of the distance to the transmitter \cite{b5,b6}. Similarly, we embed the power strength information in $L(p)$ based on the distance $d(t,p)$ between $t$ and $p$. Since the radio propagation property is similar to the texture change described by the data term $D(p)$, we can also measure the certainty of radio propagation based on its strength hitting the boundary:
\begin{equation}
L(p)=|d(t,p)|^{-\beta}|\mathbf{l}_p\cdot \mathbf{n}_p|,
\end{equation}
where $\beta$ is the inverse distance parameter, $\mathbf{n}_p$ is the normal of the boundary at $p$, and $\mathbf{l}_p$ is the direction of radio propagation from $t$ to $p$, as shown in Fig. \ref{pp2}.
\item $B(p)$: Since the radio map around a restricted/inaccessible area is small-scale and sensitive to the environment, we can also embed information on propagation obstacles in the block term $B(p)$. From additional resources, such as satellite images and city maps, we can segment buildings (in yellow) and background (in blue) as shown in Fig. \ref{pp3}. Let $l_p$ be the part of the line connecting $t$ and $p$ within the whole region $\mathbf{Z}$ defined in Section \ref{sysm}, i.e., the red parts in Fig. \ref{pp3}. Then we define $B(p)$ as
\begin{equation}\label{btm}
B(p)=1-\frac{\textnormal{the length covering buildings in } l_p}{\textnormal{the total length of } l_p}.
\end{equation}
If the radio propagates over more obstacles, $B(p)$ is smaller and the priority is reduced.
\end{itemize}
By selecting those patches with the
largest $P(p)$ to fill first, we can determine the filling direction with a larger confidence level in both texture and radio propagation. Note that we define the priority based on a single transmitter here. If there are multiple transmitters, one can simply modify $P(p)$ as
\begin{equation}
P(p)=C(p)\cdot D(p)\cdot [\sum_i B_i(p)\cdot L_i(p)],
\end{equation}
where $B_i(p)$ and $L_i(p)$ are for the $i$th transmitter. We plan to explore general representations for multiple transmitters in
future works.
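The priority terms can be sketched as follows (an illustrative NumPy sketch under simplifying assumptions: the data term $D(p)$ is omitted since it requires isophote gradients, the boundary normal is passed in as a fixed unit vector, and the segment $l_p$ is sampled at discrete points rather than rasterized exactly; all function names are hypothetical):

```python
import numpy as np

def confidence_term(conf, n):
    """C(p): mean of the confidence map over the n x n patch centred at each
    cell (observed cells start at 1, missing cells at 0; zero-padded borders)."""
    r = n // 2
    pad = np.pad(conf, r)
    C = np.empty_like(conf, dtype=float)
    for i in range(conf.shape[0]):
        for j in range(conf.shape[1]):
            C[i, j] = pad[i:i + n, j:j + n].mean()
    return C

def radio_term(tx, shape, normal, beta=1.0):
    """L(p) = |d(t,p)|^(-beta) * |l_p . n_p| with a fixed unit boundary normal."""
    ii, jj = np.indices(shape, dtype=float)
    d = np.hypot(ii - tx[0], jj - tx[1])
    d[d == 0] = 1.0                       # avoid division by zero at the TX cell
    lp0, lp1 = (ii - tx[0]) / d, (jj - tx[1]) / d
    return d ** (-beta) * np.abs(lp0 * normal[0] + lp1 * normal[1])

def block_term(tx, p, buildings, samples=100):
    """B(p): 1 - fraction of the sampled segment t->p lying on building cells."""
    t = np.linspace(0.0, 1.0, samples)
    ii = np.clip(np.round(tx[0] + t * (p[0] - tx[0])).astype(int), 0, buildings.shape[0] - 1)
    jj = np.clip(np.round(tx[1] + t * (p[1] - tx[1])).astype(int), 0, buildings.shape[1] - 1)
    return 1.0 - buildings[ii, jj].mean()
```

The full priority multiplies these terms with $D(p)$ for each boundary patch and selects the patch with the largest product.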
\begin{figure}[t]
\centering
\subfigure[Mean Power (watt).]{
\label{img1}
\includegraphics[height=2.5cm,width=3.3cm]{apl_rm1.jpg}}
\hspace{0.8cm}
\subfigure[Satellite]{
\label{img2}
\includegraphics[height=2.5cm,width=2.5cm]{satellite.png}}
\vspace{-2mm}
\caption{Illustration of APL dataset.}
\vspace{-3mm}
\label{data_img}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[]{
\label{img11}
\includegraphics[height=2cm,width=2.7cm]{nrm.jpg}}
\subfigure[]{
\label{img21}
\includegraphics[height=2cm,width=2.7cm]{building_Seg.png}}
\subfigure[]{
\label{img31}
\includegraphics[height=2cm,width=2.7cm]{bp.jpg}}
\vspace{-2mm}
\caption{Preprocessing of APL Dataset: a) Normalized radio map; b) Segmented buildings; and c) Block term in priority.}
\label{data_img1}
\vspace{-3mm}
\end{figure}
\subsubsection{Estimation of Missing Measurement}
After selecting the patch $\Psi_q$ with highest priority, the next step is to estimate the missing measurement values from identified regions. In this part, we introduce two exemplar-based approaches as follows:
\begin{itemize}
\item Estimation based on exemplar copy (EPC):
Copying values from similar patches in the observed region at the same indices is a widely-used approach to fill the missing regions \cite{c15}. Here, we also consider exemplar-based copying to reconstruct the radio map. Let $\Psi_q$ be the $n\times n$ patch selected by $P(p)$. We first find the most similar exemplar patch $\Psi_s$ from the observed region according to
\begin{equation}
\Psi_s=\arg \min_{\Psi_w, w\in \Phi}\sum_{i\in \Phi}[(\Psi_w)_i-(\Psi_q)_i]^2,
\end{equation}
where $(\Psi)_i$ is the PSD value at position $i$ within the patch $\Psi$. We then fill the missing value as
\begin{equation}
(\tilde \Psi_q)_i=\left \{
\begin{aligned}
(\Psi_q)_i &\quad \quad i\in\Phi\\
(\Psi_s)_i&\quad\quad i\in\Omega
\end{aligned}
\right..
\end{equation}
\item Estimation based on dictionary learning (EPD): Generating a dictionary from observations, one can optimize a sparse vector to combine the code-words in the dictionary to estimate missing values in the patches \cite{b16}. After selecting $n\times n$ patch $\Psi_q$, we can randomly pick $W$ patches from $\Phi$ and generate a dictionary $\mathbf{A}\in\mathbb{R}^{n^2 \times K}$ containing $K$ normalized code-words via K-SVD \cite{b17} or matching pursuit \cite{b18}. Reshaping patch $\Psi_q$ as a vector $\mathbf{x}_q$, we formulate dictionary learning as follows:
\begin{equation}
\tilde{\bm{\beta}}=\arg \min_{\bm{\beta}} ||(\mathbf{x}_q)_\Phi-\mathbf{A}_\Phi \bm{\beta}||^2_2+\lambda||\bm{\beta}||_1,
\end{equation}
where $\bm{\beta}\in\mathbb{R}^{K\times 1}$ is a sparse vector and $(\mathbf{x}_q)_\Phi$ is the observed part in $\Psi_q$. From the optimal $\bm{\beta}$, we reconstruct the radio map as
\begin{equation}
(\tilde \Psi_q)_i=\left \{
\begin{aligned}
(\Psi_q)_i &\quad \quad i\in\Phi\\
(\mathbf{A}\bm{\beta})_i&\quad\quad i\in\Omega
\end{aligned}
\right..
\end{equation}
\end{itemize}
In general, exemplar-based copying performs better when the radio map has regular, continuous patterns, while the dictionary learning performs better when the environment is more complex. See more discussions in Section \ref{exp}. Other potential ways to estimate the missing values include subspace learning \cite{b19} and graph learning \cite{b20}.
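Minimal sketches of the two estimators are given below, with EPC as an exhaustive SSD patch search over fully observed windows and EPD solving the lasso step with a plain ISTA loop rather than a dedicated solver; all names are illustrative and the code assumes at least one fully observed candidate patch exists.

```python
import numpy as np

def epc_fill(U, obs, center, n):
    """EPC: copy the missing part of the n x n patch at `center` from the
    fully observed patch with the smallest SSD over observed positions."""
    r = n // 2
    i0, j0 = center[0] - r, center[1] - r
    tgt, m = U[i0:i0 + n, j0:j0 + n], obs[i0:i0 + n, j0:j0 + n]
    best, best_ssd = None, np.inf
    for a in range(U.shape[0] - n + 1):
        for b in range(U.shape[1] - n + 1):
            if (a, b) == (i0, j0) or not obs[a:a + n, b:b + n].all():
                continue
            cand = U[a:a + n, b:b + n]
            ssd = ((cand[m] - tgt[m]) ** 2).sum()
            if ssd < best_ssd:
                best, best_ssd = cand, ssd
    out = tgt.copy()
    out[~m] = best[~m]
    return out

def epd_fill(x, m, A, lam=0.01, iters=200):
    """EPD: lasso on the observed rows of dictionary A via ISTA, then A @ beta
    fills the missing entries of the vectorised patch x (m = observed mask)."""
    Ao, xo = A[m], x[m]
    step = 1.0 / np.linalg.norm(Ao, 2) ** 2     # 1 / Lipschitz constant
    beta = np.zeros(A.shape[1])
    for _ in range(iters):
        g = beta - step * Ao.T @ (Ao @ beta - xo)
        beta = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    out = x.copy()
    out[~m] = (A @ beta)[~m]
    return out
```

In practice the search region for EPC and the dictionary for EPD are restricted to the observed region $\Phi$, and the patch to fill is the one selected by the priority $P(p)$.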
\section{Experiment Results} \label{exp}
In this section, we present test results to demonstrate the efficacy of the proposed methods.
\subsection{Data Information and Preprocessing}
Our test is based on the APL dataset, which was generated from the Wireless InSite software \cite{a1} with Light Detection and Ranging (LIDAR) information of a selected region in Atlanta, Georgia, USA. The LIDAR data used for the simulation has a 1-meter resolution.
The APL dataset contains a transmitter (TX) and distributed single-antenna receivers in a 10-block area. The TX antenna is a uniform square array of $16\times 16$ elements, spaced at 0.5 wavelength. The TX is located at a latitude/longitude of 33.689/-84.390. The antenna height is 201 meters, and the carrier frequency is 2660 MHz. The receiver antennas are assumed to be at a height of 2.01 meters and uniformly spaced by 0.8 meters. The location of the observed area is at 33.7283$\sim$33.7327 in latitude and -84.3923$\sim$-84.3854 in longitude. To generate the radio map from the APL data, we average the antenna gains from the TX for each data point and conform it to a $604\times 800$ grid, i.e., $U(\mathbf{Z})\in \mathbb{R}^{604 \times 800}$, where the grid resolution (each $1\times 1$ block) is 0.8 meters. Note that some original points might be mapped to shared positions in the grid during this process; for those, the values are averaged in the shared locations.
The mean power in $\mathbf{Z}$, together with its satellite image, are presented in Fig. \ref{data_img}.
For convenience, we linearly normalize the radio map to $0\sim 1$. Note that the original radio map can be recovered without loss from the normalized one, and their patterns are exactly the same, as shown in Fig. \ref{img11}. Based on the satellite map, we segment the buildings against the background and calculate the block term in the priority by Eq. (\ref{btm}), as shown in Fig. \ref{img21} and Fig. \ref{img31}, respectively.
\begin{figure}[t]
\centering
\subfigure[Scenario 1]{
\label{ss1}
\includegraphics[height=2.5cm,width=3.3cm]{s1.jpg}}
\hspace{0.8cm}
\subfigure[Scenario 2]{
\label{ss2}
\includegraphics[height=2.5cm,width=3.3cm]{s2.jpg}}
\vspace{-2mm}
\caption{Selected Areas to Test Performance: a) Scenario with regular neighborhood pattern; and b) Scenario with complex neighborhood pattern. The restricted/inaccessible areas $\mathbf{Z}_p$ with size $100\times 100$ are marked in yellow.}
\vspace{-5mm}
\label{sss}
\end{figure}
\subsection{Performance in Selected Areas}\label{psa}
To measure performance, we first consider two specific scenarios, i.e., one with a regular neighborhood pattern and one with a complex neighborhood pattern, shown in Fig. \ref{sss}. In both scenarios, we consider a restricted/inaccessible area $\mathbf{Z}_p$ with size $100\times 100$ in the grid. The PSD in the whole restricted area (marked in yellow in Fig. \ref{sss}) is unavailable, and we reconstruct it from the other observed parts of $\mathbf{Z}$.
We compare our methods with Model-based Interpolation (MBI) \cite{b5}, Radial Basis Function (RBF) Interpolation \cite{b8}, Label Propagation (LP) \cite{b10}, Exemplar-based Inpainting (EI) \cite{c15}, and Dictionary Learning (DL) \cite{b16}.
MBI and RBF are interpolation methods based on distances. EI and DL are image inpainting approaches without using radio propagation knowledge. For LP, we incorporate satellite images and information
on distance to transmitter as features.
For our proposed method, we consider three setups: 1) texture priority together with block term under exemplar-based copy (EBC); 2) complete priority with all 4 terms under exemplar-based copy (EPC); and 3) complete priority with all 4 terms under exemplar-based dictionary learning (EPD). For image inpainting methods and our proposed methods, we select patch size of $\Psi_p$ as $21\times 21$ for fair comparison. For methods related to dictionary learning, we set the number of code-words to $K=500$.
We apply K-SVD \cite{b17} to generate the dictionary.
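As a stand-in for the K-SVD training step, a naive dictionary can be drawn directly from fully observed patches, with each sampled patch itself serving as a normalized code-word (an illustrative sketch; the actual experiments run K-SVD on such samples):

```python
import numpy as np

def sample_dictionary(U, obs, n, K, rng):
    """Build a naive n^2 x K dictionary from K fully observed random patches,
    columns normalized to unit l2 norm (a stand-in for K-SVD training)."""
    P, Q = U.shape
    cols = []
    while len(cols) < K:
        a = rng.integers(0, P - n + 1)
        b = rng.integers(0, Q - n + 1)
        if obs[a:a + n, b:b + n].all():
            v = U[a:a + n, b:b + n].reshape(-1).astype(float)
            nrm = np.linalg.norm(v)
            if nrm > 0:
                cols.append(v / nrm)
    return np.stack(cols, axis=1)
```

Each column is one vectorised patch; K-SVD would further refine these code-words to minimise the sparse-coding residual.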
\begin{figure*}[t]
\centering
\subfigure[Reconstructed Results for Scenario 1.]{
\label{r1}
\includegraphics[width=7in]{r1.png}}\\
\vspace{-2mm}
\subfigure[Reconstructed Results for Scenario 2.]{
\label{r2}
\includegraphics[width=7in]{r2.png}}
\vspace{-3mm}
\caption{Visualized Results in Selected Areas: (a) and (b) describe the regular and complex area, respectively; the results in red blocks are zoom-in presentations.}
\label{rr}
\vspace{-3mm}
\end{figure*}
\begin{table*}[t]
\caption{Numerical Results in Selected Areas.}
\centering
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
& EI & DL & RBF & MBI & LP & EBC & EPC & EPD \\ \hline
MSE in Scenario 1 & 0.0092 & 0.0152 & 0.0448 & 0.0271 & 0.0327 & 0.0088 & \textbf{0.0038} & 0.0096 \\ \hline
MSE in Scenario 2 & 0.0258 & 0.0158 & 0.0217 & 0.0173 & 0.0306 & 0.0227 & 0.0152 & \textbf{0.0136} \\ \hline
\end{tabular}
\vspace{-4mm}
\label{tt}
\end{table*}
The visualization results are shown in Fig. \ref{rr}, and the corresponding numerical results are shown in Table \ref{tt}. Here, we define
$\mbox{MSE}=\frac{1}{m}\sum_{i=1}^m (x_i-\tilde{x}_i)^2$,
where $x_i$ and $\tilde {x}_i, i=1,\cdots,m$, are the ground-truth and estimated radio map values, respectively.
As shown in Fig. \ref{rr}, the model-based MBI fails to estimate the radio map in the restricted/inaccessible area, since the power spectrum in this dataset varies little with the distance from the transmitter but is highly sensitive to the surrounding environment, as seen from Fig. \ref{data_img}. The RBF interpolation also fails to reconstruct the missing segments and fills the missing radio map with nearly uniform values, since the observations are uneven, especially near the center of the restricted/inaccessible areas.
For the learning-based LP, the results display strong noise since the training samples from satellite images are noisy. Compared to the image inpainting methods, the proposed methods based on radio propagation priority show superior performance, since the propagation information can enhance the features and textures. As shown in Fig. \ref{img31}, the propagation priority terms favor the vertical direction to fill the region, which matches the distribution of the spectrum pattern in Fig. \ref{img1}. In our proposed methods, copy-based estimation displays sharper features while dictionary-learning-based estimation provides more robust but blurred results. In the first scenario, with regular nearby patterns close to the main road, EPC displays significant improvement since the vertical patterns therein are clear and similar. In the second scenario, near buildings and trees, EPC sometimes over-estimates some regions from neighborhoods while EPD displays more robustness. The numerical results in Table \ref{tt} are consistent with the visualization results. Thus, one can determine whether EPC or EPD should be selected for estimation depending on the variations of
the nearby environment.
\subsection{Overall Performance for Different Area Sizes}
We further examine the overall radio map estimation performance for different area sizes. In this test, we compare different
methods
for restricted/inaccessible areas of various area sizes, i.e., $30\times 30$, $70\times 70$, $100\times 100$, $130\times 130$, and $160\times 160$. For each size, we randomly generate 10 restricted/inaccessible areas as the target region within Fig. \ref{img1}. We then average the errors over the generated areas for comparison.
In addition to MSE, we define a normalized error ({NE}), i.e.,
$\mbox{NE}=\frac{\sum_{i=1}^m(x_i-\tilde{x}_i)^2}{\sum_{i=1}^m x_i^2}$.
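Both error metrics can be written as small helper functions (illustrative sketch):

```python
import numpy as np

def mse(x, x_hat):
    """MSE = (1/m) * sum_i (x_i - x_hat_i)^2 over the m reconstructed points."""
    x, x_hat = np.asarray(x, dtype=float), np.asarray(x_hat, dtype=float)
    return float(np.mean((x - x_hat) ** 2))

def ne(x, x_hat):
    """NE = sum_i (x_i - x_hat_i)^2 / sum_i x_i^2: error normalized by the
    true signal energy, so it is insensitive to the absolute power level."""
    x, x_hat = np.asarray(x, dtype=float), np.asarray(x_hat, dtype=float)
    return float(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))
```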
The results are shown in Fig. \ref{err}. Since MBI fails to capture the spectrum patterns in small-scale areas, it displays consistently poor results. For the other methods, the radio map error increases as the area size grows. This is intuitive, since neighborhood information and observations become more limited and uneven for larger restricted/inaccessible areas, especially near the center of the restricted/inaccessible area. Our proposed methods are better than the traditional inpainting and LP approaches, demonstrating the important impact of the proposed radio propagation priority. EPC and EPD show similar MSE results while EPD generates better NE than EPC.
The results indicate that EPC works better in some special scenarios whereas EPD is more robust regardless of the power in the restricted/inaccessible areas. These conclusions are similar to those of Section \ref{psa} and further demonstrate the benefits of the proposed method.
\begin{figure*}[t]
\centering
\subfigure[MSE.]{
\label{mse}
\includegraphics[height=3.1cm]{MSE.PNG}}
\hspace{5cm}
\subfigure[NE]{
\label{ne}
\includegraphics[height=3.1cm]{NE.png}}
\caption{Numerical Results in Different Area Sizes}
\label{err}
\vspace{-5mm}
\end{figure*}
\subsection{Guidelines of Parameter Selection}
In this part, we consider the proposed methods under different parameters to develop selection guidelines. We first evaluate the impact of the patch size of $\Psi_p$ for EPC and EPD in Table \ref{psz}.
For EPC, we test a randomly selected $100 \times 100$ restricted/inaccessible area.
For EPD, we set $K=1000$ for the dictionary and test a
$40 \times 40$ restricted/inaccessible area.
Patch size selection is a trade-off between global information and local observations: with a larger patch size, more global information is considered but the uncertainty grows. From the results, a suitable patch size is around 15$\sim$21. We also test EPD with different dictionary sizes in Table \ref{ddz}, which shows that a larger $K$ can achieve better performance.
\begin{table}[t]
\centering
\caption{MSE for Different Patch Sizes}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Patch Size & 9 & 15 & 21 & 27 & 33 \\ \hline
MSE for EPC & {0.0177} & {0.0132} & {0.0152} & {0.0163} & {0.0205} \\ \hline
MSE for EPD & {0.0020} & {0.0018} & {0.0025} & {0.0027} & {0.0034} \\
\hline
\end{tabular}
\label{psz}
\vspace{-3mm}
\end{table}
\begin{table}[t]
\centering
\caption{MSE for EPD with Different Dictionary Sizes}
\begin{tabular}{|l|l|l|l|l|}
\hline
K (Patch size=15) & 500 & 1000 & 1500 & 2000 \\ \hline
MSE & 0.0026 & 0.0025 & 0.0020 & 0.0020 \\ \hline
\end{tabular}
\label{ddz}
\vspace{-3mm}
\end{table}
\section{Conclusion}
In this work, we introduce an exemplar-based approach to wireless radio map reconstruction in cases of missing measurements. More specifically, we propose a propagation-based priority to determine the filling direction based on the PSD pattern and radio properties. We then introduce two new schemes for patch estimation. The experimental results demonstrate the efficiency of the propagation-based priority in capturing the PSD patterns and the power of our proposed methods in radio map reconstruction for missing areas, which makes further spectrum access and management more reliable for such restricted/inaccessible areas.
\section{Introduction}
\label{sec:introduction}
\IEEEPARstart{I}{mage} denoising is an ill-posed inverse problem to recover a \textsl{clean} image $\mathbf{x}$ from the \textsl{observed} noisy image $\mathbf{y}=\mathbf{x}+\mathbf{n}_{o}$, where $\mathbf{n}_{o}$ is the \textsl{observed} corrupted noise.\
One popular assumption on $\mathbf{n}_{o}$ is the additive white Gaussian noise (AWGN) with standard deviation (std) $\sigma$, which serves as a perfect test bed for supervised networks in the deep learning era~\cite{vggnet,googlenet,resnet}.\
Supervised networks~\cite{nlnet,dncnn,n3net} learn the image priors and noise statistics from plenty of pairs of clean and corrupted images, and achieve promising denoising performance on images with similar priors and noise statistics (e.g., AWGN).
\begin{figure}[t]
\centering
\subfigure{
\begin{minipage}{0.23\textwidth}
\includegraphics[width=1\textwidth]{F-house/br_Noisy_house.png}\vspace{-1mm}
\centering{(a) Noisy: 24.62dB/0.4595}
\end{minipage}
\begin{minipage}{0.23\textwidth}
\includegraphics[width=1\textwidth]{F-house/br_Mean_house.png}\vspace{-1mm}
\centering{(b) Clean Image}
\end{minipage}
}\vspace{-2.5mm}
\subfigure{
\begin{minipage}{0.23\textwidth}
\includegraphics[width=1\textwidth]{F-house/br_CDnCNN_house.png}\vspace{-1mm}
\centering{(c) DnCNN~\cite{dncnn}: 34.23dB/0.8695}
\end{minipage}
\begin{minipage}{0.23\textwidth}
\includegraphics[width=1\textwidth]{F-house/br_NAC_house.png}\vspace{-1mm}
\centering{(d) DnCNN+NAC: \textbf{35.80}dB/\textbf{0.9116}}
\end{minipage}
}
\vspace{-2mm}
\caption{Denoised images and PSNR/SSIM results of DnCNN~\cite{dncnn} (c) and DnCNN trained by our NAC strategy (``DnCNN+NAC'') (d) on the color image \textsl{House} (b) corrupted by AWGN noise ($\sigma=15$) (a).}
\vspace{-2.5mm}
\label{f-example}
\end{figure}
With advances in AWGN noise removal~\cite{mlp,dncnn,n3net}, a natural question is how these denoising networks can exert their effect on real noisy photographs.\
Realistic noise is signal dependent and more complex than AWGN~\cite{crosschannel2016,dnd2017,PolyUdataset}.\
Thus, previous supervised denoising networks unavoidably suffer from a \textsl{domain gap problem}: both the image priors and noise statistics in training are different from those of the real-world test images.\
Recently, several unsupervised~\cite{gcbd,noise2noise,dip,noise2void} and self-supervised~\cite{noise2self,ss2019} networks have been developed to get rid of the dependence on clean images, which are difficult to be obtained in real-world scenarios.\
However, unsupervised networks are subject to the gap in either image priors or noise statistics, while self-supervised ones suffer from the gap in noise statistics, between the external images for training and the corrupted ones for test.\
Besides, several networks~\cite{noise2noise,dip} succeed on the zero-mean noise.\
But the realistic noise in real-world images is not necessarily zero-mean~\cite{crosschannel2016,dnd2017,sidd2018}.
To alleviate the domain gap on image priors and noise statistics between training and test images, in this paper, we propose a ``Noisy-As-Clean'' (NAC) strategy for training self-supervised denoising networks.\
In our NAC, we directly train an image-specific network by taking the corrupted image $\mathbf{y}=\mathbf{x}+\mathbf{n}_{o}$ as the ``\textsl{clean}'' target.\
Thus, the domain gap on image priors is largely bridged by our NAC.\
To reduce the gap on noise statistics, for the target corrupted image $\mathbf{y}$, we take as the input of our NAC a \textsl{simulated} noisy image $\mathbf{z}=\mathbf{y}+\mathbf{n}_{s}$ consisting of the corrupted image $\mathbf{y}$ and a \textsl{simulated} noise $\mathbf{n}_{s}$, which is statistically close to the corrupted noise $\mathbf{n}_{o}$ in $\mathbf{y}$.\
In this way, our NAC network learns to clean up the \textsl{simulated} noise $\mathbf{n}_{s}$ from the doubly corrupted image $\mathbf{z}$ during training, and is thus able to remove the noise $\mathbf{n}_{o}$ from the corrupted image $\mathbf{y}$ during test.
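Concretely, constructing a NAC training pair requires only the observed noisy image itself. Below is a minimal NumPy sketch of this step, assuming AWGN for the simulated noise; the helper function and the noise level are illustrative assumptions, not part of the released implementation:

```python
import numpy as np

def nac_pair(y, sigma, rng):
    # Build one NAC training pair from a single noisy observation y = x + n_o.
    # The observed noisy image y itself serves as the "clean" target; the
    # input is the doubly corrupted image z = y + n_s with simulated noise n_s.
    n_s = rng.normal(0.0, sigma, size=y.shape)  # simulated AWGN with std sigma
    z = y + n_s
    return z, y  # (network input, training target)

rng = np.random.default_rng(0)
y = rng.uniform(0.0, 255.0, size=(16, 16))  # stand-in for an observed noisy image
z, target = nac_pair(y, sigma=5.0, rng=rng)
```

Note that the clean image $\mathbf{x}$ never appears in the pair: the network only ever sees $(\mathbf{z},\mathbf{y})$.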
A simple but useful observation about our NAC strategy is: \textsl{as long as the corrupted noise is ``weak'', it is feasible to train a self-supervised denoising network only with the corrupted test image, and the learned parameters are very close to those of a \textsl{supervised} network trained with a pair of the corrupted image and its clean version}.\ Though very simple, our NAC strategy is very effective for image denoising.\ In Figure~\ref{f-example}, we compare the denoised images by the vanilla DnCNN~\cite{dncnn} and the DnCNN trained with our NAC (DnCNN+NAC), on the image ``House'' corrupted by AWGN ($\sigma=15$).\
We observe that the ``DnCNN+NAC'' achieves better visual quality and higher PSNR/SSIM results than DnCNN~\cite{dncnn}, which is trained on plenty of noisy and clean image pairs.\ %
Experiments on diverse benchmarks demonstrate that, when trained with our NAC strategy, the DnCNN~\cite{dncnn} and ResNet~\cite{resnet} in Deep Image Prior (DIP)~\cite{dip} achieve comparable or better performance than supervised denoising networks on synthetic and real-world noisy images.\
Our work reveals that, \textsl{when the noise is ``weak''}, a self-supervised network trained directly on the corrupted image can obtain comparable or even better performance than supervised networks on image denoising.
In summary, our contributions are mainly three-fold:
\begin{itemize}
\item We propose a ``Noisy-As-Clean'' (NAC) strategy for training self-supervised denoising networks.
\item We provide the theoretical background of our NAC strategy, and turn the DnCNN~\cite{dncnn} and the ResNet in DIP~\cite{dip} into self-supervised networks by our NAC for effective image denoising.
\item Experiments on synthetic and real-world benchmarks show that, on weak noise, the DnCNN and ResNet in~\cite{dip} trained by our NAC achieve comparable or even better performance than the comparison denoising networks.
\end{itemize}
The remaining parts of this paper are organized as follows.
In \S\ref{sec:related}, we introduce the related work.
In \S\ref{sec:nas}, we present the theoretical background of our NAC strategy for self-supervised image denoising.\
In \S\ref{sec:self-sup}, we implement the DnCNN~\cite{dncnn} and ResNet used in~\cite{dip} as self-supervised networks by our NAC.\
Extensive experiments conducted in \S\ref{sec:exp} demonstrate that the DnCNN and ResNet networks trained by our NAC achieve comparable or even better performance than previous supervised image denoising networks on benchmark synthetic and real-world datasets.\
The conclusion is given in \S\ref{sec:con}.
\section{Related Work}
\label{sec:related}
In Table~\ref{t1}, we summarize several state-of-the-art supervised~\cite{dncnn,cbdnet}, unsupervised~\cite{noise2noise,gcbd,noise2void} and self-supervised~\cite{dip,noise2self,ss2019} networks, together with the image priors and noise statistics they exploit.\
In this work, to bridge the \textsl{domain gap problem}, we propose a ``Noisy-As-Clean'' strategy to learn the image-specific internal prior and noise statistics directly from the corrupted test image.\
\noindent
\textbf{Supervised denoising networks} are trained with plenty of pairs of noisy and clean images.\
This category of networks can learn external image priors and noise statistics from the training data.\
Several methods~\cite{dncnn,n3net,nlrn2018} have been developed, achieving promising performance on AWGN removal, where the statistics of training and test noise are similar.\
However, due to the aforementioned \textsl{domain gap problem}, the performance of these networks degrades severely on real-world noisy images~\cite{crosschannel2016,dnd2017,PolyUdataset}.\
\noindent
\textbf{Unsupervised and self-supervised denoising networks} are developed to remove the need for plenty of clean images.\ Along this direction, Noise2Noise (N2N)~\cite{noise2noise} trains the network on pairs of corrupted images of the same scene but with independently sampled noise.\ This approach can learn external image priors and noise statistics from the training data.\ However, in real-world scenarios, it is difficult to collect large numbers of image pairs with independent corruption for training.\ Noise2Void (N2V)~\cite{noise2void} predicts a pixel from its surroundings by learning blind-spot networks, but it still suffers from the domain gap on image priors between the training and test images.\ This work assumes that the corruption is zero-mean and independent between pixels.\ Moreover, as mentioned in Noise2Self (N2S)~\cite{noise2self}, N2V~\cite{noise2void} significantly degrades the training efficiency and denoising performance at test time.\ Recently, Deep Image Prior (DIP)~\cite{dip} reveals that the network structure can resonate with natural image priors, and can be utilized for image restoration without external images.\ However, it is not practical to select a suitable network and early-stop its training at the right moment for each corrupted image.
Self-supervised denoisers~\cite{noise2self,ss2019} employ explicit corruption models, and train the networks only with the corrupted image itself.\
In this work, we utilize an explicit noise model to learn self-supervised denoising networks for real-world image denoising.\
\begin{table}[t]
\centering
\caption{\textbf{Summary of representative networks for image denoising}.
\textbf{S.}: Supervised networks.
\textbf{U.}: Unsupervised networks.
\textbf{SS.}: Self-supervised networks.
\textbf{Pub.}: Publication.
\textbf{Int.}: Internal image priors.
\textbf{Ext.}: External image priors.
\textbf{Stat.}: Statistics.
The networks with ``\cmark'' are able to learn the noise statistics from training data.
}
\begin{tabular}{c|rl|c|c}
\Xhline{1pt}
\rowcolor[rgb]{ .85, .9, 0.95}
&
&
& Image
& Noise
\\
\rowcolor[rgb]{ .85, .9, 0.95}
\multicolumn{1}{c|}{\multirow{-2}{*}{Type}}
& \multicolumn{1}{c}{\multirow{-2}{*}{Method}}
& \multicolumn{1}{l|}{\multirow{-2}{*}{Year'Pub.}}
& \multicolumn{1}{c|}{Prior}
& Stat.
\\
\hline
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{S.}} & DnCNN~\cite{dncnn}
& 17'TIP & Ext. & \cmark
\\
& CBDNet~\cite{cbdnet}
& 19'CVPR & Ext. & \cmark
\\
\hline
\hline
\multicolumn{1}{c|}{\multirow{3}{*}{U.}}& Noise2Noise~\cite{noise2noise} & 18'ICML & Ext. & \cmark
\\
& GAN-CNN~\cite{gcbd} & 18'CVPR & Ext. & \cmark
\\
& Noise2Void~\cite{noise2void} & 19'CVPR & Ext. &
\\
\hline
\hline
\multicolumn{1}{c|}{\multirow{4}{*}{SS.}}
&
Deep Image Prior~\cite{dip} & 18'CVPR & Int. &
\\
& Noise2Self\ \ ~\cite{noise2self} & 19'ICML & Ext. &
\\
& Self-Supervised~\cite{ss2019} & 19'NeurIPS & Ext. &
\\
&
Noisy-As-Clean (Ours) & 20'Submit & Int. & \cmark
\\
\hline
\end{tabular}%
\vspace{-2.5mm}
\label{t1}%
\end{table}%
\begin{figure*}[htp]
\vspace{-0mm}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{F-pipeline/pipeline.pdf}}
\vspace{-2.5mm}
\caption{\textbf{Proposed ``Noisy-As-Clean'' strategy for training self-supervised image denoising networks}.\
In our NAC strategy, we take the \textsl{observed} noisy image $\mathbf{y}=\mathbf{x}+\mathbf{n}_{o}$ as the ``clean'' target, and take the \textsl{simulated} noisy image $\mathbf{z}=\mathbf{y}+\mathbf{n}_{s}$ as the input.
We do not regard the clean image $\mathbf{x}$ as target.
After training, the inference is performed on the target noisy image $\mathbf{y}=\mathbf{x}+\mathbf{n}_{o}$.
}
\vspace{-2.5mm}
\label{f1}
\end{figure*}
\noindent
\textbf{Internal and external image priors} are widely used for diverse image restoration tasks~\cite{mcwnnm,foe,pgpd}.\ Internal priors are directly learned from the input test image itself, such as the multi-scale priors~\cite{ksvd,STAR2020,cvid2020}, image-specific details~\cite{iraniinternal,Liang_2018_CVPR}, and non-local self similarity~\cite{mcwnnm,twsc,NLH2020}.\
The external ones are learned on external natural images~\cite{epll,pgpd,gid2018}.\
Internal priors are adaptive to the image contents, but somewhat affected by the corruptions~\cite{ksvd,iraniinternal}.\
By contrast, external priors are effective for restoring images with general contents, but may not be optimal for a specific test image~\cite{epll,pgpd,chen2017trainable}.\
\noindent
\textbf{Noise statistics} are of key importance for image denoising.\ AWGN is one typical noise model that has been widely studied.\ Recently, researchers have shifted more attention to the realistic noise produced by camera sensors~\cite{dnd2017,sidd2018}, which is usually modeled as a mixed Poisson and Gaussian distribution~\cite{poissongaussian}.\ The Poisson component mainly comes from the irregular photons hitting the sensor~\cite{liu2006noise}, while the Gaussian noise is mainly produced by dark current~\cite{crosschannel2016}.\ Though performing well on the synthetic noise they are trained with, supervised denoisers~\cite{dncnn,nlrn2018,cbdnet} still suffer from the \textsl{domain gap problem} when processing real-world noisy images.
\section{Theoretical Background of ``Noisy-As-Clean''}
\label{sec:nas}
Training a supervised network $f_{\theta}$ (parameterized by $\theta$) requires many pairs $\{(\mathbf{y}_{i},\mathbf{x}_{i})\}$ of noisy image $\mathbf{y}_{i}$ and clean image $\mathbf{x}_{i}$,
by minimizing an empirical loss function $\mathcal{L}$ as
\begin{equation}
\arg\min_{\theta}\sum_{i}\mathcal{L}(f_{\theta}(\mathbf{y}_{i}),\mathbf{x}_{i}).
\end{equation}
Assuming that the
probability of occurrence for pair $(\mathbf{y}_{i},\mathbf{x}_{i})$ is $p(\mathbf{y}_{i},\mathbf{x}_{i})$, then statistically we have
\begin{equation}
\label{eqn:e2}
\begin{split}
\theta^{*}
&=\arg\min_{\theta}\sum_{i}p(\mathbf{y}_{i},\mathbf{x}_{i})\mathcal{L}(f_{\theta}(\mathbf{y}_{i}),\mathbf{x}_{i})
\\
&=
\arg\min_{\theta}\mathbb{E}_{(\mathbf{y},\mathbf{x})}[\mathcal{L}(f_{\theta}(\mathbf{y}),\mathbf{x})],
\end{split}
\end{equation}
where $\mathbf{y}$ and $\mathbf{x}$ are random variables of noisy and clean images, respectively.\
The paired variables $(\mathbf{y},\mathbf{x})$ are dependent, and their relationship is $\mathbf{y}=\mathbf{x}+\mathbf{n}_{o}$, where $\mathbf{n}_{o}$ is the random variable of \textsl{observed} noise.\
By factorizing
$p(\mathbf{y}_{i},\mathbf{x}_{i})=p(\mathbf{x}_{i})p(\mathbf{y}_{i}|\mathbf{x}_{i})$,
Eqn.~(\ref{eqn:e2}) is equivalent to
\begin{equation}
\label{e3}
\begin{split}
\theta^{*}
&=\arg\min_{\theta}\sum_{i}p(\mathbf{x}_{i})p(\mathbf{y}_{i}|\mathbf{x}_{i})\mathcal{L}(f_{\theta}(\mathbf{y}_{i}),\mathbf{x}_{i})
\\
&=
\arg\min_{\theta}\mathbb{E}_{\mathbf{x}}[\mathbb{E}_{\mathbf{y}|\mathbf{x}}[\mathcal{L}(f_{\theta}(\mathbf{y}),\mathbf{x})]]
.
\end{split}
\end{equation}
This indicates that the network $f_{\theta}$ can minimize the loss function by solving Eqn.~(\ref{e3}) separately for each clean image.\
Different from the ``zero-mean'' assumption in~\cite{noise2noise,noise2void}, here we study a more practical assumption on noise statistics, i.e., \textsl{the expectation $\mathbb{E}[\mathbf{x}]$ and variance} $\text{Var}[\mathbf{x}]$ \textsl{of the signal intensity are much stronger than those of the noise, $\mathbb{E}[\mathbf{n}_{o}]$ and} $\text{Var}[\mathbf{n}_{o}]$ (negligible but not necessarily zero):
\begin{equation}
\label{e4}
\mathbb{E}[\mathbf{x}]
\gg
\mathbb{E}[\mathbf{n}_{o}]
,
\
\text{Var}[\mathbf{x}]
\gg
\text{Var}[\mathbf{n}_{o}]
.
\end{equation}
This is actually valid in real-world scenarios, since we can clearly observe the contents in most real photographs, \textsl{with little influence from the noise}.\
The noise therein is often modeled by zero-mean Gaussian or mixed Poisson and Gaussian (for realistic noise).\
Hence, the noisy image $\mathbf{y}$ should have a similar expectation to the clean image $\mathbf{x}$:
\begin{equation}
\label{e5}
\mathbb{E}[\mathbf{y}]
=
\mathbb{E}[\mathbf{x}+\mathbf{n}_{o}]
=
\mathbb{E}[\mathbf{x}]+\mathbb{E}[\mathbf{n}_{o}]
\approx
\mathbb{E}[\mathbf{x}].
\end{equation}
Now we add \textsl{simulated} noise $\mathbf{n}_{s}$ to the \textsl{observed} noisy image $\mathbf{y}$, and generate a new noisy image $\mathbf{z}=\mathbf{y}+\mathbf{n}_{s}$.\
We assume that $\mathbf{n}_{s}$ is statistically close to $\mathbf{n}_{o}$, i.e., $\mathbb{E}[\mathbf{n}_{s}]\approx\mathbb{E}[\mathbf{n}_{o}]$ and $\text{Var}[\mathbf{n}_{s}]\approx\text{Var}[\mathbf{n}_{o}]$.\
Then we have
\begin{equation}
\label{e6}
\mathbb{E}[\mathbf{z}]\gg\mathbb{E}[\mathbf{n}_{s}],
\
\text{Var}[\mathbf{z}]\gg\text{Var}[\mathbf{n}_{s}]
.
\end{equation}
Therefore, the \textsl{simulated} noisy image $\mathbf{z}$ has a similar expectation to the \textsl{observed} noisy image $\mathbf{y}$:
\begin{equation}
\label{e7}
\mathbb{E}[\mathbf{z}]
=
\mathbb{E}[\mathbf{y}+\mathbf{n}_{s}]
\approx
\mathbb{E}[\mathbf{y}]
.
\end{equation}
By the \textsl{Law of Total Expectation}~\cite{billingsley1995probability}, we have
\begin{equation}
\label{e8}
\mathbb{E}_{\mathbf{y}}[\mathbb{E}_{\mathbf{z}}[\mathbf{z}|\mathbf{y}]]
=
\mathbb{E}[\mathbf{z}]
\approx
\mathbb{E}[\mathbf{y}]
=
\mathbb{E}_{\mathbf{x}}[\mathbb{E}_{\mathbf{y}}[\mathbf{y}|\mathbf{x}]]
.
\end{equation}
\begin{figure*}[htbp]
\centering
\subfigure{
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{F-example/004_x.jpg}}
{\footnotesize (a) Clean Image}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{F-example/004_x+n1_34-35_0-8985.jpg}}
{\footnotesize (b) Corrupted $\mathbf{x}+\mathbf{n}_{o}$ (34.35dB/0.8985)}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{F-example/004_x+n1+n2_28-00_0-6589.jpg}}
{\footnotesize (c) Doubly Corrupted $\mathbf{x}+\mathbf{n}_{o}+\mathbf{n}_{s}$ (28.00dB/0.6589)}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{F-example/004_DnCNN+NAC_x+n1_pred_31-51_0-8225.jpg}}
{\footnotesize (d) Output of DnCNN+NAC in training (31.51dB/0.8225)
}
\end{minipage}
}\vspace{-3mm}
\subfigure{
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{F-example/004_DnCNN+NAC_x_pred_40-23_0-9663.jpg}}
{\footnotesize (e) Output of DnCNN+NAC in test (\textbf{40.23}dB/\textbf{0.9663})}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{F-example/004_DnCNN_5_36-36_0-9384.jpg}}
{\footnotesize (f) Denoised (b) by DnCNN~\cite{dncnn} (36.36dB/0.9384)}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{F-example/004_n2x3.jpg}}
{\footnotesize (g) Estimated $\mathbf{n}_{s}$ \\ (difference between (c) and (d)) }
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{F-example/004_n1x3.jpg}}
{\footnotesize (h) Estimated $\mathbf{n}_{o}$ \\ (difference between (b) and (e)) }
\end{minipage}
}\vspace{-1mm}
\caption{\textbf{An example to illustrate the pipeline of our NAC strategy based image denoising}.\
The image is ``\textsl{Test004}'' from the \textsl{BSD68} dataset.\
The observed noise $\mathbf{n}_{o}$ and simulated noise $\mathbf{n}_{s}$ are additive white Gaussian noise with $\sigma=5$.\
(a) The clean image $\mathbf{x}$.\
(b) The corrupted image $\mathbf{x}+\mathbf{n}_{o}$ (training target of our DnCNN+NAC).\
(c) The doubly corrupted image $\mathbf{x}+\mathbf{n}_{o}+\mathbf{n}_{s}$.\
(d) The output of training DnCNN in our NAC strategy, with the doubly corrupted image (c) as input and the corrupted image (b) as target.\
(e) The output of our image-specific DnCNN+NAC tested on (b).\
PSNR and SSIM results of corresponding images are provided for objective references.
}
\label{fig:example}
\end{figure*}
Since the loss function $\mathcal{L}$ (usually $\ell_{2}$) and the conditional probability density functions $p(\mathbf{y}|\mathbf{x})$ and $p(\mathbf{z}|\mathbf{y})$ are all \textsl{continuous everywhere}, the optimal network parameters $\theta^{*}$ of Eqn.~(\ref{e3}) change little with the addition of the negligible noise $\mathbf{n}_{o}$ or $\mathbf{n}_{s}$.\ With Eqns.~(\ref{e4})-(\ref{e8}), when the $\mathbf{x}$-conditioned expectation $\mathbb{E}_{\mathbf{y}|\mathbf{x}}[\mathcal{L}(f_{\theta}(\mathbf{y}),\mathbf{x})]$ is replaced with the $\mathbf{y}$-conditioned expectation $\mathbb{E}_{\mathbf{z}|\mathbf{y}}[\mathcal{L}(f_{\theta}(\mathbf{z}),\mathbf{y})]$, $f_{\theta}$ obtains similar optimal parameters $\theta^{*}$:
\begin{equation}
\begin{split}
\label{e9}
&\arg\min_{\theta}\mathbb{E}_{\mathbf{y}}[\mathbb{E}_{\mathbf{z}|\mathbf{y}}[\mathcal{L}(f_{\theta}(\mathbf{z}),\mathbf{y})]]
\\
\approx
&\arg\min_{\theta}\mathbb{E}_{\mathbf{x}}[\mathbb{E}_{\mathbf{y}|\mathbf{x}}[\mathcal{L}(f_{\theta}(\mathbf{y}),\mathbf{x})]]
=
\theta^{*}.
\end{split}
\end{equation}
The network $f_{\theta}$ minimizes the loss function $\mathcal{L}$ for each input image pair separately, which is equivalent to minimizing it over all finite pairs of images.\ Through simple manipulations, Eqn.~(\ref{e9}) is equivalent to
\begin{equation}
\label{e10}
\begin{split}
&\arg\min_{\theta}\sum_{i}p(\mathbf{y}_{i})p(\mathbf{z}_{i}|\mathbf{y}_{i})\mathcal{L}(f_{\theta}(\mathbf{z}_{i}),\mathbf{y}_{i})
\\
=&
\arg\min_{\theta}\mathbb{E}_{\mathbf{y}}[\mathbb{E}_{\mathbf{z}|\mathbf{y}}[\mathcal{L}(f_{\theta}(\mathbf{z}),\mathbf{y})]]
\approx
\theta^{*}
.
\end{split}
\end{equation}
By factorizing
$p(\mathbf{z}_{i},\mathbf{y}_{i})=p(\mathbf{y}_{i})p(\mathbf{z}_{i}|\mathbf{y}_{i})$, Eqn.~(\ref{e10}) is equivalent to
\begin{equation}
\begin{split}
\label{e11}
&\arg\min_{\theta}\mathbb{E}_{(\mathbf{z},\mathbf{y})}[\mathcal{L}(f_{\theta}(\mathbf{z}),\mathbf{y})]
\\
=&
\arg\min_{\theta}\sum_{i}p(\mathbf{z}_{i},\mathbf{y}_{i})\mathcal{L}(f_{\theta}(\mathbf{z}_{i}),\mathbf{y}_{i})
\approx
\theta^{*}
.
\end{split}
\end{equation}
\noindent
\textbf{A simple but useful observation is}: \textsl{as long as the noise is weak, the optimal parameters of a self-supervised network trained on noisy image pairs $\{(\mathbf{z}_{i},\mathbf{y}_{i})\}$ are very close to the optimal parameters of a supervised network trained on pairs of noisy and clean images $\{(\mathbf{y}_{i},\mathbf{x}_{i})\}$}.\
In Figure~\ref{fig:example}, we explain our NAC strategy and illustrate this observation through an example on the image ``Test004'' from the BSD68 dataset:
The clean image $\mathbf{x}$ in (a) is firstly corrupted by observed AWGN noise $\mathbf{n}_{o}$ with $\sigma=5$.\
Then we add simulated AWGN noise $\mathbf{n}_{s}$ also with $\sigma=5$ to the corrupted image $\mathbf{x}+\mathbf{n}_{o}$ in (b), and obtain a doubly corrupted image $\mathbf{x}+\mathbf{n}_{o}+\mathbf{n}_{s}$ in (c).
The DnCNN~\cite{dncnn} with our NAC strategy, named as DnCNN+NAC, is trained with the doubly corrupted image $\mathbf{x}+\mathbf{n}_{o}+\mathbf{n}_{s}$ in (c) as input and the corrupted image $\mathbf{x}+\mathbf{n}_{o}$ in (b) as target.\
The final training output is plotted in (d), with similar PSNR and SSIM~\cite{ssim} results as the corrupted image $\mathbf{x}+\mathbf{n}_{o}$ in (b).\
Then the DnCNN+NAC network learned on (b) and (c) is directly employed to perform inference on the corrupted image $\mathbf{x}+\mathbf{n}_{o}$ in (b), and produces the testing output in (e).\
When compared to DnCNN~\cite{dncnn}, our DnCNN+NAC achieves much higher PSNR and SSIM results on the corrupted image (b). %
The estimated simulated noise $\mathbf{n}_{s}$ and observed noise $\mathbf{n}_{o}$ in training and test stages are plotted in (g) and (h), respectively.
One can see that they exhibit visually similar noise statistics.
\noindent
\textbf{Consistency of noise statistics}.\
Since our contexts are real-world scenarios, the noise can be modeled by a mixed Poisson and Gaussian distribution~\cite{poissongaussian}.\ Fortunately, both distributions are closed under addition, i.e., the sum of two Poisson (or Gaussian) distributed variables is still Poisson (or Gaussian) distributed.\ Assume that the observed (simulated) noise $\mathbf{n}_{o}$ ($\mathbf{n}_{s}$) follows a mixed $\mathbf{x}$-dependent ($\mathbf{y}$-dependent) Poisson distribution parameterized by $\lambda_{o}$ ($\lambda_{s}$) and Gaussian distribution $\mathcal{N}(\bm{0}, \sigma_{o}^{2})$ ($\mathcal{N}(\bm{0}, \sigma_{s}^{2})$), i.e.,
\begin{equation}
\begin{split}
\mathbf{n}_{o}
&\sim
\mathbf{x}\odot \mathcal{P}(\lambda_{o})+\mathcal{N}(\bm{0}, \sigma_{o}^{2})
,
\\
\mathbf{n}_{s}
&\sim
\mathbf{y}\odot \mathcal{P}(\lambda_{s})+\mathcal{N}(\bm{0}, \sigma_{s}^{2})
\\
&\approx
\mathbf{x}\odot \mathcal{P}(\lambda_{s})+\mathcal{N}(\bm{0}, \sigma_{s}^{2})
,
\end{split}
\end{equation}
where $\mathbf{x}\odot \mathcal{P}(\lambda_{o})$ and $\mathbf{y}\odot \mathcal{P}(\lambda_{s})$ indicate that the noise $\mathbf{n}_{o}$ and $\mathbf{n}_{s}$ are element-wise dependent on $\mathbf{x}$ and $\mathbf{y}$, respectively.\ The ``$\approx$'' is valid if we assume that the observed noise $\mathbf{n}_{o}$ is ``weak'' compared to the signal $\mathbf{x}$.\
To this end, we have
\begin{equation}
\label{e13}
\mathbf{n}_{o}+\mathbf{n}_{s}
\sim
\mathbf{x}\odot \mathcal{P}(\lambda_{o}+\lambda_{s})+\mathcal{N}(\bm{0}, \sigma_{o}^{2}+\sigma_{s}^{2}+2\rho\sigma_{o}\sigma_{s})
,
\end{equation}
where $\rho$ is the correlation between $\mathbf{n}_{o}$ and $\mathbf{n}_{s}$ ($\rho=0$ if they are independent).\ This indicates that the summed noise variable $\mathbf{n}_{o}+\mathbf{n}_{s}$ still follows a mixed $\mathbf{x}$-dependent Poisson and Gaussian distribution, guaranteeing the consistency in noise statistics between the \textsl{observed} realistic noise and the \textsl{simulated} noise.\ As will be validated by the experiments (\S\ref{sec:exp}), this property makes our NAC strategy consistently effective on different noise removal tasks.
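The closure of the mixed Poisson-Gaussian model under addition (Eqn.~(\ref{e13}) with $\rho=0$) can also be checked empirically. The sketch below treats a single pixel with signal value $x$ and compares the first two moments of $\mathbf{n}_{o}+\mathbf{n}_{s}$ against a direct draw from the merged distribution; all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500_000
x = 1.0                    # one pixel's signal value
lam_o, lam_s = 2.0, 2.0    # Poisson rates of observed / simulated noise
sig_o, sig_s = 3.0, 3.0    # Gaussian stds of observed / simulated noise

# independent mixed Poisson-Gaussian samples for n_o and n_s
n_o = x * rng.poisson(lam_o, N) + rng.normal(0.0, sig_o, N)
n_s = x * rng.poisson(lam_s, N) + rng.normal(0.0, sig_s, N)
s = n_o + n_s

# direct draw from x * P(lam_o + lam_s) + N(0, sig_o^2 + sig_s^2)
ref = x * rng.poisson(lam_o + lam_s, N) + rng.normal(0.0, np.hypot(sig_o, sig_s), N)

# both should have mean ~ x * (lam_o + lam_s) and
# variance ~ x^2 * (lam_o + lam_s) + sig_o^2 + sig_s^2
mean_gap = abs(s.mean() - ref.mean())
var_gap = abs(s.var() - ref.var())
```

The matching first and second moments illustrate why the simulated noise keeps the doubly corrupted image inside the same noise family as the observed one.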
\begin{table*}[t]
\small
\centering
\caption{\textbf{Average PSNR (dB) and SSIM~\cite{ssim} results of different methods on~\textsl{Set12} dataset} corrupted by AWGN noise.\ The first, second, and third best results are highlighted in \textcolor{red}{\textbf{red}}, \textcolor{blue}{\textbf{blue}}, and \textbf{bold}, respectively.}
\begin{tabular}{r||c|c|c|c|c|c|c|c|c|c}
\Xhline{1pt}
\rowcolor[rgb]{.85, .9, 0.95}
\multicolumn{1}{c||}{Noise Level}& \multicolumn{2}{c|}{$\sigma=5$ }& \multicolumn{2}{c|}{$\sigma=10$} & \multicolumn{2}{c|}{$\sigma=15$} & \multicolumn{2}{c|}{$\sigma=20$} & \multicolumn{2}{c}{$\sigma=25$}
\\
\hline
\rowcolor[rgb]{.85, .9, 0.95}
\multicolumn{1}{c||}{Metric} & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$
\\
\hline
\hline
\textbf{BM3D}~\cite{bm3d} & 38.07 & 0.9580 & 34.40 & 0.9234 & 32.38 & 0.8957 & 31.00 & 0.8717 & 29.97 & 0.8503 \\
\textbf{DnCNN}~\cite{dncnn} & 38.76 & 0.9633 & 34.78 & 0.9270 & 32.86 & 0.9027 & 31.45 & 0.8799 & 30.43 & 0.8617 \\
\textbf{N2N}~\cite{noise2noise} & \textbf{39.72} & 0.9665 & 36.18 & 0.9446 & 33.99 & 0.9149 & \textbf{32.10} & 0.8788 & 30.72 & 0.8446 \\
\textbf{DIP}~\cite{dip} & 32.49 & 0.9344 & 31.49 & 0.9299 & 29.59 & 0.8636 & 27.67 & 0.8531 & 25.82 & 0.7723 \\
\textbf{N2V}~\cite{noise2void} & 27.06 & 0.8174 & 26.79 & 0.7859 & 26.12 & 0.7468 & 25.89 & 0.7405 & 25.01 & 0.6564 \\
\hline
\hline
\textbf{DnCNN+NAC} & \textcolor{red}{\textbf{43.17}} & \textcolor{blue}{\textbf{0.9817}} & \textcolor{red}{\textbf{37.16}} & \textbf{0.9336} & 33.64 & 0.8697 & 31.15 & 0.8024 & 29.22 & 0.7382 \\
\textbf{Blind DnCNN+NAC} & \textcolor{red}{\textbf{43.16}} & \textcolor{blue}{\textbf{0.9817}} & \textcolor{red}{\textbf{37.14}} & \textbf{0.9333} & 33.63 & 0.8693 & 31.14 & 0.8018 & 29.21 & 0.7376\\
\hline
\hline
\textbf{ResNet+NAC} &
\textcolor{blue}{\textbf{39.99}} & \textcolor{red}{\textbf{0.9820}} & \textbf{36.55} & \textcolor{red}{\textbf{0.9569}} & \textcolor{blue}{\textbf{34.24}} & \textcolor{red}{\textbf{0.9277}} & \textcolor{blue}{\textbf{32.46}} & \textcolor{blue}{\textbf{0.8961}} & \textcolor{blue}{\textbf{31.08}} & \textcolor{blue}{\textbf{0.8654}} \\
\textbf{Blind ResNet+NAC} & 38.48 & \textbf{0.9805} & \textcolor{blue}{\textbf{36.65}} & \textcolor{blue}{\textbf{0.9564}} & \textcolor{red}{\textbf{34.77}} & \textcolor{blue}{\textbf{0.9275}} & \textcolor{red}{\textbf{33.13}} & \textcolor{red}{\textbf{0.9024}} & \textcolor{red}{\textbf{31.78}} & \textcolor{red}{\textbf{0.8802}} \\
\hline
\end{tabular}
\vspace{-2mm}
\label{t-g12}
\end{table*}
\section{Learning Self-supervised Denoising Networks \\
by ``Noisy-As-Clean''}
\label{sec:self-sup}
Here, we propose to learn self-supervised denoising networks with our ``Noisy-As-Clean'' (NAC) strategy.\ %
We employ the DnCNN~\cite{dncnn} and the ResNet in DIP~\cite{dip} as our baselines, and call the resulting self-supervised networks DnCNN+NAC and ResNet+NAC, respectively.\
Note that we only need the \textsl{observed} noisy image $\mathbf{y}$ to generate noisy image pairs $\{(\mathbf{z},\mathbf{y})\}$ with \textsl{simulated} noise $\mathbf{n}_{s}$, as illustrated in Figure~\ref{f1}.
\noindent
\textbf{Training self-supervised networks by our NAC}.
For real-world images captured by camera sensors, one can hardly distinguish the realistic noise from the signal.\ The signal intensity $\mathbf{x}$ is usually much stronger than the noise intensity.\ That is, the expectation of the observed realistic noise $\mathbf{n}_{o}$ is usually much smaller than that of the latent clean image $\mathbf{x}$.\ If we train an image-specific network on the new noisy image $\mathbf{z}$ and regard the original noisy image $\mathbf{y}$ as the ground-truth image, the trained image-specific network jointly learns the image-specific prior and noise statistics.\ It has the capacity to remove the noise $\mathbf{n}_{s}$ from the new noisy image $\mathbf{z}$. Then, if we perform denoising on the original noisy image $\mathbf{y}$, the observed noise $\mathbf{n}_{o}$ can be well removed.\ Note that we \textsl{do not} use the clean image $\mathbf{x}$ as ``ground-truth'' in training the DnCNN+NAC and ResNet+NAC networks.\
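The claim that the NAC objective ($\mathbf{z}\rightarrow\mathbf{y}$) and the supervised objective ($\mathbf{y}\rightarrow\mathbf{x}$) yield nearly the same optimum under weak noise can be illustrated with a toy one-parameter ``network'' $f_{w}(\mathbf{z})=w\,\mathbf{z}$, for which both optima have closed forms. This is a deliberately simplified stand-in for the actual DnCNN, and the signal and noise distributions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(50.0, 200.0, size=100_000)   # latent clean signal (strong)
y = x + rng.normal(0.0, 5.0, size=x.shape)   # observed noisy image (weak n_o)
z = y + rng.normal(0.0, 5.0, size=x.shape)   # doubly corrupted input (weak n_s)

# Train f_w(z) = w * z with the l2 loss on the NAC pair (z, y)
w, lr = 1.0, 1e-5
for _ in range(200):
    grad = 2.0 * np.mean((w * z - y) * z)    # d/dw of mean((w*z - y)^2)
    w -= lr * grad

w_nac = np.mean(y * z) / np.mean(z * z)      # closed-form optimum of z -> y
w_sup = np.mean(x * y) / np.mean(y * y)      # closed-form optimum of y -> x

gap = abs(w_nac - w_sup)                     # tiny when the noise is weak
```

Gradient descent converges to the NAC optimum, and the gap between the self-supervised and supervised optima is negligible precisely because $\text{Var}[\mathbf{x}]$ dominates the noise variances.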
\noindent
\textbf{Training blind denoising networks}.\ Most existing supervised denoising networks train a specific model to process a fixed noise pattern~\cite{crosschannel2016,nlrn2018,upi}.\ To tackle unknown noise, one feasible solution for these networks is to assume the noise is AWGN and estimate its standard deviation.\ The corresponding noise is then removed by the network trained with the estimated level.\ But this strategy largely degrades the denoising performance when the noise deviation is not estimated accurately.\ Besides, this solution can hardly deal with the realistic noise captured in real photographs, which is usually not AWGN.\ To be effective on removing realistic noise, the self-supervised networks trained by our NAC can blindly remove unknown noise from real photographs.\ Inspired by~\cite{dncnn,cbdnet}, we propose to train blind versions of the DnCNN+NAC and ResNet+NAC networks by using AWGN within a range of levels (e.g., $[0, 55]$) for removing unknown AWGN noise.\ We also train a blind ResNet+NAC with mixed AWGN and Poisson noise (both within a range of intensities) for removing realistic noise.\ More details will be explained in \S\ref{sec:blind}.
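For the blind variants, the simulated noise level is not fixed but drawn from a range during training. A sketch of this sampling step, assuming a uniform draw over $[0, 55]$ (the exact sampling scheme is our assumption; only the range of levels is stated above):

```python
import numpy as np

def sample_blind_awgn(shape, sigma_range=(0.0, 55.0), rng=None):
    # Draw simulated AWGN whose level is itself sampled from a range,
    # so the trained network does not assume one fixed noise level.
    rng = rng or np.random.default_rng()
    sigma = rng.uniform(*sigma_range)
    return rng.normal(0.0, sigma, size=shape), sigma

rng = np.random.default_rng(0)
n_s, sigma = sample_blind_awgn((8, 8), rng=rng)
```

Repeating this draw every iteration exposes the network to many noise levels, which is what makes the resulting denoiser blind to the test-time level.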
\begin{table*}[htp]
\vspace{0mm}
\small
\centering
\caption{\textbf{Average PSNR (dB) and SSIM~\cite{ssim} results of different methods on \textsl{BSD68} dataset} corrupted by AWGN noise.\ The first, second, and third best results are highlighted in \textcolor{red}{\textbf{red}}, \textcolor{blue}{\textbf{blue}}, and \textbf{bold}, respectively.}
\begin{tabular}{r||c|c|c|c|c|c|c|c|c|c}
\Xhline{1pt}
\rowcolor[rgb]{ .85, .9, 0.95}
\multicolumn{1}{c||}{Noise Level} & \multicolumn{2}{c|}{$\sigma=5$}& \multicolumn{2}{c|}{$\sigma=10$} & \multicolumn{2}{c|}{$\sigma=15$} & \multicolumn{2}{c|}{$\sigma=20$} & \multicolumn{2}{c}{$\sigma=25$}\\
\hline
\rowcolor[rgb]{ .85, .9, 0.95}
\multicolumn{1}{c||}{Metric} & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ & PSNR$\uparrow$ & SSIM$\uparrow$ \\
\hline
\hline
\textbf{BM3D}~\cite{bm3d} & 37.59 & 0.9640 & 33.32 & 0.9163 & 31.07 & 0.8720 & 29.62 & 0.8342 & 28.57 & 0.8017 \\
\textbf{DnCNN}~\cite{dncnn} & 38.07 & \textcolor{blue}{\textbf{0.9695}} & 33.88 & \textcolor{blue}{\textbf{0.9270}} & 31.73 & 0.8706 & \textbf{30.27} & \textbf{0.8563} & \textcolor{blue}{\textbf{29.23}} & \textcolor{blue}{\textbf{0.8278}} \\
\textbf{N2N}~\cite{noise2noise} & \textbf{38.58} & 0.9627 & 34.07 & 0.9200 & \textbf{31.81} & \textbf{0.8770} & 30.14 & 0.8550 & 28.67 & 0.8123 \\
\textbf{DIP}~\cite{dip} & 29.74 & 0.8435 & 28.16 & 0.8310 & 27.07 & 0.7867 & 25.80 & 0.7205 & 24.63 & 0.6680 \\
\textbf{N2V}~\cite{noise2void} & 26.70 & 0.7915 & 26.39 & 0.7621 & 25.77 & 0.7126 & 25.41 & 0.6678 & 24.83 & 0.6305 \\
\hline
\hline
\textbf{DnCNN+NAC} & \textcolor{red}{\textbf{40.21}} & \textbf{0.9674} & \textbf{34.21} & 0.8913 & 30.72 & 0.8044 & 28.25 & 0.7230 & 26.34 & 0.6515 \\
\textbf{Blind DnCNN+NAC} & \textcolor{red}{\textbf{40.20}} & \textbf{0.9674} & \textbf{34.21} & 0.8911 & 30.71 & 0.8041 & 28.24 & 0.7227 & 26.33 & 0.6511\\
\hline
\hline
\textbf{ResNet+NAC} & \textcolor{blue}{\textbf{39.00}} & \textcolor{red}{\textbf{0.9707}} & \textcolor{red}{\textbf{34.60}} & \textcolor{red}{\textbf{0.9324}} & \textcolor{red}{\textbf{32.13}} & \textcolor{red}{\textbf{0.8942}} & \textcolor{blue}{\textbf{30.47}} & \textcolor{red}{\textbf{0.8636}} & \textbf{28.96} & \textbf{0.8185}\\
\textbf{Blind ResNet+NAC} & 38.26 &0.9605 & \textcolor{blue}{\textbf{34.26}} & \textbf{0.9266} & \textcolor{blue}{\textbf{32.06}} & \textcolor{blue}{\textbf{0.8919}} & \textcolor{red}{\textbf{30.50}} & \textcolor{blue}{\textbf{0.8609}} & \textcolor{red}{\textbf{29.33}} & \textcolor{red}{\textbf{0.8327}} \\
\hline
\end{tabular}
\vspace{-2.5mm}
\label{t-g68}
\end{table*}
\noindent
\textbf{Testing} is performed by directly regarding an \textsl{observed} noisy image $\mathbf{y}=\mathbf{x}+\mathbf{n}_{o}$ as input.\
We only test the image $\mathbf{y}$ once.\
The denoised image can be represented as $\hat{\mathbf{y}}=f_{\theta^{*}}(\mathbf{y})$, from which the objective metrics, e.g., PSNR and SSIM~\cite{ssim}, can be computed against the clean image $\mathbf{x}$.\
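For reference, the PSNR used throughout the evaluation is the standard definition computed against the clean image $\mathbf{x}$; a minimal sketch, assuming a peak value of 255 for 8-bit images:

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    # Peak signal-to-noise ratio in dB between the clean image x
    # and a denoised output f(y).
    mse = np.mean((np.asarray(clean, float) - np.asarray(denoised, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

clean = np.full((4, 4), 100.0)
noisy = clean + 1.0          # constant error of 1 gives MSE = 1
value = psnr(clean, noisy)   # 20 * log10(255), roughly 48.13 dB
```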
\noindent
\textbf{Implementation details}.\ We employ the DnCNN~\cite{dncnn} and ResNet in DIP~\cite{dip} as the backbones, and turn them into self-supervised networks by our NAC strategy, which are named as DnCNN+NAC and ResNet+NAC, respectively.
The DnCNN contains 17 layers of convolution, Batch Normalization (BN)~\cite{bn2015}, and Rectified Linear Unit (ReLU)~\cite{relu2010} activations.\
To accommodate DnCNN with our NAC strategy, we set the output of DnCNN+NAC as the denoised image, not the residual noise in DnCNN~\cite{dncnn}.\
We observe no difference in PSNR, SSIM~\cite{ssim}, or visual quality between these two types of outputs in our experiments.
As with DnCNN, the parameters of DnCNN+NAC are initialized from a pretrained ResNet.
As used in~\cite{dip}, the ResNet in our ResNet+NAC includes $10$ residual blocks, each containing two convolutional layers followed by a BN~\cite{bn2015} and a ReLU~\cite{relu2010} after the first BN.\
The parameters are randomly initialized without being pretrained.\
For both baselines, the optimizer is Adam~\cite{adam} with default parameters.\
The learning rate is fixed at $0.001$ in all experiments.\
We use the $\ell_{2}$ loss function.\
For each test image, we train the DnCNN+NAC for only 100 epochs, while the original DnCNN is trained for 180 epochs.\
The ResNet+NAC is trained for $1000$ epochs for each test image, the same as in DIP~\cite{dip}.
As suggested by DnCNN~\cite{dncnn} and DIP~\cite{dip}, we employ $4$ rotations \{0\degree, 90\degree, 180\degree, 270\degree\} combined with 2 mirror (vertical and horizontal) reflections, resulting in a total of $8$ transformations for data augmentation.\
We implement the DnCNN+NAC and ResNet+NAC networks in PyTorch.
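The NAC training loop itself is simple. The following deliberately minimal NumPy sketch replaces the CNN with a single learnable gain $w$ (our simplification, not the paper's architecture) to show the core mechanism: training $f_w$ to map $\mathbf{z}=\mathbf{y}+\mathbf{n}_{s}$ back to $\mathbf{y}$ drives $w$ toward a shrinkage value below 1, i.e., the learned map suppresses noise even though no clean image is ever seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# observed noisy image y = x + n_o (weak AWGN, as NAC assumes)
x = rng.uniform(0.0, 1.0, size=(64, 64))       # stand-in "clean" image
sigma = 0.05
y = x + sigma * rng.standard_normal(x.shape)   # observed noisy image

# the simplest possible "network": one learnable gain w, f_w(z) = w * z
w, lr = 1.0, 0.05
for _ in range(300):
    n_s = sigma * rng.standard_normal(y.shape)  # simulated noise
    z = y + n_s                                 # doubly noisy input
    grad = 2.0 * np.mean((w * z - y) * z)       # d/dw of the l2 loss
    w -= lr * grad

# w settles near E[zy]/E[z^2] = E[y^2]/(E[y^2] + sigma^2) < 1:
# a Wiener-like shrinkage that suppresses the noise
print(0.0 < w < 1.0)  # True
```

In the paper the same principle is applied with a full CNN trained by Adam in PyTorch; only the input/target pair $(\mathbf{z},\mathbf{y})$ and the $\ell_2$ loss carry over from this sketch.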
\section{Experiments}
\label{sec:exp}
In this section, we evaluate the performance of our ``Noisy-As-Clean'' (NAC) networks on image denoising.\ In all experiments, we train a denoising network using only the noisy test image $\mathbf{y}$ as the target, and using the \textsl{simulated} noisy image $\mathbf{z}$ as the input.\
For all comparison methods, the source codes or trained models are downloaded from the corresponding authors' websites.\ We use the default parameter settings, unless otherwise specified.\
The PSNR, SSIM~\cite{ssim}, and visual quality of different methods are used for evaluation.\
We first test with synthetic noise such as additive white Gaussian noise (AWGN) in \S\ref{sec:syn}, continue to perform blind image denoising in \S\ref{sec:blind}, and finally tackle the realistic noise in \S\ref{sec:real}.\
In \S\ref{sec:dis}, we conduct comprehensive ablation studies to gain deeper insights into our NAC strategy.
\begin{table*}[t]
\centering
\caption{\textbf{Average PSNR (dB) and SSIM~\cite{ssim} of different methods} on the \textsl{CC} dataset~\cite{crosschannel2016} and the \textsl{DND} dataset~\cite{dnd2017}.\ ``NA'' means ``Not Available'' due to unavailable code (GCBD on \textsl{CC}~\cite{crosschannel2016}) or difficult experiments (DIP on \textsl{DND}~\cite{dnd2017}).\
The first, second, and third best results are highlighted in \textcolor{red}{\textbf{red}}, \textcolor{blue}{\textbf{blue}}, and \textbf{bold}, respectively.
}
\renewcommand\arraystretch{1}
\begin{tabular}{r||c|cc|cc|cc|cc}
\Xhline{1pt}
\rowcolor[rgb]{ .85, .9, 0.95}
&
\multicolumn{1}{c|}{Type}&
\multicolumn{2}{c|}{Traditional Methods}&
\multicolumn{2}{c|}{Supervised Networks}& \multicolumn{2}{c|}{Unsupervised Networks}& \multicolumn{2}{c}{Self-supervised Networks}
\\
\cline{2-10}
\rowcolor[rgb]{ .85, .9, 0.95}
\multicolumn{1}{c||}{\multirow{-2}{*}{Dataset}}
&
Method
& \textbf{CBM3D}~\cite{cbm3d}&\textbf{NI}~\cite{neatimage}&\textbf{DnCNN+}\cite{dncnn}&\textbf{CBDNet}~\cite{cbdnet}&\textbf{GCBD}~\cite{gcbd}&\textbf{N2N}~\cite{noise2noise}&\textbf{DIP}~\cite{dip}&\textbf{Blind ResNet+NAC}
\\
\hline
\multicolumn{1}{r||}{\multirow{2}{*}{\textsl{CC}~\cite{crosschannel2016}}}
&PSNR$\uparrow$ & 35.19 & 35.33 & 35.40 & \textcolor{blue}{\textbf{36.44}} & NA & 35.32 & \textbf{35.69} & \textcolor{red}{\textbf{36.59}}
\\
&SSIM$\uparrow$ & 0.9063 & 0.9212 & 0.9115 & \textcolor{blue}{\textbf{0.9460}} & NA & 0.9160 & \textbf{0.9259} & \textcolor{red}{\textbf{0.9502}}
\\
\hline
\multicolumn{1}{c||}{\multirow{2}{*}{\textsl{DND}~\cite{dnd2017}}}
&PSNR$\uparrow$ & 34.51 & 35.11 & \textcolor{blue}{\textbf{37.90}} & \textcolor{red}{\textbf{38.06}} & 35.58 & 33.10 & NA & \textbf{36.20}
\\
&SSIM$\uparrow$ & 0.8507 & 0.8778 & \textcolor{red}{\textbf{0.9430}} & \textcolor{blue}{\textbf{0.9421}} & 0.9217 & 0.8110 & NA & \textbf{0.9252}
\\
\hline
\end{tabular}
\vspace{-2.5mm}
\label{t-cc+dnd}
\end{table*}
\begin{figure*}[htbp]
\centering
\subfigure{
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/15/br_noisy_G15_02_24-59_0-4456.png}}
{\footnotesize (a) Noisy (24.59dB/0.4456)}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/15/br_BM3D_G15_02_34-93_0-8907.png}}
{\footnotesize (b) BM3D~\cite{bm3d} (34.93dB/0.8907)}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/15/br_PGPD_G15_02_34-83_0-8850.png}}
{\footnotesize (c) PGPD~\cite{pgpd} (34.83dB/0.8850)}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/15/br_DnCNN_G15_02_34-98_0-8846.png}}
{\footnotesize (d) DnCNN~\cite{dncnn} (34.98dB/0.8846)}
\end{minipage}
}\vspace{-3mm}
\subfigure{
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/15/br_N2N_G15_02_35-74_0-9019.png}}
{\footnotesize (e) N2N~\cite{noise2noise} (35.74dB/0.9019)}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/15/br_DIP_G15_02_30-38_0-7145.png}}
{\footnotesize (f) DIP~\cite{dip} (30.38dB/0.7145)}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/15/br_NAC_G15_02_36-46_0-9103.png}}
{\footnotesize (g) ResNet+NAC (\textbf{35.89}dB/\textbf{0.9101}) }
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/15/br_clean_02.png}}
{\footnotesize (h) Ground Truth}
\end{minipage}
}\vspace{-1mm}
\caption{\textbf{Denoised images and PSNR/SSIM results of ``\textsl{House}'' in \textsl{Set12} by different methods}.\ The images are corrupted by AWGN noise with $\sigma=15$.\ The best results on PSNR and SSIM are highlighted in \textbf{bold}.}
\label{fig:awgn15}
\end{figure*}
\subsection{Synthetic Noise Removal With Known Noise}
\label{sec:syn}
We evaluate the DnCNN+NAC and ResNet+NAC networks on images corrupted by synthetic AWGN noise.\
More results on signal dependent Poisson noise and mixed Poisson-AWGN noise are provided in the \textsl{Supplementary File}.\
\noindent
\textbf{Training self-supervised networks}.\ Here, we train an image-specific denoising network using the \textsl{observed} noisy test image $\mathbf{y}$ as the target, and the \textsl{simulated} noisy image $\mathbf{z}$ as the input.\ Each \textsl{observed} noisy image $\mathbf{y}=\mathbf{x}+\mathbf{n}_{o}$ is generated by adding the \textsl{observed} noise $\mathbf{n}_{o}$ to the clean image $\mathbf{x}$.\ The \textsl{simulated} noisy image $\mathbf{z}=\mathbf{y}+\mathbf{n}_{s}$ is generated by adding \textsl{simulated} noise $\mathbf{n}_{s}$ to \textsl{observed} noisy image $\mathbf{y}$.\
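The construction of a NAC training pair in this synthetic setting can be sketched as follows (a NumPy illustration under the AWGN assumption; function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_nac_pair(x, sigma_o, sigma_s):
    """Build one NAC training pair from a clean image x (synthetic setting).

    y = x + n_o : observed noisy image  -> used as the TARGET
    z = y + n_s : simulated noisy image -> used as the INPUT
    """
    y = x + sigma_o * rng.standard_normal(x.shape)  # add observed noise n_o
    z = y + sigma_s * rng.standard_normal(x.shape)  # add simulated noise n_s
    return z, y

x = np.zeros((16, 16))                    # flat test image
z, y = make_nac_pair(x, sigma_o=15/255.0, sigma_s=15/255.0)
print(np.var(z) > np.var(y))  # True: z carries roughly twice the noise power
```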
\noindent
\textbf{Comparison methods}.\
We compare DnCNN+NAC and ResNet+NAC networks with state-of-the-art image denoising methods~\cite{bm3d,dncnn,noise2noise}.\ On AWGN noise, we compare with BM3D~\cite{bm3d}, DnCNN~\cite{dncnn}, Noise2Noise (N2N)~\cite{noise2noise}, Deep Image Prior (DIP)~\cite{dip}, and Noise2Void (N2V)~\cite{noise2void}.\
\noindent
\textbf{Test datasets}.\ We evaluate the comparison methods on the \textsl{Set12} and \textsl{BSD68} datasets, which are widely tested by supervised denoising networks~\cite{dncnn,nlrn2018} and previous methods~\cite{bm3d,pgpd}.\ The \textsl{Set12} dataset contains 12 images of sizes $512\times512$ or $256\times256$, while the \textsl{BSD68} dataset contains 68 images of different sizes.
\noindent
\textbf{Results on AWGN noise} with noise levels (standard deviation, or std) of $\sigma\in\{5, 10, 15, 20, 25\}$ are provided here.\
The \textsl{observed} noise $\mathbf{n}_{o}$ is AWGN with std $\sigma$, and the \textsl{simulated} noise $\mathbf{n}_{s}$ has the same $\sigma$ as $\mathbf{n}_{o}$.\
The comparison results are listed in Tables~\ref{t-g12} and~\ref{t-g68}.\
It can be seen that DnCNN+NAC achieves better PSNR and SSIM results than the original DnCNN when $\sigma=5,10$.
Note that DnCNN is a supervised network trained offline on the \textsl{BSD400} dataset, while the variant DnCNN+NAC network is trained online for each corrupted image.\
Besides, the blind version of DnCNN+NAC suffers only a negligible performance drop compared to DnCNN+NAC, which is consistent with~\cite{dncnn}.\
On the other hand, the ResNet+NAC networks achieve comparable or better PSNR and SSIM~\cite{ssim} performance than BM3D~\cite{bm3d} and DnCNN~\cite{dncnn}, especially when the noise is weak ($\sigma=5,10$).\
Besides, our ResNet+NAC networks outperform the other unsupervised and self-supervised networks such as N2N~\cite{noise2noise}, DIP~\cite{dip}, and N2V~\cite{noise2void} by a large margin on PSNR and SSIM~\cite{ssim}.\
In Figures~\ref{fig:awgn15} and~\ref{fig:awgn5}, we provide the visual comparisons of the denoised images by the competing methods.\
One can see that the ResNet+NAC networks produce better image quality and higher PSNR/SSIM results than the comparison methods.
\begin{figure*}[htbp]
\centering
\subfigure{
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/5/br_Noisy_g5_test003.png}}
{\footnotesize (a) Noisy (34.15dB/0.8416)}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/5/br_test003_BM3D_g5_38-20_0-9569.png}}
{\footnotesize (b) BM3D~\cite{bm3d} (38.20dB/0.9569)}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/5/br_test003_PGPD_g5_38-02_0-9524.png}}
{\footnotesize (c) PGPD~\cite{pgpd} (38.02dB/0.9524)}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/5/br_DnCNN_g5_test003.png}}
{\footnotesize (d) DnCNN~\cite{dncnn} (38.64dB/0.9559)}
\end{minipage}
}\vspace{-3mm}
\subfigure{
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/5/br_N2N_g5_test003.png}}
{\footnotesize (e) N2N~\cite{noise2noise} (39.63dB/0.9682)}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/5/br_DIP_g5_test003.png}}
{\footnotesize (f) DIP~\cite{dip} (27.22dB/0.8794)}
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/5/br_NAC_g5_test003.png}}
{\footnotesize (g) ResNet+NAC (\textbf{39.89}dB/\textbf{0.9693}) }
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{Supp_AWGN/5/br_test003.png}}
{\footnotesize (h) Ground Truth}
\end{minipage}
}\vspace{-1mm}
\caption{\textbf{Denoised images and PSNR/SSIM results of ``\textsl{Test003}'' in \textsl{BSD68} by different methods}.\ The images are corrupted by AWGN noise with $\sigma=5$.\ The best results on PSNR and SSIM are highlighted in \textbf{bold}.
}
\label{fig:awgn5}
\end{figure*}
\subsection{Synthetic Noise Removal With Unknown Noise}
\label{sec:blind}
To deal with unknown noise, we propose to train blind versions of the DnCNN~\cite{dncnn} and ResNet in~\cite{dip} by our NAC strategy.\
Here, we test the Blind DnCNN+NAC and Blind ResNet+NAC networks on AWGN noise with unknown noise deviation.\ We use the same training strategy, comparison methods, and test datasets as in \S\ref{sec:syn}.
\noindent
\textbf{Training blind networks}.\ We train the Blind DnCNN+NAC and Blind ResNet+NAC networks on the corrupted test image degraded \textsl{again} by AWGN noise with unknown noise levels (deviations).\
The noise levels are randomly sampled from a Gaussian distribution restricted to $[0, 55]$.\ We also test noise levels sampled from a uniform distribution and obtain similar results.\ We repeat the training of the DnCNN+NAC and ResNet+NAC networks on the test image with different deviations.\
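The blind noise-level sampling described above can be sketched as follows; the mean and std of the Gaussian are our illustrative choices, since the text only specifies the range $[0, 55]$:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_blind_sigma(mean=25.0, std=15.0, lo=0.0, hi=55.0):
    """Draw one AWGN level for blind NAC training: Gaussian-distributed
    and restricted to [0, 55] (mean/std are our illustrative choices)."""
    return float(np.clip(rng.normal(mean, std), lo, hi))

levels = [sample_blind_sigma() for _ in range(1000)]
print(all(0.0 <= s <= 55.0 for s in levels))  # True
```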
\noindent
\textbf{Results on blind denoising}.\ For the same test image, we add AWGN noise whose deviation is also in $\{5, 10, 15, 20, 25\}$.\ The blindly trained DnCNN+NAC and ResNet+NAC networks are directly utilized to denoise the test image without estimating its deviation.\ The results are also listed in Tables~\ref{t-g12} and~\ref{t-g68}.\ We observe that the Blind ResNet+NAC networks trained on AWGN noise with unknown levels can achieve even better PSNR and SSIM~\cite{ssim} results than the ResNet+NAC networks trained on specific noise levels.\ Note that on \textsl{BSD68}, the ResNet+NAC networks achieve higher PSNR and SSIM results than DnCNN~\cite{dncnn}.\ This demonstrates the effectiveness of our ResNet+NAC networks on blind image denoising.\ With this success on blind denoising, we next turn to real-world image denoising, in which the noise is also unknown and very complex.
\subsection{Practice on Real Photographs}
\label{sec:real}
With the promising performance on blind image denoising, we now tackle realistic noise for practical applications.\ The \textsl{observed} realistic noise $\mathbf{n}_{o}$ can be roughly modeled as mixed Poisson and AWGN noise~\cite{poissongaussian,cbdnet}.\ Hence, for each \textsl{observed} noisy image $\mathbf{y}$, we generate the \textsl{simulated} noise $\mathbf{n}_{s}$ by sampling the $\mathbf{y}$-dependent Poisson part and the independent AWGN noise.\
\begin{figure*}
\centering
\subfigure{
\begin{minipage}[t]{0.32\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{SuppCC/br_5dmark3_iso3200_1.png}}
{\footnotesize (a) Ground Truth}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{SuppCC/br_5dmark3_iso3200_1_36-25_0-9345.png}}
{\footnotesize (b) Noisy (36.25dB/0.9345)}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{SuppCC/br_CBM3D_CC_5dmark3_iso3200_1_36-61_0-9669.png}}
{\footnotesize (c) CBM3D~\cite{cbm3d} (36.61dB/0.9669)}
\end{minipage}
}\vspace{-3mm}
\subfigure{
\begin{minipage}[t]{0.32\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{SuppCC/br_NI_CC_5dmark3_iso3200_1_37-58_0-9600}}
{\footnotesize (d) NI~\cite{neatimage} (37.58dB/0.9600)}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{SuppCC/br_DnCNN_CC_5dmark3_iso3200_1_37-16_0-9389.png}}
{\footnotesize (e) DnCNN+~\cite{dncnn} (37.16dB/0.9389)}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{SuppCC/br_CBDNet_CC_5dmark3_iso3200_1_36-58_0-9613.png}}
{\footnotesize (f) CBDNet~\cite{cbdnet} (36.58dB/0.9613) }
\end{minipage}
}\vspace{-3mm}
\subfigure{
\begin{minipage}[t]{0.32\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{SuppCC/br_N2N_CC_5dmark3_iso3200_1_36-99_0-9604.png}}
{\footnotesize (g) N2N~\cite{noise2noise} (36.99dB/0.9604)}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{SuppCC/br_DIP_CC_5dmark3_iso3200_1_35-99_0-9529.png}}
{\footnotesize (h) DIP~\cite{dip} (35.99dB/0.9529)}
\end{minipage}
\begin{minipage}[t]{0.32\textwidth}
\centering
\raisebox{-0.15cm}{\includegraphics[width=1\textwidth]{SuppCC/br_NAC_CC_5dmark3_iso3200_1_37-88_0-9729.png}}
{\footnotesize (i) Blind ResNet+NAC (\textbf{37.88}dB/\textbf{0.9729})}
\end{minipage}
}\vspace{-1mm}
\caption{\textbf{Denoised images and PSNR/SSIM results of ``\textsl{5dmark3-iso3200-1}'' in the \textsl{Cross-Channel} dataset~\cite{crosschannel2016} by different methods}.\ The best results are highlighted in \textbf{bold}.}
\label{fig:cc}
\end{figure*}
\noindent
\textbf{Training blind ResNet+NAC networks} is also performed for each test image, i.e., the \textsl{observed} noisy image $\mathbf{y}$.\ In real-world scenarios, each \textsl{observed} noisy image $\mathbf{y}$ is corrupted without knowing the specific noise statistics of the \textsl{observed} noise $\mathbf{n}_{o}$.\ Therefore, the \textsl{simulated} noise $\mathbf{n}_{s}$ is directly estimated on $\mathbf{y}$ as mixed $\mathbf{y}$-dependent Poisson and AWGN noise.\ For each transformed image in data augmentation, the Poisson noise is randomly sampled with the parameter $\lambda$ in $0<\lambda\le25$, and the AWGN noise is randomly sampled with the noise level $\sigma$ in $0<\sigma\le25$.
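The simulated-noise generation for real photographs can be sketched as below, assuming one common scaling convention for the signal-dependent Poisson part (the exact parametrization is not spelled out in the text):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_real_noise(y, lam, sigma):
    """Add simulated noise n_s to a real photograph y (8-bit, in [0, 255]):
    a y-dependent Poisson component plus independent AWGN."""
    scale = lam / 255.0
    poisson_part = rng.poisson(np.maximum(y, 0.0) * scale) / scale - y
    gauss_part = sigma * rng.standard_normal(y.shape)
    return y + poisson_part + gauss_part

y = np.full((32, 32), 128.0)           # stand-in noisy photograph
lam = float(rng.uniform(1.0, 25.0))    # Poisson parameter, 0 < lam <= 25
sigma = float(rng.uniform(1.0, 25.0))  # AWGN level, 0 < sigma <= 25
z = simulate_real_noise(y, lam, sigma)
print(z.shape == y.shape)  # True
```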
\noindent
\textbf{Comparison methods}.\
We compare with state-of-the-art methods on real-world image denoising, including CBM3D~\cite{cbm3d}, the commercial software Neat Image~\cite{neatimage}, two supervised networks DnCNN+~\cite{dncnn} and CBDNet~\cite{cbdnet}, two unsupervised networks GCBD~\cite{gcbd} and Noise2Noise~\cite{noise2noise}, and the self-supervised network DIP~\cite{dip}.\ Note that DnCNN+~\cite{dncnn} and CBDNet~\cite{cbdnet} are two state-of-the-art supervised networks for real-world image denoising, and DnCNN+ is an improved extension of DnCNN~\cite{dncnn} with better performance (the authors of DnCNN+ provided us with the models/results of DnCNN+).
\begin{figure*}[ht!]
\centering
\begin{minipage}[t]{0.245\textwidth}
\includegraphics[width=1\textwidth]{F-dnd/br_Noisy_0017_3.png}
\centering{\scriptsize (a) \scriptsize Noisy: 31.46dB/0.9370}
\includegraphics[width=1\textwidth]{F-dnd/br_CBDNet_0017_3.png}
\centering{\scriptsize (e) \scriptsize CBDNet~\cite{cbdnet}: 39.34dB/0.9905}
\end{minipage}
\begin{minipage}[t]{0.245\textwidth}
\includegraphics[width=1\textwidth]{F-dnd/br_CBM3D_0017_3.png}
\centering{\scriptsize (b) \scriptsize CBM3D~\cite{cbm3d}: 36.26dB/0.9811}
\includegraphics[width=1\textwidth]{F-dnd/br_GCBD_0017_3.png}
\centering{\scriptsize (f) \scriptsize GCBD~\cite{gcbd}: 37.52dB/0.9765}
\end{minipage}
\begin{minipage}[t]{0.245\textwidth}
\includegraphics[width=1\textwidth]{F-dnd/br_NI_0017_3.png}
\centering{\scriptsize (c) \scriptsize NI~\cite{neatimage}: 37.52dB/0.9868}
\includegraphics[width=1\textwidth]{F-dnd/br_N2N_0017_3.png}
\centering{\scriptsize (g) \scriptsize N2N~\cite{noise2noise}: 34.95dB/0.9621}
\end{minipage}
\begin{minipage}[t]{0.245\textwidth}
\includegraphics[width=1\textwidth]{F-dnd/br_DnCNN+_0017_3.png}
\centering{\scriptsize (d) \scriptsize DnCNN+~\cite{dncnn}: 38.25dB/0.9888}
\includegraphics[width=1\textwidth]{F-dnd/br_NAC_0017_3.png}
\centering{\scriptsize (h) \scriptsize Blind ResNet+NAC: 38.34dB/0.9887}
\end{minipage}
\vspace{0mm}
\caption{\textbf{Denoised images and PSNR(dB)/SSIM by comparison methods} on ``\textsl{0017\_3}'' in \textsl{DND}~\cite{dnd2017}.\ The ``ground-truth'' image is not released, but PSNR(dB)/SSIM results are publicly provided on \href{https://noise.visinf.tu-darmstadt.de/benchmark/\#results_srgb}{\textsl{DND} Benchmark}.}
\vspace{-2.5mm}
\label{fig:dnd}
\end{figure*}
\noindent
\textbf{Test datasets}.\ We evaluate the comparison methods on the \textsl{Cross-Channel} (\textsl{CC}) dataset~\cite{crosschannel2016} and \textsl{DND} dataset~\cite{dnd2017}.\
The \textsl{CC} dataset~\cite{crosschannel2016} includes noisy images of 11 static scenes captured by Canon 5D Mark 3, Nikon D600, and Nikon D800 cameras.\ The noisy images are collected under a highly controlled indoor environment.\ Each scene is shot $500$ times using the same settings.\ The average of the $500$ shots is taken as ``ground-truth''.\ We use the default $15$ images of size $512\times512$ cropped by the authors to evaluate different image denoising methods.\
The \textsl{DND} dataset \cite{dnd2017} contains 50 scenarios captured by Sony A7R, Olympus E-M10, Sony RX100 IV, and Huawei Nexus 6P.\ Each scene is cropped to 20 bounding boxes of $512\times512$ pixels, generating a total of 1000 test images.\ The noisy images are collected under higher ISO values with shorter exposure times, while the ``ground truth'' images are captured under lower ISO values with adjusted longer exposure times.\ The ``ground truth'' images are not released, but we can obtain the PSNR and SSIM results by submitting the denoised images to the \href{https://noise.visinf.tu-darmstadt.de/benchmark/\#results_srgb}{\textsl{DND}'s Website}.\
\noindent
\textbf{Comparison results on PSNR and SSIM} are listed in Table~\ref{t-cc+dnd}.\
As can be seen, the Blind ResNet+NAC networks achieve the best results on the \textsl{CC} dataset and highly competitive results on \textsl{DND}, when compared with CBM3D~\cite{cbm3d}, the supervised networks DnCNN+~\cite{dncnn} and CBDNet~\cite{cbdnet}, and the unsupervised networks GCBD~\cite{gcbd}, N2N~\cite{noise2noise}, and DIP~\cite{dip}.\ This demonstrates that the ResNet+NAC networks can indeed handle the complex, unknown, and realistic noise, and perform on par with or better than supervised networks such as DnCNN+~\cite{dncnn} and CBDNet~\cite{cbdnet}.\
\noindent
\textbf{Qualitative results}.\
In Figures~\ref{fig:cc} and~\ref{fig:dnd}, we show the denoised images of our ResNet+NAC and the comparison methods on the images of ``\textsl{5dmark3-iso3200-1}'' from the \textsl{CC} dataset~\cite{crosschannel2016} and ``\textsl{0017\_3}'' from the \textsl{DND} dataset~\cite{dnd2017}, respectively.\ We observe that our self-supervised Blind ResNet+NAC is very effective on removing realistic noise from the real photograph.\ Besides, the Blind ResNet+NAC networks achieve competitive PSNR and SSIM results when compared with the other methods, including the supervised DnCNN+~\cite{dncnn} and CBDNet~\cite{cbdnet}.
\noindent
\textbf{Speed}.\ The work most similar to ours is Deep Image Prior (DIP)~\cite{dip}, which also trains an image-specific network for each test image.\ On average, DIP needs $603.9$ seconds to process a $512\times512$ color image, while our ResNet+NAC network needs $583.2$ seconds (on an NVIDIA Titan X GPU).
\subsection{Ablation Study}
\label{sec:dis}
To gain deeper insights into our NAC strategy, we conduct further examinations of our ResNet+NAC networks on image denoising. Specifically, we assess 1) the differences of ResNet+NAC from the ResNet in DIP~\cite{dip}; 2) how the numbers of residual blocks and epochs influence ResNet+NAC; 3) the comparison with the ``Oracle'' performance of the ResNet+NAC networks; 4) the performance of ResNet+NAC on ``strong'' noise.
\noindent
\textbf{1) Differences from DIP~\cite{dip}}.\ Though the basic network in our work is the ResNet used in DIP~\cite{dip}, our ResNet+NAC network is essentially different from DIP in at least two aspects.\ First, our ResNet+NAC is a novel strategy for self-supervised learning of \textsl{adaptive network parameters} for the degraded image, while DIP aims to investigate an \textsl{adaptive network structure} without learning the parameters.\ Second, our ResNet+NAC learns a mapping from the synthetic noisy image $\mathbf{z}=\mathbf{y}+\mathbf{n}_{s}$ to the noisy image $\mathbf{y}$, which approximates the mapping from the noisy image $\mathbf{y}=\mathbf{x}+\mathbf{n}_{o}$ to the clean image $\mathbf{x}$.\ By contrast, DIP maps a random noise map to the noisy image $\mathbf{y}$, and the denoised image is obtained during this process.\ For these two reasons, DIP needs early stopping at different points for different images, while our ResNet+NAC achieves more robust (and better) denoising performance than DIP on diverse images.\ In Figure~\ref{fig:curve}, we plot the curves of training loss and test PSNR of the DIP (a) and ResNet+NAC (b) networks over 10,000 epochs, on the two images ``Cameraman'' and ``House''.\ We observe that DIP needs early stopping to select the best results, while our ResNet+NAC stably achieves better denoising results within 1000 epochs.
\begin{figure*}[t]
\centering
\subfigure{
\begin{minipage}{0.48\textwidth}
\includegraphics[width=1\textwidth]{F-curve/DIP_Image_PSNR_Curve.pdf}\vspace{-2mm}
\centering{(a) Curves of DIP~\cite{dip}}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[width=1\textwidth]{F-curve/NAC_train_curve.pdf}\vspace{-2mm}
\centering{(b) Curves of our NAC}
\end{minipage}
}
\vspace{-2mm}
\caption{\textbf{Training loss and PSNR (dB) curves} of DIP~\cite{dip} (a) and our ResNet+NAC (b) networks w.r.t. the number of epochs, on the images of ``Cameraman'' and ``House'' from \textsl{Set12}.}
\vspace{-2.5mm}
\label{fig:curve}
\end{figure*}
\begin{figure*}[t]
\centering
\subfigure{
\begin{minipage}{0.48\textwidth}
\includegraphics[width=1\textwidth]{F-oracle/set12_g_oracle_final.pdf}\vspace{-2mm}
\centering{(a)}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[width=1\textwidth]{F3/Large_sigma_final.pdf}\vspace{-2mm}
\centering{(b)}
\end{minipage}
}
\vspace{-2mm}
\caption{\textbf{Comparisons of PSNR (dB) and SSIM results} on \textsl{Set12} (a) by our ResNet+NAC network and its ``Oracle'' version for AWGN with $\sigma=5,10,15,20,25$ and (b) by BM3D~\cite{bm3d}, DnCNN~\cite{dncnn}, and our ResNet+NAC network for strong AWGN ($\sigma=50$).}
\vspace{-2.5mm}
\label{f-oracle-strong}
\end{figure*}
\noindent
\textbf{2) Influence of the number of residual blocks and epochs}.\ Our backbone network is the ResNet~\cite{dip} with 10 residual blocks trained for 1000 epochs.\ Now we study how the numbers of residual blocks and epochs influence the performance of ResNet+NAC on image denoising.\ The experiments are performed on the \textsl{Set12} dataset corrupted by AWGN noise ($\sigma=15$).\ From Table~\ref{t-block}, we observe that, with more residual blocks, the ResNet+NAC networks achieve better PSNR and SSIM~\cite{ssim} results, and 10 residual blocks are enough for satisfactory results.\ With more (e.g., 15) blocks, there is little improvement in PSNR and SSIM.\ Hence, we use 10 residual blocks, the same as in~\cite{dip}.\ We then study how the number of epochs influences the performance of ResNet+NAC on image denoising.\ From Table~\ref{t-epoch}, one can see that on the \textsl{Set12} dataset corrupted by AWGN noise ($\sigma=15$), with more training epochs, our ResNet+NAC networks achieve better PSNR and SSIM results, but at the cost of longer processing time.\
\begin{table}[htp]
\vspace{-0mm}
\centering
\caption{\textbf{Average PSNR (dB)/SSIM of ResNet+NAC with different numbers of blocks} on \textsl{Set12} corrupted by AWGN noise ($\sigma=15$).}
\renewcommand\arraystretch{1}
\footnotesize
\begin{tabular}{c||ccccc}
\Xhline{1pt}
\rowcolor[rgb]{ .85, .90, .95}
\# of Blocks & 1 & 2 & 5 & 10 & 15
\\
\hline
PSNR$\uparrow$ & 33.58 & 33.85 & 34.14 & 34.24 & 34.26
\\
\hline
SSIM$\uparrow$ & 0.9161 & 0.9226 & 0.9272 & 0.9277 & 0.9272
\\
\hline
\end{tabular}
\vspace{-1mm}
\label{t-block}
\end{table}
\noindent
\textbf{3) Comparison with Oracle}.\ We also study the ``Oracle'' performance of the ResNet+NAC networks.\ In the ``Oracle'' setting, we train the ResNet+NAC networks on pairs of the \textsl{observed} noisy image $\mathbf{y}$ and its clean image $\mathbf{x}$, where $\mathbf{y}$ is corrupted by AWGN or signal-dependent Poisson noise.\ The experiments are performed on the \textsl{Set12} dataset corrupted by AWGN or signal-dependent Poisson noise.\ The noise deviations are in $\{5,10,15,20,25\}$.\ Figure~\ref{f-oracle-strong} (a) shows comparisons of our ResNet+NAC and its ``Oracle'' networks on PSNR and SSIM.\ It can be seen that the ``Oracle'' networks trained on noisy-clean image pairs perform only slightly better than the original ResNet+NAC networks trained with the \textsl{simulated}-\textsl{observed} noisy image pairs $(\mathbf{z},\mathbf{y})$.\ With our NAC strategy, the ResNet networks trained only with noisy test images achieve similarly promising performance on weak noise.
\noindent
\textbf{4) Performance on strong noise}.\
Our NAC strategy is based on the assumption of ``weak noise''.\ It is natural to wonder how well ResNet+NAC performs against strong noise.\ To answer this question, we compare the ResNet+NAC networks with BM3D~\cite{bm3d} and DnCNN~\cite{dncnn} on \textsl{Set12} corrupted by AWGN noise with $\sigma=50$.\ The PSNR and SSIM results are plotted in Figure~\ref{f-oracle-strong} (b).\ One can see that our ResNet+NAC networks are limited in handling strong AWGN noise when compared with BM3D~\cite{bm3d} and DnCNN~\cite{dncnn}.
\begin{table}[htp]
\vspace{-0mm}
\centering
\caption{\textbf{Average PSNR (dB) and time (s) of ResNet+NAC with different numbers of epochs} on \textsl{Set12} corrupted by AWGN noise ($\sigma=15$).}
\renewcommand\arraystretch{1}
\footnotesize
\begin{tabular}{c||ccccc}
\Xhline{1pt}
\rowcolor[rgb]{ .85, .90, .95}
\# of Epochs & 100 & 200 & 500 & 1000 & 5000
\\
\hline
PSNR$\uparrow$ & 31.80 & 32.79 & 33.77 & 34.24 & 34.28
\\
\hline
SSIM$\uparrow$ & 0.8714 & 0.9023 & 0.9189 & 0.9277 & 0.9280
\\
\hline
Time$\downarrow$ & 67.4& 132.5 & 302.0 & 583.2 & 2815.6
\\
\hline
\end{tabular}
\vspace{-1mm}
\label{t-epoch}
\end{table}
\section{Conclusion}
\label{sec:con}
In this work, we proposed a ``Noisy-As-Clean'' (NAC) strategy for learning self-supervised image denoising networks.\ In our NAC, we trained an image-specific network by taking the corrupted image as the target and adding simulated noise to it to generate the doubly corrupted noisy input.\ The simulated noise is close to the observed noise in the noisy test image.\ This strategy can be seamlessly embedded into existing supervised denoising networks.\ We observed that \textsl{it is possible to learn a self-supervised network only with the corrupted image, approximating the optimal parameters of a supervised network learned with a pair of noisy and clean images}.\ Extensive experiments on synthetic and real-world benchmarks demonstrate that the DnCNN~\cite{dncnn} and the ResNet in Deep Image Prior~\cite{dip} trained with our NAC strategy achieved comparable or better performance in PSNR, SSIM, and visual quality, when compared to previous state-of-the-art image denoising methods, including supervised denoising networks.\ These results validate that our NAC strategy can learn effective image-specific priors and noise statistics only from the corrupted test image.
{
\small\small
\bibliographystyle{ieee}
\section{Introduction}
Recent astrophysical observations \cite{wmap} and
neutrino oscillation experiments \cite{oscil}
require an extension of the standard model (SM) so as to include
dark matter as well as to incorporate a generation mechanism for small neutrino
masses.
At the moment, however, we know only a little about dark matter:
there are constraints on its mass and abundance, but nothing is known about
its detailed features. Consequently, there are many
consistent models for
dark matter. Representative possibilities may be summarized
as follows: \\
(i) Dark matter is a stable thermal relic, so that its relic abundance and
annihilation cross section are strongly related to each other.
The lightest neutralino in supersymmetric models
with the conserved $R$-parity
is a well studied example of this category \cite{susydm}.
Another well motivated example may be a stable neutral
particle in the
radiative seesaw scenario \cite{Ma:2006km}, which is an alternative to
the seesaw mechanism for generating small neutrino masses \cite{seesaw}.
In fact, there exist similar models, and the nature of
the DM candidates in these models has been studied \cite{scdm,cdmmeg,
fcdm,ncdm,ext}.\\
(ii) Dark matter consists of multiple components \cite{multidm}. If some of them are
unstable, dark matter can contain thermal components as well as
non-thermal ones which can be produced by the decay of unstable components.
In this case the dark matter relic abundance at present and the annihilation
cross section of the dominant component need not be related.\\
(iii) Dark matter is not stable and is decaying with a very long
lifetime \cite{Takayama:2000uz}-\cite{Shirai:2009fq}.
In any case, knowing which class the true dark matter model belongs to
will be crucial for the study of physics beyond the SM.
Since the above mentioned dark matter models
predict different signals for
the annihilation and/or decay of dark matter in the Galaxy,
it may be possible to use
the data from these observations to distinguish the dark matter
models \cite{mindep}.
The positron excess in the recent PAMELA observation \cite{pamela} and
ATIC data of $e^++e^-$ flux \cite{atic} are such examples.
PAMELA data show a hard positron excess compared with the background but
no antiproton excess, while
ATIC data show an excess of the $e^++e^-$ flux in the region of 300-800 GeV.
A lot of works have been done to explain these data within the framework
of dark
matter models (see, for instance, \cite{positron} and references therein).
However, the observed positron flux requires a much larger
annihilation cross section or a much larger dark matter density than
the ones needed for the explanation of the WMAP data.
The required
enhancement is parametrized as a boost factor
in the literature\footnote{The necessary enhancement for the s-wave
annihilation can be partly accounted for by
the Sommerfeld effect \cite{Hisano:2003ec} or other mechanisms \cite{enhance}.}.
So, it seems very difficult to
give a natural explanation for the boost factor for type (i) dark matter.
In this paper, following
\cite{Ma:2006uv},
we consider a supersymmetric extension of the radiative
seesaw model for the neutrino mass
to understand the data obtained by the
PAMELA and ATIC experiments.
The radiative seesaw model is attractive in two respects:
(a) The non-vanishing
small neutrino mass
and the presence of a dark matter candidate are closely related through
a discrete symmetry $Z_2$.
(b) The dark matter candidate in this model couples only with
leptons but not quarks. This feature is favorable for
the above mentioned PAMELA and ATIC data.
However, a large boost
factor still has to be introduced to explain the observed positron flux in this
model \cite{Bi:2009md,Cao:2009yy,Chen:2009mf}.
In our supersymmetric
extension of the model this problem is overcome
as follows.
There are two kinds of stable neutral
particles corresponding to two discrete symmetries,
$R$ and $Z_2$, where $R$ is the $R$ parity
in supersymmetric theories, and $Z_2$ is mentioned above.
If one of these
discrete symmetries is broken,
the heavier one can decay to the lighter one.
We propose that this breaking can be induced by anomaly
\cite{Banks:1991xj,Banks:1995ii,ArkaniHamed:1998nu} to realize
an exponentially suppressed decay rate of the heavier dark matter.
It should be noted that the smallness of this decay rate is a crucial
ingredient for the explanation of the observed $e^++e^-$ flux.
Moreover, due to the very nature of the model,
only lepton pairs can be produced through the dark matter decay.
We show that both data of PAMELA and ATIC can be described
well simultaneously in this scenario.
The model for dark matter proposed in this paper gives a concrete
realistic example of type (ii).
\section{Anomaly induced dark matter decay}
The stability of the dark matter is usually ensured by
an unbroken discrete symmetry $Z$.
If the discrete symmetry is broken, the dark matter can decay.
The preferred decay modes depend
on how $Z$ is broken.
However, its lifetime will be too short, $\tau_{DM} \sim 8\pi/ m_{DM}
\simeq 10^{-24} $ sec for $m_{DM} \simeq 1$ TeV, unless the $Z$ breaking
is extremely weak
\cite{Takayama:2000uz}-\cite{Shirai:2009fq}.
Such suppression may occur if $Z$ is
broken by GUT or Planck scale
physics
\cite{Hamaguchi:2008ta}-\cite{Shirai:2009fq}.
Here we would like to suggest an alternative
suppression mechanism which
is based on the observation that
if a symmetry, continuous or discrete, is anomalous,
then non-perturbative effects can generally
induce non-invariant terms, like
quark masses in QCD.
Although the discrete symmetry
$Z$ in question can be anomaly free with respect to the SM gauge group,
it can be anomalous at high energy when embedded into a larger
discrete group, because heavy particles
can contribute to discrete anomalies \cite{Ibanez:1991hv,Banks:1991xj}.
If the discrete symmetry is anomalous at high energy,
non-perturbative effects can produce
$e^{-b S} \Phi^n$ in the superpotential
\cite{Banks:1991xj,Banks:1995ii,ArkaniHamed:1998nu},
which is $Z$ invariant. This is because the dilaton superfield
$S$ transforms inhomogeneously
under the anomalous $Z$, where $\Phi$ is a generic chiral superfield, and
$b$ is a certain real number
(see also \cite{Araki:2007ss,Araki:2008ek}).
Below the Planck scale, where the dilaton is assumed
to be stabilized at a vacuum
expectation value of $O(1)$, the factor $e^{-b <S>}$ can work
as a suppression factor for the noninvariant product $\Phi^n$.
Let us estimate the size of the suppression.
To this end, consider a (chiral) $Z_N$ symmetry in a gauge theory
based on the gauge group $G$ and assume $Z_N$ is anomalous. Then
the Jacobian $J$ of the path integral measure
corresponding to the $Z_N [G]^2$ anomaly can be written as
\cite{Araki:2006mw,Araki:2007ss,Araki:2008ek}
\begin{eqnarray}
J &=&
\exp\left(-\frac{2\pi i}{N}\Delta Q \int d^{4}x~2{\cal A}(x)
\right), {\cal A}(x) =\frac{1}{64\pi^{2}}
\epsilon^{\mu\nu\rho\sigma}
\ F_{\mu\nu}^a F_{\rho\sigma}^a,
\label{jac}
\end{eqnarray}
where $F_{\mu\nu}^a$ is the field strength for $G$. Since
the Pontryagin index $\int d^{4}x~{\cal A}(x)$ is
an integer, $\Delta Q =0$ mod $ N/2$ means anomaly freedom
of $Z_N$. In the anomalous case, we have $\Delta Q =k/2$ with
an integer $k < N$. (So, $\Delta Q/N$=1/4 for an
anomalous $Z_2$, for instance.)
This anomaly can be cancelled by the Green-Schwarz
mechanism \cite{Green:1984sg},
which defines the transformation property of the dilaton
supermultiplet $S=(\varphi+i a, \psi_S,F_S)$,
where $\varphi$ ($a$) is the dilaton (axion) field, and they couple
to the gauge field as
\begin{eqnarray}
{\cal L}_F &=&
-\frac{\varphi}{4} \ F_{\mu\nu}^a F^{a\mu\nu}
- \frac{a}{8}\epsilon^{\mu\nu\rho\sigma}
\ F_{\mu\nu}^a F_{\rho\sigma}^a.
\end{eqnarray}
To cancel the anomaly (\ref{jac}), the axion $a$ has to transform
according to $a \to a -(1/2\pi) (\Delta Q/N)$.
Therefore, the $Z_N$ charge of $\exp(-bS)$ becomes $C$ if
$b = 4 \pi^2 C/\Delta Q$. Since $<\varphi>=1/g^2\simeq O(1)$
at the Planck scale,
the expression $\exp(-bS)$ would then yield a suppression factor $SF$
such as
\begin{eqnarray}
SF &\simeq& \exp(-4 \pi^2 C/\Delta Q),\nonumber
\\
(SF)^2 &\simeq & 10^{-55}, 10^{-69},
10^{-86}~~~\mbox{for}~
C/\Delta Q =8/5, 2, 10/4,
\label{sf}
\end{eqnarray}
where $C$ and $2\Delta Q$ are defined modulo $N$
\footnote{Since $b>0$, $C+N$ should appear instead of $C$ for a negative $C$.
}.
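As a quick numerical check (ours, not part of the original derivation), the suppression factors quoted in (\ref{sf}) follow directly from $SF=\exp(-4\pi^2 C/\Delta Q)$:

```python
import math

def suppression_factor(c_over_dq):
    # SF = exp(-4 pi^2 C / Delta Q), from the Green-Schwarz dilaton VEV
    return math.exp(-4.0 * math.pi**2 * c_over_dq)

# the three ratios C/Delta Q quoted in eq. (sf)
for r in (8/5, 2, 10/4):
    sf2 = suppression_factor(r)**2
    print(f"C/Delta Q = {r}:  (SF)^2 ~ 10^{round(math.log10(sf2))}")
    # -> 10^-55, 10^-69, 10^-86, as in eq. (sf)
```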
Do we need such a big suppression? According to
\cite{Ibarra:2008jk,Ishiwata:2008cu}, to explain
the PAMELA/ATIC data,
the decaying dark matter
should decay with a life time of $\sim 10^{26}$ sec, which corresponds
to a decay width $\Gamma_{NDM} \sim 10^{-54}\times(1 \mbox{TeV})
\sim 10^{-70}\times(1 \mbox{TeV}) (M_{\rm PL}/1 \mbox{TeV})
\sim 10^{-86}\times(1 \mbox{TeV}) (M_{\rm PL}/1 \mbox{TeV})^2$,
where we have assumed that the decay is induced by dimension four (three)
operators for the first (third) expression. (The second one
could appear accidentally.) The precise suppression needed depends, of course,
on the details of the model. But it is clear that one needs
a huge suppression factor
for the decaying dark matter,
if one would like to explain
the PAMELA/ATIC data within the framework of particle physics
\cite{Hamaguchi:2008ta,Arvanitaki:2008hq,Shirai:2009fq}.
It is also clear that the existence of such a small number can not
be explained
in a low energy theory.
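The decay widths quoted above can be checked numerically. A Python sketch (ours), assuming $\hbar \simeq 6.58\times 10^{-28}$ TeV\,s and $M_{\rm PL}\simeq 1.2\times 10^{16}$ TeV:

```python
import math

hbar = 6.582e-28        # hbar in TeV * s
tau = 1e26              # required dark matter lifetime in s
m_pl = 1.2e16           # Planck mass in TeV
gamma = hbar / tau      # required decay width in TeV
print(f"Gamma ~ 10^{math.floor(math.log10(gamma))} TeV")   # -> 10^-54 TeV

# dimensionless coefficient c if Gamma = c * (1 TeV) * (M_PL / 1 TeV)^n,
# i.e. the decay induced by dimension-four (n=1) or dimension-three (n=2) operators
for n in (1, 2):
    c = gamma / m_pl**n
    print(f"n = {n}: c ~ 10^{math.floor(math.log10(c))}")  # -> 10^-70, 10^-86
```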
\section{The Model: Radiative see-saw and dark matter candidates}
Here we would like to supersymmetrize the model of \cite{Ma:2006km}.
(A first attempt was made in \cite{Ma:2006uv}.)
We assume the $R$ parity invariance as usual.
So, we have $R\times Z_2$ discrete symmetry at low energy.
The matter content of the model with the quantum numbers
is given in Table I.
$L$, $ H^u, H^d$ and $\eta^u, \eta^d$
stand for the $SU(2)_L$ doublet
supermultiplets of the leptons, the MSSM Higgses and the inert
Higgses, respectively.
Similarly, $SU(2)_L$ singlet
supermultiplets of the charged leptons and right-handed neutrinos are denoted by
$E^c$ and $N^c$. $\phi$ is an additional neutral Higgs supermultiplet
which is needed
to generate neutrino masses radiatively.
$\Sigma$ and $\sigma$ are also additional neutral Higgs
supermultiplets which are
needed to derive the superpotential (\ref{superP1}) from
a $Z_4$ invariant one.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|cccccccc|cc|} \hline
& $L$ & $E^c$ & $N^c $&$H^u$&$H^d$& $\eta^u $
& $\eta^d$ & $\phi $ & $\Sigma$ & $\sigma$
\\ \hline
$R\times Z_2 $
& $(-,+)$ & $(-,+)$ & $(+,-)$
& $(+,+)$& $(+,+)$
& $(-,-)$ & $(-,-)$& $(-,-)$ & $(+,+)$ & $(+,+)$
\\ \hline
$Z_4$
& $0$ & $0$ & $-1$
& $0$& $0$
& $1$ & $1$& $-1$ & $2$ & $0$
\\ \hline
$Z_2^L$
& $-$ & $-$ & $-$
& $+$& $+$
& $+$ & $+$& $+$& $+$& $+$
\\ \hline
\end{tabular}
\caption{The matter content and
the quantum numbers. $Z_2^L$ is a discrete lepton number.
$Z_2$ is a subgroup of $Z_4$, which is assumed to be anomalous
and spontaneously broken by the VEVs
of $\Sigma$ and $\sigma$ down to $Z_2$.
}
\end{center}
\end{table}
We first consider
an $R\times Z_2$ invariant
superpotential below. Later on, using $\Sigma$ and $\sigma$, we will describe a
possibility to
obtain it from an $R\times Z_4$ invariant one:
\begin{eqnarray}
W &=& W_4+W_2,
\label{superP1}
\end{eqnarray}
where
\begin{eqnarray}
W_4
&=&
Y_{i}^{e} L_{i} E_{i}^c H^d
+Y_{ij}^{\nu } L_{i} N_{j}^c \eta^u
+\lambda_u \eta^u H^d \phi+
\lambda_d \eta^d H^u \phi
+\mu_H H^u H^d,
\label{w4}\\
W_2 &=& \frac{(M_N)_{ij}}{2}N_i^c N_j^c+\mu_\eta \eta^u \eta^d+
\frac{1}{2}\mu_\phi\phi^2.
\label{w2}
\end{eqnarray}
The Yukawa couplings of the charged leptons $Y_{i}^{e}$
can be assumed to be diagonal without loss of generality.
Soft-supersymmetry breaking terms are necessary to generate neutrino masses
radiatively. For the relevant Higgs sector they are given by
\begin{eqnarray}
{\cal L}_{SB}
&=&
-m_{\eta^u}^2\hat{\eta}^{u\dag} \hat{\eta}^u-
m_{\eta^d}^2\hat{\eta}^{d\dag} \hat{\eta}^d
-m_{\phi}^2 \hat{\phi}^\dag \hat{\phi}
-(B_\eta \hat{\eta}^u \hat{\eta}^d+h.c.)\nonumber\\
& &-
(\frac{1}{2}B_\phi\hat{\phi}^2+h.c.)+
(A_u \lambda_u \hat{\eta}^u \hat{H}^d \hat{\phi}+
A_d \lambda_d \hat{\eta}^d \hat{H}^u \hat{\phi} +h.c.),
\label{LSB}
\end{eqnarray}
where the hatted fields are
the scalar components of the corresponding
superfields. The $B$ and $A$ soft terms are responsible
for the radiative generation of the neutrino masses.
We assume that $\hat{\eta}^u, \hat{\eta}^d$ and $\hat{\phi}$
do not acquire vacuum expectation values.
To calculate
the one-loop neutrino mass matrix, we treat the $B$ terms
as insertions. Then we find
a one-loop diagram with one insertion of
$B_\eta \hat{\eta}^u \hat{\eta}^d$,
which mixes $\hat{\eta}^u$ and $\hat{\eta}^d$.
Correspondingly, we define the approximate
mass eigenstates $\eta_0^\pm$ (the neutral
components of the $\eta$'s) as
\begin{eqnarray}
\left(\begin{array}{c}
\eta^u_0 \\ \eta^{d}_0 \end{array}\right) &=&
\left(
\begin{array}{cc}
\cos\theta & \sin\theta \\
-\sin\theta &\cos\theta \end{array}\right)~
\left(\begin{array}{c}
\eta^+_0 \\
\eta^-_0 \end{array}\right),
\end{eqnarray}
where
\begin{eqnarray}
\tan2\theta &=&-\frac{2 m_{ud}^2}{m_{uu}^2-m_{dd}^2},~
m_{\pm}^2 =
\frac{1}{2}\left\{m_{uu}^2+m_{dd}^2\pm
[(m_{uu}^2-m_{dd}^2)^2+4 m_{ud}^4]^{1/2}\right\},
\label{t2t}
\end{eqnarray}
with
\begin{eqnarray}
m_{uu}^2 &=&
\mu_\eta^2+m_{\eta^u}^2
-\frac{1}{2}M_z^2\cos 2\beta
+\frac{1}{2}\lambda_u^2v^2\cos^2\beta,\\
m_{dd}^2 &=&
\mu_\eta^2+m_{\eta^d}^2
+\frac{1}{2}M_z^2\cos 2\beta
+\frac{1}{2}\lambda_d^2v^2\sin^2\beta,\\
m_{ud}^2 &=&\frac{1}{2}\lambda_u\lambda_d v^2 \cos\beta\sin\beta,
\label{mud}
\end{eqnarray}
and $\tan\beta=v_u/v_d~,~v^2=v_u^2+v_d^2~,~
M_z^2 =(g_2^2+g^{\prime 2})v^2/4$.
Neglecting
higher order insertions we obtain the neutrino mass matrix at one loop:
\begin{eqnarray}
({\bf M}_\nu)_{ij} &=&
\frac{1}{16\pi^2} Y^\nu_{il} U_{l k} M_k
U^T_{km} Y^\nu_{jm}B_\eta \sin 2\theta
\left[-\cos^2 \theta I(m_+,m_+,M_k)\right.\nonumber\\
& &\left.+\sin^2\theta I(m_-,m_-,M_k)
+\cos 2\theta I(m_+,m_-,M_k)\right],
\label{mnu}
\end{eqnarray}
where $U$ is a unitary matrix defined by
$(U^T M_N U)_{ik}=M_k\delta_{ik}$
\footnote{$M_1 < M_{2,3}$ is assumed, and we denote
$M_1$ by $m_{NDM}$ later on.}, and
\begin{eqnarray}
& &I(m_a,m_b,m_c) \nonumber\\
& &=\int_0^1 dx \int_0^{1-x} dy
[m^2_a x+m^2_b y+m^2_c (1-x-y)]^{-1}\nonumber\\
& &=
\frac{m_a^2 m_b^2 \ln(m_b^2/m_a^2)+
m_b^2 m_c^2 \ln(m_c^2/m_b^2)+
m_c^2 m_a^2 \ln(m_a^2/m_c^2)
}{(m_a^2-m_b^2)
(m_b^2-m_c^2)(m_c^2-m_a^2)}.
\end{eqnarray}
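The closed form of $I(m_a,m_b,m_c)$ can be verified against a brute-force evaluation of the Feynman-parameter integral. A Python sketch (ours; the numerator is written in the cyclic form that reproduces the positive-definite integral):

```python
import math

def I_closed(ma, mb, mc):
    # closed form of the one-loop integral I(m_a, m_b, m_c)
    a, b, c = ma**2, mb**2, mc**2
    num = a*b*math.log(b/a) + b*c*math.log(c/b) + c*a*math.log(a/c)
    den = (a - b)*(b - c)*(c - a)
    return num/den

def I_numeric(ma, mb, mc, n=400):
    # midpoint rule over the Feynman-parameter triangle x, y >= 0, x + y <= 1
    a, b, c = ma**2, mb**2, mc**2
    h = 1.0/n
    s = 0.0
    for i in range(n):
        x = (i + 0.5)*h
        for j in range(int((1.0 - x)/h)):
            y = (j + 0.5)*h
            s += h*h/(a*x + b*y + c*(1.0 - x - y))
    return s

print(I_closed(1.0, 2.0, 3.0), I_numeric(1.0, 2.0, 3.0))  # both ~0.1247
```

Since the integrand is positive, $I$ must be positive; the brute-force evaluation provides exactly this sanity check.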
As we can see from (\ref{t2t}), (\ref{mud}) and (\ref{mnu}),
the neutrino masses are proportional to
$B_\eta$ and $\lambda_u \lambda_d$ at the lowest order, because $\sin 2\theta
\propto \lambda_u \lambda_d$. So, the neutrino masses can be controlled by these
parameters along with the Yukawa couplings $Y^\nu_{il}$, the
masses of the inert Higgses and right-handed neutrinos.
There are many candidates for the dark matter in this model \cite{Ma:2006uv}.
The lightest particle
in each row of Table II could be a dark matter candidate.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c|c|} \hline
$R\times Z_2\times Z_2^L $ & Bosons &Fermions \\ \hline
$(-,+,+)$ &
& $ \psi_{h^u},\psi_{h^d}, {\tilde Z}, {\tilde \gamma}$\\ \hline
$(+,-,+)$ & & $\psi_{\eta^u}, \psi_{\eta^d}, \psi_\phi$
\\ \hline
$(-,-,+)$ & $\hat{\eta}_0^u,\hat{\eta}_0^d,
\hat{\phi}$ & \\ \hline
$(+,-,-)$ & $\hat{N}$'s & \\ \hline
$(-,-,-)$ & & $\psi_N $'s
\\ \hline
$(-,+,-)$ & ${\hat \nu}_L$'s &
\\ \hline
\end{tabular}
\caption{The dark matter candidates. The $(+,+,-)$ candidates
are dropped, because
they are the left-handed neutrinos.}
\end{center}
\end{table}
However, there can exist only three types of dark matter (including the
left-handed neutrinos), depending on which discrete symmetry guarantees
their stability.
We assume
that the first right-handed neutrino $\psi_{N_1}$ is the lightest one
among $\psi_{N}$'s and denote it by $\psi_{N}$ (its mass is denoted by
$m_{NDM}$). So, $\psi_{N}$
and the lightest neutralino $\chi$
(its mass is denoted by $m_{\chi DM}$)
are the dark matter candidates. Both have an odd $R$ parity, so that
one of them can be the stable dark matter, while the other one
is the decaying dark matter, if $Z_2$ is broken.
Following \cite{Griest:1988ma,Griest:1989zh}, we have computed the
thermally averaged cross section for the annihilation of two $\psi_N$'s
and that of two $\chi$'s by expanding the corresponding relativistic
cross section $\sigma$ in powers of their relative velocity,
and we have then computed the relic densities
$\Omega_{NDM}$ and $\Omega_{\chi DM}$.
We have assumed that the SM particles are the only ones
which are lighter than $\psi_N$ and $\chi$,
so that we have used the SM degrees of freedom at the decoupling, i.e.
$g_*=106.75$. We have found that,
for the given interval of the dark matter masses, i.e.,
$1~\mbox{TeV}~\mathrel{\mathpalette\@versim<} m_{NDM}\mathrel{\mathpalette\@versim<} 3~\mbox{TeV}$
and $0.2~\mbox{TeV}~\mathrel{\mathpalette\@versim<} m_{\chi DM}\mathrel{\mathpalette\@versim<} 0.5~\mbox{TeV}$,
there is enough parameter space
in which $(\Omega_{NDM}+\Omega_{\chi DM}) h^2\simeq 0.11$ is satisfied.
In the next section we let $\psi_N$ decay into $\chi$, while emitting
high energy positrons. As it is clear from the superpotential (\ref{w4}),
$\psi_N$ can not decay into the quarks, because $\eta$'s do not
couple to the quarks.
\section{Decaying right-handed neutrino dark matter and
PAMELA/ATIC Data}
As long as the discrete symmetry $R\times Z_2$ is unbroken, there are two CDM
particles in the present model. One finds that the $R\times Z_2[SU(3)_C]^2$ and
$R\times Z_2[SU(2)_L]^2 $ anomalies are canceled with the matter content
given in Table 1 \footnote{We do not consider the $R\times Z_2[U(1)_Y]^2$
and mixed gravitational anomalies, because they do not give us
useful information.}.
Our assumption is that $Z_4$
is anomalous and spontaneously broken to its subgroup $Z_2$.
Note that $Z_4$ forbids $W_2$ in (\ref{w2}), while $W_4$ is allowed.
Therefore, we have to generate $W_2$ from an additional sector.
This situation can be realized as follows. Consider the $Z_4$
invariant superpotential
including the SM singlet $\Sigma$ and $\sigma$ given in Table 1:
\begin{eqnarray}
W_\sigma &=&
\xi \sigma +m_\sigma\sigma^2+\lambda_\sigma\sigma^3
+ \lambda_\Sigma \sigma \Sigma^2+\lambda_\mu\sigma H^u H^d\nonumber\\
& &+m_\Sigma \Sigma^2
+\left(~ \frac{(\lambda_N)_{ij}}{2}N_i^c N_j^c+\lambda_\eta \eta^u \eta^d+
\frac{1}{2}\lambda_\phi\phi^2\right)\Sigma .
\label{Ws}
\end{eqnarray}
The superpotential (\ref{Ws}) serves for $\Sigma$ and $\sigma$ to develop VEVs,
and consequently, $Z_4$ is spontaneously broken to $Z_2$,
producing effectively the superpotential (\ref{w2}).
The true stable dark matter is the lightest one
which has an odd parity of $R$.
In the following discussion we assume that $\psi_N$ is heavier than $\chi$.
Since the ATIC data are indicating that the mass of the decaying
dark matter particle is preferably
heavier than $O(1)$ TeV, all the
superpartners should be heavier than $O(1)$ TeV
if $m_{\chi DM} > m_{NDM}$.
It is, therefore, more welcome for $\psi_N$ to be the
decaying dark matter, because
a heavy $\psi_N$ means a heavy $\eta$ Higgs, which is desirable to suppress
FCNC processes such as $\mu \to e + \gamma$.
As one can find, $Z_4$ is anomalous: $\Delta Q=1~\mbox{mod}~N/2(=2)$
\footnote{For the Green-Schwarz cancellation to work, the $Z_4[SU(3)_C]^2$
anomaly has to be matched to $Z_4[SU(2)_L]^2$ anomaly.
To realize this we introduce, for instance, a pair
of ${\bf 3}$ and $\overline{{\bf 3}}$ of $SU(3)_C$ with the $Z_4$
charge one. Their mass can be obtained from
$<\Sigma> {\bf 3} \times \overline{{\bf 3}}$.}.
Consequently,
the suppression coefficient
$b$ of (\ref{sf}) can take values
\begin{eqnarray}
b &=& \frac{4\pi^2 C}{\Delta Q} =4\pi^2\times \frac{C~~(\mbox{mod}~4)}
{1~~(\mbox{mod}~2)},
\label{b}
\end{eqnarray}
where $C$ is the charge of
$\exp(-bS)$.
We assume that the non-perturbative effect
can generate $R$ invariant, but $Z_4$ violating operators.
At $d=3$ there is only one operator
$\eta^u L$ which is even under $R$, and has the $Z_4$ charge one.
So we focus on $\eta^u L$:
\begin{eqnarray}
W_b &=& \mu_{bi}\eta^u L_i \quad \mbox{with}~
\mu_{bi}=\rho_i M_{\rm PL}e^{-b \langle S\rangle},
\label{Wb}
\end{eqnarray}
where $\rho_i$ are dimensionless couplings.
Since $\langle F_S/\varphi\rangle \sim m_{3/2}$ and
$\langle\varphi\rangle\sim O(1)$,
the superpotential $W_b$ induces a soft-supersymmetry breaking term
\begin{eqnarray}
{\cal L}_b &=& B_{bi} \hat{\eta}^u \hat{L}_i \quad \mbox{with}~
B_{bi}= w \rho_i M_{\rm PL} m_{3/2}e^{-b}
\label{Lb}
\end{eqnarray}
at the Planck scale, where $w$ is a dimensionless constant.
Since the $Z_4$ charge of $\eta$ is $1$,
the charge of $\exp(-bS)$ has to be $-1~\mbox{mod}~4$, and then
$$ b =4\pi^2\times (\cdots 7/3,~11/5,~11/7,~ 7/5,~ 1\cdots),$$
which could give a huge suppression factor.
With this observation we proceed with our discussion.
The tree diagrams contributing to the $\psi_N$ decay are shown
in Fig.~\ref{graph1},
where we have assumed that $\chi$ is the pure bino.
We do not take into account the tree diagrams which
exist due to the mixing
of $\psi_{\eta^u}$ and $\psi_{e_f^c}$, because these diagrams
are suppressed by $m_f/m_\eta$,
where $m_f$ is the lepton mass.
So, in the lowest order approximation only dimension two
operators in the $B$ soft-breaking sector exist.
At the lowest order, $\psi_N$ can decay only into
the leptons along with a $\chi$. ($R$ parity violating
operator $L H^u$ allows the decay into the quarks, too.)
The differential decay width is given by
\begin{eqnarray}
\frac{d \Gamma_{e^+}}{d E} &=&
\frac{m_{NDM}^4}{768 \pi^3}x^2 \left(1-\frac{z^2}{1-2x}\right)^2
\Big[~A_1(1-2x -z^2)+2 A_1 (1-x) (1+2 \frac{z^2}{1-2x})\nonumber\\
& &+6 A_2 z+6 A_3 (1-2x)~\Big],
\label{Dg}
\end{eqnarray}
and the total decay width is
\begin{eqnarray}
\Gamma_{e^+ T} &=&\tau_{NDM}^{-1}=\frac{m_{NDM}^5}{12288\pi^3}\Big\{~(1-z^2)
[ (A_1+A_3)(1-7 z^2-7 z^4+z^6)\nonumber\\
& &+4 A_2 z (1+10 z^2+z^4) ]+
24z^2[-(A_1+A_3)z^2 \ln z+2 A_2 (1+z^2) \ln z]~\Big\},
\label{Tg}
\end{eqnarray}
where
\begin{eqnarray}
z &=& \frac{m_{\chi DM}}{m_{N DM}} <1,\qquad x =\frac{E}{m_{N DM}} < (1-z^2)/2,
\label{z}
\end{eqnarray}
\begin{eqnarray}
A_1 &=&2g^{\prime 2}\sum_{i,j} \left | Y_{j 1}^* B_i
\right|^2\frac{1}{\tilde{m}_L^4 m_\eta^4} , \qquad
A_3 =2g^{\prime 2}\sum_{i,j} \left |
Y_{i 1}^* B_j \right|^2\frac{1}{\tilde{m}_L^4 m_\eta^4},
\label{a1} \\
A_2 &=& -g^{\prime 2}\sum_{i,j}
\left [( Y_{j 1} B_i ^*)(Y_{i 1}^* B_{j})+h.c. \right]
\frac{1}{\tilde{m}_L^4 m_\eta^4},
\label{a2}
\end{eqnarray}
where $g' ~(\simeq 0.345)$ is the $U(1)_Y$ gauge coupling constant
(the bino is assumed to be $\chi$).
$j$ runs over the negatively charged leptons, and
$i$ stands for a positively charged lepton in
(\ref{a1}) and (\ref{a2}). We have assumed that
all the scalar partners of the left-handed superpartners $\hat{l}_L$
have the same mass $\tilde{m}_L$.
The positron can come
from the decay of the anti-muons and anti-taus.
In the following calculations, however,
we assume that the energy spectrum of the positrons coming from the anti-muons
and anti-taus does not differ very much from that of the directly produced
positrons.
So, we also sum over $i=e^+, \mu^+,\tau^+$
in (\ref{a1}) and (\ref{a2}) to obtain
$d \Gamma_{e^+}/d E$. At this order, all the emitted positrons are right-handed,
as one can see from Fig.~\ref{graph1}.
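As a cross-check, integrating the spectrum (\ref{Dg}) over the positron energy must reproduce the total width (\ref{Tg}). The following Python sketch (ours, with illustrative values $m=1$, $z=0.2$, $A_1=A_3=1$, $A_2=0$) performs this check numerically; note that the logarithmic $(A_1+A_3)$ term of the total width is implemented as $-(A_1+A_3)z^2\ln z$, which is the form the integral of (\ref{Dg}) reproduces:

```python
import math

def dGamma_dE(E, m, z, A1, A2, A3):
    # differential width dGamma_{e+}/dE of eq. (Dg); x = E/m, 0 < x < (1-z^2)/2
    x = E/m
    if not 0.0 < x < (1.0 - z*z)/2.0:
        return 0.0
    w = 1.0 - 2.0*x
    pre = m**4/(768.0*math.pi**3)*x*x*(1.0 - z*z/w)**2
    return pre*(A1*(w - z*z) + 2.0*A1*(1.0 - x)*(1.0 + 2.0*z*z/w)
                + 6.0*A2*z + 6.0*A3*w)

def Gamma_total(m, z, A1, A2, A3):
    # closed-form total width; the (A1+A3) log term is -(A1+A3) z^2 ln z
    lz = math.log(z)
    return m**5/(12288.0*math.pi**3)*(
        (1.0 - z*z)*((A1 + A3)*(1.0 - 7.0*z**2 - 7.0*z**4 + z**6)
                     + 4.0*A2*z*(1.0 + 10.0*z**2 + z**4))
        + 24.0*z*z*(-(A1 + A3)*z*z*lz + 2.0*A2*(1.0 + z*z)*lz))

# midpoint integration of the spectrum should reproduce the closed form
m, z = 1.0, 0.2
Emax = m*(1.0 - z*z)/2.0
n = 4000
total = sum(dGamma_dE((i + 0.5)*Emax/n, m, z, 1.0, 0.0, 1.0)
            for i in range(n))*Emax/n
print(total, Gamma_total(m, z, 1.0, 0.0, 1.0))   # the two agree to ~0.1%
```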
\begin{figure}[t]
\begin{center}
\includegraphics[width=10cm]{graph1.eps}
\caption{\label{graph1}\footnotesize The diagrams contributing
to the $\psi_N$ decay.
We assume that $\chi$ is the pure bino.
The emitted positrons are all right-handed.
Similar diagrams exist
because of the mixing of $\eta^u$ with $E^c$.
The amplitude is suppressed by $e^{-b} m_f/m_\eta$,
so that we do not consider them.
}
\end{center}
\end{figure}
Before we calculate the positron spectrum, we briefly consider
the suppression factor we need for our case.
Assuming that $Y_{ij}\sim 1$ and
that all the $\rho_i$ in $B$'s in (\ref{Lb}) are of the same size, we obtain
\begin{equation}
\tau_{NDM} \sim \left(\frac{ \mbox{TeV}}{m_{NDM}}\right)
\left( \frac{m_\eta^2 \tilde{m}_L^2}{m_{NDM}^3 m_{3/2}} \right)^2
\left( \frac{m_{NDM}}
{M_{\rm PL}/10^{16}}
\right)^2 (\rho_{\tau} w)^{-2} \left( 10^{-79} e^{2b} \right)
\times 10^{26} ~~\mbox{sec}.
\end{equation}
So, we obtain the right order of magnitude for $\tau_{NDM}$
with $b=4\pi^2(7/3)$, which gives a suppression factor of
$10^{-80}$ (see (\ref{b})).
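A short Python check (ours) of the suppression factors $e^{-2b}$ for the allowed $b$ values listed above:

```python
import math

# b = 4 pi^2 C/Delta Q for the allowed charge ratios quoted in the text
for num, den in ((7, 3), (11, 5), (11, 7), (7, 5), (1, 1)):
    b = 4.0*math.pi**2*num/den
    log10_e2b = -2.0*b/math.log(10.0)
    print(f"b = 4 pi^2 ({num}/{den}):  e^(-2b) ~ 10^{round(log10_e2b)}")
# b = 4 pi^2 (7/3) gives e^(-2b) ~ 10^-80, as used in the lifetime estimate
```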
\begin{figure}[t]
\begin{center}
\includegraphics[width=10cm]{pamela.eps}
\caption{\label{pamela}\footnotesize
$(e^+/(e^++e^-)$ versus the positron energy $E$.
The blue lines are the predictions of the model,
where we have used: $z=1/5\mbox{(dashed)}$, $1/6\mbox{(dot-dashed)}$,
$1/5\mbox{(dotted)}$, $\tau_{NDM}(0.11/\Omega_{NDM}
h^2)=1.4\mbox{(dashed)}$,
$2.0\mbox{(dot-dashed)}$, $3.0\mbox{(dotted)}~\times 10^{-26}~\mbox{sec}$,
$m_{NDM}= 2.0\mbox{(dashed)}$, $1.5\mbox{(dot-dashed)}$,
$1.0\mbox{(dotted)}~\mbox{TeV}$.
The red points are the PAMELA data \cite{pamela},
where the predictions are written over the
figure 4 of the PAMELA paper \cite{pamela}.
The solid line is the background published in \cite{pamela}, and it agrees
with the one calculated from
(\ref{p1})-(\ref{p3}) without the primary source of the positron.}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=12cm]{atic.eps}
\caption{\label{atic2}\footnotesize
The differential energy spectrum scaled by $E^3$.
The solid, dashed, dot-dashed and dotted blue lines are the predictions
of the model.
The parameter values used here are the same as for Fig.~\ref{pamela}.
The solid blue line is calculated with $z=1/10$,
$\tau_{NDM}(0.11/\Omega_{NDM} h^2)=0.94
\times 10^{26}~\mbox{sec}$ and $m_{NDM}=3~\mbox{TeV}$.
The predictions are written over the
figure 3 of the ATIC paper \cite{atic}, where the PPB-BETS data
\cite{Torii:2008xu} are also plotted.
The black dashed line is the background
presented in \cite{atic}.
The normalization factor $N_\phi = 0.76$ in (\ref{p1})-(\ref{p3}) is so chosen
that the background computed from (\ref{p1})-(\ref{p3})
agrees with the black dashed line at low energy.
}
\end{center}
\end{figure}
Now we come to compute the positron spectrum:
\begin{eqnarray}
f_{e^+}(E) &=&\int_E^{E_{\rm max}} d E'G_{e^+}(E,E')~
\frac{d \Gamma_{e^+}(E')}{d E'},
\label{fe}
\end{eqnarray}
where $E_{\rm max} = (m_{NDM}^2-m_{\chi DM}^2)/2 m_{NDM}$,
$d \Gamma_{e^+}(E)/d E=(\tau_{NDM})^{-1}
d n_{e^+}(E)/d E$, and we vary $\tau_{NDM}$ freely to fit
the data.
The positron Green's function $G_{e^+}$
of \cite{Hisano:2005ec} can be approximately written
as \cite{Ibarra:2008jk}
\begin{eqnarray}
G_{e^+}(E,E') &\simeq &\left(\frac{\Omega_{NDM}h^2}{0.11}\right)
\frac{10^{16}}{E^2}
\exp [ a+b(E^{\delta-1}-E^{\prime \delta -1}) ]~~\mbox{cm}^{-3}~\mbox{s},
\end{eqnarray}
where $a,b,\delta$ depend on the diffusion model
\cite{Moskalenko:1997gh,Delahaye:2007fr,Ibarra:2008jk}.
Here we use those of the MED
model \cite{Delahaye:2007fr}: $a=-1.0203, b=-1.4493, \delta=0.70$,
and we have assumed that
except for the normalization factor $\Omega_{NDM}h^2/0.11$
the decaying dark matter $\psi_N$ has the same density profile in our galaxy
as the NFW profile \cite{Navarro:1995iw}.
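For concreteness, the approximate Green's function with the MED parameters can be coded directly. A small Python sketch (ours; energies in GeV, and the normalization $\Omega_{NDM}h^2=0.11$ by default):

```python
import math

# MED diffusion-model parameters (energies in GeV)
A, B, DELTA = -1.0203, -1.4493, 0.70

def green_positron(E, Eprime, omega_h2=0.11):
    # approximate positron Green's function G_{e+}(E, E') in cm^-3 s
    if Eprime < E:
        return 0.0   # positrons only lose energy during propagation
    return (omega_h2/0.11)*1e16/E**2 \
        * math.exp(A + B*(E**(DELTA - 1.0) - Eprime**(DELTA - 1.0)))

# at E = E' the exponential reduces to e^a
print(green_positron(100.0, 100.0))   # ~3.6e11
```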
The background differential flux for each species are \cite{Baltz:1998xv}
\begin{eqnarray}
\Phi^{\rm prim. bkg}_{e^-}(E) &=&N_{\phi}0.16 E^{-1.1}\left[
1+11 E^{0.9}+3.2 E^{2.15}\right]^{-1},
\label{p1}\\
\Phi^{\rm sec. bkg}_{e^-}(E) &=&N_{\phi}0.7 E^{0.7}\left[
1+110 E^{1.5}+600 E^{2.9}+580 E^{4.2}\right]^{-1},
\label{p2}\\
\Phi^{\rm sec. bkg}_{e^+}(E) &=&N_{\phi}4.5 E^{0.7}\left[
1+650 E^{2.3}+1500 E^{4.2}\right]^{-1}
\label{p3}
\end{eqnarray}
in units of $[\mbox{GeV}~\mbox{cm}^2~\mbox{s}~\mbox{sr}]^{-1}$,
where the energy $E$ is in GeV, and $N_{\phi}$ is
a normalization factor
which we fix to be $0.76$ from the ATIC data at low energies.
The primary positron differential flux
$\Phi^{\rm prim.}_{e^+}$ is $ (c/4\pi)f_{e^+}$,
where $f_{e^+}$ is given in (\ref{fe}).
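The background fluxes (\ref{p1})-(\ref{p3}) and the positron fraction (\ref{pa}) are straightforward to implement. A Python sketch (ours; the coefficient of $E^{1.5}$ in (\ref{p2}) is taken to be $110$, and energies are in GeV):

```python
def bkg_prim_em(E, N=0.76):
    # primary electron background, eq. (p1)
    return N*0.16*E**-1.1/(1.0 + 11.0*E**0.9 + 3.2*E**2.15)

def bkg_sec_em(E, N=0.76):
    # secondary electron background, eq. (p2)
    return N*0.70*E**0.7/(1.0 + 110.0*E**1.5 + 600.0*E**2.9 + 580.0*E**4.2)

def bkg_sec_ep(E, N=0.76):
    # secondary positron background, eq. (p3)
    return N*4.5*E**0.7/(1.0 + 650.0*E**2.3 + 1500.0*E**4.2)

def positron_fraction(E, prim_ep=0.0):
    # eq. (pa); prim_ep is the primary positron flux from dark matter decay
    num = prim_ep + bkg_sec_ep(E)
    return num/(num + bkg_prim_em(E) + bkg_sec_em(E))

print(positron_fraction(10.0))   # background-only fraction, a few percent
```

A primary positron flux from the decaying $\psi_N$ raises this fraction above the background curve, which is how the PAMELA excess is fitted.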
Using (\ref{fe})-(\ref{p3}), we then calculate appropriate
quantities for PAMELA and ATIC:
\begin{eqnarray}
\frac{e^+}{e^+ +e^-} &=&
\frac{\Phi^{\rm prim.}_{e^+}+
\Phi^{\rm sec. bkg}_{e^+} }{\Phi^{\rm prim. }_{e^+}+
\Phi^{\rm sec. bkg}_{e^+}+\Phi^{\rm prim. bkg}_{e^-}
+\Phi^{\rm sec. bkg}_{e^-}} ~~~\mbox{for PAMELA},
\label{pa}\\
E^3 \frac{dN }{d E}&=&E^3 (\Phi^{\rm prim. }_{e^+}+
\Phi^{\rm sec. bkg}_{e^+}+\Phi^{\rm prim. bkg}_{e^-}
+\Phi^{\rm sec. bkg}_{e^-})
~\mbox{for ATIC}.
\label{at}
\end{eqnarray}
The results are shown in Figs.~\ref{pamela} and \ref{atic2},
where we have assumed $A_1=A_3=A_2$ in (\ref{a1}) and (\ref{a2}).
The blue lines are the predictions of the model, and
we have used:~
$z=1/5\mbox{(dashed)}$, $1/6\mbox{(dot-dashed)}$, $1/5\mbox{(dotted)}$,
$\tau_{NDM}(0.11/\Omega_{NDM} h^2)=1.44\mbox{(dashed)}$,
$2.0\mbox{(dot-dashed)}$,
$3.0\mbox{(dotted)}~\times 10^{-26}~\mbox{sec}$,
$m_{NDM}=2.0\mbox{(dashed)}$, $1.5\mbox{(dot-dashed)}$,
$1.0\mbox{(dotted)}~\mbox{TeV}$,
where $z$ is defined in (\ref{z}).
The predictions are written over the
figure 4 of the PAMELA paper \cite{pamela}
and the figure 3 of the ATIC paper \cite{atic}.
We see from Fig.~\ref{atic2} that
$m_{NDM}$ should be heavier than $O(1)$ TeV
in this model, too.
\section{Conclusion}
We have studied a dark matter model in which one decaying and one stable
dark matter particle coexist. We have assumed that one of the
discrete symmetries
ensuring the stability of the dark matter particles,
when embedded into a larger group, is anomalous, so that
the heavier dark matter can decay non-perturbatively.
The huge suppression factor needed for the dark matter decay
can be obtained in this way.
The concrete model we have considered
is a supersymmetric extension of Ma's inert Higgs model,
so that the decaying dark matter (the lightest right-handed neutrino)
can decay only into leptons along with the stable dark matter (LSP).
We have shown that this scenario can explain
the data of \cite{pamela, atic}.
It is clear that if the recent and future data coming
from the cosmic ray observations are intimately related to
the nature of dark matter,
their explanation may open a window to new physics beyond the SM.
The radiative dark matter decay and high energy neutrino productions
will be our next projects.
\vspace*{5mm}
J.~K. is partially supported by a Grant-in-Aid for Scientific
Research (C) from Japan Society for Promotion of Science (No.18540257).
D.~S. is partially supported by a Grant-in-Aid for Scientific
Research (C) from Japan Society for Promotion of Science (No.21540262).
\bibliographystyle{unsrt}
\section{Introduction}
The detection of highly redshifted ($z>2$) millimeter CO lines in the
hyperluminous object IRAS 10214+4724
($z=2.28$, Brown \& Vanden Bout 1992, Solomon et al. 1992a), has opened
a new way of research to tackle the star formation history of the Universe.
Although the object turned out to be highly gravitationally amplified, it
revealed however that galaxies at this epoch could have large
amounts of molecular gas, excited by an important starburst,
and sufficiently metal-enriched to emit detectable CO emission lines.
The latter bring fundamental information about the cold gas component
in high-z objects and therefore about the physical conditions of
the formation of galaxies and the first generations of stars.
At high enough redshifts, most of the galaxy mass could be molecular.
The main problem to detect this molecular component could be its low
metallicity, but theoretical calculations have shown that in a violent
starburst, the metallicity could reach solar values very quickly
(Elbaz et al. 1992).
After the first discovery, many searches for other candidates took place,
but they were harder than expected, and only a few,
often gravitationally amplified,
objects have been detected: the lensed Cloverleaf quasar
H 1413+117 at $z=2.558$ (Barvainis et al. 1994),
the lensed radiogalaxy MG0414+0534 at $z=2.639$ (Barvainis et al. 1998),
the possibly magnified object
BR1202-0725 at $z=4.69$ (Ohta et al. 1996, Omont et al. 1996a),
the amplified submillimeter-selected hyperluminous galaxies SMM02399-0136
at $z=2.808$ (Frayer et al. 1998), and SMM 14011+0252 at 2.565
(Frayer et al. 1999), and the magnified BAL quasar APM08279+5255,
at $z=3.911$, where the gas temperature derived from the CO lines is
$\sim$ 200K, maybe excited by the quasar (Downes et al. 1999).
Recently Scoville et al. (1997b) reported the detection of the first
non-lensed object at $z=2.394$, the weak radio galaxy 53W002,
and Guilloteau et al. (1997) the radio-quiet quasar BRI 1335-0417, at $z=4.407$,
which has no direct indication of lensing.
If the non-amplification is confirmed, these objects
would contain the largest molecular contents known
(8--10 $\times 10^{10}$ M$_\odot$ with a standard CO/H$_2$
conversion ratio, and even more
if the metallicity is low).
The derived molecular masses are so high that H$_2$ would constitute
between 30 to 80\% of the total dynamical mass (according to the unknown
inclination), if the standard CO/H$_2$ conversion ratio was adopted.
The application of this conversion ratio is however doubtful, and it is
possible that the involved H$_2$ masses are 3-4 times lower (Solomon
et al. 1997).
Obviously, the search for CO lines in high-z objects is still a challenge
for present day instrumentation, but this will rapidly change with
the new millimeter instruments planned over the world
(the Green-Bank-100m of NRAO, the LMT-50m of UMass-INAOE,
the LSA-MMA (Europe/USA) and the
LMSA (Japan) interferometers). It is therefore interesting to
predict with simple models the detection capabilities, as a function
of redshift, metallicity or physical conditions in the high-z objects.
In particular, it would be highly interesting to detect not only
the few exceptional amplified monsters in the sky, but also the
widely spread normal galaxy population of the young universe.
Our aim here is to determine up to which redshift this will be possible,
and with which instruments. A previous study has already modelled
galaxies at very high redshift (up to $z=30$) and concluded
that CO lines could be even easier to detect than the continuum
(Silk \& Spaans 1997). Our models do not agree with this conclusion.
Section 2 describes the two-component model we use for the molecular clouds
of the starburst galaxies, section 3 the cosmological model; both
are combined and the results are discussed in section 4.
Section 5 describes the detection perspectives with the future
millimeter instruments.
\section{Galaxy ISM modeling }
Since the physical conditions of the interstellar medium
are still unknown in early galaxies, the most straightforward
modeling is to extrapolate what we know from the local
star-forming clouds in the Milky Way. Solomon et al. (1990)
showed that the ISM of ultra-luminous starbursting galaxies
is likely to contain a large fraction of dense gas,
corresponding to the star-forming cores of local molecular
clouds. The gross features of the CO and far-infrared emissions
of these luminous starbursts can be explained by a
dense (average density 10$^4$ cm$^{-3}$) molecular component
confined in the central kpc, containing cores of even
higher density ($\sim$ 10$^6$ cm$^{-3}$). These are about
100 times denser than in a normal galactic disk. The FIR to
CO luminosity ratios are nearly those expected for an optically
thick region radiating as a black-body (Solomon et al. 1997).
Typical masses of molecular gas are 2-6 10$^{10}$ M$_\odot$.
The dominant dust component in ultra-luminous galaxies has a
temperature between 30 and 50 K, while there is sometimes
a higher temperature emission bump (e.g. Klaas et al. 1997).
The emission peaks around 100$\mu$m with a flux of 1-3 Jy
at $z = 0.1$. At the higher redshifts of $z =4-5$, continuum
fluxes of a few mJy have been detected at 1.25mm (e.g.
Omont et al. 1996b), corresponding to dust emission at
about 220$\mu$m, with gas masses up to 10$^{11}$ M$_\odot$ if
no gravitational lens is present. As for the CO lines,
they are more difficult to detect with present-day
instrumentation, and most of the 8 detections reported so far
are gravitationally magnified. The detected fluxes
range from 3 to 20 mJy for the (3-2) to (7-6) lines
(see Table \ref{COdata}).
\begin{table}[h]
\caption[ ]{CO data for high redshift objects}
\begin{flushleft}
\begin{tabular}{lllclcl} \hline
Source & $z$ & CO & S & $\Delta$V& MH$_2$ & Ref \\
& &line & mJy & km/s & 10$^{10}$ M$_\odot$ & \\
\hline
F10214+4724 & 2.285 & 3-2 & 18 & 230 & 2$^*$ & 1 \\
53W002 & 2.394 & 3-2 & 3 & 540 & 7 & 2 \\
H 1413+117 & 2.558 & 3-2 & 23 & 330 & 2-6 & 3 \\
SMM 14011+0252&2.565& 3-2 & 13 & 200 & 5$^*$ & 4 \\
MG 0414+0534& 2.639 & 3-2 & 4 & 580 & 5$^*$ & 5 \\
SMM 02399-0136&2.808& 3-2 & 4 & 710 & 8$^*$ & 6 \\
APM 08279+5255&3.911& 4-3 & 6 & 400 & 0.3$^*$ & 7 \\
BR 1335-0417& 4.407 & 5-4 & 7 & 420 & 10 & 8 \\
BR 1202-0725& 4.690 & 5-4 & 8 & 320 & 10 & 9 \\
\hline
\end{tabular}
\end{flushleft}
$^*$ corrected for magnification, when estimated\\
Masses have been rescaled to $H_0$ = 75km/s/Mpc. When multiple images
are resolved, the flux corresponds to their sum\\
(1) Solomon et al. (1992a), Downes et al (1995); (2) Scoville et al. (1997b); (3) Barvainis et
al (1994, 1997); (4) Frayer et al. (1999);
(5) Barvainis et al. (1998); (6) Frayer et al. (1998); (7)
Downes et al. (1999); (8) Guilloteau et al. (1997); (9) Omont et al. (1996a)
\label{COdata}
\end{table}
A striking feature of these ultra-luminous infra-red
objects is that both the IR and CO emissions originate in regions a few
hundred parsecs in radius (Scoville et al. 1997a, Solomon et al. 1997,
Barvainis et al. 1997).
Observations at arcsecond resolution have shown that the molecular
gas is confined in compact sub-kpc components, or disks of a few
hundred parsecs in radius. The average H$_2$ surface density there
is of the order of a few 10$^{24}$ cm$^{-2}$ (Bryant \& Scoville 1996).
The gas must be clumpy, since molecules with high dipole moment are excited,
such as HCN (Solomon et al. 1992b). This strong central concentration
corresponds very well to what is expected in a merger by N-body
simulations (e.g. Barnes \& Hernquist 1992). Due to gas
dissipation and gravity torques,
large H$_2$ concentrations pile up at the galaxy nuclei in interacting
systems. Up to 50\% of the dynamical mass could be in
the form of molecular hydrogen in mergers (Scoville et al. 1991).
The most extreme
ultra-luminous infrared galaxies, which are also mergers and
starbursts possess several 10$^{10}$ M$_\odot$ of H$_2$ gas, more than
10 times the H$_2$ content of the Milky Way.
\subsection{A two-component model}
We will therefore base our simple model on a small inner
disk of 1 kpc diameter, containing two
density components (cf Table \ref{model}).
Both are distributed
in clouds of low volume filling factor.
The dense and hot component represents the star-forming
cores, and for the sake of simplicity, there is one
core in every cloud. Because of
the large velocity gradients in the inner regions
(due to rotation or macroscopic velocity dispersion),
most of these cores do not overlap at a given velocity,
and their molecular emission can simply be summed.
This is not the case for the more extended
clouds (see below), or for the dust, that
could be optically thick at some frequencies.
For the opacity of the dust, we take the formula computed
by Draine \& Lee (1984) and quite compatible with observations
(see Boulanger et al. 1996):
$$
\tau = N_H({\rm cm}^{-2}) \; 10^{-25} \; (\lambda/ 250\mu{\rm m})^{-2}
$$
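This scaling can be evaluated numerically; a minimal sketch (our own, with the opacity decreasing as $\lambda^{-2}$ towards long wavelengths, as stated in the text):

```python
def dust_tau(n_h_cm2, wavelength_um):
    """Dust optical depth tau = N_H * 1e-25 * (lambda / 250 um)^-2
    (Draine & Lee 1984 scaling; column density in cm^-2)."""
    return n_h_cm2 * 1e-25 * (wavelength_um / 250.0) ** -2

# An average column of 10^24 cm^-2 is moderately thick near 100 um
# and very thin in the mm continuum:
print(dust_tau(1e24, 100.0))    # ~0.6
print(dust_tau(1e24, 1250.0))   # ~0.004
```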
The total molecular mass is chosen to be
6 10$^{10}$ M$_\odot$, and the average column density N(H$_2$)
of the order of 10$^{24}$ cm$^{-2}$ (typical of the Orion cloud
center). To fix ideas, we assume that the total mass
is distributed in 8.6 10$^7$ clouds of 700 M$_\odot$
each, with an individual velocity dispersion of 10 km/s,
embedded in a macroscopic profile of 300 km/s dispersion/rotation
(cf Table \ref{model}).
The column density of each cloud
is respectively 3 10$^{22}$ cm$^{-2}$ for the extended
component and 3 10$^{23}$ cm$^{-2}$ for the core.
For the sake of simplicity, we consider cubic clouds
(the cube size is indicated in Table \ref{model}),
and their mass is computed taking into account the
helium mass (total mass
larger than the H$_2$ mass by a factor 1.33).
\begin{table}[h]
\caption[ ]{Parameters of the two-component model}
\begin{flushleft}
\begin{tabular}{lll} \hline
Parameter & Hot comp. & Warm comp. \\
\hline
n(H$_2$) cm$^{-3}$ & 10$^6$ & 10$^4$\\
sizes (pc) & 0.1 & 1 \\
$\Delta$V (km/s) & 10 & 10 \\
$T_K$ ($z$ = 0.1) & 90.0 & 30.0 \\
$T_K$ ($z$ = 1.0) & 90.0 & 30.0 \\
$T_K$ ($z$ = 2.0) & 90.0 & 30.0 \\
$T_K$ ($z$ = 3.0) & 90.0 & 30.0 \\
$T_K$ ($z$ = 5.0) & 90.0 & 30.1 \\
$T_K$ ($z$ = 10.0) & 90.0 & 33.7 \\
$T_K$ ($z$ = 20.0) & 91.0 & 57.5 \\
$T_K$ ($z$ = 30.0) & 98.2 & 84.6 \\
N(CO) cm$^{-2}$ & 3. 10$^{19}$ & 3. 10$^{18}$ \\
N(H$_2$) cm$^{-2}$ & 3. 10$^{23}$ & 3. 10$^{22}$ \\
f$_s^*$ & 1. & 100. \\
f$_v^*$ & 0.03 & 0.03 \\
mass fraction & 0.1 & 0.9 \\
\hline
\end{tabular}
\end{flushleft}
$^*$ f$_s$: surface filling factor; f$_v$: velocity filling factor \\
$T_K$ increases with $z$ keeping $T_{dust}^6 - T_{bg}^6$ constant (see text)
\label{model}
\end{table}
As for the temperatures of the two components, we choose
for local clouds $T_K$ = 30K for the extended component, and
90K for the star-forming cores.
We do not take into account here a hotter component, possibly
heated by the AGN, as seen in F10214+4724 (Downes et al. 1995)
or the Cloverleaf (Barvainis et al. 1997).
These temperatures are
increased at high redshift, to take into account the
hotter cosmic background: indeed the dust is
heated above the background temperature by the
UV and visible light coming from the stars.
Assuming the same star-formation
rate for the clouds, and the same fraction of the star light
reprocessed by the optically thin dust,
with an opacity varying as $\lambda^{-2}$,
the quantity $T_{dust}^6 - T_{bg}^6$ is fixed. This
means that the difference between the energy re-radiated
by the dust ($\propto T_{dust}^6$), and the energy received
from the cosmic background by the dust ($\propto T_{bg}^6$)
is always the same. This is the energy flux coming from the stars.
We assume that the density is so high that the gas is heated
efficiently by the dust, and $T_{dust}$ = $T_K$ (which
maximises the line flux). This condition might not
be satisfied, and our CO line fluxes from the gas
could be somewhat optimistic (however not by a large factor).
The corresponding temperatures as a function
of redshift are displayed in Table \ref{model}.
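The temperatures of Table \ref{model} follow from this conservation rule; a small sketch (our own, assuming $T_{bg} = 2.73\,(1+z)$ K):

```python
def t_dust(z, t0, t_bg0=2.73):
    """Dust temperature at redshift z, keeping T_dust^6 - T_bg^6 at its
    z = 0 value (optically thin dust with opacity ~ nu^2), for a z = 0
    dust temperature t0 and background T_bg = t_bg0 * (1 + z)."""
    t_bg = t_bg0 * (1.0 + z)
    return (t0**6 - t_bg0**6 + t_bg**6) ** (1.0 / 6.0)

# Warm component: recovers Table 2 to within the quoted precision
for z in (5, 10, 20):
    print(z, round(t_dust(z, 30.0), 1))   # -> 30.1, 33.7, 57.5
```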
We have also varied this condition, to take into account
other dust properties: in some dust infra-red spectra,
the opacity dependence with frequency has a lower
power than $\nu^2$ (sometimes $\nu^{1.5}$ or $\nu$);
also the dust could be completely optically thick,
so that the quantity
$T_{dust}^4 - T_{bg}^4$ is conserved.
This relation gives the maximum possible temperatures,
and the results are discussed below. With these two simple models,
the dust temperatures are bracketed.
We compute the relative populations of the CO molecule levels
in each component with the LVG approximation (using 17 levels); results are
displayed in Fig. \ref{pop}, for each temperature component.
At each redshift, the actual level populations are plotted as a full
line, and the LTE distribution as a dotted line for comparison.
At high temperature, the full and dotted lines coincide.
From these distributions, excitation temperatures $T_{ex}$ and
opacities $\tau$ can be derived for each CO line, and summed
with the relevant filling factors for the two components.
Antenna temperatures are obtained through
\begin{equation}
T_A = [ f(T_{ex}) - f(T_{bg})] \; [1 - \exp(-\tau)]
\end{equation}
where $f(T) = \frac{h\nu}{k} \left( \exp(\frac{h\nu}{kT}) -1 \right)^{-1}$.
The flux is then
\begin{equation}
S = \frac{2 k T_A}{\lambda^2} \Omega_S
\end{equation}
where $\Omega_S$ is the angular size of the source.
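Equations (1) and (2) can be transcribed directly; a sketch in CGS units (constants rounded, function names ours):

```python
import math

H_CGS = 6.626e-27    # Planck constant [erg s]
K_CGS = 1.381e-16    # Boltzmann constant [erg/K]

def f_planck(t_kelvin, nu_hz):
    """f(T) = (h nu / k) / (exp(h nu / kT) - 1), in Kelvin."""
    hnu_k = H_CGS * nu_hz / K_CGS
    return hnu_k / math.expm1(hnu_k / t_kelvin)

def antenna_temperature(t_ex, t_bg, tau, nu_hz):
    """Eq. (1): background-subtracted line brightness temperature."""
    return (f_planck(t_ex, nu_hz) - f_planck(t_bg, nu_hz)) * (1.0 - math.exp(-tau))

def flux_density_jy(t_a, nu_hz, omega_source_sr):
    """Eq. (2): S = 2 k T_A / lambda^2 * Omega_S, converted to Jansky."""
    lam_cm = 2.998e10 / nu_hz
    return 2.0 * K_CGS * t_a / lam_cm**2 * omega_source_sr * 1e23

# An optically thick CO(1-0) line (115.27 GHz) from 30 K gas:
t_a = antenna_temperature(30.0, 2.73, 100.0, 115.27e9)   # ~26.5 K
```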
The emission from individual clouds is summed directly, as long
as the total filling factor (i.e. the product of the surface
and velocity filling factor for the lines, or only the
surface filling factor for the continuum) is smaller than 1.
When the total filling factor is larger than 1,
the overlap of clouds is taken into account, and
the opacity along each line of sight is increased by this factor.
This accounts for
the overcrowding and self-absorption of the cloud population.
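Our reading of this prescription can be sketched as follows (a schematic of the bookkeeping only, not the full LVG machinery; names ours):

```python
import math

def line_brightness(t_line, tau_cloud, f_s, f_v):
    """Sum the emission of many identical clouds, each of brightness
    t_line * (1 - exp(-tau_cloud)).  When the total filling factor
    f = f_s * f_v exceeds 1, the clouds overlap and the optical depth
    on each line of sight is boosted by f instead."""
    f = f_s * f_v
    if f <= 1.0:
        return f * t_line * (1.0 - math.exp(-tau_cloud))
    return t_line * (1.0 - math.exp(-f * tau_cloud))

# Warm component of Table 2 (f_s = 100, f_v = 0.03, overlap f = 3):
print(line_brightness(27.0, 10.0, 100.0, 0.03))   # ~27 K: saturated
# Hot cores (f = 0.03, no overlap): emission scales with f
print(line_brightness(27.0, 10.0, 1.0, 0.03))     # ~0.8 K
```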
We computed the equivalent CO to H$_2$ conversion factor in this
model, for the CO(1-0) flux at $z = 0.1$: it is X = 5 10$^{20}$
mol cm$^{-2}$ (K km s$^{-1}$)$^{-1}$. At higher redshift, X increases, since
the gas is hotter and the emission comes out in higher-$J$ lines than CO(1-0);
it is 9 10$^{20}$ at $z = 5$ and 280 10$^{20}$ at $z = 30$.
\begin{figure}
\psfig{width=8.5cm,file=8282_f1a.ps,bbllx=4cm,bblly=1cm,bburx=20cm,bbury=24cm,angle=-90}
\psfig{width=8.5cm,file=8282_f1b.ps,bbllx=4cm,bblly=1cm,bburx=20cm,bbury=24cm,angle=-90}
\caption{ Relative populations of the CO molecules, for the hot core (upper)
and warm cloud (bottom) components. The 8 curves correspond to redshifts
$z$= 0.1, 1, 2, 3, 5, 10, 20, 30. The values of
$T_K$ indicated are for $z$= 0 (see Table \ref{model}), and they do not vary
up to $z$ = 5. The full curves are the actual distributions, and
the dotted curves correspond to the LTE ones, at the corresponding $T_K$
(cf Table 2).
The first 5 dotted curves are superposed (same $T_K$), and the last two
($z$ = 20 and 30) coincide with their corresponding full curves. }
\label{pop}
\end{figure}
\subsection{A homogeneous sphere model}
We also computed a quite different model of the molecular gas
in starbursts: a homogeneous sphere model. This is motivated by the
consideration that the tidal forces and star formation energy
could be so high in the center as to disrupt molecular clouds.
The CO to H$_2$ conversion factor could then be quite different
from the standard one, based on a collection of virialised clouds.
The total mass and size are the same in this model: 6 10$^{10}$
M$_\odot$ and 1 kpc diameter. The unique temperature is chosen to
be T$_d$ = 50K. Since the surface density of the homogeneous source
is 3.5 10$^{24}$ cm$^{-2}$, the dust is optically thick for
wavelengths $\lambda < $ 150$\mu$m, and the emission of the
central molecular region can be approximately modelled
by a black-body (e.g. Solomon et al. 1997).
The average density is however 10$^{3}$ cm$^{-3}$, not enough to reach
LTE even in the low levels. The LVG model yields in this case an
excitation temperature approaching $T_K$ for the first CO levels,
then decreasing to a minimum, which can be 3 times lower, and increasing
slightly for the highest levels (this translates into oscillations
of the flux versus wavelength, as shown in Fig. 7 below).
The equivalent CO to H$_2$ conversion factor in this
model, for the CO(1-0) flux at $z$ = 0.1, is now X = 3.7 10$^{20}$
mol cm$^{-2}$ (K km s$^{-1}$)$^{-1}$ (5.1 10$^{20}$ at $z$ = 5 and 370 10$^{20}$ at
$z$ = 30).
It must be noted that the homogeneous gas model minimises the
optical depth of the CO lines, at a given velocity, since both
the spatial and velocity filling factors are 1, and consequently, it
should maximise the amount of CO emission per gas mass. However, since
the corresponding density is then small (here 10$^{3}$ cm$^{-3}$),
the rotational levels are not excited to a $J$ as high as in the
two-component clumpy model, and the excitation temperature is
less than the kinetic temperature (cf Fig. \ref{poph}).
This reduces the efficiency of the CO line emission.
One way to obtain a better emissivity is to enlarge the
starburst region to a size of several kpc, but this is not likely
to occur, given the observations of ultra-luminous galaxies
at low redshift, and the huge star formation rate then required.
In summary, the clumpy and homogeneous models are
two extreme simple models that help us to explore the large
range of cloud distribution possibilities.
\begin{figure}
\psfig{width=8.5cm,file=8282_f2.ps,bbllx=4cm,bblly=1cm,bburx=20cm,bbury=24cm,angle=-90}
\caption{ Same as Fig. \ref{pop}, for the homogeneous sphere model,
at $T_K$ = 50 K, at $z$ = 0.
The first 6 dotted curves are superposed (same $T_K$), and the last two
($z$ = 20 and 30) almost coincide with their corresponding full
curves. }
\label{poph}
\end{figure}
The principal way to optimize the CO line emissivity is to assume
that the lines are never highly optically thick, as assumed by
Barvainis et al (1997). In their first interpretation of the CO lines
from the Cloverleaf, Barvainis et al (1994) derived an H$_2$ mass
of 4 10$^{11}$ M$_\odot$ (uncorrected for magnification), about an order
of magnitude larger than in Barvainis et al (1997). In the latter work,
the authors claim that the optical depth of the CO lines cannot be
much larger than 1, since the CO line ratios between the (3--2), (4--3),
(5--4) and (7--6) lines would then be Rayleigh-Jeans (in $\nu^2$),
which is not confirmed by observations. However, the various rotational
lines might not come from the same area, and their relative
magnification ratios are unknown. This prevents a definite conclusion
about the optical depth of the lines. Let us note that the low-optical
depth models are rather contrived, since to have a high enough excitation,
the density must be substantial, while the column density must be kept low.
This results, for instance, in clumps of 0.1 pc size in the
T = 60 K model of Barvainis et al (1997), that completely cover
the source surface (with a low volume filling factor). To reach a diffuse
homogeneous sphere, the temperature should be larger than T = 300 K
(but then the expected dust emission is too large).
In our standard two-component clumpy model, the low-$J$ lines
are highly optically thick ($\tau \geq$ 100), and their relative intensities
are in the Rayleigh-Jeans ratio. We explore the optically thin cases in section
4, and in particular study the evolution of the continuum-to-line ratio.
\section{Cosmological model}
For very distant ($z>1$) objects, the luminosity and angular size distances
are significantly different (cf examples in Fig. \ref{distances}).
We parametrize as usual the cosmological model by the Hubble constant
$H_0$ (here fixed to 75 km/s/Mpc), and the deceleration parameter $q_0$.
We choose two values $q_0$= 0.1 and 0.5 to illustrate
our computations here, corresponding respectively to an open universe, and
to a critical (Einstein-deSitter model) flat one, for a zero cosmological
constant (since in this case $\Omega_0$ = 2 $q_0$).
For a matter-dominated Friedmann universe,
the luminosity distance of an object at redshift $z$ is (e.g. Weinberg 1972):
\begin{equation}
D_L = \frac{c}{H_0 q_0^2}[zq_0 + (q_0-1)(-1 + \sqrt{2q_0z+1})]
\end{equation}
and the angular size distance:
\begin{equation}
D_A = (1 + z)^{-2} D_L
\end{equation}
As can be seen in Fig. \ref{distances}, the latter decreases with
$z$ as soon as $z>2$, i.e. the apparent diameter of objects increases with $z$.
This may give the spurious impression that sources at high redshift will
be easier to detect; in fact the measured integrated flux
decreases as $D_L^2$.
It is interesting to introduce the correction factor $F_z$, which is
the square of the ratio of the luminosity distance to the extrapolation of the
low-redshift distance formula $cz/H_0$ (see for instance Gordon et al. 1992):
\begin{equation}
F_z = \left(\frac{D_L H_0}{cz}\right)^2
\end{equation}
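Equations (3)-(5) translate directly into code; a minimal sketch (function names ours; $H_0$ = 75 km/s/Mpc as adopted here):

```python
import math

C_KMS = 2.998e5   # speed of light [km/s]

def d_luminosity(z, q0, h0=75.0):
    """Eq. (3): luminosity distance in Mpc for a matter-dominated
    Friedmann model (Weinberg 1972)."""
    return (C_KMS / (h0 * q0**2)) * (
        z * q0 + (q0 - 1.0) * (math.sqrt(2.0 * q0 * z + 1.0) - 1.0))

def d_angular(z, q0, h0=75.0):
    """Eq. (4): angular size distance in Mpc."""
    return d_luminosity(z, q0, h0) / (1.0 + z) ** 2

def correction_factor(z, q0, h0=75.0):
    """Eq. (5): F_z = (D_L H_0 / (c z))^2."""
    return (d_luminosity(z, q0, h0) * h0 / (C_KMS * z)) ** 2

# For q0 = 0.1, F_z grows almost linearly with z:
print(correction_factor(10.0, 0.1))   # ~11.6
```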
This correction factor is also plotted in Fig. \ref{distances}, and
varies almost as $z$ for $q_0$ = 0.1. This means that for a
given intrinsic luminosity, this factor makes it more and more difficult
to detect objects at high redshift. The only favorable factor
in the millimetric domain is what is usually called
``a negative K-correction''\footnote{The term K-correction may appear
confusing; it was first used by Wirtz in 1918,
standing for ``Kosmologie'', to determine the distance-redshift law; Hubble (1929)
also called his now famous constant K (after Wirtz's law), and
Oke \& Sandage (1968) quantified what they called the K-correction:
the combined effect of the shifted $\lambda$ domain (if the spectrum of the
object is not flat), and the reduced $\lambda$-interval observed at high
redshift, for a given photometric band.}, i.e. that the flux
of the object increases with the frequency $\nu$ faster
than $\nu^2$ in the wavelength domain considered, and therefore
that its apparent luminosity could increase with redshift. This occurs
for the dust emission from starbursts, which peaks
in the 60-100$\mu$m region, and is progressively redshifted into the
sub-millimeter and millimeter domains, where it is as easy to detect
objects at $z=5$ as at $z=1$ (Blain \& Longair 1993,
McMahon et al. 1994, Omont et al. 1996b, Hughes et al. 1998).
\begin{figure}
\psfig{width=8.5cm,file=8282_f3.ps,bbllx=15mm,bblly=5mm,bburx=20cm,bbury=21cm,angle=-90}
\caption{ Angular size distance ($D_A$), luminosity distance ($D_L$)
and correction factor $ F_z = (\frac{D_L H_0}{cz})^2$, for two values of
$q_0$ (0.5 full line, 0.1 dashed line).}
\label{distances}
\end{figure}
To estimate detection capabilities, we will consider only point
sources, for the sake of simplicity.
At $z=2$, a beamwidth of 1$''$, reached already by present mm interferometers,
encompasses about 7 kpc, which is much larger than the area of dense
and excited molecular gas in a starburst. With the foreseen next generation
mm instruments, it will be possible to begin to resolve the emission
only for the best possible resolution (0.1$''$) and for very large redshifts,
provided there is enough sensitivity.
At the present time, galaxies are detected in the optical only up
to $z \sim 6$. At this epoch, the age of the universe is about
5\% of its present value of 10$^{10}$ yrs, in a standard flat model.
For larger redshifts, it is likely that the total amount of
cumulated star formation is not a significant fraction of the total
(e.g. Madau et al 1996). However, it is of overwhelming interest
to trace the first star-forming structures, as early as possible
to constrain theories of galaxy formation. At the recombination of
matter, at $z \sim$ 1000, the first structures to become unstable
have masses between 10$^5$ and 10$^8$ M$_\odot$, and at $z \sim$ 30,
it is possible that some structures of 10$^{10}$-10$^{12}$ M$_\odot$ become
in turn non-linear, according to models, so we have carried our
estimates up to such extreme redshifts ($\sim$ 30), even if such massive
objects are not likely to be numerous so early.
\section{Results and discussion}
\subsection{The standard clumpy model}
The computed flux in the CO lines (not integrated in frequency)
for the two-component cloud model is shown
in Fig. \ref{flux6} for the 8 values of the redshift considered.
The two temperature contributions can
be seen clearly, although they tend to merge at high redshift.
The effect of the assumption on the gas temperatures can be seen
by comparison with Fig. \ref{flux4}, which maximises the gas
temperatures. We can see at once that
the largest sensitivity for CO detection is
around the CO(6-5) or CO(5-4) lines for redshifts up to $z$ = 5.
At larger redshifts, the higher lines CO(15-14) or CO(14-13)
will be the best choice, in the 3-5mm domain.
The first effect to notice is the strong negative K-correction
in the dust emission,
which makes its detection almost as easy at $z$ = 5 as at $z$ = 1.
For a fixed wavelength range, for instance at 3mm, we see that
the various redshift curves of Fig. \ref{flux6} are almost touching
each other, or are even $z$-reversed. However, the effect is
much more favorable for continuum detection. For CO lines,
we see more precisely that there is a factor 3 in flux between these
two redshifts ($z$ = 1 and 5)
for a given $\lambda$, and therefore that ten times more
integration time is required to detect the same objects at $z$ = 5
with respect to $z$ = 1.
This difference between the lines and continuum comes from the
fact that the lines are highly optically thick at long wavelengths:
the right sides of the line curves in Fig. \ref{flux6}
go as $\lambda^{-2}$ (the Rayleigh-Jeans approximation), while
for the continuum they go as $\lambda^{-4}$, due to the dust opacity
in $\nu^2$, and the fact that at low redshift and at these long wavelengths
the dust is optically thin. The K-correction advantage is then
much stronger for the dust emission; it is even optimum at very
high redshift ($z$ = 10 to 30), since the maximum dust emission then
enters the mm domain, and begins to be optically thick.
\begin{figure}
\psfig{width=8.5cm,file=8282_f4.ps,bbllx=15mm,bblly=5mm,bburx=21cm,bbury=12cm,angle=-90}
\caption{ Expected flux for the two-component cloud model, for various
redshifts $z$ = 0.1, 1, 2, 3, 5, 10, 20, 30, and $q_0$ = 0.5.
Top: the CO lines, each marked by a circle.
Bottom: the continuum emission from dust.
It has been assumed here that $T_{dust}^6 - T_{bg}^6$ is conserved.}
\label{flux6}
\end{figure}
\begin{figure}
\psfig{width=8.5cm,file=8282_f5.ps,bbllx=15mm,bblly=5mm,bburx=21cm,bbury=12cm,angle=-90}
\caption{ Same as figure \ref{flux6}, but
with $T_{dust}^4 - T_{bg}^4$ conserved, as for an optically
thick medium (see text for details).}
\label{flux4}
\end{figure}
Fig. \ref{flux6} demonstrates that it is easier to detect the dust
emission of starbursts at $z > $10 than at $z$ = 5, at 1mm, and therefore
it should be possible to detect them with today's instruments. However
all these estimates were computed for $q_0$ = 0.5. For $q_0$ = 0.1,
the fluxes are smaller at high redshift, as indicated in
figure \ref{fluxq1}.
\begin{figure}
\psfig{width=8.5cm,file=8282_f6.ps,bbllx=15mm,bblly=5mm,bburx=21cm,bbury=12cm,angle=-90}
\caption{ Same as figure \ref{flux6}, but with $q_0$ = 0.1.}
\label{fluxq1}
\end{figure}
A second obvious point to note, as far as the detection
of CO lines at high-$z$ is concerned, is that the maximum of
emission is always at longer wavelengths than for the continuum
(notice the different $\lambda$ scales in Fig. \ref{flux6}).
In the case of the CO lines, the emission peaks at a frequency
lower by almost a factor 5 than in the case of the continuum.
This is easy to understand, since it is the energy of the upper level $J$
of the transition which corresponds to the gas temperature; this energy
is proportional to $J (J+1)$. The energy of the transition is only
a fraction of it, proportional to $2 J$. Between the two, the ratio
is almost a factor 5, in the case of the CO molecule, excited at a
temperature of $\sim$ 90 K. The two peaks will be much closer
for molecules of higher rotational energy, such as H$_2$O for
instance, but their lines are expected to be much weaker and not
as favorable for detection (because of clumpiness and high optical depth).
A question that could be asked is whether the hotter cosmic background
at high $z$ helps in the detection of the CO lines.
It can be seen on the relative populations of CO levels (Fig. \ref{pop})
that the density and column-densities we adopted for starburst objects
are always high enough to excite the levels nearly thermally (almost LTE).
The effect of going to high redshift does not act on the excitation
directly, but on the absolute temperature of the gas. The same effect
occurs for the dust emission: temperatures are higher at high redshift,
for the same star formation rate. But this does not help {\it in fine}
on the detected flux level, since the observed flux takes into
account the subtraction of the black-body emission itself (cf equation (1))
which is also higher at high $z$. The net effect for lines is even negative,
as shown in Fig. \ref{flux6}: the high-$z$ curves drop down from the
general tendency at lower redshift. This effect is generally
ignored when such estimations are done, and this is justified
for redshift $\leq$ 5 (e.g. Guiderdoni et al. 1998). Very often,
for quick estimations, the curves of emission for a given object are simply
redshifted (translated in log-log plots) to estimate the K-correction
(e.g. Norman \& Braun 1996, Israel \& van der Werf 1996).
One can also take into account that it is
the {\it integrated} flux which is relevant for detecting a line, since smoothing
proportionally to the line width increases the signal-to-noise.
Because the width of the line (at constant $\Delta V$) is proportional to
the frequency $\nu$, detection at high frequencies is then
favored, as shown in Fig. \ref{flux-int}.
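This scaling can be made concrete: at fixed $\Delta V$ the frequency width, and hence the integrated flux for a given peak flux, grows linearly with $\nu$ (a trivial sketch, names ours):

```python
def integrated_line_flux(s_peak_jy, delta_v_kms, nu_obs_ghz):
    """Velocity-integrated line flux S * Delta_nu in Jy GHz, with the
    frequency width Delta_nu = nu * Delta_V / c growing linearly
    with the observed frequency."""
    delta_nu_ghz = nu_obs_ghz * delta_v_kms / 2.998e5
    return s_peak_jy * delta_nu_ghz

# The same 10 mJy, 300 km/s line carries 5x more integrated flux
# when observed at 230 GHz than at 46 GHz:
ratio = integrated_line_flux(0.010, 300.0, 230.0) / integrated_line_flux(0.010, 300.0, 46.0)
print(ratio)   # ~5
```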
\begin{figure}
\psfig{width=8.5cm,file=8282_f7.ps,bbllx=15mm,bblly=5mm,bburx=105mm,bbury=12cm,angle=-90}
\caption{ Same as figure \ref{flux6}, but for the integrated flux,
$\int S_{\nu} d\nu$, more relevant for detection.}
\label{flux-int}
\end{figure}
\subsection{The homogeneous model}
\begin{figure}
\psfig{width=8.5cm,file=8282_f8.ps,bbllx=15mm,bblly=5mm,bburx=21cm,bbury=12cm,angle=-90}
\caption{ Same as figure \ref{flux6}, but with the homogeneous sphere
model, with T$_d$ = 50K. The non-monotonic behaviour is due to excitation far
from LTE (see Section 2.2).}
\label{cohz_5}
\end{figure}
In the case of the homogeneous sphere model, even though the CO to H$_2$
conversion factor is similar for the CO(1-0) line, our prospects
for detecting such objects at high $z$ are much less optimistic,
because of the lower common
temperature, and the lower excitation of the gas, which is now
at low average density. The lines are significantly excited only
up to the CO(6-5), and this reduces the flux at high redshift,
for $z$ = 20 and 30 (see Fig. \ref{cohz_5}). The dust emission is also
considerably reduced at high redshift.
Let us note that such low dust temperatures ($\leq 50K$) are relevant for a
fraction of high-$z$ sources such as BR1202-0725 (Cox et al. 1999), but
not for others such as F10214+4724 (Downes et al. 1995), the Cloverleaf
(Barvainis et al. 1997), or APM08279+5255 (Downes et al. 1999).
\subsection{Less optically thick models}
The standard two-component clumpy model is optically thick in most
CO lines. The optical depth comes essentially from the depth of
the cloud components themselves, but also from the
overlap of the clouds, at a given velocity. As can be seen in Table
\ref{model}, the overlap amounts to $f_s f_v$ = 3 for the warm
component. It is therefore possible to have nearly the same CO emission
for about 3 times less gas mass, at least for the low-$J$ lines.
The high-$J$ line emission is provided mainly by the core component,
for which there is no overlap ($f_s f_v$ = 0.03). There is no significant
absorption of the core emission by the warm component either,
since at the high frequencies of the main core contribution,
the warm component is optically thin. Therefore, dividing the
total H$_2$ mass by 3, without modifying the cloud structure,
will result in about the same low-$J$ line emission, but 3-times
less high-$J$ emission. The continuum emission will also not be
simply divided by 3 (since the dust $\tau\ge$ 1 for $\lambda \le$ 200 $\mu$m),
but its spectrum will change shape. It is interesting to
plot the continuum-to-line ratio for the various cases.
In figure \ref{cont-line-ratio}, we plot both the total dimensionless
L$_{FIR}$/L$_{CO}$ ratio, where L$_{CO}$ is integrated over all lines,
and the L$_{FIR}$/L$_{1-0}$ ratio, when only the CO(1-0) line is taken
into account, as is done observationally (e.g. Solomon et al 1997).
We can see that both ratios increase with redshift, which confirms
the fact that the continuum will be easier to detect at high $z$
than the CO lines. The two cases, with 6 and 2 10$^{10}$ M$_\odot$
of gas, have rather similar L$_{FIR}$/L$_{CO}$ ratios
(although the L$_{FIR}$/L$_{1-0}$ ratio shows a marked effect at
low redshift).
Another obvious way to change the optical depth, without changing
the cloud structure, is to change the metallicity, and consequently
the CO and dust abundances. It is quite natural to keep the
cloud structures (and their high H$_2$ density) for starburst objects;
lower densities will not be able to excite the high-$J$ CO lines,
penalizing detectability at high $z$. It is, however, likely that
gas at high $z$ is less enriched in metals and dust.
We have varied the CO/H$_2$ abundance ratio from 10$^{-4}$ in the
standard model to 10$^{-8}$ by factors of 10, keeping the total gas
mass at 6 10$^{10}$ M$_\odot$. Figure \ref{cont-line-ratio}
summarizes the results. For lower metallicities,
since both CO lines and dust are then entirely
optically thin, the continuum-to-line ratio reaches a constant
value, independent of total mass. It is nearly two orders of magnitude
lower than in the standard model.
Note that our models bracket the range of observed values. The
standard model has a high ratio, due to its high gas mass and high dust
temperature (the ratio varies at least as T$_d^3$, and even more for
moderate optical thickness).
\begin{figure}
\psfig{width=8.5cm,file=8282_f9.ps,bbllx=5mm,bblly=3cm,bburx=18cm,bbury=18cm,angle=0}
\caption{ Luminosity ratio between the total far-infrared (FIR) dust emission
and the total integrated CO line emission (full lines), or only the integrated
CO(1-0) emission (dashed lines), as a function of redshift.
The lines are labeled by the log of the CO/H$_2$ abundance ratio ($-4$ for
the standard model). The dotted lines indicate the result of dividing
the gas mass of the standard model by 3 (labeled M/3). The dash-dotted lines
correspond to the homogeneous model (labeled h) displayed in Fig.~8. The ratio for the 37
ultra-luminous infrared galaxies observed by Solomon et al (1997)
is marked as filled squares.}
\label{cont-line-ratio}
\end{figure}
The resulting spectra for the optically thin model, with CO/H$_2$ = 10$^{-6}$,
are plotted in Fig.~\ref{thin}. The same behaviour as a function
of redshift is observed, although the fluxes are lower, mainly
in the continuum.
\begin{figure}
\psfig{width=8.5cm,file=8282_f10.ps,bbllx=15mm,bblly=5mm,bburx=21cm,bbury=12cm,angle=-90}
\caption{ Same as figure \ref{flux6}, but for one optically thin model,
with CO/H$_2$ = 10$^{-6}$.}
\label{thin}
\end{figure}
\subsection{Conclusion}
In summary, we can see that objects at very high redshift are much more
likely to be detected in the continuum dust emission than in the CO lines.
This is essentially due to a stronger K-correction advantage for the
continuum, due to the lower opacity of dust. However, the
detection of CO lines brings much
complementary information: the redshift,
the line widths and the kinematics of the gas. Line
detection also suppresses the source-confusion problems that
can affect continuum surveys with single dishes.
We remark that we do not retrieve the surprising result
of Silk \& Spaans (1997) that the CO lines are even easier
to detect than the continuum at very high redshift.
In our results, the continuum-to-line flux ratio increases with redshift
whatever the model and the optical thickness.
In figure 2 of Silk \& Spaans (1997), the maximum line
flux for the CO lines does not vary significantly
with redshift from 5 to 30, while we have in all cases
a variation as large as (1+z)$^{-2.5}$ in the same range.\footnote{
We note that they do not take into account the overlap of
clouds along the line of sight, but in our model this is only a factor of 3,
and cannot account for the factor $\sim$ 60 discrepancy.}
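The factor of $\sim$ 60 mentioned in the footnote follows directly from the (1+z)$^{-2.5}$ variation between $z=5$ and $z=30$; a one-line check (the exponent is the one quoted above):

```python
z_low, z_high = 5.0, 30.0
# line flux varying as (1+z)^{-2.5}: drop in flux between z = 5 and z = 30
drop = ((1.0 + z_high) / (1.0 + z_low)) ** 2.5
print(round(drop, 1))  # close to 60
```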
\section{Detection perspectives}
We have computed, for both line and continuum, the integration time
required to detect a high-redshift object, with the best possible CO line
or the best continuum frequency. These are displayed in Tables
\ref{line-h} and \ref{cont-h}. A signal-to-noise ratio of 5
has been assumed on the continuum flux $S_{\nu}$, and
these estimates have been made
for the two-component model of Fig. \ref{flux6}.
For the line, the adopted signal-to-noise ratio is 3 when the
noise is smoothed over a 30 km/s width, or 9 when smoothed over
the whole line width (300 km/s).
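These integration times follow the standard point-source radiometer scaling, $t \propto (\mathrm{SNR}\cdot T_{sys}/S_\nu)^2/\Delta\nu$. A schematic sketch follows; the helper only illustrates the scaling, and the numerical inputs (system temperature, efficiency, collecting area, bandwidth) are placeholders, not the values adopted for any specific telescope:

```python
K_B = 1.38e-23  # Boltzmann constant [J/K]

def integration_time(s_nu_jy, snr, t_sys_k, area_m2, eta, d_nu_hz):
    """Point-source integration time from the radiometer equation:
    rms noise sigma = 2 k T_sys / (eta A sqrt(d_nu t)), solved for t
    at sigma = s_nu / snr."""
    s_si = s_nu_jy * 1e-26  # flux density in W m^-2 Hz^-1
    return (snr * 2.0 * K_B * t_sys_k / (eta * area_m2 * s_si)) ** 2 / d_nu_hz

# scaling checks: time goes as 1/S^2 and as 1/d_nu
t_ref = integration_time(1.0, 5.0, 100.0, 7200.0, 0.7, 4e9)
print(integration_time(2.0, 5.0, 100.0, 7200.0, 0.7, 4e9) / t_ref)  # -> 0.25
```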
Since the sub-mm and mm domain is in fact punctuated by transparent
atmospheric windows and broad opaque regions, due to O$_2$ and H$_2$O,
some particular redshifts will be severely disfavored for
CO line detection. We have not taken such a complex
frequency dependence into account in our estimates; instead, we have
kept only the upper envelope of atmospheric transmission
over the whole domain (in Table \ref{line-h}, we have however avoided
estimating integration times exactly at the center
of atmospheric lines). The final estimates are therefore valid
statistically for a large sample of sources, but can be wrong for
a given object whose best CO lines fall in an opaque
atmospheric region. This is not as severe for continuum detections.
\begin{figure}
\psfig{width=8.5cm,file=8282_f11.ps,bbllx=22mm,bblly=5mm,bburx=105mm,bbury=12cm,angle=-90}
\caption{ System temperature as a function of frequency adopted
for the LSA/MMA instrument.
Dashed curve: actual expected T$_{sys}$, corresponding to a receiver
temperature equal to 2 h$\nu$/k, operating in full SSB mode,
with 1 mm of water vapour, temperature 0$^{\circ}$C (at an altitude of 5 km);
Full curve: second-order polynomial fit}
\label{atm}
\end{figure}
For presently operating telescopes, the system temperature is
taken from well-known measurements between 1 and 3 mm, and then
extrapolated as a second-order polynomial in frequency.
For planned instruments, the large foreseen improvement in
receivers is taken into account (about a factor of 4 in
system temperature, and therefore in noise power,
cf.\ Guilloteau 1996).
This corresponds to receiver temperatures of the order of 2 $h\nu/k$,
and SSB mode (at least 20 dB rejection).
For the LSA-MMA project, a configuration of 64 antennae of 12 m has been
adopted (providing a collecting surface of 7200 m$^2$).
The expected surface efficiencies of the telescopes and their
altitudes are also taken into account.
Fig. \ref{atm} shows how the second-order polynomial
fit corresponds to the lower envelope of the expected T$_{sys}$ for
the LSA/MMA. For the continuum, the flux is smoothed
over 0.5 and 4 GHz for the IRAM and LSA/MMA interferometers
respectively, and over 0.1 $\nu$ for bolometers. The sensitivity
figures given in Table \ref{cont-h} for the continuum receivers
have been taken from Holland et al (1998) for JCMT-Scuba,
and from Glenn et al (1998) for Bolocam.
All estimates have been done assuming point sources.
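For reference, the receiver-temperature assumption T$_{rec}$ = 2 $h\nu/k$ quoted above is easy to evaluate; at 230 GHz it gives about 22 K (this helper is only an illustration of that assumption):

```python
H_PLANCK = 6.626e-34  # Planck constant [J s]
K_B = 1.381e-23       # Boltzmann constant [J/K]

def t_rec(nu_hz):
    # assumed receiver temperature: T_rec = 2 h nu / k
    return 2.0 * H_PLANCK * nu_hz / K_B

print(round(t_rec(230e9), 1))  # about 22 K at 230 GHz
```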
\begin{table*}[h]
\caption[ ]{Smallest integration time to detect CO lines (with optimum
$\nu$ in GHz)}
\begin{flushleft}
\begin{tabular}{lll @{\hspace{1cm}}|| @{\hspace{1cm}} lll}
\multicolumn{3}{c @{\hspace{1cm}}|| @{\hspace{1cm}} }{Present Receivers}
& \multicolumn{3}{c}{Future Receivers} \\
&&&&&\\
\hline
&&&&&\\
z & IRAM-30m & IRAM-PdB & GBT-100m & LMT-50m & LSA-MMA \\
&&&&&\\
\hline
&&&&&\\
0.1& 3mn (209) & 1mn (209) & 0.4s (105) & 1.5s (209) & 0.04s (209) \\
1 & 36h (230) & 16h (230) & 4mn (115) & 15mn (230) & 22s (230) \\
2 & 180h (230) & 70h (230) & 8mn (115) &1h15 (230) & 2mn (230) \\
3 & 210h (144) & 86h (144) & 13mn (115) &1h15 (144) & 2mn (144) \\
5 & 36d (115) & 14d (115) & 1h (115) & 10h (115) & 15mn (115) \\
10 & -- & -- & 13h (115) & 5d (146) & 3h (146) \\
20 & -- & -- & 90h (77) & -- & 28h (77) \\
30 & -- & -- & 54d (52) & -- & 15d (52?) \\
\hline
\end{tabular}
\end{flushleft}
The time is that required for a 3$\sigma$ detection of the flux
$S_{\nu}$, smoothed to 30 km/s \\
(one tenth of the profile width of 300 km/s), equivalent to a 9$\sigma$
detection smoothed to 300 km/s. \\
For all future receivers, $T_{sys}$ has been computed assuming
$T_{rec}$ = 2 h $\nu$/k, and 1 mm of water vapour
\label{line-h}
\end{table*}
\begin{table*}[h]
\caption[ ]{Smallest integration time to detect dust continuum
(at 5$\sigma$, with optimum $\lambda$ in $\mu$m)}
\begin{flushleft}
\begin{tabular}{lll @{\hspace{1cm}}|| @{\hspace{1cm}} lll}
\multicolumn{3}{c @{\hspace{1cm}}|| @{\hspace{1cm}} }{Present Receivers}
&\multicolumn{3}{c}{Future Receivers} \\
&&&&&\\
\hline
&&&&&\\
z & IRAM-30m & JCMT-Scuba & CSO-Bolocam & LMT-Bolocam & LSA-MMA \\
&&&&&\\
\hline
&&&&&\\
0.1& 6mn (1250)&25s (350) & 50s (1100) & 0.1s (1100) & 36ms (850) \\
1 &6.7h (1250)& 1h (450) & 55mn (1100) & 6s (1100) & 2.7s (850) \\
2 &3.9h (1250)&52mn (850) & 35mn (1100) & 4s (1100) & 1.8s (850) \\
3 &2.2h (1250)&38mn (850) & 20mn (1100) & 2.3s (1100) & 1.5s (850) \\
5 &1.1h (1250)&22mn (850) & 10mn (1100) & 1.2s (1100) & 0.9s (1250) \\
10 &24mn (1250)& 9mn (850) & 4mn (1100) & 0.5s (1100) & 0.3s (1250) \\
20 &13mn (1250)&19mn (850) & 3mn (1100) & 0.4s (1100) & 0.15s (1250) \\
30 &44mn (1250)&1.2h (1350)& 16mn (1100) & 1.8s (1100) & 0.6s (1250) \\
\hline
\end{tabular}
\end{flushleft}
The following sensitivities were used, for one second integration time:
50mJy (IRAM-30m at 1250$\mu$m); \\
1200, 530, 80 and 60mJy (JCMT-Scuba, at 350, 450, 850 and 1350$\mu$m
respectively); \\
30mJy (CSO-Bolocam at 1100$\mu$m); 1.3mJy (LMT-Bolocam at 1100$\mu$m); \\
2 and 0.7 mJy (LSA-MMA at 850 and 1250$\mu$m, respectively)
\label{cont-h}
\end{table*}
The results in Tables \ref{line-h} and \ref{cont-h} are of course only
orders of magnitude, since they depend on the starburst model, and since
the technical performance of planned instruments is only extrapolated.
But they already give a good insight into what will be feasible in the
next decade. The continuum sources are already detectable with present
instruments at all redshifts. Large interferometers will be
necessary to reduce the confusion level and map the sources.
The CO lines are presently not easily detectable at redshifts larger
than 1. The few detections already published at high $z$ owe their
success to the high magnification factor provided by a gravitational lens,
or to exceptionally massive objects. For instance, the actual integrated
intensity of BR 1335-0414 (2.8 $\pm$ 0.3 Jy km/s at $z=$ 4.4,
Guilloteau et al 1997) is a factor of 9 larger than our standard model
at $z=$ 5, at 3 mm.
With future instruments, CO lines will be easily detectable up
to $z = 10$ and maybe beyond, if huge starbursts exist there.
At $z < 5$, it will be possible to detect CO lines from more normal
galaxies, and to tackle star formation in those very young galaxies.
More exotic lines, such as CS, HCN, or even H$_2$O will then be available
to explore the physics of the interstellar medium
and star formation in more detail.
\begin{acknowledgements}
We thank D. V. Trung for his LVG code,
James Lequeux and F. Viallefond for useful discussions,
and an anonymous referee for helpful and detailed comments.
\end{acknowledgements}
\section{Introduction}
The forward-backward asymmetry $A_{\rm FB}$ of the top quark is one of
the interesting observables related to the top quark.
Within the Standard Model (SM),
this asymmetry vanishes at leading order in QCD because of $C$ symmetry.
At next-to-leading order [$O(\alpha_s^3)$],
a nonzero $A_{\rm FB}$ can develop from the interference
between the Born amplitude and the two-gluon intermediate state,
as well as from gluon bremsstrahlung and gluon-(anti)quark scattering
into $t \bar{t}$,
with the prediction $A_{\rm FB}\sim 0.078$ \cite{Antunano:2007da}.
The measured asymmetry has been off the SM prediction by $2 \sigma$
for the last few years, albeit with large experimental uncertainties.
The measurement in the $t\bar{t}$ rest frame before this meeting was
\cite{cdf2009}
\begin{equation}
A_{\rm FB} \equiv \frac{N_t ( \cos\theta \geq 0) - N_{\bar{t}}
( \cos\theta \geq 0 )}{N_t ( \cos\theta \geq 0) + N_{\bar{t}}
( \cos\theta \geq 0 )} = (0.24 \pm 0.13 \pm 0.04)
\end{equation}
with $\theta$ being the polar angle of the top quark with
respect to the incoming proton in the $t\bar{t}$ rest frame.
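As a pure counting definition, this asymmetry is straightforward to evaluate from event counts; a toy illustration (the counts are invented for the example, not data):

```python
def forward_backward_asymmetry(n_t_forward, n_tbar_forward):
    """A_FB from the numbers of t and anti-t events with cos(theta) >= 0."""
    return (n_t_forward - n_tbar_forward) / (n_t_forward + n_tbar_forward)

# toy counts, purely illustrative
print(forward_backward_asymmetry(620, 380))  # -> 0.24
```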
This $\sim 2\sigma$ deviation stimulated speculations on new physics
scenarios \cite{Choudhury:2007ux,Djouadi:2009nb,Ferrario:2009bz,Jung:2009jz,ko}.
On the other hand, a search for a new resonance decaying into
a $t\bar{t}$ pair has been carried out at the Tevatron.
As of now, there is no clear signal for such a new resonance \cite{cdf2009}.
Therefore, in this talk, I assume that the new physics scale relevant to
$A_{\rm FB}$ is large enough that production of a new particle is
beyond the reach of the Tevatron \cite{ko},
which makes a key difference between
our work and other works in the literature on this subject
\cite{Choudhury:2007ux,Djouadi:2009nb,Ferrario:2009bz,Jung:2009jz}.
It is then adequate to integrate out the heavy fields and use the
resulting
effective lagrangian approach in order to study
new physics effects on $\sigma_{t\bar{t}}$ and $A_{\rm FB}$.
At the Tevatron, $t\bar{t}$ production is dominated by $q\bar{q}
\rightarrow t\bar{t}$, and it is sufficient to consider dimension-6
four-quark operators (the so-called contact interaction terms)
to describe the new physics effects on $t\bar{t}$ production
at the Tevatron. A similar approach was adopted for dijet
production to constrain the compositeness scale of light quarks,
and we propose the same analysis for the $t\bar{t}$ system.
\section{Model independent analysis}
\subsection{Lagrangian}
Our starting point is the effective lagrangian with dimension-6
operators relevant to the $t\bar{t}$ production at the Tevatron:
\begin{equation}
\mathcal{L}_6 = \frac{g_s^2}{\Lambda^2}\sum_{A,B}
\left[C^{AB}_{1q}(\bar{q}_A\gamma_\mu q_A)(\bar{t}_B\gamma^\mu t_B) +
C^{AB}_{8q}(\bar{q}_A T^a\gamma_\mu q_A)(\bar{t}_B T^a\gamma^\mu t_B)\right]
\end{equation}
where $T^a = \lambda^a /2$, $\{A,B\}=\{L,R\}$, and
$L,R \equiv (1 \mp \gamma_5)/2$
with $q=(u,d)^T,(c,s)^T$.
Using this effective lagrangian, we calculate the cross section up to
$O(1/\Lambda^2)$, keeping only the interference term between
the SM and new physics contributions.
The above effective lagrangian was also discussed in
Ref.~\cite{Hill:1993hs}, where the $t$ quark was treated as an
$SU(2)_L \times SU(2)_R$ singlet and the top currents were
decomposed into vector and axial-vector currents, rather than
in the chirality basis as in our case.
\begin{figure*}
\includegraphics[width=7cm]{c1c2_org.eps}
\includegraphics[width=7cm]{cfb.eps} \\%
\caption{ (a) The region in the $( C_1 , C_2 )$ plane that is
consistent with the Tevatron data at the $1 \sigma$ level:
$\sigma_{t\bar{t}} = (7.50 \pm 0.48)$ pb
and $A_{\rm FB} = (0.24 \pm 0.13 \pm 0.04)$.
(b) the spin-spin correlations $C$ and $C_{FB}$.
}
\label{fig:1a}
\end{figure*}
\subsection{Origin of FB Asymmetry}
It is straightforward to calculate the amplitude for
$q (p_1) + \bar{q} (p_2) \rightarrow t (p_3) + \bar{t} (p_4)$
using the above effective lagrangian and the SM.
The squared amplitude summed (averaged) over the final (initial)
spins and colors is given by
\begin{eqnarray}
\overline{|{\cal M}|^2}
& \simeq & \frac{4\,g_s^4}{9\,\hat{s}^2} \left\{
2 m_t^2 \hat{s} \left[
1+\frac{\hat{s}}{2\Lambda^2}\,(C_1+C_2)
\right] s_{\hat\theta}^2 \right.
\\
& + & \left.
\frac{\hat{s}^2}{2}\left[ \left(1+\frac{\hat{s}}{2\Lambda^2}\,(C_1+C_2)\right)
(1+c_{\hat\theta}^2)
+\hat\beta_t\left(\frac{\hat{s}}{\Lambda^2}\,(C_1-C_2)\right)c_{\hat\theta}
\right]\right\} \nonumber
\label{eq:ampsq}
\end{eqnarray}
where $\hat{s} = (p_1 + p_2)^2$, $\hat\beta_t^2=1-4m_t^2/\hat{s}$,
and $s_{\hat\theta}\equiv \sin\hat\theta$ and
$c_{\hat\theta}\equiv \cos\hat\theta$
with $\hat{\theta}$ being the polar
angle between the incoming quark and the outgoing top quark in the
$t\bar{t}$ rest frame.
The couplings are defined as
$C_1 \equiv C_{8q}^{LL}+C_{8q}^{RR}$ and
$C_2 \equiv C_{8q}^{LR}+C_{8q}^{RL}$.
Since we have kept only up to the interference terms, there are
no contributions from the color-singlet operators with coupling
$C_{1q}^{AB}$.
The term linear in $\cos\hat{\theta}$ can
generate a forward-backward asymmetry
proportional to $\Delta C \equiv (C_1 - C_2)$.
Note that both the light quarks and the top quark should have chiral couplings
to the new physics in order to generate $A_{\rm FB}$ at the tree level
(namely $\Delta C \neq 0$). This parity violation, if large,
could be observed as a nonzero (anti)top spin polarization \cite{progress}.
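The relation between the $\cos\hat\theta$-linear term and the asymmetry can be made explicit by integrating the angular distribution of Eq.~(\ref{eq:ampsq}). The sketch below keeps only the high-energy shape $(1+c^2) + a\,c$ with $a \propto \Delta C\,\hat{s}/\Lambda^2$ (dropping the $m_t^2$ term, which is symmetric in $c$; the value of $a$ is illustrative), for which the parton-level asymmetry is $A_{\rm FB} = 3a/8$:

```python
def afb_from_shape(a, n=20000):
    """Forward-backward asymmetry of dN/dc ~ (1 + c^2) + a*c on [-1, 1],
    integrated numerically with the trapezoid rule."""
    f = lambda c: (1.0 + c * c) + a * c

    def integrate(lo, hi):
        h = (hi - lo) / n
        s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
        return s * h

    fwd, bwd = integrate(0.0, 1.0), integrate(-1.0, 0.0)
    return (fwd - bwd) / (fwd + bwd)

# analytic result: A_FB = 3a/8, e.g. 0.3 for a = 0.8
print(afb_from_shape(0.8))
```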
In Fig.~\ref{fig:1a}, we show the allowed region in the $(C_1,C_2)$ plane
that is consistent with the Tevatron data at the $1 \sigma$ level.
The allowed region is around $0 \lesssim C_1 \lesssim 4$ and
$-4 \lesssim C_2 \lesssim + 0.5$. The negative sign of $C_2$ is preferred
at the 1 $\sigma$ level.
\subsection{A New Spin-spin Correlation}
Another interesting observable which is sensitive to the chiral
structure of new physics affecting $q\bar{q} \rightarrow t\bar{t}$
is the top quark spin-spin correlation \cite{Stelzer06,progress}:
\begin{equation}
C = \frac{\sigma(t_L\bar{t}_L + t_R\bar{t}_R) -
\sigma(t_L\bar{t}_R + t_R\bar{t}_L)}{\sigma(t_L\bar{t}_L + t_R\bar{t}_R) +
\sigma(t_L\bar{t}_R + t_R\bar{t}_L)} \,.
\end{equation}
Since the new physics must have chiral couplings both to the light quarks and
to the top quark, the spin-spin correlation defined above will be affected.
From Eq. (\ref{eq:ampsq}), it is clear that the spin-spin correlation of
Eq.~(4) is sensitive to $(C_1 + C_2)$,
since the term linear in $\cos\hat{\theta}$ does not contribute to
the correlation $C$ after integration over $\cos\hat{\theta}$.
On the other hand, if one considers the forward and the backward regions
separately, the spin-spin correlation would depend on $( C_1 - C_2 )$
and will be closely correlated with $A_{\rm FB}$.
Therefore we propose a new spin-spin FB asymmetry $C_{FB}$ defined as
\begin{equation}
C_{FB} \equiv C (\cos\theta \geq 0) - C (\cos\theta \leq 0) ,
\end{equation}
where $C(\cos\theta \geq 0\,(\leq 0))$ means that the cross sections in the
numerator of Eq.~(4) are restricted to the forward (backward) region:
$\cos\theta \geq 0\,(\leq 0)$.
In Fig.~\ref{fig:1a} (b), we show the contour plots for $C$ and $C_{FB}$
in the $(C_1 , C_2 )$ plane, along with the SM prediction at LO.
There is a clear correlation between $C_{FB}$ and $A_{FB}$ in Fig.~\ref{fig:1a},
which should be observed in future measurements
if the $A_{\rm FB}$ anomaly is real and
a new particle is too heavy to be produced at the Tevatron.
\section{Explicit Models}
So far, we have considered dim-6 four-quark operators that could
affect $t\bar{t}$ production at the Tevatron, and found
the necessary conditions for accommodating $A_{\rm FB}$.
In Ref.~\cite{ko}, we also considered explicit models with new
particles of various spins and colors that could
affect $A_{\rm FB}$.
In Table~\ref{tab:newparticles}, we show the new particle exchanges
under consideration and the signs of the couplings $C_1, C_2$ induced by them.
We found that four types of exchanges, $V_8$, $\tilde{V}_8$,
$\tilde{S}_1$, and $S_{13}^{\alpha\beta}$, could give rise to a large positive
$A_{\rm FB}$ at the 1-$\sigma$ level.
It would be interesting to search at the LHC for new vector or
scalar particles that satisfy the above conditions.
For more quantitative discussions, we have to study the full amplitude
without integrating out the new heavy particles;
a detailed study will be presented in
future work \cite{progress}.
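The sign pattern of Table~\ref{tab:newparticles} can be transcribed programmatically. As a rough heuristic (my reading of Fig.~\ref{fig:1a}, not a statement from the analysis), an exchange is 1$\sigma$-favored when it can induce $C_1 > 0$ and/or $C_2 < 0$; with the dictionary below simply copying the table, this criterion reproduces the last column:

```python
induced = {            # (sign of C1, sign of C2) induced by each exchange
    "V8":  ("indef", "indef"),
    "V1~": ("-", "0"),
    "V8~": ("+", "0"),
    "S1~": ("0", "-"),
    "S8~": ("0", "+"),
    "S2":  ("-", "0"),
    "S13": ("+", "0"),
}

def favored(c1, c2):
    # heuristic: the 1-sigma region prefers C1 > 0 and/or C2 < 0
    return c1 in ("+", "indef") or c2 in ("-", "indef")

print(sorted(name for name, (c1, c2) in induced.items() if favored(c1, c2)))
```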
\begin{table}[t]
\caption{\label{tab:newparticles}
{\it
New particle exchanges and the signs of induced couplings $C_1$ and $C_2$}
}
\begin{center}
\begin{tabular}{l|c|c|c|c}
\hline\hline
& & & & \\[-0.3cm]
New particles & couplings & $C_1$ & $C_2$ & 1 $\sigma$ favor \\[0.1cm]
\hline\hline
& & & & \\[-0.3cm]
$V_8$ (spin-1 FC octet) & $g^{L,R}_{\,8q,8t}$ & indef. & indef. & $\surd$ \\[0.1cm]
$\tilde{V}_1$ (spin-1 FV singlet) & $\tilde{g}^{L,R}_{1q}$ & $-$ & $0$ & $\times$ \\[0.1cm]
$\tilde{V}_8$ (spin-1 FV octet) & $\tilde{g}^{L,R}_{8q}$ & $+$ & $0$ & $\surd$ \\[0.1cm]
\hline
& & & & \\[-0.2cm]
$\tilde{S}_1$ ~~(spin-0 FV singlet) & $\tilde\eta^{L,R}_{1q}$ & $0$ & $-$ & $\surd$ \\[0.1cm]
$\tilde{S}_8$ ~~(spin-0 FV octet) & $\tilde\eta^{L,R}_{8q}$ & $0$ & $+$ & $\times$ \\[0.1cm]
$S_2^\alpha$ ~\,(spin-0 FV triplet) & $\eta_{3}$ & $-$ & $0$ & $\times$ \\[0.1cm]
$S_{13}^{\alpha\beta}$ \,(spin-0 FV sextet) & $\eta_{6}$ & $+$ & $0$ & $\surd$ \\[0.1cm]
\hline\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusions}
In this talk, I presented a model-independent study
of the $t\bar{t}$ production cross section and $A_{\rm FB}$
at the Tevatron using dimension-6 contact interactions.
We derived conditions on the couplings
of four-quark operators that could generate the FB asymmetry
observed at the Tevatron [Fig.~\ref{fig:1a}].
Then we considered the $s$-, $t$- and $u$-channel exchanges of
spin-0 and spin-1 particles whose color quantum number is
either singlet, octet, triplet or sextet.
Our results in Fig.~1 and Table 1
encode the necessary conditions on the underlying new physics
in a compact and effective way, when those new particles
are too heavy to be produced at the Tevatron but still affect $A_{\rm FB}$.
If these new particles could be produced directly at the Tevatron or
at the LHC, we could no longer use the effective lagrangian.
We would have to study specific models case by case, and we anticipate
rich phenomenology at colliders as well as at low energy.
A detailed study of these issues
will be presented in future publications \cite{progress}.
\section*{Acknowledgments}
This work was supported in part by Korea Neutrino Research Center (KNRC)
through National Research Foundation of Korea Grant.
\section*{References}
\makeatletter
\renewcommand\section{\@startsection{section}{1}{\z@}%
{6ex \@plus 1ex \@minus 1ex}%
{0.2ex \@plus.2ex}%
{\normalfont\large\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{1}{\z@}%
{-5ex \@plus -1ex \@minus -.2ex}%
{0.1ex \@plus.2ex}%
{\normalfont\large\it}}
\makeatother
\setlength{\parindent}{0pt}
\newcommand{\paragraf}{\textsection}
\renewcommand{\emptyset}{\varnothing}
\newcommand{\R}{\ensuremath{\mathbb R}}
\newcommand{\C}{\ensuremath{\mathbb C}}
\newcommand{\N}{\ensuremath{\mathbb N}}
\newcommand{\Z}{\ensuremath{\mathbb Z}}
\newcommand{\K}{\ensuremath{\mathbb K}}
\newcommand{\gperp}{{[\perp]}}
\newcommand{\iso}{\circ}
\newcommand{\product}{[\cdot\,,\cdot]}
\newcommand{\hproduct}{(\cdot\,,\cdot)}
\newcommand{\lk}{\langle}
\newcommand{\rk}{\rangle}
\newcommand{\aperp}{{\langle\perp\rangle}}
\newcommand{\aproduct}{\langle\cdot\,,\cdot\rangle}
\newcommand{\llb}{\llbracket}
\newcommand{\rrb}{\rrbracket}
\newcommand{\dproduct}{\llb\cdot,\cdot\rrb}
\newcommand{\dperp}{{\llb\perp\rrb}}
\newcommand{\calA}{\mathcal A} \newcommand{\frakA}{\mathfrak A}
\newcommand{\calB}{\mathcal B} \newcommand{\frakB}{\mathfrak B}
\newcommand{\calC}{\mathcal C} \newcommand{\frakC}{\mathfrak C}
\newcommand{\calD}{\mathcal D} \newcommand{\frakD}{\mathfrak D}
\newcommand{\calE}{\mathcal E} \newcommand{\frakE}{\mathfrak E}
\newcommand{\calF}{\mathcal F} \newcommand{\frakF}{\mathfrak F}
\newcommand{\calG}{\mathcal G} \newcommand{\frakG}{\mathfrak G}
\newcommand{\calH}{\mathcal H} \newcommand{\frakH}{\mathfrak H}
\newcommand{\calI}{\mathcal I} \newcommand{\frakI}{\mathfrak I}
\newcommand{\calJ}{\mathcal J} \newcommand{\frakJ}{\mathfrak J}
\newcommand{\calK}{\mathcal K} \newcommand{\frakK}{\mathfrak K}
\newcommand{\calL}{\mathcal L} \newcommand{\frakL}{\mathfrak L}
\newcommand{\calM}{\mathcal M} \newcommand{\frakM}{\mathfrak M}
\newcommand{\calN}{\mathcal N} \newcommand{\frakN}{\mathfrak N}
\newcommand{\calO}{\mathcal O} \newcommand{\frakO}{\mathfrak O}
\newcommand{\calP}{\mathcal P} \newcommand{\frakP}{\mathfrak P}
\newcommand{\calQ}{\mathcal Q} \newcommand{\frakQ}{\mathfrak Q}
\newcommand{\calR}{\mathcal R} \newcommand{\frakR}{\mathfrak R}
\newcommand{\calS}{\mathcal S} \newcommand{\frakS}{\mathfrak S}
\newcommand{\calT}{\mathcal T} \newcommand{\frakT}{\mathfrak T}
\newcommand{\calU}{\mathcal U} \newcommand{\frakU}{\mathfrak U}
\newcommand{\calV}{\mathcal V} \newcommand{\frakV}{\mathfrak V}
\newcommand{\calW}{\mathcal W} \newcommand{\frakW}{\mathfrak W}
\newcommand{\calX}{\mathcal X} \newcommand{\frakX}{\mathfrak X}
\newcommand{\calY}{\mathcal Y} \newcommand{\frakY}{\mathfrak Y}
\newcommand{\calZ}{\mathcal Z} \newcommand{\frakZ}{\mathfrak Z}
\newcommand{\scrA}{\mathscr A}
\newcommand{\scrB}{\mathscr B}
\newcommand{\scrC}{\mathscr C}
\newcommand{\scrD}{\mathscr D}
\newcommand{\scrE}{\mathscr E}
\newcommand{\scrF}{\mathscr F}
\newcommand{\scrG}{\mathscr G}
\newcommand{\scrH}{\mathscr H}
\newcommand{\scrI}{\mathscr I}
\newcommand{\scrJ}{\mathscr J}
\newcommand{\scrK}{\mathscr K}
\newcommand{\scrL}{\mathscr L}
\newcommand{\scrM}{\mathscr M}
\newcommand{\scrN}{\mathscr N}
\newcommand{\scrO}{\mathscr O}
\newcommand{\scrP}{\mathscr P}
\newcommand{\scrQ}{\mathscr Q}
\newcommand{\scrR}{\mathscr R}
\newcommand{\scrS}{\mathscr S}
\newcommand{\scrT}{\mathscr T}
\newcommand{\scrU}{\mathscr U}
\newcommand{\scrV}{\mathscr V}
\newcommand{\scrW}{\mathscr W}
\newcommand{\scrX}{\mathscr X}
\newcommand{\scrY}{\mathscr Y}
\newcommand{\scrZ}{\mathscr Z}
\newcommand{\la}{\lambda}
\newcommand{\veps}{\varepsilon}
\newcommand{\vphi}{\varphi}
\newcommand{\mat}[4]
{
\begin{pmatrix}
#1 & #2\\
#3 & #4
\end{pmatrix}
}
\newcommand{\vek}[2]
{
\begin{pmatrix}
#1\\
#2
\end{pmatrix}
}
\newcommand{\smallvek}[2]{\left(\begin{smallmatrix}#1\\#2\end{smallmatrix}\right)}
\newcommand{\smallmat}[4]{\left(\begin{smallmatrix}#1 & #2\\#3 & #4\end{smallmatrix}\right)}
\renewcommand{\Im}{\operatorname{Im}}
\renewcommand{\Re}{\operatorname{Re}}
\newcommand{\linspan}{\operatorname{span}}
\renewcommand{\ker}{\operatorname{ker}}
\newcommand{\ran}{\operatorname{ran}}
\newcommand{\dom}{\operatorname{dom}}
\newcommand{\codim}{\operatorname{codim}}
\newcommand{\sap}{\sigma_{{ap}}}
\newcommand{\esap}{\wt\sigma_{{ap}}}
\renewcommand{\sp}{\sigma_{+}}
\newcommand{\sm}{\sigma_{-}}
\newcommand{\esigma}{\wt\sigma}
\newcommand{\erho}{\wt\rho}
\newcommand{\spp}{\sigma_+}
\newcommand{\smm}{\sigma_-}
\newcommand{\sess}{\sigma_{\rm ess}}
\newcommand{\lra}{\longrightarrow}
\newcommand{\sra}{\rightarrow}
\newcommand{\Lra}{\Longrightarrow}
\newcommand{\Sra}{\Rightarrow}
\newcommand{\lla}{\longleftarrow}
\newcommand{\sla}{\leftarrow}
\newcommand{\Lla}{\Longleftarrow}
\newcommand{\Sla}{\Leftarrow}
\newcommand{\llra}{\longleftrightarrow}
\newcommand{\slra}{\leftrightarrow}
\newcommand{\Llra}{\Longleftrightarrow}
\newcommand{\Slra}{\Leftrightarrow}
\newcommand{\upto}{\uparrow}
\newcommand{\downto}{\downarrow}
\newcommand{\wto}{\rightharpoonup}
\newcommand{\Ato}{\stackrel{A}{\to}}
\newcommand{\Awto}{\stackrel{A}{\rightharpoonup}}
\newcommand{\restr}{\!\!\upharpoonright\!\!}
\newcommand{\ol}{\overline}
\newcommand{\ds}{\dotplus}
\newcommand{\wt}{\widetilde}
\newcommand{\wh}{\widehat}
\newcommand{\cls}{\operatorname{c.l.s.}}
\begin{document}
\thispagestyle{empty}
\vspace*{-.3cm}
\begin{center}
\begin{spacing}{1.5}
{\LARGE\bf Spectral functions of products of selfadjoint operators}
\end{spacing}
\vspace{1cm}
{\Large Tomas Ya.\ Azizov, Mikhail Denisov, Friedrich Philipp}
\end{center}
\vspace{.7cm}
{\bf Abstract:} Given two possibly unbounded selfadjoint operators $A$ and $G$ such that the resolvent sets of $AG$ and $GA$ are non-empty, it is shown that the operator $AG$ has a spectral function on $\R$ with singularities if there exists a polynomial $p\neq 0$ such that the symmetric operator $Gp(AG)$ is non-negative. This result generalizes a well-known theorem for definitizable operators in Krein spaces.
\vspace{.3cm}
{\it Keywords:} Product of selfadjoint operators, indefinite inner product, definitizable operator
\setlength{\parskip}{3ex plus 0.5ex minus 0.2ex}
\section{Introduction}
Let $A$ and $G$ be two selfadjoint operators in a Hilbert space
$(\calH,\hproduct)$ such that either $A$ or $G$ is bounded and
boundedly invertible. Then the product $AG$ is selfadjoint in a
Krein space. Indeed, if $G$ ($A$) is bounded and boundedly
invertible, then $AG$ is selfadjoint in the Krein space
$(\calH,\product_G)$ ($(\calH,\product_{A^{-1}})$, respectively),
where
$$
[x,y]_G = (Gx,y),\quad [x,y]_{A^{-1}} = (A^{-1}x,y),\quad x,y\in\calH.
$$
Conversely, a selfadjoint operator in a Krein space can be written as a product of two selfadjoint operators in a Hilbert space, one of which is bounded and boundedly invertible.
The spectrum of a selfadjoint operator in a Krein space is symmetric with respect to the real axis. But even simple examples show that the spectrum of such operators can be empty or cover the entire complex plane. However, some classes of selfadjoint operators in Krein spaces are well-understood. Among those are the definitizable operators. A selfadjoint operator $T$ in the Krein space $(\calH,\product)$ is called definitizable if its resolvent set $\rho(T)$ is non-empty and if there exists a polynomial $p\neq 0$ with real coefficients such that
$$
[p(T)x,x]\,\ge\,0\quad\text{for all }x\in\dom p(T).
$$
This definition goes back to H.\ Langer who proved that the spectrum
of a definitizable operator $T$ -- with the possible exception of a
finite number of non-real eigenvalues which are poles of the
resolvent of $T$ -- is real and that $T$ possesses a spectral
function on $\R$ with a finite number of singularities, see
\cite{l}. Definitizable operators appear in many applications including
differential operators with indefinite weights (see, e.g.,
\cite{abt,bp,cl,km,kt,kwz}), selfadjoint operator polynomials (see,
e.g., \cite{dl,l_h}) and Sturm-Liouville equations with floating
singularity (see, e.g., \cite{jt02,jt06,lmem}).
In the present paper we extend the spectral theory of definitizable
operators from selfadjoint operators in Krein spaces to products $T
= AG$ of selfadjoint operators $A$ and $G$ in a Hilbert space which
are both allowed to be unbounded and non-invertible. Instead of
$\rho(T)\neq\emptyset$ as in the above definition of
definitizability we will have to assume that both resolvent sets
$\rho(AG)$ and $\rho(GA)$ are non-empty. In the Krein space case
(i.e.\ when $G$ or $A$ is bounded and boundedly invertible) this is
equivalent to $\rho(T)\neq\emptyset$ since in this situation the
operators $AG$ and $GA$ are similar. As is shown in Theorem
\ref{t:main}, a definitizable product $AG$ of selfadjoint operators
$A$ and $G$ has the same above-mentioned spectral properties as a
definitizable operator in a Krein space. Moreover, it has a spectral
function with a finite number of singularities, see Theorem
\ref{t:sf}. In the special case when both $A$ and $G$ are bounded,
$A\geq 0$ and $0\notin\sigma_{p}(A)$, the existence of a spectral
function of $AG$ was already proved in \cite{denm}.
The techniques used in our proof of the existence of the spectral
function are different from those in \cite{l}, where an analogue of
Stone's formula for selfadjoint operators in Hilbert spaces was used
to define the spectral function. Here, we make use of the concept of
the spectral points of positive and negative type of symmetric
operators in inner product spaces which was introduced by H.\
Langer, A.S.\ Markus and V.I.\ Matsaev in \cite{lmm}, see also
\cite{ajt,abjt,lamm} for the Krein space case. In Theorem
\ref{t:main} we prove that if the product $AG$ is definitizable,
then there exists a finite number of real points which divide the
real line into intervals which are either of positive or negative
type with respect to $AG$. Due to a theorem in \cite{lmm} this
implies the existence of local spectral functions of $AG$ on these
intervals. In the proof of Theorem \ref{t:sf} we ``connect'' those
local spectral functions and thus obtain a spectral function of $AG$
on $\R$ with a finite number of singularities.
The paper is arranged as follows. In the preliminaries section following this introduction we introduce the spectral points of positive and negative type of a symmetric operator in a (possibly indefinite) inner product space and prove that such an operator has a local spectral function on intervals of positive or negative type. In section \ref{s:products} we consider products $AG$ of selfadjoint operators $A$ and $G$ in a Hilbert space $(\calH,\hproduct)$ such that $\rho(AG)$ and $\rho(GA)$ are non-empty. The operator $AG$ is then symmetric with respect to the inner product $(G_0\cdot,\cdot)$, where $G_0$ is the bounded selfadjoint operator given by
$$
G_0 := G(AG - \la_0)^{-1}(AG - \ol{\la_0})^{-1},\quad\la_0\in\rho(AG)\setminus\R,
$$
and we analyze the spectra of positive and negative type of $AG$ (corresponding to the inner product $(G_0\cdot,\cdot)$). For example, it turns out that these spectra do not depend on the choice of $\la_0$. In section \ref{s:def} we particularly make use of the results in section \ref{s:products} to prove the main theorems on definitizable pairs of selfadjoint operators. In section \ref{s:application} we apply our results to Sturm-Liouville problems.
\section{Preliminaries}
Let $S$ be a linear operator in a Banach space $X$. If $S$ is bounded and everywhere defined, we write $S\in L(X)$. By the {\em resolvent set} $\rho(S)$ of $S$ we understand the set of all $\la\in\C$ for which $\ran(S - \la) = X$, $\ker(S - \la) = \{0\}$ and $(S - \la)^{-1}\in L(X)$. With this definition of $\rho(S)$, the operator $S$ is closed if $\rho(S)$ is non-empty. The operator $S$ is called {\it boundedly invertible} if $0\in\rho(S)$. The set $\sigma(S) := \C\setminus\rho(S)$ is called the {\em spectrum} of $S$. The {\it approximate point spectrum} $\sap(S)$ of $S$ is defined as the set of all $\la\in\C$ for which there exists a sequence $(x_n)\subset\dom S$ with $\|x_n\| = 1$ and $(S - \la)x_n\to 0$ as $n\to\infty$. A point $\la\in\C$ does not belong to $\sap(S)$ if and only if there exists $c > 0$ and an open neighborhood $\calU$ of $\la$ in $\C$ such that $\|(S - \mu)x\|\ge c\|x\|$ holds for all $x\in\dom S$ and all $\mu\in\calU$.
Throughout this section $(\calH,\hproduct)$ denotes a Hilbert space and $G_0$ a bounded selfadjoint operator in $\calH$. The operator $G_0$ induces a new inner product $\product$ on $\calH$ via
$$
[x,y] := (G_0x,y),\quad x,y\in\calH.
$$
The pair $(\calH,\product)$ is often referred to as a {\it $G_0$-space}. If $G_0$ is boundedly invertible, $(\calH,\product)$ is called a {\it Krein space}. A subspace $\calL$ of $\calH$ is called {\it uniformly positive} ({\it uniformly negative}) if there exists $\delta > 0$ such that
$$
[x,x]\ge\delta\|x\|^2\qquad\big([x,x]\le-\delta\|x\|^2,\text{ respectively}\big)
$$
holds for all $x\in\calL$. If $\calL$ is closed, then $\calL$ is uniformly positive (uniformly negative) if and only if $(\calL,\product)$ ($(\calL,-\product)$, respectively) is a Hilbert space. The {\it orthogonal companion} of a subspace $\calL$ is defined by
$$
\calL^\gperp := \{x\in\calH : [x,\ell] = 0\;\text{ for all }\,\ell\in\calL\}.
$$
The subspace $\calL$ is called {\it ortho-complemented} if $\calH = \calL\,+\,\calL^\gperp$. If the sum is direct, we write $\calH = \calL[\ds]\calL^\gperp$. The symbol $[\ds]$ thus denotes the direct $\product$-orthogonal sum. The following lemma will be used frequently, cf.\ \cite[Theorem 1.7.16]{ai}.
\begin{lem}\label{l:ks->oc}
Let $\calL\subset\calH$ be a closed subspace. If $(\calL,\product)$ is a Krein space, then $\calL$ is ortho-complemented. More precisely, we have
$$
\calH = \calL\,[\ds]\,\calL^\gperp.
$$
\end{lem}
A closed and densely defined linear operator $T$ in $\calH$ will be called {\it $G_0$-symmetric} (or $\product$-{\it symmetric}) if
$$
[Tx,y] = [x,Ty]\quad\text{holds for all }x,y\in\dom T.
$$
This is equivalent to the symmetry of the operator $G_0T$ in the Hilbert space $(\calH,\hproduct)$, i.e.\ $G_0T\subset (G_0T)^*$.
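As a simple finite-dimensional illustration (which will not be needed in the sequel), let $\calH = \C^2$, $G_0 = \mat{1}{0}{0}{-1}$ and $T = \mat{0}{1}{-1}{0}$. Then $G_0T = \mat{0}{1}{1}{0}$ is symmetric, so $T$ is $G_0$-symmetric, although $T^* = -T\neq T$. Since $\sigma(T) = \{i,-i\}$, this example also shows that the spectrum of a $G_0$-symmetric operator need not be real.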
The Riesz-Dunford spectral projection of a closed linear operator $T$ in $\calH$ with respect to a spectral set $\sigma$ of $T$ will be denoted by $E(T;\sigma)$. If $\sigma = \{\la\}$, we write $E(T;\la)$ instead of $E(T;\{\la\})$.
\begin{lem}\label{l:E}
Let $T$ be $G_0$-symmetric. If $\la\in\C$ and $\ol\la$ are isolated points of the spectrum of $T$, we have
$$
[E(T;\la)x,y] = [x,E(T;\ol\la)y]\quad\text{for all }\;x,y\in\calH.
$$
\end{lem}
\begin{proof}
Let $\veps > 0$ be a number such that the deleted discs $\{\mu\in\C : |\mu - \la|\le\veps\}\setminus\{\la\}$ and $\{\mu\in\C : |\mu - \ol\la|\le\veps\}\setminus\{\ol\la\}$ are contained in $\rho(T)$. Define the curves $\gamma,\psi : [0,2\pi]\to\C$ by
$$
\gamma(t) := \la + \veps e^{it} \:\text{ and }\; \psi(t) := \ol\la + \veps e^{it}, \;\;t\in [0,2\pi].
$$
Then for $x,y\in\calH$ we have
\begin{eqnarray*}
[E(T;\la)x,y] &=& -\frac{1}{2\pi i}\int_\gamma [(T - \mu)^{-1}x,y]\,d\mu = \frac{1}{2\pi i}\int_{\gamma^{-1}} [x,(T - \ol\mu)^{-1}y]\,d\mu \\
&=& \frac{1}{2\pi i}\int_0^{2\pi} [x,(T - \ol\la - \veps e^{it})^{-1}y](-i)\veps e^{-it}\,dt \\
&=& \left[ x , -\frac{1}{2\pi i}\int_0^{2\pi} i\veps e^{it}(T - \psi(t))^{-1}y\,dt \right] = [x,E(T;\ol\la)y],
\end{eqnarray*}
where $\gamma^{-1}(t) := \la + \veps e^{-it}$, $t\in [0,2\pi]$.
\end{proof}
In \cite{lmm} the spectral points of positive and negative type of a bounded $G_0$-symmetric operator were introduced. In the following definition these notions are extended to unbounded operators.
\begin{defn}
Let $T$ be a $G_0$-symmetric operator in $\calH$. A point $\la\in\sap(T)$ is called a {\em spectral point of positive {\rm (}negative{\rm )} type} of $T$ if for every sequence $(x_n)\subset\dom T$ with $\|x_n\| = 1$ and $(T - \la)x_n\to 0$ as $n\to\infty$ we have
$$
\liminf_{n\to\infty}\,[x_n,x_n] > 0\quad\Big(\limsup_{n\to\infty}\,[x_n,x_n] < 0,\;\text{respectively}\Big).
$$
The set of all spectral points of positive {\rm (}negative{\rm )} type of $T$ will be denoted by $\sp(T)$ {\rm (}$\sm(T)$, respectively{\rm )}. A set $\Delta\subset\C$ is said to be of positive {\rm (}negative{\rm )} type with respect to $T$ if
$$
\Delta\cap\sap(T)\subset\sp(T)\quad\Big(\Delta\cap\sap(T)\subset\sm(T),\,\text{respectively}\Big).
$$
\end{defn}
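As a simple illustration, consider $\calH = \C^2$ with $G_0 = \mat{1}{0}{0}{-1}$, so that $[x,x] = |x_1|^2 - |x_2|^2$, and the $G_0$-symmetric operator $T = \mat{1}{0}{0}{2}$. If $(x_n)$ is a sequence with $\|x_n\| = 1$ and $(T - 1)x_n\to 0$, then the second component of $x_n$ tends to zero and hence $[x_n,x_n]\to 1 > 0$; thus $1\in\sp(T)$. In the same way one sees that $2\in\sm(T)$.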
The following statements were proved in \cite{lmm} for bounded operators. However, the proofs can be adapted to the unbounded case without difficulty.
\begin{prop}\label{p:def_type}
The spectral points of positive and negative type of a $G_0$-sym\-met\-ric operator $T$ are real. Moreover, $\sp(T)$ and $\sm(T)$ are open in $\sap(T)$. In particular, if $\Delta$ is a compact interval which is of positive {\rm (}negative{\rm )} type with respect to $T$, then there exists a $\C$-open neighborhood $\calU$ of $\Delta$ such that $(\calU\setminus\R)\cap\sap(T) = \emptyset$ and $\calU\cap\R$ is of positive type {\rm (}negative type, respectively{\rm )} with respect to $T$. Moreover, there exists $C > 0$ such that for all $\la\in\calU$ we have
$$
\|(T - \la)x\|\,\ge\,C|\Im\la|\,\|x\|,\quad x\in\dom T.
$$
\end{prop}
\begin{defn}\label{d:sf}
Let $J\subset\R$ be a bounded or unbounded open interval and let $s\subset J$ be a finite set. The system consisting of all bounded Borel subsets $\Delta$ of $J$ with $\ol\Delta\subset J$ whose boundary points are not contained in $s$ will be denoted by $\mathfrak R_s(J)$. If $s = \emptyset$, we simply write $\mathfrak R(J)$. Let $S$ be a closed and densely defined linear operator in the Banach space $X$. A set function $E$ from $\mathfrak R_s(J)$ into the set of bounded projections in $X$ is called a {\it local spectral function of $S$ on $J$} ({\it with the set of critical points $s = s(E)$}) if the following conditions are satisfied for all $\Delta,\Delta_1,\Delta_2\in\mathfrak R_s(J)$:
\begin{enumerate}
\item[{\rm (S1)}] $E(\Delta_1\cap\Delta_2) = E(\Delta_1)E(\Delta_2)$.
\item[{\rm (S2)}] If $\Delta_1\cap\Delta_2 = \emptyset$, then $E(\Delta_1\cup\Delta_2) = E(\Delta_1) + E(\Delta_2)$.
\item[{\rm (S3)}] $E(\Delta)$ commutes with every operator $B\in L(\calH)$ for which $BS\,\subset\,SB$.
\item[{\rm (S4)}] $\sigma(S|E(\Delta)\calH)\subset\ol{\sigma(S)\cap\Delta}$.
\item[{\rm (S5)}] $\sigma(S|(I - E(\Delta))\calH)\subset\ol{\sigma(S)\setminus\Delta}$.
\end{enumerate}
The points $\la\in s(E)$ for which the strong limits
$$
s-\lim_{t\to 0}E([\la-\veps,\la-t])\quad\text{ and }\quad s-\lim_{t\to 0}E([\la+t,\la+\veps])
$$
do not exist for sufficiently small $\veps > 0$ are called the {\it singularities of $E$}.
\end{defn}
\begin{rem}
Note that $BS\subset SB$ is equivalent to $B(S - \la)^{-1} = (S - \la)^{-1}B$ for every $\la\in\rho(S)$ if $\rho(S)\neq\emptyset$.
\end{rem}
Let $S$ be a closed operator in a Banach space and let $\Delta$ be a compact set in $\C$. A closed subspace $\calL_\Delta\,\subset\,\dom S$ is called the {\it maximal spectral subspace} of $S$ corresponding to $\Delta$ if the following holds:
\begin{enumerate}
\item[(a)] $S\calL_\Delta\subset\calL_\Delta$.
\item[(b)] $\sigma(S|\calL_\Delta)\subset\sigma(S)\cap\Delta$.
\item[(c)] If $\calL\subset\dom S$ is a closed subspace such that (a) and (b) hold with $\calL_\Delta$ replaced by $\calL$, then $\calL\subset\calL_\Delta$.
\end{enumerate}
By $\C^+$ ($\C^-$) we denote the open upper (lower, respectively) halfplane. The following theorem has been shown for bounded $G_0$-symmetric operators in \cite{lmm}.
\begin{thm}\label{t:lsf}
Let $J$ be a bounded or unbounded open interval in $\R$ which is of positive {\rm (}negative{\rm )} type with respect to the $G_0$-symmetric operator $T$. If each of the sets $\C^+\cap\rho(T)$ and $\C^-\cap\rho(T)$ has an accumulation point in $J$, then $T$ has a local spectral function $E$ without critical points on $J$ with the following properties {\rm (}$\Delta\in\mathfrak R(J)${\rm )}:
\begin{enumerate}
\item[{\rm (i)}] The subspace $E(\Delta)\calH$ is uniformly positive {\rm (}uniformly negative, respectively{\rm )}.
\item[{\rm (ii)}] The operator $E(\Delta)$ is $G_0$-symmetric.
\item[{\rm (iii)}] If $\Delta$ is compact, then $E(\Delta)\calH$ is the maximal spectral subspace of $T$ corresponding to $\Delta$.
\end{enumerate}
\end{thm}
\begin{proof}
Let $J$ be of positive type with respect to $T$. As a consequence of the uniqueness of a local spectral function (see \cite[Lemma 3.14]{j2}) it is sufficient to prove that the operator $T$ has a local spectral function on each compact subinterval of $J$. Let $J'$ be such an interval. Choosing a larger compact interval which contains accumulation points of $\C^+\cap\rho(T)$ and $\C^-\cap\rho(T)$ it is seen from Proposition \ref{p:def_type} that there exists an open neighborhood $\calU$ of $J'$ in $\C$ such that $\calU\setminus\R\subset\rho(T)$ and that there exists $C > 0$ such that
\begin{equation}\label{e:growth}
\|(T - \la)^{-1}\|\,\le\,\frac C {|\Im\la|}
\end{equation}
holds for all $\la\in\calU\setminus\R$. By \cite[Chapter II, \paragraf 2, Theorem 5]{lm} the maximal spectral subspace $\calL$ of $T$ corresponding to $J'$ exists and $T|\calL$ is bounded. As $T|\calL$ is also $\product$-symmetric and $\sigma(T|\calL) = \sp(T|\calL)$ it follows from \cite[Theorem 3.1]{lmm} that $(\calL,\product)$ is a Hilbert space. Denote by $E_\calL$ the spectral measure of the selfadjoint operator $T|\calL$ in $(\calL,\product)$ and by $P_\calL$ the projection onto $\calL$ with $\ker P_\calL = \calL^\gperp$ which exists due to Lemma \ref{l:ks->oc}. Then $E(\cdot) := E_\calL(\cdot)P_\calL$ defines a local spectral function of $T$ on $J'$.
\end{proof}
\section{Products of selfadjoint operators}\label{s:products}
Throughout this section let $A$ and $G$ be (possibly unbounded and/or non-invertible) selfadjoint operators in the Hilbert space $(\calH,\hproduct)$. Each of the statements in the following proposition follows from or is an easy consequence of \cite[Remark 2.5]{hkm} and \cite[Theorem 1.1]{hkm}, see also \cite{hm}.
\begin{prop}\label{p:basic}
Let $A$ and $G$ be selfadjoint operators in $\calH$. If
\begin{equation}\label{e:ass}\tag{$*$}
\rho(AG)\neq\emptyset\quad\text{and}\quad\rho(GA)\neq\emptyset,
\end{equation}
then both operators $AG$ and $GA$ are closed and densely defined and
\begin{equation}\label{e:adj}
(AG)^* = GA.
\end{equation}
Moreover,
\begin{equation}\label{e:sigs}
\sigma(AG)\setminus\{0\} = \sigma(GA)\setminus\{0\}.
\end{equation}
In addition, for $\la\in\rho(AG)\setminus\{0\}$ the following relations hold:
\begin{align*}
A(GA - \la)^{-1} &= \ol{(AG - \la)^{-1}A}\\
G(AG - \la)^{-1} &= \ol{(GA - \la)^{-1}G}.
\end{align*}
\end{prop}
In our main results (Theorems \ref{t:main} and \ref{t:sf} below) we require that \eqref{e:ass} is satisfied. Since in applications this condition might be hard to verify, the following sufficient conditions for \eqref{e:ass} may be helpful.
\begin{lem}
The following conditions are sufficient for {\rm\eqref{e:ass}} to hold:
\begin{enumerate}
\item[{\rm (a)}] $G$ is bounded and $\rho(GA)\neq\emptyset$.
\item[{\rm (b)}] $G$ is boundedly invertible and $\rho(AG)\neq\emptyset$.
\item[{\rm (c)}] $(AG)^* = GA$ and $\rho(AG)\neq\emptyset$.
\item[{\rm (d)}] $\rho(AG)\neq\emptyset$, $GA$ is closed and for some $\la\in\rho(AG)\setminus\{0\}$ the operator $G(AG - \la)^{-1}A$ is bounded on $\dom A$.
\end{enumerate}
\end{lem}
\begin{proof}
If (c) holds, then $\sigma(GA) = \sigma((AG)^*) = \{\ol\la : \la\in\sigma(AG)\}$ and hence $\rho(GA)\neq\emptyset$. If (b) holds, then $AG$ and $GA$ are closed and $(AG)^* = GA$. If (a) holds, then $AG$ and $GA$ are closed and $(GA)^* = AG$. Hence, in both cases \eqref{e:ass} follows from (c). Assume now that (d) holds. Then the operator $GA - \la$ is injective. Moreover, for $x\in\dom A$ we have $G(AG - \la)^{-1}Ax - x \in\dom(GA)$ and
$$
(GA - \la)\big(G(AG - \la)^{-1}Ax - x\big) = \la x.
$$
This shows that $\dom A\subset\ran(GA - \la)$ and that $(GA - \la)^{-1}|\dom A$ is bounded. As the closure of $(GA - \la)^{-1}|\dom A$ coincides with $(GA - \la)^{-1}$ (on $\ran(GA - \la)$), it follows that $(GA - \la)^{-1}\in L(\calH)$.
\end{proof}
\begin{rem}\label{r:no_star}
If $AG\in L(\calH)$ or $GA\in L(\calH)$ then either $A,G\in L(\calH)$ or ($*$) does not hold.
\end{rem}
Indeed, if $AG\in L(\calH)$, then $\dom G = \calH$ yields $G\in L(\calH)$. Suppose that ($*$) holds. Then, according to Proposition \ref{p:basic}, we have $GA = (AG)^*\in L(\calH)$ and thus $A\in L(\calH)$.
\begin{prop}\label{p:zero_res}
If \eqref{e:ass} is satisfied, then the following statements are equivalent.
\begin{enumerate}
\item[{\rm (a)}] $AG$ is boundedly invertible.
\item[{\rm (b)}] $\ran(AG) = \calH$.
\item[{\rm (c)}] $GA$ is boundedly invertible.
\item[{\rm (d)}] $\ran(GA) = \calH$.
\item[{\rm (e)}] $A$ and $G$ are boundedly invertible.
\end{enumerate}
In particular, $\sigma(AG) = \sigma(GA)$.
\end{prop}
\begin{proof}
Clearly, (a) implies (b). Assume that (b) holds. Then $\ran A = \calH$ (which implies $\ker A = \{0\}$) and $\dom A\subset\ran G$ (which implies $\ker G = (\ran G)^\perp\subset (\dom A)^\perp = \{0\}$). Hence, $\ker(AG) = \{0\}$ and (a) follows. An analogous argument shows that (c) holds if and only if (d) holds. The equivalence (a)$\Slra$(c) is a consequence of \eqref{e:adj}. Since (a) implies that $A$ is boundedly invertible, (c) implies that $G$ is boundedly invertible and (e) implies both (a) and (c), the proposition is proved.
\end{proof}
\begin{cor}\label{c:sigs}
Assume that \eqref{e:ass} holds. Then for each $\la\in\C$ the following statements hold.
\begin{enumerate}
\item[{\rm (i)}] $\la\in\sigma(AG)\;\Llra\;\ol\la\in\sigma(AG)$.
\item[{\rm (ii)}] $\la\in\sigma(AG)\setminus\sap(AG)\;\Lra\;\ol\la\in\sigma_p(AG)$.
\end{enumerate}
\end{cor}
\begin{proof}
From Propositions \ref{p:basic} and \ref{p:zero_res} it follows that $\la\in\rho(AG)$ implies
$$
\ol\la\in\rho((AG)^*) = \rho(GA) = \rho(AG).
$$
This proves (i). Let us prove (ii) for $\la\neq 0$. If $\la\in\sigma(AG)\setminus\sap(AG)$, $\la\neq 0$, then it is well-known that $\ol\la\in\sigma_p((AG)^*) = \sigma_p(GA)$. Hence, there exists $x\in\dom(GA)\setminus\{0\}$ such that $GAx = \ol\la x$. Therefore, $GAx\in\dom A$ and $(AG - \ol\la)Ax = A(GA - \ol\la)x = 0$. Since $Ax\neq 0$ (otherwise, $GAx = 0$ and thus $x=0$), we conclude that $\ol\la\in\sigma_p(AG)$. But (ii) also holds for $\la=0$ as in this case the left-hand side of the implication (ii) is never true. To see this, note that $0\notin\sap(AG)$ implies that there is a neighborhood $\calU$ of zero such that $\calU\cap\sap(AG) = \emptyset$. Now, from (ii) for $\la\neq 0$ it follows that $\calU\setminus\{0\}\subset\rho(AG)$. Hence, the Fredholm index of $AG - \la$ is constant on $\calU$ and equal to zero. And as $\ker(AG) = \{0\}$, it follows that also $0\in\rho(AG)$.
\end{proof}
If \eqref{e:ass} is satisfied, by Corollary \ref{c:sigs} there exists $\la_0\in\C\setminus\R$ such that $\la_0,\ol{\la_0}\in\rho(AG)$, and thus, the operator
\begin{equation}\label{e:G0}
G_0 := G(AG - \la_0)^{-1}(AG - \ol{\la_0})^{-1}
\end{equation}
is bounded. Moreover, due to Proposition \ref{p:basic} we have
\begin{align*}
G_0^*
&= (GA - \la_0)^{-1}\big(G(AG - \la_0)^{-1}\big)^*\\
&= (GA - \la_0)^{-1}\big((GA - \la_0)^{-1}G\big)^*\\
&= (GA - \la_0)^{-1}G(AG - \ol{\la_0})^{-1}\\
&= G(AG - \la_0)^{-1}(AG - \ol{\la_0})^{-1} = G_0
\end{align*}
and
\begin{align*}
G_0AG
&= G(AG - \la_0)^{-1}(AG - \ol{\la_0})^{-1}AG\\
&\subset GAG(AG - \la_0)^{-1}(AG - \ol{\la_0})^{-1}\\
&= GAG_0 = (G_0AG)^*.
\end{align*}
This shows that $G_0$ is selfadjoint and that $AG$ is $G_0$-symmetric. Equivalently, $AG$ is symmetric with respect to the inner product
\begin{equation}\label{e:ip}
[x,y] := (G_0x,y),\quad x,y\in\calH.
\end{equation}
Note that the inner product $\product$ is in general not a Krein space inner product. It might even be degenerate.
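For instance, let $\calH = \C^2$, $A = I$ and $G = \mat{1}{0}{0}{0}$. Then $AG = GA = G$, condition \eqref{e:ass} is satisfied, and
$$
G_0 = G(G - \la_0)^{-1}(G - \ol{\la_0})^{-1} = \mat{|1 - \la_0|^{-2}}{0}{0}{0},
$$
so that $[x,y] = |1 - \la_0|^{-2}\,x_1\ol{y_1}$ is degenerate.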
For the rest of this section we assume that \eqref{e:ass} holds and fix $\la_0\in\rho(AG)\setminus\R$, the operator $G_0$ in \eqref{e:G0} and the inner product $\product$ in \eqref{e:ip}. The spectra of positive and negative type of $AG$ are connected with the inner product $\product$ which itself depends on $\la_0\in\rho(AG)\setminus\R$. The following lemma shows that $\sp(AG)$ and $\sm(AG)$ are in fact independent of $\la_0$.
\begin{lem}\label{l:spsp}
Let $\la\in\C$. Then $\la\in\sp(AG)$ {\rm (}$\la\in\sm(AG)${\rm )} if and only if for each sequence $(x_n)\subset\dom AG$ with $\|x_n\| = 1$ and $(AG - \la)x_n\to 0$ as $n\to\infty$ we have
$$
\liminf_{n\to\infty}\,(Gx_n,x_n) > 0\quad\left(\limsup_{n\to\infty}\,(Gx_n,x_n) < 0,\text{ respectively}\right).
$$
\end{lem}
\begin{proof}
Assume that the condition in the lemma on the approximate eigensequences of $AG$ holds and let $(x_n)\subset\dom AG$ with $\|x_n\| = 1$ and $(AG - \la)x_n\to 0$ as $n\to\infty$. Set
$$
y_n := (\la - \la_0)(AG - \la_0)^{-1}x_n.
$$
Then we have
$$
(AG - \la)y_n = (\la - \la_0)(x_n + (\la_0 - \la)(AG - \la_0)^{-1}x_n) = (\la - \la_0)(x_n - y_n).
$$
On the other hand,
$$
(AG - \la)y_n = (\la - \la_0)(AG - \la_0)^{-1}(AG - \la)x_n\,\lra\,0
$$
as $n\to\infty$. Hence $\|y_n\|\to 1$ and since
$$
[x_n,x_n] = (G(AG - \la_0)^{-1}x_n,(AG - \la_0)^{-1}x_n) = \frac{1}{|\la - \la_0|^2}(Gy_n,y_n)
$$
we conclude
$$
\liminf_{n\to\infty}\,[x_n,x_n] = \frac{1}{|\la - \la_0|^2}\,\liminf_{n\to\infty}\,(Gy_n,y_n) > 0.
$$
Conversely, let $\la\in\sp(AG)$ and let $(x_n)\subset\dom AG$ with $\|x_n\| = 1$ and $(AG - \la)x_n\to 0$ as $n\to\infty$. Since
$$
(Gx_n,x_n) = [(AG - \la_0)x_n,(AG - \la_0)x_n],
$$
we obtain from $(AG - \la)x_n\to 0$ as $n\to\infty$:
$$
\liminf_{n\to\infty}\,(Gx_n,x_n) = |\la - \la_0|^2\,\liminf_{n\to\infty}\,[x_n,x_n] > 0,
$$
which proves the assertion.
\end{proof}
\begin{cor}\label{c:zero_+}
Assume that \eqref{e:ass} holds and that $0\in\sp(AG)\cup\sm(AG)$. Then $G$ is boundedly invertible.
\end{cor}
\begin{proof}
Suppose that, e.g., $0\in\sp(AG)$ and that there exists a sequence $(x_n)\subset\dom G$ with $\|x_n\|=1$ for $n\in\N$ and $Gx_n\to 0$ as $n\to\infty$. Define
$$
y_n := -\la_0(AG - \la_0)^{-1}x_n\,\in\,\dom(GAG)
$$
as in the proof of Lemma \ref{l:spsp} (with $\la = 0$). Then $AGy_n = \la_0(y_n - x_n)$ and $AGy_n = -\la_0A(GA - \la_0)^{-1}Gx_n\to 0$ as $n\to\infty$ as $A(GA - \la_0)^{-1}$ is bounded. Therefore, $\|y_n\|\to 1$ and since $0\in\sp(AG)$, we conclude $\liminf_{n\to\infty}\,(Gy_n,y_n) > 0$ from Lemma \ref{l:spsp}. But this contradicts $Gy_n = -\la_0(GA - \la_0)^{-1}Gx_n\to 0$ as $n\to\infty$.
\end{proof}
\begin{lem}\label{l:krein}
Assume that \eqref{e:ass} is satisfied. Let $\calL\subset\dom AG$ be a closed subspace such that $AG\calL\subset\calL$ and $0\in\rho(AG|\calL)$. If $\calH = \calL + \calL^\gperp$, then $(\calL,\product)$ is a Krein space.
\end{lem}
\begin{proof}
Let $P_\calL$ be the orthogonal projection (with respect to $\hproduct$) onto $\calL$ in $\calH$. Then, with $G_\calL := P_\calL (G_0|\calL)\in L(\calL)$ we have
$$
[\ell_1,\ell_2] = (G_\calL\ell_1,\ell_2) \quad\text{for }\;\ell_1,\ell_2\in\calL.
$$
Hence, $(\calL,\product)$ is a Krein space if and only if $G_\calL$ is boundedly invertible. Let $\ell\in\ker G_\calL$. By assumption, for any $x\in\calH$ we find $x_1\in\calL$ and $x_2\in\calL^\gperp$ such that $x = x_1 + x_2$. It follows that
$$
(G_0\ell,x) = [\ell,x_1+x_2] = [\ell,x_1] = (G_\calL\ell,x_1) = 0,
$$
and thus $G_0\ell = 0$. From
$$
0 = G(AG - \la_0)^{-1}(AG - \ol{\la_0})^{-1}\ell = (GA - \la_0)^{-1}(GA - \ol{\la_0})^{-1}G\ell
$$
we conclude $G\ell = 0$ and hence $AG\ell = 0$ which implies $\ell = 0$ as $0\in\rho(AG|\calL)$. Therefore we have $\calH = \calL [\ds] \calL^\gperp$ (since $\ker G_\calL = \calL\cap\calL^\gperp$).
Now, suppose that there exists a sequence $(\ell_n)\subset\calL$ with $\|\ell_n\|=1$ and $\|G_\calL\ell_n\| \to 0$ as $n\to\infty$. If by $P$ we denote the ($G_0$-symmetric) projection onto $\calL$ with $\ker P = \calL^\gperp$, we obtain
\begin{align*}
\|G_0\ell_n\|^2
&= (G_0\ell_n,PG_0\ell_n) + (G_0\ell_n,(I - P)G_0\ell_n)\\
&= (P_\calL G_0\ell_n, PG_0\ell_n) + [\ell_n, (I - P)G_0\ell_n]\\
&= (G_\calL\ell_n,PG_0\ell_n)\\
&\le \|G_\calL \ell_n\|\cdot\|P\|\cdot\|G_0\|.
\end{align*}
Hence, $G_0\ell_n\to 0$ as $n\to\infty$. It is easy to see that $\calL^\gperp$ is $AG$-invariant. Hence, $\calL$ is $(AG - \la_0)^{-1}$-invariant. And since $AG|\calL$ is bounded, we conclude
\begin{align*}
\|AG_0\ell_n\|
&\le \|(AG - \la_0)|\calL\|\cdot\|(AG - \la_0)^{-1}AG_0\ell_n\|\\
&= \|(AG - \la_0)|\calL\|\cdot\|A(GA - \la_0)^{-1}G_0\ell_n\|\\
&\le \|(AG - \la_0)|\calL\|\cdot\|A(GA - \la_0)^{-1}\|\cdot\|G_0\ell_n\|.
\end{align*}
Thus, we have $(AG - \la_0)^{-1}(AG - \ol{\la_0})^{-1}AG\ell_n = AG_0\ell_n\to 0$, and hence $AG\ell_n\to 0$ as $n\to\infty$, contradicting $0\in\rho(AG|\calL)$. The lemma is proved.
\end{proof}
\begin{prop}\label{p:iso}
Assume that \eqref{e:ass} is satisfied. Then for each $\la\in\C$ the following statements hold.
\begin{enumerate}
\item[{\rm (i)}] If $\la\neq 0$ is an isolated point of the spectrum of $AG$ {\rm (}and hence also $\ol\la${\rm )}, then the inner product space $(E(AG;\{\la,\ol\la\})\calH,\product)$ is a Krein space.
\item[{\rm (ii)}] If $\la$ is a pole of the resolvent of $AG$ of order $\nu$ then $\ol\la$ is a pole of the resolvent of $AG$ of order $\nu$.
\end{enumerate}
\end{prop}
\begin{proof}
For the proof of (i) set $E := E(AG;\{\la,\ol\la\})$. As $E$ is $\product$-symmetric by Lemma \ref{l:E}, it follows that $(I - E)\calH\subset (E\calH)^\gperp$. And since $\calH = E\calH\ds(I - E)\calH$, Lemma \ref{l:krein} yields the assertion.
By \cite[Theorem VII.3.18]{ds}, for $\la\notin\R$ (the statement for $\la\in\R$ is trivial) the point $\la$ is a pole of the resolvent of $AG$ of order $\nu$ if and only if
$$
(AG - \la)^\nu E(AG;\la) = 0 \;\text{ and }\; (AG - \la)^{\nu - 1}E(AG;\la)\neq 0.
$$
Let $x,v\in E(AG;\ol\la)\calH$ be arbitrary. From Lemma \ref{l:E} we obtain
\begin{align*}
[(AG - \ol\la)^\nu x,v]
&= [E(AG;\ol\la)(AG - \ol\la)^\nu x, v]\\
&= [(AG - \ol\la)^\nu x,E(AG;\la)v] = 0.
\end{align*}
Furthermore, for $u\in E(AG;\la)\calH$ we have
$$
[(AG - \ol\la)^\nu x,u] = [x,(AG - \la)^\nu u] = 0.
$$
Hence, $[(AG - \ol\la)^\nu x,y] = 0$ for all $y\in E(AG;\{\la,\ol\la\})\calH$. But $(E(AG;\{\la,\ol\la\})\calH,\product)$ is a Krein space by (i), and we obtain $(AG - \ol\la)^\nu x = 0$.
\end{proof}
\begin{prop}\label{p:bounded_non-negative}
Let $A_0$ be a bounded selfadjoint operator in $\calH$ and assume that $G_0A_0G_0\ge 0$. Then the following statements hold for the bounded $G_0$-symmetric operator $A_0G_0$:
\begin{enumerate}
\item[{\rm (i)}] $\sigma(A_0G_0)\subset\R$\,,
\item[{\rm (ii)}] $(0,\infty)\cap\sigma(A_0G_0)\subset\sp(A_0G_0)$,
\item[{\rm (iii)}] $(-\infty,0)\cap\sigma(A_0G_0)\subset\sm(A_0G_0)$.
\end{enumerate}
\end{prop}
\begin{proof}
Let $\la\in\sap(A_0G_0)\setminus\{0\}$ and let $(x_n)\subset\calH$ with $\|x_n\| = 1$, $n\in\N$, and $(A_0G_0 - \la)x_n\to 0$ as $n\to\infty$. We claim that it is not possible that $\lim_{n\to\infty}\,(G_0A_0G_0x_n,x_n) = 0$. Suppose the contrary. Then, from the Cauchy-Bun\-ya\-kowski inequality we obtain
\begin{align*}
\|G_0A_0G_0x_n\|^4 \le (G_0A_0G_0x_n,x_n)\,((G_0A_0G_0)^2x_n,G_0A_0G_0x_n),
\end{align*}
and hence $G_0A_0G_0x_n\to 0$ as $n\to\infty$. Applying $G_0$ to $(A_0G_0 - \la)x_n\to 0$ yields $\la G_0x_n\to 0$, hence $G_0x_n\to 0$ and thus $A_0G_0x_n\to 0$ as $n\to\infty$. But then $\la x_n\to 0$, which contradicts $\|x_n\| = 1$ and $\la\neq 0$.
Assume that there exists $\la\in\sap(A_0G_0)\setminus\R$. Then there exists $(x_n)\subset\calH$ with $\|x_n\| = 1$ and $(A_0G_0 - \la)x_n\to 0$ as $n\to\infty$. Since $[A_0G_0x_n,x_n] - \la [x_n,x_n]$ tends to zero as $n\to\infty$ and $[A_0G_0x_n,x_n]$ and $[x_n,x_n]$ both are real for each $n$, it follows from $\la\notin\R$ that $[A_0G_0x_n,x_n]$ tends to zero which contradicts the statement proved above. Hence $\sap(A_0G_0)\setminus\R = \emptyset$, and from Corollary \ref{c:sigs}(ii) we obtain $\sigma(A_0G_0)\subset\R$.
Let $\la\in\sigma(A_0G_0)$, $\la > 0$. Then $\la\in\sap(A_0G_0)$ by Corollary \ref{c:sigs}(ii). Let $(x_n)\subset\calH$ with $\|x_n\| = 1$ and $(A_0G_0 - \la)x_n\to 0$ as $n\to\infty$. Suppose $\liminf_{n\to\infty}\,[x_n,x_n]\le 0$. Then from
$$
\la\liminf_{n\to\infty}\,[x_n,x_n] = \liminf_{n\to\infty}\,\big([(\la - A_0G_0)x_n,x_n] + (G_0A_0G_0x_n,x_n)\big)\ge 0
$$
it is seen that there exists a subsequence $(x_{n_k})$ such that $(G_0A_0G_0x_{n_k},x_{n_k})$ tends to zero as $k\to\infty$. But this is a contradiction to the statement proved above, and it follows that
$$
\liminf_{n\to\infty}\,[x_n,x_n] > 0.
$$
This shows (ii), and (iii) can be shown similarly.
\end{proof}
\begin{rem}
In the above proof the special representation $G_0 = G(AG - \la_0)^{-1}(AG - \ol{\la_0})^{-1}$ of $G_0$ was not used. Therefore, Proposition \ref{p:bounded_non-negative} also holds for arbitrary bounded selfadjoint operators $G_0$ in $\calH$.
\end{rem}
As a corollary of Proposition \ref{p:bounded_non-negative} we give another proof of a theorem of Radjavi and Rosenthal (see \cite[Proposition 6.8]{rr}). Recall that a closed subspace is hyperinvariant for $T\in L(X)$, $X$ a Banach space, if it is invariant for any operator in $L(X)$ which commutes with $T$.
\begin{cor}
Let $S,T\in L(\calH)$ be selfadjoint such that $STS\ge 0$. If $TS$ is not a constant multiple of the identity, then $TS$ has a non-trivial hyperinvariant subspace.
\end{cor}
\begin{proof}
If $\sigma(TS)\neq\{0\}$, then the assertion follows from Proposition \ref{p:bounded_non-negative} and Theorem \ref{t:lsf} (note that a maximal spectral subspace is hyperinvariant, cf.\ \cite[Proposition 2.3.2]{cf}). Hence, suppose that $\sigma(TS) = \{0\}$. It is no restriction to assume that $S$ and $T$ are injective. Otherwise, $\ker(TS)$ or $\ol{\ran TS} = \ker(ST)^\perp$ is hyperinvariant for $TS$ or $TS = 0$. Hence, $T$ is a non-negative operator and Proposition \ref{p:basic} yields $\sigma(T^{1/2}ST^{1/2}) = \{0\}$. But $T^{1/2}ST^{1/2}$ is selfadjoint and thus coincides with the zero operator. Since $T^{1/2}$ is injective with dense range, this yields $S = 0$, a contradiction.
\end{proof}
\section{Definitizable pairs of selfadjoint operators}\label{s:def}
In the following we extend the notion of definitizability of selfadjoint operators in Krein spaces to products (or pairs) of selfadjoint operators in a Hilbert space. As in the previous section let $A$ and $G$ be selfadjoint operators in the Hilbert space $(\calH,\hproduct)$. Again, if \eqref{e:ass} is satisfied for $A$ and $G$, we fix $\la_0\in\rho(AG)\setminus\R$, define the bounded selfadjoint operator $G_0$ as in \eqref{e:G0} and set $[\cdot,\cdot] := (G_0\cdot,\cdot)$.
\begin{defn}\label{d:def}
An ordered pair $(A,G)$ of selfadjoint operators is called {\em definitizable} if the resolvent sets of $AG$ and $GA$ are non-empty and if there exists a polynomial $p\neq 0$ with real coefficients such that
$$
(p(AG)x,Gx)\ge 0\quad\text{for all }x\in\dom(AG)^{\max\{1,d\}},
$$
where $d := \deg(p)$. The polynomial $p$ is called {\em definitizing} for $(A,G)$.
\end{defn}
If $G$ is bounded and boundedly invertible, then $AG$ is selfadjoint in the Krein space $(\calH,(G\cdot,\cdot))$ and Definition \ref{d:def} coincides with the definition of definitizability of the operator $AG$ in this Krein space. The next lemma shows that the definitizability of $(A,G)$ can also be expressed by means of the inner product $\product$.
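Two simple examples: if $A\ge 0$ and \eqref{e:ass} holds, then $p(\la) = \la$ is definitizing for $(A,G)$, since $(AGx,Gx) = (A(Gx),Gx)\ge 0$ for all $x\in\dom AG$; similarly, if $G\ge 0$, then the constant polynomial $p = 1$ is definitizing, since $(x,Gx)\ge 0$ for all $x\in\dom AG$.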
\begin{lem}\label{l:indep}
Assume that \eqref{e:ass} is satisfied. Let $p\neq 0$ be a polynomial with real coefficients. Then the following statements are equivalent.
\begin{enumerate}
\item[{\rm (i)}] $(A,G)$ is definitizable with definitizing polynomial $p$.
\item[{\rm (ii)}] $[p(AG)x,x]\ge 0$ holds for all $x\in\dom p(AG)$.
\end{enumerate}
\end{lem}
\begin{proof}
Let $d$ be the degree of $p$. If (i) holds and $y\in\dom(AG)^d$, then with $x := (AG - \la_0)^{-1}y\in\dom(AG)^{d+1}$ we have
\begin{align*}
[p(AG)y,y]
&= (p(AG)(AG - \la_0)x,G_0(AG - \la_0)x)\\
&= ((AG - \la_0)p(AG)x,(GA - \ol{\la_0})^{-1}Gx)\\
&= (p(AG)x,Gx)\ge 0.
\end{align*}
Conversely, assume that (ii) holds and let $x\in\dom(AG)^{d+1}$. Then with $y := (AG - \la_0)x\in\dom(AG)^d$ the following holds:
\begin{align*}
(p(AG)x,Gx)
&= (p(AG)(AG - \la_0)^{-1}y,G(AG - \la_0)^{-1}y)\\
&= (p(AG)y,(GA - \ol{\la_0})^{-1}G(AG - \la_0)^{-1}y)\\
&= (p(AG)y,G_0y) = [p(AG)y,y]\ge 0.
\end{align*}
Hence, the proof is finished if $d = 0$. Let $d > 0$ and $x\in\dom(AG)^d$. As $\rho(AG)\neq\emptyset$, there exists a sequence $(x_n)\subset\dom(AG)^{d+1}$ such that for $k=0,1,\ldots,d$ we have $(AG)^kx_n\to(AG)^kx$ as $n\to\infty$. Moreover, due to $\dom AG\subset\dom G$ and the closedness of $AG$ and $G$ there exists $c > 0$ such that
$$
\|Gu\|\,\le\,c\big(\|u\| + \|AGu\|\big)\quad\text{for all }u\in\dom AG.
$$
Therefore, from $x_n\to x$ and $AGx_n\to AGx$ we conclude $Gx_n\to Gx$ as $n\to\infty$. This gives $(p(AG)x,Gx) = \lim_{n\to\infty}\,(p(AG)x_n,Gx_n)\ge 0$. The lemma is proved.
\end{proof}
The proof of the following lemma is similar to that of Lemma \ref{l:indep} and is therefore omitted.
\begin{lem}
Let $p\neq 0$ be a polynomial with real coefficients and degree $d$. Then the following holds:
\begin{enumerate}
\item[{\rm (a)}] If $(A,G)$ is definitizable with definitizing polynomial $p$, then $(G,A)$ is definitizable with definitizing polynomial $\la p(\la)$.
\item[{\rm (b)}] If $G$ is boundedly invertible, then $(A,G)$ is definitizable with definitizing polynomial $p$ if and only if the relation $(p(GA)x,G^{-1}x)\ge 0$ holds for all $x\in\dom(GA)^{\max\{1,d\}}$.
\end{enumerate}
\end{lem}
It is well-known (see \cite{l}) that the spectrum of a definitizable operator $T$ in a Krein space is real -- with the possible exception of a finite number of non-real poles of the resolvent of $T$ -- and that $T$ has a spectral function on $\R$ with a finite number of singularities. The following two theorems generalize this result to definitizable pairs of selfadjoint operators.
\begin{thm}\label{t:main}
If $(A,G)$ is definitizable, then the following statements hold.
\begin{enumerate}
\item[{\rm (a)}] The non-real spectrum of $AG$ consists of a finite number of points which are poles of the resolvent of $AG$. Each such point is a zero of every definitizing polynomial for $(A,G)$.
\item[{\rm (b)}] If $\la\in\sigma(AG)\cap(\R\setminus\{0\})$ and $p(\la) > 0$ for some definitizing polynomial $p$ for $(A,G)$, then $\la\in\sp(AG)$.
\item[{\rm (c)}] If $\la\in\sigma(AG)\cap(\R\setminus\{0\})$ and $p(\la) < 0$ for some definitizing polynomial $p$ for $(A,G)$, then $\la\in\sm(AG)$.
\end{enumerate}
\end{thm}
\begin{proof}
Let $p$ be a definitizing polynomial for $(A,G)$ and set $m := \deg(p)+1$. Let $z_0\in\C\setminus\R$ such that $p(z_0)\neq 0$. First of all let us prove that there exists some $\la_1\in\rho(AG)$ such that
$$
z_0^2p(z_0)(z_0 - \la_1)^{-m-1}(z_0 - \ol{\la_1})^{-m-1}\,\notin\,\R.
$$
To see this, choose two open intervals $J_1$ and $J_2$ such that $0\notin J_2$, $z_0,\ol{z_0}\notin J_1\times J_2$ and $J_1\times J_2\subset\rho(AG)$. With $\la = x + iy\in J_1\times J_2$ and $z_0 = \alpha_0 + i\beta_0$ we have
$$
(z_0 - \la)(z_0 - \ol\la) = (\alpha_0 - x)^2 - \beta_0^2 + y^2 + 2i\beta_0(\alpha_0 - x) =: f(x,y).
$$
The function $f : J_1\times J_2\to\R^2$ has the derivative
$$
f'(x,y) = \mat{-2(\alpha_0 - x)}{2y}{-2\beta_0}{0}.
$$
Its determinant equals $4\beta_0y$ and does therefore not vanish as $0\notin J_2$ and $z_0\notin\R$. Hence, $f(J_1\times J_2)$ is an open set in $\C\setminus\{0\}$, and thus also
$$
\{z_0^2p(z_0)(z_0 - \la)^{-m-1}(z_0 - \ol\la)^{-m-1} : \la\in J_1\times J_2\} = \{z_0^2p(z_0)z^{-m-1} : z\in f(J_1\times J_2)\}
$$
is open.
By Lemma \ref{l:indep} it is no restriction to assume $\la_0 = \la_1\,(\neq z_0)$. For $k=1,2$ define the rational functions
\begin{equation}\label{e:r}
r_k(\la) := \la^2p(\la)(\la - \la_0)^{-m-k}(\la - \ol{\la_0})^{-m-k}.
\end{equation}
Then $r_1(z_0)\notin\R$. Define the bounded operator
\begin{equation}\label{e:A0}
A_0 := AGAp(GA)(GA - \la_0)^{-m}(GA - \ol{\la_0})^{-m}.
\end{equation}
It is not difficult to see that $A_0$ is selfadjoint. Moreover, we observe that
\begin{align*}
Gr_2(AG)
&= GAGAGp(AG)(AG - \la_0)^{-m-2}(AG - \ol{\la_0})^{-m-2}\\
&= G_0AGp(AG)AG(AG - \la_0)^{-m-1}(AG - \ol{\la_0})^{-m-1}\\
&= G_0A_0G_0.
\end{align*}
Similarly, one proves that
$$
r_1(AG) = A_0G_0.
$$
In addition, $G_0A_0G_0\ge 0$ holds as for $x\in\calH$ we have
$$
y := AG(AG - \la_0)^{-m-1}x\in\dom p(AG)
$$
and
\begin{align*}
(G_0A_0G_0x,x)
&= (GAGAGp(AG)(AG - \la_0)^{-m-2}(AG - \ol{\la_0})^{-m-2}x,x)\\
&= (GAG(AG - \ol{\la_0})^{-m-2}(AG - \la_0)^{-1}p(AG)y,x)\\
&= (GA(AG - \ol{\la_0})^{-m-1}G_0p(AG)y,x)\\
&= (G_0p(AG)y,y) = [p(AG)y,y]\ge 0.
\end{align*}
By virtue of Proposition \ref{p:bounded_non-negative} we obtain $\sigma(r_1(AG)) = \sigma(A_0G_0)\subset\R$. And since $r_1(\cdot)$ is analytic in a neighborhood of $\sigma(AG)\cup\{\infty\}$, it is a consequence of the spectral mapping theorem \cite[Theorem VII.9.5]{ds} that $r_1(\sigma(AG))\subset\R$ and thus $z_0\in\rho(AG)$. To complete the proof of (a) it remains to show that each $\la\in\sigma(AG)\setminus\R$ is a pole of the resolvent of $AG$. To this end we show that
\begin{equation}\label{e:delete}
p(AG)E(AG;\{\la,\ol\la\}) = 0.
\end{equation}
From this it follows that also $p(AG)E(AG;\la) = 0$. And since the spectrum of $AG|E(AG;\la)\calH$ coincides with $\{\la\}$, we have $(AG - \la)^\alpha E(AG;\la) = 0$, where $\alpha$ is the order of $\la$ as a zero of $p$. This and \cite[Theorem VII.3.18]{ds} imply the assertion. So, let us prove \eqref{e:delete}. Let $y\in E(AG;\la)\calH$ and $z\in E(AG;\ol\la)\calH$ be arbitrary. By Lemma \ref{l:E} we have $[p(AG)y,y] = [E(AG;\la)p(AG)y,y] = [p(AG)y,E(AG;\ol\la)y] = 0$, $[p(AG)z,z] = 0$ and thus
\begin{align*}
[p(AG)y,z] + [p(AG)z,y]
&= [p(AG)y,y+z] + [p(AG)z,y+z]\\
&= [p(AG)(y+z),y+z]\ge 0.
\end{align*}
But at the same time,
\begin{align*}
-[p(AG)y,z] - [p(AG)z,y]
&= [p(AG)y,-z] + [p(AG)(-z),y]\\
&= [p(AG)(y-z),y-z]\ge 0.
\end{align*}
Hence, $[p(AG)(y+z),y+z] = 0$ and thus $[p(AG)x,x] = 0$ holds for all $x\in E(AG;\{\la,\ol\la\})\calH$. By polarization we obtain $[p(AG)x,y] = 0$ for all vectors $x,y\in E(AG;\{\la,\ol\la\})\calH$. But $(E(AG;\{\la,\ol\la\})\calH,\product)$ is a Krein space by Proposition \ref{p:iso}(i), and $p(AG)x = 0$ for all $x\in E(AG;\{\la,\ol\la\})\calH$ follows. Hence, (a) is proved.
For the proof of (b) we observe that by (a) there exists a definitizing polynomial $p$ for $(A,G)$ such that $p(\la_0)\neq 0$. Define the rational function $r_1$ as in \eqref{e:r}. Let $\la_1\in\sigma(AG)\cap(\R\setminus\{0\})$ be such that $p(\la_1) > 0$. Then also $r_1(\la_1) > 0$, and there exists a function $g$ which is analytic on $\calU := \ol\C\setminus\{\la_0,\ol{\la_0}\}$ such that
$$
r_1(\la) - r_1(\la_1) = g(\la)(\la - \la_1),\quad\la\in\calU.
$$
It is obvious that $g$ is a rational function with the poles $\la_0$ and $\ol{\la_0}$, both of order $m+1$. Therefore, there exists a polynomial $q$ (with real coefficients) with $q(\la_0)\neq 0$ (and hence also $q(\ol{\la_0})\neq 0$) such that
$$
g(\la) = q(\la)(\la - \la_0)^{-m-1}(\la - \ol{\la_0})^{-m-1}.
$$
From the identity
$$
\la^2p(\la) - r_1(\la_1)(\la - \la_0)^{m+1}(\la - \ol{\la_0})^{m+1} = q(\la)(\la - \la_1)
$$
we see that $\deg(q) = 2m+1$. Hence, the operator $g(AG)$ is bounded. Let $(x_n)\subset\dom AG$ be a sequence with $\|x_n\| = 1$ and $(AG - \la_1)x_n\to 0$ as $n\to\infty$. With the operator $A_0$ from \eqref{e:A0} we have
$$
(A_0G_0 - r_1(\la_1))x_n = (r_1(AG) - r_1(\la_1))x_n = g(AG)(AG - \la_1)x_n\to 0
$$
as $n\to\infty$. And since $G_0A_0G_0\ge 0$ it follows from $r_1(\la_1) > 0$ and Proposition \ref{p:bounded_non-negative} that
$$
\liminf_{n\to\infty}\,[x_n,x_n] > 0.
$$
This shows that $\la_1\in\sp(AG)$. The assertion (c) is proved similarly.
\end{proof}
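In finite dimensions the condition ($*$) is automatic, and the simplest case of the theorem can be observed directly. The following numerical sketch (an illustration only; the matrices are hypothetical choices, and $[x,y] = (Gx,y)$ is the indefinite inner product) takes $A$ positive definite, so that $(AGx,Gx)\ge 0$ holds for every selfadjoint invertible $G$ and $p(\la) = \la$ is definitizing: the spectrum of $AG$ is then real, and eigenvalues with $p(\la) > 0$ are of positive type while those with $p(\la) < 0$ are of negative type.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Hypothetical finite-dimensional pair: A positive definite, so that
# (AGx, Gx) >= 0 holds automatically and p(lambda) = lambda definitizes (A, G);
# G is selfadjoint, invertible and indefinite.
A = np.diag(rng.uniform(0.5, 2.0, n))
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
G = Q @ np.diag([3.0, 2.0, 1.0, -1.0, -2.0, -3.0]) @ Q.T

eigvals, eigvecs = np.linalg.eig(A @ G)

# (a): the spectrum of AG is real (AG is similar to A^{1/2} G A^{1/2}).
assert np.max(np.abs(eigvals.imag)) < 1e-10

# (b), (c): with p(lambda) = lambda, the sign of [x, x] = (Gx, x) on an
# eigenvector agrees with the sign of the eigenvalue.
for lam, x in zip(eigvals.real, np.real(eigvecs.T)):
    assert np.sign(x @ G @ x) == np.sign(lam)
```

In finite dimensions the type of an eigenvalue can be read off from exact eigenvectors instead of the approximate eigensequences used in the proof.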
The following example shows that the condition ($*$) is essential for Theorem \ref{t:main} to be valid.
\begin{ex}
Let $T$ be a closed and densely defined symmetric operator in the Hilbert space $\calH$ which is uniformly positive but not selfadjoint. Then $T$ has a uniformly positive selfadjoint extension $A$ (e.g., the Friedrichs extension). Since for $x\in\dom(T^*T)$ we have $(T^*Tx,x) = \|Tx\|^2\ge\delta\|x\|^2$ with some $\delta > 0$, the selfadjoint operator $|T| := (T^*T)^{1/2}$ is boundedly invertible. We set $G := |T|^{-1}$. Then $AG = T|T|^{-1}$ and hence $(AGx,Gx)\ge 0$ for $x\in\dom AG$. But since $AG$ is bounded while $A$ is unbounded, it follows from Remark \ref{r:no_star} that ($*$) is not satisfied. Let us now see that the statements (a)--(c) of Theorem \ref{t:main} fail. For this we note that for $x\in\dom |T|$ and $y\in\calH$ we have
$$
\big(T|T|^{-1}x,T|T|^{-1}y\big) = \big((T^*T)^{1/2}x,(T^*T)^{-1/2}y\big) = (x,y)
$$
which shows that the operator $AG$ is an isometry with $\dom AG = \calH$ and $\ran AG = \ran T\neq\calH$. The spectrum of $AG$ therefore coincides with the closed unit disk.
\end{ex}
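The operator $AG$ in the example is an isometry with non-dense range; its model case is the unilateral shift. The following numerical sketch (an illustration only, not tied to a concrete choice of $T$) shows on finite truncations of the shift why the spectrum fills the closed unit disk: for $|\la| < 1$ the truncations become nearly singular (reflecting that $\la$ is an eigenvalue of the adjoint shift, hence a spectral point), while for $|\la| > 1$ the resolvent bound $\|(S-\la)^{-1}\|\le(|\la|-1)^{-1}$ keeps them well conditioned.

```python
import numpy as np

N = 200
S = np.diag(np.ones(N - 1), -1)   # truncated unilateral shift (an isometry as N -> infinity)

def smin(lam):
    # smallest singular value of S - lam on the truncation
    return np.linalg.svd(S - lam * np.eye(N), compute_uv=False)[-1]

# |lam| < 1: the smallest singular value decays like |lam|^N, so lam lies
# in the spectrum of the shift in the limit.
assert smin(0.5) < 1e-8
assert smin(0.3 + 0.4j) < 1e-8

# |lam| > 1: the smallest singular value stays bounded below by roughly |lam| - 1.
assert smin(1.5) > 0.3
```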
Assume that $(A,G)$ is definitizable. Theorem \ref{t:main} shows that there are only finitely many real points which are not contained in $\rho(AG)\cup\sp(AG)\cup\sm(AG)$. In analogy to definitizable operators in Krein spaces these exceptional points will be called the {\it critical points} of $(A,G)$. By Theorem \ref{t:main} each non-zero critical point of $(A,G)$ is a zero of every definitizing polynomial for $(A,G)$. Moreover, if $G$ is not boundedly invertible, then due to Proposition \ref{p:zero_res} and Corollary \ref{c:zero_+} zero is a critical point of $(A,G)$. The set of the critical points of $(A,G)$ is denoted by $c(A,G)$.
\begin{thm}\label{t:sf}
Assume that $(A,G)$ is definitizable. Then the operator $AG$ possesses a spectral function on $\R$ with the set of critical points $s := c(A,G)$.
\end{thm}
\begin{proof}
The proof is divided into several steps. In step 1 we define the spec\-tral projection $E(\Delta)$ for sets $\Delta$ which have a positive distance to $s$. In step 2, $E(\Delta)$ is defined for compact intervals. This will be used in step 3 to define $E(\Delta)$ for all $\Delta\in\mathfrak R_s(\R)$.
{\bf 1.} By $\mathfrak R_{s,0}(\R)$ we denote the system of all sets $\Delta$ in $\mathfrak R_s(\R)$ with $\Delta\cap s = \emptyset$. In this first step of the proof we define $E(\Delta)$ for $\Delta\in\mathfrak R_{s,0}(\R)$ and prove that the set function $E$ on $\mathfrak R_{s,0}(\R)$ satisfies (S1)--(S5) in Definition \ref{d:sf}. Let $p$ be a definitizing polynomial for $(A,G)$ and let $Z$ be the set of zeros of $p$. By Theorem \ref{t:main} the points in $Z$ divide the real line into intervals which are of either positive or negative type with respect to $AG$. The set $Z$ contains the critical points of $(A,G)$, but there might be spectral points of $AG$ in $Z$ which are not critical. However, a slight modification of the set $Z$ leads to a finite set $Z'$ of real points which divide $\R$ into intervals $J_1,\ldots,J_n$ of positive or negative type with respect to $AG$, respectively, such that $Z'\cap\sigma(AG) = s$. By Theorem \ref{t:lsf}, on each interval $J_k$ the operator $AG$ has a local spectral function $E_k$. For $\Delta\in\mathfrak R_{s,0}(\R)$ we set $\Delta_k := \Delta\cap J_k\cap\sigma(AG)$, $k=1,\ldots,n$, and
$$
E(\Delta) := \sum_{k=1}^n\,E_k(\Delta_k).
$$
As $\Delta_k\in\mathfrak R(J_k)$ for $k=1,\ldots,n$, this is a proper definition. Each of the subspaces $\calL_k := E_k(\Delta_k)\calH$, $k=1,\ldots,n$, is contained in $\dom AG$ and is $AG$-invariant. In the following we shall show that $\calL_k\cap\calL_j = \{0\}$ for $k\neq j$. Let $\la\in\C$ be arbitrary. Then $\la\notin\ol{\Delta_k}$ or $\la\notin\ol{\Delta_j}$. Assume $\la\notin\ol{\Delta_j}$. Then $\la\in\rho(AG|\calL_j)$ and thus $\ker(AG|\calL_k\cap\calL_j - \la) = \{0\}$. Let $y\in\calL_k\cap\calL_j$. Then, as $y\in\calL_j$, the vector
$$
x := (AG|\calL_j - \la)^{-1}y = \lim_{\eta\downto 0}\,(AG - (\la + i\eta))^{-1}y
$$
exists and is contained in both $\calL_j$ and $\calL_k$. Hence, we have $\la\in\rho(AG|\calL_k\cap\calL_j)$. As this is similarly proved for $\la\notin\ol{\Delta_k}$, it follows that $\sigma(AG|\calL_k\cap\calL_j) = \emptyset$ and hence $\calL_k\cap\calL_j = \{0\}$. Therefore, as $E_k(\Delta_k)$ and $E_j(\Delta_j)$ commute, we obtain
$$
E_k(\Delta_k)E_j(\Delta_j) = E_j(\Delta_j)E_k(\Delta_k) = 0.
$$
This shows that $\calL_k\ds\calL_j$ is a subspace and that $\calL_k\subset\calL_j^\gperp$. In fact, we have shown that
$$
E(\Delta)\calH = E_1(\Delta_1)\calH\,[\ds]\,\ldots\,[\ds]\,E_n(\Delta_n)\calH.
$$
With the help of this decomposition it is easily seen that the function $E$, defined on $\mathfrak R_{s,0}(\R)$, satisfies (S1)--(S5) in Definition \ref{d:sf}.
{\bf 2.} In this step we define the spectral projection $E([a,b])$ for a compact interval $[a,b]\in\mathfrak R_s(\R)$. To this end choose $a',b'$ with $a < a' < b' < b$ such that $[a,a']\cup [b',b]$ contains no critical point of $(A,G)$. We set
$$
\Delta_0 := [a,a']\quad\text{and}\quad\Delta_1 := [b',b].
$$
Define the spectral subspaces $\calL_j := E(\Delta_j)\calH$, $j=0,1$. As these are both uniformly definite, on account of Lemma \ref{l:ks->oc} we have
\begin{equation}\label{e:decomp}
\calH = \calL_0\,[\ds]\,\calL_1\,[\ds]\,\wt\calH,
\end{equation}
where $\wt\calH = (\calL_0[\ds]\calL_1)^\gperp = (I - E(\Delta_0\cup\Delta_1))\calH$. We set $T_j := AG|\calL_j$, $j=0,1$, and $\wt T := AG|\wt\calH$. With respect to the decomposition \eqref{e:decomp} the operator $AG$ decomposes as $AG = T_0\,[\ds]\,T_1\,[\ds]\,\wt T$. As a consequence of the results in step 1 we have
\begin{equation}\label{e:later}
\sigma(\wt T)\,\subset\,\ol{\sigma(AG)\setminus(\Delta_0\cup\Delta_1)}.
\end{equation}
This implies $(a,a')\cup (b',b)\subset\rho(\wt T)$. Set $\Delta := [a',b']$ and denote by $\wt E_\Delta$ the Riesz-Dunford spectral projection of $\wt T$ (in $\wt\calH$) corresponding to $\Delta$. Similarly as in the proof of Lemma \ref{l:E} it is seen that $\wt E_\Delta$ is $\product$-symmetric. With respect to the decomposition \eqref{e:decomp} we now define
$$
E([a,b]) := I_{\calL_0}\,[\ds]\,I_{\calL_1}\,[\ds]\,\wt E_\Delta.
$$
This is obviously a $\product$-symmetric projection in $\calH$ which commutes with the resolvent of $AG$. Moreover, $\sigma(AG|E([a,b])\calH)\subset[a,b]$.
In the following we show that the above definition of $E([a,b])$ is independent of the choice of $a'$ and $b'$. To this end we prove the following claims.
\begin{enumerate}
\item[(C1)] The subspace $E([a,b])\calH$ is the maximal spectral subspace of $AG$ corresponding to $[a,b]$.
\item[(C2)] $E([a,b])$ commutes with every bounded operator which commutes with the resolvent of $AG$.
\end{enumerate}
For the proof of (C1) let $\calK\subset\dom AG$ be an $AG$-invariant (closed) subspace such that $\sigma(AG|\calK)\subset [a,b]$. By Theorem \ref{t:lsf} the maximal spectral subspaces $\calK_j$ of $AG|\calK$ corresponding to $\Delta_j$ exist, $j=0,1$. These are uniformly definite with respect to the inner product $\product$. Hence,
$$
\calK = \calK_0\,[\ds]\,\calK_1\,[\ds]\,\wt\calK,
$$
where $\wt\calK = (\calK_0\,[\ds]\,\calK_1)^\gperp\cap\calK$ and $\sigma(AG|\wt\calK)\subset [a',b']$. From $\sigma(AG|\calK_j)\subset\Delta_j$ and the maximality of $\calL_j$ we conclude $\calK_j\subset\calL_j$, $j=0,1$, and set
$$
\calM := (\calL_0\,[\ds]\,\calL_1) + \wt\calK.
$$
This sum is direct (and hence $\sigma(AG|\calM)\subset [a,b]$): Set $\calL := \calL_0[\ds]\calL_1$. By \cite[Theorem 0.8]{rr}, $\sigma(AG|\calL\cap\wt\calK)\subset (\Delta_0\cup\Delta_1)\cap [a',b'] = \{a',b'\}$. From the maximality of $\calK_0$ and $\calK_1$ it follows that $a',b'\notin\sigma_p(AG|\calL\cap\wt\calK)$. And as the resolvent of $AG|\calL\cap\wt\calK$ satisfies a growth condition \eqref{e:growth} in neighborhoods of $\Delta_0$ and $\Delta_1$, we conclude $\calL\cap\wt\calK = \{0\}$.
Now, with $\wt\calM := (\calL_0[\ds]\calL_1)^\gperp\cap\calM$ we have
$$
\calM = \calL_0\,[\ds]\,\calL_1\,[\ds]\,\wt\calM.
$$
As $\calL_0$ and $\calL_1$ are maximal, the spectrum of $AG|\wt\calM$ is contained in $[a',b']$. Since $\wt\calM\subset\wt\calH$ and $\wt E_\Delta\wt\calH$ (as a Riesz-Dunford spectral subspace) is the maximal spectral subspace of $AG|\wt\calH$ corresponding to $[a',b']$, this implies $\wt\calM\subset\wt E_\Delta\wt\calH$ and hence $\calK\subset\calM\subset E([a,b])\calH$. (C1) is proved.
Let $B$ be a bounded operator in $\calH$ which commutes with the resolvent of $AG$. Then $BAG\subset AGB$ and hence $E(\Delta_j)B = BE(\Delta_j)$, $j=0,1$, see (S3) in Definition \ref{d:sf}. Hence, $\calL_0$ and $\calL_1$ and also their orthogonal companions $\calL_0^\gperp$ and $\calL_1^\gperp$ are $B$-invariant. And as $\wt\calH = \calL_0^\gperp\cap\calL_1^\gperp$, it follows that with respect to the decomposition \eqref{e:decomp} the operator $B$ decomposes as $B = B_0[\ds]B_1[\ds]\wt B$. Hence, $\wt B\wt T\subset\wt T\wt B$ which implies that $\wt B$ commutes with $\wt E_\Delta$. Finally, we conclude that $B$ commutes with $E([a,b])$, and (C2) is proved.
Now, let $a'',b''\in\R$ with $a < a'' < b'' < b$ such that $[a,a'']$ and $[b'',b]$ do not contain any point from $s$, and construct a spectral projection of $AG$ corresponding to $[a,b]$ as above, with $a'$ and $b'$ replaced by $a''$ and $b''$. Denote this projection by $P$. As the maximal spectral subspace of $AG$ corresponding to $[a,b]$ is unique, we have $P\calH = E([a,b])\calH$ by (C1). Therefore, $PE([a,b]) = E([a,b])$ and $E([a,b])P = P$. But (C2) yields that $P$ and $E([a,b])$ commute. Therefore, $P = E([a,b])P = PE([a,b]) = E([a,b])$.
Above, it was shown that $E([a,b])$ commutes with any bounded operator in $\calH$ which commutes with the resolvent of $AG$ and that $\sigma(AG|E([a,b])\calH)\subset\sigma(AG)\cap [a,b]$ holds. Hence, the projection $E([a,b])$ has the properties (S3) and (S4) in Definition \ref{d:sf}. It also satisfies (S5) as due to $(a,a')\cup (b',b)\subset\rho(\wt T)$ and \eqref{e:later} we have
\begin{align*}
\sigma(AG|(I - E([a,b]))\calH)
&= \sigma(\wt T|(I - \wt E_\Delta)\calH) = \sigma(\wt T)\setminus (a,b)\\
&\subset \ol{\sigma(AG)\setminus (\Delta_0\cup\Delta_1)}\setminus (a,b) = \ol{\sigma(AG)\setminus [a,b]}.
\end{align*}
Moreover, in the same way as the relation $E_k(\Delta_k)E_j(\Delta_j) = 0$ was proved in step 1, one shows that $E([a,b])E([c,d]) = 0$ for compact intervals $[a,b],[c,d]\in\mathfrak R_s(\R)$ with $[a,b]\cap [c,d] = \emptyset$.
{\bf 3.} In this last step of the proof we define the spectral projection $E(\Delta)$ for every $\Delta\in\mathfrak R_s(\R)$ and show that the function $E$, defined on $\mathfrak R_s(\R)$, has the properties (S1)--(S5) in Definition \ref{d:sf}. Let $\Delta\in\mathfrak R_s(\R)$. Then each $\alpha\in\Delta\cap s$ is contained in the interior $\Delta^i$ of $\Delta$. Hence, there exists a compact interval $\Delta_\alpha\subset\Delta$ such that $\Delta_\alpha^i\cap s = \{\alpha\}$. Choose these intervals such that $\Delta_\alpha\cap\Delta_\beta = \emptyset$ for $\alpha,\beta\in\Delta\cap s$, $\alpha\neq\beta$, and define the projection $E(\Delta)$ by
\begin{equation}\label{e:final}
E(\Delta) := \sum_{\alpha\in\Delta\cap s}\,E(\Delta_\alpha) + E\left(\Delta\setminus\bigcup_{\alpha\in\Delta\cap s}\,\Delta_\alpha\right).
\end{equation}
Let $\alpha\in s$ and let $[a,b]\in\mathfrak R_s(\R)$ such that $(a,b)\cap s = \{\alpha\}$. Furthermore, let $a',b'\in (a,b)$ such that $a' < \alpha < b'$. From the construction of $E([a,b])$, $E([a',b])$ and $E([a,b'])$ in step 2 it is seen that
$$
E([a,a')) + E([a',b]) = E([a,b']) + E((b',b]) = E([a,b]).
$$
With the help of this property it is shown that $E(\Delta)$ in \eqref{e:final} is well-defined.
It remains to verify that $E$ satisfies the conditions (S1)--(S5) in Definition \ref{d:sf}. Let $\Delta_1,\Delta_2\in\mathfrak R_s(\R)$. Then $\Delta_j = \Delta_j^1\cup\Delta_j^2$, where $\Delta_j^1\cap\Delta_j^2 = \emptyset$, $\Delta_j^2\in\mathfrak R_{s,0}(\R)$ and
$$
\Delta_j^1 = \bigcup_{\alpha\in\Delta_j\cap s}\,\Delta_\alpha^j
$$
with compact intervals $\Delta_\alpha^j$ as above, $j=1,2$. We may choose the intervals $\Delta_\alpha^j$ such that the following holds:
\begin{enumerate}
\item[(a)] $\Delta_1^2\cap\Delta_2^1 = \Delta_1^1\cap\Delta_2^2 = \emptyset$,
\item[(b)] $\Delta_\alpha^1 = \Delta_\alpha^2$ for $\alpha\in\Delta_1\cap\Delta_2\cap s$,
\item[(c)] $\Delta_\alpha^1\cap\Delta_\beta^2 = \emptyset$ if $\alpha\neq\beta$.
\end{enumerate}
Then we have
\begin{align*}
E(\Delta_1\cap\Delta_2)
&= E((\Delta_1^1\cup\Delta_1^2)\cap(\Delta_2^1\cup\Delta_2^2))\\
&= E((\Delta_1^1\cap\Delta_2^1)\cup(\Delta_1^2\cap\Delta_2^2))\\
&= \sum_{\alpha\in\Delta_1\cap\Delta_2\cap s}\,E(\Delta_\alpha^1) + E(\Delta_1^2)E(\Delta_2^2).
\end{align*}
On the other hand,
$$
E(\Delta_1)E(\Delta_2) = \sum_{\alpha\in\Delta_1\cap s}\,\sum_{\beta\in\Delta_2\cap s}\,E(\Delta_\alpha^1)E(\Delta_\beta^2) + E(\Delta_1^2)E(\Delta_2^2).
$$
And as $E(\Delta_\alpha^1)E(\Delta_\beta^2) = \delta_{\alpha\beta}E(\Delta_\alpha^1)$, where $\delta_{\alpha\beta}$ is the Kronecker delta, (S1) follows.
The proof of (S2) is straightforward and (S3) follows from the facts proved in steps 1 and 2. For the proofs of (S4) and (S5) let $\Delta\in\mathfrak R_s(\R)$. Then $\Delta = \Delta_1\cup\Delta_2$ where $\Delta_1\cap\Delta_2 = \emptyset$, $\Delta_2\in\mathfrak R_{s,0}(\R)$, and $\Delta_1$ is the union of mutually disjoint compact intervals $\Delta_{\alpha_j}\in\mathfrak R_s(\R)$, $j=1,\ldots,r$, with $\Delta_{\alpha_j}\cap s = \{\alpha_j\}$. Due to the definition of $E(\Delta)$ we have
$$
E(\Delta)\calH = E(\Delta_{\alpha_1})\calH\,[\ds]\,\ldots\,[\ds]\,E(\Delta_{\alpha_r})\calH\,[\ds]\,E(\Delta_2)\calH.
$$
Hence,
\begin{align*}
\sigma(AG|E(\Delta)\calH)
&\subset(\sigma(AG)\cap\Delta_{\alpha_1})\cup\ldots\cup(\sigma(AG)\cap\Delta_{\alpha_r})\cup\ol{\sigma(AG)\cap\Delta_2}\\
&= (\sigma(AG)\cap\Delta_1)\cup\ol{\sigma(AG)\cap\Delta_2}\,\subset\,\ol{\sigma(AG)\cap\Delta}.
\end{align*}
From $(I - E(\Delta))\calH\subset (I - E(\Delta_{\alpha_j}))\calH$ for $j=1,\ldots,r$ and $(I - E(\Delta))\calH\subset (I - E(\Delta_2))\calH$ we conclude
\begin{align*}
\sigma(AG|(I - E(\Delta))\calH)&\,\subset\,\sigma(AG|(I - E(\Delta_{\alpha_j}))\calH)\\
\sigma(AG|(I - E(\Delta))\calH)&\,\subset\,\sigma(AG|(I - E(\Delta_2))\calH),
\end{align*}
and therefore
\begin{align*}
\sigma(AG|(I - E(\Delta))\calH)
&\subset \ol{\sigma(AG)\setminus\Delta_{\alpha_1}}\cap\ldots\cap\ol{\sigma(AG)\setminus\Delta_{\alpha_r}}\cap\ol{\sigma(AG)\setminus\Delta_2}\\
&\subset\ol{\sigma(AG)\setminus\Delta_1}\,\cap\,\ol{\sigma(AG)\setminus\Delta_2}\\
&\subset\ol{\sigma(AG)\setminus\Delta}\,\cup\,\partial\Delta_1,
\end{align*}
where $\partial\Delta_1$ is the real boundary of $\Delta_1$. This is a finite set which depends on the choice of the $\Delta_{\alpha_j}$'s. Hence, the theorem is proved.
\end{proof}
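Away from the critical points, the spectral projections constructed above are Riesz--Dunford integrals of the resolvent, and in finite dimensions their properties can be checked numerically. The following sketch (all matrices, the contour and the interval are hypothetical choices, arranged so that $\sigma(AG) = \{-2,-1,1,2,3\}$ exactly, with $[x,y] = (Gx,y)$ the indefinite inner product) discretizes the contour integral for $\Delta = [0.4,\,3.6]$ and verifies that the result is a $[\cdot\,,\cdot]$-symmetric projection commuting with $AG$, as required by (S1)--(S3).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Hypothetical pair arranged so that the eigenvalues of AG are exactly {-2,-1,1,2,3}:
A = np.diag(rng.uniform(0.5, 2.0, n))
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
D = np.diag([-2.0, -1.0, 1.0, 2.0, 3.0])
Ah = np.sqrt(A)                                 # A^{1/2} (A is diagonal)
G = np.linalg.inv(Ah) @ Q @ D @ Q.T @ np.linalg.inv(Ah)   # selfadjoint
M = A @ G                                       # similar to Q D Q^T

# Riesz-Dunford projection for Delta = [0.4, 3.6]: a circle around c = 2 of
# radius 1.6 encloses {1, 2, 3} and separates {-2, -1}.
c, r, npts = 2.0, 1.6, 400
theta = 2 * np.pi * np.arange(npts) / npts
P = sum(r * np.exp(1j * th) * np.linalg.inv((c + r * np.exp(1j * th)) * np.eye(n) - M)
        for th in theta) / npts
P = P.real                                      # the exact projection is real

assert np.allclose(P @ P, P)                    # a projection
assert np.isclose(np.trace(P), 3.0)             # rank = number of enclosed eigenvalues
assert np.allclose(M @ P, P @ M)                # commutes with AG
assert np.allclose(G @ P, (G @ P).T)            # [.,.]-symmetric: GP = P^T G
```

The trapezoidal rule converges geometrically for contour integrals of analytic integrands, so already a few hundred quadrature points reproduce the projection to machine precision.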
\section{An application to Sturm-Liouville problems}\label{s:application}
Let $w$, $p$ and $q$ be real-valued functions on a bounded or unbounded open interval $(a,b)$ such that $w,p^{-1},q\in L^1_{\rm loc}(a,b)$ and $w > 0$ almost everywhere. The differential expression
$$
\tau(f) := \frac 1 w\,\big(-(pf')' + qf\big)
$$
is then called a {\it Sturm-Liouville} differential expression. Usually, the differential operators associated with $\tau$ are considered in the weighted $L^2$-space $L^2_w(a,b)$ which consists of all (equivalence classes of) measurable functions $f : (a,b)\to\C$ for which $|f|^2w\in L^1(a,b)$. If
$$
\underset{x\in (a,b)}{{\rm ess}\inf}\;w(x) > 0\quad\text{and}\quad\underset{x\in (a,b)}{{\rm ess}\sup}\;w(x) < \infty,
$$
then the topologies of $L^2_w(a,b)$ and $L^2(a,b)$ coincide, and the selfadjoint realizations of $\tau$ in $L^2_w(a,b)$ are similar to selfadjoint operators in $L^2(a,b)$. In the following we use the abstract results from the previous section to show that also in more general cases it can make sense to consider differential operators associated with $\tau$ in (the unweighted space) $L^2(a,b)$.
Denote by $A$ the operator of multiplication by the function $w^{-1}$ in the Hilbert space $L^2(a,b)$. The operator $A$ is selfadjoint and non-negative (in $L^2(a,b)$). In addition, define the operator $G_{\max}$ in $L^2(a,b)$ by $G_{\max}f := -(pf')' + qf$, $f\in\dom G_{\max}$, where
$$
\dom G_{\max} := \{f\in L^2(a,b) : f,pf'\in AC_{\rm loc}(a,b),\,-(pf')' + qf\in L^2(a,b)\}.
$$
The selfadjoint realizations of the differential expression
$$
\tau_0(f) := -(pf')' + qf
$$
in $L^2(a,b)$ are well-known to be restrictions of $G_{\max}$. In what follows let $G$ be a selfadjoint realization of $\tau_0$ in $L^2(a,b)$.
\begin{prop}\label{p:SL1}
If $w\in L^\infty(a,b)$ and $G$ is boundedly invertible, then the spectrum of the operator $AG$ is real, and $AG$ has a spectral function without singularities on $\R$.
\end{prop}
\begin{proof}
From $w\in L^\infty(a,b)$ it follows that the operator $A = w^{-1}$ is boundedly invertible in $L^2(a,b)$. Hence, $0\in\rho(A)\cap\rho(G)$ which implies that both $AG$ and $GA$ are boundedly invertible. Therefore, \eqref{e:ass} is satisfied for the selfadjoint operators $A$ and $G$. Furthermore, for $f\in\dom AG$ we have $(AGf,Gf)\ge 0$ as $A$ is non-negative. Hence, the pair $(A,G)$ is definitizable with definitizing polynomial $p(\la) = \la$, and the assertions follow directly from Theorems \ref{t:main} and \ref{t:sf}.
\end{proof}
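A finite-difference sketch of this situation (the interval $(0,\pi)$, the coefficients $p\equiv 1$, $q\equiv 1$ and the weight $w(x) = 2+\sin x$ are illustrative choices; Dirichlet boundary conditions make $G$ positive definite and hence boundedly invertible) exhibits the predicted real spectrum of $AG$:

```python
import numpy as np

# Discretized Sturm-Liouville sketch on (0, pi) with p = 1, q = 1 and the
# bounded weight w(x) = 2 + sin(x); all choices are illustrative.
N = 100
x = np.linspace(0, np.pi, N + 2)[1:-1]
h = x[1] - x[0]

G = (np.diag(2 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2 + np.eye(N)   # -f'' + f, Dirichlet
A = np.diag(1.0 / (2 + np.sin(x)))                        # multiplication by w^{-1}

eigs = np.linalg.eigvals(A @ G)

# AG is similar to the symmetric positive definite matrix A^{1/2} G A^{1/2},
# so its spectrum is real and positive -- here p(lambda) = lambda definitizes.
assert np.max(np.abs(eigs.imag)) < 1e-6
assert np.min(eigs.real) > 0
```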
\section*{Acknowledgements}
Friedrich Philipp gratefully acknowledges the support from Deutsche For\-schungs\-ge\-mein\-schaft (DFG) under grant BE 3765/5-1.
If $X$ is a compact manifold with an action of a torus $S$ and the fixed point set $X^{S}$ of the action is finite, the Atiyah--Bott--Berline--Vergne formula expresses the push-forward in equivariant cohomology of an element $\alpha \in H^*_S(X)$ as a finite sum of local contributions:
\[\int_{X} \alpha = \sum_{p \in X^S} \frac{i_p^*\alpha}{e_p},\]
where $i_p: \{ p\} \to X$ is the inclusion of the fixed point and $e_p$ is the equivariant Euler class of the tangent bundle at $p$. \newline
In the case of the complex Grassmannian $Grass_m(\mathbb{C}^n)$ the fixed points are indexed by the partitions $\lambda = (\lambda_1 \geq \dots \geq \lambda_m)$ with $\lambda_i \in \mathbb{Z}_{\geq 0}$ and $\lambda_1 \leq n-m$. If $\alpha(\mathcal{R})$ denotes a characteristic class of the tautological bundle $\mathcal{R}$, then the above formula takes the form:
\[ \int_{Grass_{m}(\mathbb{C}^n)} \alpha(\mathcal{R}) = \sum_{p_\lambda} \frac{\alpha_{|_{p_\lambda}}}{e_{p_\lambda}}.\]
In \cite{zielenkiewicz2014} we have proven that if $\alpha(\mathcal{R})$ restricted to the fixed points of the action is given by a symmetric polynomial $V$, then this expression is equal to the following residue:
\[ \int_{Grass_{m}(\mathbb{C}^n)} \alpha(\mathcal{R}) = \frac{1}{m!} \Res_{\mathbf{z} = \infty} \frac{V(z_1,...,z_m)\prod_{i \neq j}(z_i - z_j)}{\prod_{i=1}^n\prod_{j=1}^m(t_i - z_j)}. \]
Similar results have been obtained for other types of Grassmannians (Lagrangian and orthogonal ones). The formulas are given in Chapter~\ref{wzorki}. In their paper \cite{berczi2012}, B\'erczi and Szenes found a formula for an integral over the flag variety (\cite{berczi2012}, Chapter 6.3). All the obtained formulas (for classical, Lagrangian and orthogonal Grassmannians and the one derived by B\'erczi and Szenes) can be uniformly written as special cases of one formula, involving the action of the Weyl group on the characters of the natural representation of the torus action. The result is presented in \cite{zielenkiewicz2014}. \newline
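For $m = 1$, where $Grass_1(\mathbb{C}^n) = \mathbb{P}^{n-1}$, the agreement between the fixed-point sum and the residue can be checked symbolically. The sketch below (with the illustrative choices $n = 3$ and $V(z) = z^2$) evaluates the residue at infinity via the substitution $z = 1/w$, using the convention $\Res_{z=\infty} f = -\Res_{w=0}\, w^{-2}f(1/w)$:

```python
import sympy as sp

z, w = sp.symbols('z w')
t = sp.symbols('t1 t2 t3')            # characters of the torus acting on C^3
V = z**2                              # illustrative symmetric polynomial (m = 1)

# Residue side for Grass_1(C^3) = P^2:
f = V / sp.Mul(*[ti - z for ti in t])
residue_side = sp.residue(sp.cancel(-f.subs(z, 1/w) / w**2), w, 0)

# Atiyah-Bott-Berline-Vergne sum over the three fixed points [e_i],
# with tangent weights t_k - t_i, k != i:
abbv = sum(V.subs(z, ti) / sp.Mul(*[tk - ti for tk in t if tk != ti]) for ti in t)

assert sp.simplify(residue_side - abbv) == 0
assert sp.simplify(residue_side) == 1   # the integral of c_1(R)^2 over P^2
```

Both sides simplify to $1$, the integral of $c_1(\mathcal{R})^2$ over $\mathbb{P}^2$.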
These results, however, have been lacking a geometric motivation. In this paper we relate them to the nonabelian localization theorem of Jeffrey and Kirwan. In particular, we show how the residue-type formula for the classical Grassmannian can be obtained from a generalization of the Jeffrey--Kirwan theorem, applied to the group $U(k)$ acting on the space $Hom(\mathbb{C}^k, \mathbb{C}^n)$. \newline
Chapter~\ref{notation} describes the notation conventions used in this paper, which try to find a compromise between the notations of the two influential papers \cite{jeffrey1995} and \cite{guillemin1996}. Chapter \ref{prelim} briefly describes localization theorems in equivariant cohomology, the results obtained in \cite{zielenkiewicz2014} and the various approaches to nonabelian localization. In Chapter~\ref{gk_equiv} we adapt the Jeffrey--Kirwan theorem to the $S$-equivariant setting, for a torus $S$, by modifying the proof of the Jeffrey--Kirwan theorem given by Guillemin and Kalkman in \cite{guillemin1996}. Two examples at the end of Chapter~\ref{gk_equiv} relate the equivariant version of the Jeffrey--Kirwan theorem to the formulas in \cite{zielenkiewicz2014}. The Appendix gives the details of the adaptation of the Guillemin--Kalkman proof of the nonabelian localization theorem to the case of an $S^1$-action.
\subsection*{Acknowledgements}
This work was partially supported by NCN grant 2015/17/N/ST1/02327.
\section{Notation}\label{notation}
Almost every paper we refer to uses different notation conventions. We will use the following notation:
\begin{itemize}
\item $X$ is a symplectic manifold, usually compact
\item $G$ is a compact Lie group,
\item $T$ is a maximal torus in $G$,
\item $\mu_G: X \to \mathfrak{g}^*$ is the moment map for the action of $G$ on $X$,
\item $X /\mkern-6mu/ G = \mu_{G}^{-1}(0)/G$ is the symplectic reduction, assumed to be compact,
\item $S$ is a torus (not related to $G$),
\item $\kappa$ is a Kirwan map,
\item $\kappa^{S}$ is the $S$-equivariant Kirwan map.
\end{itemize}
\section{Preliminaries}\label{prelim}
\subsection{Localization in equivariant cohomology}
Let $X$ be a compact space equipped with an action of the torus $S=(S^1)^n$. Let us consider the $S$-equivariant cohomology of $X$, $H^*_{S}(X)$, defined as
\[ H^*_{S}(X) = H^*(\ES \times^{S} X),\]
where $\ES$ is a contractible space on which $S$ acts freely (the total space of the universal $S$-bundle). The space $\ES$ is infinite dimensional, so one constructs finite dimensional approximations $\mathbb{E}_m$ with isomorphisms
\[H^i_{S}(X) \simeq H^i(\mathbb{E}_m \times^{S} X) \textrm{ for all } i \ll m. \]
We consider cohomology theories with coefficients in a field (usually $\mathbb{C}$). \newline
The characters of the torus action can be identified with elements of $H^*_{S}(X)$ as follows. For a character $\chi \in S^{\#}$ let $\mathbb{C}_{\chi}$ denote the one-dimensional representation of $S$ determined by $\chi$. Then $L_{\chi}: \ES \times^{S} \mathbb{C}_{\chi} \to \BS $
is an equivariant line bundle, and to the character $\chi$ we can associate the first equivariant Chern class of the associated line bundle
\[\chi \mapsto c_1^{S}(L_{\chi}) \in H^2_{S}(pt).\]
We will abuse notation and use this assignment to identify characters with elements in equivariant cohomology.
\newline
It turns out that, after localizing with respect to the multiplicative set generated by the nonzero characters of the action, the equivariant cohomology of $X$ is isomorphic to the equivariant cohomology of the fixed point set $X^{S}$.
\begin{theorem}[Quillen (\cite{quillen1971})] Let $X$ be a compact $S$-space. The inclusion $i: X^S \hookrightarrow X$ induces an isomorphism
\[ i^*: H_{S}^*(X)[(S^{\#} \setminus\{ 0\})^{-1}] \stackrel{\simeq}{\longrightarrow} H_{S}^*(X^S)[(S^{\#} \setminus\{ 0\})^{-1}]\]
after localizing with respect to the multiplicative system consisting of finite products of elements $c_1^{S}(L_{\chi})$, for $\chi \in S^{\#}\setminus \{0\}$.
\end{theorem}
For a proper map $f: X \to Y$, one can define the equivariant Gysin homomorphism
\[f_*^{S} : H^i_{S}(X) \to H^{i+2d}_{S}(Y), \]
where $d = \dim Y - \dim X$. One defines $f_*^{S}$ to be the standard non-equivariant Gysin homomorphism for the map $\mathbb{E}_m \times^{S} X \to \mathbb{E}_m \times^{S} Y$. Gysin homomorphisms are also called push-forwards in cohomology, and if $f$ is a projection in a fiber bundle they can be interpreted as integration along fibers. The equivariant Gysin homomorphism associated with the map $X \to pt$ is often denoted by $a \mapsto \int_{X} a$. \newline
If $X$ is a compact manifold and the fixed point set is finite, the localization theorem can be rephrased as follows.
\begin{theorem}[Atiyah--Bott (\cite{atiyah1984}), Berline--Vergne (\cite{berline1982})] Suppose $X$ is a compact manifold with an action of a torus $S$ such that $\# X^S < \infty$. Then for $\alpha \in H_{S}^*(X)$ one has
\[ \int_{X} \alpha = \sum_{p \in X^S} \frac{i_p^*\alpha}{e_p},\]
where $e_p$ is the equivariant Euler class of the tangent bundle at the fixed point $p$, and $i_p^*$ is induced by the inclusion of the fixed point $p$ into $X$.
\end{theorem}
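As a minimal sanity check of the formula (for the illustrative case $X = \mathbb{P}^1$ with the standard torus action and $\alpha = 1$), the two local contributions must cancel, since the push-forward of the unit class lands in negative degree:

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
# P^1: fixed points [e_1], [e_2] with tangent weights t2 - t1 and t1 - t2.
total = 1 / (t2 - t1) + 1 / (t1 - t2)
assert sp.simplify(total) == 0
```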
To effectively compute the right-hand side of the above formula, one needs to know the equivariant Euler class at the fixed points of the action. The Euler class at the fixed point $p$ is given by the product of the weights of the torus action on the tangent space. If $X$ is a homogeneous space $G/P$, where $P$ is a parabolic subgroup, the tangent space at the base point is canonically isomorphic to $\lie{g}/\lie{p}$. The weights of the torus action are then the positive roots $\Phi^+ \setminus \Phi_P^+$.
\subsection{Push-forward in equivariant cohomology and residue formulas}\label{wzorki}
The Atiyah--Bott--Berline--Vergne formula expresses the push-forward in $S$-equivariant cohomology as a finite sum of rational functions, indexed by the fixed points of the action. If $X=G/P$ is a homogeneous space of a compact Lie group for a parabolic subgroup $P$, the fixed points are indexed by the quotient $W/W_P$ of the Weyl group $W$ of $G$ by the Weyl group $W_P$ of the parabolic subgroup $P$. In \cite{berczi2012}, B\'erczi and Szenes used iterated residues at infinity to compute the push-forward of a certain characteristic class over the complete flag variety. In \cite{zielenkiewicz2014} we prove similar residue-type formulas for push-forwards over Grassmannians (classical, Lagrangian and orthogonal). For example, the push-forward of a characteristic class $\alpha$ of the tautological bundle $\mathcal{R}$ over the Grassmannian $Grass_{m}(\mathbb{C}^n)$, which at the fixed points of the action is given by the symmetric polynomial $V$, can be expressed as
\[\int_{Grass_{m}(\mathbb{C}^n)} \!\!\!\! \alpha(\mathcal{R}) = \Res_{z_1 = \infty}\Res_{z_2 = \infty}...\Res_{z_m = \infty} \frac{1}{m!}\frac{V(z_1,...,z_m)\prod_{i \neq j}(z_i - z_j)}{\prod_{i=1}^n\prod_{j=1}^m(t_i - z_j)},\]
where $t_1,\dots,t_n$ are the characters of the action of the maximal torus, and $z_1, \dots, z_m$ are formal variables (whose interpretation and geometric meaning can be given using the Jeffrey--Kirwan nonabelian localization theorem, described in the next section). \newline
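For $m > 1$ the residue is iterated, taken first in the innermost variable $z_m$. The following sketch for $Grass_2(\mathbb{C}^4)$ (with the illustrative choices $V(z_1,z_2) = (z_1z_2)^2$, i.e. $\alpha = c_2(\mathcal{R})^2$, and generic integer values for the characters $t_i$) compares the iterated residue with the Atiyah--Bott--Berline--Vergne sum; both evaluate to $1$:

```python
import sympy as sp
from itertools import combinations

z1, z2, w = sp.symbols('z1 z2 w')
t = [sp.Integer(c) for c in (1, 3, 7, 15)]    # generic numeric characters (illustrative)

def res_inf(expr, var):
    # Res_{var = oo} expr = -Res_{w = 0} w^{-2} expr(1/w)
    return sp.residue(sp.cancel(-expr.subs(var, 1/w) / w**2), w, 0)

V = (z1 * z2)**2                              # restriction of c_2(R)^2 to the fixed points
f = V * (z1 - z2) * (z2 - z1) / sp.Mul(*[ti - zj for ti in t for zj in (z1, z2)])
residue_side = sp.Rational(1, 2) * res_inf(res_inf(f, z2), z1)

# ABBV sum over the six fixed points p_{ij}: tangent weights t_k - t_l with
# l in {i, j} and k outside {i, j}.
abbv = sum(
    V.subs({z1: t[i], z2: t[j]})
    / sp.Mul(*[t[k] - t[l] for l in (i, j) for k in range(4) if k not in (i, j)])
    for i, j in combinations(range(4), 2))

assert sp.simplify(residue_side - abbv) == 0
assert abbv == 1     # the non-equivariant value of c_2(R)^2 over Grass_2(C^4)
```

The value $1$ is the Schubert-calculus intersection number $\int \sigma_{1,1}^2$ on $Grass_2(\mathbb{C}^4)$, independent of the chosen characters.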
Similar formulas can be obtained for the Lagrangian Grassmannians $LG(n)$ and the orthogonal Grassmannians $OG(n,2n)$ and $OG(n, 2n+1)$, as follows. Set $\mathbf{z} = ( z_1, z_2, \dots, z_n )$. Then
\[\int_{LG(n)} \alpha(\mathcal{R}) = \frac{1}{n!} \Res_{\mathbf{z} = \infty} \frac{V(z_1,...,z_n)\prod_{i<j}(z_j - z_i)}{\prod_{i=1}^n(t_i - z_i)(t_i + z_i)\prod_{i<j}(t_i + t_j)(t_j - t_i)}\]
\[ \int_{OG(n,2n)} \!\!\! \alpha(\mathcal{R}) = \frac{1}{n!} \Res_{\mathbf{z} = \infty} \frac{V(z_1,...,z_n)\prod_{i<j}(z_j - z_i)2^n z_1...z_n}{\prod_{i=1}^n(t_i - z_i)(t_i + z_i)\prod_{i<j}(t_i + t_j)(t_j - t_i)}\]
\[ \int_{OG(n,2n+1)} \!\!\! \alpha(\mathcal{R}) = \frac{1}{n!} \Res_{\mathbf{z} = \infty} \frac{V(z_1,...,z_n)\prod_{i<j}(z_j - z_i)2^n}{\prod_{i=1}^n(t_i - z_i)(t_i + z_i)\prod_{i<j}(t_i + t_j)(t_j - t_i)}. \]
These formulas can be rewritten in the following way, which is far less efficient for computations (due to the higher degrees of the polynomials appearing in the denominators), but has other advantages, described below.
\[\int_{LG(n)} \alpha(\mathcal{R}) = \Res_{\mathbf{z} = \infty} \frac{1}{n!}\frac{V(z_1,...,z_n)\prod_{i \neq j}(z_j - z_i)\prod_{i < j}(z_i+z_j)}{\prod_{i=1}^n\prod_{j=1}^n(t_i - z_j)(t_i + z_j)}.\]
\[ \int_{OG(n,2n)} \!\!\! \alpha(\mathcal{R}) = \Res_{\mathbf{z} = \infty} \frac{1}{n!}\frac{V(z_1,...,z_n)\prod_{i \neq j}(z_j - z_i)\prod_{i < j}(z_i+z_j)2^n\prod_{i=1}^n z_i}{\prod_{i=1}^n\prod_{j=1}^n(t_i - z_j)(t_i + z_j)}\]
\[ \int_{OG(n,2n+1)} \!\!\! \alpha(\mathcal{R}) = \Res_{\mathbf{z} = \infty} \frac{1}{n!}\frac{V(z_1,...,z_n)\prod_{i \neq j}(z_j - z_i)\prod_{i < j}(z_i+z_j)2^n}{\prod_{i=1}^n\prod_{j=1}^n(t_i - z_j)(t_i + z_j)}.\]
The advantage of these formulas, though they appear more complicated at first glance, is that they all admit a uniform description by a single expression (which also agrees with the case of flag varieties investigated by B\'erczi and Szenes and which we expect to hold for arbitrary homogeneous spaces of Lie groups),
\[\int_{G/P} \alpha(\mathcal{R}) = \Res_{\mathbf{z} = \infty} \frac{1}{|W_P|}\frac{V(z_1,...,z_n)\prod_{i=1}^n \prod_{x \in X_i \setminus \{z_i\} } (z_i - x)}{\prod_{i=1}^n\prod_{x \in X_i} (t_i - x ) \prod_{y \in \Phi^+ \setminus {\Phi_P}^+} y }.\]
Here we push forward a characteristic class of the tautological bundle on the homogeneous space $G/P$ on which a maximal torus acts with characters $t_1,\dots, t_n$, and $z_1, \dots ,z_n$ are formal complex variables. The Weyl group $W$ of $G$ acts on the set $\{ z_1, \dots , z_n \}$ by permutations, and the sets $X_i$ appearing in the formula are defined as $X_{i} = \{ \sigma(z_i): \sigma \in W \}$. The positive roots $\Phi^+ \setminus {\Phi_P}^+$ are expressed in the $z_i$ variables. \newline
In \cite{zielenkiewicz2014} we have obtained the above formulas in the case of classical, Lagrangian and orthogonal Grassmannians. The case of complete flag varieties is covered in \cite{berczi2012}. Recently we have obtained formulas for the partial flag varieties of types $A$, $B$, $C$ and $D$, using the methods presented here; the results are part of the PhD dissertation of the author, submitted in December 2016. We also have computational results suggesting that the formula holds for the homogeneous spaces $G_2/P_1$ and $G_2/P_2$, where $G_2$ is the smallest exceptional Lie group and $P_1, P_2$ are its two maximal parabolic subgroups. The aim of this paper is to describe an approach to the residue formulas which can hopefully be used to prove the above formula in general. \newline
The meaning of the variables $z_i$ is not obvious from this description. In this paper we show that they can be interpreted as the characters of an additional torus acting on $G$. The main idea behind this description is the Jeffrey--Kirwan nonabelian localization theorem.
\subsection{Jeffrey--Kirwan nonabelian localization theorem}
Let $X$ be a compact symplectic manifold equipped with a Hamiltonian action of a compact Lie group $G$, with moment map $ \mu: X \to \mathfrak{g}^* $. Assume $0$ is a regular value of $\mu$. One can form the symplectic reduction
\[ X /\mkern-6mu/ G := \mu^{-1}(0) / G.\]
Then there is a natural map $ \kappa: H^*_{G}(X) \to H^*(\mu^{-1}(0)/G)$, called the Kirwan map, defined as the composition $(\pi^*)^{-1}\circ i^*$,
\[ \kappa: H^*_{G}(X) \xrightarrow{i^*} H^*_{G}(\mu^{-1}(0)) \xrightarrow{(\pi^*)^{-1}} H^*(\mu^{-1}(0)/G),\]
where $i^*$ is the map induced by the inclusion $i:\mu^{-1}(0) \to X$ and $\pi^*$ is the natural isomorphism $H^*_{G}(\mu^{-1}(0)) \to H^*(\mu^{-1}(0)/G)$ induced by the quotient map $\pi: \mu^{-1}(0) \to \mu^{-1}(0)/G$.
The assumption that $0$ is a regular value can of course be replaced by the assumption that $\xi \in \mathfrak{g}^*$ is a regular value and the reduction is taken at $\xi$, so that $X /\mkern-6mu/_{\xi} G := \mu^{-1}(\xi) / G$. \newline
Let $T$ be a maximal torus in $G$, and let $\mu_T$ denote the moment map for the induced $T$-action.
The Jeffrey--Kirwan nonabelian localization theorem states that the Kirwan map is an epimorphism, and its kernel can be explicitly described in terms of intersection pairings. From the point of view of the residue formulas in equivariant cohomology, the most interesting result is the following (\cite{jeffrey1995}, Theorem 8.1):
\begin{theorem}[Jeffrey--Kirwan (\cite{jeffrey1995})] \label{thm:jk}
Let $\omega$ be a symplectic form on $X$ and $\omega_0$ the induced symplectic form on $X /\mkern-6mu/ G$.
Let $\eta \in H^*_G(X)$. Let $[X /\mkern-6mu/ G]$ be the fundamental class of $X /\mkern-6mu/ G$ in $H^*_G(X)$. Let $\Phi^{+}$ and $W$ be, respectively, the set of positive roots and the Weyl group of $G$. Denote by $\varpi$ the product of the roots of $G$,\footnote{In the original formulation in \cite{jeffrey1995} $\varpi$ is the product of the positive roots of $G$, so that in the formulas $\varpi$ is replaced by $(-1)^{|\Phi^{+}|} \varpi^2$.} $\varpi = \prod_{\gamma \in \Phi}\gamma$.
Then one can choose a subset $\mathcal{F}$ of the set of components of the fixed point set of the action of $T$ such that the following formula holds.
\[\kappa(\eta) e^{i \omega_0}[X /\mkern-6mu/ G] = \]
\[ = \frac{1}{(2 \pi )^{k-s} |W| \vol(T)} \Res\bigg{(}\varpi \sum_{F \in \mathcal{F}} e^{i \mu_T(F)} \int_F \frac{i^*_F(\eta e^{i \omega})}{e_F} \bigg{)}.\]
For $F \in \mathcal{F}$, the map $i_F$ is the inclusion of $F$ into $X$ and $e_F$ is the equivariant Euler class of the normal bundle to $F$ in $X$. The constant $\vol(T)$ is the volume of the torus $T$, and $k,s$ denote the dimensions of $G$ and $T$ respectively.\footnote{The residue in the Jeffrey--Kirwan theorem is defined as a certain contour integral (see def. 8.5 in \cite{jeffrey1995}).} \newline
For a component $F$ of the fixed point set of the $T$-action, denote by $\beta_{F,j}$ the weights of the $T$-action on the normal bundle to $F$ in $X$. Define $\beta^{\Lambda}_{F,j}$ to be equal to $\beta_{F,j}$ if $\beta_{F,j} > 0$ and to $-\beta_{F,j}$ otherwise. The set $\mathcal{F}$ of components of the fixed point set that appear in the summation is the set of those fixed components $F$ for which $\mu_T(F)$ lies in the cone $C_{F} := \{ \sum_{j} s_j \beta^{\Lambda}_{F,j} : s_j \geq 0\}$ spanned by the $\beta^{\Lambda}_{F,j}$.
\end{theorem}
One can hope to find a connection between the above theorem and the push-forward residue-type formulas via symplectic reductions. However, the Jeffrey--Kirwan localization theorem describes the non-equivariant cohomology of the symplectic reduction. Imposing an additional torus action on the space $X$ and generalizing the Jeffrey--Kirwan theorem to the equivariant setting is the first step towards obtaining the residue-type push-forward formulas as special cases of the generalized Jeffrey--Kirwan formula. The preliminary results obtained in \cite{zielenkiewicz2014} in this setting suggest this approach will be successful: the Jeffrey--Kirwan theorem can be upgraded to $S$-equivariant cohomology for an arbitrary torus $S$, and the residue formula for classical Grassmannians can then be recovered from it. \newline
In the assumptions of the Jeffrey--Kirwan theorem $X$ is a compact symplectic manifold. However, for our purposes it would be convenient to allow noncompact spaces. This can be done under the assumption that the moment map is proper. An explicit generalization of the Jeffrey--Kirwan theorem to noncompact spaces is presented in Kalkman's paper \cite{kalkman}, which uses the constructions in \cite{prato1994} to remove the compactness assumptions. A more detailed comment on this matter can be found at the end of Section \ref{gk_equiv}, and an application is shown in Example \ref{grassmannian}.
\subsection{Other approaches to the nonabelian localization theorems}
An alternative statement and proof of the Jeffrey--Kirwan nonabelian localization theorem has been described by Guillemin and Kalkman in \cite{guillemin1996}. Their statement of the result is the following. Let $X$ be a compact oriented symplectic manifold with an action of $T$, such that the $T$-action satisfies the assumptions of the Jeffrey--Kirwan localization theorem. Let
\[\kappa: H^*_{T}(X) \to H^*(\mu^{-1}(0)/T)\]
denote the Kirwan map.\newline
Consider the set $X_{crit} = \bigcup X_j$ of critical points of the moment map $\mu$. Each $X_j$ is a fixed point set of a one-dimensional subgroup $T_j$ of $T$. For a one-dimensional torus $T_j$ acting on $X$, the equivariant cohomology can be computed from the Cartan complex
\[\tilde{\Omega} = \Omega^*(X)^{T_j}\otimes \mathbb{C}[x_j],\]
where $\mathbb{C}[x_j]$ is the polynomial ring generated by an element $x_j$ of degree $2$: $x_j=c_1(\gamma_j)$ is the first Chern class of the universal complex line bundle $\gamma_j$ over $\mathbb{C}\mathbb{P}^{\infty} = \mathbb{B} T_j$. The differential in this complex is $\tilde{d} = d \otimes 1 + \iota(v)\otimes x_j$, where $d$ is the differential in $\Omega^*(X)^{T_j}$ and $\iota(v)$ is the contraction with the vector field generating the $T_j$-action. Let $\kappa_j: H^*_{H_j}(X_j) \to H^*(X_j /\mkern-6mu/ H_j)$ be the Kirwan map for the action of $H_j=T/T_j$ on $X_j$. Denote by $i_j$ the inclusion $X_j \to X$, and let $e(\nu_j)$ be the Euler class of the normal bundle $\nu_j$ to $X_j$ in $X$. Define
\[\Res_j(\alpha):= \Res_{x_j =\infty}\frac{i^*_j \alpha}{e(\nu_j)},\]
where taking the residue at infinity means just taking the coefficient at $x_j^{-1}$ in the Laurent series (recall that we compute the equivariant cohomology from the Cartan complex, so we can express elements of $H^*_{H_j}(X_j)$ as polynomials in $x_j$ with coefficients in $\Omega(X_j)^{H_j}$).
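As a toy illustration of this convention (independent of any particular $X_j$): one expands a rational function in a Laurent series at infinity and reads off the coefficient of the first negative power. A short sympy sketch, with $a, b$ playing the role of weights:

```python
import sympy as sp

# Toy illustration of the residue convention used by Guillemin and Kalkman:
# expand a rational function in a Laurent series at infinity and read off
# the coefficient of x^{-1}.  The symbols a, b play the role of weights.
x, a, b = sp.symbols('x a b')
expr = x**2 / ((x - a) * (x - b))

# Laurent expansion at infinity: 1 + (a+b)/x + (a^2+ab+b^2)/x^2 + ...
ser = sp.series(expr, x, sp.oo, 4).removeO()
coeff = sp.expand(ser).coeff(x, -1)
print(coeff)
```

Here the coefficient of $x^{-1}$ is the complete symmetric function $h_1(a,b)=a+b$, matching the expansion $\frac{x^2}{(x-a)(x-b)} = \sum_{k \geq 0} h_k(a,b)\, x^{-k}$.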
\begin{theorem}[Guillemin--Kalkman (\cite{guillemin1996})]
\[\int_{X /\mkern-6mu/ T}\kappa(\alpha) = \sum_{j \in \mathcal{J}} \int_{X_j /\mkern-6mu/ H_j} \kappa_j (\Res_j(\alpha)).\]
The indexing set $\mathcal{J}$ is the set of those $j$ for which $X_j \cap \mu^{-1}(l) \neq \varnothing$ for a suitably chosen halfline $l \subseteq \mathfrak{t}^*$. For details see Section~\ref{gk_equiv}, equation~\eqref{jot}.
\end{theorem}
In the Guillemin--Kalkman theorem one only considers torus actions. If one considers an arbitrary compact group $G$, as in the Jeffrey--Kirwan theorem, then one can reduce the question of pushing forward over $G$ to pushing forward over its maximal torus $T$ using the Weyl Integration Formula. This is done in \cite{jeffrey1995} as a part of the proof of the Jeffrey--Kirwan theorem.
\begin{theorem}[Weyl Integration Formula \cite{weyl1925}]
Let $f$ be a continuous class function on a compact connected Lie group $G$ with maximal torus $T$, i.e. $f(g h g^{-1}) = f(h)$ for all $g, h \in G$. Then
\[\int_{G} f(g) dg = \frac{(-1)^{n_+}}{|W|}\frac{\vol(G)}{\vol(T)}\int_{T} \varpi(t) f(t) dt,\]
where $\varpi(t) = \prod_{\gamma \in \Phi}\gamma(t)$ for $t \in T$, and $n_+$ is the number of positive roots of $G$.
\end{theorem}
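The formula above can be illustrated concretely for $G=U(2)$. Writing the torus coordinates multiplicatively as $a, b$ with $|a|=|b|=1$, the integration density becomes the Weyl factor $(1-a/b)(1-b/a)$, the normalized torus integral of a Laurent polynomial is its constant term, and Schur orthogonality for the standard character $\chi = a+b$ predicts $\int_{U(2)}|\chi|^2\,dg = 1$. The following sketch checks this; volumes are normalized to $1$, and the multiplicative form of the density is an assumption of this presentation, matching $\varpi$ up to the stated sign conventions.

```python
import sympy as sp

# Weyl integration for G = U(2): for a class function f,
#   int_G f = (1/|W|) * (constant term of  Delta * f  on the torus),
# where Delta = (1 - a/b)(1 - b/a) is the Weyl density written
# multiplicatively in torus coordinates a = e^{is}, b = e^{iu}, |a| = |b| = 1.
# Check Schur orthogonality for the standard character chi = a + b:
# the standard representation is irreducible, so int_G |chi|^2 = 1.
a, b = sp.symbols('a b')
delta = (1 - a/b) * (1 - b/a)
chi = a + b
chi_bar = 1/a + 1/b                      # complex conjugate on |a| = |b| = 1

integrand = sp.expand(delta * chi * chi_bar)
# Normalized torus integral = constant term (coefficient of a^0 b^0).
const_term = integrand.as_coefficients_dict().get(sp.Integer(1), 0)
print(sp.Rational(1, 2) * const_term)    # 1, with |W| = 2 for U(2)
```

The same computation with $\chi$ replaced by the character of a reducible representation returns the number of irreducible summands counted with multiplicity squared.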
Using the above theorem, Jeffrey and Kirwan prove the following integration formula.
\begin{theorem}[Jeffrey--Kirwan (\cite{jeffrey1995})]
Let $\tilde{\alpha} \in H^*( X /\mkern-6mu/ T)$ be a lift of $\alpha \in H^*( X /\mkern-6mu/ G)$, in the sense that the pullback of $\tilde{\alpha}$ under the map induced by the canonical inclusion $i:\mu_{G}^{-1}(0)/T \to X /\mkern-6mu/ T$ equals the pullback of $\alpha$ under the fibration $p: \mu_{G}^{-1}(0)/T \to X /\mkern-6mu/ G$. Then
\[\int_{X /\mkern-6mu/ G} \alpha = \frac{1}{|W|} \int_{X /\mkern-6mu/ T} \tilde{\alpha} \cdot \varpi.\]
\label{MIT}
\end{theorem}
Yet another viewpoint is presented by Martin in \cite{martin2000}.\footnote{The paper \cite{martin2000} was accepted for publication in Annals of Mathematics in December 1999 but has not appeared in print.} He proves that the cohomology of the symplectic reduction of a manifold $X$ by the action of $G$ can be described in terms of the cohomology of the symplectic reduction by the action of its maximal torus, as follows.
\begin{theorem}[Martin]
There is a natural ring isomorphism
\[H^*(X /\mkern-6mu/ G; \mathbb{Q}) \simeq \frac{H^*(X /\mkern-6mu/ T; \mathbb{Q})^W}{ann(\varpi)},\]
where $\varpi=\prod_{\alpha \in \Phi} \alpha$ and $ann(\varpi) = \{ c \in H^*(X /\mkern-6mu/ T; \mathbb{Q})^W: c \cdot \varpi = 0\}$. \label{martin}
\end{theorem}
Note that different authors use different conventions about measures on $G$ and $T$. The classical formulation of the Weyl Integration Formula uses explicitly the volumes of $G$ and $T$; Jeffrey and Kirwan assume the volume of $G$ is normalized to $1$, whereas Guillemin, Kalkman and Martin assume both the volumes of $G$ and $T$ to be normalized to $1$.
\section{The equivariant Jeffrey--Kirwan localization theorem}\label{gk_equiv}
The Jeffrey--Kirwan nonabelian localization theorem can be improved to work in $S$-equivariant cohomology. More precisely, let $X$ be a compact oriented symplectic manifold equipped with a Hamiltonian action of $G$ with moment map $\mu_G$ and an action of $S$, such that the $G$-action satisfies the assumptions of the Jeffrey--Kirwan localization theorem and the actions of $G$ and $S$ commute. We do not assume that the action of $S$ is Hamiltonian. If the set $\mu_G^{-1}(0)$ is $S$-invariant, then one can construct the equivariant Kirwan map
\[\kappa^{S}: H^*_{G \times S}(X) \to H^*_{S}(\mu_G^{-1}(0)/G),\]
defined as the composition
\[H^*_{G \times S}(X) \xrightarrow{i^*} H^*_{G \times S}(\mu_G^{-1}(0)) \xrightarrow{(\pi^*)^{-1}} H^*_{S}(\mu_G^{-1}(0)/G), \]
where $i : \mu_G^{-1}(0) \to X$ denotes the inclusion and $\pi:\mu_G^{-1}(0) \to \mu_G^{-1}(0)/G$ is the natural quotient map, inducing an isomorphism $ \pi^*: H^*_{G \times S}(\mu_G^{-1}(0)) \to H^*_{S}(\mu_G^{-1}(0)/G)$. In all of the above, $\mu_G$ denotes the moment map associated with the $G$-action. \newline
An equivariant Kirwan map of this kind has been introduced and investigated by Goldin in \cite{goldin2002}, in a very similar setting: in \cite{goldin2002} the equivariant Kirwan map is a map
\[\kappa^{S}: H^*_{K}(X) \to H^*_{K/S}(\mu^{-1}(0)/S),\]
where $S \lhd K$ is a subtorus of a compact Lie group $K$. A proof of the surjectivity of the equivariant Kirwan map is given in \cite{goldin2002}, and its kernel is explicitly described, in terms analogous to Tolman--Weitsman's description of the kernel of the non-equivariant Kirwan map \cite{tolman2003}. The proof of surjectivity presented in \cite{goldin2002} uses equivariant Morse-theoretic methods to study the case $S= S^1$, followed by induction on the dimension of the torus. \newline
Theorem~\ref{thm:jk} gives a description of the non-equivariant push-forward: the effect of applying the push-forward map $\int_{\X \git G}: H^*(\X \git G) \to H^*(pt)$ to the image $\kappa(\alpha)$ of an element of the $G$-equivariant cohomology under the Kirwan map is described in terms of a certain residue operation. We relate this formula to the residue formulas obtained in \cite{zielenkiewicz2014}, by extending the results of Jeffrey and Kirwan to the equivariant setting. Our approach is based on the paper by Guillemin and Kalkman \cite{guillemin1996}, in which the authors prove a formula similar to the one by Jeffrey--Kirwan. Their definition of a residue is slightly different, and they sum over a different subset of the fixed point set of the torus action. However, in the cases we will consider (projective spaces, Grassmannians) the two results coincide. We follow the notation of \cite{guillemin1996}, and we show how to adapt their proof of the residue-type push-forward formula to the equivariant setting. \newline
Let $X$ be a compact symplectic manifold equipped with a Hamiltonian action of a compact group $G$ and an action of a torus $S$, and assume the actions of $G$ and $S$ commute and the set $\mu_G^{-1}(0)$ is $S$-invariant. Assume $0$ is a regular value of the moment map $\mu_G $ and let $\X \git G := \mu_G^{-1}(0)/G $ denote the symplectic reduction of $X$ for the $G$-action. Consider the $S$-equivariant Kirwan map
\[\kappa^{S}: H^*_{G \times S}(X) \to H^*_{S}(\X \git G),\]
and let $\int_{\X \git G}: H_{S}^*(\X \git G) \to H_{S}^*(pt)$ denote the equivariant push-forward. \newline
In view of the Jeffrey--Kirwan nonabelian localization theorem, it is possible to recover all results about $G$ from the action of the maximal torus $T$ in $G$. For this reason, let us assume that $G=T$, i.e. that $X$ is equipped with two commuting actions of tori $T, S$. The Borel model of the $S$-equivariant cohomology is
\[H^*_S (X) := H^*(\ES \times^{S} X), \]
so to obtain the equivariant Guillemin--Kalkman theorem, one could try to apply the non-equivariant version of it to the space $\ES \times^{S} X$. However, this space does not satisfy the compactness assumption. To avoid this problem, consider the finite-dimensional approximations $\mathbb{E}_m$ such that
\[ H^i_S(X) = H^i(\mathbb{E}_m \times^{S} X) \textrm{ for } m \textrm{ sufficiently large with respect to }i. \]
Typically, for torus actions one takes $\mathbb{E}_m = (\mathbb{C}^m \setminus 0)^n$ with the action of $S$ given by multiplication on every component. However, these spaces are not compact, so it is better to consider products of spheres $\mathbb{E}_m = (S^{2m+1})^n$. \newline
The following $S$-equivariant version of the Guillemin--Kalkman formula holds. Let $X_{crit} = \bigcup X_j$ be the set of critical points of the moment map $\mu_T$. Each $X_j$ is a fixed point set of a one-dimensional subgroup $T_j$ of $T$. Let $\kappa^{S}_j$ be the equivariant Kirwan map for the action of $H_j=T/T_j$ on $X_j$,
\[\kappa^{S}_j: H^*_{H_j \times S}(X_j) \to H^*_{S}(X_j /\mkern-6mu/ H_j),\]
and let $\kappa^S$ be the $S$-equivariant Kirwan map for the action of $T$ on $X$,
\[\kappa^{S}: H^*_{T \times S}(X) \to H^*_{S}(X /\mkern-6mu/ T).\]
Denote by $i_j$ the inclusion $X_j \to X$ and let $e^{S}(\nu_j)$ be the equivariant Euler class of the normal bundle $\nu_j$ to $X_j$ in $X$. Let $x_j$ be a chosen generator of $H^*_{T_j}(pt)$. Define
\[\Res_j(\alpha):= \Res_{x_j=\infty}\frac{i^*_j \alpha}{e^{S}(\nu_j)}.\]
\begin{theorem}[Equivariant Guillemin--Kalkman Theorem for $S^1$-actions] With the assumptions and notation introduced above, the following formula holds.
\[\int_{X /\mkern-6mu/ T}\kappa^{S}(\alpha) = \sum_{j \in \mathcal{J}} \int_{X_j /\mkern-6mu/ H_j} \kappa^{S}_j (\Res_j(\alpha)).\]
The indexing set $\mathcal{J}$ is the set of those $j$ for which $X_j \cap \mu^{-1}(l) \neq \varnothing$ for a suitably chosen halfline $l \subseteq \mathfrak{t}^*$; see equation~\eqref{jot} below.
\label{gkequiv}
\end{theorem}
The proof is a straightforward application of the Guillemin--Kalkman result to the space $\tilde{X} = \mathbb{E}_m \times^{S} X$ for a sufficiently large $m$ (which depends on $\alpha$), where $\mathbb{E}_m = (S^{2m+1})^n$. Let us choose $\alpha \in H_{S}^k(X)$ and choose $m$ such that $H_{S}^k(X) \simeq H^k(\mathbb{E}_m \times^{S} X)$, identifying $\alpha$ with an element in $H^k(\mathbb{E}_m \times^{S} X)$. Then the push-forward
\[H^*_{S}(X /\mkern-6mu/ T) \to H^*_{S}(pt)=H^*(\BS)\]
can be described as follows. The equivariant cohomology of a point $H^*_{S}(pt)$ is a polynomial ring $\mathbb{C}[t_1, \dots, t_n]$. Let us denote $t=(t_1,\dots, t_n)$ and let $I=(i_1, \dots, i_n)$ be a multi-index such that $|I| = \frac{1}{2}(\deg \alpha - \dim X /\mkern-6mu/ T )$. Then the push-forward sends
\[\kappa^{S}(\alpha) \mapsto \sum a_I t^I,\]
and the coefficients $a_I$ are given by the integrals
\[\int_{X_I } \kappa^{S}(\alpha),\]
where $X_I = (S^{2i_1+1} \times \dots \times S^{2 i_n +1}) \times^{S} X /\mkern-6mu/ T$. \newline
The proof of Guillemin and Kalkman is based on induction. The $S^1$ case is described in detail in the Appendix (adapting the Guillemin--Kalkman argument to the equivariant case). For one-dimensional torus actions the argument is more general: it does not require symplecticity assumptions; one only needs a compact orientable manifold with boundary $(X, \partial X)$ together with an $S^1$-action which is locally free on the boundary. Instead of the symplectic reduction one can then take $\partial X / S^1$. However, to proceed with the induction one needs to choose the subsequent one-dimensional tori in $T$ in such a way that at every step the assumptions are satisfied, i.e. one needs to choose a splitting $T=T_1 \times H$, where $T_1$ is a one-dimensional torus acting locally freely on $\partial X / H$. If we knew that $\partial X / H$ is the boundary of a compact manifold, $\partial X / H = \partial M$, then using the Guillemin--Kalkman theorem for $T_1 \simeq S^1$ we could express the integral as
\[\int_{\partial X / (T_1 \times H)} \kappa(\alpha) = \int_{\partial M / T_1} \kappa(\alpha) = \sum_k \int_{M_k} \Res_{x_k=\infty} \frac{i^*_k \alpha}{e(\nu_k)},\]
where the $M_k$ are the connected components of the fixed point set of $S^1$. In general one cannot claim that $\partial X / H $ is the boundary of some compact manifold. This is where we need the symplectic structure and the moment map of the action. \newline
The assumptions that $X$ is symplectic and the action of $T$ is Hamiltonian enable one to use the moment map of the action to choose the one-dimensional subtori, and to give a precise description of how to proceed with the induction. The argument is based on the following theorem, due to Atiyah \cite{atiyah1982} and independently to Guillemin and Sternberg \cite{guillemin1982}: if $X$ is a compact manifold equipped with a Hamiltonian torus action with moment map $\mu$, then the image of the moment map is a convex polytope $\Delta \subseteq \lie{t}^*$. We assume that $0$ is a regular value of $\mu$ lying in the interior of $\Delta$; otherwise the symplectic reduction is trivial, i.e. $X /\mkern-6mu/ T=\varnothing$, and Theorem \ref{gkequiv} is tautologically true. The set $\Delta^0$ of the regular values of $\mu$ is a disjoint union of convex polytopes
\[\Delta^0 = \Delta^0_1 \cup \dots \cup \Delta^0_k,\]
and by assumption $0$ lies inside one of the $\Delta^0_j$. For any chosen element $\theta$ in the weight lattice of $\lie{t}^*$, one can consider a ray through the origin in the direction of $\theta$,
\begin{equation}\label{jot}
l = \{ s \theta: s \in [0, \infty) \}.
\end{equation}
Let us choose $\theta$ in such a way that the ray $l$ does not intersect any of the walls of $\Delta_i^0$ of codimension greater than one and hence intersects the codimension one walls transversely. Then the Lie subalgebra $\lie{h} \subseteq \lie{t}$ defined as
\[\lie{h} = \{ v \in \lie{t}: \langle \theta, v \rangle = 0 \}\]
is the Lie algebra of a codimension one subtorus $H \subseteq T$. The assumptions we made on the ray $l$ and hence on the element $\theta$ imply that the moment map $\mu$ is transverse to $l$ and the action of $H$ on $\mu^{-1}(l)$ is locally free. Moreover, the action of $T/H \simeq S^1$ on $\mu^{-1}(l)/H$ is locally free on the boundary, which makes it possible to proceed with induction. Note that $\mu^{-1}(l)/H$ might not be a symplectic manifold, but rather a symplectic orbifold. However, the proof can easily be adapted to the case of orbifolds, because most of the analysis is done locally, and the only global component of the proof is Stokes Theorem, which holds for orbifolds \cite{satake1957}. The details of the proof for the $S^1$ case are described in the Appendix. \newline
Following the procedure which is briefly described above and in full detail in \cite{guillemin1996}, one arrives at the following result. Let us choose the ray $l$ as described above, and call $l$ a main branch. Next, for every intersection point $p_j$ of $l$ with the codimension-one walls of $\Delta_i^0$, let us choose a ray $l_j$ starting at $p_j$ and not intersecting any codimension-three walls of $\Delta_i^0$. The rays $l_j$ are called secondary branches. Next, continue the procedure by considering the intersection points of secondary branches $l_j$ with codimension-two walls of $\Delta_i^0$, and at each such point choose ternary branches (rays not crossing the codimension-four walls), etc. Finally one arrives at a vertex of the moment polytope (the vertices of the moment polytope are the images of fixed components). One obtains what Guillemin and Kalkman in \cite{guillemin1996} called a dendrite $\mathcal{D}$: a set of branches, consisting of sequences of rays $(l, l^{(1)}, \dots , l^{(n)})$, where $l^{(j)}$ is the branch constructed in step $j$, together with a set of points $(0, p_1, \dots, p_n)$ on those branches, such that the branch $l^{(i)}$ starts at the point $p_i$ and intersects the codimension $i+2$ wall at $p_{i}$. Each branch $B$ in the dendrite $\mathcal{D}$ corresponds to a fixed component of the action of $T$ and determines a sequence of tori
\[\{0\} \subseteq T_F^{(1)} \subseteq T_F^{(2)} \subseteq \dots \subseteq T_F^{(n)} = T,\]
which gives the choice of the desired one-dimensional subtori $T_F^{(i+1)}/T_F^{(i)}$ needed for the induction. Given a sequence of tori $T_F^{(i)}$ one can choose a basis $\mathbf{x}_F= \{x_{F,1},\dots,x_{F,n}\}$ of $\lie{t}^*$ such that for each $i$ the dual elements $x_{F,1}^*, \dots, x_{F,i}^*$ form a basis of the integer lattice in the Lie algebra $\lie{t}_F^{(i)}$ of the torus $T_F^{(i)}$.
\begin{theorem}[Equivariant Guillemin--Kalkman Theorem] With the notation introduced above, assume additionally that the fixed points of the action are isolated\footnote{If the fixed points are not isolated, one needs to replace the summation over fixed points with a summation over the fixed components, followed by the push-forward to a point over each component.} and denote by $i_p$ the inclusion into $X$ of the fixed point $p$ corresponding to the branch $B$, and by $e(\nu_p)$ the Euler class of the normal bundle at $p$. Then applying Theorem \ref{MIT} gives:
\[\int_{X /\mkern-6mu/ G}\kappa^{S}(\alpha) = \frac{1}{|W|}\sum_{B \in \mathcal{D}} \Res_{\mathbf{x}_p=\infty}\bigg{(}\varpi \frac{i^*_p \alpha}{e(\nu_p)}\bigg{)}.\]
\end{theorem}
\begin{remark}
The dendrite procedure, introduced by Guillemin and Kalkman in \cite{guillemin1996}, is generalized by Jeffrey and Kogan in \cite{jeffrey2005} by replacing rays with higher dimensional cones, yielding a different proof of the Jeffrey--Kirwan nonabelian localization theorem.
\end{remark}
\begin{example}[Projective plane]
Consider $\mathbb{C}^2$ with a $T = S^1$-action given by $x(z_0, z_1) = (x z_0, x z_1)$ and an $S = S^1 \times S^1$-action given by $(t_0, t_1)(z_0, z_1) = (t_0 z_0, t_1 z_1)$. The moment map for the $S^1$-action is
\[ \mu (z_0, z_1) = |z_0|^2 + |z_1|^2, \]
so $\mu^{-1}(1) = S^3 \subseteq \mathbb{C}^2$ and $\mu^{-1}(1)/S^1 = \mathbb{C}\mathbb{P}^1$.\newline
Let $X = \{ z \in \mathbb{C}^2: \mu(z) \leq 1 \}$ be a ball of radius $1$ in $\mathbb{C}^2$, so that $X /\mkern-6mu/ S^1 = \partial X / S^1 = \mathbb{C}\mathbb{P}^1$. Let $\alpha \in H^k_{S^1 \times S}(X)$ and consider
\[\int_{\mathbb{C}\mathbb{P}^1} \kappa^S(\alpha).\]
The $S^1 \times S$-equivariant cohomology of $X$ is a module over $H^*_{S^1 \times S}(pt) = \mathbb{C}[x,t_0,t_1]$.
The only fixed point for the action of $S^1$ on $X$ is the origin, so the expression
\[\sum_{\{ k : \mu_{|X_k} \geq 0 \}} \int_{X_k} \Res_{x=\infty} \frac{i^*_k \alpha}{e(\nu_k)}\]
reduces to a sum over a single point $0=(0,0)$ and the integral is just the evaluation at this point. We get:
\[\int_{\mathbb{C}\mathbb{P}^1} \kappa^S(\alpha) = \Res_{x=\infty} \frac{i^*_0 \alpha}{e(\nu_0)}.\]\newline
The restriction of the form $\alpha \in H^k_{S^1 \times S}(X)$ to the point $0$ is a polynomial in $x$; let us denote this polynomial by $f$. The $S^1 \times S$-equivariant Euler class equals $(t_0-x)(t_1 -x)$. One has:
\[\int_{\mathbb{C}\mathbb{P}^1} \kappa^S(\alpha) = \Res_{x=\infty} \frac{f(x)}{(t_0 - x) (t_1 - x)}=\]
\[ = \frac{f(t_0)}{ (t_1 - t_0)} + \frac{f(t_1)}{(t_0 - t_1)} = \frac{f(t_0)-f(t_1)}{ (t_1 - t_0)} ,\]
which is a polynomial in $t_0, t_1$. \newline
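The computation above can be verified symbolically: for any polynomial $f$, the residue at infinity equals the divided difference $(f(t_0)-f(t_1))/(t_1-t_0)$, which is indeed a polynomial. A small sympy sketch, using the standard convention $\Res_{x=\infty} g\,dx = -\Res_{w=0} w^{-2} g(1/w)\,dw$:

```python
import sympy as sp

# Verify the projective-plane example: for a polynomial f,
#   Res_{x=inf} f(x) / ((t0 - x)(t1 - x)) = (f(t0) - f(t1)) / (t1 - t0),
# hence a polynomial in t0, t1.  Residue-at-infinity convention assumed:
#   Res_{x=inf} g dx = -Res_{w=0} g(1/w)/w^2 dw.
x, w, t0, t1 = sp.symbols('x w t0 t1')
f = x**3 + 2*x                            # an arbitrary polynomial to test

g = f / ((t0 - x) * (t1 - x))
res_inf = -sp.residue(g.subs(x, 1/w) / w**2, w, 0)

divided_difference = (f.subs(x, t0) - f.subs(x, t1)) / (t1 - t0)
print(sp.simplify(res_inf - divided_difference))   # 0
```

Replacing $f$ by any other polynomial gives the same agreement, since both sides are linear in $f$ and the identity follows from the residue theorem applied to $g$.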
Alternatively, one could consider the action of $S^1$ on $\mathbb{C}\mathbb{P}^2$, via
\[x \cdot [z_0:z_1:z_2] = [z_0: x z_1: x z_2],\]
and use dendrites to describe the push-forward. Then the components of the fixed point set of the $S^1$-action are the point $[1:0:0]$ and the line $\{ [0:z_1:z_2] \} \cong \mathbb{C}\mathbb{P}^1$, and only the fixed point $[1:0:0]$ appears in the summation.
\end{example}
\begin{example}[Grassmannian]
Consider the action of the unitary group $U(k)$ on the space $Hom(\mathbb{C}^k,\mathbb{C}^n)$ given by multiplication on the right. The moment map for this action \linebreak $\mu: Hom(\mathbb{C}^k, \mathbb{C}^n) \to \mathfrak{u}(k)^*$ is given by
\[ \mu(A) = A^* A - Id, \]
hence $\mu^{-1}(0) = \{ A: A^*A = Id\}$ and the column vectors of a matrix $A \in \mu^{-1}(0)$ form an orthonormal $k$-tuple in $\mathbb{C}^n$, so
\[Hom(\mathbb{C}^k, \mathbb{C}^n) /\mkern-6mu/ U(k) = Grass_k(\mathbb{C}^n).\]
Let $T$ denote the maximal torus in $U(k)$, acting on $Grass_k(\mathbb{C}^n)$ via restriction of the action of $U(k)$, with characters $\mathbf{z} = \{ z_1,\dots,z_k\}$. The roots of $U(k)$ are $ \Phi = \{ z_i - z_j\}_{i \neq j}$. Applying Theorem \ref{MIT} we can reduce the integral over $Grass_k(\mathbb{C}^n)$ to the integral over the reduction of $Hom(\mathbb{C}^k, \mathbb{C}^n)$ with respect to the action of $T$:
\[ \int_{Grass_k(\mathbb{C}^n)} \kappa_S(\alpha) = \frac{1}{|W|} \int_{Hom(\mathbb{C}^k, \mathbb{C}^n) /\mkern-6mu/ T} \varpi \kappa^T_S(\alpha) , \]
where $\varpi = \prod_{\gamma \in \Phi}\gamma = \prod_{i \neq j} (z_i - z_j)$ is the product of the roots of $U(k)$, $W$ denotes the Weyl group of $U(k)$ and $\kappa^T_S$ is the $S$-equivariant Kirwan map for the action of $T$ on $Hom(\mathbb{C}^k, \mathbb{C}^n)$. \newline
The maximal torus $T$ in $U(k)$ acts diagonally on $Hom(\mathbb{C}^k, \mathbb{C}^n) = \mathbb{C}^n \oplus \dots \oplus \mathbb{C}^n$ ($k$ copies), with the action on each summand $\mathbb{C}^n$ given by scalar multiplication by the corresponding character. The moment map $\mu_T : Hom(\mathbb{C}^k, \mathbb{C}^n) \to \mathfrak{t}^*$ for this action is given by the projection of the moment map for the action of $U(k)$, hence
\[\mu_T(A) = (||v_1||^2 - 1,\dots, ||v_k||^2 - 1),\]
where $v_1, \dots, v_k$ are the column vectors of $A$. The symplectic reduction for the $T$-action is therefore
\[Hom(\mathbb{C}^k, \mathbb{C}^n) /\mkern-6mu/ T = \mu_T^{-1}(0)/T = (\mathbb{C}\mathbb{P}^{n-1})^k,\]
since $\mu_T^{-1}(0)$ consists of $k$-tuples of vectors of length $1$ in $\mathbb{C}^n$. The image of the moment map $\mu_T$ is the product of half-lines $[-1,\infty)^k$. It is not a convex polytope, because the space $Hom(\mathbb{C}^k, \mathbb{C}^n)$ is non-compact. However, we can still use the dendrite algorithm as in \cite{guillemin1996}, only this time we have to make sure that at each step we choose the rays $l$ in such a way that they intersect a codimension-one wall (and do not diverge to infinity). In our case this can easily be achieved: if the chosen ray $l$ does not intersect a codimension-one face of $[-1,\infty)^k$, then the ray $-l$ does. Choosing the rays in this way will always lead to a branch ending at the only fixed point of the action, the origin. Therefore, one gets the following expression for the push-forward
\[ \int_{Grass_k(\mathbb{C}^n)} \kappa_S(\alpha) = \frac{1}{|W|}\Res_{\mathbf{z}=\infty} \frac{\varpi V(z_1,\dots, z_k)}{e(0)}, \]
where $\varpi = \prod_{i \neq j} (z_i - z_j)$ is the product of the roots of $U(k)$, and $e(0)$ is the $S \times T$-equivariant Euler class at zero, $e(0) = \prod_{i,j}(z_i - t_j)$, where $z_1,\dots,z_k$ are the characters of the $T$-action and $t_1,\dots,t_n$ are the characters of the $S$-action. Finally,
\[ \int_{Grass_k(\mathbb{C}^n)} \kappa_S(\alpha) = \frac{1}{|W|} \Res_{\mathbf{z}=\infty} \frac{ \prod_{i \neq j} (z_i - z_j) V(z_1,\dots,
z_k)}{\prod_{i,j}(z_i - t_j)}, \]
which is the same formula as the one obtained in \cite{zielenkiewicz2014}, using purely combinatorial methods. \newline
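The resulting formula can be tested symbolically in small cases. The sketch below checks $k=2$, $n=3$ with $V(z_1,z_2)=z_1 z_2$, i.e. $\int_{Grass_2(\mathbb{C}^3)} c_2(\mathcal{R})$, against the Atiyah--Bott--Berline--Vergne fixed-point sum over the three coordinate planes. The iterated residue at infinity is taken with the standard convention $\Res_{z=\infty} = -\Res_{w=0}\, w^{-2}(\cdot)|_{z=1/w}$ in each variable, and the characters $t_j$ are specialized to sample rational values; both choices are assumptions of this sketch, since conventions differ between references.

```python
import sympy as sp
from itertools import combinations

# Check the Grassmannian residue formula for k = 2, n = 3 with
# V(z1, z2) = z1*z2, i.e. the integral of c_2(R) over Grass_2(C^3).
# Convention: Res_{z=inf} g dz = -Res_{w=0} g(1/w)/w^2 dw, iterated.
z1, z2, w = sp.symbols('z1 z2 w')
t = [sp.Integer(1), sp.Integer(2), sp.Integer(5)]   # sample character values
k = 2

def res_at_infinity(g, var):
    return -sp.residue(g.subs(var, 1/w) / w**2, w, 0)

V = z1 * z2
num = V * (z1 - z2) * (z2 - z1)                     # V * prod_{i != j}(z_i - z_j)
den = sp.Mul(*[(ti - zj) for ti in t for zj in (z1, z2)])
inner = res_at_infinity(num / den, z2)
residue_side = res_at_infinity(inner, z1) / sp.factorial(k)

# ABBV sum over fixed points = 2-element subsets of {t1, t2, t3};
# the tangent weights at the subset sigma are t_i - t_s, i outside, s inside.
abbv = sum(
    sp.Mul(*sigma) / sp.Mul(*[ti - ts for ti in t if ti not in sigma
                              for ts in sigma])
    for sigma in combinations(t, 2))

print(sp.simplify(residue_side - abbv), abbv)       # 0 1
```

Both sides give $1$, consistent with $c_2(\mathcal{R}) = q^2$ on $Grass_2(\mathbb{C}^3)$, where $q=c_1(\mathcal{Q})$ is the hyperplane class of the quotient bundle.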
The description of the Grassmannian as the symplectic reduction for the $U(k)$-action has been used by Martin to give a description of the non-equivariant cohomology of the classical Grassmannian and to derive an integration formula using Theorem \ref{martin}. For details, see \cite{martin2000}, chapter 7.
\label{grassmannian}
\end{example}
\section{Appendix}\label{appendix}
\subsection{The outline of the proof}
Our aim is to apply the Jeffrey--Kirwan nonabelian localization theorem (in the formulation of Guillemin and Kalkman) to reobtain the residue-type formulas for push-forwards in equivariant cohomology of classical Grassmannians. The key idea is to realize the Grassmannian as a symplectic reduction. If $X$ is a symplectic manifold with a Hamiltonian action of a compact group $G$, then the Jeffrey--Kirwan theorem provides a description of the non-equivariant push-forward $H^*(X /\mkern-6mu/ G) \to H^*(pt)$ in terms of a certain residue. \newline
We assume that $X$ is a symplectic manifold with an action of a product $G \times S$, where $G$ is a compact group with maximal torus $T$, acting in a Hamiltonian way, and $S$ is a compact torus. We assume that $\mu^{-1}_G(0)$ and $\mu_{T}^{-1}(0)$ are $S$-invariant. We rephrase the nonabelian localization theorem in $S$-equivariant cohomology (adapting the proof of Guillemin and Kalkman), thereby obtaining an expression for the equivariant push-forward $H^*_{S}(X /\mkern-6mu/ G) \to H^*_{S}(pt)$ in terms of a certain residue. In the case of the classical Grassmannian, the resulting formula coincides with the one obtained in \cite{zielenkiewicz2014}. \newline
The proof we present here uses two main tools:
\begin{enumerate}
\item Theorem \ref{MIT}, which reduces the problem to the case $G=T$.
\item The non-equivariant approach of Guillemin and Kalkman applied to the approximation spaces of the Borel model of the $S$-equivariant cohomology of $X$.
\end{enumerate}
Guillemin and Kalkman study the Kirwan map by looking at it on the level of differential forms. They give a precise description of the isomorphism
\[ \pi^*: H^*(\partial X / T) \to H^*_{T}(\partial X),\]
in terms of iterated residues, by first describing $\pi^*$ for $T=S^1$ and using induction on the dimension of the torus. Finally, they apply Stokes' theorem and the Atiyah--Bott--Berline--Vergne localization formula to obtain the formula for the push-forward. \newline
In the equivariant case we need to describe the isomorphism
\[ \pi^*_{S}: H^*_{S}(\partial X / T) \to H^*_{T \times S}(\partial X),\]
and use it to deduce the formula for the $S$-equivariant push-forward \linebreak $H^*_{S}(\partial X ) \to H^*_{S}(pt)$. For this we consider the approximation spaces of the Borel model of the $S$-equivariant cohomology of $X$, $X_m = \mathbb{E}_m \times^{S} X$ with $\mathbb{E}_m = (S^{2m + 1})^n$, and use the following fact relating the equivariant push-forward to the non-equivariant one.
\begin{proposition}
Let $X$ be a compact manifold with an $S$-action. Let $p: X \to pt$, and let $\alpha \in H^k_{S}(X)$. Then the push-forward $p_*: H^*_S(X) \to H^*_S(pt)$ is given by the formula
\[p_* \alpha = \sum_I \beta_I t^I,\]
where the sum runs over multi-indices $I = (i_1, \dots, i_n)$ satisfying $|I| = \frac{1}{2}(\deg \alpha- \dim X)$. The coefficients $\beta_I$ are given by
\[\beta_I = \int_{X_I} j^* \alpha,\]
with $X_I = ( S^{2i_1 + 1} \times S^{2 i_2 + 1} \times \dots \times S^{2 i_n + 1} ) \times^S X \xrightarrow{j} \ES \times^S X$.
\end{proposition}
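As a simple sanity check of the proposition (our example), take $S = S^1$ (so $n=1$) and $X = pt$, for which $p_*$ is the identity on $H^*_{S^1}(pt) = \mathbb{C}[t]$. For $\alpha = t^k$ the only admissible index is $I = (k)$, and $X_{(k)} = S^{2k+1} \times^{S^1} pt = \mathbb{CP}^k$, with $j^* t$ the hyperplane class $h$. Then

```latex
\[
\beta_{(k)} = \int_{\mathbb{CP}^k} h^k = 1,
\qquad\text{so}\qquad
p_*\, t^k = \beta_{(k)}\, t^k = t^k,
\]
```

as expected.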
\subsection{The $S^1$-action: a detailed proof via approximation spaces}
Let $X$ be a compact oriented $S^1$-manifold with boundary, and assume that the $S^1$ action is locally free on the boundary of $X$. The inclusion $ i: \partial{X} \to X$ induces the map on $S^1$-equivariant cohomology \[i^*: H^*_{S^1}(X) \to H^*_{S^1}(\partial{X}).\]
Recall that if the action on $\partial{X}$ is locally free, there is a canonical isomorphism $\pi^*: H^*(\partial{X} / S^1 ) \to H^*_{S^1}(\partial{X})$. The Kirwan map is defined as the composition
\[ \kappa = (\pi^*)^{-1} \circ i^* : H^*_{S^1}(X) \to H^*(\partial{X} / S^1 ). \]
Equivariant cohomology can be computed from the Cartan complex, which in the case of an $S^1$-action on $\partial{X}$ is
\[ \tilde{\Omega} = \Omega^*(\partial{X})^{S^1} \otimes \mathbb{C}[x] \]
with differential $\tilde{d} = d \otimes 1 + \iota(v) \otimes x$, where $\iota(v)$ denotes contraction with the vector field $v$ generating the $S^1$-action. \newline
Guillemin and Kalkman in \cite{guillemin1996} consider the integral $\int_{\partial{X} / S^1} \kappa(\alpha)$ for an equivariantly closed form $\alpha \in \tilde{\Omega}^k$. Since our aim is to prove an analogous result for push-forwards in equivariant cohomology, we replace the Kirwan map $\kappa$ by its equivariant analogue $\kappa_{S}: H^*_{S^1 \times S}(X) \to H_{S}^*(\partial{X} / S^1) $ and consider the push-forward in $S$-equivariant cohomology
\[\int_{\partial{X} / S^1} \kappa_{S}(\alpha),\]
for an $(S^1 \times S)$-equivariantly closed form $\alpha$. We will always assume that $S$ is a torus and that the actions of the two tori $S^1$ and $S$ commute. The proof presented here follows the proof in \cite{guillemin1996}, adapting it to the equivariant setting. \newline
If $X$ is a compact $S^1 \times S$-manifold with a locally free $S^1$-action on $\partial{X}$, and the actions of $S^1$ and $S$ commute, then the spaces $\mathbb{E}_m \times^{S} X$ are compact $S^1$-manifolds with a locally free action of $S^1$ on the boundary. Moreover, $\partial(\mathbb{E}_m \times^{S} X) = \mathbb{E}_m \times^{S} \partial X$ are the approximation spaces in the Borel model for the equivariant cohomology of $\partial X$. Let us denote $X_m = \mathbb{E}_m \times^{S} X$, and let $\tilde{\Omega}_{\partial X_m}$ be the Cartan complex computing the $S^1$-equivariant cohomology of $\partial X_m$ (this implies that for $m$ large enough it computes the $S^1 \times S$-equivariant cohomology of $\partial X$ in degrees small with respect to $m$). Let $\alpha \in \tilde{\Omega}_{\partial X_m}^*$ be a form which is equivariantly closed with respect to the $S^1$-action, and assume $\deg \alpha = \dim \partial X_m -1 $. Then, using the isomorphism
\[ \pi^*: H^*(\partial X_m / S^1) \simeq H^*_{S^1}(\partial X_m),\]
we can write
\[\alpha = \tilde{d} \nu + \pi^* \gamma\]
for some $\nu \in \tilde{\Omega}^{k-1}_{\partial X_m}$ and $\gamma \in \Omega^k(\partial X_m / S^1)$.
Following the calculation in section 2 of \cite{guillemin1996} we can explicitly write down $\nu$ and $\pi^* \gamma$.
Consider the following element of $\Omega^*(\partial X_m)^{S^1}\otimes \mathbb{C}[x,x^{-1}]$ (which is just the localization of $\tilde{\Omega}_{\partial X_m}$ with respect to $x$):
\[ \nu_0 = \frac{\theta \alpha }{x + d \theta} = \frac{\theta \alpha}{x} \sum_{n \geq 0} \big{(}\frac{-d \theta}{x}\big{)}^n \]
where $\theta$ is an $S^1$-invariant 1-form on $\partial X_m$ satisfying $\iota(v)\theta = 1$.
The element $\nu_0$ is a Laurent series in $x$ with coefficients in $\Omega^*(\partial X_m)^{S^1}$. Writing the form $\alpha = \sum_{i \geq 0} \alpha_i x^i$ as a polynomial in $x$ with coefficients in $\Omega^*(\partial X_m)^{S^1}$, we can rewrite $\nu_0$ as
\[ \nu_0 = \sum_{n,i} \theta \alpha_i (-d \theta)^n x^{i-n-1}.\]
Since the coefficients of the above series are in $\Omega^*(\partial X_m)^{S^1}$, which is trivial in degrees higher than $\dim \partial X_m$, the coefficient of $x^{i-n-1}$ is zero if
\[1 + \deg \alpha_i + 2n > \dim \partial X_m. \]
Since $\deg \alpha = \dim \partial X_m -1$ and $\deg \alpha_i = \deg \alpha - 2i$, the only non-zero coefficients appear when $n \leq i$, so the only powers of $x$ that can occur in the Laurent series of $\nu_0$ are the nonnegative ones and $x^{-1}$, and we can write
\[\nu_0 = \nu + \beta x^{-1}, \]
where $\beta = \Res_{x=\infty} \nu_0 \in \Omega^*(\partial X_m)^{S^1}$. \newline
The differential in the Cartan complex $\tilde{\Omega}_{\partial X_m} = \Omega^*(\partial X_m)^{S^1} \otimes \mathbb{C}[x]$ is given by $\tilde{d} = d \otimes 1 + \iota(v)\otimes x$, so
\[\alpha = \tilde{d} \nu + \iota(v)\beta,\]
and the form $\iota(v)\beta$ is $S^1$-invariant and horizontal, so it comes from some form $\gamma \in \Omega^k(\partial X_m / S^1)$,
\[\iota(v)\beta = \pi^* \gamma.\]
This shows that the map $(\pi^*)^{-1}$ is given by the formula
\[(\pi^*)^{-1}(\alpha) = \Res_{x=\infty} \iota(v) \frac{\theta \alpha}{x + d \theta} = \Res_{x=\infty} \pi_{*} \frac{\theta \alpha}{x + d \theta}.\]
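As a minimal illustration of this formula (our example), take $\partial X_m = S^1$ acting freely on itself, so that $d\theta = 0$ and the quotient is a point. A form of degree $\dim \partial X_m - 1 = 0$ which is equivariantly closed is a constant $\alpha = c$, and

```latex
\[
(\pi^*)^{-1}(c)
 = \Res_{x=\infty} \iota(v)\,\frac{\theta c}{x + d\theta}
 = \Res_{x=\infty} \frac{c}{x}
 = c,
\]
```

as it must be, since $\pi^*$ is the identity on constants.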
\newline
Let $\{ (X_m)_i \}_{i=1, \dots, N}$ be the connected components of the fixed point set of the action of $S^1$, and let $\{ U_i \}_{i=1, \dots, N}$ be pairwise disjoint tubular neighbourhoods of the sets $ (X_m)_i$ such that $U_i \cap \partial X_m = \varnothing$. Let $\alpha \in H^*_{S^1}(X)$ (we will abuse notation and denote by $\alpha$ also its restrictions to $X_m$ and $\partial X_m$), and let $\nu,\theta$ be as above. Assume that $\deg \alpha = \dim \partial X_m -1$. Let us extend the forms $ \nu , \theta$ to $X_m \setminus X_m^{S^1}$. Applying Stokes' theorem to $ 0 = \int_{X_m} \alpha = \int_{X_m} \tilde{d} \nu $ we get
\[ \sum_{k=1}^N \int_{U_k} \frac{\theta \alpha}{x + d \theta} = \int_{\partial X_m} \frac{\theta \alpha}{x + d \theta} = \int_{\partial X_m / S^1} \pi_{*}\big( \frac{\theta \alpha}{x + d \theta} \big).\]
It was shown in \cite{berline1983}\footnote{The computation is a step of the proof of a theorem announced in \cite{berline1982} and proven in \cite{berline1983}. The most detailed computation can be found in \cite{guillemin1999supersymmetry}.} that by shrinking the radii of $U_i$ to zero the left-hand side converges to
\[ \sum_{k=1}^N \int_{(X_m)_k} \frac{i^*_k \alpha}{e(\nu_k)},\]
where $i_k : (X_m)_k \to X_m$ is the inclusion map and $e(\nu_k)$ denotes the Euler class of the normal bundle to $(X_m)_k$ in $X_m$. Taking residues at $x=\infty $ of both sides of the above expression we get
\[ \int_{\partial X_m / S^1} \Res_{x=\infty} \pi_{*}\big( \frac{\theta \alpha}{x + d \theta} \big) = \sum_{k=1}^N \int_{(X_m)_k} \Res_{x=\infty} \frac{i^*_k \alpha}{e(\nu_k)}, \]
and the left-hand side equals $\int_{\partial X_m / S^1} \kappa^S (\alpha)$. \newline
Finally, we have shown that for $\alpha \in H^{\dim \partial X_m - 1}_{S^1}(X_m)$ we have
\[\int_{\partial X_m / S^1} \kappa^{S} (\alpha) = \sum_{k=1}^N \int_{(X_m)_k} \Res_{x=\infty} \frac{i^*_k \alpha}{e(\nu_k)},\]
so for a class $\alpha \in H^i_{S^1 \times S}(\partial X)$ we can choose $m$ such that $i=\dim \partial X_m -1$, and it follows that the $S$-equivariant push-forward satisfies
\[\int_{\partial X/ S^1} \kappa^{S} (\alpha) = \sum_{k=1}^N \int_{X_k} \Res_{x=\infty} \frac{i^*_k \alpha}{e(\nu_k)}.\]
Note that our notation for the residues differs from that of Guillemin and Kalkman, who denote the residues by $\Res_{x=0}$, not by $\Res_{x=\infty}$. However, this is a matter of notation only. The residue in \cite{guillemin1996}, like here, is defined to be the coefficient at $x^{-1}$ in the series expansion, which we choose to call the residue at infinity to remain consistent with the classical notation from calculus.
\begin{corollary}[Symplectic case]
Consider a symplectic manifold $X$ with a Hamiltonian action of $S^1 \times S$ with moment map $\mu: X \to \mathbb{R}$ for the $S^1$-action. Then the manifold with boundary $X_+ = \{ x \in X : \mu(x) \geq 0 \}$ is a compact $S^1$-manifold with a locally free $S^1$-action on the boundary $\partial X_+ = \mu^{-1}(0)$. Let $X /\mkern-6mu/ S^1$ denote the quotient $\mu^{-1}(0)/ S^1$ and let $X_k$ denote the connected components of the fixed point set of the $S^1$-action on $X$. By the considerations above applied to $X_+$ we get:
\[\int_{X /\mkern-6mu/ S^1} \kappa^{S}(\alpha) = \sum_{\{ k: \mu_{|X_k} \geq 0 \}} \int_{X_k} \Res_{x=\infty} \frac{i^*_k \alpha}{e(\nu_k)}. \]
\end{corollary}
\begin{corollary}[Jeffrey--Kirwan formula]
In \cite{jeffrey1995} the authors consider the case when $X$ is a symplectic manifold with an action of a compact group $K$ with moment map $\mu: X \to \lie{k}^*$. The differential form being integrated is
\[\kappa(\alpha) = \eta_0 e^{i \omega_0},\]
where $\eta_0 = \kappa(\eta)$ is the image under the Kirwan map of a class $\eta \in H^*_{K}(X)$ and $\omega_0$ is the symplectic form on the symplectic reduction $X /\mkern-6mu/ K$. In the special case when $K = S^1$, one obtains the following formula for the push-forward:
\[\int_{X /\mkern-6mu/ S^1} \eta_0 e^{i \omega_0} = \sum_{F \subseteq F_+} \int_{F} \Res_{x=\infty} \frac{i^*_F (\eta e^{i (\omega + \mu)})}{e(\nu_F)} = \]
\[ = \Res_{x=\infty} \sum_{F \subseteq F_+} \int_{F} e^{i \mu(F) }\frac{i^*_F (\eta e^{i \omega})}{e(\nu_F)},\]
where $F \subseteq F_+$ are the components of the subset $F_+$ of the fixed-point set on which $\mu$ is positive. \newline
After a slight change of notation, this is almost the formula from Theorem 8.1 in \cite{jeffrey1995} and its corollary below, corrected by eliminating the $\frac{1}{2}$-factor and the $\psi^2$-factor under the residue. Again, we denote the series coefficient at $x^{-1}$ as a residue at infinity, not at zero as in \cite{jeffrey1995}.
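As a sanity check (our example), take $X = S^2$ with $S^1$ rotating about the vertical axis and $\mu$ the height function, normalized so that $0$ is a regular value; then $X /\mkern-6mu/ S^1$ is a point, only the north pole $N$ lies in $F_+$, and (with a weight-one normal bundle, so $e(\nu_N) = x$, and with $\mu(N)$ paired with the equivariant parameter $x$) the formula for $\eta = 1$ gives

```latex
\[
\int_{X /\mkern-6mu/ S^1} e^{i \omega_0}
 = \Res_{x=\infty} \frac{e^{i \mu(N) x}}{x}
 = \Res_{x=\infty} \sum_{k \geq 0} \frac{(i \mu(N))^k}{k!}\, x^{k-1}
 = 1,
\]
```

in agreement with the direct integral over the point $X /\mkern-6mu/ S^1$, up to the sign conventions for $e(\nu_N)$.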
\end{corollary}
\bibliographystyle{alpha}
\section{INTRODUCTION}
Magnetic fields are ubiquitous in stars. Across a wide range of stellar
masses, these fields are thought to arise from dynamo action, with
convection, rotation, and shear all likely players in the process of field
generation (e.g., Moffatt 1978). Stars whose interiors are everywhere
convectively unstable have been thought to harbor dynamos that may differ
fundamentally from those in stars that also possess radiative cores (e.g.,
Durney, De Young \& Roxburgh 1993). Recent observations and theoretical
models are greatly complicating this basic view, and a thorough
understanding of the dynamo process in such stars remains elusive. We
begin here by outlining both the current theoretical understanding of the
magnetism of stars with and without internal stable zones, as well as some
recent observational puzzles that serve to motivate our work.
\subsection{Magnetic fields in solar-type stars}
In the Sun, the boundary layer between the convective envelope and the
stably stratified core is believed to play a pivotal role in the generation
of global magnetic fields by dynamo action (e.g., Ossendrijver 2003).
Helioseismology has revealed that this interface region is a site of strong
shear, where the solar angular velocity profile transitions from
differential rotation -- with a fast equator, slow poles, and angular
velocity nearly constant on radial lines within the convection zone -- to
solid-body rotation within the radiative core (e.g., Thompson et al. 1996).
In the standard ``interface dynamo'' paradigm for the global solar dynamo,
this shearing layer, called the tachocline, stretches and amplifies
poloidal magnetic fields generated within the convection zone,
giving rise to organized toroidal field structures (e.g., Parker 1993;
Charbonneau \& MacGregor 1997; Ossendrijver 2003). These toroidal fields
may ultimately become unstable to magnetic buoyancy instabilities and rise
through the convection zone, with some eventually appearing at the surface
as sunspots and others being shredded by the convection and used to create
poloidal field, thereby completing the dynamo cycle. Alternatively,
it has been argued that coherent meridional circulations may bring fields
to the surface (e.g., Rempel 2006; Dikpati \& Charbonneau 1999). In either
case, the tachocline is likely a key element in the solar dynamo process --
partly because it is a site of strong shear, but also because the stable
stratification that prevails below the convection zone allows fields to be
greatly amplified before they become unstable to magnetic buoyancy
instabilities and rise. In the convection zone, by contrast, the timescale
for field amplification to the $\sim$2000-3000 G commonly observed in sunspots
(e.g., Simon et al. 1988; Stix 2002) is longer than simple estimates of the timescale
for flux tubes to rise due to this instability (Parker 1975).
Simulations and mean-field models have helped to affirm the likely
importance of the tachocline in generating the large-scale magnetism of
stars like the Sun. Three-dimensional magnetohydrodynamic (MHD)
simulations that modeled convection zones in various geometries have
demonstrated that helical convection can readily build strong (kG) magnetic
fields; in some cases (e.g., Jones \& Roberts 2000; Stellmach \& Hansen
2004), both small and large-scale fields were realized. However, several
recent calculations of turbulent flows (at finite Prandtl number) have
suggested that generating a large-scale field may be quite difficult unless
the flow is highly organized (see discussion in Cattaneo \& Hughes 2006);
other authors have argued that boundary conditions may play a crucial role
in allowing large-scale field generation (e.g., Blackman \& Field 2000).
Recent 3-D spherical shell simulations that modeled the bulk of the solar
convective envelope (Brun, Miesch \& Toomre 2004, hereafter BMT04) have lent
some support to the view that convection alone may have difficulty in
building solar-like magnetism: dynamo action was realized, but the fields
tended to be mostly on small spatial scales, exhibited no evident parity
preferences, and showed a tendency to reverse in polarity on irregular
intervals of only a few hundred days. Building upon this work, recent
simulations also included penetration by the turbulent convection into a
stable region with an imposed tachocline of shear (Browning et al. 2006).
These calculations showed that large-scale toroidal fields could indeed be
realized within the tachocline; further, these fields possessed
antisymmetric parity like that observed in sunspots, and showed much more
stable polarity than in simulations of the convection zone alone.
Likewise, extensive mean-field modeling has generally also suggested that
differential rotation in the tachocline plays a dominant role in generating
toroidal fields from poloidal fields, a process parameterized as the
$\Omega$-effect (e.g., Moffatt 1978; Steenbeck, Krause \& Radler 1966). In
the Sun, this is taken to act in concert with the $\alpha$-effect that can
produce poloidal field from toroidal (and vice versa), either through the
action of cyclonic convection or through Babcock-Leighton effects near the
surface (e.g., Parker 1955; Steenbeck et al 1966; Babcock 1961; Leighton
1969). Together, these constitute the solar $\alpha-\Omega$ dynamo.
\subsection{Puzzles of M-star magnetism}
Stars less massive than about 0.35 solar masses are fully convective, and
so cannot possess a transition region precisely like the solar tachocline.
It therefore seems natural to expect that they should harbor
qualititatively different magnetic dynamo action than stars like the Sun
(e.g., Durney, De Young \& Roxburgh 1993). Yet no obvious transition in
magnetic activity has been observed at spectral types $\sim$ M3, where the
stably stratified core disappears. Instead, stars on both sides of this
``tachocline divide'' appear to be able to build magnetic fields
effectively. Many fully convective stars are observed to have strong
chromospheric H$\alpha$ emission (e.g., Hawley, Gizis \& Reid 1996; Mohanty
\& Basri 2003; West et al. 2004), which is well-established as an
indication of the presence of magnetic fields. Indeed, the fraction of
stars that show such emission increases markedly after the transition to
full convection, reaching a maximum around spectral type M8 (West et
al. 2004). Magnetic fields have also recently been directly detected on
fully convective stars using magnetically sensitive FeH line ratios
(Reiners \& Basri 2007).
It remains unclear whether the magnetic fields in such stars differ
fundamentally -- either in spatial structure, temporal variability, or
dependence on stellar parameters like rotation rate -- from those in more
massive stars. In stars with spectral types ranging from mid-F to early M,
observations indicate that chromospheric and coronal activity increase
rapidly with increasing rotational velocity, then saturate above a
threshold velocity (e.g., Noyes et al. 1984; Pizzolato et al. 2003;
Delfosse et al. 1998). This threshold velocity appears to lessen with
decreasing stellar mass. Some evidence also exists for a
``super-saturation'' regime, with magnetic activity somewhat lessened in
the most rapid rotators (e.g., James et al. 2000). The situation is less
clear in the mid to late M-dwarfs. Mohanty \& Basri (2003) argued that a
sample of stars ranging from M4 to M9 exhibited a common ``saturation
type'' rotation-activity relationship, with observed activity roughly
independent of rotation rate above a threshold value. Measuring the
rotation rates of the slowest rotators is difficult, so it remains unclear
whether magnetic activity in these stars increases gradually with rotation
as in solar-like stars, or instead changes more abruptly. Very recently,
two fully convective M6 stars have been found with reasonably rapid
rotation (v sin i $>$ 5 km s$^{-1}$) but no measurable chromospheric
emission (West \& Basri 2007, in preparation): these stars are not so cool
that low conductivity effects are likely to play a major role (see Mohanty
et al. 2002), so these observations raise further questions about how the
magnetic fields of such stars are influenced by rotation. Another striking
constraint on the strength and morphology of M-dwarf magnetic fields has
been provided by Donati et al. (2006), who used Zeeman Doppler imaging to
show that the rapidly rotating fully convective star v374 Peg possesses a
large-scale axisymmetric field of kG strength, but no surface differential
rotation.
An additional complication is that the rotation rates of stars (and hence
perhaps their magnetic activity) depend strongly upon their age: stars
generally arrive on the main sequence rapidly rotating, and slow over time
through angular momentum loss via a magnetized wind (e.g., Skumanich 1972;
Weber \& Davis 1967). The amount of rotational braking a star undergoes at
any point in its evolution is then presumably dependent on the strength of
its magnetic field, and perhaps also on the geometry of that field. Thus,
differences between the dynamo acting on either side of the ``tachocline
divide'' might manifest themselves as differences in the rotational
evolution of stars with or without radiative cores. Indeed, some recent
evidence suggests that the timescale for magnetic braking, as indicated by
the typical ages at which stars are no longer observably rotating, may
increase markedly at approximately the mass where full convection sets in
(e.g., West et al. 2007; Reiners et al. 2007; Barnes 2003).
\subsection{Prior modeling and this work}
A few previous authors have examined the generation of magnetic fields in
fully convective stars using either semi-analytical theory or simulation,
but the overall picture that emerges from these investigations is somewhat
murky. Durney, De Young \& Roxburgh (1993) argued that in the absence of a
tachocline of shear, the magnetic fields of fully convective stars should
be dominated by small-scale dynamo action, with typical spatial scales of
order the size of convective cells. Kuker \& Rudiger (1999) examined
dynamos in fully convective pre-main-sequence stars using mean field
theory; they assumed that such stars rotate approximately as rigid bodies,
and adopted a simple $\alpha$-quenching formula to account crudely for the
back-reaction of dynamo-generated fields upon the flows. They found that
$\alpha^2$ dynamo solutions could be excited for moderate rotation rates,
giving rise to steady, non-axisymmetric mean fields. Chabrier \& Kuker
(2006) performed analogous mean-field modeling for fully-convective
low-mass stars, typically also assuming no differential rotation, and found
large-scale non-axisymmetric fields were generated by an $\alpha^2$ dynamo;
these fields were steady and symmetric with respect to the equatorial
plane. Chabrier \& Kuker (2006) also constructed a model with internal
differential rotation (which they argued might apply to brown dwarfs with
conductive cores), and found that predominantly toroidal axisymmetric
fields were generated by the $\alpha^2-\Omega$ dynamo that resulted.
Finally, Dobler, Stix \& Brandenburg (2006; hereafter DSB06) conducted 3-D
hydrodynamic and MHD simulations of fully-convective spheres using a
Cartesian grid-based finite-difference code. They found that such stars
established ``anti-solar'' differential rotation, with the poles rotating
more rapidly than the equator. Dynamo action was realized, ultimately
yielding typical magnetic field strengths approximately in equipartition
with the flows near the surface. The resulting magnetic fields possessed
structure on a range of spatial scales, with a substantial large-scale
component.
The origins of the discrepancies between these theoretical predictions are
unclear. Part of the difficulty lies in the inherent limitations of
mean-field modeling, which separates the flows and fields into a
large-scale (mean) component and everything else, with the latter typically
parameterized in terms of the mean fields using a turbulence closure
model. Such modeling cannot provide detailed descriptions of the
spatial distribution of the magnetic fields; nor can it independently
constrain the field strengths that are achieved by dynamo action, since the
models usually adopt $\alpha$-quenching prescriptions that simply eliminate
the $\alpha$-effect as equipartition is reached. Further, the differential
rotation realized by the convection was essentially a free parameter within
the models quoted above. Nonetheless, the results of such modeling are
highly suggestive of the roles played by convection and rotation in
building magnetic fields on both large and small scales. Indeed, a variety
of related dynamo-theoretic work, in particular calculations of MHD spectra
within the eddy-damped quasi-normal Markovian approximation (Pouquet
et al. 1976) indicates that convection with helicity -- as imparted by
rotation -- may lead to cascades of magnetic energy from small toward large
scales, again hinting that shear need not be present in order to generate
large-scale magnetic fields. Meanwhile, the results of the numerical
simulations thus far cannot be taken as the last word on the subject
either: although they provide descriptions of the dynamics on many
different scales, they are still limited by numerical resolution to
parameter regimes far removed from those of stellar convection. Further,
it is difficult to gauge {\sl a priori} how the many different
simplifications adopted within DSB06 (or any other simulation) might impact
their results. In any event, the conflict between the results of all
theoretical models published so far and the observational constraints on
field geometry provided by Donati et al. (2006) lend further vibrancy to
the study of magnetism in fully convective stars.
Motivated by the wealth of observational and theoretical puzzles posed by
such stars, we turn in this paper to new 3-D MHD simulations of the
interiors of low-mass M-dwarfs. We aim here to provide additional
constraints on the nature of convection and differential rotation in these
stars, on the possible dynamo action achieved within their interiors, and
on the morphology of the magnetic fields generated by such dynamo action.
Our work is most analogous to that of DSB06, but we differ from them in
several ways -- both in the construction of our model and in the
results it produces. In \S 2 below, we describe our numerical model and
the principal simplifications adopted, highlighting some of the differences
between our simulations and those of DSB06. Section 3 contains a
description of the morphology of the convective flows, and an analysis of
the spatial variations present in those flows. In \S 4, we assess the
overall energetics of the dynamo action that is realized, and in \S 5 we
describe the morphology, strength, and temporal variability of the
resulting magnetism. The establishment of differential rotation in
hydrodynamic cases, and its quenching in MHD ones, is analyzed in \S 6. We
assess some aspects of the dynamo process itself in \S 7; we close in \S 8
with a comparison to prior modeling and observation, and a reflection on
what remains to be done.
\section{FORMULATING THE PROBLEM}
\subsection{Fully convective rotating sphere}
The simulations here are intended to be highly simplified descriptions of
the interiors of fully convective 0.3 solar mass M-dwarfs. We utilize the
Anelastic Spherical Harmonic (ASH) code, which solves within the anelastic
approximation the 3-D equations that govern fluid motion and magnetic field
evolution. Our computational domain is spherical, and extends from 0.08R
to 0.96R in radius, with R the overall stellar radius of $2.068 \times
10^{10}$ cm. We are forced to exclude the inner 8\% of the star from our
computations, both because the coordinate systems employed in ASH are
singular there, and because the small numerical mesh sizes very near the
center of the star would necessitate impractically small timesteps. This
excluded central region might in principle cause some spurious physical
behavior, for instance by projecting a Taylor column aligned with the
rotation axis (e.g., Pedlosky 1987) into the surrounding fluid, by giving
rise to a central viscous boundary layer, or simply by excluding motions
that would otherwise pass through the stellar center. In trial simulations
with smaller and larger excluded central regions (ranging from 0.04R to
0.12R), the properties of the mean flows were very similar to those
described here, giving us some confidence that the large-scale dynamics are
relatively insensitive to this inner boundary layer. Our computations also
do not extend all the way to the stellar surface, because the very low
densities in the outer few percent of the star favor the driving of fast,
small-scale motions that we cannot resolve.
The initial stratifications of mean density $\bar{\rho}$, energy generation
rate $\epsilon$, gravity $g$, radiative diffusivity $\kappa_{\rm rad}$, and
entropy gradient $dS/dr$ are adopted from a 1-D stellar model (I. Baraffe,
private communication; cf Chabrier et al. 2000). The initial
profiles of the remaining thermodynamic quantities are then given by
solving the system of equations described in \S2.2, as described more
thoroughly elsewhere (Miesch et al. 2000; BMT04).
The thermodynamic quantities are updated throughout the course of the
simulation, as the evolving convection modifies the spherically symmetric
mean state.
The main parameters of our hydrodynamic and magnetohydrodynamic (MHD)
simulations are described in Table 1. The three hydrodynamic cases $A$,
$B$, and $C$ sample flows of varying complexity, achieved by modifying the
effective eddy viscosities and diffusivities $\nu$ and $\kappa$, with case
$C$ the most turbulent (and highest-resolution) simulation. Two MHD
calculations Cm and Bm were begun by introducing small-amplitude seed
magnetic fields into the statistically mature progenitor cases C and B
respectively, and allowing the fields to evolve. A third MHD case Cm2 was
begun from a statistically mature instant in the evolution of case Cm. The
two cases Cm and Cm2 differ in the magnetic Prandtl number $Pm=\nu/\eta$
used, which in turn affects the magnetic Reynolds numbers $Rm \sim uL/\eta$
achieved by the evolved simulations. All the simulations presented here
rotate at the solar angular velocity of $\Omega=2.6 \times 10^{-6}$
s$^{-1}$. In this paper, we have chosen for clarity's sake to concentrate
our discussion on the two cases C and Cm. These represent our highest $Re
\sim uL/\nu$ and $Rm$ cases, respectively, so we believe they are likely to
be most indicative of the highly turbulent conditions achieved in real
stars. Where appropriate, we give some indications of the behavior of the
other cases.
\begin{deluxetable*}{ccccccc}
\tablecolumns{7}
\tablenum{1}
\tablecaption{Simulation Attributes}
\tablehead{
\colhead{Case} & \colhead{{\sl A}} & \colhead{{\sl B}} & \colhead{{\sl C}}
& \colhead{{\sl Bm}} & \colhead{{\sl Cm}} & \colhead{{\sl Cm2}}}
\startdata
& & Input parameters & & \\
\hline
$\nu$ (cm$^2$ s$^{-1}$) & $5.0 \times 10^{11}$ & $2.2 \times 10^{11}$ &
$1.0 \times 10^{11}$ & $2.2 \times 10^{11}$ & $1.0 \times 10^{11}$ & $1.0
\times 10^{11}$ \\
$\kappa$ (cm$^2$ s$^{-1}$) & $2.0 \times 10^{12}$ & $8.8 \times 10^{11}$
& $4.0 \times 10^{11}$ & $8.8 \times 10^{11}$ & $4.0 \times 10^{11}$ &
$4.0 \times 10^{11}$ \\
$T_a$ & 1.1 $\times 10^7$ & 5.9 $\times 10^7$ & 2.8 $\times 10^8$ & 5.9
$\times 10^7$ & 2.8 $\times 10^8$ & 2.8 $\times 10^8$ \\
$\eta$ (cm$^2$ s$^{-1}$) & -- & --
& -- & $2.75 \times 10^{10}$ & $1.25 \times 10^{10}$ &
$2.0 \times 10^{10}$ \\
$P_m$ & -- & -- & -- & 8 & 8 & 5 \\
\cutinhead{Measured quantities}
$R_a$ & 3.4 $\times 10^5$ & 1.4 $\times 10^6$ & 6.5 $\times 10^6$ & 1.7
$\times 10^6$ & 6.3 $\times 10^6$ & 6.0 $\times 10^6$ \\
$R_e$ & 65 & 120 & 270 & 110 & 210 & 230 \\
$R_m$ & -- & -- & -- & 880 & 1650 & 1160 \\
$R_o$ & 1.6 $\times 10^{-2}$ & 1.3 $\times 10^{-2}$ & 1.3 $\times 10^{-2}$ & 1.2
$\times 10^{-2}$ & 1.0 $\times 10^{-2}$ & 1.1 $\times 10^{-2}$ \\
$R_c$ & 0.38 & 0.34 & 0.33 & 0.37 & 0.33 & 0.32 \\
\enddata
\tablecomments{The Prandtl number $P_r=\nu/\kappa=0.25$ for all
simulations; the magnetic Prandtl number $P_m=\nu/\eta$ is indicated for
each case, along with the viscosity $\nu$, eddy thermal diffusivity
$\kappa$, and magnetic diffusivity $\eta$ (in cm$^{2}$s$^{-1}$). The
rotation rate $\Omega=2.6 \times 10^{-6}$ s$^{-1}$ for all cases.
Evaluated as volume averages are the Rayleigh number $R_a=(-\partial
\bar{\rho}/\partial S)\Delta S g L^3/\rho \nu \kappa$ (with $\Delta S$ the entropy
contrast across the interior), the Taylor number $T_a = 4 \Omega^2
L^4/\nu^2$, and the convective Rossby number $R_c=\sqrt{R_a/T_a P_r}$. The Reynolds number
$R_e=\tilde{v}' L/\nu$, the magnetic Reynolds number $R_m=\tilde{v}' L/\eta$, and the
Rossby number $R_o = \tilde{v}'/2 \Omega L$ are evaluated at $r=0.88R$ using the
rms velocity $\tilde{v}$ there. Values based on the maximum velocity would be
about a factor of four higher; likewise, values would be slightly higher if
based on $\tilde{v}$ closer to the surface, and lower if based on $\tilde{v}$ deeper
within the star. }
\end{deluxetable*}
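As a quick cross-check of the tabulated nondimensional numbers (an illustrative sketch using values transcribed from Table 1; not part of the ASH code): because $R_e$ and $R_m$ are both evaluated with the same velocity $\tilde{v}'$ and length scale $L$ at $r=0.88R$, their ratio should recover $P_m=\nu/\eta$.

```python
# Consistency check: Rm/Re = (v'L/eta)/(v'L/nu) = nu/eta = Pm.
# Values transcribed from Table 1 (cgs units); treat as illustrative inputs.
cases = {
    "Bm":  dict(nu=2.2e11, Pm=8, Re=110, Rm=880),
    "Cm":  dict(nu=1.0e11, Pm=8, Re=210, Rm=1650),
    "Cm2": dict(nu=1.0e11, Pm=5, Re=230, Rm=1160),
}
for name, c in cases.items():
    eta = c["nu"] / c["Pm"]          # magnetic diffusivity implied by Pm
    ratio = c["Rm"] / c["Re"]        # should be close to Pm
    print(f"{name}: eta = {eta:.2e} cm^2/s, Rm/Re = {ratio:.2f} vs Pm = {c['Pm']}")
```

For all three MHD cases the ratio lands within a few percent of the tabulated $P_m$, as it should.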
\subsection{Anelastic MHD Equations}
ASH solves the 3-D MHD anelastic equations of motion in a rotating
spherical geometry using a pseudospectral semi-implicit approach (e.g.,
Clune et al. 1999; Miesch et al. 2000; BMT04). The
equations are fully nonlinear in the velocities and magnetic fields, but
linearized in thermodynamic variables with respect to a spherically
symmetric mean state that is also allowed to evolve. This mean state is
taken to have density $\bar{\rho}$, pressure $\bar{P}$, temperature
$\bar{T}$, specific entropy $\bar{S}$; perturbations are denoted as $\rho$,
$P$, $T$, and $S$. The equations solved are
\begin{eqnarray}
\mbox{\boldmath $\nabla$}\cdot(\bar{\rho} {\bf v}) &=& 0, \\
\mbox{\boldmath $\nabla$}\cdot {\bf B} &=& 0, \\
\bar{\rho} \left(\frac{\partial {\bf v}}{\partial t}+({\bf v}\cdot\mbox{\boldmath $\nabla$}){\bf v}+2{\bf \Omega_o}\times{\bf v}\right)
&=& -\mbox{\boldmath $\nabla$} P + \rho {\bf g} \nonumber \\
+ \frac{1}{4\pi} (\mbox{\boldmath $\nabla$}\times{\bf B})\times{\bf
B}
&-& \mbox{\boldmath $\nabla$}\cdot\mbox{\boldmath $\cal D$}-[\mbox{\boldmath $\nabla$}\bar{P}-\bar{\rho}{\bf g}],
\\
\bar{\rho} \bar{T} \frac{\partial S}{\partial t}+\bar{\rho} \bar{T}{\bf v}\cdot\mbox{\boldmath $\nabla$} (\bar{S}+S)&=&
\mbox{\boldmath $\nabla$}\cdot[\kappa_r \bar{\rho} c_p \mbox{\boldmath $\nabla$} (\bar{T}+T) \nonumber \\
+\kappa \bar{\rho} \bar{T} \mbox{\boldmath $\nabla$} (\bar{S}+S)] &+&\frac{4\pi\eta}{c^2}{\bf j}^2\nonumber \\
+2\bar{\rho}\nu[e_{ij}e_{ij} &-& 1/3(\mbox{\boldmath $\nabla$}\cdot{\bf v})^2]
+ \bar{\rho} {\epsilon},\\
\frac{\partial {\bf B}}{\partial t}=\mbox{\boldmath $\nabla$}\times({\bf v}\times{\bf B})&-&\mbox{\boldmath $\nabla$}\times(\eta\mbox{\boldmath $\nabla$}\times{\bf B}),
\end{eqnarray}
where ${\bf g}$ is acceleration due to gravity, ${\bf v}=(v_r,v_{\theta},v_{\phi})$ is the velocity in spherical coordinates in
the frame rotating at constant angular velocity ${\bf \Omega_o}$, ${\bf B}=(B_r,B_{\theta},B_{\phi})$ is the magnetic field,
${\bf j}=c/4\pi\, (\mbox{\boldmath $\nabla$}\times{\bf B})$ is the current density, $\kappa_r$ is the radiative diffusivity,
$c_p$ is the specific heat at constant pressure, $\nu$ is the effective
eddy viscosity, $\kappa$ is the effective thermal diffusivity, $\eta$ is the
effective magnetic diffusivity, and ${\bf \cal D}$ is the viscous stress
tensor, defined by
\begin{eqnarray}
{\cal D}_{ij}=-2\bar{\rho}\nu[e_{ij}-1/3(\mbox{\boldmath $\nabla$}\cdot{\bf v})\delta_{ij}],
\end{eqnarray}
with $e_{ij}$ the strain rate tensor. The volume heating term $\bar{\rho} \epsilon$ is included to
represent energy generation by nuclear burning of the CNO cycle within the
convective core, and is assumed for simplicity to scale with temperature
alone. Our adoption of a sub-grid-scale
(SGS) heat transport term proportional to the entropy gradient in equation
(4) is most justifiable in convection zones, where the stratification will
tend toward adiabaticity; in stable zones, this term should be modified in
order to avoid spuriously large SGS heat fluxes directed radially inwards
(as discussed in Miesch 1998; Miesch et al. 2000). In ASH, we choose to
deal with this difficulty by specifying the spherically symmetric
($\ell=0$) eddy thermal diffusivity $\kappa_0$ separately from the $\ell
\ne 0$ component $\kappa$; the latter is here taken to be constant in
radius, whereas the former increases in a narrow layer near the surface,
where the SGS transport must account for the entire outward energy flux
(see \S 3.2). The eddy diffusivity $\kappa$ is in effect purely
dissipative, and acts to smooth out entropy variations, whereas $\kappa_0$
is essentially a cooling term near the surface. In simulations with stable
layers, $\kappa_0$ can be modified to account for the presence of radiative
regions (e.g., Miesch et al. 2000); here, however, this subgrid
transport is small throughout the interior because of the near-adiabatic
stratification. To close the set of equations, we take the thermodynamic
fluctuations to satisfy the linearized relations
\begin{equation}\label{eos}
\frac{\rho}{\bar{\rho}}=\frac{P}{\bar{P}}-\frac{T}{\bar{T}}=\frac{P}{\gamma\bar{P}}
-\frac{S}{c_p},
\end{equation}
assuming the ideal gas law
\begin{equation}\label{eqn: gp}
\bar{P}={\cal R} \bar{\rho} \bar{T} ,
\end{equation}
\noindent where ${\cal R}$ is the gas constant. The effects of
compressibility are included via the anelastic approximation, which filters out
sound waves that would otherwise limit the time steps allowed by the
simulation to the sound crossing time across the smallest computational
zone. In the low Mach-number flows typical of stellar interior convection,
adopting the anelastic approximation allows us to take much larger
timesteps that satisfy the Courant-Friedrichs-Lewy condition imposed by the
convective flows, rather than the much shorter one required to follow
acoustic waves. In the MHD simulations, the anelastic approximation
filters out fast magneto-acoustic modes but retains the Alfv\'en and slow
magneto-acoustic modes. We use a toroidal--poloidal decomposition for the
mass flux and magnetic field in order to ensure that both remain
divergence-free throughout the simulation, with
\begin{eqnarray}
{\bar{\rho}\bf v}=\mbox{\boldmath $\nabla$}\times\mbox{\boldmath $\nabla$}\times (W {\bf e}_r) + \mbox{\boldmath $\nabla$}\times (Z {\bf
e}_r) , \\
{\bf B}=\mbox{\boldmath $\nabla$}\times\mbox{\boldmath $\nabla$}\times (C {\bf e}_r) + \mbox{\boldmath $\nabla$}\times (A {\bf e}_r) ,
\end{eqnarray}
with ${\bf e}_r$ the radial unit vector, and involving the streamfunctions $W$,
$Z$ and magnetic potentials $C$, $A$.
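The divergence-free property is automatic here because the divergence of a curl vanishes identically. A short numerical illustration of that identity (purely pedagogical, using Cartesian discrete derivatives on a random field, not anything from ASH):

```python
import numpy as np

# div(curl F) = 0 identically; discrete derivative operators along
# different axes commute, so the cancellation holds to roundoff.
rng = np.random.default_rng(1)
F = rng.standard_normal((3, 24, 24, 24))   # arbitrary vector field F(x, y, z)

def d(f, axis):
    return np.gradient(f, axis=axis)       # central differences

curl = np.stack([d(F[2], 1) - d(F[1], 2),
                 d(F[0], 2) - d(F[2], 0),
                 d(F[1], 0) - d(F[0], 1)])
div_curl = d(curl[0], 0) + d(curl[1], 1) + d(curl[2], 2)
print(np.abs(div_curl).max() < 1e-12)      # -> True
```

The same identity, applied in spherical coordinates to the poloidal and toroidal potentials, is what guarantees $\mbox{\boldmath $\nabla$}\cdot(\bar{\rho}{\bf v})=\mbox{\boldmath $\nabla$}\cdot{\bf B}=0$ throughout the evolution.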
This system of equations requires 12 boundary conditions and suitable
initial conditions. Because one of our aims is to assess the angular
momentum redistribution in our simulations, we have opted for torque-free
velocity boundary conditions at the top and bottom of the deep spherical
domain. We have assumed:
($a$) impenetrable top and bottom surfaces, $v_r=0|_{r=r_{bot},r_{top}}$;
($b$) stress-free top and bottom boundaries, $\frac{\partial}{\partial
r}\left(\frac{v_{\theta}}{r}\right)=\frac{\partial}{\partial
r}\left(\frac{v_{\phi}}{r}\right)=0|_{r=r_{bot},r_{top}}$;
($c$) a constant entropy gradient at top and bottom, $\frac{\partial \bar{S}}{\partial
r}=$ constant$|_{r=r_{bot},r_{top}}$;
and ($d$) a match to an external potential field at the top, ${\bf B} = \nabla
{\Phi} \rightarrow \Delta \Phi =0|_{r=r_{top}}$, with a perfect
conductor (purely tangential field) at the bottom.
In the analysis that follows, we will often form various spatial and
temporal averages of the evolving convective flows and magnetic fields.
For clarity, we note that we use the symbol $\hat{a}$ to indicate temporal
and longitudinal averaging of a variable $a$, and the symbol $<a>$ to
denote longitudinal averaging to obtain the axisymmetric component of the
variable. Such averaging allows us to separate the fluctuating (denoted by
a prime as $a'$) from the axisymmetric (mean) parts of the variable. The
symbol $\tilde{a}$ designates the rms average of $a$, carried out on a
spherical surface for many realizations in time. Likewise, the combined
symbols $\tilde{a}'$ represent rms averaging of the variable from which the
axisymmetric portion has been subtracted.
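These averaging operators can be sketched on a synthetic field as follows (an illustrative example only; the omitted $\sin\theta$ area weighting of a true spherical-surface average, and the axis conventions, are simplifications rather than the ASH implementation):

```python
import numpy as np

# Synthetic field a(t, theta, phi): a mean of 3 plus random fluctuations.
rng = np.random.default_rng(0)
a = 3.0 + rng.standard_normal((10, 16, 32))     # (time, latitude, longitude)

a_axi   = a.mean(axis=2, keepdims=True)         # <a>: longitudinal (axisymmetric) mean
a_hat   = a.mean(axis=(0, 2))                   # \hat{a}: time and longitude average
a_prime = a - a_axi                             # a': fluctuating part
a_rms   = np.sqrt((a ** 2).mean())              # \tilde{a}: rms over surface and time
a_prime_rms = np.sqrt((a_prime ** 2).mean())    # \tilde{a}': rms of fluctuations

print(a_hat.shape, float(a_rms) > float(a_prime_rms))   # -> (16,) True
```

By construction the longitudinal mean of $a'$ vanishes, and $\tilde{a}$ exceeds $\tilde{a}'$ whenever an axisymmetric component is present.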
\subsection{Numerical Approach}
No numerical simulation can model with perfect fidelity the intensely
turbulent convection occurring in stars. The range of spatial scales
present in such convection is too vast; some simplification is
unavoidable. We choose in our global modeling to resolve the largest scales
of motion, which we believe are likely to play dominant roles in
redistributing energy and angular momentum and in building magnetic fields.
Our simulations are therefore classified as Large Eddy Simulations (LES),
with the effects of unresolved small scales of turbulence incorporated
using a sub-grid-scale (SGS) treatment. Here these unresolved scales are
treated simply as enhanced viscosities and thermal and magnetic
diffusivities ($\nu$, $\kappa$, and $\eta$ respectively), which are thus
effective eddy viscosities and diffusivities. For simplicity, we have
taken these to be constant in radius. This implies that the viscous
damping at depth in our simulations may be too severe relative to that near
the surface. More sophisticated SGS treatments, in which $\nu$ and
$\kappa$ are proportional to properties of the resolved flow field (e.g.,
the velocity or strain rate) would certainly be desirable, and we hope to
explore such strategies in future work. Our simulations are characterized
by nondimensional numbers relating these diffusivities to one another and
to the various terms in the momentum equation (3); some of these
dimensionless numbers necessarily take on very different values in our
modeling than in actual stellar interiors. We have adopted a magnetic
Prandtl number $Pm = \nu/\eta > 1$ in our MHD cases, even though a $Pm$
based on microscopic viscosities and diffusivities would be much less than
unity. The large $Pm$ here reflects unresolved turbulent mixing processes,
and allows us to achieve moderately high values of the magnetic Reynolds
number $Rm = uL/\eta$ with tractable numerical resolution. At small $Pm$,
the critical $Rm$ needed for dynamo action increases considerably (Boldyrev
\& Cattaneo 2004; Schekochihin et al. 2005), rendering simulations more
computationally demanding. The strength and morphology of the magnetic
fields realized in our simulations are likely sensitive at some level to
the $Pm$ and $Rm$ chosen. We are somewhat encouraged by prior simulations
of convection in the solar interior in comparable parameter regimes (Brun
\& Toomre 2002; Brun, Browning \& Toomre 2005; Browning et al. 2006), for
these have been relatively successful in matching the detailed
observational constraints on, e.g., solar differential rotation provided by
helioseismology.
The numerical strategies employed within ASH are described in detail
elsewhere (Clune et al. 1999; BMT04); we here summarize only a
few key features. The dynamical variables within ASH are expanded in terms
of spherical harmonic basis functions $Y_l^m(\theta, \phi)$ in the
horizontal directions and Chebyshev polynomials $T_n(r)$ in the radial.
The spatial resolution is thus kept uniform everywhere on a spherical
surface by employing a complete set of spherical harmonics of degree
$\ell$, retaining all azimuthal orders $m$ in what is referred to as a
triangular truncation. We limit our expansion to a degree $\ell=\ell_{\rm
max}$, related to the number of latitudinal mesh points $N_{\theta}$ by
$\ell_{\rm max} = (2 N_{\theta} - 1)/3$, take $N_{\phi} = 2 N_{\theta}$
longitudinal mesh points, and use $N_r$ colocation points for the radial
projection onto Chebyshev polynomials. Our highest-resolution simulation
here (Cm) has $\ell_{\rm max} =340$ (implying $N_{\theta}=512$ and
$N_{\phi}=1024$) and $N_r=192$. We employ an implicit, second-order
Crank-Nicolson scheme for calculating the time evolution of the linear
terms in equations (1-5), whereas an explicit second-order Adams-Bashforth
scheme is used for the advective, Lorentz, and Coriolis terms. ASH runs
efficiently on a variety of massively parallel supercomputers using the
message passing interface (MPI). The code's performance scales reasonably
well up to about 1000 processors. The simulations described here required
roughly half a million hours of computing time.
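The truncation rule above can be made concrete with a small sketch (a hypothetical helper, not ASH code) that inverts $\ell_{\rm max}=(2N_{\theta}-1)/3$ to give the smallest latitudinal grid that de-aliases a chosen $\ell_{\rm max}$:

```python
import math

def min_ntheta(l_max):
    """Smallest N_theta satisfying l_max <= (2*N_theta - 1)/3,
    the de-aliasing rule for quadratic nonlinearities."""
    return math.ceil((3 * l_max + 1) / 2)

l_max = 340
n_theta = 512                  # grid used in case Cm (a convenient FFT size)
n_phi = 2 * n_theta
assert n_theta >= min_ntheta(l_max)      # 512 >= 511, so l_max = 340 is de-aliased
print(min_ntheta(l_max), n_phi)          # -> 511 1024
```

The grid actually used ($N_{\theta}=512$) slightly exceeds the minimum, as is typical when transform sizes are chosen for FFT efficiency.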
Our computational approach differs from that of DSB06 in a few important
ways. Because theirs is the only prior 3-D MHD simulation of fully
convective stars, we here comment briefly on these differences. In order
to keep thermal relaxation timescales small, the simulations in DSB06
adopted a stellar luminosity roughly $10^{12}$ times higher than
appropriate for an actual M dwarf. This choice is related to the fact
that DSB06 adopt the same thermal diffusivities for the mean temperature
gradient and for the small-scale turbulent temperature fluctuations; in our
modeling, the mean temperature gradient is acted on by the radiative
diffusivity $\kappa_r$ taken from a 1--D stellar model, whereas $\kappa$
for the turbulent temperature field is (as in DSB06) a sub-grid-scale eddy
diffusivity. Our strategy allows us to assess with reasonable fidelity the
radiative flux within the interior, since $\kappa_r$ in our models is
ultimately set by the radiative opacities of the 1--D stellar model. The
DSB06 rescaling of the luminosity implies a commensurate artificial
increase in the typical convective velocities. In order to keep the ratio
between the convective turnover times and the rotation period approximately
correct, they also considered very rapid rotation rates. In terms of the
nondimensional numbers $Re$, $Rm$, and $Ro$, our simulations are roughly
comparable to theirs. However, it is difficult to gauge with certainty the
impact that their rescalings of $L$, $v_c$, and $\Omega$ may have on the
resulting flows and magnetic fields. We have chosen not to perform such a
rescaling, which means that very long-term adjustments of the mean
temperature gradient would not be captured in our modeling. Both
strategies for dealing with the thermal diffusivity have been widely
employed; neither is perfect. Our simulations also differ from those of
DSB06 in a few smaller ways. The overall density contrast between the
inner and outer boundaries in our models is about 100, consistent with the
contrast between 0.1 and 0.96R in the 1--D stellar model we used for our
initial conditions. In DSB06, the density varied by a factor of about 5
from center to surface; the larger density contrasts in our modeling have
a substantial impact on the morphology of the convective flows. The boundary
conditions adopted in DSB06 also differ from ours; theirs is closer to a
no-slip boundary condition than to the stress-free boundaries used here.
This difference may impact our results on the differential rotation
realized in hydrodynamic simulations (\S 6). Because they employed a
Cartesian finite-difference code, DSB06 were able to model the central few
percent of the star, omitted in our modeling. These factors may all
contribute to differences between the results here and in DSB06.
Nonetheless, there is some common ground between our findings and theirs;
we comment more on both the similarities and differences of the two models
in \S 8 below.
\begin{deluxetable*}{lllllll}
\tablecolumns{7}
\tablenum{2}
\tablecaption{Properties of Flows and Fields}
\tablehead{
\colhead{Case} & \colhead{A} & \colhead{B} & \colhead{C} &
\colhead{Bm} & \colhead{Cm} & \colhead{Cm2}
}
\startdata
KE & 6.4$\times 10^6$ & 1.0$\times 10^7$ & 2.6$\times 10^7$ &
5.3$\times 10^6$ & 3.0 $\times 10^6$ & 3.8 $\times 10^6$ \\
DRKE & 2.8$\times 10^6$ & 7.7$\times 10^6$ & 2.2$\times 10^7$ &
2.8$\times 10^6$ & 4.8 $\times 10^5$ & 1.1 $\times 10^6$ \\
CKE & 3.7$\times 10^6$ & 2.6$\times 10^6$ & 3.5$\times 10^6$ &
2.5$\times 10^6$ & 2.5 $\times 10^6$ & 2.7 $\times 10^6$ \\
ME/KE & -- & -- & -- & 50\% & 120\% & 90\% \\
$\tilde{v}'$(0.94R) & 24 & 22 & 23 & 19 & 19 & 19\\
$\tilde{v}'$(0.50R) & 4 & 4 & 4 & 4 & 4 & 4 \\
$\tilde{B}$(0.94R) & -- & -- & -- & 2000 & 6200 & 7200\\
$\tilde{B}$(0.50R) & -- & -- & -- & 6000 & 13100 & 10400\\
$\Delta \Omega/\Omega$ & 8\% & 14\% & 22\% & 8\% & 2\% & 4\%\\
\enddata
\tablecomments{The kinetic energy density KE ($1/2 ~ \bar{\rho} v^2$), averaged over volume
and time, is listed along with energy density of
the convection (CKE) and the differential rotation (DRKE),
together with the average magnetic energy density ME
($B^2/8\pi$) (expressed, where appropriate, as a percentage of KE). Also
indicated at two depths are the fluctuating rms velocity $\tilde{v}'$ (m s$^{-1}$) and the
rms magnetic field strength (G). The angular velocity contrast from equator to
60$\degr$ is indicated as a percentage of the overall frame rotation rate. }
\end{deluxetable*}
\section{CONVECTIVE FLOWS AND ENERGY TRANSPORT}
\subsection{Morphology of the Flows}
The convective flows realized in our simulations possess structure on many
spatial scales. An instantaneous view of the flows in the hydrodynamic
case C is provided by Figure 1, which shows the radial velocity $v_r$ near
both the top of the computational domain (Fig. 1$a$, at $r=0.88R$) and
deeper within the interior (Fig. 1$b$, at $r=0.24R$). At large radii, a
marked asymmetry between upflows and downflows is apparent (Fig. 1$a$),
with the downflows compact and strong while upflows are broader and weaker.
This asymmetry is driven mainly by the strong density stratification at
these radii: downflows tend to contract, whereas upflows expand. Some of
these downflow lanes persist as coherent structures for extended intervals,
while many complex and intermittent features also appear on smaller scales.
Such coherent downflow plumes have previously been noted as a seemingly
generic feature of turbulent compressible convection (e.g., Brummell et
al. 2002; Brun \& Toomre 2002).
\begin{figure}[hpt]
\center
\epsscale{1.0}
\includegraphics[width=3.4in, trim= 72 0 72 0]{f1.ps}
\caption{\label{vr2depths}Radial velocity $v_r$ on spherical surfaces at
two depths for a single instant in the evolution of case Cm. Upflows
are rendered in red tones, and downflows in blue; the maxima and minima
of the colormaps are indicated. The flows are stronger and on smaller
spatial scales near the surface than they are at depth.}
\end{figure}
In Figure 1$a$, these downflows generally appear aligned with the rotation
axis at low latitudes; at high latitudes the distribution of upflows and
downflows is more isotropic. Similar distinctions between flows at high
latitudes versus those near the equator have often been realized in
simulations of the solar convective envelope (e.g., Miesch et al. 2000),
where the difference was identified with the ``tangent cylinder'' formed by
projecting the lower boundary at the equator onto the upper boundary. In
those simulations, the inner boundary prohibited connectivity between the
northern and southern hemispheres for flows far from the equator, whereas
motions at low latitudes could readily couple both hemispheres. Here, the
inner boundary is sufficiently deep to allow connectivity between
high-latitude flows as well, but motions that span both hemispheres are
still realized only near the equator. This may arise because the effects
of the Coriolis forces, which ultimately drive turbulent alignment with the
rotation axis, still vary with distance from the rotation axis (see, e.g.,
Busse 1970); the low-latitude flows may also reflect a lingering preference
for the most unstable modes in a rotating spherical shell, which at these
rotation rates tend to be symmetric fluid rolls perpendicular to the
equator (e.g., Gilman 1976).
Deeper within the interior (Fig. 1$b$), the convection is characterized by
broader, weaker flows that span large fractions of a hemisphere. Upflows
and downflows are fairly symmetric in appearance there, likely because the
density stratification at depth is weaker. The pressure scale height
$\lambda_p = P/(g \rho)$ varies from about $10^9$ cm at $r=0.88R$ to $4
\times 10^9$ cm at $r=0.24R$, which corresponds roughly to the physical
size of the convective cells at the two radii; note that the spatial size
of the convective patterns, not just their angular size, varies with
radius. The convective patterns at $r=0.24R$ are reminiscent of sectoral
spherical harmonics $Y_{\ell}^{m}$ with $\ell = m = 3$; at other instants in
the evolution of case C, this identification is stronger. The motions deep
in the interior are linked to those farther out: small downflow plumes at
large radii merge as they descend, and coalesce to form the broad downflows
seen in Figure 1$b$. This coupling between depths is weakly discernible in Figure
1: the most striking downflow plumes at $r=0.88R$ generally occur in
regions where downflow persists at $r=0.24R$.
The amplitude of the convective motions also changes with depth. Typical
rms velocities $\tilde{v}$ at $r=0.88R$ are about 12 m s$^{-1}$, whereas at
$r=0.24R$, $\tilde{v} \approx 2$ m s$^{-1}$. This variation, together with
the smaller typical eddy sizes $l_{\rm eddy}$ near the surface, means that
the local convective overturning time $\tau_c \sim l_{\rm eddy}/\tilde{v}$
varies by a factor of about 20 across the domain. For small-scale local
dynamo action, the characteristic timescale for field amplification is
roughly the convective turnover time (e.g., Childress \& Gilbert 1995);
thus we might expect that fields will be built more rapidly in the outer
layers of the star than at depth. We will see in \S 5 that this is indeed
the case.
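The factor-of-20 estimate follows directly from the quoted scales; a minimal sketch (illustrative numbers from the text, taking $l_{\rm eddy}$ of order the local scale height):

```python
# Order-of-magnitude overturning times tau_c ~ l_eddy / v_rms at the two
# depths quoted in the text (cgs units; day = 86400 s).
day = 86400.0
l_top,  v_top  = 1.0e9, 1200.0    # r = 0.88R: l ~ 1e9 cm, v ~ 12 m/s
l_deep, v_deep = 4.0e9, 200.0     # r = 0.24R: l ~ 4e9 cm, v ~ 2 m/s

tau_top  = l_top  / v_top  / day      # ~10 days
tau_deep = l_deep / v_deep / day      # ~230 days
print(f"tau_c: {tau_top:.0f} d (surface) vs {tau_deep:.0f} d (depth), "
      f"ratio ~{tau_deep/tau_top:.0f}")
```

The resulting ratio of roughly 20--25 is consistent with the variation quoted above.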
Thus, although the interior is unstably stratified at all depths, there are
still two conceptually distinct regions: one near the surface in which
convection is vigorous, possesses a variety of small-scale structure, and
might quickly amplify any seed magnetic fields, and another at depth where
the flows are more quiescent, with large-scale overturning motions that may
amplify fields somewhat more slowly. In the following subsection, we
examine why the vigor of convection may vary with depth.
\subsection{Spatial Variation of Energy Transport}
Convection in stars is driven ultimately by the need to transport energy
outwards. That energy arises primarily from nuclear-burning reactions
within the central regions of the star, so the total luminosity $L(r)$ that
must be carried outwards is an increasing function of radius, out to the
point where nuclear burning stops and $L(r)$ is equal to the surface
luminosity $L_{\rm *}$. In Figure 2, we assess for simulation C the radial
transport of energy by different physical processes, defined as
\begin{equation}
F_c+F_k+F_{\rm r}+F_u+F_v=\frac{L(r)}{4\pi r^2},
\end{equation}
with
\begin{eqnarray}
F_c&=&\bar{\rho}\, c_p\, \overline{v_r T'} \,,\\
F_k&=&\frac{1}{2}\, \bar{\rho}\, \overline{v^2 v_r} \,,\\
F_r&=& -\kappa_{r}\, \bar{\rho}\, c_p\, \frac{d\bar{T}}{d r} \,, \\
F_u&=& -\kappa\, \bar{\rho}\, \bar{T}\, \frac{d\bar{S}}{d r} \,, \\
F_v&=& -\overline{{\bf v}\cdot {\bf \cal D}} \,
\end{eqnarray}
where the overbar denotes an average over spherical surfaces and in time,
$F_c$ is the enthalpy flux due to resolved convective flows, $F_k$ the
kinetic energy flux, $F_{\rm r}$ the radiative flux, $F_u$ the unresolved
eddy flux, and $F_v$ the viscous flux. The unresolved eddy flux $F_u$ is
the enthalpy flux from subgrid-scale motions that we cannot resolve, which
in ASH takes the form of a thermal diffusion operating on the mean entropy
gradient. In MHD simulations (Cm, Bm, Cm2), the Poynting flux $F_m =
\frac{c}{4\pi} \overline{E_{\theta}B_{\phi}-E_{\phi}B_{\theta}}$ also
contributes, but is small. The viscous flux and the kinetic energy flux
are generally small compared to $F_c(r)$ and $F_r(r)$; the unresolved flux
becomes large near the surface, where it carries all the energy because the
radial velocity is forced to vanish at the upper boundary.
\begin{figure}[hpt]
\center
\epsscale{1.0}
\includegraphics[width=3.4in, trim= 36 0 36 0]{f2.ps}
\caption{\label{fluxbal}Time-averaged radial transport of energy in case
C. Shown are the convective enthalpy flux $F_c$, radiative flux $F_r$,
viscous flux $F_v$, kinetic energy flux $F_k$, the unresolved flux $F_u$,
and their sum the total flux $F_t$; all have been expressed as
luminosities. Although the stratification is convectively unstable
everywhere, the convective flux carries most of the energy only at radii
larger than about 0.45R. }
\end{figure}
Figure 2 shows that the radiative flux carries most of the energy at small
radii, with $L_r \approx 0.70 L(r) \approx 0.10 L_{\rm *}$ at $r=0.15R$.
The enthalpy flux is smaller there, with $L_c \approx 0.30 L(r) \approx 0.04
L_{\rm *}$. Moving to larger radii, however, the total luminosity $L(r)$
rises up to $L_{\rm *}$ in accord with the continuing nuclear energy
generation, while at the same time the radiative luminosity $L_r(r)$
decreases. Thus the energy that must be transported by convection goes up
significantly, rising to a maximum of $L_c \approx 1.1 L_{\rm *}$ at
$r=0.80R$; the excess over the stellar luminosity is mostly compensated by
an inward-directed viscous flux. Although convection carries as little as
30\% of the local luminosity near the stellar center, the overall entropy
stratification is still nearly adiabatic. The radiative flux is fixed by
the radiative opacity, here input from the 1-D stellar model, and by the
mean temperature gradient; likewise, the variation in the total luminosity
with radius is set by the nuclear energy generation rate $\epsilon (r)$,
also input from the 1-D model. To the extent that these input properties
are accurate, and assuming there are no drastic long-term changes in the
prevailing temperature gradient (which we could not capture in our limited
simulation time), we believe that the significant variation of $L_c$ with
radius is likely to be a robust feature. However, lower-mass stars
than those considered here might have smaller radiative fluxes in the deep
interior, owing to their lower central temperatures and hence higher
opacity from metals, and so may exhibit less radial variation of $L_c$.
This variation in the energy that must be transported by convection is
linked to the radial change in convective velocity noted in \S 3.1. On
dimensional grounds, the typical convective velocity is given roughly by
$v_c \propto [L_c (R/M)]^{1/3}$, so the factor of 25 change in $L_c$ from
$r=0.15R$ to $r=0.88R$ would correspond to a factor of about 3 change in
typical convective velocity. This assumes that the correlations between
temperature fluctuations and the radial velocity field do not change
appreciably with depth; in practice, the energy transport is somewhat less
efficient amidst the highly turbulent flows near the surface, with many
regions where temperature and radial velocity are not as well-correlated as
they are at depth. This effect, plus the strong variation of density with
radius, leads to even larger radial variations of $v_c$ than this simple
scaling argument would suggest.
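The cube-root scaling quoted above can be checked in one line (illustrative arithmetic only):

```python
# Mixing-length-style estimate: v_c ~ [L_c (R/M)]^(1/3), so at fixed R/M a
# factor-of-25 change in L_c implies the velocity ratio below.
Lc_ratio = 25.0                        # L_c(0.80R) / L_c(0.15R), from the text
v_ratio = Lc_ratio ** (1.0 / 3.0)
print(f"expected velocity ratio ~ {v_ratio:.2f}")   # ~2.9, about a factor of 3
```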
\section{DYNAMO ACTION REALIZED}
Magnetic dynamo action is achieved by the flows. Small initial seed fields
are amplified by several orders of magnitude until they reach a
statistically equilibrated state in which their growth is balanced by Ohmic
decay. The growth and saturation of the magnetic energy density in case Cm
is displayed in Figure 3$a$, while a phase of evolution after saturation is
examined in Figure 3$b$. Also shown there are the kinetic energy densities
due to non-axisymmetric motions, which we term the convective kinetic
energy density (CKE) and the total kinetic energy density KE (which is the
sum of CKE, the energy in differential rotation DRKE, and the energy in
meridional circulations MCKE). All are shown relative to the frame
rotating at $\Omega = 2.6 \times 10^{-6}$ s$^{-1}$. In the evolved
simulation, MCKE is approximately 300 times smaller than CKE, and DRKE is
a factor of about five smaller than CKE, so we have omitted them from
Figure 3 for clarity.
\begin{figure}[hpt]
\center
\epsscale{0.8}
\includegraphics[width=3.4in, trim= 10 0 10 0]{f3.eps}
\caption{\label{timetrace}Temporal evolution of the volume-averaged
magnetic and kinetic energy densities in case Cm. ($a$) The magnetic
energy density (ME) grows by many orders of magnitude from its initial
seed value, and ultimately equilibrates when comparable to the kinetic
energy density (KE) relative to the rotating frame. ($b$) Detailed
view of the evolution of KE and ME during an interval after
equilibration was reached; also shown is the energy density in the
convection (CKE). On average, the magnetic energy is about 120\% of KE
and 140\% of CKE.}
\end{figure}
The magnetic energy in the simulations grows exponentially until it is
approximately in equipartition with the flows. Over the last 200 days of
case Cm, a period during which no sustained growth or decay of the
various energy densities was evident, ME was approximately 120\%
of KE and about 140\% of CKE. The initial seed value of ME was about
$10^{-2}$ ergs cm$^{-3}$, and the phase of exponential growth lasted about
2500 days, implying a characteristic e-folding timescale for magnetic
energy growth of about 150 days. This may be compared to the typical eddy
convective turnover times in the simulation, which vary from about 20 days
near the surface to roughly 450 days at depth.
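The quoted e-folding time follows from the growth history; a rough estimate using the numbers above (the modest difference from the quoted 150 days reflects where one marks the end of the exponential phase):

```python
import math

# Back-of-envelope e-folding time for the kinematic dynamo growth phase.
ME_seed = 1.0e-2      # ergs cm^-3, initial seed value (from the text)
ME_sat  = 3.0e6       # ergs cm^-3, of order KE for case Cm (Table 2)
t_growth = 2500.0     # days of roughly exponential growth

n_efolds = math.log(ME_sat / ME_seed)   # ~19.5 e-foldings
tau_e = t_growth / n_efolds             # ~130 days
print(f"~{n_efolds:.0f} e-foldings, tau_e ~ {tau_e:.0f} days")
```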
As the magnetic fields grow, they react back upon the flows that generated
them through the ${\bf j} \mbox{\boldmath $\times$} {\bf B}$ force term in equation (3). The
net effect is to begin to reduce KE once ME reaches a threshold value of
about 5\% of KE; this is visible in Figure 3 starting at about 3000 days.
The reduction in KE is associated mainly with a large decline in DRKE,
whereas CKE appears largely unaffected by the growing fields. In the
kinematic phase, DRKE is approximately 6 times CKE; after saturation of the
dynamo, DRKE/CKE is only about 0.2-0.4. Although the energy densities
exhibit no systematic variation after the initial phase of dynamo growth,
they still show substantial short-term stochastic fluctuations. During the
interval sampled by Figure 3, CKE varies by factors of about 3, and with it
the total KE; the magnetic energy density varies by similar amounts. Thus
although ME $\approx$ 1.2 KE on average, it can rise as high as twice KE
for short intervals.
The other MHD simulations behave in a similar fashion. In case Cm2, which
has a lower $Pm$ and hence lower $Rm$ (implying a less supercritical
dynamo), ME equilibrates at about 90\% of KE (120\% of CKE). The weaker
magnetic fields in that case lead to a slightly smaller reduction in DRKE
than is realized in case Cm. Case Bm, which is both more laminar (lower
$Re$) and less supercritical (lower $Rm$) than the other two simulations,
has still lower magnetic energy densities, with ME $\approx$ 50\% of KE.
These lower magnetic energy densities imply even less quenching of DRKE,
which varies between about 0.8-1.1 times CKE.
It is instructive to compare these energy densities to those realized in
other convective dynamo simulations. In numerical models of the solar
convective envelope (with $Pm=5$ and $R_m \approx 490$), Brun et al. (2004)
found that ME equilibrated at about 7\% of KE; Browning et al. (2006) found
comparable values within the convective envelope in simulations of the
convection zone and a forced tachocline, there adopting $Pm=8$. In modeling
fully convective stars with the PENCIL code, DSB06 found ME comparable to
KE in their most rapidly rotating runs. Models of core convection in
A-type stars (Brun, Browning \& Toomre 2005, hereafter BBT05) yielded ME/KE typically
between 0.28 and 0.90, depending upon rotation rate and other simulation
parameters. In those models, DRKE was strongly suppressed whenever ME was
greater than about 30\% of KE, leading to cyclical growth and decay of ME
and DRKE: large differential rotation led to growing ME, but the resulting
strong fields tended to suppress DRKE, which in turn resulted in a decline
in ME. No cyclical behavior of this sort is observed in case Cm; instead,
ME is always much greater than DRKE. The more laminar case Bm, with a
lower ME/KE ratio, does show some linked oscillations in DRKE and ME,
suggesting that such intertwined feedback is realized only for a fairly
narrow range of magnetic energy densities. We believe that case Cm is
likely to be more representative of the behavior of actual stellar
interiors at this rotation rate: if anything, the extraordinarily high
$Re$ and $Rm$ realized in stars might be expected to lead to somewhat
higher magnetic energy densities than we find here, and hence perhaps to an
even stronger decline in DRKE.
\section{Properties of the Magnetic Fields}
\subsection{Morphology and Spatial Distribution}
The magnetic fields realized here possess both intricate small-scale
features and global-scale structures. Like that of the flows that
sustain them, the morphology of the fields varies with radius, with the
typical length scale of the field increasing with depth. A sampling of
such behavior is provided by Figure 4, which shows an instantaneous view of
the radial magnetic field $B_{r}$, the azimuthal magnetic field $B_{\phi}$,
and the radial velocity $v_r$ on spherical surfaces at two depths in case Cm. Near
the surface, complex structures on many different scales continually emerge
and evolve, ranging from localized ripples to large-scale patches in $B_{\phi}$
that extend around much of the domain. The smallest field structures are
typically on finer scales than the smallest flow structures, likely partly
because we have adopted a magnetic Prandtl number $Pm$ greater than unity.
The strongest radial fields of both polarities are generally
associated with strong downflow plumes, but only slightly weaker fields may
be found in the relatively quiescent regions between these plumes. The
field strengths vary somewhat as a function of depth, with typical $B_r$
and $B_{\phi}$ sampled by Figure 4 declining by about a factor of two in
going from $r=0.88R$ to $r=0.24R$.
\begin{figure*}[hpt]
\center
\epsscale{1.0}
\includegraphics[width=6.7in, trim= 0 0 0 0]{f4.eps}
\caption{\label{vbview}Global views of radial velocity $v_r$, radial
magnetic field $B_r$, and longitudinal magnetic field $B_{\phi}$ at a
single instant in the evolution of case Cm. All are shown on spherical
surfaces both deep within the star (at $r=0.24R$) and closer to the
surface ($r=0.88R$), with red tones indicating positive polarity
(upflows) and blue tones negative polarity (downflows). Maxima and minima of the colormaps are
indicated (in m s$^{-1}$ and G). }
\end{figure*}
Deeper within the star (Fig. 4 $d-f$), the magnetic fields are larger in
scale. At these depths, the field is no longer structured on appreciably
finer scales than the flow, as revealed either by Fig. 4 $d-f$ or by the
spectral analysis of \S5.2 below. Like the convective flows, the magnetic
fields at depth are coupled to those at larger radii, with the intricate
field structures near the surface emerging from the broader network of
magnetism below. The fields deep within the interior evolve much more
slowly than the small-scale magnetism near the surface, with some large
patterns in $B_{\phi}$ (Fig. 4$f$) persisting for thousands of days.
In the following subsections, we examine more quantitatively the strength
of the fields on both large and small spatial scales.
\subsection{Spatial Scales of the Magnetism}
One assessment of the overall field morphology is provided by decomposing
the magnetism into its azimuthal mean (the axisymmetric field) and
fluctuations about that mean. This is a coarse measure of the size of
typical field structures: if the field is mostly on small scales, only a
small signal will survive the azimuthal averaging. We define the
shell-averaged toroidal mean magnetic energy (TME), the fluctuating
magnetic energy (FME), and the total magnetic energy as follows:
\begin{eqnarray}
\mbox{ME} &=&\frac{1}{8\pi}\left(B_r^2 + B_{\theta}^2 + B_{\phi}^2 \right)\,, \\
\mbox{TME} &=&\frac{1}{8\pi}\left<B_{\phi}\right>^2 \,, \\
\mbox{FME} &=&\frac{1}{8\pi}\left((B_r-\left<B_{r}\right>)^2+(B_{\theta}-\left<B_{\theta}\right>)^2+(B_{\phi}-\left<B_{\phi}\right>)^2\right) \,,
\end{eqnarray}
recalling that the angle brackets $\left< ~ \right>$ denote a longitudinal
average. These energy components are displayed for case Cm in Figure 5.
Although the majority of the magnetic energy is in the non-axisymmetric
component, the mean toroidal field is still considerable. Within the bulk
of the interior, TME accounts for about 18\% of the total magnetic energy;
it is smallest near the surface, where TME $\approx$ 5\% ME, and largest
(as a fraction of ME) at depth.
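The azimuthal-mean decomposition defined above is straightforward to reproduce numerically. The following is a minimal sketch, assuming the field components are held on a hypothetical $(\theta,\phi)$ grid in Gauss, and neglecting the $\sin\theta$ weighting that a true spherical shell average would carry:

```python
import numpy as np

def magnetic_energy_budget(Br, Bt, Bp):
    """Split magnetic energy into mean-toroidal and fluctuating parts.

    Br, Bt, Bp: field components on a (theta, phi) grid, in G.
    Returns shell-averaged ME, TME, FME (erg cm^-3) following
      ME  = (Br^2 + Bt^2 + Bp^2)/8pi,
      TME = <Bp>^2/8pi,
      FME from fluctuations about the longitudinal mean <.>.
    Uniform-grid average only; no sin(theta) area weighting.
    """
    phi_mean = lambda f: f.mean(axis=1, keepdims=True)  # axisymmetric (m=0) part
    ME = (Br**2 + Bt**2 + Bp**2).mean() / (8.0 * np.pi)
    TME = (phi_mean(Bp)**2).mean() / (8.0 * np.pi)
    FME = ((Br - phi_mean(Br))**2
           + (Bt - phi_mean(Bt))**2
           + (Bp - phi_mean(Bp))**2).mean() / (8.0 * np.pi)
    return ME, TME, FME
```

For a purely axisymmetric toroidal field the fluctuating energy vanishes and TME accounts for all of ME, as expected from the definitions.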
\begin{figure}[hpt]
\center
\epsscale{1.0}
\includegraphics[width=3.4in, trim= 10 0 10 0]{f5.ps}
\caption{\label{magbal} Magnetic energy components in case Cm as a
function of radius. Shown are the toroidal mean (axisymmetric)
magnetic energy TME, the fluctuating (non-axisymmetric) magnetic energy
FME, and their sum the total magnetic energy ME, all averaged in time and
over spherical surfaces.}
\end{figure}
That the toroidal mean fields account for a reasonably large fraction of
the total ME is a striking result. In prior simulations of the bulk of the
solar convective envelope, TME was typically only about 3\% (Brun et
al. 2004); in simulations including a tachocline of shear, similar TME/ME
ratios to those reported here were attained only within the stably
stratified tachocline itself (Browning et al. 2006). Similarly, Brun et
al. (2005) found that TME/ME $\approx 0.05$ within most of the convective
cores of A-type stars, with higher values achieved only within a shear
layer at the boundary of that core.
To glean a more complete understanding of the spatial structure of the
magnetic fields and the flows that sustain them, we turn to the spatial
power spectra shown in Figure 6. There the time-averaged $B^2$ is shown at
selected depths as a function of spherical harmonic degree $\ell$
(Fig. 6$b$), along with the velocity spectra $v^2$ for comparison
(Fig. 6$a$). The velocity spectra broadly confirm the qualitative
descriptions of \S3.1: at large radii, the spectra peak at higher
wavenumbers (smaller spatial scales) than at small radii. Near the
surface, the velocity amplitudes rise gradually from low $\ell$ to a peak
at about $\ell=20$, with all spherical harmonic degrees $\ell < 35$
possessing as much power as the $\ell=1$ mode. The velocity amplitudes
fall off steeply with increasing $\ell$ beyond the peak, approximating a
power law of slope steeper than $\ell^{-2}$. In the deeper interior, the
spectra also show a comparatively gradual rise to a maximum amplitude at a
scale $\ell_{\rm peak}$, and a steep fall-off to higher $\ell$, but the
value of $\ell_{\rm peak}$ shifts to smaller $\ell$. At the smallest radii
sampled here, the slope of the velocity spectra becomes somewhat shallower
around $\ell=20$. One significant caveat is that our choice
of eddy viscosities and diffusivities that are constant with depth is
somewhat arbitrary; if these coefficients were instead taken to decrease
with decreasing radius (in order to crudely represent kinetic energy
dissipation by small-scale turbulence that is constant in radius), the
contrast between the flows at depth and those near the surface would likely
not be as great as that reported here.
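The spectra of Figure 6 assign to each degree $\ell$ the power summed over all azimuthal orders $m$. A minimal sketch of that bookkeeping, assuming the harmonic coefficients are available in a hypothetical $(\ell,m)$-keyed layout (not the code's internal storage):

```python
import numpy as np

def power_per_ell(alm, lmax):
    """Accumulate |a_lm|^2 over m for each spherical harmonic degree l.

    alm: dict mapping (l, m) -> complex coefficient (hypothetical layout).
    Returns an array P with P[l] = sum_m |a_lm|^2, i.e., the power
    spectrum plotted as a function of degree l.
    """
    P = np.zeros(lmax + 1)
    for (l, m), a in alm.items():
        P[l] += abs(a)**2
    return P
```

Normalization conventions for spherical harmonic spectra vary between codes; the relative shape of the spectrum, which is what Figure 6 compares across depths, is unaffected by that choice.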
\begin{figure}[hpt]
\center
\includegraphics[width=3.4in, trim= 0 0 0 0]{f6.eps}
\caption{\label{spectra} Time-averaged spectral distributions of ($a$)
$v^2$ and ($b$) $B^2$ in case Cm, each sampled on three spherical surfaces
at indicated depths.}
\end{figure}
Turning to the magnetic spectra (Fig. 6$b$), we see a somewhat different
picture. The magnetic energy is distributed more evenly among the largest
scales: at large radii ($r=0.88R$) the spectra show a broad plateau up to
wavenumbers $\ell \approx 30$; at intermediate radii ($r=0.50R$) this
plateau extends to about $\ell=20$. At $r=0.15R$, the magnetic energy
peaks at the largest scales ($\ell=1$) and declines continuously towards
smaller scales. There is a break at $\ell \approx 20$ to a nearly flat
distribution of power with increasing wavenumber. The magnetic energy in
the largest scales is actually greatest at depth, even though both the velocity
amplitudes and the total magnetic energy are smaller there. This is partly
in keeping with the radial variation of the mean density together with the
changing scale of the flows: $\bar{\rho}$ goes from about 3.5 g
cm$^{-3}$ at $r=0.88R$ to $\bar{\rho}\approx 86$ g cm$^{-3}$ at $r=0.15R$; thus the ratio of
magnetic to kinetic energy at $\ell=1$ is roughly of order unity at all
depths.
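The compensation between density and field strength noted here can be made concrete through the equipartition field strength $B_{\rm eq} = (4\pi\bar{\rho})^{1/2}\,v$, at which the magnetic and kinetic energy densities match. A sketch using the two densities quoted in the text (the velocity is an arbitrary placeholder, not a simulation value):

```python
import numpy as np

def equipartition_field(rho, v):
    """Field strength (G) at which B^2/(8*pi) equals the kinetic
    energy density rho*v^2/2; rho in g cm^-3, v in cm s^-1."""
    return v * np.sqrt(4.0 * np.pi * rho)

# Densities quoted in the text; the velocity (1.0) is a placeholder.
# At fixed velocity, B_eq scales as sqrt(rho): the ~25-fold density
# increase from r=0.88R to r=0.15R raises B_eq by a factor of ~5.
ratio = equipartition_field(86.0, 1.0) / equipartition_field(3.5, 1.0)
```

This $\sqrt{\bar{\rho}}$ scaling is why the $\ell=1$ ratio of magnetic to kinetic energy can remain of order unity at all depths even though the fields themselves are somewhat weaker in the deep interior.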
Two important points about the magnetic and velocity spectra bear
emphasizing. One is that the distribution of magnetic energy as a function
of scale is not given simply by equipartition with the flows at each
wavenumber. On large scales the magnetic and kinetic energies are roughly
comparable, whereas at small scales ME exceeds KE by up to a factor of 50.
A second, related point is that the spatial distributions of the magnetic
fields and flows vary appreciably with radius. Deep within the star, the
magnetic field is dominated by its largest-scale components, while near the
surface the field is more broadly distributed in $\ell$. These radial
variations in the ME spectra appear to be somewhat more than simple
reflections of the depth-dependent KE spectra: rather, the ratio between ME
and KE is also a function of depth, reaching its maximum at radii around
$r=0.75R$.
We caution that the power spectra presented here are likely affected by the
many simplifications and limitations of our modeling. Both buoyancy
driving and viscous dissipation extend here over a broad range of scales,
so our simulations do not possess an extended ``inertial range'' in which
energy could simply cascade to smaller scales. A second caveat is that our
adoption of $Pm >1$ is a likely contributor to the abundance of small-scale
magnetic energy relative to kinetic energy; this behavior is at least
expected for $\ell$ between the viscous and Ohmic diffusive scales (here
$\ell > 100$), but the amplitudes on larger scales might also be impacted.
Finally, both ME and KE show much steeper declines than expected for
homogeneous turbulence with or without magnetism; in most such models,
KE$/\ell$ and ME$/\ell$ are proportional to $\ell^{-3/2}$ or $\ell^{-5/3}$
(see, e.g., Biskamp 1993; Goldreich \& Sridhar 1995; Boldyrev 2006). In
this respect, the spectra are qualitatively similar to those realized in
simulations of turbulent solar convection (Brun et al. 2004) or A-star core
convection (Browning et al. 2004). However, comparison of the spectra
realized here to those expected for homogeneous, isotropic turbulence is
problematic: rotation, stratification, buoyancy driving, and the
artificially high viscous dissipation in our simulation all likely impact
the spectral energy distribution.
\subsection{Structure and Evolution of Mean Fields}
The magnetic fields realized here clearly possess both intricate small
scale structure and large-scale ordering. We turn now to an assessment of
the large-scale mean fields, which we define to be the axisymmetric ($m=0$)
component of the magnetism. Many different divisions into mean and
fluctuating magnetism could be employed; an axisymmetric averaging is
perhaps the simplest. However defined, these large-scale fields hold
particular significance in dynamo theory.
Strong axisymmetric toroidal fields are realized at many depths. These
fields are displayed for three instants in the evolution of case Cm in
Figure 7. The magnetic fields grew at different rates at different
depths. Field amplification is most rapid in the outermost
regions of the star, where convection is at its most vigorous and where
typical eddy sizes are smallest. Eventually, however, fields of comparable
strength are established in the deep interior. The panels in Figure 7
sample $\left<B_{\phi}\right>$ at three times: one near $t=3500$ days on the scale of Figure
3, and the others at $t=12000$ and $14500$ days respectively. By the time
of the first snapshot, mean fields in the outermost regions had already
grown to about 5 kG in strength, but the fields deeper down were weaker by
two orders of magnitude. In Figure 7$b$, which shows the simulation
roughly 8500 days later, the mean fields at depth have grown to values
comparable to those at larger radii (with typical values $\left<B_{\phi}\right>
\approx 10000$ G). We cannot reliably determine whether the fields at
depth are generated by local dynamo action there, or are instead produced
amid the more vigorous convection near the surface and then transported
downwards. In both scenarios, strong fields are realized at depth
only on timescales reflective of the slow overturning times deep within the
star, whereas field amplification near the surface proceeds on the faster
overturning time associated with the flows there.
\begin{figure*}[hpt]
\center
\epsscale{1.0}
\includegraphics[width=5.5in, trim= 0 0 0 0]{f7.eps}
\caption{\label{bevolve}Azimuthally averaged $B_{\phi}$ as contour plots
in radius and latitude at three instants in the evolution of case Cm.
The three renderings sample ($a$) a time prior to the saturation of the
volume-averaged magnetic energy density, and times roughly ($b$) 8500
and ($c$) 11000 days later. Polarity is indicated by the colormap, with reddish tones
positive (prograde polarity) and bluish tones negative (retrograde). }
\end{figure*}
Inspection of Figure 7 reveals that the mean fields are highly spatially
nonuniform in strength, with some regions hosting very large field
structures while others are more quiescent. The largest $B_{\phi}$
structures extend in radius over much of the domain, and can occupy large
fractions of a hemisphere. Comparison of Figure 7$b-c$ reveals that some
of these field structures -- in this case a prominent site of positive
polarity in the northern hemisphere -- persist over intervals of thousands
of days. There is still substantial field evolution -- e.g. the growth of
a structure of negative polarity in the southern hemisphere (Fig. 7$c$) --
but this is most pronounced within the outer $\sim 10$\% of the
computational domain. Once structures penetrate into the deep interior,
they appear to persist in some form for timespans more reflective of the
slow magnetic diffusion time ($\tau \sim L^2/(\pi^2 \eta) \sim 4400$ days)
than of the faster convective overturning time.
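The diffusion-time scaling quoted here is simple to evaluate. A minimal sketch follows; the section does not list $L$ and $\eta$ for case Cm, so any inputs passed to this function are illustrative placeholders rather than simulation parameters:

```python
import math

def ohmic_diffusion_time_days(L_cm, eta_cm2_s):
    """Estimate tau = L^2 / (pi^2 * eta) and convert seconds to days.

    L_cm: length scale of the field structure (cm); eta_cm2_s: magnetic
    diffusivity (cm^2 s^-1). Both are placeholders here, since the text
    quotes tau ~ 4400 days for case Cm without listing L and eta.
    """
    tau_s = L_cm**2 / (math.pi**2 * eta_cm2_s)
    return tau_s / 86400.0
```

Because $\tau \propto L^2$, structures that penetrate to the deep interior (large $L$) persist far longer than the small-scale surface magnetism, consistent with the behavior seen in Figure 7.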
The overall polarity of the fields is remarkably stable. Over the roughly
25 years that we have evolved simulation Cm after its magnetic energy
equilibrated, the field near the surface has reversed its polarity -- which
we define as the sign of $B_r$ integrated over a surface in the northern
hemisphere (see BMT04) -- only once. This stability is in marked contrast
to the frequent polarity reversals found in simulations of the solar
convective envelope without a tachocline (BMT04).
\section{ESTABLISHMENT AND QUENCHING OF DIFFERENTIAL ROTATION}
Our hydrodynamical calculations (A, B, C) begin in a state of uniform
rotation. In all of them, however, convection quickly acts to redistribute
angular momentum, ultimately establishing interior rotation profiles that
vary with radius and latitude. The resulting differential rotation is
partly akin to that observed at the solar surface, in that the equator
rotates more rapidly than the poles; unlike the bulk of the solar
convection zone, our simulations also exhibit substantial radial angular
velocity contrasts, with the outer regions rotating more rapidly than the
interior.
This differential rotation is assessed for the hydrodynamic case C in
Figure 8. Shown as a contour plot is the longitudinal velocity $\hat{v}_{\phi}$,
averaged in time and in longitude; rapidly rotating regions are reddish,
and slower ones are bluish, all shown relative to the rotating frame. The
rightmost panel (Fig. 8$b$) also shows the angular velocity $\hat{\Omega}$
as a function of radius along selected latitudinal cuts. There we can see
that at the surface, the overall angular velocity contrast between the
equator and 60$\degr$ latitude is about 90 nHz, implying $\Delta \Omega /
\Omega \approx 22\%$. This is comparable to the solar angular velocity
contrast $\Delta \Omega / \Omega \approx 0.25$. As in the Sun, the angular
velocity decreases monotonically from equator to pole. Figure 8$b$ also
reveals that $\hat{\Omega}$ generally decreases with depth, with the
equator rotating about 125 nHz faster at the top of the domain than at the
bottom. Turning to the contour plot of $\hat{v}_{\phi}$, we see that the interior
rotation profile is nearly constant on cylindrical lines parallel to the
rotation axis. This is in keeping with the strong Taylor-Proudman
constraint felt by the flows, which are heavily influenced by rotation.
The angular velocity contrasts in radius and latitude are smaller in our
more laminar cases A and B, but the sense of the differential rotation is
the same. We have tabulated in Table 2 the contrast from equator to
60$\degr$ for each of these simulations.
\begin{figure}[hpt]
\center
\epsscale{1.0}
\includegraphics[width=3.4in, trim= 10 0 10 0]{f8.eps}
\caption{\label{diffrotn} Differential rotation established in the
hydrodynamic case C, and its quenching in the MHD simulation Cm. Shown
(\emph{left}) as a contour plot is the longitudinal velocity $\hat{v}_{\phi}$ in
case C, averaged in time and in longitude. Also displayed is the
angular velocity $\hat{\Omega}$ in ($b$) case C and ($a$) case Cm,
shown as a function of radius along indicated latitudinal cuts. Case
Cm rotates essentially as a solid body.}
\end{figure}
The building of differential rotation by the rotating convective flows is
not unexpected. As convective parcels rise and fall, they may be turned by
Coriolis forces, yielding correlations between $v_r$ and $v_{\phi}$ whose
effect is to transport angular momentum outward. If, on the other hand,
Coriolis forces are weak (relative to buoyancy driving and pressure
forces), outward-moving flows may simply tend individually to conserve
angular momentum, implying an angular velocity that decreases with radius
(e.g., Gilman \& Foukal 1979). The convection in our models is strongly
influenced by rotation, as quantified for instance by the Rossby number
$Ro=\tilde{u}/(L \Omega)$ or the convective Rossby number $Roc = Ra/(Ta
Pr)$. The first of these roughly measures the strength of the Coriolis
terms in equation (3) relative to the inertial ones, while the second
estimates the influence of rotation compared to buoyancy driving. These
are tabulated for our simulations in Table 2. In prior studies of nonlinear
convection in rotating spherical shells (Gilman 1978, 1979; Brun \& Toomre
2002), a general finding has been that equatorial acceleration is realized
whenever $Roc$ is less than unity, with Coriolis forces therefore large.
When $Roc$ is large, conversely, the equatorial regions tend to rotate
slower than the poles. Under strong rotational influences, angular
momentum transport by the convection tends to be radially outward and
latitudinally toward the equator (e.g., Brun \& Toomre 2002). The analogue
in deeper spherical domains appears to be the acceleration of columns of
fluid that lie far from the rotation axis, as realized here and in the core
convection simulations of Browning et al. (2004). Angular momentum is
globally conserved in our models, so as these regions speed up, others near
the rotation axis must slow down.
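The two Rossby numbers defined above can be sketched directly; the definitions mirror the text, and the inputs in the test are placeholders rather than Table 2 values:

```python
def rossby_number(u_rms, L, Omega):
    """Ro = u/(L*Omega): roughly, the strength of inertial terms
    relative to Coriolis terms. Ro << 1 indicates strongly
    rotationally constrained convection."""
    return u_rms / (L * Omega)

def convective_rossby_number(Ra, Ta, Pr):
    """Roc = Ra/(Ta*Pr): the influence of buoyancy driving relative
    to rotation, as defined in the text (conventions vary; some
    authors take the square root of this combination)."""
    return Ra / (Ta * Pr)
```

In the regime discussed here, $Roc < 1$ corresponds to equatorial acceleration, while $Roc > 1$ tends to yield equatorial regions that rotate more slowly than the poles.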
The interior rotation profiles are quite different in our calculations with
magnetism. Intuitively, one expects that strong magnetic fields might act
like rubber bands, tying separate regions together and helping to enforce
solid body rotation. Although this analogy is simplistic, given the
complex spatial and temporal structure of the magnetic fields realized
here, the expectation that magnetism should lessen angular velocity
contrasts turns out to be correct. In our MHD simulations, the magnetic
fields react back strongly upon the flows, acting to strongly quench
the differential rotation. This behavior is assessed for case Cm in Figure
8$a$, which shows the angular velocity $\hat{\Omega}$ as a function of
radius along cuts at various latitudes. The interior is in nearly solid
body rotation; the angular velocity contrasts realized in the progenitor
case C (Fig. 8$b$) have been almost entirely eliminated. The equator
rotates less than 2\% faster than the polar regions. This transition towards a
uniform rotation profile is in keeping with the marked decline in DRKE
noted as the magnetic fields grew in case Cm (\S4). In cases Cm2 and Bm,
which have lower equilibrated magnetic energy densities, the quenching of
differential rotation is somewhat less severe. The angular velocity
contrast $\Delta \Omega / \Omega$ from the equator to 60$\degr$ is about
8\% in case Bm (compared to 14\% in the hydrodynamic case B) and about 4\%
in case Cm2. Whether differential rotation is entirely, partially, or
minimally quenched thus seems to be a fairly sensitive function of the
magnetic energy densities realized, for ME in these three simulations
differs only by a factor of about 1.4. Comparing the simulations here to
those of BBT05, BMT04, and Browning et al. (2006) reinforces the view that
differential rotation can persist when magnetism is weak relative to the
flows (with ME/KE less than about 30\%), is partially quenched for
intermediate-strength fields (with cyclical feedbacks between the magnetism
and differential rotation sometimes realized), and is strongly suppressed
whenever the magnetism is very strong (equipartition-strength or greater).
\emph{Our simulations show that for sufficiently turbulent flows,
such equipartition-strength magnetic fields can be realized even in fully
convective stars, and sustained once the differential rotation has been
eliminated. }
In the following section, we examine the manner in which these zonal flows
are established in the hydrodynamic simulations and quenched in the
presence of strong magnetic fields.
\subsection{Angular Momentum Transport}
The interior rotation profiles realized here result from a variety of
competing processes: angular momentum can be redistributed by Reynolds
stresses, by meridional circulations, by viscous diffusion, or by torques
and Maxwell stresses associated with the magnetic fields. No simple
analytical tools allow us to reliably predict how each of these effects
will act, and how they will combine to shape the interior rotation
profile. But the present simulations offer an opportunity to assess these
processes ``after the fact'': we can constrain the angular momentum
transport afforded by each and see how, together, they yield differential
rotation in the hydrodynamic cases and weaker angular velocity contrasts in the MHD
simulations.
To analyze the angular momentum transport, we turn (in the manner of
Elliott, Miesch \& Toomre 2000; BMT04) to the zonal
component of the momentum equation, expressed in conservative form and
averaged in time and in longitude:
\begin{equation}
\frac{1}{r^2} \frac{\partial(r^2 {\cal F}_r)}{\partial r}+\frac{1}{r \sin\theta}
\frac{\partial(\sin \theta {\cal F}_{\theta})}{\partial
\theta}=0,
\end{equation}
where
\begin{eqnarray}
{\cal F}_r&=&\bar{\rho} r\sin\theta[-\nu r\frac{\partial}{\partial
r}\left(\frac{\hat{v}_{\phi}}{r}\right)+\widehat{v_{r}^{'}
v_{\phi}^{'}}+\hat{v}_r(\hat{v}_{\phi}+\Omega r\sin\theta) \nonumber \\
&-&\frac{1}{4\pi\bar{\rho}}\widehat{B_{r}^{'}
B_{\phi}^{'}}-\frac{1}{4\pi\bar{\rho}}\hat{B}_r\hat{B}_{\phi}] \end{eqnarray}
and
\begin{eqnarray}
{\cal F}_{\theta}&=&\bar{\rho} r\sin\theta[-\nu
\frac{\sin\theta}{r}\frac{\partial}{\partial
\theta}\left(\frac{\hat{v}_{\phi}}{\sin\theta}\right)+\widehat
{v_{\theta}^{'} v_{\phi}^{'}}+\hat{v}_{\theta}(\hat{v}_{\phi}+\Omega
r\sin\theta)\nonumber \\ &-& \frac{1}{4\pi\bar{\rho}}\widehat{B_{\theta}^{'}
B_{\phi}^{'}}-\frac{1}{4\pi\bar{\rho}}\hat{B}_{\theta}\hat{B}_{\phi}]
\end{eqnarray}
are the mean radial and latitudinal angular momentum fluxes, respectively.
The terms on the right hand side of equations (21) and (22) are, in order,
the contributions from viscous diffusion, Reynolds stresses, meridional
circulations, Maxwell stresses, and large-scale magnetic torques. The
Reynolds stresses are associated with correlations of the fluctuating
velocity components ($v_r'$, $v_{\theta}'$, $v_{\phi}'$) that arise when
the convective structures possess organized tilts. Similarly, Maxwell
stresses are correlations of the fluctuating magnetic field components that
correspond to the tilt and twist of magnetic structures.
To more easily analyze the various components of ${\cal F}_r$ and ${\cal
F}_{\theta}$, we integrate over co-latitude and radius to find the net
fluxes through shells at each radius and through cones at each latitude:
\begin{eqnarray}
I_r(r)&=&\int_0^{\pi} {\cal F}_r(r,\theta) \, r^2 \sin\theta
\, d\theta \; \mbox{ , } \nonumber \\ I_{\theta}(\theta)&=&\int_{r_{bot}}^{r_{top}} {\cal
F}_{\theta}(r,\theta) \, r \sin\theta \, dr \, .
\end{eqnarray}
These are displayed for cases C and Cm in Figure 9. We have identified
there the contributions from Reynolds stresses (denoted R), meridional
circulations (MC), viscous diffusion (V), Maxwell stresses (MS), and
large-scale magnetic torques (MT). In constructing Figure 9, we averaged
the fluxes over about 150 days of evolution.
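The integrated fluxes $I_r$ and $I_{\theta}$ defined above reduce to one-dimensional quadratures of the mean flux components. A minimal numerical sketch, assuming the time- and longitude-averaged fluxes are held on a hypothetical $(r,\theta)$ grid (not the code's internal layout):

```python
import numpy as np

# np.trapz was renamed np.trapezoid in NumPy 2.0; support both.
_trapz = getattr(np, "trapezoid", None) or np.trapz

def integrated_fluxes(Fr, Ft, r, theta):
    """Net angular momentum fluxes through shells and cones.

    Fr, Ft: mean radial and latitudinal flux components on an
    (r, theta) grid; r, theta: 1-D coordinate arrays. Returns
      I_r(r)         = int_0^pi        Fr r^2 sin(theta) dtheta,
      I_theta(theta) = int_rbot^rtop   Ft r   sin(theta) dr.
    """
    st = np.sin(theta)
    I_r = _trapz(Fr * r[:, None]**2 * st[None, :], theta, axis=1)
    I_th = _trapz(Ft * r[:, None] * st[None, :], r, axis=0)
    return I_r, I_th
```

In an equilibrated state the sum of all the component fluxes should integrate to nearly zero at every radius, which is the check applied to the solid total-flux curves in Figure 9.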
Turning first to the integrated radial angular momentum flux in case C
(Fig. 9$a$), we see that the Reynolds stresses act to transport angular
momentum radially outwards. They are opposed mainly by viscous diffusion,
which transports angular momentum inwards at all radii; the flux associated
with meridional circulations plays a smaller role here, but acts to
transport angular momentum to mid-depth ($r\approx 0.66R$) from either
smaller or larger radii. The total flux (indicated by the solid line) is
nearly zero, confirming that the rotation profile is well-equilibrated.
Note that the steady-state inward transport due to viscous diffusion
sampled here is consistent with the prevailing differential rotation in
case C: the viscous flux is negative whenever $\frac{\partial}{\partial r} (\hat{v}_{\phi}/r)$ is
positive, as it is in case C.
\begin{figure}[hpt]
\center
\epsscale{1.0}
\includegraphics[width=3.4in, trim= 0 0 0 0]{f9.eps}
\caption{\label{angmomplot} Integrated radial and latitudinal fluxes of
angular momentum, averaged in time for case C (top panels) and case Cm
(bottom). Shown on left ($a$,$b$) is the radial angular momentum flux
$I_r$. Right panels ($c$,$d$) show the latitudinal flux $I_{\theta}$. In
both, we have indicated the contributions from Reynolds stresses (R),
meridional circulations (MC), Maxwell stresses (MS), viscous diffusion
(V), and large-scale magnetic torques (MT), together with their sum
(indicated by the solid line). Positive quantities represent fluxes
radially outward or directed latitudinally from north to south.}
\end{figure}
The latitudinal angular momentum flux in case C, sampled in Figure 9$c$,
conveys a similar picture. There the Reynolds stresses act to transport
angular momentum toward the equator, since the associated flux is positive
in the northern hemisphere and negative in the southern. Again, viscous
diffusion acts in the opposite manner, transporting angular momentum
polewards. The meridional circulations figure more prominently than they
did in the radial balance: here they act mainly in concert with the
Reynolds stresses.
Examining the angular momentum transport in case Cm (Fig. 9$b$,$d$) reveals
that magnetic fields can play a major role in establishing the interior
rotation profile. Figure 9$b$ shows that strong Maxwell stresses are
realized throughout much of the interior, and that they tend to transport
angular momentum inwards. That the Reynolds and Maxwell stresses transport
angular momentum in opposite directions is understandable: the
corresponding terms in equations (21) and (22) carry opposite signs, so as
long as correlations between fluctuating magnetic field components ($\widehat{B_{r}^{'}
B_{\phi}^{'}}$) are in the same sense as the corresponding velocity
correlations ($\widehat{v_{r}^{'} v_{\phi}^{'}}$), angular momentum
transport by Maxwell stresses will oppose that of the Reynolds stresses.
The Reynolds stresses did not grow to compensate for the inward-directed
flux due to Maxwell stresses; thus the region of prograde flow at large
radii (realized in case C) was gradually slowed, yielding the nearly
solid-body rotation profile of case Cm. In the equilibrated state
sampled by Figure 9$b$, the viscous flux of angular momentum is also much
smaller than it was in case C (Fig. 9$a$). This is a result, rather than
a cause, of the nearly solid-body rotation profile established in case
Cm: the viscous transport term is proportional to $-\frac{\partial}{\partial r}
(\hat{v}_{\phi}/r)$, so small gradients of the angular velocity imply a vanishing
viscous transport of angular momentum. In cases Bm and Cm2, which have
weaker magnetic fields, the Maxwell stresses act in the same sense, but
are weaker and so do not have as great an impact on the differential
rotation.
The latitudinal transport in case Cm, sampled in Figure 9$d$, tells a
similar story. The dominant balance is between equatorward transport by
the Reynolds stresses and poleward transport by Maxwell stresses. The
angular momentum fluxes due to viscous diffusion, meridional circulation,
and large-scale magnetic torques are all smaller. Somewhat surprisingly,
the large-scale magnetic torques, though small, generally oppose the
Maxwell stresses associated with the fluctuating fields, implying that
correlations of the form $\hat{B}_{\theta}\hat{B}_{\phi}$ and
$\widehat{B_{\theta}^{'} B_{\phi}^{'}}$ are of the opposite sign.
Taken together, these analyses yield some insight into how differential
rotation is established in hydrodynamic cases and quenched in MHD ones.
In the parameter regime probed here, the Reynolds stresses associated with
the turbulent convection tend to transport angular momentum radially
outward and latitudinally toward the equator. In the hydrodynamic cases,
this transport is opposed only by viscous diffusion and, to some extent,
meridional circulations; the result is an acceleration of regions at large
radii and low latitudes, until the steady state sampled by Figure 9$a$,$c$
(and by the contour plot of Fig. 8) is reached. In the MHD cases, however,
the Reynolds stresses must also counteract the effect of Maxwell stresses,
which tend to transport angular momentum poleward and inward. The
meridional circulations and large-scale magnetic torques do not adjust to
cancel out the effect of these Maxwell stresses, so the net result is a
lessening of the differential rotation.
This general picture appears robust, but we caution that some of the
\emph{detailed} features of Figure 9 -- e.g., the exact magnitude of the
viscous flux -- depend in a highly nonlinear fashion upon each other and
upon other attributes of the simulation. The viscous flux, for instance,
depends upon the overall angular velocity gradients that are established --
but it is in part responsible for setting those angular velocity gradients,
through its competition with the Reynolds stresses, meridional circulations,
etc. Thus viscous transport is a major player in the angular momentum
balance in case C, but a negligible one in the evolved state of case Cm: it
did not vanish because of the presence of magnetic fields, but gradually
tapered away as the Maxwell stresses lessened the angular velocity
contrast. We have chosen for simplicity to show only the steady-state
fluxes of angular momentum in both cases; during the initial transient
phases in which the rotation profiles are established, the sense of the
fluxes -- i.e., which ones sought to speed up the equator and which to slow
it -- was generally the same as that described here.
\section{ROTATION, HELICITY, AND THE GENERATION OF FIELDS}
The magnetic fields realized here possess several striking properties.
Although substantial magnetic energy is realized on small scales, there is
also some order on the largest scales. Strong ($\sim 10$ kG) axisymmetric
toroidal fields are generated by the dynamo, and account for up to 20\% of
the magnetic energy at some sites. Some of these strong large-scale field
structures persist for thousands of days; the overall polarity of the field
in case Cm, our longest-evolved simulation, has flipped only once in about
30 years of simulated evolution. Furthermore, these global field
structures can be realized without the aid of stretching by differential
rotation, for the interiors of our most turbulent MHD simulations rotate
nearly as solid bodies.
These results stand in sharp contrast to those of some prior simulations of
convection in more massive stars. In computational models of the solar
convective envelope, Brun et al. (2005) found that the toroidal mean
magnetic energy was typically less than 5\% of the total ME; the polarity
of the mean field typically reversed at chaotic intervals of less than 600
days. In simulations that also included the tachocline below (Browning et
al. 2006), higher values of TME/ME were achieved only amid the strong shear
of the tachocline itself; the polarity evolution of the fields was
stabilized by the presence of strong mean fields in the radiative layer.
In neither of these sets of simulations did the magnetic energy become
strong enough to quench the differential rotation entirely. Similarly,
models of dynamo action in the convective cores of A-type stars (Brun et
al. 2005) indicated that strong large-scale fields were mostly realized in
the shearing layer near the core-envelope boundary. Those simulations also
exhibited a rich variety of interactions between the magnetism and the
differential rotation: angular velocity contrasts were almost entirely
eliminated in rapidly rotating cases, while slower rotators showed cyclical
feedbacks between differential rotation and magnetism.
Some guidance in interpreting these results may be afforded by findings
from mean field theories (MFTs) of the dynamo process. In many such
theories, a key role is played by the kinetic helicity, ${\bf
v}\cdot\mbox{\boldmath $\nabla$}\times{\bf v}$, which is related to the twisting and winding
of the convective flows (see Moffatt and Tsinober 1992). Under a number of
simplifying assumptions, it can be shown that the ``$\alpha$-effect'' of
traditional MFT is proportional to the kinetic helicity of the turbulence,
implying that more helical flows might build strong large scale flows
(e.g., Parker 1955; Steenbeck et al. 1966; Moffatt 1978; Moffatt \& Proctor 1982; Brandenburg \& Subramanian
2005). This general expectation has been partly borne out by simulations of
dynamo action in forced helical turbulence and by numerical calculations
using turbulent closure schemes (e.g., Blackman 2003; Pouquet \& Patterson
1978). In such modeling, the spectrum of the magnetic fields realized by
dynamo action changed as the kinetic helicity was varied, with more
large-scale field generated when the helicity was increased. A common
expectation is thus that flows without helicity can
act as dynamos, but the fields are typically on the scale of the turbulent
eddies (e.g., Brandenburg \& Subramanian 2005). More recent
numerical modeling has suggested, however, that the linkages between kinetic helicity
and large-scale magnetic fields may not be so clear -- e.g., Courvoisier,
Hughes \& Tobias (2006) found no relation between the $\alpha$-effect and
kinetic helicity in models of a variety of chaotic flows.
The convective flows in our simulation are strongly influenced by rotation.
The local Rossby number $Ro_w = |\mbox{\boldmath $\nabla$} \times {\bf v}|/2 \Omega$ (which
compares the fluid vorticity to the ``planetary'' vorticity $2\Omega$) is
less than 0.1 throughout most of the interior; its radial variation for
case Cm is shown in Figure 10. Also shown for comparison there is the
globally averaged value of $Ro_w \approx 0.86$ for the penetrative solar
dynamo calculations of Browning et al. (2006) (calculated within the bulk
of the convection zone). The local Rossby number here is significantly
less than in the solar simulation, even though the angular velocity in case
Cm is equal to the solar rate of $\Omega = 2.6 \times 10^{-6}$
s$^{-1}$. Likewise, the global estimates of $Ro$ and $Ro_c$ given in Table 2
are significantly lower than those calculated for the solar simulations of
BMT04 or Browning et al. (2006). The influence of rotation is stronger
here for the same rotation rate because the stellar luminosity is much
lower, so typical convective velocities are slower than in the solar case,
and the convective overturning time is longer. Flows that take more than
one rotation period to overturn can be strongly affected by Coriolis
forces, whereas those that overturn faster cannot; here only the rapid,
small-scale flows near the surface have overturning times that approach the
rotation period. Note that the rotation rate adopted here is lower than
that observed for most mid-late M-stars, so nearly all such stars are
probably strongly influenced by rotation.
\begin{figure}[hpt]
\center
\epsscale{1.0}
\includegraphics[width=3.4in, trim= 0 0 0 0]{f10.ps}
\caption{\label{kinhel} Local Rossby number $Ro_w = |\mbox{\boldmath $\nabla$} \times {\bf v}|/2
\Omega$, averaged in time and on spherical surfaces. Also shown is the
average value of $Ro_w \approx 0.86$ within the solar convection zone
simulations of Browning et al. (2006). The lower Rossby numbers realized
here indicate a stronger rotational influence.}
\end{figure}
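As a concrete illustration of the comparison in Figure 10, the sketch below evaluates the local Rossby number $Ro_w = |\nabla\times{\bf v}|/2\Omega$ at the solar angular velocity for a few vorticity values. The vorticity profile here is purely hypothetical, chosen only to show values that fall well below the solar-simulation average of $\sim 0.86$:

```python
import numpy as np

def local_rossby(vorticity_rms, omega):
    """Local Rossby number Ro_w = |curl v| / (2 Omega).

    Compares fluid vorticity to the 'planetary' vorticity 2*Omega;
    Ro_w << 1 indicates a strong rotational influence.
    """
    return vorticity_rms / (2.0 * omega)

# Solar angular velocity adopted in case Cm (s^-1)
OMEGA_SUN = 2.6e-6

# Hypothetical rms vorticity values (s^-1) at a few depths --
# illustrative numbers only, not taken from the simulations.
vort = np.array([1.0e-7, 2.0e-7, 4.0e-7])
ro_w = local_rossby(vort, OMEGA_SUN)
print(ro_w)  # all well below the ~0.86 of the solar simulations
```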
The general trend that emerges from comparing our simulations to prior ones
is that a stronger rotational influence, and hence lower $Ro$, implies both
higher magnetic energy densities relative to kinetic, and magnetic fields
of increasingly large spatial scale. When $Ro$ is close to unity, as in
the Sun (Browning et al. 2006; Brun et al. 2005), the magnetic energy
generally appears to be small enough that differential rotation can readily
persist. Under somewhat stronger rotational influences, as realized in
some of the A-star core dynamos of Brun et al. (2005), the differential
rotation and magnetism may feed back upon each other, possibly yielding
cyclical waxing and waning of field strength. At the still stronger
rotational influences sampled here and in the more rapidly rotating cases
of Brun et al. (2005), the helical convective flows are able to build
magnetism of equipartition strength without the aid of differential
rotation; angular velocity contrasts realized in hydrodynamic cases are
greatly reduced by the strong magnetic fields.
Simulations of the geodynamo also lend support to the idea that stronger
rotational influence can lead to magnetic fields on larger spatial scales.
Christensen \& Aubert (2006) found that lower $Ro$ led to magnetic fields
with a larger ``dipole fraction,'' defined as the power in the $\ell=1$
mode divided by that in modes $\ell=1-12$. They and others have suggested
that predominantly dipolar fields are realized when inertial forces are
small relative to Coriolis forces (see also Sreenivasan \& Jones 2006;
Olson \& Christensen 2006). The transition between dipolar and multipolar
fields in their models occurred at a ``modified Rossby number'' $Ro_l
\approx 0.1$, where $Ro_l = Ro\frac{\ell_u}{\pi}$, with $\ell_u$ the mean
spherical harmonic degree of the flows. Constructing a local $Ro_l$ as a
function of depth in the simulations here (using the power spectra in
\S5.2) suggests that our models may be on the cusp of entering into the
predominantly dipolar regime identified by Christensen \& Aubert (2006).
At most depths, $Ro_l$ in our simulations is still somewhat greater than
0.1, and the dipole fraction is low; deep in the interior, however, $Ro_l
\approx 0.1$, and the local dipole fraction rises to about 30\%. These
results are suggestive of the role that rotation may play in setting the
field geometry, though clearly much more work is needed to clarify how
stratification, dynamo supercriticality, and other effects might likewise
enter into the field strength and morphology.
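The modified Rossby number of Christensen \& Aubert (2006) can be sketched as follows; the kinetic-energy spectrum and the value of $Ro$ used here are assumed for illustration, not taken from our simulations:

```python
import numpy as np

def mean_harmonic_degree(ell, power):
    """Energy-weighted mean spherical harmonic degree l_u of the flow."""
    return np.sum(ell * power) / np.sum(power)

def modified_rossby(ro, ell, power):
    """Ro_l = Ro * l_u / pi (Christensen & Aubert 2006)."""
    return ro * mean_harmonic_degree(ell, power) / np.pi

# Hypothetical kinetic-energy power spectrum -- illustrative only.
ell = np.arange(1, 41)
power = ell * np.exp(-ell / 8.0)   # peaks at intermediate degrees
ro_l = modified_rossby(0.05, ell, power)
print(ro_l)
```

With these assumed inputs, $Ro_l$ lands above the $\approx 0.1$ dipolar/multipolar transition, as is the case at most depths in our models.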
Quantifying the connection between rotational influence (as measured by
$Ro$) and kinetic helicity, which figures so prominently in MFT, is a
complex task. The two are clearly related, for it is partly the overall
rotation that imparts a global ordering to the helicity: downflows in the
northern hemisphere tend to contract, and because of Coriolis forces rotate
counter-clockwise, implying anti-cyclonic vorticity (e.g., Miesch et
al. 2000). Thus on average the kinetic helicity is negative in the
northern hemisphere and positive in the southern hemisphere. A naive
expectation might therefore be that as Coriolis forces become increasingly
important, this process would lead to stronger net helicity. Indeed, in
simple models, the average kinetic helicity is often taken to be
proportional to the overall angular velocity (e.g., Durney, Mihalas \&
Robinson 1981; Noyes et al. 1984), reflecting the ability of Coriolis forces to
twist convective parcels as they rise or descend. Under such an
approximation, the ``dynamo number'' of MFT may then be proportional to the
angular velocity squared (e.g., Noyes et al. 1984). Such direct connections
between helicity and rotation rate are not realized in our simulations.
Both the azimuthally-averaged kinetic helicity and its rms values are
smaller here than in, for instance, the solar calculations of Browning et
al. (2006), even though Ro is substantially lower. This probably reflects
several key differences between the flows here and in the solar
simulations: the velocities here are lower, and the stratification weaker
throughout most of the interior, leading to less asymmetry between upflows
and downflows, and hence less preference for one sign of helicity.
Furthermore, the convection here shows some tendency to align in rolls
parallel to the rotation axis, reflecting the strong Taylor-Proudman
constraint; in such rolls, the velocity is mostly perpendicular to the
rotation axis, but the vorticity is mostly parallel to it, resulting in a
small average kinetic helicity (e.g., Knobloch, Rosner \& Weiss 1981). A
further complication is that recent numerical modeling and asymptotic
analysis of {\sl unstratified} turbulence has shown that a preference for
one sign of vorticity, and hence a high net helicity, is established at
moderate rotation rates with $Ro \approx 0.1$--$1$, but eliminated
as rotation becomes even more rapid (Sprague et al. 2006). This results in
little net helicity in the most rapid rotators. Whether such analysis is
relevant to the stratified flows here is not clear at this stage, but bears
further study.
Drawing the further connection between kinetic helicity and the generation
of large-scale magnetic fields in our simulations appears to be even more
difficult. Like Livermore, Hughes \& Tobias (2007), we have seen no clear
linkages between the magnitude or power spectrum of the helicity and that
of the magnetic field. Indeed, the power spectrum of this quantity in the
present simulations is qualitatively similar to one constructed for the
convective envelope in the solar simulations of Browning et al. (2006). As
noted above, the magnitude of the kinetic helicity is smaller here than in
those simulations or the ones of BBT05, yet the axisymmetric magnetic field
is stronger. More sophisticated theories relating the growth of fields to
the kinetic helicity appear required; alternatively, it may be that the
dynamo action at some scales could be partly ``quenched'' by the growth of
current helicity at those scales, as suggested in some variants of MFT
(see, e.g., Blackman 2003b). Much further work will be required to assess
whether such theories or variations thereof can accurately describe the
growth of fields in the complex flows here.
We close this section with brief comments on the strength and temporal
variability of the fields realized here. That mean toroidal fields of
order 10 kG strength can be maintained in a fully convective star for long
periods of time might come as a surprise: in the solar convective envelope,
simple estimates suggest that fields of that strength would quickly rise
due to magnetic buoyancy (Parker 1975). Indeed, the rapid timescale for
such rise was a major factor in identifying the stable layer as the likely
seat of the global solar dynamo. Here, however, the limits on field
strength imposed by magnetic buoyancy are less severe. A crude estimate of
the upward velocity for a buoyant flux tube in an unstably stratified layer
is
\begin{equation}
u \sim v_a \left( \frac{\pi a}{C_d \Lambda}\right),
\end{equation}
with $v_a = B/\sqrt{4 \pi \rho}$ the Alfv\'en speed, $C_d$ the
aerodynamic drag coefficient, $a$ the tube radius, and $\Lambda$ the
pressure scale height (Parker 1975). For constant $\frac{a}{\Lambda}$ and
$C_d$, this estimate implies that the characteristic timescale for fields
to rise from the center of an M-dwarf to its surface is about a factor of
ten longer than the time needed for fields to rise through the solar
convective envelope. The main difference is that the density in the M-star is
substantially greater, so the Alfv\'en speed is lower and the rise time
greater. Finally, we note that the long temporal stability of the field
polarity here is also striking, but may partly reflect the lower typical
convective velocities realized here. If the field is modeled as a
collection of overlapping dipole moments corresponding to typical
convective eddies, and these individual eddies are uncorrelated, then the
overall magnetic dipole should evolve on a random-walk timescale, $\tau_B
\sim R^2/\nu_{t}$, with $\nu_t \sim v_c l_{\rm ed}$ (Thompson \& Duncan
1993). This estimate would imply $\tau_B \sim 6$ years for motions near the
surface. The field actually appears to evolve on somewhat slower
timescales that are more characteristic of the flows deep in the interior.
A possibly analogous effect was found in Browning et al. (2006), where the
presence of organized mean fields deep in the interior served to largely
stabilize the sense of fields produced in the convection zone. This is a
cautionary tale: the field dynamics near the stellar surface may well
reflect couplings with an interior field that is hidden from view.
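The density scaling behind the rise-time comparison above can be sketched numerically. The field strength, densities, and $a/\Lambda$ below are rough assumed values (not taken from the simulations), and the rise-speed expression follows the crude estimate quoted in the text:

```python
import math

def alfven_speed(B, rho):
    """Alfven speed v_A = B / sqrt(4 pi rho), CGS units."""
    return B / math.sqrt(4.0 * math.pi * rho)

def rise_speed(B, rho, a_over_lam, c_d=1.0):
    """Buoyant rise speed u ~ v_A * (pi a / (C_d Lambda)),
    following the crude estimate in the text."""
    return alfven_speed(B, rho) * (math.pi * a_over_lam / c_d)

# Illustrative CGS values -- assumed, not from the simulations.
B = 1.0e4            # 10 kG field
rho_mstar = 10.0     # g/cm^3, deep M-dwarf interior (rough)
rho_sun = 0.2        # g/cm^3, base of solar convection zone (rough)

u_m = rise_speed(B, rho_mstar, 0.1)
u_s = rise_speed(B, rho_sun, 0.1)
# For fixed B, a/Lambda, and C_d, the rise-speed ratio scales as
# sqrt(rho_mstar / rho_sun):
print(u_s / u_m)  # ~ sqrt(50) ~ 7
```

The denser M-dwarf interior thus implies a markedly slower buoyant rise, consistent with the factor-of-ten-longer rise time quoted above once the different depths traversed are also accounted for.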
\section{CONCLUSIONS AND PERSPECTIVES}
We have presented global 3-D MHD simulations of the interiors of fully
convective M-stars rotating at the solar angular velocity. These nonlinear
simulations attempt to model a 0.3 solar-mass star with reasonable
fidelity: the stratifications of density and temperature are consistent
with those of a 1-D stellar model, as are the luminosity and rotation rate.
Although many great simplifications have been made, we believe that several
of the conclusions of our work may prove to be robust. We reiterate these
principal findings below, and comment briefly on the uncertainties
associated with each. We also compare our work to the simulations of
DSB06, and to the limited observational constraints presently available.
The convection realized in these simulations is characterized by
small-scale, intermittent flows near the stellar surface, and by weaker,
large-scale flows in the deep interior. The radial variation in the size
of typical convective eddies is driven mainly by the strong density
stratification (with $\rho_{bot} \approx 100 \rho_{top}$). The
weakening of convective velocities at depth arises partly because the
luminosity $L_c$ that must be carried by the convection -- essentially the
difference between the total luminosity $L_t$ and the radiative luminosity
$L_r$ -- is quite small there, even though the star is everywhere unstably
stratified. Thus in the cores of our model stars, convection is fairly weak
and radiation actually still transports much of the energy. The relatively modest energy transport
afforded by convection in the deep interior does not result directly from
convective suppression by magnetic fields: convection is weak at depth even
in hydrodynamic calculations.
The flows act as a magnetic dynamo, amplifying a small seed field by orders
of magnitude and sustaining it against Ohmic decay. The equilibrated
magnetic energy density in our highest resolution, most turbulent
simulation Cm is roughly 120\% of the kinetic energy density relative to
the rotating frame. The resulting magnetic fields possess structure on a
wide variety of spatial scales; the typical size of field structures is
largest in the deep interior, and smaller near the surface. Strikingly,
the magnetic field possesses a strong axisymmetric mean component, with the
toroidal mean energy accounting for up to 20\% of the total magnetic
energy. Such prominent large-scale mean fields have not been realized in
simulations of the solar convection zone (Brun, Miesch \& Toomre 2004),
except within a stably stratified tachocline of shear (Browning et
al. 2006). The mean fields realized here also possess remarkably stable
polarities: only one reversal of the overall polarity was realized in the
roughly 25-year evolution of case Cm. This result, too, stands in contrast
to prior simulations of solar-like convection in spherical shells (Brun et
al. 2004), in which the overall polarity tended to flip at irregular
intervals of less than 600 days.
Differential rotation is established by the convection in hydrodynamic
cases, but reduced in MHD simulations. In our most turbulent case Cm,
which has the strongest dynamo-generated magnetic fields, the differential
rotation of the hydrodynamic progenitor is almost entirely eliminated.
This occurs because of strong Maxwell stresses, which tend to oppose the
equatorward transport of angular momentum by Reynolds stresses. In the
non-magnetic cases, the differential rotation is solar-like at the surface,
with a fast equator and slow poles; the interior rotation profile is
largely constant on cylinders, in accord with the strong Taylor-Proudman
constraint. The angular velocity contrasts in those hydrodynamic cases are
fairly modest, in keeping with simulations of rapidly rotating solar-like
stars (Brown et al. 2007); it is possible that this reflects non-magnetic
mechanisms for quenching zonal flows in rapidly rotating systems, as
discussed in mean-field models of angular momentum transport (e.g.,
Kitchatinov \& R\"udiger 1995). No cyclical feedbacks between the
differential rotation and the magnetic field -- noted in prior simulations
of core convection in A-type stars at certain rotation rates (BBT05) -- are
seen in case Cm. Rather, equipartition-strength fields are sustained even
in the absence of any differential rotation. More interplay between the
differential rotation and magnetic fields is realized in the two cases Bm
and Cm2, in which the magnetism is weaker relative to the kinetic energy
and the differential rotation is somewhat more persistent. The magnetic
fields also impact the convective (non-axisymmetric) flows, though much less
drastically; this weakening of the convection may be somewhat akin to that
explored by Chabrier, Gallardo, \& Baraffe (2007) in the context of
mixing-length theory.
We have argued in \S7 that several key attributes of these simulations --
the strength of the magnetic fields realized, their strong large-scale
axisymmetric components, and the quenching of differential rotation -- may
depend crucially on the influence of rotation. Although the
simulations here rotate at the solar angular velocity, the rotational
influence upon the convective flows is far stronger than in simulations of
the solar convective envelope (e.g., Browning et al. 2006) or the cores of
A-type stars rotating at the same angular velocity (BBT05). Here, the very
slow convective flows imply Rossby numbers significantly smaller than in
more luminous stars at the same rotation rate. A comparison of our
simulations here to others at varying Rossby number (BBT05; Browning et
al. 2006) suggests that as the rotational influence becomes stronger, the
magnetic energy density grows larger relative to the kinetic energy
density. When the ratio ME/KE is small ($<$ 30\%), strong differential
rotation can persist; as ME grows larger (with increasing rotational
influence), a regime may be reached in which the differential rotation and
ME each undergo cyclical waxing and waning. Finally, in stars where
rotation is still more important -- as achieved in the 4-$\Omega_{\sun}$
cases of BBT05 and in our simulations Cm and Cm2 here -- ME can exceed KE without the
aid of differential rotation, and any persistent angular velocity contrasts
are strongly quenched. These conclusions are also tentatively borne out by
preliminary simulations of dynamo action in more rapidly rotating
solar-like stars (Brown et al. 2007, in prep).
Thus, we think it likely that had the simulations here been rotating at
only a quarter of the solar angular velocity (to yield Rossby numbers more
in accord with the 1-$\Omega_{\sun}$ simulations of BBT05), cyclical
feedbacks of ME upon DRKE would have been obtained even in our most
turbulent cases. At still slower rotation rates, the differential rotation
established in hydrodynamic cases would likely have persisted in the
presence of the weaker sustained magnetism. But the simulations here
already correspond to rotational velocities below what can presently be
measured: for a star with a radius of $2 \times 10^{10}$ cm, the solar
angular velocity of $\Omega_{\sun}=2.6 \times 10^{-6}$ s$^{-1}$ implies
$v_{\rm rot} \approx 0.5$ km s$^{-1}$, below current detection limits of
$v \sin i \approx 2$ km s$^{-1}$ (e.g., Delfosse et al. 1998). From the point
of view of observations, even the models presented here would be
``non-rotating.''
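The arithmetic behind this detectability estimate is simply $v_{\rm rot} = \Omega R$:

```python
# Equatorial rotation velocity implied by the solar angular velocity
# for a star of radius 2e10 cm: v_rot = Omega * R.
OMEGA_SUN = 2.6e-6   # s^-1
R_STAR = 2.0e10      # cm

v_rot = OMEGA_SUN * R_STAR       # cm/s
print(v_rot / 1.0e5)             # ~ 0.5 km/s, below the ~2 km/s
                                 # v sin i detection limit
```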
Like all those who would simulate stellar convection, we have made
major simplifications in our modeling. Of these, our use of effective eddy
viscosities and diffusivities -- which are vastly greater than their
microscopic counterparts in actual stars -- is arguably the most severe.
Other potentially important simplifications include our omission of the
inner and outer few percent of the stellar interior (our computational
domain extends from 0.08 to 0.96R), our adoption of a perfect gas equation
of state, and the limited duration of the simulations compared to the very
long thermal relaxation timescales in actual stars. It is difficult to
estimate the impacts each of these may have, but we are encouraged by the
reasonable success that similar simulations of the solar interior have
enjoyed in matching the detailed observational constraints provided by
helioseismology (Miesch et al. 2006; Browning et al. 2006).
Probably the greatest uncertainty associated with these simplifications is
the extent to which changes in other simulation parameters -- e.g., the
magnetic Reynolds and Prandtl numbers -- could mimic the effects of
rotation upon magnetic field strength and differential rotation. Both the
strength and morphology of the field are likely sensitive to these
parameters at some level, as indicated by the modest differences in ME and
DRKE between our 3 cases Cm, Cm2, and Bm. Our simulations were conducted
in an Rm, Pm regime probed frequently by prior simulations, and yet they
exhibit quite distinct behavior. The closest analogues in our own prior
work are the most rapidly rotating cases of BBT05; this fact, and a
comparison to cases at other Rossby numbers, is a partial motivation for
our suggestion that the Rossby number is a dominant control parameter in
determining the strength and geometry of the magnetic fields. Future work
that more thoroughly probes the parameter space of rotation rate, Rm, and
Pm will be needed in order to put this tentative suggestion on firmer
ground.
Our results partly agree with those of DSB06, in that we find that strong,
equipartition-strength magnetic fields can be generated at certain rotation
rates. Like them, we also find that the fields have substantial
axisymmetric mean components, and that they possess structure on both large
and small spatial scales. The most significant differences between our
findings and theirs concern the differential rotation established in the
interior: they find that hydrodynamic simulations establish an anti-solar
differential rotation, and that this differential rotation persists even
when strong magnetism is present. We find solar-like differential rotation
in our hydrodynamical models, and nearly solid-body rotation in our most
turbulent MHD simulations. Although the origins of this discrepancy in our
results are not certain, it may be partly caused by the differing strengths
of rotation and turbulence in our simulations compared to those of DSB06.
Although the rotational velocities adopted in DSB06 are far greater than in
actual stars, so are the convective velocities (because the stellar
luminosity was greatly enhanced). It is difficult to say how this
rescaling affects the delicate balance of convection, rotation, and
magnetism, but it is reasonable to suggest that the overall rotational
influence in most of their simulations is somewhat smaller than in ours. A
crude estimate of the rotational influence is given by $Ro \approx (u_{\rm
rms}/R)/2\Omega$; for most of their simulations, Ro is greater than in
ours, indicating that rotation is weaker relative to inertia. Only for
their two most rapidly rotating cases does Ro drop below the values
reported for our cases. They note a trend towards increasing ME/KE with
increasing rotation rate in their simulations; they also note that the
overall angular velocity contrast is reduced as the rotation rate is
increased (in their magnetic cases). Thus we suspect that for somewhat
more rapid rotation, they too would find even stronger quenching of the
differential rotation, perhaps yielding solid-body rotation like that
realized in our case Cm. Alternatively, at fixed rotation rate, more
turbulent flow (and higher $Rm$ and $Re$) may lead to stronger magnetic
energy densities (as evinced by a comparison of case Bm to case Cm), and
hence to stronger quenching of the differential rotation. Indeed, our case
Cm has a somewhat higher $Re$ (and significantly higher $Rm$) than the
simulations of DSB06 that have comparable rotation rates; our more laminar
case Bm possesses magnetic energy densities and angular velocity contrasts
somewhat more akin to those of DSB06. Other differences between our
simulations and those of DSB06 also impact the results in subtler ways.
For instance, the morphology of the convective flows and their variation
with radius are strongly affected by the density stratification; our
overall density contrast of $\approx 100$ is consistent with a 1--D stellar
model, whereas theirs is a factor of 20 smaller, so naturally the developed
flow patterns in our simulations differ somewhat. Other differences --
e.g., the boundary conditions adopted on the flow fields -- may also impact
the results at some level.
If we are correct in suggesting that rotational influence is the dominant
control parameter in setting the magnetic energy, the field morphology, and
(indirectly, through the feedback of Maxwell stresses) the differential
rotation, then some straightforward observational predictions follow.
Rapidly rotating M-stars should generally show strong magnetic fields and
little or no differential rotation. Less rapidly rotating ones may show
cyclical magnetic energy and differential rotation, while the slowest
rotators are most likely to harbor persistent angular velocity contrasts,
accompanied by somewhat weaker average magnetic energies. Again
extrapolating from our limited probing of parameter space, the axisymmetric
component of the field should account for a greater fraction of the
magnetic energy in progressively more rapid rotators. Although the
observational constraints on magnetic fields and differential rotation in
fully convective M-stars are still scarce, it appears that these
predictions are at least consistent with what has so far been observed.
Donati et al. (2006) reported that v374 Peg had a strong, mostly
axisymmetric magnetic field, with no evident differential rotation. Their
target star was very rapidly rotating, in keeping with our suggestion that
differential rotation should be strongly quenched in such stars. At still
lower masses, Reiners \& Basri (2007) have found that magnetic activity is
detectable even in some stars that are not detectably rotating; such stars
may conceivably have low enough convective velocities that even rotation
rates below the observational detection limit could imply a reasonably
strong rotational influence and hence lead to vigorous dynamo action. Much
more work will be required to elucidate the full role of rotation in such
stars, and to determine whether other effects ignored in our modeling --
e.g., degeneracy and decreasing surface conductivity -- also play roles in
the dynamo process.
\acknowledgements
It is a pleasure to thank Gibor Basri, Juri Toomre, A. Sacha Brun, Andrew
West, Mark Miesch, and Benjamin Brown for helpful discussions and/or
comments on this manuscript. We also gratefully acknowledge Isabelle
Baraffe and Gilles Chabrier for supplying the 1--D stellar model used here
for initial conditions. This work was supported by an NSF Astronomy \&
Astrophysics postdoctoral fellowship (AST 05-02413). The simulations were
carried out with NASA support of Project Columbia, and NSF PACI support of
NCSA, SDSC, and PSC.
\section{Introduction}
\input{fig-tex/fig_teaser}
In this work, we propose a video instance segmentation algorithm based on the Mask R-CNN pipeline~\cite{he2017Mask}.
We focus on the problem of temporal instability in video instance segmentation (Figure~\ref{fig:teaser}). There are many reasons for temporal instability: missing proposals from a region proposal network, misclassification of the object's class, or aliasing from small visual displacements.
We address temporal instability by propagating masks using object boxes to neighboring frames to complement missing detections.
Mask propagation through bounding boxes enables tracking of objects even when the detector misses the object's bounding box in the current frame.
To this end, we use the attention mechanism. Our propagation network predicts an attention map propagated from previous frames to the current frame. We apply the attention map to current frame features, which allows us to fill in absent instance masks and overcome temporal instability.
We have three main contributions in this work. First, we identify the temporal instability for video instance segmentation. Second, by propagating object masks through an inter-frame attention mechanism, we generate temporally coherent and spatially accurate mask tracks. Third, our method outperforms the conventional Mask R-CNN methods on the YouTube-VIS dataset~\cite{yang2019Video}.
\input{fig-tex/fig_model_init.tex}
\section{Related Works}
\bfsection{Video Instance Segmentation}
With the release of YouTube-VIS~\cite{yang2019Video}, several video instance segmentation methods have been developed based on Mask R-CNN, an object instance segmentation method for images. TrackR-CNN~\cite{voigtlaender2019MOTS} adds 3D convolutions to the encoder of Mask R-CNN to exploit temporal features from adjacent frames. To track detected objects, TrackR-CNN measures similarities of objects between frames using embeddings from RoI align.
MaskTrackR-CNN~\cite{yang2019Video} has a tracking head, which yields semantic representations of detected objects. MaskTrackR-CNN conducts matching between outputs from the tracking head across frames to agglomerate a track of segmentation masks.
STEm-Seg~\cite{athar2020STEmSeg} predicts a heat map for each object and models Gaussian distributions across frames to output mask tubes.
VAE-VIS~\cite{lin2020Video} extracts richer features by training the main decoder for video instance segmentation together with two auxiliary decoders, which perform future-frame reconstruction and object detection, respectively.
MaskProp~\cite{bertasius2020Classifying} propagates masks between neighboring frames using deformable convolutions~\cite{dai2017Deformable}.
SipMask~\cite{cao2020SipMask} delineates adjacent objects using spatial coefficients for each bounding box which improves mask quality.
\bfsection{Attention Model}
The attention mechanism was first introduced by Bahdanau~\etal~\cite{Bahdanau2015attention} for neural machine translation. Following the success of attention in NLP tasks, several methods have emerged that use attention mechanisms in convolutional neural networks.
Attention has been used for several image~\cite{ramachandran2019standalone, hu2018relation, bello2019attention} and video recognition~\cite{xiao2018video} tasks.
The transformer model, first introduced by Vaswani~\etal~\cite{vaswani2017Attention}, also leverages attention mechanisms.
We use an inter-frame attention mechanism to improve temporal stability for video instance segmentation.
\section{Method}\label{sec:method}
\input{tab-tex/tab_oracle.tex}
\subsection{Oracle study}
In order to examine the problem of temporal stability in greater detail, we perform an oracle study in Table~\ref{table:oracle} using MaskTrack R-CNN~\cite{yang2019Video}.
MaskTrack R-CNN achieves 36.8 mAP on the mini-validation set. Replacing the detected boxes with the closest ground-truth boxes based on intersection over union (IoU) achieves a performance of 41.9 mAP. Completely replacing the box head with ground-truth boxes and category labels leads to a further improvement of 11.2 mAP, reaching 53.1 mAP. When we then replace the mask head with ground-truth masks, a significant performance improvement is observed, from 53.1 mAP to 85.3 mAP. Furthermore, oracle tracking improves the performance by 14.7 points.
From this study, we conclude that the temporal instability of masks is the critical bottleneck, which means there is considerable room for improvement in mask generation and tracking.
Thus, we focus on improving the quality of masks, and especially on alleviating the temporal instability problem in video instance segmentation.
\subsection{Mask Propagation with Inter-frame Attentions}
\input{fig-tex/fig_prop_head}
Due to object deformation or aliasing, per-frame object instance segmentation models struggle to segment objects consistently throughout the video. This leads to several missing detections throughout the video, which drastically affects the segmentation results.
Using 3D convolutional neural networks (CNNs) poses a couple of challenges. First, 3D convolutions are computationally expensive. Second, since the large memory footprint constrains the number of images in memory, the learned temporal interaction is limited.
Instead, we propagate masks from previous frames $t-\delta$ to the current frame $t$ to compensate for missing detections caused by the lack of temporal context. As illustrated in Figure~\ref{fig:pipeline}, we add our propagation module on top of the MaskTrack R-CNN pipeline. Our propagation module enables learning of the temporal context without 3D convolutions.
In this work, we improve transition-based propagation using an attention mechanism~\cite{vaswani2017Attention}.
Our inter-frame attentions can robustly propagate masks between frames while taking the temporal context into account.
Our propagation module with inter-frame attentions is illustrated in Figure~\ref{fig:prop_head}. For object propagation, we take as input the backbone features of frames $t-\delta$ and $t$ and a binary map of the object to propagate at frame $t-\delta$. Our propagation module outputs a mask of the target object at frame $t$.
\bfsection{Inter-frame affinity}
We first measure inter-frame affinities between two frames $t-\delta$ and $t$.
The input backbone features from frames $t-\delta$ and $t$ are resized to stride 16 of the image resolution and then concatenated across the levels of the feature pyramid. We represent these processed features as $\mathbf{F}_t$ and $\mathbf{F}_{t-\delta}$ for frames $t$ and $t-\delta$, respectively. Using these features, we compute a transition matrix that measures the inter-frame feature affinity between every pair of spatial locations.
The inter-frame affinity matrix $\mathbf{W}_{t-\delta \rightarrow t} \in \mathbb{R}^{HW \times HW}$ is computed as
\begin{equation}
\mathbf{W}_{t-\delta \rightarrow t} = \mathbf{F}_t \circ \mathbf{F}_{t-\delta},
\label{eq: affinity}
\end{equation}
where $\circ$ denotes matrix multiplication. Each element of the inter-frame affinity matrix represents the affinity between the corresponding two locations in frames $t$ and $t-\delta$. We normalize the affinity matrix so that each row sums to 1.
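As a concrete illustration, the affinity computation and row normalization can be sketched in NumPy as follows; the function name and shapes are illustrative, not taken from the paper's code:

```python
import numpy as np

def interframe_affinity(feat_t, feat_prev):
    """Row-normalized inter-frame affinity (illustrative sketch).

    feat_t, feat_prev: arrays of shape (H*W, C), the flattened backbone
    features of frames t and t-delta. Returns a (H*W, H*W) matrix whose
    row i holds the normalized affinities of location i in frame t
    to every location in frame t-delta.
    """
    # F_t "circ" F_{t-delta}: plain matrix multiplication of the features
    aff = feat_t @ feat_prev.T
    # Normalize each row to sum to 1, assuming positive affinities
    # (e.g. features after a ReLU); a per-row softmax would be the
    # numerically safer alternative.
    return aff / aff.sum(axis=1, keepdims=True)
```

Each row of the returned matrix is then a probability distribution over source locations in frame $t-\delta$.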
\bfsection{Attention estimation via object propagation}
Similar to TVOS~\cite{zhang2020Transductive}, we aim to propagate masks using the transition matrix. However, instead of propagating pixel-level segmentation masks, we use a binary object map, which serves as a loose estimate for the pixel-level mask.
The input binary object map for frame $t-\delta$ is generated by marking all pixels within the instance-specific bounding box. In addition to the object map, we also generate a binary map for the background by inverting the object's binary map to take into account the background information when computing inter-frame attentions.
We vectorize both binary object and background maps for matrix multiplication.
Using the transition matrix, the binary object map, $\mathbf{b}_{t-\delta}$, is propagated to the current frame $t$ by
\begin{equation}
\mathbf{a}_{t} = \mathbf{W}_{t-\delta \rightarrow t}
\circ \mathbf{b}_{t-\delta}.
\end{equation}
Since we propagate both object and background maps, we apply softmax to find the regions of the object. We represent the softmax output of the propagated object map as $\hat{\mathbf{a}}_{t}$ and use it as an attention map.
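A minimal sketch of this propagation step, with hypothetical names and NumPy in place of the actual framework, looks like:

```python
import numpy as np

def propagate_attention(trans, obj_map):
    """Propagate a binary object map and its background complement
    through the transition matrix, then take a per-location softmax
    over the {object, background} pair (illustrative sketch).

    trans:   (HW, HW) row-normalized transition matrix (t-delta -> t)
    obj_map: (HW,) binary object map at frame t-delta
    Returns the (HW,) soft attention map for the object at frame t.
    """
    bg_map = 1.0 - obj_map
    a_obj = trans @ obj_map        # propagated object evidence
    a_bg = trans @ bg_map          # propagated background evidence
    e_obj, e_bg = np.exp(a_obj), np.exp(a_bg)
    return e_obj / (e_obj + e_bg)  # softmax over the two channels
```

Locations receiving more object than background evidence obtain attention values above 0.5.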
Unlike existing attention-based algorithms, which let the network prioritize important features on its own, we explicitly supervise the attention module to learn where to focus by propagating temporal information. This allows us to fully exploit the temporal context.
\bfsection{Attention-based mask prediction}
Our next aim is to predict a mask using the attention map, $\hat{\mathbf{a}}_{t}$, and the features from frame $t$, $\mathbf{F}_{t}$. We first apply the attention map to the features by element-wise multiplication at each spatial location. We feed the attention-guided features to a mask prediction module, which consists of four convolution modules (each a combination of a convolution layer and ReLU), one deconvolution module, and a prediction module (a combination of a convolution layer and a sigmoid layer).
\bfsection{Loss functions}
The propagation loss $L_{prop}$ consists of two terms, the mask propagation loss $L_{prop}^{m}$ and the attention loss $L_{prop}^{a}$. $L_{prop}^{m}$ is computed identically to the mask head in Mask R-CNN~\cite{he2017Mask}.
The attention loss, $L_{prop}^{a}$, is computed as follows:
\begin{equation}
L_{prop}^{a} = -\sum_{i=1}^H\sum_{j=1}^W \left[ y_{ij} \log \Tilde{y}_{ij} + (1 - y_{ij}) \log (1 - \Tilde{y}_{ij}) \right],
\end{equation}
where $y_{ij}$ is the value in the ground-truth attention map at location $(i, j)$, and $\Tilde{y}_{ij}$ is the predicted attention value.
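For reference, the attention loss is a per-pixel binary cross-entropy summed over the map; a NumPy sketch follows, with a clipping constant added for numerical safety, which the paper does not specify:

```python
import numpy as np

def attention_loss(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy of the predicted attention map against the
    ground-truth map, summed over all spatial locations (sketch)."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.sum(y_true * np.log(y_pred)
                   + (1.0 - y_true) * np.log(1.0 - y_pred))
```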
\bfsection{Training}
We use Mask R-CNN~\cite{he2017Mask} with a ResNet-50 backbone pre-trained on the COCO dataset. Our model is trained on 4 Tesla V100 GPUs. We use a batch size of 24, with 6 images on each GPU. We train our model for 12 epochs using the SGD optimizer with a learning rate of 0.005, which is decayed by a factor of 10 after 8 and 11 epochs.
\bfsection{Inference}
During inference, our framework is completely online and does not require any future frames. We store the output predictions for previous frames in memory and propagate instance masks to the current frames. Using mask propagation, we are able to alleviate the temporal instability problem. When a bounding box is missing from the current frame, we propagate an instance mask from the frame history to the current frame to segment the missing object instance. Therefore, we use mask propagation as an empty instance filling mechanism.
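The filling mechanism at inference time can be sketched as follows; `propagate` stands in for the attention-based propagation head, and all names are illustrative:

```python
def fill_missing_instances(detections_t, memory, propagate):
    """Complete the current frame's detections with masks propagated
    from the frame history (illustrative sketch of the inference loop).

    detections_t: dict {track_id: mask} from the detector at frame t
    memory:       dict {track_id: mask} holding the most recent mask
                  seen for every track so far
    propagate:    callable(mask) -> mask, the propagation module
    """
    completed = dict(detections_t)
    for track_id, last_mask in memory.items():
        if track_id not in completed:      # detector missed this object
            completed[track_id] = propagate(last_mask)
    memory.update(completed)               # keep the newest mask per track
    return completed
```

Because only previously seen frames are consulted, this step preserves the online property of the framework.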
\section{Experiments}
\input{fig-tex/fig_results}
\input{tab-tex/tab_yt-vis.tex}
We use the YouTube-VIS~\cite{yang2019Video} validation set to compare video instance segmentation methods.
For evaluation, we follow three metrics: (1) mean Average Precision over the video sequence (mAP), (2) Average Precision over the video sequence at 50\% and 75\% IoU thresholds, and (3) Average Recall for the highest 1 and 10 ranked instances per video.
We compare our method with three state-of-the-art methods: MaskTrack R-CNN~\cite{yang2019Video}, STEm-Seg~\cite{athar2020STEmSeg}, and SipMask~\cite{cao2020SipMask}. Note that a direct comparison with MaskProp~\cite{bertasius2020Classifying} is not fair due to its considerably more complex architecture; hence, we compare our method with Mask R-CNN based approaches.
Our method outperforms the compared methods on all evaluation metrics. We achieve nearly 3.5\% higher mAP than the closest method on the benchmark, which demonstrates the effectiveness of our approach.
In Figure~\ref{fig:results}, we illustrate the results of our method (third row) compared to MaskTrack R-CNN (second row). We observe that our inter-frame attention propagation head leads to temporally consistent segmentation tracks throughout the video.
\section{Conclusions}
In this work, we introduce an inter-frame attention propagation network for video instance segmentation. Using box-level instance masks from the frame history, we propagate an attention map onto the current frame which is used to generate an instance-specific segmentation mask. Our method is online and requires limited computational overhead.
Using inter-frame attentions, we achieve state-of-the-art results on the YouTube-VIS benchmark using the Mask R-CNN pipeline.
Qualitative results demonstrate the effectiveness of our approach in alleviating missing detections caused by the temporal instability problem.
\section{Introduction}
The Hilbert symbol of degree $p$, $(\cdot,\cdot
)_p:K^\times/K^{\times p}\times K^\times/K^{\times p}\to\Brr p(K)$, which is defined when $\car
K\neq p$ and $\mu_p\sbq K$, has an analogue in characteristic $p$. If
$\wp$ is the Artin-Schreier map, $x\mapsto x^p-x$, then we have the
Artin-Schreier symbol
$$[\cdot,\cdot )_p:K/\wp (K)\times K^\times/K^{\times p}\to\Brr p(K),$$
given for every $a,b\in K$, $b\neq 0$ by $[a,b)_p=[A_{[a,b)_p}]$,
which is the class in the Brauer group of the central simple algebra (c.s.a.)
$A_{[a,b)_p}$ generated by $x,y$ with the relations
$$\wp (x)=x^p-x=a,\quad y^p=b,\quad yxy^{-1}=x+1.$$
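As a quick sanity check on these relations (a standard verification, not taken from the sources cited here), conjugation by $y$ is compatible with the first relation: in characteristic $p$,
$$\wp (yxy^{-1})=\wp (x+1)=(x+1)^p-(x+1)=x^p-x=\wp (x)=a,$$
and iterating $yxy^{-1}=x+1$ gives $y^pxy^{-p}=x+p=x$, so $y^p=b$ commutes with $x$, as it must.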
The representation of $\Brr p(K)$ involves the groups of $1$-forms,
$\Omega^1(K)$. Recall that, over an arbitrary commutative ring $R$,
$\Omega^1(R)$ is the $R$-module generated by $\mathop{}\!\mathrm{d} a$ with $a\in R$,
subject to $\mathop{}\!\mathrm{d} (a+b)=\mathop{}\!\mathrm{d} a+\mathop{}\!\mathrm{d} b$ and $\mathop{}\!\mathrm{d} (ab)=a\mathop{}\!\mathrm{d}
b+b\mathop{}\!\mathrm{d} a$. By [GS, Lemma 9.2.1] we have a group morphism $\gamma
:\Omega^1(K)\to\Omega^1(K)/\mathop{}\!\mathrm{d} K$, called the inverse Cartier
operator, or $C^{-1}$, given by $a\mathop{}\!\mathrm{d} b\mapsto a^pb^{p-1}\mathop{}\!\mathrm{d}
b$. An important property of $\gamma$ is the existence of the
following exact sequence:
$$1\to K^\times\xrightarrow{p}K^\times\xrightarrow{\dlog}\Omega^1(K)\xrightarrow{\gamma -1}\Omega^1(K)/\mathop{}\!\mathrm{d} K.$$
This result, [GS, Theorem 9.2.2], is due to Jacobson and Cartier. With
the help of this theorem, Kato was able to prove that there is a group
isomorphism
$$\alpha_p:\coker (\gamma -1)=\Omega^1(K)/(\mathop{}\!\mathrm{d} K+(\gamma
-1)\Omega^1(K))\to\Brr p(K),$$
given by $a\mathop{}\!\mathrm{d} b\mapsto [ab,b)$ $\forall a,b\in K$, $b\neq 0$. (See
[GS, Theorem 9.2.4] and [GS, Proposition 9.2.5].)
Explicitly, the domain of $\alpha_p$ writes as
$\Omega^1(K)/\langle\mathop{}\!\mathrm{d} a,\, (a^pb^{p-1}-a)\mathop{}\!\mathrm{d} b\, :\, a,b\in
K\rangle$.
Note that if $b\neq 0$ and we write $a=cb$ then
$a^pb^{p-1}-a=(c^p-c)b^{-1}$ so $(a^pb^{p-1}-a)\mathop{}\!\mathrm{d}
b=(c^p-c)b^{-1}\mathop{}\!\mathrm{d} b=(c^p-c)\dlog b$. Hence in the formula for the
domain of $\alpha_p$ we can replace $(a^pb^{p-1}-a)\mathop{}\!\mathrm{d} b$, with
$a,b\in K$ by $(a^p-a)\dlog b=\wp (a)\dlog b$, with $a,b\in K$, $b\neq
0$. Thus the isomorphism $\alpha_p$ is defined as
$$\alpha_p:\Omega^1(K)/\langle\mathop{}\!\mathrm{d} a,\,\wp (a)\dlog b\, :\, a,b\in
K,\, b\neq 0\rangle\to\Brr p(K)$$
More generally, if $n\geq 1$ then we have an analogue of the $p^n$th
Hilbert symbol, which is defined in terms of Witt vectors.
Let $W(K)$ be the ring of $p$-typical Witt vectors over $K$,
i.e. $W_{\{ 1,p,p^2,\ldots\}}(K)$, and let $W_n(K)$ be its truncation
of length $n$, i.e. $W_{\{ 1,p,\ldots,p^{n-1}\}}(K)$. (When $n=0$ by
$W_0(K)$ we mean $W_\emptyset (K)=\{ 0\}$.) We have a truncation
morphism $W(K)\to W_n(K)$ given by $(x_0,x_1,\ldots )\mapsto
(x_0,\ldots,x_{n-1})$. More generally, if $m\geq n$ then we have a
truncation map $W_m(K)\to W_n(K)$.
We denote by $F$ the Frobenius endomorphism on $W(K)$ and on
$W_n(K)$, given by $(x_0,x_1,\ldots )\mapsto (x_0^p,x_1^p,\ldots )$,
and by $V$ the Verschiebung map $(x_0,x_1,\ldots )\mapsto
(0,x_0,x_1,\ldots )$, which is additive. Note that for any $n\geq 0$
we can define $V:W_n(K)\to W_{n+1}(K)$ by $(x_0,\ldots,x_{n-1})\mapsto
(0,x_0,\ldots,x_{n-1})$. However, in many cases we will be concerned
with the truncated version, $V:W_n(K)\to W_n(K)$, given by
$(x_0,\ldots,x_{n-1})\mapsto (0,x_0,\ldots,x_{n-2})$. We have the well
known formulas $(Va)b=V(aFb)$ and $FV=VF=p=(x\mapsto px)$,
i.e. $p(x_0,x_1,\ldots )=(0,x_0^p,x_1^p,\ldots )$. More generally,
$(V^ka)b=V^k(aF^kb)$ and $(V^ka)(V^lb)=V^{k+l}(F^laF^kb)$.
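For instance, the case $k=l=1$ of the last identity follows from $(Va)b=V(aFb)$ and $FV=VF=p$:
$$(Va)(Vb)=V(a\, F(Vb))=V(a\, VFb)=V(V(Fa\, Fb))=V^2(Fa\, Fb),$$
where the second equality uses $FVb=VFb=pb$ and the third applies $(Vc)d=V(cFd)$ with $c=Fb$, $d=a$.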
For any $a\in K$ we denote by $[a]$ its Teichm\"uller representative
in $W(K)$ or $W_n(K)$, $[a]=(a,0,0,\ldots )$. The map $a\mapsto
[a]$ is multiplicative, but not additive. The zero and unit elements of
the ring of Witt vectors are $0=[0]=(0,0,\ldots )$ and
$1=[1]=(1,0,0,\ldots )$. If $a=(a_0,a_1,a_2,\ldots )$ is a Witt vector
and $b\in K$ then $a[b]=(a_0b,a_1b^p,a_2b^{p^2},\ldots )$.
The map $V^n$ is zero on $W_n(K)$. Moreover, $V^n(W(K))=\{
(0,\ldots,0,a_n,a_{n+1},\ldots )\, :\, a_i\in K\}$, which is the kernel
of the truncation map $W(K)\to W_n(K)$. Therefore $W_n(K)$ can
also be written as $W(K)/V^n(W(K))$. As we will see later, in some
cases it is more advantageous to regard the truncated Witt vectors as
classes of full Witt vectors, especially when we work with truncations
of different lengths.
The Artin-Schreier map on Witt vectors is $\wp =F-1$, given by
$(x_0,x_1,\ldots )\mapsto (x_0^p,x_1^p,\ldots )-(x_0,x_1,\ldots )$. We
have $\ker (\wp :W_n(K)\to
W_n(K))=W_n(\FF_p)\cong\ZZ/p^n\ZZ$. The isomorphism
$\ZZ/p^n\ZZ\to W_n(\FF_p)$ is given by $\overline a\mapsto a\cdot
1=a(1,0,\ldots,0)$.
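The computation of this kernel reduces to the behaviour of $F$ on coordinates: since $F$ acts coordinatewise, $\wp (x)=0$ means $x_i^p=x_i$, i.e. $x_i\in\FF_p$, for every $i$; conversely $F$ is the identity on $W_n(\FF_p)$, so $\wp =F-1$ vanishes there.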
We now define the analogue of $(\cdot,\cdot )_{p^n}$ in characteristic
$p$,
$$[\cdot,\cdot )_{p^n}=[\cdot,\cdot )_{K,p^n}:W_n(K)/\wp
(W_n(K))\times K^\times/K^{\times p^n}\to\Brr{p^n}(K),$$
given for any $a=(a_0,\ldots,a_{n-1})\in W_n(K)$ and any $b\in K^\times$
by $[a,b)_{p^n}=[A_{[a,b)_{p^n}}]$, where $[A_{[a,b)_{p^n}}]$ is a
c.s.a. over $K$ of degree $p^n$ generated by
$x=(x_0,\ldots,x_{n-1})$ and $y$, such that $x$ has mutually commuting
entries, with the relations
$$\wp (x)=Fx-x=a,\quad y^{p^n}=b,\quad yxy^{-1}=x+1.$$
Here $yxy^{-1}:=(yx_0y^{-1},\ldots,yx_{n-1}y^{-1})$ and $x+1$ is the
sum of Witt vectors $(x_0,\ldots,x_{n-1})+(1,0,\ldots,0)$.
\smallskip
{\bf Note.} The notation $[\cdot,\cdot )$ is not universally
used. Instead of $[(a_0,\ldots,a_{n-1}),b)$ some authors write
$(b,(a_0,\ldots,a_{n-1})]$ or $(b|a_0,\ldots,a_{n-1}]$.
Also the relation $yxy^{-1}=x+1$ from the definition of
$A_{[a,b)_{p^n}}$ is sometimes replaced by $y^{-1}xy=x+1$. With this
alternative definition $A_{[a,b)_{p^n}}$ becomes
$A_{[a,b)_{p^n}}^{op}$ so $[a,b)_{p^n}$ becomes $-[a,b)_{p^n}$, which
is essentially the same thing.
\smallskip
The symbol $[\cdot,\cdot )_{p^n}$ was introduced by Witt in [W] and is
called the Artin-Schreier-Witt symbol. It is bilinear and
$[a,b)_{p^n}$ depends only on the classes of $a\mod\wp (W_n(K))$ and
$b\mod K^{\times p^n}$, which justifies the domain of definition. Then $[\wp
(a),b)_{p^n}=[a,b^{p^n})_{p^n}=0$ $\forall a\in W_n(K)$,
$b\in K^\times$. Also note that
$[Fa,b)_{p^n}-[a,b)_{p^n}=[Fa-a,b)_{p^n}=[\wp (a),b)_{p^n}=0$ so
$[Fa,b)_{p^n}=[a,b)_{p^n}$. As a consequence,
$[Va,b)_{p^n}=[FVa,b)_{p^n}=[pa,b)_{p^n}=p[a,b)_{p^n}$.
The symbols $[\cdot,\cdot )_{p^n}$ are related to each other by the
formula $[a,b)_{p^n}=[Va,b)_{p^{n+1}}$ $\forall a\in W_n(K)$. (See [W,
Satz 15].) More generally, if $m\geq n$ then
$[a,b)_{p^n}=[V^{m-n}a,b)_{p^m}$. Explicitly,
$[(a_0,\ldots,a_{n-1}),b)_{p^n}=
[(0,\ldots,0,a_0,\ldots,a_{n-1}),b)_{p^m}$.
We obtain a map between two directed systems.
$$\begin{array}{ccc}
W_n(K)\times K^\times&
\xrightarrow{[\cdot,\cdot )_{p^n}}& \Brr{p^n}(K)\\
\hskip -40pt V^{m-n}\times 1\downarrow&{}&\downarrow\\
W_m(K)\times K^\times&
\xrightarrow{[\cdot,\cdot )_{p^m}}& \Brr{p^m}(K)
\end{array}.$$
We take the direct limits. The limit of the directed system
$W_0(K)\xrightarrow VW_1(K)\xrightarrow VW_2(K)\xrightarrow V\cdots$
is $CW(K)$, where $(CW(K),+)$ is the group of Witt
covectors. Recall that the elements of $CW(K)$ write as
$(\ldots,a_{-2},a_{-1},a_0)$, with $a_i\in K$, such that $a_i=0$ for
$i\ll 0$. The canonical morphism $\psi_n:W_n(K)\to CW(K)$ is
given by $(a_0,\ldots,a_{n-1})\mapsto (\ldots
0,0,a_0,\ldots,a_{n-1})$. The Frobenius and Verschiebung maps are
defined on $CW(K)$ by
$F(\ldots,a_2,a_1,a_0)=(\ldots,a_2^p,a_1^p,a_0^p)$ and
$V(\ldots,a_2,a_1,a_0)=(\ldots,a_3,a_2,a_1)$. They are compatible
with the canonical maps, in the sense that $\psi_n(Fa)=F\psi_n(a)$ and
$\psi_n(Va)=V\psi_n(a)$ $\forall a\in W_n(K)$.
Also $\varinjlim\Brr{p^n}(K)=\Brr{p^\infty}(K):=\bigcup_{n\geq
1}\Brr{p^n}(K)$. So we get a symbol
$$[\cdot,\cdot )_{p^\infty}:CW(K)\times K^\times\to\Brr{p^\infty}(K).$$
If $a=(\ldots,a_{-2},a_{-1},a_0)\in CW(K)$ with $a_i=0$ for $i\leq -n$
and $b\inK^\times} \newcommand\kkk[1]{K^{\times #1}$ then $a=\psi_n((a_{-n+1},\ldots,a_0))$. Therefore
$[a,b)_{p^\infty}=[(a_{-n+1},\ldots,a_0),b)_{p^n}$. Since each
$[\cdot,\cdot )_{p^n}$ is bilinear, $[\cdot,\cdot )_{p^\infty}$ is
bilinear as well.
Let $a\in CW(K)$, $b\in K^\times$. We write
$a=\psi_n((a_{-n+1},\ldots,a_0))$ for some $n\geq 1$. Then $\wp
(a)=\psi_n(\wp (a_{-n+1},\ldots,a_0))$ so
$[\wp (a),b)_{p^\infty}=[\wp (a_{-n+1},\ldots,a_0),b)_{p^n}=0$. Also,
if $b\in K^{\times p^\infty}:=\bigcap_{n\geq 1}K^{\times p^n}$ then, in
particular, $b\in K^{\times p^n}$ so $[a,b)_{p^\infty}=[
(a_{-n+1},\ldots,a_0),b)_{p^n}=0$. Hence $[a,b)_{p^\infty}=0$ if
$a\in\wp (CW(K))$ or $b\in K^{\times p^\infty}$. Since $[\cdot,\cdot
)_{p^\infty}$ is bilinear, it follows that $[a,b)_{p^\infty}$ depends
only on $a\mod\wp (CW(K))$ and $b\mod K^{\times p^\infty}$. Hence
$[\cdot,\cdot )_{p^\infty}$ can be defined as
$$[\cdot,\cdot )_{p^\infty}:CW(K)/\wp
(CW(K))\times K^\times/K^{\times p^\infty}\to\Brr{p^\infty}(K).$$
We also have $[a,b)_{p^n}=[Va,b)_{p^{n+1}}=p[a,b)_{p^{n+1}}$ $\forall
a\in W(K)$, $b\in K^\times$. More generally, if $m\geq n$ then
$p^{m-n}[a,b)_{p^m}=[a,b)_{p^n}$. Explicitly,
$p^{m-n}[(a_0,\ldots,a_{m-1}),b)_{p^m}=[(a_0,\ldots,a_{n-1}),b)_{p^n}$. This
is related to the similar relation for Hilbert symbols, $\frac
mn(a,b)_m=(a,b)_n$ (or $(a,b)_m^{\frac mn}=(a,b)_n$, in the more
familiar multiplicative notation).
There is an alternative definition of $[a,b)_{p^n}$ in terms of
cohomology.
If $a=(a_0,\ldots,a_{n-1})\in W_n(K)$ then $L=K(\wp^{-1}(a))$ is
called an Artin-Schreier-Witt extension of $K$. Explicitly, there is
$\alpha =(\alpha_0,\ldots,\alpha_{n-1})\in W_n(K_s)$ with $\wp (\alpha
)=a$ and
$L=K(\wp^{-1}(a)):=K(\alpha)=K(\alpha_0,\ldots,\alpha_{n-1})$. If
$\alpha,\beta\in\wp^{-1}(a)$ then $\beta -\alpha\in\ker\wp
=W_n(\FF_p)$. Then the extension $L/K$ is Galois and we have an
injective morphism $\chi_a:{\rm Gal}(L/K)\to W_n(\FF_p)$, given by
$\sigma\mapsto\sigma (\alpha )-\alpha$. The field $L=K(\wp^{-1}(a))$
and the morphism $\chi_a$ depend only on $a\mod\wp (W_n(K))$.
Assume that $a_0\notin\wp (K)$. By [W, Satz 13] $\chi_a$ is an
isomorphism so ${\rm Gal}(L/K)\cong W_n(\FF_p)\cong\ZZ/p^n\ZZ$. If
$x=(x_0,\ldots,x_{n-1})$ is a multivariable, then we have a surjective
morphism $K[x]/(\wp (x)-a)\to L$, given by $x\mapsto\alpha$,
i.e. $x_i\mapsto\alpha_i$ $\forall i$. Note that $\wp (x)=a$ writes as
a system of equations $x_i^p-x_i+P_i(x_0,\ldots,x_{i-1})=0$ $\forall
0\leq i\leq n-1$, for some $P_i\in K[X_0,\ldots,X_{i-1}]$. It follows
that $K[x]/(\wp (x)-a)$ has the basis $x_0^{k_0}\cdots
x_{n-1}^{k_{n-1}}$, with $0\leq k_i\leq p-1$. Then $\dim_KK[x]/(\wp
(x)-a)=p^n=\dim_KL$ so $K[x]/(\wp (x)-a)\cong L$.
If $a\in W_n(K)$ is arbitrary then let $0\leq k\leq n$ be
maximal such that $(a_0,\ldots,a_{k-1})\in\wp (W_k(K))$. Equivalently,
$k$ is maximal such that in the class of $a\mod\wp (W_n(K))$ there is
a Witt vector with $0$ on the first $k$ positions, i.e. $(a+\wp
(W_n(K)))\cap V^k(W_{n-k}(K))\neq\emptyset$. Let
$a'=(a'_0,\ldots,a'_{n-k-1})\in W_{n-k}(K)$ such that $a\equiv
V^ka'\mod\wp (W_n(K))$. If $a'_0\in\wp (K)$ then $a'\equiv Va''\mod\wp
(W_{n-k}(K))$ so $a\equiv V^ka'\equiv V^{k+1}a''\mod\wp (W_n(K))$ for
some $a''\in W_{n-k-1}(K)$, which contradicts the maximality of
$k$. So $a'_0\notin\wp (K)$. Let
$\alpha'=(\alpha'_0,\ldots,\alpha'_{n-k-1})\in W_{n-k}(K_s)$ with $\wp
(\alpha')=a'$. Then $\wp (V^k\alpha')=V^k\wp (\alpha')=V^ka'$. Since
$a\equiv V^ka'\mod\wp (W_n(K))$ we have
$L=K(\wp^{-1}(a))=K(\wp^{-1}(V^ka'))=
K(V^k\alpha')=K(\alpha')=K(\wp^{-1}(a'))$. Since $a'_0\notin\wp (K)$
we have that $\chi_{a'}:{\rm Gal}(L/K)\to W_{n-k}(\FF_p)$ is an
isomorphism. Hence ${\rm Gal}(L/K)\cong
W_{n-k}(\FF_p)\cong\ZZ/p^{n-k}\ZZ$. If $\sigma\in{\rm Gal}(L/K)$ then
$\chi_{a'}(\sigma )=\sigma (\alpha')-\alpha'$ so $\chi_a(\sigma
)=\chi_{V^ka'}(\sigma )=\sigma
(V^k\alpha')-V^k\alpha'=V^k\chi_{a'}(\sigma )$. If we identify
$W_n(\FF_p)$ and $W_{n-k}(\FF_p)$ with $\ZZ/p^n\ZZ$ and
$\ZZ/p^{n-k}\ZZ$ then $W_{n-k}(\FF_p)\xrightarrow{V^k}W_n(\FF_p)$
identifies with $\ZZ/p^{n-k}\ZZ\xrightarrow{p^k}\ZZ/p^n\ZZ$ so we have
$\chi_a(\sigma )=p^k\chi_{a'}(\sigma )$.
Given a finite Galois extension $L/K$ and $\chi :{\rm
Gal}(L/K)\to\ZZ/p^n\ZZ$ a morphism, we denote by $\tilde\chi\in{\rm
Hom_{cont}}({\rm Gal}(K_s/K),\ZZ/p^n\ZZ )=H^1(K,\ZZ/p^n\ZZ)$ the
induced morphism, given by $\tilde\chi (\sigma )=\chi
(\sigma_{|L})$. By [GS, Remark 4.3.13 2.], we have an isomorphism
$W_n(K)/\wp (W_n(K))\cong H^1(K,\ZZ/p^n\ZZ )$, given by the coboundary
morphism $W_n(K)\to H^1(K,W_n(\FF_p))=H^1(K,\ZZ/p^n\ZZ )$, which comes
from the exact sequence $0\to W_n(\FF_p)\to W_n(K_s)\xrightarrow\wp
W_n(K_s)\to 0$. Explicitly, if $a\in W_n(K)$ and $\alpha\in W_n(K_s)$
such that $\wp (\alpha )=a$ then the element of $H^1(K,W_n(\FF_p))$
corresponding to $a$ is given by $\sigma\mapsto\sigma (\alpha
)-\alpha$ so it coincides with $\tilde\chi_a$. So the isomorphism
$W_n(K)/\wp (W_n(K))\cong H^1(K,\ZZ/p^n\ZZ )$ is given by
$a\mapsto\tilde\chi_a$.
From the cup product $\cup :H^2(K,\ZZ )\otimes
H^0(K,K^\times_s)\to H^2(K,K^\times_s)=\Br (K)$ and the coboundary morphism
$\delta :H^1(K,\ZZ/p^n\ZZ )\to H^2(K,\ZZ )$ coming from the exact
sequence $0\to\ZZ\xrightarrow{p^n}\ZZ\to\ZZ/p^n\ZZ\to 0$ we get a
linear map
$$j_n:H^1(K,\ZZ/p^n\ZZ )\otimes K^\times\to\Brr{p^n}(K),\quad
j_n(\psi\otimes b)=\delta (\psi )\cup b.$$
\bpr With the above notations we have
(i) $[a,b)_{p^n}=j_n(\tilde\chi_a\otimes b)=\delta (\tilde\chi_a)\cup
b$.
(ii) $[a,b)_{p^n}=0$ if and only if $b\in{\rm N}_{L/K}(L^\times )$,
where $L=K(\wp^{-1}(a))$.
\epr
$Proof.$\,\, Assume first that $a_0\notin\wp (K)$ so that $\chi_a:{\rm
Gal}(L/K)\to\ZZ/p^n\ZZ$ is an isomorphism and we can apply [GS,
Proposition 4.7.3 and Corollary 4.7.5]. By [GS, Proposition 4.7.3] we
have $\delta (\tilde\chi_a)\cup b=[(\chi_a,b)]$, where $(\chi_a,b)$ is
the c.s.a. described in [GS, Proposition 2.5.2]. Namely, $(\chi_a,b)$
is the $K$-algebra generated by $L$ and $y$, subject to the relations
$y^{p^n}=b$ and $y\lambda y^{-1}=\sigma (\lambda )$ $\forall\lambda\in
L$. Here $\sigma$ is the preimage of $1$ under $\chi_a:{\rm
Gal}(L/K)\to W_n(\FF_p)\cong\ZZ/p^n\ZZ$, i.e. $\sigma$ is given by
$\sigma (\alpha )-\alpha =1$, i.e. by $\alpha\mapsto\alpha +1$, where
$\alpha =(\alpha_0,\ldots,\alpha_{n-1})\in\wp^{-1}(a)$. The relation
$y\lambda y^{-1}=\sigma (\lambda )$ only needs to be verified on the
generators $\alpha =(\alpha_0,\ldots,\alpha_{n-1})$ of $L/K$ so it is
equivalent to $y\alpha y^{-1}=\sigma (\alpha )=\alpha +1$. But, if
$x=(x_0,\ldots,x_{n-1})$ is a multivariable, then we have the
isomorphism $L=K[\alpha]\cong K[x]/(\wp (x)-a)$, given by
$\alpha\mapsto x$, i.e. $\alpha_i\mapsto x_i$ $\forall i$. Hence
$(\chi_a,b)$ is the algebra generated by $x=(x_0,\ldots,x_{n-1})$ and
$y$, where $x_i$'s commute with each other, $\wp (x)=a$, $y^{p^n}=b$
and $yxy^{-1}=x+1$. Hence $(\chi_a,b)=A_{[a,b)_{p^n}}$, so
$\delta(\tilde\chi_a)\cup b=[(\chi_a,b)]=[a,b)_{p^n}$. By [GS,
Corollary 4.7.5] we also have that
$[a,b)_{p^n}=\delta(\tilde\chi_a)\cup b=0$ iff $b\in{\rm
N}_{L/K}(L^\times )$.
If $a\in W_n(K)$ is arbitrary then let $0\leq k\leq n$ be maximal
such that $(a_0,\ldots,a_{k-1})\in\wp (W_k(K))$. Then there is
$a'=(a'_0,\ldots,a'_{n-k-1})\in W_{n-k}(K)$ with $a'_0\notin\wp (K)$
such that $a\equiv V^ka'\mod\wp (W_n(K))$. We have
$L=K(\wp^{-1}(a))=K(\wp^{-1}(a'))$ and $\chi_a=p^k\chi_{a'}$. Since
$a'_0\notin\wp (K)$ we have
$[a',b)_{p^{n-k}}=\delta'(\tilde\chi_{a'})\cup b$, where
$\delta':H^1(K,\ZZ/p^{n-k}\ZZ )\to H^2(K,\ZZ )$ is the coboundary
morphism obtained from the exact sequence
$0\to\ZZ\xrightarrow{p^{n-k}}\ZZ\to\ZZ/p^{n-k}\ZZ\to 0$, and
$[a',b)_{p^{n-k}}=0$ iff $b\in{\rm N}_{L/K}(L^\times )$. But
$\chi_a=p^k\chi_{a'}$, so $\tilde\chi_a=p^k\tilde\chi_{a'}$, which, by
straightforward calculations, implies that
$\delta(\tilde\chi_a)=\delta'(\tilde\chi_{a'})$, and $a\equiv
V^ka'\mod\wp (W_n(K))$ so $[a,b)_{p^n}=[V^ka',b)_{p^n}=[a',b)_{p^{n-k}}$.
Hence $[a,b)_{p^n}=\delta(\tilde\chi_a)\cup b$ and $[a,b)_{p^n}=0$ iff
$b\in{\rm N}_{L/K}(L^\times )$. \mbox{$\Box$}\vspace{\baselineskip}
\bpr The group $\Brr{p^n}(K)$ is generated by the image of
$[\cdot,\cdot )_{p^n}$.
\epr
$Proof.$\,\, By [GS, Theorem 9.1.4] the map $j_n$ is surjective so
$\Brr{p^n}(K)$ is generated by $j_n(\psi\otimes b)=\delta (\psi )\cup
b$, with $\psi\in H^1(K,\ZZ/p^n\ZZ)$, $b\in K^\times$. But every $\psi\in
H^1(K,\ZZ/p^n\ZZ )$ writes as $\tilde\chi_a$ for some $a\in W_n(K)$
and $\delta (\tilde\chi_a)\cup b=[a,b)_{p^n}$. Hence $\Brr{p^n}(K)$ is
generated by $[a,b)_{p^n}$ with $a\in W_n(K)$ and $b\in K^\times$. \mbox{$\Box$}\vspace{\baselineskip}
For the purpose of this paper we only need Proposition 1.1(ii). This
result is very likely already known. However, we did not find it stated
explicitly in the general case in the literature, so we have provided a
proof here.
\medskip
If $K$ is a local field then we have another definition of
$[\cdot,\cdot )_{p^n}$, in terms of the local Artin map, with values
in $W_n(\FF_p)$. Namely, if $a\in W_n(K)$ and $b\in K^\times$ then we
take $\alpha\in W_n(K_s)$ with $\wp (\alpha )=a$ and we define
$[a,b)_{p^n}:=(b,K(\alpha )/K)(\alpha )-\alpha\in W_n(\FF_p)$. This
new definition of $[\cdot,\cdot )_{p^n}$, with values in $W_n(\FF_p)$,
is related to the initial one, with values in $\Brr{p^n}(K)$, via the
local invariant $inv:\Br (K)\xrightarrow\sim\QQ/\ZZ$. It sends
$\Brr{p^n}(K)$ to $\frac 1{p^n}\ZZ/\ZZ$ so we have
$\Brr{p^n}(K)\cong\frac 1{p^n}\ZZ/\ZZ\cong\ZZ/p^n\ZZ\cong
W_n(\FF_p)$. (See [FV, (7.3)], where we have a general statement for
all cyclic algebras.)
\section{A key lemma}
In this section we prove that for every $b\in K^\times$ we have
$[[b],b)_{p^n}=0$. This result, together with [GS, Theorem 9.2.4] and
the basic properties of the $[\cdot,\cdot )_{p^n}$, the bilinearity
and the relations $[\wp (a),b)_{p^n}=0$ (or, equivalently,
$[Fa,b)_{p^n}=[a,b)_{p^n}$) and $[a,b)_{p^n}=[Va,b)_{p^{n+1}}$, are
all the ingredients we will use in this paper.
\blm If $R$ is a ring and ${\mathfrak a}\sbq R$ is an ideal then for
every $n\geq 1$ we have:
(i) $W_n({\mathfrak a})$ is an ideal of $W_n(R)$.
(ii) If $\alpha_h$, with $h\in S$, generate $({\mathfrak a},+)$ then
$V^i[\alpha_h]$, with $h\in S$ and $0\leq i\leq n-1$, generate
$(W_n({\mathfrak a}),+)$.
\elm
$Proof.$\,\, (i) We have $W_n({\mathfrak a})=\ker(W_n(R)\to
W_n(R/{\mathfrak a}))$.
(ii) We use the induction on $n$. When $n=1$ we have $W_1({\mathfrak
a})={\mathfrak a}$, which is generated by $[\alpha_h]=\alpha_h$ with
$h\in S$.
Suppose now that $n>1$ and let $a=(a_0,\ldots,a_{n-1})\in
W_n({\mathfrak a})$. Then $a_0\in{\mathfrak a}$ writes as
$\sum_{h\in S}m_h\alpha_h$ for some $m_h\in\ZZ$ with $m_h=0$ for
almost all $h\in S$. Then $a-\sum_{h\in S}m_h[\alpha_h]$ belongs to
$(W_n({\mathfrak a}),+)$ and its first entry is $a_0-\sum_{h\in
S}m_h\alpha_h=0$. It follows that $a-\sum_{h\in S}m_h[\alpha_h]=Vb$
for some $b=(b_0,\ldots,b_{n-2})\in W_{n-1}({\mathfrak a})$. By the
induction hypothesis, $b$ writes as a linear combination with
coefficients in $\ZZ$ of $V^i[\alpha_h]$, with $h\in S$ and $0\leq
i\leq n-2$. It follows that $Vb$ writes as a linear combination of
$V^i[\alpha_h]$, with $h\in S$ and $1\leq i\leq n-1$. From here we
conclude that $a=\sum_{h\in S}m_h[\alpha_h]+Vb$ writes as a linear
combination of $V^i[\alpha_h]$, with $h\in S$ and $0\leq i\leq
n-1$. \mbox{$\Box$}\vspace{\baselineskip}
\blm (i) If $n\geq 1$ then $[[b],b)_{p^n}=0$ for every $b\in K^\times$.
(ii) More generally, if $k\sbq K$ is a perfect field and $a\in
W_n(bk[b])$ then $[a,b)_{p^n}=0$.
\elm
$Proof.$\,\, First we prove (i) at $n=1$. By Proposition 1.1(ii) $[b,b)_p=0$
iff $b\in{\rm N}_{L/K}(L^\times )$, where $L=K(\wp^{-1}(b))$. If
$b\in\wp (K)$ then $[b,b)_p=0$ and $L=K$ so our claim is trivial. If
$b\notin\wp (K)$ then $L/K$ is an Artin-Schreier extension, of degree
$p$. We have $L=K(\alpha )$ for some $\alpha$ with $\wp
(\alpha)=b$. Then the minimal polynomial of $\alpha$ is $X^p-X-b$, so
${\rm N}_{L/K}(\alpha )=(-1)^p(-b)=b$. (Including the case when $\car
K=p=2$.) Thus $b\in{\rm N}_{L/K}(L^\times )$.
Next, we prove (i) and (ii) by induction on $n$ in two steps.
{\em Step 1.} We prove that if $n\geq 1$ then (i) at $n$ implies (ii)
at $n$. We use Lemma 2.1(ii). Since the ideal ${\mathfrak a}=bk[b]$ of
the ring $R=k[b]$ is generated, as a group, by $cb^h$, with $h\geq 1$
and $c\in k\setminus\{ 0\}$, we get that $W_n(bk[b])$ is generated
by $V^i[cb^h]$, with $h\geq 1$, $c\in k\setminus\{ 0\}$ and $0\leq
i\leq n-1$. Hence $a$ can be written as a linear combination with coefficients
in $\ZZ$ of the generators $V^i[cb^h]$ and, by the linearity in the
first variable of $[\cdot,\cdot )_{p^n}$, $[a,b)_{p^n}$ is a
linear combination of $[V^i[cb^h],b)_{p^n}$. So it is enough to prove
that $[V^i[cb^h],b)_{p^n}=0$ for $h\geq 1$, $c\in k\setminus\{ 0\}$
and $0\leq i\leq n-1$. But $[V^i[cb^h],b)_{p^n}=p^i[[cb^h],b)_{p^n}$
so we only have to prove that $[[cb^h],b)_{p^n}=0$ (i.e. the case
$i=0$). We write $h=p^sl$ with $s\geq 0$ and $(p,l)=1$. Since $k$ is
perfect we have $c=d^{p^{n+s}}$ for some $d\in k\setminus\{ 0\}$. Then
$[[cb^h],b)_{p^n}=[[d^{p^{n+s}}b^{p^sl}],b)_{p^n}=
[F^s[d^{p^n}b^l],b)_{p^n}=[[d^{p^n}b^l],b)_{p^n}$. By (i) we have
$0=[[d^{p^n}b^l],d^{p^n}b^l)_{p^n}=
p^n[[d^{p^n}b^l],d)_{p^n}+l[[d^{p^n}b^l],b)_{p^n}=
l[[d^{p^n}b^l],b)_{p^n}$. Since also $p^n[[d^{p^n}b^l],b)_{p^n}=0$ and
$(p^n,l)=1$ we get $[[d^{p^n}b^l],b)_{p^n}=0$, as claimed.
{\em Step 2.} We prove that if $n>1$ then (ii) at $n-1$ implies (i) at
$n$. Let $\alpha =(\alpha_0,\ldots,\alpha_{n-1})\in W_n(K_s)$ with
$\wp (\alpha )=[b]$. By Proposition 1.1(ii), we must prove that
$b\in{\rm N}_{L/K}(L^\times )$, where $L=K(\wp^{-1}([b]))=K(\alpha )$.
When we identify the first coordinate in the equality $F\alpha -\alpha
=\wp (\alpha )=[b]$ we get $\alpha_0^p-\alpha_0=b$. We have $\alpha
=(\alpha_0,0,\ldots,0)+(0,\alpha_1,\ldots,\alpha_{n-1})=
[\alpha_0]+V\alpha'$, where $\alpha'=(\alpha_1,\ldots,\alpha_{n-1})\in
W_{n-1}(K_s)$. Then $[b]=\wp (\alpha )=\wp ([\alpha_0])+\wp (V\alpha')$
so $V\wp (\alpha')=\wp (V\alpha')=[b]-\wp
([\alpha_0])=[b]-[\alpha_0^p]+[\alpha_0]$. But
$[b]=[\alpha_0^p-\alpha_0]$, $[\alpha_0^p]$ and $[\alpha_0]$ belong to
$W_n(\alpha_0\FF_p[\alpha_0])$, which, by Lemma 2.1(i), is an ideal
of $W_n(\FF_p[\alpha_0])$. It follows that $V\wp
(\alpha')=[b]-[\alpha_0^p]+[\alpha_0]\in
W_n(\alpha_0\FF_p[\alpha_0])$ so $a:=\wp (\alpha')$ has all the
coordinates in $\alpha_0\FF_p[\alpha_0]$, i.e. $a\in
W_{n-1}(\alpha_0\FF_p[\alpha_0])$. Let now $K'=K[\alpha_0]$. We have
$\wp (\alpha_0)=b$ so $K'=K(\wp^{-1}(b))$. Also
$L=K(\alpha_0,\ldots,\alpha_{n-1})=
K'(\alpha_1,\ldots,\alpha_{n-1})=K'(\alpha')$. Since $\wp
(\alpha')=a\in W_{n-1}(\FF_p[\alpha_0])\sbq W_{n-1}(K')$ we have
$L=K'(\wp^{-1}(a))$. Now $\alpha_0\in K'^\times$, $\FF_p\sbq K'$ is a
perfect field and $a\in W_{n-1}(\alpha_0\FF_p[\alpha_0])$. By (ii) at
$n-1$, this implies that $[a,\alpha_0)_{K',p^{n-1}}=0$. Since
$L=K'(\wp^{-1}(a))$, by Proposition 1.1(ii) we have $\alpha_0\in{\rm
N}_{L/K'}(L^\times )$. There are two cases:
a) If $b\notin\wp (K)$ then, as seen from the proof of (i) in the case
$n=1$, $\wp (\alpha_0)=b$ implies that $K'=K(\alpha_0)$ is an
Artin-Schreier extension of $K$ and ${\rm N}_{K'/K}(\alpha_0)=b$. But
we also have $\alpha_0\in{\rm N}_{L/K'}(L^\times )$ so $\alpha_0={\rm
N}_{L/K'}(\gamma )$ for some $\gamma\in L^\times$. It follows that
$b={\rm N}_{K'/K}({\rm N}_{L/K'}(\gamma ))= {\rm N}_{L/K}(\gamma )$ so
$b\in{\rm N}_{L/K}(L^\times )$.
b) If $b\in\wp (K)$ then $K'=K(\wp^{-1}(b))=K$. Hence $\alpha_0\in{\rm
N}_{L/K}(L^\times )$. By the same reasoning, for any other
$\beta=(\beta_0,\ldots,\beta_{n-1})$ with $\wp (\beta )=[b]$ we have
that $\beta_0$ is in the norm group of $L=K(\wp^{-1}([b]))/K$. Let
$0\leq h\leq p-1$. We have $[h]=(h,0,\ldots,0)\in
W_n(\FF_p)=\ker\wp$. So if we take $\beta =\alpha +[h]$ then $\wp
(\beta )=\wp (\alpha )+\wp ([h])=[b]+0=[b]$. By identifying the first
coordinate in the equality $\beta =\alpha +[h]$, we get
$\beta_0=\alpha_0+h$. Hence $\alpha_0+h=\beta_0\in{\rm N}_{L/K}(L^\times
)$. Since $\alpha_0+h\in{\rm N}_{L/K}(L^\times )$ for $0\leq h\leq p-1$,
we have $b=\alpha_0^p-\alpha_0=\alpha_0(\alpha_0+1)\cdots
(\alpha_0+p-1)\in{\rm N}_{L/K}(L^\times )$ and we are done. \mbox{$\Box$}\vspace{\baselineskip}
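The final step above rests on the polynomial identity $X^p-X=\prod_{h=0}^{p-1}(X+h)$ in $\FF_p[X]$ (the roots of $X^p-X$ are exactly the elements of $\FF_p$). As an informal aside, this identity can be sanity-checked by direct computation; the following sketch, with the arbitrary choice $p=7$, compares coefficients.

```python
p = 7  # any prime works; 7 is an arbitrary choice

def poly_mul(f, g):
    """Multiply polynomials over F_p given as coefficient lists
    (index = degree), reducing coefficients mod p."""
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] = (h[i + j] + fi * gj) % p
    return h

# prod_{h=0}^{p-1} (X + h) in F_p[X]
prod = [1]
for h in range(p):
    prod = poly_mul(prod, [h, 1])

# X^p - X in F_p[X]: coefficient p-1 (= -1) in degree 1, coefficient 1 in degree p
target = [0, p - 1] + [0] * (p - 2) + [1]
assert prod == target
```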
In this paper we only use Lemma 2.2(i). We stated the stronger
result from (ii) only because it makes the induction possible. In
fact, for the induction to work we only need the statement (ii) for
$k=\FF_p$.
Note that the proof of $[[b],b)_{p^n}=0$ was done with rather
rudimentary methods. There is an alternative proof using class field
theory. We can also prove Lemma 2.2(ii), but only when $k$ is finite. So
we have $b\in K^\times$, $k=\FF_q\sbq K$ for some $p$-power $q$, $a\in
W_n(b\FF_q[b])$ and we want to prove that $[a,b)_{p^n}=0$.
First note that we can reduce to the case when $K=\FF_q(b)$. Indeed,
we have $a\in W_n(\FF_q(b))$ and $\FF_q(b)\sbq K$ so if
$[a,b)_{\FF_q(b),p^n}=0$ then $[a,b)_{K,p^n}=0$, as well. (We have
$A_{[a,b)_{K,p^n}}=A_{[a,b)_{\FF_q(b),p^n}}\otimes_{\FF_q(b)}K$ so if
$A_{[a,b)_{\FF_q(b),p^n}}\cong M_{p^n}(\FF_q(b))$ then also
$A_{[a,b)_{K,p^n}}\cong M_{p^n}(K)$.)
If $b$ is algebraic over $\FF_q$ then $K=\FF_q(b)$ is finite so
$[a,b)_{p^n}=0$ follows from $\Br (K)=\{0\}$. So we may assume that
$b$ is transcendental over $\FF_q$. It follows that $K$ is a global
field. Then we have the exact sequence
$$0\to\Br (K)\to\bigoplus_{v\in\Omega_K}\Br (K_v)\to\QQ/\ZZ\to
0.$$
Here $\Omega_K$ is the set of all places of $K$ and the first map is
given by the localizations, $\xi\mapsto (\xi_v)_{v\in\Omega_K}$,
i.e. $[A]\mapsto ([A\otimes_KK_v])_{v\in\Omega_K}$ for any c.s.a.
$A$ over $K$. The second map is given by
$(\xi_v)_{v\in\Omega_K}\mapsto\sum_{v\in\Omega_K}{\rm inv}_v(\xi_v)$, where
${\rm inv}_v:\Br (K_v)\xrightarrow\sim\QQ/\ZZ$ is the local invariant of the
Brauer group. Then for any $\xi\in\Br (K)$ we have $\xi =0$ iff
$\xi_v=0$ $\forall v\in\Omega_K$. If $v_0\in\Omega_K$ and $\xi_v=0$
only holds for $v\in\Omega_K\setminus\{ v_0\}$ then
$0=\sum_{v\in\Omega_K}{\rm inv}_v(\xi_v)={\rm inv}_{v_0}(\xi_{v_0})$ so
$\xi_{v_0}=0$ as well. Therefore in order to prove that $\xi =0$ it
suffices to prove that $\xi_v=0$ holds for all but one value of
$v\in\Omega_K$.
If $v\in\Omega_K$ then we denote by ${\mathcal O}_v$ the ring of
integers from $K_v$, by ${\mathfrak p}_v$ the prime ideal and by
${\mathcal O}_v^\times ={\mathcal O}_v\setminus{\mathfrak p}_v$ the
group of units.
For every monic irreducible $f\in\FF_q[b]$ we have the place $v_f$ of
$K$ corresponding to the prime ideal $(f)$ of $\FF_q[b]$. Besides
these places, we have the place $v_\infty$ corresponding to the norm
$|\cdot |_\infty$, given by $|g/h|_\infty =q^{\deg g-\deg h}$ for every
$g,h\in\FF_q[b]\setminus\{0\}$.
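Since $|\cdot|_\infty$ is determined by degrees alone, it is trivial to compute; the following sketch (an illustrative aside, with the arbitrary choice $q=2$ and polynomials represented only by their degrees) evaluates the formula above.

```python
q = 2  # an arbitrary choice of prime power for illustration

def abs_infty(deg_g, deg_h):
    """|g/h|_infty = q^(deg g - deg h) for nonzero g, h in F_q[b],
    represented here just by their degrees."""
    return q ** (deg_g - deg_h)

# e.g. |(b^3 + b)/(b + 1)|_infty = 2^(3-1) = 4
assert abs_infty(3, 1) == 4
```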
Let $\xi =[a,b)_{p^n}\in\Br (K)$. Then for every $v\in\Omega_K$ we
have $\xi_v=[a,b)_{v,p^n}:=[a,b)_{K_v,p^n}$. So in order to prove that
$[a,b)_{p^n}=0$ it is enough to prove that $[a,b)_{v,p^n}=0$ holds for
every place $v$, except $v=v_\infty$. We have two cases.
If $v=v_f$ for some monic irreducible $f\in\FF_q[b]$, $f\neq b$, then
$b\in{\mathcal O}_v^\times$ and the entries of $a$ belong to
$b\FF_q[b]\sbq{\mathcal O}_v$, which, by [T, Corollary 2.1], implies
that $L_v:=K_v(\wp^{-1}(a))$ is an unramified extension of
$K_v$. Since $L_v/K_v$ is unramified and $b\in{\mathcal O}_v^\times$,
we have $b\in{\rm N}_{L_v/K_v}(L_v^\times )$ and so
$[a,b)_{v,p^n}=0$.
If $v=v_b$ then the entries of $a$ belong to $b\FF_q[b]\sbq{\mathfrak
p}_v$. By [T, Proposition 6.1], this implies that $a\in\wp
(W_n(K_v))$, so again $[a,b)_{v,p^n}=0$.
\section{The symbols $((\cdot,\cdot ))_{p^n}$ and $((\cdot,\cdot
))_{p^m,p^n}$}
From now on we make the convention that $[0,0)_{p^n}=0$.
\bdf For any field $K$ of characteristic $p$ and $n\geq 1$ we define
the symbol
$$((\cdot,\cdot ))_{p^n}:W_n(K)\times W_n(K)\to\Brr{p^n}(K)$$
as follows. If $a=(a_0,\ldots,a_{n-1}),\, b=(b_0,\ldots,b_{n-1})\in
W_n(K)$ then
$$((a,b))_{p^n}:=\sum_{j=0}^{n-1}[F^ja[b_j],b_j)_{p^n}.$$
By our convention, if $b_j=0$ then
$[F^ja[b_j],b_j)_{p^n}=[0,0)_{p^n}=0$ so the terms with $b_j=0$
should be ignored in the sum above.
In particular, if $a=0$ or $b=0$ then $((a,b))_{p^n}=0$.
\edf
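Spelled out in the smallest nontrivial case $n=2$, with $a=(a_0,a_1)$ and $b=(b_0,b_1)$, the definition reads
$$((a,b))_{p^2}=[a[b_0],b_0)_{p^2}+[Fa[b_1],b_1)_{p^2},$$
where the first term is ignored if $b_0=0$ and the second if $b_1=0$.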
\begin{bof}\rm {\bf Remarks}
(1) If $a\in W_n(K)$, $b\in K$ then all but the first term in the
definition of $((a,[b]))_{p^n}$ are zero and we have
$((a,[b]))_{p^n}=[a[b],b)_{p^n}$.
Thus $[\cdot,\cdot )_{p^n}$ can be expressed in terms of $((\cdot,\cdot
))_{p^n}$ as $[a,b)_{p^n}=((a[b]^{-1},[b]))_{p^n}$.
(2) If $n=1$ then $((\cdot,\cdot ))_p$ is defined by
$((a,b))_p=[ab,b)_p$.
\eff
\blm (i) With the notation from Definition 1, we have
$$((a,b))_{p^n}=\sum_{i,j=0}^{n-1}[[a_i^{p^j}b_j^{p^i}],b_j^{p^i})_{p^n}.$$
(ii) If $a,b\in K$ and $k,l\geq 0$ then
$((V^k[a],V^l[b]))_{p^n}=[[a^{p^l}b^{p^k}],b^{p^k})_{p^n}$.
\elm
$Proof.$\,\, (i) We prove that $[F^ja[b_j],b_j)_{p^n}=
\sum_{i=0}^{n-1}[[a_i^{p^j}b_j^{p^i}],b_j^{p^i})_{p^n}$ for every
$j$. Then our result follows by summation over $j$.
If $b_j=0$ this is just $0=0$ so we may assume that $b_j\neq 0$. We
have
$$F^ja[b_j]=(a_0^{p^j},\ldots,a_{n-1}^{p^j})[b_j]=
(a_0^{p^j}b_j,\ldots,a_{n-1}^{p^j}b_j^{p^{n-1}})=
\sum_{i=0}^{n-1}V^i[a_i^{p^j}b_j^{p^i}].$$
It follows that
$$[F^ja[b_j],b_j)_{p^n}=
\sum_{i=0}^{n-1}[V^i[a_i^{p^j}b_j^{p^i}],b_j)_{p^n}=
\sum_{i=0}^{n-1}[[a_i^{p^j}b_j^{p^i}],b_j^{p^i})_{p^n}.$$
(We have $[V^ia,b)_{p^n}=p^i[a,b)_{p^n}=[a,b^{p^i})_{p^n}$.) Hence the
conclusion.
(ii) Assume first that $k,l\leq n-1$. We use (i) in the case when
$a_i=0$ for $i\neq k$ and $b_j=0$ if $j\neq l$, i.e. when $a=V^k[a_k]$
and $b=V^l[b_l]$. It follows that
$[[a_i^{p^j}b_j^{p^i}],b_j^{p^i})_{p^n}=0$ for $(i,j)\neq (k,l)$ and
so $((V^k[a_k],V^l[b_l]))_{p^n}=((a,b))_{p^n}=
[[a_k^{p^l}b_l^{p^k}],b_l^{p^k})_{p^n}$. When we drop the indices $k$
and $l$ we get our result.
If $k\geq n$ or $l\geq n$ then $V^k[a]=0$ or $V^l[b]=0$ so
$((V^k[a],V^l[b]))_{p^n}=0$. If $k\geq n$ then $b^{p^k}$ is a
$p^n$-power so $[[a^{p^l}b^{p^k}],b^{p^k})_{p^n}=0$ and we are
done. Similarly, if $l\geq n$ then
$[[a^{p^l}b^{p^k}],a^{p^l})_{p^n}=0$. But by Lemma 2.2 we also have
$0=[[a^{p^l}b^{p^k}],a^{p^l}b^{p^k})_{p^n}=
[[a^{p^l}b^{p^k}],a^{p^l})_{p^n}+[[a^{p^l}b^{p^k}],b^{p^k})_{p^n}$ so
again $[[a^{p^l}b^{p^k}],b^{p^k})_{p^n}=0$. \mbox{$\Box$}\vspace{\baselineskip}
\bpr The symbol $((\cdot,\cdot))_{p^n}$ has the following properties.
(i) $((a,b))_{p^n}=((a+F^nc,b+F^nd))_{p^n}$ $\forall a,b,c,d\in
W_n(K)$.
(ii) $((\cdot,\cdot ))_{p^n}$ is bilinear.
(iii) $((a,b))_{p^n}=-((b,a))_{p^n}$ $\forall a,b\in W_n(K)$,
i.e. $((\cdot,\cdot))_{p^n}$ is skew-symmetric.
(iv) $((a,bc))_{p^n}+((b,ac))_{p^n}+((c,ab))_{p^n}=0$ $\forall
a,b,c\in W_n(K)$.
\epr
$Proof.$\,\, (iii) If $a=(a_0,\ldots,a_{n-1})$ and $b=(b_0,\ldots,b_{n-1})$
then by Lemma 3.2(i) we have $((a,b))_{p^n}+((b,a))_{p^n}=
\sum_{i,j=0}^{n-1}[[a_i^{p^j}b_j^{p^i}],b_j^{p^i})_{p^n}+
\sum_{i,j=0}^{n-1}[[a_i^{p^j}b_j^{p^i}],a_i^{p^j})_{p^n}$. But by
Lemma 2.2 for every $i,j$ we have
$0=[[a_i^{p^j}b_j^{p^i}],a_i^{p^j}b_j^{p^i})_{p^n}=
[[a_i^{p^j}b_j^{p^i}],a_i^{p^j})_{p^n}+
[[a_i^{p^j}b_j^{p^i}],b_j^{p^i})_{p^n}$. Hence
$((a,b))_{p^n}+((b,a))_{p^n}=0$.
(ii) The linearity of $((\cdot,\cdot ))_{p^n}$ in the first variable
follows directly from the definition. Then the linearity in the second
variable will follow from the skew-symmetry, which we have already
proved.
(i) Since $((\cdot,\cdot ))_{p^n}$ is bilinear it suffices to prove
that $((F^na,b))_{p^n}=((a,F^nb))_{p^n}=0$ $\forall a,b\in
W_n(K)$. By using the formula $[a,b^{p^n})_{p^n}=0$ we get
$((a,F^nb))_{p^n}=\sum_{j=0}^{n-1}[F^ja[b_j^{p^n}],b_j^{p^n})_{p^n}=0$. Then
$((F^na,b))_{p^n}=0$ follows from the skew-symmetry.
(iv) We must prove that the map $f:W_n(K)^3\to\Brr{p^n}(K)$ given
by $(a,b,c)\mapsto ((a,bc))_{p^n}+((b,ac))_{p^n}+((c,ab))_{p^n}$ is
identically zero. Now $((\cdot,\cdot ))_{p^n}$ is bilinear so $f$ is
linear in each variable. Since $(W_n(K),+)$ is generated by $S=\{
V^i[a]\, :\, a\in K^\times,\, 0\leq i\leq n-1\}$ it suffices to prove that
$f(a,b,c)=0$ when $a,b,c\in S$. So we must prove that
$f(V^i[a],V^j[b],V^k[c])=0$ $\forall a,b,c\in K^\times$, $0\leq i,j,k\leq
n-1$. We have $V^j[b]V^k[c]= V^{j+k}(F^k[b]F^j[c])=
V^{j+k}[b^{p^k}c^{p^j}]$. By Lemma 3.2(ii)
$((V^i[a],V^j[b]V^k[c]))_{p^n}= -((V^j[b]V^k[c],V^i[a]))_{p^n}$ writes
as
\begin{multline*}
-((V^{j+k}[b^{p^k}c^{p^j}],V^i[a]))_{p^n}=
-[[(b^{p^k}c^{p^j})^{p^i}a^{p^{j+k}}],a^{p^{j+k}})_{p^n}\\
=-[[a^{p^{j+k}}b^{p^{i+k}}c^{p^{i+j}}],a^{p^{j+k}})_{p^n}.
\end{multline*}
Similarly for the remaining two terms of
$f(V^i[a],V^j[b],V^k[c])$. Hence
\begin{multline*}
f(V^i[a],V^j[b],V^k[c])=
-[[a^{p^{j+k}}b^{p^{i+k}}c^{p^{i+j}}],a^{p^{j+k}})_{p^n}
-[[a^{p^{j+k}}b^{p^{i+k}}c^{p^{i+j}}],b^{p^{i+k}})_{p^n}\\
-[[a^{p^{j+k}}b^{p^{i+k}}c^{p^{i+j}}],c^{p^{i+j}})_{p^n}=
-[[a^{p^{j+k}}b^{p^{i+k}}c^{p^{i+j}}],a^{p^{j+k}}b^{p^{i+k}}c^{p^{i+j}})_{p^n},
\end{multline*}
which is zero by Lemma 2.2. \mbox{$\Box$}\vspace{\baselineskip}
Properties (i)-(iii) can be summarized as follows.
\bco $((\cdot,\cdot ))_{p^n}$ is a bilinear skew-symmetric map defined
as
$$((\cdot,\cdot ))_{p^n}:W_n(K)/F^n(W_n(K))\times
W_n(K)/F^n(W_n(K))\to\Brr{p^n}(K).$$
Note that $F^n(W_n(K))$ can also be written as $W_n(K^{p^n})$.
\eco
\blm For every ring $R$ there is a group isomorphism
$$(R\otimes R)/\langle a\otimes bc-ab\otimes c-ac\otimes b\, :\,
a,b,c\in R\rangle\to\Omega^1(R)$$
given by $x\otimes y\mapsto x\mathop{}\!\mathrm{d} y$.
\elm
$Proof.$\,\, $\Omega^1(R)$ is the $R$-module generated by $\mathop{}\!\mathrm{d} a$ with $a\in
R$, subject to $\mathop{}\!\mathrm{d} (a+b)=\mathop{}\!\mathrm{d} a+\mathop{}\!\mathrm{d} b$ and $\mathop{}\!\mathrm{d} (ab)=a\mathop{}\!\mathrm{d}
b+b\mathop{}\!\mathrm{d} a$ $\forall a,b\in R$. Then $\Omega^1(R)$ can be written as $M/N$,
where $M$ is the $R$-module generated by $\mathop{}\!\mathrm{d} a$ with $a\in R$
subject to $\mathop{}\!\mathrm{d} (a+b)=\mathop{}\!\mathrm{d} a+\mathop{}\!\mathrm{d} b$ and $N$ is the $R$-submodule
of $M$ generated by $\mathop{}\!\mathrm{d} (ab)-a\mathop{}\!\mathrm{d} b-b\mathop{}\!\mathrm{d} a$, with $a,b\in R$.
We claim that there is a group isomorphism $f:R\otimes R\to M$ given
by $x\otimes y\mapsto x\mathop{}\!\mathrm{d} y$. The existence of $f$ defined this way
follows from the fact that the map $R\times R\to M$ given by $(x,y)\mapsto
x\mathop{}\!\mathrm{d} y$ is bilinear. (In $M$ we have $(a+b)\mathop{}\!\mathrm{d} c=a\mathop{}\!\mathrm{d} c+b\mathop{}\!\mathrm{d}
c$ and $a\mathop{}\!\mathrm{d} (b+c)=a(\mathop{}\!\mathrm{d} b+\mathop{}\!\mathrm{d} c)=a\mathop{}\!\mathrm{d} b+a\mathop{}\!\mathrm{d} c$.)
Conversely, we regard $R\otimes R$ as an $R$-module by defining
$x\alpha :=(x\otimes 1)\alpha$ $\forall x\in R,\, \alpha\in R\otimes
R$ and we define a morphism of $R$-modules $g:M\to R\otimes R$ by
$\mathop{}\!\mathrm{d} x\mapsto 1\otimes x$. This is well defined because the
relations among generators in $M$, $\mathop{}\!\mathrm{d} (a+b)=\mathop{}\!\mathrm{d} a+\mathop{}\!\mathrm{d} b$, are
preserved by $g$. (We have $1\otimes (a+b)=1\otimes a+1\otimes b$.)
Now $g(x\mathop{}\!\mathrm{d} y)=xg(\mathop{}\!\mathrm{d} y)=x(1\otimes y)=(x\otimes 1)(1\otimes
y)=x\otimes y$. It follows that $f$ and $g$ are mutually inverse
group isomorphisms.
Then $f$ induces a group isomorphism $(R\otimes R)/g(N)\to
M/N=\Omega^1(R)$, given by $x\otimes y\mapsto x\mathop{}\!\mathrm{d} y$. Now, as an
$R$-module, $N$ is generated by $\mathop{}\!\mathrm{d} (bc)-b\mathop{}\!\mathrm{d} c-c\mathop{}\!\mathrm{d} b$, with
$b,c\in R$. As a group, it will be generated by $a(\mathop{}\!\mathrm{d} (bc)-b\mathop{}\!\mathrm{d}
c-c\mathop{}\!\mathrm{d} b)=a\mathop{}\!\mathrm{d} (bc)-ab\mathop{}\!\mathrm{d} c-ac\mathop{}\!\mathrm{d} b$, with $a,b,c\in R$. It
follows that $g(N)$ is the group generated by $g(a\mathop{}\!\mathrm{d} (bc)-ab\mathop{}\!\mathrm{d}
c-ac\mathop{}\!\mathrm{d} b)=a\otimes bc-ab\otimes c-ac\otimes b$, with $a,b,c\in
R$. Hence the conclusion. \mbox{$\Box$}\vspace{\baselineskip}
\bpr There is a group morphism
$\alpha_{p^n}:\Omega^1(W_n(K))/\mathop{}\!\mathrm{d} W_n(K)\to\Brr{p^n}(K)$ given by
$a\mathop{}\!\mathrm{d} b\mapsto ((a,b))_{p^n}$.
In particular, if $n=1$ then $((a,b))_p=[ab,b)_p$ (see Remark 3.1(2))
so we recover the original definition of $\alpha_p$ from the
introduction.
\epr
$Proof.$\,\, For convenience, we write $((\cdot,\cdot ))$ instead of
$((\cdot,\cdot ))_{p^n}$. By Proposition 3.3(ii) $((\cdot,\cdot
)):W_n(K)\times W_n(K)\to\Brr{p^n}(K)$ is bilinear so there is
a group morphism $f:W_n(K)\otimes W_n(K)\to\Brr{p^n}(K)$ given
by $a\otimes b\mapsto ((a,b))$. By Proposition 3.3(iii) and (iv) for
every $a,b,c\in W_n(K)$ we have $f(a\otimes bc-ab\otimes c-ac\otimes
b)=((a,bc))-((ab,c))-((ac,b))=((a,bc))+((c,ab))+((b,ac))=0$. So $f$
can be defined on
$$(W_n(K)\otimes W_n(K))/\langle a\otimes bc-ab\otimes
c-ac\otimes b\, :\, a,b,c\in W_n(K)\rangle,$$
which, by Lemma 3.5, is isomorphic to $\Omega^1(W_n(K))$, via
$a\otimes b\mapsto a\mathop{}\!\mathrm{d} b$. Then we get a group morphism
$\alpha_{p^n}:\Omega^1(W_n(K))\to\Brr{p^n}(K)$ given by $a\mathop{}\!\mathrm{d}
b\mapsto f(a\otimes b)=((a,b))$. But for every $a,b\in W_n(K)$ we
have $\alpha_{p^n}(\mathop{}\!\mathrm{d} (ab))=\alpha_{p^n}(a\mathop{}\!\mathrm{d} b+b\mathop{}\!\mathrm{d}
a)=((a,b))+((b,a))=0$. In particular, if $b=1$ we get
$\alpha_{p^n}(\mathop{}\!\mathrm{d} a)=0$ $\forall a\in W_n(K)$. Hence
$\alpha_{p^n}$ is defined in fact on $\Omega^1(W_n(K))/\mathop{}\!\mathrm{d}
W_n(K)$. \mbox{$\Box$}\vspace{\baselineskip}
{\bf Remark} Proposition 3.6 is a consequence of Proposition
3.3(ii)-(iv). But in fact we have equivalence. Indeed, (ii) follows
from $\alpha_{p^n}(a\mathop{}\!\mathrm{d} (b+c))=\alpha_{p^n}(a\mathop{}\!\mathrm{d}
b)+\alpha_{p^n}(a\mathop{}\!\mathrm{d} c)$ and $\alpha_{p^n}((a+b)\mathop{}\!\mathrm{d}
c)=\alpha_{p^n}(a\mathop{}\!\mathrm{d} c)+\alpha_{p^n}(b\mathop{}\!\mathrm{d} c)$. For (iii) we have
$\alpha_{p^n}(a\mathop{}\!\mathrm{d} b+b\mathop{}\!\mathrm{d} a)=\alpha_{p^n}(\mathop{}\!\mathrm{d} (ab))=0$,
i.e. $((a,b))+((b,a))=0$. And for (iv) we have $\alpha_{p^n}(a\mathop{}\!\mathrm{d}
(bc))=\alpha_{p^n}(ab\mathop{}\!\mathrm{d} c+ac\mathop{}\!\mathrm{d} b)$,
i.e. $((a,bc))=((ab,c))+((ac,b))$. Together with $((ab,c))=-((c,ab))$
and $((ac,b))=-((b,ac))$, this implies
$((a,bc))+((b,ac))+((c,ab))=0$.
\bpr The Frobenius and Verschiebung maps are adjoint:
$$((Fa,b))_{p^n}=((a,Vb))_{p^n}\quad\text{and}\quad
((Va,b))_{p^n}=((a,Fb))_{p^n}\quad\forall a,b\in W_n(K).$$
\epr
$Proof.$\,\, Since both maps $(a,b)\mapsto ((Fa,b))_{p^n}$ and
$(a,b)\mapsto ((a,Vb))_{p^n}$ are bilinear and $(W_n(K),+)$ is
generated by $S=\{ V^i[a]\, :\, a\in K^\times,\, 0\leq i\leq n-1\}$, it
suffices to prove that $((Fa,b))_{p^n}=((a,Vb))_{p^n}$ for $a,b\in
S$. So we must prove that
$((FV^i[a],V^j[b]))_{p^n}=((V^i[a],VV^j[b]))_{p^n}$ $\forall a,b\in K^\times$,
$0\leq i,j\leq n-1$. We use Lemma 3.2(ii) and we get
\begin{multline*}
((FV^i[a],V^j[b]))_{p^n}= ((V^i[a^p],V^j[b]))_{p^n}=
[[(a^p)^{p^j}b^{p^i}],b^{p^i})_{p^n}\\
=[[a^{p^{j+1}}b^{p^i}],b^{p^i})_{p^n}=((V^i[a],V^{j+1}[b]))_{p^n}.
\end{multline*}
The second statement follows from the first by the skew-symmetry. \mbox{$\Box$}\vspace{\baselineskip}
\bpr If $a,b\in W_n(K)$ then
$((a,b))_{p^n}=((Va,Vb))_{p^{n+1}}$.
More generally, if $m\geq n$ then
$((a,b))_{p^n}=((V^{m-n}a,V^{m-n}b))_{p^m}$.
\epr
$Proof.$\,\, Let $b=(b_0,\ldots,b_{n-1})$. Then $Vb=(0,b_0,\ldots,b_{n-1})$ so,
by definition,
$$((Va,Vb))_{p^{n+1}}= [0,0)_{p^{n+1}}+
\sum_{j=0}^{n-1}[F^{j+1}Va[b_j],b_j)_{p^{n+1}}.$$
But $F^{j+1}Va[b_j]=V(F^{j+1}a)[b_j]=V(F^{j+1}aF
[b_j])$. It follows that
$$[F^{j+1}Va[b_j],b_j)_{p^{n+1}}= [V(F^{j+1}aF[b_j]),b_j)_{p^{n+1}}=
[F^{j+1}aF[b_j],b_j)_{p^n}= [F^ja[b_j],b_j)_{p^n}.$$
(We used the formulas $[Va,b)_{p^{n+1}}=[a,b)_{p^n}$ and
$[Fa,b)_{p^n}=[a,b)_{p^n}$.)
Hence
$((Va,Vb))_{p^{n+1}}=\sum_{j=0}^{n-1}[F^ja[b_j],b_j)_{p^n}=((a,b))_{p^n}$.
\mbox{$\Box$}\vspace{\baselineskip}
\bco If $m\geq n$ then for any $a,b\in W(K)$ we have
$((a,b))_{p^n}=p^{m-n}((a,b))_{p^m}$. Explicitly,
$$(((a_0,\ldots,a_{n-1}),(b_0,\ldots,b_{n-1})))_{p^n}=
p^{m-n}(((a_0,\ldots,a_{m-1}),(b_0,\ldots,b_{m-1})))_{p^m}.$$
\eco
$Proof.$\,\, By Propositions 3.8 and 3.7, we have
$((a,b))_{p^n}=((V^{m-n}a,V^{m-n}b))_{p^m}=((F^{m-n}V^{m-n}a,b))_{p^m}=
((p^{m-n}a,b))_{p^m}=p^{m-n}((a,b))_{p^m}$. \mbox{$\Box$}\vspace{\baselineskip}
\bpr $((\cdot,\cdot ))_{p^n}$ is antisymmetric.
\epr
$Proof.$\,\, Since $((\cdot,\cdot ))_{p^n}$ is skew-symmetric we have
$2((a,a))_{p^n}=((a,a))_{p^n}+((a,a))_{p^n}=0$. If $p>2$ then also
$p^n((a,a))_{p^n}=0$. Since $(2,p^n)=1$ we get $((a,a))_{p^n}=0$. If
$p=2$ then by Corollary 3.9 we have
$((a,a))_{2^n}=2((a,a))_{2^{n+1}}=0$. \mbox{$\Box$}\vspace{\baselineskip}
\bdf For $m,n\geq 1$ we define the symbol
$$((\cdot,\cdot ))_{p^m,p^n}:W_m(K)\times W_n(K)\to\Br (K)$$
by $((a,b))_{p^m,p^n}=((V^{l-m}a,V^{l-n}b))_{p^l}$ for any $l\geq m,n$.
In particular, if $m=n$ we may take $l=n$ and we have $((\cdot,\cdot
))_{p^n,p^n}=((\cdot,\cdot ))_{p^n}$.
\edf
\bpr (i) $((\cdot,\cdot))_{p^m,p^n}$ is well defined.
(ii) $((a,b))_{p^m,p^n}=((a+F^nc,b+F^md))_{p^m,p^n}$ $\forall
a,c\in W_m(K)$, $b,d\in W_n(K)$.
(iii) $((\cdot,\cdot))_{p^m,p^n}$ is bilinear.
(iv) $((a,b))_{p^m,p^n}=-((b,a))_{p^n,p^m}$ $\forall a\in W_m(K)$,
$b\in W_n(K)$.
\epr
$Proof.$\,\, (i) We must prove that the formula for $((a,b))_{p^m,p^n}$ from
Definition 2 is independent of the choice of $l$. Assume that
$l'\geq l\geq m,n$. Then by Proposition 3.8 we have
$((V^{l-m}a,V^{l-n}b))_{p^l}=
((V^{l'-l}V^{l-m}a,V^{l'-l}V^{l-n}b))_{p^{l'}}= ((V^{l'-m}a,V^{l'-n}b))_{p^{l'}}$.
(iii) follows from the bilinearity of $((\cdot,\cdot))_{p^l}$ and the
fact that the maps $a\mapsto V^{l-m}a$ and $b\mapsto V^{l-n}b$ are
linear.
(ii) Since $((\cdot,\cdot))_{p^m,p^n}$ is bilinear it suffices to
prove that $((a,F^mb))_{p^m,p^n}=((F^na,b))_{p^m,p^n}=0$
$\forall a\in W_m(K)$, $b\in W_n(K)$. If $l\geq m,n$ then
\begin{multline*}
((a,F^mb))_{p^m,p^n}= ((V^{l-m}a,V^{l-n}F^mb))_{p^l}\\
=((V^mV^{l-m}a,V^{l-n}b))_{p^l}= ((0,V^{l-n}b))_{p^l}=0.
\end{multline*}
(Here we used the adjoint property of $F$ and $V$ and the fact that
$V^l\equiv 0$ on $W_l(K)$.) The proof of $((F^na,b))_{p^m,p^n}=0$ is
similar.
(iv) Follows from the definition of $((\cdot,\cdot ))_{p^m,p^n}$ and
the skew-symmetry of $((\cdot,\cdot ))_{p^l}$. \mbox{$\Box$}\vspace{\baselineskip}
\bco $((\cdot,\cdot ))_{p^m,p^n}$ is a bilinear map defined as
$$((\cdot,\cdot ))_{p^m,p^n}:W_m(K)/F^n(W_m(K))\times
W_n(K)/F^m(W_n(K))\to\Brr{p^k}(K),$$
where $k=\min\{m,n\}$.
Note that $F^n(W_m(K))$ and $F^m(W_n(K))$ can also be written as $W_m(K^{p^n})$
and $W_n(K^{p^m})$.
\eco
$Proof.$\,\, By Proposition 3.11(ii), $((a,b))_{p^m,p^n}$ depends only on
$a\mod F^n(W_m(K))$ and $b\mod F^m(W_n(K))$. This justifies the
new domain for $((\cdot,\cdot ))_{p^m,p^n}$. The fact that the image
of $((\cdot,\cdot ))_{p^m,p^n}$ is in $\Brr{p^k}(K)$ follows from the
bilinearity of $((\cdot,\cdot ))_{p^m,p^n}$ and the fact that
$W_m(K)/F^n(W_m(K))$ and $W_n(K)/F^m(W_n(K))$
are $p^k$-torsion. Indeed, $FV=VF=p$ so $p^mW_m(K)\sbq
V^m(W_m(K))=\{0\}$ and $p^nW_m(K)\sbq F^n(W_m(K))$. Thus
$W_m(K)/F^n(W_m(K))$ is killed by both $p^m$ and $p^n$ and
so by $p^k$. Similarly for $W_n(K)/F^m(W_n(K))$. \mbox{$\Box$}\vspace{\baselineskip}
\bpr The Frobenius and Verschiebung maps are adjoint:
$$((Fa,b))_{p^m,p^n}=((a,Vb))_{p^m,p^n}=((a,b))_{p^m,p^{n-1}},$$
$$((Va,b))_{p^m,p^n}=((a,Fb))_{p^m,p^n}=((a,b))_{p^{m-1},p^n}.$$
Here we make the convention that $((a,b))_{p^m,p^n}=0$ if $m$ or
$n=0$.
More generally, $\forall a,b\in W(K)$ we have
$$((F^iV^ja,F^kV^lb))_{p^m,p^n}=\begin{cases}
((a,b))_{p^{m-j-k},p^{n-i-l}}&\text{if }m>j+k,\, n>i+l\\
0&\text{otherwise}
\end{cases}.$$
\epr
$Proof.$\,\, Let $N\geq m,n$. Since $F$ and $V$ are adjoint with respect to
$((\cdot,\cdot ))_{p^N}$ we have
\begin{multline*}
((F^iV^ja,F^kV^lb))_{p^m,p^n}=
((V^{N-m}F^iV^ja,V^{N-n}F^kV^lb))_{p^N}\\
=((F^iV^{N-m+j}a,F^kV^{N-n+l}b))_{p^N}=
((V^kV^{N-m+j}a,V^iV^{N-n+l}b))_{p^N}.
\end{multline*}
If $m>j+k$, $n>i+l$ then $N\geq m-j-k,n-i-l\geq 1$ so, by definition,
$((V^{N-m+j+k}a,V^{N-n+l+i}b))_{p^N}=((a,b))_{p^{m-j-k},p^{n-i-l}}$. If
$m\leq j+k$ or $n\leq i+l$ then $N-m+j+k$ or $N-n+l+i\geq N$ so
$V^{N-m+j+k}a$ or $V^{N-n+l+i}b=0$ in $W_N(K)$. (On $W_N(K)$ we have
$V^N\equiv 0$.) Hence $((V^{N-m+j+k}a,V^{N-n+l+i}b))_{p^N}=0$. \mbox{$\Box$}\vspace{\baselineskip}
\begin{bof}\rm {\bf Remark.} In short notation we may write
$W_m(K)/F^n(W_m(K))$ as $W_m(K)/(F^n)$, where by
$(F^n)$ we mean the image of $F^n$ on $W_m(K)$. In the same
short notation $W_m(K)=W(K)/V^m(W(K))$ may be written as
$W(K)/(V^m)$. Hence in the short notation we have
$W_m(K)/F^n(W_m(K))=W(K)/(V^m,F^n)$, where
$(V^m,F^n)$ is the subgroup of $W(K)$ generated by the images of
$V^m$ and $F^n$. Similarly,
$W_n(K)/F^m(W_n(K))=W(K)/(V^n,F^m)$ so the domain of
$((\cdot,\cdot ))_{p^m,p^n}$ can be written as
$W(K)/(V^m,F^n)\times W(K)/(V^n,F^m)$.
One can see that $F$ and $V$ switch roles in $W(K)/(V^m,F^n)$ and
$W(K)/(V^n,F^m)$. This is explained by the fact that $F$ and $V$
are adjoint with respect to $((\cdot,\cdot ))_{p^m,p^n}$ so if
$P\in\ZZ [X,Y]$ then
$((P(V,F)a,b))_{p^m,p^n}=((a,P(F,V)b))_{p^m,p^n}$ $\forall
a,b\in W(K)$. In particular, $((P(V,F)a,b))_{p^m,p^n}=0$ $\forall
a,b\in W(K)$ iff $((a,P(F,V)b))_{p^m,p^n}=0$ $\forall a,b\in
W(K)$. Two polynomials $P$ with this property are $X^m$ and $Y^n$.
\eff
\bpr (i) If $a=(a_0,\ldots,a_{m-1})\in W_m(K)$,
$b=(b_0,\ldots,b_{n-1})\in W_n(K)$ then
$$((a,b))_{p^m,p^n}=\sum_{j=0}^{n-1}[F^{m+j}aF^n[b_j],b_j)_{p^m}.$$
(As in Definition 1, the terms with $b_j=0$ should be ignored.)
(ii) $[F^{m+j}aF^n[b_j],b_j)_{p^m}= ((a,V^j[b_j]))_{p^m,p^n}=
((a,[b_j]))_{p^m,p^{n-j}}$ $\forall 0\leq j\leq n-1$.
\epr
$Proof.$\,\, (i) In Definition 2 we take $l=m+n$ and we get
$((a,b))_{p^m,p^n}=((V^na,V^mb))_{p^{m+n}}$. For $0\leq j\leq n-1$ the
$(m+j)$th entry of $V^mb$ is $b_j$ and all the other entries are $0$. By
Definition 1 we have
$$((V^na,V^mb))_{p^{m+n}}=
\sum_{j=0}^{n-1}[F^{m+j}V^na[b_j],b_j)_{p^{m+n}}.$$
But $F^{m+j}V^na[b_j]= V^n(F^{m+j}a)[b_j]=
V^n(F^{m+j}aF^n[b_j])$. It follows that
$$[F^{m+j}V^na[b_j],b_j)_{p^{m+n}}=
[V^n(F^{m+j}aF^n[b_j]),b_j)_{p^{m+n}}=
[F^{m+j}aF^n[b_j],b_j)_{p^m}.$$
Hence the conclusion.
(ii) The $j$th entry of $V^j[b_j]$ is $b_j$ and all the other entries
are $0$. Then by (i) we have
$((a,V^j[b_j]))_{p^m,p^n}=[F^{m+j}aF^n[b_j],b_j)_{p^m}$.
The relation $((a,V^j[b_j]))_{p^m,p^n}= ((a,[b_j]))_{p^m,p^{n-j}}$
follows from Proposition 3.13. \mbox{$\Box$}\vspace{\baselineskip}
\begin{bof}\rm {\bf Remarks}
(1) Since $[Fa,b)_{p^m}=[a,b)_{p^m}$, we can simplify $F^k$,
where $k=\min\{ m+j,n\}$, in the formula
$[F^{m+j}aF^n[b_j],b_j)_{p^m}$. So
$[F^{m+j}aF^n[b_j],b_j)_{p^m}=[F^{m-n+j}a[b_j],b_j)_{p^m}$ if
$m+j\geq n$ and $=[aF^{n-m-j}[b_j],b_j)_{p^m}$ if $m+j<n$.
So if $m\geq n$ then $((a,b))_{p^m,p^n}=
\sum_{j=0}^{n-1}[F^{m-n+j}a[b_j],b_j)_{p^m}$, while if $m<n$ then
$((a,b))_{p^m,p^n}= \sum_{j=0}^{n-m-1}[aF^{n-m-j}[b_j],b_j)_{p^m}+
\sum_{j=n-m}^{n-1}[F^{m-n+j}a[b_j],b_j)_{p^m}$.
In particular, when $m=n$ we recover Definition 1 for $((\cdot,\cdot
))_{p^n}=((\cdot,\cdot ))_{p^n,p^n}$.
(2) A priori, $[F^{m+j}aF^n[b_j],b_j)_{p^m}$ is
$p^m$-torsion. But in fact, since it can be written as
$((a,[b_j]))_{p^m,p^{n-j}}$, it is $p^{\min\{ m,n-j\}}$-torsion. In
particular, when $m=n$ the term $[F^ja[b_j],b_j)_{p^n}$ from
Definition 1 can be written as $((a,[b_j]))_{p^n,p^{n-j}}$ so it is
$p^{n-j}$-torsion.
(3) We have $((a,b))_{p^m,p^n}=-((b,a))_{p^n,p^m}$ so
$$((a,b))_{p^m,p^n}=
\sum_{j=0}^{n-1}[F^{m+j}aF^n[b_j],b_j)_{p^m}=
-\sum_{i=0}^{m-1}[F^{n+i}bF^m[a_i],a_i)_{p^n}.$$
In terms of c.s.a., $((a,b))_{p^m,p^n}=[A]=[B]$, where
$A=\bigotimes_{j=0}^{n-1}A_{[F^{m+j}aF^n[b_j],b_j)_{p^m}}$ and
$B=\bigotimes_{i=0}^{m-1}A_{[F^{n+i}bF^m[a_i],a_i)_{p^n}}^{op}$. Thus
$A$ is a tensor product of $n$ c.s.a. of degree $p^m$ and $B$ a
tensor product of $m$ c.s.a. of degree $p^n$. Hence $\deg
A=\deg B=p^{mn}$. Therefore the Schur index of $((a,b))_{p^m,p^n}$ is
at most $p^{mn}$. Philippe Gille raised the question of whether this
upper bound can be achieved. The answer is yes. If
$K=\FF_p(a_0,\ldots,a_{m-1},b_0,\ldots,b_{n-1})$, where $a_i,b_j$ are
variables, and $a=(a_0,\ldots,a_{m-1})$, $b=(b_0,\ldots,b_{n-1})$ then
the Schur index of $((a,b))_{p^m,p^n}$ is $p^{mn}$. To prove this we
need a new way to describe $((\cdot,\cdot ))_{p^m,p^n}$ in terms of
c.s.a. given by generators and relations.
\eff
As seen in the introduction, the symbols $[\cdot,\cdot
)_{p^n}:W_n(K)\times K^\times\to\Brr{p^n}(K)$ have a direct limit
$[\cdot,\cdot )_{p^\infty}:CW(K)\times K^\times\to\Brr{p^\infty}(K)$. For
every $n\geq 1$ the canonical morphism $\psi_n:W_n(K)\to CW(K)$ is
given by $(a_0,\ldots,a_{n-1})\mapsto
(\ldots,0,0,a_0,\ldots,a_{n-1})$.
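As an informal aside (not part of the argument), the compatibility $\psi_{n+1}\circ V=\psi_n$ underlying this direct limit is easy to check mechanically. In the sketch below, a covector $(\ldots,0,0,a_0,\ldots,a_{n-1})$ is represented, as an ad-hoc simplification, by a finite tuple with its leading zeros dropped.

```python
def psi(a):
    """Send a truncated Witt vector (a_0, ..., a_{n-1}) to the covector
    (..., 0, 0, a_0, ..., a_{n-1}); dropping the infinite string of
    zeros on the left, a covector is just the tuple with its leading
    zeros removed."""
    a = tuple(a)
    while a and a[0] == 0:
        a = a[1:]
    return a

def V(a):
    """Verschiebung W_n -> W_{n+1}: prepend a zero coordinate."""
    return (0,) + tuple(a)

a = (3, 0, 7)
# the transition maps of the direct system are invisible on covectors:
assert psi(V(a)) == psi(a) == (3, 0, 7)
```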
We can do the same with the symbols $((a,b))_{p^m,p^n}$ indexed over
$({\mb N}^*\times{\mb N}^*,\leq )$, where $(m,n)\leq (m',n')$ if $m\leq m'$,
$n\leq n'$.
If $m\leq m'$, $n\leq n'$ then, by Proposition 3.13, for any $a\in
W_m(K)$, $b\in W_n(K)$ we have
$((a,b))_{p^m,p^n}=((V^{m'-m}a,V^{n'-n}b))_{p^{m'},p^{n'}}$. So we
have the commuting diagram
$$\begin{array}{ccc}W_m(K)\times W_n(K)&\xrightarrow{((\cdot,\cdot
))_{p^m,p^n}}&\Brr{p^k}(K)\\
\hskip -60pt V^{m'-m}\times V^{n'-n}\downarrow&{}&\downarrow\\
W_{m'}(K)\times W_{n'}(K)&\xrightarrow{((\cdot,\cdot
))_{p^{m'},p^{n'}}}&\Brr{p^{k'}}(K)
\end{array},$$
where $k=\min\{ m,n\}$, $k'=\min\{ m',n'\}$. So we have a map between
two directed systems. By taking direct limits we get a symbol
$((\cdot,\cdot ))_{p^\infty}:CW(K)\times CW(K)\to\Brr{p^\infty}(K)$. If
$a=(\ldots,a_{-1},a_0)$, $b=(\ldots,b_{-1},b_0)\in CW(K)$ with $a_i=0$
for $i\leq -m$ and $b_j=0$ for $j\leq -n$ then
$a=\psi_m((a_{-m+1},\ldots,a_0))$ and
$b=\psi_n((b_{-n+1},\ldots,b_0))$. So $((a,b))_{p^\infty}=
(((a_{-m+1},\ldots,a_0),(b_{-n+1},\ldots,b_0)))_{p^m,p^n}$.
Now $\{ (n,n)\, :\, n\in{\mb N}^*\}$ is cofinal in $({\mb N}^*\times{\mb N}^*,\leq
)$ so $((\cdot,\cdot ))_{p^\infty}$ can be regarded also as the direct
limit of $((\cdot,\cdot ))_{p^n,p^n}=((\cdot,\cdot ))_{p^n}$
only. Since $((\cdot,\cdot ))_{p^n}$ are bilinear and antisymmetric, so
is $((\cdot,\cdot ))_{p^\infty}$.
Note that if $a\in CW(K^{p^\infty})$ then $a_i\in K^{p^\infty}\sbq
K^{p^n}$ $\forall i$. Hence $(a_{-m+1},\ldots,a_0)\in
W_m(K^{p^n})=F^n(W_m(K))$. It follows that
$(((a_{-m+1},\ldots,a_0),(b_{-n+1},\ldots,b_0)))_{p^m,p^n}=0$,
i.e. $((a,b))_{p^\infty}=0$. Similarly, $((a,b))_{p^\infty}=0$ if
$b\in CW(K^{p^\infty})$. Since $((\cdot,\cdot ))_{p^\infty}$ is
bilinear, this implies that $((a,b))_{p^\infty}$ depends only on $a$
and $b\mod CW(K^{p^\infty})$. So the symbol can be defined as
$((\cdot,\cdot ))_{p^\infty}:CW(K)/CW(K^{p^\infty})\times
CW(K)/CW(K^{p^\infty})\to\Brr{p^\infty}(K)$. We get:
\bpr There is a bilinear antisymmetric symbol
$$((\cdot,\cdot ))_{p^\infty}:CW(K)/CW(K^{p^\infty})\times
CW(K)/CW(K^{p^\infty})\to\Brr{p^\infty}(K)$$
given for any $a=(\ldots,a_{-1},a_0)$, $b=(\ldots,b_{-1},b_0)\in
CW(K)$ with $a_i=0$ if $i\leq -m$ and $b_j=0$ if $j\leq -n$ by
$((a,b))_{p^\infty}=
(((a_{-m+1},\ldots,a_0),(b_{-n+1},\ldots,b_0)))_{p^m,p^n}$.
\epr
{\bf Remark.} The symbols $[\cdot,\cdot )_{p^n}$ also satisfy the
adjoint property between Frobenius and Verschiebung. We have
$[Va,b)_{p^n}=[a,b)_{p^{n-1}}$, but also
$[a,Fb)_{p^n}=[a,b^p)_{p^n}=p[a,b)_{p^n}=[a,b)_{p^{n-1}}$. So
$[Va,b)_{p^n}=[a,Fb)_{p^n}=[a,b)_{p^{n-1}}$.
For the other adjoint property we need to define a Verschiebung map on
$K^\times$. On $(W_n(K),+)$ the Verschiebung map is defined by the property
$FV=VF=p:=(x\mapsto px)$. On $(K^\times,\cdot )$ we need a multiplicative
Verschiebung map $V^\times$, which should satisfy $FV^\times =V^\times F
=p:=(x\mapsto x^p)$. An obvious such map is the identity, $V^\times =1$.
Then we have $[Fa,b)_{p^n}=[a,V^\times b)_{p^n}=[a,b)_{p^n}$.
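In summary, the adjunction relations obtained in this remark read
$$[Va,b)_{p^n}=[a,Fb)_{p^n}=[a,b)_{p^{n-1}},\qquad
[Fa,b)_{p^n}=[a,V^\times b)_{p^n}=[a,b)_{p^n}.$$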
As we will see in a future paper, this is a particular case of a more
general result.
\section{The representation theorem}
In Proposition 3.6 we introduced the linear map
$\alpha_{p^n}:\Omega^1(W_n(K))\to\Brr{p^n}(K)$ given by $a\mathop{}\!\mathrm{d}
b\mapsto ((a,b))_{p^n}$. We are now able to prove that $\alpha_{p^n}$
is surjective and to find its kernel, thus generalizing the
result of [GS, Theorem 9.2.4], which covers the case $n=1$ and which we
mentioned in the introduction.
Note that in fact we already have the surjectivity. Indeed,
$\Ima\alpha_{p^n}$ is the subgroup of $\Brr{p^n}(K)$ generated by the
image of $((\cdot,\cdot ))_{p^n}$, which, by Remark 3.1(1), coincides
with the subgroup generated by the image of $[\cdot,\cdot
)_{p^n}$, which, by Proposition 1.2, is the whole $\Brr{p^n}(K)$.
\blm For every $a\in W_n(K)$, $b\in K^\times$ we have
$[a,b)_{p^n}=\alpha_{p^n}(a\dlog [b])$.
\elm
{\it Proof.}\,\, By Remark 3.1 we have
$[a,b)_{p^n}=((a[b]^{-1},[b]))_{p^n}=\alpha_{p^n}(a[b]^{-1}\mathop{}\!\mathrm{d}
[b])=\alpha_{p^n}(a\dlog [b])$. \mbox{$\Box$}\vspace{\baselineskip}
\blm The following elements of $\Omega^1(W_n(K))$ belong to
$\ker\alpha_{p^n}$.
\begin{center}
$\mathop{}\!\mathrm{d} a$, $Fa\mathop{}\!\mathrm{d} b-a\mathop{}\!\mathrm{d} Vb$, $Va\mathop{}\!\mathrm{d} b-a\mathop{}\!\mathrm{d} Fb$,
$a,b\in W_n(K)$.
$\wp (a)\dlog [b]=(Fa-a)\dlog [b]$, $a\in W_n(K)$, $b\in K^\times$.
\end{center}
In particular, $\wp ([a])\dlog [b]=([a^p]-[a])\dlog
[b]\in\ker\alpha_{p^n}$, $\forall a\in K$, $b\in K^\times$.
\elm
{\it Proof.}\,\, We have $\mathop{}\!\mathrm{d} a\in\ker\alpha_{p^n}$ by Proposition 3.6.
By Proposition 3.7 we have $\alpha_{p^n}(Fa\mathop{}\!\mathrm{d} b-a\mathop{}\!\mathrm{d}
Vb)=((Fa,b))_{p^n}-((a,Vb))_{p^n}=0$. Similarly for $Va\mathop{}\!\mathrm{d}
b-a\mathop{}\!\mathrm{d} Fb$.
By Lemma 4.1 we have $\alpha_{p^n}(\wp (a)\dlog
[b])=[\wp(a),b)_{p^n}=0$. \mbox{$\Box$}\vspace{\baselineskip}
\bdf We define $G_n:=\Omega^1(W_n(K))/M_n$, where
$$M_n=\langle Fa\mathop{}\!\mathrm{d} b-a\mathop{}\!\mathrm{d} Vb\, :\, a,b\in W_n(K),\,\wp
([a])\dlog [b]\, :\, a\in K,\, b\in K^\times\rangle.$$
By Lemma 4.2 we have $M_n\sbq\ker\alpha_{p^n}$. Therefore we can
regard $\alpha_{p^n}$ as being defined
$\alpha_{p^n}:G_n\to\Brr{p^n}(K)$.
We also define $G'_n:=\Omega^1(W_n(K))/M'_n$, where $M'_n\sbq M_n$,
$$M'_n=\langle Fa\mathop{}\!\mathrm{d} b-a\mathop{}\!\mathrm{d} Vb\, :\, a,b\in W_n(K)\rangle.$$
\edf
We will prove by induction on $n$ that
$\alpha_{p^n}:\Omega^1(W_n(K))\to\Brr{p^n}(K)$ is surjective and
$M_n$ is its kernel so $\alpha_{p^n}:G_n\to\Brr{p^n}(K)$ is an
isomorphism.
\blm We have $M_1=\langle\mathop{}\!\mathrm{d} a,\, \wp (a)\dlog b\, :\, a\in K,\,
b\in K^\times\rangle$.
Consequently, $\alpha_p:G_1\to\Brr p(K)$ is an isomorphism by [GS,
Theorem 9.2.4]. (See also \S 1.)
\elm
{\it Proof.}\,\, In $W_1(K)=K$ we have $V\equiv 0$ and $[a]=a$ $\forall a\in K$. It
follows that $Fa\mathop{}\!\mathrm{d} b-a\mathop{}\!\mathrm{d} Vb=a^p\mathop{}\!\mathrm{d} b-a\mathop{}\!\mathrm{d} 0=\mathop{}\!\mathrm{d}
(a^pb)$. Thus $\{Fa\mathop{}\!\mathrm{d} b-a\mathop{}\!\mathrm{d} Vb\, :\, a,b\in K\} =\{\mathop{}\!\mathrm{d} a\, :\,
a\in K\}$. Also $\wp ([a])\dlog [b]=\wp (a)\dlog b$ $\forall a\in K$,
$b\in K^\times$. Hence the conclusion. \mbox{$\Box$}\vspace{\baselineskip}
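The identification $a^p\mathop{}\!\mathrm{d} b=\mathop{}\!\mathrm{d} (a^pb)$ used in the proof is simply
the Leibniz rule in characteristic $p$:
$$\mathop{}\!\mathrm{d} (a^pb)=a^p\mathop{}\!\mathrm{d} b+b\mathop{}\!\mathrm{d} (a^p)
=a^p\mathop{}\!\mathrm{d} b+pa^{p-1}b\mathop{}\!\mathrm{d} a=a^p\mathop{}\!\mathrm{d} b.$$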
Note that we used only a minimal set of generators for $M_n$. In fact
$M_n$ contains all the elements of $\ker\alpha_{p^n}$ known so far.
\blm All elements of $\ker\alpha_{p^n}$ from Lemma 4.2 also belong to
$M_n$.
With the exception of $\wp (a)\dlog [b]$, they also belong to $M'_n$.
\elm
{\it Proof.}\,\, In $G'_n$ we have $Fa\mathop{}\!\mathrm{d} b=a\mathop{}\!\mathrm{d} Vb$. More generally,
$F^ka\mathop{}\!\mathrm{d} b=a\mathop{}\!\mathrm{d} V^kb$. So $\mathop{}\!\mathrm{d} a=F^n1\mathop{}\!\mathrm{d} a=1\mathop{}\!\mathrm{d}
V^na=0$. (We have $V^n\equiv 0$ in $W_n(K)$ so $V^na=0$.)
Consequently, $a\mathop{}\!\mathrm{d} b+b\mathop{}\!\mathrm{d} a=\mathop{}\!\mathrm{d} (ab)=0$ so $a\mathop{}\!\mathrm{d} b=-b\mathop{}\!\mathrm{d}
a$. Hence $Va\mathop{}\!\mathrm{d} b=-b\mathop{}\!\mathrm{d} Va=-Fb\mathop{}\!\mathrm{d} a=a\mathop{}\!\mathrm{d} Fb$. The relations
$\mathop{}\!\mathrm{d} a=0$ and $Va\mathop{}\!\mathrm{d} b=a\mathop{}\!\mathrm{d} Fb$, which hold in $G'_n$, imply
$\mathop{}\!\mathrm{d} a,Va\mathop{}\!\mathrm{d} b-a\mathop{}\!\mathrm{d} Fb\in M'_n\sbq M_n$. Note that $\mathop{}\!\mathrm{d} a=0$
and $Va\mathop{}\!\mathrm{d} b=a\mathop{}\!\mathrm{d} Fb$ also hold in $G_n$.
It remains to prove that $\wp (a)\dlog [b]\in M_n$, i.e. that $\wp
(a)\dlog [b]=0$ in $G_n$. The map $f:W_n(K)\to
G_n$, given by $a\mapsto\wp (a)\dlog [b]$, is linear. We must prove that
$f\equiv 0$. The group $(W_n(K),+)$ is generated by $[a]$, with
$a\in K$, and $Va$, where $a$ is a Witt vector, so it suffices to
prove that $f$ vanishes at these generators. We have $\wp ([a])\dlog
[b]\in M_n$ so $f([a])=0$. In $G_n$ we have $Va\dlog
[b]=Va[b]^{-1}\mathop{}\!\mathrm{d} [b]=V(aF[b]^{-1})\mathop{}\!\mathrm{d} [b]=aF[b]^{-1}\mathop{}\!\mathrm{d}
F[b]=a\dlog F[b]$. But $\dlog F[b]=\dlog [b]^p=p\dlog [b]$. It follows
that $Va\dlog [b]=pa\dlog [b]=FVa\dlog [b]$ so $f(Va)=\wp (Va)\dlog
[b]=(FVa-Va)\dlog [b]=0$. \mbox{$\Box$}\vspace{\baselineskip}
By Lemma 3.5 we have a surjective morphism $W_n(K)\otimes
W_n(K)\to\Omega^1(W_n(K))$, given by $a\otimes b\mapsto a\mathop{}\!\mathrm{d}
b$, and its kernel is $\langle a\otimes bc-ab\otimes c-ac\otimes b\,
:\, a,b,c\in W_n(K)\rangle$. Hence $G_n=\Omega^1(W_n(K))/M_n$ can
also be written as $G_n=(W_n(K)\otimes W_n(K))/N_n$, where $N_n\sbq
W_n(K)\otimes W_n(K)$ is the preimage of
$M_n\sbq\Omega^1(W_n(K))$.
Explicitly, $N_n$ is the group generated by
\begin{center}
$a\otimes bc-ab\otimes c-ac\otimes b$, with $a,b,c\in
W_n(K),$
$Fa\otimes b-a\otimes Vb$, with $a,b\in W_n(K)$,\quad
$\wp ([a])[b]^{-1}\otimes [b]$, with $a\in K$, $b\in K^\times$.
\end{center}
If we write $G_n$ as $(W_n(K)\otimes W_n(K))/N_n$ then
$\alpha_{p^n}:G_n\to\Brr{p^n}(K)$ is given by $a\otimes b\mapsto
((a,b))_{p^n}$.
Similarly, $G'_n=\Omega^1(W_n(K))/M'_n$ can also be written as
$G'_n=(W_n(K)\otimes W_n(K))/N'_n$, where $N'_n$ has the same
generators as $N_n$ except $\wp ([a])[b]^{-1}\otimes [b]$, with $a\in
K$, $b\in K^\times$.
\blm In $G_n=(W_n(K)\otimes W_n(K))/N_n$ we have:
$Fa\otimes b=a\otimes Vb$,\quad $Va\otimes b=a\otimes Fb$,\quad
$a\otimes b=-b\otimes a$\quad $\forall a,b\in W_n(K)$.
$a\otimes b^k=kab^{k-1}\otimes b$\quad $\forall a,b\in W_n(K)$,
$k\geq 0$.
$\wp (a)[b]^{-1}\otimes [b]=0$\quad $\forall a\in W_n(K)$,
$b\in K^\times$.
All relations above, except the last one, also hold in
$G'_n=(W_n(K)\otimes W_n(K))/N'_n$.
\elm
{\it Proof.}\,\, When we regard $G_n$ and $G'_n$ as $\Omega^1(W_n(K))/M_n$ and
$\Omega^1(W_n(K))/M'_n$, the relations we want to prove read as
$Fa\mathop{}\!\mathrm{d} b=a\mathop{}\!\mathrm{d} Vb$, $Va\mathop{}\!\mathrm{d} b=a\mathop{}\!\mathrm{d} Fb$, $a\mathop{}\!\mathrm{d} b=-b\mathop{}\!\mathrm{d}
a$, $a\mathop{}\!\mathrm{d} b^k=kab^{k-1}\mathop{}\!\mathrm{d} b$ and $\wp (a)[b]^{-1}\mathop{}\!\mathrm{d} [b]=\wp
(a)\dlog [b]=0$. They follow from Lemma 4.4 and the properties of the
differentials. (For $a\mathop{}\!\mathrm{d} b=-b\mathop{}\!\mathrm{d} a$ note that $a\mathop{}\!\mathrm{d} b+b\mathop{}\!\mathrm{d}
a=\mathop{}\!\mathrm{d} (ab)\in M'_n\sbq M_n$.) \mbox{$\Box$}\vspace{\baselineskip}
Since $W_n(K)=W(K)/V^n(W(K))$, the elements of $W_n(K)$ can be
regarded as classes of elements in $W(K)$. Therefore elements of
$W_n(K)\otimes W_n(K)$ can be regarded as classes of elements
of $W(K)\otimes W(K)$. Hence we may regard $\alpha_{p^n}$ as being
defined on $W(K)\otimes W(K)$.
\blm With the convention above, if $m\geq n$ then
$\alpha_{p^n}=p^{m-n}\alpha_{p^m}$, in the sense that
$\alpha_{p^n}(\eta )=p^{m-n}\alpha_{p^m}(\eta )$ $\forall\eta\in
W(K)\otimes W(K)$.
\elm
{\it Proof.}\,\, Since $\alpha_{p^m}$ and $\alpha_{p^n}$ are linear it is enough to
verify our statement when $\eta$ is a generator of $W(K)\otimes W(K)$,
$\eta =a\otimes b$, with $a,b\in W(K)$. But by Corollary 3.9 we have
$((a,b))_{p^n}=p^{m-n}((a,b))_{p^m}$, i.e. $\alpha_{p^n}(a\otimes
b)=p^{m-n}\alpha_{p^m}(a\otimes b)$. \mbox{$\Box$}\vspace{\baselineskip}
\blm If $n\geq 2$ then $V\otimes V:W_{n-1}(K)\otimes W_{n-1}(K)\to
W_n(K)\otimes W_n(K)$ induces a linear map $f_n:G_{n-1}\to G_n$.
We have $f_n(\eta)=p\eta$ for any $\eta\in W(K)\otimes W(K)$ so $\Ima
f_n=pG_n$.
\elm
{\it Proof.}\,\, We have $G_{n-1}=(W_{n-1}(K)\otimes W_{n-1}(K))/N_{n-1}$ and
$G_n=(W_n(K)\otimes W_n(K))/N_n$ so we must prove that
$(V\otimes V)(N_{n-1})\sbq N_n$. Equivalently, if $h$ is the
composition $W_{n-1}(K)\otimes W_{n-1}(K)\xrightarrow{V\otimes V}
W_n(K)\otimes W_n(K)\to (W_n(K)\otimes W_n(K))/N_n=G_n$, then we must
prove that $N_{n-1}\sbq\ker h$.
First note that for any $a,b\in W(K)$, by Lemma 4.5, in $G_n$ we have
$h(a\otimes b)=Va\otimes Vb=FVa\otimes b=pa\otimes b$. More
generally, by linearity we have $h(\eta )=p\eta$ $\forall\eta\in
W(K)\otimes W(K)$.
To prove that $N_{n-1}\sbq\ker h$ we note that $N_{n-1}$ is generated
by the elements $\eta =a\otimes bc-ab\otimes c-ac\otimes b$, $Fa\otimes
b-a\otimes Vb$ and $\wp([a])[b]^{-1}\otimes [b]$ of
$W(K)\otimes W(K)$. In each case we have $\eta =0$ in
$G_n=(W_n(K)\otimes W_n(K))/N_n$ so $h(\eta )=p\eta =0$, as
well. Hence $N_{n-1}\sbq\ker h$.
We obtain a linear map $f_n:G_{n-1}=(W_{n-1}(K)\otimes
W_{n-1}(K))/N_{n-1}\to G_n=(W_n(K)\otimes W_n(K))/N_n$ given by
$f_n(\eta )=h(\eta )=p\eta$ $\forall\eta\in W(K)\otimes W(K)$. \mbox{$\Box$}\vspace{\baselineskip}
\blm Let $f:A\to B$ be a morphism of abelian groups. If $A'\sbq A$ is
a subgroup and $B'=f(A')$ and $f'':A/A'\to B/B'$ is the morphism given
by $f''(x+A')=f(x)+B'$ then $\ker f''$ is the image in $A/A'$ of $\ker f\sbq A$.
\elm
{\it Proof.}\,\, We use the snake lemma for the following diagram.
$$\begin{array}{ccccccc}
0\to & A'&\to & A & \to & A/A' & \to 0\\
{} & \quad\downarrow f' & {} & \quad\downarrow f & {} &
\quad\downarrow f'' & {}\\
0\to & B'&\to & B & \to & B/B' & \to 0
\end{array},$$
where $f'$ is the restriction of $f$ to $A'$. We have $f(A')=B'$ so
$f'$ is surjective so $\coker f'=0$. Then, as a part of the long exact
sequence obtained by the snake lemma, we have the exact sequence $\ker
f\to\ker f''\to\coker f'=0$. Hence $\ker f''$ is the image in $A/A'$ of $\ker
f\sbq A$, as claimed. \mbox{$\Box$}\vspace{\baselineskip}
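Explicitly, the snake lemma applied to the diagram above gives the six-term
exact sequence
$$0\to\ker f'\to\ker f\to\ker f''\xrightarrow{\delta}\coker f'\to\coker
f\to\coker f''\to 0,$$
and $\coker f'=0$ forces the map $\ker f\to\ker f''$ to be surjective.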
\blm If $n\geq 1$ and $T:W_n(K)\to W_1(K)=K$ is the truncation map
then $T\otimes T:W_n(K)\otimes W_n(K)\to K\otimes K$ induces a
surjective morphism $g_n:G_n\to G_1$ satisfying $\ker g_n=pG_n$.
\elm
{\it Proof.}\,\, Since $T$ is given by $(a_0,a_1,\ldots )\mapsto a_0$ its kernel
consists of elements of the form $(0,a_1,a_2,\ldots )$, i.e. $\ker
T=\Ima V$. The truncation map $T\otimes T$ sends the generators of
$N_n$ to the generators of $N_1$ so $(T\otimes T)(N_n)=N_1$. Let
$g_n:(W_n(K)\otimes W_n(K))/N_n=G_n\to (K\otimes K)/N_1=G_1$
be the induced morphism. Since $T\otimes T$ is surjective, so is
$g_n$. By Lemma 4.8, $\ker g_n$ is the image of $\ker (T\otimes T)\sbq
W_n(K)\otimes W_n(K)$ in $G_n=(W_n(K)\otimes W_n(K))/N_n$. But $\ker
(T\otimes T)=\ker T\otimes W_n(K)+W_n(K)\otimes\ker T=\Ima V\otimes
W_n(K)+W_n(K)\otimes\Ima V$.
We now prove that $\ker g_n=pG_n$. The group $pG_n$ is generated by elements
of the form $pa\otimes b$. But $pa\otimes b=V(Fa)\otimes b\in\ker
g_n$. Conversely, we must prove that the generators $Va\otimes b$ and
$a\otimes Vb$ of $\ker g_n$ belong to $pG_n$. If
$b=(b_0,b_1,b_2,\ldots )$ then $b=[b_0]+Vb'$, where
$b'=(b_1,b_2,\ldots )$. Hence $Va\otimes b=Va\otimes [b_0]+Va\otimes
Vb'$. But, by Lemma 4.5, in $G_n$ we have $Va\otimes [b_0]=a\otimes
F[b_0]=a\otimes [b_0]^p=pa[b_0]^{p-1}\otimes [b_0]\in pG_n$ and
$Va\otimes Vb'=FVa\otimes b'=pa\otimes b'\in pG_n$. So $Va\otimes
b\in pG_n$. Similarly, $Vb\otimes a\in pG_n$ so $a\otimes
Vb=-Vb\otimes a\in pG_n$. \mbox{$\Box$}\vspace{\baselineskip}
\btm The map $\alpha_{p^n}:G_n\to\Brr{p^n}(K)$ is an isomorphism.
\end{tm}
{\it Proof.}\,\, We use induction on $n$. The case $n=1$ was handled by Lemma
4.3.
Suppose now that $n\geq 2$. By Lemma 4.9, $g_n$ is surjective with
$\ker g_n=pG_n$. By Lemma 4.7, $\Ima f_n=pG_n$. So we have the exact
sequence $G_{n-1}\xrightarrow{f_n}G_n\xrightarrow{g_n}G_1\to 0$. We
also have the obvious exact sequence
$0\to\Brr{p^{n-1}}(K)\to\Brr{p^n}(K)\xrightarrow{p^{n-1}}\Brr
p(K)$. By Lemma 4.6, for any $\eta\in W(K)\otimes W(K)$ we have
$\alpha_{p^{n-1}}(\eta )=p\alpha_{p^n}(\eta )$ and $\alpha_p(\eta
)=p^{n-1}\alpha_{p^n}(\eta )$. Also by Lemma 4.7 $f_n(\eta )=p\eta$,
while $g_n$ is given by truncations so $g_n(\eta )=\eta$. Hence
$\alpha_p(g_n(\eta ))=p^{n-1}\alpha_{p^n}(\eta )$ and
$\alpha_{p^n}(f_n(\eta ))=p\alpha_{p^n}(\eta
)=\alpha_{p^{n-1}}(\eta )$. So we have the commutative exact diagram
$$\begin{array}{ccccccc}
{}& G_{n-1}& \xrightarrow{f_n}& G_n& \xrightarrow{g_n}& G_1&\to 0\\
{}& \quad\downarrow\alpha_{p^{n-1}}& {}& \quad\downarrow\alpha_{p^n}&
{}& \quad\downarrow\alpha_{p}& {}\\
0\to & \Brr{p^{n-1}}(K)& \to & \Brr{p^n}(K) & \xrightarrow{p^{n-1}} &
\Brr p(K) & {}
\end{array}.$$
By the snake lemma we have the exact sequences
$\ker\alpha_{p^{n-1}}\to\ker\alpha_{p^n}\to\ker\alpha_p$ and
$\coker\alpha_{p^{n-1}}\to\coker\alpha_{p^n}\to\coker\alpha_p$. But by
the induction hypothesis $\alpha_{p^{n-1}}$ and $\alpha_p$ are
isomorphisms so their $\ker$ and $\coker$ are zero. It follows that
$\ker\alpha_{p^n}=\coker\alpha_{p^n}=0$ so $\alpha_{p^n}$ is an
isomorphism as well. \mbox{$\Box$}\vspace{\baselineskip}
As seen from the proof of Theorem 4.10, we have the commutative square
$$\begin{array}{ccc}
G_{n-1}& \xrightarrow{\alpha_{p^{n-1}}}& \Brr{p^{n-1}}(K)\\
\quad\downarrow f_n& {} &\downarrow\\
G_n& \xrightarrow{\alpha_{p^n}}& \Brr{p^n}(K)
\end{array}.$$
Then we have an isomorphism
$\alpha_{p^\infty}:G_\infty\to\Brr{p^\infty}(K)$, where
$G_\infty=\varinjlim G_n$. We have $G_n=(W_n(K)\otimes
W_n(K))/N_n$ so $G_\infty =\varinjlim (W_n(K)\otimes
W_n(K))/\varinjlim N_n$. Recall that $f_n:G_{n-1}\to G_n$ is induced
by $V\otimes V:W_{n-1}(K)\otimes W_{n-1}(K)\to W_n(K)\otimes
W_n(K)$. But the limit of the directed system $W_1(K)\xrightarrow{V}
W_2(K)\xrightarrow{V}W_3(K)\xrightarrow{V}\cdots$ is $CW(K)$, with the
canonical morphisms $\psi_n:W_n(K)\to CW(K)$ given by
$(a_0,\ldots,a_{n-1})\mapsto (\ldots,0,0,a_0,\ldots,a_{n-1})$. Hence
$\varinjlim W_n(K)\otimes W_n(K)=CW(K)\otimes CW(K)$, with the canonical
morphisms $\psi_n\otimes\psi_n:W_n(K)\otimes W_n(K)\to CW(K)\otimes
CW(K)$. Since $\alpha_{p^n}$ is given by $a\otimes b\mapsto
((a,b))_{p^n}$ $\forall a,b\in W_n(K)$, $\alpha_{p^\infty}$ will be
given by $a\otimes b\mapsto ((a,b))_{p^\infty}$ $\forall a,b\in
CW(K)$, where $((\cdot,\cdot ))_{p^\infty}$ is defined as in
Proposition 3.17.
By the same proof as in Lemma 4.7, but with the part involving $\wp
([a])[b]^{-1}\otimes [b]$ ignored, we have $(V\otimes V)(N'_{n-1})\sbq
N'_n$ so $V\otimes V:W_{n-1}(K)\otimes W_{n-1}(K)\to W_n(K)\otimes
W_n(K)$ induces a morphism $f'_n:G'_{n-1}\to G'_n$. So the groups
$G'_n$ also form a directed system and we denote $G'_\infty
=\varinjlim G'_n$. As for $G_\infty$, we have $G'_\infty
=(CW(K)\otimes CW(K))/\varinjlim N'_n$.
\blm We have $\varinjlim N_n=\varinjlim N'_n$ so $G_\infty
=G'_\infty$.
\elm
{\it Proof.}\,\, We have $\varinjlim N_n=\bigcup_{n\geq
1}(\psi_n\otimes\psi_n)(N_n)$ and $\varinjlim
N'_n=\bigcup_{n\geq 1}(\psi_n\otimes\psi_n)(N'_n)$. Then
$\varinjlim N_n\spq\varinjlim N'_n$ follows from $N_n\spq N'_n$
$\forall n$. Conversely, we must prove that
$(\psi_n\otimes\psi_n)(N_n)\sbq\varinjlim N'_n$ for any $n\geq
1$. It suffices to prove that $(\psi_n\otimes\psi_n)(\eta
)\in\varinjlim N'_n$ for every generator $\eta$ of $N_n$. If $\eta
=a\otimes bc-ab\otimes c-ac\otimes b$ for some $a,b,c\in W_n(K)$
or $Fa\otimes b-a\otimes Vb$ for some $a,b\in W_n(K)$ then
$\eta\in N'_n$ so $(\psi_n\otimes\psi_n)(\eta
)\in(\psi_n\otimes\psi_n)(N'_n)\sbq\varinjlim N'_n$. Assume
now that $\eta =\wp ([a])[b]^{-1}\otimes [b]$ for some $a,b\in K$,
$b\neq 0$. We have $(\psi_n\otimes\psi_n)(\eta
)=(\psi_{n+1}\otimes\psi_{n+1})((V\otimes V)(\eta ))$. We prove that
$(V\otimes V)(\eta)\in N'_{n+1}$ so
$(\psi_n\otimes\psi_n)(\eta
)\in(\psi_{n+1}\otimes\psi_{n+1})(N'_{n+1})\sbq\varinjlim N'_n$. Now
$\eta =[a^pb^{-1}]\otimes [b]-[ab^{-1}]\otimes [b]$ so $(V\otimes
V)(\eta )=V[a^pb^{-1}]\otimes V[b]-V[ab^{-1}]\otimes V[b]$. But by
Lemma 4.5 in $G'_{n+1}$ we have $V[a^pb^{-1}]\otimes
V[b]=FV[a^pb^{-1}]\otimes [b]=p[a^pb^{-1}]\otimes [b]$ and
$V[ab^{-1}]\otimes V[b]=F[ab^{-1}]\otimes F[b]=[a^pb^{-p}]\otimes
[b]^p=p[a^pb^{-p}][b]^{p-1}\otimes [b]=p[a^pb^{-1}]\otimes [b]$. Thus
$V[a^pb^{-1}]\otimes V[b]=V[ab^{-1}]\otimes V[b]$ in
$G'_{n+1}=(W_{n+1}(K)\otimes W_{n+1}(K))/N'_{n+1}$ so $(V\otimes
V)(\eta )=V[a^pb^{-1}]\otimes V[b]-V[ab^{-1}]\otimes V[b]\in
N'_{n+1}$, as claimed. \mbox{$\Box$}\vspace{\baselineskip}
\blm We have $\varinjlim N_n=N$, where $N\sbq CW(K)\otimes CW(K)$ is
generated by
\begin{center}
$a\otimes b+b\otimes a$ and $Fa\otimes b-a\otimes Vb$,
with $a,b\in CW(K)$,
$[a]_l\otimes [bc]_l+[b]_l\otimes [ac]_l+[c]_l\otimes [ab]_l$,
with $a,b,c\in K$, $l\leq 0$.
\end{center}
Here for $l\leq 0$ by $[a]_l$ we mean $(\ldots,0,0,a,0,\ldots,0)\in
CW(K)$, with $a$ in the $l$th position. Alternatively,
$[a]_{-n}=\psi_{n+1}([a])$ for $n\geq 0$.
\elm
{\it Proof.}\,\, We first prove that $N\sbq\varinjlim N_n$. We must prove that
every generator $\eta$ of $N$ belongs to $\varinjlim N_n$. If $\eta
=a\otimes b+b\otimes a$ or $Fa\otimes b-a\otimes Vb$ for some
$a,b\in CW(K)$ then for $n\geq 1$ large enough we have
$a=\psi_n(c)$, $b=\psi_n(d)$ for some $c,d\in W_n(K)$. It
follows that $\eta =(\psi_n\otimes\psi_n)(\nu )$, where $\nu
=c\otimes d+d\otimes c$ or $Fc\otimes d-c\otimes Vd$,
respectively. But, by Lemma 4.5, in both cases we have $\nu=0$ in
$G_n$ so $\nu\in N_n$ and $\eta\in
(\psi_n\otimes\psi_n)(N_n)\sbq\varinjlim N_n$. If $\eta
=[a]_l\otimes [bc]_l+[b]_l\otimes [ac]_l+[c]_l\otimes [ab]_l$, for
some $a,b,c\in K$ and $l\leq 0$ then $l=-(n-1)$ for some $n\geq
1$. We have $[\alpha ]_l=\psi_n([\alpha ])$ $\forall\alpha\in
K$. Hence $\eta =(\psi_n\otimes\psi_n)(\nu )$, where $\nu
=[a]\otimes [bc]+[b]\otimes [ac]+[c]\otimes [ab]$. But, by Lemma 4.5,
in $G_n$ we have $\nu =[a]\otimes [bc]-[ab]\otimes [c]-[ac]\otimes
[b]=[a]\otimes [b][c]-[a][b]\otimes [c]-[a][c]\otimes [b]=0$. So again
$\nu\in N_n$ and $\eta\in\varinjlim N_n$.
Before proving the reverse inclusion, note that in $(CW(K)\otimes
CW(K))/N$ we have $a\otimes b=-b\otimes a$ and $Fa\otimes
b=a\otimes Vb$ $\forall a,b\in CW(K)$, but also $Va\otimes b=-b\otimes
Va=-Fb\otimes a=a\otimes Fb$.
By Lemma 4.11, we have $\varinjlim N_n=\varinjlim N'_n=\bigcup_{n\geq
1}(\psi_n\otimes\psi_n)(N'_n)$ so we must prove that
$(\psi_n\otimes\psi_n)(N'_n)\sbq N$ $\forall n\geq 1$. It
suffices to prove that $(\psi_n\otimes\psi_n)(\eta )\in N$ for
all generators $\eta$ of $N_n'$. If $\eta =Fa\otimes b-a\otimes
Vb$ for some $a,b\in W_n(K)$ then $(\psi_n\otimes\psi_n)(\eta
)=Fa'\otimes b'-a'\otimes Vb'\in N$, where $a'=\psi_n(a)$,
$b'=\psi_n(b)$. For elements $\eta$ of the form $a\otimes bc-ab\otimes
c-ac\otimes b$ with $a,b,c\in W_n(K)$ note that the map
$(a,b,c)\mapsto a\otimes bc-ab\otimes c-ac\otimes b$ is trilinear so
it is enough to take the case when $a,b,c$ belong to the set of
generators $\{ V^i[\alpha]\, :\, \alpha\in K,\, 0\leq i\leq n-1\}$ of
$W_n(K)$. So we must prove that $(\psi_n\otimes\psi_n)(\eta )\in N$ for $\eta\in W_n(K)\otimes W_n(K)$ of the form
$$\eta =V^i[a]\otimes V^j[b]V^k[c]-V^i[a]V^j[b]\otimes
V^k[c]-V^i[a]V^k[c]\otimes V^j[b]$$
for some $a,b,c\in K$, $0\leq i,j,k\leq n-1$. But
$V^j[b]V^k[c]=V^{j+k}(F^k[b]F^j[c])=V^{j+k}[b^{p^k}c^{p^j}]$ and
similarly for the other products. Hence
$$\eta =V^i[a]\otimes
V^{j+k}[b^{p^k}c^{p^j}]-V^{i+j}[a^{p^j}b^{p^i}]\otimes
V^k[c]-V^{i+k}[a^{p^k}c^{p^i}]\otimes V^j[b].$$
But for any $\alpha\in K$ we have
$\psi_n([\alpha ])=[\alpha ]_l$, where $l=-(n-1)$, so in
$(CW(K)\otimes CW(K))/N$ we have
\begin{multline*}
(\psi_n\otimes\psi_n)(\eta )\\
=V^i[a]_l\otimes
V^{j+k}[b^{p^k}c^{p^j}]_l-V^{i+j}[a^{p^j}b^{p^i}]_l\otimes
V^k[c]_l-V^{i+k}[a^{p^k}c^{p^i}]_l\otimes V^j[b]_l\\
=V^i[a]_l\otimes V^{j+k}[b^{p^k}c^{p^j}]_l+V^j[b]_l\otimes
V^{i+k}[a^{p^k}c^{p^i}]_l+V^k[c]_l\otimes V^{i+j}[a^{p^j}b^{p^i}]_l.
\end{multline*}
Recall that in $(CW(K)\otimes CW(K))/N$ we have $Fa\otimes
b=a\otimes Vb$ and $Va\otimes b=a\otimes Fb$ $\forall a,b\in
CW(K)$. Therefore
$$V^i[a]_l\otimes V^{j+k}[b^{p^k}c^{p^j}]_l=
F^{j+k}[a]_l\otimes F^i[b^{p^k}c^{p^j}]_l=[a^{p^{j+k}}]_l\otimes
[b^{p^{i+k}}c^{p^{i+j}}]_l.$$
Similarly for the other two terms. Hence if we set
$a'=a^{p^{j+k}}$, $b'=b^{p^{i+k}}$, $c'=c^{p^{i+j}}$, then in
$(CW(K)\otimes CW(K))/N$ we have $(\psi_n\otimes\psi_n)(\eta
)=[a']_l\otimes [b'c']_l+[b']_l\otimes [a'c']_l+[c']_l\otimes
[a'b']_l=0$. Thus $(\psi_n\otimes\psi_n)(\eta )\in N$. \mbox{$\Box$}\vspace{\baselineskip}
In conclusion we have:
\btm With $N$ defined as in Lemma 4.12, there is an isomorphism
$\alpha_{p^\infty}:(CW(K)\otimes CW(K))/N\to\Brr{p^\infty}(K)$, given
by $a\otimes b\mapsto ((a,b))_{p^\infty}$ $\forall a,b\in CW(K)$.
\end{tm}
If in the set of generators of $N$ we replace $a\otimes b+b\otimes a$,
with $a,b\in CW(K)$, by $a\otimes a$, with $a\in CW(K)$, then we obtain a
new subgroup $\overline N\sbq CW(K)\otimes CW(K)$. Since $a\otimes
b+b\otimes a=(a+b)\otimes (a+b)-a\otimes a-b\otimes b\in\overline N$
we have $N\sbq\overline N$. For the reverse inclusion note that
$2a\otimes a=a\otimes a+a\otimes a\in N$ $\forall a\in CW(K)$. If
$p\neq 2$ then for $s$ large enough we also have $p^sa\otimes a=0$ so
$a\otimes a\in N$. If $p=2$ then let $a'\in CW(K)$ such that
$a=Va'$. (Say, $a'=(a,0)$.) Then in $(CW(K)\otimes CW(K))/N$ we have
$a\otimes a=Va'\otimes Va'=FVa'\otimes a'=2a'\otimes a'=0$ so
again $a\otimes a\in N$.
Hence $N=\overline N$. Note that $(CW(K)\otimes CW(K))/\langle
a\otimes a\, :\, a\in CW(K)\rangle =\Lambda^2(CW(K))$. Therefore
Theorem 4.13 can be written in the following equivalent form.
\bco There is an isomorphism
$\alpha_{p^\infty}:\Lambda^2(CW(K))/N'\to\Brr{p^\infty}(K)$, given by
$a\wedge b\mapsto ((a,b))_{p^\infty}$, where
\begin{multline*}
N'=\langle Fa\wedge b-a\wedge Vb\, :\, a,b\in CW(K),\\
[a]_l\wedge [bc]_l+[b]_l\wedge [ac]_l+[c]_l\wedge [ab]_l\, :\,
a,b,c\in K,\, l\leq 0\rangle.
\end{multline*}
\eco
\bigskip
{\bf Acknowledgement} I want to thank Philippe Gille for his interest
in my work and for his advice. It was he who suggested that I should
try to determine the kernel of the map
$\alpha_{p^n}:\Omega^1(W_n(K))\to\Brr{p^n}(K)$ and so to obtain a
representation theorem for $\Brr{p^n}(K)$ which generalizes [GS,
Theorem 9.2.4]. As it turned out, the properties already known,
$((Fa,b))_{p^n}=((a,Vb))_{p^n}$ and $[\wp (a),b)_{p^n}=0$, which
translate to $Fa\mathop{}\!\mathrm{d} b-a\mathop{}\!\mathrm{d} Vb$, $\wp (a)\dlog
[b]\in\ker\alpha_{p^n}$, were enough to generate $\ker\alpha_{p^n}$.
\bigskip
{\bf References}
[FV] I.V. Fesenko, S.V. Vostokov, "Local Fields and Their Extensions",
Second Edition, American Mathematical Society, Translations of
Mathematical Monographs, vol. 121 (2002).

[GS] P. Gille, T. Szamuely, "Central simple algebras and Galois
cohomology", 2nd edition, Cambridge Studies in Advanced Mathematics
165, Cambridge University Press (2017).

[T] L. Thomas, "Ramification groups in Artin-Schreier-Witt
extensions", Journal de Th\'eorie des Nombres de Bordeaux 17 (2005),
689--720.

[W] E. Witt, "Zyklische K\"orper und Algebren der Charakteristik
$p$ vom Grad $p^n$", (1934).
\bigskip
Institute of Mathematics Simion Stoilow of the Romanian Academy, Calea
Grivitei 21, RO-010702 Bucharest, Romania.
E-mail address: Constantin.Beli@imar.ro
\end{document}
\section{Introduction}
There are many intrinsic properties of hexagonal boron nitride (hBN),
such as large band gap, chemical inertness and atomic-level flatness
\citep{Watanabe2004,Corso2004,Pakdel2014}, that make it a very interesting
subject of scientific research and a promising material for future
applications. The investigation of hBN gained additional momentum
after the realization of layered van der Waals heterostructures and
the respective devices, such as transistors \citep{Dean2010a,Lee2015},
light-emitting diodes \citep{Ross2014}, gas sensors \citep{Sajjad2013}
and solar cells \citep{Lin2015p}, where the properties of hBN turned
out to be crucial for those complex systems as a whole. In stacked
heterostructures, hBN is in close contact with other substances and
is often subjected to electric or magnetic fields, and in order to
fully understand and exploit those systems, a thorough investigation
of all relevant interactions is needed.
Interlayer interactions are certainly crucial and need to be addressed
on a fundamental level. This can be achieved by utilizing a surface
science approach, i.e., by fabricating well-defined (mono)layers of
hBN in ultra-high vacuum (UHV), followed by their controlled decoration
with atoms or molecules of interest \citep{Auwarter2019}. Hereafter,
we focus on materials which act as charge donors or acceptors, and
as such are able to significantly alter the electronic properties
of hBN. The simplest, yet very efficient charge donors are the alkali
atoms. Their effect on the electronic structure of hBN has been investigated
in several studies, where one of the most notable observations is
the shift of the electronic bands to higher binding energies. Such a
shift arises from the electric potential induced by the charge
transferred from alkali atoms to their surroundings. Fedorov \textit{et
al}. found that K and Li deposition on hBN/Au/Ni(111) results in two
different structures: Li remains adsorbed on top of hBN and causes
a shift of the valence bands of 0.9 eV, while K intercalates under
the hBN and induces a shift of 2.77 eV \citep{Fedorov2015}. Cai \textit{et
al}. conducted Cs deposition on hBN/Ir(111), and identified two Cs
configurations, adsorbed and a combination of adsorbed and intercalated,
inducing valence band shifts of 0.35 and 3.25 eV, respectively \citep{Cai2018}.
Besides the valence bands, B and N core levels have also shifted to
higher binding energies in these two studies. By investigating a somewhat
different system consisting of multilayer hBN on TiO\textsubscript{2}(100),
Koch \textit{et al}. measured a 2.5 eV shift of the valence bands
to higher binding energies after K deposition \citep{Koch2018}. In
an analogous way to alkali metals, deposition of charge acceptor species
on epitaxial hBN causes a shift of the electronic bands to lower binding
energies, as has been shown for molecular oxygen adsorption on hBN/Ni(111),
where the valence bands shifted by 1.2 eV closer to the Fermi level
\citep{Spath2019}.
Being the smallest of alkali metals, Li is an interesting candidate
for adsorption on and intercalation of hBN mono- or multi-layers at
a wide range of concentrations, which could enable realization of
interesting hBN-based systems. For example, it has been demonstrated
that Li-functionalized hBN has a potential to serve as an electrode
in batteries \citep{Zhang2016k}. Also, calculations predict that
Li atoms encapsulated by two hBN layers could be suitable for hosting
plasmonic excitations \citep{Loncaric2018}. Li adsorbed on monolayer
hBN could potentially induce n-type conductivity and expedite integration
of hBN into electronics \citep{Ding2016}.
In this work, we further elaborate on the effects of hBN decoration
with Li atoms. More specifically, angle-resolved photoemission spectroscopy
(ARPES) and low-energy electron diffraction (LEED) are utilized to
investigate the electronic and morphological characteristics of hBN
on Ir(111) at different Li concentrations and with that, at variable
electronic charge arrangement. Sequential Li deposition employed in
our experiments reveals the pathway for Li intercalation and adsorption,
and allows for a detailed investigation of charge transfer dynamics.
\section{Methods}
Sample preparation and all experimental measurements were conducted
in an ultra-high vacuum setup (base pressure of $\approx5\times10^{-10}$
mbar) dedicated to ARPES, with the LEED instrument available as an
auxiliary technique. Ir(111) single crystal cleaning consisted of
repeated cycles of Ar ion sputtering at room temperature (RT) at 1.5
keV energy, oxygen glowing ($p=10^{-6}$ mbar) at 1170 K and annealing
at 1470 K. The growth of hBN proceeded by exposing Ir(111) to borazine
(B\textsubscript{3}H\textsubscript{6}N\textsubscript{3}, $p=2\text{\texttimes}10^{-7}$
mbar) at 1170 K for 15 minutes. Keeping the sample temperature below
1220 K at all times prevented decomposition of hBN and appearance
of epitaxial boron \citep{Petrovic2017a}. The quality of hBN was
checked by ARPES and LEED, where well-defined $\pi$ and $\sigma$
bands, and the pronounced moiré diffraction spots have been sought.
Li was deposited in a sequence of steps from commercial dispensers
(SAES getters) at RT. Prior to each Li deposition sequence, a fresh
hBN sample has been synthesized.
ARPES measurements were carried out at RT with a Scienta SES 100 analyzer
(25 meV energy resolution, 0.2$^{\circ}$ angular resolution). Data
has been collected in the $\mathrm{\Gamma K}$ direction. A helium
discharge lamp ($h\nu=21.2$ eV, non-polarized) was utilized as a
photon source, with the spot diameter on the sample of $\approx2$
mm.
\section{Results}
As a starting point for Li deposition experiments, epitaxial hBN samples
were prepared. ARPES mapping of hBN/Ir(111) in the $\Gamma\mathrm{K}$
direction, shown in Fig. \ref{fig1}, provides proof of good quality
of hBN, with $\sigma_{1}$, $\sigma_{2}$ and $\pi$ bands visible.
The replicated $\sigma$ bands ($\sigma_{\mathrm{R1}}$ and $\sigma_{\mathrm{R2}}$)
are also evident, indicating that a well-defined moiré corrugation
is present in the hBN/Ir(111) system \citep{Usachov2012}, again
signaling very good uniformity of hBN over mesoscopic scales. For the
sake of completeness, we fit the measured $\pi$ band with the first-nearest-neighbor
tight binding approximation (1NN TBA) \citep{Sawinska2010,Ribeiro2011a}.
Such fitting provides boron and nitrogen onsite energies of
3.73 and $-2.37$ eV, respectively, and the hopping energy between the
nearest neighbors of 2.78 eV (see Appendix A for more details). The
fitted $E_{\mathrm{TBA}}\left(k_{\parallel}\right)$ dispersion is
plotted in Fig. \ref{fig1}(b) by a dashed line. Crystallographic
quality of hBN has been confirmed by LEED data {[}see inset in Fig.
\ref{fig1}(b){]}, where the diffraction satellite spots of the moiré
structure surround the first order Ir and hBN spots.
\begin{figure}
\begin{centering}
\includegraphics{fig1}
\par\end{centering}
\caption{\label{fig1} ARPES map of hBN/Ir(111) system along the $\Gamma\mathrm{K}$
direction presented as (a) raw data and (b) second derivative in the
$y$ coordinate. Electronic bands of hBN, $\pi$ and the two $\sigma$
bands ($\sigma_{1}$ and $\sigma_{2}$) are visible. Additionally,
replicated $\sigma$ bands ($\sigma_{\mathrm{R1}}$ and $\sigma_{\mathrm{R2}}$)
can be discerned as well. Thin dashed line in panel (b) is the TBA
fit to the $\pi$ band. The inset in panel (b) shows LEED image of
the system ($E=56$ eV), with the moiré diffraction spots surrounding
the first order diffraction spots of Ir and hBN.}
\end{figure}
Li deposition on hBN/Ir(111) has been conducted in a series of steps.
Due to the limited photon energy and the resulting restrictions in
the size of $k$-space available in our experiments, the focus is put
on the $\sigma$ bands and the evolution of their binding energy as a
function of the deposited Li amount ($\theta_{\mathrm{Li}}$). A stack
of energy distribution curves (EDCs) extracted at $k_{\parallel}=0.4$
Å\textsuperscript{-1} is shown in a color plot in Fig. \ref{fig2}(a).
At first, Li was deposited in 1-minute-long steps, and subsequently
in 2-, 4- and 6-minutes-long steps {[}data right of the dashed line
in Fig. \ref{fig2}(a){]} in order to reach the maximum shift of the
$\sigma$ bands more efficiently. The downshift of the $\sigma$ bands,
i.e., the increase of their binding energies, is evident. Initially,
the shift progresses relatively fast, but afterwards it slows down
and eventually saturates at a value of 2.35 eV. This is visible from
the comparison of the $\sigma$ band position in consecutive EDCs: employing
1-minute-long steps induces a progressively diminishing shift (up
to deposition step 14), and only the longer, i.e.,
2-, 4- and 6-minutes-long steps (deposition step 15 and beyond) allow
further shift of the bands to be discerned. This observation
points to a non-linear shift of the bands as a function of number
of Li deposition steps and the cumulative amount of deposited Li.
Application of additional Li deposition steps, beyond the ones shown
in Fig. \ref{fig2}(a), did not produce further shift of the $\sigma$
bands. Instead, it resulted in blurring and reduction of photoemission
intensity of the $\sigma$ bands, which might stem from Li multilayer
formation on top of hBN.
\begin{figure*}
\begin{centering}
\includegraphics{fig2}
\par\end{centering}
\caption{\label{fig2}(a) A stack of EDCs at $k_{\parallel}=0.4$ Å\protect\textsuperscript{-1}
for a sequence of Li deposition steps. 1-minute-long (left of dashed
line) and 2-, 4- and 6-minutes-long deposition steps (right of dashed
line) have been employed. The shifting bands are the $\sigma$ bands
of hBN. Non-shifting Ir bands are also visible. (b) Increase of the
$\sigma$ bands binding energy, $\Delta E_{\mathrm{B}}$, as a function
of Li coverage, $\theta_{\mathrm{Li}}$ (gray circles). The data has
been fitted (full black curve) by a sum of the two exponential functions
$\Delta E_{\mathrm{B1}}$ and $\Delta E_{\mathrm{B2}}$ (dashed red
and blue curves).}
\end{figure*}
Electronic bands that do not show a change in binding energy in Fig.
\ref{fig2}(a) originate from Ir. The most notable bands are the ones
located at $\approx1$ eV below the Fermi level. These bands show
a decrease in intensity as the number of Li deposition steps increases,
which can be explained by attenuation of Ir photoelectrons due to
passage through additional Li layers (a similar trend is found for the
$\sigma$ bands intensity). It should be noted that these states are
Ir bulk states and not surface states \citep{Pletikosic2010}, and
therefore their attenuation cannot be related to interaction between
Ir surface atoms and deposited Li atoms.
The binding energy increase of the $\sigma$ bands, $\Delta E_{\mathrm{B}}$,
has been determined for each deposition step by fitting the corresponding
EDC curve with a Lorentzian lineshape. Furthermore, the amount of
deposited Li on the sample after each step has been calculated (i)
by adopting a linear relation between Li dispenser yield and the deposition
time, and (ii) by defining that $\theta_{\mathrm{Li}}=\theta_{\mathrm{max}}$
induces a maximum shift of the $\sigma$ bands observed in our experiments
(see Appendix B for more details). With this data, it is possible
to plot $\Delta E_{\mathrm{B}}$ as a function of $\theta_{\mathrm{Li}}$,
as shown in Fig. \ref{fig2}(b) by gray circles, and such calibration
can be used for a straightforward estimation of $\theta_{\mathrm{Li}}$
in other experiments based on the binding energy of hBN bands.
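As a concrete illustration of this peak-fitting step, the sketch below extracts a $\sigma$-band binding energy from a single EDC with a Lorentzian lineshape. It uses synthetic data with hypothetical numbers (peak position, width, background) rather than the measured EDCs, and assumes a constant background, which the paper does not specify:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(E, A, E0, gamma, bg):
    """Lorentzian peak on a constant background: A*g^2/((E-E0)^2+g^2) + bg."""
    return A * gamma**2 / ((E - E0)**2 + gamma**2) + bg

# Synthetic EDC: a sigma-band peak at E_B = 6.0 eV with 0.5 eV half-width
E = np.linspace(3.0, 9.0, 300)          # binding-energy axis (eV)
rng = np.random.default_rng(0)
edc = lorentzian(E, 1.0, 6.0, 0.5, 0.1) + 0.01 * rng.normal(size=E.size)

# Fit and read off the peak position; repeating this for each deposition
# step yields the band shift dE_B relative to the pristine sample
popt, pcov = curve_fit(lorentzian, E, edc, p0=[1.0, 5.5, 0.4, 0.0])
E0_fit = popt[1]
print(f"fitted peak position: {E0_fit:.2f} eV")
```

The shift $\Delta E_{\mathrm{B}}$ for a given deposition step is then the difference between the fitted peak position and that of the pristine hBN sample.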
Due to technical restrictions, work function measurements could not
be performed in our setup. It is therefore not possible to determine
whether $\Delta E_{\mathrm{B}}$ arises merely from the Li-induced
modification of the sample work function and alignment of hBN bands
to the vacuum level. Hence, we consider the local electric potential
induced by Li dipoles that acts on hBN ($\phi_{\mathrm{loc}}$) for
a proper description of our system. This potential is directly proportional
to the band shift, $\Delta E_{\mathrm{B}}\sim\phi_{\mathrm{loc}}$
\citep{Fedorov2015,Cai2018}, implying that Fig. \ref{fig2}(b) can
also be interpreted as a $\phi_{\mathrm{loc}}\left(\theta_{\mathrm{Li}}\right)$
graph.
The saturation of the data in Fig. \ref{fig2}(b) suggests an exponential-like
shift of the hBN bands as Li deposition progresses. Fitting with a
single exponential function yields poor results, but a fit with a
two-component exponential function of the form $\Delta E_{\mathrm{B1}}+\Delta E_{\mathrm{B2}}=A_{1}\left(1-e^{-\theta_{\mathrm{Li}}/\theta_{1}}\right)+A_{\mathrm{2}}\left(1-e^{-\theta_{\mathrm{Li}}/\theta_{2}}\right)$
provides an excellent agreement with the data, as indicated by a full
line in Fig. \ref{fig2}(b). Dashed lines in Fig. \ref{fig2}(b) denote
the two components of the fitting function, $\Delta E_{\mathrm{B1}}$
and $\Delta E_{\mathrm{B2}}$, where the first one exhibits rapid
growth ($\theta_{1}=0.04\:\theta_{\mathrm{max}}$) and saturation
at $A_{1}=1.40$ eV, while the second component rises more slowly
($\theta_{2}=0.32\:\theta_{\mathrm{max}}$) and saturates at $A_{2}=0.99$
eV. At low $\theta_{\mathrm{Li}}$, the $\Delta E_{\mathrm{B1}}$
component provides the dominant contribution to $\Delta E_{\mathrm{B}}$,
and after $\theta_{\mathrm{Li}}\approx0.2\:\theta_{\mathrm{max}}$,
the $\Delta E_{\mathrm{B2}}$ component becomes the source of further
shift of the $\sigma$ bands.
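The two-component fit described above can be sketched as follows. The data here are synthetic, generated from the fit values quoted in the text ($A_1=1.40$ eV, $\theta_1=0.04\,\theta_{\mathrm{max}}$, $A_2=0.99$ eV, $\theta_2=0.32\,\theta_{\mathrm{max}}$) plus small noise, since the measured points are not reproduced:

```python
import numpy as np
from scipy.optimize import curve_fit

def shift(theta, A1, t1, A2, t2):
    """Two-component saturating exponential dE_B1 + dE_B2."""
    return A1 * (1 - np.exp(-theta / t1)) + A2 * (1 - np.exp(-theta / t2))

# Synthetic dE_B(theta) from the quoted fit values, theta in units of theta_max
theta = np.linspace(0.0, 1.0, 40)
true_params = (1.40, 0.04, 0.99, 0.32)
rng = np.random.default_rng(1)
dEB = shift(theta, *true_params) + 0.02 * rng.normal(size=theta.size)

popt, _ = curve_fit(shift, theta, dEB, p0=[1.0, 0.1, 1.0, 0.5], maxfev=10000)
print(popt)  # close to the quoted values (component order may swap)
```

At $\theta_{\mathrm{Li}}=\theta_{\mathrm{max}}$ the fitted curve reproduces the total saturated shift of about 2.35 eV.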
During sequential Li deposition, the diffraction pattern of the sample
has been inspected several times, as shown in Fig. \ref{fig3}(a),
upper panels. At $\theta_{\mathrm{Li}}=0\:\theta_{\mathrm{max}}$,
a diffraction pattern corresponding to the epitaxially aligned hBN
on Ir(111) is observed, along with the moiré diffraction spots surrounding
the first order hBN and Ir spots. A more detailed view of the moiré
spots is given in the lower, zoom-in panels in Fig. \ref{fig3}(a).
Good visibility of the moiré spots indicates significant corrugation
of the hBN layer \citep{Usachov2012}. A cross-section through the
first order hBN and Ir spots, as well as through the two closest moiré
spots is given in Fig. \ref{fig3}(b). After deposition of only 0.07
$\theta_{\mathrm{max}}$ of Li, the intensity of the diffraction spots
changes notably: the Ir and moiré spots are significantly reduced, and
the intensity of the hBN spot increases. Additional Li deposition
leads to further reduction of Ir and moiré spots, while hBN spot intensity
remains approximately constant. At $\theta_{\mathrm{Li}}=\theta_{\mathrm{max}}$,
moiré and Ir spots are barely visible, as evident in Figs. \ref{fig3}(a)
and (b).
It should be noted that no Li superstructures were observed at any
Li coverage. Also, an increase of the Ir spot intensity, which might
indicate the formation of an intercalated $\mathrm{Li-}\left(1\times1\right)$
superstructure \citep{Pervan2015,Silva2019}, has not been registered.
Hence, we conclude that Li most likely forms disordered structures
on hBN/Ir(111) at room temperature, which is in line with Li intercalation
in bulk hBN \citep{Sumiyoshi2012}.
\begin{figure*}
\begin{centering}
\includegraphics{fig3}
\par\end{centering}
\caption{\label{fig3}(a) A sequence of LEED images (top panel) along with
their zoom-ins (bottom panels) at several characteristic Li coverages,
$\theta_{\mathrm{Li}}$, indicated above. Diffraction spots of Ir,
hBN, and the moiré structure (m\protect\textsubscript{1} and m\protect\textsubscript{2})
are noted. $E=56$ eV. (b) Cross-section through LEED images shown
in (a) as indicated by a red line. The curves have been shifted in
the $y$ direction for clarity. (c) A schematic model of hBN on Ir(111)
without Li (top), at intermediate Li coverage (middle), and at a maximum
Li coverage $\theta_{\mathrm{max}}$ (bottom).}
\end{figure*}
To further investigate the arrangement of Li on hBN/Ir(111), $\theta_{\mathrm{Li}}=0.32\:\theta_{\mathrm{max}}$
and $\theta_{\mathrm{Li}}=\theta_{\mathrm{max}}$ samples have been
exposed to 100 L ($p=5\times10^{-7}$ mbar for 270 s) of molecular
oxygen (99.999 \% purity) at RT. Since alkali metals readily oxidize
in such environment \citep{Matyba2015}, this experiment can reveal
whether the alkali atoms are intercalated (therefore, protected from
oxygen by hBN), adsorbed on top of hBN (i.e., exposed to oxygen),
or take on some mixed intermediate configuration. The effects of O\textsubscript{2}
exposure are shown in Fig. \ref{fig4} where the corresponding EDCs
extracted at $k_{\parallel}=0.4$ Å\textsuperscript{-1} are shown.
For the $\theta_{\mathrm{Li}}=0.32\:\theta_{\mathrm{max}}$ sample,
deposited Li increased the binding energy of the $\sigma$ bands by
2 eV. Subsequent O\textsubscript{2} exposure resulted in a 0.28 eV
backshift to the Fermi level. For the $\theta_{\mathrm{Li}}=\theta_{\mathrm{max}}$
sample, deposited Li increased the binding energy of the $\sigma$
bands by 2.35 eV, i.e., by the largest amount observed in our experiments.
The following O\textsubscript{2} exposure then caused a significantly
larger backshift of 1.49 eV, as illustrated in Fig. \ref{fig4} by
green arrows. Additional 100 L O\textsubscript{2} exposure of the
$\theta_{\mathrm{Li}}=\theta_{\mathrm{max}}$ sample did not produce
any additional shift of the bands. Apparently, introduction of oxygen
triggers reduction of the local electric potential $\phi_{\mathrm{loc}}$,
and the magnitude of such reduction depends on the amount of Li that
has been deposited on the sample. In agreement with Fig. \ref{fig2}(a),
intensity of the Ir bands at $\approx1$ eV below the Fermi level
gets reduced after the deposition of $\theta_{\mathrm{max}}$ of Li.
Importantly, the intensity of these Ir bands is not restored upon
oxygen exposure.
\begin{figure}
\begin{centering}
\includegraphics{fig4}
\par\end{centering}
\caption{\label{fig4}EDCs at $k_{\parallel}=0.4$ Å\protect\textsuperscript{-1}
for samples with $\theta_{\mathrm{Li}}=0.32\:\theta_{\mathrm{max}}$
(top panel) and $\theta_{\mathrm{Li}}=\theta_{\mathrm{max}}$ (bottom
panel) that were subsequently exposed to molecular oxygen. Green arrows
indicate the shift of the $\sigma$ bands to higher binding energies
due to Li deposition (+Li, red curves), followed by a backshift to
the Fermi level after oxygen exposure (+O\protect\textsubscript{2},
orange curves).}
\end{figure}
\section{Discussion}
The outlined experiments clarify the spatial arrangement of Li atoms at
different coverages. Reduction of the Ir and moiré spots and an increase
in intensity of the hBN spots, as visible in Fig. \ref{fig3}(b),
are typical signatures of decoupling related to the insertion of atoms
between 2D material and its substrate \citep{Ulstrup2014a,Pervan2015,Silva2019,Lin2018a}.
Since such LEED intensity modification has been observed already for
small amounts of Li, we conclude that Li atoms initially intercalate
between hBN and Ir, rather than staying adsorbed on the vacuum side of
hBN.
Further elaboration of Li positioning is provided by O\textsubscript{2}
exposure experiments. Any Li atoms adsorbed on hBN are able to react
with O\textsubscript{2} molecules and form Li oxides, Li\textsubscript{2}O\textsubscript{x}
\citep{SHEK1990,Pervan2017}. In contrast to elemental Li, Li oxide
does not act as an efficient electron donor. Therefore, in a system
with Li\textsubscript{2}O\textsubscript{x} present, the total amount
of charge transferred to Ir is reduced in comparison to a system with
elemental Li. Consequently, corresponding induced dipole magnitude,
electric potential and shift of the electronic bands are also reduced.
We believe this scenario is visible in Fig. \ref{fig4}. Exposure
to O\textsubscript{2} resulted in a backshift for both $\theta_{\mathrm{Li}}=0.32\:\theta_{\mathrm{max}}$
and $\theta_{\mathrm{Li}}=\theta_{\mathrm{max}}$ samples as a result
of Li\textsubscript{2}O\textsubscript{x} formation on top of hBN.
Therefore, the observation of a backshift indicates the presence of
adsorbed Li, since hBN layer is chemically inert and its electronic
structure would not be affected by the presence of O\textsubscript{2}.
However, the backshift is significantly larger (approximately five
times) for the $\theta_{\mathrm{Li}}=\theta_{\mathrm{max}}$ sample.
This is explained by a larger quantity of Li adsorbed on hBN prior
to oxygen exposure, accompanied by a larger charge loss and a more
pronounced electric potential reduction when Li\textsubscript{2}O\textsubscript{x}
forms. Also, the fact that Ir bands remain attenuated after O\textsubscript{2}
exposure corroborates the scenario of Li oxide formation on top of
hBN, rather than oxygen-promoted removal of adsorbed Li. Intercalated
Li is protected from oxidation by hBN layer \citep{Matyba2015,Pervan2017}
and is responsible for the residual electric potential causing the
remaining shift of the $\sigma$ bands. We speculate that the discrepancy
in the remaining shift for the two oxidized samples originates from
different amounts of Li\textsubscript{2}O\textsubscript{x} on each
of them, which give rise to different dielectric surroundings.
Overall, it can be concluded that Li is initially being intercalated
at the hBN-Ir interface, and starts adsorbing and accumulating on
top of hBN at the later stages of deposition {[}see Fig. \ref{fig3}(c){]}.
Similar behavior has been found for Li deposition on graphene/Ir(111)
\citep{Petrovic2013a,Pervan2015}, and is conditioned by the size
of Li atoms. Having small dimensions, Li atoms require a small amount
of energy to delaminate 2D materials from their substrate, and are
able to intercalate already at RT. In addition, Li atoms can penetrate
2D materials even through the smallest defects which might be impermeable
to other, larger atoms. Intercalation through defects is a reasonable
assumption, since the migration of a Li atom through a perfect hBN
mesh is energetically very expensive ($\approx7$ eV \citep{Oba2010}).
Intercalated Li cations are effectively screened by the Ir substrate
\citep{Petrovic2013a} (see also discussion below), which additionally
lowers their energy and promotes intercalation as a preferred system
configuration. However, as the concentration of intercalated atoms
approaches its maximum, adsorption from the vacuum side also becomes
allowed.
The stepwise deposition of alkali atoms employed in our study, which
has not been examined before for other similar systems \citep{Fedorov2015,Cai2018},
allows investigation of the respective charge transfer and hBN valence
band shift dynamics in more detail. This dynamics, in conjunction
with the conclusions about the Li positioning and the conducted fitting
of $\Delta E_{\mathrm{B}}$ with $\Delta E_{\mathrm{B1}}+\Delta E_{\mathrm{B2}}$,
is interpreted as follows. Initially, Li atoms form dilute intercalated
structures without well-defined crystallography. In such configuration,
Li atoms give a significant fraction of their 2\textit{s} electrons
to the Ir substrate, since the Li-Li distance is large and the corresponding
Coulomb repulsion penalty is negligible. Hence, at low $\theta_{\mathrm{Li}}$,
Li atoms are highly charged and they give rise to rapidly increasing
$\phi_{\mathrm{loc}}$ and $\Delta E_{\mathrm{B}}$. Accordingly,
it is plausible that the shift of the bands induced only by the intercalated
Li corresponds to the fast component $\Delta E_{\mathrm{B1}}$. As
$\theta_{\mathrm{Li}}$ increases, intercalated Li atoms become more
closely spaced. Then, it becomes energetically unfavorable for them
to be highly charged because the Coulomb repulsion between them would
be large, even though the charge of Li cations is partially screened
by the proximity of the metal substrate \citep{Petrovic2013a}. Overall,
due to the Coulomb penalty and screening, the effective charge per
intercalated Li atom reduces, which leads to the saturation of $\Delta E_{\mathrm{B1}}$.
In parallel, Li atoms become adsorbed on hBN, and also donate charge
to Ir. The mechanism of Coulomb penalty applies to them as well, only
without screening from the substrate due to the larger Li-Ir separation.
The charge that adsorbed Li atoms give to Ir induces additional electric
potential and the $\Delta E_{\mathrm{B2}}$ component of the overall
$\sigma$ band shift. According to Fig. \ref{fig2}(b), adsorbed Li
becomes the dominant source of $\Delta E_{\mathrm{B}}$ increase for
$\theta_{\mathrm{Li}}\gtrsim0.2\:\theta_{\mathrm{max}}$.
Therefore, an increase of Li coverage inevitably leads to a progressive
reduction of the charge transferred from each Li atom to Ir (in both
the intercalated and the adsorbed subsystems) and also screening of
Li atoms (in the intercalated subsystem). Such charge transfer dynamics
results in a continuously diminishing increase, and eventually saturation,
of $\phi_{\mathrm{loc}}$ and $\Delta E_{\mathrm{B}}$, as is evident
from Fig. \ref{fig2}(b). Indeed, it is expected that the Coulomb
penalty would provide exponential saturation of $\Delta E_{\mathrm{B1}}$
and $\Delta E_{\mathrm{B2}}$ (and therefore also $\Delta E_{\mathrm{B}}$),
since the charge of individual Li atoms reduces proportionally to
the total number of atoms in the system. In the presence of both intercalated
and adsorbed Li, the respective electric potentials add up and jointly
shift the electronic bands of hBN to higher binding energies, since
the corresponding electric dipole fields point in the same direction
towards Ir \citep{Fedorov2015}. Adsorbed Li atoms are not as highly
charged as intercalated atoms due to the substantial separation from
the metal substrate which hinders charge donation \citep{Fedorov2015},
and they cannot induce as large shifts of the electronic bands as
intercalated Li atoms do initially. However, the presence of adsorbed
Li atoms provides additional, slowly increasing electric potential
which pushes the $\sigma$ bands further to yield $\Delta E_{\mathrm{B}}$
of 2.35 eV.
\section{Summary}
Sequential deposition of Li on hBN/Ir(111) results in a stepwise shift
of the electronic bands of hBN to higher binding energies. The shift
is proportional to the magnitude of the electric potential acting
on the electrons of hBN, where the source of the potential are electric
dipoles arising from the charge transferred from Li to Ir. In the
initial stages of deposition, Li atoms get intercalated in a disordered
structure and decouple hBN from its substrate. Once intercalated,
Li atoms are highly charged, and they give rise to a significant (up
to 1.4 eV) and fast-progressing shift of the electronic bands. As
the deposition continues, Li also adsorbs on top of hBN, from where
it induces additional, somewhat smaller shift of the bands (up to
1 eV) that is characterized by a moderate increase rate. Overall,
the shift progresses rapidly in the beginning, slows down as the deposition
advances and eventually saturates at the maximum Li coverage studied.
The main reasons for the observed dynamics of valence band shift are
the facilitated charge transfer from intercalated Li atoms in comparison
to adsorbed Li atoms, the Coulomb repulsion penalty (for intercalated
and adsorbed Li) and screening from the substrate (for intercalated
Li). These factors all together cause progressive reduction of the
charge transferred per Li atom to Ir and reduction of the respective
electric potential. The presented results shed new light onto the
interaction of epitaxial hBN with charged species and describe the
response of its electronic bands to variable electric potentials,
and as such can be beneficial for optimization of chemical functionalization
and electric gating of hBN.
\begin{acknowledgments}
Financial support by the Center of Excellence for Advanced Materials
and Sensing Devices (ERDF Grant KK.01.1.1.01.0001) and by the Alexander
von Humboldt foundation is acknowledged.
\end{acknowledgments}
\section*{Declaration of competing interests}
The author declares no competing financial interests.
\setcounter{equation}{0}
\renewcommand{\theequation}
{A\arabic{equation}}
\section*{Appendix: TBA fitting and Li coverage calibration}
\subsection*{A. TBA fitting}
A good description of the $\pi$ bands of hBN can be obtained from the
tight binding approximation (TBA). By taking into account only the
first nearest neighbors, the dispersion of the $\pi$ bands can be
described by \citep{Sawinska2010,Ribeiro2011a}
\begin{equation}
\begin{aligned}E_{\mathrm{TBA}}= & \frac{E_{\mathrm{B0}}+E_{\mathrm{N0}}}{2}\pm\frac{1}{2}\sqrt{\left(E_{\mathrm{B0}}-E_{\mathrm{N0}}\right)^{2}+4\left|\phi\right|^{2}}\end{aligned}
\label{eq1}
\end{equation}
\begin{equation}
\phi=t\left[1+e^{ia\left(-\frac{k_{x}}{2}+\frac{\sqrt{3}k_{y}}{2}\right)}+e^{ia\left(\frac{k_{x}}{2}+\frac{\sqrt{3}k_{y}}{2}\right)}\right]\label{eq2}
\end{equation}
\noindent where $E_{\mathrm{B0}}$ and $E_{\mathrm{N0}}$ are the
onsite energies at the boron and nitrogen atoms, $t$ is the hopping
energy between nearest neighbors, $a$ is the lattice parameter of
hBN, and $\left(k_{x},k_{y}\right)$ is the electron in-plane wavevector.
Fitting of the experimentally measured hBN bands with TBA models is
often avoided in the literature due to the inability to access the
unoccupied conduction band, i.e., to measure the quasiparticle band
gap of epitaxial hBN. However, in order to provide a quantitative
description of the $\pi$ band, we accept a simplistic assumption
that the quasiparticle band gap of monolayer hBN on Ir(111) ($E_{\mathrm{B0}}-E_{\mathrm{N0}}$)
corresponds to the optical band gap of hBN monolayer on graphite and
equals 6.1 eV \citep{Elias2019}. After setting the lattice parameter
to $a=2.483$ Å \citep{FarwickZumHagen2016}, fitting the ARPES data
at $\mathrm{\Gamma}$ and K points with Eqs. \ref{eq1} and \ref{eq2}
provides $E_{\mathrm{B0}}=3.73$ eV, $E_{\mathrm{N0}}=-2.37$ eV,
and $t=2.78$ eV. The fitted value of the hopping parameter is in
excellent agreement with the one obtained by fitting the TBA bands
to \textit{ab-initio} calculations of hBN on graphene bilayer \citep{Sawinska2010}.
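As a numerical cross-check of the quoted parameters (my own illustration, not part of the original analysis), Eqs. \ref{eq1} and \ref{eq2} can be evaluated directly. With the fitted values, the gap at the K point reduces to $E_{\mathrm{B0}}-E_{\mathrm{N0}}=6.1$ eV by construction, since $\left|\phi\right|$ vanishes there:

```python
import numpy as np

# Fitted parameters quoted in the text (eV, Angstrom)
E_B0, E_N0, t, a = 3.73, -2.37, 2.78, 2.483

def E_TBA(kx, ky, sign=-1):
    """1NN TBA pi-band dispersion of Eqs. (A1)-(A2); sign=-1 is the occupied branch."""
    phi = t * (1 + np.exp(1j * a * (-kx/2 + np.sqrt(3)*ky/2))
                 + np.exp(1j * a * ( kx/2 + np.sqrt(3)*ky/2)))
    mean = 0.5 * (E_B0 + E_N0)
    half = 0.5 * np.sqrt((E_B0 - E_N0)**2 + 4 * abs(phi)**2)
    return mean + sign * half

kK = 4 * np.pi / (3 * a)                  # K point along kx
gap_K = E_TBA(kK, 0, +1) - E_TBA(kK, 0, -1)
print(f"gap at K: {gap_K:.2f} eV")        # equals E_B0 - E_N0 = 6.10 eV
print(f"pi band at Gamma: {E_TBA(0, 0, -1):.2f} eV")  # ~ -8.2 eV
```

This reproduces the assumed 6.1 eV quasiparticle gap and places the occupied $\pi$ band at $\Gamma$ near $-8.2$ eV for these parameters.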
\subsection*{B. Li coverage calibration}
Alkali metal dispensers used in the experiments exhibit time-dependent
yield \citep{alkali} which depends on the current used for heating
the dispenser, age of the dispenser, and potentially other experimental
factors as well. The amount of alkali atoms released from the dispenser
per unit time increases as the deposition progresses, and this needs
to be taken into account when converting Li deposition time (seconds
or minutes) into Li coverage (in units of $\theta_{\mathrm{max}}$).
In practice, for example, this means that one 5-minute-long deposition
step provides more Li than five consecutive 1-minute-long steps.
We conducted a series of 1- and 2-minute-long deposition steps. By
analyzing the corresponding ARPES spectra and the binding energy of
the $\sigma$ bands, it was found that 13 1-minute-long steps induce
the same $\Delta E_{\mathrm{B}}$ (within the experimental error),
and therefore provide the same amount of Li, as do 4 2-minute-long
steps. By recognizing that the Li dispenser yield is a linear function
of time, $y\left(t\right)=ct+y_{0}$, for typical
deposition times used in our experiments \citep{alkali}, and by assuming
that the sticking coefficient of Li does not change significantly
during deposition, the equivalence of the two Li coverages requires
\begin{equation}
13\intop_{0}^{1\,\mathrm{min}}\left(ct+y_{0}\right)dt=4\intop_{0}^{2\,\mathrm{min}}\left(ct+y_{0}\right)dt\label{eq3}
\end{equation}
\noindent to be valid. It is then straightforward to show that Li
coverage can be expressed as
\begin{equation}
\theta_{\mathrm{Li}}=\sum_{i}\intop_{0}^{\tau_{i}}y\left(t\right)dt=y_{0}\sum_{i}\tau_{i}\left(\frac{5}{3}\tau_{i}+1\right)\label{eq4}
\end{equation}
\noindent for a sequence of $i$ Li deposition steps of duration $\tau_{i}$.
By defining that a particular combination of deposition steps corresponds
to $\theta_{\mathrm{Li}}=\theta_{\mathrm{max}}$, all other Li deposition
combinations can be converted to units of $\theta_{\mathrm{max}}$.
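The algebra behind Eqs. \ref{eq3} and \ref{eq4} can be verified numerically. Working in units where $y_{0}=1$, Eq. \ref{eq3} fixes the slope to $c=10y_{0}/3$, which produces the $5/3$ coefficient in Eq. \ref{eq4}:

```python
import numpy as np
from scipy.integrate import quad

# In units where y0 = 1, Eq. (A3) reads 13*(c/2 + 1) = 4*(2c + 2), giving c = 10/3
c = 10 / 3
lhs = 13 * quad(lambda t: c * t + 1, 0, 1)[0]   # 13 one-minute steps
rhs = 4 * quad(lambda t: c * t + 1, 0, 2)[0]    # 4 two-minute steps
print(lhs, rhs)  # equal doses, as required by Eq. (A3)

# Eq. (A4): per-step dose int_0^tau (c t + 1) dt = tau*((5/3)*tau + 1)
for tau in (1.0, 2.0, 4.0, 6.0):
    dose = quad(lambda t: c * t + 1, 0, tau)[0]
    assert np.isclose(dose, tau * (5 / 3 * tau + 1))
```

Summing the per-step doses over a deposition sequence then gives $\theta_{\mathrm{Li}}$ as in Eq. \ref{eq4}.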
\bibliographystyle{elsarticle-num}
\section{Introduction}
Globular clusters may not be a robust example of \emph{simple} stellar populations
any more. Perhaps there is no such thing as simple stellar populations
from the beginning.
The classic globular clusters, such as $\omega$~Cen, NGC\,2808, and
NGC\,1851 are now suspected to be composed of heterogeneous populations,
and recent data from the space with unprecedented resolving accuracy
are hinting at a great fraction of Milky Way globular clusters
being composite populations at least chemically.
At the centre of debates is $\omega$~Cen.
It has long been known as a mysterious object.
To begin with, the spectacular southern cluster is the most massive in
Milky Way, with some million solar mass.
Its unusually broad red giant branch was found to indicate {\em discrete}
multiple populations by the magnificent effort and insight of
Lee et al. (1999) using a mere 0.9 m telescope.
More recent works with much superior instruments unambiguously revealed the
multiplicity of the giant cluster.
Norris (2004) and Bedin et al. (2005) sequentially found that the
multiplicity is evident not just in the red giant branch but also in the
main sequence.
To everyone's surprise, its bluest main sequence is too blue for the
metallicity of $\omega$~Cen and in fact more metal rich
than the redder main-sequence stars (Piotto et al. 2005).
If such a blue colour for such a metal-rich population is real,
it unavoidably indicates possibility of the scorchingly high
helium abundance, $Y \approx 0.4$.
The blue main-sequence population constitutes 30\% in number (Bedin et al.
2004; Sollima et al. 2007) and is thus not something we can simply sweep
under the carpet.
If there is any good news in this apparent nightmare, it is that
the blue main-sequence population seems to be at least younger than the
majority of the stars in this cluster, perhaps by a couple of billion years
(Villanova et al. 2007).
Significant is its implication to the extended horizontal branch in this
cluster. This and many other clusters exhibit an extended horizontal branch,
and its origin has been a long-debated issue.
Apparently, the same level of extreme helium inferred from the blue main
sequence explains the extreme blueness of the extended horizontal branch
as well (Lee et al. 2005).
If this prevails in other clusters as well, the hitherto mysterious
origin for the extended horizontal branch may also be solved by the extreme
helium.
Apparently many more clusters show multiple sequences, either on the main
sequence and/or on the sub-giant branch (Piotto 2008, this volume), even
though it is not yet clear whether such multiplicities are also to be
interpreted as originated from extreme helium.
More massive clusters tend to
show multiple sequences more often, and interestingly
the same trend is found for the extended horizontal branch (Recio-Blanco et al.
2006; Lee et al. 2007).
Extragalactic counterparts to $\omega$~Cen and the like may have been
found in the giant elliptical galaxy M87 in the Virgo cluster
as well (Sohn et al. 2006; Kaviraj et al. 2007).
Using the Hubble Space Telescope Sohn et al. (2006) found that most of
the massive globular clusters in M87 are UV bright despite their likely
old ages, as if they have an extremely hot horizontal branch.
Through an extensive test using the UV-focused population synthesis models
of Yi (2003), Kaviraj et al. (2007) concluded that the UV
strengths (a tracer of the horizontal branch morphology) of these clusters
are even stronger than that of $\omega$~Cen by more than a magnitude.
Whatever is causing the mutiplicity to $\omega$~Cen seems to affect
the M87 clusters even more greatly.
Massive clusters showing various anomalies seem to corroborate the
idea of them originally being something of a different nature, for example, nucleated dwarf spheroidal galaxies (Lee et al. 1999).
All things considered, there appears to be a huge conspiracy.
It is not yet clear whether the multiplicity in the main sequence is
the cause of that in the horizontal branch. But, they all fit in a very
sensible storyline.
Although it ruins the old and naive concept of ``simple stellar populations'',
multiplicity itself is perhaps not a huge problem.
The extreme helium abundance inferred from the blue main sequence population
would be an exciting discovery to observers but a desperate-to-forget nightmare to theorists.
I discuss why that is.
\section{Significance}
The significance of this issue is immense.
First, understanding how such an extreme helium abundance is possible is
an interesting challenge.
It also influences the current age dating techniques that are based on
the precise main sequence fitting and on detailed horizontal branch
analysis.
The seemingly-settled issue on the second parameter problem of horizontal
branch may enter a new stage with the not-so-new idea of helium with
a clearer understanding on helium enrichment processes.
The endless debate on the origin of the UV upturn found in bright
elliptical galaxies may find a new and compelling explanation with helium.
Obvious too is the impact on the issue of the age of the universe, as
globular clusters and bright elliptical galaxies are often considered
the oldest stellar populations in the universe.
\section{Observational facts and inferences}
Finding a solution to the case of $\omega$~Cen is only a beginning step
because other clusters show different constraints, but it would still
be a good start. So I attempt to find a solution adopting some of the most
widely-discussed channels.
Our {\em simplified} constraints are as follows.
\begin{itemize}
\item The age separation: the blue main sequence subpopulation is
1--3 billion years younger than the red main sequence subpopulation;
i.e., $t(bMS) \approx t(rMS) - 1$--3 (Lee et al. 2005; Stanford et al. 2007;
Villanova 2007). I think the exact value is poorly constrained but
for now take $\Delta t =1$Gyr as a face value.
\item The mass fraction: the number (and mass) fraction of the blue main
sequence subpopulation is roughly 30\% (Bedin et al. 2004),
i.e., $f(bMS) = 0.3$. I will aim to find a solution that satisfies this.
However, this may not place as strong a constraint as I take it,
if the mass evolution of sub-populations is complex.
I will discuss this in detail in \S 6.
\item Discrete sub-populations: the main sequence and horizontal branch
splits appear very sharp and discrete. Hence, a stochastic element
in a solution to the extreme helium abundance cannot be dominant.
Instead, it has to offer a process that leads to a clear prediction in
helium abundance.
\item The metal abundance: $Z(rMS)=0.001$ and $Z(bMS)=0.002$
(Piotto et al. 2005). The metallicity of the blue main sequence is
difficult to pin down due to their faintness and so still uncertain.
But it seems clear that it is higher than that of the red main sequence.
\item The helium abundance: the helium abundance of the blue main sequence
sub-population is 40\% by mass, i.e., $Y=0.4$. In reality, the observed
colour-magnitude diagramme shows as many as 5 sub-populations.
But it is impossible to make a model that pins down all the sub-populations
found. Hence, I approximate them as 2 sub-populations: the red main
sequence has an ordinary helium abundance and the blue main sequence has
an extreme helium abundance. As I will discuss in the end, it is
perhaps very important to remind ourselves repeatedly that {\em the
helium abundance was never directly measured but inferred from
main sequence fitting.} Despite this, I take it at face value.
\item The helium enrichment parameter: the helium and metal abundances
lead to the incredible helium enrichment parameter
$\Delta$$Y$/$\Delta$$Z \approx 70$. Ordinary populations with ordinary stars
yield $\Delta$$Y$/$\Delta$$Z \approx 2$--3 even for a wide variety of
initial mass functions. Hence, this poses the most challenging problem of
all. I will focus much of my tests on this issue.
\item Other elements: spectroscopic measurements of carbon and nitrogen
are available, i.e., $[C/M]\sim 0$ and $[N/M] \sim 1$ for the
blue main sequence population. However, their accuracy seems not as
good as one might hope, and error estimates (i.e., measurement
significances) are not provided. It is already a daunting task to reproduce
the helium properties, and so I will only use this information as a reference.
\end{itemize}
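As a quick arithmetic check on the last two constraints, the enrichment parameter follows directly from the abundance differences between the two sub-populations. Note that the red main sequence helium abundance of $Y \approx 0.33$ used below is an illustrative assumption of mine (it is not quoted above), chosen so that the constraints reproduce $\Delta$$Y$/$\Delta$$Z \approx 70$:

```python
# Helium enrichment parameter Delta Y / Delta Z implied by the
# sub-population abundances quoted in the text.  The red main sequence
# helium abundance y_low = 0.33 is an illustrative assumption chosen to
# reproduce the quoted value of ~70; it is not measured directly.
def enrichment_parameter(y_low, y_high, z_low, z_high):
    """Return Delta Y / Delta Z between two sub-populations."""
    return (y_high - y_low) / (z_high - z_low)

ratio = enrichment_parameter(y_low=0.33, y_high=0.40,    # helium mass fractions
                             z_low=0.001, z_high=0.002)  # metallicities
print(f"Delta Y / Delta Z = {ratio:.0f}")  # ~70, vs. ~2-3 for ordinary populations
```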
\section{Asymptotic giant branch stars}
The most obvious candidate origin for such extreme helium is asymptotic
giant branch stars (e.g., Izzard et al. 2004; D'Antona et al. 2005, among many).
Although there is quite a scatter in the chemical yield predictions,
there is consensus that asymptotic giants generate a copious amount of
helium but only a small amount of metals (e.g., Maeder 1992).
This is good for us because we do not just want to produce a lot of helium
but want to achieve the high value of the helium enrichment parameter as well.
Supernovae, for comparison, produce too much metal to satisfy this
constraint, although they are also good producers of helium.
This is such a basic understanding that it does not require elaboration, but
it has recently been pointed out in a quantitative manner anyway
(Choi \& Yi 2007).
The asymptotic giants in a narrow mass range ($M \approx 5$--6 solar masses) indeed
release ejecta with the high value of the helium enrichment parameter that we aim
to achieve. So if a population receives stellar mass ejecta mainly
from asymptotic giant stars but nothing else, it is in principle possible
to achieve such a high value of the helium enrichment parameter.
More massive stars would produce both metals and helium.
Hence, an {\em ad hoc} scenario, where all the mass ejecta from massive stars
(say $M>M_{\rm esc}$, where $M_{\rm esc} \sim 5$--10 solar masses) would
escape the gravitational potential where subsequent star formation
occurs, can be set up to maximise the impact of the asymptotic giants
in terms of the helium enrichment parameter.
The effectiveness of the {\em maximum AGB scenario} has been discussed
by a few groups (e.g., Karakas et al. 2006; Bekki et al. 2007), and
Choi \& Yi (2008) performed a detailed calculation to check its viability.
Choi \& Yi (2008) adopted a toy model where the original gas reservoir
does not accept any new gas infall from outside but the mass ejecta
from massive stars above $M_{\rm esc}$ escape it supposedly via
supernova explosion, hence maximising the helium enrichment effect
from asymptotic giants. It is plausible that the kinetic energy
of the mass ejecta from supernova explosions achieves the
escape energy of such a small potential well.
It is assumed that a fraction (50--100\%) of the initial gas is
used to form the first (red main sequence) population and the
subsequent population (blue main sequence) will be born from the
remnant gas mixed with the stellar mass ejecta from the first population.
The abundance of the initial gas is assumed to be the abundance of the
red main sequence population of $\omega$~Cen.
If a higher fraction of the initial gas reservoir is used to build
the first population, it would obviously result in a higher value of
helium abundance and helium enrichment parameter for the second
population, but only a small amount of gas becomes available for
the second population formation.
If we do not adopt any constraint on the age difference between the
red and blue main sequence populations, we can achieve a very high
helium abundance ($Y\approx 0.36$, which is almost as high as we aim
to reach) and the maximum value of the helium enrichment parameter of
about 70, as we hoped for.
In this case, the age difference is roughly 0.1Gyr, and the second
generation is virtually a pure recycling product of the first generation
stellar mass ejecta within a narrow mass range of 5--6 solar mass.
But in this case, the total mass ratio between the red and blue main
sequence populations becomes 99.3 : 0.7; that is, only 0.7\% of the
total population in $\omega$~Cen can benefit from this scenario.
Since the blue main sequence population is observed to be 30\% instead of
0.7\%, there is a factor of 40 discrepancy! I call this ``the mass
deficit problem''.
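The scale of this deficit can be seen from a back-of-the-envelope mass budget: under a Salpeter initial mass function, only a percent-level fraction of the first generation's mass is born in 5--6 solar mass stars, and only part of that mass is returned as ejecta. The sketch below uses illustrative round numbers (e.g., a 1 solar mass white dwarf remnant), not the detailed Choi \& Yi (2008) calculation:

```python
# Back-of-the-envelope mass budget for the maximum AGB scenario.
# Salpeter IMF dN/dm ~ m^-2.35 on 0.1-100 Msun; the 1 Msun remnant mass
# assumed for a 5-6 Msun AGB star is an illustrative round number.
def imf_mass_fraction(m_lo, m_hi, m_min=0.1, m_max=100.0, alpha=2.35):
    """Fraction of total stellar mass born in [m_lo, m_hi] for a power-law IMF."""
    p = 2.0 - alpha  # exponent after mass-weighting the integrand m * m^-alpha
    integral = lambda a, b: (b**p - a**p) / p
    return integral(m_lo, m_hi) / integral(m_min, m_max)

f_agb = imf_mass_fraction(5.0, 6.0)   # ~1.7% of first-generation mass
f_ejected = (5.5 - 1.0) / 5.5         # ~82% of a 5.5 Msun star is returned
recycled = f_agb * f_ejected          # gas available for the second generation
print(f"recycled mass fraction ~ {100 * recycled:.1f}% of the first generation")
```

This percent-level recycled fraction is why the second generation can be at most of order 1\% of the total mass under pure recycling, far from the observed 30\%.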
One may achieve somewhat different estimates by adopting different
yields. For example, Renzini (2008) uses the recent yield
for the so-called ``super-AGB stars'' to find that the mass discrepancy
can be as small as 15 instead of 40.
If we take the age difference of roughly 1Gyr as a valid constraint,
the situation becomes dramatically worse.
This is because, even if we assume the $M_{\rm esc}$ argument,
the stars in the mass range 2--5 solar mass will now contribute
to the gas reservoir through stellar mass ejecta which is in general
of substantially lower helium abundance ($\sim 0.3$)
and helium enrichment parameter ($\sim 2$--5).
Consequently, this scenario with 1Gyr age separation can achieve up to
$Y \approx 0.3$ and $\Delta$$Y$/$\Delta$$Z \approx 10$ while the
upper limit in the mass fraction of the second generation is just 7\%
(instead of 30\%).
Quite apart from the shortcoming in the helium properties, the mass
fraction requirement cannot be met either.
The verdict on the maximum AGB scenario and its variations can be summarised
as follows. The extreme helium-related properties are almost impossible if
the age difference is a meaningful constraint, hence making this
scenario totally implausible.
If the age separation constraint can be eased off, the extreme
value of the helium enrichment parameter ({\em but not the helium abundance
itself}) can be reproduced by the first generation of
asymptotic giants for the following conditions and criticisms.
\begin{itemize}
\item The stellar mass ejecta from massive stars of $M > 6$ must all escape
the gravitational potential well. If the super AGB scenario (e.g. Siess
2007) is adopted, this mass limit can be as high as 10 solar mass.
If all supernova ejecta leave the system as high-velocity wind material, this is
not a bad assumption, but assuming that the supernova ejecta leave
completely without affecting the remaining gas in the reservoir
is extreme and very unlikely.
\item The blue main sequence population must form exactly 0.1\,Gyr
after the red main sequence population, in disagreement with the
1\,Gyr separation suggested by previous studies. I personally think
the age separation constraint is not strong, and thus
0.1\,Gyr is not particularly unappealing.
\item The mass deficit of a factor of 40 (which can be somewhat smaller
if super AGB stars are adopted) is a serious threat and requires
a rescue plan.
A possible remedy may be found in the details of the cluster dynamical
evolution, which is discussed in \S 6.
\item An encouraging aspect of this scenario is that the discreteness of the
separated populations is easy to explain. The second generation forms
from the mass loss of the first generation 0.1Gyr later.
\end{itemize}
\section{Fast-rotating massive stars}
A totally different solution was put forward by the massive stellar evolution
models. Maeder \& Meynet (2006) suggested that metal-poor
massive stars that are rotating
nearly at their break-up speed may release a lot of helium via
{\em slow wind}
before they start burning heavy elements and explode as a supernova.
Their idea came from their earlier works (Maeder \& Meynet 2001)
that suggested (1) low metallicity stars reach break-up rotational speed
more easily by the combined effect of stellar (slow) winds and rotation,
(2) they have efficient mixing of their core materials, that is,
helium and other heavier elements (depending on the rotation speed)
out to the surface, and (3) during their blue
loop after the red giant phase, a fast contraction leads to excessive
mass loss from the helium and nitrogent enhanced surface material.
The elemental yields via the slow wind are sensitive to the rotational speed
adopted. For example, for a 60 solar mass star with $\log Z = -5$,
a fast rotating model at 85\% of the break-up speed yields
a helium yield of 5.86 solar masses, a metal yield of
0.09 solar masses, and thus a helium enrichment parameter of
63.3 (which is very close to our aim!). On the other hand,
for a moderately fast rotating model at 35\% of the break-up speed,
the yields become $\Delta Y = 1.73$, $\Delta Z = 2.6\times 10^{-5}$, and
$\Delta$$Y$/$\Delta$$Z \approx 10^5$. These moderately rotating
stars generate excessively high values of $\Delta$$Y$/$\Delta$$Z$ and
too little helium.
The fast rotating stars overproduce carbon and nitrogen
compared to observations, while the moderately
rotating stars reproduce the observations better. But we still select the
fast-rotating models in our exercise (Choi \& Yi 2007)
because they produce much more helium and are thus more likely to satisfy our aim.
The toy model of Choi \& Yi (2007), using the metal-poor massive rotating
stars of Maeder \& Meynet (2006), shows that a simple population based
on an ordinary initial mass function indeed achieves
the high values of helium abundance and of $\Delta$$Y$/$\Delta$$Z$
in the stellar mass loss, as we aim to recover. These values are further
elevated by the helium-dominant contribution from asymptotic
giants until lower mass giants become the main source of chemical yields.
Thus this phenomenon of high helium properties lasts only for a
short period of time, of order 0.1--0.2\,Gyr, just as in the AGB scenario.
Once the population becomes older than that, its accumulated stellar mass loss
will no longer have such high values of the helium properties.
We find, however, that the amount of gas with the high helium properties
can be only roughly 1.4\% of the total stellar mass of $\omega$~Cen,
which is a factor of 20 too small for it to be the sole solution to
this problem.
This mass deficit of a factor of 20 is smaller (and thus better) than
that of the asymptotic giant branch star scenario simply because this time
we have helium contributions from massive rotating stars as well as
from asymptotic giants.
Here, we assume that only the slow wind stellar mass loss from the massive
stars remains in the gravitating system and the fast wind (explosion)
material leaves the system without polluting the gas reservoir.
In conclusion, we could not find a solution if the age separation of
1 Gyr or so is a meaningful constraint. For a much smaller age
separation of order 0.1Gyr, we could achieve high values of helium
parameters, but even in this case the mass available for the formation
of the second generation is a factor of 20 smaller than what we have
in $\omega$~Cen. This problem has been noted also by a much more
detailed dynamical simulation of Decressin, Baumgardt, \& Kroupa (2008;
see also the article by Decressin in this volume).
We will discuss this further in \S 6.
Another serious problem in this scenario is the carbon abundance.
While it depends strongly on the rotational speed adopted,
the 60 solar mass model with 85\% of the break-up speed suggests
that the slow wind mass loss will be highly enriched in carbon, which
is not supported by the observational data (Piotto et al. 2005).
For this scenario to be appealing, we also need to understand how
a specific rotation speed is determined for the stars. Why does it happen
to some clusters (like $\omega$~Cen) but not to others?
Is it randomly given to each cluster, rather than to each forming star?
That would be very odd.
This scenario with fast-rotating massive stars certainly adds to what
was already possible from the asymptotic giant stars and thus provides
a positive contribution. However, it alone does not appear to provide
a full solution to our problem.
\section{Dynamical evolution}
The blue main sequence population seems more centrally concentrated
than the red main sequence population. If this was true from the start,
one can expect that the spatially more extended red main sequence
population would lose more stars throughout its dynamical evolution.
D'Ercole et al. (2008; and also the poster at this meeting)
indeed suggested that a substantial fraction of the first generation of stars
may escape the system if some conditions are met.
For example, if the initial mass distribution within each globular
cluster follows the King profile and {\em if its King radius is equal to
its true tidal radius}, then it is very easy to shed some high velocity stars
into space.
In this case, only 2--3\% of the original first generation stars
may remain in the cluster, mainly due to the kinetic energy injection by
supernova explosions and two-body relaxation.
If this is true, it makes both the AGB scenario and the massive
fast-rotating star scenario viable.
Whether these conditions were easy to meet by the first generation
clusters is not yet clear, however. More traditional studies (e.g., Fall \&
Zhang 2001) based on evaporation by two-body relaxation, gravitational shocks,
and stellar mass loss suggest an order of magnitude milder mass evolution.
The mass evaporation is supposed to be sensitive to the mass of the
cluster in the sense that {\em a more massive cluster would shed less mass}.
So, if the dynamical evaporation was indeed the key to this extreme helium
phenomenon, it would be very unlikely to happen preferentially to the most
massive clusters.
Unfortunately for this scenario, $\omega$~Cen is the Milky Way's most
massive cluster, and the other clusters showing multiplicity, NGC\,2808
and M\,54, are among the most massive, too.
Besides, the extended horizontal branch globular clusters in the Milky Way
and the UV-brightest clusters in M\,87 all occupy the highest
mass end in the total cluster mass distribution of the galaxy.
In this sense, the dynamical evaporation picture loses its charm.
If D'Ercole et al.'s dramatic mass evolution is applicable to
all globular clusters, then it would have a significant impact on the
cluster luminosity function evolution. Typical clusters in the Milky Way
are presently of a million solar masses, which is of the same order as
the mass of the giant molecular
clouds, the main site of star formation, and as the mass of the
star clusters forming in nearby merging galaxies.
In this regard, I feel that this scenario of shedding 98\% of the
initial mass of the first population is likely a rare event.
Perhaps this is why the main sequence splits are not a common feature.
Otherwise, that is, if such a dramatic mass evaporation had been
true for all clusters, then we should find our galactic stellar halo
to have at least ten billion solar masses, which is an order of magnitude
greater than the current estimate.
I strongly feel that a physical understanding of the dynamical
process (and of when such conditions are met) is required, and detailed
dynamical models, cross-checked against
the observed cluster luminosity functions, are called for.
\section{The first stars}
There must have been stars before population I and II stars.
This is evident from several arguments.
Theoretically, the mass of the first objects that experience dynamical
instability is estimated to be stellar rather than galactic.
This is consistent with the fact that reionisation is (although indirectly)
observed through the cosmic microwave background radiation studies.
Observationally, despite the fact that the big bang itself did not
generate any appreciable amount of heavy elements,
totally metal-deficient stars are not found anywhere.
Even the most metal-deficient stars show $\log Z \sim -5$, and more
typical metal-poor stars have metallicities greater than a hundredth
of the solar value.
This means the pregalactic gas reservoir must have been enriched in metals
substantially.
The most probable objects for this are the first stars, a.k.a. population
III stars.
The first stars are often thought to have been very massive, above
a hundred solar mass, while other possibilities are also being considered.
The duration of the first star formation episode is considered to have
extended well into that of population II (Bromm \& Loeb 2006).
If we are considering a proto-galactic scale system, the mixing time
scale for the chemical elements may have been of order a hundred million
years, and thus, considering both the extended star formation timescale
and the varying mixing timescale, some {\em chemical inhomogeneity} in
the gas reservoir for population II star formation was inevitable.
Marigo et al. (2003) have computed the chemical yields for such metal-free
stars of mass between 100 and 1000 solar mass.
Surprisingly, their models suggest that the first stars were very efficient
in generating and releasing helium into space, but not metals.
This is mainly because the first stars had such an enormous radiation pressure
that the balance between mass accretion for growth and
radiation pressure was difficult to achieve; that is,
the strong radiation pressure blew away the gas that was being accreted.
So the first energy generation involving hydrogen burning was possible,
but before the star reached the next stage it would release much of the
material processed by then: i.e., helium.
This results in a high helium to metal ratio, as we were looking for.
Choi \& Yi (2007) have indeed investigated the effect of this on the
helium enrichment of the gas cloud.
They found that, in the range between 100 and 1000 solar masses,
a lower-mass first star produces a much larger value of
$\Delta$$Y$/$\Delta$$Z \sim 10^{7-8}$. (No, this is not a typo.)
First stars of 1000 solar masses are predicted by this model to have
$\Delta$$Y$/$\Delta$$Z \sim 10^{2}$, which is much closer to our aim.
Adopting a Salpeter initial mass function\footnote{As I type this part
I just learned of Professor Salpeter's death. We have just lost one of
the greatest astrophysicists of our time.},
we found that a first star
population with a mass range of 100--1000 solar masses releases virtually
no metals but abundant helium, thus reaching
$\Delta$$Y$/$\Delta$$Z \sim 500$.
A population with a higher value of the lower mass bound results in a
gradually lower value.
Eventually, a population purely made up of 1000 solar-mass stars would
have $\Delta$$Y$/$\Delta$$Z \approx 70$.
After letting the first star population evolve for a billion years,
the remnant gas cloud (the primordial gas left over from the first star formation
plus the stellar mass loss, mixed evenly) reaches the metallicity
of the blue main sequence ($Z=0.002$), the helium abundance ($Y=0.4$), and
thus the helium enrichment parameter ($\Delta$$Y$/$\Delta$$Z \approx 70$),
with no further free parameter.
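The mixing step here amounts to mass-weighted averaging of the abundances. The ejecta composition below is an illustrative placeholder of mine (not the Marigo et al. yield), chosen so that the mixture can reach the blue main sequence values; the instructive point is that, because the pristine gas carries essentially no metals, the mixture inherits the ejecta's enrichment ratio regardless of the mixing fraction:

```python
# Mass-weighted mixing of pristine gas with first-star ejecta.
# Pristine gas: roughly big-bang abundances (Y ~ 0.245, Z ~ 0).
# The ejecta composition Y_EJ, Z_EJ is an illustrative placeholder,
# not the Marigo et al. yield.
def mix(f_ej, y_gas, z_gas, y_ej, z_ej):
    """Abundances of a mixture containing ejecta mass fraction f_ej."""
    y = (1 - f_ej) * y_gas + f_ej * y_ej
    z = (1 - f_ej) * z_gas + f_ej * z_ej
    return y, z

Y0, Z0 = 0.245, 0.0                          # pristine gas
Y_EJ = 0.80                                  # assumed ejecta helium fraction
Z_EJ = 0.002 * (Y_EJ - Y0) / (0.40 - Y0)     # chosen so targets are consistent
f = (0.40 - Y0) / (Y_EJ - Y0)                # ejecta fraction needed for Y = 0.4
y, z = mix(f, Y0, Z0, Y_EJ, Z_EJ)
print(f"f_ej = {f:.2f}, Y = {y:.3f}, Z = {z:.4f}, dY/dZ = {(y - Y0)/(z - Z0):.1f}")
```

Since mixing with metal-free gas dilutes $\Delta Y$ and $\Delta Z$ by the same factor, the enrichment ratio of roughly 70--80 must already be present in the ejecta, which is why the scenario has no further free parameter once the yields are fixed.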
This scenario is briefly investigated by Choi \& Yi (2008) and
can be chronologically described as follows.
\begin{enumerate}
\item The majority of first stars form in the universe at a redshift of
roughly 20 ($t \equiv 0$).
\item These stars release much helium and some metals.
\item The chemical mixing in the proto-galactic cloud takes a long time,
and after hundreds of millions of years, chemically-mixed regions are more
common than unmixed regions.
\item From a chemically mixed region, the red main-sequence population of
$\omega$~Cen forms ($t \sim 0.5$Gyr).
\item From the pristine (unmixed) gas in the vicinity, a second generation
of first stars forms ($t \sim 0.7$Gyr).
\item They generate abundant helium and few metals and enrich the remnant
gas cloud to $Z \sim 0.002$, $Y \sim 0.4$, and thus
$\Delta$$Y$/$\Delta$$Z \sim 70$.
\item From this gas cloud, the blue main-sequence population of $\omega$~Cen
forms ($t \sim 1.5$Gyr).
\item The blue main-sequence population merges into the more massive
red main-sequence population soon after their formation.
\end{enumerate}
This picture is very rough, however, and contains many caveats.
\begin{itemize}
\item The first star chemical yields may be highly uncertain.
A more robust understanding of the formation and evolution of first stars
will perhaps come in the near future, but more importantly, independent
calculations (besides Marigo et al.) would be required immediately.
\item In this scenario the first stars (at least the ones that led to the
gas reservoir for the formation of the blue main-sequence population)
should have very high masses, of order 1000 solar masses.
This is not supported by some recent first star studies.
\item We need not just a couple of first stars in this region but more than
one hundred. How so much material gathers in this proto-cloud is
a mystery, especially when first stars are often believed to form
in isolation rather than in multiples.
\item The physics of the chemical mixing and its timescale is
highly uncertain, as is the case for the other scenarios.
\end{itemize}
Given all these uncertainties, it is difficult to argue that the first star
scenario is any more compelling than the others. However, it is still
a very exciting possibility. After all, we astronomers are always
the first ones to find something wrong as well as new and important.
This conjecture at least calls for more studies of the first stars.
\section{Alternative theories}
Alternative theories are also available.
A velocity-dispersion-dependent surface pollution scenario using AGB ejecta
was put forward by Tsujimoto et al. (2007).
A similar surface pollution scenario was presented by Newsham \& Terndrup
(2007). While there can be several channels for the pollution,
it provides an interesting possibility: that the blue main-sequence stars
are not truly as helium-rich as we believe but pretend to be so
by having an unusually high helium abundance only on the stellar surface.
Mass transfer of surface material in binary stellar systems could
be one channel; alternatively, stars passing through the cluster central region,
where helium-rich gas from the accumulated stellar mass loss is located,
may be polluted on the surface.
However, it is very unlikely that 30\% of the stars get contaminated
like this. Besides, all these processes would occur in so random a manner that
the discreteness of the blue main sequence would be unnatural.
The possibility of primordial fluctuations in the helium density
was presented by Chuzhoy (2006). In this study, the helium diffusion time
scale for primordial gas clouds of stellar mass was of order a hundred
million years, and thus some of the birth clouds for first stars were
heavily enriched in helium. But again, the diffusion timescale
must depend on the conditions of each birth cloud, which should be rather
random, which makes the main sequence discreteness a tough problem to solve.
\section{Conclusions}
Theorists are often very optimistic, thinking that a tough problem
is challenging rather than mind-boggling. But I must admit
that I am much more than puzzled by this ``extreme helium population problem''.
The presence of multiple populations in globular-cluster-sized systems
is surely a problem, as numerical simulations of the kind performed
by Bate, Bonnell, \& Bromm (2008) suggest that the star formation
in a cluster probably happens on the crossing time scale,
which is of order a million years.
But we have seen other small populations having a complex star formation
history, e.g., Carina dwarf galaxy (Smecker-Hane et al. 1994).
A more critical issue is the extreme value of helium properties.
I do not believe that we have a compelling theory yet.
Asymptotic giant branch stars are a familiar class and thus make
our minds susceptible. But I believe that I have shown that this scenario still
has the mass deficit problem, by a factor of at least 40, which is
threateningly large even to astronomers.
The same is true for the massive stars rotating nearly at the break-up
speed. They alone cannot provide the full answer and suffer from
a similar mass deficit problem.
Their physical plausibility is also something to be worked out.
The first star scenario is fascinating because first stars are
a mystery in general. We believe that they were once around
but have never seen them, a bit like black holes.
They provide a plausible solution, but just barely.
It has so many caveats and uncertainties that they cannot be clarified
in the next few years. Hence it loses its charm, too.
I said at the end of my presentation at this conference that
the enigmatic extreme helium population is so tough on theorists
that I would almost feel happy if someone came up to say,
``It was all a mistake from the start. There is no such extreme
helium population''. George Meynet disagreed with me. He instead
said the problem is so enigmatic that we are greatly challenged and
excited. I was humbled by his constructive attitude. I hope to
see a more believable solution in the near future.
\begin{acknowledgments}
I thank Ena Choi for countless constructive discussions.
Much of this review is based on several key papers written by Choi \& Yi
(2007, 2008), Decressin, Charbonnel \& Meynet (2008), and by Renzini (2008).
I am grateful to Young-Wook Lee, Suk-Jin Yoon, Ken Nomoto (during my visit
to Tokyo University), and Enrico Vesperini for constructive discussions.
Special thanks are due to Changbom Park for stimulating discussions on
stellar collisions in clusters during my visit to the KIAS.
This research has been supported by Korea Research Foundation (SKY).
\end{acknowledgments}
\section{Introduction}
In order to accommodate the ever-growing demand for higher data-rates, the wireless spectrum has been continuously expanding over the past several decades. Millimeter wave (mm-wave) communication networks are already being used in
the fifth generation (5G) wireless systems to allow for larger channel bandwidths
compared to earlier generation radio frequency (RF) systems which operate in frequencies {below 6 GHz \cite{6GHz}}. In pursuing this relentless trend towards achieving higher data rates using larger channel bandwidths, attention is shifting to the next frontier, which is to utilize the terahertz (THz) frequency bands, broadly defined as 0.1--10 THz \cite{sarieddeen2021overview,chaccour2022seven,do2021terahertz}. {This paradigm is being facilitated by recent advances in the design of integrated circuits and systems and antenna elements operating at THz frequencies \cite{AghasiTHz,RuonanTHz,PayamTHz,NiknejadTHz,HajimiriTHz,hedayat}.}
The energy consumption of components such as analog to digital converters (ADCs) increases significantly in mm-wave and THz systems due to several factors as elaborated in the following. In theory, the power consumption of an ADC grows linearly with bandwidth, and the rate of increase is even more significant in practical implementations due to the excessive loss associated with the passive components at higher frequencies which causes an abrupt drop in ADC energy-efficiency as the bandwidth is pushed past 100 MHz \cite{do2021terahertz,murmann2018adc,BR,ADCpower}.
For instance,
the power consumption of current commercial
high-speed ($\geq$ 20 GSample/s), high-resolution
(e.g. 8--12 bits) ADCs is around 500 mW per ADC \cite{zhang2018low}. Furthermore, in order to mitigate the inherent high isotropic path loss and sensitivity to blockages at high frequencies, mm-wave and THz systems must leverage {directive} narrow beams, by using large antenna arrays {to increase the antenna gain} at both base stations (BS) and user-ends (UE) \cite{rappaport2015millimeter,do2021terahertz,sarieddeen2021overview,chaccour2022seven}. For instance, 5G wireless networks envision hundreds of antennas at the BS and in excess of ten antennas at the UE \cite{hong2014study}, and in THz application scenarios antenna arrays with hundreds of elements are being considered \cite{sarabandi2018novel,ning2021prospective}.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{JSTSP_Overview_2.pdf}
\caption{The receiver architecture consists of an analog/hybrid beamforming module, an elementwise analog operator $f_a(\cdot)$, $n_q$ low-resolution ADCs, and a blockwise digital operator $f_d(\cdot)$ with blocklength $n$. $Y$ represents the received signal, $\widehat{M}$ is the message reconstruction, and $[\Theta]$ is the message set.
}
\label{fig:0}
\end{figure}
In conventional multiple-input multiple-output (MIMO) systems with digital beamforming, each antenna output is digitized separately by a dedicated ADC \cite{DigBF1}. This requires a large number of ADCs, which are a significant source of power consumption in large bandwidth MIMO receivers \cite{heath2016overview,mendez2015channel}. Analog and hybrid beamforming has been proposed as a way to mitigate ADC power consumption by reducing the number of ADCs. Under hybrid beamforming, the receiver terminals use a collection of analog beamformers to linearly combine the large number of received signals and feed them to a small set of ADCs. Additionally, in standard ADC design, power consumption is proportional to the number of quantization bins and hence grows exponentially in the number of output bits \cite{walden1999analog}, which prohibits the use of high resolution ADCs. There have been extensive recent efforts to design receiver architectures and coding strategies using
analog and hybrid beamforming with a small number of few-bit ADCs \cite{ning2021prospective,yuan2020hybrid,han2021hybrid,molisch2017hybrid,heath2016overview,alkhateeb2014mimo,abbasISIT2018,khalili2020throughput,dutta2020capacity,mollen2017achievable,mollen2016one,jacobsson2017throughput,pirzadeh2018spectral,li2017channel}.
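The exponential scaling noted above is commonly captured by a Walden-style figure of merit, $P \approx \mathrm{FOM} \cdot 2^{b} \cdot f_s$, where $b$ is the resolution in bits and $f_s$ the sampling rate. The sketch below is illustrative: the FOM of 0.1 pJ per conversion step is an assumed value, chosen so that the 8-bit, 20 GSample/s case roughly matches the $\sim$500 mW commercial figure cited above:

```python
# ADC power under a Walden-style figure-of-merit model P = FOM * 2**b * f_s.
# FOM = 0.1 pJ per conversion step is an illustrative assumption.
FOM = 0.1e-12        # J per conversion step (assumed)
F_S = 20e9           # 20 GSample/s, as for the commercial ADCs cited above

def adc_power(bits, f_s=F_S, fom=FOM):
    """Estimated ADC power in watts for a given resolution."""
    return fom * (2 ** bits) * f_s

for b in (1, 4, 8):
    print(f"{b:2d}-bit ADC: ~{1e3 * adc_power(b):7.1f} mW")
```

Under this model, dropping from 8-bit to one-bit resolution cuts the per-ADC power by a factor of $2^7 = 128$, which is the motivation for few-bit architectures.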
\begin{figure}[t!]
\centering
\includegraphics[width=100mm]{Fig1.pdf}
\caption{
\hspace{-0.1in} (a) $X\in \{-0.5,0.5,1.5\}$ and the ADCs generate $Y\lessgtr 0$ and $Y\lessgtr 1$, and (b) $X\in \{-1.5, -0.5,0.5,1.5\}$, and the two ADCs generate $Y\lessgtr 0$ and $|Y|\lessgtr1$.
\vspace{-0.2in}}
\label{fig:1}
\end{figure}
In this work, we consider the use of nonlinear analog operators as a way to mitigate the rate-loss due to coarse quantization and increase channel capacity of MIMO systems with few-bit ADCs.
To explain the aforementioned rate gains, let us consider a simple single-input single-output (SISO) scenario operating in the high signal-to-noise ratio (SNR) regime, i.e. $Y\approx X$. Assume that the receiver is equipped with two one-bit threshold ADCs. Then, as shown in Figure \ref{fig:1}(a), it can receive at most three different messages per channel-use by performing two threshold comparisons, e.g. comparisons with threshold zero $Y\lessgtr 0$ and threshold one $Y \lessgtr 1$, hence achieving a rate of $R=\log{3}$ bits/channel-use. Alternatively, if the receiver can produce the absolute value of the amplitude of the received analog signal, $|Y|$, then it can use the two comparators $Y\lessgtr 0$ and $|Y|\lessgtr 1$, as shown in Figure \ref{fig:1}(b), to achieve $R=2$ bits/channel-use, hence improving performance. The absolute value of the amplitude of the analog signal can be produced using envelope detectors. As an alternative approach, one could use polynomial analog operators, whose circuit implementation is studied in Section \ref{sec:cir}, instead of envelope detectors, to replace $|Y|$ by $Y^2$ in the above construction. As discussed in this paper, the design and implementation of envelope detectors is simpler and more power efficient, whereas receiver architectures using polynomial analog operators may achieve higher communication rates, especially in MIMO scenarios with a large number of received signals.
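The counting argument of Figure \ref{fig:1} can be reproduced directly in the noiseless high-SNR limit $Y = X$: two plain threshold comparators separate at most three of the constellation points, while feeding one comparator through an envelope detector makes all four points distinguishable. The constellations and thresholds below follow the figure:

```python
# Distinguishable messages for the two one-bit-ADC front ends of Fig. 1,
# in the noiseless high-SNR regime Y = X.
def regions(constellation, comparators):
    """Number of distinct comparator-output patterns over the constellation."""
    return len({tuple(c(y) for c in comparators) for y in constellation})

# (a) two plain threshold comparators: Y >< 0 and Y >< 1
linear = [lambda y: y > 0, lambda y: y > 1]
# (b) envelope detector feeding the second comparator: Y >< 0 and |Y| >< 1
envelope = [lambda y: y > 0, lambda y: abs(y) > 1]

print(regions([-0.5, 0.5, 1.5], linear))          # 3 messages -> log2(3) bits
print(regions([-1.5, -0.5, 0.5, 1.5], envelope))  # 4 messages -> 2 bits
```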
The example described in Figure \ref{fig:1} serves as proof of concept that one can potentially reduce the rate-loss due to coarse quantization via non-linear analog processing. In the sequel, we will study the design of receiver architectures utilizing several classes of non-linear analog operators, their associated circuit designs, coding strategies, and the resulting channel capacities. To elaborate, we consider the general receiver model shown in Figure \ref{fig:0}. The receiver operation is decomposed into analog processing, ADC, and digital processing modules. At each channel use, the channel output vector $Y^{n_r}=(Y_{1},Y_{2},\cdots, Y_{n_r})$ is received, where $n_r$ is the number of receiver antennas.
The vector $Y^{n_r}$ is passed through
an analog/hybrid beamforming module which produces the vector $\widetilde{Y}^{n_s}$, where $n_s=1$ if analog beamforming is used and $n_s>1$ if hybrid beamforming is used. $\widetilde{Y}^{n_s}$ is fed to
an analog processing module $f_{a}(\cdot)$ to produce $W^{n_q}$, where $n_q$ is the number of ADCs.
The choice of the analog processing function $f_{a}(\cdot)$ is restricted due to the practical limitations in analog hardware design. In general, the analog function $f_a(\cdot)$ is chosen from a predefined set of implementable functions $\mathcal{F}_a$. The output of the analog processing module is fed to the ADCs, and the resulting sequence $\widehat{W}^{n_q}$ is given to the digital processing module, which operates \textit{jointly} on the received block of output vectors over $n$ channel-uses to reconstruct the transmitted message. In Section \ref{sec:env}, we consider the scenario where $\mathcal{F}_a$ consists of analog processing functions generated by collections of envelope detectors, and we propose receiver architectures, design coding strategies, and derive the fundamental performance limits in terms of achievable rates. In Section \ref{sec:poly}, we study receiver architectures, circuit design, and coding strategies associated with analog polynomial operators. Transceiver circuit designs and simulations evaluating the power consumption of the nonlinear processing operators, including sequences of concatenated envelope detectors and polynomial operators, are provided in Section \ref{sec:cir}. Table I includes the design specifications of our proposed proof-of-concept THz transceiver to achieve data rates above 100 Gbps. In this instance, we adopt a 64-QAM modulation scheme across a 34 GHz bandwidth centered at the 120 GHz carrier frequency to support a 100+ Gbps data rate. These specifications are comparable with recent implementations of transmitter circuit and antenna designs at this frequency range \cite{intel,hedayat,PayamTHz}.
\begin{table}[]
\caption{System Design Parameters of an instance of the proposed THz communication system achieving a 100+ Gbps bit rate.}
\centering
\begin{tabular}{|@{}c@{}|@{}c@{}|}
\hline Modulation & RF-64QAM \\
\hline Baud Rate (Gbaud) & 17 \\
\hline Bit Rate (Gbps) & 102 \\
\hline Carrier Frequency (GHz) & 120 \\
\hline RF Bandwidth (GHz) & 34 \\
\hline Wireless Distance (m) & 1 \\
\hline Path Loss (dB) & 74.02 \\
\hline TX Antenna Array Gain (dBi) & 25 \\
\hline RX Antenna Array Gain (dBi) & 25 \\
\hline Transmitted Power (dBm) & 5 \\
\hline Receiver Noise Figure (dB) & 15 \\
\hline Receiver Signal Power (dBm) & -19 \\
\hline Receiver Sensitivity (dBm) & -28.7 \\
\hline Signal-to-Noise Ratio (dB) & 25 \\
\hline Bit-Error-Rate & 10$^{-4}$ \\
\hline
\end{tabular}
\label{tab:my_label}
\end{table}
In summary, our main contributions are as follows:
\begin{itemize}[leftmargin=*]
\item To characterize the channel capacity under analog beamforming when envelope detectors are used for analog signal processing. The capacity expression is derived in terms of the number of ADCs, $n_q$, the number of output levels of each ADC, $\ell$, and the
number of concatenated envelope detectors, $\delta_{env}$ (Theorems \ref{th:1}-\ref{th:4}).
\item To characterize the high SNR capacity under analog beamforming when polynomial operators are used for analog signal processing. The high SNR capacity and inner-bounds to the low SNR achievable rates are provided in terms of the number of ADCs, $n_q$, the number of output levels of each ADC, $\ell$, and the
polynomial degree, $\delta_{poly}$ (Theorem \ref{th:5}).
\item To provide a receiver architecture for hybrid beamforming using envelope detectors for ultra high data rate communication with QAM demodulation (Theorem \ref{th:6}).
\item To provide computational methods for finding the set of achievable rates and quantifying the gains due to nonlinear analog processing under the proposed analog and hybrid beamforming architectures, and to provide explanations of how these gains change as SNR, $n_q$, $\ell$, and $\delta$ are changed.
\item To provide circuit designs and associated performance simulations for implementing polynomials of degree up to four and concatenated envelope detector sequences with a pair of envelope detectors; and to evaluate their power consumption.
\end{itemize}
{\em Notation:}
The set $\{1,2,\cdots, n\}, n\in \mathbb{N}$ is represented by $[n]$.
The vector $(x_1,x_2,\hdots, x_n)$ is written as $x(1\!\!:\!\!n)$ and $x^n$, interchangeably, and $(x_k,x_{k+1},\cdots,x_n)$ is denoted by $x(k:n)$. The $i$th element is written as $x(i)$ and $x_i$, interchangeably. We write $||\cdot||_2$ to denote the $L_2$-norm. An $n\times m$ matrix is written as $h(1\!\!:\!\!n,1\!\!:\!\!m)=[h_{i,j}]_{i,j\in [n]\times [m]}$, its $i$th column is $h(:,i), i\in [m]$, and its $j$th row is $h(j,:), j\in [n]$. We write $\mathbf{x}$ and $\mathbf{h}$ instead of $x(1\!\!:\!\!n)$ and $h(1\!\!:\!\!n,1\!\!:\!\!m)$, respectively, when the dimension is clear from context. The notation $\mathbf{x}^H$ is used to denote the Hermitian transpose of $\mathbf{x}$.
Sets are denoted by calligraphic letters such as $\mathcal{X}$, families of sets by sans-serif letters such as $\mathsf{X}$, and collections of families of sets by $\mathscr{X}$.
\section{Analog Beamforming Architecture with Envelope Detectors}
\label{sec:env}
In this section, we consider a MIMO receiver with analog beamforming equipped with a collection of envelope detectors for analog signal processing and a set of $n_q$ few-bit ADCs. Analog beamforming utilizes analog phase shifters and only one RF chain
for the beamforming operation \cite{ning2021prospective,tan2020thz}. This leads to a simplified design and low power consumption compared to hybrid and digital beamforming. However, analog beamforming can only support single-stream transmission, which yields lower data rates. The low power consumption makes analog beamforming suitable for long-distance transmission in THz applications \cite{tan2020thz}. Envelope detectors are suitable for analog processing of signals at high frequencies due to their low power consumption and simple circuit design \cite{kleine2011review}. This is particularly the case for THz communication, where, compared to the analog polynomial operators studied in \cite{shirani2022quantifying,Shirani2022}, the power consumption of envelope detectors is significantly lower at high data rates (see Section \ref{sec:cir}).
An envelope detector is parametrized by its threshold $a\in\mathbb{R}$, and its operation on an input $x\in \mathbb{R}$ is captured by the function $A_1(x,a)= |x-a|, x\in \mathbb{R}$.
Envelope detectors can be concatenated in a sequence to generate a larger collection of analog operators. The operation of a sequence of $s\in \mathbb{N}$ envelope detectors with bias vector $a^s=(a_1,a_2,\cdots,a_s)$ is captured by the iterative relation $A_s(x,a^s)=|A_{s-1}(x,a^{s-1})-a_s|,s>1$. The concatenated envelope detector implementation developed in Section \ref{sec:cir} exhibits substantial power savings in THz communication systems compared to other possible approaches to mitigating the rate-loss due to coarse quantization, such as incorporating pipeline ADC architectures to perform beamforming \cite{pipeline, khalili2021mimo,khalili2018mimo}. This major power saving is achieved by removing the digital-to-analog converters (DACs) and the subtractors inside a pipeline ADC, which are also very challenging to design at higher data rates. It should be noted that concatenating large numbers of envelope detectors leads to increased circuit noise and power consumption. Hence, there is a tradeoff between power consumption, circuit complexity and robustness, and degrees of freedom in generating analog processing functions, which in turn affects the set of achievable rates. This tradeoff is quantified in this section by deriving the set of achievable rates in terms of
the number of concatenated envelope detectors.
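The iterative relation $A_s(x,a^s)=|A_{s-1}(x,a^{s-1})-a_s|$ is straightforward to prototype. The sketch below implements an ideal concatenated detector sequence; the bias values used are illustrative choices of our own, not taken from the paper.

```python
def envelope(x, a):
    # Single ideal envelope detector with bias a: A_1(x, a) = |x - a|.
    return abs(x - a)

def concat_envelopes(x, biases):
    # A_s(x, a^s) = |A_{s-1}(x, a^{s-1}) - a_s|, biases applied left to right.
    out = x
    for a in biases:
        out = abs(out - a)
    return out

# Example: A_2(7, (2, 4)) = ||7 - 2| - 4| = 1.
value = concat_envelopes(7, (2, 4))
```

Note that the order of the biases matters in general, so a sequence of $s$ detectors is parametrized by the ordered vector $a^s$, not merely the set of biases.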
\subsection{System Model}
\label{sec:form}
We consider a MIMO channel, whose input and output\footnote{To simplify notation, we have considered real-valued variables. The derivations can be extended to complex variables in a straightforward manner.} $(\mathbf{X}, \mathbf{Y})\in \mathbb{R}^{n_t\times n_r}$ are related through $
\mathbf{Y}=\mathbf{h}\mathbf{f}\mathbf{X}+\mathbf{N}$, where $\mathbf{f}\in \mathbb{R}^{n_t\times 1}$ is the transmitter's beamforming vector with $||\mathbf{f}||_2=1$, $\mathbf{h}\in \mathbb{R}^{n_r\times n_t}$ is the
(fixed) channel matrix,
and
$\mathbf{N}$ is a jointly Gaussian noise vector. To perform analog beamforming, the receiver chooses a combining vector $\mathbf{w}\in \mathbb{R}^{n_r\times 1}$, where $||\mathbf{w}||_2=1$ and produces ${Y}\triangleq\mathbf{w}^H\mathbf{Y}$.
To simplify the notation, we define ${h}\triangleq \mathbf{w}^H\mathbf{h}$, ${X}\triangleq \mathbf{f}\mathbf{X}$ and ${N}\triangleq \mathbf{w}^H\mathbf{N}$ and write ${Y}={h}{X}+{N}$ as the resulting SISO channel.
Without loss of generality, we assume that ${N}\sim \mathcal{N}(0,1)$ and
that the channel input has average power constraint $P$, i.e. $\mathbb{E}(X^2)\leq P$.
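The reduction to an effective SISO channel can be illustrated numerically. The sketch below uses an arbitrary random channel and an SVD-based beamforming choice of our own (used only for illustration; the paper does not prescribe this choice), forms $\mathbf{w}^H(\mathbf{h}\mathbf{f}X+\mathbf{N})$, and checks that the effective gain is $h=\mathbf{w}^H\mathbf{h}\mathbf{f}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_r, P = 4, 4, 10.0
H = rng.standard_normal((n_r, n_t))           # fixed real-valued channel matrix
# Illustrative choice: beamform along the dominant singular vectors of H,
# which maximizes the effective SISO gain w^T H f.
U, S, Vt = np.linalg.svd(H)
f = Vt[0]                                     # transmit beamforming, ||f||_2 = 1
w = U[:, 0]                                   # receive combining,   ||w||_2 = 1
X = np.sqrt(P) * rng.standard_normal(10_000)  # inputs with E[X^2] = P
N = rng.standard_normal((n_r, X.size))        # unit-variance noise per antenna
Y = H @ np.outer(f, X) + N                    # MIMO channel outputs
y = w @ Y                                     # effective SISO observations
h_eff = w @ H @ f                             # equals the top singular value here
```

Since $||\mathbf{w}||_2=1$, the combined noise $\mathbf{w}^H\mathbf{N}$ remains unit-variance, matching the normalization $N\sim\mathcal{N}(0,1)$ above.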
In the model captured by Figure \ref{fig:0} and explained in the introduction, the choice of the decoding function at the receiver is restricted by the limitations on the number of low-resolution threshold ADCs, $n_q\in \mathbb{N}$, the number of output levels of the ADCs, $\ell\in \mathbb{N}$, and the set of \textit{implementable nonlinear analog functions}. In this section, we consider using concatenated sequences of envelope detectors for analog signal processing. So, the set of implementable nonlinear analog functions is:
\begin{align*}
\mathcal{F}^\delta_{env}=\{f(y)=A_s(y,a^s), y\in \mathbb{R}| s\in [\delta],
a^s\in \mathbb{R}^s\},
\quad \delta\in \mathbb{N},
\end{align*}
where $A_1(y,a)\triangleq |y-a|, y,a\in \mathbb{R}$ and $A_s(y,a^s)\triangleq A_1(A_{s-1}(y,a^{s-1}),a_s)= |A_{s-1}(y,a^{s-1})-a_s|, s\in \mathbb{N}$.
That is, $\mathcal{F}^\delta_{env}$ consists of all functions which can be generated using sequences of $s\leq \delta$ concatenated envelope detectors with thresholds $a_1,a_2,\cdots,a_{s}$, respectively. The restriction to a limited number of envelope detectors is due to limitations in analog circuit design, and the implementability of such functions is justified by the circuit designs and simulations provided in Section \ref{sec:cir}. Formally, the receiver (Figure \ref{fig:0}) consists of:
\\i) An analog beamforming module characterized by $\mathbf{w}$ which takes $\mathbf{Y}$ as input and outputs ${Y}= \mathbf{w}^H\mathbf{Y}$ using an analog power combiner (e.g., a collection of phase shifters).
\\ii) A set of elementwise analog processing functions $f_{j}\in \mathcal{F}^{\delta}_{env}, j\in [n_q], \delta\in \mathbb{N}$ operating on the beamformer output $Y$ to produce the vector $W(1:n_q)$, where $W(j)=f_{j}(Y), j\in [n_q]$.
\\iii) A set of $n_q$ ADCs, each with $\ell$ output levels and threshold vectors $t(j,1:\ell-1)\in \mathbb{R}^{\ell-1}, j\in [n_q]$ operating on the vector $W(1:n_q)$ and producing $\widehat{W}(1:n_q)$, where
\[\widehat{W}(j)=
k \quad \text{ if } \quad W(j)\in [t(j,k),t(j,k+1)], k\in \{0,1,\cdots,\ell-1\},\]
where $j\in [n_q]$
and we have defined $t(j,0)\triangleq -\infty$ and $t(j,\ell)\triangleq\infty$. We call $t(1\!\!:\!n_q,1\!\!:\!\ell-1)$ the \textit{threshold matrix}. It is assumed that $0<t(i,j)<t(i,j'), i\in [n_q], j,j'\in [\ell-1], j<j'$\footnote{Note that the assumption $0<t(i,j)$ does not lose generality since $0\leq |y|, \forall y\in \mathbb{R}$. Hence, a negative threshold would yield a trivial ADC output.}.
\\iv) A digital processing module represented by $f_d:\{0,1,\cdots,\ell-1\}^{n\times n_q}\to [\Theta]$, operating on the block of ADC outputs after $n$ channel-uses, $\widehat{W}(1\!\!:\!\!n,1\!\!:\!\!n_q)$.
After the $n$th channel-use, the digital processing module produces the message reconstruction $\widehat{M}=f_d(\widehat{W}(1\!:\!n,1\!:\!n_q))$. The communication system is characterized by $(P,{h}, n_q, \delta,\ell)$, and the transmission system by $(n,\Theta,e,f^{n_q},t(1\!:\!n_q,1\!:\!\ell-1),f_d)$, where $f^{n_q}=(f_{1},f_{2},\cdots,f_{n_q})$, $f_{j}\in \mathcal{F}_{env}^{\delta}, j\in [n_q]$, and $e(\cdot)$ is such that the channel input satisfies the average power constraint. Achievability and probability of error are defined in the standard sense. The capacity maximized over all implementable analog functions using sequences of envelope detectors is denoted by $C_{env}(P,h,n_q, \delta,\ell)$.
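A minimal end-to-end sketch of the per-channel-use receiver chain (analog combining, elementwise analog functions, and $\ell$-level ADCs) is given below; the analog functions and thresholds are hypothetical values chosen only for illustration.

```python
from bisect import bisect_right

def adc(w, thresholds):
    # l-level ADC: outputs k when w falls in [t_k, t_{k+1}), with t_0 = -inf
    # and t_l = +inf implied; `thresholds` is the sorted row t(j, 1:l-1).
    return bisect_right(thresholds, w)

def receive(y_vec, w_vec, analog_fns, t):
    # Analog combining Y = w^H Y (real-valued here), one analog function f_j
    # per ADC, then an l-level ADC with threshold row t[j].
    y = sum(wi * yi for wi, yi in zip(w_vec, y_vec))
    return tuple(adc(f(y), t[j]) for j, f in enumerate(analog_fns))

out = receive(
    y_vec=[1.0, 3.0],
    w_vec=[0.5, 0.5],
    analog_fns=[lambda y: abs(y - 1), lambda y: abs(abs(y) - 2)],
    t=[[0.5, 1.5], [0.5, 1.5]],
)
```

The digital module $f_d$ would then operate jointly on $n$ such output tuples to decode the message.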
\subsection{Communication Strategies and Achievable Rates}
\label{sec:results}
In this section, we investigate the channel capacity for a given communication system parametrized by $(P,\mathbf{h},n_q,\delta,\ell)$.
\subsubsection{Preliminaries}
\label{sec:sen:III}
We first introduce some useful terminology and preliminary results which will be used in the derivations throughout the rest of the paper. The quantization process at the receiver is modeled by two components: the analog processing functions $f_{j}(\cdot), j\in [n_q]$, and the ADC threshold matrix $\mathbf{t}\!\in \!\mathbb{R}^{n_q\times {(\ell-1)}}$.
\begin{Definition}[\textbf{Quantizer}]
Given a threshold matrix $\mathbf{t}\!\in \!\mathbb{R}^{n_q\times {(\ell-1)}}$ and a collection of functions $f_{j}\in\mathcal{F}_{env}^{\delta}$, a quantizer $Q:\mathbb{R}\to [\ell]^{n_q}$ characterized by the tuple $(\ell,\delta,n_q,f^{n_q}, \mathbf{t})$ is defined as $Q(\cdot)\triangleq (Q_{1}(\cdot),Q_{2}(\cdot),\cdots,Q_{n_q}(\cdot))$, where $Q_{j}(y)\triangleq k$ iff $ f_{j}(y)\in [t(j,k),t(j,k+1)], j\in [n_q]$. The associated partition of $Q(\cdot)$ is:
\begin{align*}
\mathsf{P}=\{\mathcal{P}_{\mathbf{i}}, \mathbf{i}\in [\ell]^{n_q}\}\setminus \{\emptyset\}, \text{ where } \mathcal{P}_\mathbf{i}= \{y\in\mathbb{R}| Q(y)= \mathbf{i}\}, \mathbf{i}\in [\ell]^{n_q}.
\end{align*}
\label{def:quant}
\end{Definition}
\vspace{-.2in}
For a quantizer $Q(\cdot)$, we call $y\in \mathbb{R}$ a \textit{point of transition} if the value of $Q(\cdot)$ changes at input $y$, i.e. if it is a point of discontinuity of $Q(\cdot)$. Let $r$ be a point of transition of $Q(\cdot)$. Then, there must exist output vectors $\mathbf{c}\neq \mathbf{c}'$ and $\epsilon>0$ such that $Q(y)=\mathbf{c}, y\in (r-\epsilon,r)$ and $Q(y)=\mathbf{c}', y\in (r,r+\epsilon)$. So, there exist $j\in [n_q]$ and $k\in [\ell-1]$ such that $f_{j}(y)<t(j,k), y\in (r-\epsilon,r)$ and $f_{j}(y)\geq t(j,k), y\in (r,r+\epsilon)$, or vice versa; so that $r$ is a root of the function $f_{j,k}(y)\triangleq f_{j}(y)-t(j,k)$. Let $r_1,r_2,\cdots,r_{\gamma}$ be the sequence of roots of $f_{j,k}(\cdot), j\in [n_q], k\in [\ell-1]$ (including repeated roots), written in non-decreasing order, where $\gamma\triangleq (\ell-1) n_q2^\delta$. Let $\mathcal{C}=(\mathbf{c}_0,\mathbf{c}_1,\cdots, \mathbf{c}_{\gamma})$ be the corresponding quantizer outputs, i.e. $\mathbf{c}_{i-1}= \lim_{y\to r_i^-}Q(y), i\in [\gamma]$ and $\mathbf{c}_{\gamma}=\lim_{y\to\infty}Q(y)$. We call $\mathcal{C}$ the \textit{code} associated with the quantizer; it plays an important role in the analysis provided in the sequel. Note that the associated code is an ordered set of vectors.
The size of the code $|\mathcal{C}|$ is defined as the number of unique vectors in $\mathcal{C}$. Each $\mathbf{c}_i= (c_{i,1},c_{i,2},\cdots,c_{i,n_q}), i\in \{0,1,\cdots,\gamma\}$ is called a codeword. For a fixed $j\in [n_q]$, the transition count of position $j$ is the number of codeword indices at which the value of the $j$th element changes, and it is denoted by $\kappa_j$, i.e., $\kappa_j\triangleq \sum_{i=1}^{\gamma}\mathbbm{1}(c_{i-1,j}\neq c_{i,j})$.
It is straightforward to see that $|\mathsf{P}|=|\mathcal{C}|$ since both cardinalities are equal to the number of unique outputs the quantizer produces.
The following example clarifies the definitions given above.
\begin{figure}[t]
\centering
\includegraphics[width=.7\textwidth]{code_JSTSP.pdf}
\caption{The quantizer outputs in Example \ref{Ex:1}. The first four rows show the sign of the function $f_{j,k}, j,k\in \{1,2\}$ for the values of $y$ within each interval. The last row shows the quantizer output in that interval.
}
\vspace{-.15in}
\label{fig:code}
\end{figure}
\begin{Example}[\textbf{Associated Code}]
\label{Ex:1}
Let $n_q=\delta=2$ and $\ell=3$ and consider a quantizer characterized by analog processing functions
$f_{1}(y)=A_2(y,(2,4))=||y-2|-4|$ and $f_{2}(y)= A_2(y,(0,4))=||y|-4|, y\in \mathbb{R}$, and thresholds
\begin{align*}
& t(1,1)= 0, \quad t(1,2)=1, \quad t(2,1)= 1, \quad t(2,2)= 2,
\end{align*}
We have:
\begin{align*}
& f_{1,1}(y)= ||y-2|-4|, \quad f_{1,2}(y)= ||y-2|-4|-1\\
& f_{2,1}(y)=||y|-4|-1, \quad f_{2,2}(y)=||y|-4|-2.
\end{align*}
The ordered root sequence is $(r_1,r_2,\cdots,r_{10})=
(-6,-5,-3,-2, $ $-1,2,3,5,6,7)$. The
associated partition is:
\begin{align*}
& \mathsf{P}= \Big\{[-\infty,-6), (-6,-5),(-5,-3),
(-3,-2),(-2,-1),(-1,2),
\\&\qquad (2,3),(3,5),(5,6),(6,7),(7,\infty)\Big\}.
\end{align*}
The associated code is given by $22,21,20,11,02,12,$ $11,10,01,12,22$. This is shown in Figure \ref{fig:code}. The size of the code is $|\mathcal{C}|=8$. The high SNR capacity of a channel using this quantizer at the receiver is $\log{|\mathsf{P}|}=\log{|\mathcal{C}|}=\log{8}$.
\end{Example}
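The root sequence and associated code of a quantizer are easy to enumerate numerically. The sketch below does so for the simpler single-detector case ($\delta=1$) with one-bit ADCs ($\ell=2$), using bias and threshold values of our own choosing (not those of the example above); it also checks that, when the roots are distinct, consecutive codewords differ in exactly one position and the first and last codewords coincide.

```python
def associated_code(biases, thresholds):
    # delta = 1, l = 2: each analog function is f_j(y) = |y - a_j| compared
    # against a single threshold t_j, so f_j - t_j has roots a_j -+ t_j.
    roots = sorted(r for a, t in zip(biases, thresholds) for r in (a - t, a + t))
    # Sample one point inside each interval between consecutive roots,
    # plus one point in each unbounded outer interval.
    samples = ([roots[0] - 1.0]
               + [(u + v) / 2 for u, v in zip(roots, roots[1:])]
               + [roots[-1] + 1.0])
    code = [tuple(int(abs(y - a) > t) for a, t in zip(biases, thresholds))
            for y in samples]
    return roots, code

# Illustrative: f_1(y) = |y| with t_1 = 1, and f_2(y) = |y - 0.5| with t_2 = 2.
roots, code = associated_code(biases=(0.0, 0.5), thresholds=(1.0, 2.0))
```

Here $|\mathsf{P}|=|\mathcal{C}|$ can be read off directly as the number of unique tuples in `code`.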
\label{sec:prelim}
\subsubsection{Single Envelope Detector and One-bit Quantization}
As a first step, we investigate scenarios with $\ell=2$ and $\delta=1$. We will build upon this to derive capacity expressions for $\delta,\ell\in \mathbb{N}$.
It can be noted that since $\delta=1$, each $f_j(y)$ is of the form $|y-a_j|$ for some $a_j\in \mathbb{R}$. We sometimes write $f_{a_j,j}(y)=|y-a_j|$ to explicitly denote the value of $a_j$. Given threshold $t_j>0$, the roots of $f_{a_j,j}(y)-t_j$ are equal to $a_j \pm t_j$. The following proposition provides the high SNR capacity when $\ell=2, \delta=1$.
\begin{Proposition}
\label{Prop:1}
Let $h\in \mathbb{R}$ and $n_q>1$. Then,
\begin{align*}
\lim_{P\to \infty} C_{env}(P,h,n_q,1,2)= 1+\log{n_q}.
\end{align*}
\end{Proposition}
\begin{proof}
For a given quantizer, the high SNR achievable rate is equal to $\log{|\mathsf{P}|}= \log{|\mathcal{C}|}$. So, finding the capacity is equivalent to finding the maximum $|\mathcal{C}|$ over all choices of $Q(\cdot)$.
First, let us prove the converse result. Note that $|\mathcal{C}|\leq 2n_q$ since $\mathbf{c}_0=\mathbf{c}_{2n_q}$. The reason is that for the absolute value function $f_{a_j,j}(\cdot),j\in [n_q]$, we have $\lim_{y\to \infty}f_{a_j,j}(y)=\lim_{y\to -\infty}f_{a_j,j}(y)= \infty $. So,
\begin{align}
\label{eq:lim}
c_{0,j}=\lim_{y\to -\infty} \mathbbm{1}(f_{a_j,j}(y)-t_j>0)=\lim_{y\to \infty} \mathbbm{1}(f_{a_j,j}(y)-t_j>0)=c_{2n_q,j}.
\end{align}
As a result, $\log{|\mathcal{C}|}\leq 1+\log{n_q}$. Next, we prove achievability.
Let $t_j=\frac{n_q+1}{2}, j\in [n_q]$ and $f_{a_j,j}(y)\triangleq |y-j+\frac{n_q+1}{2}|, j\in [n_q]$, so that the roots of $f_{a_j,j}(\cdot)-t_j$ are $j$ and $j-n_q-1$. Then, $(r_1,r_2,\cdots,r_{2n_q})\!=\!(-n_q,-n_q\!+\!1,\cdots,-1,1,2,\cdots, n_q)$ and \begin{align*}
c(i,j)=
\begin{cases}
1-\mathbbm{1}(j\leq i)\qquad& \text{if }\quad i\leq n_q,\\
\mathbbm{1}(j\leq i-n_q) & \text{otherwise}.
\end{cases}
\end{align*}
For instance for $n_q=3$, we have $\mathcal{C}=(111,011,001,$ $000,100,110,111)$. Hence, the only repeated codewords are $\mathbf{c}_0$ and $\mathbf{c}_{2n_q}$. As a result, $|\mathcal{C}|=2n_q$, and $\log{|\mathcal{C}|}=1+\log{n_q}$ is achievable.
\end{proof}
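The achievability construction in the proof can be checked numerically. The sketch below enumerates the associated code of the proposed quantizer (up to the labeling of the detectors) and verifies that, apart from the coinciding first and last codewords, all $2n_q$ outputs are distinct.

```python
def prop1_code(n_q):
    # Achievability sketch for Proposition 1: n_q single envelope detectors
    # f_j(y) = |y - a_j| with a_j = j - (n_q + 1)/2 and common ADC threshold
    # t = (n_q + 1)/2, so that f_j - t has roots j and j - n_q - 1.
    a = [j - (n_q + 1) / 2 for j in range(1, n_q + 1)]
    t = (n_q + 1) / 2
    roots = sorted(r for aj in a for r in (aj - t, aj + t))
    samples = ([roots[0] - 1.0]
               + [(u + v) / 2 for u, v in zip(roots, roots[1:])]
               + [roots[-1] + 1.0])
    return [tuple(int(abs(y - aj) > t) for aj in a) for y in samples]

code = prop1_code(3)
# The first and last codewords coincide; the remaining outputs are distinct,
# giving |C| = 2 n_q = 6 and a high-SNR rate of 1 + log2(3) bits.
```

Running the same enumeration for other values of $n_q$ confirms the pattern $|\mathcal{C}|=2n_q$.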
The following theorem provides a computable expression for the capacity under general assumptions on channel SNR.
\begin{Theorem}
\label{th:1}
Consider a system parametrized by $(P,h,n_q,\delta,\ell)$, where $P>0, h\in \mathbb{R}, n_q>1$, and $\delta=1,\ell=2$. Then, the capacity is given by:
\begin{align}
\label{eq:th:1}
C_{env}(P,h,n_q,\delta,\ell)=\sup_{\mathbf{x}\in \mathbb{R}^{2n_q+1}} \sup_{P_{X}\in \mathcal{P}_{\mathbf{x}}(P)} \sup_{\mathbf{t}\in \mathbb{R}^{2n_q}} I(X;\widehat{Y}),
\end{align}
where $\widehat{Y}= Q(hX+N)$, $N\sim \mathcal{N}(0,1)$, $\mathcal{P}_{\mathbf{x}}(P)$ is the set of probability distributions defined on $\{x_1,x_2,\cdots,x_{2n_q+1}\}$ such that $\mathbb{E}(X^2)\leq P$,
and $Q(y)=k$ if $y\in [t_{k},t_{k+1}], k\in \{1,\cdots,2n_q-1\}$ and $Q(y)=0$ if $y>t_{2n_q}$ or $y<t_{1}$.
\end{Theorem}
\begin{proof}
We provide an outline of the proof. First, we argue that the input alphabet has at most $2n_q+1$ mass points since
based on the proof of Proposition \ref{Prop:1}, the channel output can take at most $2n_q$ values. Let the quantized channel output be denoted by $\widehat{Y}$.
Since the conditional measure $P_{\widehat{Y}|X}(\cdot|x), x\in\mathbb{R}$ is continuous in $x$, and $\lim_{x\to \infty} P_{\widehat{Y}|X}(\mathcal{A}|x) = \mathbbm{1}(\hat{y}\in \mathcal{A}), \mathcal{A}\in \mathbb{B}$ for some fixed $\hat{y}$, the conditions in the proof of \cite[Prop. 1]{singh2009limits} hold, and the optimal input distribution has bounded support. Then, using the extension of Witsenhausen's result \cite{witsenhausen1980some} given in \cite[Prop. 2]{singh2009limits}, the optimal input distribution is discrete and takes at most ${2n_q+1}$ values. This completes the proof of the converse. To prove achievability, it suffices to show that one can choose the set of functions $f_{a_j,j}(\cdot), j\in [n_q]$ and quantization thresholds $t_j, j\in [n_q]$ such that the resulting quantizer operates as described in the theorem statement, i.e., it generates $\widehat{Y}=Q(hX+N)$, where $Q(y)=k$ if $y\in [t_{k},t_{k+1}], k\in \{1,\cdots,2n_q-1\}$ and $Q(y)=0$ if $y>t_{2n_q}$ or $y<t_{1}$. To this end, let $\mathbf{t}^*$ be the optimal quantizer thresholds in \eqref{eq:th:1}. Let $r_1,r_2,\cdots,r_{2n_q}$ be the elements of $\mathbf{t}^*$ written in non-decreasing order. Define a quantizer with associated analog functions $f_{a_j,j}(y)\triangleq |y-\frac{r_j+r_{n_q+j}}{2}|$ and thresholds $t_j=\frac{r_{n_q+j}-r_j}{2}$. Note that $t_j>0$ since $r_j, j\in [2n_q]$ are non-decreasing. Then, similar to the proof of Proposition \ref{Prop:1}, the quantization rule gives distinct outputs for $y\in [r_{k},r_{k+1}], k\in \{1,\cdots,2n_q-1\}$ and for $y\in [r_{2n_q},\infty) \cup (-\infty,r_{1}]$, as desired. \end{proof}
\begin{Remark}
The capacity expression in Equation \eqref{eq:th:1} can be evaluated numerically, e.g., via the cutting plane algorithm \cite{huang2005characterization,singh2009limits}, or via the extension of the Blahut--Arimoto algorithm in \cite{kobayashi2018joint}.
\end{Remark}
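As a concrete illustration of the numerical route, the sketch below implements a plain Blahut--Arimoto iteration for a discrete memoryless channel with a strictly positive transition matrix. This is a simplified sketch rather than the constrained algorithm of \cite{kobayashi2018joint}: the average-power constraint on $X$ is omitted (it can be incorporated via a Lagrange-multiplier term).

```python
import numpy as np

def blahut_arimoto(P, iters=500):
    # Capacity (in bits) of a DMC with transition matrix P[x, y] = P(y|x).
    # Assumes strictly positive entries, as produced by Gaussian noise.
    n_x = P.shape[0]
    p = np.full(n_x, 1.0 / n_x)        # input distribution, start uniform
    for _ in range(iters):
        q = p @ P                      # induced output distribution
        d = 2.0 ** np.sum(P * np.log2(P / q), axis=1)
        p = p * d / (p @ d)            # multiplicative update
    q = p @ P
    return float(np.sum(p[:, None] * P * np.log2(P / q)))

# Sanity check on a binary symmetric channel with crossover 0.1:
cap = blahut_arimoto(np.array([[0.9, 0.1], [0.1, 0.9]]))  # ~ 1 - H2(0.1) ~ 0.531
```

For the channel of Theorem \ref{th:1}, $P(\hat{y}|x)$ would be computed from Gaussian tail probabilities of the intervals $[t_k,t_{k+1}]$, and the outer suprema over mass-point locations and thresholds handled by a separate search.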
\subsubsection{Low-resolution ADCs and Concatenated Sequences of Envelope Detectors}
Next, we consider systems with $\delta>1,\ell>2$. Recall that for $\delta>1$, each $f_j(y),j \in [n_q]$ is of the form $f_j(y)=A_\delta(y,a_j^\delta)=|A_{\delta-1}(y,a_j^{\delta-1})-a_{j,\delta}|$, where $A_1(y,a_{j,1})=|y-a_{j,1}|$. For tractability, we use the notation $f_{a^{\delta}_j,j}(y)\triangleq A_{\delta}(y,a_j^{\delta})$ to explicitly denote the bias vector $a^{\delta}_j$ used for the $j$th analog function. The following example introduces the concept of degenerate bias vectors for a given threshold matrix $\mathbf{t}$.
\begin{Example}
\label{ex:2}
Let $n_q=1, \ell=3, \delta=2$, and consider the thresholds $t_{1,1}=1$ and $t_{1,2}=2$. Given a bias vector, $(a_1,a_2)$, the associated analog function is $f_{a_1^{\delta},j}(y)= ||y-a_1|-a_2|$. The ADC output is
\begin{align*}
Q(y)=
\begin{cases}
0 \qquad & \text{ if } ||y-a_1|-a_2|<1\\
1 & \text{ if } 1<||y-a_1|-a_2|<2
\\ 2 & \text{ if } 2<||y-a_1|-a_2|
\end{cases}.
\end{align*}
Note that if $a_2-1<0$, then this would be equivalent to:
\begin{align*}
Q(y)=
\begin{cases}
0 \qquad & \text{ if } |y-a_1|<1+a_2\\
1 & \text{ if } 1+a_2<|y-a_1|<2+a_2
\\ 2 & \text{ if } 2+a_2<|y-a_1|
\end{cases}.
\end{align*}
In this case, the second envelope detector does not affect the quantization process and can be omitted without changing the quantizer output: its effect is equivalent to shifting the ADC thresholds by $a_2$, so it can be removed.
\end{Example}
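The equivalence claimed in the example can be verified numerically: for $a_2<t_1$, comparing $||y-a_1|-a_2|$ against thresholds $t_k$ gives the same ADC output as comparing $|y-a_1|$ against the shifted thresholds $t_k+a_2$. A small sketch with illustrative parameter values:

```python
def q_two_stage(y, a1, a2, t=(1.0, 2.0)):
    # Three-level ADC applied to f(y) = ||y - a1| - a2| with thresholds t.
    w = abs(abs(y - a1) - a2)
    return sum(int(w >= tk) for tk in t)

def q_one_stage(y, a1, a2, t=(1.0, 2.0)):
    # Single-detector equivalent for the degenerate case a2 < t[0]: the second
    # envelope detector is absorbed into the shifted thresholds t_k + a2.
    w = abs(y - a1)
    return sum(int(w >= tk + a2) for tk in t)

# With a2 = 0.5 < t_1 = 1, the two quantizers agree everywhere on a fine grid.
grid = [x / 10 for x in range(-100, 101)]
agree = all(q_two_stage(y, 2.0, 0.5) == q_one_stage(y, 2.0, 0.5) for y in grid)
```

This is exactly the redundancy that the non-degeneracy condition below rules out.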
We call bias vectors $a_j^{\delta}, j\in [n_q]$ which yield redundant envelope detectors, such as the one in Example \ref{ex:2}, degenerate bias vectors. The following proposition characterizes necessary and sufficient conditions for non-degeneracy of bias vectors.
\begin{Proposition}[\textbf{Non-Degenerate Bias Vectors}]
Let the threshold vector corresponding to the $j$th ADC be $t^{\ell-1}$, where $j\in [n_q]$. The bias vector of the corresponding analog operator $f_j(\cdot)$ is non-degenerate if and only if:
\begin{align}
\label{eq:degen}
0< t_1+\sum_{i=2}^\delta b_ia_i, \quad \forall b_i \in \{-1,+1\}.
\end{align}
\end{Proposition}
The proof follows by noting that, by definition, $0<t_1<t_2<\cdots<t_{\ell-1}$, so that Equation \eqref{eq:degen} is sufficient to ensure non-degeneracy.
We will use the following notion of a fully-symmetric vector in deriving properties of the roots of quantizers with non-degenerate bias vectors.
\begin{Definition}[\textbf{Fully-Symmetric Vector}] A vector $\mathbf{b}=(b_1,b_2,\cdots,b_{2^n})$ is called symmetric if $b_i+b_{2^n+1-i}=b_j+b_{2^n+1-j}, i,j\in [2^n]$.
$\mathbf{b}$ is called fully-symmetric if it is symmetric
and the vectors $(b_1,b_2,\cdots,b_{2^{n-1}})$ and $(b_{2^{n-1}+1},b_{2^{n-1}+2},\cdots,b_{2^{n}})$ are both fully-symmetric for $n>2$ and symmetric for $n=2$.
\end{Definition}
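This recursive definition can be stated compactly in code. The checker below uses the mirror-pair reading of the symmetry condition ($b_i+b_{2^n+1-i}$ constant over $i$), which matches the worked example that follows.

```python
def is_symmetric(b):
    # All mirror-pair sums b_i + b_{len+1-i} must be equal.
    s = b[0] + b[-1]
    return all(b[i] + b[-1 - i] == s for i in range(len(b) // 2))

def is_fully_symmetric(b):
    # Symmetric, and both halves recursively fully-symmetric; for length-4
    # vectors this reduces to plain symmetry (the halves have length 2).
    if len(b) <= 2:
        return True
    half = len(b) // 2
    return (is_symmetric(b)
            and is_fully_symmetric(b[:half])
            and is_fully_symmetric(b[half:]))
```

For example, the sorted roots of $||y-a_1|-a_2|=t$, namely $(a_1-a_2-t,\,a_1-a_2+t,\,a_1+a_2-t,\,a_1+a_2+t)$, pass this check, consistent with the proposition below.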
For instance, the vector $\mathbf{b}=(-7,-6,-5,-4,4,5,6,7)$ is fully-symmetric since it is symmetric and $(-7,-6,-5,-4)$ and $(4,5,6,7)$ are both symmetric.
\begin{Proposition}
[\textbf{Properties of Roots of Associated Analog Functions}]
\label{prop:1.5}
Consider a quantizer $Q(\cdot)$ with threshold matrix $\mathbf{t}\in \mathbb{R}^{n_q\times (\ell-1)}$,
and analog processing functions $f_{j}(\cdot), j\in [n_q]$, such that
the corresponding bias vectors are non-degenerate and $f_{j,k}(\cdot)\triangleq f_{j}(\cdot)- t(j,k), j\in [n_q], k\in [\ell-1]$ do not have repeated roots. Let $r_1,r_2,\cdots,r_{\gamma}$ be the increasing sequence of roots, where $\gamma\triangleq (\ell-1)n_q2^{\delta}$. Then, there exists a partition $\{\mathcal{P}_{j,k}, j\in [n_q], k\in [\ell-1]\}$ of $[\gamma]$ such that
\\1) $|\mathcal{P}_{j,k}|=2^{\delta}, j\in [n_q], k\in [\ell-1]$.
\\2) For $j\in [n_q], k\in [\ell-1]$, let $\mathcal{P}_{j,k}=\{i_1,i_2,\cdots, i_{2^{\delta}}\}$, where $i_j<i_{j'}$ for $j<j'$. The vector $(r_{i_1}, r_{i_2}, \cdots, r_{i_{2^\delta}})$ is fully-symmetric,
\\3) For all $j\in [n_q], k,k'\in [\ell-1]$, we have $r_{i_t}-r'_{i_t}= r_{i'_t}-r'_{i'_t}, i_t,i'_t\in [2^\delta]$, where $r_{i_t},r_{i'_t}\in \mathcal{P}_{j,k}$ and $r'_{i_t},r'_{i'_t}\in \mathcal{P}_{j,k'}$.
\end{Proposition}
The proof follows by taking each $\mathcal{P}_{j,k}$ to be the ordered set of roots of $f_{j,k}$ for a given $j\in [n_q], k\in [\ell-1]$ and using properties of the absolute value. The following proposition states several useful properties of the code associated with a quantizer $Q(\cdot)$.
\begin{Proposition}[\textbf{Properties of the Associated Code}]
\label{Prop:2}
Consider a quantizer $Q(\cdot)$ with threshold matrix $\mathbf{t}\in \mathbb{R}^{n_q\times (\ell-1)}$ such that $0<t(i,j)<t(i,j'), i\in [n_q], j,j'\in [\ell-1], j<j'$,
and analog processing functions $f_{j}(\cdot), j\in [n_q]$, such that
the corresponding bias vectors are non-degenerate and $f_{j,k}(\cdot)\triangleq f_{j}(\cdot)- t(j,k), j\in [n_q], k\in [\ell-1]$ do not have repeated roots. The associated code $\mathcal{C}$ satisfies the following:
\begin{enumerate}[leftmargin=*]
\item The number of codewords in $\mathcal{C}$ is equal to $\gamma+1$, where $\gamma\triangleq (\ell-1) n_q2^\delta$, i.e. $\mathcal{C}=(\mathbf{c}_0, \mathbf{c}_1,\cdots, \mathbf{c}_{\gamma})$.
\item All elements of the first codeword $\mathbf{c}_0$ are equal to $\ell-1$, i.e. $c_{0,j}=\ell-1, j\in [n_q]$.
\item Consecutive codewords differ in only one position, and their $L_1$ distance is equal to one, i.e. $\sum_{j=1}^{n_q}|c_{i,j}-c_{i+1,j}|=1, i\in \{0,1,\cdots,\gamma-1\}$.
\item The transition count at every position is $\kappa_j= \frac{\gamma}{n_q}= (\ell-1)2^\delta, j\in [n_q]$.
\item Let $i_1,i_2,\cdots, i_{\kappa_j}$ be the non-decreasingly ordered indices of codewords at which the $j$th element has value-transitions. Then, the sequence $(c_{i_1,j},c_{i_2,j},\cdots,c_{i_{\kappa_j},j})$ is periodic, in each period it takes all values between $0$ and $\ell-1$, and $|c_{i_k,j}-c_{i_{k+1},j}|=1, k\in [\kappa_j-1]$ holds. Furthermore, $c_{i_1,j}\in \{0,\ell-1\}$.
\item $|\mathcal{C}|\leq \min(\ell^{n_q}, (\ell-1)n_q2^{\delta})$.
\end{enumerate}
\end{Proposition}
Proposition \ref{Prop:2} is an extension of the properties shown in the proof of Theorem \ref{th:1}. We provide a brief justification of each property in the following. Property 1 follows from the fact that the number of codewords in $\mathcal{C}$ is equal to the number of roots of $f_{j,k}, j\in [n_q], k\in [\ell-1]$ plus one (e.g., see Figure \ref{fig:code}). Properties 2 and 6 follow by an argument similar to Equation \eqref{eq:lim} in the proof of Proposition \ref{Prop:1}. Properties 3 and 5 follow from the fact that each root of $f_{j,k}, j\in [n_q], k\in [\ell-1]$ corresponds to a value transition in the output of exactly one of the ADCs (since the roots are not repeated), and at each transition the value changes either one unit up or down, since the input crosses one threshold at a time and its value changes continuously. Property 4 follows from the fact that the transition count at each position is equal to the number of roots of $f_{j,k}, k\in [\ell-1]$ for a fixed $j\in [n_q]$.
As a step towards characterizing the capacity when $\ell>2,\delta>1$, we first study the capacity region for systems with one-bit ADCs, i.e., $\ell=2,\delta>1$. To this end, we prove two useful propositions. The first one shows that given an ordered set $\mathcal{C}$ satisfying the properties in Proposition \ref{Prop:2} and a sequence of real numbers $(r_1,r_2,\cdots,r_{\gamma})$ satisfying the properties in Proposition \ref{prop:1.5}, one can always construct a quantizer whose associated code is equal to $\mathcal{C}$ and whose root sequence is $(r_1,r_2,\cdots,r_{\gamma})$. The second proposition provides conditions under which there exists a code satisfying the properties in Proposition \ref{Prop:2}. The proof ideas follow techniques used in the study of balanced and locally balanced Gray codes \cite{bhat1996balanced,bykov2016locally}.
Combining the two results allows us to characterize the necessary and sufficient conditions for existence of quantizers with desirable properties.
In the statement of the following proposition, for a given code $\mathcal{C}$, we have used the notation $\xi_1,\xi_2,\cdots,\xi_{\gamma-1}$ for the transition sequence of $\mathcal{C}$. That is, $\xi_k, k\in \{1,\dots,\gamma-1\}$ is the bit position which is different between $\mathbf{c}_{k-1}$ and $\mathbf{c}_{k}$. We have defined
the transition sets $\mathcal{I}_j\triangleq\{s| \xi_s=j\}, j\in [n_q]$. Note from Property 4) in Proposition \ref{Prop:2} that, for $\ell=2$, we have $|\mathcal{I}_j|=\kappa_j= 2^{\delta}$.
\begin{Proposition}[\textbf{Quantizer Construction}]
\label{Prop:3}
Let $\ell=2, n_q\in \mathbb{N}$ and $\delta>1$ and consider an ordered set $\mathcal{C}\subset \{0,1\}^{n_q}$ satisfying properties 1)-5) in Proposition \ref{Prop:2}, and a sequence of increasing real numbers $r_1,r_2,\cdots, r_{\gamma}$, where $\gamma= n_q2^{\delta}$, such that $(r_{i_s}, s\in \mathcal{I}_j)$ is fully-symmetric for all $j\in [n_q]$, where $\mathcal{I}_j$ are the transition sets of $\mathcal{C}$. Then, there exists a quantizer $Q(\cdot)$ with associated analog functions $f_{j}(\cdot), j\in [n_q]$ such that its associated code is $\mathcal{C}$, and $r_1,r_2,\cdots, r_{\gamma}$ is the non-decreasing sequence of roots of its associated analog functions $f_{j,k}(\cdot),j\in [n_q], k\in [\ell-1]$.
\end{Proposition}
\begin{proof}
For $j\in [n_q]$, let $(r_{i_1},r_{i_2},\cdots, r_{i_{2^\delta}})$ be the non-decreasing subsequence of roots with indices $i_\lambda\in \mathcal{I}_j, \lambda\in [2^{\delta}]$, and define
\begin{align*}
&a_{1,j}\triangleq \frac{r_{i_1}+r_{i_{2^{\delta}}}}{2}, \qquad a_{s,j}\triangleq \frac{r_{i_{2^{\delta}}}+r_{i_{\eta_s}}}{2}-\sum_{s'=1}^{s-1} a_{s',j}, \quad 1<s\leq \delta,\\
&t_{j}\triangleq r_{i_{2^{\delta}}}-\sum_{s'=1}^{\delta} a_{s',j},
\end{align*}
where $\eta_s\triangleq 2^{\delta}- \sum_{s'=1}^{s-1}2^{\delta-s'}+1, s>1$.
Consider a quantizer $Q(\cdot)$ with ADC thresholds $t(1:n_q)$ and associated analog functions $f_{j}(y)\triangleq A_{\delta}(y,a^{\delta}_{j}), j\in [n_q]$, where $a^{\delta}_{j}=(a_{1,j},\cdots,a_{\delta,j})$. Then, $r_1,r_2,\cdots, r_{\gamma}$ is the non-decreasing sequence of roots of $f_{j}(\cdot)-t_j, j\in [n_q]$, and the associated code of the quantizer $Q(\cdot)$ is $\mathcal{C}$, as desired.
\end{proof}
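The coefficient formulas above can be verified numerically. The sketch below assumes $A_\delta(y,a^{\delta})$ is the concatenation of $\delta$ envelope detectors, $A_\delta(y,a^{\delta})=||\cdots|y-a_1|-a_2|\cdots-a_\delta|$, and reads the threshold as $t_j=r_{i_{2^{\delta}}}-\sum_{s'=1}^{\delta}a_{s',j}$; with these readings it recovers $(a_{1,j},a_{2,j},t_j)$ from a fully-symmetric root quadruple ($\delta=2$) and checks that every root is mapped onto the threshold:

```python
# Recover envelope-detector coefficients and the ADC threshold from a
# fully-symmetric root sequence, following the formulas in the proof of
# Proposition 3. The nested-absolute-value form of A_delta is an assumption
# about the concatenated envelope detectors.
def coeffs_from_roots(r):           # r: sorted roots, len(r) = 2**delta
    delta = len(r).bit_length() - 1
    a = [(r[0] + r[-1]) / 2]        # a_1 = (r_1 + r_{2^delta}) / 2
    for s in range(2, delta + 1):
        eta = 2**delta - sum(2**(delta - sp) for sp in range(1, s)) + 1
        a.append((r[-1] + r[eta - 1]) / 2 - sum(a))
    t = r[-1] - sum(a)              # threshold of the one-bit ADC
    return a, t

def A(y, a):                        # concatenated envelope detectors
    for ai in a:
        y = abs(y - ai)
    return y

# Roots generated by a_1 = 1, a_2 = 5, t = 2: {a_1 +- (a_2 +- t)}.
roots = [-6, -2, 4, 8]
a, t = coeffs_from_roots(roots)
print(a, t)                         # [1.0, 5.0] 2.0
assert all(abs(A(r, a) - t) < 1e-9 for r in roots)
```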
\begin{Proposition}[\textbf{Code Construction}]
Let $\ell=2$, $n_q\in \mathbb{N}$, and $\kappa_1$, $\kappa_2,\cdots,\kappa_{n_q}$ be even numbers such that $|\kappa_j-\kappa_{j'}|\leq 2, j,j'\in [n_q]$. Then, there exists a code $\mathcal{C}$ with transition count at position $j$ equal to $\kappa_j, j\in [n_q]$ satisfying properties 1), 2), 3), and 5) in Proposition \ref{Prop:2} such that $|\mathcal{C}|=\min\{2^{n_q}, \sum_{j=1}^{n_q}\kappa_j\}$. In particular, if $\kappa_j=2^\delta, j\in [n_q]$, then there exists $\mathcal{C}$ with $|\mathcal{C}|= \min\{2^{n_q}, n_q2^\delta\}$ satisfying properties 1)-5) in Proposition \ref{Prop:2}.
\label{Prop:4}
\end{Proposition}
\begin{proof}
Please refer to Appendix \ref{App:Prop:4}.
\end{proof}
Using Propositions \ref{Prop:3} and \ref{Prop:4}, we characterize the channel capacity for $\ell=2$ and $\delta>1$. Let us define $\Gamma_2\triangleq \min(2^{n_q}, n_q2^\delta +1)$ and the set $\mathcal{T}_{\Gamma_2}\subseteq \mathbb{R}^{\Gamma_2-1}$ as the set of sequences of increasing real numbers $r_1,r_2,\cdots, r_{\gamma}$, where $\gamma= n_q2^{\delta}$, for which there exists a partition $\{\mathcal{I}_j, j\in[n_q]\}$ of $[\gamma]$
such that $(r_{i_s}, s\in \mathcal{I}_j)$ is fully-symmetric for all $j\in [n_q]$, and there exists a code satisfying Properties 1)-5) in Proposition \ref{Prop:2} whose transition sets are equal to $\mathcal{I}_j, j\in [n_q]$ and which has exactly one repeated codeword, i.e., only the first and last codewords are repeated. The following theorem characterizes the channel capacity.
\begin{Theorem}
\label{th:2}
Consider a system parametrized by $(P,h,n_q,\delta,\ell)$, where $P>0, h\in \mathbb{R}, n_q\in \mathbb{N}$, $\delta>1$, and $\ell=2$. Then, the capacity is given by:
\begin{align}
\label{eq:th:2}
C_{env}(P,h,n_q,\delta,\ell)=\sup_{\mathbf{x}\in \mathbb{R}^{ \Gamma_2}} \sup_{P_{X}\in \mathcal{P}_{\mathbf{x}}(P)} \sup_{\mathbf{t}\in \mathcal{T}_{\Gamma_2}} I(X;\widehat{Y}),
\end{align}
where
$\widehat{Y}= Q(hX+N)$, $N\sim \mathcal{N}(0,1)$, $\mathcal{P}_{\mathbf{x}}(P)$ is the set of distributions on $\{x_1,x_2,\cdots,x_{\Gamma_2}\}$ such that $\mathbb{E}(X^2)\leq P$,
and $Q(y)=k$ if $y\in [t_{k},t_{k+1}), k\in \{1,\cdots,\Gamma_2-2\}$, and $Q(y)=0$ if $y\geq t_{\Gamma_2-1}$ or $y<t_{1}$.
\end{Theorem}
The proof follows by similar arguments as in the proof of Theorem \ref{th:1}. The converse follows from Proposition \ref{Prop:2} Item 4). Achievability follows from Proposition \ref{Prop:4}.
The region given in Theorem \ref{th:2} is difficult to analyze since finding the set $\mathcal{T}_{\Gamma_2}$ may be computationally complex. Inner bounds to the achievable region may be numerically derived by assuming additional symmetry restrictions such as uniform quantization. This is studied in more detail in the numerical evaluations provided in Section \ref{sec:num}.
For scenarios with $\ell>2,\delta>1$,
let us define $\Gamma_\ell\triangleq \min(2^{n_q}, (\ell-1)n_q2^\delta +1)$ and the set $\mathcal{T}_{\Gamma_\ell}\subseteq \mathbb{R}^{\Gamma_\ell-1}$ as the set of sequences of increasing real numbers $r_1,r_2,\cdots, r_{\gamma}$ satisfying the properties in Proposition \ref{prop:1.5}, for which there exists a code $\mathcal{C}$ satisfying Properties 1)-5) in Proposition \ref{Prop:2} such that i)
the sets $\mathcal{P}_{j,k}, j\in [n_q], k\in [\ell-1]$ in Proposition \ref{prop:1.5} are the indices of the codewords of $\mathcal{C}$ which have a transition to or from value $k$ in their $j$th element, and ii) $\mathcal{C}$ has exactly one repeated codeword, i.e., only the first and last codewords are repeated.
The following theorem characterizes the channel capacity. The proof follows from Propositions \ref{Prop:2} and \ref{Prop:4} similar to the arguments given in the proof of Theorem \ref{th:1}.
\begin{Theorem}
\label{th:4}
Consider a system parametrized by $(P,h,n_q,\delta,\ell)$, where $P>0, h\in \mathbb{R}, n_q\in \mathbb{N}$, and $\ell,\delta\in \mathbb{N}$. Then,
\begin{align}
\label{eq:th:4}
C_{env}(P,h,n_q,\delta,\ell) =\sup_{\mathbf{x}\in \mathbb{R}^{\Gamma_\ell}} \sup_{P_{X}\in \mathcal{P}_{\mathbf{x}}(P)} \sup_{\mathbf{t}\in \mathcal{T}_{\Gamma_\ell}} I(X;\widehat{Y}),
\end{align}
where $\Gamma_\ell$ is the maximum number of unique codewords in a code $\mathcal{C}$ satisfying Properties 1)-5)
in Proposition \ref{Prop:2}, $\widehat{Y}= Q(hX+N), N\sim \mathcal{N}(0,1)$, $\mathcal{P}_{\mathbf{x}}(P)$ consists of distributions on $\{x_1,x_2,\cdots,x_{\Gamma_\ell}\}$ such that $\mathbb{E}(X^2)\leq P$,
and $Q(y)=k$ if $y\in [t_{k},t_{k+1}), k\in [\Gamma_\ell-2]$ and $Q(y)=0$ if $y\geq t_{\Gamma_\ell-1}$ or $y<t_{1}$.\end{Theorem}
Optimizing \eqref{eq:th:4} requires calculating $\Gamma_\ell$. The total number of codes satisfying conditions 1)-5) in Proposition \ref{Prop:2} is bounded from above by ${(\ell-1)2^\delta n_q\choose (\ell-1)2^\delta, (\ell-1)2^\delta , \cdots, (\ell-1)2^\delta }$.
Hence, for systems with a few low-resolution ADCs and small $\delta$ (e.g., $\delta=1,2$), one can find $\Gamma_\ell$ by searching over all such codes.
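For such small parameters, the search can be sketched as follows: an exhaustive enumeration for $\ell=2$ over closed transition walks with $2^\delta$ transitions per bit position (the helper is ours and is exponential in $n_q2^\delta$):

```python
from itertools import permutations

# Exhaustively search closed transition walks on {0,1}^n_q in which each
# bit position flips exactly kappa = 2**delta times, and report the
# maximum number of unique codewords visited (ell = 2 case).
def max_unique_codewords(n_q, delta):
    kappa = 2**delta
    multiset = tuple(j for j in range(n_q) for _ in range(kappa))
    best = 0
    for seq in set(permutations(multiset)):
        word, visited = 0, {0}
        for pos in seq:
            word ^= 1 << pos        # flip one bit per transition
            visited.add(word)
        if word == 0:               # closed walk: last codeword repeats the first
            best = max(best, len(visited))
    return best

print(max_unique_codewords(2, 1))   # 4 == min(2**2, 2 * 2**1 + 1)
```

For $n_q=2, \delta=1$, the search recovers $\Gamma_2=\min(2^{n_q}, n_q2^\delta+1)=4$.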
\section{Analog Beamforming Architecture with Polynomial Operators}
\label{sec:poly}
In the previous section, we investigated the channel capacity under analog beamforming when sequences of concatenated envelope detectors are used for analog signal processing. In this section, we evaluate the resulting channel capacity when analog polynomial operators are used instead of envelope detectors. We define the set of implementable analog operators as
\[\mathcal{F}^{\delta}_{poly}\triangleq \Big\{f(x)=\sum_{i=0}^\delta a_i x^i, x\in \mathbb{R}\,\Big|\, a_i \in \mathbb{R}, i=0,1,\cdots,\delta\Big\}.\]
We denote the resulting channel capacity by $C_{poly}(P,h,n_q,\delta,\ell)$.
\begin{Example}[\textbf{Associated Code}]
\label{Ex:1p}
Let $n_q=\delta=2$ and $\ell=3$ and consider a quantizer characterized by polynomials
$f_{a,1}(y)=y^2+2y$ and $f_{a,2}(y)= y^2+3y, y\in \mathbb{R}$, and thresholds
\begin{align*}
& t(1,1)= 3, \quad t(1,2)=0, \quad t(2,1)= 10, \quad t(2,2)= 18.
\end{align*}
We have:
\begin{align*}
& f_{a,1,1}(y)= y^2+2y-3, \quad f_{a,1,2}(y)= y^2+2y\\
& f_{a,2,1}(y)=y^2+3y-10, \quad f_{a,2,2}(y)= y^2+3y-18.
\end{align*}
The ordered root sequence is $(r_1,r_2,\cdots,r_8)=
(-6,$ $-5,-3,-2,0,1,2,3)$. The
associated partition is:
\begin{align*}
& \mathsf{P}= \Big\{[-\infty,-6), (-6,-5),(-5,-3),
(-3,-2),(-2,0),
\\&\qquad (0,1),(1,2),(2,3),(3,\infty)\Big\}.
\end{align*}
The associated code is given by $22,21,20,10,00,10,20,21,22$. The size of the code is $|\mathcal{C}|=5$. The high SNR capacity of a channel using this quantizer at the receiver is $\log{|\mathcal{C}|}=\log{5}$.
\end{Example}
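The associated code in this example can be reproduced mechanically: compute and sort the roots of $f_{a,j}(\cdot)-t(j,k)$, pick one sample point per Voronoi cell, and let the $\ell$-ary digit of ADC $j$ count how many of its thresholds the analog output exceeds. A sketch:

```python
from math import sqrt

# Reproduce the associated code of the example above: n_q = 2, ell = 3.
f = [lambda y: y**2 + 2*y, lambda y: y**2 + 3*y]   # f_{a,1}, f_{a,2}
t = [[3, 0], [10, 18]]                             # thresholds t(j,k)

def quad_roots(b, c):              # real roots of y^2 + b*y + c = 0
    d = sqrt(b * b - 4 * c)
    return [(-b - d) / 2, (-b + d) / 2]

# Roots of f_j(y) - t(j,k), sorted: the Voronoi-region boundaries.
roots = sorted(r for j, b in enumerate([2, 3]) for tk in t[j]
               for r in quad_roots(b, -tk))
assert [int(round(r)) for r in roots] == [-6, -5, -3, -2, 0, 1, 2, 3]

# One sample point per region; the ell-ary digit of ADC j counts how many
# of its thresholds the analog output f_j(y) exceeds.
samples = ([roots[0] - 1] + [(u + v) / 2 for u, v in zip(roots, roots[1:])]
           + [roots[-1] + 1])
code = ["".join(str(sum(f[j](y) >= tk for tk in t[j])) for j in range(2))
        for y in samples]
print(code)            # ['22', '21', '20', '10', '00', '10', '20', '21', '22']
print(len(set(code)))  # 5
```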
As a first step, we show the following proposition about properties of codes and their associated polynomial functions, which is analogous to Proposition \ref{Prop:2} addressing codes and their associated envelope-detector-based analog processing functions. The proof follows by arguments similar to those for Proposition \ref{Prop:2} and is omitted for brevity.
\begin{Proposition}
\label{Prop:5}
Consider a quantizer $Q(\cdot)$ with threshold matrix $\mathbf{t}\in \mathbb{R}^{n_q\times (\ell-1)}$ and associated polynomials $f_{j}(\cdot)\in \mathcal{F}^{\delta}_{poly}, j\in [n_q]$, such that
$f_{j,k}(\cdot)\triangleq f_{j}(\cdot)- t(j,k), j\in [n_q], k\in [\ell-1]$ do not have repeated roots. The associated code $\mathcal{C}$ satisfies the following:
\begin{enumerate}[leftmargin=*]
\item The number of codewords in $\mathcal{C}$ is equal to $\gamma\triangleq (\ell-1)\delta n_q+1$, i.e. $\mathcal{C}=(\mathbf{c}_0, \mathbf{c}_1,\cdots, \mathbf{c}_{\gamma-1})$.
\item All elements of the first codeword $\mathbf{c}_0$ are either equal to $\ell-1$ or equal to $0$, i.e., $c_{0,j}=0, j\in [n_q]$ or $c_{0,j}=\ell-1, j\in [n_q]$.
\item Consecutive codewords differ in only one position, and their $L_1$ distance is equal to one, i.e., $\sum_{j=1}^{n_q}|c_{i,j}-c_{i+1,j}|=1, i\in \{0,1,\cdots,\gamma-2\}$.
\item The transition count at every position is $\kappa_j= \frac{\gamma}{n_q}= (\ell-1)\delta, j\in [n_q]$.
\item Let $i_1,i_2,\cdots, i_{\kappa_j}$ be the non-decreasingly ordered indices of codewords where the $j$th element has value-transitions. Then, the sequence $(c_{i_1,j},c_{i_2,j},\cdots,c_{i_{\kappa_j},j})$ is periodic, in each period it takes all values between $0$ and $\ell-1$, and $|c_{i_k,j}-c_{i_{k+1},j}|=1, k\in [\kappa_j-1]$ holds. Furthermore, $c_{i_1,j}\in \{0,\ell-1\}$.
\item If $\delta$ is even, then $|\mathcal{C}|\leq \min(\ell^{n_q}, (\ell-1)\delta n_q)$, and if $\delta$ is odd, then $|\mathcal{C}|\leq \min(\ell^{n_q}, (\ell-1)\delta n_q+1)$.
\end{enumerate}
\end{Proposition}
Using Proposition \ref{Prop:4} and Proposition \ref{Prop:5}, and following the arguments in the proof of Theorem \ref{th:1}, one can prove the following theorem.
\begin{Theorem}
\label{th:5}
Consider a system parametrized by $(P,h,n_q,\delta,\ell)$, where $P>0, h\in \mathbb{R}, n_q\in \mathbb{N}$, and $\ell=2$. Then,
\begin{align}
\label{eq:th:5}
C_{poly}(P,h,n_q,\delta,\ell)=\sup_{\mathbf{x}\in \mathbb{R}^{ \Gamma}} \sup_{P_{X}\in \mathcal{P}_{\mathbf{x}}(P)} \sup_{\mathbf{t}\in \mathbb{R}^{\Gamma-1}} I(X;\widehat{Y}),
\end{align}
if $\delta$ is even and
\begin{align}
\label{eq:th:3}
\sup_{\mathbf{x}\in \mathbb{R}^{\Gamma}} \sup_{P_{X}\in \mathcal{P}_{\mathbf{x}}(P)} \sup_{\mathbf{t}\in \mathbb{R}^{\Gamma-1}} I(X;\widehat{Y}) & \leq C_{poly}(P,h,n_q,\delta,\ell)
\\&\nonumber \leq \sup_{\mathbf{x}\in \mathbb{R}^{\Gamma'}} \sup_{P_{X}\in \mathcal{P}_{\mathbf{x}}(P)} \sup_{\mathbf{t}\in \mathbb{R}^{\Gamma'-1}} I(X;\widehat{Y}),
\end{align}
if $\delta$ is odd, where $\Gamma\triangleq \min(2^{n_q}, \delta n_q)$, $\Gamma'\triangleq \min(2^{n_q}, \delta n_q+1)$,
$\widehat{Y}= Q(hX+N)$, $N\sim \mathcal{N}(0,1)$, $\mathcal{P}_{\mathbf{x}}(P)$ is the set of distributions on $\{x_1,x_2,\cdots,x_{\Gamma}\}$ such that $\mathbb{E}(X^2)\leq P$,
and $Q(y)=k$ if $y\in [t_{k},t_{k+1}), k\in \{1,\cdots,\Gamma-2\}$ and $Q(y)=0$ if $y\geq t_{\Gamma-1}$ or $y<t_{1}$.
\end{Theorem}
We make the following observations regarding the achievable regions in Theorems \ref{th:4} and \ref{th:5}:
\\1) It can be noted from Equations \eqref{eq:th:5} and \eqref{eq:th:3} that the capacity expressions for odd and even values of $\delta$ are not the same. This is due to Property 6) in Proposition \ref{Prop:5}, which gives a different number of unique codewords for odd and even $\delta$. The reason is that while even degree polynomials yield the same output sign as their input converges to $-\infty$ and $\infty$, for odd degree polynomials the output signs are different as the input converges to $-\infty$ and $\infty$. This can potentially yield a larger number of unique codewords in the associated code of the quantizer, since the first and last codewords are not equal to each other. This is in contrast with Theorem \ref{th:4}, where the capacity expression is the same for even and odd values of $\delta$. The reason is that for absolute value operators, the output sign is positive as the input converges to $-\infty$ and $\infty$.
\\2) The region given in Theorem \ref{th:5} strictly contains that of Theorem \ref{th:4} for the same value of $\Gamma$. The reason is that envelope detectors generate absolute value functions which force a symmetric structure on the Voronoi regions of $Q(\cdot)$. This manifests in the fully-symmetric condition $\mathbf{t}\in \mathcal{T}_{\Gamma}$ in Theorem \ref{th:4} and the properties given in Proposition \ref{prop:1.5}; whereas for polynomial functions, no such symmetry is required and hence the optimization in Theorem \ref{th:5} is over all $\mathbf{t}\in \mathbb{R}^{\Gamma-1}$. However, as shown in Section \ref{sec:cir}, generating polynomial operators of degree up to $\delta$ requires a larger power budget compared to concatenating $\delta$ envelope detectors. This points to a rate-power tradeoff in using envelope detectors and polynomial operators.
\\3) One potential approach to improve upon the capacity of the system in Theorem \ref{th:4} is to augment the envelope detectors by linearly combining their output with the original signal, that is, to generate operators of the form $f(y)=|y-a|+by, a,b\in \mathbb{R}$ instead of $f(y)=|y-a|, a\in \mathbb{R}$. This removes the fully-symmetric condition $\mathbf{t}\in \mathcal{T}_{\Gamma}$ in Theorem \ref{th:4} and yields a larger channel capacity. However, such linear combinations are challenging to implement using analog circuits due to timing issues in synchronizing the output of the envelope detector with the original signal. We hope to address these implementation challenges in future work.
\\4) For envelope detectors, the dimension $\Gamma$ is equal to $n_q(\ell-1)2^{\delta}$, whereas for polynomial operators, it is equal to $n_q(\ell-1)\delta$. So, the dimension of the optimization space increases faster when concatenating envelope detectors than when increasing the polynomial degree. That is, at high SNRs, the capacity in Theorem \ref{th:4} is larger than that of Theorem \ref{th:5} for the same value of $\delta>1$. This is also observed in the numerical evaluations in Section \ref{sec:num}.
\section{A Hybrid Beamforming Architecture with One-bit ADCs}
\label{sec:hyb}
In the previous sections, we investigated the channel capacity under analog beamforming equipped with different collections of implementable analog functions. In this section, we consider hybrid beamforming with one-bit ADCs, where the beamforming matrix at the receiver $\mathbf{w}\in \mathbb{R}^{n_r\times n_s}$ maps the received signal $Y^{n_r}$ to $\widetilde{Y}^{n_s}$, where $n_s>1$ (Figure \ref{fig:0}). In this case, we provide a quantization setup, using envelope detectors, which accommodates QAM modulation, and provide inner bounds to the system capacity. In the next section, we numerically evaluate the resulting capacity and provide comparisons with prior works.
\subsection{Quantizer Construction}
We assume that $n_q>n_s$; otherwise, one can use analog beamformers to further reduce the dimension of the beamformer output without performance loss in terms of achievable rates. As mentioned in Section \ref{sec:form}, a quantizer is characterized by its analog processing functions $f_j(\cdot), j\in [n_q]$ and one-bit ADC thresholds $t^{n_q}\in \mathbb{R}^{n_q}$. Let us fix a threshold step parameter $\zeta>0$. We take the analog processing functions as follows:
\begin{align*}
f_j(\widetilde{y}^{n_s})=
\begin{cases}
\widetilde{y}_j \quad &\text{ if } j\leq n_s,\\
|\widetilde{y}_{\bar{j}}|& \text{ if } j>n_s,
\end{cases} \qquad
\end{align*}
where $\bar{j}\in [n_s]$ is the residue of $j$ modulo $n_s$ (with $\bar{j}=n_s$ when $n_s$ divides $j$). We take the threshold values as follows:
\begin{align*}
t_j=
\begin{cases}
0 \quad &\text{ if } j\leq n_s,\\
\ceil{\frac{j-n_s}{n_s}}\zeta& \text{ if } j>n_s.
\end{cases} \qquad
\end{align*}
This choice is clarified through the following example.
\begin{figure}[t]
\centering
\vspace{0.05in}
\includegraphics[width=0.6\textwidth]{quant.pdf}
\vspace{-.15in}
\caption{The quantizer outputs and Voronoi regions in Example \ref{ex:4}.}
\vspace{-.25in}
\label{fig:quant}
\end{figure}
\begin{Example}
\label{ex:4}
Let $n_s=2$, $n_q=6$, and $\zeta=1$. Then, for the construction described above, the six one-bit ADC operations are as follows:
\begin{align*}
& Q_1:\widetilde{y}_1\lessgtr 0, \quad Q_2:\widetilde{y}_2\lessgtr 0, \quad
Q_3:|\widetilde{y}_1|\lessgtr 1, \quad
\\&Q_4:|\widetilde{y}_2|\lessgtr 1, \quad
Q_5:|\widetilde{y}_1|\lessgtr 2, \quad
Q_6:|\widetilde{y}_2|\lessgtr 2.
\end{align*}
The quantizer outputs are shown in Figure \ref{fig:quant}. Note that this resembles a 16-QAM modulation.
\end{Example}
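The construction above can be scripted directly. In the sketch below, the threshold formula is taken as $\ceil{\frac{j-n_s}{n_s}}\zeta$ (our reading of the construction, chosen because it reproduces Example \ref{ex:4}):

```python
import math

# Build the analog functions and one-bit ADC thresholds of the hybrid
# quantizer. The threshold formula ceil((j - n_s)/n_s) * zeta is our
# reading of the construction; it reproduces Example 4
# (n_s = 2, n_q = 6, zeta = 1).
def hybrid_quantizer(n_s, n_q, zeta):
    ops = []
    for j in range(1, n_q + 1):
        if j <= n_s:
            ops.append((f"y{j}", 0.0))                 # sign bit: y_j <> 0
        else:
            jb = (j - 1) % n_s + 1                     # index in [n_s]
            th = math.ceil((j - n_s) / n_s) * zeta     # magnitude threshold
            ops.append((f"|y{jb}|", th))
    return ops

for f_desc, th in hybrid_quantizer(2, 6, 1.0):
    print(f_desc, "<>", th)
# y1 <> 0.0, y2 <> 0.0, |y1| <> 1.0, |y2| <> 1.0, |y1| <> 2.0, |y2| <> 2.0
```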
\subsection{Achievable Rates at High SNR}
As argued in Section \ref{sec:env}, the high SNR capacity is equal to the maximum number of quantization regions which can be generated given the number of one-bit ADCs $n_q$ and set of implementable analog functions $\mathcal{F}$. The following theorem provides upper and lower bounds on the high SNR channel capacity of beamforming architectures equipped with envelope detectors for analog signal processing.
\begin{Theorem}
\label{th:6}
Assume that the channel matrix observed after beamforming is full-rank, i.e., the rank of $\mathbf{w}^H\mathbf{h}\mathbf{f}$ is equal to $n_s$. Let $C_{env}(P,n_s, n_q,\ell,\delta)$ denote the channel capacity under power constraint $P$. Then,
\begin{align}
\label{eq:hyb}
n_s\left(1+\log\left(\frac{n_q-n_s}{n_s}\right)\right)
\leq \lim_{P\to \infty} C_{env}(P,n_s, n_q,2,1)\leq \log\sum_{k = 0}^{n_s} {2n_q \choose k}.
\end{align}
\end{Theorem}
The lower bound in Equation \eqref{eq:hyb} is achieved by the quantizer described in this section. To see this, note that by construction, the quantizer partitions each axis into $2\left(\frac{n_q-n_s}{n_s}\right)$ intervals, and each resulting quantization region is mapped to a unique quantizer output (e.g., Figure \ref{fig:quant}). So, the total number of unique quantizer outputs is $|\mathcal{C}|= 2^{n_s}(\frac{n_q-n_s}{n_s})^{n_s}$. The result follows by noting that the communication rate is $\log{|\mathcal{C}|}$.
The upper bound follows by counting the number of partition regions generated by $2n_q$ hyperplanes in general position in the $n_s$-dimensional Euclidean space (e.g. \cite{khalili2021mimo,alexanderson1978simple}). Figure \ref{fig:th:5} provides numerical simulations of the i) upper bounds and ii) lower bounds in Equation \eqref{eq:hyb} and iii) the high SNR channel capacity under hybrid beamforming without analog processing derived in \cite{khalili2021mimo} for $n_s=3$. It can be observed that the proposed architecture outperforms the one in \cite{khalili2021mimo} if the number of one-bit ADCs is larger than $n_q=8$.
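Both sides of Equation \eqref{eq:hyb} are straightforward to evaluate; the following sketch (function name ours) computes the two bounds for $n_s=3$:

```python
from math import comb, log2

# Evaluate the high-SNR bounds of Theorem 6: the lower bound achieved by
# the proposed envelope-detector quantizer, and the upper bound from the
# maximum number of regions cut by 2*n_q hyperplanes in n_s dimensions.
def bounds(n_s, n_q):
    lower = n_s * (1 + log2((n_q - n_s) / n_s))
    upper = log2(sum(comb(2 * n_q, k) for k in range(n_s + 1)))
    return lower, upper

for n_q in (6, 9, 12):
    lo, up = bounds(3, n_q)
    print(n_q, round(lo, 2), round(up, 2))
# The lower bound never exceeds the upper bound over this range.
assert all(bounds(3, n)[0] <= bounds(3, n)[1] for n in range(4, 30))
```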
\begin{figure}[t]
\centering
\vspace{0.05in}
\includegraphics[width=0.7\textwidth]{hyb.pdf}
\vspace{-.15in}
\caption{Channel capacity for i) the proposed architecture and $n_q$ one-bit ADCs (lower-bound in Theorem \ref{th:6}), ii) no analog processing prior to quantization and $2n_q$ one-bit ADCs (upper-bound in Theorem \ref{th:6}) and
iii) no analog processing prior to quantization and $n_q$ one-bit ADCs (\cite{khalili2021mimo}).
}
\vspace{-.25in}
\label{fig:th:5}
\end{figure}
\section{Numerical Analysis of Channel Capacity}
\label{sec:num}
In this section, we provide a numerical analysis of the capacity bounds derived in the prequel and evaluate the gains due to the use of nonlinear analog components in the receiver terminal.
\subsection{Capacity Evaluation for Envelope Detector Architectures}
We compute an inner-bound to the capacity expression in Theorem \ref{th:4}. To this end, we first use the extension of the Blahut-Arimoto algorithm to discrete memoryless channels with input cost constraints given in \cite{kobayashi2018joint} to find the best input distribution. Then, we conduct a brute-force search over all possible uniform quantizers. In order to find the mass points of $X$, we discretize the real line using a grid with step-size 0.1 and optimize the distribution over the resulting discrete space. Figure \ref{fig:env} shows the resulting achievable rates for SNRs in the range of 0 to 30 dB for various values of $(n_q,\ell,\delta)$. Observe that without nonlinear analog processing, for $\ell=2$, the high SNR capacity is $\log(n_q+1)$ \cite{khalili2021mimo}. For instance, for $n_q=\delta=2$, the inner bound in Figure \ref{fig:env} surpasses the high SNR capacity without nonlinear analog processing for SNRs higher than 15 dB, and its high SNR capacity is more than $60\%$ greater than in the case without nonlinear analog processing.
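The input-distribution step relies on the Blahut-Arimoto algorithm. A minimal sketch of the unconstrained version is given below, checked against a binary symmetric channel; the cost-constrained variant of \cite{kobayashi2018joint} additionally weights the update by a Lagrange-multiplier cost term:

```python
import numpy as np

# Minimal Blahut-Arimoto iteration for a discrete memoryless channel
# W[x, y] = P(y | x); returns capacity in bits. The power-constrained
# variant used in the paper adds a Lagrange-multiplier cost term.
def blahut_arimoto(W, iters=200):
    q = np.full(W.shape[0], 1 / W.shape[0])            # input distribution
    for _ in range(iters):
        p_y = q @ W                                    # output marginal
        # D[x]: KL divergence between W(.|x) and p_y (in nats)
        D = np.sum(np.where(W > 0, W * np.log(W / p_y), 0.0), axis=1)
        q = q * np.exp(D)
        q /= q.sum()
    p_y = q @ W
    return np.sum(q[:, None] * np.where(W > 0, W * np.log2(W / p_y), 0.0))

# Sanity check: a BSC with crossover 0.1 has capacity 1 - H2(0.1) ~ 0.531.
W = np.array([[0.9, 0.1], [0.1, 0.9]])
print(round(blahut_arimoto(W), 3))   # 0.531
```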
\begin{figure}[t]
\centering
\vspace{0.05in}
\includegraphics[width=0.7\textwidth]{env.pdf}
\vspace{-.15in}
\caption{The set of achievable rates for various values of $(n_q,\ell,\delta)$ for architectures using envelope detectors.
}
\vspace{-.25in}
\label{fig:env}
\end{figure}
\subsection{Capacity Evaluation for Polynomial Operator Architectures}
We numerically evaluate the inner bound to the capacity region given in Theorem \ref{th:5}. Similar to the previous case, we first use the extension of the Blahut-Arimoto algorithm to find the best input distribution. Then, we conduct a brute-force search over all possible symmetric threshold vectors, where a vector $\mathbf{t}$ is symmetric if $t_k=-t_{\Gamma-k}, k\in [\Gamma-1]$ \cite{singh2009limits}. To find the mass points of $X$, we discretize the real line using a grid with step-size 0.1, and optimize the distribution over the resulting discrete space. Figure \ref{fig:delta} shows the resulting achievable rates for SNRs in the range of 0 to 30 dB for various values of $(n_q,\ell,\delta)$. It can be observed that the performance improvements due to the use of higher degree polynomials are more significant at high SNRs. Furthermore, it can be observed that the set of achievable rates depends only on $\min(\ell^{n_q},(\ell-1)\delta n_q+1)$. For instance, the achievable rate when $n_q=2,\ell=2,\delta=2$ is the same as that for $n_q=3,\ell=2,\delta=1$, as shown in the figure; in this case, using higher degree polynomials can compensate for a smaller number of ADCs. On the other hand, the achievable rate for $n_q=3,\ell=2,\delta=1$ is lower than that for $n_q=3,\ell=2,\delta=2$, so using higher degree polynomials leads to rate improvements in this scenario.
\begin{figure}[t]
\centering
\vspace{0.05in}
\includegraphics[width=0.7\textwidth]{delta_sim.pdf}
\vspace{-.15in}
\caption{The set of achievable rates for various values of $(n_q,\ell,\delta)$ for architectures using polynomial operators.
}
\vspace{-.25in}
\label{fig:delta}
\end{figure}
\subsection{Achievable Inner-Bound for Hybrid Beamforming with Envelope Detectors}
We numerically evaluate the inner-bound to the channel capacity which is achievable using the beamforming architecture described in Section \ref{sec:hyb}. We have simulated the communication system for $n_s=2$, $\ell=2$, and $n_q\in \{5,6\}$ and estimated the achievable rates empirically. The results are shown in Figure \ref{fig:hybrid}. To perform the simulation, we have optimized the threshold parameter $\zeta$, the input alphabet values $\mathbf{x}\in \mathbb{R}^{2}$, and the probability distribution $P_X$ on the alphabet of the input points using a gradient descent optimization method. We have simulated the channel by generating 15000 independent and identically distributed samples of noise vectors and input messages. We have used the empirical observations to estimate the transition probabilities of the discrete channel resulting from the quantization process. We have used the Blahut-Arimoto algorithm to find the capacity of the resulting channel. It can be observed in Figure \ref{fig:hybrid} that for $n_q=5$, the high SNR rate is larger than
the $n_s\left(1+\log{\left(\frac{n_q-n_s}{n_s}\right)}\right)$ lower-bound given in Theorem \ref{th:6}, whereas it is equal to this lower-bound for $n_q=6$.
\begin{figure}[!h]
\centering
\includegraphics[width=0.7\textwidth]{nonlin5.pdf}
\caption{(a) The circuit design for the generation of fourth and second order polynomials, (b) the power consumption breakdown of the circuits for generation of equal voltage amplitude (corresponding to 0 dBm power) at the second and fourth harmonics.}
\label{fig:poly}
\end{figure}
\begin{figure}[t]
\centering
\vspace{0.05in}
\includegraphics[width=0.7\textwidth]{hybrid.pdf}
\vspace{-.15in}
\caption{Inner-bound to achievable rates for the hybrid beamforming architecture of Section \ref{sec:hyb}.
}
\vspace{-.25in}
\label{fig:hybrid}
\end{figure}
\section{Circuit Design for Polynomial Operators and Envelope Detectors}
\label{sec:cir}
In this section, we assume that we are given a direct current (DC) signal, and our objective is to produce a polynomial function of degree up to four of the input DC value.
Note that if the input baseband signal is not a DC value, such as a $\mathrm{sinc}(\cdot)$ function, one can use an integrator to transform the signal into a DC value (e.g., \cite{Shirani2022}). Hence, this assumption does not lose generality.
In practice, there are two methods to achieve the above objective: (i) DC domain nonlinear function synthesis based on the quadratic I-V characteristic of the transistor, increasing the order of the polynomial by cascading circuits \cite{DCNON}; (ii) translating DC values to sinusoidal waveforms, and then generating harmonics of these waveforms whose amplitudes are polynomially dependent on the fundamental-frequency amplitude. The former has simpler circuitry; however, it can only produce a specific set of polynomials, i.e., restricted polynomial coefficient values. The latter can produce polynomials with arbitrary coefficients through efficient filtering of undesired harmonic terms. However, it has higher power consumption and more complex circuitry, and its implementation requires careful quantification of the nonlinear behavior of transistors, e.g., using Volterra-Wiener series representation methods \cite{AghasiTHz}.
\begin{figure*}[thb]
\centering
\includegraphics[width=0.9\textwidth]{pipeline2.pdf}
\caption{(a) The cascade of circuits emulating absolute value operators, (b) conventional pipeline ADC architecture.
}
\vspace{-.25in}
\label{fig:pipeline}
\end{figure*}
\begin{figure}[t]
\centering
\vspace{0.05in}
\includegraphics[width=0.7\textwidth]{Circuits.pdf}
\vspace{-.15in}
\caption{(a) The deployed differential to single-ended op-amp deployed in the envelope detector (b) the circuit diagram of the implemented envelope detector}
\label{Xuyang1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{ImpactGain.pdf}
\vspace{-.15in}
\caption{The impact of amplifier gain on the half-cycle rectification amplitude distortion illustrated for (a) gain of 10 dB, and (b) 25 dB.}
\label{Xuyang2}
\end{figure}
To explain the proposed construction, let us consider the problem of producing the fourth order polynomial $f(x)=x^4+x^2$, where $x$ is the DC input value. Since naturally the amplitude level of the fourth harmonic is less than that of the second harmonic, a harmonic-centric power optimization is needed to produce the desired polynomial. Fig. \ref{fig:poly}(a) shows a circuit design to generate $f(x)=x^4+x^2$. In order to generate equal amplitudes at the second and fourth harmonics, the power gain of the transistors generating the fourth harmonic should be larger, leading to an increased power consumption in generating the fourth order term compared to the second order term. Figure \ref{fig:poly}(b) illustrates numerical values for the power consumption of the proposed circuit through simulations. It can be observed that the ratio of the power consumption for the generation of fourth order term compared to the second order term increases with frequency since the transistor power gain drops at higher frequencies. These results are based on CMOS 65nm technology. The power consumption can be further improved by transitioning into smaller transistor nodes.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{eyediagrams.pdf}
\caption{The input waveform eye diagram for (a) PAM4 and (b) PAM8 modulations compared with the corresponding envelope detector output eye diagrams in (c) and (d). }
\label{Xuyang3}
\end{figure}
\subsection{Implementation of Envelope Detectors}
\label{envelope_circuit}
In this section, we explain a circuit design for envelope detectors which can be used to realize the receiver architectures studied in prior sections.
The circuit block diagram of the proposed multi-step envelope detector is shown in Fig. \ref{fig:pipeline}(a). Compared with the conventional pipeline ADC shown in Fig. \ref{fig:pipeline}(b), this circuit exhibits a major advantage in terms of power saving by removing the one-bit DAC and subtractor used in each stage. In terms of functionality, the linearity in both scenarios is mainly limited by the gain and bandwidth of the operational amplifiers (op-amps) used in the functional blocks.
The schematic of the envelope detector is shown in Fig. \ref{Xuyang1}. The operational amplifiers deployed in the envelope detector are two-stage differential-to-single-ended amplifiers with a gain-bandwidth product (GBW) of 32 GHz, shown in Fig. \ref{Xuyang1}(a). This GBW allows amplifying signals up to 10 GHz with a gain above 15 dB, which is critical for the operation of the envelope detector shown in Fig. \ref{Xuyang1}(b). The resistors in this circuit establish a trade-off between the bandwidth and the waveform distortion. In other words, a larger resistance value leads to smaller distortion of the flipped negative half-cycle at the expense of increasing the resistance associated with the output pole of each envelope detector stage, and subsequently limiting the bandwidth of operation. In our simulations, we assumed 500~$\Omega$ resistors. According to the simulation results shown in Fig. \ref{Xuyang2}, a higher gain of each operational amplifier leads to smaller amplitude distortion at the output, which is naturally achieved at the expense of a smaller bandwidth for the amplifier, thereby leading to a distortion-bandwidth tradeoff.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{Pdc.pdf}
\caption{The linear growth of power consumption with the data rate in envelope detector analog circuits.
}
\label{Xuyang4}
\end{figure}
By cascading multiple stages of envelope detectors, as mentioned in Section \ref{sec:env}, the achievable data rate increases. To justify this claim, we have simulated the two-stage envelope detector circuit in Fig. \ref{fig:pipeline}(a). The DC level shifter is realized by diode-based circuits which consume no DC power and can operate up to the desired data rates.
The simulated eye-diagram performance of the two-stage envelope detector is illustrated in Fig. \ref{Xuyang3}. It can be noted that at such high data rates, in contrast to lower data rates where a square wave is feasible, sinusoidal waveforms are the only feasible inputs to an ADC \cite{Savoj}. Therefore, in our simulations we consider the input to the absolute value function circuitry to be a sinusoidal waveform.
An important characteristic of the envelope detectors is the input dynamic range over which the output waveform follows the input amplitude with minimal deviation. By increasing the input magnitude, the amplifier's constituent transistors are pushed into nonlinear regions of operation, and a ``compression behavior'' is observed. The point at which this compression happens is an upper-bound on the magnitude of input signals. This value is critical in scenarios where amplitude modulation is deployed in the transceiver. To evaluate our proposed envelope detector, two modulation scenarios, Pulse Amplitude Modulation (PAM)4 and PAM8, are considered as inputs to the envelope detector, as shown in Fig. \ref{Xuyang3}. In both cases, the output rectified eye diagram exhibits a clear distinction between the amplitude levels (denoted by different colors) such that the comparators in the following one-bit ADCs can distinguish the amplitude levels with small error. The simulation results in Fig. \ref{Xuyang3} demonstrate that the amplitude ratios at the output follow the input amplitude ratios even when the dynamic range of the input waveform is only between -400 to +400 mV. It is also noteworthy that a PAM8 waveform in the receiver baseband can be constructed by passing the QAM-64 signal (in Table I) through a quadrature down-conversion mixer \cite{RazaviRF}.
By comparing the power consumption of the 2nd- and 4th-order polynomial analog operations in Fig. \ref{fig:poly}(b) with that of the absolute value circuits in Fig. \ref{Xuyang4}, it is clear that the absolute value function lends itself to high-data-rate operation with significantly smaller DC power consumption, which justifies its deployment in THz communication systems.
Theorems \ref{th:2} and \ref{th:5} show that the channel capacity depends on the number of ADCs through $n_q2^{\delta_{env}}+1$ and $n_q\delta_{poly}+1$, respectively, so that the use of envelope detectors and quadratic analog operators instead of linear operators ($\delta_{env}:0\to 1$ and $\delta_{poly}:1\to 2$) has an equivalent effect on capacity to that of quadrupling and doubling the number of ADCs $n_q$, respectively. This fact, along with the power consumption values given in Figures \ref{Xuyang4} and \ref{fig:poly}(b), justifies the use of nonlinear analog operators. It should be noted that power consumption depends on the circuit configuration, transistor sizes, and passive quality factors. These simulations serve as a proof of concept to justify the effectiveness of the proposed receiver architecture designs.
\section{Conclusion}
The application of nonlinear analog operations in MIMO receivers was considered. A receiver architecture consisting of linear analog combiners, implementable nonlinear analog operators, and few-bit threshold ADCs was designed, and
the fundamental information theoretic performance limits of the resulting communication system were investigated. Furthermore, circuit-level simulations, using a 65 nm Bulk CMOS technology, were provided to show the implementability of the desired nonlinear analog operators with practical power budgets.
\begin{appendices}
\section{Proof of Proposition \ref{Prop:4}}
\label{App:Prop:4}
We provide an outline of the proof. Let us consider the following cases:
\\\textbf{Case 1:} $\sum_{j=1}^{n_q }\kappa_j\geq 2^{n_q}$
\\In this case, one can use a balanced Gray code \cite{bhat1996balanced} to construct $\mathcal{C}$. A balanced Gray code is a (binary) code in which consecutive codewords have Hamming distance equal to one, and each bit position changes value either $2\floor{\frac{2^{n_q}}{2n_q}}$ or $2\ceil{\frac{2^{n_q}}{2n_q}}$ times. If $\min_{j\in [n_q]} \kappa_j\geq 2\ceil{\frac{2^{n_q}}{2n_q}}$, the proof is complete, as one can concatenate the balanced Gray code with a series of additional repeated codewords to satisfy the transition counts; since the balanced Gray code is a subcode of the resulting code, we have $|\mathcal{C}|=2^{n_q}$. Otherwise, there exists $j\in [n_q]$ such that $\kappa_j< 2\ceil{\frac{2^{n_q}}{2n_q}}$.
In this case, without loss of generality, let us assume that $\kappa_1\leq \kappa_2\leq \cdots \leq \kappa_{n_q}$. Note that since $|\kappa_j-\kappa_{j'}|\leq 2$ for $j,j'\in [n_q]$ and the $\kappa_j, j\in [n_q]$ are even, there is at most one $j^*\in [n_q]$ such that $\kappa_{j^*}< \kappa_{j^*+1}$. Let $\kappa'_1,\kappa'_2,\cdots,\kappa'_{n_q}$ be the transition count sequence of a balanced Gray code $\mathcal{C}'$ written in non-decreasing order.
Note that $2\ceil{\frac{2^{n_q}}{2n_q}}-2\floor{\frac{2^{n_q}}{2n_q}}=2$. Hence, similar to the above argument, there can be only one $j'\in [n_q]$ for which $\kappa'_{j'}< \kappa'_{j'+1}$.
Since $\sum_{j=1}^{n_q} \kappa_j\geq 2^{n_q}= \sum_{j=1}^{n_q} \kappa'_j$, we must have $j^*\leq j'$. So, the balanced Gray code can be used as a subcode, similar to the previous case, by correctly ordering the bit positions to match the order of $\kappa_j, j\in [n_q]$. This completes the proof.
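As a concrete illustration of the codes used above, the following Python sketch (our own, illustrative; it builds the standard binary reflected Gray code rather than the balanced construction of \cite{bhat1996balanced}) generates a Gray code and tallies its per-position transition counts over the cyclic codeword sequence.

```python
def gray_code(n):
    """Binary reflected Gray code: consecutive codewords differ in one bit."""
    if n == 0:
        return [[]]
    prev = gray_code(n - 1)
    # Prefix 0 to the list, then 1 to its reversal (the "reflection" step).
    return [[0] + c for c in prev] + [[1] + c for c in reversed(prev)]

def transition_counts(code):
    """Number of value changes in each bit position over the cyclic code."""
    n = len(code[0])
    counts = [0] * n
    for i in range(len(code)):
        nxt = code[(i + 1) % len(code)]
        for j in range(n):
            counts[j] += code[i][j] != nxt[j]
    return counts
```

For $n_q=3$, for instance, the cyclic transition counts come out as $(2,2,4)$: all even, as required by the parity conditions in the proposition statement.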
\\\textbf{Case 2:} $\sum_{j=1}^{n_q}\kappa_j< 2^{n_q}$
\\The proof is based on techniques used in the construction of balanced Gray codes \cite{bhat1996balanced}.
We prove the result by induction on $n_q$.
The proof for $n_q=1,2$ is straightforward and follows by construction of length-one and length-two sequences. For $n_q>2$, assume that the result holds for all $n'_q< n_q$. Without loss of generality, assume that $\kappa_1\leq \kappa_2\leq \cdots \leq \kappa_{n_q}$. The proof considers four sub-cases as follows.
\\\textbf{Case 2.i:} $\sum_{j=3}^{n_q}\kappa_j\in [0,2^{n_q-2}]$
\\In this case, by the induction assumption, there exists $\mathcal{C}'$, a code with codewords of length $n_q-2$, whose transition sequence is $\kappa_3,\kappa_4,\cdots,\kappa_{n_q}$, and $|\mathcal{C}'|= \sum_{j=3}^{n_q}\kappa_j$. We construct $\mathcal{C}$ from $\mathcal{C}'$ as follows. Let $\mathbf{c}_{0}=(0,0,\mathbf{c}'_0)$, $\mathbf{c}_{1}=(0,1,\mathbf{c}'_0)$, $\mathbf{c}_{2}=(1,1,\mathbf{c}'_0)$,
$\mathbf{c}_{3}=(1,0,\mathbf{c}'_0)$,
$\mathbf{c}_{4}=(1,0,\mathbf{c}'_1)$,
$\mathbf{c}_{5}=(0,0,\mathbf{c}'_1)$, $\mathbf{c}_{6}=(0,1,\mathbf{c}'_1)$,
$\mathbf{c}_{7}=(1,1,\mathbf{c}'_1)$,$\cdots$. This resembles the procedure for constructing balanced Gray codes \cite{bhat1996balanced}. We continue concatenating the first two bits of each codeword in $\mathcal{C}$ to the codewords in $\mathcal{C}'$ using the procedure described above until $\kappa_1$ transitions for position 1 and $\kappa_2$ transitions for position 2 have taken place. Note that this is always possible since i) for each two codewords in $\mathcal{C}'$, we `spend' two transitions of each of the first and second positions in $\mathcal{C}$ to produce four new codewords, ii) $\kappa_2-\kappa_1\leq 2$, and iii) $\kappa_2\leq \sum_{j=3}^{n_q}\kappa_j$, where the latter condition ensures that we do not run out of codewords in $\mathcal{C}'$ before the necessary transitions in positions 1 and 2 are completed. After $\kappa_2+1$ codewords, the transitions in positions 1 and 2 are completed, and the last produced codeword is $(0,0,\mathbf{c}'_{\kappa_2+1})$ since $\kappa_1$ and $\kappa_2$ are both even. To complete the code $\mathcal{C}$, we add $(0,0,\mathbf{c}'_{i}), i\in [\kappa_2+2,\sum_{j=3}^{n_q}\kappa_j]$. Then, by construction, we have $|\mathcal{C}|=|\mathcal{C}'|+\kappa_1+\kappa_2=\sum_{j=1}^{n_q}\kappa_j$ and the code satisfies Properties 1), 2), 3), and 5) in Proposition \ref{Prop:2}.
\\\textbf{Case 2.ii:} $\sum_{j=3}^{n_q}\kappa_j\in [2^{n_q-2}, 2^{n_q-1}]$
\\ Similar to the previous case, let $\mathcal{C}'$ be a balanced Gray code with codeword length $n_q-2$ and transition counts $\kappa'_1\leq \kappa'_2\leq \cdots\leq \kappa'_{n_q-2}$. Define $\kappa''_{j}=\kappa_j- \kappa'_{j-2}, j\in \{3,4,\cdots,n_q\}$. Note that the $\kappa''_j$ satisfy the conditions on transition counts in the proposition statement, and hence by the induction assumption, there exists a code $\mathcal{C}''$ with transition counts $\kappa''_j, j\in [n_q-2]$. The proof is completed by appropriately concatenating $\mathcal{C}'$ and $\mathcal{C}''$ to construct $\mathcal{C}$. Let $\gamma''$ be the number of codewords in $\mathcal{C}''$ and define $\mathbf{c}_i=(0,0,\mathbf{c}''_i), i\in [\gamma'']$, $\mathbf{c}_{\gamma''+1}=(0,1,\mathbf{c}''_{\gamma''})$, $\mathbf{c}_{\gamma''+2}=(1,1,\mathbf{c}''_{\gamma''})$, $\mathbf{c}_{\gamma''+3}=(1,0,\mathbf{c}''_{\gamma''})$, $\mathbf{c}_{\gamma''+4}=(1,0,\mathbf{c}'_{1})$,$\cdots$. Similar to the previous case, it is straightforward to show that this procedure yields a code $\mathcal{C}$ with the desired transition sequence.
The proof for the two subcases where $\sum_{j=3}^{n_q}\kappa_j\in [2^{n_q-1},3 \times 2^{n_q-2}]$ and $\sum_{j=3}^{n_q}\kappa_j\in [3\times 2^{n_q-2}, 2^{n_q}]$ is similar and is omitted for brevity.
\end{appendices}
\bibliographystyle{unsrt}
\section*{Author Statement}
\textbf{Conflict of interest}: The authors state no conflict of interest.
\smallskip
\\ \textbf{Informed consent}: This study contains patient data from a publicly available dataset.
\smallskip
\\ \textbf{Ethical approval}: This article does not contain any studies with human participants or animals performed by any of the authors.
\begin{comment}
\section*{Compliance with Ethical Standards}\label{CES}
\textbf{Disclosure of potential conflicts of Interest}
\\ Funding: This study was funded by German Federal Ministry of Education and Research (BMBF) under the project COMPASS (grant no. - 16\,SV\,8019).
\\ Conflict of Interest: The authors declare that they have no conflict of interest.
\textbf{\\* Research involving Human Participants and/or Animals \\*}
This article does not contain any studies with human participants or animals performed by any of the authors.
\textbf{\\* Informed consent \\*}
This article contains patient data from a publicly available dataset.
\end{comment}
\subsection{Data}
\label{dataset}
\paragraph{\textbf{Simulation}~\cite{pfeiffer2019generating}}
data contain $20K$ rendered images acquired via 3-D laparoscopic simulations from the CT scans of 10 patients.
The images describe a rendered view of a laparoscopic scene, with each tissue having a distinct texture and with two conventional surgical instruments (grasper and hook) present under a random placement of the camera (coupled with a light source).
\paragraph{\textbf{Cholec}~\cite{sahu2020endo}}
data contain around $7K$ endoscopic video frames acquired from 15 videos of the Cholec80 dataset.\cite{twinanda2016endonet}
The images describe the laparoscopic cholecystectomy scene with seven conventional surgical instruments (grasper, hook, scissors, clipper, bipolar, irrigator and specimen bag).
The data provide segmentations for each instrument type, however, the specimen bag is considered as a counterexample that is treated as background during evaluation, following the definition of an instrument in RobustMIS challenge.\cite{ross2020robust}
\begingroup
\begin{table}[htbp]
\subimport{./Tables/}{Datasets.tex}%
\end{table}
\endgroup
\paragraph{\textbf{EndoVis}~\cite{endovis2015}}
data consist of 300 images from six different in-vivo 2D recordings of complete laparoscopic colorectal surgeries.
The data provide binary segmentations of instruments for validation where images describe an endoscopic scene containing seven conventional instruments (including hook, traumatic grasper, ligasure, stapler, scissors and scalpel).\cite{bodenstedt2018comparative}
\paragraph{\textbf{RobustMIS}~\cite{ross2020robust}}
data consist of around $10K$ images acquired from 30 surgical procedures of three different types of colorectal surgery (10 rectal resection procedures, 10 proctocolectomy procedures and 10 sigmoid resection procedures).
An instrument is defined as an elongated rigid object that is manipulated directly from outside the patient. Therefore,
grasper, scalpel, clip applicator, hook, stapling device, suction device and even trocar are considered as instruments, while non-rigid tubes, bandages, compresses, needles, coagulation sponges, metal clips etc. are considered as counterexamples, as they are manipulated indirectly from outside.\cite{ross2020robust}
The data provide instance level segmentations for validation, which are performed in three different stages with an increasing domain gap between the training- and the test-data.
Stage 1 contains video frames from 16 cases of the training data, stage 2 has video frames of two proctocolectomy and rectal surgeries each, and stage 3 has video frames from 10 sigmoid resection surgeries.
It is important to note that the domain gap increases not only across the three testing stages of the RobustMIS dataset, but also from \textit{Simulation} towards \textit{Real} datasets (EndoVis $<$ Cholec $<$ RobustMIS), as the definition of an instrument (and/or counterexample) changes along with other factors.
\subsection{Implementation}
\label{sec:implementation}
We have redesigned the implementation of the \emph{Endo-Sim2Real} framework in view of a teacher-student approach. To ensure a direct and fair comparison, we employ the same \textit{TerNaus11}~\cite{shvets2018automatic} as a backbone segmentation model.
Also, we utilize the best performing perturbation scheme (i.e. applying one of the \textit{pixel-intensity} perturbations\footnote{\textit{pixel-intensity}: random brightness and contrast shift, posterisation, solarisation, random gamma shift, random HSV color space shift, histogram equalization and contrast limited adaptive histogram equalization}
followed by one of the \textit{pixel-corruption} perturbations\footnote{\textit{pixel-corruption}: gaussian noise, motion blurring, image compression, dropout, random fog simulation and image embossing}) and the loss functions (i.e. \emph{cross-entropy} and \emph{jaccard}) of \emph{Endo-Sim2Real} for evaluation.
All simulated input images and labels are first pre-processed with a stochastically-varying circular outer mask to give them the appearance of real endoscopic images.
We use a batch size of 8 for 50 epochs and apply weight decay ($1e-6$) as standard regularization.
During consistency training, we use a time-dependent weighting function, where the weight of the unlabeled loss term is linearly increased over the training.
The teacher model is updated with $\alpha$ (0.95) at each training step.
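The two training schedules just described can be sketched as follows (a minimal illustration with our own variable names, not the released implementation):

```python
def consistency_weight(step, total_steps, max_weight=1.0):
    """Linearly ramp the weight of the unlabeled (consistency) loss term."""
    return max_weight * min(step / total_steps, 1.0)

def ema_update(teacher_params, student_params, alpha=0.95):
    """Teacher parameters as an exponential moving average of the student's."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]
```

With $\alpha=0.95$ the teacher changes slowly, smoothing the student's trajectory over training steps, while the ramp-up keeps the unlabeled loss from dominating early training.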
During evaluation of a dataset, we use an image-based dice score and average over all images to obtain a global dice metric for the dataset.
For the computation of the dice score, we exclude the cases where both the prediction and the ground truth are empty. However, for empty ground-truth images with false positives, we include the case and set the score to zero. Thus, the dice score for an empty ground-truth image (without any instrument) is either zero and included in the average (in case of any false positives) or undefined and excluded (in case of a correct, empty prediction).
Also, throughout our experiments we report all results as the average performance of three runs.
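The evaluation rule above can be stated compactly in code. The sketch below is our own illustration (assuming binary masks as NumPy arrays), not the evaluation script itself:

```python
import numpy as np

def dice_score(pred, gt):
    """Per-image dice; returns None for frames excluded from the average."""
    pred_any, gt_any = pred.any(), gt.any()
    if not gt_any and not pred_any:
        return None            # both empty: excluded from the dataset average
    if not gt_any and pred_any:
        return 0.0             # false positives on an instrument-free frame
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def global_dice(pairs):
    """Average the defined per-image scores over (prediction, ground-truth) pairs."""
    scores = [s for s in (dice_score(p, g) for p, g in pairs) if s is not None]
    return float(np.mean(scores)) if scores else float("nan")
```

Note that under this rule any false positive on an instrument-free frame pulls the global dice down, which is relevant to the empty-frame analysis later in the paper.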
\section{Results}
This section provides a quantitative comparison with respect to the state-of-the-art approaches to demonstrate the effectiveness of our approach.
Moreover, we perform a quantitative and qualitative analysis on three different datasets with varying degrees of the domain gap. This shows the strengths and weaknesses of our approach in order to better understand the challenges and provide valuable insights into addressing the remaining performance gap.
\begin{comment}
\begingroup
\begin{table}[t]
\caption{Quantitative comparison using Dice score (std).}
\label{table:Result_Compare}
\subimport{./Tables/}{Comparisons.tex}
\end{table}
\endgroup
\end{comment}
\begingroup
\begin{table}[b]
\subimport{./Tables/}{Quantitative_Comparison.tex}%
\end{table}
\endgroup
\begin{comment}
\begin{figure}
\centering
\begin{subfigure}[]{0.7\textwidth}
\includegraphics[width=\linewidth]{Figures/Violinplot_Cholec.eps}
\caption{Violin plot for EndoVis}
\label{violin_plot_endovis}
\end{subfigure}
\begin{subfigure}[]{0.7\textwidth}
\includegraphics[width=\linewidth]{Figures/Violinplot_Cholec.eps}
\caption{Violin plot for Cholec. The paired t-test for the combinations of I2I, Endo-Sim2Real and our work results in \textit{p-value} $<<$ $0.01$.}
\label{violin_plot_cholec}
\end{subfigure}
\begin{subfigure}[]{0.7\textwidth}
\includegraphics[width=\linewidth]{Figures/Violinplot_Cholec.eps}
\caption{Violin plot for RobustMIS}
\label{violin_plot_robustmis}
\end{subfigure}
\caption{Violin plot for the data.}
\label{plots_comparison}
\end{figure}
\end{comment}
\subsection{Comparison with \emph{baseline} and \emph{state-of-the-art}}
\begin{comment}
In this experiment, we compare our teacher-student approach with \emph{Endo-Sim2Real} (student-as-teacher) across three different datasets.
The empirical results (see Table~\ref{table:Result_Compare}) show that the teacher-student approach outperforms state-of-the-art \emph{Endo-Sim2Real}.
For a majority of images (see high peak in Figure~\ref{fig:qualitative_analysis_endovis}, ~\ref{fig:qualitative_analysis_cholec} and ~\ref{fig:qualitative_analysis_rs}), the segmentation predictions are usually correct with small variations across the instrument boundary.
We also compare the performance with respect to the two baselines: \textit{lower baseline} (supervised learning purely on simulated data) and \textit{upper baseline} (supervised learning purely on annotated real data) in Table~\ref{table:Result_Compare}.
The substantial performance gap between the baselines indicate the domain gap between simulated and real data.
The empirical results also demonstrate the enhancement in performance generalization by employing unsupervised consistency learning on unlabeled data.
Finally, the performance gap with the upper baseline calls for identifying the issues needed to bridge the remaining domain gap.
\end{comment}
In these experiments, we first highlight the performance of the two baselines: the \textit{lower baseline} (supervised learning purely on simulated data) and the \textit{upper baseline} (supervised learning purely on annotated real data) in Table~\ref{table:Result_Compare}.
The substantial performance gap between the baselines indicates the domain gap between simulated and real data.
Secondly, we compare our proposed teacher-student approach with other unsupervised domain adaptation approaches, i.e. the domain style transfer approach (\emph{I2I}) and the plain consistency-based joint learning approach (\emph{Endo-Sim2Real}), on the \textit{Cholec} dataset. The empirical results show that \emph{Endo-Sim2Real} performs similarly to \emph{I2I}, while our proposed approach outperforms both of these approaches.
Later, we evaluate our approach on two additional datasets and show that it consistently outperforms \emph{Endo-Sim2Real}.
These experiments demonstrate that the generalization performance of the DNN can be enhanced by employing unsupervised consistency learning on unlabeled data.
Finally, the performance gap with the upper baseline calls for identification of the issues needed to bridge the remaining domain gap.
\begingroup
\begin{figure}[b]
\centering
\includegraphics[width=1.0\textwidth]{Figures/endovis_vis.pdf}
\caption{Qualitative analysis on EndoVis dataset. The green color in the images represents the network predictions while the yellow color represents under-segmentation.}
\label{fig:qualitative_analysis_endovis}
\end{figure}
\endgroup
\subsection{Analysis on EndoVis}
Among the three datasets, our proposed approach performs best for EndoVis as shown in Table~\ref{table:Result_Compare}.
A visual analysis of the low performing cases in Figure~\ref{fig:qualitative_analysis_endovis} highlights factors such as false detection on specular reflection, under-segmentation for small instruments, tool-tissue interaction and partially occluded instruments.
These factors can in part be addressed by utilizing the temporal information of video frames.~\cite{gonzalez2020isinet}
\begingroup
\begin{landscape}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\columnwidth]{Figures/upset_plot.png}
\caption{Visualization of the relation between tool co-occurrence and segmentation quality for the Cholec dataset. Please note that the dice score is zero for no-tool cases and for the specimen bag, as the latter is treated as background.}
\label{fig:upset}
\end{figure}
\end{landscape}
\endgroup
\subsection{Analysis on Cholec}
We performed an extensive performance analysis of our proposed approach on the Cholec dataset as instrument-specific labels are available for it (in comparison to EndoVis).
To understand the distinctive performance aspects for the Cholec dataset, we compare the segmentation performance across different instrument co-occurrence in Figure~\ref{fig:upset}.
A similar range of dice scores highlights that the performance of our approach is less impacted by the presence of multiple tool combinations in an endoscopic image.
However, it also clearly shows that the segmentation performance of our approach drops when the specimen bag and its related co-occurrences are present (as seen in the respective box plots in Figure~\ref{fig:upset}). A visual analysis highlights false detection on the reflective surface of the specimen bag.
\begingroup
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\textwidth]{Figures/cholec_vis.pdf}
\caption{Qualitative analysis on the Cholec dataset. The green color in the images represents the network predictions while the yellow color represents under-segmentation.}
\label{fig:qualitative_analysis_cholec}
\end{figure}
\endgroup
Apart from the previously analyzed performance degrading factors in the EndoVis dataset, other major factors affecting the performance are as follows:
\begin{itemize}
\item[$\ast$] \textbf{Out of distribution cases} such as a non-conventional tool-shape-like instrument: specimen bag (see box-plots for labelsets with specimen bag in Figure~\ref{fig:upset}).
\item[$\ast$] \textbf{False detection for scenarios} such as an endoscopic view within the trocar, instrument(s) near the image border or under-segmentation for small instruments.
\item[$\ast$] \textbf{Artefact cases} such as specular reflection. The impact of other artefacts such as blood, smoke or motion blur is lower.
\end{itemize}
\noindent Although our proposed approach struggles to tackle these artefacts and out of distribution cases, addressing these performance degrading factors is itself an open research problem.~\cite{ali2020artefacts}
\begingroup
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\textwidth]{Figures/robustmis_vis.pdf}
\caption{Qualitative analysis on the RobustMIS dataset. The green color in the images represents the network predictions while the yellow color represents under-segmentation.}
\label{fig:qualitative_analysis_rs}
\end{figure}
\endgroup
\begingroup
\begin{table}[htbp]
\subimport{./Tables/}{Quants_RobustMIS.tex}%
\end{table}
\endgroup
\subsection{Analysis on RobustMIS}
We analyzed the performance of our approach on images with a different number of instruments in the RobustMIS dataset. We found that the performance is not significantly affected by the presence of multiple tools (see Table ~\ref{table:dice_per_tool}).
The low performance for a single visible instrument is attributed to small, stand-alone instruments near the image boundary.
Apart from the factors observed on the Cholec dataset, other real-world performance degrading factors in RobustMIS include: the presence of further out-of-distribution cases such as non-rigid tubes, bandages, needles etc.; and the presence of corner cases such as trocar views and specular reflections producing an instrument-shape-like appearance.
These failure cases highlight a drawback of our approach, which works under the assumption that the shape of an instrument remains consistent between the domains. Therefore, our approach may not produce faithful predictions when instruments with shapes different from those in simulation are encountered in the real domain, or when counterexamples with an instrument-like appearance occur.
\subsection{Impact of empty ground-truth frames}
The performance of our teacher-student approach is negatively affected by the video frames that do not contain instruments.
This is because the dice score is set to zero when the network predicts false positives (as seen in Figures~\ref{fig:qualitative_analysis_cholec} and \ref{fig:qualitative_analysis_rs}) in instrument-free video frames.
A direct relation of this effect can be seen in Table~\ref{table:Result_Compare}, where the dice score across the datasets decreases as the proportion of empty frames (in \%) increases from EndoVis to RobustMIS.
This suggests that utilizing false-detection handling techniques in the current framework can help in enhancing the generalization capabilities.
\section{Introduction} \label{intro}
\input{Sections/Introduction}
\section{Related Work} \label{sec:rel_work}
\input{Sections/RelatedWork}
\section{Method} \label{sec:method}
\input{Sections/Method}
\section{Experimental Setup} \label{sec:experiments}
\input{Sections/Experiments}
\section{Results and Discussion} \label{sec:results}
\input{Sections/Results}
\section{Conclusion} \label{sec:disc}
\input{Sections/Discussion}
\input{Sections/COI}
\bibliographystyle{spbasic}
\section{Supplemental Material}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
\setcounter{page}{1}
\setcounter{section}{0}
\renewcommand{\thesection}{S-\Roman{section}}
\renewcommand{\theequation}{S\arabic{equation}}
\renewcommand{\thefigure}{S\arabic{figure}}
\renewcommand{\thetable}{S\arabic{table}}
\section{Hyper-parameters}
text...
\section{Implementation Details}
text...
\section{Data Perturbation Schemes}
We apply data augmentation in three forms: base, pixel-intensity and pixel-corruption.
These are applied as weak and strong perturbation schemes: the source data are perturbed with all three forms, while the target data are perturbed without the base form.
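As a minimal illustration of the weak/strong split, the sketch below implements one representative operation per family (the parameter ranges here are our own assumptions; the full lists of operations appear in the footnotes of Section \ref{sec:implementation} and in Table \ref{table:perturbation_schemes}):

```python
import numpy as np

rng = np.random.default_rng(0)

def pixel_intensity(img):
    """Weak perturbation example: random brightness and contrast shift."""
    gain = rng.uniform(0.8, 1.2)    # contrast (assumed range)
    bias = rng.uniform(-0.1, 0.1)   # brightness (assumed range)
    return np.clip(gain * img + bias, 0.0, 1.0)

def pixel_corruption(img):
    """Extra corruption used in the strong scheme: additive Gaussian noise."""
    return np.clip(img + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0)

def perturb(img, strong=False):
    """Weak: pixel-intensity only. Strong: pixel-intensity then corruption."""
    out = pixel_intensity(img)
    return pixel_corruption(out) if strong else out
```

Images are assumed to be floating-point arrays in $[0,1]$; clipping keeps the perturbed images in the valid range.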
\begingroup
\begin{table}[htbp]
\centering
\captionsetup{justification=centering}
\caption{List of perturbation schemes}
\label{table:perturbation_schemes}
\begin{tabular}{| c | c | c | c |}
\hline
Perturbation & Description & Parameter & Range \\
\hline
\hline
Rotate & Rotates the image (in degrees) & $\theta$ & [-30, 30] \\
\hline
\end{tabular}
\end{table}
\endgroup
\section{Introduction}
Given a set $A$, we define its \textit{sum set}, \textit{product set} and \textit{ratio set} as
$$A+A:=\{a+b:a,b \in A\},\,\,\,\,\,\,\, AA:=\{ab:a,b \in A\},\,\,\,\,\,\,\, A/A:=\{a/b:a,b \in A, b \neq 0\},$$
respectively. It was conjectured by Erd\H{o}s and Szemer\'{e}di that, for any finite set $A$ of integers, at least one of the sum set or the product set has near-quadratic growth. Solymosi \cite{solymosi} used a beautiful and elementary geometric argument to prove that, for any finite set $A \subset \mathbb R$,
\begin{equation}
\max \{|A+A|,|AA|\} \gg \frac{|A|^{4/3}}{\log^{1/3}|A|}.
\label{soly}
\end{equation}
Recently, a breakthrough for this problem was achieved by Konyagin and Shkredov \cite{KS}. They adapted and refined the approach of Solymosi, whilst also utilising several other tools from additive combinatorics and discrete geometry in order to prove that
\begin{equation}
\max \{|A+A|,|AA|\} \gg |A|^{\frac{4}{3}+\frac{1}{20598}-o(1)}.
\label{KS}
\end{equation}
Further refinements in \cite{KS2} and more recently \cite{RSS} have improved this exponent to $4/3 +1/1509 +o(1)$. See \cite{KS}, \cite{KS2}, \cite{RSS} and the references contained therein for more background on the sum-product problem.
In this paper the related problem of establishing lower bounds for the sets
$$AA+A:=\{ab+c:a,b,c \in A\},\,\,\,\,\,\,\,\,\, A/A+A:=\{a/b+c:a,b,c \in A, b \neq 0\}$$
is considered. It is believed, in the spirit of the Erd\H{o}s-Szemer\'{e}di conjecture, that these sets are always large. It was conjectured by Balog \cite{balog} that, for any finite set $A$ of real numbers, $|AA+A| \geq |A|^2$. In the same paper, he proved the following result in that direction:
\begin{theorem} \label{thm:old}
Let $A$ and $B$ be finite sets of positive real numbers. Then
$$|AB+A| \gg |A||B|^{1/2}.$$
In particular,
$$|AA+A| \gg |A|^{3/2}, \,\,\,\,\,\,\, |A/A+A| \gg |A|^{3/2}.$$
\end{theorem}
The proof of Theorem \ref{thm:old} uses similar elementary geometric arguments to those of \cite{solymosi}. In fact, one can obtain the same bound by a straightforward application of the Szemer\'{e}di-Trotter Theorem (see \cite[Exercise 8.3.3]{tv}).
Some progress in this area was made by Shkredov \cite{shkredov}, who built on the approach of Balog in order to prove the following result:
\begin{theorem} \label{thm:ilya}
For any finite set $A$ of positive real numbers,
\begin{equation}
|A/A+A| \gg \frac{|A|^{\frac{3}{2}+\frac{1}{82}}}{\log^{\frac{2}{41}}|A|}.
\label{a+a:a}
\end{equation}
\end{theorem}
The main result of this paper is the following improvement on Theorem \ref{thm:ilya}:
\begin{theorem} \label{thm:main} Let $A$ be a finite set of positive reals. Then
$$|A/A+A| \gg \frac{|A|^{\frac{3}{2}+\frac{1}{26}}}{\log^{5/6}|A|}.$$
\end{theorem}
For the set $AA+A$ the situation is different, and it has proven rather difficult to beat the threshold exponent of $3/2$. A detailed study of this set can be found be in \cite{RRSS}. However, the corresponding problem for sets of integers is resolved, up to constant factors, thanks to a nice argument of George Shakan.\footnote{See http://mathoverflow.net/questions/168844/sum-and-product-estimate-over-integers-rationals-and-reals, where this argument first appeared.}
\subsection{Sketch of the proof of Theorem \ref{thm:main}} The proof is a refined version of the argument used by Balog to prove Theorem \ref{thm:old}. Balog's argument goes roughly as follows:
Consider the point set $A \times A$ in the plane. Cover this point set by lines through the origin. Let us assume for simplicity that all of these lines are equally rich, so we have $k$ lines with $|A|^2/k$ points on each line. Label the lines $l_1,l_2,\dots,l_k$ in increasing order of steepness. Note that if we take the vector sum of a point on $l_i$ with a point on $l_{i+1}$, we obtain a point which has slope in between those of $l_i$ and $l_{i+1}$. The aim is to show that many elements of $(A/A+A) \times (A/A+A)$ can be obtained by studying vector sums from neighbouring lines.
Indeed, for any $1 \leq i \leq k-1$, consider the sum set
$$\{(b/a,c/a)+(d,e): a\in A, (b,c) \in (A \times A) \cap l_{i}, (d,e) \in (A \times A) \cap l_{i+1} \}.$$
There are at least $|A|$ choices for $(b/a,c/a)$ and at least $|A|^2/k$ choices for $(d,e)$. Since all of these sums are distinct, we obtain at least $|A|^3/k$ elements of $(A/A+A) \times (A/A+A)$ lying in between $l_i$ and $l_{i+1}$. Summing over all $1\leq i \leq k-1$, it follows that
$$|A/A+A|^2 \gg |A|^3.$$
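In full, since the sums coming from distinct pairs of neighbouring lines lie in pairwise disjoint sectors, the count behind this bound reads

```latex
\begin{equation*}
\left|(A/A+A) \times (A/A+A)\right| \;\geq\; \sum_{i=1}^{k-1} |A| \cdot \frac{|A|^2}{k}
\;=\; (k-1)\,\frac{|A|^3}{k} \;\gg\; |A|^3 ,
\end{equation*}
```

and the left-hand side is at most $|A/A+A|^2$.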
There are two rather crude steps in this argument. The first is the observation that there are at least $|A|$ choices for the point $(b/a,c/a)$. In fact, the number of points of this form is equal to the cardinality of product set of $A$ and a set of size $|A|^2/k$. This could be as small as $|A|$, but one would typically expect it to be considerably larger. This extra information was used by Shkredov \cite{shkredov} in his proof of \eqref{a+a:a}.
The second wasteful step comes at the end of the argument, when we only consider sums coming from pairs of lines which are neighbours. This means that we consider only $k-1$ pairs of lines out of a total of ${k \choose 2}$. A crucial ingredient in the proof of \eqref{KS} was the ability to find a way to count sums coming from more than just neighbouring lines.
The proof of Theorem \ref{thm:main} deals with these two steps more efficiently. Ideas from \cite{shkredov} are used to improve upon the first step, and then ideas from \cite{KS} improve upon the second step. We also make use of the fact that the set $A/A$ is invariant under the function $f(x)=1/x$, which allows us to use results on convexity and sumset of Elekes, Nathanson and Ruzsa \cite{ENR} in order to get a better exponent in Theorem \ref{thm:main}.
\section{Notation and Preliminary results}
Throughout the paper, the standard notation
$\ll,\gg$ is applied to positive quantities in the usual way. Saying $X\gg Y$ means that $X\geq cY$, for some absolute constant $c>0$.
The main tool is the Szemer\'{e}di-Trotter Theorem.
\begin{theorem} \label{thm:SzT}
Let $P$ be a finite set of points in $\mathbb R^2$ and let $L$ be a finite set of lines. Then
$$|\{(p,l)\in P \times L : p \in l\}| \ll (|P||L|)^{2/3}+|P|+|L|.$$
\end{theorem}
Define
\begin{equation}
d(A)=\min_{C \neq \emptyset} \frac{|AC|^2}{|A||C|}.
\label{eq:d(A)def}
\end{equation}
We will need the following consequence of the Szemer\'{e}di-Trotter Theorem, which is \cite[Corollary 8]{KS}.
\begin{lemma} \label{thm:ST}
Let $A_1,A_2$ and $A_3$ be finite sets of real numbers and let $\alpha_1,\alpha_2$ and $\alpha_3$ be arbitrary non-zero real numbers. Then the number of solutions to the equation
$$\alpha_1a_1+\alpha_2a_2+\alpha_3a_3=0,$$
such that $a_1 \in A_1$, $a_2 \in A_2$ and $a_3 \in A_3$, is at most
$$C\cdot d^{1/3}(A_1)|A_1|^{1/3}|A_2|^{2/3}|A_3|^{2/3},$$ for some absolute constant $C$.
\end{lemma}
Another application of (a variant of) the Szemer\'{e}di-Trotter Theorem is the following result of Elekes, Nathanson and Ruzsa \cite{ENR}:
\begin{theorem} \label{ENR} Let $f: \mathbb R \rightarrow \mathbb R$ be a strictly convex or concave function and let $X,Y,Z \subset \mathbb R$ be finite. Then
$$|f(X)+Y||X+Z| \gg |X|^{3/2}|Y|^{1/2}|Z|^{1/2}.$$
\end{theorem}
In particular, this theorem can be applied with $f(x)=1/x$, $X=A/A$, $Y=Z=A$, using the fact that $f(A/A)=A/A$, to obtain the following corollary:
\begin{cor} \label{ENRcor} For any finite set $A \subset \mathbb R$,
$$|A/A+A| \gg |A/A|^{3/4}|A|^{1/2}.$$
\end{cor}
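For completeness, the substitution behind this corollary can be spelled out (it is just Theorem \ref{ENR} with the stated choices):

```latex
% Take f(x)=1/x, X=A/A and Y=Z=A in Theorem \ref{ENR}.
% Since f(A/A)=A/A, both factors on the left coincide:
\[
|A/A+A|^2 = |f(X)+Y|\,|X+Z| \gg |X|^{3/2}|Y|^{1/2}|Z|^{1/2}
          = |A/A|^{3/2}|A|,
\]
% and taking square roots gives |A/A+A| \gg |A/A|^{3/4}|A|^{1/2}.
```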
\section{Proof of main theorem}
Recall that the aim is to prove the inequality
$$|A/A+A| \gg \frac{|A|^{\frac{3}{2}+\frac{1}{26}}}{\log^{1/2}|A|}.$$
Consider the point set $A \times A$ in the plane. At the outset, we perform a dyadic decomposition, and then apply the pigeonhole principle, in order to find a large subset of $A\times A$ consisting of points lying on lines through the origin which contain between $\tau$ and $2\tau$ points, where $\tau$ is some real number.
Following the notation of \cite{KS}, for a real number $\lambda$, define
$$ \mathcal A_{\lambda}:= \left\{(x,y) \in A \times A : \frac{y}{x}=\lambda \right\},$$
and its projection onto the horizontal axis,
$$A_{\lambda}:=\{x:(x,y) \in \mathcal A_{\lambda}\}.$$
Note that $|A_{\lambda}|=|A \cap \lambda A|$ and
\begin{equation}
\sum_{\lambda} |A_{\lambda}|=|A|^2.
\label{obvious}
\end{equation}
Let $S_{\tau}$ be defined by
$$S_{\tau}:=\{\lambda: \tau \leq |A \cap \lambda A| < 2\tau \}.$$
After dyadically decomposing the sum \eqref{obvious}, we have
$$|A|^2=\sum_{\lambda} |A_{\lambda}| = \sum_{j=1}^{\lceil \log|A| \rceil} \sum_{\lambda \in S_{2^{j-1}}}|A_{\lambda}| .$$
Applying the pigeonhole principle, we deduce that there is some $\tau$ such that
\begin{equation}
\sum_{\lambda \in S_{\tau}}|A_{\lambda}| \geq \frac{|A|^2}{\lceil \log|A| \rceil} \geq \frac{|A|^2}{2\log|A|}.
\label{sumbound}
\end{equation}
Since $\tau \leq |A|$, this implies that
\begin{equation}
|S_{\tau}| \geq \frac{|A|}{2\log |A|}.
\label{Sbound}
\end{equation}
Also, since $|A_{\lambda}| < 2\tau$ for any $\lambda \in S_{\tau}$, we have
\begin{equation}
\tau|S_{\tau}| \gg \frac{|A|^2}{ \log|A|}.
\label{taubound}
\end{equation}
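The dyadic pigeonholing above can be simulated directly. The sketch below builds the sizes $|A_{\lambda}|$ for a small, arbitrarily chosen $A$, groups the slopes into dyadic classes, and confirms that the heaviest class carries at least a $1/\lceil \log|A| \rceil$-type share of the total mass $|A|^2$ (logarithms taken base $2$):

```python
from collections import defaultdict
from fractions import Fraction
from math import ceil, log2

A = list(range(1, 33))  # a concrete positive set with |A| = 32

# |A_lambda| = number of pairs (x, y) in A x A with y/x = lambda.
sizes = defaultdict(int)
for x in A:
    for y in A:
        sizes[Fraction(y, x)] += 1

# Dyadic classes: lambda belongs to class j when 2^(j-1) <= |A_lambda| < 2^j.
mass = defaultdict(int)
for lam, s in sizes.items():
    mass[int(log2(s)) + 1] += s

num_classes = len(mass)
heaviest = max(mass.values())
print(num_classes, heaviest)
```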
\subsection{A lower bound for $\tau$}
Suppose that $|A/A| \geq |A|^{\frac{4}{3}+\frac{2}{39}}$. Then, by Corollary \ref{ENRcor},
$$|A/A+A| \gg |A/A|^{\frac{3}{4}}|A|^{\frac{1}{2}} \gg |A|^{\frac{3}{2}+\frac{1}{26}},$$
as required. Therefore, we may assume that $|A/A| \leq |A|^{\frac{4}{3}+\frac{2}{39}}$. In particular, by \eqref{taubound},
$$\tau|A|^{\frac{4}{3}+\frac{2}{39}} \geq \tau|A/A| \geq \tau|S_{\tau}| \gg \frac{|A|^2}{ \log|A|}.$$
Therefore
\begin{equation}
\tau \gg \frac{|A|^{\frac{2}{3}-\frac{2}{39}}}{ \log|A|}.
\label{taubound3}
\end{equation}
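The exponents in this dichotomy can be sanity-checked with exact rational arithmetic; this is only a verification of the bookkeeping, not part of the argument:

```python
from fractions import Fraction as F

e = F(4, 3) + F(2, 39)  # the threshold exponent for |A/A|

# If |A/A| >= |A|^e, the ENR corollary gives |A/A+A| >> |A|^(3e/4 + 1/2):
assert F(3, 4) * e + F(1, 2) == F(3, 2) + F(1, 26)

# Otherwise tau >> |A|^(2 - e) / log|A|, matching (taubound3):
assert 2 - e == F(2, 3) - F(2, 39)
print("exponent checks pass")
```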
\subsection{An upper bound for $d(A)$}
Define $P$ to be the subset of $A \times A$ lying on the union of the lines through the origin containing between $\tau$ and $2\tau$ points. That is, $P = \cup_{\lambda \in S_{\tau}} \mathcal A_{\lambda}$. We will study vector sums coming from this point set by two different methods, and then compare the bounds in order to prove the theorem. To begin with, we use the methods from the paper \cite{shkredov} to obtain an upper bound for $d(A)$. The deduction of the forthcoming bound \eqref{part1} is a minor variation of the first part of the proof of \cite[Theorem 13]{shkredov}.
After carrying out the aforementioned pigeonholing argument, we have a set of $|S_{\tau}|$ lines through the origin, each containing approximately $\tau$ points from $A \times A$. Label the lines $l_1,l_2,\dots,l_{|S_{\tau}|}$ in increasing order of steepness. The line $l_i$ has equation $y=q_ix$ and so $q_1<q_2<\dots<q_{|S_{\tau}|}$. For any $1 \leq i \leq |S_{\tau}|-1$, consider the sum set
\begin{equation}
\mathcal A_{q_i}+\mathcal A_{q_{i+1}} \cdot \Delta(A^{-1}) \subset (A+A/A) \times (A+A/A),
\label{vectorsum}
\end{equation}
where $\Delta(B)=\{(b,b):b \in B\}$. Note that $\mathcal A_{q_{i+1}} \cdot \Delta(A^{-1})$ has cardinality $|A_{q_{i+1}}A^{-1}|$, and therefore the set in \eqref{vectorsum} has at least $|A_{q_{i+1}}A^{-1}||A_{q_i}|$ elements, all of which lie in between $l_i$ and $l_{i+1}$. This is a consequence of the observation of Solymosi that the sum set of $m$ points on one line through the origin and $n$ points on another line through the origin consists of $mn$ points lying in between the two lines. It is important to note that this fact is dependent on the points lying inside the positive quadrant of the plane, which is why the assumption that $A$ consists of strictly positive reals is needed for this proof.
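Solymosi's observation invoked above is easy to check on a toy example: for points in the positive quadrant on two lines through the origin, all $mn$ vector sums are distinct and their slopes lie strictly between the two slopes. The sets below are arbitrary:

```python
from fractions import Fraction

def line_points(slope, xs):
    """Points (x, slope * x) on the line y = slope * x."""
    return [(Fraction(x), Fraction(slope) * x) for x in xs]

P = line_points(Fraction(1, 2), [1, 2, 3, 5])  # m = 4 points on y = x/2
Q = line_points(3, [1, 2, 4])                  # n = 3 points on y = 3x

sums = {(px + qx, py + qy) for px, py in P for qx, qy in Q}
slopes = {y / x for x, y in sums}
print(len(sums))  # m * n = 12 distinct vector sums
```

Distinctness holds because $(a,b) \mapsto (a+b, \lambda a + \lambda' b)$ is an invertible linear map whenever $\lambda \neq \lambda'$, and each summed slope is a convex combination of the two original slopes.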
Summing over all $1 \leq i < |S_{\tau}|$, applying the definition of $d(A)$ and using the bounds \eqref{taubound} and \eqref{sumbound}, we obtain
\begin{align*}
|A/A+A|^2 &\geq \sum_{i=1}^{|S_{\tau}|-1} |A_{q_i}||A_{q_{i+1}}A^{-1}|
\\&\geq |A|^{1/2}d^{1/2}(A)\sum_{i=1}^{|S_{\tau}|-1}|A_{q_i}||A_{q_{i+1}}|^{1/2}
\\& \gg \frac{|A|^{3/2}d^{1/2}(A)}{|S_{\tau}|^{1/2}\log^{1/2}|A|} \sum_{i=1}^{|S_{\tau}|-1}|A_{q_i}|
\\& \gg \frac{|A|^{7/2}d^{1/2}(A)}{|S_{\tau}|^{1/2}\log^{3/2}|A|}.
\end{align*}
This can be rearranged to obtain
\begin{equation}
d(A) \ll \frac{|A/A+A|^4|S_{\tau}|\log^3|A|}{|A|^7}.
\label{part1}
\end{equation}
This bound will be utilised later in the proof. We now analyse the vector sums in a different way, based on the approach of \cite{KS}.
\subsection{Clustering setup}
For each $\lambda \in S_{\tau}$, we identify an element from $\mathcal A_{\lambda}$, which we label $(a_{\lambda},\lambda a_{\lambda})$. These fixed points will have to be chosen with a little care later, but for the next part of the argument, we can think of the choice of $(a_{\lambda},\lambda a_{\lambda})$ as completely arbitrary, since the required bound holds whichever choice we make for these fixed points.
Then, fixing two distinct slopes $\lambda$ and $\lambda'$ from $S_{\tau}$ and following the observation of Balog \cite{balog}, we note that at least $\tau|A|$ distinct elements of $(A/A+A) \times (A/A+A)$ are obtained by summing points from the two lines. Indeed,
$$\mathcal A_{\lambda}+(a_{\lambda'},\lambda'a_{\lambda'}) \cdot \Delta(A^{-1}) \subset (A/A+A) \times (A/A+A).$$
Once again, these vector sums are all distinct and have slope in between $\lambda$ and $\lambda'$.
Following the strategy of Konyagin and Shkredov \cite{KS}, we split the family of $|S_{\tau}|$ slopes into clusters of $2M$ consecutive slopes, where $2\leq 2M \leq |S_{\tau}|$ and $M$ is a parameter to be specified later. For example, the first cluster is $U_1= \{l_1,\dots,l_{2M}\}$, the second is $U_2=\{l_{2M+1},\dots,l_{4M}\}$, and so on. We then split each cluster arbitrarily into two disjoint subclusters of size $M$. For example, we have $U_1=V_1 \sqcup W_1$ where $V_1=\{l_1,\dots,l_M\}$ and $W_1=\{l_{M+1},\dots,l_{2M}\}$.
The idea is to show that each cluster determines many different elements of $(A+A/A) \times (A+A/A)$. Since the slopes of these elements are in between the maximal and minimal values in that cluster, we can then sum over all clusters without overcounting.
If a cluster contains exactly $2M$ lines, then it is called a \textit{full cluster}. Note that there are $\left\lfloor \frac{|S_{\tau}|}{2M} \right\rfloor \geq \frac{|S_{\tau}|}{4M}$ full clusters, since we place exactly $2M$ lines in each cluster, with the possible exception of the last cluster which contains at most $2M$ lines.
The ensuing analysis will work in exactly the same way for any full cluster, and so for simplicity of notation we deal only with the first cluster $U_1$. We further simplify notation by writing $U_1=U$, $V_1=V$ and $W_1=W$.
Let $\mu$ denote the number of elements of $(A/A+A) \times (A/A+A)$ which lie in between $l_1$ and $l_{2M}$. Then\footnote{For the sake of simplicity of presentation, a small abuse of notation is made here. The lines in $V$ and $W$ are identified with their slopes. In this way, the notation $\lambda_i \in V$ is used as a shorthand for $\{ (x,y) : y=\lambda_i x\} \in V$.}
\begin{equation}\mu \geq \tau|A| M^2 - \sum_{\lambda_1, \lambda_3 \in V ,\lambda_2,\lambda_4 \in W: \{\lambda_1,\lambda_2\} \neq \{\lambda_3,\lambda_4\}} \mathcal E(\lambda_1,\lambda_2,\lambda_3,\lambda_4),
\label{mucount2}
\end{equation}
where
$$\mathcal E(\lambda_1,\lambda_2,\lambda_3,\lambda_4):=|\{z \in (\mathcal A_{\lambda_1}+(a_{\lambda_2},\lambda_2a_{\lambda_2})\cdot\Delta(A^{-1}))\cap (\mathcal A_{\lambda_3}+(a_{\lambda_4},\lambda_4a_{\lambda_4})\cdot\Delta(A^{-1})) \}|.$$
In \eqref{mucount2}, the first term is obtained by counting sums from all pairs of lines in $V \times W$. The second error term covers the overcounting of elements that are counted more than once in the first term.
The next task is to obtain an upper bound for $\mathcal E(\lambda_1,\lambda_2,\lambda_3,\lambda_4)$ for an arbitrary quadruple $(\lambda_1,\lambda_2,\lambda_3,\lambda_4)$ which satisfies the aforementioned conditions.
Suppose that
$$z=(z_1,z_2) \in (\mathcal A_{\lambda_1}+(a_{\lambda_2},\lambda_2a_{\lambda_2})\cdot\Delta(A^{-1}))\cap (\mathcal A_{\lambda_3}+(a_{\lambda_4},\lambda_4a_{\lambda_4})\cdot\Delta(A^{-1})).$$
Then
\begin{align*}
(z_1,z_2) &=(a_1,\lambda_1a_1)+(a_{\lambda_2}a^{-1},\lambda_2a_{\lambda_2}a^{-1})
\\&=(a_3,\lambda_3a_3)+(a_{\lambda_4}b^{-1},\lambda_4a_{\lambda_4}b^{-1}),
\end{align*}
for some $a_1 \in A_{\lambda_1}$, $a_3 \in A_{\lambda_3}$ and $a,b \in A$. Therefore,
\begin{align*}
z_1&=a_1+a_{\lambda_2}a^{-1}=a_3+a_{\lambda_4}b^{-1}
\\z_2&=\lambda_1a_1+\lambda_2a_{\lambda_2}a^{-1}=\lambda_3a_3+\lambda_4a_{\lambda_4}b^{-1}
\end{align*}
\subsection{Bounding $\mathcal E(\lambda_1,\lambda_2,\lambda_3,\lambda_4)$ in the case when $\lambda_4 \neq \lambda_2$}
Let us assume first that $\lambda_4 \neq \lambda_2$. Note that this assumption implies that $\lambda_4\neq \lambda_1,\lambda_2,\lambda_3$. We have
$$0=\lambda_1a_1+\lambda_2a_{\lambda_2}a^{-1}-\lambda_3a_3-\lambda_4a_{\lambda_4}b^{-1} - \lambda_4(a_1+a_{\lambda_2}a^{-1}-a_3-a_{\lambda_4}b^{-1}),$$
and thus
\begin{equation}
0=a_{\lambda_2}(\lambda_2-\lambda_4)a^{-1}+(\lambda_1-\lambda_4)a_1+(\lambda_4-\lambda_3)a_3.
\label{STsetup}
\end{equation}
Note that the values $\lambda_1-\lambda_4, a_{\lambda_2}(\lambda_2-\lambda_4)$ and $\lambda_4-\lambda_3$ are all non-zero. We have shown that each contribution to $\mathcal E (\lambda_1,\lambda_2,\lambda_3,\lambda_4)$ determines a solution to \eqref{STsetup} with $(a,a_1,a_3) \in A \times A_{\lambda_1} \times A_{\lambda_3}$. Furthermore, the solution to \eqref{STsetup} that we obtain via this deduction is unique, and so a bound for $\mathcal E(\lambda_1,\lambda_2,\lambda_3,\lambda_4)$ will follow from a bound to the number of solutions to \eqref{STsetup}.
It therefore follows from an application of Lemma \ref{thm:ST} that
$$\mathcal E(\lambda_1,\lambda_2,\lambda_3,\lambda_4) \leq C\cdot d^{1/3}(A^{-1})|A|^{1/3}\tau^{4/3}=C\cdot d^{1/3}(A)|A|^{1/3}\tau^{4/3},$$
where $C$ is an absolute constant. Therefore,
\begin{equation}
\mu \geq M^2 |A|\tau - M^4Cd^{1/3}(A)|A|^{1/3}\tau^{4/3} -\sum_{\lambda_1, \lambda_3 \in V ,\lambda_2 \in W: \lambda_1 \neq \lambda_3} \mathcal E(\lambda_1,\lambda_2,\lambda_3,\lambda_2).
\label{mufirst}
\end{equation}
We now impose a condition on the parameter $M$ (recall that we will choose an optimal value of $M$ at the conclusion of the proof) to ensure that the first error term is dominated by the main term. We need
$$CM^4d^{1/3}(A)|A|^{1/3}\tau^{4/3} \leq \frac{M^2|A|\tau}{2},$$
which simplifies to
\begin{equation} \label{Mcond}
M\leq \frac{|A|^{1/3}}{\sqrt{2C}d^{1/6}(A)\tau^{1/6}} .
\end{equation}
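The rearrangement leading to \eqref{Mcond} can be verified numerically: at the threshold value of $M$, the two sides of the preceding inequality agree. The constants below ($C$, $d$, $|A|$, $\tau$) are arbitrary illustrative values:

```python
from math import isclose, sqrt

C, d, A, tau = 3.0, 2.0, 1e4, 1e2  # arbitrary positive values

# Threshold value of M from (Mcond):
M = A ** (1 / 3) / (sqrt(2 * C) * d ** (1 / 6) * tau ** (1 / 6))

lhs = C * M ** 4 * d ** (1 / 3) * A ** (1 / 3) * tau ** (4 / 3)
rhs = M ** 2 * A * tau / 2
print(lhs, rhs)  # equal at the threshold
```

Any smaller $M$ shrinks the left side faster ($M^4$ against $M^2$), so the error term is then dominated by the main term.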
With this restriction on $M$, we now have
\begin{equation}
\mu \geq \frac{M^2 |A|\tau}{2} -\sum_{\lambda_1, \lambda_3 \in V ,\lambda_2 \in W: \lambda_1 \neq \lambda_3} \mathcal E(\lambda_1,\lambda_2,\lambda_3,\lambda_2).
\label{mu}
\end{equation}
It remains to bound this second error term.
\subsection{Bounding $\mathcal E(\lambda_1,\lambda_2,\lambda_3,\lambda_4)$ in the case $\lambda_4 = \lambda_2$}
It is in this case that we need to take care to make good choices for the fixed points $(a_{\lambda},\lambda a_{\lambda})$ on each line $l_{\lambda}$.
Fix $\lambda_2 \in W $. We want to prove that there is a choice for $(a_{\lambda_2},\lambda a_{\lambda_2}) \in \mathcal A_{\lambda_2}$ such that
$$\sum_{\lambda_1, \lambda_3 \in V : \lambda_1 \neq \lambda_3} \mathcal E(\lambda_1,\lambda_2,\lambda_3,\lambda_2) \ll M^2\tau^{1/3}|A|^{4/3}.$$
We will do this using the Szemer\'{e}di-Trotter Theorem. Consider the sum
$$\sum_{a_{\lambda_2} \in A_{\lambda_2}} \sum_{\lambda_1, \lambda_3 \in V : \lambda_1 \neq \lambda_3} |\{z \in (\mathcal A_{\lambda_1}+(a_{\lambda_2},\lambda_2a_{\lambda_2})\cdot\Delta(A^{-1}))\cap (\mathcal A_{\lambda_3}+(a_{\lambda_2},\lambda_2a_{\lambda_2})\cdot\Delta(A^{-1})) \}|.$$
Suppose that
$$z=(z_1,z_2) \in (\mathcal A_{\lambda_1}+(a_{\lambda_2},\lambda_2a_{\lambda_2})\cdot\Delta(A^{-1}))\cap (\mathcal A_{\lambda_3}+(a_{\lambda_2},\lambda_2a_{\lambda_2})\cdot\Delta(A^{-1})).$$
Then
\begin{align*}
(z_1,z_2) &=(a_1,\lambda_1a_1)+(a_{\lambda_2}a^{-1},\lambda_2a_{\lambda_2}a^{-1})
\\&=(a_3,\lambda_3a_3)+(a_{\lambda_2}b^{-1},\lambda_2a_{\lambda_2}b^{-1}),
\end{align*}
for some $a_1 \in A_{\lambda_1}$, $a_3 \in A_{\lambda_3}$ and $a,b \in A$. Therefore,
\begin{align*}
z_1&=a_1+a_{\lambda_2}a^{-1}=a_3+a_{\lambda_2}b^{-1}
\\z_2&=\lambda_1a_1+\lambda_2a_{\lambda_2}a^{-1}=\lambda_3a_3+\lambda_2a_{\lambda_2}b^{-1}.
\end{align*}
We have
$$0=\lambda_1a_1+\lambda_2a_{\lambda_2}a^{-1}-\lambda_3a_3-\lambda_2a_{\lambda_2}b^{-1} - \lambda_1(a_1+a_{\lambda_2}a^{-1}-a_3-a_{\lambda_2}b^{-1}),$$
and thus
\begin{equation}
\frac{\lambda_3-\lambda_1}{\lambda_2-\lambda_1}a_3=a_{\lambda_2}(a^{-1}-b^{-1}).
\label{STsetup2}
\end{equation}
As in the previous subsection, this shows that the quantity
$$\sum_{a_{\lambda_2} \in A_{\lambda_2}} \sum_{\lambda_1 \neq \lambda_3 \in V }|(\mathcal A_{\lambda_1}+(a_{\lambda_2},\lambda_2a_{\lambda_2})\cdot\Delta(A^{-1}))\cap (\mathcal A_{\lambda_3}+(a_{\lambda_2},\lambda_2a_{\lambda_2})\cdot\Delta(A^{-1}))|$$
is no greater than the number of solutions to \eqref{STsetup2} such that $(\lambda_1,\lambda_3,a,b,a_{\lambda_2},a_3) \in V \times V \times A \times A \times A_{\lambda_2} \times A_{\lambda_3}$.
Fix $\lambda_1, \lambda_3 \in V$ such that $\lambda_1\neq \lambda_3$. Let $Q=A^{-1} \times A_{\lambda_3}$. Define $l_{m,c}$ to be the line with equation $\frac{\lambda_3-\lambda_1}{\lambda_2-\lambda_1}y=m(x-c)$ and define $L$ to be the set of lines
$$L=\{l_{a_{\lambda_2},b^{-1}}: a_{\lambda_2} \in A_{\lambda_2}, b \in A \}.$$
Each solution to \eqref{STsetup2} with this pair $(\lambda_1,\lambda_3)$ corresponds to an incidence between the point $(a^{-1},a_3) \in Q$ and the line $l_{a_{\lambda_2},b^{-1}} \in L$. Note that $|Q|\approx |L| \approx \tau|A|$, and so Theorem \ref{thm:SzT} gives, for the number of incidences,
$$I(Q,L) \ll (\tau|A|)^{4/3}.$$
Repeating this analysis via the Szemer\'{e}di-Trotter Theorem for each pair of distinct $\lambda_1,\lambda_3 \in V$, it follows that the number of solutions to \eqref{STsetup2} is $O(M^2(\tau|A|)^{4/3})$. In summary,
$$\sum_{a_{\lambda_2} \in A_{\lambda_2}} \sum_{\lambda_1 \neq \lambda_3 \in V }|(\mathcal A_{\lambda_1}+(a_{\lambda_2},\lambda_2a_{\lambda_2})\cdot\Delta(A^{-1}))\cap (\mathcal A_{\lambda_3}+(a_{\lambda_2},\lambda_2a_{\lambda_2})\cdot\Delta(A^{-1}))|\ll M^2(\tau|A|)^{4/3}.$$
Therefore, by the pigeonhole principle, there is some $a_{\lambda_2} \in A_{\lambda_2}$ such that
\begin{equation} \label{fix1}
\sum_{\lambda_1 \neq \lambda_3 \in V }|(\mathcal A_{\lambda_1}+(a_{\lambda_2},\lambda_2a_{\lambda_2})\cdot\Delta(A^{-1}))\cap (\mathcal A_{\lambda_3}+(a_{\lambda_2},\lambda_2a_{\lambda_2})\cdot\Delta(A^{-1}))|\ll M^2\tau^{1/3}|A|^{4/3}.
\end{equation}
We can then choose the fixed point $(a_{\lambda_2}, \lambda_2a_{\lambda_2})$ on $l_{\lambda_2}$ to be that corresponding to the value $a_{\lambda_2}$ satisfying inequality \eqref{fix1}. This in fact shows that
\begin{equation} \label{fix2a}
\sum_{\lambda_1 \neq \lambda_3 \in V }\mathcal E (\lambda_1,\lambda_2,\lambda_3,\lambda_2) \ll M^2\tau^{1/3}|A|^{4/3}.
\end{equation}
We repeat this process for each $\lambda_2 \in W$ to choose a fixed point for each line with slope in $W$. Summing over all $\lambda_2 \in W$, we now have
\begin{equation} \label{fix2}
\sum_{\lambda_1 \neq \lambda_3 \in V, \lambda_2 \in W }\mathcal E (\lambda_1,\lambda_2,\lambda_3,\lambda_2) \ll M^3\tau^{1/3}|A|^{4/3}.
\end{equation}
We now have a bound for the second error term in \eqref{mu}. Still, we need to impose a condition on $M$ so that this error term is also dominated by the main term. We need
$$M^3\tau^{1/3}|A|^{4/3} \leq \frac{M^2|A|\tau}{4},$$
which simplifies to
\begin{equation} \label{Mcond2}
M\leq \frac{\tau^{2/3}}{4|A|^{1/3}} .
\end{equation}
With this restriction on $M$, we now have
\begin{equation}
\mu \geq \frac{M^2|A|\tau}{4}.
\label{almost}
\end{equation}
Our integer parameter $M$ must satisfy \eqref{Mcond} and \eqref{Mcond2}. We therefore choose
$$M:=\min \left\{ \left \lfloor \frac{|A|^{1/3}}{\sqrt{2C}d^{1/6}(A)\tau^{1/6}} \right \rfloor, \left \lfloor \frac{\tau^{2/3}}{4|A|^{1/3}} \right \rfloor \right\}.$$
Summing over the full clusters, of which there are at least $\frac{|S_{\tau}|}{4M}$, yields
\begin{align}
|A/A+A|^2&\geq \frac{|S_{\tau}|}{4M}\cdot\frac{M^2}{4}|A|\tau \nonumber
\\&\gg |S_{\tau}|M|A|\tau.
\label{aa+a}
\end{align}
\subsection{Choosing $M$ - case 1}
Suppose first that $M=\left \lfloor \frac{|A|^{1/3}}{\sqrt{2C}d^{1/6}(A)\tau^{1/6}} \right \rfloor$.
Recall that we need $2 \leq 2M \leq |S_{\tau}|$. It is easy to check that the upper bound for $M$ is satisfied. Indeed, it follows from \eqref{Sbound} that
$$2M \leq \frac{2}{\sqrt{2C}}|A|^{1/3} \leq \frac{|A|}{2 \log |A|} \leq |S_{\tau}|.$$
The first inequality above uses the fact that $d(A) \geq 1$ for all $A$ (since $|AC| \geq \max\{|A|,|C|\} \geq (|A||C|)^{1/2}$ for any non-empty $C$ in \eqref{eq:d(A)def}), as well as the bound $\tau \geq 1$.
The second inequality is true for sufficiently large $|A|$. Since smaller sets can be dealt with by choosing sufficiently small implied constants in the statement, we may assume that $2M \leq |S_{\tau}|$.
Assume first that $M \geq 1$ (we will deal with the other case later). Then, by \eqref{aa+a} and the definition of $M$,
$$|A/A+A|^2 \gg \frac{|S_{\tau}||A|^{4/3}\tau^{5/6}}{d^{1/6}(A)}.$$
Applying the inequality $|S_{\tau}|\tau \gg \frac{|A|^2}{\log |A|}$, it follows that
\begin{equation}
d^{1/6}(A)|A/A+A|^2 \gg \frac{ |A|^{3}|S_{\tau}|^{1/6}}{\log^{5/6}|A|}.
\label{aa+a2}
\end{equation}
After bounding the left hand side of this inequality using \eqref{part1}, we obtain
$$\frac{|A/A+A|^{2/3}|S_{\tau}|^{1/6}\log^{1/2}|A|}{|A|^{7/6}}|A/A+A|^2 \gg d^{1/6}(A)|A/A+A|^2 \gg \frac{ |A|^{3}|S_{\tau}|^{1/6}}{\log^{5/6}|A|}.$$
Rearranging this expression leads to the bound
$$|A/A+A| \gg \frac{|A|^{25/16}}{\log^{1/2}|A|},$$
which is stronger than the claim of the theorem.
It remains to consider what happens if $M \leq 1$. Indeed, if this is the case, then
$$\frac{|A|^{1/3}}{\sqrt{8C}d^{1/6}(A)\tau^{1/6}}<1$$
and so
$$\frac{|A|^{1/3}}{d^{1/6}(A)\tau^{1/6}} \ll 1.$$
After applying the bound $\tau \leq |A|^2/|S_{\tau}|$, it follows that
$$\frac{|S_{\tau}|^{1/6}}{d^{1/6}(A)} \ll 1 \ll \frac{|A/A+A|^2}{|A|^3},$$
where the latter inequality is a consequence of Theorem \ref{thm:old}. In particular, this implies that \eqref{aa+a2} holds. We can then repeat the earlier analysis and once again reach the conclusion that
$$|A/A+A| \gg \frac{|A|^{\frac{3}{2}+\frac{1}{16}}}{\log^{1/2}|A|}.$$
\subsection{Choosing $M$ - case 2}
Suppose now that $M=\left \lfloor \frac{\tau^{2/3}}{4|A|^{1/3}} \right \rfloor$.
Again, we need to check that $2 \leq 2M \leq |S_{\tau}|$. If the lower bound does not hold then \eqref{taubound3} gives a contradiction for sufficiently large $|A|$. Smaller sets can be dealt with by choosing sufficiently small implied constants in the statement. If the upper bound does not hold then
$$\frac{\tau^{2/3}}{|A|^{1/3}} \geq 2M>|S_{\tau}|.$$
Multiplying both sides of this inequality by $\tau$ and applying \eqref{taubound} gives the contradiction
$$|A|^{5/3} \geq \tau^{5/3} \gg \frac{|A|^{7/3}}{\log |A|}.$$
Since this choice of $M$ is valid, we can now conclude the proof. From \eqref{taubound3}, we have
$$M \gg \frac{\tau^{2/3}}{|A|^{1/3}} \gg \frac{|A|^{\frac{1}{13}}}{\log^{\frac{2}{3}}|A|}.$$
Then, by \eqref{aa+a} and \eqref{taubound},
$$|A/A+A|^2 \gg \frac{|A|^3}{\log|A|}M \gg \frac{|A|^{3+\frac{1}{13}}}{\log^{5/3}|A|} .$$
We conclude that
$$|A/A+A| \gg \frac{|A|^{\frac{3}{2}+\frac{1}{26}}}{\log^{5/6}|A|},$$
and so the proof is complete. \qedsymbol
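The exponent bookkeeping in the two cases can likewise be verified with exact rational arithmetic (again, only a check of the arithmetic, not of the argument):

```python
from fractions import Fraction as F

# Case 2: tau >> |A|^(2/3 - 2/39)/log|A| and M >> tau^(2/3)/|A|^(1/3)
# combine to give M >> |A|^(1/13) up to logarithmic factors:
assert F(2, 3) * (F(2, 3) - F(2, 39)) - F(1, 3) == F(1, 13)

# |A/A+A|^2 >> |A|^(3 + 1/13)/log^(5/3)|A| then halves to the stated exponents:
assert (3 + F(1, 13)) / 2 == F(3, 2) + F(1, 26)
assert F(5, 3) / 2 == F(5, 6)
print("final exponent checks pass")
```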
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{A}{huge} number of surveillance cameras have been installed in public places (\textit{e.g.} offices, stations, airports, and streets) in recent years to closely monitor scenes and give early warnings of events such as accidents and crimes.
However, dealing with large camera networks requires a lot of human effort.
Automatic person re-identification tasks that can associate people across images from non-overlapping cameras have been widely utilized to reduce the required human effort.
Most previous works have generally relied on people's appearance, such as color, shape, and texture, to re-identify them, since there is no temporal or spatial continuity between non-overlapping cameras.
Thus, many works have focused on appearance modeling and learning, such as feature descriptor extraction~\cite{farenzena2010person,zhao2014learning}, metric learning~\cite{koestinger2012large, roth2014mahalanobis}, and saliency learning~\cite{zhao2013unsupervised}, for efficient re-identification.
Unfortunately, a person's appearance can change considerably across images depending on a camera's viewpoint and a person's pose as shown in Fig.~\ref{fig:chal_reid}; thus, person re-identification tasks that only rely on appearance are very challenging.
Nonetheless, many previous re-identification frameworks~\cite{koestinger2012large, roth2014mahalanobis, zhao2013unsupervised} have commonly adopted single-shot matching methods that utilize a single appearance per person to measure the similarity (\textit{or} difference) between a pair of person image patches. However, it is difficult to identify people with single-shot appearance matching because of the aforementioned severe appearance changes.
Several multi-shot matching methods~\cite{farenzena2010person, wang2014person, limulti} have been proposed in recent years to overcome the limitation of single-shot matching; however, the ambiguities that arise due to the viewpoint and pose variations remain.
\begin{figure}[t]
\includegraphics[width=1\columnwidth]{./images/figure1/figure1.pdf}
\caption{Challenges in person re-identification due to appearance changes. A person's appearance changes depending on the camera viewpoint and the person's pose. Pairs of red squares have the same locations but show different appearances due to pose variations of people.}
\label{fig:chal_reid}
\end{figure}
In real world surveillance scenarios, each person provides multiple observations along a moving path. Therefore, we can exploit plenty of appearances for re-identification tasks.
Furthermore, surveillance videos contain scene structures and scene contexts such as a ground plane of a scene, a person's trajectory, etc.
In practice, it is possible to estimate the camera viewpoint from the scene information via human height-based auto-calibrations~\cite{kusakunniran2009direct,lv2006camera} and vanishing point-based auto-calibrations~\cite{orghidan2012camera}; then the difficulties in person re-identification become more tractable.
This paper proposes a novel framework for person re-identification, called Pose-aware Multi-shot Matching~(PaMM), that analyzes camera viewpoints and person poses. First, camera viewpoints are calibrated and people's poses are robustly estimated using the proposed pose estimation method.
We then generate a multi-pose model that contains four feature descriptor groups extracted from four image clusters grouped by person poses (\textit{i.e.} \textit{f}ront, \textit{r}ight, \textit{b}ack, and \textit{l}eft).
After generating multi-pose models, we calculate matching scores between multi-pose models in a weighted summation manner based on pre-trained matching weights.
The proposed person re-identification framework permits the exploitation of additional cues such as person poses and 3D scene information, which makes the person re-identification problem more tractable.
To validate our methods, we extensively evaluate the performance of the proposed methods and other state-of-the-art methods that use public person re-identification datasets \texttt{3DPeS}~\cite{baltieri2011_308}, \texttt{PRID 2011}~\cite{hirzer11a} and \texttt{iLIDS-Vid}~\cite{wang2014person}. Experimental results show that the proposed framework is promising for person re-identification from diverse viewpoint and pose variations and outperforms other state-of-the-art methods.
For reproducibility, the PaMM code is openly available to the public at: \url{https://cvl.gist.ac.kr/pamm/}.
The main ideas of this work are simple but very effective. In addition, our method can flexibly adopt any existing person re-identification methods such as feature descriptor extraction~\cite{farenzena2010person,zhao2014learning} and metric learning~\cite{davis2007information, koestinger2012large, weinberger2005distance} methods as the baseline in our re-identification framework.
This is the first attempt to exploit viewpoint and pose information for \emph{multi-shot} person re-identification to the best of our knowledge.
The rest of the paper is organized as follows: Section~\ref{sec:preivous} summarizes previous person re-identification works.
Section~\ref{sec:motiv} explains the motivation behind this work. We then describe our proposed methods in Section~\ref{sec:proposed}.
The datasets and evaluation methodology used are described in Section~\ref{sec:data_metho}.
The experimental results are reported in Section~\ref{sec:exp} and we conclude this paper in Section~\ref{sec:conclusion}.
\section{Previous Works}
\label{sec:preivous}
We have classified previous person re-identification methods into single-shot matching methods and multi-shot matching methods and briefly reviewed them. Single-shot matching methods only use a single appearance of each person to find people correspondences between two different cameras, whereas the multi-shot matching methods exploit multiple appearances to find the correspondences.
\subsection{Single-shot matching methods}
\label{subsec:single_shot}
Most works that attempt to re-identify people across non-overlapping cameras generally rely on people's appearance, since there is no spatiotemporal continuity; we cannot fully utilize the motion or spatial information of a person for re-identification.
Therefore, most works have focused on appearance-based techniques such as feature descriptor extraction and metric learning methods for efficient person re-identification.
Regarding feature extraction methods, Farenzena~\emph{et al}\onedot~\cite{farenzena2010person} proposed the symmetry-driven accumulation of local features that are extracted based on the principles of the symmetry and asymmetry of the human body.
This method exploits the human body model which is robust to human pose variations.
Feature extraction methods that select or weight discriminative features have been proposed in~\cite{liu2012person,zhao2014learning}. These methods enable us to adaptively exploit features depending on the person's appearance.
Regarding metric learning, several methods such as KISSME~\cite{koestinger2012large} and LMNN-R~\cite{dikmen2011pedestrian} have been proposed and applied to the re-identification problem.
Some works \cite{koestinger2012large,roth2014mahalanobis} have extensively evaluated and compared several metric learning methods (\textit{e.g.} ITML~\cite{davis2007information}, KISSME~\cite{koestinger2012large}, LMNN~\cite{weinberger2005distance} and Mahalanobis~\cite{roth2014mahalanobis}) and shown the effectiveness of metric learning for re-identification.
Similar to metric learning methods, a saliency learning method was also proposed by R. Zhao~\emph{et al}\onedot~\cite{zhao2013unsupervised} that learned saliency for handling severe appearance changes.
Recently, many person re-identification methods have been proposed that are based on learning deep convolutional neural network~(CNN)~\cite{su2016deep} and Siamese convolutional network~\cite{ahmed2015improved, yi2014deep,wang2016joint} for simultaneously learning both features and metrics. In addition, \cite{liao2015person} proposed both feature descriptor extraction and metric learning methods for re-identification.
However, a few works~\cite{bak2015person,wu2015viewpoint} that use person pose for re-identification have been proposed very recently.
Bak \emph{et al}\onedot~\cite{bak2015person} proposed learning a generic metric pool that consists of metrics, each of which are learned to match specific pairs of poses.
Wu~\emph{et al}\onedot~\cite{wu2015viewpoint} proposed person re-identification involving human appearance modeling using pose priors and person-specific feature learning. Although these methods utilize pose priors for person re-identification, they consider single-shot matching, which recognizes people using a single appearance and therefore has difficulty handling diverse appearance changes. This paper proposes a person re-identification framework that uses pose cues for efficient \emph{multi-shot matching}.
\subsection{Multi-shot matching methods}
Several multi-shot matching methods that have sought to overcome the limitations of single-shot matching methods have been proposed in recent years.
Li~\emph{et al}\onedot~\cite{li2015multi} proposed a random forest-based person re-identification that exploits multiple appearances. They calculated similarity scores between two multi-shot sets and averaged them into a final similarity score.
Farenzena~\emph{et al}\onedot~\cite{farenzena2010person} also provided multi-shot matching results by comparing each possible pair of histograms between different signatures (a set of appearances) and selecting the lowest obtained distance for the final matching score.
Similarly, Su~\emph{et al}\onedot\cite{su2016deep} and Zheng~\emph{et al}\onedot\cite{zheng2015scalable} exploited multiple queries for re-identification. Instead of the multi-shot matching strategies such as \cite{farenzena2010person,li2015multi}, they merged multiple queries (\textit{i.e.} multi-shot) into a single query and performed re-identification using the merged queries.
Wang~\emph{et al}\onedot~\cite{wang2014person,wang2016person} proposed video ranking methods for multi-shot matching that automatically selected discriminative video fragments and learned a video ranking function. You~\emph{et al}\onedot~\cite{you2016top} proposed a top-push distance learning model for efficiently matching video features of people.
Similarly, Liu~\emph{et al}\onedot~\cite{liu2015spatio} proposed a video-based pedestrian re-identification method based on the proposed spatiotemporal body-action model.
Li~\emph{et al}\onedot~\cite{limulti} also proposed a multi-shot person re-identification method based on iterative appearance clustering and subspace learning for effective multi-shot matching.
In addition, a multi-shot matching person re-identification using deep recurrent neural network~(RNN)~\cite{mclaughlin2016recurrent} was recently proposed. This implies that multi-shot matching with deep learning techniques will be a new trend in person re-identification.
Although multi-shot matching person re-identification methods have overcome the limitations of single-shot matching to some extent, ambiguities still arise from the various viewpoints and pose changes.
\begin{figure*}[]
\centering
{\includegraphics[width=1.9\columnwidth]{./images/figure2/figure2.pdf} }
\caption{The proposed pose-aware multi-shot matching (PaMM) framework for person re-identification. First, the pose of each person is estimated. Second, a multi-pose model is generated. Finally two multi-pose models are matched based on pre-trained matching weights. The thicknesses of lines indicate the matching weights.}
\label{fig:framework}
\end{figure*}
\section{Motivation and Main Ideas}
\label{sec:motiv}
As shown in Fig.~\ref{fig:chal_reid}, person re-identification is quite challenging due to camera viewpoint and person pose variations.
However, what if the camera viewpoint and the pose priors of people in every non-overlapping camera are known in advance?
In fact, progress in auto-calibration techniques~\cite{kusakunniran2009direct, lv2006camera} has enabled the extraction of additional cues such as camera parameters, the ground plane, and the 3D positions of people without any offline calibration tasks~\cite{zhang1999flexible}. Exploiting those additional cues permits the estimation of people's poses, as described in Section~\ref{subsec:viewpoint_est}.
This paper fully exploits those additional cues for multi-shot matching and proposes the Pose-aware Multi-shot Matching (PaMM) for efficient person re-identification.
Suppose that upon estimating camera viewpoints and people's poses, there is a simple 2 vs. 2 matching scenario that contains one same-pose matching (\textit{f}ront--\textit{f}ront) and three different-pose matchings (\textit{f}ront--\textit{r}ight, \textit{l}eft--\textit{f}ront, \textit{l}eft--\textit{r}ight) as shown in Fig.~\ref{fig:framework}.
The result of the same-pose matching can generally be expected to be more reliable than those of the different-pose matchings, since people maintain their appearance between cameras when their poses are the same (this work excludes photometric issues such as illumination changes and camera color response differences).
In such a multi-shot matching scenario, it is therefore desirable that the same-pose matching (\textit{f}ront--\textit{f}ront) play a more important role than the different-pose matchings.
Hence, this work incorporates this matching idea by aggregating the matching scores of all pose matchings in a weighted summation manner, as shown in Fig.~\ref{fig:framework}, where the thicknesses of the lines indicate the matching weights. We also study how to efficiently train matching weights and match between multi-shot appearances using pose information.
\section{Proposed PaMM Framework}
\label{sec:proposed}
The proposed person re-identification framework first estimates the camera viewpoint and people's poses (Section~\ref{subsec:viewpoint_est}) and then
generates multi-pose models containing groups of feature descriptors extracted from four image clusters formed according to the person poses (\textit{i.e.} \textit{f}ront, \textit{r}ight, \textit{b}ack, \textit{l}eft) (Section~\ref{subsec:model_gen}).
After the multi-pose models are generated, matching scores between them are calculated in a weighted summation manner using the pre-trained matching weights (Section~\ref{subsec:multi-pose matching}).
The matching weight training is described in Section~\ref{subsec:train_weights}.
Fig.~\ref{fig:framework} illustrates the overall framework for the proposed person re-identification.
\subsection{Person pose estimation}
\label{subsec:viewpoint_est}
Before estimating people's poses, the camera intrinsic and extrinsic parameters (or camera pose) are estimated using auto-calibration algorithms such as~\cite{kusakunniran2009direct,lv2006camera}.
Then, the relationship between an image (pixel coordinates) and the real world (world coordinates) is described as
\begin{equation}
\left[ \begin{matrix} u \\ v \\ 1 \end{matrix} \right] ={ \mathbf{ K } }\left[ \begin{matrix} { \mathbf{ R } } & { \mathbf{ t } } \end{matrix} \right] \left[ \begin{matrix} \begin{matrix} X \\ Y \end{matrix} \\ \begin{matrix} Z \\ 1 \end{matrix} \end{matrix} \right] ,
\end{equation}
where $\mathbf{K}$, $\mathbf{R}$, and $\mathbf{t} = {[{ X }_{ cam },{ Y }_{ cam },{ Z }_{ cam }]}^{\top}$ represent the camera intrinsic matrix, rotation matrix, and position, respectively; $[u,v]$ and $[X,Y,Z]$ denote the image and world coordinates.
Knowing the camera parameters permits the projection of every object in an image onto the ground plane (world XY plane).
An object $k$ that appears in frame $t$ of camera $C$ is denoted as $\mathbf{ O }_{ t }^{ C,k }=(\mathbf{ P }_{ t }^{ C,k },\mathbf{ v }_{ t }^{ C,k }, { \theta }_{ t }^{ C,k })$, where $\mathbf{ P }_{ t }^{ C,k }=\left[ { X }_{ t }^{ C,k },Y_{ t }^{ C,k },1 \right]$, $\mathbf{ v }_{ t }^{ C,k }$, and ${ \theta }_{ t }^{ C,k }$ are the position, velocity, and person pose angle with respect to the camera, respectively.
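For concreteness, the ground-plane projection implied by the equation above can be sketched as follows. This is an illustrative helper, not the authors' implementation: it assumes the standard pinhole model $x \sim \mathbf{K}[\mathbf{R}\,|\,\mathbf{t}]X$ with $\mathbf{t}$ as the translation vector, and a ground plane at $Z=0$, under which the projection reduces to a homography.

```python
import numpy as np

def backproject_to_ground(u, v, K, R, t):
    """Back-project pixel (u, v) onto the world ground plane Z = 0.

    For Z = 0, the projection x ~ K [R | t] [X, Y, 0, 1]^T reduces to a
    homography H = K [r1 r2 t], so [X, Y, 1]^T ~ H^{-1} [u, v, 1]^T.
    """
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    Xw = np.linalg.solve(H, np.array([u, v, 1.0]))
    Xw /= Xw[2]              # normalize the homogeneous coordinate
    return Xw[0], Xw[1]      # ground-plane position (X, Y)
```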
\begin{figure}[t]
\centering
{\includegraphics[width=0.585\columnwidth]{./images/figure3/figure3_a.pdf}}
{\includegraphics[width=0.395\columnwidth]{./images/figure3/figure3_b.pdf}}
\caption{Person pose estimation: (left) estimated 3D structure and person poses along the path, (right) corresponding 2D images and appearances grouped by poses.}
\label{fig:view_estimation}
\end{figure}
Inspired by~\cite{wu2015viewpoint}, the velocity of a person $\mathbf{ v }_{ t }^{ C,k }$ and the camera viewpoint vector $\mathbf{ c }_{ t}^{ C,k }$ used to estimate the person's pose are defined as
\vspace{0pt}\begin{equation}
\mathbf{ v }_{ t }^{ C,k }=\left[ ({ X }_{ t+1 }^{ C,k }-{ X }_{ t }^{ C,k }),\quad({ Y }_{ t+1 }^{ C,k }-{ Y }_{ t }^{ C,k }) \right], \vspace{0pt}
\end{equation}
\vspace{0pt}\begin{equation}
\mathbf{ c }_{ t}^{ C,k }=\left[ ({ X }^{ C}_{ cam }-{ X }_{ t }^{ C,k }),\quad({ Y }^{ C}_{ cam }-{ Y }_{ t }^{ C,k }) \right].\vspace{0pt}
\end{equation}
Assuming that pedestrians mostly walk forward, the pose angle of object $k$ can be estimated as ($C$ is omitted hereafter for convenience)
\begin{equation}
{ \theta }^{k}_{ t }=\arccos { \left( \frac { {\mathbf{ c }^{k}_{ t }}^{\top}\cdot \mathbf{ v }^{k}_{ t } }{ \left\| \mathbf{ c }^{k}_{ t } \right\| \left\| \mathbf{ v }^{k}_{ t } \right\| } \right) }.
\label{equ:4}
\end{equation}
Fig.~\ref{fig:view_estimation} shows an example of estimated person poses.
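As a sketch, the velocity and viewpoint vectors and Eq.~\eqref{equ:4} can be implemented as below. Note that a bare $\arccos$ only yields values in $[0^{\circ},180^{\circ}]$; to cover the full $[0^{\circ},360^{\circ})$ range used later for grouping, this sketch disambiguates the sign with the 2D cross product, which is one plausible reading rather than the authors' stated implementation:

```python
import numpy as np

def estimate_pose_angle(p_t, p_t1, cam_xy):
    """Pose angle from two consecutive ground positions and the camera.

    p_t, p_t1 : (X, Y) ground positions of the person at frames t, t+1
    cam_xy    : (X_cam, Y_cam) camera position on the ground plane
    Returns the signed angle (degrees, [0, 360)) between the camera
    viewpoint vector c and the velocity vector v.
    """
    v = np.array([p_t1[0] - p_t[0], p_t1[1] - p_t[1]])      # velocity
    c = np.array([cam_xy[0] - p_t[0], cam_xy[1] - p_t[1]])  # viewpoint vector
    cross = c[0] * v[1] - c[1] * v[0]   # sign disambiguates left/right
    dot = c[0] * v[0] + c[1] * v[1]
    return np.degrees(np.arctan2(cross, dot)) % 360.0
```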
However, the initially estimated ${\theta}^{k}_{t}$ is noisy as shown in Fig.~\ref{fig:smoothing}~(a).
The noise is reduced by smoothing ${\theta}^{k}_{t}$ based on a moving average algorithm in the polar coordinate system as
\begin{equation}
{ \hat { \theta } }_{ t }^{ k }=\arctan { \left( { \frac { \sum _{ i=t-m }^{ t+m }{ \sin { \left( { { \theta } }^{k}_{ i } \right) } } }{ \sum _{ i=t-m }^{ t+m}{ \cos { \left( { { \theta } }^{k}_{ i } \right) } } } } \right) },
\end{equation}
where $m$ is the moving average parameter (here $m=10$).
Although the raw angles contain discontinuities around $0^{ \circ}$ and $360^{ \circ}$, smoothing in polar coordinates remains reliable, whereas smoothing in Cartesian coordinates does not (Fig.~\ref{fig:smoothing}~(b,c)).
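A minimal sketch of the polar moving-average smoothing above; truncating the window at the sequence borders is our assumption, as the paper does not specify border handling:

```python
import numpy as np

def smooth_angles(theta_deg, m=10):
    """Moving-average smoothing of angles in polar form.

    Averaging sin/cos and taking arctan2 avoids the wrap-around
    artifacts that plain averaging produces near 0/360 degrees.
    """
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    out = np.empty_like(theta)
    n = len(theta)
    for t in range(n):
        lo, hi = max(0, t - m), min(n, t + m + 1)   # truncated window
        s = np.sin(theta[lo:hi]).sum()
        c = np.cos(theta[lo:hi]).sum()
        out[t] = np.degrees(np.arctan2(s, c)) % 360.0
    return out
```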
\begin{figure}[t]
\centering
{\includegraphics[width=0.32\columnwidth]{./images/figure4/figure4_a.pdf}}
{\includegraphics[width=0.32\columnwidth]{./images/figure4/figure4_b.pdf}}
{\includegraphics[width=0.32\columnwidth]{./images/figure4/figure4_c.pdf}}
\caption{(left) initial pose angle, (middle) smoothing result in Cartesian coordinates, (right) smoothing result in polar coordinates.}
\label{fig:smoothing}
\end{figure}
\subsection{Multi-pose appearance selection}
\label{subsec:model_gen}
\subsubsection{Sample selection based on sample confidence} \label{subsubsec:sample_selection}
Generating good multi-pose models requires filtering out unreliable samples that have incorrect angles or polluted appearances along a moving trajectory.
Thus, we define a sample confidence that measures the reliability of person samples based on the following requirements (R1--R3):
\begin{itemize}
\item
\textbf{Angle variation} (R1): We assume that the angle of a walking person will not change abruptly between temporally neighboring frames.
If there are rapid angle changes across consecutive frames, the corresponding samples are regarded as unreliable and filtered out. We observe that inaccurate localization of a person generally causes large angle variations. This is captured by measuring the angle variation as
\begin{equation}
{ \delta }^{ k }_{ t }=\min { \left( d( { \hat { \theta } }_{ t }^{ k } ) ,\left| d( { \hat { \theta } }_{ t }^{ k } ) -360 \right| \right) , } \vspace{0pt}
\end{equation}
where $d( { \hat { \theta } }_{ t }^{ k } ) =| { \hat { \theta } }_{ t-1 }^{ k }-{ \hat { \theta } }_{ t }^{ k } | $. Even though there is an angle discontinuity between $0^{ \circ}$ and $360^{ \circ}$, ${ \delta }^{ k }_{ t }$ is reliably calculated due to the second term of the $\min$ function.
\item
\textbf{Magnitude of the velocity} (R2): When a person is stationary for several frames, their velocity $\mathbf{v}^{k}_{t}$ is close to 0 and the person angle estimated by Eq.~\eqref{equ:4} becomes
unreliable\footnote{To estimate person angles, we assume in Section~\ref{subsec:viewpoint_est} that people mostly move forward. In the case of a stationary person, this assumption is not satisfied. Note that stationary people are likely to exhibit pure rotational motion, which cannot be handled by Eq.~\eqref{equ:4}.}.
This problem is handled by measuring the magnitude of the person's velocity as
${ \left\| \mathbf{ v }^{k}_{ t } \right\| }_{ 2 }$. A sample with a small velocity magnitude is regarded as unreliable.
\item
\textbf{Occlusion rate} (R3):
When a person is occluded by others, the sample is again considered unreliable, since the person's appearance is polluted.
Occluded samples are dealt with by measuring each person's occlusion rate as
\begin{equation}
{ Occ }_{ t }^{ k }=\max _{ h\in \mathbf{H}^{k} }{ \left( \frac { area({ B }_{ t }^{ k }\cap { B }^{ h }_{ t }) }{ area({ B }_{ t }^{ k }) } \right) }, \vspace{0pt}
\end{equation}
where ${ B }^{k}_{ t }$ is a 2D bounding box of an object $k$ at frame $t$, ${ B }^{h}_{ t }$ is a 2D bounding box occluding ${ B }^{k}_{ t }$, and $\mathbf{H}^{k}$ is a set of object indexes occluding object $k$. As we know the 3D position of each person $\mathbf{P}^{k}_{t}$, it is easy to find $\mathbf{H}^{k}$.
\end{itemize}
\begin{figure}[t]
\subfigure[sample confidence under wrong detection]{\includegraphics[width=1\columnwidth]{./images/figure5/figure5_a.pdf} } \\
\subfigure[sample confidence under pure rotation]{\includegraphics[width=1\columnwidth]{./images/figure5/figure5_b.pdf}} \\
\subfigure[sample confidence under occlusion]{\includegraphics[width=1\columnwidth]{./images/figure5/figure5_c.pdf}}
\caption{Sample confidence under various conditions (best viewed in color and at a high resolution).}
\label{fig:sam_conf}
\end{figure}
\noindent Based on the above requirements, we define the sample confidence as
\begin{equation}
{ conf }\left( \mathbf{ O }^{k}_{ t } \right) = { e }^{ -{ { \delta }^{k}_{ t } } }\cdot \tanh { { \left\| \mathbf{ v }^{k}_{ t } \right\| }_{ 2 } } \cdot \left( 1-{ Occ }^{k}_{ t } \right). \vspace{0pt}
\label{eq:sample_conf}
\end{equation}
The sample confidence lies in $[0,1]$. Fig.~\ref{fig:sam_conf} shows the sample confidences under various situations. We regard a person sample as reliable when ${ conf }\left( \mathbf{ O }^{k}_{ t } \right)>0.8$.
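The three reliability measures (R1--R3) and the resulting confidence of Eq.~\eqref{eq:sample_conf} can be sketched as follows. The helper names and the box format are illustrative, and the paper does not state the units of $\delta$ passed to the exponential, so it is forwarded as-is:

```python
import math

def angle_variation(theta_prev, theta_cur):
    """Angle variation delta (R1), robust to the 0/360 wrap-around."""
    d = abs(theta_prev - theta_cur)
    return min(d, abs(d - 360.0))

def occlusion_rate(box_k, boxes_h):
    """Max fraction of box_k covered by any occluding box (R3).
    Boxes are axis-aligned (x1, y1, x2, y2) rectangles."""
    x1, y1, x2, y2 = box_k
    area_k = (x2 - x1) * (y2 - y1)
    occ = 0.0
    for bx1, by1, bx2, by2 in boxes_h:
        iw = max(0.0, min(x2, bx2) - max(x1, bx1))
        ih = max(0.0, min(y2, by2) - max(y1, by1))
        occ = max(occ, iw * ih / area_k)
    return occ

def sample_confidence(delta, speed, occ):
    """Sample confidence: in [0, 1], high only if the pose is stable
    (delta small), the person is moving (speed large), and the
    appearance is unoccluded (occ small)."""
    return math.exp(-delta) * math.tanh(speed) * (1.0 - occ)
```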
\subsubsection{Generating multi-pose model}
After the sample selection, we divide the samples into four groups according to their pose angles~(\textit{i.e.} \textit{f}ront, \textit{r}ight, \textit{b}ack, \textit{l}eft).
Each group covers $90^{\circ}$ as follows:
\begin{equation}
\begin{split}
& { G }_{ f }^{ k } =\left\{ I\left(\mathbf{ O }_{ t }^{ k }\right)|{ 0 }^{ \circ }\le { \hat { \theta } }_{ t }^{ k }<{ 45 }^{ \circ },{ 315 }^{ \circ }\le { \hat { \theta } }_{ t }^{ k }<{ 360 }^{ \circ } \right\},\\
& { G }_{ r }^{ k } =\left\{ I\left(\mathbf{ O }_{ t }^{ k }\right)|{ 45 }^{ \circ }\le { \hat { \theta } }_{ t }^{ k }<{ 135 }^{ \circ } \right\},\\
& { G }_{ b }^{ k } =\left\{ I\left(\mathbf{ O }_{ t }^{ k }\right)|{ 135 }^{ \circ }\le { \hat { \theta } }_{ t }^{ k }<{ 225 }^{ \circ } \right\},\\
& { G }_{ l }^{ k } =\left\{ I\left(\mathbf{ O }_{ t }^{ k }\right)|{ 225 }^{ \circ }\le { \hat { \theta } }_{ t }^{ k }<{ 315 }^{ \circ } \right\},\\
& \text{where} \qquad t^{k}_{start} \le t \le t^{k}_{end}.
\end{split}
\end{equation}
$t^{k}_{start}$ and $t^{k}_{end}$ are the start and end frame indexes of object $k$, respectively, and $I\left(\mathbf{ O }_{ t }^{ k }\right)$ is an image sample of object $k$.
It is worth noting that the proposed sample confidence~(Eq.~\eqref{eq:sample_conf}) efficiently filters out unreliable samples, as shown in Fig.~\ref{fig:result_conf}.
We simply represent the four groups as ${ G }_{ p }^{ k }$, where $p\in \{f,r,b,l\}$.
Then, an individual image that belongs to each group ${ G }_{ p }^{ k }$ is represented as
\begin{equation*}
{ G }_{ p_{i} }^{ k }, \quad p\in \left\{f,r,b,l \right\}, \quad 1\le i\le {N}_{p}^{k},
\end{equation*}
where $i$ is the index of each image in each group, and ${N}_{p}^{k}$ is the number of images in ${ G }_{ p}^{ k }$.
For example, ${ G }_{ f_{2} }^{ k }$ denotes the second image in the \textit{f}ront group $({ G }_{ f }^{ k })$ of object $k$.
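The pose-to-group assignment defined above is a simple range test over the smoothed angle:

```python
def pose_group(theta):
    """Assign a smoothed pose angle (degrees) to one of the four
    90-degree pose groups used by the multi-pose model."""
    theta = theta % 360.0
    if theta < 45.0 or theta >= 315.0:
        return 'f'   # front
    if theta < 135.0:
        return 'r'   # right
    if theta < 225.0:
        return 'b'   # back
    return 'l'       # left
```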
After the sample grouping, we extract feature descriptors from the four groups. Finally, the multi-pose model of object $k$ is defined as
\begin{equation}
\mathcal{ M }^{ k }={ { \left\{ f\left( { G }_{ p_{i} }^{ k } \right)| p\in\left\{ f,r,b,l \right\}, 1\le i\le {N}_{p}^{k} \right\} } },
\end{equation}
where $f(\cdot)\in\mathbb{R}^{d}$ is a function that extracts a $d$-dimensional feature descriptor from an image.
The multi-pose model $\mathcal{ M }^{ k }$ consists of multiple feature descriptors grouped by their pose angles.
Details of the feature extraction are described below.
\begin{figure}[t]
\centering
\subfigure[average of each cluster without sample selection]{\includegraphics[width=0.46\columnwidth]{./images/figure6/figure6_a}} \hspace{5pt}
\subfigure[average of each cluster with sample selection]{\includegraphics[width=0.46\columnwidth]{./images/figure6/figure6_b}}
\caption{Grouping results according to person pose angles without and with sample selection. The clusters obtained with sample selection show clearer directivity.}
\label{fig:result_conf}
\end{figure}
\textbf{Feature extraction}: Our framework can adopt any feature descriptor. In this paper, we apply several feature extraction methods, namely Histogram of Oriented Gradients~(HoG)~\cite{dalal2005histograms}, dcolorSIFT~\cite{zhao2013unsupervised}, and LOMO~\cite{liao2015person}, all of which show promising re-identification performance.
In our feature extraction process, each person image is resized to 128$\times$48 pixels.
Using the resized images, we extract several feature descriptors.
HoG~\cite{dalal2005histograms} counts occurrences of gradient orientation on a densely sampled grid and makes an orientation histogram, and it describes the overall shape of an object well.
dColorSIFT~\cite{zhao2013unsupervised} is a dense feature descriptor combining a dense LAB-color histogram with dense SIFT. The authors pointed out that densely sampled local features have been widely applied to matching problems due to their robustness.
LOMO~\cite{liao2015person} analyzes the horizontal occurrence of local features and makes a stable representation by maximizing the occurrence. In addition, it handles illumination variations by applying a Retinex transform and a scale invariant texture operator.
The results of testing the various feature extraction methods are given in Section~\ref{subsec:feature_and_metrics}.
\begin{algorithm}[t]
\KwData{Query objects, Gallery objects}
\KwResult{Matching scores between queries and galleries}
Randomly split training and test sets;
\For{training query and gallery}{
Extract feature descriptor\;
}
$\mathbf{M}$ = Metric learning; {\footnotesize \% using only training set. }
\For{test query and gallery}{
$\theta_{t}$ = Estimate person pose\;
$\hat{\theta}_{t}$ = Smooth person pose\;
$conf$ = Estimate sample confidence\;
\eIf{$conf < 0.8$}{
Reject sample\;
}
{ $G_{p}$ = Perform sample grouping\;}
}
\For{test query and gallery}{
$f(G_{p})$ = Extract group feature descriptors\;
$\mathcal{ M }$ = Generate multi-pose model\;
}
\For{test query and gallery}{
$C\left( \mathcal{ M }^{ k }, \mathcal{ M }^{ l }\right)$ = Multi-pose model matching\;
{\footnotesize \% $\mathcal{ M }^{ k }$ and $\mathcal{ M }^{ l }$ belong to different cameras.}
}
\caption{The algorithm of the proposed PaMM}
\label{alg1}
\end{algorithm}
\subsection{Multi-pose model matching}
\label{subsec:multi-pose matching}
In this section, we describe the matching process of multi-pose models.
Suppose that we have $\mathcal{ M }^{k }$ and $\mathcal{ M }^{ l }$, which are the multi-pose models of objects $k$ and $l$ appearing in different cameras. In order to measure the similarity between the two multi-pose models, we first calculate all pairwise feature distances between the two multi-pose models as
\begin{equation}
\label{equ:distance_measure}
{ x }_{ p_{i}q_{j} }^{k,l}= \sqrt{\left[{ f( { G }_{ p_{i} }^{ k } ) }-{ f( { G }_{ q_{j} }^{ l } ) }\right]^{\top}\mathbf{M}\left[{ f( { G }_{ p_{i} }^{ k } ) }-{ f( { G }_{ q_{j} }^{ l } ) }\right]},
\end{equation}
where $p,q\in\left\{ f,r,b,l \right\} $, $1\le i\le {N}_{p}^{k}$, $1\le j\le {N}_{q}^{l}$, and $\mathbf{M}$ is a $d\times d$ positive semi-definite matrix $(\mathbf{M}\succeq 0)$ learned by a metric learning algorithm\footnote{In practice, we first applied Principal Component Analysis (PCA)~\cite{jolliffe2002principal} to reduce the dimensions of the descriptors. We then performed metric learning on the PCA subspace. This is a conventional two-stage process for metric learning.}. For metric learning, we can utilize any method, such as KISSME~\cite{koestinger2012large}, ITML~\cite{davis2007information}, or LMNN~\cite{weinberger2005distance}.
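The learned distance of Eq.~\eqref{equ:distance_measure} is a plain quadratic form; with $\mathbf{M}=\mathbf{I}$ it reduces to the Euclidean distance:

```python
import numpy as np

def pairwise_distance(f1, f2, M):
    """Learned Mahalanobis-style distance between two feature
    descriptors; M must be positive semi-definite."""
    d = np.asarray(f1, float) - np.asarray(f2, float)
    return float(np.sqrt(d @ M @ d))
```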
Then, the multi-pose model matching cost is computed in a weighted summation manner as
\begin{equation}
\begin{split}
C\left( \mathcal{ M }^{ k }, \mathcal{ M }^{ l }\right) = \frac { \sum _{ p,q }{ { \sum _{ i,j }{{ w }_{ pq } { e }_{ pq } { x }_{ { p }_{ i }{ q }_{ j } }^{ k,l } } } } }{ \sum_{ p,q }{\sum _{ i,j }{ w }_{ pq } { e }_{ pq }}}, &\\
\text{where} \qquad { e }_{ pq } = \begin{cases} 1\quad \text{if}~ (p,q)~\text{pair exists} \\ 0\quad \text{otherwise} \end{cases}, &\\
p,q \in \{f,r,b,l\}, \quad 1 \le i \le {N}_{p}^{k}, \quad 1\le j\le {N}_{q}^{l}. &
\end{split}
\label{eq:weighted sum}
\end{equation}
$w_{pq}$ is a matching weight that attaches importance to pairwise matching ${ x }_{ { p }_{ i }{ q }_{ j } }^{ k,l }$.
Note that a high matching cost denotes low similarity between two multi-pose models.
The overall algorithm of the proposed PaMM is summarized in Algorithm~\ref{alg1}.
The matching weight training process is described in the next section.
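A sketch of the weighted-summation matching cost of Eq.~\eqref{eq:weighted sum}; representing a multi-pose model as a dict from pose label to a list of feature vectors is our choice of data structure, not the paper's:

```python
import numpy as np

POSES = ('f', 'r', 'b', 'l')

def matching_cost(model_k, model_l, w, M):
    """Weighted multi-pose matching cost between two multi-pose models.

    model_k, model_l : dicts mapping a pose in {'f','r','b','l'} to a
                       list of feature vectors (a missing or empty pose
                       group corresponds to e_pq = 0).
    w : dict mapping an ordered pose pair (p, q) to its matching weight.
    M : learned metric used in the pairwise distance.
    """
    num = den = 0.0
    for p in POSES:
        for q in POSES:
            if not model_k.get(p) or not model_l.get(q):
                continue  # this (p, q) pair does not exist: e_pq = 0
            for fk in model_k[p]:
                for fl in model_l[q]:
                    d = np.asarray(fk, float) - np.asarray(fl, float)
                    num += w[(p, q)] * np.sqrt(d @ M @ d)
                    den += w[(p, q)]
    return num / den if den > 0 else float('inf')
```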
\subsection{Training matching weights}
\label{subsec:train_weights}
When training the matching weights, we assume that every $p,q$ pair exists~($e_{pq}$ is eliminated). In addition, we assume that each group $G_p$ has a single image~($\sum_{i,j}$ is eliminated). Then, Eq.~\eqref{eq:weighted sum} is rewritten as
\begin{equation}
C\left( \mathcal{ M }^{ k }, \mathcal{ M }^{ l }\right) = \frac { \sum _{ p,q } { w }_{ pq } { x }_{ p_{1} q_{1} }^{k,l} }{ \sum _{ p,q } { w }_{ pq } }.
\label{eq:weighted_single}
\end{equation}
For convenience, we also omit several indexes and terms such as object labels $(k,l)$, sample indexes $(i,j)$, and a normalization term $\sum _{ p,q }{ w }_{ pq } $ in the training step. Then, the pairwise feature distance between two multi-pose models is simply represented as $x_{pq}$, and Eq.~\eqref{eq:weighted_single} is rewritten as
\begin{equation}
\begin{split}
C\left( \mathbf{ x } \right) & =\mathbf{ w }^{ \top }\mathbf{ x },\\
\text{where} \qquad \mathbf{ x } & = { \left\{ { x }_{ ff },{ x }_{ fr },{ x }_{ fb },\dots ,{ x }_{ ll } \right\} }^{ \top } ,\\
\mathbf{ w } & = { \left\{ { w }_{ ff },{ w }_{ fr },{ w }_{ fb },\dots ,{ w }_{ ll } \right\} }^{ \top }.
\end{split}
\end{equation}
$\mathbf{x}\in \mathbb{R}^{10\times1}$ is a vector of pairwise feature distances and $\mathbf{w}\in \mathbb{R}^{10\times1}$ is a vector of matching weights, with one entry per unordered pose pair (cf.\ the ten distributions in Fig.~\ref{fig:sample_dist} and the trained weights in Fig.~\ref{fig:weight_bar}).
\begin{figure}[tb]
\centering
\subfigure[camera layout of video sets]{\includegraphics[width=0.423\columnwidth]{./images/figure10/figure10_a.pdf}} \hspace{5pt}
\subfigure[sample frames of each camera (d-h)]{\includegraphics[width=0.532\columnwidth]{./images/figure10/figure10_b.pdf}}
\caption{Test dataset: 3DPeS~\cite{baltieri2011_308}. We utilize three camera pair sets among the dataset (best viewed in color).}
\label{fig:3DPeS}
\end{figure}
In order to train the matching weights $\mathbf{w}$, we collect training samples $\mathcal{D}={ \left\{ { \left( \mathbf{ x }_{ a },{ y }_{ a } \right) }|{ y }_{ a }\in \left\{ -1,1 \right\}, 1\le a \le A \right\}}$, where $A$ is the number of training samples and $y_{a}$ is the class label of the sample.
Fig.~\ref{fig:training_sam_fig} shows examples of the training samples (positives: $y=1$, negatives: $y=-1$).
Given the training set $\mathcal{D}$, we exploit a Support Vector Machine (SVM)~\cite{cortes1995support} to find the weights $\mathbf{w}$ by solving the following optimization problem:
\begin{equation}
\begin{split}
\underset { \mathbf{ w },\xi }{ \text{arg min} } \left( \frac { 1 }{ 2 } { \left\| \mathbf{ w } \right\| }^{ 2 }+\lambda \sum _{ a }^{ A }{ { \xi }_{ a } } \right),&
\\ s.t.~~ { y }_{ a }\left( \mathbf{ w }^{ \top }\mathbf{ x }_{ a } \right) \ge 1-{ \xi }_{ a },~~{ \xi }_{ a }\ge 0,
~~\text{for}~~ 1\le& a\le A,
\end{split}
\end{equation}
where $\lambda$ is a margin tradeoff parameter and ${ \xi }_{ a }$ is a slack variable. The SVM solution guarantees a maximum margin.
The details and results of matching weight training are given in Section~\ref{subsec:exp_training_weights}.
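As an illustrative sketch, the bias-free linear SVM above can be solved with a simple subgradient descent on the regularized hinge loss; in practice an off-the-shelf SVM solver would be used, and the optimizer settings below are our assumptions:

```python
import numpy as np

def train_matching_weights(X, y, lam=10.0, lr=0.01, epochs=500, seed=0):
    """Fit bias-free linear SVM weights w for C(x) = w^T x.

    Minimizes (1/2)||w||^2 + lam * sum_a max(0, 1 - y_a * w^T x_a)
    by subgradient descent.  X is (A, dim); y entries are in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        margins = y * (X @ w)
        viol = margins < 1.0                       # hinge-loss violators
        grad = w - lam * (y[viol, None] * X[viol]).sum(axis=0)
        w -= lr * grad
    return w
```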
\section{Datasets and Methodology}
\label{sec:data_metho}
\noindent\textbf{Datasets.~}
For training matching weights, we used \texttt{CUHK02}~\cite{li2013locally} and \texttt{VIPeR}~\cite{gray2007evaluating}.
For testing methods, we used \texttt{PRID 2011}~\cite{hirzer11a}, \texttt{iLIDS-Vid}~\cite{wang2014person}, and \texttt{3DPeS}~\cite{baltieri2011_308}.
\begin{figure}[t]
\centering
\subfigure[Examples of positive pairs]{\includegraphics[width=1\columnwidth]{./images/figure8/figure8_a.pdf}}\\
\subfigure[Examples of negative pairs]{\includegraphics[width=1\columnwidth]{./images/figure8/figure8_b.pdf}}
\caption{Examples of positive and negative training sample pairs.}
\label{fig:training_sam_fig}
\end{figure}
\begin{figure*}[t]
\centering
\subfigure[HoG~\cite{dalal2005histograms}]{\includegraphics[width=0.63\columnwidth]{./images/figure12/figure12_a.pdf}} \hspace{10pt}
\subfigure[dcolorSIFT~\cite{zhao2013unsupervised}]{\includegraphics[width=0.63\columnwidth]{./images/figure12/figure12_b.pdf}} \hspace{10pt}
\subfigure[LOMO~\cite{liao2015person}]{\includegraphics[width=0.63\columnwidth]{./images/figure12/figure12_c.pdf}}
\vspace{-0pt}
\caption{Testing of various feature extraction methods and metric learning methods. R1 denotes the rank-1 accuracy and AUC is the area under the curve of the CMC. Tested feature descriptors: (a) HoG, (b) dcolorSIFT, (c) LOMO.}
\label{fig:test_various_feat_met}
\end{figure*}
\begin{figure*}[t]
\centering
\subfigure[$x_{ff}$]{\includegraphics[width=0.35\columnwidth]{./images/figure7/figure7_a.pdf}}
\subfigure[$x_{rr}$]{\includegraphics[width=0.35\columnwidth]{./images/figure7/figure7_b.pdf}} \hspace{20pt}
\subfigure[$x_{fr}$]{\includegraphics[width=0.35\columnwidth]{./images/figure7/figure7_c.pdf}}
\subfigure[$x_{fb}$]{\includegraphics[width=0.35\columnwidth]{./images/figure7/figure7_d.pdf}}
\subfigure[$x_{fl}$]{\includegraphics[width=0.35\columnwidth]{./images/figure7/figure7_e.pdf}}\\
\subfigure[$x_{bb}$]{\includegraphics[width=0.35\columnwidth]{./images/figure7/figure7_f.pdf}}
\subfigure[$x_{ll}$]{\includegraphics[width=0.35\columnwidth]{./images/figure7/figure7_g.pdf}} \hspace{20pt}
\subfigure[$x_{rb}$]{\includegraphics[width=0.35\columnwidth]{./images/figure7/figure7_h.pdf}}
\subfigure[$x_{rl}$]{\includegraphics[width=0.35\columnwidth]{./images/figure7/figure7_i.pdf}}
\subfigure[$x_{bl}$]{\includegraphics[width=0.35\columnwidth]{./images/figure7/figure7_j.pdf}}
\vspace{-0pt}
\caption{Distributions of pairwise feature distances $\left\{x_{pq}\right\}$ extracted from training data. The left two columns show the distributions of same-pose matching, while the right three columns show the distributions of different-pose matching. Tested feature descriptor: LOMO~\cite{liao2015person} and metric learning method: KISSME~\cite{koestinger2012large}.}
\vspace{-0pt}
\label{fig:sample_dist}
\end{figure*}
\begin{itemize}
\item
\texttt{CUHK02}~\cite{li2013locally} contains 1,816 people from five different outdoor camera pairs. The five camera pairs contain 971, 306, 107, 193, and 239 people, respectively, and each image is 160$\times$60 pixels.
Each person has two images per camera that were taken at different times.
Most people carry belongings (\textit{e.g.} a backpack, handbag, or baggage).
For our experiments, we manually annotated the pose angle of each person with one of four directions (\textit{i.e.} \textit{f}ront, \textit{r}ight, \textit{b}ack, \textit{l}eft), since CUHK02 does not provide pose angles. This dataset was used for training the matching weights $\mathbf{w}$.
\item
\texttt{VIPeR}~\cite{gray2007evaluating} includes 632 people captured by two outdoor cameras under different viewpoints and lighting conditions. Each person has one image per camera, and each image has been scaled to 128$\times$48 pixels. It provides the pose angle of each person as $0^{\circ}$(\textit{f}ront), $45^{\circ},$ $90^{\circ}$(\textit{r}ight), $135^{\circ}$, and $180^{\circ}$(\textit{b}ack).
\item
\texttt{PRID 2011}~\cite{hirzer11a} provides multiple person trajectories recorded from two different static surveillance cameras, monitoring crosswalks and sidewalks. The dataset shows a clean background, and the people in the dataset are rarely occluded. In the dataset, 200 people appear in both views. Among the 200 people, 178 people have more than 20 appearances.
\item
\texttt{iLIDS-Vid}~\cite{wang2014person} was created from pedestrians in two non-overlapping cameras monitoring an airport arrival hall. It provides multiple cropped images for 300 distinct individuals and is very challenging due to clothing similarities, lighting and viewpoint variations, cluttered backgrounds, and severe occlusions.
Since these datasets~(\texttt{PRID}~\cite{hirzer11a}, \texttt{iLIDS}~\cite{wang2014person}) provide only cropped images rather than full surveillance video sequences, we could not automatically estimate camera viewpoints and target poses. Therefore, to evaluate our method on these datasets, we annotated the pose of each person manually.
\item
\texttt{3DPeS}~\cite{baltieri2011_308} was collected by eight non-overlapping outdoor cameras monitoring different sections of a campus. Unlike other re-identification datasets~(\texttt{iLIDS}, \texttt{PRID}), it provides full surveillance video sequences:
six sets of video pairs with uncompressed images at a resolution of 704$\times$576 pixels and a frame rate of 15 frames per second, containing hundreds of people, together with calibration parameters. However, this dataset provides ground-truth person identities only for selected snapshots (\textit{i.e.} no ground-truth identities for the video sequences). For our experiments, we used three sets of video pairs and manually extracted ground-truth labels (identities, center points, widths, heights) for videos \texttt{Set3},\texttt{4},\texttt{5}.
We did not use \texttt{Set1},\texttt{2},\texttt{6} due to the small number of people and the lack of correspondences between the two cameras.
The pose of each person was estimated as described in Section~\ref{subsec:viewpoint_est}. The camera layout and sample frames are given in Fig.~\ref{fig:3DPeS}.
Even though the test sets \texttt{Set3},\texttt{4},\texttt{5} contain people with various appearances and poses, each contains only a small number of identities (\texttt{Set3}: 39, \texttt{Set4}: 24, \texttt{Set5}: 36). When the number of identities is small, the re-identification task becomes much easier because of the small pool of comparison targets. To evaluate person re-identification performance on larger-scale data, we concatenated all three sets into \texttt{3DPeS-Set All}, which contains 99 identity pairs. This is reasonable, since the sets~(\texttt{Set3},\texttt{4},\texttt{5}) do not share identities.
\end{itemize}
We make the pose annotations of \texttt{CUHK02}~\cite{li2013locally}, \texttt{iLIDS}~\cite{wang2014person}, and \texttt{PRID}~\cite{hirzer11a} publicly available,
together with the ground-truth labels of~\texttt{3DPeS}~\cite{baltieri2011_308}. The annotations and ground-truth labels are available online at~\url{https://cvl.gist.ac.kr/pamm/}.
\noindent\textbf{Evaluation methodology.~}
To compare the re-identification methods, we followed the evaluation steps described in~\cite{farenzena2010person}. First, we randomly split the person identities in the video pairs into two sets with equal numbers of identities, one for training and the other for testing. We learned several metrics, such as LMNN~\cite{weinberger2005distance}, ITML~\cite{davis2007information}, KISSME~\cite{koestinger2012large}, and Mahal~\cite{roth2014mahalanobis}, as the baseline distance functions of our person re-identification framework. After training the distance metrics, we calculated all possible matches between the testing video pairs. We repeated these evaluation steps 10 times.
We plotted the Cumulative Match Curve (CMC)~\cite{gray2007evaluating} representing the true match found within the first $n$ ranks to compare the performances of the different methods.
Among all ranks, we mainly evaluated the rank-1 accuracy, which measures how often the true correspondence between the two cameras is ranked first.
We also measured the Area Under Curve (AUC) of the CMC, which denotes the average accuracy of all ranks.
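For reference, the CMC, rank-1 accuracy, and AUC can be computed from a query-by-gallery distance matrix as follows; this evaluation helper is ours and assumes query $i$'s true match is gallery $i$:

```python
import numpy as np

def cmc(dist):
    """Cumulative Match Curve from a query-by-gallery distance matrix,
    assuming query i's true match is gallery i.

    Returns curve where curve[n-1] is the fraction of queries whose
    true match appears within the top-n candidates; curve[0] is the
    rank-1 accuracy and curve.mean() approximates the AUC.
    """
    dist = np.asarray(dist, float)
    n_q, n_g = dist.shape
    hit_rank = np.empty(n_q, dtype=int)
    for i in range(n_q):
        order = np.argsort(dist[i])                     # ascending distance
        hit_rank[i] = int(np.where(order == i)[0][0])   # 0-based rank of truth
    return np.array([(hit_rank < n).mean() for n in range(1, n_g + 1)])
```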
\begin{figure*}[t]
\centering
\includegraphics[width=2\columnwidth]{./images/figure9/figure9.pdf}
\vspace{-0pt}
\caption{Results of training matching weights $\mathbf{w}\in\mathbb{R}^{10\times1}$ with the LOMO~\cite{liao2015person} feature descriptor and several metric learning methods. The matching weights are normalized to lie in $[0, 2]$.}
\label{fig:weight_bar}
\vspace{-0pt}
\end{figure*}
\section{Experimental Results}
\label{sec:exp}
\subsection{Testing various features and metric learning methods}
\label{subsec:feature_and_metrics}
We first tested various combinations of feature descriptor extraction and metric learning methods for the baseline of our person re-identification framework.
For the feature descriptor, we tested several feature descriptor extraction methods: Histogram of Oriented Gradient~(HoG)~\cite{dalal2005histograms}, dcolorSIFT~\cite{zhao2013unsupervised}, and LOMO~\cite{liao2015person}.
For metric learning, we tested six methods: KISSME~\cite{koestinger2012large}, Mahalanobis~\cite{roth2014mahalanobis}, XQDA~\cite{liao2015person}, LMNN~\cite{weinberger2005distance}, ITML~\cite{davis2007information}, and $L_{2}$. Note that $L_{2}$ measures the Euclidean distance between two feature vectors, \textit{i.e.}, $\mathbf{M}=\mathbf{I}$ in Eq.~\eqref{equ:distance_measure}.
Therefore, we tested 18 combinations of feature descriptor extraction and metric learning methods.
Initially, we had no trained matching weights $\mathbf{w}$ in Eq.~\eqref{eq:weighted sum}; hence, we used uniform weights $\mathbf{w}=\mathbf{1}$ for multi-shot matching, which we call FullMatch-avg.
Fig.~\ref{fig:test_various_feat_met} shows the re-identification performance of each combination.
As we can see, the combination of the feature descriptor LOMO~\cite{liao2015person} and the metric learning method KISSME~\cite{koestinger2012large} shows the best re-identification performance among the 18 combinations.
It shows 81.6\% rank-1 accuracy and a 98.5\% AUC score.
In general, the LOMO feature descriptor shows promising results regardless of metric learning method (69.4\% -- 81.6\% rank-1 accuracy).
We built several fusion feature descriptors, such as (HoG + dcolorSIFT), (dcolorSIFT + LOMO), and (HoG + dcolorSIFT + LOMO), by concatenating the feature descriptors. However, their performance was lower than that of LOMO alone.
Therefore, we utilize the LOMO~\cite{liao2015person} feature descriptor as the baseline of our framework.
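As a minimal sketch of the distance measure in Eq.~\eqref{equ:distance_measure}, assuming feature descriptors are plain 1-D vectors (the helper name is ours, not from any library):

```python
import numpy as np

def pairwise_distance(x_a, x_b, M=None):
    """Mahalanobis-form distance d = sqrt((xa - xb)^T M (xa - xb)).

    With M = None (i.e. M = I) this reduces to the Euclidean (L2)
    distance, the baseline used when no metric has been learned.
    """
    d = np.asarray(x_a, dtype=float) - np.asarray(x_b, dtype=float)
    if M is None:                        # L2 case: M = identity
        return float(np.sqrt(d @ d))
    return float(np.sqrt(d @ np.asarray(M) @ d))

# With M = I, the learned-metric form and plain L2 agree.
xa, xb = [1.0, 2.0, 3.0], [2.0, 0.0, 3.0]
assert np.isclose(pairwise_distance(xa, xb),
                  pairwise_distance(xa, xb, np.eye(3)))
```

A metric learning method such as KISSME then amounts to supplying a learned positive semi-definite $\mathbf{M}$ to the same helper.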
\begin{table}[t]
\centering
\caption{Performance enhancement via PaMM. We used the same feature descriptor (LOMO) for all baselines. $\dag$ denotes a multi-shot matching method.}
\renewcommand{\arraystretch}{0.7}
{
\begin{tabular}{r||c|c|c|c|c}
\noalign{\hrule height 1pt}
Dataset& \multicolumn{5}{c}{\textbf{\ti{3DPeS - Set All~\cite{baltieri2011_308}}}} \\ \hline\hline
Baseline & \multicolumn{5}{c}{KISSME~\cite{koestinger2012large}} \\ \hline
Method $\backslash$ Rank & $r$ = 1 & $r$ = 3 & $r$ = 5 & $r$ = 10 & {AUC} \\ \hline
{SingleMatch} & 34.7 & 58.2 & 68.4 & 87.8 & 90.7 \\
{MultiQ-max$^{\dag}$} & 59.2 & 75.5 & 85.7 & 93.9 & 95.4 \\
{MultiQ-avg$^{\dag}$} & 80.6 & 91.8 & 94.9 & 98.0 & 98.5 \\
{FullMatch-min$^{\dag}$} & 78.6 &\tbf{93.9}&\tbf{96.9}& 98.0 & 98.8 \\
{FullMatch-avg$^{\dag}$} & 81.6 &\tbf{93.9}& 95.9 & 98.0 & 98.5 \\
PaMM$^{\dag}${\scriptsize(ours)} &\tbf{83.7}&\tbf{93.9}&\tbf{96.9}&\tbf{100} & \tbf{99.2} \\ \hline\hline
Baseline & \multicolumn{5}{c}{Mahal~\cite{roth2014mahalanobis} }\\ \hline
Method $\backslash$ Rank & $r$ = 1 & $r$ = 3 & $r$ = 5 & $r$ = 10 & {AUC} \\ \hline
SingleMatch & 39.8 & 61.2 & 75.5 & 86.7 & 91.8 \\
{MultiQ-max$^{\dag}$} & 60.2 & 72.5 & 79.6 & 89.8 & 93.8 \\
{MultiQ-avg$^{\dag}$} & 74.5 & 87.8 & 90.8 & 95.9 & 96.6 \\
FullMatch-min$^{\dag}$ & 75.5 &\tbf{90.8}& 92.7 & 93.9 &\tbf{97.6} \\
FullMatch-avg$^{\dag}$ & 80.6 & 88.8 &\tbf{93.9}&\tbf{95.9}& 96.9 \\
PaMM$^{\dag}${\scriptsize(ours)} &\tbf{81.6}& 89.8 &\tbf{93.9}&\tbf{95.9}&\tbf{97.6} \\ \hline\hline
Baseline & \multicolumn{5}{c}{XQDA~\cite{liao2015person} } \\ \hline
Method $\backslash$ Rank & $r$ = 1 & $r$ = 3 & $r$ = 5 & $r$ = 10 & {AUC} \\ \hline
SingleMatch & 46.9 & 68.4 & 78.6 & 90.8 & 93.7 \\
{MultiQ-max$^{\dag}$} & 53.1 & 71.4 & 82.7 & 91.8 & 94.2 \\
{MultiQ-avg$^{\dag}$} &\tbf{75.5}& 87.8 &\tbf{93.9}& 96.9 & 97.9 \\
FullMatch-min$^{\dag}$ & 73.5 & 87.8 &\tbf{93.9}&\tbf{98.0}& 97.8 \\
FullMatch-avg$^{\dag}$ & 73.5 & 89.8 &\tbf{93.9}& 95.9 & 97.9 \\
PaMM$^{\dag}${\scriptsize(ours)} &\tbf{75.5}&\tbf{90.8}&\tbf{93.9}&\tbf{98.0}&\tbf{98.1} \\ \hline\hline
Baseline & \multicolumn{5}{c}{ITML~\cite{davis2007information} }\\ \hline
Method $\backslash$ Rank & $r$ = 1 & $r$ = 3 & $r$ = 5 & $r$ = 10 & {AUC} \\ \hline
SingleMatch & 44.9 & 69.4 & 80.6 & 90.8 & 93.8 \\
{MultiQ-max$^{\dag}$} & 58.2 & 70.4 & 78.6 & 88.8 & 93.6 \\
{MultiQ-avg$^{\dag}$} &\tbf{75.5}& 82.7 & 89.8 & 94.9 & 96.4 \\
FullMatch-min$^{\dag}$ & 63.3 & 86.7 & 91.8 & 94.9 & 97.1 \\
FullMatch-avg$^{\dag}$ & 74.5 & 87.8 & 91.8 & 94.9 & 97.2 \\
PaMM$^{\dag}${\scriptsize(ours)} & 73.5 &\tbf{89.8}&\tbf{92.9}&\tbf{95.9}&\tbf{97.5} \\ \hline\hline
Baseline & \multicolumn{5}{c}{LMNN~\cite{weinberger2005distance}}\\ \hline
Method $\backslash$ Rank & $r$ = 1 & $r$ = 3 & $r$ = 5 & $r$ = 10 & {AUC} \\ \hline
SingleMatch & 45.9 & 68.4 & 77.6 & 90.8 & 93.6 \\
{MultiQ-max$^{\dag}$} & 51.0 & 72.5 & 81.6 & 89.8 & 93.7 \\
{MultiQ-avg$^{\dag}$} &\tbf{70.4}&\tbf{87.8}& 92.9 & 96.9 & 97.4 \\
FullMatch-min$^{\dag}$ & 69.4 & 86.7 &\tbf{93.9}& 96.9 &\tbf{97.8} \\
FullMatch-avg$^{\dag}$ & 69.4 & 86.7 & 90.8 & 95.9 & 97.3 \\
PaMM$^{\dag}${\scriptsize(ours)} &\tbf{70.4}&\tbf{87.8}& 91.8 &\tbf{98.0}&\tbf{97.8} \\
\noalign{\hrule height 1pt}
\end{tabular}}
\label{Tab1}
\end{table}
\subsection{Training multi-shot matching weights}
\label{subsec:exp_training_weights}
Based on Section~\ref{subsec:train_weights}, we train the matching weights $\textbf{w}$ in this section.
In practice, we consider 10 weights rather than 16 weights due to the weight symmetry: we let ${w}_{pq}={w}_{qp}$ for $p\neq q$. Consequently, we learn four same-pose matching weights $({w}_{ff},{w}_{rr},{w}_{bb},{w}_{ll})$ and six different-pose matching weights $({w}_{fr},{w}_{fb},{w}_{rb},{w}_{rl},{w}_{bl},{w}_{fl})$.
As mentioned in Section~\ref{sec:data_metho}, in order to train the weights $\mathbf{w}\in \mathbb{R}^{10\times1}$, we use two datasets: \texttt{CUHK02}~\cite{li2013locally} and \texttt{VIPeR}~\cite{gray2007evaluating}.
By using the datasets, we generate 3,520 positive image pairs and 35,200 negative image pairs that cover diverse pose combinations as shown in Fig.~\ref{fig:training_sam_fig}.
Here, a positive image pair is a pair of images of the same person and a negative image pair is a pair of images of different people, regardless of the poses of the people. We then extracted pairwise feature distances $\left\{ x_{ff}, x_{rr}, \dots,x_{fl}\right\} $ for all image pairs by following the metric learning steps described in Section~\ref{sec:data_metho}.
Distributions of feature distances $\left\{x_{pq}\right\}$ are plotted in Fig.~\ref{fig:sample_dist}. For example, Fig.~\ref{fig:sample_dist} (a) shows the feature distance distribution of \textit{f}ront-\textit{f}ront image pairs of the same person (positive) and of different people (negative).
Unfortunately, we could not make $(r,l)$ pairs using the training datasets \texttt{CUHK02} and \texttt{VIPeR}, since they do not contain such pairs. In order to obtain the distribution of $x_{rl}$, we assume that $x_{rl}$ follows a distribution similar to that of $x_{fb}$.
Note that a large statistical distance between positive and negative distributions implies high discriminating power.
We observe that the same-pose matchings (Fig.~\ref{fig:sample_dist} (a,b,f,g) left two columns) are more discriminative than the different-pose matchings (Fig.~\ref{fig:sample_dist} (c-e,h-j) right three columns).
After obtaining distributions of feature distances, we generate training samples $\left( \mathbf{ x }_{ a },{ y }_{ a } \right)$, where $\mathbf{ x }_{ a }\in \mathbb{R}^{10\times1}$, $y_{a} \in \left\{1,-1\right\}$ by randomly selecting each ${x}_{pq}$ from each distribution.
Fig.~\ref{fig:weight_bar} shows the result of weight training with the LOMO~\cite{liao2015person} feature descriptor and several metric learning methods.
The result indicates that the weights of the same-pose matchings $(ff,rr,bb,ll)$ are generally larger than those of the different-pose matchings $(fr,fb,rb,rl,bl,fl)$. The trained weights show similar tendencies regardless of the metric learning method.
For the following experiments, we use these trained matching weights for each baseline (feature descriptor and metric learning) individually.
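The symmetry reduction from 16 ordered pose pairs to 10 weights, and the weighted-sum matching score of Eq.~\eqref{eq:weighted sum}, can be sketched as follows (the pose letters and the dictionary-based interface are illustrative assumptions):

```python
import itertools

POSES = ("f", "r", "b", "l")          # front / right / back / left (assumed)

# 4 same-pose keys + 6 unordered different-pose keys = 10 weights,
# exploiting the symmetry w_pq = w_qp.
WEIGHT_KEYS = [p + p for p in POSES] + \
              [p + q for p, q in itertools.combinations(POSES, 2)]

def weight_key(p, q):
    """Map an ordered pose pair (p, q) onto its symmetric weight key."""
    return p + q if p + q in WEIGHT_KEYS else q + p

def matching_score(distances, w):
    """Weighted sum of pose-pairwise feature distances.

    `distances` maps ordered pose pairs, e.g. ("f", "b"), to a feature
    distance x_pq; `w` maps the 10 symmetric keys to trained weights.
    """
    return sum(w[weight_key(p, q)] * x for (p, q), x in distances.items())

assert len(WEIGHT_KEYS) == 10
assert weight_key("b", "f") == weight_key("f", "b") == "fb"
```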
\begin{table*}[t]
\centering
\caption{Performance comparison result with dataset \texttt{3DPeS}. $\dag$ denotes a multi-shot matching method. The best and second best scores in each rank are marked with \tbf{bold} and \tb{blue}. AUC is an area under a curve of CMC.}
\begin{tabular}{l||c|c|c||c|c|c||c|c|c||c|c|c|c|c}
\noalign{\hrule height 1pt}
\qquad\qquad\qquad Dataset & \multicolumn{3}{c||}{\textbf{\ti{3DPeS - Set 3}}} & \multicolumn{3}{c||}{\textbf{\ti{3DPeS - Set 4}}}& \multicolumn{3}{c||}{\textbf{\ti{3DPeS - Set 5}}} & \multicolumn{5}{c}{\textbf{\ti{3DPeS - Set All~\cite{baltieri2011_308}}}} \\ \hline
\qquad\quad Method $\backslash$ Rank &$r$ = 1 & $r$ = 3 & {AUC} & $r$ = 1 & $r$ = 3& {AUC} & $r$ = 1 & $r$ = 3 & {AUC} & {$r$=1} & {$r$=3} & {$r$=5} & {$r$=10}& {AUC} \\ \hline
LOMO + $L_{2}$ & 47.4 & 63.2 & 85.0 & 54.2 & 79.2 & 86.8 & 41.7 & 80.6 & 90.0 & 25.5 & 39.8 & 52.0 & 71.4 & 85.9 \\
LOMO + KISSME~\cite{koestinger2012large} & 36.8 & 57.9 & 80.6 & 41.7 & 66.7 & 82.6 & 41.7 & 69.4 & 87.5 & 34.7 & 58.2 & 68.4 & 87.8 & 90.7 \\
LOMO + Mahal\cite{roth2014mahalanobis} & 39.5 & 60.5 & 82.3 & 37.5 & 66.7 & 83.0 & 44.4 & 72.2 & 88.1 & 39.8 & 61.2 & 75.5 & 86.7 & 91.8 \\
LOMO + XQDA\cite{liao2015person} & 42.0 & 73.7 & 87.3 & 50.0 & 75.0 & 87.5 & 52.8 & 86.1 & 92.3 & 46.9 & 68.4 & 78.6 & 90.8 & 93.7 \\
LOMO + ITML\cite{davis2007information} & 52.6 & 76.3 & 88.4 & 58.3 & 75.0 & 85.1 & 58.3 & 80.6 & 89.0 & 44.9 & 69.4 & 80.6 & 90.8 & 93.8 \\
LOMO + LMNN\cite{weinberger2005distance} & 50.0 & 73.7 & 86.8 & 58.3 & 75.0 & 85.1 & 52.8 & 69.4 & 87.3 & 45.9 & 68.4 & 77.6 & 90.8 & 93.6 \\ \hline
MultiQ-max$^{\dag}$ & 55.3 & 76.3 & 90.7 & 62.5 &\tb{79.2}& 92.4 & 58.3 & 83.3 & 92.6 & 59.2 & 75.5 & 85.7 & 93.9 & 95.4 \\
MultiQ-avg$^{\dag}$ &\tb{81.6}& 94.7 &\tR{98.3}&\tb{83.3}&\tR{100}&\tb{98.3}& 69.4 &\tb{88.9}&\tR{96.3}& 80.6 &\tb{91.8}& 94.9 &\tb{98.0}& 98.5 \\
FullMatch-min$^{\dag}$ & 78.9 &\tb{90.0}&\tb{98.1}&\tb{83.3}&\tR{100}& 97.9 & 69.4 &\tR{90.0}&\tb{96.0}& 78.6 &\tR{93.9}&\tR{96.9}&\tb{98.0}& 98.8 \\
FullMatch-avg$^{\dag}$ &\tR{84.2}&\tR{94.7}&\tR{98.3}&\tb{83.3}&\tR{100}& 97.9 &\tb{72.2}&\tR{90.0}&\tR{96.3}&\tb{81.6}&\tR{93.9}&\tb{95.9}&\tb{98.0}& 98.5 \\
{PaMM-ns$^{\dag}${\scriptsize(ours)}} &\tR{84.2}&\tR{94.7}&\tR{98.3}&\tR{91.7}&\tR{100}&\tR{99.3}&\tb{72.2}&\tR{90.0}&\tb{96.0}&\tR{83.7}&\tR{93.9}&\tR{96.9}&\tR{100} &\tb{99.1}\\
{PaMM$^{\dag}${\scriptsize(ours)}} &\tR{84.2}&\tR{94.7}&\tR{98.3}&\tR{91.7}&\tR{100}&\tR{99.3}&\tR{75.0}&\tR{90.0}&\tR{96.3}&\tR{83.7}&\tR{93.9}&\tR{96.9}&\tR{100} &\tR{99.2}\\ \noalign{\hrule height 1pt}
\end{tabular}
\label{Tab2}
\end{table*}
\subsection{Performance enhancements via PaMM}
\label{subsec:perform_enhance}
According to the experimental result in Section~\ref{subsec:feature_and_metrics}, we utilized LOMO~\cite{liao2015person} for extracting a feature descriptor from each appearance and utilized several metric learning methods (KISSME~\cite{koestinger2012large} and others~\cite{roth2014mahalanobis,liao2015person,davis2007information,weinberger2005distance}) as the baselines of this experiment.
In this experiment, we compare the person re-identification performance based on various matching strategies (\textit{e.g.} single-shot, multi-shot, and proposed) as follows:
\begin{figure*}[t]
\centering
\subfigure[\texttt{3Dpes}~\cite{baltieri2011_308}]{\includegraphics[width=0.63\columnwidth]{./images/figure13/figure13_a.pdf}}\hspace{15pt}
\subfigure[\texttt{PRID 2011}~\cite{hirzer11a}]{\includegraphics[width=0.63\columnwidth]{./images/figure13/figure13_b.pdf}}\hspace{15pt}
\subfigure[\texttt{iLIDS-Vid}~\cite{wang2014person}]{\includegraphics[width=0.63\columnwidth]{./images/figure13/figure13_c.pdf}}
\caption{Qualitative analysis results of PaMM. The true matches are highlighted with red boxes.}
\label{fig:Ex_com_result_PaMM}
\end{figure*}
\begin{itemize}
\item
SingleMatch: performing single-shot re-identification, which uses only a single appearance of each person for matching.
We randomly selected the single appearance of each identity. For unbiased selection, we repeated the appearance selection 10 times and calculated the average performance as the final result.
\item
MultiQ-max: merging multiple appearances (\textit{i.e.} appearance feature vectors) into a single merged appearance based on the max-pooling approach: a merged feature vector takes the maximum value in each dimension from all feature vectors.
\item
MultiQ-avg: merging multiple appearances into a single merged appearance based on the average-pooling approach: a merged feature vector takes the average value in each dimension from all feature vectors.
Zheng~\emph{et al}\onedot~\cite{zheng2015scalable} employed MultiQ-max and MultiQ-avg to efficiently merge multiple appearances into a single appearance and performed re-identification using the merged appearances.
\item
FullMatch-min: matching all possible pairs between multiple appearances and selecting the smallest matching score for the final score as used in \cite{farenzena2010person}.
\item
FullMatch-avg: matching all possible pairs between multiple appearances and averaging all matching scores as used in~\cite{li2015multi}.
\item
PaMM: proposed method.
\end{itemize}
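The multi-shot baselines above differ only in where the pooling happens: MultiQ pools feature vectors before matching, while FullMatch pools matching scores afterwards. A minimal sketch, assuming appearances stacked as rows of an array:

```python
import numpy as np

def multiq_max(features):
    """MultiQ-max: per-dimension maximum over all appearance vectors."""
    return np.max(features, axis=0)

def multiq_avg(features):
    """MultiQ-avg: per-dimension average over all appearance vectors."""
    return np.mean(features, axis=0)

def fullmatch(scores, mode="avg"):
    """FullMatch: reduce the scores of all appearance pairs.

    `scores` holds one matching score per (probe, gallery) appearance
    pair; the final score is their minimum or their average.
    """
    return min(scores) if mode == "min" else sum(scores) / len(scores)

feats = np.array([[0.2, 0.9], [0.7, 0.1]])
assert np.allclose(multiq_max(feats), [0.7, 0.9])
assert np.allclose(multiq_avg(feats), [0.45, 0.5])
```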
For validating the performance enhancement via PaMM, we used the dataset \texttt{3DPeS Set All} and followed the evaluation steps explained in Section~\ref{sec:data_metho}.
As shown in Table~\ref{Tab1}, the proposed PaMM considerably improves upon the single-shot matching methods (SingleMatch) for all metric learning methods and all ranks ($r$=1,3,5,10). The performance enhancement at $r$=1 is remarkable (24.5--49\%).
Compared to single-shot matching methods, multi-shot matching methods (MultiQ, FullMatch, PaMM) show better re-identification performance, since they exploit several appearances for both metric learning and appearance matching.
Among the various multi-shot matching strategies (MultiQ-max, MultiQ-avg, FullMatch-min, FullMatch-avg), the proposed PaMM shows the best performance regardless of the baseline: showing the best AUC scores for all baselines and showing the best rank-1 accuracies except for the ITML case. The result implies that the proposed PaMM, which exploits people's pose information, can improve the re-identification performance regardless of the baseline.
In the consecutive experiments, we use KISSME~\cite{koestinger2012large} as the baseline metric learning method for PaMM and other multi-shot matching methods (MultiQ, FullMatch).
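Reading the quoted rank-1 enhancement as the absolute gap between SingleMatch and PaMM in Table~\ref{Tab1}, the 24.5--49\% range can be checked directly:

```python
# Rank-1 accuracies from Table 1: (SingleMatch, PaMM) per baseline.
rank1 = {
    "KISSME": (34.7, 83.7),
    "Mahal":  (39.8, 81.6),
    "XQDA":   (46.9, 75.5),
    "ITML":   (44.9, 73.5),
    "LMNN":   (45.9, 70.4),
}

# Absolute rank-1 gain of PaMM over SingleMatch for each baseline.
gains = {k: round(pamm - single, 1) for k, (single, pamm) in rank1.items()}
assert min(gains.values()) == 24.5 and max(gains.values()) == 49.0
```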
\subsection{Test results of \texttt{3DPeS} dataset}
\label{subsec:performance_comparison}
In this experiment, we provide the detailed evaluation results of the \texttt{3DPeS} dataset. We tested \texttt{3DPeS-Set3}, \texttt{4}, \texttt{5}, and \texttt{All}, and denote different versions of the proposed person re-identification framework as follows:
\begin{itemize}
\item
PaMM: PaMM with all proposed methods.
\item
PaMM-ns: PaMM without appearance selection.
\end{itemize}
As with the experiment in Section~\ref{subsec:perform_enhance}, we tested several single-shot and multi-shot matching methods. The multi-shot matching methods (MultiQ, FullMatch, and PaMM) utilized the LOMO~\cite{liao2015person} feature descriptor and the KISSME~\cite{koestinger2012large} metric learning method as their baselines.
Table~\ref{Tab2} shows that our methods outperform all single-shot matching and other multi-shot matching methods.
Even though FullMatch-avg and FullMatch-min also exploit all multiple appearances of targets, the performance of both methods is lower than that of PaMM.
This suggests that the proposed PaMM reasonably extracts key appearances among multiple appearances (Section~\ref{subsec:model_gen}) and efficiently matches multi-pose models (Section~\ref{subsec:multi-pose matching}).
Compared to PaMM-ns, PaMM showed only a slight improvement, since the test dataset \texttt{3DPeS} does not present the challenges described in Section~\ref{subsec:model_gen}. We expect that the proposed appearance selection method in Section~\ref{subsec:model_gen} would yield a larger performance enhancement on more challenging and complex datasets.
Figure~\ref{fig:Ex_com_result_PaMM} (a) shows several qualitative analysis results of the \texttt{3DPeS} dataset. As we can see, the proposed PaMM correctly finds correspondences under diverse viewpoint variations of people.
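The CMC curves and AUC scores used throughout (Table~\ref{Tab2} defines AUC as the area under the CMC curve) can be computed from per-query match ranks. A minimal sketch, taking the AUC as the mean of the discrete CMC curve (our assumption about the exact normalization):

```python
import numpy as np

def cmc(match_ranks, max_rank):
    """Cumulative Match Characteristic from per-query match ranks.

    `match_ranks[i]` is the 1-based rank at which query i's true match
    appears in the sorted gallery; CMC(r) is the fraction of queries
    matched at rank <= r.
    """
    ranks = np.asarray(match_ranks)
    return np.array([np.mean(ranks <= r) for r in range(1, max_rank + 1)])

def auc(cmc_curve):
    """AUC as the mean of the (discrete) CMC curve, in percent."""
    return 100.0 * float(np.mean(cmc_curve))

curve = cmc([1, 1, 2, 5], max_rank=5)
assert np.allclose(curve, [0.5, 0.75, 0.75, 0.75, 1.0])
```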
\subsection{Test results of \texttt{PRID} and \texttt{iLIDS} datasets}
We also provide evaluation results and comparisons with other state-of-the-art person re-identification methods on two more public datasets: \texttt{PRID}~\cite{hirzer11a} and \texttt{iLIDS}~\cite{wang2014person}. In these experiments, PaMM also utilized the LOMO feature descriptor and KISSME metric learning method for its baseline.
In the \texttt{PRID} dataset, 200 people appear in both views. We evaluated PaMM under two different test scenarios marked as $^{200}$ and $^{178}$.
Scenario $^{200}$ uses all 200 people pairs for testing. On the other hand, scenario $^{178}$ uses only 178 people pairs having more than 20 appearances. Most works~\cite{karanam2015person, wang2014person, wang2016person, limulti, liu2015spatio, you2016top} tested under the scenario $^{178}$, since their methods needed a sufficient video length (\textit{i.e.} multiple appearances) for extracting spatiotemporal features.
\begin{table}[t]
\centering
\caption{Performance comparison result with dataset \texttt{PRID}~\cite{hirzer11a}. The best and second best scores in each rank are marked with \tbf{bold} and \tb{blue}. ${\dag}$ denotes a multi-shot matching method.}
{\small
\begin{tabular}{r||c|c|c|c}
\noalign{\hrule height 1pt}
\qquad\qquad\qquad Dataset& \multicolumn{4}{c}{\textbf{\texttt{{PRID 2011}}}\cite{hirzer11a}} \\ \hline
\qquad Method $\backslash$ Rank & $r$=1 &$r$=5 & $r$=10 & $r$=20 \\ \hline
$^{178}$SDALF~\cite{farenzena2010person} & 4.9 & 21.5 & 30.9 & 45.2 \\
$^{200}$LOMO\cite{liao2015person} + KISSME\cite{koestinger2012large} & 22.0 & 43.0 & 55.0 & 70.0 \\
$^{178}$Salience~\cite{zhao2013unsupervised} & 25.8 & 43.6 & 52.6 & 62.0 \\
$^{200}$LOMO + XQDA~\cite{liao2015person} & 39.0 & 68.0 & 83.0 & 91.0 \\ \hline
$^{178}$SDALF$^{\dag}$~\cite{farenzena2010person} & 5.2 & 20.7 & 32.0 & 47.9 \\
$^{178}$DVR$^{\dag}$~\cite{wang2016person} & 40.0 & 71.7 & 84.5 & 97.2 \\
$^{178}$DTDL$^{\dag}$~\cite{karanam2015person} & 40.6 & 69.7 & 77.8 & 85.6 \\
$^{178}$Salience+DVR$^{\dag}$~\cite{wang2014person} & 41.7 & 64.5 & 77.5 & 88.8 \\
$^{178}$AFDA$^{\dag}$~\cite{limulti} & 43.0 & 72.7 & 84.6 & 91.6 \\
$^{200}$LOMO + XQDA$^{\dag}$~\cite{liao2015person} & 43.0 & 82.0 & 90.0 & 98.0 \\
$^{178}$TDL$^{\dag}$~\cite{you2016top} & 56.7 & 80.0 & 87.6 & 93.6 \\
$^{178}$STFV3D$^{\dag}$~\cite{liu2015spatio} & 64.1 & 87.3 & 89.9 & 92.0 \\
$^{200}$RNN$^{\dag}$~\cite{mclaughlin2016recurrent} & 70.0 & 90.0 & 95.0 & 97.0 \\
$^{200}$PaMM$^{\dag}$ (Ours) &\tb{76.0}&\tb{94.0}&\tb{98.0}&\tb{99.0} \\
$^{178}$PaMM$^{\dag}$ (Ours) &\tR{78.1}&\tR{95.5}&\tR{98.9}&\tR{100} \\
\noalign{\hrule height 1pt}
\end{tabular}}
\label{Tab3}
\end{table}
In Table~\ref{Tab3}, PaMM shows the best performance among ten state-of-the-art methods for all ranks while significantly enhancing its baseline performance (54\% enhancement at $r$=1).
Although our baseline (LOMO+KISSME) alone showed low performance, the proposed PaMM improved it significantly, to the best performance among the state-of-the-art methods. The rank-1 accuracies of $^{200}$PaMM and $^{178}$PaMM were 6\% and 14\% higher than those of $^{200}$RNN$^{\dag}$~\cite{mclaughlin2016recurrent} and $^{178}$STFV3D$^{\dag}$~\cite{liu2015spatio}, respectively.
Table~\ref{Tab4} shows the result of the performance comparison with dataset \texttt{iLIDS}~\cite{wang2014person}. As mentioned in Section~\ref{sec:data_metho}, \texttt{iLIDS} is much more challenging than \texttt{PRID} due to severe occlusions and lighting variations. PaMM improved its baseline performance for all ranks (46\% enhancement at $r$=1).
Among ten state-of-the-art methods, the proposed PaMM ranked second for rank-1 accuracy and third for the other ranks, unlike the results on the \texttt{PRID} dataset.
We believe that the many severe occlusions of people break the assumption of the proposed multi-shot matching model, which is not satisfied under severe appearance variations.
In particular, when the same-pose matching score is unreliable due to severe occlusions, it has a negative influence on the final multi-pose model matching score aggregation and degrades the re-identification performance.
However, even though the proposed method can be affected by severe occlusions, PaMM shows performance comparable to RNN$^{\dag}$~\cite{mclaughlin2016recurrent}, which shows the best rank-1 accuracy; the rank-1 accuracy gap between PaMM and RNN$^{\dag}$ is only $0.7$\%.
In addition, when we consider both benchmark datasets \texttt{PRID} and \texttt{iLIDS}, the proposed PaMM generally shows superior performance compared to the other methods.
It should also be noted that PaMM can achieve better performance by adopting better baseline methods.
Several qualitative analysis results of the proposed PaMM with \texttt{PRID} and \texttt{iLIDS} datasets are illustrated in Fig.~\ref{fig:Ex_com_result_PaMM} (b,c).
\begin{table}[t]
\centering
\caption{Performance comparison result with dataset \texttt{iLIDS}~\cite{wang2014person}. The best and second best scores in each rank are marked with \tbf{bold} and \tb{blue}. ${\dag}$ denotes a multi-shot matching method.}
{\small
\begin{tabular}{r||c|c|c|c}
\noalign{\hrule height 1pt}
\qquad\qquad\qquad Dataset& \multicolumn{4}{c}{\textbf{\texttt{{iLIDS-Vid}}}\cite{wang2014person}} \\ \hline
\qquad Method $\backslash$ Rank & $r$=1 &$r$=5 & $r$=10 & $r$=20 \\ \hline
SDALF~\cite{farenzena2010person} & 5.1 & 14.9 & 20.7 & 31.3 \\
Salience~\cite{zhao2013unsupervised} & 10.2 & 24.8 & 35.5 & 52.9 \\
LOMO~\cite{liao2015person} + KISSME~\cite{koestinger2012large} & 11.3 & 27.3 & 37.3 & 49.7 \\
LOMO + XQDA~\cite{liao2015person} & 18.0 & 41.2 & 54.7 & 67.0 \\ \hline
SDALF$^{\dag}$~\cite{farenzena2010person} & 6.3 & 18.8 & 27.1 & 37.3 \\
LOMO + XQDA$^{\dag}$~\cite{liao2015person} & 20.3 & 47.0 & 63.0 & 78.7 \\
DTDL$^{\dag}$~\cite{karanam2015person} & 25.9 & 48.2 & 57.3 & 68.9 \\
Salience+DVR$^{\dag}$~\cite{wang2014person} & 30.9 & 54.4 & 65.1 & 77.1 \\
AFDA$^{\dag}$~\cite{limulti} & 37.5 & 62.7 & 73.0 & 81.8 \\
DVR$^{\dag}$~\cite{wang2016person} & 39.5 & 61.1 & 71.7 & 81.0 \\
STFV3D$^{\dag}$~\cite{liu2015spatio} & 44.3 & 71.7 & 83.7 & 91.7 \\
TDL$^{\dag}$~\cite{you2016top} & 56.3 &\tR{87.6}&\tR{95.6}&\tR{98.3} \\
PaMM$^{\dag}$ (Ours) &\tb{57.3}& 79.3 & 87.3 & 93.3 \\
RNN$^{\dag}$~\cite{mclaughlin2016recurrent} &\tR{58.0}&\tb{84.0}&\tb{91.0}&\tb{96.0} \\
\noalign{\hrule height 1pt}
\end{tabular}}
\label{Tab4}
\end{table}
%
\section{Conclusions}
\label{sec:conclusion}
In this paper, we proposed a novel framework for person re-identification, called Pose-aware Multi-shot Matching (PaMM), which robustly estimates people's poses and efficiently conducts multi-shot matching based on the pose information. We extensively evaluated and compared the performance of the proposed method using public person re-identification datasets such as \texttt{3DPeS}, \texttt{PRID 2011} and \texttt{iLIDS-Vid}.
The idea of this work is simple but very effective.
We showed that PaMM can improve person re-identification regardless of its baseline method. In addition, PaMM can flexibly adopt any existing person re-identification method (\textit{e.g.} feature extraction and metric learning methods) for computing pairwise feature distance in our framework.
The results showed that the proposed methods are promising for person re-identification under diverse pose variations, and that PaMM outperforms other state-of-the-art re-identification methods.
We expect that PaMM will achieve much better re-identification performance when it adopts better baseline methods.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Several mechanisms have been proposed for the acceleration of UHECR. At the
most general level, they can be classified into two large groups: bottom-up
and top-down mechanisms. Bottom-up mechanisms, although more conservative,
imply the stretching of rather well known acceleration processes to their
theoretical limits (and sometimes beyond). They involve particle acceleration
in the accretion flows of cosmological structures (e.g., Norman, Melrose and
Atcherberg 1995, Kang, Ryu and Jones 1996), galaxy collisions (Cesarsky and
Ptuskin 1993, Al-Dargazelli et al. 1997, but see Jones 1998), galactic wind
shocks (Jokipii and Morfill 1987), pulsars (Hillas 1984, Shemi 1995), active
galactic nuclei (Biermann and Strittmatter 1987), powerful radio galaxies
(Rawlings and Saunders 1991, Biermann 1998), gamma ray bursts (Vietri 1995,
1998, Waxmann 1995, but see Stanev, Schaefer and Watson 1996), etc. Top-down
mechanisms, on the other hand, escape from the acceleration problems at the
expense of exoticism. They already form the particles at high energies and
involve the most interesting Physics, for example: the decay of topological
defects into superheavy gauge and Higgs bosons, which then decay into high
energy neutrinos, gamma rays and nucleons with energies up to the GUT scale
($\sim 10^{25}$ eV) (e.g., Bhattacharge, Hill and Schramm 1992, Sigl, Schramm
and Bhattacharge 1994, Berezinsky, Kachelrie and Vilenkin 1997, Berezinsky
1998, Birkel and Sarkar 1998), high energy neutrino annihilation on relic
neutrinos (Waxmann 1998), etc.
In general, the distribution of sources of UHECR particles in bottom-up
mechanisms should be related to the distribution of luminous matter in
the Universe. In contrast, for top-down mechanisms, an isotropic
distribution of sources should be expected in most of the models
(c.f., Hillas 1998, Dubovsky and Tinyakov 1998). Hence the importance
of distinguishing observationally between these two scenarios.
A possible correlation between compact radio quasars and the five
most energetic UHECR has already been proposed by Farrar and
Biermann (1998).
Furthermore,
the clusters of events observed by AGASA (Hayashida et al 1996) are
consistent with UHECR production regions at distances of the order of
$\sim 30$ Mpc, for an intervening IGMF $\sim 10^{-10}$ to $10^{-9}$ Gauss
(Medina Tanco 1998a). Local maxima in the galaxy density distribution are
located at those positions. This can be viewed as a point in favor of the
hypothesis that the UHECR sources are distributed in the same way as the
luminous matter in the local Universe does. Furthermore, it could naturally
explain the extension of the UHECR spectrum beyond the GZK cut-off hinted
by extreme high energy events of Volcano Ranch (Linsley 1963, 1978), Haverah
Park (Watson 1991, Lawrence, Reid and Watson 1991), Fly's Eye (Bird et al.
1995) and AGASA (Hayashida et al., 1994), and recently confirmed by the
latter experiment (Takeda et al. 1998, Nagano 1998).
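The consistency of $\sim 30$ Mpc source distances with an IGMF of $10^{-10}$ to $10^{-9}$ Gauss can be illustrated with the small-angle random-walk estimate of the rms deflection across randomly oriented field cells; the formula below is one common normalization of this standard order-of-magnitude estimate, not a result derived in this paper:

```python
def rms_deflection_deg(E_eV, d_mpc, B_gauss, L_c_mpc=1.0, Z=1):
    """Random-walk rms deflection of a charge crossing cells of
    randomly oriented field (order-of-magnitude estimate):

        theta ~ 0.8 deg * Z * (E / 1e20 eV)^-1 * (d / 10 Mpc)^0.5
                * (L_c / 1 Mpc)^0.5 * (B / 1e-9 G)
    """
    return (0.8 * Z * (E_eV / 1e20) ** -1.0 * (d_mpc / 10.0) ** 0.5
            * (L_c_mpc / 1.0) ** 0.5 * (B_gauss / 1e-9))

# A 1e20 eV proton over 30 Mpc in a 1e-9 G field: deflected by
# roughly a degree, consistent with small arrival-direction clusters.
theta = rms_deflection_deg(1e20, 30.0, 1e-9)
assert 1.0 < theta < 2.0
```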
In the following sections we use numerical simulations to assess both the
statistical significance of the AGASA result (Takeda et al. 1998) at the very
end of the energy spectrum, and the degree to which it is compatible with a
non-homogeneous distribution of sources that follows closely the spatial
distribution of luminous matter in the nearby Universe. The possibility of
solving the puzzle in few years of integration with the next generation of
large area ($10^{3}$ km$^{2}$) experiments is exemplified through the
Southern site of the Auger observatory.
\section{Numerical approach and discussion of results}
Energy losses due to photo-pion production in interactions with the
cosmic microwave background, should lead to the formation of a bump in
the spectrum beyond $5 \times 10^{19}$ eV, followed by the GZK cut-off
(Greisen 1966, Zatsepin and Kuzmin 1966) at higher energies.
The existence and
exact position of these spectral features depends on the spatial
distribution of the sources, their cosmological evolution and injection
spectrum at the sources (Berezinsky and Grigor'eva 1988). Nevertheless,
both bump and cut-off tend to smooth away for predominantly nearby
sources or strong cosmological evolution. The most natural way to avoid
the GZK cut-off is by invoking either top-down mechanisms or the
existence of relatively very near (compared with the UHECR mean
free path) sources.
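A toy continuous-loss model illustrates why super-GZK particles require nearby sources. The constant $\sim 20$ Mpc loss length and the sharp $5 \times 10^{19}$ eV threshold below are crude stand-ins for the full photo-pion interaction rate, chosen only for illustration:

```python
import math

def propagate_energy(E0, distance_mpc, loss_length_mpc=20.0,
                     threshold=5e19):
    """Crude continuous-loss model of photo-pion attenuation.

    Above `threshold` (eV) the proton loses energy as
    dE/dx = -E / loss_length; below it the losses are switched off.
    Both the constant loss length and the sharp threshold are
    simplifying assumptions, not the full GZK interaction rate.
    """
    E, step, x = E0, 0.1, 0.0          # integrate in 0.1 Mpc steps
    while x < distance_mpc and E > threshold:
        E *= math.exp(-step / loss_length_mpc)
        x += step
    return E

# A 3e20 eV proton is degraded to near the threshold within ~50 Mpc,
# illustrating why super-GZK events point to nearby sources.
assert propagate_energy(3e20, 50.0) < 1e20
```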
The spectrum calculated by Yoshida and Teshima (1993) for an isotropic,
homogeneous distribution of cosmic ray sources, and shown superimposed
on the observed AGASA spectrum in Takeda et al (1998), seems unable to
explain the extension of the UHECR spectrum beyond $10^{20}$ eV. It is
not clear, however, whether the available data ($461$ events for
$E > 10^{19}$ eV, and only $6$ events for $E > 10^{20}$ eV) is
sufficient to support any conjecture about the actual shape of
the spectrum above $10^{20}$ eV. Furthermore, it is the nearby sources
that are expected to be responsible for this region of the spectrum
and their distribution is far from isotropic or homogeneous.
Therefore, it is also unclear what influence the differential exposure
in declination, peculiar to the AGASA experiment, has on the deduced
spectral shape at the highest energies.
To analyze the effects of the previously mentioned factors on the
observed energy spectrum, two different sets of simulations are
discussed here.
As a check on the simulations, the energy spectrum by Yoshida and
Teshima (1993) was reproduced using a homogeneous distribution of
sources from $z=0$ to $z=0.1$, including adiabatic energy losses
due to redshift, and pair production and photo-pion production due
to interactions with the cosmic microwave background radiation (CMBR)
in a Friedmann-Robertson-Walker metric. Furthermore, a fiducial
intergalactic magnetic field (IGMF), characterized by an intensity
$B_{IGMF} = 10^{-9}$ G and a correlation length $L_{c} = 1$ Mpc
(cf., Kronberg, 1994), was also included. The IGMF was assumed uniform
inside cells of size $L_{c}$ and randomly oriented with respect to
adjacent cells (Medina Tanco et al 1997). The IGMF component was
neglected in the original work of Yoshida and Teshima (1993).
Individual sources were treated as standard candles supplying
the same luminosity in UHECR protons above $10^{19}$ eV. The
injected spectrum was a power law, $dN/dE \propto E^{-\nu}$ ,
with $\nu = 3$ above the latter threshold. From the $\sim 10^{7}$
particles output by the simulation and arriving isotropically in
right ascension and declination, one hundred samples were extracted,
with the same distribution in declination as the quoted exposure of
AGASA (Uchihori et al. 1996). The determination of the arrival
energy of protons was performed assuming an error of 20\%
(energy-independent Gaussian distribution), typical of AGASA
(e.g., Yoshida and Dai 1998).
Similarly, the same bin size and number of events above $10^{19}$ eV
($461$ protons) as in the AGASA paper (Takeda et al, 1998) were
used here for the calculation of the spectra.
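Two of the simulation ingredients just described, inverse-CDF sampling of the $dN/dE \propto E^{-3}$ injection spectrum and the 20\% energy-independent Gaussian measurement error, can be sketched as follows (redshift and pair-production losses and the cell-structured IGMF are omitted from this sketch):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_injection(n, E_min=1e19, nu=3.0):
    """Inverse-CDF draw from dN/dE ~ E^-nu above E_min (eV)."""
    u = rng.random(n)
    return E_min * (1.0 - u) ** (-1.0 / (nu - 1.0))

def observed_energy(E_true, sigma=0.20):
    """Energy-independent Gaussian measurement error (20%, as AGASA)."""
    return E_true * (1.0 + sigma * rng.standard_normal(E_true.shape))

E = sample_injection(100_000)
E_obs = observed_energy(E)
assert E.min() >= 1e19            # injection threshold respected
```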
The resulting predicted spectrum is shown in figure 1, where the
different shades indicate 63\% and 95\% confidence levels, i.e.,
the region in the $E^{3} \times dJ/dE$ vs. $E$ space where $63$\%
and $95$\% of the spectra fell respectively. It can be seen that,
as predicted by Yoshida and Teshima (1993), the model is able to
fit the observed AGASA spectrum quite well up to $\sim 10^{20}$
eV. The introduction of the IGMF does not make appreciable changes.
At higher energies, however, AGASA observations seem unaccountable
by the homogeneous approximation, even when the quoted errors are
considered.
The distribution of luminous matter in scales comparable with a
few mean free paths of UHECR protons in the CMBR (i.e., tens of Mpc)
is, nevertheless, far from homogeneous. Therefore, given the
relatively small mean free path of protons above $10^{20}$ eV, it
should be expected that the local distribution of galaxies plays a
key role in determining the shape of the UHECR spectrum if the
sources of the particles have the same spatial distribution as
the luminous matter.
The second set of simulations is intended to address the latter
problem. In figure 2, the number of galaxies inside shells of
constant thickness in redshift, $\Delta z = 0.001$, are shown as
a function of $z$ for the latest release (version of Jul 27, 1998)
of the CfA Redshift Catalogue (Huchra et al 1992). Also shown in
the same figure is a homogeneous, isotropic distribution of sources.
The normalization of the latter is such that both distributions
enclose the same number of galaxies inside $r_{0} = 100$ Mpc.
The observed distribution of galaxies shows an excess for $r < 60$
Mpc compared to the homogeneous distribution. Between $r \sim 60$ and
$r \sim 100$ Mpc both distributions increase with the same slope. This
suggests that the approximation of homogeneity begins to be valid
beyond $r \sim 60$ Mpc and that the actual distribution of galaxies
is reasonably well sampled (even if obviously incomplete) up to
$r \sim 100$ Mpc $= r_{0}$. Farther away the slope of the observed
distribution changes abruptly, very likely due to the predominance
of bias effects.
The approximation adopted here is, therefore, that the distribution
of luminous matter at $r < r_{0} = 100$ Mpc is well described by the
CfA catalog, while the homogeneous approximation holds outside that
volume. The previously described simulation scheme is used for the
distant sources in the homogeneous region, while the actual
distribution of galaxies is used for the UHECR sources nearer
than $100$ Mpc. Additionally, in the latter case, the same
procedure as in Medina Tanco (1997, 1998a) is used in the description
of the IGMF: a cell-like spatial structure, with cell size given by
the correlation length, $L_{c} \propto B_{IGMF}^{-2}(r)$.
The intensity of the IGMF, in turn, scales with luminous matter
density, $\rho_{gal}$ as $B_{IGMF} \propto \rho_{gal}^{0.3}(r)$
(e.g., Vall\'ee 1997) and the observed IGMF value at the Virgo
cluster ($\sim 10^{-7}$ G, Arp 1988) is used as the normalization
condition.
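The field prescription just described can be put in a short computational form; a minimal sketch, assuming only the two power-law scalings and the Virgo normalization quoted above (the reference density and correlation length below are hypothetical placeholders, not values from the text):

```python
# Sketch of the cell-like IGMF model described above. B_VIRGO is the observed
# value at the Virgo cluster quoted in the text; RHO_VIRGO and LC_VIRGO are
# hypothetical reference values that only fix the proportionality constants.
B_VIRGO = 1e-7      # Gauss, IGMF at Virgo (Arp 1988), normalization condition
RHO_VIRGO = 1.0     # luminous-matter density at Virgo (arbitrary units)
LC_VIRGO = 1.0      # correlation length (cell size) at Virgo (arbitrary units)

def igmf_cell(rho_gal):
    """Field strength and cell size in a cell of density rho_gal:
    B ~ rho_gal^0.3 (Vallee 1997) and L_c ~ B^-2."""
    b = B_VIRGO * (rho_gal / RHO_VIRGO) ** 0.3
    lc = LC_VIRGO * (b / B_VIRGO) ** (-2)
    return b, lc

# an underdense cell has a weaker field and a larger correlation length
b_low, lc_low = igmf_cell(0.1)
```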
The resultant spectrum is obtained by combining both contributions,
from nearby and distant sources respectively, after taking into
account the complicating fact that our knowledge of the distribution
of galaxies is not uniform over the celestial sphere (e.g., obscuration
by dust over the galactic plane). The results, particularized for the
AGASA experiment (i.e., same declination exposure and energy error,
as well as number of events and bin size), are shown in figure 3.
It can be seen that, when the actual distribution of galaxies is
taken into account, the $63$\% confidence spectrum is able to fit
all the data if the corresponding experimental errors are considered.
Consequently, the UHECR spectrum observed by the AGASA experiment is,
given the available data, compatible with a distribution of cosmic ray
sources that follows the distribution of luminous matter in the Universe.
The latter is true up to the highest energies observed so far. Clearly,
more data is needed before the hypothesis can be falsified.
There is hope, however, for a solution in the relatively near future.
The same calculations have been performed for the first three years
of operation of the future Southern site of the Auger experiment.
The appropriate dependence of exposure on declination was used
(A. Watson, private communication), and the expected number of
events, i.e., $9075$ (Auger Design Report, 1997). The results
are given in figure 4, superimposed with the present AGASA spectrum
and its previously calculated uncertainty.
Finally, a word of caution should be given regarding the possibility
of large scale structuring of the IGMF (Ryu, Kang and Biermann 1998).
The effects that this could have on UHECR propagation have been
discussed extensively in Medina Tanco (1998b). Unfortunately, it
is not possible to state with certainty in which direction this would
influence the resulting particle spectrum without knowing the exact
topology of the IGMF and location of nearby sources with respect to
the field.
\section{Conclusions}
From the previous analysis it can be concluded that, given the
low number of events detected by the AGASA experiment so far with
$E > 10^{19}$ eV, the observed UHECR spectrum is consistent with a
spatial distribution of sources that follows the luminous matter
distribution in the nearby Universe. In the latter approach a single
power-law injection energy-spectrum is assumed, extending up to the
highest observed energies beyond the GZK cut-off. Therefore, based
on the observational uncertainties at present, there is no need for
a second UHECR component responsible for the events observed above
the nominal GZK cut-off.
Three years of integration by the future Southern site of the Auger
observatory should suffice to decide whether the spatial distribution
of UHECR sources is the same as that of the nearby luminous matter or not.
I am very grateful to Alan Watson and Michael Hillas for valuable
comments and interesting discussions, and to the High-Energy
Astrophysics group of the University of Leeds for its kind
hospitality. This work was partially supported by the Brazilian
agency FAPESP.
\section{Introduction}
A century after Einstein's discovery of general relativity, the domain
of its applications has become so vast that it covers even condensed matter
physics, a field which seemed to sit at the opposite end of the building of
physics from gravity \cite{HDCMP}. This striking subject, which connects
gravity to almost all fields of physics (see \cite{Nat}), is called
gauge/gravity duality (GGD); it is the extended version of the AdS/CFT
correspondence \cite{MW}. GGD has attracted increasing interest during recent
years and has become one of the most promising fields of physics, which is
hoped to be able to solve many of the unsolved problems in different fields
of physics, including condensed matter physics.
Real materials in condensed matter physics do not respect translational
symmetry, i.e. there is a dissipation of momentum. The momentum dissipation
may come from the existence of a lattice or of impurities. Although this
dissipation has no important influence on the values of some observables, it
affects the behavior of some others, for instance the conductivity. The DC
conductivity in the presence of translational symmetry diverges, whereas in
the absence of this symmetry (when momentum is dissipating) it has a finite
value. In the context of GGD, it is important to study a gravity model which
includes holographic momentum dissipation. There are some attempts to
construct such a gravity model \cite{momdis}. One of these models, proposed by
D. Vegh \cite{vegh}, provides an effective bulk description of a theory in
which momentum is no longer conserved. The conservation of momentum is due
to the diffeomorphism invariance of the stress-energy tensor in the dual theory.
In \cite{vegh}, the proposal is to break this symmetry holographically by
giving a mass to the graviton. The resulting theory is therefore a
\textit{massive gravity}. One of the advantages of this theory is that its
black hole solutions are solvable analytically, and it is therefore an
excellent toy model for studying holographically the properties of materials
without momentum conservation.
Thermal behaviors of black hole solutions in the context of massive gravity
have been explored extensively in recent years \cite{vegh,cai,hendimann,thermo}.
Thermodynamics of linearly charged massive black branes has been
investigated in \cite{vegh}. In \cite{cai}, a class of higher-dimensional
linearly charged solutions with positive, negative and zero constant
curvature of horizon in the context of massive gravity accompanied by a
negative cosmological constant has been presented and thermodynamics and
phase structure of these black solutions have been studied in both canonical
and grand canonical ensembles. In \cite{hendimann}, van der Waals phase
transitions of linearly charged black holes in massive gravity have been
investigated and it has been shown that the massive gravity can present
substantially different thermodynamic behavior in comparison with Einstein
gravity. Also it has been shown that the graviton mass can cause a range of
new phase transitions for topological black holes which are forbidden for
other cases. The properties of massive solutions have been studied in
different scenarios \cite{dsMG}. From the holographic point of view, the
behaviors of different holographic quantities have been studied
\cite{vegh,holo,1404.5321,1407.0306,1512.07035,1611.00677,1504.00535,1507.03105,1704.03989,matteo1,matteo2,matteo3,matteo4,1612.03627}.
The behavior of holographic conductivity for systems dual to linearly
charged massive black branes has been explored in \cite{vegh}. In
\cite{1404.5321}, a holographic superconductor has been constructed in the
massive gravity background. \cite{1512.07035} studies a holographic
superconductor-normal metal-superconductor Josephson junction in massive
gravity. Also the holographic thermalization process has been investigated
in this context \cite{1611.00677}. Analytic DC thermo-electric
conductivities in the context of massive gravity have been calculated in
\cite{1407.0306}. In massive Einstein-Maxwell-dilaton gravity, DC and Hall
conductivities have been computed in \cite{1504.00535}. \cite{1507.03105}
presents a holographic model for insulator/metal phase transition and
colossal magnetoresistance within massive gravity. Inspired by the recent
action/complexity duality conjecture, it has been shown in \cite{1612.03627}
that the holographic complexity grows linearly with time in the context of
massive gravity.
As we mentioned above, one of the quantities which is affected by momentum
dissipation is conductivity. On the other hand, the choice of
electrodynamics model has a direct influence on the behavior of
conductivity. So, it is worthy to consider the effects of nonlinearity as
well as massive gravity on the conductivity of the black hole solutions. It
is well-known that the nonlinear electrodynamics brings reach physics
compared to the linear Maxwell electrodynamics. For example, Maxwell theory
is conformally invariant only in four dimensions and thus the corresponding
energy-momentum tensor is traceless only in four dimensions. A natural
question then arises: Is there an extension of Maxwell action in arbitrary
dimensions that is traceless and hence possesses the conformal invariance?
The answer is positive and the invariant Maxwell action under conformal
transformation $g_{\mu \nu }\rightarrow \Omega ^{2}g_{\mu \nu }$, $A_{\mu
}\rightarrow A_{\mu }$ in $(n+1)$-dimensions is given by \cite{PLM},
\begin{equation*}
S_{m}=\int {d^{n+1}x\sqrt{-g}(-\mathcal{F})^{p}},
\end{equation*}
where $\mathcal{F}=F_{\mu \nu}F^{\mu \nu}$ is the Maxwell
invariant, provided $p=(n+1)/4$. The associated energy-momentum tensor of
the above Maxwell action is given by
\begin{equation}
T_{\mu \nu }=2\left( pF_{\mu \eta }F_{\nu }{}^{\eta }\mathcal{F}^{p-1}-\frac{1}{4}g_{\mu \nu }\mathcal{F}^{p}\right) . \label{T}
\end{equation}
One can easily check that the above energy-momentum tensor is traceless for
$p=(n+1)/4$. Also, quantum electrodynamics predicts that the electrodynamic
field behaves nonlinearly through the presence of virtual charged particles,
as reported by Heisenberg and Euler \cite{HE}. Hence, nonlinear
electrodynamics has been the subject of much research \cite{NEM1,NEM2,NEM3}.
This motivates us to extend the linearly charged black hole solutions of
massive gravity \cite{vegh,cai} to nonlinearly charged ones in the presence
of power-law Maxwell electrodynamics, and to investigate their thermodynamics
as well as the behavior of the conductivity of the corresponding dual
system. In addition to power-law Maxwell electrodynamics, other types of
nonlinear electrodynamics have been introduced in \cite{BI,Selong,hendibtz}.
In spite of the special property for $p=(n+1)/4$, different aspects of
various solutions have been investigated for different $p$'s
\cite{NFP,hendi,shey1}. In the context of the AdS/CFT correspondence, the
power-law Maxwell field has been considered as the electrodynamics source in
\cite{holsup1,holsup2,holsup3,shey200,shey201,shey202}.
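The tracelessness condition quoted above is easy to verify symbolically; a minimal sympy sketch, treating $\mathcal{F}$ as a scalar invariant and using $g^{\mu \nu }g_{\mu \nu }=n+1$:

```python
import sympy as sp

# F stands for the Maxwell invariant F_{mu nu} F^{mu nu}
n, F, p = sp.symbols('n F p', positive=True)

# trace of (T): g^{mu nu} T_{mu nu} = 2 ( p F F^{p-1} - (n+1)/4 F^p ),
# using g^{mu nu} F_{mu eta} F_nu^eta = F and g^{mu nu} g_{mu nu} = n + 1
trace = 2 * (p * F * F**(p - 1) - sp.Rational(1, 4) * (n + 1) * F**p)

residue = sp.simplify(trace.subs(p, (n + 1) / 4))
```

The trace vanishes identically only at the conformal value $p=(n+1)/4$.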
The layout of this letter is as follows. In section \ref{lifsol}, we present
the action of massive gravity in the presence of power-law Maxwell
electrodynamics and then, by varying the action, we obtain the field
equations. We also derive a class of topological black hole solutions of the
field equations in higher dimensions. In section \ref{Therm}, we study the
thermodynamics of the solutions and examine the first law of thermodynamics
for massive black holes with a power-law Maxwell field. In section
\ref{conductivity}, we investigate the holographic conductivity of black brane
solutions in the presence of a power-law Maxwell gauge field. In particular,
we shall disclose the effects of the power-law Maxwell electrodynamics as
well as massive gravity on the holographic conductivity of dual systems. We
finish with closing remarks in section \ref{Clos}.
\section{Action and massive gravity solutions\label{lifsol}}
The ($n+1$)-dimensional ($n\geq 3$) action describing Einstein-massive
gravity accompanied by a negative cosmological constant $\Lambda $ in the
presence of power-law Maxwell electrodynamics is
\begin{eqnarray}
\mathcal{S} &=&\int d^{n+1}x\mathcal{L}, \label{Action} \\
\mathcal{L} &=&\frac{\sqrt{-g}}{16\pi }\left[ \mathcal{R}-2\Lambda +\left( -\mathcal{F}\right) ^{p}+m^{2}\sum_{i=1}^{4}c_{i}\mathcal{U}_{i}(g,\Gamma )\right] , \label{DL}
\end{eqnarray}
where $g$ and $\mathcal{R}$ are respectively the determinant of the metric
and the Ricci scalar, and $\Lambda =-n(n-1)/2l^{2}$ is the negative
cosmological constant, where $l$ is the AdS radius. $\mathcal{F}=F_{\mu \nu
}F^{\mu \nu }$ is the Maxwell invariant and $F_{\mu \nu }=\partial _{\lbrack
\mu }A_{\nu ]}$ is the electrodynamic tensor, where $A_{\nu }$ is the vector
potential. $p$ determines the nonlinearity of the electrodynamic field. For
$p=1$, the linear Maxwell gauge field is recovered. In the action
(\ref{Action}), $\Gamma $ is the reference metric, the $c_{i}$'s are
constants and the $\mathcal{U}_{i}$'s are symmetric polynomials of the
eigenvalues of the $(n+1)\times (n+1)$ matrix $\mathcal{K}_{\nu }^{\mu
}\equiv \sqrt{g^{\mu \alpha }\Gamma _{\alpha \nu }}$, so that
\begin{eqnarray}
\mathcal{U}_{1} &=&\left[ \mathcal{K}\right] , \\
\mathcal{U}_{2} &=&\left[ \mathcal{K}\right] ^{2}-\left[ \mathcal{K}^{2}\right] , \\
\mathcal{U}_{3} &=&\left[ \mathcal{K}\right] ^{3}-3\left[ \mathcal{K}\right] \left[ \mathcal{K}^{2}\right] +2\left[ \mathcal{K}^{3}\right] , \\
\mathcal{U}_{4} &=&\left[ \mathcal{K}\right] ^{4}-6\left[ \mathcal{K}^{2}\right] \left[ \mathcal{K}\right] ^{2}+8\left[ \mathcal{K}^{3}\right] \left[ \mathcal{K}\right] +3\left[ \mathcal{K}^{2}\right] ^{2}-6\left[ \mathcal{K}^{4}\right] ,
\end{eqnarray}
where the square root in $\mathcal{K}$ denotes the matrix square root, i.e.
$\left( \sqrt{\mathcal{K}}\right) _{\nu }^{\mu }\left( \sqrt{\mathcal{K}}\right) _{\lambda }^{\nu }=\mathcal{K}_{\lambda }^{\mu }$, and the rectangular
brackets denote the trace, $\left[ \mathcal{K}\right] \equiv \mathcal{K}_{\mu
}^{\mu }$. Here $m$ is the massive gravity parameter, such that in the limit
$m\rightarrow 0$ one recovers the diffeomorphism-invariant Einstein-Hilbert
action with a gauge field and a negative cosmological constant. The
equations of motion for the gravitational and gauge fields are
\begin{equation}
R_{\mu \nu }-\frac{1}{2}\mathcal{R}g_{\mu \nu }+\Lambda g_{\mu \nu
}-2pF_{\mu \lambda }F_{\nu }^{\text{ \ }\lambda }\left( -\mathcal{F}\right)
^{p-1}-\frac{1}{2}\left( -\mathcal{F}\right) ^{p}g_{\mu \nu }+m^{2}\chi
_{\mu \nu }=0, \label{Field equation}
\end{equation}
\begin{equation}
\nabla _{\mu }\left( \mathcal{F}^{p-1}F^{\mu \nu }\right) =0,
\label{Maxwell equation}
\end{equation}
which are obtained by varying the action (\ref{Action}) with respect to the
metric tensor $g_{\mu \nu }$ and gauge field $A_{\mu }$ respectively. In Eq.
(\ref{Field equation}), we have
\begin{eqnarray}
\chi _{\mu \nu } &=&-\frac{c_{1}}{2}\left( \mathcal{U}_{1}g_{\mu \nu }-\mathcal{K}_{\mu \nu }\right) -\frac{c_{2}}{2}\left( \mathcal{U}_{2}g_{\mu \nu }-2\mathcal{U}_{1}\mathcal{K}_{\mu \nu }+2\mathcal{K}_{\mu \nu }^{2}\right) -\frac{c_{3}}{2}(\mathcal{U}_{3}g_{\mu \nu }-3\mathcal{U}_{2}\mathcal{K}_{\mu \nu } \notag \\
&&+6\mathcal{U}_{1}\mathcal{K}_{\mu \nu }^{2}-6\mathcal{K}_{\mu \nu }^{3})-\frac{c_{4}}{2}(\mathcal{U}_{4}g_{\mu \nu }-4\mathcal{U}_{3}\mathcal{K}_{\mu \nu }+12\mathcal{U}_{2}\mathcal{K}_{\mu \nu }^{2}-24\mathcal{U}_{1}\mathcal{K}_{\mu \nu }^{3}+24\mathcal{K}_{\mu \nu }^{4}).
\end{eqnarray}
The static spacetime line element takes the usual form
\begin{equation}
ds^{2}=-f(r)dt^{2}+f^{-1}(r)dr^{2}+r^{2}h_{ij}dx^{i}dx^{j}, \label{Metric}
\end{equation}
where $f(r)$ is the metric function and $h_{ij}dx^{i}dx^{j}$ is the line
element of an $(n-1)$-dimensional hypersurface with constant scalar curvature
$(n-1)(n-2)k$ and volume $\omega _{n-1}$. Without loss of generality, one can
take $k=0,1,-1$, such that the black hole horizon or cosmological horizon in
(\ref{Metric}) can be a zero (flat), positive (elliptic) or negative
(hyperbolic) constant curvature hypersurface. The reference metric (a fixed
symmetric tensor) $\Gamma _{\mu \nu }$ can be chosen as \cite{vegh,cai}
\begin{equation}
\Gamma _{\mu \nu }=\mathrm{diag}(0,0,c_{0}^{2}h_{ij}), \label{f11}
\end{equation}
where $c_{0}$ is a positive constant. Using (\ref{Metric}) and (\ref{f11}),
one can easily calculate the $\mathcal{U}_{i}$'s as
\begin{eqnarray}
\mathcal{U}_{1} &=&\frac{(n-1)c_{0}}{r}, \notag \\
\mathcal{U}_{2} &=&\frac{(n-1)(n-2)c_{0}^{2}}{r^{2}}, \notag \\
\mathcal{U}_{3} &=&\frac{(n-1)(n-2)(n-3)c_{0}^{3}}{r^{3}}, \notag \\
\mathcal{U}_{4} &=&\frac{(n-1)(n-2)(n-3)(n-4)c_{0}^{4}}{r^{4}}. \label{PL}
\end{eqnarray}
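These traces follow from the fact that, for the reference metric (\ref{f11}), the matrix $\mathcal{K}$ has $n-1$ equal eigenvalues $c_{0}/r$ and two vanishing ones, so that $\left[ \mathcal{K}^{k}\right] =(n-1)(c_{0}/r)^{k}$; a quick sympy check of the combinatorics:

```python
import sympy as sp

# x stands for c0/r, the (n-1)-fold eigenvalue of K^mu_nu for the ansatz (f11)
n, x = sp.symbols('n x', positive=True)
tr = lambda k: (n - 1) * x**k   # [K^k] for eigenvalues (0, 0, x, ..., x)

U1 = tr(1)
U2 = tr(1)**2 - tr(2)
U3 = tr(1)**3 - 3*tr(1)*tr(2) + 2*tr(3)
U4 = tr(1)**4 - 6*tr(2)*tr(1)**2 + 8*tr(3)*tr(1) + 3*tr(2)**2 - 6*tr(4)
```

Simplifying these reproduces $(n-1)(n-2)\cdots$ prefactors of (\ref{PL}).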
Notice that $\mathcal{U}_{3}$ and $\mathcal{U}_{4}$ vanish for a ($3+1$)-dimensional spacetime, while $\mathcal{U}_{4}=0$ for a ($4+1$)-dimensional
spacetime. Using the metric (\ref{Metric}), the electrodynamic field can be
immediately found as
\begin{equation}
F_{tr}=-F_{rt}=\frac{q}{r^{\left( n-1\right) /\left( 2p-1\right) }},
\label{Gaugefield}
\end{equation}
where $q$ is a constant parameter related to the total charge of the black hole.
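The exponent in (\ref{Gaugefield}) can be spot-checked against the Maxwell equation (\ref{Maxwell equation}): for the static ansatz one has $-\mathcal{F}=2F_{tr}^{2}$, and the equation reduces (an assumption spelled out in the comments) to the constancy of the radial flux $r^{n-1}\left( -\mathcal{F}\right) ^{p-1}F^{tr}$. A sympy sketch:

```python
import sympy as sp

n, p, q, r = sp.symbols('n p q r', positive=True)

# ansatz (Gaugefield): F_tr = q r^{-(n-1)/(2p-1)}; for it -F = 2 F_tr^2, and
# (assumed reduction) the Maxwell equation demands that the radial flux
# r^{n-1} (-F)^{p-1} F^{tr} be r-independent (F^{tr} = -F_tr for this metric,
# an overall sign that does not affect constancy)
Ftr = q * r**(-(n - 1) / (2 * p - 1))
flux = r**(n - 1) * (2 * Ftr**2)**(p - 1) * Ftr
dflux = sp.diff(flux, r)

# spot-check r-independence of the flux for sample (n, p, q)
vals = {n: 4, p: sp.Rational(5, 4), q: 2}
residuals = [abs(float(dflux.subs(vals).subs(r, rv))) for rv in (0.5, 1.0, 3.0)]
```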
Inserting Eqs. (\ref{f11}), (\ref{PL}) and (\ref{Gaugefield}) into the field
equations (\ref{Field equation}), one receives
\begin{eqnarray}
\frac{f^{\prime }}{r}+\frac{(n-2)f}{r^{2}}-\frac{(n-2)k}{r^{2}}+\frac{2\Lambda }{n-1}+\frac{2p-1}{n-1}\left( 2q^{2}r^{-\frac{2n-2}{2p-1}}\right) ^{p}-\frac{c_{0}m^{2}}{r}\left( c_{1}+\frac{(n-2)c_{0}c_{2}}{r}\right. && \notag \\
\left. +\frac{(n-2)(n-3)c_{0}^{2}c_{3}}{r^{2}}+\frac{(n-2)(n-3)(n-4)c_{0}^{3}c_{4}}{r^{3}}\right) &=&0, \label{Eq1}
\end{eqnarray}
\begin{eqnarray}
f^{\prime \prime }+\frac{2(n-2)f^{\prime }}{r}+\frac{(n-2)(n-3)f}{r^{2}}-\frac{(n-2)(n-3)k}{r^{2}}+2\Lambda -\left( 2q^{2}r^{-\frac{2n-2}{2p-1}}\right) ^{p}-\frac{(n-2)c_{0}m^{2}}{r}\left( c_{1}+\frac{(n-3)c_{0}c_{2}}{r}\right. && \notag \\
\left. +\frac{(n-3)(n-4)c_{0}^{2}c_{3}}{r^{2}}+\frac{(n-3)(n-4)(n-5)c_{0}^{3}c_{4}}{r^{3}}\right) &=&0, \label{Eq2}
\end{eqnarray}
where the prime denotes the derivative with respect to $r$. Solving the above
equations, $f(r)$ can be obtained as
\begin{eqnarray}
f(r) &=&k-\frac{m_{0}}{r^{n-2}}-\frac{2\Lambda r^{2}}{n(n-1)}+\frac{2^{p}q^{2p}(2p-1)^{2}}{(n-1)(n-2p)r^{2(np-3p+1)/(2p-1)}} \notag \\
&&+\frac{c_{0}m^{2}r}{n-1}\left( c_{1}+\frac{(n-1)c_{0}c_{2}}{r}+\frac{(n-1)(n-2)c_{0}^{2}c_{3}}{r^{2}}+\frac{(n-1)(n-2)(n-3)c_{0}^{3}c_{4}}{r^{3}}\right) , \label{Metricfunction}
\end{eqnarray}
where $m_{0}$ is an integration constant which is related to the total mass
of the black hole, as we will see later. One may note that the metric function
(\ref{Metricfunction}) reduces to those of Refs. \cite{vegh,cai} in the case
$p=1$. Also the solution (\ref{Metricfunction}), in the absence of the massive
parameter ($m=0$), leads to
\begin{equation}
f_{0}(r)=k-\frac{m_{0}}{r^{n-2}}-\frac{2\Lambda r^{2}}{n(n-1)}+\frac{2^{p}q^{2p}(2p-1)^{2}}{(n-1)(n-2p)r^{2(np-3p+1)/(2p-1)}}, \label{f0}
\end{equation}
which was presented in \cite{hendi}. The mass parameter ($m_{0}$) in Eq.
(\ref{Metricfunction}) can be found as
\begin{eqnarray}
m_{0} &=&kr_{+}^{n-2}-\frac{2\Lambda r_{+}^{n}}{n(n-1)}+\frac{2^{p}q^{2p}(2p-1)^{2}}{(n-1)(n-2p)r_{+}^{(n-2p)/(2p-1)}} \notag \\
&&+\frac{c_{0}m^{2}r_{+}^{n-1}}{n-1}\left( c_{1}+\frac{(n-1)c_{0}c_{2}}{r_{+}}+\frac{(n-1)(n-2)c_{0}^{2}c_{3}}{r_{+}^{2}}+\frac{(n-1)(n-2)(n-3)c_{0}^{3}c_{4}}{r_{+}^{3}}\right) , \label{ZM}
\end{eqnarray}
where $r_{+}$ is the radius of the event horizon, given by the largest root
of $f(r_{+})=0$. According to Eq. (\ref{Gaugefield}) and regarding
$A_{t}(r)=\int F_{rt}dr$, the gauge potential $A_{t}$ can be calculated as
\begin{equation}
A_{t}\left( r\right) =\mu +\frac{q(2p-1)}{(n-2p)r^{(n-2p)/(2p-1)}}.
\label{potential}
\end{equation}
In (\ref{potential}), $\mu $ is the chemical potential of the quantum field
theory located on the boundary, which can be found by demanding the regularity
condition on the horizon, i.e. $A_{t}\left( r_{+}\right) =0$, as
\begin{equation}
\mu =\frac{q(2p-1)}{(2p-n)r_{+}^{(n-2p)/(2p-1)}}.
\end{equation}
One should note that the electric potential $A_{t}\left( r\right) $ has a
finite value at infinity ($r\rightarrow \infty $) provided the parameter $p$
is restricted as
\begin{equation}
\frac{1}{2}<p<\frac{n}{2},
\end{equation}
obtained from $(n-2p)/(2p-1)>0$. One can also obtain the electric potential
as
\begin{equation}
U=\left. A_{\nu }\chi ^{\nu }\right\vert _{r\rightarrow \mathrm{ref}}-\left. A_{\nu }\chi ^{\nu }\right\vert _{r=r_{+}}, \label{Pot}
\end{equation}
where $\chi =C\partial _{t}$ is the null generator of the horizon and $C$ is
a constant. When one applies the power-law Maxwell electrodynamics, it is
common to use a general Killing vector with a constant $C$ \cite{14,15}.
This is due to the fact that every linear combination of Killing vectors is
also a Killing vector. Then, $C$ is fixed so that the first law of
thermodynamics is satisfied \cite{14,15}. For the linear Maxwell case ($p=1$),
the constant $C$ reduces to $1$. Choosing infinity as the reference point,
one can calculate the electric potential energy as
\begin{equation}
U=C\mu . \label{Poten}
\end{equation}
One can obtain the Hawking temperature of the black hole on the event
horizon as
\begin{eqnarray}
T &=&\frac{f^{\prime }\left( r_{+}\right) }{4\pi } \notag \\
&=&\frac{(n-2)k}{4\pi r_{+}}-\frac{2\Lambda r_{+}}{4\pi (n-1)}+\frac{2^{p}q^{2p}(1-2p)}{4\pi (n-1)r_{+}^{(2p\left[ n-2\right] +1)/(2p-1)}} \notag \\
&&+\frac{c_{0}m^{2}}{4\pi }\left( c_{1}+\frac{(n-2)c_{0}c_{2}}{r_{+}}+\frac{(n-2)(n-3)c_{0}^{2}c_{3}}{r_{+}^{2}}+\frac{(n-2)(n-3)(n-4)c_{0}^{3}c_{4}}{r_{+}^{3}}\right) .
\end{eqnarray}
The extremal black hole, whose temperature vanishes, can also be determined
by an extremal charge,
\begin{eqnarray}
q_{\mathrm{ext}}^{2p} &=&\frac{(n-1)(n-2)kr_{\mathrm{ext}}^{2\left[ p(n-3)+1\right] /(2p-1)}}{(2p-1)2^{p}}-\frac{\Lambda r_{\mathrm{ext}}^{2p(n-1)/(2p-1)}}{(2p-1)2^{p-1}} \notag \\
&&+\frac{c_{0}m^{2}(n-1)r_{\mathrm{ext}}^{\left[ 2p(n-2)+1\right] /(2p-1)}}{(2p-1)2^{p}}\left( c_{1}+\frac{(n-2)c_{0}c_{2}}{r_{\mathrm{ext}}}+\frac{(n-2)(n-3)c_{0}^{2}c_{3}}{r_{\mathrm{ext}}^{2}}+\frac{(n-2)(n-3)(n-4)c_{0}^{3}c_{4}}{r_{\mathrm{ext}}^{3}}\right) .
\end{eqnarray}
For $q>q_{\mathrm{ext}}$, there is a naked singularity in the spacetime, while
$q<q_{\mathrm{ext}}$ describes solutions with two inner and outer horizons
($r_{+}$ and $r_{-}$). These two horizons degenerate for $q=q_{\mathrm{ext}}$.
The behaviors of the metric function $f(r)$ versus $r$ for different
topologies of horizon are depicted in Fig. \ref{fig1}.
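A direct numerical consistency check of the expressions above is straightforward: with $m_{0}$ taken from (\ref{ZM}) the metric function must vanish at $r_{+}$, and with $q=q_{\mathrm{ext}}$ the temperature must vanish. A Python sketch, using the illustrative parameter values of Fig. \ref{fig1}:

```python
# Numerical consistency check of (ZM), the Hawking temperature and the
# extremal charge; the parameter values below follow Fig. 1 (illustrative).
import math

n, p, k, l = 4, 1.25, 1, 1.0
m, c0, c1, c2, c3, c4 = 1.0, 1.0, 1.0, 1.5, -0.5, 1.0
rp = 1.0
Lam = -n * (n - 1) / (2 * l**2)

def massive(r):
    # c0 m^2 r/(n-1) ( c1 + (n-1)c0c2/r + ... ), the massive part of f(r)
    return (c0 * m**2 * r / (n - 1)) * (
        c1 + (n - 1) * c0 * c2 / r
        + (n - 1) * (n - 2) * c0**2 * c3 / r**2
        + (n - 1) * (n - 2) * (n - 3) * c0**3 * c4 / r**3)

def f(r, q, m0):
    # metric function (Metricfunction)
    return (k - m0 / r**(n - 2) - 2 * Lam * r**2 / (n * (n - 1))
            + 2**p * q**(2 * p) * (2 * p - 1)**2
            / ((n - 1) * (n - 2 * p) * r**(2 * (n * p - 3 * p + 1) / (2 * p - 1)))
            + massive(r))

def m0_of(q, r):
    # Eq. (ZM) is equivalent to solving f(r_+) = 0 for m0
    return r**(n - 2) * f(r, q, 0.0)

def temperature(r, q):
    # Hawking temperature T = f'(r_+)/(4 pi), as given in the text
    return ((n - 2) * k / (4 * math.pi * r)
            - 2 * Lam * r / (4 * math.pi * (n - 1))
            + 2**p * q**(2 * p) * (1 - 2 * p)
            / (4 * math.pi * (n - 1) * r**((2 * p * (n - 2) + 1) / (2 * p - 1)))
            + c0 * m**2 / (4 * math.pi) * (
                c1 + (n - 2) * c0 * c2 / r
                + (n - 2) * (n - 3) * c0**2 * c3 / r**2
                + (n - 2) * (n - 3) * (n - 4) * c0**3 * c4 / r**3))

res_f = f(rp, 1.0, m0_of(1.0, rp))          # should vanish on the horizon

# closed-form extremal charge q_ext^{2p}, as written above
q2p = ((n - 1) * (n - 2) * k * rp**(2 * (p * (n - 3) + 1) / (2 * p - 1))
       / ((2 * p - 1) * 2**p)
       - Lam * rp**(2 * p * (n - 1) / (2 * p - 1)) / ((2 * p - 1) * 2**(p - 1))
       + c0 * m**2 * (n - 1) * rp**((2 * p * (n - 2) + 1) / (2 * p - 1))
       / ((2 * p - 1) * 2**p) * (
           c1 + (n - 2) * c0 * c2 / rp + (n - 2) * (n - 3) * c0**2 * c3 / rp**2
           + (n - 2) * (n - 3) * (n - 4) * c0**3 * c4 / rp**3))
q_ext = q2p ** (1 / (2 * p))
res_T = temperature(rp, q_ext)              # should vanish at extremality
```

For these values one finds $q_{\mathrm{ext}}\simeq 2.25$, in agreement with the value quoted in Fig. \ref{fig1b}.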
\begin{figure*}[t]
\begin{center}
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$k=0$, $q_{\rm ext}=2.03$]{
\label{fig1a}\includegraphics[width=\textwidth]{1.eps}\qquad}
\end{center}\end{minipage} \hskip+0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$k=1$, $q_{\rm ext}=2.25$]{
\label{fig1b}\includegraphics[width=\textwidth]{2.eps}\qquad}
\end{center}\end{minipage} \hskip0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$k=-1$, $q_{\rm ext}=1.78$]{
\label{fig1c}\includegraphics[width=\textwidth]{3.eps}\qquad}
\end{center}\end{minipage} \hskip0cm
\end{center}
\caption{The behavior of $f(r)$ versus $r$ for $n=4$, $l=1$, $p=5/4$, $m=1$,
$r_{+}=1$, $c_{0}=1$, $c_{1}=1$, $c_{2}=3/2$, $c_{3}=-1/2$ and $c_{4}=1$.}
\label{fig1}
\end{figure*}
Up to now, we have obtained the higher-dimensional black hole solutions in
the context of massive gravity and in the presence of power-law Maxwell
gauge field. In the next section, we will study the thermodynamics of the
obtained solutions. To do that, we shall obtain the Smarr-type formula and
check that the first law of black hole thermodynamics is satisfied.
\section{Thermodynamics of massive gravity\label{Therm}}
The main purpose of this section is to examine the first law of
thermodynamics for massive black holes with a power-law Maxwell field. It was
shown that the entropy of black holes in massive gravity still obeys the
area law \cite{cai}. It is easy to show that the entropy of the black hole per
unit volume $\omega _{n-1}$, as an extensive thermodynamic quantity, is
given by \cite{cai}
\begin{equation}
S=\frac{r_{+}^{n-1}}{4}, \label{entropy}
\end{equation}
which is a quarter of the event horizon area \cite{cai,bek}. The electric
charge of the black hole per unit volume $\omega _{n-1}$ can be calculated
through the use of the Gauss law as
\begin{equation}
Q=\frac{1}{4\pi }\int r^{n-1}\left( -\mathcal{F}\right) ^{p-1}F_{\mu \nu }n^{\mu }u^{\nu }dr, \label{chdef}
\end{equation}
where $n^{\mu }$ and $u^{\nu }$ are respectively the unit spacelike and
timelike normals to a hypersurface of radius $r$, defined by
\begin{equation}
n^{\mu }=\frac{1}{\sqrt{-g_{tt}}}dt=\frac{1}{\sqrt{f(r)}}dt,\qquad u^{\nu }=\frac{1}{\sqrt{g_{rr}}}dr=\sqrt{f(r)}dr.
\end{equation}
Thus, one can obtain
\begin{equation}
Q=\frac{2^{p-1}q^{2p-1}}{4\pi }. \label{charge}
\end{equation}
In order to obtain the mass of black holes in massive gravity, one can apply
the Hamiltonian approach presented in Ref. \cite{cai}. The total mass ($M$)
of the massive black hole per unit volume $\omega _{n-1}$ can be calculated
as \cite{cai}
\begin{equation}
M=\frac{(n-1)m_{0}}{16\pi }, \label{Mass}
\end{equation}
where $m_{0}$, as a function of the horizon radius $r_{+}$, was given in Eq.
(\ref{ZM}). In order to check the first law of thermodynamics, we need to
compute the Smarr-type formula for the mass $M$ as a function of the extensive
quantities entropy and electric charge. Using relations (\ref{entropy}),
(\ref{charge}) and (\ref{Mass}), one can obtain the Smarr-type formula for the
mass as
\begin{eqnarray}
M(S,Q) &=&\frac{k(n-1)(4S)^{(n-2)/(n-1)}}{16\pi }-\frac{\Lambda \left( 4S\right) ^{n/(n-1)}}{8\pi n}+\frac{Q^{2p/(2p-1)}(2p-1)^{2}}{2(n-2p)\left( 4S\right) ^{\frac{n-2p}{(n-1)(2p-1)}}}\left( \frac{\pi }{2^{p-3}}\right) ^{1/(2p-1)} \notag \\
&&+\frac{c_{0}m^{2}S}{4\pi }\left( c_{1}+\frac{(n-1)c_{0}c_{2}}{\left( 4S\right) ^{1/(n-1)}}+\frac{(n-1)(n-2)c_{0}^{2}c_{3}}{\left( 4S\right) ^{2/(n-1)}}+\frac{(n-1)(n-2)(n-3)c_{0}^{3}c_{4}}{\left( 4S\right) ^{3/(n-1)}}\right) . \label{Smarr}
\end{eqnarray}
Now, one can show that these thermodynamic quantities satisfy the first law of
thermodynamics,
\begin{equation}
dM=TdS+UdQ, \label{TFL}
\end{equation}
in which
\begin{equation}
T=\left( \frac{\partial M}{\partial S}\right) _{Q}\qquad \text{and}\qquad U=\left( \frac{\partial M}{\partial Q}\right) _{S}, \label{intqua}
\end{equation}
provided $C=p$ in (\ref{Poten}). As is clear, for the linear Maxwell case
($p=1$) the constant $C$ reduces to $1$. In the remainder of this work,
we study the effect of power-law Maxwell electrodynamics on the holographic
conductivity of dual systems with and without translational symmetry.
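The statement (\ref{intqua}) can also be verified numerically by finite differences on the Smarr formula (\ref{Smarr}), trading $(r_{+},q)$ for $(S,Q)$ through (\ref{entropy}) and (\ref{charge}); a sketch with illustrative parameters (the potential is compared through $p\left\vert \mu \right\vert $, leaving the sign convention of the gauge potential aside):

```python
# Finite-difference check of T = (dM/dS)_Q and U = (dM/dQ)_S from the
# Smarr-type formula; parameter values are illustrative.
import math

n, p, k, l = 4, 1.25, 1, 1.0
m, c0, c1, c2, c3, c4 = 1.0, 1.0, 1.0, 1.5, -0.5, 1.0
Lam = -n * (n - 1) / (2 * l**2)

def M(S, Q):
    # Smarr-type formula (Smarr)
    return (k * (n - 1) * (4 * S)**((n - 2) / (n - 1)) / (16 * math.pi)
            - Lam * (4 * S)**(n / (n - 1)) / (8 * math.pi * n)
            + Q**(2 * p / (2 * p - 1)) * (2 * p - 1)**2
            / (2 * (n - 2 * p) * (4 * S)**((n - 2 * p) / ((n - 1) * (2 * p - 1))))
            * (math.pi / 2**(p - 3))**(1 / (2 * p - 1))
            + c0 * m**2 * S / (4 * math.pi) * (
                c1 + (n - 1) * c0 * c2 / (4 * S)**(1 / (n - 1))
                + (n - 1) * (n - 2) * c0**2 * c3 / (4 * S)**(2 / (n - 1))
                + (n - 1) * (n - 2) * (n - 3) * c0**3 * c4 / (4 * S)**(3 / (n - 1))))

rp, q = 1.0, 1.0
S = rp**(n - 1) / 4                              # Eq. (entropy)
Q = 2**(p - 1) * q**(2 * p - 1) / (4 * math.pi)  # Eq. (charge)

eps = 1e-6
T_fd = (M(S + eps, Q) - M(S - eps, Q)) / (2 * eps)
U_fd = (M(S, Q + eps) - M(S, Q - eps)) / (2 * eps)

# Hawking temperature at (rp, q), as given in the text
T = ((n - 2) * k / (4 * math.pi * rp) - 2 * Lam * rp / (4 * math.pi * (n - 1))
     + 2**p * q**(2 * p) * (1 - 2 * p)
     / (4 * math.pi * (n - 1) * rp**((2 * p * (n - 2) + 1) / (2 * p - 1)))
     + c0 * m**2 / (4 * math.pi) * (c1 + (n - 2) * c0 * c2 / rp
        + (n - 2) * (n - 3) * c0**2 * c3 / rp**2
        + (n - 2) * (n - 3) * (n - 4) * c0**3 * c4 / rp**3))

# magnitude of the chemical potential, |mu| = q(2p-1)/((n-2p) r_+^{(n-2p)/(2p-1)})
mu_abs = q * (2 * p - 1) / ((n - 2 * p) * rp**((n - 2 * p) / (2 * p - 1)))
```

The finite differences reproduce the closed-form temperature and $p\left\vert \mu \right\vert $ to the accuracy of the discretization.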
\begin{figure*}[t]
\begin{center}
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=3$]{
\label{fig2a}\includegraphics[width=\textwidth]{4.eps}\qquad}
\end{center}\end{minipage}\hskip+0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=4$]{
\label{fig2b}\includegraphics[width=\textwidth]{5.eps}\qquad}
\end{center}\end{minipage}\hskip0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=3$]{
\label{fig2c}\includegraphics[width=\textwidth]{6.eps}\qquad}
\end{center}\end{minipage}\hskip0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=4$]{
\label{fig2d}\includegraphics[width=\textwidth]{7.eps}\qquad}
\end{center}\end{minipage}\hskip+0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$p=(n+1)/4$]{
\label{fig2e}\includegraphics[width=\textwidth]{8.eps}\qquad}
\end{center}\end{minipage}\hskip0cm
\end{center}
\caption{The behavior of the real part of the conductivity $\protect\sigma $
versus $\protect\omega /T$ for $m=0$ with $l=r_{+}=1$.}
\label{fig2}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=3$]{
\label{fig3a}\includegraphics[width=\textwidth]{9.eps}\qquad}
\end{center}\end{minipage}\hskip+0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=4$]{
\label{fig3b}\includegraphics[width=\textwidth]{10.eps}\qquad}
\end{center}\end{minipage}\hskip0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=3$]{
\label{fig3c}\includegraphics[width=\textwidth]{11.eps}\qquad}
\end{center}\end{minipage}\hskip0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=4$]{
\label{fig3d}\includegraphics[width=\textwidth]{12.eps}\qquad}
\end{center}\end{minipage}\hskip+0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$p=(n+1)/4$]{
\label{fig3e}\includegraphics[width=\textwidth]{13.eps}\qquad}
\end{center}\end{minipage}\hskip0cm
\end{center}
\caption{The behavior of the imaginary part of the conductivity $\protect\sigma $
versus $\protect\omega /T$ for $m=0$ with $l=r_{+}=1$.}
\label{fig3}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=3$]{
\label{fig4a}\includegraphics[width=\textwidth]{14.eps}\qquad}
\end{center}\end{minipage}\hskip+0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=4$]{
\label{fig4b}\includegraphics[width=\textwidth]{15.eps}\qquad}
\end{center}\end{minipage}\hskip0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=3$]{
\label{fig4c}\includegraphics[width=\textwidth]{16.eps}\qquad}
\end{center}\end{minipage}\hskip0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=4$]{
\label{fig4d}\includegraphics[width=\textwidth]{17.eps}\qquad}
\end{center}\end{minipage}\hskip+0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$p=(n+1)/4$]{
\label{fig4e}\includegraphics[width=\textwidth]{18.eps}\qquad}
\end{center}\end{minipage}\hskip0cm
\end{center}
\caption{The behavior of the real part of the conductivity $\protect\sigma $
versus $\protect\omega /T$ for $m=1$ with $l=r_{+}=1$, $c_{0}=1$, $c_{1}=-1$
and $c_{2}=0$.}
\label{fig4}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=3$]{
\label{fig5a}\includegraphics[width=\textwidth]{19.eps}\qquad}
\end{center}\end{minipage}\hskip+0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=4$]{
\label{fig5b}\includegraphics[width=\textwidth]{20.eps}\qquad}
\end{center}\end{minipage}\hskip0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=3$]{
\label{fig5c}\includegraphics[width=\textwidth]{21.eps}\qquad}
\end{center}\end{minipage}\hskip0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$n=4$]{
\label{fig5d}\includegraphics[width=\textwidth]{22.eps}\qquad}
\end{center}\end{minipage}\hskip+0cm
\begin{minipage}[b]{0.32\textwidth}\begin{center}
\subfigure[~$p=(n+1)/4$]{
\label{fig5e}\includegraphics[width=\textwidth]{23.eps}\qquad}
\end{center}\end{minipage}\hskip0cm
\end{center}
\caption{The behavior of the imaginary part of the conductivity $\protect\sigma $
versus $\protect\omega /T$ for $m=1$ with $l=r_{+}=1$, $c_{0}=1$, $c_{1}=-1$
and $c_{2}=0$.}
\label{fig5}
\end{figure*}
\section{Holographic conductivity\label{conductivity}}
In this section, we will obtain the electrical transport behavior of the
dual field theory in the presence of a power-law Maxwell gauge field. In
order to do this, one should use the black brane solution ($k=0$)
found in the previous section. First, we investigate the effects of
power-law Maxwell electrodynamics on the holographic conductivity of dual
systems in which momentum is conserved ($m=0$). Next, we consider the
solutions dual to systems which no longer possess momentum conservation
($m\neq 0$).
\subsection{Vanishing $m$}
The planar ($n+1$)-dimensional metric can be rewritten as
\begin{equation}
ds^{2}=-\mathcal{F}(u)dt^{2}+l^{2}\mathcal{F}(u)^{-1}u^{-4}du^{2}+l^{2}u^{-2}\sum_{i=1}^{n-1}dx_{i}^{2},
\end{equation}
which is obtained by defining $u=lr^{-1}$ in the metric (\ref{Metric}).
Accordingly, the event horizon of the black brane is at $u_{+}=lr_{+}^{-1}$ and
the $n$-dimensional thermal field theory lives at $u=0$. The metric function
of the spacetime in the absence of the massive parameter is
\begin{equation}
\mathcal{F}(u)=-m_{0}l^{2-n}u^{n-2}+u^{-2}+2^{p}q^{2p}(2p-1)^{2}(n-1)^{-1}(n-2p)^{-1}\left[ l^{-1}u\right] ^{2(np-3p+1)/(2p-1)},
\end{equation}
obtained by substituting $r=lu^{-1}$ and $k=0$ in Eq. (\ref{f0}). Perturbing
the vector potential component $A_{x}$ and the metric component $g_{tx}$ by
turning on $a_{x}(u)e^{-i\omega t}$ and $g_{tx}(u)e^{-i\omega t}$,
respectively, we can easily derive two linear equations of motion: for the
electrodynamics
\begin{equation}
a_{x}^{\prime \prime }+\left( (8p-n-3)\left( 2p-1\right) ^{-1}u^{-1}+\mathcal{F}^{\prime }\mathcal{F}^{-1}\right) a_{x}^{\prime }+l^{2}\omega ^{2}u^{-4}\mathcal{F}^{-2}a_{x}+h^{\prime }\mathcal{F}^{-1}\left( g_{tx}^{\prime }+2u^{-1}g_{tx}\right) =0, \label{ax1}
\end{equation}
and for gravity
\begin{equation}
g_{tx}^{\prime }+2u^{-1}g_{tx}+2^{p+1}ph^{\prime }\left(
u^{4}l^{-2}h^{\prime 2}\right) ^{p-1}a_{x}=0, \label{ax2}
\end{equation}
where now the prime denotes the derivative with respect to $u$, and $h(u)$ is the
electric potential of the form
\begin{equation}
h(u)=\mu +\frac{q(2p-1)u^{(n-2p)/(2p-1)}}{(n-2p)l^{(n-2p)/(2p-1)}},
\end{equation}
which is obtained by transforming $r\rightarrow lu^{-1}$ in Eq. (\ref{potential}). By eliminating $g_{tx}$ between Eqs. (\ref{ax1}) and (\ref{ax2}), the differential equation for $a_{x}$ is
\begin{eqnarray}
a_{x}^{\prime \prime }+\left( (8p-n-3)\left( 2p-1\right) ^{-1}u^{-1}+\mathcal{F}^{\prime }\mathcal{F}^{-1}\right) a_{x}^{\prime } && \notag \\
+\,a_{x}\mathcal{F}^{-1}\left( l^{2}\omega ^{2}u^{-4}\mathcal{F}^{-1}-2^{p+1}ph^{\prime 2}\left( u^{4}l^{-2}h^{\prime 2}\right) ^{p-1}\right) &=&0.
\end{eqnarray}
The behavior of the above relation near the boundary ($u\rightarrow 0$) is
\begin{equation}
a_{x}^{\prime \prime }+(4p-n-1)\left( 2p-1\right) ^{-1}u^{-1}a_{x}^{\prime
}+\cdots =0, \label{deqax}
\end{equation}
which has the following solution
\begin{equation}
a_{x}(u)=a_{1}+a_{2}u^{(n-2p)/(2p-1)}+\cdots ,
\end{equation}
where $a_{1}$ and $a_{2}$ are two constant parameters. To calculate the
expectation value of the current for the boundary theory, we can use the following
formula \cite{hartnoll,tong}
\begin{equation}
\left\langle J_{x}\right\rangle =\left. \frac{\partial \mathcal{L}}{\partial
\left( \partial _{u}\delta a_{x}\right) }\right\vert _{u=0},
\end{equation}
where $\delta a_{x}=a_{x}(u)e^{-i\omega t}$ and $\mathcal{L}$ was given in
Eq. (\ref{DL}). So, it is obvious that the holographic conductivity can be
obtained as
\begin{equation}
\sigma =\frac{\left\langle J_{x}\right\rangle }{E_{x}}=-\frac{\left\langle J_{x}\right\rangle }{\partial _{t}\delta a_{x}}=-\frac{i\left\langle J_{x}\right\rangle }{\omega \delta a_{x}}=\frac{2^{p-3}p(n-2p)q^{2(p-1)}a_{2}}{(2p-1)\pi i\omega a_{1}}. \label{conduc}
\end{equation}
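Two algebraic properties of this result can be checked quickly: the exponent $(n-2p)/(2p-1)$ indeed solves the near-boundary equation (\ref{deqax}), and the conductivity (\ref{conduc}) collapses to the familiar $p=1$, $n=3$ expression. The following is only an illustrative numerical sketch of these two checks (the function names and sample parameter values are ours, not part of the derivation):

```python
import math

def sigma(p, n, q, omega, a1, a2):
    """Eq. (conduc): sigma = 2^{p-3} p (n-2p) q^{2(p-1)} a2 / ((2p-1) pi i omega a1)."""
    return (2**(p - 3) * p * (n - 2*p) * q**(2*(p - 1)) * a2
            / ((2*p - 1) * math.pi * 1j * omega * a1))

def residual(n, p, u):
    """Plug a_x = u^s with s = (n-2p)/(2p-1) into the near-boundary ODE (deqax)."""
    s = (n - 2*p) / (2*p - 1)
    # a'' + (4p-n-1)/(2p-1) * a'/u, evaluated term by term
    return s*(s - 1)*u**(s - 2) + (4*p - n - 1)/(2*p - 1) * s * u**(s - 2)

# (i) the exponent solves the boundary equation for generic n, p
print(residual(4.0, 1.3, 0.7))   # ~0

# (ii) for n=3, p=1 the conductivity reduces to a2/(4 pi i omega a1)
lhs = sigma(p=1, n=3, q=2.5, omega=0.8, a1=1.2, a2=0.4)
rhs = 0.4 / (4 * math.pi * 1j * 0.8 * 1.2)
print(abs(lhs - rhs))            # ~0
```

The cancellation in check (i) is exact: $s-1 = (n-4p+1)/(2p-1)$ is precisely the negative of the friction coefficient $(4p-n-1)/(2p-1)$.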
It is easy to show that the holographic conductivity (\ref{conduc}) reduces
to $\sigma =a_{2}/\left( 4\pi i\omega a_{1}\right) $ for $n=3$ and $p=1$
\cite{vegh,hartnoll}. In Figs. \ref{fig2a} and \ref{fig3a}, the behaviors of
real and imaginary parts of holographic conductivity for linear Maxwell case
($p=1$) are illustrated as a function of $\omega /T$\ and for various values
of the charge of the black brane $q$ for $n=3$. These figures show that the real
part of the conductivity $\mathrm{Re}[\sigma ]$\ decreases as $q$\ increases
(temperature decreases) for $\omega \rightarrow 0$ (Fig. \ref{fig2a}). Our
numerical computations show that $\mathrm{Re}[\sigma ]$\ diverges at $\omega
=0$\ independent of the value of the charge parameter $q$. Also, the maximum
value of $\mathrm{Re}[\sigma ]$\ is greater for greater $q$'s. We observe
that $\mathrm{Re}[\sigma ]$\ tends to a constant for high frequencies
independent of the value of the charge parameter. Next, we turn to the
imaginary part of the conductivity $\mathrm{Im}[\sigma ]$ plotted in Fig.
\ref{fig3a}. The imaginary part of the conductivity has a minimum for different
charges. This minimum is deeper for larger charges (lower temperatures). At
$\omega =0$, the imaginary part of the conductivity $\mathrm{Im}[\sigma ]$ diverges
(Fig. \ref{fig3a}). This is consistent with our numerical computation, which shows
that the real part of the conductivity blows up at zero frequency, in accordance with the
Kramers-Kronig relation. For high frequencies, the imaginary part of the
conductivity vanishes independent of the value of the charge. In Figs. \ref{fig2b} and \ref{fig3b}, the behaviors of real and imaginary parts of
holographic conductivity for linear Maxwell in terms of frequency for
different values of the black brane's charge $q$ for $n=4$ are depicted. For low
frequencies, the behavior of the holographic conductivity is the same as in the
$n=3$ case. However, for high frequencies the behaviors are different. In the $n=3$
case, the real (imaginary) part of the conductivity tends to a constant for high
frequencies, whereas in the $n=4$ case it increases (decreases) as $\omega $
increases.
Now, we intend to study the effect of nonlinearity of the electrodynamics
(power parameter $p$ of the power-law Maxwell field) on holographic
conductivity. Figs. \ref{fig2c}, \ref{fig2d}, \ref{fig3c} and \ref{fig3d}
show the behavior of $\mathrm{Re}[\sigma ]$\ and $\mathrm{Im}[\sigma ]$\ as
a function of $\omega /T$\ for different values of $p$\ (restricted by
$1/2<p<n/2$) for $n=3$ and $4$. In the $\omega \rightarrow 0$ limit,
increasing $p$ leads to a smaller $\mathrm{Re}[\sigma ]$. For high
frequencies, $\mathrm{Re}[\sigma ]$\ increases (decreases) as a linear
function of $\omega /T$\ and its slope increases (decreases) as $p$
decreases (increases) for $p<(n+1)/4$\ ($p>(n+1)/4$). For $p=(n+1)/4$,
$\mathrm{Re}[\sigma ]$ and $\mathrm{Im}[\sigma ]$ tend to a constant for high
frequencies, as one can see in Figs. \ref{fig2e} and \ref{fig3e}. The above
behaviors show that for high frequencies \textrm{Re}$\left[ \sigma \right]
\propto \omega ^{a}$ where $a\propto n+1-4p$. This result is important from a
holographic point of view since similar results can be found in experimental
observations \cite{7,8}. In \cite{7}, for a ($2+1$)-dimensional graphene
system, it was reported that the value of \textrm{Re}$[\sigma ]$\ tends to a
constant for large frequencies. We observed such a behavior in the
conformally invariant case, $p=(n+1)/4$. For the conductivity of a
($2+1$)-dimensional single-layer graphene induced by mild oxygen plasma exposure,
a positive slope with respect to frequency for high frequencies has been
reported in \cite{8}. We observed similar behavior for the conductivity in the case
of $p<(n+1)/4$. For all values of $p$, we see that $\mathrm{Im}[\sigma ]$
blows up at zero frequency (Figs. \ref{fig3c} and \ref{fig3d}). For high
frequencies, the imaginary part of the conductivity decreases for low values of $p$,
whereas it flattens for bigger $p$'s.
\subsection{Nonvanishing $m$}
Now, we intend to demonstrate the influence of the power-law Maxwell parameter
$p$ on the holographic conductivity in massive gravity theory. Employing
again $r\rightarrow lu^{-1}$ and setting $k=0$ in (\ref{Metricfunction}), we
obtain
\begin{eqnarray}
\mathcal{F}(u)
&=&-m_{0}l^{2-n}u^{n-2}+u^{-2}+2^{p}q^{2p}(2p-1)^{2}(n-1)^{-1}(n-2p)^{-1}\left[ l^{-1}u\right] ^{2(np-3p+1)/(2p-1)} \notag \\
&&+(n-1)^{-1}c_{0}m^{2}lu^{-1}\left(
c_{1}+(n-1)l^{-1}c_{0}c_{2}u+(n-1)(n-2)l^{-2}c_{0}^{2}c_{3}u^{2}+(n-1)(n-2)(n-3)l^{-3}c_{0}^{3}c_{4}u^{3}\right) .
\end{eqnarray}
Hereafter, we should perturb the gauge field and the metric by turning on
$a_{x}(u)e^{-i\omega t}$, $g_{tx}(u)e^{-i\omega t}$ and $g_{ux}(u)e^{-i\omega
t}$. In the linear regime, we have three independent differential equations:
for the gauge field
\begin{equation}
\left( \mathcal{F}a_{x}^{\prime }\right) ^{\prime }+(8p-n-3)\left( 2p-1\right) ^{-1}u^{-1}\mathcal{F}a_{x}^{\prime }+l^{2}\omega ^{2}u^{-4}\mathcal{F}^{-1}a_{x}+h^{\prime }\left( g_{tx}^{\prime }+2u^{-1}g_{tx}+i\omega g_{ux}\right) =0, \label{axm}
\end{equation}
and for massive gravity
\begin{equation}
g_{tx}^{\prime }+2u^{-1}g_{tx}+i\omega g_{ux}+2^{p+1}ph^{\prime }\left(
u^{4}l^{-2}h^{\prime 2}\right) ^{p-1}a_{x}+ic_{0}l^{-2}\omega ^{-1}u^{2}\Xi
\mathcal{F}g_{ux}=0, \label{gtx}
\end{equation}
\begin{equation}
g_{tx}^{\prime \prime }+(5-n)u^{-1}g_{tx}^{\prime
}-2(n-2)u^{-2}g_{tx}+i\omega g_{ux}^{\prime }+2^{p+1}ph^{\prime }\left(
u^{4}l^{-2}h^{\prime 2}\right) ^{p-1}a_{x}^{\prime }-i(n-3)\omega
u^{-1}g_{ux}+c_{0}\Xi u^{-2}\mathcal{F}^{-1}g_{tx}=0, \label{gux}
\end{equation}
in which
\begin{equation}
\Xi =m^{2}\left(
c_{1}lu^{-1}+2(n-2)c_{0}c_{2}+3(n-3)(n-2)c_{0}^{2}c_{3}l^{-1}u+4(n-4)(n-3)(n-2)c_{0}^{3}c_{4}l^{-2}u^{2}\right) .
\end{equation}
Eliminating $g_{tx}$ between Eqs. (\ref{axm}), (\ref{gtx}) and (\ref{gux}),
one arrives at the two following second-order differential equations
\begin{eqnarray}
\left( \mathcal{F}a_{x}^{\prime }\right) ^{\prime }+ &&(8p-n-3)\left(
2p-1\right) ^{-1}u^{-1}\mathcal{F}a_{x}^{\prime } \notag \\
+ &&\left[ l^{2}\omega ^{2}u^{-4}\mathcal{F}^{-1}-2^{p+1}ph^{\prime 2}\left(
u^{4}l^{-2}h^{\prime 2}\right) ^{p-1}\right] a_{x}-ic_{0}l^{-2}\omega ^{-1}\mathcal{F}h^{\prime }\Xi u^{2}g_{ux}=0, \label{Eqax}
\end{eqnarray}
\begin{eqnarray}
l^{-2}u^{-2}\left( u^{4}\Xi ^{-1}\mathcal{F}\left( u^{2}\Xi \mathcal{F}g_{ux}\right) ^{\prime }\right) ^{\prime }-i2^{p+1}p\omega c_{0}^{-1}u^{-2}\left[ \Xi ^{-1}u^{4}\mathcal{F}a_{x}\left( h^{\prime }\left(
u^{4}l^{-2}h^{\prime 2}\right) ^{p-1}\right) ^{\prime }\right] ^{\prime } &&
\notag \\
+i(n-3)2^{p+1}p\omega c_{0}^{-1}u^{-2}\left[ \Xi ^{-1}u^{3}\mathcal{F}a_{x}h^{\prime }\left( u^{4}l^{-2}h^{\prime 2}\right) ^{p-1}\right] ^{\prime }-(n-3)l^{-2}u^{-2}(u^{5}\mathcal{F}^{2}g_{ux})^{\prime } && \notag \\
+\omega ^{2}g_{ux}-i2^{p+1}p\omega h^{\prime }\left( u^{4}l^{-2}h^{\prime 2}\right) ^{p-1}a_{x}+c_{0}u^{2}l^{-2}\Xi \mathcal{F}g_{ux} &=&0.
\end{eqnarray}
One can show that the solution of the differential equation (\ref{Eqax}) near the
boundary ($u\rightarrow 0$) is
\begin{equation}
a_{x}^{\prime \prime }+(4p-n-1)\left( 2p-1\right) ^{-1}u^{-1}a_{x}^{\prime
}+\cdots =0,
\end{equation}
which is the same as (\ref{deqax}), and the holographic conductivity also has
the same form as (\ref{conduc}). To solve the above differential equations
numerically, we impose incoming boundary conditions at the horizon
\begin{equation}
a_{x}(u),\text{ }g_{ux}(u)\propto (u_{+}-u)^{-i\omega /4\pi T},
\end{equation}
where $T$ is the Hawking temperature.
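In practice, the ingoing condition is imposed by seeding the integration slightly off the horizon with this pure-phase factor and its $u$-derivative. A minimal sketch of such seed data (the helper name, the expansion order, and the sample values are our own illustration, not from the text):

```python
import math

def ingoing_seed(omega, T, u_plus, eps=1e-6):
    """Leading-order ingoing data (a_x, a_x') at u = u_plus - eps,
    from the ansatz a_x ~ (u_plus - u)^{-i omega/(4 pi T)}."""
    nu = -1j * omega / (4 * math.pi * T)
    a = eps**nu                       # complex power of a real eps > 0
    a_prime = -nu * eps**(nu - 1)     # d/du of (u_plus - u)^nu at u = u_plus - eps
    return a, a_prime

a, ap = ingoing_seed(omega=1.0, T=0.5, u_plus=1.0)
print(abs(a))   # ~1: a pure phase, since |eps^{i x}| = 1 for real eps > 0
```

Integrating outward from this seed and reading off $a_1$ and $a_2$ from the near-boundary expansion then gives the conductivity via (\ref{conduc}).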
In Figs. \ref{fig4} and \ref{fig5}, we depict the holographic conductivity
for ($2+1$)- and ($3+1$)-dimensional dual systems including momentum
dissipation in the presence of linear Maxwell and nonlinear electrodynamics.
Fig. \ref{fig5} shows that the imaginary part of the conductivity near zero
frequency does not have a diverging behavior in the presence of momentum
dissipation. Consequently, according to the Kramers-Kronig relation, the real
part of the conductivity does not diverge at $\omega =0$ and includes a Drude
peak (in contrast with the case of the previous subsection with no momentum
dissipation, where the imaginary part of the conductivity blows up at zero frequency
and accordingly the real part diverges there). Also, the real part of the DC
conductivity becomes larger as $q$\ ($p$) increases. For high frequencies,
the behaviors of the real and imaginary parts of the conductivity for $n=3$ and $4$
in terms of the black brane charge $q$ and the nonlinear parameter $p$ are similar
to the case of the previous subsection with no momentum dissipation.
\section{Closing remarks\label{Clos}}
A gravity theory called massive gravity \cite{vegh} was proposed in order to
describe a class of strongly interacting quantum field theories with broken
translational symmetry via a holographic principle. In this letter, we
considered the massive gravity theory when the gauge field is in the form of
power-Maxwell electrodynamics. First, we derived a class of higher-dimensional topological black hole solutions of this theory. Then, we
calculated the conserved and thermodynamic quantities of the system and checked
that these quantities satisfy the first law of black hole thermodynamics on
the horizon.
The main purpose of this letter is to investigate the electrical transport
behavior of the dual field theory in the presence of a power-law Maxwell
gauge field for the obtained solutions. In order to clarify the effects of
the massive gravity on the holographic conductivity, we have first
considered the holographic conductivity of the dual systems in which
momentum is conserved ($m=0$). Then, we have extended our study to the case
where translational symmetry is broken and consequently the system no longer
possesses momentum conservation ($m\neq 0$). For both cases, we have plotted
the behaviour of the real and imaginary parts of the holographic
conductivity in terms of the frequency per temperature ($\omega /T$)\ for
($2+1$)- and ($3+1$)-dimensional dual systems. In the former case ($m=0$), we
observed that the real part of conductivity $\mathrm{Re}[\sigma ]$\ for $n=3$
decreases as $q$\ increases (temperature decreases) for $\omega \rightarrow
0 $. Besides, $\mathrm{Re}[\sigma ]$\ has a maximum which is greater for
greater charges. Also, $\mathrm{Re}[\sigma ]$ tends to a constant for high
frequencies independent of the value of charge. In addition, the imaginary
part of conductivity $\mathrm{Im}[\sigma ]$ diverges as $\omega \rightarrow
0 $. For high frequencies, the imaginary part of conductivity vanishes
independent of the value of charge. The low frequencies behavior of
holographic conductivity for $n=4$\ is the same as the case of $n=3$. For
high frequencies, in contrast with $n=3$, the real (imaginary) part of the
conductivity increases (decreases) as $\omega $ increases for
$n=4$. Next, we explored the effect of the power-law Maxwell field on
holographic electrical transport. We observed that increasing $p$\ leads to
the smaller $\mathrm{Re}[\sigma ]$ for $\omega \rightarrow 0$\ while for
high frequencies $\mathrm{Re}\left[ \sigma \right] \propto \omega ^{a}$\
where $a\propto (n+1-4p)$. Similar results for high frequencies can be found
in experimental observations on ($2+1$)-dimensional graphene systems \cite{7,8}. This is important from a holographic point of view.
In the latter case ($m\neq 0$), we found that the imaginary part of the
DC conductivity, $\mathrm{Im}[\sigma ]$, is zero at $\omega =0$ and becomes
larger as $q$\ increases (temperature decreases). This is in contrast to the
case without momentum dissipation. It also has a maximum value for $\omega
\neq 0$ which increases with increasing $q$ (with fixed $p$) or increasing
$p$ (with fixed $q$) for $n=3$. For the real part of the conductivity,
$\mathrm{Re}[\sigma ]$, we see that in the case of $p=1$ the maximum value (Drude
peak) is achieved at $\omega =0$. Again this is in contrast to the former case
($m=0$) in which the minimum value of $\mathrm{Re}[\sigma ]$ occurs for
$\omega \rightarrow 0$. For different values of the power parameter, $p$, the
real and imaginary parts of the conductivity have a relative minimum and
maximum, respectively. Finally, we observed that both real and imaginary
parts of the holographic conductivity are similar to the previous case for
high frequencies.
In this work, we obtained the conductivity by applying linear response
theory, where the electric field is treated as a probe. This may restrict the
study from fully capturing the effects of the nonlinearity of the electrodynamics
model. Therefore, it is an interesting issue for future research to
consider the case where the properties of the system are functions of the
electric field. In such a case, a nonlinear response occurs. Some examples of
such studies can be found in the literature in Refs. \cite{nonlinres0,nonlinres3,nonlinres4,nonlinres5,nonlinres1,nonlinres2}.
\begin{acknowledgments}
AD and AS thank the Research Council of Shiraz University. MKZ would like to
thank Shanghai Jiao Tong University for the warm hospitality during his
visit. This work has been financially supported by the Research Institute
for Astronomy \& Astrophysics of Maragha (RIAAM), Iran.
\end{acknowledgments}
\section{Introduction}
Self-driving is a complex task, therefore the standard approach is to divide it into separate modules. A typical modular pipeline consists of several modules focusing on different aspects of the problem such as perception, prediction, planning, and control.
In this work, we assume that the perceptual input including the detections, the tracks, and the map information is provided in a bird's eye view representation, and focus on prediction by proposing a motion forecasting algorithm.
Motion forecasting is the problem of predicting the future location of traffic agents for safe navigation. This requires understanding the scene by representing the map information as well as the interactions of agents with the scene and with each other. Furthermore, there are multiple plausible future scenarios which need to be considered by the following planner module in the stack. %
Previous work on motion forecasting mostly focuses on learning representations of the scene, specifically the map and the agent history, i.e., the previous locations of agents. While early attempts~\cite{Tang2019NeurIPS, Lee2017CVPR, Chai2019arXiv, Cui2019ICRA, Phan2020CVPR, Rhinehart2019CVPR, Casas2018CoRL, Hong2019CVPR, Biktairov2020NeurIPS} create a rasterized representation that can easily be processed with a 2D Convolutional Neural Network~(CNN), recent work mostly focuses on the spatial aspect with a lane graph~\cite{Liang2020ECCV, Gilles2021arXiv, Zeng2021IROS} or vector representations~\cite{Gao2020CVPR, Zhao2020arXiv, Gu2021ICCV}. An explicit representation of the topology and interactions with a Graph Neural Network~(GNN) leads to better representations of the surrounding environment as well as the agent interactions. In this work, we adopt the vectorized representation for the spatial aspect of the scene and then focus on the temporal aspect.
The temporal aspect is mostly ignored in motion forecasting by simply dividing the time into two: the present with the information up to now and the future to be predicted. Typically, the history of each agent is independently encoded with a recurrent neural network~\cite{Alahi2016CVPR, Gupta2018CVPR, Khandelwal2020arXiv, Mercat2020ICRA, Buhet2020CoRL, Park2020ECCV, Salzmann2020ECCV} or simply with a 1D CNN~\cite{Liang2020ECCV, Gilles2021ITSC, Gilles2021arXiv, Zeng2021IROS}. This approach fails to represent the evolving interactions between the agents and their relation with the scene elements through time. We claim that learning temporal dynamics plays a crucial role in prediction. Consider a scenario where two vehicles approach an intersection: their jointly evolving speed histories decide their future locations, for example one of them slowing down and letting the other continue with the turn at its current speed. Our results show improvements in these scenarios where the temporal dynamics are crucial for future prediction.
Ye et~al.~\cite{Ye2021CVPR} recently proposed to model the temporal aspect by focusing on the dynamics of the agent of interest only. While this improves the results, it overlooks the dynamics in the other parts of the scene that might still be relevant for predicting the next location of the agent of interest.
We propose to learn a temporal graph representation that is aware of the entire scene dynamics.
We construct a temporal graph representing the dynamic scene with agents moving and interacting with the scene and with each other. We dynamically update the features of each scene element in a way informed by the other scene elements such as the other agents in the scene and nearby road segments. While these interactions are modelled with a static GNN in previous work, we model the interactions on the graph temporally by considering the time axis in the updates. In addition, we introduce two memory modules; one specific to the agent of interest and another to the entire scene. Our experiments show the importance of dynamic updates and the two types of memory modules.
Another aspect in motion forecasting is the multi-modality which can be addressed by predicting multiple futures. While there are various approaches that predict a heatmap~\cite{Gilles2021ITSC, Gilles2021arXiv, Gilles2021ICLR} or learn a distribution~\cite{Tang2019NeurIPS, Huang2020RAL, Lee2017CVPR, Yuan2019ICLR, Gupta2018CVPR, Deo2018IV} to sample from, we follow a simpler approach that is commonly used in the literature by generating a set of predictions and applying the loss only on the closest one during training.
Common metrics used in motion forecasting evaluate both the quality and the diversity of endpoint predictions. As shown in recent work~\cite{Zhao2020arXiv, Gilles2021ITSC}, addressing these two aspects together is challenging. While one option is to develop separate objectives optimizing each metric, we instead focus on learning representations that are good at predicting accurate endpoint distributions without sacrificing diversity.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{gfx/FTGN_pipeline_figure.pdf}
\caption{\textbf{Temporal Graph Learning for Motion Forecasting.} We learn a dynamic scene representation where each timestamp is encoded as a temporal graph. We keep track of changes to the agent of interest with a sequential memory and to the entire scene with a scene memory.
We generate goal proposals by using both the scene memory and the motion information related to the agent of interest.
Finally, we predict the full trajectories conditioned on the refined goal locations.}
\label{fig:overview}
\vspace{-0.35cm}
\end{figure}
\section{Related Work}
\label{sec:rw}
\boldparagraph{Context Representation}
Representing the context, i.e., the surrounding environment, is an important aspect of motion forecasting. Typically, the context consists of a map in 2D bird's eye view (BEV) representation as well as the past trajectories of agents on the map. There are two types of approaches to context representation: rasterized and vectorized.
In the rasterized representation, the context is rasterized and encoded with a 2D CNN. Despite the convenience of CNNs, especially when predicting a heatmap~\cite{Gilles2021ITSC}, a 2D raster image cannot explicitly represent the complex topology of the map, such as long-range connectivity and the hierarchy between scene elements, due to the limited receptive field size.
Vector representations, initially proposed in VectorNet~\cite{Gao2020CVPR}, can capture the complex topology of road networks as well as the spatial locality of semantic entities with a hierarchical representation. Several follow-up works, including ours~\cite{Zhao2020arXiv, Gu2021ICCV, Liu2021CVPR}, build on VectorNet to represent the context. Differently, we also learn a temporal representation of the scene.
\boldparagraph{Temporal Encoding} Rossi~et~al. propose to learn changes on a massive graph, e.g., Wikipedia or Twitter, with a temporal graph neural network~\cite{rossi2020temporal}. We also propose to learn a temporal graph representation but for multiple, smaller graphs representing scenes observed through limited time intervals. The number of time steps is smaller but the changes are more dynamic through interactions between agents.
Recent work called TPCN~\cite{Ye2021CVPR} shows the importance of learning temporal relations in motion forecasting. Similar to us, in TPCN there is a spatial module for a global representation and a temporal module for learning dynamics. Differently, multi-interval learning for the temporal representation is specialized to the agent of interest in TPCN, whereas in our case, we learn temporal relations between all the entities. This way, the learned dynamics can still help even when the interactions of the agent of interest are limited but there are other moving agents in the scene. Besides, predictions should be informed by the scene dynamics occurring in regions spatially farther away from the agent.
\boldparagraph{Goal-Conditioned Prediction}
Earlier methods~\cite{Liang2020ECCV, Gao2020CVPR, Ye2021CVPR, Huang2021arXiv, Zeng2021IROS, Mercat2020ICRA, Song2021CoRL} directly predict $K$ full trajectories based on the features of the agent of interest. However, this approach may fail to cover diverse future locations on the map since it only focuses on the agent of interest. Some methods~\cite{Park2020ECCV} follow an auto-regressive approach which may lead to drift due to accumulating error in consecutive timestamps. Another line of work~\cite{Zhao2020arXiv, Liu2021CVPR, Gu2021ICCV, Gilles2021ITSC, Gilles2021ICLR, Gilles2021arXiv, Zhang2021arXiv} first predicts the endpoint of the future trajectory and then, conditioned on the predicted endpoint, the whole trajectory is predicted. We also follow this target-based approach because predicting the trajectory is mostly straightforward once the target endpoint is identified.
The target-based methods~\cite{Zhao2020arXiv, Liu2021CVPR, Gu2021ICCV} typically follow a two-stage approach. First, a distribution over target locations is predicted, either to find the closest lane to the endpoint~\cite{Zhao2020arXiv} or densely over all possible locations on a grid~\cite{Gu2021ICCV}. Finding the closest lane is typically not accurate enough to locate the target point, therefore an offset is also predicted with respect to the lane~\cite{Zhao2020arXiv}. In this work, we show that our learned temporal representation is capable of directly regressing the target locations without scoring lanes or dense grid locations. Another option is to predict a heatmap representing the probability distribution of the target location~\cite{Gilles2021ITSC, Gilles2021ICLR, Gilles2021arXiv}. In heatmap-based approaches, the sampling strategy becomes very important. While it can be optimized for very low miss rates, it is difficult to optimize it together with the endpoint accuracy.
\section{Conclusion, Limitations, and Future Work}
We propose a temporal graph representation for motion forecasting with two types of memory modules. Our method is among the top-performing methods on Argoverse, especially in terms of the official metric b-minFDE which measures the quality of the distributions.
We address diversity with a simple goal conditioning; therefore, our method is not among the top-performing methods in terms of MR. In the future, we plan to focus on diversity by extending our temporal graph to a probabilistic formulation.
Contemporary work~\cite{Ngiam2021ICLR} shows the importance of learning a holistic representation for the scene rather than focusing on a single agent. In this work, we still focus on a single agent in prediction but we consider temporal relations for the whole scene. An interesting future work can build on our temporal representation to improve predictions for all the agents in the scene.
In motion forecasting, algorithms rely on map information and perception input which may not always be available in real-world deployment.
\section{Methodology}
Given the past states of agents in the scene and an HD map of the environment, our goal is to predict the future locations of the agent of interest. Our approach illustrated in \figref{fig:overview} is based on the following observation: traffic scenes consist of a dynamic part with agents moving through time and a static context which remains unchanged except for the interactions with the dynamic part. We first build a holistic representation of the static part and then model the dynamics as a temporal graph on top of it.
\subsection{Scene Encoding}
\subsubsection{Temporal Graph Representation}
\label{sec:temporal_graph_rep}
We construct a dynamic graph to represent the state of the scene at time $t$ as a graph $\mathcal{G}_t = \{\mathcal{V}_t, \mathcal{E}_t \}$ where $\mathcal{V}_t$ and $\mathcal{E}_t$ denote the set of vertices and undirected edges on the graph. Each vertex $\mathbf{v}^i_t \in \mathcal{V}_t$ corresponds to a lane segment or an agent $i$ at time $t$. Due to the cost of a fully connected graph at each time step, we selectively build two types of edges between different types of nodes. We first connect each agent to the lane segments in their vicinity based on a threshold. The undirected edge between an agent and the surrounding lane segments allows to represent the current location of an agent on the map as well as the occupancy of map locations through time. We also connect the agent of interest to all the other agents at that timestamp to model dynamic interactions between agents.
Let $\mathbf{F}_t$ denote the feature matrix at time $t$, where each row corresponds to a vertex $\mathbf{v}^i_t$, and let $d_k$ denote the dimension of the key features. We perform dynamic updates on the features through time using self attention \cite{Vaswani2017NeurIPS} between the connected nodes:
\begin{align}
\hat{\mathbf{F}}_t = \mathrm{softmax}\left(\dfrac{\mathbf{F}_{t-1}^Q \left(\mathbf{F}_{t-1}^K\right)^T}{\sqrt{d_k}} \right) \mathbf{F}_{t-1}^V
\end{align}
where $\mathbf{F}_{t-1}^Q, \mathbf{F}_{t-1}^K$ and $\mathbf{F}_{t-1}^V$ are linear transformations of the feature matrix from the previous timestamp.
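For concreteness, this dynamic update is a standard scaled dot-product self-attention over the node features of the previous timestamp. A minimal numpy sketch (the random weight matrices stand in for the learned linear maps and are purely illustrative):

```python
import numpy as np

def temporal_update(F_prev, Wq, Wk, Wv):
    """One dynamic feature update: F_hat_t = softmax(Q K^T / sqrt(d_k)) V,
    with Q, K, V linear transformations of the previous feature matrix."""
    Q, K, V = F_prev @ Wq, F_prev @ Wk, F_prev @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # each row sums to 1
    return attn @ V

rng = np.random.default_rng(0)
N, d = 5, 8                                        # nodes, feature dimension
F_prev = rng.normal(size=(N, d))
F_hat = temporal_update(F_prev, *(rng.normal(size=(d, d)) for _ in range(3)))
print(F_hat.shape)   # (5, 8)
```

In the full model this update is restricted to connected node pairs; the dense version above is the simplest fully connected special case.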
After initializing the node features from VectorNet, we accumulate temporal information to learn the dynamics.
We explicitly encode the time information to form the final feature matrix $\mathbf{F}_t$:
\begin{align}
\label{eq:temp_feat}
\mathbf{f}^i_t = g_1 \left(\hat{\mathbf{f}}^i_t + \varphi_{\mathrm{time}}(t)\right)
\end{align}
where $\mathbf{f}^i_t$ denotes the features of the agent $i$ at time $t$, $\varphi_{\mathrm{time}}(\cdot)$ is a time encoder as proposed in \cite{rossi2020temporal} and $g_1$ is simply a two-layer MLP.
\subsubsection{Memory Modules}
\label{sec:memory_modules}
An important aspect of learning dynamics is building a memory to remember necessary information from past steps. In Temporal Graph Networks~\cite{rossi2020temporal}, Rossi et~al. propose to keep track of changes with a memory for every node and edge on the graph. While this is shown to be crucial for node or edge addition or removal tasks in the case of TGN, such a fine-grained memory module is not only infeasible in our case but also excessive for predicting the trajectory of a single agent of interest. On the other hand, to predict the future reliably, the agent of interest needs to remember the changes in its representation as well as the changes to the whole scene through time. Therefore, we consider two types of memory modules in our temporal graph representation: one for the agent of interest and another for the whole scene.
Given the temporal features of the agent of interest $\mathbf{f}_t$ at time $t$, we keep track of changes to its representation sequentially with a GRU:
\begin{align}
\label{eq:mem_agent}
\mathbf{h}_t^{\mathrm{seq}} = \mathrm{GRU}\left(\mathbf{f}_t, \mathbf{h}_{t-1}^{\mathrm{seq}}\right)
\end{align}
where $\mathbf{h}_t^{\mathrm{seq}}$ refers to the hidden state of the GRU.
Note that we drop the superscript on the node features as we build the sequential memory model only for the agent of interest.
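A single GRU update of the form used in \eqref{eq:mem_agent} can be sketched as follows (the random weights are stand-ins for the learned parameters; this is an illustration, not our exact implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(f_t, h_prev, W, U, b):
    """One GRU memory update h_t = GRU(f_t, h_{t-1}).
    W, U, b each stack the reset/update/candidate parameters."""
    (Wr, Wz, Wn), (Ur, Uz, Un), (br, bz, bn) = W, U, b
    r = sigmoid(f_t @ Wr + h_prev @ Ur + br)        # reset gate
    z = sigmoid(f_t @ Wz + h_prev @ Uz + bz)        # update gate
    n = np.tanh(f_t @ Wn + r * (h_prev @ Un) + bn)  # candidate state
    return (1 - z) * n + z * h_prev

rng = np.random.default_rng(0)
d = 8
W = [rng.normal(size=(d, d)) for _ in range(3)]
U = [rng.normal(size=(d, d)) for _ in range(3)]
b = [np.zeros(d) for _ in range(3)]
h = np.zeros(d)
for t in range(10):                                 # roll the memory over time
    h = gru_step(rng.normal(size=d), h, W, U, b)
print(h.shape)   # (8,)
```

Starting from a zero state, the gating keeps the hidden state bounded, which makes the memory stable over long observation windows.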
We introduce another memory module for the whole scene: scene memory.
We initialize the scene memory at time $t$ by applying a linear layer to the feature matrix at that time step: $\mathbf{M}_t^{(0)} = g_0\left(\mathbf{F}_t\right)$. We then build the scene memory module as layers of self attention operations \eqref{eq:scene_mem_layered_att} followed by layer normalization at each layer $l$ \eqref{eq:scene_mem_norm}.
\begin{align}
\label{eq:scene_mem_layered_att}
\hat{\mathbf{M}}_t^{(l)} &= \mathrm{softmax}\left( \dfrac{\mathbf{M}_t^{(l-1), Q}\left(\mathbf{M}_t^{(l-1), K}\right)^T}{\sqrt{d_k}}\right) \mathbf{M}_t^{(l-1), V} \\
\label{eq:scene_mem_norm}
\mathbf{M}_t^{(l)} &= \varphi_{\mathrm{norm}}\left(\hat{\mathbf{M}}_t^{(l)}\right)
\end{align}
After the last layer $L$, we aggregate all the node features with a max pool operation \eqref{eq:scene_mem_pool} to summarize the relevant scene features in $\mathbf{m}_t^{(L)}$ and then relate them across time with a GRU \eqref{eq:mem_scene}.
\begin{align}
\label{eq:scene_mem_pool}
\mathbf{m}_t^{(L)} &= \varphi_{\mathrm{pool}}\left(\mathbf{M}_t^{(L)}\right) \\
\label{eq:mem_scene}
\mathbf{h}_t^{\mathrm{mem}} &= \mathrm{GRU}(\mathbf{m}_t^{(L)}, \mathbf{h}_{t-1}^{\mathrm{mem}})
\end{align}
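The scene-memory computation above can be sketched compactly. The following stdlib-only Python illustrates one self-attention layer \eqref{eq:scene_mem_layered_att} under the simplifying assumption $Q=K=V=\mathbf{M}_t^{(l-1)}$ (i.e., omitting the learned projections) together with the max-pool aggregation \eqref{eq:scene_mem_pool}; the function names are ours, not from any released code.

```python
import math

def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [v / s for v in exps]

def self_attention(M, d_k):
    # scaled dot-product attention with Q = K = V = M (learned projections omitted)
    n = len(M)
    scores = [[sum(M[i][d] * M[j][d] for d in range(d_k)) / math.sqrt(d_k)
               for j in range(n)] for i in range(n)]
    A = [softmax(row) for row in scores]
    # each output row is a convex combination of the input rows
    return [[sum(A[i][j] * M[j][d] for j in range(n)) for d in range(d_k)]
            for i in range(n)]

def max_pool(M):
    # feature-wise max over all nodes, summarizing the scene at one time step
    return [max(col) for col in zip(*M)]
```

In the model, the pooled vector $\mathbf{m}_t^{(L)}$ is then fed to a GRU across time steps, in the same way as the agent memory.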
In addition to the memory modules, we use cross attention between the temporally updated features of the agent of interest and the context elements, including the lanes and the other agents, following~\cite{Gu2021ICCV}.
The final representation for the agent of interest contains the lane and agent features resulting from cross attention, the temporal features \eqref{eq:temp_feat} including their initialization for learning the change, and the memory modules for the agent \eqref{eq:mem_agent} and the whole scene \eqref{eq:mem_scene}.
\subsection{Goal-Conditioned Trajectory Prediction}
\label{sec:goal_conditioned_pred}
Several previous works~\cite{Zhao2020arXiv, Liu2021CVPR, Gu2021ICCV} address multi-modality by predicting $K$ trajectories for a scene. The loss is calculated based on the prediction that has the closest endpoint to the ground truth endpoint.
A recent line of work~\cite{Zhao2020arXiv, Liu2021CVPR} first predicts the endpoints as the target locations, and then, conditioned on the targets, predicts the full trajectory.
We follow a similar goal-conditioned approach but focus more directly on target locations. The common approach in this line of work is to score a large number of map elements first, in some cases even densely~\cite{Gu2021ICCV}, and then refine them; this distributes attention evenly over relevant target locations and irrelevant, distant locations on the map. Therefore, we first regress $K$ goal locations, and then focus on refinement and scoring.
\subsubsection{Goal Prediction}
\label{sec:goal_pred}
We initially predict $K$ goal locations.
As indicated by the failure cases of previous work~\cite{Ye2021CVPR}, goal locations need to be constrained by both the motion and the map information. We obtain the motion information from the features of the agent of interest as explained at the end of \secref{sec:memory_modules}. Although the agent of interest interacts with the scene, its features do not directly correspond to the scene elements. Therefore, for map-constrained goal locations, we construct a map feature $\mathbf{f}^{\mathrm{map}}$ by max-pooling the features of the lane nodes in the scene graph, $\mathbf{L}$, and concatenating the result with the updated scene memory from~\eqref{eq:mem_scene}:
\begin{align}
\label{eq:map_feature}
\mathbf{f}^{\mathrm{map}}_T &= g_2\left(
\varphi_{\mathrm{pool}}\left(\mathbf{L}_{T}\right),
\varphi_{\mathrm{agg}}\left(\mathbf{h}_{T}^{\mathrm{mem}} \right) \right)
\end{align}
where $T$ is the last observed time step before prediction and $g_2(\cdot)$ is a 2-layer MLP. We generate half of the proposals from the features of the agent of interest and the other half from the map features.
Once the proposals are created, we refine and score them in a way informed by the scene features. Our goal is to assign high scores to the proposals that are consistent with dynamic scene features. We first encode the proposals, which are 2D coordinates, and then apply cross attention with the scene features $\mathbf{F}_T$ to place them in our map representation before refinement and scoring.
Our objective is to minimize the distance between the ground truth and the closest predicted goal location with a smooth-$\mathnormal{L_1}$ loss, and also to increase its score with respect to the other proposals with a cross entropy loss.
We predict the full trajectory conditioned on $K$ goal locations and apply a smooth-$\mathnormal{L_1}$ loss to minimize the distance between the ground truth and the predicted trajectory for the best goal prediction.
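The winner-takes-all goal objective described above can be illustrated with a short sketch (hypothetical helper names; the full objective additionally includes the cross-entropy scoring term):

```python
import math

def smooth_l1(d, beta=1.0):
    # smooth-L1 penalty on a scalar residual d
    return 0.5 * d * d / beta if abs(d) < beta else abs(d) - 0.5 * beta

def best_goal_loss(goals, gt_end):
    # penalize only the proposal whose endpoint is closest to the ground truth
    dists = [math.hypot(x - gt_end[0], y - gt_end[1]) for x, y in goals]
    best = min(range(len(dists)), key=dists.__getitem__)
    return best, smooth_l1(dists[best])
```

The same best-of-$K$ selection is applied when training the full trajectory decoder conditioned on the goals.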
\section{Experiments}
\subsection{Experimental Setup}
\boldparagraph{Dataset}
We use the Argoverse motion forecasting dataset~\cite{Chang2019CVPR}, which has more than 300K sequences with map information and agent trajectories. Each sequence, or scene, is 5 seconds long, sampled at 10~Hz, corresponding to 50 time steps.
Given the map information and the agent history in the first 2 seconds~(20 frames), the goal is to predict the trajectory of the agent of interest for the next 3 seconds~(30 frames). We follow the original training, validation, and test splits.
\boldparagraph{Metrics}
We use four different metrics to evaluate our work.
\begin{enumerate*}[label=(\roman*)]
\item \textbf{Minimum Average Displacement Error}~(min-ADE) is the average displacement error over all time steps between the ground truth trajectory and the predicted trajectory whose final step is the closest to the ground truth endpoint.
\item \textbf{Minimum Final Displacement Error}~(min-FDE) is the displacement error between the best final step prediction and the ground truth final step.
$\mathrm{minADE}_K$ and $\mathrm{minFDE}_K$ refer to the minimum over $K$ predictions.
\item \textbf{Miss Rate}~(MR) is the percentage of scenes where none of the predicted trajectory endpoints is within a threshold~(2 meters) of the ground truth endpoint.
\item \textbf{Brier-minFDE}~($\mathrm{b-minFDE}$) is the official metric of the challenge, which also considers the probability $p$ of a trajectory. Specifically, $\mathrm{b-minFDE}$ is calculated by adding a probability score, $(1-p)^2$, to the $\mathrm{minFDE}$ value. We report the values for $K=6$.
\end{enumerate*}
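These metrics can be made concrete with a small sketch following the best-endpoint convention above (variable names are ours):

```python
import math

def forecasting_metrics(preds, gt, prob_best=None, miss_thresh=2.0):
    # preds: K trajectories, each a list of (x, y); gt: ground-truth trajectory
    ex, ey = gt[-1]
    fdes = [math.hypot(p[-1][0] - ex, p[-1][1] - ey) for p in preds]
    k = min(range(len(fdes)), key=fdes.__getitem__)   # best-endpoint mode
    min_fde = fdes[k]
    min_ade = sum(math.hypot(px - gx, py - gy)
                  for (px, py), (gx, gy) in zip(preds[k], gt)) / len(gt)
    miss = min_fde > miss_thresh
    # brier-minFDE adds (1 - p)^2 of the chosen mode, when a probability is given
    b_min_fde = min_fde + (1.0 - prob_best) ** 2 if prob_best is not None else None
    return min_ade, min_fde, miss, b_min_fde
```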
\boldparagraph{Training Details} We construct the graph by including the lanes that are closer than 50~meters to the agent of interest in terms of Manhattan distance. We normalize and rotate the scene with respect to the position and the orientation of the agent of interest. %
We create an edge between a lane segment and an agent if the distance between them is less than 2~meters.
We train our models with a batch size of $64$ for 36 epochs. We use the Adam optimizer~\cite{Kingma2015ICLR} with an initial learning rate of $1 \times 10^{-4}$ and divide it by 5 at the end of the $24^{th}$ and $30^{th}$ epochs.
We randomly scale the scene by using a scale factor in the range of $[0.75, 1.25]$ and also apply random translations to the polylines for data augmentation. We initialize the global context encoder of our model from a pre-trained VectorNet.
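A minimal sketch of this augmentation step (our own helper, not the released code):

```python
import random

def augment(points, scale_range=(0.75, 1.25), noise_std=0.2, rng=random):
    # random global scale plus per-point Gaussian translation noise
    s = rng.uniform(*scale_range)
    return [(s * x + rng.gauss(0.0, noise_std),
             s * y + rng.gauss(0.0, noise_std)) for x, y in points]
```

Because the scene is already normalized to the agent of interest, scaling is a plain multiplication of the coordinates.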
\subsection{Ablation Study}
\input{tab/comp_ablation}
\boldquestion{What is the contribution of each component proposed?}
We perform an ablation study to measure the effect of each component on the performance in \tabref{tab:component_ablation}. Adding the temporal graph (\textbf{TG}; \secref{sec:temporal_graph_rep}) significantly improves the performance in each metric compared to the VectorNet initialization in the first row. This shows the importance of learning temporal dynamics in the scene.
On top of the temporal graph, we measure the effect of two types of memory modules~(\secref{sec:memory_modules}). The scene memory provides temporal information about the overall scene and the sequential memory about temporal dynamics related to the agent of interest such as speed changes. The sequential memory for the agent of interest only~(\textbf{Seq}) degrades the performance slightly but the scene memory~(\textbf{Scene}) improves it, and using them together results in the best performance. This shows the importance of propagating information from past time steps together with scene information.
\input{tab/regression}
\boldquestion{How useful is the learned temporal representation?}
In order to measure the effect of learning a temporal representation, in \tabref{tab:regression_result}, we compare our method to the other methods by discarding the effect of goal conditioning in target-based methods~\cite{Zhao2020arXiv, Gu2021ICCV} and target sampling in a heatmap-based method~\cite{Gilles2021ITSC}. Following previous work on representation learning~\cite{chen2020ICML, grill2020neurips}, where an MLP is trained on top of a frozen backbone to measure the quality of a learned representation, we train an MLP on top of our backbone to directly regress $K$ trajectories. The more informative the features extracted by the backbone are, the better the predicted trajectories will be. As can be seen from \tabref{tab:regression_result}, the two methods that learn temporal representations, TPCN~\cite{Ye2021CVPR} and ours, outperform the other methods. This shows the importance of learning dynamics independently of other factors.
\boldquestion{What is the role of goal prediction?}
In the upper part of \tabref{tab:component_ablation}, we simply regress $K$ trajectories directly. In the bottom part, we measure the importance of goal prediction by first predicting $K$ goal locations and then predicting the full trajectories conditioned on them. Goal conditioning improves the performance in terms of both minFDE and b-minFDE. Predicting the endpoint accurately is crucial since we condition on it to predict the trajectory next. Therefore, we apply an additional goal loss on the best endpoint directly, which results in the best performance in all metrics.
We also ablate the source for goal prediction, i.e\onedot which features to use (Supplementary), and find that using both the map and the motion information improves the results in terms of MR and b-minFDE.
\subsection{Comparison to Previous Work}
\input{tab/sota}
We compare our method's performance with the other published methods on both the validation set~(\tabref{tab:argo_val}) and the leaderboard~(\tabref{tab:argo_test}). Our method is among the top-performing methods, top-3 in minADE and minFDE on both the validation and test sets, which shows the endpoint prediction and trajectory completion accuracy, and the second best in the official ranking metric b-minFDE. Note that our method obtains competitive performance in all metrics without optimizing for any metric specifically. Without targeting MR specifically, our method reaches $10\%$ on the validation set, which is the third best. This is because our method predicts a good endpoint distribution, as shown by the second best b-minFDE on the test set.
\input{tab/val_sota}
\subsection{Qualitative Results}
\vspace{-0.2cm}
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\linewidth]{gfx/naive_v_temporal/new_naive_v_temporal_391.pdf}
\caption{}
\label{fig:map_comp1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\linewidth]{gfx/naive_v_temporal/new_naive_v_temporal_11376.pdf}
\caption{}
\label{fig:map_comp2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\linewidth]{gfx/naive_v_temporal/new_naive_v_temporal_13023.pdf}
\caption{}
\label{fig:speed_comp1}
\end{subfigure}
\\
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/regression_v_goal/new_mode_19054.pdf}
\caption{}
\label{fig:mode_comp1}
\end{subfigure}
\caption{\textbf{Temporal Graph Representation with Goal Conditioning.} We visualize the effect of learning temporal dynamics with our model~(\textbf{Temporal}) to a basic version without temporal graph~(\textbf{Naive}). Learning temporal dynamics leads to more admissible predictions agreeing with the map~(\subref{fig:map_comp1}, \subref{fig:map_comp2}) and better capturing the changes to the velocity~(\subref{fig:speed_comp1}). We also show that goal conditioning results in better distributions by covering more modes compared to simple regression (\subref{fig:mode_comp1}).}
\label{fig:qual_temporal}
\vspace{-0.5cm}
\end{figure}
We visualize the effectiveness of our temporal graph representation in \figref{fig:qual_temporal} where we compare our model~(\textbf{Temporal}) to a basic version without temporal graph~(\textbf{Naive}) which is similar to vanilla VectorNet~\cite{Gao2020CVPR} for global context encoding. Learning a temporal representation allows our model to generate more admissible trajectories respecting the borders of the map as shown in (\ref{fig:map_comp1}) and (\ref{fig:map_comp2}) and also to better capture the changes in speed as shown in (\ref{fig:speed_comp1}). In general~(see Supplementary for more examples), the improvements are more pronounced at the intersections where the map information is crucial and there are typically significant alterations to the speed of the agents.
In (\ref{fig:mode_comp1}), we compare our goal conditioned prediction to simple regression without any goal conditioning by visualizing all the $K$ trajectories predicted in orange. Even though simple regression performs well quantitatively~(\tabref{tab:component_ablation}), we can cover more modes with goal conditioning. %
The simple regression misses the left turn. With goal conditioning, we can not only predict the left turn but also the possibility of a right turn as well as going straight, all in agreement with the map. %
\section{Implementation Details}
In this section, we provide the details of our model for reproducibility. We will also publish our code when this work is published.
\textbf{Scene Representation:} In the scene representation, we converted lanes and agent trajectories into vector form as in the original VectorNet paper~\cite{Gao2020CVPR}. We included all lanes that are closer than 50 meters to any agent at any past time step to make lane-agent interaction possible during temporal encoding. We kept the feature vector size at 128.
\textbf{Encoders:} Both MLP inputs and outputs are vectors of size 128, except for the enhanced agent-of-interest feature. As explained in the paper, we used cross attention between the agent of interest and both the agent nodes and the lane nodes, and concatenated the results to form the final enhanced agent-of-interest feature of size 384. We extracted features of 2D points into vectors of size 128 with the point subgraph of DenseTNT~\cite{Gu2021ICCV}, which consists of 3 linear layers taking a 2D point and the agent feature as input. We followed the cross attention mechanism of DenseTNT~\cite{Gu2021ICCV} to obtain the final point features.
\textbf{Graph Networks:} We set the number of layers to 3 in the subgraphs of the VectorNet backbone and the scene memory encoder GNN, where each layer is a 2-layer MLP. We $L_2$-normalized the subgraph outputs and did not use a map completion loss. The global graphs of the VectorNet backbone and the temporal graph are implemented as single-head self attention. If there is no edge between two nodes in the temporal graph, we set the attention probability to 0 by using the adjacency matrix of the corresponding time step as the mask before the softmax operation, as proposed in the original transformer \cite{Vaswani2017NeurIPS}.
\textbf{Goal Conditioned Prediction:} We predicted the full trajectory conditionally with a 2-layer MLP whose input is the concatenation of the agent-of-interest feature and the endpoint feature. Similarly, we predicted the endpoints by feeding the point features to a 2-layer MLP.
\textbf{Data Augmentations:} Since we normalized the scene with respect to the agent of interest, we scaled the scene by simply multiplying the coordinate points by a value in $[0.75, 1.25]$. We added noise sampled from $\mathcal{N}(0, 0.2)$ to the polyline locations as perturbation.
\textbf{Training Details:} As mentioned in the paper, we trained our models with a batch size of $64$ for 36 epochs, corresponding to $115848$ iterations, on 4 Tesla T4 GPUs in a distributed manner. We used the Adam optimizer~\cite{Kingma2015ICLR} with an initial learning rate of $1 \times 10^{-4}$ and divided it by 5 at the end of the $24^{th}$ and $30^{th}$ epochs.
\section{Qualitative Results}
\subsection{Component Comparison}
In this section, we first provide qualitative results on the same scenes from the regressive model without temporal encoding [Naive], the regressive model with temporal encoding [Temporal (Reg.)], and the goal-conditioned model with temporal encoding [Temporal (Goal Cond.)] in \figref{fig:supp_comp_set1} and \figref{fig:supp_comp_set2}. Furthermore, we provide some failure cases together with their causes: \textbf{mode missing} in \figref{fig:mode_missing_set}, \textbf{lane change} in \figref{fig:lane_change_set}, \textbf{inaccurate prediction} despite correct mode prediction in \figref{fig:inaccurate_set}, and \textbf{data defects} in the past input or future output sequence, also pointed out by other works~\cite{Ye2021CVPR, Song2021CoRL}, in \figref{fig:defect_set}.
\begin{figure}[h]
\centering
\vspace{1cm}
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/comparison/supp_comparison_8605.pdf}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/comparison/supp_comparison_12203.pdf}
\end{subfigure}
\caption{\textbf{Component Comparison (a)}}
\label{fig:supp_comp_set1}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}[h]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/comparison/supp_comparison_17938.pdf}
\end{subfigure}
\begin{subfigure}[h]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/comparison/supp_comparison_21692.pdf}
\end{subfigure}
\begin{subfigure}[h]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/comparison/supp_comparison_22319.pdf}
\end{subfigure}
\begin{subfigure}[h]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/comparison/supp_comparison_30116.pdf}
\end{subfigure}
\begin{subfigure}[h]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/comparison/supp_comparison_30346.pdf}
\end{subfigure}
\caption{\textbf{Component Comparison (b)}}
\label{fig:supp_comp_set2}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}[h]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/fail/supp_mode_1596.pdf}
\end{subfigure}
\begin{subfigure}[h]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/fail/supp_mode_553.pdf}
\end{subfigure}
\caption{\textbf{Fail cases caused by mode missing.} In some cases, our model could not capture the future intention of the agent of interest, such as a left turn or a U-turn.}
\label{fig:mode_missing_set}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}[h]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/fail/supp_lane_12092.pdf}
\end{subfigure}
\begin{subfigure}[h]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/fail/supp_lane_12520.pdf}
\end{subfigure}
\caption{\textbf{Fail cases caused by lane change.} In some cases, lane changes of the agent of interest caused the failure. Since there is no indication of a lane change in the input sequences, lane changes remain hard cases to predict.}
\label{fig:lane_change_set}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}[h]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/fail/supp_inaccurate_11614.pdf}
\end{subfigure}
\begin{subfigure}[h]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/fail/supp_inaccurate_7752.pdf}
\end{subfigure}
\caption{\textbf{Fail cases caused by inaccurate prediction despite true mode prediction.} Although our model predicted the intention of the agent of interest, such as turns, it could not generate endpoints and trajectories close enough to the ground truth.}
\label{fig:inaccurate_set}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}[h]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/fail/supp_defect_1145.pdf}
\end{subfigure}
\begin{subfigure}[h]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/fail/supp_defect_2188.pdf}
\end{subfigure}
\begin{subfigure}[h]{\textwidth}
\includegraphics[width=1\linewidth]{gfx/qualitative_supp/fail/supp_defect_11827.pdf}
\end{subfigure}
\caption{\textbf{Fail cases caused by data defects.} There are some faulty or uninformative input sequences as well as inconsistent ground truth sequences in the dataset, resulting in illogical trajectory predictions or high reported errors despite admissible predictions, respectively.}
\label{fig:defect_set}
\end{figure}
\section{Quantitative Results}
\textbf{Does it matter which features we use for goal prediction?}
\input{tab/target_source}
As explained in the methodology of the paper, we propose to use both the map information and the motion information about the agent of interest for predicting goal locations. In \tabref{tab:target_source}, we ablate this decision by comparing the performance using the map information only~(\textbf{Context}), the motion information only~(\textbf{Agent}), and using both as proposed. As can be seen from \tabref{tab:target_source}, using both improves the results in terms of MR and b-minFDE.
We consider a system consisting of a two-leg ladder in which each leg is an SSH chain in the presence of light illumination, as represented in Fig. \ref{fig1}. We will examine topological properties for two different cases: (i) the asymmetric ladder case, where the dimerizations of the legs are opposite and the corresponding lattice spacings differ ($b_0 \ne b_1$) [see Fig. \ref{fig1}(a)], and (ii) the symmetric ladder case, where the dimerization of both SSH chains is identical and the corresponding lattice spacings of the legs are equal ($b_0=b_1$) [see Fig. \ref{fig1}(b)]. In the absence of irradiation, the tight-binding Hamiltonian of this model, containing four sublattices per unit cell, can be written as
\begin{align}\label{e1}
H=\sum_j^N [t_1 A^{\dagger}_{uj}B_{uj} +t_2 A^{\dagger}_{lj}B_{lj} + t_3 A^{\dagger}_{uj} A_{lj} + t_4 B^{\dagger}_{uj}B_{lj}] + \sum_j^{N-1} [t_1^\prime B^{\dagger}_{uj}A_{u{j+1}}+t_2^\prime B^{\dagger}_{lj}A_{l{j+1}}] + h.c,
\end{align}
where $X^{(\dagger)}_{u/lj}$ is the electron annihilation (creation) operator of sublattice $X$ (which can be of either A or B type) on the upper/lower chain at the $j$th unit cell. $t_1^{(\prime)}$ and $t_2^{(\prime)}$ are the intra (inter) unit cell hoppings along the upper and lower legs, respectively. The hopping energies along the rungs of the ladder are $t_3$ and $t_4$. We choose $t_1=t_2^\prime=t-\delta t$ and $t_2=t_1^\prime=t+\delta t$ for the asymmetric ladder, whereas $t_1=t_2=t+\delta t$ and $t_1^\prime=t_2^\prime=t-\delta t$ for the symmetric ladder, where $\delta t=\delta_0\cos\theta$ is the dimerization strength, with $\theta$ a cyclical parameter varying continuously from $0$ to $2\pi$ and $\delta_0$ the dimerization amplitude. For both the asymmetric and symmetric cases, we choose $t_3=t_4=t+\delta t$. Notably, the symmetric ladder corresponds to polyacetylene, including identical dimerization of the chains. We also set $t$ as the unit of energy and the lattice constant $a_0$ as the unit of length. Throughout the paper, $\delta_0=0.8$ without loss of generality.
\begin{figure}[t!]
\centerline{\includegraphics[width=17cm]{fig1.eps}}
\caption{(Color online) Two-leg ladder in which each leg is an SSH chain under light irradiation. (a) Asymmetric ladder geometry with opposite dimerization of the legs and different inter unit cell spacings $b_0$ and $b_1$ of the upper and lower legs, respectively. (b) Symmetric ladder geometry with identical dimerization and inter unit cell spacing $b_0$ of the legs. $a_0$ is the length of the unit cell and $c_0$ is the interchain distance.}
\label{fig1}
\end{figure}
In the presence of an externally applied electromagnetic field, comprising the periodic time-dependent electric field $\mathbf{E}(t)=-\partial_t\mathbf{A}(t)$ with vector potential
\begin{align}\label{vec}
\mathbf{A}(t) = (A_x\sin\omega t,\, A_y\sin (\omega t +\phi)),
\end{align}
Hamiltonian (\ref{e1}) becomes periodic in time,
$H(t)=H(t+T)$, through the Peierls substitution
\begin{align}\label{Peierls}
t_{ij} \longrightarrow t_{ij}e^{-\frac{ie}{\hslash c}\int_{R_{i}}^{R_{j}}\mathbf{A}(t)\cdot d\mathbf{r}}.
\end{align}
Here, $A_{x(y)}=\frac{E_{x(y)}}{\omega}$ is the driving amplitude along the $x$ ($y$)-direction, which is related to the amplitude of the electric field $E_{x(y)}$. The period $T=\frac{2\pi}{\omega}$ is determined by the driving frequency $\omega$, $\phi$ is a phase shift, $c$ is the speed of light, and $e$ is the electron charge. We take $\hslash=1$ and $\frac{e}{c}=1$ hereafter.
The Floquet theorem \cite{Floquet1,Floquet2} can be used to find a solution to the time-dependent Schr\"odinger equation with time-periodic potentials. This theorem guarantees the existence of a set of solutions
\begin{equation}\label{e2}
\psi_n(t)=e^{-i\epsilon_n t} \varphi_n (t),
\end{equation}
where $\epsilon_n$ is the Floquet quasi-energy and the Floquet state $\varphi_n (t)$ has the same time periodicity as the Hamiltonian, $\varphi_n(t+T)=\varphi_n (t)$, in analogy with the Bloch theorem, in which the so-called Bloch states are periodic in real space.
For every solution $\varphi_n (t)$ with quasi-energy $\epsilon_n$, one can construct another solution $\varphi_{\alpha n}(t)=e^{-i\alpha \omega t} \varphi_n(t)$ with quasi-energy $\epsilon_{n\alpha}=\epsilon_{n}+\alpha\omega$, which corresponds to the same physical state as $\varphi_n(t)$.
In fact, the Floquet states are the solutions of the eigenvalue equation
\begin{equation}\label{e3}
H_F\vert \varphi_n(t) \rangle=\epsilon_n\vert \varphi_n(t)\rangle,
\end{equation}
where $H_F=H-i\frac{\partial }{\partial t}$ is Floquet Hamiltonian. Eventually, matrix elements of the Floquet Hamiltonian can be written as,
\begin{equation}\label{e4}
H_F^{\alpha\beta}=\frac{1}{T}\int _{0}^{T} H(t) e^{i(\alpha-\beta)\omega t}dt - \alpha \omega \delta_{\alpha \beta},
\end{equation}
where $\alpha$ and $\beta$ are Floquet indices.
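For a bond whose Peierls phase is $-z\sin\omega t$, this time average can be evaluated in closed form with the Jacobi--Anger expansion $e^{-iz\sin\tau}=\sum_m J_m(z)\,e^{-im\tau}$:

```latex
\begin{equation*}
\frac{1}{T}\int_{0}^{T} e^{-iz\sin\omega t}\, e^{i(\alpha-\beta)\omega t}\, dt
=\sum_m J_m(z)\,\frac{1}{T}\int_{0}^{T} e^{i(\alpha-\beta-m)\omega t}\, dt
= J_{\alpha-\beta}(z),
\end{equation*}
```

with $z$ the projection of the driving amplitude onto the bond vector (up to a sign convention in $z$). This is the step that produces the Bessel-function renormalization of the hoppings.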
Hence, by applying the Peierls substitution (\ref{Peierls}) to the static Hamiltonian (\ref{e1}) and using Eq. (\ref{e4}), the Floquet Hamiltonian can be obtained as
\begin{align}\label{e5}
H_F^{\alpha\beta} =\sum_j^N [\tilde{t}_1^{\alpha\beta} A^{\dagger}_{uj}B_{uj} + \tilde{t}_2^{\alpha\beta} A^{\dagger}_{lj}B_{lj}
+\tilde{t}_3^{\alpha\beta} A^{\dagger}_{uj} A_{lj} + \tilde{t}_4^{\alpha\beta} B^{\dagger}_{uj}B_{lj}]+ \sum_j^{N-1} [\tilde{t^{\prime}}_1^{\alpha\beta} B^{\dagger}_{uj}A_{u{j+1}} + \tilde{t^{\prime}}^{\alpha\beta}_2 B^{\dagger}_{lj}A_{l{j+1}}] + h.c - \alpha \omega \delta_{\alpha \beta}.
\end{align}
Here, we have defined
\begin{eqnarray}\label{e6}
\tilde{t}_1^{\alpha\beta}&=&t_1J_{\alpha-\beta}[A_{x}(a_0-b_0)], \nonumber \\
\tilde{t}_2^{\alpha\beta}&=&t_2J_{\alpha-\beta}[A_{x}(a_0-b_1)], \nonumber \\
\tilde{t}_3^{\alpha\beta}&=&t_3J_{\alpha-\beta}[\sqrt{(A_xb_2)^2+(A_yc_0)^2+2A_xb_2A_yc_0\cos\phi}], \nonumber \\
\tilde{t}_4^{\alpha\beta}&=&t_4J_{\alpha-\beta}[\sqrt{(A_xb_2)^2+(A_yc_0)^2-2A_xb_2A_yc_0\cos\phi}], \nonumber \\
\tilde{t^{\prime}}_1^{\alpha\beta}&=&t^{\prime}_1J_{\alpha-\beta}[A_{x}b_0],\nonumber \\
\tilde{t^{\prime}}_2^{\alpha\beta}&=&t^{\prime}_2J_{\alpha-\beta}[A_{x}b_1],
\end{eqnarray}
where $b_2=(b_0-b_1)/2$ and $J_m[x]$ is the Bessel function of the first kind of order $m$. Considering the high-frequency (off-resonant) regime, where the Floquet bands are decoupled from each other, the system can be well described by the zeroth-order static Floquet Hamiltonian
\begin{eqnarray}\label{e7}
H_F^{00} =\sum_j^N [\tilde{t}_1^{00} A^{\dagger}_{uj}B_{uj} +\tilde{t}_2^{00}A^{\dagger}_{lj}B_{lj}+\tilde{t}_3^{00} A^{\dagger}_{uj} A_{lj}+ \tilde{t}_4^{00} B^{\dagger}_{uj}B_{lj}]+ \sum_j^{N-1} [\tilde{t^{\prime}}^{00}_1 B^{\dagger}_{uj}A_{u{j+1}} +\tilde{t^{\prime}}^{00}_2 B^{\dagger}_{lj}A_{l{j+1}}] + h.c.
\end{eqnarray}
In the following, we omit the superscript "$00$" from the parameters of Eq. (\ref{e7}) for the sake of brevity.
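For $\alpha=\beta$, every dressed hopping in Eq. (\ref{e6}) reduces to the bare one multiplied by $J_0$ of the projected driving amplitude. These factors are easy to tabulate numerically; a stdlib-only sketch using the integral representation $J_m(x)=\frac{1}{\pi}\int_0^\pi\cos(m\tau-x\sin\tau)\,d\tau$ (any numerical parameter values used with it are illustrative, not from the paper):

```python
import math

def bessel_j(m, x, n=4000):
    # trapezoidal rule for J_m(x) = (1/pi) * integral_0^pi cos(m*tau - x*sin(tau)) dtau
    h = math.pi / n
    s = 0.5 * (math.cos(0.0) + math.cos(m * math.pi))
    for i in range(1, n):
        tau = i * h
        s += math.cos(m * tau - x * math.sin(tau))
    return s * h / math.pi

def dressed_hopping(t, projected_amplitude):
    # zeroth-order Floquet renormalization: t -> t * J_0(A . d)
    return t * bessel_j(0, projected_amplitude)
```

Near $A_x b_0 \approx 2.405$, the first zero of $J_0$, the corresponding bond is dynamically quenched, which is how the drive can retune the effective dimerization.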
To study the bulk properties of the system, we impose periodic boundary conditions and take the Fourier transformation $X_{u/lj}=\frac{1}{\sqrt{N}}\Sigma_k e^{-ikj} X_{u/lk}$, where $N$ is the number of unit cells. Then the Hamiltonian can be written in the form
\begin{equation}\label{e8}
H_F=\sum_k\psi^\dagger_k h_F(k)\psi_k,
\end{equation}
where $\psi^\dagger_k=(A_{uk},A_{lk},B_{lk},B_{uk})^\dagger$ and
\begin{eqnarray}\label{e9}
h_F(k)=
\left ( \begin{array}{c c c c}
0 & \tilde{t}_3 & 0& \tilde{t}_1+\tilde{t}_1^\prime e^{ik} \\
\tilde{t}_3 & 0 & \tilde{t}_2+\tilde{t}_2^\prime e^{ik} & 0 \\
0 & \tilde{t}_2+\tilde{t}_2^\prime e^{-ik} & 0 & \tilde{t}_4 \\
\tilde{t}_1+\tilde{t}_1^\prime e^{-ik} & 0 &\tilde{t}_4 & 0
\end{array}\right).\nonumber \\
\end{eqnarray}
After diagonalizing the Hamiltonian (\ref{e9}), the eigenvalues can be obtained in the momentum space as
\begin{equation}\label{e10}
E_{l,p}(k)=\frac{l \sqrt{\zeta +p \sqrt{\eta}}}{\sqrt{2}},
\end{equation}
with
\begin{eqnarray}\label{e11}
\zeta &=& \tilde{t}_3^2 + \tilde{t}_4^2+\tilde{t}_1^2 + \tilde{t}_2^2+ \tilde{t}_1^{\prime 2}+\tilde{t}_2^{\prime 2} + 2(\tilde{t}_1\tilde{t}_1^\prime +\tilde{t}_2\tilde{t}_2^\prime)\cos(k), \nonumber \\
\eta &=& \zeta^2-4(\tilde{t}_2^2\tilde{t}_1^{\prime2}+(\tilde{t}_3\tilde{t}_4-\tilde{t}_1^\prime\tilde{t}_2^\prime)^2+2\tilde{t}_1 \tilde{t}_2(-\tilde{t}_3 \tilde{t}_4+\tilde{t}_1^\prime\tilde{t}_2^\prime)+\tilde{t}_1^2(\tilde{t}_2^2 +\tilde{t}_2^{\prime2})+
2(\tilde{t}_1 \tilde{t}_2^\prime + \tilde{t}_2 \tilde{t}_1^\prime)(\tilde{t}_1 \tilde{t}_2 - \tilde{t}_3 \tilde{t}_4 + \tilde{t}_1^\prime \tilde{t}_2^\prime)\cos(k) \nonumber \\
&+&2\tilde{t}_1 \tilde{t}_2 \tilde{t}_1^\prime \tilde{t}_2^\prime \cos(2k)),
\end{eqnarray}
where the band index $l=-(+)$ stands for the valence (conduction) band and $ p=+(-) $ indicates the upper (lower) subband. A topological phase transition is accompanied by the closing and reopening of the gap at the high-symmetry points of $k$-space, i.e., $k=0$ and $k=\pi$. It is straightforward to see that the conditions for gap closing between the two valence bands can be obtained by solving $E_{l=-,p=-}(k)=E_{l=-,p=+}(k)$, yielding
\begin{eqnarray}\label{e12}
t_a+e^{ik}t_a^\prime &=&\pm \sqrt{-(\tilde{t}_3-\tilde{t}_4)^2}, \ \textrm{if}\ \tilde{t}_3 = \tilde{t}_4, \nonumber \\
t_b+e^{ik}t_b^\prime &=& \pm \sqrt{-(\tilde{t}_3+\tilde{t}_4)^2}, \ \textrm{if}\ \tilde{t}_3 =-\tilde{t}_4,
\end{eqnarray}
at the momenta $k=0$ and $k=\pi$. Here, we have defined $t_a=\tilde{t}_1+\tilde{t}_2$, $t_a^\prime =\tilde{t}_1 ^\prime + \tilde{t}_2^\prime$, $t_b=\tilde{t}_1-\tilde{t}_2$, and $t_b^\prime = \tilde{t}_1^\prime - \tilde{t}_2^\prime$. As can be seen from the above equations, the square-root expression must vanish for a topological phase transition to occur. Also, the conditions for gap closure between the upper valence band ($l=-,p=+$) and the lower conduction band ($l=+,p=-$) are
\begin{eqnarray}\label{e14}
(t_a+ e^{ik} t_a^\prime)^2-(t_b+e^{ik}t_b^\prime)^2 =4\tilde{t}_3\tilde{t}_4,
\end{eqnarray}
at the momenta $k=0$ and $k=\pi$. Equations (\ref{e12}) and (\ref{e14}) represent boundaries between topologically distinct phases, across which the value of the topological invariant changes.
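As a numerical consistency check, each closed-form quasi-energy $E_{l,p}(k)$ of Eq. (\ref{e10}) must be a root of $\det[h_F(k)-E\,\mathbb{1}_4]$. A stdlib-only sketch with illustrative sample hoppings (not values used in the paper); here $\eta$ is evaluated in the equivalent compact form $\zeta^2-4\,|\tilde{t}_3\tilde{t}_4-q_1 q_2^\ast|^2$, with $q_{1,2}$ the off-diagonal entries of $h_F(k)$, and the primed terms in $\zeta$ enter squared:

```python
import cmath, math

def det(M):
    # Laplace expansion along the first row (fine for a 4x4 matrix)
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += ((-1) ** j) * M[0][j] * det(minor)
    return total

# illustrative hoppings and momentum (asymmetric-ladder-like pattern)
t1, t2, t3, t4, t1p, t2p, k = 0.2, 1.8, 1.8, 1.8, 1.8, 0.2, 0.7
q1 = t1 + t1p * cmath.exp(1j * k)
q2 = t2 + t2p * cmath.exp(1j * k)
h = [[0, t3, 0, q1],
     [t3, 0, q2, 0],
     [0, q2.conjugate(), 0, t4],
     [q1.conjugate(), 0, t4, 0]]

zeta = (t3**2 + t4**2 + t1**2 + t2**2 + t1p**2 + t2p**2
        + 2 * (t1 * t1p + t2 * t2p) * math.cos(k))
eta = zeta**2 - 4 * abs(t3 * t4 - q1 * q2.conjugate())**2
energies = [l * math.sqrt((zeta + p * math.sqrt(eta)) / 2)
            for l in (1, -1) for p in (1, -1)]
```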
We define the exchange operator $\Upsilon$, which exchanges the two legs of the ladder and their corresponding sublattices, as
\begin{eqnarray}\label{e17}
\Upsilon \psi \rightarrow \psi '=\left(\begin{array}{c}
A_{l}\\
A_{u}\\
B_{u}\\
B_{l}
\end{array}\right).
\end{eqnarray}
In its eigenbasis, $\Upsilon$ is diagonalized,
\begin{eqnarray}
U_1\Upsilon U_1^{-1}= \left(\begin{array}{c c c c}
-1&0&0&0 \\
0&-1&0&0 \\
0&0&1&0 \\
0&0&0&1
\end{array}\right),
\end{eqnarray}
through the unitary matrix
\begin{eqnarray}\label{e027}
U_1&=&\frac{1}{\sqrt{2}}\left(\begin{array}{c c c c}
0&0&-1&1 \\
-1&1&0&0 \\
0&0&1&1 \\
1&1&0&0
\end{array}\right).
\end{eqnarray}
Transforming Hamiltonian (\ref{e9}) with the unitary matrix $U_1$ yields
\begin{eqnarray}\label{e26}
\tilde{h}_F=U_1 h_F(k)U_1^{-1}=\left(\begin{array}{c c}
h_{1} & h_{cou} \\
-h_{cou}^ \star & -h_1
\end{array}\right),
\end{eqnarray}
where
\begin{eqnarray}\label{e27}
h_1&=&\frac{1}{2}\left(\begin{array}{c c}
\tilde{t}_4&t_a+t_a ^\prime e^{ik}\\
t_a+t_a ^\prime e^{-ik} & \tilde{t}_3
\end{array}\right), \nonumber \\
h_{cou}&=&\frac{1}{2}\left(\begin{array}{c c}
0&t_b+t_b ^\prime e^{ik}\\
-t_b-t_b ^\prime e^{-ik} &0
\end{array}\right).
\end{eqnarray}
From Hamiltonian (\ref{e26}), one finds that the diagonal blocks ($h_1$, $-h_1$) are the well-studied Hamiltonians of the generalized SSH model \cite{GeSSH}, coupled by the off-diagonal block $h_{cou}$. Note that the structure of matrix (\ref{e26}) implies that the energy spectra of the individual diagonal blocks are shifted away from zero energy, and that the off-diagonal block $h_{cou}$ is responsible for opening a gap around zero energy. Therefore, one may expect Hamiltonian (\ref{e26}) to host two kinds of edge states: zero-energy edge states, which may be protected by symmetries of the whole Hamiltonian, and finite-energy edge states, due to the SSH-like structure of the block $h_1$, which may be protected by symmetries of the diagonal block.
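The claimed block structure can be checked numerically: conjugating $h_F(k)$ by $U_1$ must give two diagonal blocks that are opposite in sign and a coupling block whose diagonal vanishes. A stdlib-only sketch with illustrative hopping values (not values used in the paper):

```python
import cmath

def matmul(A, B):
    # plain dense matrix product for small matrices
    return [[sum(A[i][m] * B[m][j] for m in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

s = 1 / 2 ** 0.5
U1 = [[0, 0, -s, s], [-s, s, 0, 0], [0, 0, s, s], [s, s, 0, 0]]
U1T = [list(col) for col in zip(*U1)]  # U1 is real orthogonal: U1^{-1} = U1^T

t1, t2, t3, t4, t1p, t2p, k = 0.2, 1.8, 1.8, 1.8, 1.8, 0.2, 0.7
q1 = t1 + t1p * cmath.exp(1j * k)
q2 = t2 + t2p * cmath.exp(1j * k)
h = [[0, t3, 0, q1],
     [t3, 0, q2, 0],
     [0, q2.conjugate(), 0, t4],
     [q1.conjugate(), 0, t4, 0]]

ht = matmul(matmul(U1, h), U1T)  # transformed Hamiltonian in the exchange basis
```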
It is easy to check that Hamiltonian (\ref{e26}) has time-reversal and particle-hole symmetry defined, respectively, as $\mathcal{T} \tilde{h}_F(k)\mathcal{T}=\tilde{h}^\star_F(-k)$ and $\mathcal{P} \tilde{h}_F(k)\mathcal{P}=-\tilde{h}^\star_F(-k)$, with the corresponding operators $\mathcal{T}=\sigma_0 \otimes \sigma_0 \mathcal{K}$ and $\mathcal{P}=\sigma_{x} \otimes \sigma_0 \mathcal{K}$, where $\sigma_{0}$ and $\sigma_x$ are the identity matrix and the $x$ component of the Pauli matrices, and $\mathcal{K}$ is the complex conjugation operator. In fact, since $\mathcal{T}\cdot\mathcal{P}=\mathcal{C}$, the unitary chiral operator can be determined as $\mathcal{C}=\sigma_{x} \otimes \sigma_0$. In addition to the mentioned symmetries, under the condition $\tilde{t}_3=\tilde{t}_4$ the Hamiltonian (\ref{e26}) has inversion symmetry with operator $\Pi= \sigma_z \otimes \sigma_x$, as a result of the inversion symmetry of the diagonal blocks.
Before proceeding, to distinguish localized from extended states, we use the logarithm of the inverse participation ratio (IPR), which is given by \cite{IPR}
\begin{equation}\label{e25}
I(E)=\frac{\ln \sum ^{4N}_{j=1}|\psi(j)|^4}{\ln 4N}.
\end{equation}
Here $\psi (j)$ is the amplitude of the eigenvector with energy $E$ at site $j$. When the IPR is close to zero, the wave function is localized (energy levels shown in red in the figures), while for extended wave functions the IPR tends to $-1$ (energy levels shown in blue in the figures).
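As a concrete illustration of Eq. (\ref{e25}), the following Python sketch (our own construction; the function name and the two test states are illustrative, not from the paper) evaluates $I(E)$ in the two limiting cases quoted above:

```python
import numpy as np

def ipr_log(psi):
    """Logarithmic IPR, I(E) = ln(sum_j |psi(j)|^4) / ln(4N),
    for an eigenvector psi on a lattice of 4N sites."""
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)       # normalize the state
    return np.log(np.sum(np.abs(psi) ** 4)) / np.log(len(psi))

n_sites = 400                     # plays the role of 4N
localized = np.zeros(n_sites)     # state confined to a single site
localized[0] = 1.0
extended = np.ones(n_sites)       # uniformly spread (plane-wave-like) state

print(round(float(ipr_log(localized)), 6))  # 0.0   (localized limit)
print(round(float(ipr_log(extended)), 6))   # -1.0  (extended limit)
```

The two prints reproduce the limiting values quoted in the text: $I\to 0$ for a fully localized state and $I\to -1$ for a fully extended one.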
\section*{Relevant topological invariants} \label{s3}
The bulk-edge correspondence is a hallmark of topological systems, relating topological edge states under open boundary conditions to bulk topological invariants \cite{TI} calculated under periodic boundary conditions. Therefore, the topological invariants of Hamiltonian (\ref{e26}) should predict nontrivial values in the regions of parameter space where edge states emerge under open boundary conditions. In the following, we introduce three relevant topological invariants to properly characterize the topology of the edge states due to the existence of certain symmetries in the whole Hamiltonian and/or its diagonal blocks.
First, a relevant topological invariant is $\mathcal{Z}$ \cite{Ztopo}, which originates from the inversion symmetry of the diagonal blocks ($h_1, -h_1$). Each diagonal block commutes with the inversion operator at the high-symmetry points $k=0$ and $\pi$. Hence, the eigenstates of $h_1$ have a well-defined parity at these points. Subsequently, one can define an integer invariant for each band gap of the system as
\begin{equation}\label{e30}
\mathcal{N}_{i,j}=|\mathcal{E}_{1,i,j}-\mathcal{E}_{2,i,j}|,
\end{equation}
where $\mathcal{E}_{1,i,j}$ and $\mathcal{E}_{2,i,j}$ are the numbers of negative-parity states of the band structure at $k=0$ and $k=\pi$, respectively, in the $i$th band gap of the $j$th subspace. Eventually, by using the relation \cite{Tm2}
\begin{equation}\label{e31}
\mathcal{Z}:=\sum_j \sum_i \mathcal{N}_{i,j},
\end{equation}
we can expose the topology of the finite-energy edge states, originating from the diagonal blocks, under open boundary conditions.
Second, it is well known that the relevant topological invariant for a quantum system with chiral symmetry, which distinguishes topologically distinct phases, is the winding number. The winding number counts the number of pairs of zero-energy edge states. The chiral-symmetric Hamiltonian (\ref{e26}) can be brought into block off-diagonal form in the basis of the chiral operator. This can be done by the unitary operator
\begin{equation}\label{e32}
U_2=\frac{1}{\sqrt{2}}\left(\begin{array}{c c c c}
0 & -1 & 0 &1 \\
-1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 \\
1 & 0 & 1 & 0
\end{array}\right).
\end{equation}
Transforming Hamiltonian (\ref{e26}) by $U_2$ leads to
\begin{equation}\label{e33}
U_2 \tilde{h}_F(k)
U_2^{-1}=\left(\begin{array}{c c}
0 & G \\
G^\dagger & 0
\end{array}\right),
\end{equation}
where
\begin{equation}\label{e34}
G=\left(\begin{array}{c c}
\tilde{t}_4 & \tilde{t}_2+\tilde{t}_2^\prime e^{ik} \\
\tilde{t}_1+\tilde{t}_1^\prime e^{-ik} & \tilde{t}_3
\end{array}\right).
\end{equation}
Now, we can use the following relation to obtain the winding number \cite{Class10,superconduc}
\begin{equation}\label{e35}
\mathcal{W}=\frac{1}{2\pi i}\int_{-\pi} ^{\pi}dk\, \partial_k \ln Z(k),
\end{equation}
where
\begin{eqnarray}\label{e36}
Z(k)=\det G=\tilde{t}_3\tilde{t}_4-\tilde{t}_1\tilde{t}_2-\tilde{t}_1^\prime\tilde{t}_2^\prime -(\tilde{t}_1\tilde{t}_2^\prime+\tilde{t}_1^\prime\tilde{t}_2)\cos k -i(\tilde{t}_1\tilde{t}_2^\prime-\tilde{t}_1^\prime\tilde{t}_2)\sin k.
\end{eqnarray}
The integral of Eq. (\ref{e35}) can be evaluated analytically via Cauchy's residue theorem. We find a simple formula characterizing the topology of the system associated with zero-energy edge states as
\begin{eqnarray}\label{e37}
\mathcal{W}= \Theta(x-y)\Theta(x+y)+\Theta(-x+y)\Theta(-x-y),
\end{eqnarray}
where
\begin{eqnarray}\label{e38}
x&=&-(\tilde{t}_1\tilde{t}_2^\prime+\tilde{t}_1^\prime\tilde{t}_2), \nonumber \\
y&=&\tilde{t}_3\tilde{t}_4-\tilde{t}_1\tilde{t}_2-\tilde{t}_1^\prime\tilde{t}_2^\prime,
\end{eqnarray}
and $\Theta(\xi)$ is the Heaviside step function. $\mathcal{W}=1$ means that the system hosts one pair of topological edge states at zero energy, while $\mathcal{W}=0$ indicates the trivial topological phase, where the system is an ordinary insulator.
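As a numerical cross-check of Eq. (\ref{e37}), one can evaluate the winding of $Z(k)$ from Eq. (\ref{e36}) directly on a $k$ mesh and compare it with the Heaviside formula. The sketch below is our own (function names and the sample hopping values are illustrative assumptions, not taken from the paper); it returns $|\mathcal{W}|$, the quantity that counts pairs of zero-energy edge states:

```python
import numpy as np

def winding_numeric(t1, t2, t3, t4, t1p, t2p, nk=4001):
    """|W| from Eq. (e35): total phase wound by Z(k) = det G(k)
    as k sweeps the Brillouin zone, divided by 2*pi."""
    k = np.linspace(-np.pi, np.pi, nk)
    Z = (t3 * t4 - t1 * t2 - t1p * t2p
         - (t1 * t2p + t1p * t2) * np.cos(k)
         - 1j * (t1 * t2p - t1p * t2) * np.sin(k))
    phase = np.unwrap(np.angle(Z))
    return int(round(abs((phase[-1] - phase[0]) / (2.0 * np.pi))))

def winding_closed_form(t1, t2, t3, t4, t1p, t2p):
    """Heaviside formula of Eq. (e37)."""
    x = -(t1 * t2p + t1p * t2)
    y = t3 * t4 - t1 * t2 - t1p * t2p
    step = lambda s: 1 if s > 0 else 0
    return step(x - y) * step(x + y) + step(-x + y) * step(-x - y)

# illustrative topological point (W = 1) and trivial point (W = 0)
print(winding_numeric(1.0, 2.0, 1.0, 1.0, 2.0, 1.0),
      winding_closed_form(1.0, 2.0, 1.0, 1.0, 2.0, 1.0))   # 1 1
print(winding_numeric(0.5, 0.5, 1.0, 1.0, 0.2, 0.1),
      winding_closed_form(0.5, 0.5, 1.0, 1.0, 0.2, 0.1))   # 0 0
```

Geometrically, $Z(k)$ traces an ellipse centered at $y$ with real semi-axis $|x|$, so the winding is nontrivial exactly when $|x|>|y|$, in agreement with the Heaviside formula.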
Third, when the chiral symmetry is broken by symmetry breaking perturbations, the inversion symmetry of the whole Hamiltonian allows us to use the multi-band Zak phase \cite{ZakPh}
\begin{eqnarray}\label{e39}
\gamma=\sum_{E<0}\int \langle u(k)\vert i\nabla _k \vert u(k) \rangle dk,
\end{eqnarray}
to calculate the topological invariant of the zero-energy edge states. Here, $\vert u(k)\rangle$ denotes the occupied Bloch states with corresponding eigenvalue $E$.
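In practice, the Zak phase (\ref{e39}) is computed on a discrete $k$ mesh as the phase of a Wilson loop of overlaps between neighboring occupied Bloch states. As a minimal, self-contained illustration we apply this to a single SSH-like two-band block rather than to the full four-band Hamiltonian; the code and parameter values are our own assumptions:

```python
import numpy as np

def zak_phase_lower_band(t, tp, onsite=0.0, nk=2000):
    """Discretized (Wilson-loop) Zak phase of the lower band of a
    two-band SSH-like Bloch Hamiltonian
        H(k) = [[onsite, t + tp*exp(ik)], [conj(.), onsite]]."""
    ks = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    states = []
    for k in ks:
        off = t + tp * np.exp(1j * k)
        H = np.array([[onsite, off], [np.conj(off), onsite]])
        _, vecs = np.linalg.eigh(H)   # eigenvalues ascending
        states.append(vecs[:, 0])     # lower band
    states.append(states[0])          # close the loop
    overlap = 1.0 + 0.0j
    for a, b in zip(states[:-1], states[1:]):
        overlap *= np.vdot(a, b)      # Wilson loop of overlaps
    return -np.angle(overlap)         # in (-pi, pi]

print(round(abs(zak_phase_lower_band(1.0, 0.3)) / np.pi, 3))  # 0.0  (trivial)
print(round(abs(zak_phase_lower_band(0.3, 1.0)) / np.pi, 3))  # 1.0  (topological)
```

Because the loop of overlaps is closed with the first state, the arbitrary gauge returned by the eigensolver cancels, and inversion symmetry quantizes the result to $0$ or $\pi$.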
\section*{Asymmetric ladder case} \label{s4}
Now, we study the band structures and topological properties of the asymmetric ladder irradiated by circularly polarized light [see Fig. \ref{fig1}(a)]. We apply the light beam with the vector potential (\ref{vec}) with circular polarization, i.e., $A_x=A_y=A$. Then the hoppings of Eqs. (\ref{e6}) reduce to
\begin{eqnarray}\label{e60}
\tilde{t}_1&=&t_1J_0[A(a_0-b_0)], \nonumber \\
\tilde{t}_2&=&t_2J_0[A(a_0-b_1)], \nonumber \\
\tilde{t}_3&=&t_3J_0[A\sqrt{b_2^2+c_0^2+2b_2c_0\cos\phi}], \nonumber \\
\tilde{t}_4&=&t_4J_0[A\sqrt{b_2^2+c_0^2-2b_2c_0\cos\phi}], \nonumber \\
\tilde{t}_1^{\prime}&=&t^{\prime}_1J_0[Ab_0], \nonumber \\
\tilde{t}_2^{\prime}&=&t^{\prime}_2J_0[Ab_1].
\end{eqnarray}
Note that if $\phi= n\pi/2$ with $n$ an odd number, then the two rung hoppings are equal, $\tilde{t}_3 = \tilde{t}_4$. We set $\phi= \pi/2$ in the current section. The case $\phi \ne n\pi/2$, where $\tilde{t}_3 \ne \tilde{t}_4$ owing to $2b_2=b_0 - b_1\ne0$, will be discussed in Sec. 6. Remarkably, in the asymmetric ladder case, the symmetry operators are the same as those of Sec. 2, with $\mathcal{T}^2=1$, $\mathcal{P}^2=1$, and $\mathcal{C}^2=1$, so the symmetry class is BDI \cite{Class10,AZClass1,AZClass3,AZClass4}. It is worth noting that if we regard the leg degrees of freedom as spin degrees of freedom, then, in the asymmetric ladder, the unequal hopping of the upper and lower legs resembles spin-dependent hopping, i.e., spin-orbit interaction. As such, the exchange operator plays the role of a spin rotation operator.
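The renormalized hoppings of Eqs. (\ref{e60}) are straightforward to evaluate numerically. In the sketch below (helper names are ours; we set $a_0=1$ for illustration and use $b_2=(b_0-b_1)/2$ as stated above), $J_0$ is computed from its integral representation so that only NumPy is needed; the final line checks that the two rung hoppings coincide at $\phi=\pi/2$:

```python
import numpy as np

def bessel_j0(x):
    """J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt (trapezoidal rule)."""
    t = np.linspace(0.0, np.pi, 4001)
    f = np.cos(x * np.sin(t))
    dt = t[1] - t[0]
    return dt * (f.sum() - 0.5 * (f[0] + f[-1])) / np.pi

def driven_hoppings(A, bare, b0, b1, c0, phi, a0=1.0):
    """Light-renormalized hoppings of Eqs. (e60) for circular polarization
    A_x = A_y = A; bare = (t1, t2, t3, t4, t1p, t2p)."""
    t1, t2, t3, t4, t1p, t2p = bare
    b2 = 0.5 * (b0 - b1)                      # 2*b2 = b0 - b1
    r3 = np.sqrt(b2**2 + c0**2 + 2.0 * b2 * c0 * np.cos(phi))
    r4 = np.sqrt(b2**2 + c0**2 - 2.0 * b2 * c0 * np.cos(phi))
    return (t1 * bessel_j0(A * (a0 - b0)),
            t2 * bessel_j0(A * (a0 - b1)),
            t3 * bessel_j0(A * r3),
            t4 * bessel_j0(A * r4),
            t1p * bessel_j0(A * b0),
            t2p * bessel_j0(A * b1))

# at phi = pi/2 the two rung hoppings coincide, t3~ = t4~
h = driven_hoppings(3.2, (1, 1, 1, 1, 1, 1), 0.2, 0.1, 0.6, np.pi / 2)
print(round(abs(h[2] - h[3]), 12))  # 0.0
```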
\begin{figure*}[th]
\centering
\includegraphics[width=1\linewidth]{fig2.eps}
\caption{(Color online) Quasi-energy spectrum along with zero- and finite-energy edge states and their relevant topological invariants $\mathcal{Z}$ and $\mathcal{W}$ (a) as a function of $\theta/\pi$ with $A = 3.2$ and (b) as a function of $A$ with $\theta/\pi=0.45$. The colors in the energy spectrum represent the IPR of the wave functions. (c) The probability distribution of energy states; main panel: bulk states (the red curve with asterisk symbols) and the edge states hybridized with bulk states (the blue curve with circle symbols). Inset: the localized edge states within the band gap. Here, $b_0=0.2, b_1=0.1$, and $c_0=0.6$.}
\label{fig4}
\end{figure*}
Using Eqs. (\ref{e60}), the energy spectra of Hamiltonian (\ref{e7}) can be obtained numerically under open boundary conditions. The dependence of the quasi-energy spectra and the appropriate bulk topological invariants on $\theta/\pi$ and on $A$ is shown in Figs. \ref{fig4}(a) and \ref{fig4}(b), respectively. As predicted above, there exist two kinds of edge states: zero-energy edge states with a flat band and finite-energy edge states. As will be shown in Sec. 6, the former can be protected by the chiral or inversion symmetry of the whole Hamiltonian, with the corresponding $\mathcal{W}$ or $\gamma$ invariant, respectively, while the latter is protected by the inversion symmetry of the block $h_1$, with the corresponding $\mathcal{Z}$ invariant.
From Fig. \ref{fig4}(a) one can see that, interestingly, without a topological phase transition occurring, the finite-energy edge states can leave one energy gap and enter a new one by passing through the bulk states. In such a process, the $\mathcal{Z}$ invariant exhibits a nontrivial value, resulting in the existence of symmetry-protected edge states inside the topological bulk states. Furthermore, the finite-energy edge states hybridize with the extended bulk ones, establishing a hybridized Floquet topological metal phase with less localized topological edge states. As a result, by varying $\theta$, the IPR values of the edge states change significantly in the transition from the topological insulator phase, where the edge states lie within gapped states, to the hybridized Floquet topological metal phase, which originates from the breaking of the exchange symmetry $\Upsilon$ in the asymmetric ladder case.
Also, as shown in Fig. \ref{fig4}(b), with increasing driving amplitude $A$ the energies of the finite-energy edge states decrease non-monotonically, manifesting alternately topological insulator and hybridized Floquet topological metal phases. Furthermore, the zero-energy edge states, characterized by the topological invariant $\mathcal{W}$ as functions of $\theta/\pi$ and $A$, reveal either topologically nontrivial stable or trivial phases, separated by topological phase transitions.
To gain insight into the nature of the states, in Fig. \ref{fig4}(c) we have plotted the probability distribution of hybridized and localized finite-energy edge states and of the bulk states as a function of the unit cell index along the ladder. As usual, the localized edge states [see the inset] and the extended bulk states [see the red curve indicated by asterisk symbols in the main panel] have the highest probability at the ends and in the middle of the system, respectively. Moreover, one finds that the hybridized edge states can have finite probability both at the ends and in the bulk of the system [see the blue curve marked by circle symbols in the main panel].
\begin{figure}[ht!]
\centerline{\includegraphics[width=10.5cm]{fig3.eps}}
\caption{(Color online) Topological phase diagram in the ($\theta/\pi$, $A$) plane associated with (a) finite-energy edge states, for which the red, green, and yellow regions indicate nontrivial topological regions with topological invariant $\mathcal{Z}=2$, where the corresponding edge states lie within the subband gap, within the bulk states, and in the main gap, respectively, and (b) zero-energy edge states, for which the yellow and gray regions correspond to the topologically nontrivial ($\mathcal{W}=1$) and trivial ($\mathcal{W}=0$) phases, respectively. Here, $b_0=0.2, b_1=0.1$, and $c_0=0.6$.}
\label{phase}
\end{figure}
The phase diagrams in the ($\theta/\pi$, $A$) plane, including the topologically distinct phases with finite- and zero-energy edge states, are shown in Figs. \ref{phase}(a) and \ref{phase}(b), respectively. We represent the topological phases in which the edge states reside in the gap between the subbands and in the main gap by red and yellow colors, respectively. Also, the hybridized Floquet topological metal phase and the normal insulator are indicated by green and gray areas, respectively.
From Fig. \ref{phase}(a), one can see that around $\theta/\pi \simeq$ 0 and 2, the topological insulator phase with edge states within the subband gap dominates for most values of $A$. Moreover, around $\theta/\pi \simeq$ 1 the finite-energy edge states associated with topologically nontrivial phases penetrate into the subband bulk states, except for the particular values $A\simeq$ 3 and 7. At these values of $A$, the finite-energy edge states reside completely within the main gap and, consequently, the hybridized Floquet topological metal phase vanishes. Furthermore, for $\theta/\pi \simeq$ 0.5 and 1.5 with $A\simeq$ 4 and 9 the finite-energy edge states lie in the main gap, realizing the topological insulator phase.
As shown in Fig. \ref{phase}(b), the nontrivial topological phase associated with zero-energy edge states is found for weak $A$, independent of the value of $\theta$. But for intermediate and strong $A$ with $\theta/\pi \simeq$ 0.5 and 1.5 the trivial insulator dominates, whereas for $\theta/\pi \simeq$ 0, 1, and 2 the phase changes successively from topological to trivial insulator as a function of $A$.
\section*{Symmetric ladder case}\label{s5}
We now consider the symmetric ladder case, where the dimerization pattern and lattice spacings of the upper leg are the same as those of the lower leg, as shown in Fig. \ref{fig1}(b). Using Eq. (\ref{e6}), the hoppings for this case can be rewritten as
\begin{eqnarray}\label{e200}
\tilde{t}_1&=&\tilde{t}_2=t_1J_0[A_{x}(a_0-b_0)], \nonumber \\
\tilde{t}_3&=&\tilde{t}_4=t_3J_0[A_{y}c_0], \nonumber \\
\tilde{t_1^{\prime}}&=&\tilde{t_2^{\prime}}=t^{\prime}_1J_0[A_{x}b_0].
\end{eqnarray}
From the above equations, one finds that the horizontal (vertical) hoppings are affected only by the $x$ ($y$) component of the vector potential, independent of $\phi$, due to the rectangular symmetry of the lattice. This means that circularly polarized light ($A_x=A_y=A$) acts as two independent linearly polarized fields in the two directions. In this case, the Hamiltonian (\ref{e9}) commutes with the exchange operator, $[\Upsilon, h_F(k)]=0$, and can be brought into block diagonal form by the unitary matrix (\ref{e027}) as
\begin{eqnarray}\label{e18}
\tilde{h}_F=U_1 h_F(k) U_1^{-1}=\left(\begin{array}{c c}
h_2&0\\
0 & -h_2
\end{array}\right),
\end{eqnarray}
where
\begin{eqnarray}\label{e19}
h_2=\left(\begin{array}{c c}
\tilde{t}_{3}&\tilde{t}_1+\tilde{t}_1 ^\prime e^{ik}\\
\tilde{t}_1+\tilde{t}_1 ^\prime e^{-ik} & \tilde{t}_3
\end{array}\right).
\end{eqnarray}
This indicates that the exchange symmetry prevents the hybridization of edge states with bulk states, because the coupling block $h_{cou}$ vanishes. Likewise, the zero-energy edge states are suppressed. Therefore, one may anticipate that the spectra of the two blocks overlap, so that the finite-energy edge states of one subsystem cross through the bulk states of the other without hybridization.
In the symmetric ladder model, there are two time-reversal symmetries defined by $\mathcal{T}_i \tilde{h}_F(k)\mathcal{T}_i=\tilde{h}^\star_F(-k)$ (with $i=1,2$), where $\mathcal{T}_1=\sigma_0\otimes\sigma_0 \mathcal{K}$ and $\mathcal{T}_2=\sigma_z\otimes\sigma_0 \mathcal{K}$. The system also has two particle-hole operators $\mathcal{P}_1=\sigma_x\otimes\sigma_0\mathcal{K}$ and $\mathcal{P}_2=\sigma_y\otimes\sigma_0\mathcal{K}$ satisfying $\mathcal{P}_i \tilde{h}_F(k)\mathcal{P}_i=-\tilde{h}^\star_F(-k)$, so the corresponding chiral operators fulfilling the sublattice symmetry $\mathcal{C}_i\tilde{h}_F(k)\mathcal{C}_i=-\tilde{h}_F(k)$ can be determined as
\begin{eqnarray}\label{e20}
\mathcal{C}_1 &=& \sigma_x \otimes \sigma_0, \\ \nonumber
\mathcal{C}_2 &=& \sigma_y \otimes \sigma_0.
\end{eqnarray}
Also, the Hamiltonian (\ref{e18}) has two inversion symmetry operators as
\begin{eqnarray}\label{e21}
\Pi_1 &=& \sigma_0 \otimes \sigma_x, \\\nonumber
\Pi_2 &=& \sigma_z \otimes \sigma_x.
\end{eqnarray}
According to the above symmetry relations, the symmetry operators satisfy $\mathcal{T}^2=1$, $\mathcal{P}^2=1$, and $\mathcal{C}^2=1$. Therefore, the symmetry class is still BDI \cite{Class10,AZClass1,AZClass3,AZClass4}. However, the diagonal blocks do not fall in the BDI class.
We obtain the eigenvalues of the model by diagonalizing Hamiltonian (\ref{e18}), yielding
\begin{equation}\label{e22}
E=\pm \tilde{t}_3\pm\sqrt{\tilde{t}_1^2+\tilde{t}_1^{\prime 2} +2\tilde{t}_1\tilde{t}_1^\prime \cos k}.
\end{equation}
Note that this energy spectrum is reminiscent of the spectrum of the SSH model with the additional term $\tilde{t}_{3}$, which can be tuned by the externally applied light. Such an additional term acts like a Zeeman field, splitting the energy levels of the SSH chain \cite{SSHZeeman}. When the vertical hopping $\tilde{t}_3=0$, the model reduces to two decoupled SSH chains with twofold-degenerate bulk states, and the two dispersive finite-energy edge states convert to flat zero-energy edge states with fourfold degeneracy.
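The closed-form spectrum can be checked against direct diagonalization of the blocks $\pm h_2$, using $E=\pm\tilde{t}_3\pm\vert\tilde{t}_1+\tilde{t}_1^\prime e^{ik}\vert$; the sketch below (our own construction, with illustrative hopping values) performs this check on a $k$ mesh:

```python
import numpy as np

def h2(k, t1, t1p, t3):
    """Diagonal block h2(k) of Eq. (e19)."""
    off = t1 + t1p * np.exp(1j * k)
    return np.array([[t3, off], [np.conj(off), t3]])

t1, t1p, t3 = 1.0, 0.6, 0.4   # illustrative values
for k in np.linspace(-np.pi, np.pi, 9):
    # full 4-band spectrum of diag(h2, -h2)
    bands = np.sort(np.concatenate([np.linalg.eigvalsh(h2(k, t1, t1p, t3)),
                                    np.linalg.eigvalsh(-h2(k, t1, t1p, t3))]))
    d = abs(t1 + t1p * np.exp(1j * k))
    closed = np.sort([t3 + d, t3 - d, -t3 + d, -t3 - d])
    assert np.allclose(bands, closed)
print("spectrum matches E = +/- t3 +/- |t1 + t1' e^{ik}|")
```

The check also makes explicit that $\tilde{t}_3$ enters only as a rigid shift of the two SSH-like branches, as discussed above.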
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{fig4.eps}
\caption{(Color online) Dependence of the quasi-energy spectrum and its relevant topological invariant $\mathcal{Z}$ on (a) $\theta$ with $A = 5$ and on (b) $A$ with $\theta/\pi=0.8$. The colors in the energy spectrum represent the IPR of the wave functions. (c) The probability distribution of energy states; main panel: bulk states (the red curve with asterisk symbols) and the edge states within the bulk states (the blue curve with circle symbols). Inset: the localized edge states within the band gap. Here, $b_0=0.6$ and $c_0=0.3$.}
\label{fig2}
\end{figure*}
\begin{figure}[t!]
\centerline{\includegraphics[width=7cm]{fig5.eps}}
\caption{(Color online) Topological phase diagram in the ($A, \theta/\pi$) plane for the symmetric ladder case. Yellow and red regions show the topological insulator phase with $\mathcal{Z}=2$, with edge states within the main gap and the subband gap, respectively. Green and gray regions indicate the Floquet topological metal state ($\mathcal{Z}=2$) and the normal insulator ($\mathcal{Z}=0$). The parameters are $b_0=0.6$ and $c_0=0.3$.}
\label{fig3}
\end{figure}
As mentioned above, for the present model the circularly polarized light modifies the hoppings in the $x$ and $y$ directions independently. However, the topological phase transition again occurs at $k=0$ and $k=\pi$. Thus, plugging Eq. (\ref{e200}) into Eq. (\ref{e12}), the gap closing/reopening condition reduces to
\begin{eqnarray}\label{e121}
\tilde{t}_1 =-e^{ik} \tilde{t}_1^\prime.
\end{eqnarray}
Note that this relation, which depends only on the horizontal hoppings, is similar to the topological phase transition condition of the original SSH model. Thus, the vertical hoppings have no effect on the topological phase transition points, which take place at $\theta/\pi=0.5$ and $\theta/\pi=1.5$ in the static limit \cite{SSH,SSH1}. However, the energy levels at which the gap closes are not zero, being shifted by $\tilde{t}_3$ [see also Eq. (\ref{e22})], in contrast to the original SSH model [see Fig. \ref{fig2}(a)].
In Figs. \ref{fig2}(a) and \ref{fig2}(b), we plot the quasi-energy spectra along with the bulk topological invariants versus $\theta/\pi$ and $A$, respectively. As already discussed, there are no zero-energy edge states, and the energy levels of the finite-energy edge states change as functions of $\theta/\pi$ and $A$. From both figures, one can see that the finite-energy edge states penetrate into the bulk states and leave their band gap without a topological phase transition occurring. Unlike in the asymmetric ladder case, interestingly, due to the presence of the exchange symmetry, the finite-energy edge states appear in the bulk states without hybridization \cite{Tm1,Tm2}, resulting in a Floquet topological metal phase.
Also, the probability distribution versus unit cell index along the ladder is shown in Fig. \ref{fig2}(c) for bulk states and for finite-energy edge states in the bulk and in the gap. The finite-energy edge states remain localized within the bulk and gapped states, as indicated by the curves with blue circle symbols in the main panel and black circle symbols in the inset, respectively, whereas the bulk states themselves are extended [see the red curves with asterisk symbols in the main panel].
In Fig. \ref{fig3}, the topological phase diagram is depicted in the ($A, \theta/\pi$) plane. We again distinguish the topological phases with edge states in the subband gap and in the main gap by red and yellow colors, respectively. The Floquet topological metal phase and the trivial insulator are indicated by green and gray colors. Except for certain values of $A$, for $\theta/\pi$ around 1 the topological insulator with edge states in the main gap dominates. Moving away from $\theta/\pi\simeq $ 1 toward $\theta/\pi \simeq$ 0 and 2, the Floquet topological metal, the topological insulator with edge states in the subband gap, and the trivial insulator occur for weak and intermediate $A$. If $A$ is strong enough, the region corresponding to the topological insulator with edge states in the subband gap vanishes. This trend is due to the decrease in energy of the finite-energy edge states as $A$ increases [see Fig. \ref{fig2}(b)].
\begin{figure*}[t!]
\centering
\includegraphics[width=0.7\linewidth]{fig6.eps}
\caption{(Color online) Quasi-energy spectrum and the related topological invariants of the asymmetric ladder case exposed to the circularly polarized laser field as a function of $\theta/\pi$ for (a) $\phi=\pi/4$ with the broken inversion symmetries of both the diagonal blocks and the whole Hamiltonian, (b) $\phi=\pi/2$ in the presence of $H^\prime$ with the broken chiral symmetry and preserved inversion symmetries of the whole Hamiltonian and the block $h_1$, (c) $\phi=\pi/2$ in the presence of $H^{''}$ with the broken inversion symmetry of the whole Hamiltonian and preserved inversion symmetry of the block $h_1$, and (d) $\phi=\pi/4$ in the presence of $H^{''}$ with the broken inversion symmetries of the whole Hamiltonian and the block $h_1$ as well as the chiral symmetry. Here, the parameters are the same as in Fig. \ref{fig4} and $V=t/2$.}
\label{fig6}
\end{figure*}
\section*{Stability Of Edge States}\label{s6}
Now, we examine the stability of the topological phases and demonstrate which symmetry is responsible for the appearance of the edge states. To do so, we consider the asymmetric ladder case subjected to a circularly polarized field in order to have the maximum number of symmetry-protected edge states, including zero- and finite-energy edge states.
Before illustrating the stability of the topological edge states against perturbations such as on-site potentials, we discuss the effect of circular polarization with $\phi \ne n\pi/2$ on the topological characteristics of the asymmetric ladder. According to Eq. (\ref{e60}), for $\phi \ne n\pi/2$, the two rung hoppings are not equal, $\tilde{t}_3 \ne \tilde{t}_4$, resulting in the breaking of the inversion symmetry of the diagonal blocks ($h_1, -h_1$). This subsequently breaks the inversion symmetry of the whole Hamiltonian as well. Consequently, the lack of inversion symmetry in the block $h_1$ gaps out the gapless finite-energy edge states, lifting their degeneracy, so that their invariant $\mathcal{Z}$ takes continuous values, as shown in Fig. \ref{fig6}(a). But despite the absence of inversion symmetry in the whole Hamiltonian, one can see that the zero-energy edge states and their invariant $\mathcal{W}$ remain topologically nontrivial, because the chiral symmetry is preserved.
In what follows, we assume $\phi=\pi/2$ unless otherwise specified. We add the on-site potential
\begin{equation}\label{e29}
H^\prime =V \sum _j \left(A^{\dagger}_{uj}A_{uj}+B^{\dagger}_{uj}B_{uj}+A^{\dagger}_{lj}A_{lj}+B^{\dagger}_{lj}B_{lj}\right),
\end{equation}
to the Hamiltonian (\ref{e7}), with $V$ being the amplitude of the on-site potential. The presence of $H^\prime$ breaks the chiral symmetry of the whole Hamiltonian and shifts the energy levels, as depicted in Fig. \ref{fig6}(b). But because the inversion symmetry of the whole Hamiltonian is preserved, the multi-band Zak phase (\ref{e39}) can be employed as the topological invariant characterizing the topology of the midgap edge states near zero energy, which takes quantized values [see Fig. \ref{fig6}(b)]. Also, the inversion symmetry of the diagonal block is preserved, and the finite-energy edge states remain intact. On the other hand, we add the on-site potential of the form
\begin{equation}\label{e291}
H^{''} =V \sum _j \left(A^{\dagger}_{uj}A_{uj} + B^{\dagger}_{lj}B_{lj}\right),
\end{equation}
to Hamiltonian (\ref{e7}). This perturbation breaks both the chiral symmetry and the inversion symmetry of the whole Hamiltonian while preserving the inversion symmetry of the blocks ($h_1, -h_1$). This means that the topological properties cannot be transferred from the diagonal blocks to the full Hamiltonian. As shown in Fig. \ref{fig6}(c), the topology of the zero-energy edge states is destroyed; however, the finite-energy edge states remain degenerate and nontrivial. As a result, the zero-energy edge states are protected by either the chiral symmetry or the inversion symmetry of the whole Hamiltonian.
Finally, we add the on-site potential $H^{''}$ to the system exposed to the circularly polarized field with $\phi=\pi/4$. In this situation, the chiral and inversion symmetries of the whole Hamiltonian, as well as the inversion symmetry of the blocks ($h_1, -h_1$), are broken. In Fig. \ref{fig6}(d), we plot the band structure, illustrating that the finite- and zero-energy edge states are gapped, with trivial values of their topological numbers. Consequently, the inversion symmetry of the block $h_1$ is the fundamental symmetry protecting the finite-energy edge states.
\section*{Summary} \label{s7}
We studied the topological features of the two-leg SSH ladder periodically driven by circularly polarized light, uncovering the role of lattice geometry. We considered asymmetric and symmetric ladders, whose legs have, respectively, different and identical patterns of dimerization as well as lattice spacings. We found that both zero- and finite-energy edge states exist in the asymmetric ladder case, whereas the symmetric ladder hosts only the finite-energy ones. In both ladder models, depending on the dimerization strength and driving amplitude, the finite-energy edge states can leave the gap of the subbands and enter the gap between the upper valence and lower conduction bands by crossing through the bulk states of the subbands. For the asymmetric ladder, when the finite-energy edge states lie within the bulk ones, due to the absence of exchange symmetry these two types of states, having the same energy and quantum numbers, hybridize, providing the hybridized Floquet topological metal states. Such new topological states are no longer localized. In contrast, for the symmetric ladder case, the presence of exchange symmetry prevents hybridization between the finite-energy edge and bulk states, establishing the Floquet topological metal phase with localized edge states. We also obtained the topological phase diagram, which, in addition to the two above-mentioned topological phases, contains a usual topological insulator and an ordinary insulator. Furthermore, based on the underlying symmetries of the system, we introduced relevant topological invariants to characterize the topology of the edge states. By applying symmetry-breaking perturbations, we demonstrated that the finite-energy edge states are protected by the inversion symmetry of the diagonal blocks of the Hamiltonian, while the zero-energy edge states are protected by either the inversion or the chiral symmetry of the whole Hamiltonian.
Moreover, we obtained an analytical formula for the winding number characterizing the topology of the zero-energy edge states when the chiral symmetry is present.
Finally, we note that, interestingly, the interleg and intraleg hoppings can effectively play the same roles as a realistic Zeeman field and spin-orbit coupling, respectively, in our spinless model. Thus, unlike in topological 1D systems, such ingredients may not be necessary for quasi-1D systems \cite{doubleLadder} to establish topological phases. This provides an alternative route to simulate the Zeeman field and spin-orbit interaction in the absence of a spin degree of freedom by engineering the existing degrees of freedom, for example, the sublattice space. Furthermore, the current experimental status provides the possibility of realizing a two-leg ladder composed of coupled SSH chains \cite{ladder} and of manifesting the topological signatures employing density and momentum-distribution measurements \cite{ExperLadder}. Also, the possible topological states can be recognized by using spatially resolved radio-frequency spectroscopy of the local density of states \cite{Expermesur}.
{\it Note added.} After completing the present study, we became aware that the Floquet topological metal phase has been investigated in Refs. \cite{FloMetal,FloSemiMetal}. In those works, although the edge states can have the same energy as the bulk states, unlike in our case they remain isolated inside the band gap.
\section{Introduction}
The purpose of this paper is to clarify an intricate argument recently
introduced by Bony and H\"afner \cite{Bony-Haefner1} and use these ideas to
generalize certain of the results of \cite{Bony-Haefner1}. The
central thrust of \cite{Bony-Haefner1} is first of all to obtain
certain kinds of commutator estimates for the Laplacian and its
square root on asymptotically Euclidean space. The authors then
employ those estimates to yield energy decay results for the wave
equation, and, ultimately, global existence results for quadratically
semilinear wave equations on these spaces. In a subsequent note
\cite{Bony-Haefner2}, applications of the linear results to the low
frequency limiting absorption principle were shown. The novel tool
central to all of these applications is the commutator estimate
\begin{equation}\label{BHest}
\chi_I(H^2\Delta_g)\frac{i}{2}[H^2\Delta_g,A]\chi_I(H^2\Delta_g)
\geq C\chi_I(H^2\Delta_g)^2,
\end{equation}
where $H\uparrow \infty$ is a \emph{large} parameter, $I$ is a compact
interval in $(0,\infty),$ and $\chi_I$ its indicator function, and
where $A$ is a differential operator supported outside a compact set
and equal to
$(1/2)(r D_r +(r D_r)^*)$ near infinity. The estimate
\eqref{BHest} is thus a low-energy version of the positive commutator
construction that is ubiquitous in scattering theory; we remark that
the analogous \emph{high}-energy estimate would not be true with this
choice of $A,$ supported outside a compact set: by standard results in
microlocal analysis, the symbol of $A$ would have to be strictly
increasing along all geodesics, lifted to the cotangent bundle.
Indeed, on a manifold with trapped geodesics, the construction of such
a high-energy commutant is manifestly impossible.
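For orientation, we recall the classical model computation on exact flat $\RR^n$ (a standard fact, not taken from \cite{Bony-Haefner1}): there $A$ may be taken to be the generator of dilations globally, and the exact relation $i[\Delta,A]=2\Delta$ yields \eqref{BHest} at once by the functional calculus:

```latex
% On flat R^n, with \Delta >= 0 the (positive) Laplacian and
% A = (1/2)(x \cdot D + D \cdot x) the generator of dilations,
% one has i[\Delta, A] = 2\Delta, hence for compact I \subset (0,\infty):
\chi_I(H^2\Delta)\,\frac{i}{2}[H^2\Delta,A]\,\chi_I(H^2\Delta)
  = H^2\Delta\,\chi_I(H^2\Delta)^2 \;\geq\; (\inf I)\,\chi_I(H^2\Delta)^2 .
```

The point of \cite{Bony-Haefner1}, and of the present paper, is to recover such positivity when $A$ is only defined, and only behaves like the dilation generator, near infinity.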
In \cite{Bony-Haefner1}, the estimate \eqref{BHest} is proved by a
multi-step process involving a sequence of perturbation arguments,
starting from flat $\RR^n.$ It is thus a priori unclear whether such
estimates continue to hold if we vary the topology of our space and
its end structure. In this paper we show that \eqref{BHest} (as well
as a related estimate for $\sqrt{\Delta}$) does indeed continue to hold
on any long-range metric perturbation of a \emph{scattering manifold},
and further holds even if a short range (in a suitable sense) non-negative
potential is added.
The class of scattering manifolds, introduced by Melrose
\cite{RBMSpec}, consists of all manifolds with ends that look
asymptotically like the large ends of cones. The topology of interior
and of the cross sections of the ends is unrestricted. Our methods
are nonperturbative and simple, involving only commutator estimates
for differential operators and a sharp Poincar\'e-type inequality on
these manifolds. We anticipate that these methods will prove quite
flexible in the investigation of energy decay in a variety of other
asymptotic geometries.
We do not explore the applications of our estimate in detail here, as
the methods of \cite{Bony-Haefner1} apply, mutatis mutandis, directly
to our situation. We content ourselves with restating the energy
decay estimate of \cite{Bony-Haefner1} for solutions to the wave
equation in the final section of the paper and sketching the main
ingredients in its proof, adapted to our setting. This estimate applies on
scattering manifolds with no trapped geodesics.\footnote{Such a
manifold must in fact be contractible, but we note that even $\RR^n$
can be equipped with scattering metrics different from the round
metric on the sphere at infinity, so this result remains broader
than that of \cite{Bony-Haefner1}.}
We point out here that Guillarmou and Hassell started an extensive and
very detailed study of the Laplacian on scattering manifolds near the
bottom of the spectrum, \cite{Guillarmou-Hassell:Resolvent-I}, with a
particular emphasis on the Schwartz kernel of the resolvent of the
Laplacian on a resolved space. Our methods give the estimates we need
more quickly, but naturally the results of
\cite{Guillarmou-Hassell:Resolvent-I} give more detail on the
resolvent kernel, which in principle implies for instance results on
the energy decay\footnote{Note, however, that $L^2$-based estimates
are not always easy to get from a precise description of the Schwartz
kernel!}. We also remark that Bouclet \cite{Bouclet1} has recently
proved weighted low-energy estimates generalizing those of
\cite{Bony-Haefner1} for \emph{powers} of the resolvent on an asymptotically
Euclidean space.
Our paper is structured as follows. In Section~\ref{sec:b-sc} we recall
the background material concerning b- (or totally characteristic)
and scattering differential operators. In Section~\ref{sec:Poincare}
we obtain Poincar\'e inequalities and in Section~\ref{sec:weights} weighted
differential estimates that we use in Section~\ref{sec:Mourre}
to prove our positive commutator estimate. Finally, in Section~\ref{sec:wave}
we show how these results can be applied to study energy decay for the
wave equation, following the method of Bony and H\"afner \cite{Bony-Haefner1}.
\section{b- and scattering geometry}\label{sec:b-sc}
We very briefly recall the basic definitions of the b- and scattering
structures on an $n$-dimensional manifold with boundary, denoted $X$;
we refer to \cite{RBMSpec} for more detail.
A boundary defining function $x$ on $X$ is a non-negative $\CI$ function
on $X$ whose zero set is exactly $\pa X$, and whose differential
does not vanish there. We recall that $\dCI(X)$, which may also be
called the set of Schwartz functions, is the subset of
$\CI(X)$ consisting of functions vanishing at the boundary with
all derivatives; the dual of $\dCI(X)$ is the space of tempered
distributional densities
$\dist(X;\Omega X)$, while tempered distributions $\dist(X)$ are elements of
the dual of the space of Schwartz densities, $\dCI(X;\Omega X)$.
Let $\Vf(X)$ be the Lie algebra of all $\CI$ vector fields on $X$;
thus $\Vf(X)$ is the set of all $\CI$ sections of $TX$. In local
coordinates $(x,y_1,\ldots,y_{n-1})$,
$$
\pa_x,\pa_{y_1},\ldots,\pa_{y_{n-1}}
$$
form a local basis for $\Vf(X)$, i.e.\ restrictions of elements
of $\Vf(X)$ to the coordinate chart can be expressed uniquely as
a linear combination of these vector fields with $\CI$ coefficients.
We next define $\Vb(X)$ to be the Lie algebra of $\CI$ vector fields tangent
to $\pa X$; in local coordinates
$$
x\pa_x,\pa_{y_1},\ldots,\pa_{y_{n-1}}
$$
form a local basis in the same sense. Thus, $\Vb(X)$ is the set
of all $\CI$ sections of a bundle, called the b-tangent bundle of
$X$, denoted $\Tb X$. Finally, $\Vsc(X)=x\Vb(X)$ is the Lie algebra
of scattering vector fields;
$$
x^2\pa_x,x\pa_{y_1},\ldots,x\pa_{y_{n-1}}
$$
form a local basis now. Again, $\Vsc(X)$ is the set of
all $\CI$ sections of a bundle, called the scattering tangent bundle of
$X$, denoted $\Tsc X$.
The dual bundles of $TX,\Tb X,\Tsc X$ are $T^*X,\Tb^*X,\Tsc^*X$ respectively,
with local bases
$$
dx,\ dy_j,\ \text{resp.}\ \frac{dx}{x},\ dy_j,\ \text{resp.}\ \frac{dx}{x^2},
\ \frac{dy_j}{x},\ j=1,\ldots,n-1.
$$
These induce form bundles and density bundles as usual. In particular,
local bases of the density bundles are
$$
|dx\,dy_1\ldots dy_{n-1}|\ \text{resp.}\ x^{-1}|dx\,dy_1\ldots dy_{n-1}|,
\ \text{resp.}\ x^{-n-1}|dx\,dy_1\ldots dy_{n-1}|.
$$
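As a quick sanity check (an elementary computation, not spelled out above), the scattering density arises as the product of the scattering basis covectors:

```latex
% Sketch: the sc-density as a product of the sc-basis covectors.
\[
\Big|\frac{dx}{x^2}\wedge\frac{dy_1}{x}\wedge\cdots\wedge\frac{dy_{n-1}}{x}\Big|
=x^{-2}\cdot x^{-(n-1)}\,|dx\,dy_1\ldots dy_{n-1}|
=x^{-n-1}\,|dx\,dy_1\ldots dy_{n-1}|,
\]
% while the single factor x^{-1} in the b-density comes from the one
% covector dx/x in the b-basis.
```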
If $X$ is compact, the $L^2$-spaces relative to
these classes of densities
are well-defined as Banach spaces, up to equivalence
of norms; they are denoted by $L^2(X)$, $L^2_{\bl}(X)$, $L^2_{\scl}(X)$,
respectively.
The classes of vector fields mentioned induce algebras of differential
operators, consisting of locally finite sums of products of these
vector fields and elements of $\CI(X)$, considered as operators on $\CI(X)$.
These are denoted by $\Diff(X)$, $\Diffb(X)$ and $\Diffsc(X)$, respectively.
These in turn give rise to (integer order) Sobolev spaces. Thus,
for $m\geq 0$
integer,
$$
H_{\bullet}^m(X)=\{u\in L^2_{\bullet}(X):\ Qu\in L^2_{\bullet}(X)
\ \forall Q\in\Diff_{\bullet}^m(X)\},
$$
where $\bullet$ is either $\bl$ or $\scl$ and
where $Qu$ is a priori defined as a (tempered) distribution.
A similar construction leads to symbol classes:
We let $S^k(X),$ the space of symbols of order $k$, consist of
functions $f$ such that
$$
x^k Lf\in L^\infty(X)\ \text{for all}\ L\in\Diffb(X).
$$
We note, in particular, that
$$
x^\rho\CI(X)\subset S^{-\rho}(X)
$$
since $\Diffb(X)\subset\Diff(X)$.
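One can also verify the inclusion directly (a short check, using only the tangency of b-vector fields to the boundary): for $a\in\CI(X)$,

```latex
\[
(x\pa_x)(x^\rho a)=x^\rho\big(\rho\,a+x\pa_x a\big),\qquad
\pa_{y_j}(x^\rho a)=x^\rho\,\pa_{y_j}a,
\]
% so \Vb(X) preserves x^\rho\CI(X); iterating, for every L\in\Diffb(X)
% we get x^{-\rho}L(x^\rho a)\in\CI(X)\subset L^\infty(X), which is the
% defining property of membership in S^{-\rho}(X).
```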
As $\Diffb(X)$ (a priori acting, say, on tempered distributions)
preserves $S^k(X)$, one can extend $\Diffb(X)$ and $\Diffsc(X)$
by `generalizing the coefficients':
$$
S^k\Diffb^m(X)=\{\sum_j a_j Q_j:\ a_j\in S^k(X),\ Q_j\in\Diffb^m(X)\},
$$
with the sum being locally finite, and defining $S^k\Diffsc^m(X)$
similarly. In particular,
$$
x^k\Diffb^m(X)\subset S^{-k}\Diffb^m(X),
\ x^k\Diffsc^m(X)\subset S^{-k}\Diffsc^m(X).
$$
Then $Q\in S^{k}\Diffsc^m(X)$, $Q'\in S^{k'}\Diffsc^{m'}(X)$ gives
$QQ'\in S^{k+k'}\Diffsc^{m+m'}(X)$, and the analogous statement
for $S^k\Diffb^m(X)$ also holds.
An example of particular interest is the radial, or geodesic,
compactification of $\RR^n$, which compactifies $\RR^n$ as a closed
ball, $X=\overline{\BB^n}$; see \cite[Section~1]{RBMSpec} for an extended
discussion, with the compactification called stereographic
compactification there. In this case, the set of Schwartz functions on $\RR^n$
lifts to $\dCI(X)$ (justifying the `Schwartz' terminology for the latter),
the set of 0th order classical symbols
on $\RR^n$, i.e.\ 0th order symbols $a$ with an asymptotic expansion
$a(r\omega)\sim \sum_{j=0}^\infty r^{-j} a_j(\omega)$ in polar coordinates,
lifts to $\CI(X)$, the translation invariant
vector fields on $\RR^n$ lift to a basis of $\Vsc(X)$, and $\Hsc^m(X)$
is the standard Sobolev space $H^m(\RR^n)$ (under the natural identification
of functions), while $S^k(X)$ is the
standard symbol space $S^k(\RR^n)$. One way of seeing these statements
is to introduce `inverse polar coordinates' $z=x^{-1}\omega$, $x\in(0,1)$,
$\omega\in\sphere^{n-1}$, in the exterior of a closed ball in
$\RR^n_z$, and use polar coordinates $(\rho,\omega)\in(1/2,1)
\times\sphere^{n-1}$
near $\pa\BB^n$, with $\BB^n$ considered as the unit ball in $\RR^n$; then
one suitable identification of the exterior of the ball of radius 2 in
$\RR^n_z$ with the interior
of a collar neighborhood of $\pa\BB^n$ in $\overline{\BB^n}$ is
$$
(0,1/2)\times\sphere^{n-1}\ni(x,\omega)\mapsto(\rho,\omega)=(1-x,\omega)
\in(1/2,1)\times\sphere^{n-1}.
$$
\section{Poincar\'e inequalities}\label{sec:Poincare}
Let $g$ be a scattering metric on a compact manifold
with boundary $X$ of dimension $n$, and $L^2_g(X)$ the metric $L^2$-space.
That is, as introduced by Melrose \cite{RBMSpec},
we assume that $g$ is a Riemannian metric on $X^\circ$, and
that $\pa X$ has a collar neighborhood $U$ and a boundary defining function
$x$ such that on $U$,
$$
g=\frac{dx^2}{x^4}+\frac{h}{x^2},
$$
where $h$ is a symmetric 2-cotensor, $h\in\CI(X;T^*X\otimes T^*X)$,
which restricts to a metric on $\pa X$. Then with $h_0=h|_{\pa X}$,
extended to $U$ using the product decomposition,
\begin{equation}\label{eq:asymp-Eucl}
g=\frac{dx^2}{x^4}+\frac{h_0}{x^2}+g_1,\ g_1\in x\CI(X;\Tsc^*X\otimes\Tsc^*X).
\end{equation}
Below we assume that $g$ is of this form, with merely
\begin{equation}\label{eq:weaker-asymp}
g_1\in S^{-\rho}(X;\Tsc^*X\otimes\Tsc^*X),\ \rho>0.
\end{equation}
Then the Laplacian $\Delta_g\in\Diffsc^2(X)$ satisfies
$\Delta_g\in x^2\Diffb^2(X)$, namely $\Delta_g=x^2\Delta_{\bl}$,
$\Delta_{\bl}\in\Diffb^2(X)$. Explicitly, as shown by Melrose
\cite[Proof of Lemma~3]{RBMSpec}, in local
coordinates $(x,y)$ on a collar neighborhood of $\pa X$,
\begin{equation}\label{eq:Delta-b-form}
\Delta_{\bl}=D_x x^2D_x+i(n-1)xD_x+\Delta_0+x^\rho R,\ R\in S^0\Diffb^2(X),
\end{equation}
where $\Delta_0$ is the Laplacian of the boundary metric.
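For later use we record the elementary identity relating the normal-operator part above to its form in terms of $x\pa_x$ (which is how it reappears in Section~\ref{sec:weights}):

```latex
\[
D_x x^2D_x+i(n-1)xD_x
=-x^2\pa_x^2+(n-3)\,x\pa_x
=-(x\pa_x)^2+(n-2)\,(x\pa_x),
\]
% using D_x=-i\pa_x, \pa_x x^2\pa_x=x^2\pa_x^2+2x\pa_x, and
% (x\pa_x)^2=x^2\pa_x^2+x\pa_x.
```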
Moreover, the density $|dg|=x^{-n}|dg_{\bl}|$,
where $|dg_{\bl}|$ is a non-degenerate b-density,
so $L^2_g(X)=x^{n/2}L^2_{\bl}(X)$.
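In local coordinates this identification can be seen as follows (a sketch, with $h$ as in the collar form of $g$):

```latex
\[
|dg|=\sqrt{\det g}\;|dx\,dy|
=x^{-2}\cdot x^{-(n-1)}\sqrt{\det h}\;|dx\,dy|
=x^{-n}\Big(\sqrt{\det h}\,\frac{|dx\,dy|}{x}\Big),
\]
% the factor in parentheses is a non-degenerate b-density |dg_b|, so
% \|u\|_{L^2_g(X)}=\|x^{-n/2}u\|_{L^2_b(X)} up to equivalence of norms,
% i.e. L^2_g(X)=x^{n/2}L^2_b(X).
```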
We now recall a standard result in the b-calculus on the mapping
properties of $\Delta_g$.
Although, to be concise, we state it for $\Delta_g$ alone (i.e.\ for
$\Delta_g+V$ with $V=0$), a potential
$V\in S^{-2-\rho}(X)$ with $V\geq 0$, $\rho>0$, can easily be accommodated.
We will in fact not use this result in the sequel,
but remark that its use eliminates the need for some of our
arguments (at the expense of b-machinery)
in sufficiently high dimension ($n\geq 5$)
\footnote{A proof of Lemma~\ref{lemma:Vb-est} proceeds as follows.
The statement of this lemma with `isomorphism'
replaced by `Fredholm of index 0' follows from
\cite[Lemma~2.1]{Guillarmou-Hassell:Resolvent-I} (which in turn essentially
quotes \cite{Melrose:Atiyah}), since, keeping in mind
that $\Delta=x^{n/2+1}P_b x^{-n/2+1}$ with the notation of that paper,
$P_b:x^{-1}H^2_\bl(X)\to xL^2_{\bl}(X)$ is shown to be Fredholm of index 0
there. By Lemma~2.2 of \cite{Guillarmou-Hassell:Resolvent-I} elements
of the nullspace of $\Delta$ would necessarily be in
$x^{n/2-1} H_{\bl}^\infty(X)$.
One deduces that $du\in L^2_\scl(X;\Tsc^*X)$, and a regularization argument
allows one to conclude from $\Delta u=0$ that $du=0$, and then that $u=0$.}.
\begin{lemma}\label{lemma:Vb-est}
Suppose $n\geq 5$. Then
$$
\Delta_g:x^{n/2-2}H_\bl^2(X)\to x^{n/2}L^2_\bl(X)=L^2_g(X)
$$
is an isomorphism.
In particular, for any $Q\in\Vb(X)$,
$$
\|x^2 u\|_{L^2_g(X)}+\|x^2Q u\|_{L^2_g(X)}\leq C\|\Delta_g u\|_{L^2_g(X)}.
$$
\end{lemma}
It is also useful to have the Poincar\'e inequality at our disposal.
This can be proved by b-techniques; we give an elementary proof.
\begin{lemma}\label{lemma:Poincare}
Suppose $l>1$, $l>l'$. Then for $u\in x^{l+1} H^1_{\bl}(X)$,
$$
\|x u\|_{x^{l'} L^2_\bl(X)}\leq C\|\nabla_g u\|_{x^l L^2_{\bl}(X)}.
$$
In particular, for $n\geq 3$, with $l=n/2$, $\ep=l-l'>0$,
$$
\|x^{1+\ep} u\|_{L^2_g(X)}\leq C\|\nabla_g u\|_{L^2_g(X)}.
$$
\end{lemma}
\begin{proof}
It suffices to prove this for $u\in\dCI(X)$ as both sides are continuous
on $x^{l+1}H^1_{\bl}(X)$. Moreover, it suffices to show that for such $u$,
$$
\|\chi x u\|_{x^{l'} L^2_\bl(X)}\leq C\|\nabla_g u\|_{x^l L^2_{\bl}(X)},
$$
$\chi\in\CI_c(X)$ supported in a collar neighborhood of $\pa X$, which
is then identified with $[0,x_0)_x\times\pa X$, for the rest will then follow
by the standard Poincar\'e inequality on $H^1_0(K)$ where $K\subset X^\circ$
is compact. This in turn follows from
$$
\|\chi x u\|_{x^{l'} L^2_\bl(X)}\leq C\|x^2D_x u\|_{x^l L^2_{\bl}(X)},
$$
i.e.
\begin{equation}\label{eq:bdy-Poincare}
\int \chi^2 |u|^2 x^{-2l'+1}\,dx\,dy\leq C^2\int |D_x u|^2 x^{-2l+3}\,dx\,dy.
\end{equation}
But in local coordinates near $\pa X$, for $k<1/2$ and $x\leq x_0$,
\begin{equation*}\begin{split}
|u(x,y)|&=\Big|\int_0^x (\pa_x u)(s,y)\,ds\Big|
=\Big|\int_0^x s^{k}(\pa_x u)(s,y) s^{-k}\,ds\Big|\\
&\leq \Big(\int_0^x s^{2k}|(\pa_x u)(s,y)|^2\,ds\Big)^{1/2}
\Big(\int_0^x s^{-2k}\,ds\Big)^{1/2}\\
&\leq \Big(\int_0^{x_0} s^{2k}|(\pa_x u)(s,y)|^2\,ds\Big)^{1/2}
C'x^{-k+1/2},
\end{split}\end{equation*}
and thus, provided $p-2k+1>-1$,
$$
\int_0^{x_0} x^p |u(x,y)|^2\,dx\leq C''
\int_0^{x_0} s^{2k}|(\pa_x u)(s,y)|^2\,ds.
$$
Integration with respect to $y$ now gives
$$
\int \chi^2 |u|^2 x^{p}\,dx\,dy\leq C^2\int |D_x u|^2 x^{2k}\,dx\,dy.
$$
So take $k=-l+3/2$, so $k<1/2$ is satisfied for $l>1$. Then let
$p=-2l'+1$, so $p-2k+1=-2l'+2l-1$, and $p-2k+1>-1$ is satisfied if $l'<l$.
\end{proof}
We now prove a sharp version of the Poincar\'e inequality; this
will follow from a weighted Hardy inequality, which can
be found in the Appendix of \cite{Mazzeo-McOwen}; we give a proof for
completeness:
\begin{lemma}\label{lemma:Hardy}
Let $u \in \dCI_c([0,\infty)),$ and let $d\mu=x^{-n-1} \, dx$ on $(0,\infty).$ If $s<(n-2)/2,$ we have
$$
\norm{x^{1+s} u}^2_{L^2(d\mu)}\leq \frac{4}{(n-2-2s)^2} \norm{x^{2+s}
\pa_x u}^2_{L^2(d\mu)}.
$$
\end{lemma}
\begin{proof}
We follow the usual proof of Hardy's inequality, noting that one
usually uses $r=1/x$ as the independent variable. We will use the
abbreviated notation $L^2_\mu= L^2 (d\mu).$
As
$$
[x^2\pa_x, x^{1+2s}] = (1+2s) x^{2+2s},
$$
pairing with $u$ yields
\begin{align*}
(1+2s)\norm{x^{1+s}u}_{L^2_\mu}^2 &=\ang{x^2 \pa_x (x^{1+2s}u),u}_{L^2_\mu} - \ang{x^{1+2s}
x^2 \pa_x u,u}_{L^2_\mu}\\
&= \int_0^\infty \pa_x(x^{1+2s}u) x^{-n+1} \overline{u} \, dx-
\int_0^\infty x^{3+2s} u' \overline{u} x^{-n-1} \, dx.
\end{align*}
Integrating by parts yields
$$
(1+2s)\norm{x^{1+s}u}_{L^2_\mu}^2 = (n-1) \int \abs{x^{1+s} u}^2 \, x^{-n-1} \,
dx- 2 \int \Re (\bar{u} u') x^{3+2s} \, x^{-n-1} \, dx,
$$
hence if $s<(n-2)/2,$
$$
(n-2-2s) \norm{x^{1+s} u}_{L^2_\mu}^2 \leq 2\big\lvert\ang{x^{1+s} u, x^{2+s} u'}_{L^2_\mu}\big\rvert \leq
\lambda \norm{x^{1+s} u}_{L^2_\mu}^2 + \frac 1\lambda \norm{x^{2+s} u'}_{L^2_\mu}^2
$$
for all $\lambda>0.$
Thus if we also have $\lambda < n-2-2s,$
$$
\norm{x^{1+s}u}_{L^2_\mu}^2 \leq \frac{\norm{x^{2+s} u'}_{L^2_\mu}^2}{(n-2-2s)\lambda-\lambda^2}.
$$
Optimizing by taking
$\lambda=(n-2)/2 -s$ yields the desired estimate.
\end{proof}
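The final optimization is the usual quadratic maximization; writing $a=n-2-2s>0$,

```latex
\[
\sup_{0<\lambda<a}\big(a\lambda-\lambda^2\big)
=\Big(\frac{a}{2}\Big)^{2}=\frac{(n-2-2s)^2}{4},
\qquad\text{attained at }\lambda=\frac{a}{2}=\frac{n-2}{2}-s,
\]
% so the bound \|x^{1+s}u\|^2 \le \|x^{2+s}u'\|^2/((n-2-2s)\lambda-\lambda^2)
% becomes the stated estimate with constant 4/(n-2-2s)^2.
```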
Since in a collar neighborhood of $\pa X,$
$$
\abs{\nabla_g u}^2_g \sim \abs{x\pa_\theta u}^2+ \abs{x^2 \pa_x u}^2,
$$
we can combine our Hardy inequality with the non-sharp Poincar\'e inequality
above to get a sharp result:
\begin{prop}\label{prop:sharp}
If $s<(n-2)/2$ and
$$u \in x^{-s+(n-2)/2}H^1_{\bl}(X),$$ then
$$
\norm{x^{1+s} u}_{L^2_g(X)}\leq C_s \norm{x^s \nabla_g u}_{L^2_g(X)}.
$$
In particular, the estimate holds for $u\in x^{-s}H^1_{\scl}(X),$ hence
for $u\in H^1_{\scl}(X)$ for $s\geq 0$.
\end{prop}
\begin{proof}
By density of $\dCI(X)$
and continuity of both sides in $x^{-s+(n-2)/2}H^1_{\bl}(X)$ (recall
that $L^2_g(X)=x^{n/2}L^2_{\bl}(X)$), it
suffices to consider $u\in\dCI(X)$ when proving the estimate.
Let $\phi \in \CI(X)$ equal $1$ on a collar neighborhood of $\pa X$ of
the form $\{x<\ep\}$ and equal $0$ on $\{x>2\ep\}.$ By integrating
the inequality Lemma~\ref{lemma:Hardy} in the angular variables,
i.e.\ along $\pa X$ in the collar neighborhood, we have
\begin{align*}
\norm{x^{1+s} \phi u}^2&\lesssim \norm{x^{2+s}
\pa_x (\phi u)}^2 \\ &\lesssim \norm{x^s \nabla_g (\phi u)}^2\\ &\lesssim
\norm{x^s \nabla u}^2 + \norm{\phi' u}^2.
\end{align*}
(We use the notation $f \lesssim g$ to indicate that there exists $C
>0$ such that $|f| \leq Cg.$)
So overall we obtain
$$
\norm{x^{1+s} u}^2 \lesssim \norm{x^s \nabla u}^2 +
\norm{\phi' u}^2 + \norm{(1-\phi) u}^2.
$$
Now, since $\phi'$ and $1-\phi$ are supported in $\{x\geq\ep\}$, we
certainly have
$$
\phi', (1-\phi) \lesssim x^{s+\delta},
$$
for all $\delta>0,$
hence by Lemma~\ref{lemma:Poincare} (with $l=n/2-s>1,$ $l'=n/2-s-\delta$),
$$
\norm{\phi' u}^2 + \norm{(1-\phi) u}^2 \lesssim \norm{x^s \nabla u}^2,
$$
and the desired estimate follows.
\end{proof}
Interpolating between the trivial bound $\|x^s u\|_{L^2_g(X)}\leq \|x^s
u\|_{L^2_g(X)}$ and
Proposition~\ref{prop:sharp}, we immediately deduce:
\begin{cor}
For $s<(n-2)/2$, $u\in x^{-s}H^1_{\scl}(X)$,
\begin{equation}\label{eq:Poincare-interpolate-gen}
\|x^{s+\theta} u\|_{L^2_g(X)}\leq
C\|x^s\nabla_g u\|_{L^2_g(X)}^\theta\|x^s u\|_{L^2_g(X)}^{1-\theta},
\ 0\leq\theta\leq 1.
\end{equation}
In particular, if $n\geq 3$, $s=0$, then for $u\in H^1_{\scl}(X)$,
\begin{equation}\label{eq:Poincare-interpolate}
\|x^{\theta} u\|_{L^2_g(X)}\leq
C\|\nabla_g u\|_{L^2_g(X)}^\theta\|u\|_{L^2_g(X)}^{1-\theta},
\ 0\leq\theta\leq 1.
\end{equation}
\end{cor}
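The interpolation here is simply H\"older's inequality: for $0\leq\theta\leq 1$,

```latex
\[
\|x^{s+\theta}u\|_{L^2_g(X)}^2
=\int\big(x^{s+1}|u|\big)^{2\theta}\big(x^{s}|u|\big)^{2(1-\theta)}\,|dg|
\leq\|x^{s+1}u\|_{L^2_g(X)}^{2\theta}\,\|x^{s}u\|_{L^2_g(X)}^{2(1-\theta)},
\]
% by Hölder with exponents 1/\theta and 1/(1-\theta); applying
% Proposition \ref{prop:sharp} to the first factor on the right
% yields the interpolated Poincaré inequality.
```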
Of course, we can estimate $\nabla_g u$ with a right side of similar form:
as $\Delta_g=\nabla_g^*\nabla_g$,
\begin{equation}\label{eq:nabla_g-est}
\|\nabla_g u\|^2_{L^2_g(X)}
=\langle \Delta_g u,u\rangle\leq \|\Delta_g u\|_{L^2_g(X)}
\|u\|_{L^2_g(X)}.
\end{equation}
Note also that if $Q\in x\Vb(X)=\Vsc(X)$ then
\begin{equation}\label{eq:Vsc-est}
\|Qu\|_{L^2_g(X)}\leq C\|\nabla_g u\|_{L^2_g(X)}.
\end{equation}
We can also consider $P=\Delta_g+V$, $V\in S^{-2-\rho}(X)$, $V\geq 0$,
$\rho>0$
as before. Then
\begin{equation}\label{eq:nabla_g-V-est}
\|\nabla_g u\|^2_{L^2_g(X)}
=\langle \Delta_g u,u\rangle\leq
\langle (\Delta_g+V) u,u\rangle \leq \|(\Delta_g+V) u\|_{L^2_g(X)}
\|u\|_{L^2_g(X)}.
\end{equation}
\section{Weighted estimates for $\Delta_g+V$}\label{sec:weights}
We assume
throughout this section that $n\geq 3$, $g$ is a scattering metric
in the sense
of \eqref{eq:asymp-Eucl} with $g_1$ satisfying \eqref{eq:weaker-asymp},
$V\in S^{-2-\rho}(X)$, $V\geq 0$,
$\rho>0$. As below only $L^2_g(X)$ is of interest, we will write
$L^2(X)=L^2_g(X)$ henceforth.
For $0\leq s\leq 1$, $u\in\dCI(X)$, we now compute
\begin{equation}\begin{split}\label{eq:weighted-nabla}
\|x^s\nabla_g u\|^2_{L^2(X)}&=\langle\nabla_g u,x^{2s}\nabla_g u\rangle
=\langle\Delta_g u,x^{2s}u\rangle
+\langle\nabla_g u,[\nabla_g,x^{2s}]u\rangle\\
&=\langle(\Delta_g+V) u,x^{2s}u\rangle-\langle Vu,x^{2s}u\rangle
+\langle\nabla_g u,[\nabla_g,x^{2s}]u\rangle.
\end{split}\end{equation}
Now, for $0\leq s\leq 1/2$,
\begin{equation}\begin{split}\label{eq:weighted-nabla-calc-2}
|\langle(\Delta_g+V) u,x^{2s} u\rangle|&\leq \|(\Delta_g+V) u\|_{L^2(X)}
\|x^{2s} u\|_{L^2(X)}\\
&\leq C\|(\Delta_g+V) u\|_{L^2(X)} \|\nabla_g u\|^{2s}_{L^2(X)}
\|u\|^{1-2s}_{L^2(X)},
\end{split}\end{equation}
where we used \eqref{eq:Poincare-interpolate}.
On the other hand
$$[\nabla_g,x^{2s}]=x^{2s+1}f,\ f\in\CI(X;TX),$$
and $\sup|f|\leq C_0 s$, so using Proposition~\ref{prop:sharp} and $n\geq 3$
\begin{equation*}\begin{split}
|\langle\nabla_g u,[\nabla_g,x^{2s}]u\rangle|
&\leq C_0s\|x^s\nabla_g u\|_{L^2(X)} \|x^{s+1}u\|_{L^2(X)}\\
&\leq C_0 Cs\|x^s\nabla_g u\|_{L^2(X)} \|x^s\nabla_g u\|_{L^2(X)}
=C_0Cs\|x^s\nabla_g u\|_{L^2(X)}^2,
\end{split}\end{equation*}
and for $s$ sufficiently small this can be absorbed into
the left hand side of \eqref{eq:weighted-nabla}. Since
$\langle Vu,x^{2s}u\rangle\geq 0$, we deduce from
\eqref{eq:weighted-nabla} that there exists $s_0>0$ such
that for $0\leq s\leq s_0$,
\begin{equation}\label{eq:weighted-nabla-result}
\|x^s\nabla_g u\|^2_{L^2(X)}
\leq C\|(\Delta_g+V) u\|_{L^2(X)} \|\nabla_g u\|^{2s}_{L^2(X)}
\|u\|^{1-2s}_{L^2(X)};
\end{equation}
indeed this holds even with $\langle Vu,x^{2s}u\rangle$ added
to the left hand side. Although we had assumed
$u\in\dCI(X)$, by density and continuity, the estimate
holds for $u\in H^2_{\scl}(X)$, i.e.\ for $u$ in the domain of
$\Delta_g+V$.
Using the Poincar\'e inequality, Proposition~\ref{prop:sharp}, we
deduce
\footnote{If $n\geq 5$, one can use Lemma~\ref{lemma:Vb-est} (or its analogue
if $V\geq 0$) to
obtain an estimate that slightly shortens some of the arguments that follow;
one then
needs to rely on the lemma, i.e.\ on b-machinery.
Namely, by Lemma~\ref{lemma:Vb-est}, if $n \geq 5,$ then for $Q_i\in\Vb(X)$,
\begin{equation}\label{eq:b-estimates}
\|xQ_i u\|_{L^2(X)}\leq C\|\Delta_g u\|_{L^2(X)},
\ \|x^2 u\|_{L^2(X)}\leq C\|\Delta_g u\|_{L^2(X)}.
\end{equation}
On the other hand,
\begin{equation}\label{eq:sc-estimate}
\|Q_i u\|_{L^2(X)}\leq C\|\nabla_g u\|_{L^2(X)}.
\end{equation}
Interpolating between the first inequality of \eqref{eq:b-estimates}
and \eqref{eq:sc-estimate} gives for $n\geq 5$
\begin{equation}\label{eq:sc-b-interpolate}
\|x^s Q_i u\|_{L^2(X)}\leq C\|\nabla_g u\|^{1-s}_{L^2(X)}
\|\Delta_g u\|^s_{L^2(X)},\ 0\leq s\leq 1.
\end{equation}}:
\begin{prop}\label{prop:weighted-b-estimate-weak}
There exists $s_0>0$ such that for $0\leq s\leq s_0$
\begin{equation}\begin{split}\label{eq:weighted-combined-result-weak}
&\|x^{s+1} u\|_{L^2(X)}+\|x^s\nabla_g u\|_{L^2(X)}\\
&\qquad\leq C_s\|(\Delta_g+V) u\|_{L^2(X)}^{1/2} \|\nabla_g u\|^{s}_{L^2(X)}
\|u\|^{1/2-s}_{L^2(X)},\ u\in H^2_{\scl}(X).
\end{split}\end{equation}
In particular, for $L\in S^{-1-s}\Diffb^1(X)$,
\begin{equation}\label{eq:weighted-b-result-weak}
\|L u\|_{L^2(X)}
\leq C_s\|(\Delta_g+V) u\|_{L^2(X)}^{1/2} \|\nabla_g u\|^{s}_{L^2(X)}
\|u\|^{1/2-s}_{L^2(X)},\ u\in H^2_{\scl}(X).
\end{equation}
\end{prop}
Since any $L\in S^{-2-2s}\Diffb^2(X)$ can be rewritten as
$L=\sum Q_i^*R_i$, $Q_i,R_i\in S^{-1-s}\Diffb^1(X)$, with the sum
finite, we immediately deduce
\begin{cor}\label{cor:weighted-b-pairing-weak}
Let $s_0>0$ be as in Proposition~\ref{prop:weighted-b-estimate-weak}.
For $0\leq s\leq s_0$, $L\in S^{-2-2s}\Diffb^2(X)$,
\begin{equation}\label{eq:weighted-pairing-estimate-weak}
|\langle Lu,u\rangle|\leq
C_s\|(\Delta_g+V) u\|_{L^2(X)} \|\nabla_g u\|^{2s}_{L^2(X)}
\|u\|^{1-2s}_{L^2(X)},\ u\in H^2_{\scl}(X).
\end{equation}
\end{cor}
In fact we can improve upon these results by allowing the
full range $0\leq s<(n-2)/2$
as follows. Rather than working
with $\langle x^{2s}(\Delta_g+V)u,u\rangle$, and rewriting it
in terms of $\|x^s\nabla_g u\|^2_{L^2(X)}$ plus a commutator, we
work with a symmetric expression:
$$
\langle f(\Delta_g+V)u,u\rangle+\langle u,f(\Delta_g+V)u\rangle
$$
for some $f$ which behaves like $x^{2s}$ for small $x$. First we compute
$$
f(\Delta_g+V)+(\Delta_g+V)f=2\nabla_g^* f\nabla_g+((\Delta_g+2V)f),
$$
where the last term on the right hand side is multiplication by
the function $(\Delta_g+2V)f$,
which can be seen by observing that both sides are real self-adjoint
second order scalar differential operators with the same principal symbol, so
their difference is first order, hence by reality and self-adjointness
zeroth order, and it vanishes on the constant function
$1$. Now if $f\geq 0$ then $Vf\geq 0$,
so all terms on the right hand side are positive provided $\Delta_g f\geq 0$,
and we have
\begin{equation*}\begin{split}
&2\langle f\nabla_g u,\nabla_g u\rangle +\langle (\Delta_g f)u,u\rangle
+\langle 2Vf u,u\rangle\\
&\qquad=\langle f(\Delta_g+V)u,u\rangle+\langle u,f(\Delta_g+V)u\rangle,
\end{split}\end{equation*}
hence
\begin{equation}\label{eq:real-part-weighted-est}
\|f^{1/2}\nabla_g u\|^2\leq \|(\Delta_g+V)u\|\,\|fu\|.
\end{equation}
It remains to find $f\geq 0$ such that $\Delta_g f\geq 0$; we remind the
reader that this is the {\em positive} Laplacian.
With $t_0>0$ to be fixed, we consider
\begin{equation*}\begin{split}
&\chi(t)=e^{1/(t-t_0)},\ t<t_0,\\
&\chi(t)=0,\ t\geq t_0,
\end{split}\end{equation*}
and define $f$ by
\begin{equation}\label{eq:weight-def}
\begin{aligned}
f(p)&=\mathsf{g}(p)^{2s},\ \text{where}\\ \mathsf{g}(p)&=\chi(0)-\chi(x(p)/\ep),\ p\in X,
\end{aligned}
\end{equation}
where $\ep>0$. If $\ep>0$ is sufficiently small, $df$ is supported
in a collar neighborhood of $\pa X$ in which we can take $x$ as one of the
coordinates and $g$ is of the form \eqref{eq:asymp-Eucl} with $g_1$
as in \eqref{eq:weaker-asymp}. Moreover, $\mathsf{g}\geq 0$ (hence $f\geq 0$),
$\mathsf{g}(0)=0$, and $\mathsf{g}'(0)>0$, hence $f\sim x^{2s}$ for $x$ near $0$.
As usual, we abuse notation and
write $f=f(x)$. Recall that
$$
\Delta_g =x^2\Delta_{\bl},\ \Delta_{\bl}=-(x\pa_x)^2+(n-2)(x\pa_x)+ \Delta_0
+x^\rho R,
\ R\in S^0\Diffb^2(X),
$$
and $R$ annihilates constants.
We then compute, for $x/\ep<t_0$ (since $df=0$ for $x/\ep\geq t_0$),
i.e.\ with $t=x/\ep$ for $0\leq t<t_0$, writing $f(p)=\mathsf{g}(p)^{2s}$, and primes
denoting derivatives in $t$,
\begin{equation*}\begin{split}
&\big(-x^2\pa_x^2+(n-3)x\pa_x\big)f=\big(-t^2\pa_t^2+(n-3)t\pa_t\big)\mathsf{g}^{2s}\\
&\qquad
=2s \mathsf{g}^{2s-2}\Big(-(2s-1)t^2 (\mathsf{g}')^2+(n-3)t\mathsf{g}\g'-t^2\mathsf{g} \mathsf{g}''\Big).
\end{split}\end{equation*}
Now, for $0\leq t<t_0$,
\begin{equation*}\begin{split}
&\mathsf{g}'=(t-t_0)^{-2} e^{1/(t-t_0)}>0,\\
&\mathsf{g}''=(t-t_0)^{-4}\big(-1-2(t-t_0)\big)e^{1/(t-t_0)}.
\end{split}\end{equation*}
We deduce that for $t_0<1/2$, $\mathsf{g}''< 0$ (on $[0,t_0)$).
Thus, for $n\geq 3$, $0<s<1/2$,
\begin{equation*}\begin{split}
&\big(-(x\pa_x)^2+(n-2)x\pa_x+\Delta_0\big)f\\
&\qquad=2s \mathsf{g}^{2s-2}\Big(-(2s-1)t^2 (\mathsf{g}')^2+(n-3)t\mathsf{g}\g'-t^2\mathsf{g} \mathsf{g}''\Big)\geq 0,
\end{split}\end{equation*}
i.e.\ the `model Laplacian' applied to $f$ is non-negative provided $0<s\leq 1/2.$
If $n=3$, we have thus obtained non-negativity of the model Laplacian
applied to $f$ for
the whole range $0<s<(n-2)/2$. In general, however, if $n>3$ and
$s\geq 1/2$, we need to estimate $t\mathsf{g}'$ relative to $\mathsf{g}$. An estimate
$t\mathsf{g}'\leq C\mathsf{g}$ is automatic for sufficiently large $C>0$, as it is
easily checked at $0$, and $\mathsf{g}$ is bounded away from $0$ elsewhere.
However, we need a sharp constant, so we proceed as follows. A
straightforward calculation gives
$$
t\mathsf{g}'-\mathsf{g}=\Big(\frac{t}{(t-t_0)^2}+1\Big)e^{1/(t-t_0)}-e^{-1/t_0},
$$
so $t\mathsf{g}'-\mathsf{g}$ vanishes at $t=0$ and it is decreasing, as its derivative is
$$
\frac{t}{(t-t_0)^4}\big(-1-2(t-t_0)\big)e^{1/(t-t_0)}\leq 0,\ 0\leq t<t_0,
\ t_0<1/2,
$$
so $t\mathsf{g}'\leq \mathsf{g}$ on $[0,t_0)$.
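Alternatively, the monotonicity of $t\mathsf{g}'-\mathsf{g}$ can be seen without expanding the explicit formula:

```latex
\[
\frac{d}{dt}\big(t\,\mathsf{g}'-\mathsf{g}\big)
=\mathsf{g}'+t\,\mathsf{g}''-\mathsf{g}'
=t\,\mathsf{g}''\leq 0\quad\text{on }[0,t_0),\ t_0<1/2,
\]
% since \mathsf{g}''<0 there; combined with the vanishing of t\mathsf{g}'-\mathsf{g}
% at t=0, this again gives t\mathsf{g}'\le\mathsf{g} on [0,t_0).
```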
In summary, estimating $-(2s-1)t^2(\mathsf{g}')^2\geq-(2s-1)t\mathsf{g}'\mathsf{g}$ by means of
$t\mathsf{g}'\leq\mathsf{g}$,
\begin{equation*}\begin{split}
\big(-t^2\pa_t^2+(n-3)t\pa_t\big)\mathsf{g}^{2s}
\geq 2s \mathsf{g}^{2s-2}\Big((n-2s-2)t\mathsf{g}'\mathsf{g}-t^2\mathsf{g}\g''\Big)\geq 0,
\end{split}\end{equation*}
provided $1/2\leq s<(n-2)/2$, so
$$
\Big(-(x\pa_x)^2+(n-2)(x\pa_x)+ \Delta_0\Big)f\geq 0
$$
in this case.
We deduce that for any $0<s<(n-2)/2$ we have
$$
\Big(-(x\pa_x)^2+(n-2)(x\pa_x)+ \Delta_0\Big)f\geq 0,
$$
provided that we choose $0<t_0<1/2$, and indeed we have the
somewhat stronger estimate (useful for error terms below) that
for $c>0$ sufficiently small,
\begin{equation}\label{eq:positive-Delta-lb}
\Big(-(x\pa_x)^2+(n-2)(x\pa_x)+ \Delta_0\Big)f\geq
c\,\mathsf{g}^{2s-2}\Big(t^2(\mathsf{g}')^2-t^2\mathsf{g}\g''\Big),
\end{equation}
where both summands on the right hand side are non-negative, and where
we used $t\mathsf{g}'\leq \mathsf{g}$ in the case $s\geq 1/2$.
Note that this estimate is valid for any choice of $\ep>0$
provided it is sufficiently small (i.e.\ $\ep\leq\ep_1$, $\ep_1$ suitably
chosen) so that $df$ is supported in
the collar neighborhood of $\pa X$.
We can also deal with the error term $x^\rho R$ by taking $\ep>0$ small.
Namely, on the support of $Rf$, $x\leq\ep$, so $|x^\rho R f|\leq\ep^\rho |Rf|$,
so $\Delta_g f\geq 0$ follows provided
\begin{equation}\label{eq:Rf-needed-est}
|Rf|\leq C\Big(-(x\pa_x)^2+(n-2)(x\pa_x)\Big)f
\end{equation}
for some $C>0$.
But writing out $Rf$ explicitly in terms of $x\pa_x$ and $\pa_{y_j}$
in local coordinates (of which the latter annihilate $f$),
using that $R$ annihilates constants, we conclude that for
$C'>0$ sufficiently large
$Rf$ is bounded by
$$
C' \mathsf{g}^{2s-2}(t^2(\mathsf{g}')^2+t\mathsf{g}\g'-t^2 \mathsf{g}\g''),
$$
where we note that all terms in the parentheses are non-negative and
$C'$ is independent of $\ep\in (0,\ep_1]$. We now note that sufficiently
close to $0$, $t\mathsf{g}\g'$ can be absorbed into $t^2(\mathsf{g}')^2$ (uniformly in $\ep$)
for both are quadratic in $t$, and the latter is non-degenerate, while
outside any neighborhood of $0$, $t\mathsf{g}\g'$ can be absorbed in $-t^2\mathsf{g}\g''$,
i.e.\ $\mathsf{g}'$ can be absorbed into $\mathsf{g}''$, as is easy to check.
Thus, for $C''>0$ sufficiently large, $Rf$ is bounded by
$$
C'' \mathsf{g}^{2s-2}(t^2(\mathsf{g}')^2-t^2 \mathsf{g}\g''),
$$
and this is bounded by
$C'''\Big(-(x\pa_x)^2+(n-2)(x\pa_x)+\Delta_0\Big)f$ for sufficiently large
$C'''>0$ by \eqref{eq:positive-Delta-lb}, i.e.\ \eqref{eq:Rf-needed-est}
holds. This proves that for
$\ep>0$ sufficiently small $\Delta_g f\geq 0$. In summary we have proved:
\begin{lemma}\label{lemma:positive-Laplacian}
Let $0<t_0<1/2$, $0<s<(n-2)/2$.
Then there exists $\ep_0>0$ such that for $0<\ep<\ep_0$,
with $f$ as in \eqref{eq:weight-def}, $\Delta_g f\geq 0$.
\end{lemma}
As immediate consequences of Lemma~\ref{lemma:positive-Laplacian} and
\eqref{eq:real-part-weighted-est} we deduce that
\begin{equation}\label{eq:real-part-weighted-est-mod}
\|x^s\nabla_g u\|^2\leq C_s\|(\Delta_g+V)u\|\,\|x^{2s}u\|,
\end{equation}
which yields, in view of the Poincar\'e inequality, for $0\leq s<1/2$,
\begin{equation}\label{eq:real-part-weighted-est-Poincare}
\|x^s\nabla_g u\|^2
\leq C'_s \|(\Delta_g+V)u\| \|\nabla_g u\|^{2s}_{L^2(X)}
\|u\|^{1-2s}_{L^2(X)}.
\end{equation}
Using the Poincar\'e inequality again, and applying \eqref{eq:nabla_g-V-est}
we therefore deduce the
following strengthening of Proposition~\ref{prop:weighted-b-estimate-weak}:
\begin{prop}\label{prop:weighted-b-estimate}
For $0\leq s<1/2$
\begin{equation}\begin{split}\label{eq:weighted-combined-result}
&\|x^{s+1} u\|_{L^2(X)}+\|x^s\nabla_g u\|_{L^2(X)}\\
&\qquad\leq C_s\|(\Delta_g+V) u\|_{L^2(X)}^{1/2} \|\nabla_g u\|^{s}_{L^2(X)}
\|u\|^{1/2-s}_{L^2(X)}\\
&\qquad\leq C_s\|(\Delta_g+V) u\|_{L^2(X)}^{(1+s)/2}
\|u\|^{(1-s)/2}_{L^2(X)},\ u\in H^2_{\scl}(X).
\end{split}\end{equation}
In particular, for $L\in S^{-1-s}\Diffb^1(X)$,
\begin{equation}\begin{split}\label{eq:weighted-b-result}
\|L u\|_{L^2(X)}
&\leq C_s\|(\Delta_g+V) u\|_{L^2(X)}^{1/2} \|\nabla_g u\|^{s}_{L^2(X)}
\|u\|^{1/2-s}_{L^2(X)}\\
&\leq C_s\|(\Delta_g+V) u\|_{L^2(X)}^{(1+s)/2}
\|u\|^{(1-s)/2}_{L^2(X)},\ u\in H^2_{\scl}(X).
\end{split}\end{equation}
\end{prop}
Using again that any $L\in S^{-2-2s}\Diffb^2(X)$ can be rewritten as
$L=\sum Q_i^*R_i$, $Q_i,R_i\in S^{-1-s}\Diffb^1(X)$, with the sum
finite, we conclude
\begin{cor}\label{cor:weighted-b-pairing}
For $0\leq s<1/2$, $L\in S^{-2-2s}\Diffb^2(X)$,
\begin{equation}\begin{split}\label{eq:weighted-pairing-estimate}
|\langle Lu,u\rangle|&\leq
C_s\|(\Delta_g+V) u\|_{L^2(X)} \|\nabla_g u\|^{2s}_{L^2(X)}
\|u\|^{1-2s}_{L^2(X)}\\
&\leq
C_s\|(\Delta_g+V) u\|_{L^2(X)}^{1+s}\|u\|^{1-s}_{L^2(X)},\ u\in H^2_{\scl}(X).
\end{split}\end{equation}
\end{cor}
Now, suppose that $u=\psi(H^2(\Delta_g+V))v$, $v\in L^2(X)$,
where $\psi\in L^\infty_c(I)$,
$I\subset (0,\infty)$ compact, $0\leq\psi\leq 1$, $H>0$. Then
$u\in\Hsc^2(X)$ and
$$
C'_I\|u\|_{L^2(X)}\leq\|H^2(\Delta_g+V) u\|_{L^2(X)}\leq C_I\|u\|_{L^2(X)}
$$
and
$$
C'_I\|u\|^2_{L^2(X)}\leq\langle H^2(\Delta_g+V) u,u\rangle\leq C_I\|u\|^2_{L^2(X)}.
$$
Combining these with Corollary~\ref{cor:weighted-b-pairing}
we deduce (taking $s=\sigma/2$ in the corollary) that
for $L\in S^{-2-\sigma}\Diffb^2(X)$ with $0\leq\sigma<1$,
\begin{equation}\label{eq:localized-weighted-b-est}
|\langle Lu,u\rangle|\leq C' C_I^{1+\sigma/2} H^{-2-\sigma}\|u\|^2_{L^2(X)}.
\end{equation}
Note that $|\langle Vu,u\rangle|$ satisfies the same estimate as
$|\langle Lu,u\rangle|$. If $\sigma>0$ this gives a gain of $H^{-\sigma}$
over e.g.\ $\langle(\Delta+V)u,u\rangle$ as $H\to\infty$; ultimately,
this gain arose due to
the Poincar\'e estimate in \eqref{eq:weighted-nabla-calc-2}.
We also remark that \eqref{eq:weighted-b-result} yields for $L\in S^{-1-s}\Diffb^1(X)$, $0\leq s<1/2$,
\begin{equation}\label{eq:localized-weighted-b-est-2}
\|L u\|_{L^2(X)}
\leq C C_I^{(1+s)/2} H^{-1-s}\|u\|_{L^2(X)}.
\end{equation}
The estimates
\eqref{eq:localized-weighted-b-est}--\eqref{eq:localized-weighted-b-est-2}
are analogues of Lemma~B.12 of \cite{Bony-Haefner1}, with $\lambda=H^2$
in their notation: one can trade powers of $x$ for negative powers of $H$
(within limits),
i.e.\ in the notation of \cite{Bony-Haefner1}, one can trade
negative powers of $\langle x\rangle$ for powers of $\lambda^{-1/2}$
(see the exponent $\gamma$ in \cite{Bony-Haefner1}).
Below we actually need a somewhat stronger result, using the resolvent in
place of the compactly supported functions of $P=\Delta_g+V$.
Thus, for $L\in S^{-1-s}\Diffb^1(X)$, $u\in L^2(X)$, replacing
$u$ by $(\Delta_g+V-z)^{-1}u$, and using
$\|(\Delta_g+V-z)^{-1}\|_{\cL(L^2(X))}\leq |\im z|^{-1}$ (for $\im z\neq 0$),
we deduce that
\begin{equation}\begin{split}\label{eq:res-weight-gain}
\|&L (\Delta+V-z)^{-1}u\|_{L^2(X)}\\
&\leq C\|(\Id+z(\Delta_g+V-z)^{-1})u\|_{L^2(X)}^{(1+s)/2}
\|(\Delta_g+V-z)^{-1}u\|^{(1-s)/2}_{L^2(X)}\\
&\leq C(1+|z|/|\im z|)^{(1+s)/2}\|u\|_{L^2(X)}^{(1+s)/2}
\,|\im z|^{-(1-s)/2}\|u\|^{(1-s)/2}_{L^2(X)}\\
&\leq 2C(|z|/|\im z|)^{(1+s)/2}|\im z|^{-(1-s)/2}\|u\|_{L^2(X)}.
\end{split}\end{equation}
In addition, using the positivity of $\Delta_g+V$, we have for $z$ with
$\re z<0$,
$$
\|(\Delta_g+V-z)^{-1}\|_{\cL(L^2(X))}\leq |z|^{-1},
$$
so
in fact
\begin{equation}\label{eq:res-weight-gain-neg}
\|L (\Delta+V-z)^{-1}u\|_{L^2(X)}\leq 2C|z|^{-(1-s)/2}\|u\|_{L^2(X)},\ \re z<0.
\end{equation}
Replacing $z$ by $z=w/H^2$, we deduce the following:
\begin{prop}
Suppose $L\in S^{-1-s}\Diffb^1(X)$, $0\leq s<1/2$. Then there exists
$C>0$ such that for all $u\in L^2(X)$ we have
\begin{equation}\begin{split}\label{eq:res-weight-gain-H}
&\|L (H^2(\Delta+V)-w)^{-1}u\|_{L^2(X)}\\
&\qquad\qquad\qquad
\leq 2CH^{-1-s}(|w|/|\im w|)^{(1+s)/2}|\im w|^{-(1-s)/2}\|u\|_{L^2(X)},
\ \im w\neq 0,\\
&\|L (H^2(\Delta+V)-w)^{-1}u\|_{L^2(X)}\leq 2CH^{-1-s}
|w|^{-(1-s)/2}\|u\|_{L^2(X)},\ \re w<0.
\end{split}\end{equation}
\end{prop}
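Indeed, with $z=w/H^2$ one has $(\Delta_g+V-z)^{-1}=H^2(H^2(\Delta_g+V)-w)^{-1}$, while $|z|/|\im z|=|w|/|\im w|$ and $|\im z|^{-(1-s)/2}=H^{1-s}|\im w|^{-(1-s)/2}$; inserting these into \eqref{eq:res-weight-gain} gives
$$
\|L (H^2(\Delta+V)-w)^{-1}u\|_{L^2(X)}\leq H^{-2}\cdot 2C(|w|/|\im w|)^{(1+s)/2}H^{1-s}|\im w|^{-(1-s)/2}\|u\|_{L^2(X)},
$$
which is the first estimate in \eqref{eq:res-weight-gain-H}; the second follows in the same way from \eqref{eq:res-weight-gain-neg}, using $|z|^{-(1-s)/2}=H^{1-s}|w|^{-(1-s)/2}$.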
In particular, this gives
uniform bounds on $L(H^2(\Delta+V)-w)^{-1}$ in $\cL(L^2(X))$.
\section{Low frequency Mourre estimate}\label{sec:Mourre}
We now prove the low frequency Mourre estimate.
Let $\phi\in\CI_c(X)$ be chosen as above, i.e.\ let it be identically $1$ near $\pa X$, supported
in a collar neighborhood of $\pa X$, on which $x D_x$ is thus defined,
and let
$$
A=-\frac{1}{2}\big((\phi xD_x)+(\phi xD_x)^*\big).
$$
Since (cf.\ \eqref{eq:Delta-b-form})
$$
\Delta_g=\sum Q_i^* G_{ij} Q_j
=(x^2D_x)^*(x^2 D_x)+x^2 d_{\pa X}^*d_{\pa X}+x^{2+\rho}R,
$$
where
$$
Q_i\in\Vsc(X),\ G_{ij}\in S^0(X),\ R\in S^0\Diffb^2(X),
$$
we have
\begin{equation}\begin{split}\label{eq:comm-calc}
&[\Delta_g+V,A]=-2i(\Delta_g+L),\ L\in S^{-2-\rho}\Diffb^2(X),
\end{split}\end{equation}
at first as a quadratic form on $\dCI(X)$, but then noting that the
right hand side extends (by density) to a continuous map from
$H^2_{\scl}(X)$ to $L^2(X)$.
For $u=\psi(H^2(\Delta_g+V))v$, $v\in L^2(X)$, we
now use Corollary~\ref{cor:weighted-b-pairing}.
Thus, without loss of generality taking $\rho<1,$
\eqref{eq:localized-weighted-b-est} gives
\begin{equation*}
|\langle Lu,u\rangle|\leq C' C_I^{1+\rho/2} H^{-2-\rho}\|u\|^2_{L^2(X)}.
\end{equation*}
Note that $|\langle Vu,u\rangle|$ satisfies the same estimate as
$|\langle Lu,u\rangle|$.
In summary,
\begin{equation*}\begin{split}
&\langle \frac{i}{2}[\Delta_g+V,A]u,u\rangle
=\Big\langle \Big(\Delta_g+V+L-V\Big)u,u\Big\rangle\\
&\qquad\geq \langle(\Delta_g+V) u,u\rangle
-CH^{-2-\rho}\|u\|_{L^2(X)}^2
=H^{-2}\langle (H^2(\Delta_g+V)-CH^{-\rho})u,u\rangle.
\end{split}\end{equation*}
We thus deduce that there exist $H_0>0$ and $C'>0$ such that for $H>H_0$,
\begin{equation*}
\langle \frac{i}{2}[H^2(\Delta_g+V),A]u,u\rangle
\geq C'\|u\|^2,\ u=\psi(H^2(\Delta_g+V))v.
\end{equation*}
Taking now $\psi=\chi_I$, the characteristic function of $I$, we deduce the
following:
\begin{thm}
Suppose $n\geq 3$, $g$ is a scattering metric in the sense
of \eqref{eq:asymp-Eucl} with $g_1$ satisfying \eqref{eq:weaker-asymp},
$V\in S^{-2-\rho}(X)$, $\rho>0$, $V\geq 0$, $P=\Delta_g+V$.
Let $I\subset(0,\infty)$ be a compact interval, and $\chi_I$ the
characteristic function of $I$. Then there exist
$H_0>0$ and $C>0$ such that for
$H>H_0$,
\begin{equation*}
\chi_I(H^2 P)\frac{i}{2}[H^2 P,A]\chi_I(H^2 P)
\geq C\chi_I(H^2 P).
\end{equation*}
In particular for $\psi\in\CI((0,\infty))$,
\begin{equation}\label{eq:weight-factor-Mourre}
\psi(H^2 P)\chi_I(H^2 P)\frac{i}{2}[H^2 P,A]
\chi_I(H^2 P)\psi(H^2 P)
\geq C(\inf_I\psi)^2\chi_I(H^2 P).
\end{equation}
\end{thm}
\begin{rem}
The commutator is defined here as a quadratic form on $\dCI(X)$, which
extends to $H_{\scl}^2(X)$ continuously.
If $\psi\in\CI_c(I)$ then for $v\in\dCI(X)$ one has $u\in\dCI(X)$ by
the functional calculus in the algebra of scattering pseudodifferential
operators -- the main point here is that the decay properties are preserved,
see \cite[Theorem~11]{Hassell-Vasy:Symbolic}.
(One can also obtain this decay without using the full ps.d.o.\ algebra,
working with the Helffer-Sj\"ostrand formula and commutators directly,
if one so desires.) Thus, for such $v$ and $\psi$, one can expand the
commutator and manipulate it directly, which is important in applications.
\end{rem}
This at once implies the corresponding estimate with $H^2 P$
replaced by $H\sqrt{ P}$, which is the main content of
\cite[Proposition~3.1]{Bony-Haefner1} when $X=\RR^n$ equipped with a
metric asymptotic to the standard Euclidean metric. In order to do
this recall that $\Hsc^{m,l}(X)=x^l\Hsc^m(X)$ is the scattering
Sobolev space of Melrose \cite{RBMSpec}, which for $X$ the radial
compactification of $\RR^n$ is just the standard weighted Sobolev
space $H^{m,l}(\RR^n)$, and one has the high energy estimate that
$(P+\lambda)^{-1}:\Hsc^{m,l}(X)\to\Hsc^{m,l}(X)$, $P=\Delta_g+V$, is bounded by
$C\lambda^{-1}$ for $\lambda>1$, by the semiclassical scattering
calculus; this is of course very easy to see for $l=0$, which is what
we need below. (Recall that we are using the nonnegative Laplace
operator.) Now, one has by the functional calculus
$$
\sqrt{P}=\pi^{-1}\int_0^\infty \lambda^{-1/2}
P(P+\lambda)^{-1}\,d\lambda,
$$
so
$$
H\sqrt{P}=\pi^{-1}\int_0^\infty \lambda^{-1/2}
H^2P(H^2P+\lambda)^{-1}\,d\lambda;
$$
using the above observation,
the integral converges for any $m$ as a bounded operator in
$\cL(\Hsc^{m,0}(X),\Hsc^{m-2,0}(X))$.
We now evaluate the commutator $[H\sqrt{P},A]
:\Hsc^{m,1}(X)\to\Hsc^{m-3,-1}(X)$; the integral for the products
$H\sqrt{P}A$ and $A H\sqrt{P}$ converges in this sense.
As
$$
[H^2P(H^2P+\lambda)^{-1},A]
=\lambda (H^2P+\lambda)^{-1}[H^2P,A](H^2P+\lambda)^{-1}
$$
(an identity obtained by writing $H^2P(H^2P+\lambda)^{-1}=\Id-\lambda(H^2P+\lambda)^{-1}$ and expanding the commutator of the resolvent with $A$),
using $(t+\lambda)^{-1}\geq (\sup I+\lambda)^{-1}$ on $I$,
we deduce from \eqref{eq:weight-factor-Mourre} that for $H>H_0$,
\begin{equation}\begin{split}\label{sqrtcommutator}
&\chi_I(H^2P)[H\sqrt{P},A]\chi_I(H^2P)\\
&\quad=\pi^{-1}\int_0^\infty \lambda^{1/2}
(H^2P+\lambda)^{-1}
\chi_I(H^2P)[H^2P,A]\chi_I(H^2P)(H^2P+\lambda)^{-1}
\,d\lambda\\
&\quad\geq \pi^{-1}\int_0^\infty C\lambda^{1/2}(\sup I+\lambda)^{-2}
\chi_I(H^2P)^2\,d\lambda=C'\chi_I(H^2P)^2,\ C'>0,
\end{split}\end{equation}
on $\Hsc^{3,1}(X)$, hence by density of $\Hsc^{3,1}(X)$ and continuity
of both sides on $L^2(X)$, on $L^2(X)$.
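The constant $C'$ is explicit here: the substitution $\lambda=(\sup I)\,u^2$ gives
$$
\int_0^\infty \lambda^{1/2}(\sup I+\lambda)^{-2}\,d\lambda=\frac{\pi}{2\sqrt{\sup I}},
$$
so one may take $C'=C/(2\sqrt{\sup I})$.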
Thus, the analogue of the
low energy Mourre estimate of Bony and H\"afner in this more general
setting follows immediately.
\begin{thm}
Suppose $n\geq 3$, $g$ is a scattering metric in the sense
of \eqref{eq:asymp-Eucl} with $g_1$ satisfying \eqref{eq:weaker-asymp},
$V\in S^{-2-\rho}(X)$, $\rho>0$, $V\geq 0$, $P=\Delta_g+V$.
Let $I\subset(0,\infty)$ be a compact interval, and $\chi_I$ the
characteristic function of $I$. Then there exist
$H_0>0$ and $C>0$ such that for
$H>H_0$,
\begin{equation*}
\chi_I(H^2P)\frac{i}{2}[H\sqrt{P},A]\chi_I(H^2P)
\geq C\chi_I(H^2P).
\end{equation*}
\end{thm}
\section{Energy decay for the wave equation}\label{sec:wave}
If the metric on $X$ is additionally assumed to be non-trapping, we
have a finite- and high-energy Mourre estimate due to Vasy-Zworski
\cite{Vasy-Zworski} (or can re-use the construction employed in
\cite{Bony-Haefner1}). Putting these ingredients together as in
Theorem~1.3 of \cite{Bony-Haefner1} we obtain by the same means the
analogous energy decay result for solutions to the wave equation; for
brevity, we confine our discussion of these results to the case of
(unperturbed) scattering metrics, i.e.\ those given by
\eqref{eq:asymp-Eucl} near infinity.
\begin{thm}
Let $(X,g)$ be a scattering manifold having no trapped geodesics, and let $V \in
S^{-3}(X)$ be a nonnegative potential. If
$$
\big(D_t^2-(\Delta_g +V)\big)u=0
$$
on $\RR\times X,$ then for all $\ep>0$ and $\mu \in (0,1],$
$$
\norm{x^\mu u'}_{L^2([0,T] \times X)} \lesssim \ang{F_\mu^\ep(T)}^{1/2}
\norm{u'(0,\cdot)}_{L^2(X)}
$$
where $u'=(\pa_t u, \nabla_g u)$ and
$$
F_\mu^\ep(T)= \begin{cases} T^{1-2\mu-2\ep}, & \mu\leq 1/2, \\ 1, & \mu>1/2.\end{cases}
$$
\end{thm}
\begin{proof}
As indicated above, the relevant medium and high energy estimates are well
known in this setting, and it will suffice, following the strategy of
\cite{Bony-Haefner1}, to demonstrate that the low energy commutant
that we have constructed above satisfies all of the hypotheses of the
Mourre theory. As discussed in Proposition~3.1 of
\cite{Bony-Haefner1}, it remains for us to verify, in our notation,
the following estimates on the operator
$$
{\mathcal{A}}_H \equiv \psi(H^2P) A\psi(H^2P):
$$
\begin{align}
\label{ad1} \norm{[{\mathcal{A}}_H, H P^{1/2}]} &\lesssim 1,\\
\label{ad2}\norm{\big[{\mathcal{A}}_H,[{\mathcal{A}}_H,
H P^{1/2}]\big]} &\lesssim 1,\\
\label{mourre1} \norm{\abs{{\mathcal{A}}_H}^\mu x^\mu} &\lesssim
H^{-\mu}, \quad \mu \in [0,1]\\
\label{mourre2} \norm{\ang{{\mathcal{A}}_H}^\mu\psi(H^2P) x^\mu} &\lesssim H^{-\mu},\quad \mu \in [0,1].
\end{align}
We begin by proving a lemma allowing us to commute powers of $x$ with
spectral projections:
\begin{lemma}\label{lemma:conjugateprojector}
Let $L \in x \Diff^1_b(X).$
The operators
$$
x^{-1} \psi(H^2 P) L
\text{ and } L \psi(H^2 P) x^{-1}
$$
are uniformly $L^2$-bounded as $H \uparrow \infty.$
\end{lemma}
\noindent\emph{Proof of lemma:}
As the two types of operator in question are adjoints of one another, it
suffices to consider the latter. Moreover, for the desired boundedness
it suffices to estimate $L [\psi(H^2 P), x^{-1}].$
Let $\tilde{\psi}$ be a compactly supported
almost-analytic extension of $\psi$, and let $R(z)$ denote the
resolvent
$$
R(z)=(H^2P-z)^{-1}.
$$
We have
\begin{align*}
L [\psi(H^2 P), x^{-1}]&= \frac 1{2\pi} \int_{\CC} \overline{\pa} \tilde{\psi}(z)
L [R(z), x^{-1}]\, dz d\overline{z}\\
&= -\frac{H^2}{2\pi} \int_{\CC} \overline{\pa} \tilde{\psi}(z)
L R(z) [P, x^{-1}] R(z) \, dz d\overline{z}.
\end{align*}
As $[P,x^{-1}] =Q\in x\Diff_b^1(X)$, \eqref{eq:res-weight-gain-H} gives
(with $s=0$)
that
\begin{equation*}\begin{split}
&\|LR(z)\|_{\cL(L^2(X))}\leq CH^{-1}|\im z|^{-1} |z|^{1/2},\\
&\|QR(z)\|_{\cL(L^2(X))}\leq CH^{-1} |\im z|^{-1} |z|^{1/2},
\end{split}\end{equation*}
so we can estimate the integral by a multiple of
$$
H^2\int_{\CC} \big\lvert \overline{\pa} \tilde{\psi}(z) \big \rvert H^{-2} \abs{\Im z}^{-2}|z|\, dz d\overline{z},
$$
which is finite, and uniformly bounded as $H\uparrow\infty$, since $\tilde{\psi}$ is almost analytic, so that $\big\lvert \overline{\pa} \tilde{\psi}(z) \big\rvert\leq C_2\abs{\Im z}^{2}$ on its compact support.
\emph{This concludes the proof of the lemma.}

We now sketch the proofs of \eqref{ad1}--\eqref{mourre2}. The
estimate \eqref{ad1} follows from \eqref{sqrtcommutator}, as we may
again write
\begin{equation}\begin{split}\label{sqrtcommutator2}
&\psi(H^2P)[H\sqrt{P},A]\psi(H^2P)\\
&\quad=\pi^{-1}\int_0^\infty \lambda^{1/2}
R(\lambda)
\psi(H^2P)[H^2P,A]\psi(H^2P)R(\lambda)
\,d\lambda.
\end{split}\end{equation}
By anti-self-adjointness of the commutator, it suffices to estimate
the norm of
\begin{equation*}\begin{split}
&\ang{[{\mathcal{A}}_H, H P^{1/2}] u,u}\\
& = \pi^{-1}\int_0^\infty \lambda^{1/2}
\ang{R(\lambda)
\psi(H^2P)[H^2P,A]\psi(H^2P)R(\lambda)u,u}
\,d\lambda.
\end{split}\end{equation*}
Now as $[H^2 P,A] \in x^2 \Diff^2_b(X),$ we may rewrite this
pairing in the form
$$
H^2 \int_0^\infty \lambda^{1/2}
\ang{x M_1
\psi(H^2P)R(\lambda) u , x M_2
\psi(H^2P)R(\lambda) u}
\,d\lambda
$$
with $M_i \in \Diff^1_b(X).$
Applying \eqref{eq:localized-weighted-b-est-2} (in the `easy' case
$s=0$) yields \eqref{ad1}.
We can now prove \eqref{ad2} in the same manner (cf.\ Remark~3.5 in
\cite{Bony-Haefner1}): By \eqref{eq:comm-calc} we have
$$
[A,P] =2i P +M,
$$
where
$$
M \in S^{-3} \Diff^2_b(X).
$$
Thus, since $\psi(H^2P)$ commutes with $P$,
$$
[{\mathcal{A}}_H,P] =\psi(H^2P)(2i P +M)\psi(H^2P).
$$
Hence
\begin{equation}\begin{split}\label{sqrtcommutator3}
[{\mathcal{A}}_H, HP^{1/2}]&=\psi(H^2P)[A,H\sqrt{P}]\psi(H^2P)\\
&=\pi^{-1}\int_0^\infty \lambda^{1/2}
R(\lambda)
\psi(H^2P)[A,H^2P]\psi(H^2P)R(\lambda)\, d\lambda\\
&=\pi^{-1}\int_0^\infty \lambda^{1/2}
R(\lambda)
\psi(H^2P)(2iP+M)\psi(H^2P)R(\lambda)\, d\lambda\\
&=2i\psi(H^2P)^2 H P^{1/2}+
\pi^{-1}\int_0^\infty \lambda^{1/2}
R(\lambda)
\psi(H^2P) M \psi(H^2P)R(\lambda)\, d\lambda\\
&=2i\psi(H^2P)^2 H P^{1/2}+ {\mathcal{B}}.
\end{split}\end{equation}
Hence to estimate
$$
[{\mathcal{A}}_H,[{\mathcal{A}}_H, HP^{1/2}]],
$$
by \eqref{ad1}, it suffices to estimate
$$
[{\mathcal{A}}_H, {\mathcal{B}}];
$$
to do this, noting that ${\mathcal{A}}_H$ is self-adjoint, and ${\mathcal{B}}$ is
anti-self-adjoint, we see that it suffices to obtain boundedness of
$$
{\mathcal{A}}_H {\mathcal{B}} = ({\mathcal{A}}_H x) (x^{-1} {\mathcal{B}}).
$$
Now application of Lemma~\ref{lemma:conjugateprojector} shows that
$$
{\mathcal{A}}_H x = (\psi(H^2 P) A x) (x^{-1}\psi(H^2P) x)
$$
is uniformly bounded. Likewise, $x^{-1} {\mathcal{B}}$ is bounded by similar
considerations: we write the integrand for $x^{-1} {\mathcal{B}}$ as
\begin{equation*}\begin{split}
&\lambda^{1/2} x^{-1} R(\lambda) \psi(H^2P) M \psi(H^2P)R(\lambda)\\
&\qquad=\sum_j
\lambda^{1/2}\big( x^{-1}
R(\lambda)\psi(H^2P) (xL_{j1})\big) \big( L_{j2} \psi(H^2P)R(\lambda)\big),
\end{split}\end{equation*}
where $M= \sum_j xL_{j1} L_{j2},$ $L_{ji} \in x\Diff^1_b(X).$
By
\eqref{eq:localized-weighted-b-est-2} the last factor is norm bounded by a
multiple of $H^{-1} (c+\lambda)^{-1},$ $c =\inf \supp \psi>0.$
Commuting the factor of $x^{-1}$ across both $R(\lambda)$ and
$\psi(H^2 P)$ yields an operator bounded by a multiple of $H
\lambda^{-1}$ by \eqref{eq:localized-weighted-b-est-2}, while the
commutator terms involved in doing this have the same bound by
Lemma~\ref{lemma:conjugateprojector} and the observation that
$$
[x^{-1}, R(\lambda)] \psi(H^2P) = -R(\lambda) Q R(\lambda) \psi(H^2P)
$$
with $Q \in x \Diff^1_b(X);$ this expression is bounded by a multiple of $H^{-1}
(c+\lambda)^{-1}$ by \eqref{eq:res-weight-gain-H}. We thus obtain \eqref{ad2}.
To prove \eqref{mourre1}, it suffices by interpolation to prove
uniform boundedness as $H\uparrow \infty$ of
$$
\norm{H\psi(H^2P) A \psi(H^2P) x}_{\cL(L^2(X))};
$$
as above, this follows from Lemma~\ref{lemma:conjugateprojector}.
Likewise, \eqref{mourre2} follows by interpolation from the $\mu=1$ estimate, for which it suffices to bound
$$
\norm{H\psi(H^2P) A \psi(H^2P) x}_{\cL(L^2(X))}^2 +\norm{\psi(H^2P) x}_{\cL(L^2(X))}^2.
$$
\end{proof}
\section{Introduction}
In this paper we consider some classes of semilinear systems of parabolic equations coupled in zero order terms.
We are interested in controllability of such systems to stationary solutions by only one scalar control distributed in a subdomain and acting in only one of the equations.
The study of controlled systems of parabolic equations needs appropriate observability estimates for the adjoint system. These observability estimates are usually derived from global Carleman estimates.
Global Carleman estimates are by now a classical tool in proving observability inequalities, and they were established in the context of controllability for parabolic equations by O.Yu.Imanuvilov (see O.Yu.Imanuvilov and A.Fursikov \cite{furima1996}). Since then this type of estimates was extensively developed, refined and used in other contexts, like control problems with small number of controls, stabilization or inverse problems.
Controllability for parabolic systems with a reduced number of controls needs observability estimates of Carleman type with partial observations. There is an extensive literature concerning such problems; for a selection of titles we refer for example to \cite{lissy_zuazua} and the references therein.
In the case of zero order couplings with constant or time dependent coupling coefficients there exists a particular interest in obtaining algebraic conditions of Kalman type for controllability; in this direction we cite the papers of F.Ammar-Khodja, A.Benabdallah, C.Dupaix and M.Gonz\'ales-Burgos \cite{khodja_burgos1, khodja_burgos2} or the work of F.Ammar-Khodja, F.Chouly and M. Duprez \cite{ammar_chouly}.
Observability estimates for linear systems (not only parabolic) coupled with constant coupling coefficients in the dominant part and/or in the zero order terms were established by E.Zuazua and P.Lissy \cite{lissy_zuazua}; such estimates are obtained under Kalman rank conditions satisfied by the pair of the coupling and control matrices.
The results we present in our paper extend the results in \cite{teresa_burgos} where systems of parabolic equations with cascade type couplings in zero order terms are considered. The extension we propose works under hypotheses addressing two aspects of the systems under consideration: one is the structure of the
couplings, which describes in our case either a star or a tree type graph; the second aspect refers to the support of the coupling functions or, in the linear case, to the support of the coupling coefficients.
The strategy for proving the controllability result relies on the linearization of the nonlinear system around a stationary state.
The key step is
obtaining the null controllability for this linear system by using an observability inequality for the adjoint system. This observability inequality is a consequence of an appropriate global Carleman estimate, which is in turn obtained by combining Carleman estimates for each of the equations, relying on different auxiliary functions, in a particular order relation made possible by the special structure of the system. The idea of using different auxiliary functions in Carleman estimates is inspired by the work of G.Olive \cite{olive1} concerning controllability of parabolic systems with controls acting in different subdomains.
The Carleman observability estimates we establish are more elaborate and are not direct consequences of the classical Carleman estimates. One reason for developing these Carleman estimates is that, when trying to use the estimates from the paper of Luz de Teresa and M.Gonz\'ales-Burgos \cite{teresa_burgos} for cascade systems, we realized that, written for the branches of the tree, they do not fit well together. Moreover, when passing from star type couplings to general tree type couplings one needs two Carleman estimates for each equation in an interior node of the graph, and this is another rather technical point of our approach. The hypotheses concerning the supports of the coupling coefficients allow us to construct appropriate auxiliary functions and weights in the corresponding Carleman estimates which finally fit together and yield the desired global observability inequality.
Passing from the linearized system to the nonlinear system, one needs an $L^\infty$ framework for controllability. The main reason is that the Carleman estimates we obtain are sensitive to zero order perturbations of the system.
More regularity of the controls in the linearized problem is obtained as in the work of V.Barbu \cite{bar12002} (see also J.-M. Coron, S.Guerrero and L.Rosier \cite{corgueros2010}) by using regularizing properties of the parabolic flow in a bootstrap argument. This step encounters supplementary technical challenges as it needs $L^\infty-L^2$ Carleman estimates with different weights in corresponding estimates for different equations.
The $L^\infty$ controllability for the linearized system allows an approach to the controllability of the nonlinear system by a fixed point argument, based on the Kakutani fixed point theorem (see also \cite{corgueros2010} or \cite{bar2000}).
\section{Preliminaries and statement of the problem}
Let $\Omega\subset\mathbb{R}^N$ be a bounded connected domain with a $C^2$
boundary $\partial\Omega$ and let
$\omega_0\subset\subset\Omega.$
Let $T>0$, set $Q=(0,T)\times\Omega$ and, for $\omega\subset\Omega$, write $Q_\omega=(0,T)\times\omega$.
\medskip
We consider systems of $(n+1)$ parabolic equations coupled in zero order terms through nonlinear functions, with one internally distributed control, acting in $\omega_0$ and entering only the first equation. The main goal is obtaining local exact controllability to some stationary solution for the nonlinear system.
\medskip
In the first part of the paper we study systems of parabolic equations with star-like couplings, which refers to the situation where each $y_k$, $k\in\overline{1,n}$, is actuated in the corresponding parabolic equation through a nonlinearity depending only on $y_0,y_k$. Such a star-like coupled system has the form:
\begin{equation}\label{nonlinsystem}
\left\lbrace
\begin{array}{ll}
D_ty_{0}-\Delta y_0=\overline g_0(x)+f_0(x,y_0)+\chi_{\omega_0}u, & \text{ in }(0,T)\times\Omega,\\
D_ty_{i}-\Delta y_i=\overline g_i(x)+f_i(x,y_0,y_i),\, i\in\overline{1,n},\, &\text{ in }(0,T)\times\Omega,\\
y_0=...=y_{n}=0, &\text{ on }(0,T)\times\partial\Omega,\\
y(0,\cdot)=y^0,&
\end{array}
\right.
\end{equation}
where $\overline g_j\in L^\infty(\Omega),\, j\in\overline{0,n}$.
We denote by $\chi_{\omega_0}v$ the extension of $v:\omega_0\rightarrow\mathbb{R}$ with $0$ to the whole domain $\Omega$.
The control function is $u:[0,T]\times\omega_0\longrightarrow\mathbb{R}$, acting directly in the equation of $y_0$ while the other components of the solution, $y_1,...,y_n$, are indirectly actuated through the corresponding coupling terms containing $y_0$.
\medskip
Consider a stationary state $\overline y=(\overline{y}_0,...,\overline{y}_n), \overline{y}_j\in L^\infty(\Omega), j\in\overline{0,n}$, solution to the elliptic system:
\begin{equation}\label{nonlinelliptic}\left\lbrace
\begin{array}{ll}
-\Delta \overline y_0=\overline g_0(x)+f_0(x,\overline y_0), & x\in\Omega,\\
-\Delta \overline y_i=\overline g_i(x)+f_i(x,\overline y_0,\overline y_i),\, i\in\overline{1,n},\, &x\in\Omega,\\
\overline y_0=...=\overline y_{n}=0, &x\in\partial\Omega.\\
\end{array}
\right.
\end{equation}
Observe in fact that, by elliptic regularity, an $L^\infty$ stationary solution is a smooth solution.
\medskip
Concerning the coupling terms we assume the following hypotheses:\medskip
\begin{enumerate}
\item[\textit{(H1)}] $f_0:\mathbb{R}^N\times\mathbb{R}\longrightarrow\mathbb{R}$, $ f_i:\mathbb{R}^N\times\mathbb{R}\times\mathbb{R}\longrightarrow\mathbb{R}, i\in\overline{1,n} $ are $C^1$ functions and there exist $\omega_1,...\omega_{n}\subset \Omega$, open nonempty subsets of
$\Omega$ such that
\begin{equation}
(\omega_i\cap\omega_0)\setminus\bigcup_{j\ne0,i}\omega_j\neq\emptyset,\,\forall i\in\overline{1,n},
\end{equation}
and for all $i\in\overline{1,n}$ we have
\begin{equation}\label{fsuport}
f_i(x,y_0,y_i)=0 \,\forall x\in\Omega\setminus\omega_i,\, y_0,y_i\in\mathbb{R};
\end{equation}
\item[\textit{(H2)}]The following coupling condition holds:
\begin{equation}\label{suportderiv}
\text{supp }\frac{\partial f_i}{\partial y_0}(x,\overline{y}_0(x),\overline{y}_i(x))\cap\biggl\{(\omega_i\cap\omega_0)\setminus\overline{\bigcup_{j\ne0,i}\omega_j}\biggr\}\neq\emptyset.
\end{equation}
\end{enumerate}
\begin{remark}
Concerning the above technical hypotheses \eqref{fsuport} and \eqref{suportderiv} from (H1)-(H2), observe that they are satisfied, for example, for all sources $\overline g_i$ and corresponding stationary solutions $\overline y$ if the nonlinearities $f_i$ are of the form
$$
f_i(x,y_0,y_i)=\zeta_i(x)\xi_i(y_0,y_i),\,x\in\Omega,y_0,y_i\in\mathbb{R},
$$
with $\emptyset\not=\mbox{supp }\zeta_i(x)\subset \subset(\omega_i\cap\omega_0)\setminus\overline{\bigcup_{j\ne0,i}\omega_j}$ and $\xi_i=\xi_i(y_0,y_i):\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$ are smooth with $\displaystyle\frac{\partial \xi_i}{\partial y_0}\ne0,\,\forall y_0,y_i\in\mathbb{R}$.
\noindent If $\overline y$ is a constant solution, as the problem has homogeneous boundary conditions, necessarily $\overline y\equiv 0$; a stationary solution in this case exists if and only if $\overline g_0(x)=-f_0(x,0)$ and $\overline g_i(x)=-f_i(x,0,0),\forall x\in\Omega$. Condition \eqref{suportderiv} is satisfied if, for example, $\displaystyle\emptyset\ne\text{supp }\frac{\partial f_i}{\partial y_0}(x,0,0)\subset\subset (\omega_i\cap\omega_0)\setminus\overline{\bigcup_{j\ne0,i}\omega_j}.$
\noindent
In concrete situations, when the stationary solution is known, the hypotheses we imposed on the supports of the coupling functions are easy to verify.
\end{remark}
Our study concerns the controllability of system \eqref{nonlinsystem} to the stationary state $\overline y$ in a given time $T$. We are thus led to the study of a class of controlled linear systems and of the corresponding controllability properties; these systems arise by linearization around the stationary state:
\begin{equation}\label{linsystem}
\left\lbrace
\begin{array}{ll}
D_tz_{0}-\Delta z_0=c_0(t,x)z_0+\chi_{\omega_0}u, & (0,T)\times\Omega,\\
D_tz_{i}-\Delta z_i=a_{i0}(t,x)z_0+c_i(t,x)z_i,\, i\in\overline{1,n},\, &(0,T)\times\Omega,\\
z_0=...=z_{n}=0, &(0,T)\times\partial\Omega,\\
\end{array}
\right.
\end{equation}
For $M,\delta>0$ and open subsets $\underline{\omega_i}\subset\subset(\omega_i\cap\omega_0)\setminus\bigcup_{j\ne0,i}\omega_j$ we introduce the following classes of coefficient sets:
\begin{equation}\label{hyp}
\begin{aligned}
&\mathcal{E}_{M,\delta,\{\underline\omega_i\}_i}=\biggl\{ E=\{a_{i0},c_j\}_{i\in\overline{1,n},j\in\overline{0,n}}:a_{i0},c_j\in L^\infty(Q), \\
&\|a_{i0}\|_{L^\infty},\|c_j\|_{L^\infty}\leq M, a_{i0}=0 \text{ in } Q\setminus Q_{\omega_i},\text{ and } |a_{i0}|\ge\delta \text{ on }Q_{\underline\omega_i}\biggr\}.
\end{aligned}
\end{equation}
We prove first that such linear systems with coefficients in $\mathcal{E}_{M,\delta,\{\underline\omega_i\}_i}$ are null controllable, with the $L^2$ and $L^\infty$ norms of the control uniformly bounded by a constant $C=C(M,\delta,\{\underline\omega_i\}_i)$.
In order to achieve this goal we consider the adjoint system:
\begin{equation}\label{adjlinsystem}
\left\lbrace
\begin{array}{ll}
-D_tp_{0}-\Delta p_0=c_0(t,x)p_0+\sum_{i=1}^{n}a_{i0}(t,x)p_i, & (0,T)\times\Omega,\\
-D_tp_{i}-\Delta p_i=c_i(t,x)p_i,\, i\in\overline{1,n},\, &(0,T)\times\Omega,\\
p_0=...=p_{n}=0, &(0,T)\times\partial\Omega.\\
\end{array}
\right.
\end{equation}
We prove an observability inequality as consequence of an appropriate Carleman estimate.
The Carleman estimate we establish in the next section gives us more than just observability: it helps in obtaining a priori estimates for the control driving the solution of the linear system to zero and, as the constants appearing in the Carleman estimates depend only on $M,\delta,\{\underline\omega_i\}_i$, the resulting estimates on the control are uniform.
This fact is essential in the fixed point argument when dealing with the nonlinear system.
In order to reformulate the problem in an abstract functional framework let the state space be the Hilbert space $H=[L^2(\Omega)]^{n+1}$ and the control space $U=L^2( \omega_0)$.
Consider the operator
$$\textbf{A}:D(\mathbf{A})\subset H\longrightarrow H, D(\textbf{A})=(H_0^1(\Omega)\cap
H^2(\Omega))^{n+1}, \textbf{A}z=\Delta z,$$
and the control operator $$\textbf{B}:U\rightarrow H,\, \textbf{B}u=\chi_{\omega_0}Bu,\, B=(1,0,\ldots,0)^\top.$$
Then, problem \eqref{nonlinsystem} may be written in abstract form:
\begin{equation}\label{nonlinevpb}
\left\lbrace
\begin{array}{ll}
D_ty=\textbf{A}y+\textbf{f}(y) +\textbf{B}u, & t>0,\\
y(0)=y^0. & \,\\
\end{array}
\right.
\end{equation} where $\textbf{f}(y)=f(\cdot,y(\cdot))$.
The linear problem \eqref{linsystem} may be reformulated as:
\begin{equation}\label{linevpb}
\left\lbrace
\begin{array}{ll}
D_tz=\textbf{A}z+\mathbf{A_0}(t)z+\mathbf{C}(t)z+\textbf{B}u, & t>0,\\
z(0)=z^0, &\\
\end{array}
\right.
\end{equation}
where $\textbf{C}(t)z=C_0(t,\cdot)z(\cdot)$ and $\mathbf{A_0}(t)z=A_0(t,\cdot)z(\cdot)$, $C_0(t,x)$ is the diagonal matrix $C_0(t,x)=\mathrm{diag}(c_i(t,x))_{i=\overline{0,n}}$, and the coupling matrix $A_0(t,x)$ has only one nonzero column, the first one, and is given by
$$A_0(t,x)=(0,a_{10},\ldots,a_{n0})^\top\cdot (1,0,\ldots,0).$$
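In particular, $A_0^\top p=\big(\sum_{i=1}^{n}a_{i0}p_i,0,\ldots,0\big)^\top$, which is precisely the coupling appearing in the first equation of the adjoint system \eqref{adjlinsystem}.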
For simplicity, when there is no confusion, we denote the norms of functions $z\in[L^2(\Omega)]^{n+1}, $ $z\in [H^1(\Omega)]^{n+1}$ \textit{etc.} as $\|z\|_{L^2(\Omega)}$, respectively $\|z\|_{H^1(\Omega)}$ \textit{etc.}.
Null controllabity for the linear system \eqref{linevpb} above is equivalent to an observability inequality
\begin{equation}\label{obsineq0}
\|p(0)\|^2_{L^2(\Omega)}\leq C(M,\delta)\int_0^T
\|\textbf{B}^* p\|_{L^2(\omega_0)}^2dt, \qquad \text{ for some } C(M,\delta)>0,
\end{equation}
for all solutions $p$ to the adjoint equation
\begin{equation}\label{adjlinevpb}
-p'=\textbf{A}p+\mathbf{A_0^*} p+\textbf{C}p
\end{equation}
where $\mathbf{A_0^*} p=A_0^\top p, \textbf{B}^*p=B^\top p|_{\omega_0}$.
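The equivalence rests on the duality identity
$$
\langle z(T),p(T)\rangle_{L^2(\Omega)}-\langle z^0,p(0)\rangle_{L^2(\Omega)}
=\int_0^T\langle u,\textbf{B}^*p\rangle_{L^2(\omega_0)}\,dt,
$$
valid for solutions $z$ of \eqref{linevpb} and $p$ of \eqref{adjlinevpb}: driving $z(T)$ to $0$ for every $z^0\in H$, with a control satisfying $\|u\|_{L^2((0,T)\times\omega_0)}\leq C\|z^0\|_{L^2(\Omega)}$, is equivalent to \eqref{obsineq0}.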
\medskip
We extend our study to parabolic systems with tree-like couplings. In fact we will treat only linear equations, with appropriate hypotheses for the coupling coefficients in a tree-like structure. Passing from linear controllability results to local controllability for nonlinear systems may be obtained by exactly the same procedure as in the star-like case. An example of a linear parabolic system with tree-like couplings is the following:
\begin{equation}\label{linsystem_tree}
\left\lbrace
\begin{array}{ll}
D_tz_{0}-\Delta z_0=c_0(t,x)z_0+\chi_{\omega_0}u, & \text{in }(0,T)\times\Omega,\\
D_tz_{1}-\Delta z_1=a_{10}(t,x)z_0+c_1(t,x)z_1,\, &\text{in }(0,T)\times\Omega,\\
D_tz_{2}-\Delta z_2=a_{20}(t,x)z_0+c_2(t,x)z_2,\, &\text{in }(0,T)\times\Omega,\\
D_tz_{3}-\Delta z_3=a_{31}(t,x)z_1+c_3(t,x)z_3,\, &\text{in }(0,T)\times\Omega,\\
D_tz_{4}-\Delta z_4=a_{41}(t,x)z_1+c_4(t,x)z_4,\, &\text{in }(0,T)\times\Omega,\\
z_0=...=z_{4}=0, &\text{on }(0,T)\times\partial\Omega,\\
\end{array}
\right.
\end{equation}
and the general form of systems with tree-like couplings will be discussed in \S \ref{sec-tree}.\medskip
The paper is organized as follows:
\begin{itemize}
\item In \S \ref{secobservcarl} we prove appropriate Carleman estimates for the adjoint system \eqref{adjlinsystem} in either the $L^2$-$L^2$ or the $L^\infty$-$L^2$ setting. This is Theorem \ref{th1obs}.
\item In \S \ref{seccontrtreelin} we prove the null controllability of the linear system \eqref{linsystem}. The approach uses a family of optimal control problems with penalized final cost. Besides controllability, one then obtains, by using the previous Carleman estimates, an estimate for the control in both the $L^2$ and $L^\infty$ norms. This is Theorem \ref{thcontrol}.
\item \S \ref{seccontrtreenonlin} is devoted to the local controllability in $L^\infty$ of nonlinear system \eqref{nonlinsystem}. The fact that controllability has to be proved in $L^\infty$ is due to the high sensitivity of the Carleman estimates with respect to the coupling coefficients, which is not the case when controls act in each equation of the system. The argument is similar to that used in \cite{corgueros2010}.
\item In \S\ref{sec-tree} we extend results of controllability, with one distributed scalar control, for linear systems of parabolic equations, of the form \eqref{linsystem_tree}, with tree-like couplings. The key point here is obtaining appropriate Carleman estimates. Local controllability for nonlinear systems with tree-like couplings is also discussed.
\end{itemize}
\section{ Carleman estimates and observability}\label{secobservcarl}
In this section we establish an $L^2$ Carleman estimate that will help in proving an observability inequality for the adjoint problem \eqref{adjlinsystem}. This $L^2$ Carleman inequality and parabolic regularity are the starting point for obtaining an $L^\infty$ control through a bootstrap argument.
We recall the classical Carleman estimate for a generic nonhomogeneous parabolic problem,
\begin{equation}\label{genericparab}
\left\lbrace
\begin{array}{ll}
D_tp+Lp=h, & \text{ in } (0,T)\times\Omega,\\
p=0, &\text{ on } (0,T)\times\partial\Omega,\\
\end{array}
\right.
\end{equation}
where $L$ is a uniformly elliptic operator of second order. Denote $Q:=(0,T)\times\Omega$ and, for $\omega\subset\subset \Omega$, $ Q_{\omega}:=(0,T)\times\omega$.
The solution is observed in $Q_\omega$ for sources $ h\in L^2(Q)$.
We introduce the function
$$\psi\in C^2(\overline{\Omega}),\, \psi|_{\partial\Omega}=k>0,\, k<\psi<\frac32k \text{ in }\Omega, \{x\in\overline\Omega: |\nabla\psi(x)|=0\}\subset\subset\omega, $$
and the weight functions
\begin{equation}\label{fialfagen}
\varphi(t,x):=\frac{e^{\lambda\psi(x)}}{t(T-t)},\quad
\alpha(t,x):=\frac{e^{\lambda\psi(x)}-e^{1.5\lambda\|\psi\|_{C(\overline{\Omega})}}}{t(T-t)}.
\end{equation}
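Let us record an elementary but useful property of these weights: since $0<\psi(x)\leq\|\psi\|_{C(\overline\Omega)}<\frac32\|\psi\|_{C(\overline{\Omega})}$, the numerator of $\alpha$ is negative, so
\begin{equation*}
\alpha<0 \ \text{ in } Q,\qquad e^{2s\alpha(t,x)}\rightarrow 0 \ \text{ as } t\rightarrow0^+ \text{ or } t\rightarrow T^-,
\end{equation*}
which is the reason why no terms at $t=0$ and $t=T$ appear in the estimates below.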
Then, the classical global Carleman estimate (see \cite{furima1996}, \cite{ferzua2000}) is the following:
\begin{lemma}\label{lemaCarleman} There exist $\lambda_0,s_{0}$ and $C>0$ such that if $\lambda>\lambda_0, s\geq s_0$, the following inequality holds:
\begin{equation}\label{estCarleman}\begin{aligned}
&\int_Q\left[(s\varphi)^{-1} (|D_tp|^2+|D^2p|^2)+s\lambda^{2}\varphi|D p|^2+s^{3}\lambda^{4}\varphi^{3}|p|^2\right]e^{2s\alpha}dxdt\\
&\leq C\left(\int_{Q_\omega}s^{3}\lambda^{4}\varphi^{3}|p|^2e^{2s\alpha}dxdt+\int_{Q}|h|^2 e^{2s\alpha}dxdt\right)\\
\end{aligned}
\end{equation}
for all $p\in H^1(0,T;L^2(\Omega))\cap L^2(0,T; H^2(\Omega))$ solution of \eqref{genericparab}.
\end{lemma}
We establish a Carleman estimate for the following nonhomogeneous version of the adjoint problem to \eqref{linsystem}, with source term $g\in [L^2(Q)]^{n+1}$ and observation operator $\textbf{B}^*p=B^\top p|_{\omega_0}=p_0|_{\omega_0}$, an operator which ``sees'' only $p_0$ in the subdomain $\omega_0$:
\begin{equation}\label{withsource}
\left\lbrace
\begin{array}{ll}
-D_tp_{0}-\Delta p_0=c_0(t,x)p_0+\sum_{i=1}^{n}a_{i0}(t,x)p_i+g_0, & (0,T)\times\Omega,\\
-D_tp_{i}-\Delta p_i=c_i(t,x)p_i+g_i,\, i\in\overline{1,n},\, &(0,T)\times\Omega,\\
p_0=...=p_{n}=0, &(0,T)\times\partial\Omega.\\
\end{array}
\right.
\end{equation}
In the following we are going to establish Carleman estimates for each equation in \eqref{withsource} by using in each case corresponding subdomains of observation and appropriately chosen weight functions.
Some technical preliminaries are needed and we proceed as follows:
Consider open subsets
$$\tilde\omega_j\subset\subset\underline\omega_j$$
and denote as above by $Q_{\tilde\omega_j}=(0,T)\times\tilde\omega_j$; take the auxiliary functions $\psi_j, j=\overline{0,n}$, with the following properties (where we have denoted by $\tilde\omega_0:=\omega_0$):
\begin{equation}\label{psi}
\psi_j:=\eta_j+K_j, j\in \overline{0,n},
\end{equation}
$$\eta_j\in C^2(\overline{\Omega}),\, 0<\eta_j \text{ in }\Omega,\quad\eta_j|_{\partial\Omega}=0,\quad \{x\in\overline\Omega: |\nabla\eta_j(x)|=0\}\subset\subset\tilde\omega_j,$$
for some fixed positive constants $K_j>0$ such that
\begin{equation}\label{psiipsi0}
\psi_i>\psi_0 \text{ in } \Omega, \forall i\in\overline{1,n},
\end{equation}
and
\begin{equation}\label{psipsi}
\frac{\sup \psi_j}{\inf\psi_j}<\frac87, \forall j\in\overline{0,n}.
\end{equation}
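Let us note, for completeness, that such a choice of the constants $K_j$ is always possible: one may first fix $K_0$ and take $K_i>K_0+\sup\eta_0$ for $i\in\overline{1,n}$, which gives \eqref{psiipsi0}; then, since for every $j$
\begin{equation*}
\frac{\sup\psi_j}{\inf\psi_j}\leq\frac{K_j+\sup\eta_j}{K_j}\xrightarrow[K_j\rightarrow+\infty]{}1,
\end{equation*}
adding to all the $K_j$ one common, sufficiently large constant preserves \eqref{psiipsi0} and enforces \eqref{psipsi}.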
Let $0<\epsilon<\inf_{i\in\overline{0,n}}\inf\psi_i$ be a small positive number and denote by
\begin{equation}
\overline{\psi}=\sup_{x\in\Omega}\sup_{j\in \overline{0,n}}\psi_j(x)+\epsilon,
\qquad
\underline{\psi}=\inf_{x\in\Omega}\inf_{j\in \overline{0,n}}\psi_j(x)-\epsilon.
\end{equation}
Introduce also, for parameters $s,\lambda>0$ the auxiliary functions:
\begin{equation}\label{fialfa}
\varphi_j(t,x):=\frac{e^{\lambda\psi_j(x)}}{t(T-t)},\quad
\alpha_j(t,x):=\frac{e^{\lambda\psi_j(x)}-e^{1.5\lambda\overline{\psi}}}{t(T-t)}, \forall j\in\overline{0,n}
\end{equation}
and
\begin{equation}\label{fialfabar}
\overline\varphi(t)=\overline\varphi^\lambda(t):=\frac{e^{\lambda\overline\psi}}{t(T-t)},\quad
\overline\alpha(t)=\overline\alpha^\lambda(t):=\frac{e^{\lambda\overline\psi}-e^{1.5\lambda\overline\psi}}{t(T-t)},
\end{equation}
\begin{equation}\label{fialfabar2}
\underline\varphi(t)=\underline\varphi^\lambda(t):=\frac{e^{\lambda\underline\psi}}{t(T-t)},\quad
\underline\alpha(t)=\underline\alpha^\lambda(t):=\frac{e^{\lambda\underline\psi}-e^{1.5\lambda\overline\psi}}{t(T-t)}.
\end{equation}
\begin{remark}\label{ordineaponderilor}
\begin{enumerate}
\item[(i)] As we are going to compare the various Carleman estimates stated for each equation of the linear adjoint system, we will need to compare the weights involved in those inequalities. For this purpose let us observe that, given $m_0>0$, there exist $s_0=s_0(m_0),\lambda_0=\lambda_0(m_0)>0$ such that for all $s>s_0, \lambda>\lambda_0$, $|m|\le m_0$ and $ t\in(0,T)$, the following inequalities hold:
\begin{equation}\label{weightsorder}
e^{s\underline\alpha}\leq s^m\varphi_i^me^{s\alpha_i}\leq e^{s\overline\alpha},
\end{equation}
\begin{equation}\label{weightsorder1}
e^{s\alpha_0}\leq s^m\varphi_i^me^{s\alpha_i}.
\end{equation}
\item[(ii)] Observe that if in \eqref{psi} we replace $K_i$ with $K_i+M$ for a constant $M>0$ large enough, the above properties of the auxiliary functions remain valid and, moreover, we may assume that
\begin{equation}
\frac{\overline\psi}{\underline\psi}\leq\frac{3}{2}.
\end{equation}
This extra assumption implies that there exist $\bar s_0>0,\bar\lambda_0>0$ such that if $s>\bar s_0$ and $\lambda>\bar\lambda_0$, then
\begin{equation}
|D_t\varphi_i|\leq C\varphi_i^2,\quad |D_t\alpha_i| \leq C\varphi_i^2,\quad |D^2_t\alpha_i| \leq C\varphi_i^3.
\end{equation}
\item[(iii)]Observe that for $\lambda$ big enough, say $\lambda>\overline\lambda$, we have
\begin{equation}\label{ordineaponderilor2}
\frac{\;\underline{\alpha^\lambda}\;}{\overline{\alpha^\lambda}}<2.
\end{equation}
Indeed, this is a consequence of the fact that $\displaystyle\lim_{\lambda\rightarrow+\infty}\frac{\;\underline{\alpha^\lambda}\;}{\overline{\alpha^\lambda}}=1$, uniformly with respect to $t\in(0,T)$.
\end{enumerate}\end{remark}\medskip
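For later use let us make the limit invoked in Remark \ref{ordineaponderilor}(iii) explicit: from \eqref{fialfabar} and \eqref{fialfabar2},
\begin{equation*}
\frac{\;\underline{\alpha^\lambda}\;}{\overline{\alpha^\lambda}}
=\frac{e^{\lambda\underline\psi}-e^{1.5\lambda\overline\psi}}{e^{\lambda\overline\psi}-e^{1.5\lambda\overline\psi}}
=\frac{1-e^{\lambda(\underline\psi-\frac32\overline\psi)}}{1-e^{\lambda(\overline\psi-\frac32\overline\psi)}}\xrightarrow[\lambda\rightarrow+\infty]{}1,
\end{equation*}
since $0<\underline\psi<\overline\psi<\frac32\overline\psi$ makes both exponents negative; the common factor $\frac{1}{t(T-t)}$ cancels, which explains the uniformity in $t$.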
In this section we prove the following Carleman estimate, which has as a consequence the appropriate observability inequality for the adjoint system \eqref{adjlinsystem}.
\begin{theorem}\label{th1obs} There exist constants $\lambda_0,s_{0}$ such that for $\lambda>\lambda_0$ there exists a constant $C>0$, depending on $(M,\delta,\{\underline\omega_i\}_i,\lambda)$, such that for any $ s\geq s_0$ the following inequality holds:
\begin{equation}\label{estCarlemanstar}\begin{aligned}
&\int_Q(|D_tp|^2+|D^2p|^2+|D p|^2+|p|^2)e^{2s\underline\alpha}dxdt\\
&\leq C\int_{Q_{\omega_0}}|p_0|^2e^{2s\overline\alpha}dxdt+C\int_{Q}|g|^2e^{2s\overline\alpha}dxdt\\
\end{aligned}
\end{equation}
for all $p\in H^1(0,T;L^2(\Omega))\cap L^2(0,T; H^2(\Omega))$ solution of \eqref{withsource}.
Moreover, there exist $m_0\in\mathbb{N}$ and $\delta_1>0$ such that for the homogeneous adjoint system (\textit{i.e.} taking $g\equiv 0$), we have the following $L^\infty-L^2$ Carleman estimate
\begin{equation}\label{LinftyL2Carleman}
\|p e^{(s+m_0\delta_1)\underline{\alpha}}\|_{L^{\infty}(Q)}\leq C \|p_0 e^{s\overline\alpha}\|_{L^2(Q_{\omega_0})}.
\end{equation}
\end{theorem}
\begin{proof}
The second remark above is useful when obtaining Carleman estimates, since the weights here are slightly different from those used in \cite{furima1996} or \cite{fercarzua2000}. However, this remark allows us to follow the same lines of proof, and we may write the Carleman estimate \eqref{estCarleman}
for each equation $j\in\overline{0,n}$ with observation domain $\tilde\omega_j$ and auxiliary functions and weight functions $\psi_j,\varphi_j,\alpha_j$. Thus, there exist $s_{0}>0, C>0$ such that for any $ s\geq s_0$, the following inequalities hold:
\begin{enumerate}
\item For $p_0$ we have
\begin{equation}\label{estCarleman0}\begin{aligned}
&\int_{Q}\left[(s\varphi_0)^{-1}(|D_tp_0|^2+|D^2p_0|^2)+s\varphi_0|Dp_0|^2+s^{3}\varphi_0^{3}|p_0|^2\right]e^{2s\alpha_0}dxdt\\
&\leq C\left[\int_{Q_{\omega_0}}s^{3}\varphi_0^{3}|p_0|^2e^{2s\alpha_0}dxdt+\int_{Q}\left|\sum_{i=1}^n a_{i0}p_i+g_0\right|^2 e^{2s\alpha_0}dxdt\right]\\
&\leq C\left[\int_{Q_{\omega_0}}s^{3}\varphi_0^{3}|p_0|^2e^{2s\alpha_0}dxdt\right.\\
&\left.+n^2M^2\sum_{i=1}^n \int_{Q}\left|p_i\right|^2 e^{2s\alpha_i}dxdt+\int_{Q}|g_0|^2e^{2s\alpha_0}dxdt\right].
\end{aligned}
\end{equation}
\item For $p_i,i\in\overline{1,n}$ we have:
\begin{equation}\label{estCarlemani}\begin{aligned}
&\int_{Q}\left[(s\varphi_i)^{-1}(|D_tp_i|^2+|D^2p_i|^2)+s\varphi_i|Dp_i|^2+s^{3}\varphi_i^{3}|p_i|^2\right]e^{2s\alpha_i}dxdt\\
&\leq C\int_{Q_{\tilde\omega_i}}s^{3}\varphi_i^{3}|p_i|^2e^{2s\alpha_i}dxdt+ C\int_{Q}|g_i|^2e^{2s\alpha_i}dxdt.\\
\end{aligned}
\end{equation}
\end{enumerate}
Summing the above Carleman inequalities we obtain, for some constant $C=C(M,\{\omega_j\}_j)>0$, that
\begin{equation}
\label{estCarleman1}\begin{aligned}
&\sum_{j=0}^{n}\left\lbrace\int_{Q}\left[(s\varphi_j)^{-1}(|D_tp_j|^2+|D^2p_j|^2)+s\varphi_j|D p_j|^2+s^{3}\varphi_j^{3}|p_j|^2\right]e^{2s\alpha_j}dxdt\right\rbrace\\
&\leq C\left[\int_{Q_{\omega_0}}s^{3}\varphi_0^{3}|p_0|^2e^{2s\alpha_0}dxdt+\sum_{i=1}^{n}\left(\int_{Q_{\tilde\omega_i}}s^{3}\varphi_i^{3}|p_i|^2e^{2s\alpha_i}dxdt\right)\right.\\
&\left.
+\sum_{j=0}^{n}\left(\int_{Q}|g_j|^2e^{2s\alpha_j}dxdt\right)\right].\\
\end{aligned}
\end{equation}
At this point we have to properly estimate the terms containing $p_i$ on $\tilde\omega_i,i\in\overline{1,n}$, from the right-hand side in terms of the component $p_0$ observed on $\tilde\omega_0$.
For this purpose we will use the first equation of \eqref{adjlinsystem} considered on $\omega_i\cap\omega_0$, which by hypothesis \eqref{hyp} is coupled only to $p_i$:
\begin{equation}
\label{p0pi}
D_tp_0+\Delta p_0+c_0p_0+a_{i0}p_i=-g_0\,\text{ in }(0,T)\times(\omega_i\cap\omega_0).
\end{equation}
Consider cutoff functions $\gamma_i,i\in\overline{1,n}$, with the properties
\begin{equation*}
\begin{aligned}
&\gamma_i\in C_0^\infty(\omega_i),\,|\gamma_i|\le 1,\, \operatorname{supp}\gamma_i=\overline{\underline\omega_i}, \\
&\gamma_i=\operatorname{sign}(a_{i0}|_{\underline\omega_i})\text{ on } \tilde\omega_i,\quad\gamma_i\ne0 \text{ in }\underline\omega_i,
\end{aligned}
\end{equation*}
where $\operatorname{sign}(a_{i0})$ is the sign of $a_{i0}$ in $\underline\omega_i$, which, by hypothesis \eqref{hyp} and continuity, is nonzero and constant in $\tilde\omega_i$.
Multiply the equation \eqref{p0pi} by $\gamma_i s^3\varphi_i^3p_ie^{2s\alpha_i}$ and integrate over $Q_{\underline\omega_i}$:
\begin{equation}\label{ec0multipl}\begin{aligned}
&\int_{Q_{\underline\omega_i}}\gamma_ia_{i0}(x)s^3\varphi_i^3|p_i|^2e^{2s\alpha_i}dxdt\\
&=\int_{Q_{\underline\omega_i}}\gamma_is^3\varphi_i^3(-c_0p_0-D_tp_0-\Delta p_0-g_0)p_ie^{2s\alpha_i}dxdt
\end{aligned}
\end{equation}
We use \eqref{hyp} to infer that
\begin{equation}\label{ms}
\begin{aligned}
\delta \int_{Q_{\tilde\omega_i}}s^3\varphi_i^3|p_i|^2e^{2s\alpha_i}dxdt&\le \int_{Q_{\tilde\omega_i}}|a_{i0}(x)|s^3\varphi_i^3|p_i|^2e^{2s\alpha_i}dxdt\\
&\leq \int_{Q_{\underline\omega_i}}\gamma_ia_{i0}(x)s^3\varphi_i^3|p_i|^2e^{2s\alpha_i}dxdt.
\end{aligned} \end{equation}
We estimate each term from the right-hand side of \eqref{ec0multipl} using the properties of $\gamma_i, i\in\overline{1,n}$.
Let $C>0$ denote various constants depending on $\delta,M$ and $\underline\omega_i,\tilde\omega_i$.
For the first term on the right-hand side of \eqref{ec0multipl} we have:
\begin{equation}\label{md1}\begin{aligned}
&\left| \int_{Q_{\underline\omega_i}}\gamma_is^3\varphi_i^3
(-c_0p_0)p_ie^{2s\alpha_i}dxdt\right|\\
&\leq M\left(\int_{Q_{\underline\omega_i}}s^2\varphi_i^2|p_i|^2e^{2s\alpha_i}dxdt\right)^{\frac{1}{2}}\left(\int_{Q_{\underline\omega_i}}s^4\varphi_i^4|p_0|^2e^{2s\alpha_i}dxdt\right)^{\frac{1}{2}}\\
&\leq
\int_{Q_{\underline\omega_i}}s^2\varphi_i^2|p_i|^2e^{2s\alpha_i}dxdt+M^2\int_{Q_{\underline\omega_i}}s^4\varphi_i^4|p_0|^2e^{2s\alpha_i}dxdt.\end{aligned}
\end{equation}
The same computation gives an estimate for the term involving the source:
\begin{equation}\label{md11}\begin{aligned}
&\left| \int_{Q_{\underline\omega_i}}\gamma_is^3\varphi_i^3
(-g_0)p_ie^{2s\alpha_i}dxdt\right|\\
&\leq
\int_{Q_{\underline\omega_i}}s^2\varphi_i^2|p_i|^2e^{2s\alpha_i}dxdt+M^2\int_{Q_{\underline\omega_i}}s^4\varphi_i^4|g_0|^2e^{2s\alpha_i}dxdt.\end{aligned}
\end{equation}
Observe now that we have the following estimates for the weight functions, with a constant $cst$ not depending on $s$:
\begin{equation}
|\gamma_is^3D_t(e^{2s\alpha_i}\varphi_i^3)|=|\gamma_is^3(e^{2s\alpha_i}2sD_t\alpha_{i}\varphi_i^3+3e^{2s\alpha_i}\varphi_i^2D_t\varphi_i)|\leq
cst\, e^{2s\alpha_i}s^5\varphi_i^5
\end{equation}
and
\begin{equation}
|s^3\Delta(\gamma_i\varphi_i^3 p_ie^{2s\alpha_i})|\leq cst\,
s^3\varphi_i^3(s^2\varphi_i^2|p_i|+s\varphi_i|\nabla p_i|+|\Delta
p_i|)e^{2s\alpha_i}.
\end{equation}
We now proceed with estimating the second term in \eqref{ec0multipl} using, as is usual in Carleman estimates,
integration by parts:
\begin{eqnarray}\left|\int_{Q_{\underline\omega_i}}\gamma_i s^3\varphi_i^3(-D_tp_0)p_ie^{2s\alpha_i}dxdt
\right|=\left|\int_{Q_{\underline\omega_i}}
s^3D_t(\varphi_i^3p_ie^{2s\alpha_i})p_0dxdt \right|\nonumber\\
\leq\left|\int_{Q_{\underline\omega_i}}s^3D_t(\varphi_i^3e^{2s\alpha_i})p_ip_0dxdt\right|+\left|\int_{Q_{\underline\omega_i}}s^3\varphi_i^3e^{2s\alpha_i}D_tp_ip_0dxdt\right|\nonumber\\
\leq\label{md2}
C\left|\int_{Q_{\underline\omega_i}}e^{2s\alpha_i}s^5\varphi_i^5p_ip_0dxdt\right|+\left|\int_{Q_{\underline\omega_i}}e^{2s\alpha_i}s^3\varphi_i^3D_tp_ip_0dxdt\right|\nonumber\\
\leq\int_{Q_{\underline\omega_i}}s^2\varphi_i^2|p_i|^2e^{2s\alpha_i}dxdt+C\int_{Q_{\underline\omega_i}}s^8\varphi_i^8|p_0|^2e^{2s\alpha_i}dxdt
\\
+\int_{Q_{\underline\omega_i}}(s\varphi_i)^{-2}|D_tp_i|^2e^{2s\alpha_i}dxdt+C\int_{Q_{\underline\omega_i}}s^8\varphi_i^8|p_0|^2e^{2s\alpha_i}dxdt.\nonumber
\end{eqnarray}
We proceed now with estimating the third term on the right-hand side of \eqref{ec0multipl}:
\begin{equation}\label{md3}
\begin{aligned}
&\left|\int_{Q_{\underline\omega_i}}\gamma_is^3\varphi_i^3(-\Delta
p_0)p_ie^{2s\alpha_i}dxdt\right|=\left|
\int_{Q_{\underline\omega_i}}s^3\Delta(\gamma_i\varphi_i^3 p_i
e^{2s\alpha_i})p_0dxdt\right|\\
&\leq
C\int_{Q_{\underline\omega_i}}s^3\varphi_i^3(s^2\varphi_i^2|p_i|+s\varphi_i|\nabla
p_i|+|\Delta p_i|)e^{2s\alpha_i}|p_0|dxdt\\
&\leq
\int_{Q_{\underline\omega_i}}[s^2\varphi_i^2|p_i|^2+|\nabla
p_i|^2+(s\varphi_i)^{-2}|\Delta p_i|^2]e^{2s\alpha_i}dxdt\\
&+C\int_{Q_{\underline\omega_i}}s^8\varphi_i^8
|p_0|^2e^{2s\alpha_i}dxdt.
\end{aligned}
\end{equation}
Using \eqref{md1}, \eqref{md11}, \eqref{md2}, \eqref{md3} and \eqref{ms} we have, for $i\in\overline{1,n}$, that
\begin{equation}\begin{aligned}
&\int_{Q_{\tilde\omega_i}}s^3\varphi_i^3|p_i|^2e^{2s\alpha_i}dxdt\le C \int_{Q_{\underline\omega_i}}s^8\varphi_i^8
|p_0|^2e^{2s\alpha_i}dxdt\\
&+\int_{Q_{\underline\omega_i}}\left[(s\varphi_i)^{-2}(|\Delta p_i|^2+|D_tp_i|^2)+s^2\varphi_i^2|p_i|^2+|\nabla
p_i|^2\right]e^{2s\alpha_i}dxdt\\
&+C\int_{Q_{\underline\omega_i}}s^4\varphi_i^4
|g_0|^2e^{2s\alpha_i}dxdt.\\
\end{aligned}\end{equation}
Going back to \eqref{estCarleman1}, we have
\begin{equation}
\begin{aligned}
&\sum_{j=0}^{n}\left\lbrace\int_{Q}\left[(s\varphi_j)^{-1}(|D_tp_j|^2+|D^2p_j|^2)+s\varphi_j|D p_j|^2+s^{3}\varphi_j^{3}|p_j|^2\right]e^{2s\alpha_j}dxdt\right\rbrace\\
&\leq C\int_{Q_{\omega_0}}s^3\varphi_0^3
|p_0|^2e^{2s\alpha_0}dxdt+C\sum_{i=1}^{n}\left(\int_{Q_{\underline\omega_i}}s^8\varphi_i^8
|p_0|^2e^{2s\alpha_i}dxdt\right.\\
&
\left.+\int_{Q_{\underline\omega_i}}\left[(s\varphi_i)^{-2}(|D^2 p_i|^2+|D_tp_i|^2)+s^2\varphi_i^2|p_i|^2+|D
p_i|^2\right]e^{2s\alpha_i}dxdt\right)\\
&+C\sum_{i=1}^n\int_{Q_{\underline\omega_i}}s^4\varphi_i^4
|g_0|^2e^{2s\alpha_i}dxdt+ C\sum_{j=0}^{n}\int_{Q}
|g_j|^2e^{2s\alpha_j}dxdt.\\
\end{aligned}
\end{equation}
We now absorb the integral terms containing $p_i$ from the right-hand side into the corresponding higher order terms on the left-hand side of the above inequality, by taking $s$ large enough. We obtain:
\begin{equation}
\label{estCarleman2}\begin{aligned}
&\sum_{j=0}^{n}\left\lbrace\int_{Q}\left[(s\varphi_j)^{-1}(|D_tp_j|^2+|D^2p_j|^2)+s\varphi_j|D p_j|^2+s^{3}\varphi_j^{3}|p_j|^2\right]e^{2s\alpha_j}dxdt\right\rbrace\\
&\leq C\int_{Q_{\omega_0}}s^3\varphi_0^3
|p_0|^2e^{2s\alpha_0}dxdt+ C\sum_{i=1}^{n}\int_{Q_{\underline\omega_i}}s^8\varphi_i^8
|p_0|^2e^{2s\alpha_i}dxdt\\
&+C\sum_{i=1}^n\int_{Q_{\underline\omega_i}}s^4\varphi_i^4
|g_0|^2e^{2s\alpha_i}dxdt+ C\sum_{j=0}^{n}\int_{Q}
|g_j|^2e^{2s\alpha_j}dxdt.\\
\end{aligned}
\end{equation}
Now we use Remark \ref{ordineaponderilor} in order to take a smaller weight on the left-hand side and a larger one on the right-hand side. Then there exist $s_0>0$ and $C=C(M,\delta,\{\underline\omega_j\}_j)$ such that the following Carleman estimate holds for all $s\ge s_0$:
\begin{equation}
\label{estCarleman3}\begin{aligned}
&\sum_{j=0}^{n}\left[\int_{Q}\left(|D_tp_j|^2+|D^2p_j|^2+|D p_j|^2+|p_j|^2\right)e^{2s\underline\alpha}dxdt\right]\\
&\leq C\int_{Q_{\omega_0}}|p_0|^2e^{2s\overline\alpha}dxdt+C\int_{Q}|g|^2e^{2s\overline\alpha}dxdt.\\
\end{aligned}
\end{equation}
\end{proof}
Concerning the $L^\infty-L^2$ Carleman estimate for the solution of the adjoint problem \eqref{adjlinsystem}, we proceed in the same way as in \cite{bar12002,corgueros2010} or \cite{balch}.
We need the maximal regularity result in $L^p$ spaces for parabolic problems (see \cite{lady}) and Sobolev embeddings for anisotropic Sobolev spaces, which are contained in the following lemma:
\begin{lemma}[\cite{lady}, Lemma 3.3]\label{lemmaLady1}
Let $z\in W^{2,1}_r(Q)$.
Then $z\in Z_1$ where
$$
Z_1= \left\{\begin{array}{lll}
L^s(Q)&\text{ with }s\le \frac{(N+2)r}{N+2-2r},&\text{ when }r<\frac{N+2}{2},\\
L^s(Q)&\text{ with }s\in[1,\infty),&\text{ when }r=\frac{N+2}{2},\\
C^{\alpha,\alpha/2}(Q)& \text{ with } 0<\alpha <2-\frac{N+2}{r},&\text{ when }r>\frac{N+2}{2},
\end{array}\right.
$$
and there exists $C=C(Q,r,N)$ such that $$
\|z\|_{Z_1}\le C\|z\|_{W^{2,1}_r(Q)}.
$$
\end{lemma}
In view of the above regularity result, we define the following sequence of numbers:
\begin{equation}\label{seqsigmaj}
\sigma_0=2,\quad \sigma_j:=
\begin{cases}
\dfrac{(N+2)\sigma_{j-1}}{N+2-2\sigma_{j-1}}, \text{ if } \sigma_{j-1}<\frac{N+2}{2},\\
\frac{3}{2}\sigma_{j-1}, \text{ if } \sigma_{j-1}\geq \frac{N+2}{2},
\end{cases}
\end{equation}
such that, by Lemma \ref{lemmaLady1}, we have
$$W^{2,1}_{\sigma_{j-1}}(Q)\subset L^{\sigma_j}(Q).$$
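To fix ideas (this is only an illustration), in the case $N=3$, where $\frac{N+2}{2}=\frac52$, the sequence reads
\begin{equation*}
\sigma_0=2<\tfrac52,\qquad \sigma_1=\frac{5\cdot2}{5-4}=10>\tfrac52,\qquad \sigma_2=\tfrac32\sigma_1=15,\;\ldots,
\end{equation*}
so that one iteration already passes the threshold $\frac{N+2}{2}$; for larger $N$ a few more iterations of the first branch are needed.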
Now, let us fix $\delta_1>0$ and consider the sequence $(q^j)_{j\geq0}$ defined by
\begin{equation*}
q^j:=p\, e^{(s+j\delta_1)\underline{\alpha}}.
\end{equation*}
Then $q^j=(q^j_0,\ldots,q^j_n)^\top$ is solution to the problem
\begin{equation}\begin{aligned}
&D_tq^j+\textbf{A}q^j+C q^j+A_0^\top q^j=(s+j\delta_1)D_t\underline\alpha q^j,\\
&q^j(T)=0.\\
\end{aligned}\end{equation}
Observe that the right-hand side may be bounded in terms of $q^{j-1}$, with some constant $C_j=C_j(s,\delta_1)>0$, as follows
\begin{equation}
\left|(s+j\delta_1)D_t\underline\alpha\, q^j\right|=(s+j\delta_1)\left|\frac{2t-T}{t(T-t)}\underline{\alpha}\right|e^{\delta_1\underline{\alpha}}|q^{j-1}|\leq C_j|q^{j-1}|.
\end{equation}
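Indeed, such a constant $C_j$ exists because the factor multiplying $|q^{j-1}|$ is bounded on $(0,T)$: writing $\underline\alpha(t)=\frac{c_\lambda}{t(T-t)}$ with $c_\lambda:=e^{\lambda\underline\psi}-e^{1.5\lambda\overline\psi}<0$, we have
\begin{equation*}
\left|\frac{2t-T}{t(T-t)}\underline\alpha(t)\right| e^{\delta_1\underline\alpha(t)}\leq \frac{T|c_\lambda|}{t^2(T-t)^2}\,e^{-\frac{\delta_1|c_\lambda|}{t(T-t)}}\longrightarrow0 \quad\text{as } t\rightarrow0^+ \text{ or } t\rightarrow T^-,
\end{equation*}
since the exponential decay dominates any negative power of $t(T-t)$.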
By maximal parabolic regularity (see \cite{lady}) we have
\begin{equation}\label{parabregj}
\|q^j\|_{W^{2,1}_{\sigma_{j-1}}}\leq \tilde C_j\|q^{j-1}\|_{L^{\sigma_{j-1}}}
\end{equation}
and using Sobolev type embedding from Lemma \ref{lemmaLady1}, we have that there exists a constant $K_j$ such that
\begin{equation}\label{soboj}
\|q^{j-1}\|_{L^{\sigma_{j-1}}}\leq K_j \|q^{j-1}\|_{W^{2,1}_{\sigma_{j-2}}}.
\end{equation}
The sequence $(\sigma_m)_m$ increases to $+\infty$; choose the rank $m_0$ such that $\sigma_{m_0}>\frac{N+2}{2}\ge\sigma_{m_0-1}$. This implies that
\begin{equation}\label{inftym}
W^{2,1}_{\sigma_{m_0}}(Q)\subset L^{\infty}(Q).
\end{equation}
From \eqref{parabregj}, \eqref{soboj} and \eqref{inftym}, and with the use of \eqref{estCarlemanstar}, we have that there exists a constant $C>0$ such that
\begin{equation}\begin{aligned}
&\|p e^{(s+m_0\delta_1)\underline{\alpha}}\|_{L^{\infty}(Q)}=\|q^{m_0}\|_{L^\infty(Q)}\leq C\|q^0\|_{L^{\sigma_0}(Q)}=C\|p e^{s\underline{\alpha}}\|_{L^2(Q)}\\
&\leq C\|p_0e^{s\overline{\alpha}}\|_{L^2(Q_{\omega_0})} .
\end{aligned}\end{equation}
\begin{remark}\label{rem_obs}
In order to obtain the observability inequality we proceed in the classical manner, by multiplying scalarly in $L^2(\Omega)$ each equation of the adjoint system \eqref{adjlinsystem} by $p_i$ and making use of dissipativity to find, for some constant $c>0$ depending only on the coefficients of the system, the inequality:
\begin{equation*}
\frac{1}{2}\frac{d}{dt}\|p\|^2_{L^2(\Omega)}+c \|p\|^2_{L^2(\Omega)}\geq 0,
\end{equation*}
which gives
\begin{equation*}
\|p(0)\|^2_{L^2(\Omega)}\leq \|p(t)\|^2_{L^2(\Omega)}e^{Ct}, t\in(0,T).
\end{equation*}
Consequently, for fixed $s>s_0$, we have that
\begin{equation*}
\|p(0)\|^2_{L^2(\Omega)}\leq \frac{2}{T}\int_{\frac{T}{4}}^{\frac{3T}{4}} \|p(t)\|^2_{L^2(\Omega)}e^{Ct}dt\leq K(T,s)\int_{0}^{T} \|p(t)\|^2_{L^2(\Omega)}e^{2s\underline\alpha}dt.
\end{equation*}
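The last inequality uses the fact that on $\left[\frac T4,\frac{3T}4\right]$ we have $t(T-t)\geq\frac{3T^2}{16}$, so $\underline\alpha$ is bounded there and $e^{2s\underline\alpha(t)}$ is bounded from below by a positive constant; hence
\begin{equation*}
e^{Ct}\leq e^{CT}\leq K(T,s)\,e^{2s\underline\alpha(t)},\qquad t\in\left[\tfrac T4,\tfrac{3T}4\right].
\end{equation*}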
Now, by Carleman estimate \eqref{estCarleman3} we obtain the observability inequality:
\begin{equation}\label{obsineq}
\|p(0,\cdot)\|^2_{L^2(\Omega)}\leq C\int_{Q_{\omega_0}}
|p_0|^2e^{2s\overline\alpha}dxdt,
\end{equation}
with a constant $C=C(T,s,\delta,M,\{\underline\omega_j\}_j)$.
\end{remark}
\section{Linear system: null controllability}\label{seccontrtreelin}
The main controllability result concerning the linear system \eqref{linsystem} is the following:
\begin{theorem}\label{thcontrol}
Consider system \eqref{linsystem} with coefficients in $\mathcal{E}_{M,\delta,\{\underline\omega_i\}_i}$. Then there exists a constant $C=C({M,\delta,\{\underline\omega_i\}_i})$ such that for all $z^0\in H$ there exists $u^*\in L^2(0,T;L^2(\omega_0))\cap L^\infty(Q_{\omega_0})$ which drives the corresponding solution $z=z^{u^*}$ of \eqref{linsystem} to $0$, \textit{i.e.} $z(T,\cdot)=0$, and satisfies the norm estimate
\begin{equation}\label{est_contr_l2}
\|u^*e^{-s\overline\alpha}\|_{L^2(0,T;L^2(\omega_0))}+ \|u^* \|_{L^\infty(Q_{\omega_0})}\leq C\|z^0\|_{L^2(\Omega)}.
\end{equation}
\end{theorem}
\proof\quad
\subsubsection*{$L^2(Q)$ control.}
In order to obtain norm estimates for the controls driving the trajectory of the linear system to $0$, we consider a family of optimal control problems depending on a small parameter $\varepsilon>0$:
\begin{equation}\label{optimalpb}
\inf_{u\in L^2(Q_{\omega_0})}\frac{1}{2}\int_{Q_{\omega_0}}|u|^2e^{-2s\overline\alpha}dxdt+\frac{1}{2\varepsilon}\int_{\Omega}|z(T,\cdot)|^2dx,
\end{equation}
with $z=z^u$ the solution of the linear controlled system \eqref{linevpb}.
Classical results concerning optimal control with quadratic cost for parabolic equations ensure the existence of an optimal control $u^\varepsilon$ which, by the Pontryagin maximum principle, satisfies
\begin{equation}\label{optimalcontrol}
u^\varepsilon=e^{2s\overline\alpha}\textbf{B}^*p^\varepsilon=e^{2s\overline\alpha}p_0^\varepsilon|_{\omega_0},
\end{equation}
where $p^\varepsilon$ is the solution to the adjoint system:
\begin{equation}
\begin{cases}
D_tp^\varepsilon=-\textbf{A}p^\varepsilon-\textbf{C}(t) p^\varepsilon-\textbf{A}_0^*(t) p^\varepsilon,\\
p^\varepsilon(T)=-\frac{1}{\varepsilon}z^\varepsilon(T).
\end{cases}
\end{equation}
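For the reader's convenience, let us sketch how the optimality condition \eqref{optimalcontrol} is obtained (a standard computation): for any perturbation $v\in L^2(Q_{\omega_0})$, stationarity of the cost \eqref{optimalpb} at $u^\varepsilon$ gives
\begin{equation*}
0=\int_{Q_{\omega_0}}u^\varepsilon v\, e^{-2s\overline\alpha}dxdt+\frac{1}{\varepsilon}\langle z^\varepsilon(T),w_v(T)\rangle_{L^2(\Omega)},
\end{equation*}
where $w_v$ denotes the solution of the controlled system with control $v$ and zero initial datum. By duality with $p^\varepsilon$,
\begin{equation*}
\frac{1}{\varepsilon}\langle z^\varepsilon(T),w_v(T)\rangle_{L^2(\Omega)}=-\langle p^\varepsilon(T),w_v(T)\rangle_{L^2(\Omega)}=-\int_{Q_{\omega_0}}v\,\textbf{B}^*p^\varepsilon dxdt,
\end{equation*}
so that $u^\varepsilon e^{-2s\overline\alpha}=\textbf{B}^*p^\varepsilon$ on $Q_{\omega_0}$, which is \eqref{optimalcontrol}.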
By multiplying the equations for $z^\varepsilon=z^{u^\varepsilon}$ and $p^\varepsilon$ by $p^\varepsilon$ and $z^\varepsilon$ respectively, and integrating over $\Omega$, we obtain:
$$\frac{d}{dt}\langle z^\varepsilon,p^\varepsilon\rangle_{L^2(\Omega)}=\langle (A+A_0+C)z^\varepsilon+Bu^\varepsilon,p^\varepsilon\rangle_{L^2(\Omega)}-\langle (A+A_0+C)^*p^\varepsilon,z^\varepsilon\rangle_{L^2(\Omega)}.$$
We integrate on $[0,T]$ and use the observability inequality \eqref{obsineq} to get
\begin{equation*}\begin{aligned}
&\frac{1}{\varepsilon}\|z^\varepsilon(T,\cdot)\|_{L^2(\Omega)}^2+\langle u^\varepsilon,B^*p^\varepsilon\rangle_{L^2(Q)}=-\langle z^\varepsilon(0,\cdot),p^\varepsilon(0,\cdot)\rangle_{L^2(\Omega)}\\
&\leq \|z^0\|_{L^2(\Omega)}\|p^\varepsilon(0,\cdot)\|_{L^2(\Omega)}\leq C\|z^0\|_{L^2(\Omega)}\left(\int_{Q_{\omega_0}}
|p^\varepsilon_0|^2e^{2s\overline\alpha}dxdt\right)^{\frac{1}{2}}.
\end{aligned}
\end{equation*}
Since $\langle u^\varepsilon,B^*p^\varepsilon\rangle_{L^2(Q)}=\int_{Q_{\omega_0}}
|p^\varepsilon_0|^2e^{2s\overline\alpha}dxdt$, using Young's inequality with appropriately balanced weights, we find that
\begin{equation}
\frac{1}{\varepsilon}\|z^\varepsilon(T,\cdot)\|_{L^2(\Omega)}^2+\frac{1}{2}\int_{Q_{\omega_0}}|p^\varepsilon_0|^2e^{2s\overline\alpha}dxdt\leq C\|z^0\|_{L^2(\Omega)}^2,
\end{equation}
which, by \eqref{optimalcontrol}, gives the following estimate for the sequence of optimal controls $(u^\varepsilon)_\varepsilon$ and for the final state:
\begin{equation}
\frac{1}{\varepsilon}\|z^\varepsilon(T,\cdot)\|_{L^2(\Omega)}^2+\frac{1}{2}\int_{Q_{\omega_0}}
|u^\varepsilon|^2e^{-2s\overline\alpha}dxdt\leq C\|z^0\|_{L^2(\Omega)}^2.
\end{equation}
Now, this $L^2$ bound for the sequence $(u^\varepsilon)_\varepsilon$ allows us to extract a subsequence, still denoted $(u^\varepsilon)_\varepsilon$, weakly convergent in $L^2(Q)$ to a limit $u^*$.
Write the corresponding solutions $(z^\varepsilon)_\varepsilon$ as
$$z^\varepsilon=w^\varepsilon+v
$$
where $w^\varepsilon$ is the solution to \eqref{linsystem} with control $u^\varepsilon$ and initial datum $w^\varepsilon(0)=0$, and $v$ is the solution to the uncontrolled homogeneous equation
$$
D_tv=\textbf{A}v+(A_0+C)v,\quad v(0)=z^\varepsilon(0)=z^0.
$$
We have that the sequence $(w^\varepsilon)_\varepsilon$ is bounded in $L^2(0,T; D(\textbf{A}))$ and the sequence of derivatives $({D_tw^{\varepsilon}})_\varepsilon$ is bounded in $L^2(0,T;L^2(\Omega))$. By Aubin's theorem we can extract a subsequence, denoted also $(w^\varepsilon)_\varepsilon$, strongly convergent in $L^2(0,T; H_0^1(\Omega))$ to $w\in L^2(0,T; H_0^1(\Omega))\cap L^2(0,T; D(\textbf{A}))$. Consequently $(z^\varepsilon)$ is strongly convergent in $L^2(0,T; H_0^1(\Omega))$ to $z\in L^2(0,T; H_0^1(\Omega)).$
We may now pass to the limit in the weak formulation of solutions to \eqref{linsystem}, \eqref{linevpb}; thus, for any test function $\bm\varphi\in [H^1_0(\Omega)]^{n+1}$, we have
\begin{equation}
\begin{cases}
\displaystyle\langle z^\varepsilon(t,\cdot),\bm\varphi\rangle_{L^2(\Omega)}-\langle z^\varepsilon(0,\cdot),\bm\varphi\rangle_{L^2(\Omega)}+\int_0^t\langle \nabla z^\varepsilon(\tau,\cdot),\nabla\bm\varphi \rangle_{L^2(\Omega)}d\tau\\ \\\displaystyle
+\int_0^t\langle(A_0+C) z^\varepsilon,\bm\varphi\rangle_{L^2(\Omega)} d\tau=\int_{(0,t)\times\omega_0}u^\varepsilon\bm\varphi dxd\tau,\\
z^\varepsilon(0,\cdot):=z^0,
\end{cases}
\end{equation}
and we find that $z\in L^2(Q)$ is a solution to problem \eqref{linevpb} with initial datum $z^0\in L^2(\Omega)$. In fact, by the Arzel\`a--Ascoli theorem, $w^\varepsilon\rightarrow w$ in $C([0,T],L^2(\Omega))$ and thus $z(T)=0$; by weak lower semicontinuity of the $L^2$ norm we also have the following estimate for the control driving the solution to 0:
\begin{equation}\label{boundL2control}
\int_{Q_{\omega_0}}|u^*|^2 e^{-2s\overline\alpha}dxdt\leq C\|z^0\|_{L^2(\Omega)}^2,
\end{equation}
where $C=C(T,s,s_1,M,\delta,\{\underline\omega_j\}_j).$
\subsubsection*{$L^\infty(Q)$- control.}
Regarding the $L^\infty$ norm estimates for the sequence $(u^\varepsilon)_{\varepsilon}$, and hence for $u^*$, we will use the results from \S\ref{secobservcarl}:
\begin{equation}\label{aaa}
\|u^\varepsilon e^{-2s\overline\alpha+(s+m_0\delta_1)\underline{\alpha}}\|_{L^{\infty}(Q_{\omega_0})}=\|p_0^\varepsilon e^{(s+m_0\delta_1)\underline{\alpha}}\|_{L^{\infty}(Q_{\omega_0})}\leq C \|z^0\|_{L^2(\Omega)}.
\end{equation}
Observe now that we could have chosen from the beginning $\lambda$ large enough so that \eqref{ordineaponderilor2} holds, and in consequence
$$2s\overline\alpha\leq(s+m_0\delta_1)\underline{\alpha}.$$
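Let us verify this implication: since $\overline\alpha,\underline\alpha<0$, the claimed inequality is equivalent to
\begin{equation*}
\frac{\;\underline\alpha\;}{\overline\alpha}\leq\frac{2s}{s+m_0\delta_1},
\end{equation*}
and, as $\frac{2s}{s+m_0\delta_1}>1$ for $s>m_0\delta_1$, this follows from \eqref{ordineaponderilor2} together with the fact that the ratio tends to $1$ as $\lambda\rightarrow+\infty$, by taking $\lambda$ large enough.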
As $-2s\overline\alpha+(s+m_0\delta_1)\underline{\alpha}\geq0$, by passing to the $L^\infty$ weak-* limit in \eqref{aaa}, we find that
\begin{equation}
\|u^*\|_{L^{\infty}(Q_{\omega_0})}\leq\|u^* e^{-2s\overline\alpha+(s+m_0\delta_1)\underline{\alpha}}\|_{L^{\infty}(Q_{\omega_0})}\leq C \|z^0\|_{L^2(\Omega)},
\end{equation}
which concludes \eqref{est_contr_l2}.
\hfill\rule{2mm}{2mm}\medskip
\section{Nonlinear system: local exact controllability}\label{seccontrtreenonlin}
We prove in this section the following local controllability result concerning system \eqref{nonlinsystem}:
\begin{theorem}\label{th_local_contr}
Suppose that $\overline y$ is a stationary state, \textit{i.e.} a solution to \eqref{nonlinelliptic}, and that the functions $f_j,j\in\overline{0,n}$, satisfy hypotheses \textit{(H1)}, \textit{(H2)}. Then, for all $\beta_0>0$ there exist $\zeta_0=\zeta_0(\beta_0)>0$ and $C=C(\beta_0,\{\underline \omega_i\}_i,\overline{y})$ such that if $\|y^u(0)-\overline y\|_{L^\infty(\Omega)}<\zeta_0$ there exists a control $u\in L^\infty(Q)$ satisfying
$$
\|u\|_{L^\infty(Q)}\le C\|y^u(0)-\overline y\|_{L^\infty(\Omega)}
$$
and
$$y^u(T,\cdot)=\overline y,$$
with
$$ \|y^u(t,\cdot)-\overline y\|_{L^\infty(\Omega)}\le \beta_0, \quad t\in[0,T].$$
\end{theorem}
\proof
The approach to the local null controllability of the system around the stationary state is based on the Kakutani fixed point theorem.
To this aim, given a solution $y$ to \eqref{nonlinsystem}, we consider the system satisfied by $z:=y-\overline{y}$, written as the linear system
\begin{equation}\label{linsys2}
\left\lbrace
\begin{array}{ll}
D_tz_{0}-\Delta z_0=c_0^z(t,x)z_0+\chi_{\omega_0}u, & (0,T)\times\Omega,\\
D_tz_{i}-\Delta z_i=a_{i0}^z(t,x)z_0+c_i^z(t,x)z_i,\, i\in\overline{1,n},\, &(0,T)\times\Omega,\\
z_0=...=z_{n}=0, &(0,T)\times\partial\Omega,\\
z(0,x)=z^0(x):=y(0,x)-\overline y(x)&x\in\Omega,
\end{array}
\right.
\end{equation}
where the nonlinearity is hidden in the coupling coefficients, which are defined by:
\begin{eqnarray}\label{linearizationcoef}
\nonumber a^z_{i0}(t,x):=\int_0^1\frac{\partial}{\partial y_0}f_i(x,\overline y_0(x)+\tau z_0(t,x),\overline y_i(x)+\tau z_i(t,x))d\tau, \, i\in\overline{1,n}\\
\nonumber c^z_{j}(t,x):=\int_0^1\frac{\partial}{\partial y_j}f_j(x,\overline y_0(x)+\tau z_0(t,x),\overline y_j(x)+\tau z_j(t,x))d\tau, \, j\in\overline{0,n}.\\
\end{eqnarray}
Observe that $\{a^0_{i0}, c^0_{j}\}_{i\in\overline{1,n},j\in\overline{0,n}}$ are the coefficients of the linearized system around the stationary solution $\overline y$, since
$$
a^0_{i0}(x)=\frac{\partial}{\partial y_0}f_i(x,\overline y_0(x) ,\overline y_i(x) ),$$
$$ c^0_{i}= \frac{\partial}{\partial y_i}f_i(x,\overline y_0(x), \overline y_i(x) ),c^0_{0}= \frac{\partial}{\partial y_0}f_0(x,\overline y_0(x) ) .
$$
We see now that hypotheses \eqref{fsuport} and \eqref{suportderiv} tell us that we may choose $M_0,\delta_0>0$ and $\underline\omega_i\subset\subset (\omega_i\cap\omega_0)\setminus\bigcup_{j\ne0,i}\omega_j$ such that
\begin{equation}\label{coef_lin_E}
\{a^0_{i0}, c^0_{j}\}_{i\in\overline{1,n},j\in\overline{0,n}}\in\mathcal{E}_{M_0,\delta_0,\{\underline\omega_i\}_i}.
\end{equation}
Let $\beta>0$ and define $\mathcal{M}_\beta$ to be:
\begin{equation}
\mathcal{M}_\beta=\lbrace \tilde{z}\in L^\infty(Q):\|\tilde{z}\|_{L^\infty(Q)}\leq\beta\rbrace.
\end{equation}
For $\tilde{z}\in\mathcal{M}_\beta$, we consider the coefficients $a^{\tilde z}_{i0}(t,x),\, c^{\tilde z}_{j}(t,x)$ defined as in
\eqref{linearizationcoef} with $z$ replaced by $\tilde z$.
Observe now that we may choose $\beta_0>0$ small enough such that if $\tilde z\in \mathcal{M}_{\beta_0}$ we have
\begin{equation}
\label{coef_z_tilde}
\{a^{\tilde z}_{i0}, c^{\tilde z}_{j}\}_{i\in\overline{1,n},j\in\overline{0,n}}\in\mathcal{E}_{2M_0,\frac{\delta_0}{2},\{\underline\omega_i\}_i}.
\end{equation}
Consider now the linear system \eqref{linsys2} with coefficients $\{a^{\tilde z}_{i0}, c^{\tilde z}_{j}\}$:
\begin{equation}\label{linsys2.1}
\left\lbrace
\begin{array}{ll}
D_tz_{0}-\Delta z_0=c_0^{\tilde z}(t,x)z_0+\chi_{\omega_0}u, & (0,T)\times\Omega,\\
D_tz_{i}-\Delta z_i=a_{i0}^{\tilde z}(t,x)z_0+c_i^{\tilde z}(t,x)z_i,\, i\in\overline{1,n},\, &(0,T)\times\Omega,\\
z_0=...=z_{n}=0, &(0,T)\times\partial\Omega,\\
z(0,x)=z^0(x)&x\in\Omega.\end{array}
\right.
\end{equation}
The linear problem \eqref{linsys2.1} may be reformulated as:
\begin{equation}\label{linsys2.2}
\left\lbrace
\begin{array}{ll}
D_tz=\textbf{A}z+\mathbf{A_0^{\tilde z}}(t)z+\mathbf{C^{\tilde z}}(t)z+\textbf{B}u, & t>0, \\
z(0)=z^0, &\\
\end{array}
\right.
\end{equation}
where $\mathbf{C^{\tilde z}}(t)z=C^{\tilde z}_0(t,\cdot)z(\cdot)$ and $\mathbf{A^{\tilde z}_0}(t)z=A^{\tilde z}_0(t,\cdot)z(\cdot)$, with $C^{\tilde z}_0(t,x)=diag(c^{\tilde z}_i(t,x))_{i=\overline{0,n}}$ and the coupling matrix $$A^{\tilde z}_0(t,x)=(0,a^{\tilde z}_{10}(t,x),\ldots,a^{\tilde z}_{n0}(t,x))^\top\cdot (1,0,\ldots,0).$$
\smallskip
Theorem \ref{thcontrol} says that for $\tilde z\in \mathcal{M}_{\beta_0}$ there exists a control $u^*=u^*(\tilde z)\in L^2(0,T;L^2(\omega_0))\cap L^\infty(Q_{\omega_0})$ satisfying the norm estimate
\begin{equation}\label{est_contr_l2_0}\begin{aligned}
& J(u^*):=\|u^*e^{-s\overline\alpha}\|_{L^2(0,T;L^2(\omega_0))}+ \|u^* \|_{L^\infty(Q_{\omega_0})}\\
& \leq C(2M_0,\delta_0/2,\{\underline\omega_i\}_i)\|z^0\|_{L^2(\Omega)},
\end{aligned}
\end{equation}
and driving the solution $z^{u^*,\tilde z}$ of the linear system \eqref{linsys2.1} to zero: $z^{u^*,\tilde z}(T)=0$.
Observe that $J$ is a norm in the space $\mathcal{U}^*:=L^2_{e^{-s\overline\alpha}}\cap L^\infty (Q_{\omega_0})$.
We will write \begin{equation}
\label{T1T2}
z^{u,\tilde z}=T_1^{\tilde z}(z^0)+T_2^{\tilde z}(u),
\end{equation}
where the first term is the solution to problem \eqref{linsys2.1} with initial datum $z^0$ and null control, and the second term is the solution to system \eqref{linsys2.1} with zero initial datum and control $u$.
Let us denote by
\begin{equation}
\label{op_S}
S_1(z^0)=e^{t\textbf{A}}z^0,\, S_2h=e^{t\textbf{A}}* h=\int_0^te^{(t-s)\textbf{A}}h(s)ds,
\end{equation}
where $h\in L^2(0,t;[L^2(\Omega)]^{n+1})$.
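The operators $S_1,S_2$ implement the variation-of-constants (Duhamel) formula. For a scalar toy problem $z'=az+h(t)$, standing in for the abstract generator $\textbf{A}$ (all constants below are hypothetical, chosen only for illustration), the formula can be checked against a direct ODE solve:

```python
import math

# Duhamel / variation-of-constants formula behind S1 and S2:
#   z' = a z + h(t), z(0) = z0  ==>  z(t) = e^{t a} z0 + int_0^t e^{(t-s) a} h(s) ds
a, z0, T = -1.5, 2.0, 1.0
h = lambda s: math.cos(3.0 * s)

def S1(t):
    return math.exp(t * a) * z0

def S2(t, n=100_000):
    # midpoint rule for the convolution (e^{t a} * h)(t)
    dt = t / n
    return dt * sum(math.exp((t - (k + 0.5) * dt) * a) * h((k + 0.5) * dt)
                    for k in range(n))

def solve_rk4(t_end, n=20_000):
    # direct Runge-Kutta solve of z' = a z + h(t), for comparison
    dt = t_end / n
    z, t = z0, 0.0
    for _ in range(n):
        k1 = a * z + h(t)
        k2 = a * (z + 0.5 * dt * k1) + h(t + 0.5 * dt)
        k3 = a * (z + 0.5 * dt * k2) + h(t + 0.5 * dt)
        k4 = a * (z + dt * k3) + h(t + dt)
        z += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return z

duhamel = S1(T) + S2(T)
direct = solve_rk4(T)
assert abs(duhamel - direct) < 1e-6
```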
With these notations
\begin{equation} \label{S-T}
z^{u,\tilde z}=T_1^{\tilde z}(z^0)+T_2^{\tilde z}(u)=S_1(z^0)+S_2(A_0^{\tilde z}z^{u,\tilde z}+C_0^{\tilde z}z^{u,\tilde z}+Bu).
\end{equation}
\medskip
Fix an initial datum $z^0\in L^\infty(\Omega)$. We define now the following set-valued map, associated to $z^0$:
\begin{equation}
\begin{aligned}
&F_{z^0}:\mathcal{ M}_{\beta_0}\rightarrow 2^{L^\infty(Q)}\\
F_{z^0}(\tilde z)&= \{z^{u,\tilde z}: u \text{ satisfies } \eqref{est_contr_l2_0} \text{ and } z^{u,\tilde z} (T)=0\}\\
&=\{T^{\tilde z}_1(z^0)+T^{\tilde z}_2(u): \, z^{u,\tilde z} (T)=0, J(u)\le K\|z^0\|_{L^2}\},
\end{aligned}
\end{equation}
where by $K$ we denoted the constant in \eqref{est_contr_l2_0}, $K=C(2M_0,\delta_0/2,\{\underline\omega_i\}_i)$.
In order to obtain local controllability of the nonlinear system it is enough to find a fixed point of $F_{z^0}$. We achieve this goal by applying the Kakutani fixed point theorem to $F_{z^0}$ in $\mathcal{ M}_{\beta_0}$; we thus have to verify the following statements:
\begin{enumerate}
\item[i)] For every $\widetilde{z}\in \mathcal M_{\beta_0}$, $F_{z^0}(\widetilde{z})$ is a nonempty, closed and convex subset of $L^\infty(Q)$;
\medskip
Observe that $z^{u^*(\tilde z),\tilde z}\in F_{z^0}(\widetilde{z})$ and thus $F_{z^0}(\widetilde{z})\not=\emptyset$. Convexity comes from the linearity of $T_2^{\tilde z}$ and the convexity of $J$.
To prove that $F_{z^0}(\widetilde{z})$ is closed, suppose $z^m\in F_{z^0}(\widetilde{z})$, $z^m\rightarrow z$ in $L^\infty$. We have to prove that $z\in F_{z^0}(\widetilde{z})$. Indeed, we have that
$$
z^m=T_1^{\tilde z}(z^0)+T_2^{\tilde z}(u^m)
$$
for some controls $u^m\in\mathcal{U}^*$ satisfying estimate $J(u^m)\le K\|z^0\|_{L^2}$.
We may now invoke the Aubin-Lions and Ascoli-Arzel\`a compactness results (see \textit{e.g.} \cite{vra_c0}), applied to the solution operator of a parabolic initial boundary value problem, to conclude that
$T_2$ is a compact operator from $L^2(0,T;L^2(\omega_0))$ to $C([0,T];[L^2(\Omega)]^{n+1})\cap L^2(0,T;[H_0^1(\Omega)]^{n+1})$. Thus, extracting a subsequence $u^m\rightharpoonup u$ weakly in $L^2(Q_{\omega_0})$, we find $$z^m\rightarrow z \text{ in } C([0,T];[L^2(\Omega)]^{n+1})\cap L^2(0,T;[H_0^1(\Omega)]^{n+1})$$ with $z(T)=0$ since $z^m(T)=0$. Thus $z\in F_{z^0}(\widetilde{z}) $.
\item[ii)] There exists $\zeta_0=\zeta_0(\beta_0)$ such that for $\|z^0\|_{L^\infty(\Omega)}<\zeta_0$ we have
$$F_{z^0}(\mathcal{ M}_{\beta_0})\subset \mathcal{ M}_{\beta_0}.$$
This follows from the a priori estimates for solutions to initial boundary value problems for parabolic systems:
$$
\|T_1^{\tilde z}(z^0)\|_{L^\infty(Q)}\le C_1(\|\tilde z\|_{L^\infty})\|z^0\|_{L^\infty(\Omega)},
$$
$$
\|T_2^{\tilde z}(u)\|_{L^\infty(Q)}\le C_2(\|\tilde z\|_{L^\infty})\|u\|_{L^\infty(Q_{\omega_0})},
$$
and from the remark that both constants depend only on the $L^\infty$ norm of the coupling coefficients, and are thus bounded uniformly for $\tilde z$ in a bounded set of $L^\infty$.
\item[iii)] The set $F_{z^0}(\mathcal{ M}_{\beta_0})$ is contained in a convex and compact subset of $\mathcal{ M}_{\beta_0}$.
Indeed, as $\mathcal{ M}_{\beta_0}$ is closed and convex, it is enough to prove that $F_{z^0}(\mathcal{ M}_{\beta_0})$ is relatively compact in the $L^\infty$ topology. For this, take a sequence $z^m\in F_{z^0}(\mathcal{ M}_{\beta_0}) $. Correspondingly, there exist $\tilde z^m\in \mathcal{ M}_{\beta_0}$ with $z^m\in F_{z^0}(\tilde z^m)$. Take corresponding controls $u^m\in\mathcal{U}^*$ such that (see the definition of $F_{z^0}$ and \eqref{S-T}):
\begin{equation}\label{eqlim}
z^m=T_1^{\tilde z^m}(z^0)+T_2^{\tilde z^m}(u^m)= S_1(z^0)+S_2(A_0^{\tilde z^m}z^m+C_0^{\tilde z^m}z^m+Bu^m).
\end{equation}
We have the following bounded sequences
\begin{itemize}
\item $\tilde z^m\in \mathcal{M}_{\beta_0}$, and so $A_0^{\tilde z^m},C_0^{\tilde z^m}$ are bounded in $L^\infty(Q)$;
\item $z^m\in \mathcal{M}_{\beta_0}$ and is thus bounded in $L^\infty(Q)$;
\item $u^m\in\mathcal{U}^*$ is bounded in $L^\infty(Q)$.
\end{itemize}
Consequently $ A_0^{\tilde z^m}z^m+C_0^{\tilde z^m}z^m+Bu^m$ is bounded in $L^p(Q)$ for every $p>1$. By parabolic regularity (see \cite{lady}), $S_2(A_0^{\tilde z^m}z^m+C_0^{\tilde z^m}z^m+Bu^m)$ is bounded in the anisotropic Sobolev space $W^{2,1}_p(Q)$ for every $1<p<\infty$. For $p$ large enough we have $W^{2,1}_p(Q)\subset C^{0,\alpha}(\overline Q)$ for some $0<\alpha<1$ (the space of H\"older continuous functions), and $C^{0,\alpha}(\overline Q)$ is compactly imbedded in $C(\overline Q)$. Consequently $(z^m)_m$ is a relatively compact sequence in $L^\infty(Q)$.
\item[iv)] $F_{z^0}$ is upper semi-continuous, \textit{i.e.} if $z^m\rightarrow z$, $\tilde z^m\rightarrow\tilde z$ in
$L^\infty$ and $z^m\in F_{z^0}(\tilde z^m)$ then $z\in F_{z^0}(\tilde z)$.
Indeed, we have (see \eqref{linearizationcoef}) that $A_0^{\tilde z^m}\rightarrow A_0^{\tilde z}$, $C_0^{\tilde z^m}\rightarrow C_0^{\tilde z}$ in $L^\infty$ and, as $ (z^m)_m$ is relatively compact in $C([0,T];[L^2(\Omega)]^{n+1})$, we may pass to the limit in \eqref{eqlim} and find that $z \in F_{z^0}(\tilde z ).$
\end{enumerate}
We now conclude the proof by the Kakutani fixed point theorem, which ensures the existence of $z\in \mathcal{ M}_{\beta_0}$ such that $z\in F_{z^0}( z )$, \textit{i.e.} there exists $\overline u\in\mathcal{U^*}$ such that $z^{\overline u,z}=z$. In conclusion, $y^{\overline u}:=\overline y+z$ is the solution to the controlled system \eqref{nonlinsystem} with control $\overline u$ satisfying $y^{\overline u}(T)=\overline y$.
\hfill\rule{2mm}{2mm}\medskip
\section{Parabolic systems with tree-like couplings. Null controllability.}\label{sec-tree}
The case of tree-type couplings is technically more involved to describe, as concerns the hypotheses needed on the supports of the coupling functions, respectively of the coupling coefficients in the linear models. These hypotheses are essential for the construction of appropriate auxiliary and weight functions in the corresponding Carleman estimates, which are established for each equation associated to a node in the graph and which, in the end, should combine into a global observability estimate.
The hypotheses we impose on the supports of the coupling coefficients allow us to treat each equation corresponding to a node of the tree as the center of a star-like system, together with the directly actuated variables and corresponding equations. The star-like sub-graphs at the same level of the tree should be, in some sense, independently actuated.
\medskip
We will say that a controlled linear parabolic system has a tree-type coupling in zero order terms if the system has the form:
\begin{equation}\label{linsystemtree}
\left\lbrace
\begin{array}{ll}
D_tz_{0}-\Delta z_0=c_0(t,x)z_0+\chi_{\omega_0}u, & \text{ in }(0,T)\times\Omega,\\
D_tz_{i}-\Delta z_i=a_{i\textbf{k}(i)}(t,x)z_{\textbf{k}(i)}+c_i(t,x)z_i,\, i\in\overline{1,n},\, &\text{ in }(0,T)\times\Omega,\\
z_0=...=z_{n}=0, &\text{ on }(0,T)\times\partial\Omega,\\
z(0,\cdot)=z^0,
\end{array}
\right.
\end{equation}
with the following assumptions on the function $\textbf{k}:\{1,\ldots,n\}\rightarrow \{0,1,\ldots,n\}$:
\begin{equation}\label{k}
\forall i\in \{1,\ldots,n\},\exists m=m(i), 1\le m\le n-1, (\textbf{k}\circ)^m(i)=\textbf{k}\circ\ldots\circ\textbf{k}(i)=0.
\end{equation}
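Condition \eqref{k} simply says that iterating $\textbf{k}$ from any node eventually reaches the directly controlled component $0$, i.e. that $\textbf{k}$ is the parent map of a tree rooted at $0$. A small routine checking this (on a hypothetical parent map, here matching the later example $\textbf{k}(1)=\textbf{k}(2)=0$, $\textbf{k}(3)=\textbf{k}(4)=1$) could read:

```python
def is_tree_coupling(k):
    """Check condition (k): for every node i in 1..n some iterate
    of the parent map k reaches the root 0 (within n-1 steps)."""
    n = len(k)                      # k maps {1,...,n} -> {0,...,n}
    for i in range(1, n + 1):
        j, steps = i, 0
        while j != 0:
            j = k[j]                # follow the parent map
            steps += 1
            if steps > n - 1:       # a cycle: condition (k) fails
                return False
    return True

# parent map of a 4-node tree: k(1)=k(2)=0, k(3)=k(4)=1
k = {1: 0, 2: 0, 3: 1, 4: 1}
assert is_tree_coupling(k)
assert not is_tree_coupling({1: 2, 2: 1})   # a 2-cycle is rejected
```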
The linear problem \eqref{linsystemtree} may be reformulated as:
\begin{equation}\label{linsystree}
\left\lbrace
\begin{array}{ll}
D_tz=\textbf{A}z+\mathbf{A_0}(t)z+\mathbf{C}(t)z+\textbf{B}u, & t>0, \\
z(0)=z^0, &\\
\end{array}
\right.
\end{equation}
where $\mathbf{C}(t)z=C_0(t,\cdot)z(\cdot)$ and $\mathbf{A_0}(t)z=A_0(t,\cdot)z(\cdot)$ with $$C_0(t,x)= diag(c_i(t,x))_{i=\overline{0,n}},$$ and the coupling matrix $$A_0(t,x)=(a_{il})_{i,l\in\overline{1,n}}=(a_{i\textbf{k}(i)}\delta_{l\textbf{k}(i)})_{i,l\in\overline{1,n}},$$
where we denoted by $\delta_{lj}$ the Kronecker symbol.
Denote by
$$
\textbf{I}_j=\textbf{k}^{-1}(j)=\{i\in\overline{1,n}:\textbf{k}(i)=j\}.
$$
Fix now a family of open subsets $\omega_i\subset\Omega,i\in\overline{1,n}$ such that
\begin{equation}
\label{controlierarhic1}
D_i:=\omega_i\cap\omega_{\textbf{k}(i)}\cap\cdots\cap\omega_{(\textbf{k}\circ)^{m(i)}(i)}\ne\emptyset,
\end{equation}
\begin{equation}\label{controlierarhic2}
D_i\setminus\bigcup_{j\ne i,\textbf{k}(j)=\textbf{k}(i)}\omega_j\ne\emptyset.
\end{equation}
Choose further a family of open subsets $\{\underline\omega_j\}_{j\in\overline{0,n}}$ with the properties
\begin{eqnarray}
\label{ci1}\underline\omega_0\subset\subset\omega_0,\quad \underline{\omega}_i\subset\subset D_i\setminus\bigcup_{l\ne i,\textbf{k}(l)=\textbf{k}(i)}\omega_l,\\
\label{ci2}\underline{\omega}_i\subset\subset\underline{\omega}_{\textbf{k}(i)}\subset\subset \underline\omega_0,\,i\in\overline{1,n}.
\end{eqnarray}
For $M,\delta>0$ and a family of open subsets $\{\underline\omega_i\}_i$ as described above, we introduce the following class of coefficient sets:
\begin{equation}\label{hyptree}
\begin{aligned}
&\mathcal{E}_{M,\delta,\{\underline\omega_i\}_i,\textbf{k}}=\biggl\{ E=\{a_{i\textbf{k}(i)},c_j\}_{i\in\overline{1,n},j\in\overline{0,n}}:a_{i\textbf{k}(i)},c_j\in L^\infty(Q), \\
&\|a_{i\textbf{k}(i)}\|_{L^\infty}, \|c_j\|_{L^\infty}\leq M, a_{i\textbf{k}(i)}=0 \text{ in } Q\setminus Q_{\omega_i},\text{ and } |a_{i\textbf{k}(i)}|\ge\delta \text{ on }Q_{\underline\omega_i} \biggr\}.
\end{aligned}
\end{equation}
In order to study controllability we consider the system adjoint to system \eqref{linsystemtree}:
\begin{equation}\label{adjlinsystemtree}
\left\lbrace
\begin{aligned}
&-D_tp_{j}-\Delta p_j-c_j(t,x)p_j=\sum_{l,\textbf{k}(l)=j}a_{lj}(t,x)p_l=:\mathcal{N}_{j}(t,x),\, j\in\overline{0,n},\, \text{ in }Q,\\
&p_0=...=p_{n}=0, \text{ on }(0,T)\times\partial\Omega,\\
\end{aligned}
\right.
\end{equation}
where for simplicity of further calculations we denoted by $$\mathcal{N}_{j}(t,x)=\sum_{l,\textbf{k}(l)=j}a_{lj}(t,x)p_l(t,x).$$
As we have seen in the previous sections, all controllability results have as an essential ingredient an appropriate Carleman inequality for the adjoint system. For obtaining such estimates it is essential to have corresponding auxiliary functions, which appear in the construction of the weights. We describe this in what follows.
Consider again open subsets
$$\tilde\omega_j\subset\subset\underline\omega_j, j\in\overline{0,n},$$
and auxiliary functions
$$\eta_j\in C^2(\overline{\Omega}),\, 0<\eta_j \text{ in }\Omega,\,\eta_j|_{\partial\Omega}=0,\{x\in\overline\Omega: |\nabla\eta_j(x)|=0\}\subset\subset\tilde\omega_j, j\in\overline{0,n}.$$
We construct now the weight functions entering the various Carleman es\-timates, with the following properties:
\begin{enumerate}
\item[i)]$\psi_{j,f}, j\in\overline{0,n},\textbf{I}_j\ne\emptyset$, $\psi_{i,s},i\in\overline{1,n}$ are defined by
\begin{equation}\label{psi-star}
\psi_{j,f}:=\eta_j+K_j, \quad \psi_{i,s}:=\eta_i+\tilde K_i
\end{equation}
for some fixed positive constants $K_j,\tilde K_i>0$ and such that for a fixed $\epsilon>0$ we have
\begin{equation}\label{psifpsis0}
\psi_{i,s}>\sup_{\overline\Omega}\psi_{j,f}+2\epsilon, \forall i\in \textbf{I}_j,\,\textbf{I}_j\ne\emptyset;
\end{equation}
\begin{equation}\label{psifpsis1}
\psi_{i,f}>\sup\{ \psi_{l,s}:\textbf{k}(l)=\textbf{k}(i)\}+2\epsilon, \forall i\in\overline{1,n}, \textbf{I}_i\ne\emptyset;
\end{equation}
\item[ii)] \begin{equation}\label{psipsitree}
\frac{\sup \psi_{j,f}}{\inf\psi_{j,f}}<\frac87, \frac{\sup \psi_{i,s}}{\inf\psi_{i,s}}<\frac87;
\end{equation}
\item[iii)] For $j\in\overline{0,n}$ such that $\textbf{I}_j\ne\emptyset$ we define
\begin{eqnarray}
\overline{\psi}_j=\sup\{\psi_{j,f}(x), \psi_{i,s}(x): i\in \textbf{I}_j, x\in\Omega\}+\epsilon,\\
\underline{\psi}_j=\inf\{\psi_{j,f}(x), \psi_{i,s}(x): i\in \textbf{I}_j, x\in\Omega\}-\epsilon.
\end{eqnarray}
\item[iv)] Denote $\overline\psi=\sup\{\overline\psi_j:\textbf{I}_j\ne\emptyset\}$, $\underline\psi=\inf\{\underline\psi_j:\textbf{I}_j\ne\emptyset\}$, and define
\begin{equation}\label{fialfabartree}
\overline\varphi_j(t)=\overline\varphi_j^\lambda(t):=\frac{e^{\lambda\overline\psi_j}}{t(T-t)},\quad
\overline\alpha_j(t)=\overline\alpha_j^\lambda(t):=\frac{e^{\lambda\overline\psi_j}-e^{1.5\lambda\overline\psi}}{t(T-t)},
\end{equation}
\begin{equation}\label{fialfabartree2}
\underline\varphi_j(t)=\underline\varphi_j^\lambda(t):=\frac{e^{\lambda\underline\psi_j}}{t(T-t)},\quad
\underline\alpha_j(t)=\underline\alpha_j^\lambda(t):=\frac{e^{\lambda\underline\psi_j}-e^{1.5\lambda\overline\psi}}{t(T-t)}.
\end{equation}
\begin{equation}\label{fialfabartree3}
\underline\alpha(t)=\frac{e^{\lambda\underline\psi}-e^{1.5\lambda\overline\psi}}{t(T-t)},\, \overline\alpha(t)=\frac{e^{\lambda\overline\psi}-e^{1.5\lambda\overline\psi}}{t(T-t)}
\end{equation}
\end{enumerate}
\begin{remark}
\label{rem_weight_order}
Observe that this construction of the weight functions ensures that
$$
\overline \psi_j<\underline\psi_i, i\in \textbf{I}_j, \textbf{I}_j\ne \emptyset,
$$
and thus, given $\theta>0$ there exists $ s(\theta)$ such that for $s>s(\theta)$ we have
\begin{equation}
e^{s\overline\alpha_j(t)}\leq \theta e^{s\underline\alpha_i(t)}, i\in \textbf{I}_j, \textbf{I}_j\ne \emptyset, t\in[0,T].
\end{equation}
\end{remark}
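The mechanism behind the remark is that the large term $e^{1.5\lambda\overline\psi}$ cancels in the quotient, leaving $e^{s\overline\alpha_j(t)}/e^{s\underline\alpha_i(t)}=\exp\bigl(s(e^{\lambda\overline\psi_j}-e^{\lambda\underline\psi_i})/(t(T-t))\bigr)$, which tends to $0$ uniformly in $t$ as $s\to\infty$. A toy numerical check, with hypothetical constants chosen only for illustration:

```python
import math

# Weights: alpha(t) = (e^{lam*psi} - e^{1.5*lam*psibar_max}) / (t*(T-t)).
# With psibar_j < psiunder_i the log of e^{s*albar_j}/e^{s*alund_i}
# is s*(e^{lam*psibar_j} - e^{lam*psiunder_i})/(t*(T-t)) < 0.
T, lam = 1.0, 1.0
psibar_j, psiunder_i, psibar_max = 1.0, 1.2, 2.0   # hypothetical values

def alpha(psi, t):
    return (math.exp(lam * psi) - math.exp(1.5 * lam * psibar_max)) / (t * (T - t))

def log_ratio(s, t):
    # the e^{1.5*lam*psibar_max} terms cancel in the difference
    return s * (alpha(psibar_j, t) - alpha(psiunder_i, t))

theta = 1e-6
s = 10.0
ts = [0.05 * k for k in range(1, 20)]              # grid in (0, T)
assert all(log_ratio(s, t) <= math.log(theta) for t in ts)
# the bound only improves as s increases
assert all(log_ratio(2 * s, t) <= log_ratio(s, t) for t in ts)
```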
The Carleman estimates we establish now in the tree coupling case are given in the following theorem:
\begin{theorem}\label{thCarltree} Suppose that the coupling coefficients in \eqref{adjlinsystemtree} satisfy $$\{a_{i\textbf{k}(i)},c_j\}_{i\in\overline{1,n},j\in\overline{0,n}}\in\mathcal{E}_{M,\delta,\{\underline\omega_i\}_i,\textbf{k}}.$$
Then there exist constants $\lambda_0,s_{0}$ such that for $\lambda>\lambda_0$ there exists a constant $C>0$ depending on $(M,\delta,\{\underline\omega_i\}_i,\lambda)$
such that, for any $ s\geq s_0$, the following inequality holds:
\begin{equation}\label{estCarlemantree}\begin{aligned}
&\int_Q(|D_tp|^2+|D^2p|^2+|D p|^2+|p|^2)e^{2s\underline\alpha}dxdt\\
&\leq C\int_{Q_{\omega_0}}|p_0|^2e^{2s\overline\alpha}dxdt\\
\end{aligned}
\end{equation}
for all $p\in H^1(0,T;L^2(\Omega))\cap L^2(0,T; H^2(\Omega))$ solution of \eqref{adjlinsystemtree}.
Moreover, there exist $m_0\in\mathbb{N}$ and $\delta_1>0$ such that we have the following $L^\infty-L^2$ Carleman estimate
\begin{equation}\label{LinftyL2Carlemantree}
\|p e^{(s+m_0\delta_1)\underline{\alpha}}\|_{L^{\infty}(Q)}\leq C \|p_0 e^{s\overline\alpha}\|_{L^2(Q_{\omega_0})}.
\end{equation}
\end{theorem}
\proof
For $j\in\overline{0,n}$ we write Carleman inequalities separately for the case $\textbf{I}_j\ne\emptyset$ and for the case $\textbf{I}_j=\emptyset$.
If $j\in\overline{0,n}$ is such that $\textbf{I}_j\ne\emptyset$, we treat the equations satisfied by $p_j$ and $p_l,l\in \textbf{I}_j$ as a nonhomogeneous adjoint system, as in the star-like couplings \eqref{withsource}, while in the case $\textbf{I}_j=\emptyset$ we have to deal with homogeneous parabolic equations:
\begin{equation}\label{adjstarnonhom}
\left\lbrace
\begin{array}{ll}
-D_tp_{j}-\Delta p_j-c_j(t,x)p_j=\sum_{l,\textbf{k}(l)=j}a_{lj}(t,x)p_l,\, &\text{ in }(0,T)\times\Omega,\\
-D_tp_{l}-\Delta p_l-c_l(t,x)p_l=\mathcal{N}_{l}(t,x),&l\in \textbf{I}_j.
\end{array}
\right.
\end{equation}
For the case $\textbf{I}_j\ne\emptyset$, a Carleman estimate, which is an immediate consequence of the intermediate estimate \eqref{estCarleman1}, states that there exist $\overline s_j$ and $C>0$ not depending on $s$ such that for $s>\overline s_j$ we have
\begin{equation}\label{estCarlemantree1}\begin{aligned}
&\int_Q(|D_tp_j|^2+|D^2p_j|^2+|D p_j|^2+|p_j|^2)e^{2s\underline\alpha_j}dxdt\\
&+\int_Q\left[\sum_{i\in\textbf{I}_j}(|D_tp_i|^2+|D^2p_i|^2+|D p_i|^2+|p_i|^2)\right]e^{2s\underline\alpha_j}dxdt\\
&\leq C\left[\int_{Q_{\underline\omega_j}}|p_j|^2e^{2s\overline\alpha_j}dxdt+\sum_{i\in\textbf{I}_j}\int_{Q_{\underline\omega_i}}|p_i|^2e^{2s\overline\alpha_j}dxdt\right]\\
&+C\sum_{i\in\textbf{I}_j}\int_{Q}|\mathcal{ N}_i(t,x)|^2e^{2s\overline\alpha_j}dxdt\\
& \leq C\left[\int_{Q_{\underline\omega_j}}|p_j|^2e^{2s\overline\alpha_j}dxdt+\sum_{i\in\textbf{I}_j}\int_{Q_{\underline\omega_i}}|p_i|^2e^{2s\overline\alpha_i}dxdt\right]\\
&+C\sum_{i\in\textbf{I}_j,l\in \textbf{I}_i}\int_{Q}\theta|p_l(t,x)|^2e^{2s\underline\alpha_l}dxdt,
\end{aligned}
\end{equation}
where we have used Remark \ref{rem_weight_order} in order to say that $e^{2s\overline\alpha_j}\le\theta e^{2s\underline\alpha_i}\le \theta e^{2s\underline\alpha_l} $ for $\theta>0$ to be fixed later and $s>s(\theta)$ large enough.
\medskip
In the case $\textbf{I}_j=\emptyset$, we write the Carleman estimate for the homogeneous equation $$-D_tp_{j}-\Delta p_j-c_j(t,x)p_j=0.$$
So, there exist constants $\overline s_j>0$ and $C>0$ such that for $s>\overline s_j$
\begin{equation}\label{estCarlemantree2}\begin{aligned}
&\int_Q(|D_tp_j|^2+|D^2p_j|^2+|D p_j|^2+|p_j|^2)e^{2s\underline\alpha_j}dxdt\\
& \leq C\int_{Q_{\underline\omega_j}}|p_j|^2e^{2s\overline\alpha_j}dxdt.
\end{aligned}
\end{equation}
We now add estimates \eqref{estCarlemantree1} and \eqref{estCarlemantree2} and obtain, for some constant $C>0$ and $s>\max_j\overline s_j$:
\begin{equation}\label{estCarlemantree3}
\begin{aligned}
&\sum_{j\in\overline{0,n}}\int_Q(|D_tp_j|^2+|D^2p_j|^2+|D p_j|^2+|p_j|^2)e^{2s\underline\alpha_j}dxdt\leq\\
&C\left[\sum_{j\in\overline{0,n}}\int_{Q_{\underline\omega_j}}|p_j|^2e^{2s\overline\alpha_j}dxdt+
\sum_{j\in\overline{1,n}}\int_{Q}\theta|p_j(t,x)|^2e^{2s\underline\alpha_j}dxdt\right].
\end{aligned}
\end{equation}
Choosing $\theta$ small enough, we see that the integrals over $Q$ on the right-hand side may be absorbed into the left-hand side of the inequality, and we obtain
\begin{equation}\label{estCarlemantree4}
\begin{aligned}
&\sum_{j\in\overline{0,n}}\int_Q(|D_tp_j|^2+|D^2p_j|^2+|D p_j|^2+|p_j|^2)e^{2s\underline\alpha_j}dxdt\\
&\leq C\sum_{j\in\overline{0,n}}\int_{Q_{\underline\omega_j}}|p_j|^2e^{2s\overline\alpha_j}dxdt.
\end{aligned}
\end{equation}
Observe now that for $j\in\overline{1,n}$, by \eqref{k} there exist $m=m(j)$ and the sequence $j_0=j,j_1=\textbf{k}(j_0),\ldots,j_m=(\textbf{k}\circ)^m(j)=0$. Now, by \eqref{controlierarhic1}, \eqref{controlierarhic2}, \eqref{ci1}, \eqref{ci2}, and looking only at the subdomains $\underline \omega_{j_l}, l\in\overline{0,m}$, we find a sequence of equations for $l\in\overline{0,m-1}$, forming a cascade-like system:
\begin{equation}
-D_tp_{j_{l+1}}-\Delta p_{j_{l+1}}-c_{j_{l+1}}(t,x)p_{j_{l+1}}=a_{j_l,j_{l+1}}(t,x)p_{j_l},\, \text{ in }(0,T)\times\underline\omega_{j_{l+1}}.
\end{equation}
Now, as $\underline\omega_{j_l}\subset\subset\underline\omega_{j_{l+1}}$, we find, as in \S\ref{secobservcarl},
\begin{equation}
\int_{Q_{\underline\omega_{j_{l}}}} |p_{j_{l}}|^2e^{2s\overline\alpha_{j_l}}dxdt\le C \int_{Q_{\underline\omega_{j_{l+1}}}}|p_{j_{l+1}}|^2e^{2s\overline\alpha_{j_{l+1}}}dxdt.
\end{equation}
Consequently, for all $j\in\overline{1,n}$, by chaining the estimates above, we find that
\begin{equation}
\int_{Q_{\underline\omega_{j}}} |p_{j }|^2e^{2s\overline\alpha_{j }}dxdt\le C \int_{Q_{\underline\omega_{0}}} |p_{0 }|^2e^{2s\overline\alpha_{0 }}dxdt,
\end{equation}
which, plugged into \eqref{estCarlemantree4}, gives the final Carleman estimate
\begin{equation}\label{estCarlemantree5}
\begin{aligned}
&\sum_{j\in\overline{0,n}}\int_Q(|D_tp_j|^2+|D^2p_j|^2+|D p_j|^2+|p_j|^2)e^{2s\underline\alpha_j}dxdt\\
&\leq C\int_{Q_{\underline\omega_0}}|p_0|^2e^{2s\overline\alpha_0}dxdt.
\end{aligned}
\end{equation}
This gives the final conclusion in the $L^2-L^2$ framework, \eqref{estCarlemantree}.
The $L^\infty-L^2$ estimate \eqref{LinftyL2Carlemantree} follows along the same lines as in the corresponding Theorem \ref{th1obs}, using the bootstrap argument in connection with the regularity properties of the parabolic flow.\hfill\rule{2mm}{2mm}\medskip
The main result concerning controllability with one control for linear parabolic systems with tree-like couplings is the following:
\begin{theorem}\label{thcontrol_lin_tree}
Consider system \eqref{linsystemtree} with coefficients in $\mathcal{E}_{M,\delta,\{\underline\omega_i\}_i,\textbf{k}}$.
Then there exists a constant $C=C({M,\delta,\{\underline\omega_i\}_i})$ such that for all $z^0\in H$ there exists $u^*\in L^2(0,T;L^2(\omega_0))\cap L^\infty(Q_{\omega_0})$ which drives the corresponding solution to \eqref{linsystemtree} to $0$, \textit{i.e.} $z=z^{u^*}$ satisfies $z(T)=0$, and the control satisfies the norm estimate
\begin{equation}\label{est_contr_l2_tree}
\|u^*e^{-s\overline\alpha}\|_{L^2(0,T;L^2(\omega_0))}+ \|u^* \|_{L^\infty(Q_{\omega_0})}\leq C\|z^0\|_{L^2(\Omega)}.
\end{equation}
\end{theorem}
\proof
The proof is identical to that of Theorem \ref{thcontrol}, using the Carleman estimates for the linear adjoint system \eqref{adjlinsystemtree} given by Theorem \ref{thCarltree} and a corresponding observability estimate, as the one given by Remark \ref{rem_obs}.
Note here that for the $L^\infty$ estimate on the control one needs to use in the Carleman estimate a parameter $\lambda$ such that
\eqref{ordineaponderilor2} holds.
\hfill\rule{2mm}{2mm}\medskip
\medskip
Controllability of semilinear parabolic systems with tree-like couplings may be studied in analogy to the star-like case. We consider semilinear systems of parabolic equations, with tree-type couplings in the zero order terms, of the form
\begin{equation}\label{nonlinsystemtree}
\left\lbrace
\begin{array}{ll}
D_ty_{0}-\Delta y_0=\overline g_0(x)+f_0(x,y_0)+\chi_{\omega_0}u, & \text{ in }(0,T)\times\Omega,\\
D_ty_{i}-\Delta y_i=\overline g_i(x)+f_i(x,y_{\textbf{k}(i)},y_i),\, i\in\overline{1,n},\, &\text{ in }(0,T)\times\Omega,\\
y_0=...=y_{n}=0, &\text{ on }(0,T)\times\partial\Omega,\\
y(0,\cdot)=y^0,&
\end{array}
\right.
\end{equation}
where $\overline g_j\in L^\infty(\Omega),\, j\in\overline{0,n}$ and $\overline y=(\overline{y}_0,...,\overline{y}_n)\in [L^\infty(\Omega)]^{n+1}$ is a corresponding stationary solution.
We assume the following hypotheses on the nonlinearities:
\begin{enumerate}
\item[\textit{(H1')}] $f_0\in C^1(\Omega\times\mathbb{R})$, $f_i\in C^1(\Omega\times\mathbb{R}\times\mathbb{R})$, $i\in\overline{1,n}$; there exist nonempty open subsets $\omega_1,\ldots,\omega_{n}$ of
$\Omega$ satisfying \eqref{controlierarhic1}, \eqref{controlierarhic2} and
\begin{equation}
(\omega_i\cap\omega_{\textbf{k}(i)})\setminus\bigcup_{j\ne i, \textbf{k}(j)=\textbf{k}(i)}\omega_j\neq\emptyset,\,\forall i\in\overline{1,n},
\end{equation}
and for all $i\in\overline{1,n}$ we have
\begin{equation}\label{fsuport1}
f_i(x,\tau,\xi)=0 \,\forall x\in\Omega\setminus\omega_i,\, \tau,\xi\in\mathbb{R};
\end{equation}
\item[\textit{(H2')}] For a family of subdomains $\{\underline\omega_i\}_i$ satisfying \eqref{ci1}, \eqref{ci2}, defining for $i\in\overline{1,n}$ the coefficients
$$a^0_{i\textbf{k}(i)}(x):=\frac{\partial f_i}{\partial y_{\textbf{k}(i)}}(x,\overline{y}_{\textbf{k}(i)}(x),\overline{y}_i(x))$$
$$
c^0_{0}(x):=\frac{\partial f_0}{\partial y_0}(x,\overline{y}_0(x)),\, c^0_{i}(x):=\frac{\partial f_i}{\partial y_i}(x,\overline{y}_{\textbf{k}(i)}(x),\overline{y}_i(x)),
$$
we assume that for some $M_0,\delta_0>0$ we have
\begin{equation}
\{a^0_{i\textbf{k}(i)},c^0_j\}_{i\in\overline{1,n},j\in\overline{0,n}}\in\mathcal{E}_{M_0,\delta_0,\{\underline\omega_i\}_i,\textbf{k}}.
\end{equation}
\end{enumerate}
\begin{theorem}\label{th_local_contr_tree}
Suppose $\overline y$ is a stationary state of the uncontrolled ($u=0$) system \eqref{nonlinsystemtree} and that the functions $f_j,j\in\overline{0,n}$ satisfy hypotheses \textit{(H1')}, \textit{(H2')}. Then, for all $\beta_0>0$ there exist $\zeta_0=\zeta_0(\beta_0)>0$ and $C=C(\beta_0,\{\underline \omega_i\}_i,\overline{y})$ such that if $\|y^u(0)-\overline y\|_{L^\infty(\Omega)}<\zeta_0$ there exists a control $u\in L^\infty(Q)$ satisfying
$$
\|u\|_{L^\infty(Q)}\le C\|y^u(0)-\overline y\|_{L^\infty(\Omega)}
$$
and
$$y^u(T,\cdot)=\overline y,$$
with
$$ \|y(t,\cdot)-\overline y\|_{L^\infty}\le \beta_0,\,t\in[0,T]. $$
\end{theorem}
\begin{remark}
\begin{enumerate}
\item Our results remain valid if instead of the operator $\Delta$ we use general elliptic operators, which may be chosen differently in each of the equations of the system:
\begin{equation}
L_iy_i:=-\sum_{j,k=1}^N D_j(\alpha^{jk}_i D_ky_i)+\sum_{k=1}^N\beta^{k}_i D_ky_i+\gamma_iy_i\quad i=\overline{1,n},
\end{equation}
with general boundary conditions, which may also be of Neumann or Robin type. Here $ (\alpha^{jk}_i)_{j,k}$ satisfy uniform ellipticity conditions in $\Omega$. In our study we also need to impose regularity assumptions on the coefficients ($\alpha^{jk}_i\in W^{1,\infty}(\Omega)$, $\beta^{k}_i, \gamma_i\in L^\infty(\Omega)$); these regularity assumptions allow the development of the bootstrap argument based on the regularizing properties of the parabolic flow when establishing an $L^\infty$ framework for the controllability problem.
\item The hypotheses on the supports of the coupling coefficients are essential for our approach to the controllability problem. In fact, for the systems we consider, with the same type of couplings but with constant coupling coefficients, controllability no longer occurs.
Take, for example, the following system with a star-type coupling ($\alpha$ and $\beta$ are fixed real constants):
\begin{equation}\label{rem_ex1}
\left\lbrace
\begin{array}{ll}
D_tz_{0}-\Delta z_0=\chi_{\omega_0}u, & \text{in }(0,T)\times\Omega,\\
D_tz_{1}-\Delta z_1=\alpha z_0,\, &\text{in }(0,T)\times\Omega,\\
D_tz_{2}-\Delta z_2=\beta z_0,\, &\text{in }(0,T)\times\Omega,\\
z_0=z_1=z_{2}=0, &\text{on }(0,T)\times\partial\Omega.\\
\end{array}
\right.
\end{equation}
Considering the results in \cite{khodja_burgos2}, \cite{khodja_burgos1}, null controllability occurs if and only if the Kalman rank condition $\text{rank} [A_0|B]=3$ holds. However, in this situation the Kalman matrix is
$[A_0|B]=
\begin{pmatrix}
1&0&0\\
0&\alpha&0\\
0&\beta&0
\end{pmatrix}$
and its rank is $2$.
Also, if we consider the parabolic system with tree-like couplings \eqref{linsystem_tree} in \S2 Preliminaries, with constant coefficients $c_j=0$, $a_{10}=a_{20}=a_{31}=a_{41}=1$, the Kalman matrix is
$[A_0|B]=
\begin{pmatrix}
1&0&0&0&0\\
0&1&0&0&0\\
0&1&0&0&0\\
0&0&1&0&0\\
0&0&1&0&0\\
\end{pmatrix}$
and this has rank $3$; thus the system is not null controllable.
In fact one may see the results in this paper more as an extension of the results concerning cascade-like parabolic systems with nonconstant coefficients (see \cite{teresa_burgos}).
\end{enumerate}
\end{remark}
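Both rank computations in the remark are easily reproduced. The sketch below (taking $\alpha=\beta=1$ in \eqref{rem_ex1}, a choice made only for illustration) builds the columns $B, A_0B,\ldots,A_0^{d-1}B$ of the Kalman matrix and computes its rank by Gaussian elimination:

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def rank(M, eps=1e-12):
    """Rank of the matrix whose rows are the lists in M (Gaussian elimination)."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > eps), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > eps:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def kalman_rank(A, B):
    d = len(A)
    cols, v = [], B[:]
    for _ in range(d):
        cols.append(v)          # columns B, A B, ..., A^{d-1} B
        v = mat_vec(A, v)
    return rank(cols)           # rank is invariant under transposition

# star-like example with alpha = beta = 1: rank 2 < 3
A_star = [[0, 0, 0], [1, 0, 0], [1, 0, 0]]
B_star = [1, 0, 0]
assert kalman_rank(A_star, B_star) == 2

# tree-like example a_{10}=a_{20}=a_{31}=a_{41}=1: rank 3 < 5
A_tree = [[0] * 5 for _ in range(5)]
A_tree[1][0] = A_tree[2][0] = 1
A_tree[3][1] = A_tree[4][1] = 1
B_tree = [1, 0, 0, 0, 0]
assert kalman_rank(A_tree, B_tree) == 3
```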
\newpage
\label{sec:intro}
Tightness is a notion developed in the field of differential geometry, expressing the equality of the (normalized) \emph{total absolute curvature} of a submanifold with its lower bound, the \emph{sum of the Betti numbers} \cite{Kuiper84GeomTotAbsCurvTheo, Banchoff97TightSubmSmoothPoly}. It was first studied by Alexandrov \cite{Alexandrov38ClassClosedSurf}, Milnor \cite{Milnor50RelBettiHypersurfIntGaussCurv}, Chern and Lashof \cite{Chern57TotCurvImmMnf} and Kuiper \cite{Kuiper59ImmMinTotAbsCurv} and later extended to the polyhedral case by Banchoff \cite{Banchoff65TightEmb3DimPolyMnf}, Kuiper \cite{Kuiper84GeomTotAbsCurvTheo} and Kühnel \cite{Kuehnel95TightPolySubm}.
From a geometrical point of view, tightness can be understood as a generalization of the concept of convexity that applies to objects other than topological balls and their boundary manifolds since it roughly means that an embedding of a submanifold is ``as convex as possible'' according to its topology. The usual definition is the following.
\begin{definition}[tightness \cite{Kuiper84GeomTotAbsCurvTheo, Kuehnel95TightPolySubm}]
Let $\mathbb{F}$ be a field. An embedding $M \rightarrow \mathbb{E}^N$ of a compact manifold is called \emph{$k$-tight with respect to $\mathbb{F}$} if for any open or closed half-space $h\subset \mathbb{E}^N$ the induced homomorphism
$$H_i(M\cap h;\mathbb{F})\longrightarrow H_i(M;\mathbb{F})$$
\noindent is injective for all $i\leq k$. $M$ is called \emph{$\mathbb{F}$-tight} if it is $k$-tight for all $k$. The standard choice for the field of coefficients is $\mathbb{F}_2$ and an $\mathbb{F}_2$-tight embedding is called \emph{tight}.
\label{def:tighthom}
\end{definition}
With regard to PL embeddings of PL manifolds, the tightness of \emph{combinatorial manifolds} can also be defined via a purely combinatorial condition as follows. For an introduction to PL topology see \cite{Rourke72IntrPLTop}, for more recent developments in the field see \cite{Lutz05TrigMnfFewVertCombMnf, Datta07MinTrigManifolds}.
\begin{definition}[combinatorial manifold, combinatorial tightness \cite{Kuehnel95TightPolySubm}]\hfill
\begin{enumerate}[(i)]
\item A simplicial complex $K$ that has a topological manifold as its underlying set $|K|$ is called a \emph{triangulated manifold}. $K$ is called a \emph{combinatorial manifold} of dimension $d$ if all vertex links of $K$ are PL $(d-1)$-spheres, where a PL $(d-1)$-sphere is a triangulation of the $(d-1)$-sphere that carries a standard PL structure.
\item Let $\mathbb{F}$ be a field. A combinatorial manifold $K$ on $n$ vertices is called \emph{($k$-)tight w.r.t. $\mathbb{F}$} if its canonical embedding $$K\subset \Delta^{n-1}\subset \mathbb{E}^{n-1}$$ is ($k$-)tight w.r.t. $\mathbb{F}$, where $\Delta^{n-1}$ denotes the $(n-1)$-dimensional simplex.
\end{enumerate}
\label{def:tighthomcomb}
\end{definition}
In dimension $d=2$ the following are equivalent for a triangulated surface $S$ on $n$ vertices: (i) $S$ has a complete edge graph $K_n$, (ii) $S$ appears as a so-called \emph{regular case} in Heawood's Map Color Theorem \cite{Heawood90MapColThm, Ringel74MapColThm}, compare \cite[Chap.~2C]{Kuehnel95TightPolySubm} and (iii) the induced piecewise linear embedding of $S$ into Euclidean $(n-1)$-space has the two-piece property \cite{Banchoff74TightPolyKleinBottProjPlMoeb}, and it is tight \cite{Kuehnel80Tight0TightPolyhEmbSurf}, \cite[Chap.~2D]{Kuehnel95TightPolySubm}.
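For a triangulated closed surface with complete edge graph $K_n$ the face numbers are forced: $e=\binom n2$ and, since each triangle has three edges and each edge lies in two triangles, $f=\tfrac23\binom n2$, hence $\chi(S)=n-\binom n2+\tfrac23\binom n2=n-\tfrac{n(n-1)}6$. A short arithmetic check of this count (the sample values below are classical examples, recalled here only for illustration):

```python
from fractions import Fraction

def euler_char_complete_graph_surface(n):
    """Euler characteristic forced on a triangulated closed surface
    whose edge graph is the complete graph K_n:
    e = n(n-1)/2, 3f = 2e, chi = n - e + f = n - n(n-1)/6."""
    e = Fraction(n * (n - 1), 2)
    f = 2 * e / 3
    return n - e + f

assert euler_char_complete_graph_surface(4) == 2   # boundary of the tetrahedron
assert euler_char_complete_graph_surface(6) == 1   # 6-vertex real projective plane
assert euler_char_complete_graph_surface(7) == 0   # Csaszar torus

# integrality of chi restricts the admissible vertex numbers:
# n(n-1) must be divisible by 6, i.e. n = 0, 1, 3 or 4 (mod 6)
admissible = [n for n in range(4, 20) if (n * (n - 1)) % 6 == 0]
assert admissible == [4, 6, 7, 9, 10, 12, 13, 15, 16, 18, 19]
```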
K\"uhnel investigated the tightness of combinatorial triangulations of manifolds also in higher dimensions and codimensions, see \cite{Kuehnel94ManSkelConvPolyt}, \cite[Chap.~4]{Kuehnel95TightPolySubm}. It turned out that the tightness of a combinatorial triangulation is closely related to the concept of \emph{Hamiltonicity} of a polyhedral complexes (see \cite{Kuehnel91HamSurfPoly, Kuehnel95TightPolySubm}): A subcomplex $A$ of a polyhedral complex $K$ is called \emph{$k$-Hamiltonian}\footnote{This is not to be confused with the notion of a $k$-Hamiltonian graph, see \cite{Chartrand69CubeConnGraph1Ham}.} if $A$ contains the full $k$-dimensional skeleton of $K$. This generalization of the notion of a Hamiltonian circuit in a graph seems to be due to Schulz \cite{Schulz74DissMgfZell, Schulz94PolyhMnfPoly}. A Hamiltonian circuit then becomes a special case of a $0$-Hamiltonian subcomplex of a $1$-dimensional graph or of a higher-dimensional complex \cite{Ewald73HamCircSimpComp}.
A triangulated $2k$-manifold that is a $k$-Hamiltonian subcomplex of the boundary complex of some higher-dimensional simplex is a tight triangulation, as Kühnel \cite[Chap.~4]{Kuehnel95TightPolySubm} showed. Such a triangulation is also called a \emph{$(k+1)$-neighborly triangulation}, since any $k+1$ of its vertices span a common $k$-simplex. Moreover, $(k+1)$-neighborly triangulations of $2k$-manifolds are also referred to as \emph{super-neighborly} triangulations --- in analogy with neighborly polytopes: the boundary complex of a $(2k+1)$-polytope can be at most $k$-neighborly unless it is a simplex. Notice here that combinatorial $2k$-manifolds can go beyond $k$-neighborliness, depending on their topology.
With the simplex as ambient polytope there exist generalized Heawood inequalities in even dimensions $d\geq 4$ that were first conjectured by Kühnel \cite{Kuehnel94ManSkelConvPolyt, Kuehnel95TightPolySubm}, almost completely proved in \cite{Novik98UBTHomMnf} by Novik and proved by Novik and Swartz in \cite{Novik08SocBuchsMod}. As in the $2$-dimensional case, the $k$-Hamiltonian triangulations of $2k$-manifolds here appear as regular cases of the generalized Heawood inequalities.
There also exist generalized Heawood inequalities for $k$-Hamiltonian subcomplexes of cross polytopes that were first conjectured by Sparla \cite{Sparla99LBTComb2kMnf} and almost completely proved by Novik in \cite{Novik05OnFNumMnfSymm}. The subcomplexes appearing as regular cases in these inequalities admit a tight embedding into a higher dimensional cross polytope and are also referred to as \emph{nearly $(k+1)$-neighborly} as they contain all $i$-simplices, $i\leq k$, not containing one of the diagonals of the cross polytope (i.e. they are ``neighborly except for the diagonals of the cross polytope'').
For $d=2$, a regular case of Heawood's inequality corresponds to a triangulation of an abstract surface (cf. \cite{Ringel74MapColThm}). Ringel \cite{Ringel55WieManGeschlNichtoFl} and Jungerman and Ringel \cite{Jungerman80MinTrigOrientSurf} showed that all of the infinitely many regular cases of Heawood's inequality distinct from the Klein bottle do occur. As any such case yields a tight triangulation (see \cite{Kuehnel80Tight0TightPolyhEmbSurf}), there are infinitely many tight triangulations of surfaces.
In contrast, in dimensions $d\geq 3$ only a finite number of examples of tight triangulations are known (see \cite{Kuehnel99CensusTight} for a census), apart from the trivial case of the boundary of a simplex and an infinite series of triangulations of sphere bundles over the circle due to K\"uhnel \cite[5B]{Kuehnel95TightPolySubm}, \cite{Kuehnel86HigherDimCsaszar}.
Especially in odd dimensions it seems to be hard to give combinatorial conditions for the tightness of a triangulation, and no such conditions were known so far. This work presents one such condition, holding in any dimension $d\geq 4$.
\bigskip
\noindent In the course of proving the Lower Bound Conjecture (LBC) for $3$- and $4$-manifolds, D.~Walkup \cite{Walkup70LBC34Mnf} defined a class $\mathcal{K}(d)$ of ``certain especially simple'' \cite[p.~1]{Walkup70LBC34Mnf} combinatorial manifolds as the set of all combinatorial $d$-manifolds that only have \emph{stacked} $(d-1)$-spheres as vertex links as defined below.
\begin{definition}[stacked polytope, stacked sphere \cite{Walkup70LBC34Mnf}]\hfill
\begin{enumerate}[(i)]
\item A simplex is a \emph{stacked} polytope and each polytope obtained from a stacked polytope by adding a pyramid over one of its facets is again stacked.
\item A triangulation of the $d$-sphere $S^d$ is called \emph{stacked $d$-sphere} if it is combinatorially isomorphic to the boundary complex of a stacked $(d+1)$-polytope.
\end{enumerate}
\end{definition}
Thus, a stacked $d$-sphere can be understood as the combinatorial manifold obtained from the boundary complex $\partial \Delta^{d+1}$ of the $(d+1)$-simplex by successive stellar subdivisions of facets (i.e. by successively subdividing facets of a complex $K_i$, $i=0,1,2,\dots$, with inner vertices, where $K_0=\partial \Delta^{d+1}$).
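Though not needed in the sequel, the face numbers of a stacked sphere can be read off from this description (the following count is a standard observation, recorded here for convenience): each subdivision step adds one vertex and replaces one facet by $d+1$ new ones, a net gain of $d$ facets. Starting from $\partial \Delta^{d+1}$ with its $d+2$ vertices and $d+2$ facets, a stacked $d$-sphere $S$ on $n$ vertices thus has exactly
\begin{equation*}
f_d(S)=(d+2)+d(n-d-2)=dn-(d+2)(d-1)
\end{equation*}
facets. For $d=2$ this recovers the familiar count $f_2=2n-4$ that holds for every triangulated $2$-sphere.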
In this work we will give combinatorial conditions for the tightness of members of $\mathcal{K}(d)$ holding in all dimensions $d\geq 4$. The main results of this paper are the following:
In Theorem~\ref{thm:MorseTight} we show that any polar Morse function subject to a condition on the number of critical points of even and odd indices is a perfect function. This can be understood as a combinatorial analogue of Morse's lacunary principle, see Remark~\ref{rem:lacunaryprinciple}.
This result is used in Theorem~\ref{thm:Kd2NeighborlyTight} in which it is shown that every $2$-neighborly member of $\mathcal{K}(d)$ is a tight triangulation for $d\geq 4$. Thus, all \emph{tight-neighborly} triangulations as defined in \cite{Lutz08FVec3Mnf} are tight for $d\geq 4$ (see Section~\ref{sec:TightNeigh}).
\noindent The paper is organized as follows.
Section~\ref{sec:PolarMorseTight} begins with a short introduction to polyhedral Morse theory, giving rise to a definition of tightness of a triangulation in terms of (polyhedral) Morse theory, followed by an investigation of a certain family of perfect Morse functions. The latter functions can be used to give a combinatorial condition for the tightness of odd-dimensional combinatorial manifolds in terms of properties of the vertex links of such manifolds.
In Section~\ref{sec:TightnessKd}, the tightness of members of $\mathcal{K}(d)$ is discussed, followed by a discussion of the tightness of tight-neighborly triangulations for $d\geq 4$ in Section~\ref{sec:TightNeigh}. Both sections include examples of triangulations for which the stated theorems hold.
In Section~\ref{sec:Kkd}, the classes $\mathcal{K}^k(d)$ of combinatorial manifolds are introduced as a generalization of Walkup's class $\mathcal{K}(d)$ and examples of manifolds in these classes are presented. Furthermore, an analogue of Walkup's theorem \cite[Thm.~5]{Walkup70LBC34Mnf}, \cite[Prop.~7.2]{Kuehnel95TightPolySubm} for $d=6$ is proved, assuming the validity of the Generalized Lower Bound Conjecture \ref{conj:GLBC}. Finally, Section~\ref{sec:SubcomplCross} focuses on subcomplexes of cross polytopes that lie in the class $\mathcal{K}^k(d)$ for some $k$. Here, an example of a centrally symmetric triangulation of $S^4\times S^2\in \mathcal{K}^2(6)$ as a $2$-Hamiltonian subcomplex of the $8$-dimensional cross polytope is given. This triangulation is part of a conjectured series of triangulations of sphere products as tight subcomplexes of cross polytopes.
\section{Polar Morse functions and tightness}
\label{sec:PolarMorseTight}
Apart from the homological definition given in Definitions~\ref{def:tighthom} and \ref{def:tighthomcomb}, tightness can also be defined in the language of Morse theory in a natural way: On the one hand, the total absolute curvature of a smooth immersion $X$ equals, in a suitable normalization, the average number of critical points of a non-degenerate height function on $X$. On the other hand, the Morse inequalities show that the normalized total absolute curvature of a compact smooth manifold $M$ is bounded below by the rank of the total homology $H_{*}(M)$ with respect to any field of coefficients, and tightness is equivalent to equality in this bound, see \cite{Kuehnel99CensusTight}.
As an extension to classical Morse theory (see \cite{Milnor63MorseTheory} for an introduction to the field), K\"uhnel \cite{Kuehnel90TrigMnfFewVert, Kuehnel95TightPolySubm} developed what one might refer to as a ``polyhedral Morse theory''. Note that in this theory many, but not all concepts carry over from the smooth to the polyhedral case, see the survey articles \cite{Kuiper84GeomTotAbsCurvTheo} and \cite{Banchoff97TightSubmSmoothPoly} for a comparison of the two cases.
The discrete analogue of the Morse functions of classical Morse theory is defined in the polyhedral case as follows.
\begin{definition}[rsl functions, \cite{Kuehnel90TrigMnfFewVert, Kuehnel95TightPolySubm}]
Let $M$ be a combinatorial manifold of dimension $d$. A function $f: M\ensuremath{~\rightarrow~} \R$ is called \emph{regular simplex-wise linear} (\emph{rsl}, for short), if $f(v)\neq f(v')$ for any two vertices $v\neq v'$ of $M$ and $f$ is linear when restricted to any simplex of $M$. Regular simplex-wise linear functions are sometimes also referred to as \emph{Morse functions}.
\end{definition}
Notice that an rsl function is uniquely determined by its values on the set of vertices and that only vertices can be critical points of $f$ in the sense of Morse theory. With this definition at hand one can define critical points and level sets of these Morse functions as in classical Morse theory.
\begin{definition}[critical vertices, \cite{Kuehnel90TrigMnfFewVert, Kuehnel95TightPolySubm}]
Let $\mathbb{F}$ be a field, $M$ be a combinatorial $d$-manifold and let $f$ be an rsl function on $M$. A vertex $v\in M$ is called \emph{critical of index $k$ and multiplicity $m$ with respect to $f$}, if
\begin{equation*}
\dim_{ \mathbb{F}} H_k(M_v,M_v\backslash\set{v};\mathbb{F})=m>0,
\end{equation*}
\noindent where $M_v:=\set{x\in M:f(x)\leq f(v)}$ and $H_{*}$ denotes an appropriate homology theory with coefficients in $\mathbb{F}$. The \emph{number of critical points of $f$ of index $i$} (counted with multiplicity) is
\begin{equation*}
\mu_i(f; \mathbb{F}):=\sum_{v\in V(M)} \dim_{\mathbb{F}} H_i(M_v,M_v\backslash\set{v};\mathbb{F}).
\end{equation*}
\end{definition}
In the following we will be interested in a special kind of Morse function, the so-called \emph{polar} Morse functions. This term was coined by Morse, see \cite{Morse60ExistPolNonDegenFuncDiffMnf}.
\begin{definition}[polar Morse function]
Let $f$ be a Morse function on a given (necessarily connected) $d$-manifold that has exactly one critical point of index $0$ and exactly one critical point of index $d$. Then $f$ is called a \emph{polar Morse function}.
\end{definition}
Note that for a $2$-neighborly combinatorial manifold all rsl functions are polar: every vertex other than the one with minimal $f$-value has a neighbor with smaller $f$-value, and every vertex other than the one with maximal $f$-value has a neighbor with larger $f$-value. As in the classical theory, the following Morse relations hold.
\begin{theorem}[Morse relations, \cite{Kuehnel90TrigMnfFewVert, Kuehnel95TightPolySubm}]
Let $\mathbb{F}$ be a field, $M$ a combinatorial manifold of dimension $d$ and $f$ an rsl function on $M$. Then the following holds, where $\beta_i(M; \mathbb{F}):=\dim_{ \mathbb{F}} H_i(M; \mathbb{F})$ denotes the $i$-th Betti number:
\begin{enumerate}[(i)]
\item $\mu_i(f; \mathbb{F})\geq \beta_i(M; \mathbb{F})$ for all $i$,
\item $\sum_{i=0}^{d} (-1)^{i} \mu_i(f; \mathbb{F})=\chi(M)=\sum_{i=0}^{d} (-1)^{i} \beta_i(M; \mathbb{F})$,
\item $M$ is ($k$-)tight with respect to $\mathbb{F}$ if and only if $\mu_i(f; \mathbb{F})=\beta_i(M; \mathbb{F})$ for every rsl function $f$ and for all $0\leq i \leq d$ (for all $0\leq i\leq k$).
\end{enumerate}
Functions satisfying equality in (i) for all $i \leq k$ are called \emph{$k$-tight functions}. A function $f$ that satisfies equality in (i) for all $i$ is usually referred to as \emph{perfect} or \emph{tight} function, cf. \cite{Bott80MorseTheoYangMills}.
\label{thm:MorseRelations}
\end{theorem}
Note that a submanifold $M$ of $E^d$ is tight in the sense of Definition~\ref{def:tighthom} if and only if every Morse function on $M$ is a tight function, see \cite{Kuehnel90TrigMnfFewVert, Kuehnel95TightPolySubm}.
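As a simple illustration of these notions (a standard example, spelled out here for convenience), consider $M=\partial\Delta^{d+1}$ and an arbitrary rsl function $f$ with induced vertex order $v_1,\dots,v_{d+2}$. For $1<j<d+2$, the sublevel sets $M_{v_j}$ and $M_{v_j}\backslash\set{v_j}$ deformation retract to the simplices spanned by $\set{v_1,\dots,v_j}$ and $\set{v_1,\dots,v_{j-1}}$, respectively, and are thus contractible, so that $H_{*}(M_{v_j},M_{v_j}\backslash\set{v_j})=0$. Only $v_1$ and $v_{d+2}$ are critical, yielding
\begin{equation*}
\mu_0(f;\mathbb{F})=\mu_d(f;\mathbb{F})=1,\qquad \mu_i(f;\mathbb{F})=0\text{ otherwise},
\end{equation*}
which coincides with the Betti numbers of $S^d$. Hence every rsl function on $\partial\Delta^{d+1}$ is tight, in accordance with the tightness of the boundary of the simplex.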
As already mentioned in Section~\ref{sec:intro}, there exist quite a few examples of triangulations in even dimensions that are known to be tight, whereas ``for odd-dimensional manifolds it seems to be difficult to transform the tightness of a polyhedral embedding into a simple combinatorial condition'', as K\"uhnel \cite[Chap.~5]{Kuehnel95TightPolySubm} observed. Consequently, there are few examples of triangulations of odd-dimensional manifolds that are known to be tight apart from the sporadic triangulations in \cite{Kuehnel99CensusTight} and Kühnel's infinite series of $S^{d-1}\mathrel{\vcenter{\offinterlineskip\hbox{$\times$}\vskip-.55ex\hbox{$\hspace*{.3ex}\underline{\hspace*{1.2ex}}$}}} S^1$ for odd $d\geq 3$.
It is a well-known fact that in even dimensions a Morse function which only has critical points of even indices is a tight function, cf. \cite{Bott80MorseTheoYangMills}. This follows directly from the Morse relations, i.e. from the fact that $\sum_i (-1)^i\mu_i=\chi(M)$ holds for any Morse function on a manifold $M$ together with the fact that $\mu_i\geq \beta_i$. In odd dimensions, on the other hand, arguing in this way is impossible, as we always have $\mu_0\geq 1$ and the alternating sum allows the critical points to cancel each other out.
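Spelled out, the computation behind this even-dimensional argument runs as follows. If $d$ is even and $\mu_i(f;\mathbb{F})=0$ for all odd $i$, then by (i) of Theorem~\ref{thm:MorseRelations} also $\beta_i(M;\mathbb{F})=0$ for all odd $i$, and (ii) reduces to
\begin{equation*}
\sum_{i\ \mathrm{even}} \mu_i(f;\mathbb{F})=\chi(M)=\sum_{i\ \mathrm{even}} \beta_i(M;\mathbb{F}),
\end{equation*}
i.e. $\sum_{i\ \mathrm{even}}\left(\mu_i(f;\mathbb{F})-\beta_i(M;\mathbb{F})\right)=0$. As every summand is non-negative by (i), it follows that $\mu_i(f;\mathbb{F})=\beta_i(M;\mathbb{F})$ for all $i$, i.e. $f$ is a tight function.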
What will be shown in Theorem~\ref{thm:MorseTight} is that at least for a certain family of Morse functions the tightness of its members can readily be determined in arbitrary dimensions $d\geq3$.
\begin{theorem}
Let $\mathbb{F}$ be any field, $d\geq 3$ and $f$ a polar Morse function on a combinatorial $\mathbb{F}$-orientable $d$-manifold $M$ such that the number of critical points of $f$ (counted with multiplicity) satisfies
\begin{equation*}
\mu_{d-i}(f; \mathbb{F})=\mu_i(f; \mathbb{F})=\left\{
\begin{array}{ll}
0&\text{for even $2\leq i\leq \lfloor\frac{d}{2}\rfloor$}\\
k_i&\text{for odd $1\leq i\leq \lfloor\frac{d}{2}\rfloor$}\\
\end{array}
\right.,
\end{equation*}
where $k_i\geq 0$ for arbitrary $d$ and moreover $k_{\lfloor d/2\rfloor}=k_{\lceil d/2 \rceil}=0$, if $d$ is odd. Then $f$ is a tight function.
\label{thm:MorseTight}
\end{theorem}
\begin{proof}
Note that as $f$ is polar, $M$ is necessarily connected ($M$ is $\mathbb{F}$-orientable by assumption). If $d=3$, then $\mu_0=\mu_3=1$ and $\mu_1=\mu_2=0$, and the statement follows immediately. Thus, let us only consider the case $d\geq 4$ from now on.
Assume that the vertices $v_1,\dots,v_n$ of $M$ are ordered by their $f$-values, $f(v_1)< f(v_2)<\dots<f(v_n)$. In the long exact sequence for the relative homology
\begin{equation}
\begin{array}{l}
\ldots \ensuremath{~\rightarrow~} H_{i+1}(M_v,M_v\backslash\set{v}) \ensuremath{~\rightarrow~} H_{i}(M_v\backslash\set{v}) \overset{\iota_{i}^{*}}{\ensuremath{~\rightarrow~}} H_{i}(M_v) \ensuremath{~\rightarrow~} \\ \ensuremath{~\rightarrow~} H_{i}(M_v,M_v\backslash\set{v}) \ensuremath{~\rightarrow~} H_{i-1}(M_v\backslash\set{v}) \ensuremath{~\rightarrow~} \ldots
\end{array}
\label{eq:rellongexactpolar}
\end{equation}
the tightness of $f$ is equivalent to the injectivity of the inclusion map $\iota_{i}^{*}$ for all $i$ and all $v\in V(M)$. The injectivity of $\iota_{i}^{*}$ means that for any fixed $j=1,\dots,n$, the homology $H_i(M_{v_j},M_{v_{j-1}})$ (where $M_{v_{0}}=\emptyset$) persists up to the maximal level $H_i(M_{v_n})=H_i(M)$ and is mapped injectively from level $v_j$ to level $v_{j+1}$. This is obviously equivalent to the condition for tightness given in Definition~\ref{def:tighthom}. Thus, tight triangulations can also be interpreted as triangulations with the maximal persistence of the homology in all dimensions with respect to the vertex ordering induced by $f$ (see \cite{Edelsbrunner08PersistHomSurv}). Hence, showing the tightness of $f$ is equivalent to proving the injectivity of $\iota_{i}^{*}$ at all vertices $v\in V(M)$ and for all $i$, which will be done in the following. Note that for all values of $i$ for which $\mu_i=0$ nothing has to be shown, so we only have to deal with the cases where $\mu_i>0$ below.
The restriction that the number of critical points is non-zero only in every second dimension results in
\begin{equation*}
\dim_{\mathbb{F}} H_{i}(M_v,M_v\backslash\set{v})\leq\mu_i(f; \mathbb{F})=0
\end{equation*}
and
\begin{equation*}
\dim_{\mathbb{F}} H_{d-i}(M_v,M_v\backslash\set{v})\leq\mu_{d-i}(f; \mathbb{F})=0
\end{equation*}
and thus in $H_{i}(M_v,M_v\backslash\set{v})=H_{d-i}(M_v,M_v\backslash\set{v})=0$ for all even $2\leq i\leq \lfloor \frac{d}{2} \rfloor$ and all $v\in V(M)$, as $M$ is $\mathbb{F}$-orientable.
This implies a splitting of the long exact sequence (\ref{eq:rellongexactpolar}) at every second dimension, yielding exact sequences of the forms
\begin{equation*}
0 \ensuremath{~\rightarrow~} H_{i-1}(M_v\backslash\set{v}) \overset{\iota_{i-1}^{*}}{\rightarrow} H_{i-1}(M_v) \ensuremath{~\rightarrow~} H_{i-1}(M_v,M_v\backslash\set{v}) \ensuremath{~\rightarrow~} \ldots
\end{equation*}
and
\begin{equation*}
0 \ensuremath{~\rightarrow~} H_{d-i-1}(M_v\backslash\set{v}) \overset{\iota_{d-i-1}^{*}}{\rightarrow} H_{d-i-1}(M_v) \ensuremath{~\rightarrow~} H_{d-i-1}(M_v,M_v\backslash\set{v}) \ensuremath{~\rightarrow~} \ldots,
\end{equation*}
where the inclusions $\iota_{i-1}^{*}$ and $\iota_{d-i-1}^{*}$ are injective for all vertices $v\in V(M)$, again for all even $2\leq i\leq \lfloor \frac{d}{2} \rfloor$. Note in particular, that $\mu_{d-2}=0$ always holds.
For critical points of index $d-1$, the situation is similar:
\begin{equation*}
\begin{array}{l}
0\ensuremath{~\rightarrow~} \underbrace{H_{d}(M_v\backslash\set{v})}_{=0} \ensuremath{~\rightarrow~} H_{d}(M_{v}) \ensuremath{~\rightarrow~} H_{d}(M_v,M_v\backslash\set{v}) \ensuremath{~\rightarrow~} \\ \ensuremath{~\rightarrow~} H_{d-1}(M_v\backslash\set{v}) \overset{\iota_{d-1}^{*}}{\ensuremath{~\rightarrow~}} H_{d-1}(M_v)\ensuremath{~\rightarrow~} H_{d-1}(M_v,M_v\backslash\set{v}) \ensuremath{~\rightarrow~} \ldots
\end{array}
\end{equation*}
By assumption, $f$ has only one maximal vertex, as it is polar. If $v$ is not the maximal vertex with respect to $f$, then $H_{d}(M_v,M_v\backslash\set{v})=0$ and thus $\iota_{d-1}^{*}$ is injective.
If, on the other hand, $v$ is the maximal vertex with respect to $f$, one has
\begin{equation*}
H_{d}(M)\ensuremath{\cong} H_{d}(M_v,M_v\backslash\set{v}),
\end{equation*}
as $M_v=M$ in this case. Consequently, by the exactness of the sequence above, $\iota_{d-1}^{*}$ is also injective in this case.
Altogether it follows that $\iota_{i}^{*}$ is injective for all $i$ and for all vertices $v\in V(M)$ and thus that $f$ is $\mathbb{F}$-tight.
\end{proof}
As we will see in Section~\ref{sec:TightnessKd}, this is a condition that can be translated into a purely combinatorial one. Examples of manifolds to which Theorem~\ref{thm:MorseTight} applies will be given in the following sections.
\begin{remark}\hfill
\begin{enumerate}[(i)]
\item Theorem~\ref{thm:MorseTight} can be understood as a combinatorial equivalent of Morse's \emph{lacunary principle} \cite[Lecture~2]{Bott82LectMorseTheory}. The lacunary principle in the smooth case states that if $f$ is a smooth Morse function on a smooth manifold $M$, such that its Morse polynomial $M_t(f)$ contains no consecutive powers of $t$, then $f$ is a perfect Morse function.
\item Due to the Morse relations, Theorem~\ref{thm:MorseTight} puts a restriction on the topology of manifolds admitting these kinds of Morse functions. In particular, these must have vanishing Betti numbers in the dimensions where the number of critical points is zero. Note that in dimension $d=3$ the theorem thus only holds for homology $3$-spheres with $\beta_1=\beta_2=0$ and no statements concerning the tightness of triangulations with $\beta_1>0$ can be made. One way of proving the tightness of a $2$-neighborly combinatorial $3$-manifold $M$ would be to show that the mapping
\begin{equation}
H_2(M_v)\rightarrow H_2(M_v,M_v\backslash \set{v})
\label{eq:surmaph2}
\end{equation}
is surjective for all $v\in V(M)$ and all rsl functions $f$. This would result in an injective mapping $H_1(M_v\backslash\set{v})\ensuremath{~\rightarrow~} H_1(M_v)$ for all $v\in V(M)$ -- as above, by virtue of the long exact sequence for relative homology -- and thus in the $1$-tightness of $M$, which is equivalent to the ($\mathbb{F}_2$-)tightness of $M$ for $d=3$, see \cite[Prop.~3.18]{Kuehnel95TightPolySubm}. Unfortunately, there does not seem to be an easy-to-check combinatorial condition on $M$ that is sufficient for the surjectivity of the mapping (\ref{eq:surmaph2}) for all $v$ and all $f$, in contrast to the combinatorial condition for the $0$-tightness of $M$, which is just the $2$-neighborliness of $M$.
\end{enumerate}
\label{rem:lacunaryprinciple}
\end{remark}
\section{Tightness of members of $\mathcal{K}(d)$}
\label{sec:TightnessKd}
In this section we will investigate the tightness of members of Walkup's class $\mathcal{K}(d)$, the family of all combinatorial $d$-manifolds that only have stacked $(d-1)$-spheres as vertex links. For $d\leq 2$, $\mathcal{K}(d)$ is the set of all triangulated $d$-manifolds. Kalai \cite{Kalai87RigidityLBT} showed that the stacking condition on the links puts a rather strong topological restriction on the members of $\mathcal{K}(d)$:
\begin{theorem}[Kalai, \cite{Kalai87RigidityLBT, Bagchi08OnWalkupKd}]
Let $d\geq 4$. Then $M$ is a connected member of $\mathcal{K}(d)$ if and only if $M$ is obtained from
a stacked $d$-sphere by $\beta_1(M)$ combinatorial handle additions.
\label{thm:KalaiKdConnected}
\end{theorem}
Here, a \emph{combinatorial handle addition} to a complex $C$ is defined as usual (see \cite{Walkup70LBC34Mnf, Kalai87RigidityLBT, Lutz08FVec3Mnf}) as the complex $C^{\psi}$ obtained from $C$ by identifying two facets $\Delta_1$ and $\Delta_2$ of $C$ such that $v\in V(\Delta_1)$ is identified with $w\in V(\Delta_2)$ only if $\operatorname{d}(v,w)\geq 3$, where $V(X)$ denotes the vertex set of a simplex $X$ and $\operatorname{d}(v,w)$ the distance of the vertices $v$ and $w$ in the $1$-skeleton of $C$ seen as an undirected graph (cf. \cite{Bagchi08MinTrigSphereBundCirc}).
In other words, Kalai's theorem states that any connected $M\in\mathcal{K}(d)$ is necessarily homeomorphic to a connected sum with summands of the form $S^1\times S^{d-1}$ and $S^1\mathrel{\vcenter{\offinterlineskip\hbox{$\times$}\vskip-.55ex\hbox{$\hspace*{.3ex}\underline{\hspace*{1.2ex}}$}}} S^{d-1}$, compare \cite{Lutz08FVec3Mnf}. Looking at $2$-neighborly members of $\mathcal{K}(d)$, the following observation concerning the embedding of the triangulation can be made.
\begin{theorem}
Let $d=2$ or $d\geq 4$. Then any $2$-neighborly member of $\mathcal{K}(d)$ yields a tight triangulation of the underlying PL manifold.
\label{thm:Kd2NeighborlyTight}
\end{theorem}
Note that since any triangulated $1$-sphere is stacked, $\mathcal{K}(2)$ is the set of all triangulated surfaces, and that any $2$-neighborly triangulation of a surface is tight. The two conditions of the manifold being $2$-neighborly and having only stacked spheres as vertex links are rather strong, as the only stacked sphere that is $k$-neighborly for $k\geq 2$ is the boundary of the simplex, see also Remark~\ref{rem:stackedneigh}. Thus, the only $k$-neighborly member of $\mathcal{K}(d)$, $k\geq 3$, $d\geq 2$, is the boundary of the $(d+1)$-simplex.
The following lemma will be needed for the proof of Theorem~\ref{thm:Kd2NeighborlyTight}.
\begin{lemma}
Let $S$ be a stacked $d$-sphere, $d\geq 3$, and $V'\subseteq V(S)$.
Then $H_{d-j}(\operatorname{span}_S(V'))=0$ for $2\leq j \leq d-1$, where $H_{*}$ denotes the simplicial homology groups.
\label{lem:StackedSphereHomology}
\end{lemma}
\begin{proof}
Write $S$ as the last member of a sequence $S_0,S_1,\dots,S_N=S$, $N\in\mathbb{N}$, where $S_0=\partial \Delta^{d+1}$ and each $S_{i+1}$ is obtained from $S_i$ by a single stacking operation.
Then $S_{i+1}$ arises from $S_i$ and the boundary $T_i$ of a new $(d+1)$-simplex by removing one facet from each of $S_i$ and $T_i$ and gluing $S_i$ and $T_i$ along the boundaries of the removed facets. This process can also be understood in terms of a bistellar $0$-move carried out on a facet of $S_i$.
Since this process does not remove any $(d-1)$-simplices from $S_i$ or $T_i$ we have
$\skel_{d-1}(S_i)\subset \skel_{d-1}(S_{i+1})$.
We prove the statement by induction on $i$. Clearly, the statement is true for $i=0$, as $S_0=\partial \Delta^{d+1}$ and $\partial\Delta^{d+1}$ is $(d+1)$-neighborly.
Now assume that the statement holds for $S_i$ and let $V_{i+1}'\subset V(S_{i+1})$. In the following we can consider the connected components $C_k$ of $\operatorname{span}_{S_{i+1}}( V_{i+1}')$ separately.
If $C_k\subset S_i$ or $C_k \subset T_i$ then the statement is true by assumption and the $(d+1)$-neighborliness of $\partial \Delta^{d+1}$, respectively. Otherwise let
$P_1:=C_k\cap S_{i}\neq \emptyset$ and $P_2:=C_k\cap T_i\neq \emptyset$. Then
\begin{equation*}
H_{d-j}(P_1)\ensuremath{\cong} H_{d-j}(P_1\cap T_i)\text{ and }H_{d-j}(P_2)\ensuremath{\cong} H_{d-j}(P_2\cap S_i).
\end{equation*}
This yields
\begin{equation*}
\begin{array}{l@{}l@{}l}
H_{d-j}(P_1\cup P_2)
&=&H_{d-j}((P_1\cup P_2)\cap S_i\cap T_i)\\
&=&H_{d-j}(\operatorname{span}_{S_i\cap T_i}( V_{i+1}') )\\
&=&H_{d-j}(\operatorname{span}_{S_i\cap T_i}( V_{i+1}' \cap V(S_i\cap T_i)))\\
&=&0,
\end{array}
\end{equation*}
as $S_i\cap T_i=\partial\Delta^{d}$, which is $(d-1)$-neighborly, so that the span of any vertex set has vanishing $(d-j)$-th homology for $2\leq j\leq d-1$.
\end{proof}
\bigskip
\begin{proof}[Proof of Theorem~\ref{thm:Kd2NeighborlyTight}]
For $d=2$, see \cite{Kuehnel95TightPolySubm} for a proof. From now on assume that $d\geq 4$. As can be shown via excision, if $M$ is a combinatorial $d$-manifold, $f: M\ensuremath{~\rightarrow~} \R$ an rsl function on $M$ and $v\in V(M)$, then
\begin{equation*}
H_{*}(M_v,M_v\backslash\set{v})\ensuremath{\cong} H_{*}(M_v\cap \operatorname{st}(v), M_v\cap \operatorname{lk}(v)).
\end{equation*}
Now let $1<i<d-1$. The long exact sequence for the relative homology
\begin{equation*}
\begin{array}{l}
\dots\ensuremath{~\rightarrow~} H_{d-i}(M_v\cap \operatorname{st}(v))\ensuremath{~\rightarrow~} H_{d-i}(M_v\cap \operatorname{st}(v), M_v\cap \operatorname{lk}(v))\ensuremath{~\rightarrow~}\\
\ensuremath{~\rightarrow~} H_{d-i-1}(M_v\cap \operatorname{lk}(v))\ensuremath{~\rightarrow~} H_{d-i-1}(M_v\cap \operatorname{st}(v))\ensuremath{~\rightarrow~}\dots
\end{array}
\end{equation*}
yields an isomorphism
\begin{equation}
H_{d-i}(M_v\cap \operatorname{st}(v), M_v\cap \operatorname{lk}(v))\ensuremath{\cong} H_{d-i-1}(M_v\cap \operatorname{lk}(v)),
\label{eq:IsomorphyHomologyStarLink}
\end{equation}
as $M_v\cap \operatorname{st}(v)$ is a cone over $M_v\cap \operatorname{lk}(v)$ and thus contractible, so that $H_{d-i}(M_v\cap \operatorname{st}(v))=H_{d-i-1}(M_v\cap \operatorname{st}(v))=0$.
Since $M\in\mathcal{K}(d)$, all vertex links in $M$ are stacked $(d-1)$-spheres and thus Lemma~\ref{lem:StackedSphereHomology} applies to the right hand side of (\ref{eq:IsomorphyHomologyStarLink}). This implies that a $d$-manifold $M\in \mathcal{K}(d)$, $d\geq 4$, cannot have critical points of index $2\leq i \leq d-2$, i.e.
$\mu_2(f; \mathbb{F})=\dots=\mu_{d-2}(f; \mathbb{F})=0$.
Furthermore, the $2$-neighborliness of $M$ implies that any rsl function on $M$ is polar. Thus, all prerequisites of Theorem~\ref{thm:MorseTight} are fulfilled, $f$ is tight, and consequently $M$ is a tight triangulation, which was to be shown.
\end{proof}
\begin{remark}
In even dimensions $d\geq 4$, Theorem~\ref{thm:Kd2NeighborlyTight} can also be proved without using Theorem~\ref{thm:MorseTight}. In this case the statement follows from the $2$-neighborliness of $M$ (which yields $\mu_0(f; \mathbb{F})=\beta_0$ and $\mu_d(f; \mathbb{F})=\beta_d$) and the Morse relations of Theorem~\ref{thm:MorseRelations}, which then yield $\mu_1(f; \mathbb{F})=\beta_1$ and $\mu_{d-1}(f; \mathbb{F})=\beta_{d-1}$ for any rsl function $f$, as $\mu_2(f; \mathbb{F})=\dots=\mu_{d-2}(f; \mathbb{F})=0$.
\end{remark}
As a consequence, the stacking condition on the links already implies the vanishing of $\beta_2,\dots,\beta_{d-2}$ (as by the Morse relations $\mu_i\geq\beta_i$), in accordance with Kalai's Theorem~\ref{thm:KalaiKdConnected}.
An example of a series of tight combinatorial manifolds is the infinite series of sphere bundles over the circle due to K\"uhnel \cite{Kuehnel86HigherDimCsaszar}. The triangulations in this series are all $2$-neighborly on $f_0=2d+3$ vertices. They are homeomorphic to $S^{d-1}\times S^1$ in even dimensions and to $S^{d-1}\mathrel{\vcenter{\offinterlineskip\hbox{$\times$}\vskip-.55ex\hbox{$\hspace*{.3ex}\underline{\hspace*{1.2ex}}$}}} S^1$ in odd dimensions. Furthermore, all links are stacked, and thus Theorem~\ref{thm:Kd2NeighborlyTight} applies, providing an alternative proof of the tightness of the triangulations in this series.
\begin{corollary}
All members $M^d$ of the series of triangulations in \cite{Kuehnel86HigherDimCsaszar} are $2$-neighborly and lie in the class $\mathcal{K}(d)$. They are thus tight triangulations by Theorem~\ref{thm:Kd2NeighborlyTight}.
\end{corollary}
Another example of a triangulation to which Theorem~\ref{thm:Kd2NeighborlyTight} applies is due to Bagchi and Datta \cite{Bagchi08OnWalkupKd}. It is an example of a so-called \emph{tight-neighborly} triangulation as defined by Lutz, Sulanke and Swartz \cite{Lutz08FVec3Mnf}. For this class of manifolds, Theorem~\ref{thm:Kd2NeighborlyTight} holds for $d=2$ and $d\geq 4$. Tight-neighborly triangulations will be described in more detail in the next section.
\section{Tight-neighborly triangulations}
\label{sec:TightNeigh}
Besides the class $\mathcal{K}(d)$ of combinatorial $d$-manifolds with stacked spheres as vertex links, Walkup \cite{Walkup70LBC34Mnf} also defined the class $\mathcal{H}(d)$. This is the family of all simplicial complexes that can be obtained from the boundary complex of the $(d+1)$-simplex by a series of zero or more of the following three operations: (i) stellar subdivision of facets, (ii) combinatorial handle additions and (iii) forming connected sums of objects obtained from the first two operations.
The two classes are closely related. Obviously, the relation $\mathcal{H}(d)\subset \mathcal{K}(d)$ holds. Kalai \cite{Kalai87RigidityLBT} showed the reverse inclusion $\mathcal{K}(d)\subset \mathcal{H}(d)$ for $d\geq 4$.
Note that the condition of the $2$-neighborliness of an $M\in \mathcal{K}(d)$ in Theorem~\ref{thm:Kd2NeighborlyTight} is equivalent to the first Betti number $\beta_1(M)$ being maximal with respect to the vertex number $f_0(M)$ of $M$ (as a $2$-neighborly triangulation does not allow any handle additions). Such manifolds are exactly the cases of equality in \cite[Th.~5.2]{Novik08SocBuchsMod}. In their recent work \cite{Lutz08FVec3Mnf}, Lutz, Sulanke and Swartz prove the following\footnote{The author would like to thank Frank Lutz for fruitful discussions about tight-neighborly triangulations and pointing him to the work \cite{Lutz08FVec3Mnf} in the first place.}
\begin{theorem}[Theorem $5$ in \cite{Lutz08FVec3Mnf}]
Let $\mathbb{K}$ be any field and let $M$ be a $\mathbb{K}$-orientable triangulated $d$-manifold with $d\geq 3$. Then
\begin{equation}
f_0(M)\geq \left\lceil \frac{1}{2}\left(2d+3+\sqrt{1+4(d+1)(d+2)\beta_1(M;\mathbb{K})} \right) \right\rceil.
\label{eq:TightNeighEq}
\end{equation}
\end{theorem}
\begin{remark}
As pointed out in \cite{Lutz08FVec3Mnf}, for $d=2$ inequality (\ref{eq:TightNeighEq}) coincides with Heawood's inequality
\begin{equation*}
f_0(M)\geq \left\lceil \frac{1}{2}\left(7+\sqrt{49-24\chi(M)}\right)\right\rceil,
\end{equation*}
if one replaces $\beta_1(M;\mathbb{K})$ by $\frac{1}{2}\beta_1(M;\mathbb{K})$ to account for the double counting of the middle Betti number $\beta_1(M;\mathbb{K})$ of surfaces by Poincar\'{e} duality. Inequality (\ref{eq:TightNeighEq}) can also be written in the form
\begin{equation*}
\binom{f_0 -d -1}{2}\geq \binom{d+2}{2}\beta_1.
\end{equation*}
Thus, Theorem $5$ in \cite{Lutz08FVec3Mnf} settles K\"uhnel's conjectured bounds
\begin{equation*}
\binom{f_0-d+j-2}{j+1}\geq \binom{d+2}{j+1} \beta_j\quad\text{with}\quad 1\leq j\leq \lfloor \frac{d-1}{2} \rfloor
\end{equation*}
in the case $j=1$.
\end{remark}
For $\beta_1=1$, the bound (\ref{eq:TightNeighEq}) coincides with the Brehm-K\"uhnel bound $f_0\geq 2d+4-j$ for $(j-1)$-connected but not $j$-connected $d$-manifolds in the case $j=1$, see \cite{Brehm87CombMnfFewVert}. Inequality (\ref{eq:TightNeighEq}) is sharp by the series of vertex minimal triangulations of sphere bundles over the circle presented in \cite{Kuehnel86HigherDimCsaszar}.
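As a quick numerical sanity check (the following Python fragment is ours and purely illustrative, not part of the cited works), the bound (\ref{eq:TightNeighEq}) reproduces the known equality cases and, for $\beta_1\geq 1$, agrees with its binomial reformulation:

```python
from math import ceil, comb

def f0_bound(d, b1):
    """Right-hand side of the vertex bound (eq:TightNeighEq)."""
    return ceil((2 * d + 3 + (1 + 4 * (d + 1) * (d + 2) * b1) ** 0.5) / 2)

# equality cases: Walkup's 9-vertex triangulation (d=3, b1=1), Kuehnel's
# series n = 2d+3 for b1 = 1, and Bagchi-Datta's M^4_15 (d=4, b1=3)
assert f0_bound(3, 1) == 9
assert f0_bound(4, 3) == 15
assert all(f0_bound(d, 1) == 2 * d + 3 for d in range(2, 20))

# for b1 >= 1 the bound is the least n with C(n-d-1,2) >= C(d+2,2) * b1
for d in range(2, 10):
    for b1 in range(1, 10):
        n = f0_bound(d, b1)
        assert comb(n - d - 1, 2) >= comb(d + 2, 2) * b1 > comb(n - d - 2, 2)
```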
Triangulations of connected sums of sphere bundles $(S^{2}\times S^1)^{\#k}$ and $(S^{2}\mathrel{\vcenter{\offinterlineskip\hbox{$\times$}\vskip-.55ex\hbox{$\hspace*{.3ex}\underline{\hspace*{1.2ex}}$}}} S^1)^{\#k}$ attaining equality in (\ref{eq:TightNeighEq}) for $d=3$ were discussed in \cite{Lutz08FVec3Mnf}. Note that such triangulations are necessarily $2$-neighborly.
\begin{definition}[tight-neighborly triangulation, \cite{Lutz08FVec3Mnf}]
Let $d\geq 2$ and let $M$ be a triangulation of $(S^{d-1}\times S^1)^{\#k}$ or $(S^{d-1}\mathrel{\vcenter{\offinterlineskip\hbox{$\times$}\vskip-.55ex\hbox{$\hspace*{.3ex}\underline{\hspace*{1.2ex}}$}}} S^1)^{\#k}$ attaining equality in (\ref{eq:TightNeighEq}). Then $M$ is called a \emph{tight-neighborly} triangulation.
\end{definition}
For $d\geq 4$, all triangulations of $\mathbb{F}$-orientable $\mathbb{F}$-homology $d$-manifolds with equality in (\ref{eq:TightNeighEq}) lie in $\mathcal{H}(d)$ and are tight-neighborly triangulations of $(S^{d-1}\times S^1)^{\#k}$ or $(S^{d-1}\mathrel{\vcenter{\offinterlineskip\hbox{$\times$}\vskip-.55ex\hbox{$\hspace*{.3ex}\underline{\hspace*{1.2ex}}$}}} S^1)^{\#k}$ by Theorem 5.2 in \cite{Novik08SocBuchsMod}.
The authors conjectured \cite[Conj.~13]{Lutz08FVec3Mnf} that all tight-neighborly triangulations are tight in the classical sense of Definition~\ref{def:BasicTightComb} and showed that the conjecture holds in the following cases: for $\beta_1=0,1$ and any $d$ and for $d=2$ and any $\beta_1$. Indeed, the conjecture also holds for any $d\geq 4$ and any $\beta_1$ as a direct consequence of Theorem~\ref{thm:Kd2NeighborlyTight}.
\begin{corollary}
For $d\geq 4$, all tight-neighborly triangulations are tight.
\end{corollary}
\begin{proof}
For $d\geq 4$, one has $\mathcal{H}(d)=\mathcal{K}(d)$ and the statement is true for all $2$-neighborly members of $\mathcal{K}(d)$ by Theorem~\ref{thm:Kd2NeighborlyTight}.
\end{proof}
It remains to be investigated whether for vertex minimal triangulations of $d$-handlebodies, $d\geq 3$, the reverse implication is also true, i.e.\ whether for this class of triangulations the notions of tightness and tight-neighborliness are equivalent.
\begin{question}
Let $d\geq 4$ and let $M$ be a tight triangulation homeomorphic to $(S^{d-1}\times S^{1})^{\#k}$ or $(S^{d-1}\mathrel{\vcenter{\offinterlineskip\hbox{$\times$}\vskip-.55ex\hbox{$\hspace*{.3ex}\underline{\hspace*{1.2ex}}$}}} S^{1})^{\#k}$. Does this imply that $M$ is tight-neighborly?
\end{question}
As was shown in \cite{Lutz08FVec3Mnf}, at least for values of $\beta_1=0,1$ and any $d$ and for $d=2$ and any $\beta_1$ this is true.
\bigskip
One example of a triangulation for which Theorem~\ref{thm:Kd2NeighborlyTight} holds is due to Bagchi and Datta \cite{Bagchi08OnWalkupKd}. The triangulation $M^4_{15}$ of $(S^3\mathrel{\vcenter{\offinterlineskip\hbox{$\times$}\vskip-.55ex\hbox{$\hspace*{.3ex}\underline{\hspace*{1.2ex}}$}}} S^1)^{\#3}$ from \cite{Bagchi08OnWalkupKd} is a $2$-neighborly combinatorial $4$-manifold on $15$ vertices that is a member of $\mathcal{K}(4)$ with $f$-vector $f=(15,\,105,\,230,\,240,\,96)$. Since $M^4_{15}$ is tight-neighborly, we have the following corollary.
\begin{corollary}
The $4$-manifold $M^4_{15}$ given in \cite{Bagchi08OnWalkupKd} is a tight triangulation.
\end{corollary}
The next possible triples of values of $\beta_1$, $d$ and $n$ for which a 2-neighborly member of $\mathcal{K}(d)$ could exist (compare \cite{Lutz08FVec3Mnf}) are listed in Table~\ref{tab:PossTriplesKd}. Apart from the sporadic examples in dimension $4$ and the infinite series of higher dimensional analogues of Cs{\'a}sz{\'a}r's torus in arbitrary dimension $d\geq 2$ due to Kühnel \cite{Kuehnel86HigherDimCsaszar}, cf. \cite{Kuehnel96PermDiffCyc, Bagchi08MinTrigSphereBundCirc, Chestnut08EnumPropTrigSpherBund}, mentioned earlier, no further examples are known as of today.
\begin{table}
\caption{Known and open cases for $\beta_1$, $d$ and $n$ of $2$-neighborly members of $\mathcal{K}(d)$.}
\label{tab:PossTriplesKd}
\centering
\begin{tabular}{l|l|l|l|l}
$\beta_1$&$d$&$n$&top.\ type&reference\\ \hline
$0$&any $d$&$d+1$&$S^{d-1}$&$\partial \Delta^d$\\
$1$&any even $d\geq 2$&$2d+3$&$S^{d-1}\times S^1$&\cite{Kuehnel86HigherDimCsaszar} ($d=2$: \cite{Moebius86Werke, Csaszar49PolyWithoutDiags})\\
$1$&any odd $d\geq 2$&$2d+3$&$S^{d-1}\mathrel{\vcenter{\offinterlineskip\hbox{$\times$}\vskip-.55ex\hbox{$\hspace*{.3ex}\underline{\hspace*{1.2ex}}$}}} S^1$&\cite{Kuehnel86HigherDimCsaszar} ($d=3$: \cite{Walkup70LBC34Mnf, Altshuler74NeighComb3Mnf9Vert})\\
$2$&$13$&$35$&?&\\
$3$&$4$&$15$&$(S^3\mathrel{\vcenter{\offinterlineskip\hbox{$\times$}\vskip-.55ex\hbox{$\hspace*{.3ex}\underline{\hspace*{1.2ex}}$}}} S^1)^{\#3}$&\cite{Bagchi08OnWalkupKd}\\
$5$&$5$&$21$&?&\\
$8$&$10$&$44$&?&\\
\end{tabular}
\end{table}
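The entries of Table~\ref{tab:PossTriplesKd} can be recovered from the equality case of (\ref{eq:TightNeighEq}) in its binomial form; the following Python fragment (ours, purely illustrative) checks the infinite series $\beta_1=1$, $n=2d+3$ as well as the sporadic open triples:

```python
from math import comb

def is_tight_neighborly_case(b1, d, n):
    """Equality in (eq:TightNeighEq), written in its binomial form."""
    return comb(n - d - 1, 2) == comb(d + 2, 2) * b1

# the infinite series b1 = 1, n = 2d + 3 (Kuehnel's examples) ...
assert all(is_tight_neighborly_case(1, d, 2 * d + 3) for d in range(2, 20))
# ... and the sporadic triples from the table
for b1, d, n in [(2, 13, 35), (3, 4, 15), (5, 5, 21), (8, 10, 44)]:
    assert is_tight_neighborly_case(b1, d, n)
```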
Especially in (the odd) dimension $d=3$, things seem to be a bit more subtle, as already laid out in Remark~\ref{rem:lacunaryprinciple}.
As Altshuler and Steinberg \cite{Altshuler73Neigh4Poly9Vert} showed that the link of any vertex in a neighborly $4$-polytope is stacked (compare also Remark~$5$ in \cite{Kalai87RigidityLBT}), we know that the class $\mathcal{K}(3)$ is rather big compared to $\mathcal{H}(3)$. Thus, not surprisingly, the statement analogous to Theorem~\ref{thm:Kd2NeighborlyTight} is false for members of $\mathcal{K}(3)$, a counterexample being the boundary of the cyclic polytope $\partial C(4,6)\in \mathcal{K}(3)$, which is $2$-neighborly but certainly not a tight triangulation as it has empty triangles.
The only currently known non-trivial example of a tight-neighborly combinatorial $3$-manifold is a $9$-vertex triangulation $M^3$ of $S^2\mathrel{\vcenter{\offinterlineskip\hbox{$\times$}\vskip-.55ex\hbox{$\hspace*{.3ex}\underline{\hspace*{1.2ex}}$}}} S^1$, independently found by Walkup \cite{Walkup70LBC34Mnf} and Altshuler and Steinberg \cite{Altshuler74NeighComb3Mnf9Vert}. This triangulation is combinatorially unique, as was shown by Bagchi and Datta \cite{Bagchi08UniqWalkup9V3DimKleinBottle}. For $d=3$, it is open whether there exist tight-neighborly triangulations for higher values of $\beta_1\geq 2$, see \cite[Question 12]{Lutz08FVec3Mnf}.
The fact that $M^3$ is a tight triangulation is well known, see \cite{Kuehnel95TightPolySubm}. Yet, we will present here another proof of the tightness of $M^3$. The proof becomes rather easy when one looks at the $4$-polytope $P$ from whose boundary $M^3$ was constructed by a single elementary combinatorial handle addition, see also \cite{Bagchi08OnWalkupKd}.
\begin{lemma}
Walkup's $9$-vertex triangulation $M^3$ of $S^2\mathrel{\vcenter{\offinterlineskip\hbox{$\times$}\vskip-.55ex\hbox{$\hspace*{.3ex}\underline{\hspace*{1.2ex}}$}}} S^1$ is tight.
\end{lemma}
\begin{proof}
Take the stacked $4$-polytope $P$ with $f$-vector $f(P)=(13, 42, 58, 37, 9)$ from \cite{Walkup70LBC34Mnf}. Its facets are
\begin{center}
\begin{tabular}{lll}
$\langle 1\,2\,3\,4\,5 \rangle$, &
$\langle 2\,3\,4\,5\,6 \rangle$, &
$\langle 3\,4\,5\,6\,7 \rangle$,\\
$\langle 4\,5\,6\,7\,8 \rangle$, &
$\langle 5\,6\,7\,8\,9 \rangle$, &
$\langle 6\,7\,8\,9\,10 \rangle$,\\
$\langle 7\,8\,9\,10\,11 \rangle$, &
$\langle 8\,9\,10\,11\,12 \rangle$, &
$\langle 9\,10\,11\,12\,13 \rangle$.
\end{tabular}
\end{center}
As $P$ is stacked it has missing edges (called \emph{diagonals}), but no empty faces of higher dimension.
Take the boundary $\partial P$ of $P$. By construction, $P$ has no inner $i$-faces for $0\leq i\leq 2$, so that $\partial P$ has the 36 diagonals of $P$ and additionally $8$ empty tetrahedra, but no empty triangles. As $\partial P$ is a $3$-sphere, the empty tetrahedra are all homologous to zero.
Now form a $1$-handle over $\partial P$ by removing the two tetrahedra $\langle 1,2,3,4\rangle$ and $\langle 10,11,12,13\rangle$ from $\partial P$ followed by an identification of the four vertex pairs $(i, i+9)$, $1\leq i \leq 4$, where the newly identified vertices are labeled with $1,\dots,4$.
This process yields a $2$-neighborly combinatorial manifold $M^3$ with $13-4=9$ vertices and one additional empty tetrahedron $\langle 1,2,3,4\rangle$, which is the generator of $H_2(M^3)$.
As $M^3$ is $2$-neighborly, it is $0$-tight and as $\partial P$ had no empty triangles, two empty triangles in the span of any vertex subset $V'\subset V(M)$ are always homologous. Thus, $M^3$ is a tight triangulation.
\end{proof}
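The face counts appearing in the proof can be double-checked mechanically. The following Python fragment (ours, purely illustrative) extracts the boundary tetrahedra of $P$, performs the handle addition and verifies the resulting $f$-vector of $M^3$:

```python
from collections import Counter
from itertools import combinations

# facets of the stacked 4-polytope P: <1..5>, <2..6>, ..., <9..13>
P = [tuple(range(i, i + 5)) for i in range(1, 10)]

# boundary tetrahedra of P: the 3-faces lying in exactly one facet of P
count = Counter(frozenset(t) for F in P for t in combinations(F, 4))
dP = [t for t, c in count.items() if c == 1]
assert len(dP) == 29  # f(bd P) = (13, 42, 58, 29)

# handle addition: remove <1,2,3,4> and <10,11,12,13>, identify i+9 ~ i
removed = {frozenset({1, 2, 3, 4}), frozenset({10, 11, 12, 13})}
M = {frozenset(v - 9 if v >= 10 else v for v in t) for t in dP if t not in removed}

def f_vector(facets, dim):
    """Face numbers of the simplicial complex generated by the facets."""
    return [len({frozenset(s) for F in facets for s in combinations(F, j + 1)})
            for j in range(dim + 1)]

assert f_vector(M, 3) == [9, 36, 54, 27]  # 2-neighborly: f_1 = C(9,2) = 36
```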
The construction in the proof above could probably be used in the general case with $d=3$ and $\beta_1\geq 2$: one starts with a stacked $3$-sphere $M_0$ as the boundary of a stacked $4$-polytope which by construction does not contain empty $2$-faces and then successively forms handles over this boundary $3$-sphere (obtaining triangulated manifolds $M_1,\dots,M_n=M$) until the resulting triangulation $M$ is $2$-neighborly and fulfills equality in (\ref{eq:TightNeighEq}). Note that this can only be done in the regular cases of (\ref{eq:TightNeighEq}), i.e. where (\ref{eq:TightNeighEq}) admits integer solutions for the case of equality. For a list of possible configurations see \cite{Lutz08FVec3Mnf}.
\section{$k$-stacked spheres and the class $\mathcal{K}^k(d)$}
\label{sec:Kkd}
McMullen and Walkup \cite{McMullen71GeneralizedLBC} extended the notion of stacked polytopes to \emph{$k$-stacked polytopes} as simplicial $d$-polytopes that can be triangulated without introducing new $j$-faces for $0\leq j\leq d-k-1$.
\begin{definition}[$k$-stacked balls and spheres, \cite{McMullen71GeneralizedLBC, Kalai87RigidityLBT}]\hfill
A \emph{$k$-stacked $(d+1)$-ball}, $0\leq k\leq d$, is a triangulated $(d+1)$-ball that has no interior $j$-faces, $0\leq j\leq d-k$. A \emph{minimally $k$-stacked $(d+1)$-ball} is a $k$-stacked $(d+1)$-ball that is not $(k-1)$-stacked. The boundary of any (minimally) $k$-stacked $(d+1)$-ball is called a \emph{(minimally) $k$-stacked $d$-sphere}.
\label{def:kStackedSphere}
\end{definition}
Note that in this context the ordinary stacked $d$-spheres are exactly the $1$-stacked $d$-spheres. Note furthermore that a $k$-stacked $d$-sphere is obviously also $(k+l)$-stacked for $l\in\mathbb{N}$, $k+l\leq d$, compare \cite{Bagchi08LBTNormPseudoMnf}. The simplex $\Delta^{d+1}$ is the only $0$-stacked $(d+1)$-ball and the boundary of the simplex $\partial\Delta^{d+1}$ is the only $0$-stacked $d$-sphere. Keep in mind that all triangulated $d$-spheres are at least $d$-stacked \cite[Rem.~9.1]{Bagchi08LBTNormPseudoMnf}.
\begin{figure} \centering
\includegraphics[height=4cm]{2s2}
\caption{A minimally $2$-stacked $S^2$ as the boundary complex of a subdivided $3$-octahedron.}
\label{fig:2stackeds2}
\end{figure}
Figure~\ref{fig:2stackeds2} shows the boundary of an octahedron as an example of a minimally $2$-stacked $2$-sphere $S$ with $6$ vertices. The octahedron, subdivided along the inner diagonal $(5,6)$, can be regarded as a triangulated $3$-ball $B$ with $\skel_0(S)=\skel_0(B)$ and $\partial B=S$. Note that although all vertices of $B$ lie on the boundary, there is an inner edge, so that the boundary is $2$-stacked, but not $1$-stacked. In higher dimensions, examples of minimally $d$-stacked $d$-spheres exist as boundary complexes of subdivided $d$-cross polytopes with an inner diagonal.
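These stackedness claims can be verified directly from the four tetrahedra of $B$. In the following Python fragment (ours; the vertex labels, equator $1,2,3,4$ and apexes $5,6$, are our assumption on Figure~\ref{fig:2stackeds2}) the interior faces of $B$ are computed:

```python
from collections import Counter
from itertools import combinations

# B: the 3-octahedron subdivided along the inner diagonal (5,6)
B = [(1, 2, 5, 6), (2, 3, 5, 6), (3, 4, 5, 6), (1, 4, 5, 6)]

# boundary triangles: 2-faces lying in exactly one tetrahedron
count = Counter(frozenset(t) for F in B for t in combinations(F, 3))
S = [t for t, c in count.items() if c == 1]
assert len(S) == 8  # the 8 triangles of the octahedron

def faces(facets, j):
    return {frozenset(s) for F in facets for s in combinations(F, j + 1)}

# no interior vertices: B is 2-stacked
assert all(any(v <= t for t in S) for v in faces(B, 0))
# exactly one interior edge, the diagonal (5,6): B is not 1-stacked
interior_edges = [e for e in faces(B, 1) if not any(e <= t for t in S)]
assert interior_edges == [frozenset({5, 6})]
```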
Akin to the $1$-stacked case, a more geometrical characterization of $k$-stacked $d$-spheres can be given via bistellar moves (also known as Pachner moves, see \cite{Pachner87KonstrMethKombHomeo}), at least for $k\leq \lceil \frac{d}{2}\rceil$.
\begin{definition}[bistellar moves]
Let $M$ be a triangulated $d$-manifold and let $A$ be a $(d-i)$-face of $M$, $0\leq i\leq d$, such that there exists an $i$-simplex $B$ that is not a face of $M$ with $\operatorname{lk}_M(A)=\partial B$. Then a \emph{bistellar $i$-move} $\Phi_A$ on $M$ is defined by
%
\begin{equation*}
\Phi_A(M):=\left(M\backslash (A*\partial B)\right)\cup (\partial A * B),
\end{equation*}
%
where $*$ denotes the join operation for simplicial complexes. Bistellar $i$-moves with $i> \lfloor\frac{d}{2}\rfloor$ are also called \emph{reverse $(d-i)$-moves}.
\end{definition}
See Figure \ref{fig:flips} for an illustration of bistellar moves in dimension $d=3$. Note that for any bistellar move $\Phi_A(M)$, $A* B$ forms a $(d+1)$-simplex. Thus, any sequence of bistellar moves defines a sequence of $(d+1)$-simplices, which we will call the \emph{induced sequence of $(d+1)$-simplices} in the following.
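A bistellar move is easy to carry out on a facet list. The following Python sketch (ours, not part of \texttt{simpcomp}) implements $\Phi_A$ and illustrates a $0$-move and its reverse on $\partial\Delta^3$:

```python
def bistellar_move(facets, A, B):
    """Perform the bistellar move Phi_A: replace A * dB by dA * B.

    facets: iterable of vertex tuples (the d-faces of M); A: a (d-i)-face,
    B: an i-simplex not in M with lk(A) = dB, so the facets containing A
    are exactly A u (B \ {b}), b in B."""
    A, B = frozenset(A), frozenset(B)
    old = {A | (B - {b}) for b in B}   # facets of A * dB
    new = {(A - {a}) | B for a in A}   # facets of dA * B
    fs = {frozenset(F) for F in facets}
    assert old <= fs and not new & fs, "precondition of the move violated"
    return sorted(tuple(sorted(F)) for F in (fs - old) | new)

# a 0-move on the boundary of the 3-simplex: stellar subdivision of the
# facet (1,2,3) with the new vertex 5
S = [(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
S1 = bistellar_move(S, (1, 2, 3), (5,))
assert len(S1) == 6 and (1, 2, 5) in S1

# the reverse move (A and B swap roles) restores the original sphere
assert bistellar_move(S1, (5,), (1, 2, 3)) == sorted(S)
```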
The characterization of $k$-stacked $d$-spheres using bistellar moves is the following.
\begin{lemma}
For $k\leq \lceil \frac{d}{2}\rceil$, a complex $S$ obtained from the boundary of the $(d+1)$-simplex by a sequence of bistellar $i$-moves, $0\leq i\leq k-1$, is a $k$-stacked $d$-sphere.
\label{lem:kstackbistellar}
\end{lemma}
\begin{proof}
As $k\leq \lceil \frac{d}{2}\rceil$, the sequence of $(d+1)$-simplices induced by the sequence of bistellar moves is duplicate free and defines a simplicial $(d+1)$-ball $B$ with $\partial B=S$. Furthermore, $\skel_{d-k}(B)=\skel_{d-k}(S)$ holds as no bistellar move in the sequence can contribute an inner $j$-face to $B$, $0\leq j\leq d-k$. Thus, $S$ is a $k$-stacked $d$-sphere.
\end{proof}
Keep in mind, though, that this interpretation does not hold for values $k > \lceil \frac{d}{2}\rceil$: in this case the sequence of $(d+1)$-simplices induced by the sequence of bistellar moves may have duplicate entries, as opposed to the case $k\leq \lceil \frac{d}{2}\rceil$.
In terms of bistellar moves, the minimally $2$-stacked sphere in Figure~\ref{fig:2stackeds2} can be constructed as follows: Start with a solid tetrahedron and stack another tetrahedron onto one of its facets (a $0$-move). Now introduce the inner diagonal $(5,6)$ via a bistellar $1$-move. Clearly, this complex is not bistellarly equivalent to the simplex by only applying reverse $0$-moves (and thus not ($1$-)stacked) but it is bistellarly equivalent to the simplex by solely applying reverse $0$-, and $1$-moves and thus minimally $2$-stacked.
The author is one of the authors of the toolkit \texttt{simpcomp} \cite{simpcomp, simpcompISSAC} for simplicial constructions in the \texttt{GAP} system \cite{GAP4}. \texttt{simpcomp} contains a randomized algorithm that checks whether a given $d$-sphere is $k$-stacked, $k\leq \lceil \frac{d}{2}\rceil$, using the argument above.
\begin{figure}
\centering
\includegraphics[height=3.2cm]{flips}
\caption{Left: a $0$-move on a tetrahedron and its inverse $3$-move (i.e.\ a reverse $0$-move). Right: a $1$-move on two tetrahedra glued together at one triangle and its inverse $2$-move (i.e.\ a reverse $1$-move).}
\label{fig:flips}
\end{figure}
With the notion of $k$-stacked spheres at hand one can define the following generalization of Walkup's class $\mathcal{K}(d)$.
\begin{definition}[the class $\mathcal{K}^k(d)$]
Let $\mathcal{K}^k(d)$, $k\leq d$, be the family of all $d$-dimensional simplicial complexes all of whose vertex links are $k$-stacked spheres.
\end{definition}
Note that $\mathcal{K}^d(d)$ is the set of all triangulated manifolds for any $d$ and that Walkup's class $\mathcal{K}(d)$ coincides with $\mathcal{K}^1(d)$ above. In analogy to the $1$-stacked case, a $(k+1)$-neighborly member of $\mathcal{K}^k(d)$ with $d\geq 2k$ necessarily has vanishing $\beta_{1},\dots,\beta_{k-1}$. Thus, it seems reasonable to ask for the existence of a generalization of Kalai's Theorem~\ref{thm:KalaiKdConnected} to the class of $\mathcal{K}^k(d)$ for $k\geq 2$.
Furthermore, one might be tempted to ask for a generalization of Theorem~\ref{thm:Kd2NeighborlyTight} to the class $\mathcal{K}^k(d)$ for $k\geq 2$. Unfortunately, there seems to be no direct way of generalizing Theorem~\ref{thm:Kd2NeighborlyTight} to members of $\mathcal{K}^k(d)$ that would yield a combinatorial condition for the tightness of such triangulations. The key obstruction here is that a generalization of Lemma~\ref{lem:StackedSphereHomology} is impossible: while in the case of ordinary stacked spheres a bistellar $0$-move does not introduce inner simplices to the $(d-1)$-skeleton (the key argument in Lemma~\ref{lem:StackedSphereHomology}), this is not true for bistellar $i$-moves with $i\geq 1$.
Nonetheless, an analogous result to Theorem~\ref{thm:Kd2NeighborlyTight} should be true for such triangulations.
\begin{question}
Let $d\geq 4$ and $2\leq k\leq \lfloor \frac{d+1}{2}\rfloor$ and let $M$ be a $(k+1)$-neighborly combinatorial manifold such that $M\in \mathcal{K}^k(d)$. Does this imply the tightness of $M$?
\label{ques:kneightight}
\end{question}
\begin{remark}
Note that all vertex links of $(k+1)$-neighborly members of $\mathcal{K}^k(d)$ are $k$-stacked, $k$-neighborly $(d-1)$-spheres. McMullen and Walkup \cite[Sect.~3]{McMullen71GeneralizedLBC} showed that there exist $k$-stacked, $k$-neighborly $(d-1)$-spheres on $n$ vertices for any $2\leq 2k\leq d< n$. Some examples of such spheres will be given in the following. Being $k$-stacked and $k$-neighborly at the same time is a strong condition, as the two properties tend to exclude each other in the following sense: McMullen and Walkup showed that if a $d$-sphere is $k$-stacked and $k'$-neighborly with $k'>k$, then it is the boundary of the simplex. In that sense, the $k$-stacked, $k$-neighborly spheres appear as the most strongly restricted non-trivial objects of this class: the conditions in Theorem~\ref{thm:Kd2NeighborlyTight} (with $k=1$) and in Question~\ref{ques:kneightight} are the most restrictive ones that still admit non-trivial solutions.
\label{rem:stackedneigh}
\end{remark}
\begin{remark}
Most recently, Bagchi and Datta \cite{Bagchi11StellSphereTightCritCombMnf} gave a negative answer to Question~\ref{ques:kneightight} in odd dimensions $d=2k+1$ \cite[Prop.~16]{Bagchi11StellSphereTightCritCombMnf}, but could almost prove the statement for $d\neq 2k + 1$ \cite[Prop.~20]{Bagchi11StellSphereTightCritCombMnf}.
\end{remark}
\begin{table}
\caption{Some known tight triangulations and their membership in the classes $\mathcal{K}^k(d)$, cf. \cite{Kuehnel99CensusTight}, with \emph{$n$} denoting the number of vertices of the triangulation and \emph{nb.} its neighborliness.\label{tab:tighttrigkd}}
\centering
\begin{tabular}{l|l|l|l|l}
{$d$}&{top.\ type}&{$n$}&{nb.}&{$k$}\\ \hline
$4$& $\mathbb{C}P^2$ &$9$ &$3$ &$2$\\
$4$& $K3$ &$16$ &$3$ &$2$\\
$4$& $(S^3\mathrel{\vcenter{\offinterlineskip\hbox{$\times$}\vskip-.55ex\hbox{$\hspace*{.3ex}\underline{\hspace*{1.2ex}}$}}} S^1)\#(\mathbb{C}P^2)^{\#5}$& $15$ &$2$ &$2$\\
$5$& $S^3\times S^2$ &$12$ &$3$ &$2$\\
$5$& $SU(3)/SO(3)$ &$13$ &$3$ &$3$\\
$6$& $S^3\times S^3$ &$13$ &$4$ &$3$\\
\end{tabular}
\end{table}
K\"uhnel and Lutz \cite{Kuehnel99CensusTight} gave an overview of the currently known tight triangulations. The statement of Question~\ref{ques:kneightight} holds for all the triangulations listed in \cite{Kuehnel99CensusTight}. Note that there even exist $k$-neighborly triangulations in $\mathcal{K}^k(d)$ that are tight and thus fail to fulfill the prerequisites of Question~\ref{ques:kneightight} (see Table~\ref{tab:tighttrigkd}).
Although we did not succeed in proving conditions for the tightness of triangulations lying in $\mathcal{K}^k(d)$, $k\geq 2$, these classes nonetheless have interesting properties that we will investigate in the following. Also, many known tight triangulations are members of these classes, as will be shown. Our first observation is that the neighborliness of a triangulation is closely related to its membership in $\mathcal{K}^k(d)$.
\begin{lemma}
Let $k\in \mathbb{N}$ and let $M$ be a combinatorial $d$-manifold, $d\geq 2k$, that is a $(k+1)$-neighborly triangulation. Then $M\in \mathcal{K}^{d-k}(d)$.
\label{lem:NeighbStacked}
\end{lemma}
\begin{proof}
If $M$ is $(k+1)$-neighborly, then for any $v\in V(M)$, $\operatorname{lk}(v)$ is $k$-neighborly.
As $\operatorname{lk}(v)$ is PL homeomorphic to $\partial \Delta^{d}$ (since $M$ is a combinatorial manifold) there exists a $d$-ball $B$ with $\partial B=\operatorname{lk}(v)$ (cf.\ \cite{Bagchi08LBTNormPseudoMnf}). Since $\operatorname{lk}(v)$ is $k$-neighborly, $\skel_{k-1}(B)=\skel_{k-1}(\operatorname{lk}(v))$.
By Definition~\ref{def:kStackedSphere}, the link of every vertex $v\in V(M)$ then is $(d-k)$-stacked and thus $M\in \mathcal{K}^{d-k}(d)$.
\end{proof}
As pointed out in Section~\ref{sec:intro}, K\"uhnel \cite[Chap.~4]{Kuehnel95TightPolySubm} investigated $(k+1)$-neighborly triangulations of $2k$-manifolds and showed that all these are tight triangulations. By Lemma~\ref{lem:NeighbStacked}, all their vertex links are $k$-stacked spheres.
\begin{corollary}
Let $M$ be a $(k+1)$-neighborly (tight) triangulation of a $2k$-manifold. Then $M$ lies in $\mathcal{K}^k(2k)$.
\end{corollary}
In particular, this holds for many vertex minimal (tight) triangulations of $4$-manifolds.
\begin{corollary}
The known examples of the vertex-minimal tight triangulation of a $K3$-surface with $f$-vector $f=(16, 120, 560, 720, 288)$ due to Casella and K\"uhnel \cite{Casella01TrigK3MinNumVert} and the unique vertex-minimal tight triangulation of $\mathbb{C}P^2$ with $f$-vector $f=(9, 36, 84, 90, 36)$ due to K\"uhnel \cite{Kuehnel83Uniq3Nb4MnfFewVert}, cf. \cite{Kuehnel83The9VertComplProjPlane}, are $3$-neighborly triangulations that lie in $\mathcal{K}^2(4)$. \label{corr:K3CP2Tight}
\end{corollary}
\bigskip
Let us now shed some light on properties of members of $\mathcal{K}^2(6)$. First recall that there exists a \emph{Generalized Lower Bound Conjecture} (GLBC) due to McMullen and Walkup as an extension to the classical Lower Bound Theorem for triangulated spheres as follows.
\begin{conjecture}[GLBC, cf. \cite{McMullen71GeneralizedLBC, Bagchi08LBTNormPseudoMnf}]
For $d\geq 2k+1$, the face-vector $(f_0,\dots, f_d)$ of any triangulated
$d$-sphere $S$ satisfies
\begin{equation}
f_j\geq\left\{
\begin{array}{ll}
\sum_{i=-1}^{k-1} (-1)^{k-i+1} \binom{j-i-1}{j-k} \binom{d-i+1}{j-i} f_i,&\quad\text{if }k\leq j\leq d-k,\\
\sum_{i=-1}^{k-1} (-1)^{k-i+1} \left[ \binom{j-i-1}{j-k} \binom{d-i+1}{j-i}\right.&\\
-\binom{k}{d-j+1} \binom{d-i}{d-k+1}&\\
\left.+\sum_{l=d-j}^{k+1} (-1)^{k-l} \binom{l}{d-j} \binom{d-i}{d-l+1}\right]f_i,&\quad\text{if }d-k+1\leq j\leq d.
\end{array}
\right.
\end{equation}
Equality holds here for any $j$ if and only if $S$ is a $k$-stacked $d$-sphere.
\label{conj:GLBC}
\end{conjecture}
The GLBC implies the following theorem for $d=6$, which is a $6$-dimensional analogue of Walkup's theorem \cite[Thm.~5]{Walkup70LBC34Mnf}, \cite[Prop.~7.2]{Kuehnel95TightPolySubm}, see also Swartz' Theorem 4.10 in \cite{Swartz08FaceEnumSpheresMnf}.
\begin{theorem}
Assuming the validity of the Generalized Lower Bound Conjecture \ref{conj:GLBC}, for any combinatorial $6$-manifold $M$ the inequality
\begin{equation}
f_2(M)\geq 28\chi(M)-21 f_0 +6 f_1
\label{eq:Ineqf2K26}
\end{equation}
holds. If $M$ is $2$-neighborly, then
\begin{equation}
f_2(M)\geq 28\chi(M)+3f_0(f_0-8)
\label{eq:Ineqf2K26Neigh}
\end{equation}
holds. In either case equality is attained if and only if $M\in\mathcal{K}^2(6)$.
\label{conj:2Neighborly6ManifoldFvec}
\end{theorem}
\begin{proof}
Clearly,
\begin{equation}
f_3(M)=\frac{1}{4}\sum_{v\in V(M)} f_2(\operatorname{lk}(v)).
\label{eq:Eqf3f2}
\end{equation}
By applying the GLBC \ref{conj:GLBC} to all the vertex links of $M$ one obtains a lower bound on $f_2(\operatorname{lk}(v))$ for all $v\in V(M)$:
\begin{equation}
f_2(\operatorname{lk}(v))\geq 35 -15f_0(\operatorname{lk}(v)) +5 f_1(\operatorname{lk}(v)).
\label{eq:Ineq2Stacked5Sphere}
\end{equation}
Here equality is attained if and only if $\operatorname{lk}(v)$ is $2$-stacked. Combining (\ref{eq:Eqf3f2}) and (\ref{eq:Ineq2Stacked5Sphere}) yields a lower bound
\begin{equation}
\begin{array}{l@{}l@{}l}
f_3(M)&\geq&\frac{1}{4}\sum_{v\in V(M)} \left( 35 -15f_0(\operatorname{lk}(v)) +5 f_1(\operatorname{lk}(v))\right)\\
&=&\frac{5}{4}\left(7 f_0(M) - 6 f_1(M) + 3 f_2(M)\right),
\end{array}
\label{eq:Ineq5manif3}
\end{equation}
for which equality holds if and only if $M\in \mathcal{K}^2(6)$.
If we eliminate $f_4$, $f_5$ and $f_6$ from the Dehn--Sommerville equations for
combinatorial $6$-manifolds, we obtain the linear equation
\begin{equation}
35f_0(M)-15f_1(M)+5f_2(M)-f_3(M)=35\chi(M).
\label{eq:DS6Manifolds}
\end{equation}
Inserting inequality (\ref{eq:Ineq5manif3}) into (\ref{eq:DS6Manifolds}) and solving for $f_2(M)$ yields the claimed lower bounds (\ref{eq:Ineqf2K26}) and (\ref{eq:Ineqf2K26Neigh}),
\begin{equation}
\begin{array}{l@{}l@{}l}
f_2(M)&\geq&28\chi(M) - 21 f_0(M) + 6 f_1(M)\\
&=&28\chi(M)+3f_0(\underbrace{f_0(M)-8}_{\geq 0}),
\end{array}
\label{eq:f2stacked6mnf}
\end{equation}
where the $2$-neighborliness of $M$ was used in the last line.
\end{proof}
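The elimination steps of the proof can be cross-checked numerically; the following Python fragment (ours, purely illustrative) verifies the Dehn--Sommerville relation (\ref{eq:DS6Manifolds}) on $\partial\Delta^7$ and the algebraic identity used in the last line of (\ref{eq:f2stacked6mnf}):

```python
from math import comb

# f-vector of the 6-sphere bd(Delta^7): f_j = C(8, j+1), with chi = 2
f = [comb(8, j + 1) for j in range(7)]
chi = sum((-1) ** j * fj for j, fj in enumerate(f))
assert chi == 2
# the Dehn-Sommerville relation 35 f0 - 15 f1 + 5 f2 - f3 = 35 chi
assert 35 * f[0] - 15 * f[1] + 5 * f[2] - f[3] == 35 * chi

# 2-neighborly specialization: with f1 = C(f0, 2),
# -21 f0 + 6 f1 equals 3 f0 (f0 - 8)
for f0 in range(8, 40):
    assert -21 * f0 + 6 * comb(f0, 2) == 3 * f0 * (f0 - 8)
```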
For a possible $14$-vertex triangulation of $S^4\times S^2$ (with $\chi=4$) inequality (\ref{eq:f2stacked6mnf}) becomes
\begin{equation*}
f_2\geq 4\cdot 28 +3\cdot 14\cdot (14-8)=364,
\end{equation*}
but together with the trivial upper bound $f_2 \leq \binom{f_0}{3}$ this would already imply that such a triangulation necessarily is $3$-neighborly, as $\binom{14}{3}=364$.
So, merely asking for a $2$-neighborly combinatorial $S^4\times S^2$ on $14$ vertices that lies in $\mathcal{K}^2(6)$ already forces this triangulation to be $3$-neighborly. Also, the example would attain equality in the Brehm-Kühnel bound \cite{Brehm87CombMnfFewVert} as an example of a $1$-connected $6$-manifold with $14$ vertices. We strongly conjecture that this triangulation also would be tight, see Question~\ref{ques:kneightight}.
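The arithmetic behind this observation is quickly confirmed (the following check in Python is ours, purely illustrative):

```python
from math import comb

chi, f0 = 4, 14                        # a hypothetical 14-vertex S^4 x S^2
lower = 28 * chi + 3 * f0 * (f0 - 8)   # lower bound (eq:Ineqf2K26Neigh)
upper = comb(f0, 3)                    # trivial upper bound on f_2
assert lower == upper == 364           # so f_2 = C(14,3): 3-neighborly
```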
\section{Subcomplexes of the cross polytope}
\label{sec:SubcomplCross}
The $d$-dimensional cross polytope (or $d$-octahedron) $\beta^d$ is defined as the convex hull of the $2d$ points
\begin{equation*}
(0,\ldots,0,\pm1,0,\ldots,0) \in \mathbb{R}^d.
\end{equation*}
It is a simplicial and regular polytope and it is centrally symmetric with $d$ missing edges called \emph{diagonals}, each between two antipodal points of type $(0,\ldots,0,1,0,\ldots,0)$ and $(0,\ldots,0,-1,0,\ldots,0)$. Its edge graph is the complete $d$-partite graph with two vertices in each partition, sometimes denoted by $K_2 * \cdots * K_2$. See \cite{McMullen02AbstrRegPolytopes, Gruenbaum03ConvPoly, Ziegler95LectPolytopes} for properties of regular polytopes in general.
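More generally, $f_j(\beta^d)=2^{j+1}\binom{d}{j+1}$, since a $j$-face picks one endpoint from each of $j+1$ diagonals. The following Python fragment (ours, purely illustrative) verifies this and the count of $d$ diagonals for the octahedron $\beta^3$:

```python
from itertools import combinations
from math import comb

d = 3
# vertices of beta^d: the points +/- e_i, encoded as pairs (i, sign)
verts = [(i, s) for i in range(d) for s in (+1, -1)]

def proper_faces(j):
    # a j-face uses at most one endpoint of each diagonal
    return [c for c in combinations(verts, j + 1)
            if len({i for i, _ in c}) == j + 1]

for j in range(d):
    assert len(proper_faces(j)) == 2 ** (j + 1) * comb(d, j + 1)

# the missing edges are exactly the d diagonals {e_i, -e_i}
missing = [c for c in combinations(verts, 2) if len({i for i, _ in c}) == 1]
assert len(missing) == d
```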
The boundary of the $(d+1)$-cross polytope $\beta^{d+1}$ obviously is a minimally $d$-stacked $d$-sphere: it is the boundary of a minimally $d$-stacked $(d+1)$-ball given by any subdivision of $\beta^{d+1}$ along an inner diagonal.
As pointed out in Section~\ref{sec:intro}, centrally symmetric analogues of tight triangulations appear as Hamiltonian subcomplexes of cross polytopes. A \emph{centrally symmetric triangulation} is a triangulation such that there exists a combinatorial involution operating on the face lattice of the triangulation without fixed points. Any centrally symmetric triangulation thus has an even number of vertices and can be interpreted as a subcomplex of some higher dimensional cross polytope. The tightness of a centrally symmetric $(k-1)$-connected $2k$-manifold $M$ as a subcomplex of $\beta^d$ then is equivalent to $M$ being a $k$-Hamiltonian subcomplex of $\beta^d$, i.e. that $M$ is nearly $(k+1)$-neighborly, see \cite[Ch.~4]{Kuehnel95TightPolySubm}.
As it turns out, all of the known centrally symmetric triangulations of $d$-manifolds that are $k$-Hamiltonian subcomplexes of a higher dimensional cross polytope $\beta^N$ and admit a tight embedding into $\beta^N$ are members of the class $\mathcal{K}^k(d)$. This will be discussed in the following paragraphs.
\begin{corollary}
The $16$-vertex triangulation of $(S^2\times S^2)^{\#7}_{16}$ presented in \cite{Effenberger08HamSubRegPoly} is contained in $\mathcal{K}^2(4)$ and admits a tight embedding into $\beta^8$ as shown in \cite{Effenberger08HamSubRegPoly}. \label{corr:s2s27}
\end{corollary}
\begin{proof}
The triangulation $(S^2\times S^2)^{\#7}_{16}$ is a combinatorial manifold and a tight subcomplex of $\beta^8$ as shown in \cite{Effenberger08HamSubRegPoly}. Thus, each vertex link is a PL $3$-sphere. It remains to show that all vertex links are $2$-stacked.
Using \texttt{simpcomp}, we found that the vertex links can be obtained from the boundary of a $4$-simplex by a sequence of $0$- and $1$-moves. Therefore, by Lemma~\ref{lem:kstackbistellar}, the vertex links are $2$-stacked $3$-spheres. Thus, $(S^2\times S^2)^{\#7}_{16}\in \mathcal{K}^2(4)$, as claimed.
\end{proof}
The following centrally symmetric triangulation of $S^4\times S^2$ is a new example of a triangulation that can be seen as a subcomplex of a higher dimensional cross polytope.
\begin{theorem}
There exists an example of a centrally symmetric triangulation $M^6_{16}$ of $S^4\times S^2$ with $16$ vertices that is a $2$-Hamiltonian subcomplex of the $8$-octahedron $\beta^8$ and that is member of $\mathcal{K}^2(6)$. \label{thm:ExampleS2S4Beta8}
\end{theorem}
\begin{proof}
The construction of $M^6_{16}$ was done entirely with \texttt{simpcomp} and is as follows. First, a $24$-vertex triangulation $\tilde{M}^6$ of $S^4\times S^2$ was constructed as the standard simplicial Cartesian product of $\partial \Delta^3$ and $\partial\Delta^5$ as implemented in \cite{simpcomp}, where $\Delta^d$ denotes the $d$-simplex. Then $\tilde{M}^6$ obviously is a combinatorial $6$-manifold homeomorphic to $S^4\times S^2$.
This triangulation $\tilde{M}^6$ was then reduced to the triangulation $M^6_{16}$ with $f$-vector $f=(16, 112, 448, 980, 1232, 840, 240)$ using a vertex reduction algorithm based on bistellar flips that is implemented in \cite{simpcomp}. The code is based on the vertex reduction methods developed by Bj\"orner and Lutz \cite{Bjoerner00SimplMnfBistellarFlips}. It is well-known that this reduction process leaves the PL type of the triangulation invariant, such that $M^6_{16}\cong S^4\times S^2$ holds.
The $f$-vector of $M^6_{16}$ is uniquely determined already by the condition of $M^6_{16}$ to be $2$-Hamiltonian in the $8$-dimensional cross polytope.
In particular, $M^6_{16}$ has $8$ missing edges of the form $\langle i,\,i+1\rangle$ for all odd $1\leq i \leq 15$, which are pairwise disjoint and correspond to the $8$ diagonals of the cross polytope.
As there is an involution $$I=(1,2)(3,4)(5,6)(7,8)(9,10)(11,12)(13,14)(15,16)$$ operating on the faces of $M^6_{16}$ without fixed points, $M^6_{16}$ can be seen as a $2$-Hamiltonian subcomplex of $\beta^8$. Apart from $I$, $M^6_{16}$ has no non-trivial symmetries, i.e.\ we have $\text{Aut}(M^{6}_{16})=\langle I\rangle \cong C_2$. The $240$ facets of $M^6_{16}$ are given in Table~\ref{tab:S2S4CentSymmSimplices}.
It remains to show that $M^6_{16}\in \mathcal{K}^2(6)$. Recall that the necessary and sufficient condition for a triangulation $X$ to be a member of $\mathcal{K}^k(d)$ is that all vertex links of $X$ are $k$-stacked $(d-1)$-spheres.
Since $M^6_{16}$ is a combinatorial $6$-manifold, all vertex links are PL $5$-spheres. It thus suffices to show that all vertex links are $2$-stacked.
Using \texttt{simpcomp}, we found that the vertex links can be obtained from the boundary of the $6$-simplex by a sequence of $0$- and $1$-moves. Therefore, by Lemma~\ref{lem:kstackbistellar}, vertex links are $2$-stacked $5$-spheres. Thus, $M^6_{16}\in \mathcal{K}^2(6)$, as claimed.
\end{proof}
The triangulation $M^6_{16}$ is strongly conjectured to be tight in $\beta^8$. It is part of a series of centrally symmetric triangulations of sphere products that are conjectured to admit tight embeddings into cross polytopes. In some cases, the tightness of the embedding has been proved; see \cite{Sparla99LBTComb2kMnf}, \cite[6.2]{Kuehnel99CensusTight} and \cite[Sect.~6]{Effenberger08HamSubRegPoly}.
In particular, the sphere products presented in \cite[Thm.~6.3]{Kuehnel99CensusTight} are part of this conjectured series and the following holds.
\begin{theorem}
The centrally symmetric triangulations of sphere products of the form $S^k\times S^m$ with vertex transitive automorphism groups
\begin{equation*}
\begin{array}{lllllll}
S^1\times S^1, & S^2\times S^1, & S^3\times S^1, & S^4\times S^1, & S^5\times S^1, & S^6\times S^1, & S^7\times S^1, \\
& & S^2\times S^2, & S^3\times S^2, & & S^5\times S^2, & \\
& & & & S^3\times S^3, & S^4\times S^3, & S^5\times S^3, \\
& & & & & & S^4\times S^4
\end{array}
\end{equation*}
on $n=2(k+m)+4$ vertices presented in \cite[Thm.~6.3]{Kuehnel99CensusTight} all lie in the class $\mathcal{K}^{\min\set{k,m}}(k+m)$.
\label{thm:centrsymmseries}
\end{theorem}
Using \texttt{simpcomp}, we found that the vertex links of all the manifolds listed in the theorem can be obtained from the boundary of a $(k+m)$-simplex by sequences of bistellar $i$-moves with $0\leq i\leq \min\set{k,m}-1$. Therefore, by Lemma~\ref{lem:kstackbistellar}, the vertex links are $\min\set{k,m}$-stacked $(k+m-1)$-spheres, and all these manifolds lie in $\mathcal{K}^{\min\set{k,m}}(k+m)$. Note that since these examples all have a vertex transitive automorphism group, it suffices to check the stackedness condition for one vertex link only.
The preceding observations naturally lead to the following Question~\ref{ques:kHamilKkdTight} as a generalization of Question~\ref{ques:kneightight}.
\begin{question}
Let $d\geq 4$ and let $M$ be a $k$-Hamiltonian codimension $2$ subcomplex of the $(d+2)$-dimensional cross polytope $\beta^{d+2}$ such that $M\in \mathcal{K}^k(d)$ for some fixed $1\leq k\leq \lceil \frac{d-1}{2} \rceil$. Does this imply that the embedding $M\subset \beta^{d+2}\subset E^{d+2}$ is tight?
\label{ques:kHamilKkdTight}
\end{question}
This is true for all currently known codimension $2$ subcomplexes of cross polytopes that fulfill the prerequisites of Question~\ref{ques:kHamilKkdTight}: the $8$-vertex triangulation of the torus, a $12$-vertex triangulation of $S^2\times S^2$ due to Sparla \cite{Lassmann00ClassCentSymmCycS2S2, Sparla97GeomKombEigTrigMgf},
the triangulations of $S^k\times S^k$ on $4k+4$ vertices for $k=3$ and $k=4$, as well as the infinite series of triangulations of $S^k\times S^1$ in \cite{Kuehnel86HigherDimCsaszar}. For the other triangulations of $S^k\times S^m$ listed in Theorem~\ref{thm:centrsymmseries} above, Kühnel and Lutz ``strongly conjecture'' \cite[Sec.~6]{Kuehnel99CensusTight} that they are tight in the $(k+m+2)$-dimensional cross polytope. Nevertheless, it is currently not clear whether the conditions of Question~\ref{ques:kHamilKkdTight} imply the tightness of the embedding into the cross polytope.
In accordance with \cite[Conjecture 6.2]{Kuehnel99CensusTight} we then have the following conjecture.
\begin{conjecture}
Any centrally symmetric combinatorial triangulation $M^{k+m}_n$ of $S^k\times S^m$ on $n=2(k+m+2)$ vertices is tight if regarded as a subcomplex of the $\frac{n}{2}$-dimensional cross polytope. Moreover, $M^{k+m}_n$ is contained in the class $\mathcal{K}^{\min\set{k,m}}(k+m)$.
\end{conjecture}
\section*{Acknowledgment}
The author acknowledges support by the Deutsche Forschungsgemeinschaft (DFG). This work was carried out as part of the DFG project Ku 1203/5-2.
\begin{table}
\caption{The $240$ facets ($6$-simplices) of $M^6_{16}$.}
\label{tab:S2S4CentSymmSimplices}
\centering
{\tiny
\begin{tabular}{lllll}
$\langle 1\,2\,3\,4\,7\,12\,14 \rangle$, &
$\langle 1\,2\,3\,4\,7\,12\,16 \rangle$, &
$\langle 1\,2\,3\,4\,7\,13\,14 \rangle$, &
$\langle 1\,2\,3\,4\,7\,13\,16 \rangle$, &
$\langle 1\,2\,3\,4\,9\,12\,14 \rangle$,\\
$\langle 1\,2\,3\,4\,9\,12\,16 \rangle$, &
$\langle 1\,2\,3\,4\,9\,14\,16 \rangle$, &
$\langle 1\,2\,3\,4\,13\,14\,16 \rangle$, &
$\langle 1\,2\,3\,6\,7\,12\,14 \rangle$, &
$\langle 1\,2\,3\,6\,7\,12\,16 \rangle$,\\
$\langle 1\,2\,3\,6\,7\,13\,14 \rangle$, &
$\langle 1\,2\,3\,6\,7\,13\,16 \rangle$, &
$\langle 1\,2\,3\,6\,9\,10\,12 \rangle$, &
$\langle 1\,2\,3\,6\,9\,10\,13 \rangle$, &
$\langle 1\,2\,3\,6\,9\,12\,16 \rangle$,\\
$\langle 1\,2\,3\,6\,9\,13\,16 \rangle$, &
$\langle 1\,2\,3\,6\,10\,11\,12 \rangle$, &
$\langle 1\,2\,3\,6\,10\,11\,13 \rangle$, &
$\langle 1\,2\,3\,6\,11\,12\,14 \rangle$, &
$\langle 1\,2\,3\,6\,11\,13\,14 \rangle$,\\
$\langle 1\,2\,3\,9\,10\,11\,12 \rangle$, &
$\langle 1\,2\,3\,9\,10\,11\,13 \rangle$, &
$\langle 1\,2\,3\,9\,11\,12\,14 \rangle$, &
$\langle 1\,2\,3\,9\,11\,13\,14 \rangle$, &
$\langle 1\,2\,3\,9\,13\,14\,16 \rangle$,\\
$\langle 1\,2\,4\,7\,12\,14\,15 \rangle$, &
$\langle 1\,2\,4\,7\,12\,15\,16 \rangle$, &
$\langle 1\,2\,4\,7\,13\,14\,15 \rangle$, &
$\langle 1\,2\,4\,7\,13\,15\,16 \rangle$, &
$\langle 1\,2\,4\,9\,12\,14\,16 \rangle$,\\
$\langle 1\,2\,4\,12\,14\,15\,16 \rangle$, &
$\langle 1\,2\,4\,13\,14\,15\,16 \rangle$, &
$\langle 1\,2\,6\,7\,12\,14\,16 \rangle$, &
$\langle 1\,2\,6\,7\,13\,14\,15 \rangle$, &
$\langle 1\,2\,6\,7\,13\,15\,16 \rangle$,\\
$\langle 1\,2\,6\,7\,14\,15\,16 \rangle$, &
$\langle 1\,2\,6\,9\,10\,11\,12 \rangle$, &
$\langle 1\,2\,6\,9\,10\,11\,13 \rangle$, &
$\langle 1\,2\,6\,9\,11\,12\,14 \rangle$, &
$\langle 1\,2\,6\,9\,11\,13\,15 \rangle$,\\
$\langle 1\,2\,6\,9\,11\,14\,15 \rangle$, &
$\langle 1\,2\,6\,9\,12\,14\,16 \rangle$, &
$\langle 1\,2\,6\,9\,13\,15\,16 \rangle$, &
$\langle 1\,2\,6\,9\,14\,15\,16 \rangle$, &
$\langle 1\,2\,6\,11\,13\,14\,15 \rangle$,\\
$\langle 1\,2\,7\,12\,14\,15\,16 \rangle$, &
$\langle 1\,2\,9\,11\,13\,14\,15 \rangle$, &
$\langle 1\,2\,9\,13\,14\,15\,16 \rangle$, &
$\langle 1\,3\,4\,7\,12\,14\,16 \rangle$, &
$\langle 1\,3\,4\,7\,13\,14\,16 \rangle$,\\
$\langle 1\,3\,4\,9\,12\,14\,16 \rangle$, &
$\langle 1\,3\,6\,7\,12\,14\,16 \rangle$, &
$\langle 1\,3\,6\,7\,13\,14\,16 \rangle$, &
$\langle 1\,3\,6\,8\,9\,10\,11 \rangle$, &
$\langle 1\,3\,6\,8\,9\,10\,13 \rangle$,\\
$\langle 1\,3\,6\,8\,9\,11\,14 \rangle$, &
$\langle 1\,3\,6\,8\,9\,13\,14 \rangle$, &
$\langle 1\,3\,6\,8\,10\,11\,13 \rangle$, &
$\langle 1\,3\,6\,8\,11\,13\,14 \rangle$, &
$\langle 1\,3\,6\,9\,10\,11\,12 \rangle$,\\
$\langle 1\,3\,6\,9\,11\,12\,14 \rangle$, &
$\langle 1\,3\,6\,9\,12\,14\,16 \rangle$, &
$\langle 1\,3\,6\,9\,13\,14\,16 \rangle$, &
$\langle 1\,3\,8\,9\,10\,11\,13 \rangle$, &
$\langle 1\,3\,8\,9\,11\,13\,14 \rangle$,\\
$\langle 1\,4\,7\,8\,10\,11\,13 \rangle$, &
$\langle 1\,4\,7\,8\,10\,11\,15 \rangle$, &
$\langle 1\,4\,7\,8\,10\,13\,16 \rangle$, &
$\langle 1\,4\,7\,8\,10\,15\,16 \rangle$, &
$\langle 1\,4\,7\,8\,11\,13\,15 \rangle$,\\
$\langle 1\,4\,7\,8\,12\,14\,15 \rangle$, &
$\langle 1\,4\,7\,8\,12\,14\,16 \rangle$, &
$\langle 1\,4\,7\,8\,12\,15\,16 \rangle$, &
$\langle 1\,4\,7\,8\,13\,14\,15 \rangle$, &
$\langle 1\,4\,7\,8\,13\,14\,16 \rangle$,\\
$\langle 1\,4\,7\,10\,11\,13\,15 \rangle$, &
$\langle 1\,4\,7\,10\,13\,15\,16 \rangle$, &
$\langle 1\,4\,8\,10\,11\,13\,15 \rangle$, &
$\langle 1\,4\,8\,10\,13\,15\,16 \rangle$, &
$\langle 1\,4\,8\,12\,14\,15\,16 \rangle$,\\
$\langle 1\,4\,8\,13\,14\,15\,16 \rangle$, &
$\langle 1\,6\,7\,8\,10\,11\,13 \rangle$, &
$\langle 1\,6\,7\,8\,10\,11\,15 \rangle$, &
$\langle 1\,6\,7\,8\,10\,13\,16 \rangle$, &
$\langle 1\,6\,7\,8\,10\,15\,16 \rangle$,\\
$\langle 1\,6\,7\,8\,11\,13\,15 \rangle$, &
$\langle 1\,6\,7\,8\,13\,14\,15 \rangle$, &
$\langle 1\,6\,7\,8\,13\,14\,16 \rangle$, &
$\langle 1\,6\,7\,8\,14\,15\,16 \rangle$, &
$\langle 1\,6\,7\,10\,11\,13\,15 \rangle$,\\
$\langle 1\,6\,7\,10\,13\,15\,16 \rangle$, &
$\langle 1\,6\,8\,9\,10\,11\,15 \rangle$, &
$\langle 1\,6\,8\,9\,10\,13\,16 \rangle$, &
$\langle 1\,6\,8\,9\,10\,15\,16 \rangle$, &
$\langle 1\,6\,8\,9\,11\,14\,15 \rangle$,\\
$\langle 1\,6\,8\,9\,13\,14\,16 \rangle$, &
$\langle 1\,6\,8\,9\,14\,15\,16 \rangle$, &
$\langle 1\,6\,8\,11\,13\,14\,15 \rangle$, &
$\langle 1\,6\,9\,10\,11\,13\,15 \rangle$, &
$\langle 1\,6\,9\,10\,13\,15\,16 \rangle$,\\
$\langle 1\,7\,8\,12\,14\,15\,16 \rangle$, &
$\langle 1\,8\,9\,10\,11\,13\,15 \rangle$, &
$\langle 1\,8\,9\,10\,13\,15\,16 \rangle$, &
$\langle 1\,8\,9\,11\,13\,14\,15 \rangle$, &
$\langle 1\,8\,9\,13\,14\,15\,16 \rangle$,\\
$\langle 2\,3\,4\,5\,7\,10\,11 \rangle$, &
$\langle 2\,3\,4\,5\,7\,10\,16 \rangle$, &
$\langle 2\,3\,4\,5\,7\,11\,14 \rangle$, &
$\langle 2\,3\,4\,5\,7\,14\,16 \rangle$, &
$\langle 2\,3\,4\,5\,9\,10\,11 \rangle$,\\
$\langle 2\,3\,4\,5\,9\,10\,12 \rangle$, &
$\langle 2\,3\,4\,5\,9\,11\,14 \rangle$, &
$\langle 2\,3\,4\,5\,9\,12\,16 \rangle$, &
$\langle 2\,3\,4\,5\,9\,14\,16 \rangle$, &
$\langle 2\,3\,4\,5\,10\,12\,16 \rangle$,\\
$\langle 2\,3\,4\,7\,10\,11\,12 \rangle$, &
$\langle 2\,3\,4\,7\,10\,12\,16 \rangle$, &
$\langle 2\,3\,4\,7\,11\,12\,14 \rangle$, &
$\langle 2\,3\,4\,7\,13\,14\,16 \rangle$, &
$\langle 2\,3\,4\,9\,10\,11\,12 \rangle$,\\
$\langle 2\,3\,4\,9\,11\,12\,14 \rangle$, &
$\langle 2\,3\,5\,6\,9\,10\,12 \rangle$, &
$\langle 2\,3\,5\,6\,9\,10\,13 \rangle$, &
$\langle 2\,3\,5\,6\,9\,11\,13 \rangle$, &
$\langle 2\,3\,5\,6\,9\,11\,14 \rangle$,\\
$\langle 2\,3\,5\,6\,9\,12\,16 \rangle$, &
$\langle 2\,3\,5\,6\,9\,14\,16 \rangle$, &
$\langle 2\,3\,5\,6\,10\,11\,12 \rangle$, &
$\langle 2\,3\,5\,6\,10\,11\,13 \rangle$, &
$\langle 2\,3\,5\,6\,11\,12\,14 \rangle$,\\
$\langle 2\,3\,5\,6\,12\,14\,16 \rangle$, &
$\langle 2\,3\,5\,7\,10\,11\,12 \rangle$, &
$\langle 2\,3\,5\,7\,10\,12\,16 \rangle$, &
$\langle 2\,3\,5\,7\,11\,12\,14 \rangle$, &
$\langle 2\,3\,5\,7\,12\,14\,16 \rangle$,\\
$\langle 2\,3\,5\,9\,10\,11\,13 \rangle$, &
$\langle 2\,3\,6\,7\,12\,14\,16 \rangle$, &
$\langle 2\,3\,6\,7\,13\,14\,16 \rangle$, &
$\langle 2\,3\,6\,9\,11\,13\,14 \rangle$, &
$\langle 2\,3\,6\,9\,13\,14\,16 \rangle$,\\
$\langle 2\,4\,5\,7\,10\,11\,12 \rangle$, &
$\langle 2\,4\,5\,7\,10\,12\,15 \rangle$, &
$\langle 2\,4\,5\,7\,10\,15\,16 \rangle$, &
$\langle 2\,4\,5\,7\,11\,12\,14 \rangle$, &
$\langle 2\,4\,5\,7\,12\,14\,15 \rangle$,\\
$\langle 2\,4\,5\,7\,14\,15\,16 \rangle$, &
$\langle 2\,4\,5\,9\,10\,11\,12 \rangle$, &
$\langle 2\,4\,5\,9\,11\,12\,14 \rangle$, &
$\langle 2\,4\,5\,9\,12\,14\,16 \rangle$, &
$\langle 2\,4\,5\,10\,12\,15\,16 \rangle$,\\
$\langle 2\,4\,5\,12\,14\,15\,16 \rangle$, &
$\langle 2\,4\,7\,10\,12\,15\,16 \rangle$, &
$\langle 2\,4\,7\,13\,14\,15\,16 \rangle$, &
$\langle 2\,5\,6\,9\,10\,11\,12 \rangle$, &
$\langle 2\,5\,6\,9\,10\,11\,13 \rangle$,\\
$\langle 2\,5\,6\,9\,11\,12\,14 \rangle$, &
$\langle 2\,5\,6\,9\,12\,14\,16 \rangle$, &
$\langle 2\,5\,7\,10\,12\,15\,16 \rangle$, &
$\langle 2\,5\,7\,12\,14\,15\,16 \rangle$, &
$\langle 2\,6\,7\,13\,14\,15\,16 \rangle$,\\
$\langle 2\,6\,9\,11\,13\,14\,15 \rangle$, &
$\langle 2\,6\,9\,13\,14\,15\,16 \rangle$, &
$\langle 3\,4\,5\,7\,8\,10\,11 \rangle$, &
$\langle 3\,4\,5\,7\,8\,10\,16 \rangle$, &
$\langle 3\,4\,5\,7\,8\,11\,12 \rangle$,\\
$\langle 3\,4\,5\,7\,8\,12\,16 \rangle$, &
$\langle 3\,4\,5\,7\,11\,12\,14 \rangle$, &
$\langle 3\,4\,5\,7\,12\,14\,16 \rangle$, &
$\langle 3\,4\,5\,8\,9\,10\,11 \rangle$, &
$\langle 3\,4\,5\,8\,9\,10\,12 \rangle$,\\
$\langle 3\,4\,5\,8\,9\,11\,12 \rangle$, &
$\langle 3\,4\,5\,8\,10\,12\,16 \rangle$, &
$\langle 3\,4\,5\,9\,11\,12\,14 \rangle$, &
$\langle 3\,4\,5\,9\,12\,14\,16 \rangle$, &
$\langle 3\,4\,7\,8\,10\,11\,12 \rangle$,\\
$\langle 3\,4\,7\,8\,10\,12\,16 \rangle$, &
$\langle 3\,4\,8\,9\,10\,11\,12 \rangle$, &
$\langle 3\,5\,6\,8\,9\,10\,12 \rangle$, &
$\langle 3\,5\,6\,8\,9\,10\,13 \rangle$, &
$\langle 3\,5\,6\,8\,9\,11\,12 \rangle$,\\
$\langle 3\,5\,6\,8\,9\,11\,13 \rangle$, &
$\langle 3\,5\,6\,8\,10\,11\,12 \rangle$, &
$\langle 3\,5\,6\,8\,10\,11\,13 \rangle$, &
$\langle 3\,5\,6\,9\,11\,12\,14 \rangle$, &
$\langle 3\,5\,6\,9\,12\,14\,16 \rangle$,\\
$\langle 3\,5\,7\,8\,10\,11\,12 \rangle$, &
$\langle 3\,5\,7\,8\,10\,12\,16 \rangle$, &
$\langle 3\,5\,8\,9\,10\,11\,13 \rangle$, &
$\langle 3\,6\,8\,9\,10\,11\,12 \rangle$, &
$\langle 3\,6\,8\,9\,11\,13\,14 \rangle$,\\
$\langle 4\,5\,7\,8\,10\,11\,13 \rangle$, &
$\langle 4\,5\,7\,8\,10\,13\,16 \rangle$, &
$\langle 4\,5\,7\,8\,11\,12\,15 \rangle$, &
$\langle 4\,5\,7\,8\,11\,13\,15 \rangle$, &
$\langle 4\,5\,7\,8\,12\,14\,15 \rangle$,\\
$\langle 4\,5\,7\,8\,12\,14\,16 \rangle$, &
$\langle 4\,5\,7\,8\,13\,14\,15 \rangle$, &
$\langle 4\,5\,7\,8\,13\,14\,16 \rangle$, &
$\langle 4\,5\,7\,10\,11\,12\,15 \rangle$, &
$\langle 4\,5\,7\,10\,11\,13\,15 \rangle$,\\
$\langle 4\,5\,7\,10\,13\,15\,16 \rangle$, &
$\langle 4\,5\,7\,13\,14\,15\,16 \rangle$, &
$\langle 4\,5\,8\,9\,10\,11\,13 \rangle$, &
$\langle 4\,5\,8\,9\,10\,12\,15 \rangle$, &
$\langle 4\,5\,8\,9\,10\,13\,15 \rangle$,\\
$\langle 4\,5\,8\,9\,11\,12\,15 \rangle$, &
$\langle 4\,5\,8\,9\,11\,13\,15 \rangle$, &
$\langle 4\,5\,8\,10\,12\,15\,16 \rangle$, &
$\langle 4\,5\,8\,10\,13\,15\,16 \rangle$, &
$\langle 4\,5\,8\,12\,14\,15\,16 \rangle$,\\
$\langle 4\,5\,8\,13\,14\,15\,16 \rangle$, &
$\langle 4\,5\,9\,10\,11\,12\,15 \rangle$, &
$\langle 4\,5\,9\,10\,11\,13\,15 \rangle$, &
$\langle 4\,7\,8\,10\,11\,12\,15 \rangle$, &
$\langle 4\,7\,8\,10\,12\,15\,16 \rangle$,\\
$\langle 4\,8\,9\,10\,11\,12\,15 \rangle$, &
$\langle 4\,8\,9\,10\,11\,13\,15 \rangle$, &
$\langle 5\,6\,7\,8\,10\,11\,13 \rangle$, &
$\langle 5\,6\,7\,8\,10\,11\,15 \rangle$, &
$\langle 5\,6\,7\,8\,10\,13\,15 \rangle$,\\
$\langle 5\,6\,7\,8\,11\,13\,15 \rangle$, &
$\langle 5\,6\,7\,10\,11\,13\,15 \rangle$, &
$\langle 5\,6\,8\,9\,10\,12\,15 \rangle$, &
$\langle 5\,6\,8\,9\,10\,13\,15 \rangle$, &
$\langle 5\,6\,8\,9\,11\,12\,15 \rangle$,\\
$\langle 5\,6\,8\,9\,11\,13\,15 \rangle$, &
$\langle 5\,6\,8\,10\,11\,12\,15 \rangle$, &
$\langle 5\,6\,9\,10\,11\,12\,15 \rangle$, &
$\langle 5\,6\,9\,10\,11\,13\,15 \rangle$, &
$\langle 5\,7\,8\,10\,11\,12\,15 \rangle$,\\
$\langle 5\,7\,8\,10\,12\,15\,16 \rangle$, &
$\langle 5\,7\,8\,10\,13\,15\,16 \rangle$, &
$\langle 5\,7\,8\,12\,14\,15\,16 \rangle$, &
$\langle 5\,7\,8\,13\,14\,15\,16 \rangle$, &
$\langle 6\,7\,8\,10\,13\,15\,16 \rangle$,\\
$\langle 6\,7\,8\,13\,14\,15\,16 \rangle$, &
$\langle 6\,8\,9\,10\,11\,12\,15 \rangle$, &
$\langle 6\,8\,9\,10\,13\,15\,16 \rangle$, &
$\langle 6\,8\,9\,11\,13\,14\,15 \rangle$, &
$\langle 6\,8\,9\,13\,14\,15\,16 \rangle$.
\end{tabular}} \end{table}
\bibliographystyle{plain}
\footnotesize
\section{Introduction}
Ring artifacts occur in X-ray computed tomography (CT) due to an amalgamation of factors that cause small errors in detector pixel values to persist throughout CT acquisition, resulting in semi-circular and ring-shaped artifacts on back projection for 180$^\circ$ and 360$^\circ$ CTs, respectively. Many of these errors are caused by phenomena that affect the detector gain, such as phosphor thickness variations or imperfections in the optical coupling system, while others include fluctuations in the X-ray beam and, for synchrotron sources, drift and/or vibrations in the monochromator crystal. Were the X-ray beam perfectly stable for the duration of the CT and the detector response uniformly linear for every pixel, standard dark-current and flat-field correction techniques would be sufficient to correct for these inhomogeneities and remove these artifacts; however, since detector gain and dark current can vary considerably in space and time, standard flat and dark correction is generally inadequate to compensate for these intensity variations across the entire dynamic range. Therefore, other techniques are required for proper treatment of these artifacts.
A number of different methods have been proposed for ring artifact correction, most of which are based on post hoc image processing. Many of them isolate and remove rings in the reconstructed image \cite{Jha2014, Paleo2015, Ji2017, Liang2017} or in the sinogram \cite{Raven1998, Sijbers2004, Boin2006, Muench2009, Yousuf2010, Miqueles2014, Titarenko2016, Yan2016, Massimi2018, Vo2018}. Some methods characterize the flat-field images \cite{VanNieuwenhove2015, Jailin2017}, while others shift the sample or detector during image acquisition to smear out systematic intensity fluctuations across the reconstruction volume \cite{Davis1997,Hubert2018,Pelt2018}. Of particular interest are those that seek to correct the bulk of the problem where it occurs -- in the pixel-to-pixel response variations that have historically been treated as spatially invariant \cite{Altunbas2014, Vagberg2017, Vo2018}. While all of these can be effective at addressing the ring artifacts, we take the latter approach, since it is directed at the root cause and is therefore likely to provide more accurate results without leading to other artifacts, such as blurring or the introduction of new rings. V{\aa}gberg~\textit{et~al.}~(2017)~\cite{Vagberg2017} recently took a similar approach, proposing a measurement-informed ring artifact correction algorithm, modeling the detector response using images collected over a range of intensities. Their work used aluminum filters to attenuate the beam and took the assumption that the spatial variations in response are primarily caused by changes in thickness across the scintillator. The algorithm presented here is effectively a generalization of this work, making no assumptions regarding the cause of the detector's spatial variations, using a method that is sample-independent. We present a simple, pixel-wise detector calibration using hundreds of data points for each position on the detector, rather than the standard two-point flat-field calibration. 
This method requires only a single image sequence for each experimental setup and does not require any information about the physical cause of the detector's response variations.
This approach can be used for conventional X-ray CT but was primarily motivated by phase-contrast X-ray CT \cite{Langer2008}. Phase contrast is a relatively recent development, which converts the phase shifts imparted to the X-ray wavefield by the sample into intensity variations that can reveal soft tissue features. The simplest way to achieve this additional soft-tissue contrast is through the introduction of a distance (e.g.~>~1~m) between the sample and detector, illuminating the sample with sufficiently coherent X-ray radiation \cite{Snigirev1995,Cloetens1996}. Since phase-contrast X-ray imaging can capture soft tissue structures, we have recently applied the method to image the brain \textit{in situ}, finding that the phase contrast associated with grey and white matter is orders of magnitude less than the attenuation contrast associated with the skull \cite{Croton2018}. The correction presented here is therefore of particular interest in phase contrast imaging, given that the image contrast associated with phase effects may be comparable to the contrast of ring artifacts.
\section{Pixel-wise mapping of the detector response}
\label{sec:maps}
The full correction can be broken into two main parts -- first, a spatial mapping of the detector response, followed by an application of that mapping to experimental data. In the following sections, we break these parts further into eight basic steps, labeled (1) - (8) below. To determine the spatial variations in response across the detector, we only need to know how the intensity measured by the optical system at each pixel varies from that which is incident upon it. These intensity deviations are not expected to have a large dependence on energy, since the incident X-ray photons are converted to optical photons within the system; however, there may be some dependence if the point spread function changes significantly with energy. Here, we minimize energy dependence by measuring the response at similar energies to those used for our experiments, using a monochromatic source. We take a series of measurements acquired across a range of intensities covering the bulk of the dynamic range of the detector. It is possible to use filters to attenuate the beam \cite{Vagberg2017}, however spatial variations in density and thickness of the filters can add artifacts back into the images.
Step 1 -- To avoid introducing new artifacts, we start by simply sweeping the detector through the X-ray beam while acquiring a sequence of images (see Fig. \ref{fig:setup}). Since the beam intensity in the absence of any sample is typically peaked at the center, rolling off outward away from the beam, a wide range of intensity measurements can be obtained. For a beam size that is larger than the detector, a single sweep is usually sufficient; when the detector is larger, more than one sweep with the beam offset from the center of the detector may be necessary to ensure coverage across the full range of intensities for every pixel. By eliminating the need for absorbing materials, we can measure the exact response for a given input intensity, thus avoiding any possible artifacts that may be introduced by structural imperfections in the absorber. If, however, the intensity profile of the beam is insufficient to sample the full range of intensities required, then filters can be added to extend the intensity range acquired; the structural imperfections would not introduce additional artifacts when used in this way, since they would be shifted across the extent of the detector in the sweep direction during acquisition.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{fig1.pdf}
\caption{The experimental set-up, shown for an incident beam with a parabolic intensity profile. When sweeping vertically, the detector is positioned high enough above the beam to achieve the desired minimum number of incident counts. A sequence of images is then acquired while the detector is translated downward through the beam to the equivalent position below the beam. Three images are shown near the middle of the sequence, along with intensity profiles taken vertically through the center of the images. The extreme ends of the sequence (not shown) contain only traces of the edge of the incident beam. Note that the direction of transverse translation is not important; it is equally valid to acquire an image sequence while translating the detector horizontally through the beam.}
\label{fig:setup}
\end{figure}
Image sequences were acquired on beamline BL20B2 at the SPring-8 synchrotron in Hy$\overline{\mbox{o}}$go, Japan using a 2048 $\times$ 2048 ORCA Flash 4.0 digital sCMOS camera (C11440-22C by Hamamatsu) with a \SI{25}{\um} thick gadolinium oxysulfide (GOS) scintillator coupled with a tandem lens system, giving an effective pixel size of \SI{15.1}{\um}. Three beam sweeps were required for full coverage across the detector. 270 images were acquired in each sequence, with an exposure time of 100~ms each. Sequences were recorded at three positions such that the beam was centered on the horizontal left, center, and right side of the detector, which was swept vertically.
Step 2 -- Once the full beam sweep sequence was acquired, the images were stacked into a volume $I(i,j,k)$, where $i$ and $j$ index the two spatial dimensions of each image and $k$ indexes the images in the stack. Figure \ref{fig:plots}(a) shows the intensity through the combined image stack for a single pixel.
Step 3 -- A separate volume was created wherein each image $I(i,j)$ of the volume $I(i,j,k)$ was first dark-corrected using the mean image $D(i,j)$ of the dark-current images acquired at the time of the sequence, and then smoothed with a Gaussian filter, yielding an estimate $I_s(i,j,k)$ of the true intensity profile of the incident X-ray beam. The smoothing kernel radius was chosen to be large enough to eliminate the pixel-to-pixel variations resulting from the non-uniform gain, while being small enough to maintain the underlying shape of the beam intensity profile. We found that a kernel with a standard deviation of 50 pixels provided a suitable blurring function for our experimental conditions.
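Steps 2 and 3 amount to a dark correction of the stacked sweep followed by a purely spatial Gaussian blur of each frame. A minimal NumPy/SciPy sketch (array names and the default kernel width are illustrative, not prescribed by the method):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_stack(I, D, sigma=50.0):
    """Dark-correct each frame of the beam-sweep stack I(i,j,k) using the
    mean dark-current image D(i,j), then Gaussian-blur it in the spatial
    dimensions only, yielding the 'true' beam estimate I_s(i,j,k).
    sigma is the kernel standard deviation in pixels (50 in the text)."""
    I_corr = I.astype(float) - D[:, :, None]   # broadcast D over frames k
    # sigma = (sigma, sigma, 0): blur within each frame, never across frames
    return gaussian_filter(I_corr, sigma=(sigma, sigma, 0.0))
```

The key point is that the blur acts only on the $(i,j)$ axes, so each frame of $I_s$ is a smoothed version of the corresponding measured frame.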
Step 4 -- The `true' beam intensity was plotted against the measured intensity to determine the order of the polynomial required to model the gain, i.e.\ the smoothed counts as a function of the measured counts. This is shown in Fig. \ref{fig:plots}(b). Since the two intensities come from the same measurement, this resultant curve is necessarily linear, with a slope close to unity.
Step 5 -- The `true' intensity $I_s(i,j,k)$ was fit for each pixel $(i,j)$ as a linear function of the measured intensity,
\begin{equation}
I_s(i,j,k) = \boldsymbol\alpha [I(i,j,k) - D(i,j)] + \boldsymbol\beta.
\label{eq:linearfit}
\end{equation}
The coefficient arrays $\boldsymbol\alpha = \alpha(i,j)$ and $\boldsymbol\beta = \beta(i,j)$ from this calibration (specified in bold for clarity) can then be used to determine the `true' intensity of the beam $I_s(i,j)$ incident on the detector, given the measured intensity $I(i,j)$ for a given projection image, regardless of the sample. It should be noted that this will correct for spatial variations in the detector gain and offset, which are those primarily responsible for ring artifacts; however, since $I_s$ is estimated by smoothing the data itself, this method cannot account for any large-scale non-linearities (e.g. due to higher-order harmonics or polychromaticity); these must be accounted for separately. This, however, should not have a substantial impact on ring artifacts, since they are most prominent when caused by the small-scale, pixel-to-pixel variations.
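Since Eq.~\ref{eq:linearfit} is a straight-line fit at each pixel, the coefficient maps have a closed-form least-squares solution that can be evaluated for all pixels at once. A vectorized NumPy sketch (a simplified stand-in for the actual calibration code; array names are illustrative):

```python
import numpy as np

def fit_gain_offset(I, D, I_s):
    """Least-squares fit of I_s = alpha*(I - D) + beta independently at
    each pixel (i, j), over the frame axis k.  Returns the coefficient
    maps alpha(i, j) and beta(i, j) of Eq. (1)."""
    x = I.astype(float) - D[:, :, None]    # measured, dark-corrected counts
    y = I_s.astype(float)                  # smoothed ('true') counts
    xm = x.mean(axis=2, keepdims=True)
    ym = y.mean(axis=2, keepdims=True)
    # slope = cov(x, y) / var(x); intercept from the means
    alpha = ((x - xm) * (y - ym)).sum(axis=2) / ((x - xm) ** 2).sum(axis=2)
    beta = ym[:, :, 0] - alpha * xm[:, :, 0]
    return alpha, beta
```

This requires that every pixel sees a spread of intensities over the sweep, which is exactly what the beam-sweep acquisition of step 1 guarantees.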
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{fig2.pdf}
\caption{a) The counts measured at a single pixel in a stack of images taken while the detector was swept vertically through the beam three times; the beam was centered first on the horizontal left, then the center, and finally the right side of the detector. Note that this pixel lies near the horizontal center. b) The `true' (i.e. smoothed) counts for the same pixel as in (a) as a function of the measured counts.}
\label{fig:plots}
\end{figure}
The coefficient maps, $\boldsymbol\alpha$ and $\boldsymbol\beta$, are shown in Figs. \ref{fig:coeffs}(a) and \ref{fig:coeffs}(b), respectively. Note that $\boldsymbol\alpha$ corresponds to the detector gain, while $\boldsymbol\beta$ maps the intercepts of the fits and hence an offset to the dark current. A distinct line can be seen clearly across the upper left of Fig. \ref{fig:coeffs}(b), and to a lesser degree in Fig. \ref{fig:coeffs}(a), possibly -- though not necessarily -- due to a scratch on the scintillator, as per the assumption of scintillator thickness variations of V{\aa}gberg \textit{et al.} (2017) \cite{Vagberg2017}. Our correction does not require any assumption about the cause of these gain variations, so while effects resulting from thickness variations in the scintillator may be present, any other phenomena affecting the gain would be represented as well.
To test our method under different experimental conditions, a calibration was also performed for a different detector at the Imaging and Medical Beamline (IMBL) of the Australian Synchrotron. Image sequences were acquired using a 2560~$\times$~2160 pco.edge~5.5 sCMOS camera with a tandem lens configuration and a \SI{25}{\um} thick GOS scintillator, giving an effective pixel size of \SI{16.2}{\um}, at a beam energy of 25~keV. Due to the relatively small beam height and narrow vertical slit aperture, five separate image sequences of 300 exposures each were acquired while sweeping the detector across the beam horizontally, rather than vertically, with each sweep centered at a different vertical position on the detector. The images in each sequence were segmented into five strips of equal height, covering the full width of the detector. The strips from each sweep corresponding to the region of peak intensity were combined to form a single final image sequence. This was done to provide a single beam sweep sequence with full coverage across the intensity range at every pixel, while removing the vertical slit edges present in the individual sequences. From this combined sequence, smoothed and unsmoothed volumes were created and processed following the same method as above, yielding the gain and dark-current offset correction maps shown in Figs. \ref{fig:coeffs}(c) and \ref{fig:coeffs}(d), respectively.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{fig3.pdf}
\caption{The detector gain ($\boldsymbol\alpha$) and offset ($\boldsymbol\beta$) maps are shown in (a) and (b), respectively, for the Hamamatsu detector used at SPring-8. (c) and (d) show those for the pco.edge detector used at IMBL. Spatial scale bars are 1~mm in length. Numerous features can be seen to affect the gain. These are labeled with white letters and are described at the end of section \ref{sec:maps}.}
\label{fig:coeffs}
\end{figure}
Note that there are a number of non-uniformities present in all of the maps in Fig. \ref{fig:coeffs}, of both known and unknown origin. Some features of note include: (A) Dark, concentric bands just inside the perimeters of the detectors. (B) Textured gain variations. (C) A distinct, nearly horizontal line (the aforementioned `scratch'). (D) A broad, vertical, bright band. (E) Nearly parallel, fine bright lines. (F) Small regions of `hot pixels'. (G) Diffuse smudges of unknown origin. (H) A very distinct bright spot on the gain map of the pco.edge detector, also seen as a dark spot in the dark-current offset. This feature results from a small droplet of moisture that fell onto the scintillator during installation. The droplet evaporated quickly, however its effect can be seen in the raw images in addition to the gain and offset maps. (I) and (J) Large, textured `smudges'. (K) Vertical banding corresponding to the readout columns. (L) A horizontal line separating the two vertical readout directions. These features demonstrate that a wide variety of phenomena, even the relatively minor environmental effect of the moisture droplet (H), can substantially influence the detector response.
\section{Applying the correction}
\label{sec:correction}
Step 6 -- To implement the correction for a CT sequence, Eq.~\ref{eq:linearfit} is applied using the coefficient arrays from the beam sweep, replacing the beam sweep volume $I(i,j,k)$ with the projection images $P(i,j,k)$ and using the dark current acquired during the CT sequence, yielding the corrected projections:
\begin{equation}
P_c(i,j,k) = \boldsymbol\alpha [P(i,j,k) - D(i,j)] + \boldsymbol\beta.
\label{eq:correction}
\end{equation}
Step 7 -- A new flat-field image is created from the mean of the flat-field images acquired with the CT sequence, smoothing the result using the same smoothing parameters as those used to create the beam sweep volume $I_s$.
Step 8 -- Finally, the corrected projection images are flat-field corrected using the smoothed flat-field image. Each of the eight steps is outlined in the flow chart of Fig. \ref{fig:flowchart}.
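Steps 6 to 8 then reduce to applying Eq.~\ref{eq:correction} to each projection and dividing by the smoothed, dark-corrected mean flat field. A minimal sketch, with illustrative variable names:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_projection(P, D_ct, alpha, beta, flat_mean, sigma=50.0):
    """Apply the pixel-wise correction of Eq. (2) to one projection P,
    then flat-field it with the smoothed mean flat image (steps 6-8).
    D_ct is the mean dark-current image from the CT sequence; sigma must
    match the kernel used to build the beam-sweep volume I_s (step 3)."""
    P_c = alpha * (P.astype(float) - D_ct) + beta               # Eq. (2)
    flat_s = gaussian_filter(flat_mean.astype(float) - D_ct, sigma)  # step 7
    return P_c / flat_s                                         # step 8
```

In practice the smoothed flat field would be computed once and reused for every projection in the CT sequence.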
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{fig4.pdf}
\caption{A flow chart of the correction algorithm. The basic method is summarized as follows: (1) The detector is swept through the X-ray beam while an image sequence is acquired. (2) The mean of the dark-current images acquired at the time of the beam sweep is subtracted from each image in the sequence, and they are stacked into a 3D volume. (3) A second volume is created to estimate the `true' intensity of the beam by smoothing each image in the volume. (4) and (5) The measured intensity is fit as a function of the `true' intensity for every pixel, yielding detector gain and dark-current offset maps. (6) For a given CT data set, corrected projections are created using these maps, after subtraction of the mean dark-current images acquired with the CT sequence. (7) The mean dark current is subtracted from the mean of the flat-field images acquired at the time of the CT, and the resultant image is smoothed using the same smoothing kernel as that used in step 3. (8) The corrected projection images are then flat-field corrected using the smoothed flat-field image. See Visualization 1 for a second flow chart showing the additional corrections described in the text.}
\label{fig:flowchart}
\end{figure}
An additional correction may be required to account for small differences in the incident beam or in the optical system output between the time of the beam sweep and that of the CT data set. This is done by applying the same correction above (Eq.~\ref{eq:correction}) to the mean flat-field image, with residuals calculated as the difference between the corrected and uncorrected images. These residuals are then added to the smoothed flat-field image used for the correction. Additionally, changes in the beam intensity over the course of the CT acquisition due to synchrotron beam injections can be corrected by creating a customized flat-field image for each projection. One simple way to do this is to scale the smoothed flat-field image by the ratio of a reference region in the projections outside the sample to that same region of the mean flat-field image. Visualization 1 contains a flow chart that includes corrections for these residuals and beam injections.
It should be noted that the residual correction depends on the amount of noise within the data set. There will likely be unwanted signal within the residuals (e.g. zingers, pixels whose intensity is far from that of the surrounding pixels), and when the noise level is low, adding these residuals to the smoothed flat-field image can introduce ring artifacts. In this case, it is important to filter the residuals to ensure that only those that are persistent throughout the CT are included. This can be done by including only those residuals beyond a certain tolerance (we use those $>3\sigma$ from the mean). When the data set is noisy, this filtering is less important and can even remove residuals that should be included.
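A minimal sketch of the residual filtering and beam-injection scaling just described, assuming NumPy arrays; the variable names, reference region, and thresholding details are placeholders for the actual implementation:

```python
import numpy as np

def filtered_residuals(F_corrected, F_raw, n_sigma=3.0):
    # Residuals between the response-corrected and uncorrected mean
    # flat-field images; only pixels more than n_sigma standard
    # deviations from the mean residual are kept (the >3-sigma
    # tolerance), so that noise is not imprinted as rings.
    res = F_corrected - F_raw
    keep = np.abs(res - res.mean()) > n_sigma * res.std()
    return np.where(keep, res, 0.0)

def injection_scaled_flat(F_s, projection, F_mean, ref):
    # Scale the smoothed flat-field for one projection by the ratio of
    # a sample-free reference region (ref is a mask or index tuple) in
    # the projection to the same region of the mean flat-field,
    # compensating intensity changes from beam injections.
    scale = projection[ref].mean() / F_mean[ref].mean()
    return F_s * scale
```

The filtered residuals would then be added to the smoothed flat-field image before the flat-field correction of step 8.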
\section{Experiments}
During the synchrotron experiments described in the previous section, propagation-based, phase-contrast CTs were acquired of biological samples under the same experimental conditions as those for the beam sweeps. A CT sequence was acquired at 24~keV at SPring-8 of a scavenged head from a New Zealand White rabbit kitten born at 30 days gestational age (GA; term $\sim$32 days), suspended in agarose. A total of 1800 projections were acquired at a 5~m sample-to-detector propagation distance over 180$^{\circ}$, with an exposure time of 100~ms per projection. A second phase-contrast CT sequence was acquired at the Australian Synchrotron of the lungs of a New Zealand White rabbit kitten, also born at 30 days GA, at 25~keV using a MICROFIL{\textregistered} contrast agent. A total of 3600 projections were acquired at a 2~m propagation distance over 180$^{\circ}$, with an exposure time of 200~ms per projection. To evaluate the effectiveness of our correction, each data set was processed twice, once with traditional dark-current and flat-field corrections and once with the pixel-wise correction detailed in this paper. Phase retrieval was performed using the two-material algorithm derived by Beltran~\textit{et~al.}~(2010)~\cite{Beltran2010} from the single-material algorithm of Paganin~\textit{et~al.}~(2002)~\cite{Paganin2002} and described for CT by Croton~\textit{et~al.}~(2018)~\cite{Croton2018} (for more information, see \cite{Beltran2011,Nesterets2014,Gureyev2014,Kitchen2017,Gureyev2017}). For the head data set, the bone and soft tissue interface was retrieved, while for the lung data set, phase retrieval was performed with respect to the MICROFIL{\textregistered}/tissue interface. In total, four volumes were reconstructed for each data set -- before/after correction without phase retrieval and before/after correction with phase retrieval.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{fig5.pdf}
\caption{(a), (b), (e), (f) Reconstructed tomograms of a rabbit kitten head in agarose and (c), (d), (g), (h) rabbit kitten lungs in agarose with MICROFIL{\textregistered} contrast agent. Also shown are reconstructed slices of sample-free regions from both the (i), (j) head and (k), (l) lung data sets, containing only agarose. All data are shown without (first and third columns) and with (second and fourth columns) the correction detailed in this paper. Top row: Phase contrast tomograms of rabbit kitten head and lungs, no phase retrieval. Middle row: Tomograms of the same rabbit kitten head and lungs, after two-material phase retrieval \cite{Beltran2010, Croton2018}. Bottom row: Phase-retrieved tomograms of sample-free regions from rabbit kitten head and lung data sets. First and third columns: Standard dark-current and flat-field correction. Second and fourth columns: Beam sweep gain and offset correction.}
\label{fig:rings}
\end{figure}
\section{Results}
Figures \ref{fig:rings}(a)~-~\ref{fig:rings}(l) show reconstructed slices of the rabbit kitten head and lung CTs, both before and after phase retrieval, for standard dark-current and flat-field corrections and for the pixel-wise dark-current offset and gain correction described in section \ref{sec:correction}. Slices were also reconstructed for each data set, post-phase-retrieval, of a sample-free region containing only the sample container filled with agarose. In each of the data sets, the signal-to-noise ratio (SNR), measured as the ratio of the mean to the standard deviation (SNR~=~$\mu / \sigma$) \cite{Smith1997} within the region of interest, increased significantly in the area immediately surrounding the center of rotation (COR) after the correction was applied. We refer the reader to references \cite{Beltran2011,Nesterets2014,Gureyev2014,Kitchen2017,Gureyev2017} for further details regarding this SNR boost. The artifacts are strongest near the COR and fall off with increasing radius, since incorrect pixel values are spread across increasingly larger circumferences. In the innermost region, within a radius of 100 pixels from the COR, SNR increases of 38\% and 39\% were achieved with the correction for the phase-retrieved head and lung images shown in Fig. \ref{fig:rings}, respectively, and increases of up to 55\% were seen across the full sample-free volumes.
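The SNR measurement used here (mean over standard deviation within a disc about the COR) can be sketched as follows; the function name and arguments are illustrative:

```python
import numpy as np

def snr_near_cor(recon, cor, radius=100):
    # SNR = mean / std of the reconstructed values within a disc of the
    # given radius (in pixels) about the centre of rotation (row, col),
    # following the definition SNR = mu / sigma used in the text.
    rows, cols = np.ogrid[:recon.shape[0], :recon.shape[1]]
    disc = (rows - cor[0]) ** 2 + (cols - cor[1]) ** 2 <= radius ** 2
    vals = recon[disc]
    return vals.mean() / vals.std()
```

Comparing this value before and after the correction, over the same disc, gives the percentage SNR increases quoted above.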
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{fig6.pdf}
\caption{a) Azimuthally-averaged radial profiles for the sample-free region in Figs. \ref{fig:rings}(i) and \ref{fig:rings}(j). The decrease in the variance between the uncorrected and corrected images can be clearly seen, particularly at smaller radii, where the effects of the ring artifacts are strongest. b) The reconstruction region used for averaging (Hamamatsu ORCA Flash 4.0). Since the CT was acquired over 180$^{\circ}$, the artifacts for individual pixels occur only in the top or the bottom half of the image. c) Azimuthally-averaged radial profiles for the sample-free region in Figs. \ref{fig:rings}(k) and \ref{fig:rings}(l). d) The reconstruction region used for averaging (pco.edge 5.5). The images in (b) and (d) are the uncorrected tomograms.}
\label{fig:azavg}
\end{figure}
To better quantify the improvement, we define an image quality metric, the ring artifact suppression percentage (RASP), as the percentage reduction in the standard deviation ($\sigma$) of the azimuthally averaged radial profile from the COR in the sample-free region:
\makeatletter
\newcommand{\vBigg}{\bBigg@{6}}
\makeatother
\begin{equation}
\textrm{RASP} = \bigg(1 - \frac{\sigma_c}{\sigma_u}\bigg) \times 100\% = \vBigg(1 - \frac{\sqrt{\frac{1}{N}\sum\limits_{j=1}^{N} (x_{c,j} - \mu_{c,j})^2}}{\sqrt{\frac{1}{N}\sum\limits_{j=1}^{N} (x_{u,j} - \mu_{u,j})^2}}\vBigg) \times 100\%.
\end{equation}
Here, $c$ and $u$ denote the corrected and uncorrected images, respectively, $N$ is the total number of radial bins, $x_j$ is the azimuthally-averaged value within each bin, and $\mu_j$ is the mean value in each bin, determined by smoothing the radial profile with a median filter.
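A sketch of the RASP computation, assuming NumPy, integer-radius azimuthal bins, and a simple median filter for the per-bin means $\mu_j$ (bin count and kernel size are illustrative):

```python
import numpy as np

def radial_profile(img, cor, n_bins):
    # Azimuthally average img into integer-radius bins about the COR.
    rows, cols = np.ogrid[:img.shape[0], :img.shape[1]]
    r = np.sqrt((rows - cor[0]) ** 2 + (cols - cor[1]) ** 2).astype(int)
    return np.array([img[r == k].mean() for k in range(n_bins)])

def median_smooth(x, k=11):
    # 1-D running median with edge padding, giving the per-bin means mu_j.
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

def rasp(recon_u, recon_c, cor, n_bins, k=11):
    # Ring artifact suppression percentage: the percentage reduction in
    # the standard deviation of the detrended radial profile between the
    # uncorrected (recon_u) and corrected (recon_c) tomograms.
    sig = []
    for img in (recon_u, recon_c):
        x = radial_profile(img, cor, n_bins)
        sig.append(np.sqrt(np.mean((x - median_smooth(x, k)) ** 2)))
    return (1.0 - sig[1] / sig[0]) * 100.0
```

On a synthetic tomogram with a single ring whose amplitude is halved by the correction, this returns RASP = 50\%, as expected from the definition.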
Figures \ref{fig:azavg}(a) and \ref{fig:azavg}(c) show the radial profiles and RASP obtained for the sample-free regions from Figs. \ref{fig:rings}(i)-(l), and Figs. \ref{fig:azavg}(b) and \ref{fig:azavg}(d) show the regions used for averaging. The values obtained for these images are $\textrm{RASP} = 32.0\%$ for the ORCA Flash 4.0 used at SPring-8 and $\textrm{RASP} = 38.7\%$ for the pco.edge~5.5 used at the Australian Synchrotron. Averaging the RASP measurement over 100 consecutive slices within the sample-free regions gives values of $\textrm{RASP} = 24.9 \pm 6.4\%$ and $\textrm{RASP} = 29.4 \pm 9.3\%$, respectively.
To further visualize these improvements, Fig. \ref{fig:diffsino} shows the change in ring artifacts seen between the sinograms corresponding to the reconstructed tomograms of Fig. \ref{fig:rings}(e) and (f), where the rings manifest as vertical stripes. The uncorrected and corrected sinograms of Fig. \ref{fig:diffsino}(a) and (b), respectively, demonstrate the extent to which these artifacts are overwhelmed by the signal from the skull, which makes sinogram-filtering techniques less effective. The difference image in Fig. \ref{fig:diffsino}(c) shows the rings (vertical stripes) that have been removed by the response correction described herein as well as the temporal variations in intensity (horizontal stripes) due to beam injections over the \textasciitilde 3-minute duration of the CT acquisition.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{fig7.pdf}
\caption{a) The uncorrected sinogram of the rabbit kitten head tomogram of Fig. \ref{fig:rings}(e). b) The sinogram of the same rabbit kitten head after the response correction has been applied, corresponding to the tomogram of Fig. \ref{fig:rings}(f). Note that (a) and (b) look virtually identical, since ring artifacts that normally appear as stripes are overwhelmed by the signal from the skull. c) The difference between the sinograms in (a) and (b). The vertical stripes show the ring artifacts that have been removed. The horizontal stripes are caused by the additional correction that was applied to account for the time-variation in intensity due to beam injections (see step 8 in section 3).}
\label{fig:diffsino}
\end{figure}
\section{Conclusions}
We have presented a simple correction to account for small spatial variations in detector response that is both easy to implement and highly effective at removing CT ring artifacts. This method is applicable for both absorption contrast and phase contrast imaging, and artifacts are suppressed to a level low enough to enhance even very low-contrast features, such as the soft-tissue boundaries within the brain. This correction requires just a single series of images to be acquired while sweeping the detector through the X-ray beam, and the data need only be acquired once for each set of experiments with a common detector configuration. In addition, the algorithm should be applicable to both laboratory and synchrotron experiments alike, with some modifications required for a polychromatic source. This correction is effective on its own but can also easily be combined with other existing ring artifact removal methods for further improvement. We have also defined a ring artifact suppression metric that can be used to assess the quality of any ring artifact removal technique, and we have used this metric to quantify the improvements achieved with our method.
\section*{Funding}
Research Training Program (RTP) Scholarship; ARC Future Fellowship (FT160100454); Veski Victorian Postdoctoral Research Fellowship (VPRF); German Excellence Initiative and European Union Seventh Framework Program (291763); NHMRC Development Grant (1093319); International Synchrotron Access Program (ISAP) (AS/IA153/10571).
\section*{Acknowledgements}
The authors would like to thank Michelle Croughan for her careful proofreading of this manuscript. The synchrotron radiation experiments were performed at the beamline BL20B2 of SPring-8 with the approval of the Japan Synchrotron Radiation Research Institute (JASRI) (Proposal No. 2017B0132). Additional research was undertaken on the Imaging and Medical Beamline (IMBL) (Proposal No. AS181/IMBL/12893) at the Australian Synchrotron, part of ANSTO.
\section*{Disclosures}
The authors declare that there are no conflicts of interest related to this article.
\end{document}
The Chang model, introduced in \cite{Chang1971Sets-constructi}, is the
smallest model of ZF set theory which contains all countable sequences
of ordinals. It may be constructed as $L({^{\omega}\Omega})$,
that is, by imitating the recursive definition of the $L_{\alpha}$
hierarchy,
setting $\chang_{0}=\emptyset$ and
$\chang_{\alpha+1}=\DEF^{\chang_{\alpha}}(\chang_{\alpha})$, but
modifying the definition for limit ordinals $\alpha$ by setting
$\chang_{\alpha}=[\alpha]^{<\omega_1}\cup\bigcup_{\alpha'<\alpha}\chang_{\alpha'}$.
Alternatively it may be constructed, as did Chang, by replacing the
use of first order logic in the definition of $L$ with the
infinitary logic $L_{\omega_1,\omega_1}$.
We write $\chang$ for the Chang model.
Clearly the Chang model contains the set $\reals$ of reals, and hence is an
extension of $L(\reals)$. Kunen \cite{Kunen1973A-model-for-the} has shown
that the axiom of choice fails in the
Chang model whenever there are uncountably many measurable cardinals;
in particular the theory of $\chang$ may vary, even when the set of reals
is held fixed. We show that in the presence of sufficiently large
cardinal strength this is not true.
An earlier unpublished result of Woodin states that if
there is a Woodin limit of Woodin cardinals, then there is a sharp for the
Chang model. Our result is not strictly comparable to Woodin's,
since although ours
uses a much smaller cardinal, Woodin's notion of a sharp is
stronger, and his result gives the sharp for a stronger model.
Perhaps the most striking aspect of the new result is its
characterization of the size of the Chang model. Although the Chang
model, like $L(\reals)$, can have arbitrary large cardinal strength
coded into the reals, the large cardinal strength of $\chang$
relative to $L(\reals)$, even in the presence of large cardinals in
$V$, is at most $o(\kappa)=\kappa^{+\omega_1}+1$.
The next three definitions describe our notion of a sharp for $\chang$.
Following this definition and a formal statement of our theorem, we
will more specifically discuss the differences between our result and
that of Woodin.
As with traditional sharps, the sharp for the Chang model asserts the
existence of a closed, unbounded class $I$ of indiscernibles.
The conditions on $I$ are given in Definition~\ref{def:Csharp}, following
two preliminary definitions:
\begin{definition}
\label{def:suitable}
Say that a subset
$B$ of a closed class $I$ is \emph{suitable} if
\begin{inparaenum}[(a)]\item
$B$ is countable and closed,
\item every member of $B$ which is a
limit point of $I$ of countable cofinality is also a limit point
of $B$, and
\item $B$ is closed under immediate predecessors in $I$.
\end{inparaenum}
We say that suitable sets $B$ and $B'$ are \emph{equivalent} if they
have the same order type and, writing $\sigma\colon B\to B'$ for the order isomorphism,
$\forall\kappa\in B\;(\sigma(\kappa)\in\lim(I)\iff\kappa\in\lim(I))$.
\end{definition}
Note that if $B$ is suitable and $\beta'$ is the successor of $\beta$ in $B$, then
either $\beta'$ is the successor of $\beta$ in $I$, or else $\beta'$ is a limit
member of $I$ and $\cof(\beta')>\omega$.
Indeed clauses~(b) and~(c)
of the definition of a suitable sequence are equivalent to the
assertion that every gap in $B$, as a subset of $I$, is capped by a
member of $B$ which is a limit point of $I$ of uncountable cofinality.
\begin{definition}
\label{def:restrictedFormula}
Suppose that $T$ is a collection of constants and functions with
domain in $[\kappa]^{n}$ for some $n<\omega$. Write
$\mathcal{L}_{T}$ for the language of set theory augmented with symbols denoting the members of $T$. A \emph{restricted formula} in the language $\mathcal{L}_T$ is a
formula $\phi$ such that every variable occurring inside an argument of a function in $T$ is free in $\phi$.
\end{definition}
\begin{definition}
\label{def:Csharp}
We say that there is a \emph{sharp for the Chang model} $\chang$ if there
is a closed unbounded class $I$ of ordinals and a set $T$ of
functions having the following three properties:
\begin{enumerate}
\item
Suppose that $B$ and $B'$ are equivalent suitable sets, and let
$\phi(B)$ be a restricted formula. Then
\begin{equation*}
\chang\models \phi(B)\iff \phi(B').
\end{equation*}
\item
Every member of $\chang$ is of the form $\tau(B)$ for some
term $\tau\in T$ and some suitable sequence $B$.
\item
If $V'$ is any universe of ZF set theory such that $V'\supseteq V$
and $\reals^{V'}=\reals^{V}$ then, for all restricted formulas
$\phi$
\begin{equation*}
\chang^{V'}\models
\phi(B)\iff
\chang^{V}\models\phi(B).
\end{equation*}
for any $B\subseteq I$ which is suitable in both $V$ and $V'$.
\end{enumerate}
\end{definition}
Note, in clause~3, that $\chang^{V'}$ may be larger than
$\chang^{V}$.
A sequence $B$ which is suitable in $V$ may not be suitable in
$V'$, as a limit member of $B$ may have uncountable cofinality in $V$
but countable cofinality in $V'$. However the class $I$, as well as
the theory, will be the same in the two models.
The sharp defined here is somewhat provisional, as is suggested by the
gap between the upper and lower bounds in Theorem~\ref{thm:main}. The
major consequence of $0^\sharp$ which is shared by this notion of a
sharp is the existence of nontrivial embeddings of $\chang$:
\begin{proposition}\label{thm:sharpEmbed}
Suppose that $I$ is a class satisfying Definition~\ref{def:Csharp}
and $\sigma\colon I\to I$ is an increasing map which
\begin{myinparaenum}
\item is continuous at limit points of cofinality $\omega$, and for all $\kappa\in I$
\item $\sigma(\min(I\setminus(\kappa+1)))=\min(I\setminus(\sigma( \kappa)+1))$ and
\item $\sigma(\kappa)$ is a limit point of $I$ if and only if
$\kappa$ is a limit point of $I$.
\end{myinparaenum}
Then $\sigma$
can be extended to an elementary embedding $\sigma^*\colon\chang\to\chang$.\qed
\end{proposition}
Definition~\ref{def:Csharp} is not strong enough to imply the converse, that any
elementary embedding $\sigma^*\colon \chang\to\chang$ is generated by
some such map $\sigma\colon I\to I$, and it does not imply that the
embeddings $\sigma^*$
are unique. Note, for example, that if a sharp for $\chang$ is
given, according to Definition~\ref{def:Csharp}, by $I$ and $T$ then
$I'=\set{\kappa_{\omega_1\cdot\nu}\mid \nu\in\Omega}$
also satisfies the definition, using the set
$T'=T\cup\set{t_{\alpha}\mid\alpha<\omega_1}$ of terms where
$t_{\alpha}(\kappa_{\omega_1\cdot\nu})=\kappa_{\omega_1\cdot\nu+\alpha}$.
However, the restriction to $I'$ of the embedding $i^*\colon\chang\to\chang$ induced by the
embedding
$i\colon\kappa_{\omega_1\cdot\nu+\alpha}\mapsto\kappa_{\omega_1\cdot(\nu+1)+\alpha}$
does not satisfy the hypothesis of Proposition~\ref{thm:sharpEmbed}.
It is
likely that this deficiency will eventually be resolved by a
characterization of the ``minimal sharp", that is, of the weakest large
cardinal (or the smallest mouse) which yields a sharp
in the sense of Definition~\ref{def:Csharp}.
Recall that a traditional sharp, such as $0^{\sharp}$, may be viewed
in either of two different ways: as a closed and unbounded
class of indiscernibles which generates the full (class) model, or as
a mouse with a final extender on its sequence which is an ultrafilter.
From the first viewpoint, perhaps the most striking difference between
$0^{\sharp}$ and our sharp for $\chang$ is the need for external terms in order to
generate $\chang$ from the indiscernibles.
From the second viewpoint,
regarding the sharp as a mouse, the sharp for the Chang model involves two modifications:
\begin{enumerate}
\item
For the purposes of this paper, a \emph{mouse} will always be a
mouse over the reals, that is, an extender model of the form $J_{\alpha}(\reals)[{\cal E}]$.
\item The final extender of the mouse which represents the sharp of
the Chang model will be a proper extender, not an ultrafilter.
\end{enumerate}
It is still unknown how large the final extender must be. We show
that its length is somewhere in the range from $\kappa^{+(\omega+1)}$
to $\kappa^{+\omega_1}$, inclusive:
\begin{theorem}[Main Theorem]
\label{thm:main}
\begin{enumerate}
\item\label{item:main-lower} Suppose that there is no mouse
$M=J_{\alpha}(\reals)[{\cal E}]$ with a final extender $E={\cal
E}_{\gamma}$ with critical point $\kappa$ and length $\kappa^{+(\omega+1)}$ in $J_{\alpha}(\reals)[{\cal E}]$ such that $\cof^V(\len(E))>\omega$.
Then $K(\reals)^{\chang}$,
the core model over the reals as defined in
the Chang model, is an iterated ultrapower (without drops) of
$K(\reals)^{V}$; and hence there is no sharp for the
Chang model.
\item\label{item:main-upper}
Suppose that there is a model $L(\reals)[\mathcal{E}]$ which
contains all of the reals and has an extender $E$ of length
$(\kappa^{+\omega_1})^{L(\reals)[\mathcal{E}]}$, where $\kappa$ is the critical point of $E$.
Then
there is a sharp for $\chang$.
\end{enumerate}
\end{theorem}
This problem was suggested by Woodin in a conversation at the
Mittag-Lefler Institute in 2009, in which he observed that there
was an immense gap between the hypothesis needed for his sharp, and
easily obtained
lower bounds such as a model with a single measure.
At the time I conjectured that the same argument might show that any
extender model would provide a similar lower bound, but James
Cummings and Ralf Schindler, in the same conversation,
pointed out that Gitik's results suggest that it would fail at an
extender of length $\kappa^{+(\omega+1)}$.
I would also like to thank Moti Gitik, for suggesting his
forcing for the proof of clause~2 and
explaining its use.
I have generalized his forcing to add new sequences of arbitrary
countable length. I have also made substantial but, I believe,
inessential changes to the presentation; I hope that he will recognize
his forcing in my presentation. Many of the arguments in this
paper, indeed almost all of those which do not directly involve either the
generalization of the forcing or the application to the Chang model,
are due to Gitik.
\subsection{Comparison with Woodin's sharp}
Our notion of a sharp for $\chang$ differs from that of Woodin in
several ways. We will discuss them in roughly increasing order of importance.
\begin{compactenum}
\item
The
theory of our sharp can depend on the set of reals, while the theory of
Woodin's sharp does not; however this is due to the large
cardinals involved, rather than the definition of the sharp.
Woodin's proof that the theory of $L(\reals)$ is invariant under set
forcing also shows that the theory of our sharp stabilizes in the
presence of a class of Woodin cardinals.
\end{compactenum}
\smallskip
Two differences which might seem to be weaknesses in our model are
actually only differences in presentation.
\begin{compactenum}
\setcounter{enumi}{1}
\item
Woodin's indiscernibles are defined to be indiscernible in
the infinitary language $L_{\omega_1,\omega_1}$, whereas we use
only first order logic. However the two languages are
equivalent in this context: since $\chang$ is closed under countable sequences and
$\chang_{\alpha}\prec \chang$ whenever $\alpha$ is a member of the
class $I$ of indiscernibles, the existence of our sharp implies that
any formula of $L_{\omega_1,\omega_1}$ is equivalent to a formula of
first order logic having a parameter which is a countable sequence
of ordinals.
\item
For Woodin's sharp, any two subsequences of $I$ are indiscernible,
while for our sharp only ``suitable'' subsequences are considered.
The requirement of suitability could be eliminated by replacing $I$ with the class
of
limit points of $I$ of uncountable cofinality, and making a
corresponding addition to the class
$T$ of terms, but it seems that doing so would ultimately
lose information about the structure
of the sharp. This point is discussed further in
Subsection~\ref{sec:suitability-required}.
\end{compactenum}
\smallskip
The final two differences are significant. The first can probably
be removed, while
the second is basic and explains the difference in the hypotheses used:
\begin{compactenum}
\setcounter{enumi}{3}
\item
The notion of restricted formulas is entirely absent from
Woodin's results: he allows the terms from $T$ to be used as full elements of the
language. We believe that our need for restricted formulas is due to
the choice of terms and will eventually
be removed by a more complete analysis resolving the
question about the size of the minimal mouse needed to give a sharp for $\chang$.
If this conjecture turns out to be incorrect then its failure would be
a major weakness in our
notion of a sharp.
\item
Woodin has observed, in a personal communication, that his sharp
actually is a sharp for a much stronger model, namely the smallest model
which contains all countable sequences of ordinals and the
stationary filter on the set $\ps_{\omega_1}([\lambda]^{\omega})$
for every $\lambda$. Thus our constructions do not conflict, but instead
describe sharps for different models, and this
explains the difference in the hypotheses needed.
\end{compactenum}
Woodin has observed (private communication) that some of the gap
between the two sharps can
be filled by modifying the construction of this paper to use the least
mouse $M$
over the reals such that $M$ has infinitely many Woodin cardinals below the
extenders needed for the conclusion of
Theorem~\ref{thm:main}(\ref{item:main-upper}). This would
give a version of our sharp which can be coded by a set
$X\subseteq\reals$ having the following property: Suppose that $V'$ is
any inner model of $V$ such that $X\cap V'\in V'$. Then
$X\cap V'$ codes the corresponding sharp
for the Chang model of $V'$. Woodin regards this as the ``true
sharp''; however it seems that the better terminology would be to
regard this not as the analog of the sharp operator, but as the analog
of the $M_\omega$ mouse operator.
Future work, and the publication of Woodin's work on his sharp, will
be needed to better comprehend the possibilities of extensions of
sharps for Chang-like models in analogy with the
extended theory related to $0^{\sharp}$. At the same time, as points~3 and~4 above
make clear, further work is needed towards clarifying the basic notion
of a sharp for the Chang model as presented in this paper.
\subsection{Some basic facts about $\chang$}
\label{sec:basic-facts}
As pointed out earlier, the Axiom of Choice fails in $\chang$
if there are infinitely many measurable cardinals. However, the
fact that $\chang$ is closed under countable sequences implies that
the axiom of Dependent Choice holds, and this is enough to avoid most of the serious
pathologies which can occur in a model without choice.
For life without Dependent Choice, see for example
\cite{Gitik2012Violating-the-s}, which gives a model with surjective
maps from $\ps(\aleph_{\omega})$ onto an arbitrarily large cardinal
$\lambda$ without any need for large cardinals.
\begin{todoenv} {(7/25/14) --- idea --- A question: what can be said
about the covering lemma in the absence of choice. Does it apply
to this model?}
\end{todoenv}
The same argument that shows that every member of $L$ is ordinal definable implies that every member of
$\chang$ is definable in $\chang$ using a countable sequence of
ordinals as parameters.
In the proof of part~1 of Theorem~\ref{thm:main} we make use of
the core model $K(\reals)$ inside of $\chang$, and in the absence of the Axiom
of Choice this requires some justification. In large part the Axiom
of Choice can be avoided in the construction and theory of this core
model, since the core model itself is well ordered (after using countably complete forcing to map the reals onto $\omega$). However one
application of the Axiom of Choice falls outside of this situation:
the use of Fodor's pressing down lemma, the proof of which requires
choosing closed unbounded sets as witnesses that the sets where the
function is constant are all nonstationary. This lemma is needed in
the construction of $K(\reals)$ in order to prove that the comparison of pairs of mice by iterated ultrapowers always terminates.
However, this is not a problem
in the construction of $K(\reals)$ in $\chang$, as we can apply Fodor's lemma in the universe $V$, which satisfies the Axiom of Choice, to verify that all
comparisons terminate.
The proof of the covering lemma involves other uses of Fodor's lemma;
however we do not use the covering lemma.
\subsection{Notation}
\label{sec:notation}
We use generally standard set theoretic notation. We use $\Omega$
to mean the class of all ordinals, and frequently treat $\Omega$ itself
as an ordinal. If $h$ is a function, then we use $h[B]$ for the range of $h$ on $B$,
$h[B]=\set{h(b)\mid b\in B}$. We write $[X]^{\kappa}$ for the set of
subsets of $X$ of size $\kappa$.
In forcing, we use $p< q$ to mean that $p$ is stronger than $q$.
The notation $p\parallel\phi$ means that the
condition $p$ decides $\phi$, that is, either $p\Vdash \phi$ or
$p\Vdash\lnot \phi$.
If $P$ is a forcing order and $s\in P$, then we write $\below{P}{s}$
for the forcing below $s$, that is, the restriction of $P$ to $\set{t\in P\mid t\leq s}$.
If $E$ is an extender, then we write $\supp(E)$ for the support, or
set of generators, of $E$. Typically we take this to be the interval
$[\kappa,\len(E))$ where $\kappa$ is the critical point of $E$;
however we frequently make use of the restriction of $E$ to a
nontransitive\footnote{We regard $\supp(E)=[\kappa,\lambda)$ as
``transitive'' despite its omission of ordinals less than
$\kappa$. We could equivalently, but slightly less conveniently, use
$\supp(E)=\len(E)$.}
set of generators: that is, if $S\subseteq\supp(E)$ then
we write $E\ecut S$ for the restriction of $E$ to $S$, so $\ult(V,E\ecut
S)\cong\set{i^{E}(f)(a)\mid f\in V\land a\in[S]^{<\omega}}$. We
remark that $\ult(V,E\ecut S)=\ult(V,\bar E)$, where $\bar E$ is the
\emph{transitive collapse of $E\ecut S$}, that is, the extender
obtained from $E\ecut S$ by using the transitive collapse $\sigma\colon
[\kappa,\len(\bar E))\cong \supp(E)\cap
\set{i^E(f)(a)\mid a\in[S]^{<\omega}}$ and setting
the ultrafilter
$(\bar E)_{\alpha}=E_{\sigma^{-1}(\alpha)}$.
In cases where the $E\ecut S\notin M$ but the
transitive collapse $\bar E\in M$, we frequently describe
constructions as using $E\ecut S$ when the actual construction inside
$M$ must use $\bar E$. Such use will not always be explicitly
stated.
We write $\ufFromExt{E}{a}$ for the ultrafilter $\set{x\subseteq
H_{\crit(E)}\mid a\in i^E(x)}$.
We make extensive use of the core model over the reals, $K(\reals)$.
However we make no (direct) use of fine structure, largely because we
make no attempt to use the weakest hypothesis which could be treated
by our argument. The reader will need to be familiar with extender
models, but only those weaker than strong cardinal, that is, without the
complications of overlapping extenders and iteration trees. For
our purposes, a mouse will be an extender model $M=J_{\alpha}(\reals)[\mathcal{E}]$, where
$\reals$ is the set $\ps(\omega)$ of reals and $\mathcal{E}$ is a
sequence of extenders, and it generally can be
assumed to be a model of Zermelo set theory (and therefore equal to $L_{\alpha}(\reals)[\mathcal{E}]$).
The ultrafilters in a mouse $M$ over the reals, including those
appearing as components of an extender, are all complete over sets of
reals. That is, if $U$ is an ultrafilter and $f\colon
X\to\ps(\reals)$ for some $X\in U$ then there is a set $a\subseteq \reals$
such that $\set{x\in X\mid f(x)=a}\in U$. This implies the needed
instances of the Axiom of Choice:
\begin{proposition}
\label{thm:enoughAC}
Suppose that $U$ is an ultrafilter and $X\in U$. Then
\begin{enumerate}
\item there is a well orderable $X'\subseteq X$ such that
$X'\in U$,
and
\item if $f$ is a function such that $\set{x\in X\mid
f(x)\not=\emptyset}\in U$ then there is a function $g$ such that
$\set{x\in X\mid g(x)\in f(x)}\in U$.
\end{enumerate}
\end{proposition}
\begin{proof}
Every element of $M$ is ordinal definable from a real parameter. If
$x\in M$, then let $\phi_x$ be the least formula $\phi$, with ordinal
parameters, such that $(\exists r\in\reals)\forall
z\;(\phi(z,r)\iff z=x)$, and let $R_x=\set{r\in\reals\mid\forall
z\;(\phi_x(z,r)\iff z=x)}$. For the first clause, there is
$R\subseteq \reals$ such that $X'=\set{x\in X\mid R_x=R}\in U$.
Thus, if $r$ is any member of $R$, then every member of $X'$ is
ordinal definable from $r$.
The proof of the second clause is similar, using $R\subseteq \reals$
such that $\set{x\in X\mid \bigcup_{z\in f(x)}R_z = R}\in U$.
\end{proof}
If $M=J_{\alpha}(\reals)[\mathcal{E}]$ is a mouse then we write
$M{\vert}\gamma$ for $J_{\gamma}(\reals)[\mathcal{E}{\upharpoonright}\gamma]$,
that is, for the cut off of $M$ at $\gamma$ without including the
active final extender $\mathcal{E}_{\gamma}$ if there is one. This
is most commonly used as $N{\vert}\Omega$, where $N$
is the final model of an iteration of length $\Omega$ and $\Omega^{N}>\Omega$.
\section{The Lower bound}
\label{sec:lower}
The proof of Theorem~\ref{thm:main}(1), giving a lower bound to
the large cardinal strength of a sharp for the Chang model, is a
straightforward application of a technique
of Gitik (see the proof of Lemma~2.5 for $\delta=\omega$ in \cite{gitik-mitchell.ext-indisc}).
\begin{proof}[Proof of Theorem~\ref{thm:main}(1)]
The proof of the lower bound uses iterated ultrapowers to
compare $K(\reals)$ with $K(\reals)^{\chang}$. Standard methods show that
$K(\reals)^{\chang}$ is not moved in this comparison, so there is an
iterated ultrapower $\seq{M_\nu\mid \nu\le\theta}$, for some
$\theta\leq\Omega$, such that $M_0=K(\reals)$ and $M_\theta=K(\reals)^\chang$.
This iterated ultrapower is defined by setting
\begin{myinparaenum}\item
$M_\alpha=\dirlim\set{M_{\alpha'}\mid \alpha''<\alpha'<\alpha}$ for
sufficiently large $\alpha''<\alpha$ if $\alpha$ is a limit
ordinal, and
\item
$ M_{\alpha+1}=\ult(M^*_{\alpha},E_\alpha)$, where
$E_{\alpha}$ is the least extender in $M_{\alpha}$ which is not in
$K(\reals)^{\chang}$ and $M^*_{\alpha}$ is equal to $M_{\alpha}$ unless
$E_{\alpha}$ is not a full extender in $M_{\alpha}$, in which case
$M^*_{\alpha}$ is the largest initial segment of $M_\alpha$ in which
$E_{\alpha}$ is a full extender.
\end{myinparaenum}
We want to show that
\begin{myinparaenum}
\item
this does not drop, that is, $M^{*}_{\alpha}=M_{\alpha}$ for all $\alpha$, and
\item
$M_{\theta}=K(\reals)^{\chang}$.
\end{myinparaenum}
If either of these is false, then $\theta=\Omega$
and there is a closed unbounded class $C$ of ordinals $\alpha$ such that
$\crit(E_\alpha)=\alpha=i_{\alpha}(\alpha)$. Since $o(\kappa)<\Omega$ for all $\kappa$
it follows that there is a stationary class $S\subseteq C$ of ordinals of
cofinality $\omega$ such that $i_{\alpha',\alpha}(E_{\alpha'})=E_{\alpha}$ for all
$\alpha'<\alpha$ in $S$. Fix $\alpha\in S\cap\lim(S)$; we will
show that the hypothesis of
Theorem~\ref{thm:main}(1) implies that $E_\alpha\in\chang$, contradicting the choice of $E_\alpha$.
To this end, let $\vec\alpha=\seq{\alpha_n\mid n\in\omega}$ be an increasing sequence of ordinals in $S$ such that $\bigcup_{n\in\omega}\alpha_n=\alpha$. We call a sequence $\seq{\beta_n\mid n\in\omega}$ a \emph{thread} for the generator $\beta$ of $E_\alpha$ if $\beta_n=i^{-1}_{\alpha_n,\alpha}(\beta)$ for all sufficiently large $n<\omega$.
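In symbols, restating the definition just given, $\vec\beta=\seq{\beta_n\mid n\in\omega}$ is a thread for the generator $\beta$ of $E_\alpha$ precisely when
\begin{equation*}
(\exists m\in\omega)\,(\forall n\geq m)\quad \beta_n=i^{-1}_{\alpha_n,\alpha}(\beta).
\end{equation*}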
The technique of Gitik used in \cite[Lemma~2.3]{gitik-mitchell.ext-indisc} gives a formula $\phi$ such that $\phi(\vec\alpha,\vec\beta,\beta)$ holds if and only
if $\beta<\kappa_\alpha^{+\omega}$ and $\vec\beta$ is a thread for $\beta$.
Since all of the threads are in $\chang$, this implies that
$E_\alpha{\upharpoonright}\kappa_\alpha^{+\omega}\in\chang$. If
$\len(E_\alpha)<\kappa_\alpha^{+(\omega+1)}$ then this
construction can be extended to all of $E_\alpha$ by using
$\seq{i^{-1}_{\alpha_n,\alpha}(\len(E_\alpha))\mid n\in\omega}$ as an additional
parameter.
But the hypothesis of Theorem~\ref{thm:main}(\ref{item:main-lower}) implies that
$\len(E_\alpha)<(\kappa_\alpha^{+\omega})^\chang$, so
$E_\alpha\in\chang$, contradicting the definition of $E_\alpha$.
It follows that no sharp for $\chang$ exists, as otherwise the embedding given
by Proposition~\ref{thm:sharpEmbed} would make an iterated ultrapower of $K(\reals)$
non-rigid.
\end{proof}
\section{The upper bound}
\label{sec:upperbound}
The proof of Theorem~\ref{thm:main}(\ref{item:main-upper}) will take
up the rest of this paper except for the final
Section~\ref{sec:questions}, which poses some open questions.
The hypothesis of Theorem~\ref{thm:main}(\ref{item:main-upper}) is stronger than
necessary: our construction of the sharp for $\chang$ uses only a
sufficiently strong mouse over the
reals, that is, a model $M=J_{\gamma}(\reals)[\mathcal{E}]$
where $\mathcal{E}$ is an iterable extender sequence.
At this point we describe a general procedure for constructing a sharp from a mouse.
For this purpose we will assume that $M$ is a mouse satisfying the
following conditions:
\begin{myinparaenum}
\item $\card M=\card\reals$, definably over $M$, indeed
\item
there is an onto function $h\colon \reals\to M$ which is the
union of an increasing $\omega_1$-sequence of functions in $M$, and
\item $M$ has a last $(\kappa,\kappa^{+\omega_1})$-extender, $E\in M$.
\end{myinparaenum}
We can easily find such a mouse from the hypothesis of
Theorem~\ref{thm:main}(\ref{item:main-upper}) by choosing a model $N$ of
the form $J_{\gamma}(\reals)[\mathcal{E}]$ with the last two properties and
letting $M$ be the transitive collapse of the Skolem hull of
$\reals\cup\omega_1$ in $N$.
In Definition~\ref{def:Nsequence}, at the start of
section~\ref{sec:mainlemma},
we will make additional and
more precise assumptions on $M$ which are used in the proof of the
Main Theorem.
We remark that we could assume the Continuum Hypothesis by
generically adding a map $g$ mapping $\omega_1$ onto the reals.
Doing so would not add any new countable sequences and hence would not
affect the Chang model. Indeed we could use
$J_{\gamma}[g][\mathcal{E}]$ for the mouse $M$ instead of
$J_{\gamma}(\reals)[\mathcal{E}]$, so that $M$ satisfies the Axiom of
Choice and the Continuum Hypothesis, along with all of the properties we require of $M$.
We do not do so (though we will need to generically add such a
map $g$ near the end of the proof) but the reader certainly may, if
desired, assume that this has been done.
The following simple observation is basic to the construction:
\begin{proposition}\label{thm:Mcc}
The mouse $M$ is closed
under countable subsequences.
\end{proposition}
\begin{proof}
By the assumption (b) on $M$, any countable subset $B\subseteq M$ is
equal to $h[b]$ for some countable set $b\subseteq\reals$; since $b$ is
countable, $B=h'[b]$ for one of the functions $h'\in M$ whose
increasing union is $h$.
Since $M$ contains all reals, and
any countable set of reals can be coded by a single real, $b\in M$
and thus $B\in M$.
\end{proof}
As in the case of $0^{\sharp}$, we obtain the sharp for the Chang model by iterating the final extender $E$ out of the universe:
\begin{definition}\label{def:IFromM}
We write $i_{\alpha}\colon M_0=M\to M_{\alpha}=\ult_{\alpha}(M,E)$.
In particular $M_{\Omega}$ is the result of iterating $E$ out of the
universe, so that $i_{\Omega}(\kappa)=\Omega$.
Let $\kappa=\crit(E)$. We write
$\kappa_{\nu}=i_{\nu}(\kappa)$
and $I=\set{\kappa_\nu\mid\nu\in\Omega}$. We say that an
ordinal $\beta$ is a \emph{generator
belonging to $\kappa_{\nu}$} if $\beta=i_{\nu}(\bar\beta)$ for some
$\bar{\beta}\in[\kappa,\kappa^{+\omega_1})$.
\end{definition}
Note that the set of generators belonging to $\kappa_\nu$ is a subset
of $\supp(i_{\nu}(E))$, that is, it is a set of generators for the
extender $i_{\nu}(E)$ on $\kappa_\nu$ in $M_\nu$.
Every member of $M_{\Omega}$ is equal to
$i_{\Omega}(f)(\vec \beta)$ for some function $f\in M$ with domain
${^{\card{\vec\beta}}\kappa}$ and some finite sequence $\vec \beta$ of
generators for members of $I$.
The following observation follows from this fact together with
Proposition~\ref{thm:Mcc}:
\begin{proposition}\label{thm:countable-sets-generators}
Suppose that $N\supseteq M_{\Omega}{\vert}\Omega$ is a model of set
theory which contains
all countable sets of generators. Then $\chang^{N}=\chang$.
\end{proposition}
\begin{proof}
It is sufficient to show that $N$ contains all countable sets of
ordinals, but that is immediate since every countable set $B$ of
ordinals has the form
$B=\set{i_{\Omega}(f_{n})(\vec\beta_n)\mid n\in\omega}$, where each
$f_n$ is a function in $M$ and each $\vec\beta_n$ is a finite
sequence of generators. Since the sequence $\seq{f_n\mid n\in\omega}$ is
in $M\subseteq N$ by Proposition~\ref{thm:Mcc}, the sequence
$\seq{i_{\Omega}(f_n){\upharpoonright}\lambda\mid n\in\omega}\in
M_{\Omega}{\vert}\Omega\subseteq N$ for
$\lambda>\sup\bigcup_{n\in\omega}\vec\beta_n$, and the sequence
$\seq{\vec\beta_n\mid n\in\omega}$ is in $N$ by assumption. Thus $B\in
N$.
\end{proof}
Clearly the class $I$ gives a sharp for the model $M_\Omega{\vert}\Omega$
in the sense of Definition~\ref{def:Csharp} (with suitable sequences
from $I$
replaced by finite sequences), but it is not at all clear that $I$
gives a sharp for $\chang$ as well.
We show starting in
Section~\ref{sec:proof-start} that it does give a sharp when defined
using the mouse specified there.
\begin{conjecture}\label{thm:optimalmouseconjecture}
If $M$ is the minimal mouse for which this procedure yields a sharp
for $\chang$, then the core model $K(\reals)^{\chang}$ of the Chang
model is given by
an iteration $k$, without drops, of $M_{\Omega}{\vert}\Omega$.
\end{conjecture}
This mouse $M$ (which we will refer to as the ``optimal'' mouse) would
then give ``the'' sharp for $\chang$. A verification of this
conjecture would presumably determine the correct large cardinal strength of the sharp, and
remove some of the weaknesses which have been remarked on in our results.
\subsection{Why is suitability required?}
\label{sec:suitability-required}
Two major weaknesses of the results of this
paper were pointed out earlier: the need for restricted formulas and suitable sequences. We expressed the hope that the need for restricted
formulas will be eliminated by strengthening these results to use
the minimal mouse. In this subsection we make a brief digression to
look at the question of suitability. Nothing in this subsection is
required for the proof of
Theorem~\ref{thm:main}(\ref{item:main-upper}) and nothing in this
subsection will be referred to again except for the statement of
Theorem~\ref{thm:modified-suitable}.
Say that a mouse $M$ is \emph{correct for the Chang model} if there is
an iteration $k\colon M_{\Omega}\to K(\reals)^{\chang}$, without drops, such
that $k[\kappa_{\nu}]\subset\kappa_{\nu}$ for all $\nu\in\Omega$ and
$k(\kappa_\nu)>\kappa_\nu$ for all $\nu\in\Omega$ of uncountable
cofinality.
Such a mouse must be the minimal mouse which is not a member of $\chang$, since
otherwise the minimal such mouse would be a member of $M$ and the
iteration $k$ would either drop or go beyond $\Omega$. The converse is
not known, but it seems probable that the minimal mouse is correct and that $k{\upharpoonright}
I=\set{(\kappa_\nu,k(\kappa_\nu))\mid\nu\in\Omega}$ is a
class of indiscernibles for $\chang$.
Now suppose that $M$ is correct for $\chang$, and
say that a sequence $\vec\alpha$ is \emph{Prikry for $\vec \beta$} if each is an
increasing $\omega$-sequence and there is a sequence of measures $U_n\in M_{\Omega}$ on
$\beta_n$ such that $\vec\alpha$ satisfies the Mathias genericity
condition: for all $x\subseteq\sup(\vec\beta)$
in $M_{\Omega}$,
for all but finitely many $n\in\omega$, we have $\alpha_n\in x$ if and
only if $x\cap\beta_n\in U_n$.
Note that we are not asserting here that $\vec\alpha$ is actually generic
over $M_{\Omega}$, as neither $\vec\beta$ nor the sequence of measures
need be in $M_\Omega$.
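In symbols, with the witnessing measures $\seq{U_n\mid n\in\omega}$ as above, the Mathias condition on $\vec\alpha$ reads
\begin{equation*}
\bigl(\forall x\in\ps(\sup(\vec\beta))\cap M_{\Omega}\bigr)\,(\forall^{\infty} n\in\omega)\;\bigl(\alpha_n\in x\iff x\cap\beta_n\in U_n\bigr),
\end{equation*}
where $\forall^{\infty}n$ abbreviates ``for all but finitely many $n$''.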
We write $\vec\lambda<^*\vec \eta$ if $\lambda_n<\eta_n$ for all but
finitely many $n$.
\begin{proposition}
Suppose $\vec\nu$ and $\vec\mu$ are
increasing $\omega$-sequences of ordinals with
$\vec\nu<^*\vec\mu$ and $\sup(\vec{\nu})=\sup(\vec\mu)$. Then
$\seq{k(\kappa_{\nu_n})\mid n\in\omega}$ and
$\seq{\kappa_{\nu_n+1}\mid n\in\omega}$ are each Prikry for
$\seq{k(\kappa_{\mu_n})\mid n\in\omega}$. Furthermore,
no sequence $\vec\alpha$ in the interval
$\seq{k(\kappa_{\nu_n})\mid n\in\omega}
<^*\vec\alpha<^*\seq{\kappa_{\nu_n+1}\mid n \in\omega}$
is Prikry for $\seq{k(\kappa_{\mu_n})\mid n\in\omega} $.
\end{proposition}
\begin{proof}
To see that $\seq{\kappa_{\nu_n+1}\mid n\in\omega}$ is Prikry for
$\seq{k(\kappa_{\mu_n})\mid n\in\omega}$, use $U_n=k\circ
i_{\nu_n+1,\mu_n}(U'_n)$ where
$U'_n=\set{x\subseteq\kappa_{\nu_n+1}\mid \kappa_{\nu_n+1}\in k(x)}$.
To see that $\seq{k(\kappa_{\nu_n})\mid n\in\omega}$ is Prikry for
$\seq{k(\kappa_{\mu_n})\mid n\in\omega}$, use $U_n=k\circ
i_{\mu_n}(\ufFromExt{E}{\kappa})$.
For the final sentence, observe that $\seq{k\circ
i_{\Omega}(f)(k(\kappa_{\nu}))\mid f\in M}$ is cofinal in
$\kappa_{\nu+1}$ for all $\nu\in\Omega$. It follows that if
$\seq{k(\kappa_{\nu_n})\mid n\in\omega}
<^*\vec\alpha<^*\seq{\kappa_{\nu_n+1}\mid n \in\omega}$
then there is a function $f\in M$ such that
$k\circ i_{\Omega}(f)(k(\kappa_{\nu_n}))>\alpha_n$ for all $n\in\omega$ such
that $\alpha_n< \kappa_{\nu_n+1}$, so
$x=\set{\nu\mid(\exists\nu'<\nu) \;k\circ i_{\Omega}(f)(\nu')>\nu}$ witnesses that
$\vec\alpha$ is not Prikry for $\seq{k(\kappa_{\mu_n})\mid n\in\omega}$.
\end{proof}
\begin{corollary}
\label{thm:counterexample}
Suppose that $B$ and $B'$ are two countable closed subsets of $I$
such that for all formulas $\phi$
of set theory (with no extra terms) $\chang\models\phi(k{\upharpoonright}
B)\iff\phi(k{\upharpoonright} B')$.
Then, writing $B=\seq{\kappa_{\nu_{\xi}}\mid\xi<\alpha}$ and
$B'=\seq{\kappa_{\nu'_{\xi}}\mid\xi<\alpha'}$, we have
$\alpha=\alpha'$, $(\forall\xi<\alpha)\;(\cof(\nu_\xi)=\omega\iff\cof(\nu'_{\xi})=\omega)$,
and
for all but finitely many $\xi<\alpha$
\begin{enumerate}
\item $\nu_{\xi+1}=\nu_{\xi}+1$ if and only if
$\nu'_{\xi+1}=\nu'_{\xi}+1$, and
\item $\nu_{\xi}$ is a limit ordinal if and only if $\nu'_{\xi}$
is
a limit ordinal.
\end{enumerate}
\end{corollary}
\begin{proof}
Only the two numbered assertions are problematic. For the first
assertion, suppose to the contrary that $\seq{\xi_n\mid n\in\omega}$
is an increasing subsequence of $\alpha$ such that
$\nu_{{\xi_n}+1}=\nu_{{\xi_n}}+1$ but
$\nu'_{{\xi_{n}}+1}>\nu'_{\xi_n}+1$.
Let $\phi(k{\upharpoonright} B)$ be the formula asserting that there is no
sequence $\vec\alpha$ which is Prikry for
$\seq{k(\kappa_{\nu_{\xi_n}+1})\mid n\in\omega}$ such that
$\seq{\kappa_{\nu_{\xi_n}}\mid
n\in\omega}<^*\vec\alpha<^*\seq{\kappa_{\nu_{\xi_n+1}}\mid n\in\omega}$. Then $\phi$
is true of $B$ but false of $B'$.
For the second assertion, observe that
$\nu_{\xi_n}$ is a limit ordinal for all but finitely many
$n\in\omega$ if and only if
there are $<^*$-cofinally many
sequences $\vec\alpha<^* \seq{\kappa_{\nu_{\xi_n}}\mid n\in\omega}$
which are Prikry for $\seq{k(\kappa_{\nu_{\xi_n}})\mid n\in\omega}$.
\end{proof}
On its face this Corollary is vacuous: it applies only to (and only
conjecturally to) the optimal sharp for the Chang model, which itself
only conjecturally exists. However it is an important motivation
for the technique we use to prove the Main Theorem and gives important
information about the structure of the sharp of the Chang model.
First, the \emph{gaps} in a sequence $B$, that is, the maximal
intervals of $I\setminus B$, are important. Second, (assuming as we
do that no gaps have a least upper bound of cofinality $\omega$) the
only important characteristic of the gaps is whether their upper bound
is a limit point or a successor point of $I$. Finally, individual
gaps are not important---only
infinite sets of gaps.
Indeed, in Subsection~\ref{sec:finite-exceptions} we will outline a
proof of Theorem~\ref{thm:modified-suitable} below, which strengthens
Theorem~\ref{thm:main}(\ref{item:main-upper}) to show that the class
$I$ of indiscernibles given by the proof of that theorem satisfies
the converse of the conclusion of
Corollary~\ref{thm:counterexample}.%
\begin{definition}
\label{def:weaklySuitable}
Call a sequence $B\subseteq I$ \emph{weakly suitable} if $B$
is countable and closed, and
$B\cap\lambda$ is unbounded in $\lambda$ whenever $\lambda\in B$
and $\cof(\lambda)=\omega$.
Suppose that $B=\seq{\lambda_\nu\mid \nu<\alpha}$ and
$B'=\seq{\lambda'_\nu\mid \nu<\alpha'}$, enumerated in increasing
order, are weakly suitable. We say that $B$ and $B'$ are
\emph{equivalent} if $\alpha=\alpha'$,
$(\forall\nu<\alpha)\;(\cof(\lambda_\nu)=\omega\iff\cof(\lambda'_\nu)=\omega)$,
and with at most finitely many exceptions the following hold for all $\nu<\alpha$:
\begin{myinparaenum}
\item $\lambda_{\nu+1}=\min(I\setminus(\lambda_\nu+1))$ if and only
if $\lambda'_{\nu+1}=\min(I\setminus(\lambda'_{\nu}+1))$, and
\item $\lambda_\nu$ is a limit member of $I$ if and only if
$\lambda'_\nu$ is a limit member of $I$.
\end{myinparaenum}
\end{definition}
\begin{theorem}\label{thm:modified-suitable}
If $B$ and $B'$ are equivalent weakly suitable sequences then
$\chang\models\phi(B)\iff\phi(B')$ for any restricted formula $\phi$
in our language.
\end{theorem}
\subsection{Definition of the set $T$ of terms.}
The next definition gives the set of terms we will use to construct the sharp.
This list should be regarded as preliminary, as a better understanding
of the Chang model will undoubtedly suggest a more felicitous choice.
\begin{definition}
\label{def:Istardef}\label{def:terms}
The members of the set $T$ of terms of our language for the sharp of $\chang$ are those
obtained by compositions
of the following set of basic terms:
\begin{enumerate}
\item \label{item:CfType}For each function $f\colon {^n\kappa}\to
\kappa$ in $M$ for some $n\in\omega$,
there is a term $\tau$ such that $\tau(z)=i_{\Omega}(f)(z)$ for
all $z\in {^n\Omega}$.
\item\label{item:CindiscBelong} For each $\bar\beta$ in the interval
$\kappa\leq\bar\beta<(\kappa^{+\omega_1})^{M}$ there is a term $\tau$ such
that $\tau(\kappa_{\nu}) = i_{\nu}(\bar\beta)$ for all $\nu\in\Omega$.
\item\label{item:Cinfinite} Suppose
$\seq{\tau_n\mid n\in\omega}$ is an $\omega$-sequence of compositions of terms from the previous two
cases, and $\domain(\tau_n)\subseteq{^{k_n}\Omega}$. Then
there is a term $\tau$ such that $\tau(\vec a)=\seq{\tau_n(\vec
a{\upharpoonright} k_{n})\mid n\in\omega}$ for all $\vec
a\in{^{\omega}\Omega}$.
\item \label{item:CsuccType}
For each formula $\phi$, there is a term $\tau$
such that if $\iota$ is an ordinal and $y$ is a countable sequence
of terms for members of $\chang_{\iota}$ then
\begin{equation*}
\tau(\iota, y)=
\xset{x\in \chang_{\iota}}{\chang_{\iota}\models
\phi(x, y)}.
\end{equation*}
\end{enumerate}
\end{definition}
\begin{proposition}
\label{thm:terms-suffice}
For each $z\in\chang$ there is a term $\tau\in M$ and a suitable
sequence $B$ such that $\tau(B)=z$.
\end{proposition}
\begin{proof}
First we observe that any ordinal $\nu$ can be written in the form
$\nu=i_{\Omega}(f)(\vec \beta)$ for some $f\in M$ and finite sequence
$\vec \beta$ of generators. Each generator $\beta$ belonging to
some $\kappa_\xi\in I$ is equal to $i_{\xi}(\bar \beta)$ for some
$\bar\beta\in\left[\kappa,(\kappa^{+\omega_1})^{M}\right)$,
and thus is denoted by a term $\tau(\kappa_\xi)$
built from
clause~\ref{item:CindiscBelong}. Thus any finite sequence of
ordinals is denoted by an expression using terms of
type~\ref{item:CfType} and~\ref{item:CindiscBelong}. Since $M$ is closed under
countable sequences, adding terms of type~\ref{item:Cinfinite} adds
in all countable sequences of ordinals.
Finally, any set $x\in\chang$ has the form
$\set{x\in\chang_{\iota}\mid \chang_{\iota}\models \phi(x,y)}$ for some
$\iota,\phi$ and $y$ as in clause~\ref{item:CsuccType}. Thus a
simple recursion on $\iota$ shows that every member of $\chang$ is
denoted by a term from clause~\ref{item:CsuccType}.
\end{proof}
The terms of clause~\ref{item:CindiscBelong} force the limitation
to restricted formulas in
Theorem~\ref{thm:main}(\ref{item:main-upper}), since the domain of these terms is exactly the
class $I$ of indiscernibles. It is possible that a more natural set
of terms would enable this restriction to be removed, but this would
depend on a precise understanding of the iteration $k\colon M_\Omega\to
K(\reals)^{\chang}$ from Subsection~\ref{sec:suitability-required}.
Proposition~\ref{thm:terms-suffice} actually exposes a probable
weakness in our current state of understanding of the Chang model. This proposition
corresponds to the property of $0^{\sharp}$ that every ordinal $\alpha$
is definable using as parameters members of the class $I$ of
indiscernibles. In the case of $0^{\sharp}$ this is only true if
the parameters are allowed to include members of $I\setminus(\alpha+1)$.
In contrast, Proposition~\ref{thm:terms-suffice} says that $\alpha$
is always denoted by a term $\tau(B)$ with
$B\in[I\cap(\alpha+1)]^{\omega}$. Possibly a more polished set of
terms, obtained through a more careful
analysis of the fine structure of the models and the iteration
$k$, would yield definability
properties more like those of $0^{\sharp}$.
\subsection{Outline of the proof}
\label{sec:proof-start}
Proposition~\ref{thm:countable-sets-generators} suggests a possible
strategy for the proof of
Theorem~\ref{thm:main}(\ref{item:main-upper}): find a generic
extension of $M_{\Omega}{\vert}\Omega$ which contains all
countable sequences of generators. There are good reasons
why this is likely to be impossible, beginning with the problem of
actually constructing a generic set for a class-sized model.
Beyond that, many of the known forcing constructions used to add countable sequences of
ordinals require large cardinal strength far stronger than that
assumed in the hypothesis of Theorem~\ref{thm:main}, and give models with
properties which are known to imply the existence of submodels having strong
large cardinal strength.
However, two considerations suggest that this last problem may be less
serious than it first appears. First, there can be much more large cardinal strength in
the Chang model than is apparent from the actual extenders present in $K(\reals)^{\chang}$,
since much of the large cardinal strength in $V$ is
encoded in the set of reals. Second,
many properties known to imply large cardinal strength
are false in the Chang model not because of the lack of such
strength, but because of the failure of the
Axiom of Choice.
Results involving the size of the power set of singular cardinals, for
example, are irrelevant to the Chang model since the power set is not
(typically) well ordered there.
We avoid the problem of constructing generic extensions for class-sized
models by working with submodels generated by countable subsets of $I$,
and we find that in fact none of the large cardinal structure in $V$
survives the passage to the Chang model beyond that given in the
hypothesis to Theorem~\ref{thm:main}.
\begin{definition}
If $B\subseteq I$ and $\gen_B$ is the set of generators belonging to members of $B$ then we write
\begin{equation*}
M_{B} = \{i_{\Omega}(f)(b)\mid f\in M \land b\in[\gen_B]^{<\omega}\}.
\end{equation*}
If $B$ is closed, and in particular if it is suitable, then we write $\chang_{B}$ for the Chang model
evaluated using the ordinals of $M_B{\vert}\Omega$ and all countable
sequences of these ordinals.
\end{definition}
Note
that $M_B$ is not transitive: it is a submodel of $M_{\Omega}$, and
$i_{\Omega}: M\to M_\Omega$ is
the canonical embedding $M\to M_B$ for any $B\subseteq I$. It is not
obvious even that the model $\chang_B$ can be regarded as a subset of
$\chang$; the proof of this is a part of the proof of the main lemma.
The definition of $\chang_{B}$ does imply that if $B$ and $B'$ are
closed subsets of $I$ with the same
order type then $\chang_{B}\cong
\chang_{B'}$. In particular, if $\otp(B)=\alpha+1$ then, setting
$B(\alpha+1)=\set{\kappa_\nu\mid \nu<\alpha+1}$,
$\chang_{B}\cong \chang_{B(\alpha+1)}$, which in turn is equal to the
$\kappa_{\alpha+1}$st stage $\chang_{\kappa_{\alpha+1}}$ of the
recursive definition of the Chang model as stated at the beginning of
this paper.
The motivation for our work begins with the observation that $M_{B}{\vert}
\Omega\prec M_{B'}{\vert}\Omega\prec M_{\Omega}{\vert}\Omega$ whenever
$B\subseteq B'\subseteq I$.
Corollary~\ref{thm:counterexample} refutes any suggestion that this
necessarily extends to the models $\chang_B$ and $\chang_{B'}$;
however, it also motivates
Definition~\ref{def:limitsuitable} below.
Corollary~\ref{thm:counterexample} says that we must take account of the gaps in $B$.
To be precise, we will say that a \emph{gap} in $B$ is a maximal
nonempty
interval in $I\setminus B$. For $B$ either suitable or limit suitable{},
every gap in $B$ is headed by a limit point $\delta$ of
$I$ which is a member of $B\cup\sing{\Omega}$ and has uncountable cofinality.
\begin{definition}
\label{def:limitsuitable}
A subset $B$ of $I$ is \emph{limit suitable} if
\begin{myinparaenum}
\item\label{item:LS-suitable} its closure $\bar B$ is suitable, and
\label{item:LS-gaps} every gap in $B$ is an interval of the
form $[\lambda,\delta)$ where
\item $\delta$ is either $\Omega$ or a member of $B$ which is a limit
point of $I$ of uncountable cofinality,
\item if $\lambda\not=\emptyset$, then $\lambda=\sup(\sing{0}\cup
B\cap\delta)$, and
\item\label{item:LS-top}
$\lambda=\kappa_{\nu+\omega}$ for some $\nu\in\Omega$.
\end{myinparaenum}
Two limit suitable{} sets $B$ and $B'$ are said to be \emph{equivalent} if they have
the same order type and they have gaps in the same locations.
For a limit suitable\ set $B$, which is never closed (except for
$B=\emptyset$), we write
$\chang_{B}=\bigcup\set{\chang_{B'}\mid B'\subset B\land B'\text{ is suitable}}$.
That is, for a limit suitable{} set $B$ the model is constructed, just as
$\chang_{B'}$ is for suitable $B'$, over the
(nontransitive) set of ordinals of $M_B$, but using only those
countable sets of ordinals which are in $\chang_{B'}$ for some
suitable $B'\subset B$.
\end{definition}
The use of $\kappa_{\nu+\omega}$ in the final Clause~(\ref{item:LS-top}) is for
convenience: our arguments would still be valid if it were only required
that $\lambda$ be a limit member of $I$ of countable cofinality which
is not a member of $B$.
Note that if $B$ is a limit suitable\ sequence then $\chang_{B}$ is not closed
under countable sequences; in particular $B$ is not a member of $\chang_B$.
Thus if $\delta$ is the head of a gap of $B$ then $\chang_{B}$
believes (correctly) that $\delta$ has
uncountable cofinality.
Theorem~\ref{thm:main}(\ref{item:main-upper}) will follow from the following
lemma:
\begin{lemma}[Main Lemma]
\label{thm:mainlemma}
Suppose $B\subset I$ is limit suitable. Then
$\chang_{B}$ is isomorphic to an elementary substructure of $\chang$
via the map defined by
$\tau^{\chang_B}(\vec\beta)\mapsto\tau^\chang(\vec\beta)$ for any
term $\tau\in T$ and any $\vec \beta$ which is a countable sequence of
generators for members of some suitable $B'\subset B$.
\end{lemma}
The elementarity holds for all restricted formulas. The proof will be by an induction over pairs
$(\iota,\phi)$, where $\iota\in M_B\cap (\Omega+1)$, and $\phi$ is a
formula of set theory; and the induction hypothesis implies that
the map
\begin{equation*}
\set{z\in\chang_\iota^{\chang_B}\mid \chang_\iota^{\chang_B}\models\phi(z,\vec\beta)}\mapsto
\set{z\in\chang_\iota\mid\chang_\iota\models\phi(z,\vec\beta)}
\end{equation*}
is well defined.
To see that Lemma~\ref{thm:mainlemma} suffices to prove
Theorem~\ref{thm:main}(\ref{item:main-upper}), observe that
any suitable set $B$ can be extended to a limit suitable\ set defined by the equation
\begin{equation*}
B'=B\cup\set{\kappa_{\nu+n}\mid \kappa_{\nu}\in B\land n\in\omega},
\end{equation*}
that is, by adding the next $\omega$-sequence from $I$ at the foot
of each gap of $B$ and to the top of $B$. Now
let $B_0$ and $B_1$ be two equivalent suitable sets. Then their limit suitable\
extensions $B'_0$ and $B'_1$ are also equivalent, having the same
order type and having gaps in the corresponding places, so
$\chang_{B'_0}\cong \chang_{B'_1}$. Then for any restricted formula
$\phi$ we have
\begin{align*}
\chang\models \phi(B_0)&\iff \chang_{B'_0}\models\phi(B_0)\\
&\iff \chang_{B'_1}\models\phi(B_1)\iff \chang\models\phi(B_1).
\end{align*}
\section{The Proof of the Main Lemma}
\label{sec:mainlemma}
At this point we fix a mouse $M$ to be used for
the proof of the Main Lemma~\ref{thm:mainlemma}. Some basic
properties of $M$ have already been
sketched at the start of Section~\ref{sec:upperbound}, and
Definition~\ref{def:Nsequence} below gives more specific
requirements.
For this section, $B\subseteq I$ is a limit suitable sequence and $\zeta=\otp(B)$.
The main tool used for the proof is the forcing $P(\vec
E{\upharpoonright}\zeta)\mgkeq$, to be defined inside $M$, and a
$M_B$-generic set $G\subseteq i_{\Omega}(P(\vec
E{\upharpoonright}\zeta)\mgkeq)$ to be constructed inside $V[h]$ for a
generic Levy collapse map
$h\colon\omega_1\cong \reals$. The model $M_B[G]$ will include all
its countable subsets, and $\chang_{B}$ will be definable as a
submodel of $M_B[G]$.
The
forcing is essentially due to Gitik (see, for example,
\cite{Gitik2002Blowing-up-powe}) and the technique for constructing
the $M_B$-generic set $G$ is from Carmi Merimovich
\cite{Merimovich2007Prikry-on-exten}.
Gitik's forcing was designed to make the Singular Cardinal Hypothesis
fail at a cardinal of cofinality
$\omega$ by adding many Prikry sequences, each of which is (in our
context) a sequence of generators for cardinals in $B$. Thus it
would do what we need for the case when
$\otp(B)=\omega$, but needs to be adapted to work for sequences $B$
of arbitrary countable length. To this end we modify
Gitik's forcing by using ideas introduced by Magidor in
\cite{magidor.changecf} to adapt
Prikry forcing in order to add sequences of indiscernibles of
length longer
than $\omega$. This adds some complications to Gitik's forcing, but
on the other hand much of the complication of Gitik's
work is avoided since we do not need to know whether cardinals in
the interval $(\kappa^{+}, \kappa^{+\omega_1})$ are collapsed, and
hence we can omit his preliminary forcing.
Our forcing is based on a sequence $\vec E$ of extenders, derived from the
last extender $E$ of $M$. We begin by defining this sequence, and at the
same time specify what properties we require of the chosen mouse $M$.
\begin{definition}\label{def:Nsequence}
We define an increasing sequence, $\seq{N_\nu\mid \nu<\omega_1}$
of submodels of $M$.
We write $E_{\nu}$ for $E\ecut N_{\nu}$,
the restriction of $E$ to the ordinals in $N_{\nu}$,
we write $\pi_{\nu}\colon \bar N_\nu\to N_\nu$ for the Mostowski
collapse of $N_\nu$, and we write $\bar E_\nu$ for
$\pi_{\nu}^{-1}[E_\nu]=\pi_{\nu}^{-1}(E)\ecut\bar N_\nu$.
We require that the $\reals$-mouse $M$ and the sequence
$\seq{N_\nu\mid \nu<\omega_1}$ satisfy the following conditions:
\begin{compactenum}
\item $M$ is a model of Zermelo set theory such that
$\reals\subset M$,
$\card{M}=\card{\reals}$, and
$\cof(\Omega\cap M)=\omega_1$.
\item $\len(E)=(\kappa^{+\omega_1})^{M}$.
\item If $\nu'<\nu<\omega_1$ then $(N_{\nu'},E_{\nu'})\prec (N_{\nu},E_{\nu})\prec
(M,E)$.
\item $^{\kappa}N_{\nu}\cap{ M}\subseteq N_{\nu}$.
\item\label{item:cardNnuSubsetNnu}
$\card {\bar N_\nu}^{M}\subset N_\nu$.
\item\label{item:Nseq-doublepluss} $\card {\bar
N_0}^{M}=(\kappa^{++})^{M}$, and if $\nu>0$ then
$\card {\bar N_\nu}^{M}=\sup_{\nu'<\nu}(\card{\bar N_{\nu'}}^{++})^{M}$.
\item \label{item:Eseq-MIsUnionN}
$M=\bigcup_{\nu<\omega_1}N_{\nu}$.
\end{compactenum}
\end{definition}
Clauses~\ref{item:cardNnuSubsetNnu} and~\ref{item:Nseq-doublepluss} are needed for the proof of
Proposition~\ref{thm:limitSuitableC_Bdefinable}.
We will work primarily with the extenders $E_{\nu}$ rather than with their
collapses $\bar E_{\nu}$, because this makes it easier to keep track
of the generators. However it should be noted that $E_{\nu}$ may
not be a member of $\ult(M,E)$,
so further justification is needed for many of the claims we
wish to make about being able to
carry out constructions inside $M$. Since we never actually use
more than countably many of the extenders $E_{\nu}$ at any one time,
the following observation will provide such justification:
\begin{proposition}\label{thm:EnuinM} The following are all members
of $\ult(M,E_{\nu})$, for any $\nu<\omega_1$:
\begin{itemize}
\item \label{item:Nseq-powerset}$\ps(\bigcup_{\nu'<\nu}\bar N_{\nu'})$
\item the extender $\bar E_{\nu'}$, and the map $\pi_{\nu''}^{-1}\circ \pi_{\nu'}\colon
\supp(\bar E_{\nu'})\to\supp(\bar E_{\nu''})$, for each $\nu'<\nu''<\nu$
\item the direct limit of the set $\set{\supp(\bar E_{\nu'})\mid
\nu'<\nu}$ along the maps $\pi_{\nu''}^{-1}\circ
\pi_{\nu'}$, together with the injection maps from
each $\supp(\bar E_{\nu'})$ into this direct limit
\end{itemize}\qed
\end{proposition}
Since $\ult(M,E_{\nu})=\ult(M,\bar E_{\nu})$, this proposition allows
us to regard the direct limit as a code inside $M$ for the extender
$E_{\nu}$ together with its system of
subextenders $E_{\nu'}$ for $\nu'<\nu$.
The hypothesis of Theorem~\ref{thm:main} is more than sufficient to
find a
mouse $M$ and sequence $\vec N$ of submodels satisfying
Definition~\ref{def:Nsequence}: this can be done by first defining models $M'$ and
$\seq{N'_{\nu}\mid
\nu<\omega_1}$ satisfying all of the conditions except
Clause~\ref{item:Eseq-MIsUnionN}, and then taking $M$ to be the
transitive collapse of $\bigcup_{\nu<\omega_1} N'_{\nu}$. The
conditions on $M$ are, in turn, much stronger than is needed to carry out
this construction. Since there is
no clear reason to believe that the actual strength needed is greater
than $o(E)=\kappa^{+(\omega+1)}$, it does not seem useful to
complicate the argument in order to determine the minimal mouse for which
the present argument works.
\medskip{}
We are now ready to begin the proof of Lemma~\ref{thm:mainlemma}.
Following Gitik we define, in subsections~\ref{sec:absolutelyfinal} and~\ref{sec:PForder}, a Prikry type forcing
$P(\vec F)$ depending on a sequence $\vec F$ of extenders.
Subsections~\ref{sec:PFproperties} and~\ref{sec:prikry} develop the
properties of this forcing, and Subsection~\ref{sec:gkeqDef} describes an equivalence
relation $\gkeq$ on its set of conditions.
Subsection~\ref{sec:generic_set} constructs an $M_B$-generic subset of
$i_{\Omega}(P(\vec E{\upharpoonright}\zeta)\mgkeq)$, and
Subsection~\ref{sec:proof-main-lemma} uses this construction to prove
Lemma~\ref{thm:mainlemma} under the additional assumption that
$\kappa=\kappa_0\in B$. Finally,
Subsection~\ref{sec:finite-exceptions} deals with the special case
$\kappa\notin B$ and indicates how the same technique can be used to prove
Theorem~\ref{thm:modified-suitable}.
\subsection[The main forcing]{The forcing $P(\vec F)$}
\label{sec:absolutelyfinal}
Throughout the definition of the forcing, from
Subsections~\ref{sec:absolutelyfinal} through \ref{sec:gkeqDef}, we work entirely inside the
mouse $M$; in particular all cardinal calculations are carried out
inside $M$. We are
interested in defining $P(\vec E{\upharpoonright}\zeta)$, but for the purposes
of the recursion used in the definition we allow $\vec F$ to be any suitable
sequence of extenders. We will not give a definition of the
notion of a \emph{suitable
sequence} of extenders. All the sequences used in this section are
suitable: specifically, all of the sequences $\vec E{\upharpoonright}
{\xi}$ for $\xi<\omega_1$ are suitable, all of the ultrafilters
$\ufFromExt{E}{\vec E{\upharpoonright}\xi}=\set{X\subseteq H^{M}_{\kappa}\mid \vec E{\upharpoonright}\xi \in i^{E}(X)}$
concentrate on suitable sequences, and furthermore, if $\vec F$ is
suitable then so is $\vec F{\upharpoonright}[\gamma_0,\tau)$ for any
$0\leq\gamma_0\leq\tau\leq\len(\vec F)$.
Before starting the definition of the forcing, we give a brief
discussion of its design, techniques and origin.
The constructed generic extension of $M_B$ will have the form
$
M_B[G]=M_B[\vec\kappa, \vec h]
$,
where
$\vec \kappa =\seq{\forceKappa _\gamma\mid\gamma\leq\zeta}$
enumerates $B\cup\sing{\Omega}$ and
$\vec h=\seq{h_{\nu,\nu'}\mid \zeta\geq\nu>\nu'}$
is a sequence of functions
$h_{\nu,\nu'}\colon [\forceKappa _\nu,\forceKappa _{\nu}^{+})\to
\forceKappa _{\nu}$. Each of the functions $h_{\nu,\nu'}$ is,
individually, Cohen generic over $M$.
The purpose of this forcing is to provide what
we will call ``standard forcing names'' for the generators belonging
to members of $B$. Specifically, consider
$\Omega=\kappa_{\Omega}\in M_B$ and suppose
$\beta=i_{\bar\nu}(\bar\beta)$ is a generator belonging to
$\kappa_{\bar\nu}=\forceKappa_{\nu}\in B$. The construction of the
$M_B$-generic set $G$ will determine an ordinal
$\bar\xi\in[\kappa,\kappa^{+})$ such that
$\beta=h_{\zeta,\nu}(i_{\Omega}(\bar\xi))$, and this will be used as
a name in $M$, with parameters $\nu$ and $\bar\xi$, for
the generator $\beta$ in $M_B$.
Since $M$
is closed under countable sequences, this will give a name
for any countable
sequence of generators, and this in turn will give, via
clause~\ref{item:CsuccType} of Definition~\ref{def:terms}, a name
for any member of $\chang_{B}$.
The problem comes from the fact that the forcing $P(\vec E{\upharpoonright}\zeta)$ only uses
the extenders $E_{\nu}$ for $\nu<\zeta$. The raw use of the
iteration $\seq{i_{\xi}\mid\xi\in\Omega}$ would specify that
$i_{\Omega}(\bar\beta)$, for $\bar\beta\in[\kappa,\kappa^{+})$,
should be assigned the indiscernibles
$\set{i_{\bar\nu}(\bar\beta)\mid
\kappa_{\bar\nu}=\forceKappa_{\nu}\in B}$; however this would
establish names only for the generators $i_{\bar
\nu}(\bar\beta)$ such that
$\bar\beta\in\bigcup_{\nu<\zeta}\supp(E_{\nu})$. To get around this
problem we need to have a way to slip any ordinal
$i_{\bar\nu}(\bar \beta)$, for $\kappa_{\bar\nu}=\forceKappa_{\nu}\in
B$ and
$\bar\beta\in[\kappa,\kappa^{+\omega_1})$,
into the generic set as a
substitute for some $i_{\bar{\nu}}(\bar\beta')$ with $\bar\beta'\in\bigcup_{\nu<\zeta}\supp(E_\nu)$.
The trick is to design the forcing to disassociate the
indiscernibles added by the Prikry component of the forcing from
any particular ordinal for which it is an indiscernible.
We follow Gitik
\cite{Gitik2002Blowing-up-powe,Gitik2005No-bound-for-th,
Gitik2010Prikry-type-for,Gitik2012Violating-the-s}
in using three successive stages to do so.
The first stage involves mixing Cohen forcing in with the
Prikry forcing. For any apparent indiscernible
$h_{\gamma,\gamma'}(\xi)=\xi'$ determined by the
generic set $G$, there are conditions in $G$ which
assign the value via a Cohen condition as well as conditions which assign
it via a Prikry condition.
In particular, there is no function in $M_B[G]$ which
assigns uniform indiscernibles to any subset of
$[\kappa_\Omega,\kappa_{\Omega}^{+\omega_1})$ of size greater than $\Omega=\kappa_{\Omega}$.
The second stage involves the use of
$[\kappa_\Omega,\kappa_{\Omega}^{+})$ as the domain of
$h_{\zeta,\nu}$, rather than
$\bigcup_{\nu<\zeta}\supp(i_{\Omega}(E_{\nu}))$.
This is accomplished by using, in the Prikry component of the forcing,
functions
$a=a^{s,\zeta}_{\zeta,\nu}$ which map a subset of $[\kappa_{\Omega},\kappa_{\Omega}^{+})$
of size $\Omega$ into $\supp(i_{\Omega}(E_{\nu}))$.
The atomic non-direct extension will use a function $a'$, taken from
a member of the ultrafilter $\ufFromExt{i_\Omega(E_\nu)}{a}$.
The function $a'$ could be regarded as a Prikry indiscernible for
$a$; however it will be recorded in the
extension only via a Cohen condition $f_{a, a'}$
defined by $f(\xi)=a'(\xi')$, where $\xi'\in\domain(a')$ corresponds
to $\xi\in\domain(a)$.
The effect of this is that if $\alpha\in i_{\Omega}(\supp(E_0))$ and $s$ is a condition including
$a^{s,\zeta}_{\zeta,\nu}(\xi)=\alpha$ for each $\nu<\zeta$, then the
sequence $\vec\beta=\seq{h_{\zeta,\nu}(\xi)\mid \nu<\zeta}$ in $M_B[G]$
will be a Prikry sequence for the ultrafilter
$\ufFromExt{i_{\Omega}(E_0)}{\alpha}$; however there will be no
association, or at least no explicit association, with the ordinal
$\alpha$ as distinguished from
any other member of $\set{\beta'\in[\kappa_{\Omega},\kappa_{\Omega}^{+\omega_1})\mid
\ufFromExt{i_{\Omega}(E_0)}{\beta'}=\ufFromExt{i_{\Omega}(E_0)}{\alpha}}$,
which will for typical $\alpha$ be unbounded in
$\supp(i_{\Omega}(E_\nu))$ for each $\nu\leq\zeta$.
The ambiguity introduced by the second stage allows the third, and
final, stage in the disassociation of the Prikry
conditions, via the equivalence relation $\gkeq$ introduced in
Subsection~\ref{sec:gkeqDef}. Gitik uses this equivalence relation
to ensure that the final forcing has the $\kappa^{++}$-chain condition and
hence does not collapse $\kappa^{++}$.
We do not care whether the cardinals
$\forceKappa_{\nu}^{++}$ are collapsed in $M_B[G]$, but we need to use
the equivalence relation in order to construct a generic set $G$ which gives
standard forcing names to all
generators $i_{\bar\nu}(\bar\beta)$ belonging to
$\forceKappa_{\nu}=\kappa_{\bar\nu}\in B$.
This may be regarded as a way of making the notions of ``no
association'' versus ``no explicit
association'' in the last paragraph more precise. As an example of a
non-explicit association,
suppose that
$\ufFromExt{E}{\beta'}\not=\ufFromExt{E}{\beta}$ for all
$\beta'<\beta$.
Then $\beta$ is necessarily associated with the least of the
Prikry sequences for the ultrafilter $\ufFromExt{E}{\beta}$.
Thus, in this case, the
association, though not explicit, is unavoidable.
The equivalence relation $\gkeq$ will allow us to determine, for any
ordinal $\bar\beta\in
[\kappa,\kappa^{+\omega_1})$,
sequences $\seq{\bar\beta_{\nu}\mid\nu<\zeta}$ with
$\bar\beta_\nu\in\supp(E_\nu)$ such that the Prikry
sequence $\seq{i_{\nu}(\bar\beta)\mid\kappa_{\nu}\in B}$
induced by the iteration $i$ can be
substituted in the constructed generic set for the sequence
$\seq{i_{\nu}(\bar\beta_\nu)\mid \kappa_\nu\in B}$ which would be assigned
by the iteration $i_{\Omega}$ as the indiscernibles associated with
$\seq{i_{\Omega}(\bar\beta_\nu)\mid\nu<\zeta}$.
\subsubsection{Definition of the forcing: Overview}
\begin{definition}\label{def:overview}
The conditions of $P(\vec F)$ are functions $s$ satisfying the
following conditions:
\begin{enumerate}
\item The domain of $s$ is a finite subset of $\zeta+1$ with $\zeta\in\domain(s)$.
\item Each value $s(\tau)$ of $s$ is a member of the set $P^*_\tau$
of quadruples
\begin{equation*}
s(\tau)=(\forceKappa^{s,\tau},\vec F^{s,\tau},z^{s,\tau},\vec
A^{s,\tau})
\end{equation*}
satisfying the following conditions:
\begin{enumerate}
\item $\vec F^{s,\tau}$ is a suitable sequence $\vec F^{s,\tau}=\seq{F^{s,\tau}_{\nu}\mid
\gamma_0\leq\nu<\tau}$ of extenders, where
$\gamma_0=\max(\domain(s)\cap \tau)+1$, or $\gamma_0=0$
if $\tau=\min(\domain(s))$.
\item
$\forceKappa^{s,\tau}$ is the critical point of the extenders in $\vec F^{s,\tau}$.
\item $z^{s,\tau}$ is a tableau of functions giving information about
the functions $h_{\nu,\nu'}$. This tableau will be fully
specified in Definition~\ref{def:tableau}.
\item $\vec A^{s,\tau}$ is a sequence of sets $A^{s,\tau}_\nu\in
U^{s,\tau}_\nu$, for $\gamma_0\le\nu<\tau$. The definition of the ultrafilter
$U^{s,\tau}_\nu$ will be given in Definition~\ref{def:A}.
\end{enumerate}
\end{enumerate}
The two partial orders on $P(\vec F)$, a direct extension order
$\le^*$ and a forcing
order $\le$, will be defined in Subsection~\ref{sec:PForder}.
\end{definition}
\subsubsection{Definition of the forcing: the tableau $z=z^{s,\tau}$}
The third component $z^{s,\tau}$ of $s(\tau)$ is a tableau which is represented
in Figure~\ref{fig:1}.
\begin{comment}
Before defining this, we introduce a slight variation on standard
notation:
\begin{definition}
\label{def:squarekappa}
We write $[X]^{*\kappa}$ for the set of pairs $(x,\prec)$ such
that $x\subseteq X$ and $\prec$ is a well ordering of $x$ of
length at most $\kappa$. If
$(x',\prec'),(x,\prec)\in [X]^{*\kappa}$ then we write
$(x',\prec')\supseteq(x,\prec)$ if $x'\supseteq x$ and
${\prec} = {\prec'}\cap (x\times x)$.
If $\gamma<\kappa$ and $(x,\prec)\in [X]^{*\kappa}$ then we write
$x{\vert}\gamma$ for the $\prec$-initial segment of $x$ of length
$\gamma$.
\end{definition}
\begin{proposition}\label{thm:squarekappacomplete}
Suppose that $(x_{\xi},\prec_{\xi})\in [X]^{*\kappa}$ for all
$\xi<\gamma$,
$(x_\xi,\prec_{\xi})\subseteq (x_{\xi'},\prec_{\xi'})$ for
$\xi<\xi'<\gamma$, and either $\gamma<\kappa$ or $\gamma=\kappa$
and
\begin{equation*}
\forall\xi<\gamma\exists\xi'\forall \xi''>\xi'\forall a'\in (x_{\xi''}\setminus
x_{\xi'}) \; a\prec_{\xi''}a'.
\end{equation*}
Then
$(x,\prec)=(\bigcup_{\xi<\gamma}x_{\xi},
\bigcup_{\xi<\gamma}\prec_{\xi})\in[X]^{*\kappa}$,
and $(x,\prec)\supseteq (x_{\xi},\prec_{\xi})$ for all
$\xi<\gamma$.
\end{proposition}
We will normally refer to a pair $(x,\prec)\in [X]^{*\kappa}$ by its
first member $x$, leaving the ordering $\prec$ understood. The
purpose of the ordering is as follows: fix an
$x\in[\kappa^+]^{*\kappa}$ and let $U$ be the ultrafilter
$\set{z\subseteq V_\kappa\mid x\in i^E(z)}$. Then there is a set
$A\in U$ such that for any $z\in A$, if we set
$\lambda=\otp(\prec_{z})$ then $z\in[\lambda^{+}]^{*\lambda}$ and we
have a natural one to one map from $z'$ onto $x{\vert}\lambda$. This
map will be needed in the Definition~\ref{def:one-step} of the
one-step extension.
\end{comment}
The following definition specifies the
members of this tableau:
\begin{definition}\label{def:tableau}
Suppose that $\tau\in\domain(s)$, and set
$\gamma_0=\max(\domain(s)\cap\tau)+1$, or $\gamma_0=0$ if
$\tau=\min(\domain(s))$. The tableau $z=z^{s,\tau}$ includes
\begin{enumerate}
\item for each pair $(\gamma,\nu)$ of ordinals
with $\tau\ge\gamma\geq\gamma_0>\nu\geq0$, a function
$f^{z}_{\gamma,\nu}$ and
\item for each pair $(\gamma,\nu)$ with
$\tau\ge\gamma>\nu\geq\gamma_0$, a pair of functions
$(a^{z}_{\gamma,\nu}, f^{z}_{\gamma,\nu})$.
\end{enumerate}
For each pair $\gamma,\nu$ the function $f^z_{\gamma,\nu}=f^{s,\tau}_{\gamma,\nu}$ is a
slightly modified Cohen function:
\begin{enumerate}
\item $\domain(
f^{z}_{\gamma,\nu})\subseteq[\forceKappa^{z},(\forceKappa^z)^{+})$
and $\card{\domain(f^{z}_{\gamma,\nu})}\leq\forceKappa^z$.
\item Each of the values $f^z_{\gamma,\nu}(\xi)$ of $f^z_{\gamma,\nu}$
has one of the two following forms:
\begin{enumerate}
\item $f^{z}_{\gamma,\nu}(\xi)=\xi'\in \forceKappa^z_{\tau}$, or
\item\label{item:fpeculiar} $f^{z}_{\gamma,\nu}(\xi)=h_{\gamma',\nu}(\xi')$ for some
$\gamma'$ in the interval $\gamma>\gamma'>\nu$ and some
$\xi'\in\forceKappa^z_{\tau}$.
\end{enumerate}
\end{enumerate}
The functions $a^{z}_{\gamma,\nu}=a^{s,\tau}_{\gamma,\nu}$ satisfy the following conditions:
\begin{enumerate}
\item
$\domain( a^{z}_{\gamma,\nu})\subseteq[\forceKappa^{z},(\forceKappa^z)^{+})$
and
$\card{\domain(a^{z}_{\gamma,\nu})}\leq\forceKappa^z$.
\item
$\range(a^{z}_{\gamma,\nu})\subseteq\supp(F^{s,\tau}_{\nu})$.
\item $\domain(a^{z}_{\gamma,\nu})\cap\domain(f^z_{\gamma,\nu})=\emptyset$.
\item\label{item:asubset} If $\tau\geq\gamma>\gamma'>\nu$ then
$a^{z}_{\gamma,\nu}\subseteq a^{z}_{\gamma',\nu}$.
\end{enumerate}
\end{definition}
\begin{figure}[t]
\renewcommand{\arraystretch}{1.25}
\begin{equation*}
\begin{array}{c|ccccccc}
&0&\cdots&\gamma_0-1&\gamma_0&\cdots&\gamma&\cdots \\
\hline \tau &
f^{z}_{\tau,0}&\dots&f^{z}_{\tau,\gamma_0-1}&(a^{z}_{\tau,\gamma_0},f^{z}_{\tau,\gamma_0})&\dots&
(a^{z}_{\tau,\gamma},f^{z}_{\tau,\gamma})&\dots
\\
\vdots&\vdots&&\vdots&\vdots&&\vdots&
\\
\gamma& f^{z}_{\gamma,0}& \dots&f^{z}_{\gamma,\gamma_0-1}&
(a^{z}_{\gamma,\gamma_0},f^z_{\gamma,\gamma_0})&\dots&&
\\
\vdots&\vdots&&\vdots&\vdots&&&
\\
\gamma_0+1& f^{z}_{\gamma_0+1,0}& \dots&f^{z}_{\gamma_0+1,\gamma_0-1}&
(a^{z}_{\gamma_0+1,\gamma_0},f^{z}_{\gamma_0+1,\gamma_0})& &&
\\
\gamma_0& f^{z}_{\gamma_0,0}&\dots&f^{z}_{\gamma_0,\gamma_0-1}&&&& \\
\end{array}
\end{equation*}
\caption{The third component $z^{s,\tau}$ of $s(\tau)$. The
element at row $\alpha$ and column $\beta$ is used to determine
$h_{\alpha,\beta}$. In the case of the top row, this
determination is direct; for the other rows this is indirect, via
their use in defining the ultrafilters $U^{s,\tau}_{\gamma}$ from which the sets
$A^{s,\tau}_{\alpha}$ are taken.}
\label{fig:1}
\end{figure}
The $(\gamma,\nu)$ entry in the tableau, whether a
function $f^{z}_{\gamma,\nu}$ or a pair of functions
$(a^{z}_{\gamma,\nu}, f^{z}_{\gamma,\nu})$, will ultimately be used to determine
the values of the Cohen function $h_{\gamma,\nu}$.
The functions $f^{s,\tau}_{\tau,\nu}$ in the first row of $z$ directly
determine $h_{\tau,\nu}$. The functions $f^{s,\tau}_{\gamma,\nu}$ in the remaining rows, with
$\gamma<\tau$,
indirectly help to determine $h_{\gamma,\nu}$ via the Prikry style
forcing: they restrict the possible values of $s'(\gamma)$ in
conditions $s'\leq s$.
The first form for the function $f_{\gamma,\nu}$ is the usual form for
a Cohen condition and asserts that
$h_{\gamma,\nu}(\xi)=\xi'$; more specifically, if $s$ is a
condition with $f^{s,\tau}_{\tau,\nu}(\xi)=\xi'$,
then $s\Vdash \dot h_{\tau,\nu}(\xi)=\xi'$. The second form,
$f^{s,\tau}_{\tau,\nu}(\xi)=h_{\gamma',\nu}(\xi')$,
may be taken as a formal expression: it specifies that the value of
the name $\dot h_{\tau,\nu}(\xi)$ is given by
\begin{align}
\text{if $s\Vdash \dot h_{\gamma',\nu}(\xi')=\xi''$ then}\quad&\quad s\Vdash\dot
h_{\tau,\nu}(\xi)=\xi'',\label{eq:fpeculiareval}\\
\text{if $s\Vdash\xi'\notin \domain(\dot h_{\gamma',\nu})$ then}\quad&\quad
s\Vdash \dot h_{\tau,\nu}(\xi)=0, \text{ and}\notag\\
\text{otherwise}
\quad&\quad s\nparallel\dot h_{\tau,\nu}(\xi).\notag
\end{align}
This definition requires recursion on $\tau$, using the fact that ``$s\Vdash
\dot h_{\gamma',\nu}(\xi')=\xi''$'' depends only on $s{\upharpoonright}(\gamma'+1)$.
In the first of these three cases, $s\Vdash \dot h_{\gamma',\nu}(\xi')=\xi''$, we will
regard the forms $f_{\tau,\nu}^{z}(\xi)=\xi''$ and
$f_{\tau,\nu}^{z}(\xi)=h_{\gamma',\nu}(\xi')$ as being identical.
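For example, suppose that $\gamma'\in\domain(s)$ and that the first row of the tableau $z^{s,\gamma'}$ contains the first-form value $f^{s,\gamma'}_{\gamma',\nu}(\xi')=\xi''$, so that $s\Vdash \dot h_{\gamma',\nu}(\xi')=\xi''$. Then the first case of~\eqref{eq:fpeculiareval} gives
\begin{equation*}
s\Vdash \dot h_{\tau,\nu}(\xi)=\xi'',
\end{equation*}
and the two forms $f^{z}_{\tau,\nu}(\xi)=h_{\gamma',\nu}(\xi')$ and $f^{z}_{\tau,\nu}(\xi)=\xi''$ are identified.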
\medskip{}
The functions $a^{z}_{\gamma,\nu}$ are included in order to generate the Prikry
indiscernibles. If $a^{s,\tau}_{\tau,\nu}(\xi)=\alpha$, then
$h_{\tau,\nu}(\xi)$ in the generic extension will
be a Prikry indiscernible for the ultrafilter
$(F^{s,\tau}_{\nu})_{\alpha}=\set{x\in \ps(\forceKappa) \mid \alpha\in
i^{F^{s,\tau}_{\nu}}(x)}$.
This completes the definition of the tableau $z^{s,\tau}$.
\subsubsection{The forcing: the ultrafilters $U^{s,\tau}_{\gamma}$ and
sets $A^{s,\tau}_{\gamma}$.}
We continue the definition of $P(\vec F)$ by specifying the
requirements for the final coordinate $A^{s,\zeta}$ for a quadruple
$w=s(\zeta)\in P^*_{\zeta}$. Definition~\ref{def:A} uses recursion on
$\zeta$ to define the following for each
$\gamma<\zeta$:
\begin{enumerate}
\item
a set $P^{*}_{\zeta,\gamma}$, of which $A^{w}_{\gamma}$
is a subset,
\item a restriction operation
$w{\uparrow}\gamma$, which maps $w\in P^*_{\zeta}$ to a quadruple
$w{\uparrow}\gamma\in P^{*}_{\zeta,\gamma}$, and
\item an ultrafilter
$U^{w}_{\gamma}\subset\ps(P^{*}_{\zeta,\gamma})$.
\end{enumerate}
These will complete the definition of the set
$P^*_{\gamma}=P^*_{\gamma,\gamma}$, and hence of the set of conditions
of the forcing $P(\vec F)$.
In addition to $w{\uparrow}\gamma$ we use a second restriction operator
$z{\upharpoonright}[\gamma_0,\gamma]$, which may be applied to a tableau $z$
of the form of either Figure~\ref{fig:1} or~\ref{fig:member-of-A}.
This operator retains the rows of $z$ with indices in the interval
$[\gamma_0,\gamma]$ and discards the rows above these; thus if
$w=(\forceKappa^{w},\vec F^{w},z^{w},\vec A^{w})\in P^*_{\zeta}$, then
$(\forceKappa^w,\vec F^{w}{\upharpoonright}\gamma,
z^{w}{\upharpoonright}[\gamma_0,\gamma], \vec A^{w}{\upharpoonright}\gamma)\in P^*_{\gamma}$.
\begin{definition}
\label{def:A}
We assume as a recursion hypothesis that $P^{*}_{\tau}$ and
$P^{*}_{\tau,\gamma}$
have been defined for all
$\gamma\le\tau<\zeta$.
If $\zeta\geq\gamma$ then
the members of $P^{*}_{\zeta,\gamma}$ are quadruples
\begin{equation}
w=(\forceKappa^{w},\vec F^{w}, z^{w}, \vec A^{w})\label{eq:c}
\end{equation}
satisfying the following conditions:
\begin{enumerate}
\item The tableau $z^w$ has the form of Figure~\ref{fig:member-of-A}.
\item $w{\upharpoonright}[\gamma_0,\gamma]=(\forceKappa^{w},\vec
F^{w},z^{w}{\upharpoonright}[\gamma_0,\gamma],\vec A^{w}) \in P^*_{\gamma}$.
\item The functions $a^{z}_{\nu,\nu'}$ for $\zeta\geq\nu>\gamma\geq\nu'$
satisfy the conditions in Definition~\ref{def:tableau}, except
that $a^{z}_{\zeta,\nu'}$ has range contained in
$[\forceKappa_\zeta,(\forceKappa_\zeta)^{+\omega_1})$.
\end{enumerate}
Note that $P^*_{\tau,\tau}=P^{*}_{\tau}$.
Suppose that $\tau\leq\zeta$, $w\in P^*_{\tau,\gamma}$ and $\gamma'<\gamma$. Then
$w{\uparrow}\gamma'$ is the quadruple
\begin{equation*}
w{\uparrow}\gamma'=(\forceKappa^{w},\vec
F^{w}{\upharpoonright}\gamma', z^{w}{\uparrow}\gamma',\vec A^{w}{\uparrow}\gamma')\in
P^*_{\tau,\gamma'}
\end{equation*}
defined by recursion on $\gamma$ as follows:
\begin{compactenum}
\item $z^{w}{\uparrow}\gamma'$ is equal to the tableau obtained by
deleting from $z^{w}$ all columns with index greater than $\gamma'$ and
deleting the functions $f^{z}_{\nu,\nu'}$ from all rows with index
greater than $\gamma'$. Thus
$(z^{w}{\uparrow}\gamma'){\upharpoonright}[\gamma_0,\gamma']=z^{w}{\upharpoonright}[\gamma_0,\gamma']$
but the rows with index $\nu>\gamma'$ retain only the
functions $a^{w}_{\nu,\nu'}$ for $\gamma_0\leq\nu'<\nu\leq\gamma$.
\item
$\vec
A^{w}{\uparrow}\gamma'=\seq{A^{w}_{\gamma''}{\uparrow}\gamma'\mid
\gamma_0\leq\gamma''\leq\gamma'}$ where
$A^{w}_{\gamma''}{\uparrow}\gamma'=\set{w'{\uparrow}\gamma'\mid w'\in A^{w}_{\gamma''}}$.
\end{compactenum}
Note that this definition also applies for $w\in P_{\tau}^*$, since
$P^*_{\tau}=P^*_{\tau,\tau}$.
Finally, the ultrafilter $U^{s,\tau}_{\gamma}$ is defined as
\begin{equation}\label{eq:n}
U^{s,\tau}_{\gamma}=\ufFromExt{F^{s,\tau}_{\gamma}}{s(\tau){\uparrow}\gamma}=\set{X\subseteq P^*_{\tau,\gamma}\mid
s(\tau){\uparrow}\gamma\in i^{F^{s,\tau}_{\gamma}}(X)}.
\end{equation}
\end{definition}
\begin{figure}[t]
\renewcommand{\arraystretch}{1.25}
\begin{equation*}
\begin{array}{c|cccccc}
&0&\cdots&\gamma_0-1&\gamma_0&\cdots&\gamma \\
\hline
\tau & & & &a^{z}_{\tau,\gamma_0}&\dots& a^{z}_{\tau,\gamma}
\\
\vdots&&&&\vdots&&\vdots
\\
\gamma+1&&&&a^{z}_{\gamma+1,\gamma_0}& \dots&a^{z}_{\gamma+1,\gamma}
\\[2pt]
\arrayrulecolor{gray}\cline{2-7}\arrayrulecolor{black}
\gamma& f^{z}_{\gamma,0}& \dots&f^z_{\gamma,\gamma_0-1}&(a^{z}_{\gamma,\gamma_0},f^z_{\gamma,\gamma_0})&\dots&
\\
\vdots&\vdots&&\vdots&\vdots&&
\\
\gamma_0+1& f^{z}_{\gamma_0+1,0}& \dots&f^{z}_{\gamma_0+1,\gamma_0-1}&
(a^{z}_{\gamma_0+1,\gamma_0},f^{z}_{\gamma_0+1,\gamma_0})& &
\\
\gamma_0& f^{z}_{\gamma_0,0}&\dots&f^{z}_{\gamma_0,\gamma_0-1}&&& \\
\end{array}
\end{equation*}
\caption{The tableau $z^w$ of a member of
$A_{\gamma}^{s,\tau}\subseteq P^*_{\tau,\gamma}$.
The entry in row $\alpha$ and column $\beta$ is used in the
determination of $h_{\alpha,\beta}$.}
\label{fig:member-of-A}
\end{figure}
This completes the definition of the set of conditions for the forcing
$P(\vec F)$.
\subsection{The partial orderings of $P(\vec F)$.}
\label{sec:PForder}
Since $P(\vec F)$ is a
Prikry type forcing notion, we need to define both a direct extension
order $\leq^*$ and a forcing order $\leq$.
We will begin by defining the one-step extension, $\add(s,w)\leq s$,
which is the atomic extension adding a new ordinal to the domain
of $s$. We will then define the direct extension order $\leq^{*}$, which
will be the restriction of $\le$ to conditions $s'\leq s$ with $\domain(s')=\domain(s)$.
The forcing extension $\leq$ is then the smallest transitive relation
extending ${\leq^*}$ such that
$\add(s,w)\leq s$ for all
$w\in\bigcup_{\tau\in\domain(s)}\bigcup_{\gamma}A^{s,\tau}_{\gamma}$.
\subsubsection{The one-step extension}
\label{sec:one-step}
The one-step extension $s'=\add(s,w)$ in $P(\vec F)$ is the atomic
non-direct extension, corresponding to the extension
in Prikry forcing which simply adds one new ordinal to the finite
sequence. In $P(\vec F)$ it acts by merging Prikry components
$a^{s,\tau}_{\nu,\nu'}$ of $s(\tau)$ into the corresponding Cohen components of
$s'(\tau)$. The following preliminary definition specifies the
conversion of $a^{s,\tau}_{\nu,\nu'}$ to a Cohen condition.
\begin{definition}
\label{def:a2f}
Suppose $w\in A^{s,\tau}_{\gamma}$ and
$\tau\geq\nu>\gamma\geq\nu'\geq\gamma_0$, and let
$a=a^{s,\tau}_{\nu,\nu'}$ and $a'=a^{w}_{\nu,\nu'}$. The Cohen
condition $f_{a,a'}$ is defined as follows:
First, we define, for any function $a$ with domain a set of
ordinals, a map $\sigma_{a,r}\colon
\card{\domain(a)}\cong\domain(a)$.%
\footnote{This definition would be simplified if a Levy collapse of
$\reals$ onto $\omega_1$ had been taken at the start so that
$M$ satisfies GCH and hence the Axiom of Choice. Then $\sigma_a$
could be defined as the least map
$\card{\domain(a)}\cong\domain(a)$ and used
in place of the set of maps $\sigma_{a,r}$.}
Write $\phi_a$ for
the least
$\Sigma_{0}$ formula, with ordinal parameters, such that for some
$r\in\reals$ the equation
\begin{equation}\label{eq:w}
\sigma_{a,r}(\nu)=\xi\iff \phi_a(r,\nu,\xi)
\end{equation}
defines an enumeration $\sigma_{a,r}\colon\card {\domain(a)}\cong
\domain(a)$, and write $R_a$ for the set of $r\in\reals$ such that
this holds.
If
$r\in R_a\cap R_{a'}$ then $f_{a,a',r}$ is the Cohen condition
defined by
\begin{equation}
\label{eq:faa-def}
f_{a,a',r}(\xi)=
\begin{cases}
a'(\sigma_{a',r}\circ \sigma_{a,r}^{-1}(\xi))&\text{if
$\sigma_{a,r}^{-1}(\xi)<\forceKappa^{w}$ and $\nu'=\gamma$,}
\\
h_{\gamma,\nu'} (\sigma_{a',r}\circ\sigma_{a,r}^{-1}(\xi)) &\text{if
$\sigma_{a,r}^{-1}(\xi)<\forceKappa^{w}$ and
$\nu'<\gamma$,}
\\
0&\text{if $\sigma_{a,r}^{-1}(\xi)\ge\forceKappa^{w}$,}
\end{cases}
\end{equation}
using in the second case the second form~(\ref{item:fpeculiar}) of the Cohen condition
from Definition~\ref{def:tableau}.
Then $f_{a,a'}$ is defined if and only if
$R_a=R_{a'}$ and $(\forall r,r'\in R_a)\;f_{a,a',r}=f_{a,a',r'}$, in
which case $f_{a,a'}$ is this common value of $f_{a,a',r}$.
\end{definition}
\begin{proposition}\label{thm:faa_exists}
Suppose that $F$ is an extender with critical point $\lambda$.
\begin{enumerate}
\item
If
$\card{\domain(a)}=\lambda$ then $\set{a'\mid f_{a,a'}\text{
exists}}\in \ufFromExt{F}{a}$.
\item If $\card{\domain(a_0)}=\card{\domain(a_1)}=\lambda$ and
$a_1\supseteq a_0$ then $\set{(a'_0,a'_1)\mid
f_{a_0,a'_0}=f_{a_1,a'_1}{\upharpoonright}\domain(a_0)}\in \ufFromExt{F}{(a_0,a_1)}$.
\end{enumerate}
\end{proposition}
\begin{proof}
For the first clause, note that the elementarity of $i^{F}$
implies that $\set{a'\mid R_{a'}=R_a}\in \ufFromExt{F}{a}$. Let
$r$ and $r'$ be members of $R_a$. To see that $\set{a'\mid
f_{a,a',r}=f_{a,a',r'}}\in\ufFromExt{F}{a}$, set
$\pi_{a,r,r'}=\sigma^{-1}_{a,r'}\circ\sigma_{a,r}$
and
$\pi_{a',r,r'}=\sigma^{-1}_{a',r'}\circ\sigma_{a',r}$.
Then by elementarity $\set{a'\mid \pi_{a',r,r'}=\pi_{a,r,r'}{\upharpoonright}\card{\domain(a')}}\in
\ufFromExt{F}{a}$, and if $a'$ is any member of this set, then (letting
$\lambda'=\card{\domain(a')}$ and letting
$\xi\in\sigma_{a,r}[\lambda']$ be arbitrary),
\begin{align*}
\sigma_{a',r}\circ\sigma_{a,r}^{-1}(\xi)&=(\sigma_{a',r'}\circ\pi_{a',r,r'})\circ
(\sigma_{a,r'}\circ\pi_{a,r,r'})^{-1}(\xi)\\
&=(\sigma_{a',r'}\circ\pi_{a,r,r'}{\upharpoonright}\lambda')\circ
(\pi^{-1}_{a,r,r'}\circ
\sigma^{-1}_{a,r'})(\xi)\\
&=\sigma_{a',r'}\circ\sigma^{-1}_{a,r'}(\xi),
\end{align*}
so that $f_{a,a',r}(\xi)=f_{a,a',r'}(\xi)$.
This completes the proof of Clause~(1) of the Proposition, and a similar
argument proves Clause~(2).
\end{proof}
\begin{definition}[The one-step extension]
\label{def:one-step}
Suppose that $w\in A^{s,\tau}_{\gamma}$ where
$\gamma\notin\domain(s)$ and
$\tau=\min(\domain(s)\setminus\gamma)$. Then $s'=\add(s,w)$ is the
condition with $\domain(s')=\domain(s)\cup\sing{\gamma}$ defined as
follows:
\begin{compactenum}
\item
$s'(\gamma)=(\forceKappa_\gamma^{w},\vec F^{w},
z^{w}{\upharpoonright}\,[\gamma_0,\gamma], \vec A^{w})$.
\item
$s'(\tau)=(\forceKappa_\tau^{s},\vec F^{s',\tau}, z^{s',\tau},
\vec A^{s',\tau})$ where
\begin{enumerate}
\item $\forceKappa_\tau^{s'}=\forceKappa_\tau^{s}$ and
$\vec F^{s',\tau{}}=\vec F^{s,\tau}{\upharpoonright}(\gamma,\tau)$,
\item $z^{s',\tau}$ is obtained from
$z^{s,\tau}{\upharpoonright}(\gamma,\tau]$ by using
Definition~\ref{def:a2f} to replace $f^{s,\tau}_{\nu,\nu'}$ with
$f^{s',\tau}_{\nu,\nu'}=f^{s,\tau}_{\nu,\nu'}\cup f_{a^{s,\tau}_{\nu,\nu'},a^{w}_{\nu,\nu'}}$
whenever $\tau\geq\nu>\gamma\geq\nu'\geq\gamma_0$, and
\item\label{item:AddA} if $\gamma<\nu<\tau$ then
$A^{s',\tau}_{\nu}=\set{\sigma(w')\mid w'\in
A^{s,\tau}_{\nu}\land
\forceKappa_\gamma^{w}<\forceKappa_\nu^{w'}}$, where
\begin{align}
\label{eq:Addw}
\sigma(w'){\upharpoonright}[\gamma,\nu]
&=\add(w'{\upharpoonright}[\gamma_0,\nu],w{\uparrow}\nu)\\
\sigma(w'){\upharpoonright}(\nu,\tau)&=w'{\upharpoonright}(\nu,\tau).\notag
\end{align}
\end{enumerate}
\item
$s'(\gamma')=s(\gamma')$ for all
$\gamma'\in\domain(s')\setminus\sing{\gamma,\tau}$.
\end{compactenum}
\end{definition}
Note that Equation~\eqref{eq:Addw} uses recursion on the pair
$(\gamma,\tau)$, along with the fact that
$w'{\upharpoonright}[\gamma_0,\nu]\in P^*_{\nu}$.
If any part of the definition of $\add(s,w)$ cannot be carried out as
described, then $\add(s,w)$ is undefined. Note that the set of $w$
for which it is defined is a member of $U^{s,\tau}_{\gamma}$, so that
we can assume without loss of generality that $\add(s,w)$ is defined
for all $w\in A^{s,\tau}_{\gamma}$.
This completes the definition of the one-step extension.
\subsubsection{The direct extension order $\le^*$.}
The direct extension order $\leq^*$ is the restriction of the
forcing order $\le$ to the pairs $(s',s)$ such that
$\domain(s)=\domain(s')$.
Again, the definition uses recursion on $\tau$:
\begin{definition}
\label{def:star-order}
If $s',s\in P(\vec F)$ then $s'\leq^* s$ if $\domain(s')=\domain(s)$
and $s'(\tau)\le^* s(\tau)$ for all $\tau\in\domain(s)$. The
ordering $s'(\tau)\leq^* s(\tau)$ on $P^*_{\tau}$ holds if and only
if the following conditions hold:
\begin{enumerate}
\item $\forceKappa^{s',\tau}=\forceKappa^{s,\tau}$ and $\vec
F^{s',\tau}=\vec F^{s,\tau}$.
\item \label{item:leq-star-a-fctns-extend}
$a^{s',\tau}_{\gamma,\gamma'}\supseteq
a^{s,\tau}_{\gamma,\gamma'}$ for each pair $(\gamma,\gamma')$ for
which they are defined.
\item\label{item:leq-star-change-w-in-A}
For each
$\gamma\in(\gamma_0,\tau)$ and each $w'\in A^{s',\tau}_{\gamma}$
there is $w\in A^{s,\tau}_{\gamma}$ such that
\begin{enumerate}
\item\label{item:SD-recursion} $w'{\upharpoonright}[\gamma_0,\gamma]\leq^*
w{\upharpoonright}[\gamma_0,\gamma]$ in $P^*_{\gamma}$.
\item\label{item:a-extends} $a^{w'}_{\nu,\nu'}\supseteq a^{w}_{\nu,\nu'}$ for
$\tau\geq\nu>\gamma\geq\nu'\geq\gamma_0$.
\item\label{item:f-extends} For all pairs $(\nu,\nu')$ with
$\tau\geq\nu>\nu'\geq\gamma_0$ we have
$ f_{a_{\nu,\nu'}^{s',\tau},\,a^{w'}_{\nu,\nu'}}\supseteq
f_{a^{s,\tau}_{\nu,\nu'},\, a^{w}_{\nu,\nu'}}$, where these two
functions are as defined in Definition~\ref{def:a2f}.
\end{enumerate}
\item\label{item:z} $f^{s',\tau}_{\nu,\nu'}\supseteq f^{s,\tau}_{\nu,\nu'}$ for each
pair $\nu,\nu'$ for which they are defined.
\end{enumerate}
\end{definition}
Clause~\ref{item:leq-star-change-w-in-A} implies that $\add(s',w')\leq^*
\add(s,w)$. This clause
corresponds to the requirement
in Prikry forcing that $A^{s'}\subseteq A^{s}$; however the
ultrafilters $U^{s,\tau}_{\gamma}$ used in this forcing vary with
$s$. Gitik \cite{Gitik2005No-bound-for-th} also has varying
ultrafilters, but takes them from a predefined set and uses
predefined witnesses to a Rudin-Keisler order on the ultrafilters.
Our definition could also be stated in terms of the Rudin-Keisler
order; however, the ultrafilters would have to be defined on the
complete Boolean algebra induced by the ordering $(P^*_{\tau,\gamma},\leq^*)$.
\medskip{}
This completes the definition of the forcing
$(P(\vec F), {\leq^*}, {\leq})$.
\subsection{Properties of the forcing $P(\vec F)$}
\label{sec:PFproperties}
\begin{definition}
If $\vec w$ is a sequence of length $n$, then we write
$\add(s,\vec w)$ for the condition defined by recursion as
$\add(s,\vec w)=s$ if $n=0$, and $\add(s,\vec w) =\add(\add(s,\vec
w{\upharpoonright} (n-1)), w_{n-1})$ if $n>0$.
\end{definition}
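For instance, unwinding the recursion for a sequence $\vec w=\seq{w_0,w_1}$ of length~$2$ gives
\begin{equation*}
\add(s,\vec w)=\add(\add(s,w_0),w_1),
\end{equation*}
so the one-step extensions are applied in the order in which they are listed in $\vec w$.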
\begin{proposition}\label{thm:one-stepFirst}
Suppose that $s\leq t$. Then
there is $\vec z$ such that
$s\leq^*\add(t,\vec z)\leq t$.
\end{proposition}
\begin{proof}
The proposition will follow by an easy
induction on the length of $\vec z$ once we show that for any $t'\leq^* t$ and
$s=\add(t',w')\leq t'$, where $w'\in A^{t',\tau}_{\gamma}$, there is $ w\in A^{t,\tau}_{\gamma}$ such
that $s\leq^* \add(t, w)<t$. Clause~3 of
Definition~\ref{def:star-order} of
the direct ordering $\leq^*$ is designed to provide such a $w$:
\begin{align*}
s(\tau)&\le^* \add(t, w)(\tau)&&\text{by Clauses~(3b,c),}\\
s(\gamma)&\leq^*\add(t, w)(\gamma)&&\text{by Clause~(3a), and}\\
s(\gamma')&=\add(t',w')(\gamma')=t'(\gamma')\\
&\qquad\leq^*t(\gamma')=\add(t,w)(\gamma') &&\text{for
$\gamma'\in\domain(s)\setminus\sing{\tau,\gamma}$.}
\end{align*}
\end{proof}
\begin{proposition}\label{thm:one-step-commute}
Suppose $s\leq t$ and $\gamma\in\domain(s)\setminus\domain(t)$, and
let
$\tau=\min(\domain(t)\setminus\gamma)$. Then there is $w\in
A^{t,\tau}_{\gamma}$ such that $s\leq\add(t,w)<t$.
\end{proposition}
\begin{proof}
By Proposition~\ref{thm:one-stepFirst} we can find a sequence $\vec w$
such that $s\leq^* \add(t,\vec w)\leq t$.
Thus it only remains to show that the order of the sequence $\vec w$
can be permuted, that is, that there is $\vec w'$ such that $\add(t,\vec
w)=\add(t,\vec w')$ and $w'_{0} \in A^{t,\tau}_{\gamma}$.
This will follow by an easy induction once we show that the order
of two consecutive one-step extensions can be reversed.
Thus suppose that
$s=\add(\add(t,w_0),w_1)$, with $w_0\in
A^{t,\tau_0}_{\nu_0}$ and $w_1\in A^{\add(t,w_0),\tau_1}_{\nu_1}$.
We want to find $w'_1\in A^{t,\tau'_1}_{\nu_1}$ and $w'_0\in A^{\add(t,w'_1),\tau'_0}_{\nu_0}$
so that $s=\add(\add(t, w'_1),w'_0)$.
We have three cases:
\begin{case}
{1}{$\nu_0<\nu_1<\tau_0$}
In this case $\tau_1=\tau_0$, and
by Definition~\ref{def:star-order}, there is $w'_1\in
A^{t,\tau_0}_{\nu_1}$
such that $w_1=
(w'_1){\uparrow}\nu_1$.
Then $s=\add(\add(t,w_1'),\sigma_{\nu_0}(w_0))$, where
$\sigma_{\nu_0}$ is as defined in Clause~3 of Definition~\ref{def:one-step}.
\end{case}
\begin{case}{2}{$\nu_1<\tau'_1=\nu_0$}
By Definition~\ref{def:one-step}, $w_1=\sigma_{\nu_1}(w'_1)$ for some
$ w'_1\in A^{t,\tau_0}_{\nu_1}$. Then $s=\add(\add(t,w'_1),w_1{\upharpoonright} (\nu_0,\tau_0])$.
\end{case}
\begin{case}{3}{$\nu_1>\tau_0$ or $\tau'_1<\nu_0$}
In this case $\add(\add(t, w_0),w_1)=\add(\add(t,w_1),w_0)$ so we
can take $w'_0=w_0$ and $w'_1=w_1$.
\end{case}
\end{proof}
We write $\below{P(\vec F)} s$ for $\set{s'\in P(\vec F)\mid s'\leq
s}$. The proof of the following proposition is straightforward.
\begin{proposition}[Factorization] Suppose $s\in P(\vec F)$ and
$\gamma\in\domain(s)$ for some $\gamma<\zeta$. Then
\label{thm:factorization}
\begin{equation}
\label{eq:factorizationexact}
\below{P(\vec F)}{s} \text{ is a regular suborder
of }\below{P(\vec F^{s,\gamma})}{s{\upharpoonright}\gamma+1}\times
P'
\end{equation}
where $P'=\set{q{\upharpoonright}(\gamma,\zeta]\mid q\leq s}$.
Thus $\below{P(\vec F)}{s}$ can be written in the form
\begin{equation}\label{eq:factorizationstar}
\below{P(\vec F)}{s}\equiv
\below{P(\vec F^{s,\gamma})}{(s{\upharpoonright}\gamma+1)} \star\dot R
\end{equation}
where $\dot R$ is a $\below{P(\vec
F^{s,\gamma})}{s{\upharpoonright}\gamma+1}$-name for a Prikry style forcing order.
\qed
\end{proposition}
This factorization property is an important property of
this Magidor-Radin style of Prikry forcing.
Typically, equation~\eqref{eq:factorizationexact} would be an equality
rather than merely a regular suborder; however, that fails here
because
of the peculiar form of the
Cohen conditions $f^z_{\nu,\nu'}(\xi)=h_{\nu'',\nu'}(\xi'')$
in Clause~(\ref{item:fpeculiar}) of Definition~\ref{def:tableau}.
When $\nu>\gamma\ge\nu''$, the determination via Definition~\ref{def:a2f}
of the ultimate value of $h_{\nu,\nu'}$ depends on both
$\below{P(\vec F^{s,\gamma})}{s {\upharpoonright}\gamma+1}$ and $R$.
The generic $G\subseteq P(\vec F)$ obtained from a generic
$G_0\times G_1\subseteq P(\vec F^{s,\gamma})\times P'$ is obtained by resolving,
as specified in
equation~\eqref{eq:fpeculiareval}, the values of the
Cohen conditions in $G_1$ which have the form described in
Definition~\ref{def:tableau}(\ref{item:fpeculiar}): that is,
$f_{\nu,\nu'}(\xi)=h_{\nu'',\nu'}(\xi'')$ for some $\nu$,
$\nu''$ and $\nu'$ with
$\nu>\gamma\ge\nu''$.
Note that the forcing $P'$ in equation~\eqref{eq:factorizationexact}
is in fact identical to $P(\vec F)$ except that the domain of the
conditions is contained in the interval $[\gamma+1,\zeta]$ instead of
$[0,\zeta]$, and $\gamma+1$ is used instead of $0$ as the default
value of $\gamma_0$ in the definition of
$P^*_{\tau}$ when
$\domain(s)\cap\tau=\emptyset$ (but the tableau of
Figure~\ref{fig:1} retains all of its columns, starting with $0$).
Thus all of the properties proved of $P(\vec F)$ are also true of $P'$.
This
factorization will frequently
be used in proofs, sometimes implicitly, to justify simplifying
notation by proving that the result holds for the case when
$\domain(s)=\sing{\zeta}$.
The result then follows for arbitrary $s$ by a simple induction on
$\zeta$: If $s$ is an arbitrary condition in $P(\vec F)$ and
$\gamma=\max(\domain(s)\cap\zeta)$ then the induction step uses
the induction
hypothesis for $P(\vec F^{s,\gamma})$ and the special case
$\domain(s)=\sing{\zeta}$ for $R$.
\begin{lemma}[Closure]
\label{thm:closure}
Suppose that $\seq{s_{\nu}\mid\nu<\beta}$ is a
$<^{*}$-descending sequence of conditions in $P(\vec F)$.
\begin{enumerate}
\item\label{item:closuresmall}($\kappa$ closure)
If $\beta<\forceKappa^{s_0,\min(\domain(s_0))}$ then the infimum
$\bigwedge_{\nu<\beta}s_\nu$ of this sequence exists.
\item (Diagonal closure)\label{item:closurediagonal}
Suppose that $\beta=\forceKappa^{s_0,\min(\domain(s_0))}$.
Then there is $s=\bigtriangleup_{\nu<\beta}s_\nu\leq^* s_0$ such that
$s\Vdash\forall\nu<\dot\forceKappa_0\;s_{\nu}\in\dot G$.
\end{enumerate}
\end{lemma}
Note that for the factorization forcing $P'$ of
Proposition~\ref{thm:factorization}, $\forceKappa_0$ can be replaced
by $\forceKappa_{\gamma+1}$.
\begin{proof}
The proof is by
induction on $\zeta$, using Proposition~\ref{thm:factorization}.
Thus we can assume that $\domain(s_0)=\sing{\zeta}$.
Since the first two coordinates of $s_{\nu}(\zeta)$ are fixed and
the third, $z^{s_\nu,\zeta}$, is $\kappa^+$-closed, the fourth coordinate,
$\vec A^{s_\nu,\zeta}$, is the only problem.
If $w',w\in P_{\zeta,\eta}^*$ then we write $w'\leq^* w$ if the
conditions of
Definition~\ref{def:star-order}(\ref{item:leq-star-change-w-in-A}) hold.
If $\zeta>\gamma>\eta$ then the induction hypothesis trivially
extends to sequences
in $P^*_{\gamma,\eta}$, since only subclause~(\ref{item:leq-star-change-w-in-A}a) is problematic.
Now, to prove Clause~(\ref{item:closuresmall}) of the Lemma we need to define
$A^{s,\zeta}_{\eta}$ for each $\eta<\zeta$. We can assume that
$\beta<\forceKappa^{w}_{\eta}$ for all $w\in
A^{s_0,\zeta}_{\eta}$. Set
\begin{equation*}
A^{s,\zeta}_{\eta}=\set{\bigwedge_{\nu<\beta}w_{\nu}\mid
(\forall\nu<\beta)\;w_\nu\in A^{s_\nu,\zeta}_{\eta}\land
(\forall\nu'<\nu<\beta)\; w_{\nu'}\leq^*w_{\nu}}.
\end{equation*}
To see that $A^{s,\zeta}_{\eta}\in U^{s,\zeta}_{\eta}$ note that
the induction hypothesis implies that
the infimum
$w=\bigwedge_{\nu<\beta}(i^{F^{s,\zeta}_\eta}(s_\nu)){\uparrow}\eta$
exists, and
$w\in i^{F^{s,\zeta}_\eta}(A^{s,\zeta}_{\eta})$.
This concludes the proof of Clause~(\ref{item:closuresmall}), and
the proof of Clause~(\ref{item:closurediagonal}) is similar.
\end{proof}
\begin{lemma}
\label{thm:diagonal-closure}
Suppose that $s\in P(\vec F)$ and for all $w\in
A^{s,\zeta}_{\gamma}$ the set $D$ is open and dense in
$(P(\vec F),\leq^*)$ below $\add(s,w)$. Then there is a condition
$s'\leq^* s$ such that $s''\in D$ for all $s''<s'$ having $\gamma\in\domain(s'')$.
\end{lemma}
\begin{proof}
By Proposition~\ref{thm:one-step-commute} it will be enough to show
that there is $s'\leq^* s$ such that $\add(s',w)\in D$ for all
$w\in A^{s',\zeta}_{\gamma}$. In order to simplify notation,
we assume that $\domain(s)=\sing{\zeta}$.
By Proposition~\ref{thm:enoughAC} we can assume that
$A^{s,\zeta}_{\gamma}$ can be enumerated as $\set{w_{\nu}\mid
\nu<\kappa}$
so that $\nu'\leq\nu$ implies
$\forceKappa^{w_{\nu'}}_{\gamma}\leq\forceKappa^{w_{\nu}}_{\gamma}$.
We will define by recursion on $\nu$ a
$\leq^*$-decreasing sequence of conditions
$\seq{s_{\nu}\mid\nu<\kappa}$ in $R$ so that
$\add(s_{\nu},w_\nu)\leq^* \add(s,w_\nu)$ for all $\nu<\kappa$.
At the same time we will define a function $\sigma\colon
A^{s,\zeta}_{\gamma}\to P^{*}_{\zeta,\gamma}$ so that $s_\nu$ and
$\sigma(w_\nu)$ satisfy the following conditions:
\begin{enumerate}
\item $s_0=s$,
\item $s_{\nu}{\uparrow}\gamma=s{\uparrow}\gamma$ and $\vec
A^{s_\nu,\zeta}{\upharpoonright}\gamma+1=\vec
A^{s,\zeta}{\upharpoonright}\gamma+1$ for all $\nu<\kappa$,
\item $\add(s_{\nu+1},\sigma(w_\nu))\in D$, and
\item $s_{\nu'}\leq^*s_{\nu}$ for all $\nu'<\nu<\kappa$.
\end{enumerate}
Note that clause~(2) implies that $\add(s_{\nu},w)$ exists for all
$\nu<\kappa$ and all $w\in A^{s,\zeta}_{\gamma}$. Also, clauses~(2)
and~(4) imply that $\add(s_{\nu'},w_{\nu})\le^*
\add(s_{\nu},w_\nu)\leq^* \add(s,w_\nu)$ for all $\nu<\nu'<\kappa$.
To define the sequence, set $s_0=s$, and
if $\nu$ is a limit ordinal then set
$s_{\nu}=\bigwedge_{\nu'<\nu}s_{\nu'}$. For a successor ordinal
$\nu+1$, since $\add(s_{\nu},w_\nu)\leq^* \add(s,w_\nu)$, the
hypothesis implies that there is $t\leq^*\add(s_\nu,w_\nu)$ such
that $t\in D$.
Define $\sigma(w_\nu)$ by
\begin{align*}
\sigma(w_\nu){\upharpoonright}[\gamma_0,\gamma] &=
(t{\uparrow}\gamma){\upharpoonright}[\gamma_0,\gamma]
\text{, and}\\
\sigma(w_\nu){\upharpoonright}(\gamma,\zeta] &=
w_\nu{\upharpoonright}(\gamma,\zeta].
\end{align*}
By clause~(2) we have $s_{\nu+1}{\uparrow}\gamma=s{\uparrow}\gamma$ and
$\vec A^{s_{\nu+1},\zeta}{\upharpoonright}\gamma+1=\vec A^{s,\zeta}{\upharpoonright}\gamma+1$.
The remainder of $z^{s_{\nu+1}}$ is taken from $t$; that is:
\begin{align*}
a^{s_{\nu+1},\zeta}_{\eta,\eta'}&=a^{t,\zeta}_{\eta,\eta'}&&\text{if
$\eta'>\gamma$,}\\
f^{s_{\nu+1},\zeta}_{\eta,\eta'}&=f^{t,\zeta}_{\eta,\eta'}&&\text{if
$\eta'>\gamma$,
and}\\
f^{s_{\nu+1},\zeta}_{\eta,\eta'}&=f^{t,\zeta}_{\eta,\eta'}{\upharpoonright}(\kappa^{+}\setminus\domain(a^{s,\zeta}_{\eta,\eta'}))&&\text{if $\eta>\gamma\geq\eta'$.}
\end{align*}
The definition of $A_{\eta} ^{s_{\nu+1},\zeta}$ for
$\zeta>\eta>\gamma$ is by recursion on $\eta$. For $w\in
A^{s_\nu,\zeta}_{\eta}$ and $w'\in A^{t,\zeta}_{\eta}$, let us write $w'\leq^* w$ if
they satisfy
Definition~\ref{def:star-order}(\ref{item:leq-star-change-w-in-A}),
in which case let $\pi_{w'}(w)$ be given by
\begin{enumerate}
\item
$\pi_{w'}(w){\upharpoonright}[\gamma+1,\zeta]=w{\upharpoonright}[\gamma+1,\zeta]$,
and
\item
$\pi_{w'}(w){\upharpoonright}[\gamma_0,\gamma]$ is defined
in the same way as $s_{\nu+1}$, but with $w'{\upharpoonright}[\gamma_0,\gamma]$, $w{\upharpoonright}[\gamma_0,\gamma]$ and $\eta$ in
place of $t,s_{\nu}$ and $\zeta$.
\end{enumerate}
Then
\begin{multline*}
A_{\eta}^{s_{\nu+1},\zeta}=\set{\pi_{w'}(w)\mid
w'\in A^{t}_{\eta}\land
\forceKappa^{w'}>\forceKappa^{w_\nu}\land
w\in A^{s_\nu}_{\eta}
\land w'\leq^* w}.
\end{multline*}
\medskip
Now set $s_{\kappa}=\bigtriangleup_{\nu<\kappa}s_{\nu}$, and set
$\bar w =[\sigma ]_{U^{s,\zeta}_{\gamma}}=
i^{F^{s,\zeta}_{\gamma}}(\sigma)(s{\uparrow}\gamma)$.
Then clause~(2) of the initial conditions on $s_{\nu}$ allows $\bar
w$ to be merged into $s_{\kappa}$, giving the desired extension
$s'\leq^*s$. We can assume without loss of generality that
$w'\in A^{s_\kappa,\zeta}_{\eta}$ whenever $w\in A^{s_\kappa,\zeta}_{\eta}$ and $w'\leq^*w$ in the
sense of Definition~\ref{def:star-order}(\ref{item:leq-star-change-w-in-A}).
Explicitly, $s'$ is given by
\begin{align*}
A^{s',\zeta}_{\eta}&=
\begin{cases}
A^{\bar w}_{\eta}&\text{if $\eta<\gamma$},\\
\set{\sigma(w)\mid w\in A^{s}_{\gamma}}&\text{if
$\eta=\gamma$, and}
\\
A^{s_\kappa,\zeta}_{\eta}&\text{if $\eta>\gamma$};
\end{cases}
\\
z^{s',\zeta}{\uparrow}\gamma &= \bar w{\upharpoonright}[\gamma_0,\gamma],
\\
f^{s',\zeta}_{\eta,\eta'}&=f^{s_\kappa,\zeta}_{\eta,\eta'}&&\text{if
$\eta>\gamma$, and}
\\
a^{s',\zeta
}_{\eta,\eta'}&= a^{s_\kappa,\zeta}_{\eta,\eta'}&&\text{if $\eta'>\gamma$.}
\end{align*}
\end{proof}
\subsection{The Prikry property}
\label{sec:prikry}
\begin{lemma}\label{thm:prikry}
\begin{enumerate}
\item\label{item:Prikrythm-decide} Let $\phi$ be a sentence and $s$ a condition in $P(\vec F)$.
Then there is an $s'\leq^{*}s$ such that $s'$ decides $\phi$.
\item\label{item:Prikrythm-inD} Let $D$ be a dense subset of $P(\vec F)$, and suppose $s\in
P(\vec F)$. Then there is an $s'\leq^{*}s$ and a finite
$b\subseteq\zeta+1$ such that any $s''\leq s'$ with
$b\subseteq\domain(s'')$ is a member of $D$.
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{thm:prikry}]
The proof of Lemma~\ref{thm:prikry} is by induction on the length
$\zeta$ of $\vec F$. By the induction hypothesis and Proposition
\ref{thm:factorization}
we can simplify the notation by assuming that $\domain(s)=\sing{\zeta}$.
The main part of the proof is the following claim:
\begin{claim}
\label{thm:Prikry-sublemma}
Suppose that $D\subseteq P(\vec F)$ is dense and $s\in P(\vec F)$
has domain $\sing{\zeta}$. Then there is $s'\leq^* s$ such
that either $s'\in D$ or for some $\gamma<\zeta$
\begin{multline}
\label{eq:Prikry-sublemma}
s'\Vdash (\exists w\in A^{s',\zeta}_{\gamma})(\exists t\in \dot
G\cap D)\;\Bigl(\domain(t)\subseteq
(\gamma+1)\cup\sing\zeta\\
\land t\leq\add(s',w)\land
t(\zeta)=\add(s',w)(\zeta)\Bigr).
\end{multline}
\end{claim}
\begin{proof}
For each $\gamma<\zeta$, define
\begin{align*}
D^{+}_{\gamma}&=\set{t\in P(\vec F)\mid t\Vdash(\exists t'\in\dot G\cap
D)\,\domain(t')\subseteq(\gamma+1)\cup\sing{\zeta}}
\\
D^{-}_{\gamma}&=\set{t\in P(\vec F)\mid t\Vdash\lnot(\exists t'\in\dot G\cap
D)\,\domain(t')\subseteq(\gamma+1)\cup\sing{\zeta}}
\\
E_{\gamma}&=
\set{t\in P(\vec F)\mid
(\forall t'\leq t)\;
\bigl( t'\in
D\land\domain(t')\subseteq(\gamma+1)\cup\sing\zeta
\\&\hspace{5cm} \implies (t'{\upharpoonright}(\gamma+1)\cup
t{\upharpoonright}\sing{\zeta})\in D\bigr)}.
\end{align*}
First, suppose that for all $\gamma<\zeta$ the set
$(D^{+}_{\gamma}\cup D^{-}_{\gamma})\cap E_{\gamma}$ is $\leq^*$-dense below
any condition $t\leq s$ with $\domain(t)=\sing{\gamma,\zeta}$.
Then by Lemma~\ref{thm:diagonal-closure} there is $s'\leq^* s$
such that for each $\gamma<\zeta$ and $w\in A^{s',\zeta}_{\gamma}$ we have $\add(s',w)\in
(D^{+}_{\gamma}\cup D^{-}_{\gamma})\cap E_{\gamma}$. By shrinking the sets
$A^{s',\zeta}_{\gamma}$ we can assume that for each $\gamma$,
$\set{\add(s',w)\mid w\in A^{s',\zeta}_{\gamma}}$ is contained in one
of $D^{+}_{\gamma}\cap E_\gamma$ or $D^{-}_{\gamma}\cap E_{\gamma}$. Since $D$ is
dense it follows that $\set{\add(s',w)\mid w\in
A^{s',\zeta}_{\gamma}}\subseteq D^{+}_{\gamma}$ for some
$\gamma<\zeta$, and it follows by
Proposition~\ref{thm:factorization} that $s'$ satisfies the formula~\eqref{eq:Prikry-sublemma}.
Now fix $\gamma<\zeta$ and $t\leq s$ with $\gamma\in\domain(t)$. We will show that
$(D^{+}_{\gamma}\cup D^{-}_{\gamma})\cap E_{\gamma}$ is
$\leq^*$-dense below $t$.
First, note that by Proposition~\ref{thm:factorization}, the set $E_{\gamma}$
is $\leq^*$-dense below any condition $t$ with
$\gamma\in\domain(t)$. Now for $t\in E_{\gamma}$, consider the following
formula in the forcing language of $P(\vec F^{t,\gamma})$:
\begin{equation}
\exists
t'\in \dot G\, (t'\cup t{\upharpoonright}\sing\zeta\in D).\label{eq:Prikry.sublemma.a}
\end{equation}
By the induction hypothesis of
Lemma~\ref{thm:prikry}(\ref{item:Prikrythm-decide}) there is
$t''\leq^*
t{\upharpoonright}\gamma+1$ which decides, in $P(\vec F^{t,\gamma})$, the truth of
formula~\eqref{eq:Prikry.sublemma.a}. Then
$t''\cup t{\upharpoonright}\sing{\zeta}$ is in either $D^{+}_{\gamma}$ or
in $D^{-}_{\gamma}$.
\end{proof}
To complete the proof of
Lemma~\ref{thm:prikry}(\ref{item:Prikrythm-decide}), apply
Claim~\ref{thm:Prikry-sublemma} with $D=\set{t\mid t\parallel\phi}$.
Since we are done if there is $s'\leq^*
s$ in $D$ we can assume by
Claim~\ref{thm:Prikry-sublemma} that there is $s'\leq^* s$ and
$\gamma<\zeta$ such that \eqref{eq:Prikry-sublemma} holds.
By the induction hypothesis, for each $w\in A^{s',\zeta}_{\gamma}$
there is $t_w\le^* \add(s',w){\upharpoonright}(\gamma+1)$ in $P(\vec F^{w})$ such that
$t_w\cup\add(s',w){\upharpoonright}\sing{\zeta}
\parallel\phi$. Then either $\set{w\in
A^{s',\zeta}_{\gamma}\mid t_w\cup
\add(s',w){\upharpoonright}\sing{\zeta}\Vdash\phi} \in
U^{s',\zeta}_{\gamma}$ or $\set{w\in
A^{s',\zeta}_{\gamma}\mid t_w\cup
\add(s',w){\upharpoonright}\sing{\zeta}\Vdash\lnot\phi} \in
U^{s',\zeta}_{\gamma}$. Now reduce $A^{s',\zeta}_{\gamma}$ to
whichever set is in $U^{s',\zeta}_{\gamma}$, and apply
Lemma~\ref{thm:diagonal-closure} to obtain $s''\leq^* s'$ such that
$s''$ decides $\phi$.
Lemma~\ref{thm:prikry}(\ref{item:Prikrythm-inD}) is proved similarly,
applying Claim~\ref{thm:Prikry-sublemma} using the set $D$ given in
the hypothesis.
\end{proof}
If $\gamma<\zeta$ and $G\subseteq P(\vec F)$ is generic, then set
$G{\upharpoonright} \gamma+1=\set{s{\upharpoonright}\gamma+1\mid
\gamma\in\domain(s)\land s\in G}$. Then
$G{\upharpoonright}\gamma+1$ is a generic subset of $P(\vec F^{s,\gamma})$.
\begin{corollary}[No new bounded sets]
\label{thm:noBoundedSets}
Suppose $x\in M[G]\setminus M$ and
$x\subset\lambda<\forceKappa^{M[G]}_{\gamma+1}$.
Then $x\in M[G{\upharpoonright}\gamma'+1]$ for some $\gamma'<\gamma$.
\end{corollary}
\begin{proof}
If $\gamma=\gamma'+1$ then Proposition~\ref{thm:factorization}
and Lemma~\ref{thm:closure} imply that $\gamma'$ is as required.
If $\gamma$ is a limit ordinal then take $\gamma'$ least such that
$\forceKappa^{G}_{\gamma'}>\lambda$.
\end{proof}
\begin{corollary}\label{thm:approx}
If $\vec F$ is a suitable sequence with critical point $\kappa$ then
$P(\vec F)$ has the $\kappa$-approximation property: if $G\subseteq
P(\vec F)$ is $M$-generic then for any function $f\in M[G]$ with
$\domain(f)=\kappa$ there is a set $A\in M$ with $\card A\leq\kappa$
and $\range(f)\subseteq A$.
\end{corollary}
\begin{proof}
Let $\dot f$ be the name of a function
$f\colon \kappa=\forceKappa_{\zeta}\to\kappa^{+}$, and let $s$ be a
condition, which we will assume has domain $\sing{\zeta}$.
If $\zeta=\gamma+1$ then, for any condition $s$ with
$\gamma\in\domain(s)$, factor $\below{P(\vec F)}{s}$ as
$\below{P(\vec F^{s,\gamma})}{s{\upharpoonright}\gamma+1}\times P'$. Then
$P'$ is $\kappa^{+}$-closed since $\vec F^{s,\gamma+1}=\emptyset$,
so there is $s'\leq
s{\upharpoonright}\sing{\zeta}$ such that for all $\alpha<\kappa$ there are
$\beta$ and $t\in G{\upharpoonright}\gamma+1$ such that $t\cup s'\Vdash
\dot f(\alpha)=\beta$. Thus we can take
\begin{equation*}
A=\set{\beta\mid (\exists\alpha<\kappa)(\exists
t\in \below{P(\vec F^{s,\gamma})}{s{\upharpoonright}\gamma+1})\;
t\cup s'\Vdash \dot f(\alpha)=\beta}.
\end{equation*}
If $\zeta$ is a limit ordinal then use
Lemma~\ref{thm:closure} to define a $\leq^*$-decreasing
sequence of conditions $s_\gamma\leq^* s$ such that $s_{\gamma}$ forces
the following formula:
\begin{multline*}
(\forall\alpha<\dot{\forceKappa}_{\gamma})\forall\beta\;
\bigl((\exists t\in \dot
G)\;\domain(t)\subseteq(\gamma+1)\cup\sing\zeta\land
t\Vdash\dot f(\alpha)=\beta
\implies
\\
(\exists w\in A^{s_{\gamma},\zeta}_{\gamma})(\exists t\in \dot
G{\upharpoonright}\gamma+1)\;
(t\leq\add(s_{\gamma},w)\land
\\
t\cup\add(s_{\gamma},w){\upharpoonright}\sing{\zeta}\Vdash\dot f(\alpha)=\beta
\bigr).
\end{multline*}
Set $s'=\bigwedge_{\nu<\zeta} s_{\nu}$ and
\begin{multline*}
A_{\gamma}=\set{\beta\mid \exists\alpha(\exists w\in
A^{s',\zeta}_{\gamma})(\exists
t<\add(s',w))\;\\t{\upharpoonright}(\gamma,\zeta]=\add(s',w){\upharpoonright}(\gamma,\zeta]\land
t\Vdash \dot f(\alpha)=\beta}.
\end{multline*}
Then $s'\Vdash\range(\dot f)\subseteq \bigcup_{\gamma<\zeta}A_{\gamma}$.
\end{proof}
\begin{corollary} \label{thm:PF-nocollapse}
Forcing with $P(\vec F)$ does not collapse any cardinal which is not
in the set $\bigcup_{\gamma\leq\zeta}[\forceKappa_{\gamma}^{++}, \forceKappa^{+(\gamma+1)}_{\gamma}]$.
\end{corollary}
\begin{proof}
Suppose $\lambda$ is a cardinal of $M$ which is collapsed in $M[G]$,
where
$G\subseteq P(\vec F)$ is $M$-generic. If $\lambda<\kappa=\forceKappa_{\zeta}$ then
Corollary~\ref{thm:noBoundedSets} implies that the collapsing
function is in $M[G{\upharpoonright}\gamma+1]$ for some $\gamma<\zeta$.
Thus we can assume without loss of generality that $\gamma=\zeta$
and $\lambda\geq\kappa$. Also, $\lambda\leq\card{P(\vec
F)}\leq\kappa^{+(\zeta+1)}$.
Finally, Corollary~\ref{thm:approx} implies that $\lambda\not=\kappa^{+}$.
\end{proof}
In the forcing of Gitik from which this forcing is derived, a
preliminary forcing is used to define a morass-like structure
which guides the main forcing so that no cardinals are collapsed.
We omit this preliminary forcing as unnecessary for the proof
of the main theorem; however as a
consequence we do not know whether the cardinals of $M_\Omega$ which
are excepted in Corollary~\ref{thm:PF-nocollapse} are cardinals
in the Chang model.
\subsection{Introducing the equivalence relation}\label{sec:gkeqDef}
We now proceed to the second part of the definition of the forcing by
adding a variant of Gitik's equivalence relation $\gkeq$ on $P(\vec
F)$. Recall that if $F$ is an extender on $\lambda$ then $\ufFromExt{F}{b}$ is
the ultrafilter $\set{x\in V_\lambda\mid b\in i^{F}(x)}$.
\begin{definition}\label{def:gkeq-a-set}
Suppose that $\vec F$ is a suitable sequence of extenders of length
at least $\gamma+1$ on a cardinal $\lambda$, and $a,a'\colon x\to
\supp(F_{\gamma})$ for some $x\subseteq[\lambda,\lambda^{+})$ of
size $\lambda$. Set $Y=\bigcup_{\gamma'<\gamma}\supp(F_{\gamma'})$.
\begin{enumerate}
\item
$a\gkeq_{0}a'$ if
$\ufFromExt{F_\gamma}{y\cup\sing{a}}=\ufFromExt{F_\gamma}{y\cup\sing{a'}}$
for all $y\in[Y]^{<\omega} $.
\item
If $n\geq0$ then $a\gkeq_{n+1}a'$ if for all $b\supseteq a$
there is $b'\supseteq a'$ such that $b\gkeq_{n}b'$, and for all
$b'\supseteq a'$ there is $b\supseteq a$ such that $b\gkeq_{n}b'$.
\end{enumerate}
\end{definition}
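Unwinding the recursion at the first stage, $a\gkeq_{1}a'$ holds if and only if for every $b\supseteq a$ there is $b'\supseteq a'$ such that
\begin{equation*}
\ufFromExt{F_\gamma}{y\cup\sing{b}}=\ufFromExt{F_\gamma}{y\cup\sing{b'}}
\qquad\text{for all $y\in[Y]^{<\omega}$,}
\end{equation*}
and symmetrically with the roles of $a$ and $a'$ interchanged.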
\begin{definition}\label{def:gkeq-a-seq}
We write $\mathcal{N}$ for the set of sequences
$\vec n\in{^{\zeta}{\omega}}$ such that
$\set{\iota<\zeta\mid n_\iota< m}$ is finite for each
$m\in \omega$.
Suppose that $\vec F$ is a suitable sequence of extenders on $\lambda$
and $\vec a$ and $\vec a'$ are sequences with $\domain(\vec
a)=\domain(\vec a')=\domain(\vec F)\subseteq\zeta$.
\begin{enumerate}
\item If $\vec n\in\mathcal{N}$ then $\vec a\gkeq_{\vec n}\vec a'$
if $a_{\nu}\gkeq_{n_\nu}a'_{\nu}$ for all $\nu\in\domain(\vec
F)$.
\item $\vec a\gkeq\vec a'$ if there is some $\vec n\in\mathcal{N}$
such that $\vec a\gkeq_{\vec n}\vec a'$.
\end{enumerate}
\end{definition}
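For example, if $\zeta=\omega$ then the sequence given by $n_\iota=\iota$ belongs to $\mathcal{N}$, since
\begin{equation*}
\set{\iota<\omega\mid n_\iota<m}=\set{0,1,\dots,m-1}
\end{equation*}
is finite for each $m\in\omega$; on the other hand, any sequence which takes some fixed value infinitely often fails to belong to $\mathcal{N}$.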
\begin{definition}\label{def:gkeq}
The extension of $\gkeq_{\vec n}$ to $P^*_{\gamma}$ is by recursion on
$\gamma$: we assume that its restriction to $P^*_{\eta}$ is defined for all
$\eta<\gamma$.
If $\eta<\gamma$ and $w,w'\in P_{\gamma,\eta}^{*}$ then $w\gkeq_{\vec n}w'$ if
\begin{myinparaenum}
\item
$w{\upharpoonright}[\gamma_0,\eta]\gkeq_{\vec n} w'{\upharpoonright}[\gamma_0,\eta]$,
as members of $P^*_{\eta}$, and
\item
$w{\upharpoonright}[\eta+1,\gamma)=w'{\upharpoonright}[\eta+1,\gamma)$.
\end{myinparaenum}
Suppose $t,t'\in P^*_{\gamma}$. Then $t\gkeq_{\vec n}t'$ if the
following conditions hold:
\begin{enumerate}
\item $\forceKappa^{t}=\forceKappa^{t'}$ and $\vec F^{t}=\vec
F^{t'}$.
\item $f^{t}_{\nu,\nu'}=f^{t'}_{\nu,\nu'}$ for all $\nu,\nu'$ for
which they are defined.
\item $a^{t}_{\mu,\nu}\gkeq_{n_\nu}a^{t'}_{\mu,\nu}$ for all $\gamma\geq\mu>\nu$.
\item\label{item:gkeq_Aeq} $ [A^{t}_{\nu}]_{\vec n}=[A^{t'}_{\nu}]_{\vec n}$ for all
$\nu\in\domain(\vec F^{t})$, where $[A]_{\vec
n}=\set{[w]_{\gkeq_{\vec n}}\mid w\in A}$.
\end{enumerate}
Finally,
$s\gkeq_{\vec n} s'$ for conditions $s,s'\in P(\vec F)$ if
$\domain(s)=\domain(s')$ and
$s(\gamma)\gkeq_{\vec n}s'(\gamma)$ for all $\gamma$ in their
common domain.
\end{definition}
It is easy to see that $\gkeq$ is an equivalence relation.
\begin{proposition}
\label{thm:one-step-gkeq}
Suppose that $\add(s, \vec z)\leq s\gkeq_{\vec n} t$. Then
there is $ \vec w$ such that $\add(s, \vec z)\gkeq_{\vec n}\add(t, \vec w)\leq t$.
\end{proposition}
\begin{proof}
We show that this is true when $\vec z$ has length one; an easy
induction then gives the general case.
Suppose that $\add(s,z)\leq s\gkeq_{\vec n} t$, with $z\in
A^{s,\tau}_{\gamma}$. By
Definition~\ref{def:gkeq}(\ref{item:gkeq_Aeq})
there is
$w\in A^{t,\tau}_{\gamma}$ such that $z\gkeq_{\vec n}w$. Then
the condition $z{\upharpoonright}[\gamma_0,\gamma]\gkeq_{\vec
n}w{\upharpoonright}[\gamma_0,\gamma]$ implies that
$\add(s,z)(\gamma)\gkeq_{\vec n}\add(t,w)(\gamma)$, and the
condition that $z{\upharpoonright}[\gamma+1,\tau)=
w{\upharpoonright}[\gamma+1,\tau)$ implies that the Cohen functions induced in
$\add(s,z)(\tau)$ and $\add(t,w)(\tau)$
by Definition~\ref{def:a2f} are equal. Therefore
$\add(s,z)(\tau)\gkeq_{\vec n}\add(t,w)(\tau)$.
Since these are the only values of $s$ and $t$ which are changed in
the extensions,
it follows that $\add(s,z)\gkeq_{\vec n}\add(t,w)$.
\end{proof}
\begin{proposition}
\label{thm:direct-gkeq}
Suppose $s'\leq^* s\gkeq_{\vec n} t$, and that $n_{\nu}>0$ for all
$\nu\notin\domain(s)$. Then there is $t'\leq^* t$ such
that $s'\gkeq_{\vec m} t'$, where, for all
$\nu<\zeta$, $m_{\nu} = n_{\nu} - 1$ if
$n_{\nu}>0$ and $m_{\nu}=0$ otherwise.
\end{proposition}
\begin{proof}
We will show by induction on $\gamma$ that, under the hypotheses of the Proposition, if $\gamma\in\domain(s)=\domain(t)$ then there is $t'(\gamma)$ such that
$t'(\gamma)\leq^* t(\gamma)$ and $t'(\gamma)\gkeq_{\vec m}s'(\gamma)$. By the definition of $\gkeq$,
the sequence $\vec F^{t',\gamma}$ and the functions
$f^{t',\gamma}_{\nu,\nu'}$ must be the same as $\vec
F^{s',\gamma}$ and $f^{s',\gamma}_{\nu,\nu'}$. This leaves the
functions $a^{t',\gamma}_{\nu',\nu}$ and sets $A^{t'}_{\nu}$ to be
defined.
To define $\vec a^{t',\gamma}$, pick for each $\nu$ in the interval
$\gamma_0\leq\nu<\gamma$ some $b\supseteq a^{t,\gamma}_{\nu+1,\nu}$ such that
$a^{s',\gamma}_{\nu+1,\nu}\gkeq_{m_{\nu}} b$. This is possible by
the definition of $\gkeq_{m_{\nu}+1}$, since $n_\nu\not=0$. Now set
$a^{t',\gamma}_{\nu+1,\nu}=b$. By clause~(\ref{item:asubset}) of
Definition~\ref{def:tableau}, this determines
$a^{t',\gamma}_{\mu,\nu}=a^{t',\gamma}_{\nu+1,\nu}{\upharpoonright}\domain(a^{s',\gamma}_{\mu,\nu})$
for $\mu>\nu+1$.
Finally, set $A^{t',\gamma}_{\nu}$ equal to the set of all $w'$ such
that $w'\leq^* w$ for some $w\in A^{t,\gamma}_{\nu}$ and
$w'\gkeq_{\vec m} v'$ for some $v'\in A^{s',\gamma}_{\nu}$. Then
$ [A^{t',\gamma}_{\nu}]=[A^{s',\gamma}_{\nu}]$ since for all
$v'\in A^{s',\gamma}_{\nu}$ there is $v\in A^{s,\gamma}_{\nu}$ and
$w\in A^{t,\gamma}_{\nu}$ such that $v'\leq^* v\gkeq_{\vec m}w$, and
then the induction hypothesis implies that there is $w'\leq^* w$ with
$w'\gkeq _{\vec m} v'$.
\end{proof}
\begin{definition}
\label{def:modgekq}
We will write $[s]$ for $[s]_{\gkeq}=\set{t\mid s\gkeq t}$. The
ordering on $P(\vec F)\mgkeq$ is the smallest transitive relation such
that $[s]\leq[t]$ holds if either $s\leq t$ or $s\gkeq t$.
\end{definition}
\begin{proposition}
\label{thm:leq_trans1}
Suppose $[t]= [s]$ and $t'\leq t$. Then there are $s''\leq s$ and
$t''\leq t'$ such that $[s'']=[t'']$.
\end{proposition}
\begin{proof}
Suppose that $t\gkeq_{\vec n}s$. By passing to a
further extension $t''=\add(t',\vec u)$ we can arrange that
$\set{\nu\mid n_{\nu}=0}\subseteq\domain(t'')$. By
Proposition~\ref{thm:one-stepFirst} there is $\vec z$ so that
$t''\leq^* \add(t,\vec z)\leq t$. By
Proposition~\ref{thm:one-step-gkeq} it follows that there is $\vec
$w$ so that $\add(t,\vec z)\gkeq_{\vec n}\add(s,\vec w)\leq s$. Finally, it follows by Proposition~\ref{thm:direct-gkeq} that there is
$s''\leq^* \add(s,\vec w)$ so that $s''\gkeq t''$.
\end{proof}
\begin{proposition}\label{thm:equivnormal}
Suppose that $[t]\leq [s]$. Then there is a condition $q\leq s$
such that $[q]\leq [t]$.
\end{proposition}
\begin{proof}
\newcommand{\mathbin{\genfrac{}{}{0pt}{}{\raisebox{-3pt}{$<$}}{\raisebox{1pt}{$\scriptstyle\gkeq$}}}}{\mathbin{\genfrac{}{}{0pt}{}{\raisebox{-3pt}{$<$}}{\raisebox{1pt}{$\scriptstyle\gkeq$}}}}
If $[t]\leq[s]$ then there is a sequence
$t=t_0\mathbin{\genfrac{}{}{0pt}{}{\raisebox{-3pt}{$<$}}{\raisebox{1pt}{$\scriptstyle\gkeq$}}} t_1\mathbin{\genfrac{}{}{0pt}{}{\raisebox{-3pt}{$<$}}{\raisebox{1pt}{$\scriptstyle\gkeq$}}}\cdots\mathbin{\genfrac{}{}{0pt}{}{\raisebox{-3pt}{$<$}}{\raisebox{1pt}{$\scriptstyle\gkeq$}}} t_{k-1}\mathbin{\genfrac{}{}{0pt}{}{\raisebox{-3pt}{$<$}}{\raisebox{1pt}{$\scriptstyle\gkeq$}}} t_{k}=s$,
where we write $s\mathbin{\genfrac{}{}{0pt}{}{\raisebox{-3pt}{$<$}}{\raisebox{1pt}{$\scriptstyle\gkeq$}}} s'$ to mean that either $s\le s'$ or $s\gkeq s'$.
We prove the
proposition by induction on the length of the shortest such
sequence, assuming as an induction hypothesis that there is $\bar
q\le t_{k-1}$ such that $[\bar q]\leq[t]$.
If $t_{k-1}\leq s$, then it follows that $\bar q\leq s$ and we
can take $q=\bar q$. Otherwise $\bar q\le t_{k-1}\gkeq s$, and
Proposition~\ref{thm:leq_trans1} asserts that there is $q\leq s$
and $q'\le\bar q$ such that $q\gkeq q'$. But then $[q]=[q']\leq
[t]$, as required.
\end{proof}
\begin{corollary}\label{thm:iterated}
$P(\vec F)$ is forcing equivalent to $(P(\vec F)\mgkeq)*\dot R$ where
$\dot R$ is a $P(\vec F)\mgkeq$-name for a partial order.\qed
\end{corollary}
\begin{corollary}\label{thm:nocollapse-PFgkeq}
Forcing with $P(\vec F)\mgkeq$ does not collapse any cardinal which is not in the set $\bigcup_{\gamma\leq\zeta}[\forceKappa_{\gamma}^{++},
\forceKappa^{+\omega_1}_{\gamma})$.
\end{corollary}
\begin{proof}
By Corollary~\ref{thm:PF-nocollapse} this is true in the extension by $P(\vec F)=
(P(\vec F)\mgkeq)*\dot R$;
hence it is certainly true in the extension by $P(\vec F)\mgkeq$.
\end{proof}
\subsection{Constructing a generic set}
\label{sec:generic_set}
Much of the argument in this subsection is basically the same as
Carmi Merimovich's first genericity argument
\cite[Theorem 5.1]{Merimovich2007Prikry-on-exten}.
In order to construct an $M_B$-generic set we need to move outside of
$M_B$: we work in $V[h]$, where $h$ is a generic collapse of
$\reals$ onto $\omega_1$ so that $\card{M[h]}=\omega_1$. Since this Levy
collapse does not add countable sequences of ordinals the Chang
model is unchanged, the ordering
$\le^*$ of $P(\vec
N{\upharpoonright}\zeta)$ is still countably complete, and $M$ is still closed under
countable sequences. Furthermore, since $h$ is generic over $M$,
$M[h]\supseteq M(\reals)$ and $M[h]$ is a mouse over $h$ which has all
of the required
properties of $M$.
\begin{lemma}[Generic set construction]\label{thm:generic_in_V_1}
Let $h$ be a generic collapse of $\reals$ onto $\omega_1$ with
countable conditions, and
let $B$ be a countable subset of $I$ with $\otp(B)=\zeta$.
Then there is, in $V[h]$, an $i_{\Omega}(M_B)$-generic set $G\subseteq
i_{\Omega}(P(\vec E{\upharpoonright}\zeta)\mgkeq)$ such that
every countable subset of $M_{B}$ is contained in $M_B[G]$.
\end{lemma}
\subsubsection{Proof of Lemma~\ref{thm:generic_in_V_1}}
\label{sec:generic_in_V_1_proof}
Since $M_B\cong M_{B(\zeta)}$, where $B(\zeta)=\set{\kappa_\nu\mid
\nu<\zeta}$ is the set containing the first $\zeta$ members of $I$, it will be
sufficient to prove this for the case where $B=B(\zeta)$.
This will simplify notation, since then $M_{B}{\vert}\Omega$ is
transitive and $\forceKappa_\nu^{G}$ is equal to both the $\nu$th
member $\kappa_\nu$ of $I$ and the $\nu$th member of $B$.
We define a partial order $R$. Our assumptions on $M$ are
sufficiently generous that the definition of $R$ can be made inside
$M$, using $\seq{N_{\xi}\cap H^{M}_{\tau}\mid \xi<\omega_1}$, for
some sufficiently large cardinal $\tau$ of $M$, instead
of $\seq{N_{\xi}\mid\xi<\omega_1}$.
\begin{definition}
$R=\bigcup_{\xi<\omega_1} R_{\xi}$, where $R_{\xi}$ is defined as
follows: The members of $R_{\xi}$ are the pairs $([s],b)$ such that
$[s]\in P(\vec E{\upharpoonright}\zeta)\mgkeq$ is a condition with $\domain(s)=\sing{\zeta}$ and
$b=\seq{b_\gamma:\gamma<\zeta}$ where each $b_{\gamma{}}$ is a
function in $N_{\xi}$ satisfying the following three conditions:
\begin{enumerate}
\item $\domain(b_\gamma)=\domain(a^{s,\zeta}_{\gamma+1,\gamma})$ for each
$\gamma<\zeta$,
\item $\range(b_\gamma)\subset [\kappa, \kappa^{+\omega_1})$ for each
$\gamma<\zeta$, and
\item \label{item:Rgkeq}
$a^{s,\zeta}_{\gamma+1,\gamma}\gkeq b_{\gamma}$ for each $\gamma<\zeta$.
\end{enumerate}
The ordering of $R$ is $(s',b')\leq(s,b)$ if $[s']\leq [s]$ in $P(\vec
N)\mgkeq$ and $(\forall\gamma<\zeta)\; b'_\gamma\supseteq
b_\gamma$.
\end{definition}
Clause~(\ref{item:Rgkeq}) requires some explanation, since
$\range(b_{\gamma})\nsubset \supp(E_\gamma)=\supp(E)\cap N_\gamma$.
Definition~\ref{def:gkeq-a-set} of the relation
$a\gkeq_n a'$ uses the parameter $\gamma$ in two ways.
The first use is in the definition of $a\gkeq_{0}a'$, where the set
$Y=\bigcup_{\gamma'<\gamma}\supp(F_{\gamma'})$ is used as the set of
$y$ in the requirement
$(F_{\gamma})_{y\cup\sing{a}}=(F_{\gamma})_{y\cup\sing{a'}}$.
Here the same set $Y$ is used, and since
$\ufFromExt{E_\gamma}{y\cup\sing{a}}=\ufFromExt{E}{y\cup\sing a}$
the requirement can be altered to
$\ufFromExt{E_{\gamma}}{y\cup\sing{a}}=\ufFromExt{E}{y\cup
\sing{b}}$.
The second way in which the parameter $\gamma$ is used is in the
domain of the quantifiers. In Clause~(\ref{item:Rgkeq}) the
extensions $a'\supseteq a^{s,\zeta}_{\gamma+1,\gamma}$ are in
$M_{\gamma}$, while the extensions $b'\supseteq b_{\gamma}$ are in
$M$. We reconcile these demands by using the elementarity of
$N_{\gamma}$, and this requires expressing Clause~(\ref{item:Rgkeq})
as a first order statement. This is achieved by the following
Proposition, which is the reason for the requirement in
Definition~\ref{def:Nsequence} that
$\card{\bigcup_{\gamma'<\gamma}N_{\gamma'}}^{++}\subseteq N_\gamma$.
\begin{proposition}
\label{thm:gkeq0inNgamma}
For any $b\colon x\to [\kappa,\kappa^{+\omega_1})$ with $x\in
[\kappa^{+}\setminus\kappa]^{\kappa}$, there is a
formula $\phi(n,a)$, with parameters from $N_{\gamma}$, such that if
$a\colon x\to\supp(E_{\gamma})$ then
$a\gkeq _n b$ if and only if $N_\gamma\models \phi(n,a)$.
\end{proposition}
\begin{proof}
For $n=0$, note that
the sequence of ultrafilters $\seq{\ufFromExt{E}{y\cup\sing{b}}\mid y\in
[Y]^{<\omega}}$ can be coded as a subset of $[Y]^{<\omega}\times
\ps(\kappa)$, which has cardinality $\card
Y=\card{\bigcup_{\gamma'<\gamma}N_{\gamma'}}$.
Working in $M$, define $T$ to be the tree
of finite sequences of the form $\seq{[b_i]_{\gkeq_0}\mid i<k}$
where $\seq{b_i\mid i<k}$ is a $\subseteq$-increasing sequence of
functions $b_i\colon x_i\to[\kappa,\kappa^{+\omega_1})$ with $x_i\in
[\kappa^{+}\setminus\kappa]^{\kappa}$.
Since $T$ is at most $\card{\ps(Y)}$-branching,
it has cardinality at most $\card {\ps^{2}(Y)}$, so
Clauses~(\ref{item:cardNnuSubsetNnu})
and~(\ref{item:Nseq-doublepluss}) of
Definition~\ref{def:Nsequence} ensure that $T\in N_{\gamma}$.
Write $T_{b}$ for the portion of $T$ above $\seq{[b]_{\gkeq_0}}$.
Then the conclusion of the proposition is satisfied by the
formula $\phi(n,a)$, with parameter $T_{b}$,
which asserts that the first $n$ levels of $T_{b}$ and
$(T_{a})^{N_\gamma}$ are equal. Since
$[\supp(E_\gamma)]^{\kappa}\cap N_\gamma\in N_\gamma$, this is a
first order formula over $N_\gamma$.
\end{proof}
\begin{todoenv}
(7/21/15) --- for future --- Note that the $\le^*$ forcing order
is homogeneous: in particular, two conditions with functions $f$ and
$a$ are essentially independent of the domain of $f$ and $a$
except for terms that explicitly ask for $f_{\zeta,\nu}(\xi)$ for
$\xi$ in the domain of one of the affected functions.
\end{todoenv}
\begin{todoenv}
(7/3/15) --- For future --- Note that the $\gkeq_{n}$-class is
essentially a property of $[\supp(E)]^{\kappa}$. We can, for
example, assume that $c_0=\emptyset$ and $c_{i+1}\setminus c_i$
has domain
$[\nu_i,\nu_{i+1})$ where $\nu_0=\kappa$ and $\nu_{i+1}=\nu_i
+\otp(\range(c_{i+1})\setminus\range(c_{i}))$ and $c_{i+1}\setminus
c_{i}$ is the increasing enumeration. Then the $\gkeq_{n}$-types
of a sequence of $c'_i$ with the same ranges is determined by that
of $\seq{c_i}_i$ together with the function $\rho$ such that
$c'_i=c_i\circ \rho$.
\end{todoenv}
\begin{lemma}\label{thm:density-in-R}
\begin{enumerate}
\item \label{item:x} $\xset{([s],b)}{s\in D}$ is dense in $R$ for
each $\leq^*$-dense set $D\subseteq P(\vec E{\upharpoonright}\zeta)$ in
$M$.
\item
\label{item:y}
Suppose $\gamma<\zeta$ and $\beta\in [\kappa,\kappa^{+\omega_1})$.
Then there is a dense subset of conditions $([s],b)\in R$ such
that $b_\gamma(\xi)=\beta$ for some $\xi\in\domain(a^{s,\zeta}_{\zeta,\gamma})$.
\end{enumerate}
\end{lemma}
\begin{proof}
For clause~(\ref{item:x}),
let
$([s],b)\in R$ be arbitrary and set $\vec
a=\xseq{a^{s,\zeta}_{\gamma+1,\gamma}}{\gamma<\zeta}$. We may assume that
$a_{\gamma}\gkeq_{1}b_{\gamma}$ for each $\gamma<\zeta$; if not,
then replace each such $a_{\gamma}$ with some $a'_{\gamma}$ such that
$a'_{\gamma}\gkeq_{0}a_{\gamma}$ and
$a'_{\gamma}\gkeq_{1}b_{\gamma}$. This is possible by
Proposition~\ref{thm:gkeq0inNgamma} and the
elementarity of the structures $N_{\xi}$, since $b_{\gamma}$ has
the desired properties. This change
only involves finitely many of the functions $a_{\gamma}$, so the
condition obtained from $s$ by making this substitution is still
in $[s]$.
Now pick $s'\leq^* s$ in $D$. Because of the assumption we made
on $s$, Proposition~\ref{thm:direct-gkeq} implies that there is
$b'$ with $a^{s',\zeta}_{\gamma+1,\gamma}\gkeq b'_{\gamma}$ for each $\gamma<\zeta$ such that
$([s'],b')\leq([s],b)$.
\smallskip{}
The proof for clause~(\ref{item:y}) is
similar. Fix $([s],b)\in R$, and assume that
$a^{s,\zeta}_{\gamma+1,\gamma}\gkeq_{1}b_{\gamma}$ for all $\gamma<\zeta$.
Now fix $\mu<\omega_1$ so that $\sing{b,\beta}\subset N_{\mu}$ and
extend $b$ to $b'\in N_\mu$ by setting $b'_\gamma(\xi)=\beta$ for some
$\xi$ which is not in the domain of any function in~$s$. Then there
is $a'_{\gamma}\supset a_{\gamma}$ so that
$a'_{\gamma}\gkeq_{0} b'_{\gamma}$. Now extend $s$ to $s'$ by
setting $a^{s',\zeta}_{\gamma',\gamma}=a'_{\gamma}$ for all $\gamma'\in(\gamma,\zeta]$.
\end{proof}
The ordering $(P(\vec N)\mgkeq, \leq^*)$ is not countably complete:
it is easy to find an infinite descending sequence of conditions
$\seq{[s_n]\mid n<\omega}$ such that any lower bound would require an ultrafilter
concentrating on non-well founded sets of ordinals. However the
partial order $R$ is countably complete due to the guidance of the
second coordinate $b$:
\begin{lemma}\label{thm:Rcomplete}
The partial order $R$ is countably closed.
\end{lemma}
\begin{proof}
Suppose that $\seq{([s_n], b_n)\mid n<\omega}$ is a descending
sequence in $R$. We define a lower bound
$([s_{\omega}],b_{\omega})$ for this sequence. The definition of $R$ determines
$b_{\omega,\nu}=\bigcup_{n<\omega}b_{n,\nu}$, and determines all of
$s_{\omega}$ except for the functions $a^\omega_\nu=a^{s_{\omega},\zeta}_{\nu+1,\nu}$.
It also determines $\domain(a^\omega_\nu)=
\domain(b_{\omega,\nu})= \bigcup_{n<\omega}\domain(a^{s_n,\zeta}_{\zeta,\nu})$.
Pick any $\vec n=\seq{n_\nu\mid\nu<\zeta}\in\mathcal{N}$, and for each
$\nu<\zeta$ pick $a^\omega_\nu\in N_\nu$ so that
\begin{equation*}
a^\omega_\nu{\upharpoonright}\domain(a^n_\nu)\gkeq_{k_{n,\nu}}a^n_\nu
\quad\text{and}\quad
a^\omega_\nu\gkeq_{n_{\nu}}b_{\omega,\nu}
\end{equation*}
where $a^n_\nu\gkeq_{k_{n,\nu}} b_{n,\nu}$. This
is possible by the elementarity of the models $N_{\xi}$, since
$b_{\omega,\nu}$ satisfies these conditions. Then
$([s_{\omega}],b_{\omega})\in R$ and
$([s_{\omega}],b_{\omega})\le ([s_{n}],b_{n})$ for each $n\in\omega$.
\end{proof}
We are now ready to construct the desired $M_B$-generic set
$G\subset i_{\Omega}(P(\vec E{\upharpoonright}\zeta)\mgkeq)$, where $\zeta=\otp(B)$.
\begin{definition}[The generic set $G$]
\label{def:G}
Let $H\subset R$ in $V[h]$ be an
$M$-generic set. Such a set can be constructed in $V[h]$ using
Lemma~\ref{thm:Rcomplete}, since $\card M^{V[h]}=\omega_1$
and $^{\omega}M\subseteq M$.
We set
\begin{equation}\label{eq:a1}
G=\set{[s']\mid (\exists
([s],b)\in H)(\exists\vec \gamma\in[\zeta]^{<\omega} )\,
s'\geq^* \add(i_{\Omega}(s),\vec w(s,b,\vec\gamma))}
\end{equation}
where $\vec w(s,b,\vec \gamma)$ is defined as follows: Set
$n=\len(\vec\gamma)$. Then
$\vec w(s,b,\vec\gamma)=\seq{i_{\gamma_i}(w_i)\mid i<n}$, where
\begin{align}
w_i{\upharpoonright}[0,\gamma_i]&=\add(s,w_i){\upharpoonright}[0,\gamma_i]
&& \text{and}\notag\\
a^{w_i}_{\gamma,\gamma_i}&=
b_{\gamma,\gamma_i}{\upharpoonright}\domain(a^{s,\gamma_i}_{\gamma,\gamma_i})&
&\text{for $\zeta\geq\gamma>\gamma_i$.}
\label{eq:a2}
\end{align}
\end{definition}
Note that $w_i\gkeq s{\uparrow}\gamma_i$ and therefore
$[\add(i_{\Omega}(s), i_{\gamma_i}(w(s,b,\vec\gamma)))]\leq [i_{\Omega}(s)]$.
The effect of the substitution used in equation~\eqref{eq:a2} to define $w_i$ is that
\begin{equation*}
[\add(i_{\Omega}(s), i_{\gamma_i}(w_i))]\Vdash
h_{\zeta,\gamma_i}(\xi)=b_{\gamma_i}(\xi)\qquad \text{for all}\qquad
\xi\in\domain(a^{s,\zeta}_{\zeta,\gamma_i}).
\end{equation*}
In looking at the Chang model inside of $M_B[G]$, it is important to
recall that the set $T$ of terms specified for the sharp of $\chang$
provides a set, inside $M$, of names for the members of $\chang_B$.
Definition~\ref{def:standard-names} below makes this more specific,
and provides a set of names inside $M$ for the members of $M_B$ and
for $\chang^{M_B}$, and then provides standard \emph{forcing} names
which are useful inside $M_B[G]$; however the notation in the next
definition is sometimes useful.
\begin{definition}
\label{def:namenotation}
We write $\bar i_{\gamma}$ for the embedding $i_{\gamma'}$ where
$\gamma'$ is the ordinal such that the
$\gamma$th member $\forceKappa_{\gamma}$ of $B$ is equal to
$\kappa_{\gamma'}$.
If $\tau$ is an expression then we write $\gn{\tau}$ to indicate
that $\tau$ is being used as a name for the value of the expression.
\end{definition}
\begin{definition}
\label{def:standard-names}
A \emph{standard name} for a member of $M_B$ is a term
obtained recursively as follows:
\begin{enumerate}
\item\label{item:stdname_MB_gen}
If $\gamma\leq\zeta$ and $\bar\beta\in
[\kappa,\kappa^{+\omega_1})$ then $\gn{\bar
i_{\gamma}(\bar\beta)}$ is a standard name for the generator
$\beta=\bar i_{\gamma}(\bar\beta)$ belonging to $\forceKappa_{\gamma}$.
\item \label{item:stdname_MB_fctn}
If $f\in M$ and $x$ is a finite sequence of standard
names for generators $\vec\beta$ in $M_B$, then $\gn{i_{\Omega}(f)(x)}$ is
a standard name for the value $i_{\Omega}(f)(\vec{\beta})$.
\end{enumerate}
A standard name for a member of $\chang$ is a term obtained
recursively using clause~(1) above and the following two operations:
\begin{enumerate}
\setcounter{enumi}{2}
\item[$\arabic{enumi}'$.] \label{item:stdname_C_fctn}
If $\alpha$ is an ordinal, then a standard name for
$\alpha\in M_B$ from clause~(2) above is also a standard name for $\alpha\in\chang$.
\item\label{item:stdname_C_recur}
Suppose that $i$ is a standard name for an ordinal $\iota$ and
that $\vec\tau$ is a countable sequence of standard
names for ordinals $\vec\beta=\seq{\beta_k\mid k\in\omega}$.
Then
$\gn{\set{x\in\chang_{i}\mid
\chang_i\models\phi(x,\vec\tau)}}$ is a standard name for $\set{x\in
\chang_{\iota}\mid \chang_\iota\models \phi(x,\vec\beta)}$.
\end{enumerate}
The definition of a \emph{standard forcing name} is identical in
both cases,
except that clause~1 is replaced with the following:
\begin{enumerate}
\item[$1'$.]
Suppose $([s],b)\in H$,
$\xi\in\domain(a^{s,\zeta}_{\zeta,\gamma})$ and
$b_{\gamma}(\xi)=\bar \beta$, so
that
\begin{equation*}
([s],b)\Vdash_{R} M_B[G]\models h_{\zeta,\gamma}(\xi)=i_{\gamma}(\bar\beta).
\end{equation*}
Then $\gn{h_{\zeta,\gamma}(\xi)}$ is a standard forcing name for
$\beta=i_{\gamma}(\bar\beta)$, and is said to be
\emph{established} by the condition $([s],b)$.
\end{enumerate}
An arbitrary standard forcing name $\tau$ is established by $([s],b)$ if
this condition establishes all names $\gn{h_{\zeta,\gamma}(\xi)}$ occurring in $\tau$.
\end{definition}
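To illustrate Clauses~(\ref{item:stdname_MB_gen}) and~(\ref{item:stdname_MB_fctn}) (the particular $f$, $\gamma_0$, $\gamma_1$, $\bar\beta_0$ and $\bar\beta_1$ are illustrative choices, not fixed anywhere in the text): if $f\in M$ is a two-place function, $\gamma_0,\gamma_1\leq\zeta$ and $\bar\beta_0,\bar\beta_1\in[\kappa,\kappa^{+\omega_1})$, then
\begin{equation*}
\gn{i_{\Omega}(f)\bigl(\gn{\bar i_{\gamma_0}(\bar\beta_0)},\gn{\bar i_{\gamma_1}(\bar\beta_1)}\bigr)}
\end{equation*}
is a standard name for the value $i_{\Omega}(f)\bigl(\bar i_{\gamma_0}(\bar\beta_0),\bar i_{\gamma_1}(\bar\beta_1)\bigr)$ in $M_B$.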
\begin{claim}\label{thm:Ggeneric-claim}
$G$ is $M_B$-generic.
\end{claim}
\begin{proof}
Let $D\subseteq i_{\Omega}(P(\vec E{\upharpoonright}\zeta)\mgkeq)$ be
dense, and let \[\dot D=i_{\Omega}(d) (\seq{h_{\zeta,\gamma_i}(i_{\Omega}(\xi_i))\mid i<k})\] be
a standard forcing name for $D$, established by a condition
$([s],b)\in R$. Thus for any $\vec
w\in\prod_{i<k}A^{s,\zeta}_{\gamma_i}$, the condition
$\add(s,\vec w)$ decides the values of each of the $P(\vec
E{\upharpoonright}\zeta)\mgkeq$-names
$h_{\zeta,\gamma_i}(\xi_i)$ and hence determines the value of
$d(\seq{h_{\zeta,\gamma_i}(\xi_i)\mid i<k})\subseteq P(\vec
E{\upharpoonright}\zeta)\mgkeq$.
We write $d_{s}(\vec
w)$ to denote this value.
Since $D$ is dense,
\begin{align*}
A=\set{\vec w\in\prod_{i<k}A^{s,\zeta}_{\gamma_i}\mid d_{s}(\vec w)
\text{ is dense in }P(\vec E{\upharpoonright}\zeta)\mgkeq}\in\prod_{i<k}U^{s,\zeta}_{\gamma_i}
\end{align*}
so we can assume that $d_{s}(\vec w)$ is dense in $P(\vec
E{\upharpoonright}\zeta)\mgkeq$ for all $\vec w\in\prod_{i<k}A^{s,\zeta}_{\gamma_i}$.
Using Lemma~\ref{thm:prikry}(\ref{item:Prikrythm-inD})
and Lemma~\ref{thm:closure}(\ref{item:closurediagonal}), it can be
shown that there is $s'\leq^*s$ such that
\begin{equation*}
(\forall \vec
w\in\prod_{i<k}A^{s',\zeta}_{\gamma_i})(\exists e\in[\zeta]^{<\omega})(\forall t\leq
s')\;\bigl(e\subseteq\domain(t)\implies [t]\in d_{s}(\vec w)\bigr).
\end{equation*}
Since $[\zeta]^{<\omega}$ is countable,
we can furthermore assume that $e$ does not depend on $\vec w$.
But now we are done, for if $b'$ is such that $(s',b')\leq(s,b)$
in $R$ and $e\subseteq\vec\gamma $ then
$\add(i_{\Omega}(s'), \vec w(s',b',\vec\gamma))\in D\cap G$.
\end{proof}
This completes the proof of Lemma~\ref{thm:generic_in_V_1}.
\subsubsection{Defining $\chang_{B}$ in $M[G]$}
It follows immediately from the genericity of $G$ that
\begin{corollary}
\label{thm:suitableCB}
$\chang_{B}=\chang^{M_B[G]}$ for any suitable sequence $B$.\qed
\end{corollary}
Here $\chang^{M_B[G]}=\chang_B^{M_B[G]}$ is the set defined inside
$M_B[G]$ using the definition of $\chang$ given in the first
paragraph of this paper.
The case of a limit suitable set $B$ is more important and more delicate, since
$M_{\tilde B}$ is not definable inside $M_B[G]$ for suitable $\tilde
B\subset B$. The following is the promised precise definition of
$\chang_B$:
\begin{definition}
\label{def:changBdef}
Suppose $B$ is a limit suitable\ set, and let $B'\subset B$ be the set of heads of
gaps in $B$. Call a countable set $v\in M_B$ of ordinals
\emph{$B$-bounded} if for all $\lambda\in B'$ and
$f\colon[\Omega]^{<\omega}\to\Omega$ in $M_B$, the set
$f[\,[v]^{<\omega}]\cap\lambda$ is bounded in $M_B\cap\lambda$.
Let $\mathcal{C}$ be the set of $B$-bounded sets.
Then $\chang_B$ is the set $L_{\Omega}^{M_B[G]}(\mathcal{C})$,
constructed by recursion over the
ordinals in $M_B\cap\Omega$ as in the first paragraph of this paper
using countable sequences from
$\mathcal{C}$.
\end{definition}
Note that $\chang_{B}$ is definable inside $M_B[G]$. The following
Proposition implies that Definition~\ref{def:changBdef}
is equivalent to the more
informal one given in section~\ref{sec:proof-start}.
\begin{proposition}
\label{thm:limitSuitableC_Bdefinable}
A countable sequence $\vec\nu$ of ordinals in $M_B$ is $B$-bounded
if and only if there is a suitable $\tilde B\subset B$ such that
$\vec\nu\in M_{\tilde B}$.
\end{proposition}
\begin{proof}
It is easy to see that if $\tilde B$ is suitable then every
countable $\vec\nu\subset M_{\tilde B}$ is $B$-bounded. For the
converse, suppose that $\vec\nu$ is $B$-bounded and take for
each $\nu_k\in \vec\nu$ a function $g_k\in M$ and finite sequence of
generators $e_k$ for cardinals in $B$ such that
$\nu_k=i_{\Omega}(g_k)(e_k)$, where for each $k$ we take the least possible
sequence $e_k$ in the
usual well order of finite sets of ordinals:
$e'\prec e\iff\max((e\cup e')\setminus (e\cap e'))\in e$.
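As a quick check of this well order (the sets $\{1,3\}$ and $\{2,3\}$ are illustrative values only): taking $e'=\{1,3\}$ and $e=\{2,3\}$,
\begin{equation*}
\max\bigl((e\cup e')\setminus(e\cap e')\bigr)=\max\{1,2\}=2\in e,
\end{equation*}
so $\{1,3\}\prec\{2,3\}$. In general $\prec$ compares two finite sets at the largest ordinal where they differ, placing the set which contains that ordinal later.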
Now let $f_k$ be the pseudoinverse of $g_k$ defined by setting
$f_k(\nu)$ equal to
the $\prec$-least finite sequence $e$ such that $\nu=g_k(e)$.
Then every member of $i_{\Omega}(f_k)(\nu_k)$ is a generator for some
member of $B$, for otherwise let $\xi$ be the largest counterexample,
$\xi=\max(i_{\Omega}(f_k)(\nu_k)\setminus B)$. Then there is a
function $h\in M$
and a set $e''\subset\xi$ of generators for members of $B$ such
that
$\xi=i_{\Omega}(h)(e'')$, but
$e''\cup (f_{k}(\nu_k)\setminus\sing{\xi})\prec f_k(\nu_k)\preceq
e_k$, contradicting the minimality of $e_k$.
Now $\vec \eta=\bigcup_{k\in\omega}f_{k}(\nu_k)$ is $B$-bounded: suppose to
the contrary that $f[\vec\eta]$ is unbounded in $\lambda\cap M_B$ where
$\lambda$ is the head of a gap in $B$. Then $f\circ g[ \vec\nu]$ is
also unbounded in $\lambda$, where
$g(\nu)=\sup_{k\in\omega}(f_k(\nu)\cap\lambda)$, and this contradicts
the assumption that $\vec\nu$ is $B$-bounded. Finally, the set of
$\lambda\in B$ which have a generator in $\vec\eta$ is also
$B$-bounded, and it follows that it is contained in a suitable
subset $\tilde B\subset B$.
\end{proof}
\subsection{Proof of the Main Lemma}
\label{sec:proof-main-lemma}
The purpose of this subsection is to prove
Lemma~\ref{thm:mainlemma} under the additional assumption that
$\kappa=\kappa_0$ is a member of the limit suitable{} set $B$. The following
Subsection~\ref{sec:finite-exceptions} will complete the proof of
Lemma~\ref{thm:mainlemma}, and hence of
Theorem~\ref{thm:main}, by removing this
assumption.
In the process it will indicate the technique used to prove the
stronger result Theorem~\ref{thm:modified-suitable}.
Before beginning the proof, we state two general facts about iterated
ultrapowers. Both are well known facts, but we need to verify that
they are valid in the context in which they will be used.
A full statement of the conditions under which these properties hold
is somewhat delicate, so we will restrict consideration to the
iterated ultrapowers needed here. If $k$ and $k'$ are iterated
ultrapowers, then we write $k[k']$ for the copy map, that is, the
direct limit of the maps $i^{k(U)}$ where $U=(F)_b$ for some extender $F$ used
in the iteration $k'$ and some generator $b$ for $F$. Every extender
$F$ used satisfies $k(F) = k[F]$ for any iteration $k$ such that $\crit(F)$ is not
moved. In the following the term \emph{extender} means an extender
with this property which does not overlap any measurable cardinals.
\begin{lemma}
\label{thm:commute}
Suppose $\kappa'\leq\kappa$, $E'$ is an extender on $\kappa'$, and
$E$ is an extender on $\kappa$.
Suppose further that if $\kappa'=\kappa$ then $E'\mathbin{\triangleleft} E$.
Then the following diagram commutes:
\begin{equation}
\label{eq:diagram_commute_1}
\begin{tikzpicture}[>=angle 90,baseline]
\matrix(m)[matrix of math nodes,
row sep =2em,
column sep=4.5em, text height = 1.5ex, text depth=0.25ex
]
{\ult(V,E)&\ult(V,E\times E')\\ V&\ult(V,E') \\};
\path[->] (m-2-1) edge node[auto, swap] {$i^{E'}$} (m-2-2)
edge node [auto] {$i^E$} (m-1-1)
(m-2-2) edge node [auto] {$i^{E}$} (m-1-2)
(m-1-1) edge node[above] {$i^{E}[i^{E'}]$}(m-1-2.west);
\path[->, dashed] (m-2-1) edge node[above] {$i^{E\times E'}$}(m-1-2);
\end{tikzpicture}
\end{equation}
\end{lemma}
\begin{proof}
The diagram~\eqref{eq:diagram_commute_1} is the direct limit of the same diagram for the
ultrafilters $\ufFromExt{E}{a}$ and $\ufFromExt{E'}{b}$, where $a$ and $b$ are generators of $E$ and $E'$ respectively.
\end{proof}
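Spelled out, and under the standing assumption that every extender $F$ used satisfies $k(F)=k[F]$ whenever $\crit(F)$ is not moved by $k$, the commutativity of diagram~\eqref{eq:diagram_commute_1} says that all routes from $V$ to $\ult(V,E\times E')$ agree:
\begin{equation*}
i^{E}[i^{E'}]\circ i^{E}\;=\;i^{E}\circ i^{E'}\;=\;i^{E\times E'},
\end{equation*}
where the factor $i^{E}$ in the middle composition is the ultrapower map computed inside $\ult(V,E')$.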
\begin{corollary}
\label{thm:reorderiteration}
Any iteration can be rearranged to an equivalent iteration with strictly increasing critical points.\qed
\end{corollary}
The second statement is a variant of Kunen's result in
\cite{Kunen1973A-model-for-the} that for any ordinal $\alpha$ there
are at most finitely many cardinals having a measure $U$ such that
$i^U(\alpha)>\alpha$. The statement of the following lemma is tailored to its use
in the proof of the main Lemma:
\begin{todoenv}
(7/13/15) ---
Note that a more general version of this next lemma is actually
used. I can't think of a good way to state it.
\end{todoenv}
\begin{lemma}\label{thm:M_bDoesntmove}
Suppose that $b$ is a finite subset of $I$, $B\subset I$ is suitable, and
$k$ is an iteration in $M_\Omega[B]$ of length less than
$\omega_2$ which uses only
extenders of the form $i_{\nu}(\ufFromExt{E}{\alpha})$ where
$\kappa_\nu\in B\setminus b$ and $\alpha<\omega_1$. Then
$k{\upharpoonright} (\Omega\cap M_b)$ is the identity.
\end{lemma}
The proof uses the following lemma. We write $\Crit(k)$ for the set of critical
points of the extenders in the iteration $k$. Note that the
hypothesis implies that $k'[k]=k'(k)$ for any iteration $k'$ which is
the identity on $\Crit(k)$.
\begin{lemma}\label{thm:M_bDoesntmove_helper}
Suppose $b\subseteq I$ is finite and $\alpha\in M_b$. Then
there is a sequence
$\vec\nu=\seq{\nu_\lambda\mid \lambda\in b\cup\sing{0}}$ in
$M_b$ satisfying
$
(\forall\lambda\in
b)\;\lambda\leq\nu_\lambda<\min(\sing{\Omega}\cup
b\setminus\lambda+1)
$
which has the following property: Let $k\in
M_\Omega$ be any iteration of length less than $\kappa_0$ such
that $\Crit(k)\cap[\lambda,\nu_\lambda]=\emptyset$ for all
$\lambda\in\sing{0}\cup b$. Then $k(\alpha)=\alpha$.
\end{lemma}
Note that the statement of this lemma is first order, and hence
it is also valid (using the image of the same sequence $\vec\nu$)
in any iterated ultrapower of $M_{\Omega}$.
\begin{proof}
The proof closely follows that of Kunen. We will work inside
$M_{\Omega}$, but the fact that $M_b\prec M_\Omega$ ensures that the ordinals
$\nu_\lambda$ are members of $M_b$.
We will suppose that the lemma is false for
$b$ and $\alpha$.
Set $\bar b=\sing{0}\cup b\cap\tau$, where $\tau\in b$ is
least such that there is no sequence
$\seq{\nu_{\lambda}\mid\lambda\in \sing{0}\cup b\cap\tau}$ which satisfies the
conclusion for iterations $k$ with $\Crit(k)\subset\tau$. Note
that $\tau\leq\max(b\cap\alpha)$, since $\nu_{\max(b\cap\alpha)}$
can be $\alpha$. Set $\bar\tau=\max(\bar b)$,
let
$\seq{\nu^{0}_{\lambda}\mid \lambda\in\bar b\cap\bar\tau}$ witness that
$\tau$ is minimal, and set $\nu^0_{\bar\tau}=\cof(\alpha)$ if
$\max(\bar b)\leq\cof(\alpha)\leq\max(b)$, and
$\nu^0_{\bar\tau}=\bar\tau$ otherwise. Following Kunen, the
failure of the lemma implies that there is an infinite sequence
$\seq{k_n\mid n\in\omega}$ of iterations such that
\begin{gather*}
(\forall n\in\omega)\;k_n(\alpha)>\alpha,\\
(\forall\lambda\in\bar
b)\;\min(\Crit(k_0))\setminus\lambda>\nu^{0}_{\lambda},\qquad\text{and}\\
(\forall\lambda\in\bar b)(\forall
n\in\omega)
\;\min\bigl(\Crit(k_{n+1})\setminus\lambda\bigr)>\sup\bigl(\Crit(k_{n})\cap\min(b\setminus\lambda+1)\bigr).
\end{gather*}
Now set $k'_{0,1}=k_0\colon M_{\Omega}=N_0\to N_1$ and
$k'_{n,n+1}=k'_{n}[k_{n}]\colon N_{n}\to N_{n+1}$. Then the direct
limit $N_{\omega}$ of these iterations is well founded; however the
following claim implies that $\seq{k'_{n,\omega}(\alpha)\mid
n\in\omega}$ is strictly descending. This contradiction will
complete the proof of Lemma~\ref{thm:M_bDoesntmove_helper}.
\begin{claim}\label{thm:M_bDoesntmove_claim}
$k'_{n,n+1}(\alpha)>\alpha$ for each $n\in\omega$.
\end{claim}
\begin{proof}[Proof of Claim~\ref{thm:M_bDoesntmove_claim}]
Set $\ell=k'_{n}$ and $\ell'=k_{n+1}$, and write $\ell=\ell_{1}\circ
\ell_{0}$ and $\ell'=\ell_{1}'\circ \ell_{0}'$, where $\ell_0$ and
$\ell_0'$ use the extenders below $\bar\tau$, while $\ell_1$
and $\ell'_1$ use the extenders above $\bar{\tau}$. Now
consider the following diagram, which is obtained using
Corollary~\ref{thm:reorderiteration}.
\begin{equation}
\label{eq:zzzz}
\begin{tikzpicture}[>=angle 90,baseline]
\matrix(m)[matrix of math nodes,
row sep =1.5em,
column sep=4.4em, text height = 1.5ex, text depth=0.25ex
]
{
& M^{\ell_0} & & M^{\ell_1,h_0} \\
M & &M^{h_0} & &M^{h_1 } \\
& M^{\ell'_0} & & M^{\ell'_1, h_0} \\
& & M^{\ell'} \\
};
\path[->]
(m-1-2) edge node[auto,sloped,pos=.2]{$\ell_0[\ell'_0]$} (m-2-3)
(m-1-4) edge node[auto,sloped,pos=.2] {$\ell[\ell'_1]$} (m-2-5)
(m-2-1) edge node[auto]{$\ell_0$} (m-1-2)
edge node[auto,swap]{$\ell'_0$}(m-3-2)
edge[dotted] node[auto]{$h_0$} (m-2-3)
(m-2-3) edge node[auto,sloped,pos=.8] {$ \ell'_0[\ell_1]$} (m-1-4)
edge node[auto,swap,sloped,pos=.8] {$ \ell_0[\ell'_1]$} (m-3-4)
edge[dotted] node[auto]{$h_1$} (m-2-5)
(m-3-2) edge node[auto,pos=.2,swap,sloped]{$\ell'_0[\ell_0]$} (m-2-3)
edge node[auto,swap]{$\ell'_1$} (m-4-3)
(m-3-4) edge node[auto,swap,sloped,pos=.2] {$\ell'[\ell_1]$} (m-2-5)
(m-4-3) edge node[auto,swap,sloped,pos=.2] {$\ell'[\ell_0]$} (m-3-4);
\end{tikzpicture}
\end{equation}
The choice of $\seq{\nu^{0}_{\lambda}\mid \lambda\in\bar b}$
implies that $h_0(\alpha)=\alpha$, so
\begin{equation*}
\ell_0[\ell'_1](\alpha)=
\ell_0[\ell'_1]\circ\ell'_0[\ell_0](\alpha)=\ell'[\ell_0]\circ\ell'_1(\alpha)
\geq\ell'_1(\alpha)=\ell'(\alpha)>\alpha.
\end{equation*}
We will embed $\ell_0[\ell'_1](\alpha)$ into
$\ell'_0[\ell_1](\ell'_1(\alpha))$, showing that the latter is also
greater than $\alpha$. To this end, let $g$ and $\gamma$ be a
function in $M^{h}$ and a generator of $\ell_0[\ell'_1]$ such
that $\ell_0[\ell_1'](g)(\gamma)<\ell_0[\ell_1'](\alpha)$. We will define
a function $\bar g\in M^{\ell_1,h_0}$, and the desired embedding will
be given by $\ell_0[\ell_1'](g)(\gamma)\mapsto
\ell_1[\ell'_1](\bar g)( \ell'_0[\ell_1](\gamma))$.
For each $\nu\in\domain(g)$, let the function $f_{\nu}$ and the
generator $\beta_\nu$ of $\ell'_0[\ell_1]$ be
such that $g(\nu)= \ell'_0[\ell_1](f_{\nu})(\beta_\nu)$. Note
that $\ell'_0[\ell_1]\in M^{h_0}$, so
the function $\bar h(\nu,\xi)=f_{\nu}(\xi)$ is also in $M^{h_0}$.
Also $\seq{\beta_\nu\mid \nu\in\domain(g)}\in M^{h_0}$, and
since $\sup(\Crit(\ell'_0[\ell_1]))<\min(\Crit(\ell_0[\ell'_1]))$, there is
some $\beta$ such that $\beta_\nu=\beta$ for almost all $\nu$;
that is, $\gamma\in \ell_0[\ell'_1](\set{\nu\mid \beta_\nu=\beta})$.
Now set $\bar g(\xi)= \ell'_0[\ell_1](\bar h)(\beta,\xi)$, so
$\bar g(\ell'_0[\ell_1](\nu))=g(\nu)$ for almost all $\nu$.
This completes the proof of Claim~\ref{thm:M_bDoesntmove_claim}
and hence of Lemma~\ref{thm:M_bDoesntmove_helper}.
\end{proof}
\let\qed\relax
\end{proof}
\begin{proof}[Proof of Lemma~\ref{thm:M_bDoesntmove}]
We will show that for any finite $b\subseteq I$ and $\alpha\in M_b$
the sequence $\vec\nu$ given by Lemma~\ref{thm:M_bDoesntmove_helper} is also valid for
iterations $k$ as in Lemma~\ref{thm:M_bDoesntmove}. Note that such
$k$, having all critical points in $M_{B\setminus b}$, satisfy the
constraint given by $\vec\nu$.
Supposing the contrary, let $b$ be a sequence for which the claim
fails, let
$\alpha$ be the least ordinal for which it fails, and let $k\in M_B$
witness this failure.
Set $\zeta=\otp(B)$, and let $G\subset i_{\Omega}(P(\vec E{\upharpoonright}\zeta)\mgkeq)$
be the generic set
constructed in Subsection~\ref{sec:generic_set}, so that $k\in
M_\Omega[G]$.
Then there is a condition $s\in G$ such that
$\set{\forceKappa^{s,\nu}\mid\nu\in\domain(s)}\subseteq
b\cup\sing{\Omega}$ which forces that $\alpha$ is the least
counterexample and that $\dot k$ is a name for a witness to this failure.
The choice of $\vec\nu$ ensures that $k$ is continuous at $\alpha$,
and therefore there is some $\alpha'<\alpha$ such that
$k(\alpha')\geq\alpha$.
By Lemma~\ref{thm:prikry}(\ref{item:Prikrythm-inD}) there is a
condition $s''\leq^*s$ in $G$ and a finite $e\subseteq\zeta$ such that any
$s'\leq s''$ with $e\subseteq\domain(s')$ determines
$\alpha'$. Fix $s'\leq s''$ in $M_b$ with $e\subseteq\domain(s')$
and $\nu_{\lambda}<\forceKappa^{s',\nu}$ whenever $\lambda\in
b\cup\sing0$ and $\lambda<\forceKappa^{s',\nu}$.
Now let $ j\colon M_{\Omega}\to M^{ j}$ be the iteration of $M_{\Omega}$
by the extenders
\begin{equation}
\label{eq:www}
\seq{F^{s',\nu}_{\xi}\mid\nu\in\domain(s')\setminus\sing\Omega\land
\forceKappa^{s',\nu}\notin\lim(B)
\land \xi\in\domain(\vec F^{s',\nu})}.
\end{equation}
Now construct, as in
Subsection~\ref{sec:generic_set} (except that the second component
$\vec b$ of the conditions
of $R$ is modified
appropriately), $G'\subseteq j\circ i_{\Omega}(P(\vec
E{\upharpoonright}\otp(B))\mgkeq)$ with $s'\in G'$. Instead of taking all
indiscernibles from $I$, this construction uses the iteration
$j\circ i_{\Omega}$, substituting the critical
point of $F^{s,\nu}_{\xi}$ for the corresponding member of $B$
whenever $F^{s,\nu}_{\xi}$ is in the sequence~\eqref{eq:www}.
Now factor $\dot k^{G'}$ as $\ell_1\circ\ell_0$ where $\ell_0$ uses the
extenders of $\dot k^{G'}$ which are
in $M_b$ and $\ell_1$ uses the remainder.
Note that
since $M_b$ is closed under countable sequences, $\ell_0\circ j\in
M_b$, and since $\ell_0\circ j$ obeys $\vec\nu$ it follows that $\ell_0\circ j(\alpha)=\alpha$.
Therefore $\ell_0\circ j(\alpha')<\alpha$, but $(\ell_1\circ
\ell_0)(j(\alpha'))\geq j(\alpha)=\alpha$, so
$\ell_1(\ell_0\circ j(\alpha'))>\ell_0\circ j (\alpha')$. Since the map $j$ is
elementary, this contradicts the minimality of $\alpha$.
\end{proof}
\medskip
This concludes the preliminary observations, and
we are now ready to continue with the proof of the Main
Lemma~\ref{thm:mainlemma}. As was stated earlier, this proof is an
induction on the lexicographic ordering of pairs
$(\iota, \phi)$ in order to prove that for all limit suitable\ sequences $B$ and
all $x$ in $\chang_\iota\cap M_B$,
\begin{equation}\label{eq:indeq}
\chang_{B}\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}} \phi(x) \quad\text{if and only
if}\quad \models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\phi(x).
\end{equation}
Here and for the remainder of the paper we write $P\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\sigma$ to
mean that $(\chang_{\iota})^{P}\models\sigma$.
The statement~\eqref{eq:indeq} uses the induction hypothesis: $\chang_{B}$ is not, by
its definition, a subset of $\chang$; however by the induction
hypothesis there is an embedding $\pi\colon
(\chang_{\iota})^{\chang_B}\to \chang_{\iota} $, which is the identity
on ordinals and is defined in general by
setting $\pi(\set{y\in (\chang_{\iota'})^{M_B}\mid
(\chang_{\iota'})^{M_B}\models \phi(y, a)})=
\set{y\in \chang_{\iota'}\mid
\chang_{\iota'}\models \phi(y, \pi(a))}$. For the rest of
this section we will identify $(\chang_{\iota})^{M_B}$ with the range
of $\pi$.
We will need an additional induction hypothesis in order to carry out the
proof of Lemma~\ref{thm:mainlemma}.
This hypothesis is rather
technical and uses notation which will be developed during the proof of the induction
step for Lemma~\ref{thm:mainlemma}, so we defer its statement, as
Lemma~\ref{thm:mainlemma2}, until it is needed to complete that proof.
By standard arguments, the only problematic part of the proof of the induction step
for Lemma~\ref{thm:mainlemma} is the assertion
that the existential quantifier is
preserved downwards: We assume that $\psi(x,y)$ is a formula which
satisfies~\eqref{eq:indeq}, and want to prove that
\begin{equation}\label{eq:existsImplies}
\forall x\in \chang_{B} \, \bigl(\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\exists
y\,\psi(x,y)\implies \chang_{B}\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\exists y\,\psi(x,y)\bigr).
\end{equation}
Since the basic problem in the proof is dealing with gaps in $B$, it
will be helpful to introduce some terminology to describe their
structure. A \emph{gap} of $B$ is a
maximal nonempty interval of $I\setminus B$.
For a limit suitable{} set $B$, each gap
is a half-open interval $[\sigma,\delta)$, where $\sigma$ is the
supremum of an $\omega$-sequence of members of $B$ and $\delta$ is
either $\min(B\setminus\sigma)$ or $\Omega$. We call $\delta$ the
\emph{head} of the gap.
Let $\delta'=\sup((I\setminus B)\cap\sigma)$, or $\delta'=0$ if $I\cap
\sigma\subseteq B$. Then $[\delta',\sigma)\cap I\subseteq B$; we
refer to this interval as the \emph{block} of $B$ corresponding to
the gap, and to $\delta'$ (which either is $0$ or is the head of
the gap immediately below the block) as the \emph{foot} of the block.
If $\sigma'=\sup((B\cap\lim(I))\cap \delta)$ then
$B\cap(\sigma',\sigma)=I\cap(\sigma',\sigma)$
is an $\omega$-sequence of successor members
of $B$; we will refer to this interval as \emph{the tail} of the
gap. If $\gamma$ is any member of this tail then we will refer
to the interval $[\gamma,\sigma)\cap B$ as \emph{the tail of $B$ above $\gamma$}.
Call a set $b\subseteq B$ a \emph{tail traversal} of $B$ if it contains
exactly one point from the tail of each gap in $B$.
Then $b$ determines a suitable subsequence $\tilde B\subseteq B$ as follows: let
$\delta$ be the head of a gap in $B$, let $\delta'$ be the foot of the
associated block, and let $\gamma$ be the unique member of
$b\cap[\delta',\delta)$. Then we regard $\gamma$ as dividing this
block of $B$ into three parts: the half-open interval
$[\delta',\gamma)\cap B$, which we will call the \emph{closed block of $B$
below $\gamma$},
the singleton $\sing{\gamma}$, and the tail $(\gamma,\delta)\cap B$,
which we will call the \emph{tail above $\gamma$}.
The suitable subsequence $\tilde B$ determined by $b$ is the union of
the closed blocks of $B$ below the members of $b$.
The maximal suitable subsequences of $B$ are those which are
determined by some tail traversal of $B$. Note that any suitable
subsequence of $B$ is contained in a maximal suitable subsequence, and hence in
dealing with $\chang_B$ we only need to consider maximal suitable subsequences.
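Schematically, for a single gap $[\sigma,\delta)$ with associated block $[\delta',\sigma)$ and a point $\gamma$ of the tail chosen by a traversal, the terminology just introduced decomposes the interval $[\delta',\delta)$ as
\begin{equation*}
[\delta',\delta)\cap I=
\underbrace{\bigl([\delta',\gamma)\cap B\bigr)}_{\text{closed block below $\gamma$}}
\cup\;\sing{\gamma}\;\cup
\underbrace{\bigl((\gamma,\sigma)\cap B\bigr)}_{\text{tail}}
\cup
\underbrace{\bigl([\sigma,\delta)\cap I\bigr)}_{\text{gap}}.
\end{equation*}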
\medskip{}
We are now ready to begin the proof of the induction step for Lemma~\ref{thm:mainlemma}.
Suppose that $\phi(x)$ is the formula $\exists y\,\psi(x,y)$ and
is true in $\chang_{\iota}$, and that $B$ is a limit suitable\ sequence with
$x\in \chang_B$.
Fix a tail traversal $b$ of $B$ such that
$\sing{x,\iota}\subseteq\chang_{\tilde B}$,
where $\tilde B$ is the suitable subsequence of $B$ determined by $b$.
Pick $y$ so that $\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\psi(x,y)$ and let $B'\supseteq B$ be a
limit suitable\ sequence with $y\in \chang_{B'}$. By the induction hypothesis
$\chang_{B'}\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\psi(x,y)$.
We will define an iteration map $k$ and an isomorphism
$\sigma$ as in Diagram~\eqref{eq:diagram_1}.
\begin{equation}
\label{eq:diagram_1}
\begin{tikzpicture}[>=angle 90,baseline]
\matrix(m)[matrix of math nodes, row sep =2.6em, column sep=2.8em, text height = 1.5ex, text depth=0.25ex]
{%
&&M_{B'}&&M_{B'}\etarestrict\vec\eta{}\\
M&M_{\tilde B}&M_B&M_k&M_k\etarestrict\vec\eta{}\\
};
\path [right hook->] (m-2-2) edge (m-2-3)
(m-2-3) edge (m-1-3);
\path [left hook->] (m-1-5) edge (m-1-3)
(m-2-5) edge (m-2-4);
\path[->] (m-1-5) edge[decorate,decoration={snake,segment
length=1.2mm, amplitude=.2mm}] node[auto]{$\sigma$} (m-2-5);
\path [->] (m-2-3) edge node[auto]{$k$} (m-2-4)
(m-2-1) edge node[auto]{$i_\Omega$} (m-2-2);
\end{tikzpicture}
\end{equation}
The map $k$ will be an iterated ultrapower, using extenders with critical points in $b$. It has length greater than $\omega_1$, but is definable in $M_B[c]$ for
a countable sequence $c$ of ordinals.
The iteration $k$ has two purposes:
\begin{enumerate}\item
It includes one iteration step for each member of $B'\setminus\tilde B$
(excluding a tail in $B'$ of each gap of $B$).
\item
For each gap in $B'$ which does not correspond to a gap of $B$, it
includes an
$\omega_1$-sequence of iteration steps inserted in order to emulate
this gap inside $M_k$.
\end{enumerate}
The submodel $M_{k}\etarestrict\vec\eta{}$ of $M_k$ will be obtained by using
only the iterations from clause~1, omitting those from clause~2. The
isomorphism $\sigma$ will map members of $B'\setminus B$ to the
corresponding critical points of ultrapowers in clause~1, and the
submodel $M_{B'}\etarestrict\vec\eta{}$ of $M_{B'}$ will be obtained by taking only the
generators belonging to members of $B'\setminus B$ which correspond to
generators of
extenders used in the iteration steps from clause~1.
The iteration $k$ will be such that Lemma~\ref{thm:M_bDoesntmove}
implies that the restrictions of $k$ and $\sigma$ to ordinals in
the suitable submodel $M_{\tilde B}$ are the identity.
The iteration $k$ can be defined in $M_B[c]$, for a countable
sequence $c$ of ordinals, and thus is definable in the extension $M_B[G]$.
The models $M_B$ and $M_k$ have the same ordinals and the
same associated Chang model $\chang_B=\chang_k$.
Thus Diagram~\eqref{eq:diagram_1} induces the following diagram:
\begin{equation}
\label{eq:diagram_2}
\begin{tikzpicture}[>=angle 90, baseline]
\matrix(m)[matrix of math nodes, row sep =2.6em, column
sep=2.8em, text height = 1.5ex, text depth=0.25ex]
{%
&\chang_{B'}&&\chang_{B'}\etarestrict\vec\eta\\
\chang_{\tilde B}&\chang_B&\chang_k=\chang_B&\chang_{k}\etarestrict\vec\eta\\
};{
\path [right hook->] (m-2-1) edge (m-2-2)
(m-2-2) edge (m-1-2);
\path [left hook->] (m-1-4) edge (m-1-2)
(m-2-4) edge (m-2-3);
\path[->] (m-1-4) edge
[decorate,
decoration={snake,segment length=1.2mm,
amplitude=.2mm}
] node[auto]{$\sigma$} (m-2-4);
\path [->] (m-2-2) edge node[auto]{$k$} (m-2-3) ;
}\end{tikzpicture}
\end{equation}
Once this machinery has been put into place, we will be able to complete the
proof of the induction step for Lemma~\ref{thm:mainlemma}: we are
assuming $\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\psi(x,y)$, with $x$ and $y$ in $\chang_{B'}$, so by
the induction hypothesis $\chang_{B'}\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\exists y\,\psi(x,y)$.
An easy proof will give
Lemma~\ref{thm:M-B-eta-elem}, which implies that
$\chang_{B'}\etarestrict\vec\eta{}\prec \chang_{B'}$, so $\chang_{B'}\etarestrict\vec\eta{}
\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\exists y\,\psi(x,y)$. Fix $y\in \chang_{B'}\etarestrict\vec\eta{}$ so that
$\chang_{B'}\etarestrict\vec\eta{}\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\psi(x,y)$. Since $\sigma$ is an
isomorphism, $\chang_{k}\etarestrict\vec\eta{}\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\psi(x,\sigma(y))$.
Now we want to conclude that $\chang_{B}\models \psi(x,\sigma(y))$,
but unlike the case in the previous paragraph, we don't know of a direct proof that
$\chang_{B}\etarestrict\vec\eta{}\prec
\chang_{B}$. Instead we will state a slightly generalized form of the
needed fact as Lemma~\ref{thm:mainlemma2}, and with this as an
additional induction hypothesis conclude the
proof of the induction step for Lemma~\ref{thm:mainlemma}. We
then use the induction hypothesis (including the just proved fact that
Lemma~\ref{thm:mainlemma} holds for the pair $(\iota,\phi)$) to prove that
Lemma~\ref{thm:mainlemma2} holds for $(\iota,\phi)$; this will
complete the proof of Lemmas~\ref{thm:mainlemma}
and~\ref{thm:mainlemma2}, and thus of Theorem~\ref{thm:main}, except
for the assumption that $\kappa_0\in B$.
\medskip{}
We now give the details of the construction of Diagram~\eqref{eq:diagram_1}.
We already have the four models on the left of the diagram: $B$ is the
given limit suitable\ sequence, $\tilde B\subset B$ is a suitable subsequence with
$x\in M_{\tilde B}$ which is characterized by a tail traversal $b$ of $B$, and
$B'\supseteq B$ is a limit suitable\ sequence with a witness $y$ to $\exists
y\;\psi(x,y)$.
The following definition is more general than needed here. The added
generality is used in the proof of Lemma~\ref{thm:mainlemma2}.
\begin{definition}\label{def:vgsequence}
A \emph{virtual gap construction{} sequence} for $B$ is a triple $(b, \vec\eta,g)$
satisfying the following four conditions:
\begin{enumerate}
\item The set $b$ is a tail traversal of $B$.
\item $\vec\eta$ is a function with $\domain(\vec\eta)=
\set{(\lambda,\xi)\mid \lambda\in b\land
\xi<\nu_\lambda}$, where $\nu_\lambda$ is a countable ordinal
for each $\lambda\in b$.
\item $g\subset\domain(\vec\eta),$ and if $(\lambda,\xi)\in g$ then
$\xi$ is a limit ordinal.
\item \label{item:vg_forcing}
Define an order $\precdot$ on $B\cup\domain(\vec\eta)$ using the
ordinal order on $B$, the lexicographic order on
$\domain(\vec\eta)$, and setting
$\lambda'\precdot(\lambda,\xi)\precdot\lambda$ when $\lambda'<\lambda\in B$
and $(\lambda,\xi)\in\domain( \vec\eta)$.
Then $\eta_{\lambda,\xi}>\otp(\set{z\in B\cup\domain(\vec\eta)\mid
z\precdot(\lambda,\xi)})$.
\end{enumerate}
We will say that $(b,\vec\eta,g)$ is a virtual gap construction\ sequence for $B'$ over $B$
if in addition the following four conditions hold:
\begin{myinparaenum}
\item
$B'$ and $B$ are limit suitable\ sequences with $B'\supset B$.
\item\label{item:vgover-tauiso} $B'$ has the
same order type as $(B\cup\domain(\vec\eta),\precdot)$. In the
following, we write
$\tau\colon B'\to (B\cup\domain(\vec\eta))$ for the order isomorphism.
\item\label{item:vgover-identity}
$\tau$ is the identity on the suitable subsequence $\tilde B$ of
$B$ determined by $b$.
\item \label{item:vgover-gisright}
if $\gamma\in B'\setminus B$ then $\tau(\gamma)\in g$ if and
only if $\gamma$ is the head of a gap in $B'$.
\end{myinparaenum}
\end{definition}
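To fix notation, here is a minimal schematic instance of the order $\precdot$ and of condition~\ref{item:vg_forcing}; the parameters are hypothetical, and we ignore the gap structure of $B$ needed for condition~1. Suppose that $b=\sing{\lambda_1}$, that $\lambda_0$ is the unique member of $B$ below $\lambda_1$, and that $\nu_{\lambda_1}=2$ and $g=\emptyset$. Then
\begin{equation*}
\lambda_0\precdot(\lambda_1,0)\precdot(\lambda_1,1)\precdot\lambda_1,
\end{equation*}
and condition~\ref{item:vg_forcing} requires that $\eta_{\lambda_1,0}>1$ and $\eta_{\lambda_1,1}>2$, these being the order types of the corresponding $\precdot$-initial segments.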
Note that if $(b,\vec\eta,g)$ is a virtual gap construction\ sequence for $B'$ over $B$
then $b'=\tau^{-1}[b]$ is a traversal of the
tails in $B'$ of the gaps of $B$, and that if $\lambda\in b'$ then
$\tau$ maps the tail above $\lambda$ in $B'$ to the tail above
$\tau(\lambda)$ in $B$.
For the construction of Diagram~\eqref{eq:diagram_1}, we use the following virtual gap construction\ sequence
$(b,\vec\eta,g)$ for $B'$ over $B$: The function $\vec\eta$ is
a constant function, with the constant value $\eta$ to be specified
later. Fix a traversal $b'$ of the tails in $B'$ belonging to gaps of
$B$. Then
\begin{myinparaenum}
\item $\domain(\vec\eta)=\set{(\lambda,\xi)\mid \lambda\in b\land
\xi\leq\otp(B'\cap[\lambda,\lambda'))}$, where $\lambda'$ is
the member of $b'$ in the tail in $B'$ of the same gap as
$\lambda$, and
\item $g=\set{\tau(\gamma)\mid \gamma\in B'\setminus B\land \gamma
\text{ is the head of a gap in }B'}$.
\end{myinparaenum}
\begin{definition}
\label{def:Bprime_vg}
If $(b,\vec\eta,g)$ is a virtual gap construction\ sequence for $B'$ over $B$, then
$M_{B'}\etarestrict\vec\eta{}=\set{j_{\Omega}(f)(a)\mid f\in M\land a\in
[\mathcal{G}]^{<\omega}}$ where $\mathcal{G}$ is the following set
of generators: Let $\kappa_\nu$ be a member of $B'$ and let $\beta=i_{\nu}(\bar\beta)$
be a generator belonging to $\kappa_{\nu}$. Then
\begin{equation*}
\beta\in\mathcal{G} \iff \bigl(
\tau(\kappa_\nu)\in B \lor
(\tau(\kappa_\nu)=(\lambda,\xi)\in\domain(\vec\eta)\land \bar\beta\in\supp(E_{\eta_{\lambda,\xi}}))\bigr).
\end{equation*}
\end{definition}
Note that $M_{B'}\etarestrict\vec\eta{}\prec M_{B'}$, that $M_{\tilde B}\subseteq
M_{B'}\etarestrict\vec\eta{}$, and that, if $\eta$ is chosen sufficiently large, $y\in
M_{B'}\etarestrict\vec\eta{}$. This is the first of two criteria for the choice of
$\eta$; the other is that $\eta>\omega^{\omega}\cdot \otp(B')$.
\begin{lemma}
\label{thm:M-B-eta-elem}
If $(b,\vec\eta,g)$ is a virtual gap construction\ sequence for $B'$ over $B$
then $\chang_{B'}\etarestrict\vec\eta{}\prec\chang_{B'}$.
\end{lemma}
\begin{proof}
The construction of Subsection~\ref{sec:generic_set} can
be carried out to obtain an $M_{B'}\etarestrict\vec\eta{}$-generic subset $G\subseteq
i_{\Omega}(P(\vec E{\upharpoonright}\otp(B'))\mgkeq)$. The only change needed is
that the range of the coordinate $b_{\gamma}$ in a condition of $R$ is restricted
to $\supp(E_{\eta_{\lambda,\xi}})$ whenever
$(\lambda,\xi)\in\domain(\vec\eta)$ and $\kappa_\gamma$ is the
$\xi$th member of $B'$ above $\lambda$.
Now let $\phi$ be a formula which is true in $\chang_{B'}\etarestrict\vec\eta{}$.
Then there is a condition $([r],b)$ in the forcing $R$ for $M_{B'}\etarestrict\vec\eta{}$
which establishes the parameters of $\phi$ and forces $\phi$ to be
true. This condition is also a condition in the forcing $R$ for
$M_{B'}$; it
establishes the parameters in the same way, and it forces that $\phi$
holds in $\chang_{B'}$.
\end{proof}
Note that condition~\ref{item:vg_forcing} of
Definition~\ref{def:vgsequence}
is used here to ensure that enough of the image of $E$ is
present at each of the $\kappa_\nu\in B'\setminus B$ to construct
the generic set as in section~\ref{sec:generic_set}.
\begin{figure}[t]
\[
\begin{tikzpicture} [>=angle 90, baseline,xscale=.5]
\tikzstyle{viz}=[]
\tikzstyle{math}=[]
\tikzstyle{mapping}=[dotted,thick,black!75!white]
\tikzstyle{left-label}=[black,math,left=8]
\tikzstyle{right-label}=[black,math,right=8]
\tikzstyle{mydot}=[fill=black,circle,inner sep=0,minimum size=3pt]
\tikzstyle{inmodel}=[thick,black!50!white]
\tikzstyle{modeled}=[black!50!white]
\def4{4}
\newcommand{\mytail}[1]{#1 node[mydot]{} +(0,.3) node[mydot]{} ++(0,.6) node[mydot]{}}
\node[viz,mydot](a1) at (0,0) {};
\node[viz,mydot](a9) at (0,6) {};
\node[viz,mydot](b1) at (4,0) {};
\node[viz,mydot](b9) at (4,6) {};
\node[viz,mydot](c1) at (2*4,0) {};
\node[viz,mydot](c9) at (2*4,6) {};
\node[viz,mydot](d1) at (3*4,0) {};
\node[viz,mydot](d9) at (3*4,6) {};
\node[math](Bp) [below of = a1] {$M_{B'}\etarestrict\vec\eta{}$};
\node[math](Bph) [below of = b1] {$M_{k}\etarestrict\vec\eta{}$};
\node[math](Bh) [below of = c1] {$M_k$};
\node[math](Bk) [below of = d1] {$M_B$};
\draw[->] (Bp) edge [decorate,decoration={snake,segment length=1.2mm,amplitude=.2mm}]
node[auto]{$\sigma$} (Bph);
\draw[->] (Bk) edge node[auto,swap]{$k$}(Bh);
\draw[right hook->] (Bph) edge (Bh);
\draw[mapping] (a1) node[mydot]{} node[left-label]{$\delta'$} -- (d1) node[mydot]{}
(a9) node[left-label]{$\delta$} -- (d9);
\draw (a1) +(0,1) node(a2){} +(0,1.5) node(a3){} +(0,4) node(a4){} +(0,4.5) node(a5){}
+(0,5) node(a6){};
\draw[mapping] (a2) node[left-label]{$\max(\tilde B\cap\delta)$} node[mydot]{} --
++ (3*4,0) node[mydot](d2){} ;
\draw[mapping] (a3) node[left-label]{$\lambda$} node[mydot]{} -- + (2*4, 0)
+(3*4,0) node[mydot](d3){};
\draw[mapping] (a4) node[left-label]{$\lambda' \in b'$}
node[mydot]{} -- ++(4,-1) node[mydot](b4){}
-- ++(4,0) node[mydot](c4){}
-- (d3) node[right-label]{$\lambda\in b$} ;
\draw (a3) +(0, .6) node(a3c){}
+(0,1.2) node(a3a){}
++(0,1.7) node[mydot](a3b){};
\newcommand\brokenline{ +(-.3,-.02) -- +(-.1,+.05) -- +(.1, -.02) -- +(0.3, +.02) +(0,0)}
\draw[inmodel] (a3.center) -- (a3c.center)
\brokenline ++(0,0.08) \brokenline
-- (a3a.center) +(0,.25) node[left-label]{typical gap in $B'\setminus B$} (a3b) -- (a4.center);
\draw[modeled] (b1 |- a3)node(b3)[mydot]{} -- ++(0,0.4)node(b3a){} ++(0,0.3) node[mydot](b3b){} -- (b4)
(c1 |- a3)node[mydot]{} -- (c1 |- b4);
\draw[mapping] (a3b) -- (b3b) -- (b3b -| c1) node[mydot]{}
(a3a.center) -- (b3a.center) -- (b3a -| c1);
\draw[inmodel] (a1) edge (a2.center) (b1)-- (b1 |- a2)node[mydot]{} (c1) -- (c1 |- a2)node[mydot]{} (d1) -- (d2);
\draw (a4) \mytail{++(0,0.3)} +(0,-.3) node[left-label]{$\text{tail of }B'$};
\draw[snake=brace] (a4) +(.3, 1.05) -- +(.3, 0.15);
\draw (b4) \mytail{++(0,0.3)} (c4) \mytail{++(0,0.3)};
\draw (c4) ++(4,0) \mytail{++(0,0.3)} +(0,-.3)
node[right-label]{$\text{tail of }B$};
\draw[snake=brace] (c4) ++(4,0) +(-.3, 0.15) -- +(-.3, 1.05);
\draw[mapping] (a4) ++(.4,0.62) -- ++(4 - .3, -1.0) --
++(4, 0) -- +(4 - 0.45, 0);
\end{tikzpicture}
\]
\caption{The maps $\sigma$ and $k$ inside the block between $\delta'$
and $\delta$ which is associated
with the gap in $B$ headed by $\delta$. The dotted lines represent the
maps $\sigma$ and $k$; the heavier vertical lines represent intervals of $I$
contained in the indicated models and the lighter ones represent the indiscernibles added by the iteration $k$.}
\label{fig:k_def}
\end{figure}
We can now complete the construction of the elements of
Diagram~\eqref{eq:diagram_1} by defining $k$ and $\sigma$. This
construction is illustrated in Figure~\ref{fig:k_def}.
\begin{definition}
\label{def:k}
We define by recursion on $z\in (B\cup\domain(\vec\eta),\precdot)$
a sequence of embeddings $k_{z}\colon M_{B}\to M^*_{z}$.
We will describe the construction on one of the blocks of $B$.
Thus, suppose that $\delta\in B$ is the head of a gap and
$\delta'\in B\cup\sing{0}$ is the foot of the block of $B$ below it.
We assume that $k_z\colon M_B\to M^*_z$ has been defined for all $z\precdot\delta'$.
Let $\lambda$ be the unique member of $b\cap[\delta',\delta)$.
\begin{compactenum}[(i)]
\item
$M^*_0=M_B$, and if $\delta'>0$ then
$M^*_{\delta'}=\dirlim\seq{(M_z^*;k_z):z\precdot \delta'}$.
\item If $\nu\in \tilde B\cap
[\delta',\delta)=B\cap[\delta',\lambda)$ then
$M^*_\nu=M^*_{\delta'}$.
\item If $\nu\in B\cap(\lambda,\delta)$ then
$M^*_\nu$ is the
direct limit of the embeddings $k_z$ for $z\precdot
\lambda$.
\item If $(\lambda,\xi)\in \domain(\vec\eta)\setminus g$ and
$\xi$ is a limit ordinal then
$M^*_{(\lambda,\xi)}$ is the direct limit of the embeddings $k_z$ for
$z\precdot(\lambda,\xi)$.
\item If $z=(\lambda,\xi+1)\in \domain(\vec\eta)$, or if
$z=\lambda$ and $(\lambda,\xi)$ is its predecessor in
$\precdot$, then $M^*_z=\ult(M^*_{(\lambda,\xi)},
E^*_{\eta_{\lambda,\xi}})$ where, letting $\gamma$ be such that
$\delta'=\kappa_{\gamma}$, we write $E^*_{\alpha}$ for
$k_{\lambda,\xi}\circ i_{\gamma}(E_\alpha)$.
\item If $z=(\lambda,\xi)\in g$, then let $\bar k^*_z\colon M_B\to
M^{**}_z=\dirlim_{z'\precdot z}M^*_{z'}$ be the direct limit embedding.
Then $M^*_z$
is an iterated ultrapower of $M^{**}_z$
of length $\omega_1$, using extenders $\bar k^{*}_z(i_{\gamma}(\vec
F))$ where $\lambda=\kappa_{\gamma}$ and $\vec F\in M$ is an
arbitrary but fixed cofinal subsequence of the
sequence of extenders below $E$ on $\kappa$ in $M$.
\end{compactenum}
If $\gamma\in B'$ and $\tau(\gamma)=(\lambda,\xi)\in \domain(\vec\eta)$, then
$\sigma(\gamma)$ is equal to the critical point of the ultrapower of
$M^*_{\tau(\gamma)}$.
\end{definition}
\begin{definition}
\label{def:sigma}
The restriction of $\sigma$ to $B'$ is determined by the map
$\tau$ specified in Definition~\ref{def:vgsequence} of a virtual gap construction\
sequence for $B'$ over $B$: if $\tau(\gamma)\in B$ then
$\sigma(\gamma)=k(\tau(\gamma))$, and if
$\tau(\gamma)=(\lambda,\xi)$ then $\sigma(\gamma)$ is the
$\xi$th critical point of the iteration steps of $k$ using extenders on $\lambda$.
The restriction of $\sigma$ to $B'$ determines its restriction
to generators of $M_{B'}\etarestrict\vec\eta{}$, and this restriction determines
the remainder of $\sigma$.
\end{definition}
The particular choice of the sequence $\vec F$ of extenders will not
matter; a suitable choice for $F_\nu$ would be the least
$\kappa^{+(\nu+1)}$-strong extender on $\kappa$.
It is important that $\vec F\in M$, for that implies that
$M_k$ is in $M_B[B,\vec\eta]$ and hence is in the generic
extension $M_B[G]$ of $M_B$ described in
section~\ref{sec:generic_set}; we use this fact to identify the ordinals
of $M_k$ with those of $M_B$. It is also important that
$\vec F$
is cofinal among the extenders below $E$ in $M$, and hence
$i_{\gamma}(\vec F)$ is cofinal among the extenders on $\lambda$
in $M_B$: this fact ensures (using
Lemma~\ref{thm:M_bDoesntmove})
that the restriction of $k$ to the ordinals of $M_B$ is
independent of the choice of $\vec F$.
\begin{todoenv}
{6/18/15 --- This actually goes beyond
Lemma~\ref{thm:M_bDoesntmove}.}
\end{todoenv}
This completes the definition of the elements of
Diagram~\eqref{eq:diagram_1}, and the extension to the Chang model in
Diagram~\eqref{eq:diagram_2} is straightforward.
We have already observed that the Chang model $\chang_k$ built on
$M_k$ is the same as $\chang_B$, giving the identity on the
bottom. Lemma~\ref{thm:M-B-eta-elem} asserts that $\chang_{B'}\etarestrict\vec\eta{}$ is
an elementary substructure of $\chang_{B'}$, and $\sigma\colon
\chang_{B'}\etarestrict\vec\eta{}\to \chang_{k}\etarestrict\vec\eta{}$ is an isomorphism. It follows that
$\chang_{k}\etarestrict\vec\eta{}\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}} \psi(x,\sigma(y))$, and
we will be finished if we can conclude from this that
$\chang_{B}\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}} \psi(x,\sigma(y))$. This is implied by the case
$(\iota,\psi)$ of
Lemma~\ref{thm:mainlemma2}, which is the promised addition to the induction
hypothesis to be used in the proof of Lemma~\ref{thm:mainlemma}. This
concludes the proof of the induction step for
Lemma~\ref{thm:mainlemma}.
\begin{lemma}\label{thm:mainlemma2}
Suppose that $B\subseteq B'$ are limit suitable\ sequences and $(b,\vec\eta,g)$ is a virtual gap
construction sequence for $B'$ over $B$ such that
$\eta_{\lambda,\xi}\geq\omega^{n}\cdot\otp(B\cup\domain(\vec\eta),
\precdot)$ for all $(\lambda,\xi)\in\domain(\vec\eta)$ and $n<\omega$.
Let $k\colon M_B\to M_k$ be
the virtual gap construction{} iteration, and let $\chang_k\etarestrict\vec\eta{}\subseteq
\chang_k$ be as given in Diagram~\eqref{eq:diagram_2}.
Then $\chang_{k}\etarestrict\vec\eta{}\prec \chang$.
\end{lemma}
\begin{proof}
As was stated earlier, this proof proceeds by simultaneous induction with
that of Lemma~\ref{thm:mainlemma}. We have completed
the proof that Lemma~\ref{thm:mainlemma} holds
for $(\iota,\phi)$, using as an induction hypothesis that Lemmas~\ref{thm:mainlemma} and~\ref{thm:mainlemma2} hold for all smaller pairs.
We now use this same induction hypothesis, together
with the fact that Lemma~\ref{thm:mainlemma} holds for $(\iota,\phi)$, to prove that Lemma~\ref{thm:mainlemma2} holds
for $(\iota,\phi)$: that is, if
$B$, $k$ and $\eta$ are as in Lemma~\ref{thm:mainlemma2}
and $x$ is an arbitrary member of
$\chang_{k}\etarestrict\vec\eta{}$ such that $\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\exists y\,\psi(x,y)$, then $\chang_k\etarestrict\vec\eta{}\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\exists y\,\psi(x,y)$.
By the newly proved case of Lemma~\ref{thm:mainlemma},
$\chang_{B}\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\exists y\,\psi(x,y)$.
Fix $y_0\in\chang_{B}$ so that $\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\psi(x,y_0)$.
We now define an extension $\vec{\eta'}$ of the virtual gap construction{} sequence
$\vec \eta$ such that $y_0\in \chang_{B}\etarestrict\vec{\eta'}$.
The sequence $\vec{\eta'}$ will have the same set $b$
as $\vec\eta$, and a set $g^{\vec{\eta'}}$ obtained by transporting $g$; the domain of $\vec \eta'$ will be enlarged
by adding an $\omega$-sequence of new elements below each $(\lambda,\xi)\in g$.
Thus, for each $\lambda\in b$ define a map $t_{\lambda}$ with
$\domain(t_{\lambda})=\len(\vec\eta_\lambda)$ by
\begin{equation*}
t_{\lambda}(\xi) =
\begin{cases}
0&\text{if $\xi=0$,}\\
t_{\lambda}(\xi')+1&\text{if $\xi=\xi'+1$,}\\
\sup_{\xi'<\xi}t_{\lambda}(\xi')&\text{if $\xi$ is a limit and
$(\lambda,\xi)\notin g$}\\
\sup_{\xi'<\xi}t_{\lambda}(\xi')+\omega&\text{if $(\lambda,\xi)\in g$}.
\end{cases}
\end{equation*}
Now we define $\vec{\eta'}$, using an ordinal $\eta'\in\omega_1$ to be
determined shortly:
\begin{gather*}
\domain(\vec{\eta'})=\set{(\lambda,\xi)\mid\lambda\in b\land\xi<\sup\range(t_{\lambda})}\\
b^{\vec{\eta'}}=b^{\vec\eta}\text{, and }
g^{\vec{\eta'}}=\set{(\lambda,t_{\lambda}(\xi))\mid (\lambda,\xi)\in
g^{\vec\eta}}, \text{ and}\\
\eta'_{\lambda,\xi} =
\begin{cases}
\eta_{\lambda,\xi'}&\text{if $\xi=t_{\lambda}(\xi')$,}\\
\eta'&\text{if $\xi\notin\range(t_{\lambda})$}.
\end{cases}
\end{gather*}
The first condition on $\eta'$ is that $\eta'\geq
\omega^{n}\cdot\otp\bigl(B\cup\domain(\vec{\eta'}),\precdot\bigr)$ for each
$n\in\omega$, and the second condition is that $y_0\in \chang_{B}\etarestrict\vec{\eta'}$.
It is possible to satisfy the second condition since
$\chang_B=\bigcup_{\eta'<\omega_1}\chang_B\etarestrict\vec{\eta'}$. Notice that the
first condition implies that $\vec{\eta'}$ satisfies the hypothesis of
Lemma~\ref{thm:mainlemma2}, since if $\xi=t_{\lambda}(\xi')$ then
$\eta'_{\lambda,\xi}=\eta_{\lambda,\xi'}\geq
\omega^{n+1}\cdot\otp(B\cup\domain(\vec\eta),\precdot)=\omega^{n}\cdot
\omega\cdot\otp(B\cup\domain(\vec\eta),\precdot)
\geq\omega^{n}\cdot\otp(B\cup\domain(\vec{\eta'}),\precdot)$.
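As a concrete illustration of $t_\lambda$ and $\vec{\eta'}$ (with hypothetical parameters), suppose that $\len(\vec\eta_\lambda)=\omega\cdot2$ and that $g$ contains the single point $(\lambda,\omega)$. Then
\begin{equation*}
t_\lambda(n)=n,\qquad
t_\lambda(\omega)=\sup_{n<\omega}t_\lambda(n)+\omega=\omega\cdot2,\qquad
t_\lambda(\omega+n)=\omega\cdot2+n,
\end{equation*}
so the new points of $\domain(\vec{\eta'})$ are the $\omega$-sequence of pairs $(\lambda,\xi)$ for $\omega\leq\xi<\omega\cdot2$, each receiving the value $\eta'$, while $g^{\vec{\eta'}}=\sing{(\lambda,\omega\cdot2)}$ and $\eta'_{\lambda,\omega\cdot2+n}=\eta_{\lambda,\omega+n}$.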
\newcommand\taumap{\tau}
\begin{equation}\label{eq:mainlemma2-diag}
\begin{tikzpicture}[>=angle 90,baseline]
\tikzstyle{mysnake}=[decorate,decoration={snake,segment length=1.2mm,amplitude=.2mm}];
\matrix(m)[matrix of math nodes, row sep =2.6em, column sep=2.8em, text height = 1.5ex, text depth=0.25ex]
{&M_{B''}&&M_{B''}{\upharpoonright}\vec{\eta'}\\
M_{B'}&&M_{B'}{\upharpoonright}\vec\eta\\
M_B&M_k&M_k{\upharpoonright}\vec\eta \\
&M_{k'}&&M_{k'}{\upharpoonright}\vec{\eta'}\\
};
\path [right hook->](m-2-1) edge (m-1-2)
(m-2-3.north east) edge (m-1-4)
(m-3-1) edge (m-2-1);
\path[->] (m-3-1) edge node[auto] {$k$} (m-3-2)
edge node[auto] {$k'$} (m-4-2)
(m-2-3) edge [mysnake] node[auto] {$\sigma$} (m-3-3)
(m-3-2) edge node[auto] {$\taumap$} (m-4-2)
(m-1-4) edge [mysnake] node[auto] {$\sigma'$} (m-4-4);
\path[right hook->] (m-3-3.south east) edge node[sloped,above] {$\taumap$} (m-4-4);
\path[left hook->] (m-3-3) edge (m-3-2)
(m-4-4) edge (m-4-2)
(m-1-4) edge (m-1-2)
(m-2-3) edge (m-2-1);
\end{tikzpicture}
\end{equation}
For the remainder of the proof we refer to
Diagram~\eqref{eq:mainlemma2-diag}. The inner rectangle is the same
as Diagram~\eqref{eq:diagram_1}.
The map $\taumap$ is determined by using the map
$(\lambda,\xi)\mapsto(\lambda,t_{\lambda}(\xi))$ to map the generators
of indiscernibles from $\vec\eta$ into those of $\vec{\eta'}$.
As with Diagrams~\eqref{eq:diagram_1} and~\eqref{eq:diagram_2},
Diagram~\eqref{eq:mainlemma2-diag} induces a similar diagram for the
corresponding Chang models.
\begin{todoenv} {6/18/15 --- MAYBE --- Again,
Lemma~\ref{thm:M_bDoesntmove} doesn't really cover this, as
stated.}
\end{todoenv}
We claim that
$\taumap{\upharpoonright}(\chang_k\etarestrict\vec\eta{})$ is the identity. First,
Lemma~\ref{thm:M_bDoesntmove} implies that the restriction of $\taumap$ to the ordinals of
$M_k\etarestrict\vec\eta{}$ is the identity.
Now every member of $\chang_k\etarestrict\vec\eta{}$ is represented by a term
$w=\set{z\in\chang_{\iota'}\mid \models_{\chang_{\iota'}}\phi(z,a)}$,
where $\iota'\in M_k\etarestrict\vec\eta{}$ and $a$ is a sequence of ordinals from
$M_{k}\etarestrict\vec\eta{}$. Thus $\taumap(w)$ is represented by the same term in
$\chang_{k'}\etarestrict\vec\eta{}$. But $\chang_{k}=\chang_{k'}=\chang_{B}$, so this
term represents the same set $w$ in $\chang_{k'}$.
Now define $B''$ to be $B'$ together with the next $\omega$-many
members of $I$ from each of the gaps of $B'$ which are not gaps of
$B$.
The right-hand trapezoid commutes, and in particular
$\sigma^{-1}(x)=(\sigma')^{-1}\circ \tau(x)=(\sigma')^{-1}(x)$. Now
$\chang_{k'}\etarestrict\vec{\eta'}\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}} \psi(x,y_0)$, and since $\sigma'$
is an isomorphism it follows that
$\chang_{B''}\etarestrict\vec{\eta'}\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\psi(\sigma^{-1}(x),(\sigma')^{-1}(y_0))$.
It follows by Lemma~\ref{thm:M-B-eta-elem} that $\chang_{B''}$ satisfies the
same formula; by the induction hypothesis (Lemma~\ref{thm:mainlemma} for
$(\iota,\phi)$) it follows that $\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\exists
y\;\psi(\sigma^{-1}(x),y)$, and by
another application of the same
induction hypothesis $\chang_B$ satisfies the same formula.
By Lemma~\ref{thm:M-B-eta-elem}, $\chang_{B'}\etarestrict\vec\eta{}$
does as well, so let $y_1$ be such that $\chang_{B'}\etarestrict\vec\eta{}\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\psi(\sigma^{-1}(x),y_1)$. Then
$\chang_k\etarestrict\vec\eta{}\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}} \psi(x,\sigma(y_1))$, so $\chang_k\etarestrict\vec\eta{}\models_{\!\!\!\!\!\lower.7ex\hbox{$\scriptstyle\chang_\iota$}}\exists y\,\psi(x,y)$, as required.
\end{proof}
\subsection{ Finite exceptions and $\kappa_0\notin B$}\label{sec:finite-exceptions}
In the last subsection we assumed that $\kappa_0=\kappa$ is a member
of $B$; here we indicate how this extra assumption can be eliminated. The
same argument is used in the proof of
Theorem~\ref{thm:modified-suitable} to support the provision allowing finitely many exceptions.
The reason that the previous argument fails when $\kappa_0\notin B$ is that $\kappa_0$ may be a member of the extended sequence $B'$ of Diagram~\eqref{eq:diagram_1}. In this case the definition of
the map $k$ in Diagram~\eqref{eq:diagram_1} fails because there is no tail of $B$ in this first gap.
To conclude the proof of Theorem~\ref{thm:main}(\ref{item:main-upper}),
suppose that $B=\set{\lambda_\nu\mid\nu<\delta}$ is a limit suitable\ set
with $\lambda_0>\kappa_0$, that
$x\in\chang_{B}$, and that $\chang\models\phi(x)$. We want to show
that $\chang_{B}\models\phi(x)$.
Let $B'=B\cup\set{\kappa_{n}\mid n<\omega}$, a limit suitable\ sequence of length $\omega+\delta$.
Since $\kappa_0\in B'$, the version of
Theorem~\ref{thm:main}(\ref{item:main-upper})
already proved implies
that $\chang_{B'}\models\phi(x)$.
Let $G$ be the $M_{B'}$-generic subset of
$i_\Omega(P(\vec E{\upharpoonright}(\omega+\delta))\mgkeq)$ constructed
in section~\ref{sec:generic_set},
and set \[G_1=\set{[p{\upharpoonright}(\omega,\omega+\delta)]\mid [p]
\in G \land \omega\in\domain(p)}. \]
Then $G_1$ is an $M_B$-generic subset of
$i_\Omega(P'\mgkeq)$, where $P'$ is the forcing described following
Lemma~\ref{thm:factorization} such that $P(\vec
E{\upharpoonright}(\omega+\delta))\equiv P(\vec
E{\upharpoonright}\omega)*\dot R$ is a regular suborder of
$P(\vec E{\upharpoonright}\omega+1)\times P'$.
Now let $[q]\in G$ be a condition such that $[q]\Vdash
\chang_{B'}\models\phi(x)$. We may assume that
$\omega=\min(\domain(q))$. Let $G_0$ be an $M_B$-generic subset of
$P(\vec F^{q,\omega})$ with $q{\upharpoonright}\omega+1\in G_0$, and let
$\tilde G$ be the resulting
$M_B$-generic subset of $i_\Omega(P(\vec E{\upharpoonright}(\omega+\delta))\mgkeq)$.
Then $[q]\in \tilde G$, so $M[\tilde G]\models
\chang_{B''}\models\phi(x)$, where $B''$ is the set
$\set{\forceKappa_n\mid n\in\omega}\cup B$, interpreted as having,
like $B'$, a gap headed by $\lambda_0$. Now the forcing does add a
new countable sequence of ordinals, as $M[\tilde
G]\models\cof(\lambda_0)=\omega$. However,
$\lambda_0$ is being interpreted as the head of a gap, and therefore $\chang_{B''}=\bigcup\set{\chang_{\tilde B}\mid \tilde B\subset B\land \tilde B\text{ is suitable}}$.
Since the forcing $P(\vec F^{q,\omega})\mgkeq$ does not add bounded subsets of $\lambda_0$, this implies that $\chang_{B''}$, as defined inside $M[\tilde G]$, is equal to
$\chang_B$. This concludes the proof that $\chang_B\models\phi(x)$.
\bigskip{}
It is critical to this argument that there are only a finite number of intervals
(in this case, only one interval) of $B$ which need special attention.
Finitely many such special cases can be dealt with by a condition $q$
obtained, as in the proof, by finitely many one-step extensions, but
infinitely many would involve
adding Prikry-type sequences, which requires the use of the
iteration to obtain genericity.
\section{Questions and Problems}
\label{sec:questions}
This study leaves a number of questions open. Two which were
mentioned in the introduction essentially involve filling gaps in this paper:
\begin{question}\label{q:exact}
Exactly what is the large cardinal strength of a sharp for $\chang$?
\end{question}
Theorem~\ref{thm:main} puts it between a mouse over the reals satisfying
$o(\kappa)=\kappa^{+(\omega+1)}+1$ and a sufficiently strong mouse
over the reals
satisfying $o(\kappa)=\kappa^{+\omega_1}+1$. The second question
asks whether this procedure truly gives a sharp for the Chang model:
\begin{question}
Can the restricted formulas be removed from
Definition~\ref{def:Csharp} of the sharp for the Chang model? That
is, can the added Skolem functions be made full-fledged members of
the language?
\end{question}
The next questions ask for more detailed information about the
structure of the sharp:
\begin{question}
What is $K(\mathcal{R})^{\chang}$? Is it an iterate (not moving
members of $I$) of
$M_{\Omega}{\vert}\Omega$ for some mouse $M$ over the reals? If so, is
this iteration definable in
$L[M,\set{\lambda\mid\cof(\lambda)=\omega}]$?
\end{question}
\begin{question}
What is the core model $K^{\chang}$ of the Chang model?
How does it relate to $K^{L(\mathcal{R})}$ and to $K(\mathcal{R})^{\chang}$?
\end{question}
\begin{question}
How much of the covering lemma survives in the Chang model?
\end{question}
\begin{question}
Is it true that the measurable cardinals of
$K(\reals)^{\chang}$ are exactly the regular cardinals of
$K(\reals)^{\chang}$ which have countable cofinality in $V$?
\end{question}
\medskip{}
The final question is about the next step from the Chang model. The
$\omega_1$-Chang model $\omega_1$-$\chang$ is obtained by closing
under $\omega_1$-sequences of ordinals.
\begin{question}
What can be said about the $\omega_1$-Chang model?
\end{question}
The question is due to Woodin (personal communication), as is most
of the known information. Gitik has pointed out that (contrary to
my earlier belief) his technique of recovering extenders from threads, or
strings of indiscernibles, appears to be essentially unlimited for
strings whose length has uncountable cofinality. It follows that
the lower bound, the counterpart to
Theorem~\ref{thm:main}(\ref{item:main-lower}), is probably at least as
large as any cardinal for which there is a pure extender model.
There is one minor caveat to this statement:
\begin{proposition}\label{thm:omega1-sharp-lower-bound}
Suppose that $V=L[\mathcal{E}]$ is an extender model, and that
there is an iterated ultrapower $i\colon V\to M$ where $M$ is a
definable submodel of $\omega_1$-$\chang$. Then there is no
strong cardinal in $V$.
\end{proposition}
\begin{proof}
Suppose the contrary, and let $\kappa$ be the smallest strong
cardinal. Then $i(\kappa)$ is the smallest strong cardinal in
$M$. However, since $\kappa$ is strong there is an extender $E$
with critical point $\kappa$ such that $i^{E}(\kappa)>i(\kappa)$
and
${\vphantom{\bigl)}}^{\omega_1}{\ult(L[\mathcal{E}],E)}\subseteq\ult(L[\mathcal{E}],E)$.
Then
$\omega_1$-$\chang=(\text{$\omega_1$-$\chang$})^{\ult(L[\mathcal{E}],E)}$,
but the smallest strong cardinal in the latter is
$i^{E}(i(\kappa))\ge i^E(\kappa)>i(\kappa)$.
\end{proof}
However this observation has no implications for the existence of a
sharp for $\omega_1$-$\chang$. For example, if $V=L[\mathcal{E}]$
where $\mathcal{E}$ is a proper set, then so long as
$K^{\omega_1\text{-}\chang}$ exists and is sufficiently iterable, Gitik's
technique gives an iterated ultrapower from $L[\mathcal{E}]$ to $K^{\omega_1\text{-}\chang}$.
Woodin has observed that the existence of a sharp for
$\omega_1$-$\chang$ would imply the Axiom of Determinacy, which
implies that there is no embedding from $\omega_1$ into
the reals in $\omega_1$-$\chang$, and hence none in $V$. Thus a
sharp for $\omega_1$-$\chang$ is inconsistent with the Axiom of
Choice in $V$.
However it would be of interest to find a sharp for the
$\omega_1$-Chang model as defined inside an inner model which satisfies the
Axiom of Determinacy.
\bibliographystyle{abbrv}
\section{Introduction} \label{intro}
The core objective of image-based virtual try-on is to synthesize a person image with a new clothing, given the image of the person wearing a different clothing item and the new clothing item as inputs. Virtual try-on can be broken down into three main sub-tasks, namely image warping, image compositing, and synthesizing. The latter is very challenging as a synthetic image must preserve the person's identity, pose and shape. Also, the occluded body parts in a clothing item should be correctly synthesized. Moreover, the clothing image should accurately fit the pose and shape of a person, and the details of the clothing should also be preserved (i.e. logo, texture and embroidery). Prior work~\cite{jetchev2017conditional,han2018viton,wang2018toward,yu2019vtnfp,han2019clothflow,minar2020cp,jandial2020sievenet,yang2020towards,yang2020towards,ge2021disentangled,ren2021cloth} formulates virtual try-on as a supervised learning problem by following two major steps: warp the clothing image to fit the human body/shape and fuse the warped clothing with the person image (i.e. compositing and synthesis). While most of these methods are able to preserve the identity of a person, there exists a significant gap towards photo-realism as they tend to fail not only in cases of complex pose and shape of the person, but also in synthesizing initially occluded body parts (e.g., long sleeve clothing). These methods also fail to preserve the logo, texture and embroidery of the clothing, as well as the overall shape of the clothing item. This is largely attributed to the objective functions used in the existing virtual try-on methods. 
In fact, many approaches use per-pixel-based, perceptual-based losses~\cite{wang2018toward,minar2020cp,yang2020towards,choi2021viton,jandial2020sievenet,ren2021cloth} and adversarial losses~\cite{jetchev2017conditional,ge2021disentangled}, which do not enforce any global context and semantics necessary to accurately model the human and clothing interaction for compositing and synthesis. In addition, existing virtual try-on methods~\cite{jetchev2017conditional,han2018viton,yu2019vtnfp,wang2018toward,minar2020cp,yang2020towards,choi2021viton,jandial2020sievenet,ren2021cloth} are not robust to in-the-wild images. Therefore, it remains an open question how these methods would generalize in the wild, and it is of paramount importance to develop methods that can overcome these challenges for highly photo-realistic virtual try-on.
In order to address the aforementioned limitations, we introduce a self-supervised conditional generative adversarial network model, dubbed Fill In FAbrics (FIFA), which is a body-aware inpainting framework for image-based virtual try-on. The proposed FIFA framework can synthesize more realistic logo, texture and embroidery of the target clothing, and handles well person images with complex poses (e.g., occluded hands). Our approach consists of a Fabricator and a unified virtual try-on pipeline with a Segmenter, Warper and Fuser. The Fabricator serves as a form of self-supervised pretraining for the Warper. The goal of the Fabricator is to reconstruct full clothing details, given a partial input, enabling the model to learn the overall structure of the clothing (i.e. logo, texture, embroidery, full/short sleeves). To enforce global context at multiple scales for accurate modeling of the human and clothing interaction for compositing and synthesis, we also propose to use a multi-scale structural constraint to warp and refine the target clothing. The main contributions of this paper can be summarized as follows:
\begin{itemize}
\item We propose FIFA, a self-supervised conditional generative adversarial network model for virtual try-on, which can handle the complex pose of a reference person while preserving the target clothing details.
\item We design a masked cloth modeling (MCM) objective to learn the overall structure of the clothing by predicting the full clothing image, given a masked input, for the downstream task of better target cloth warping and refinement.
\item We show through experimental results and ablation studies that our model achieves competitive performance in comparison with strong baselines, yielding more realistic virtual try-on outputs.
\end{itemize}
\section{Related Work} \label{related}
\noindent\textbf{Image-Based Virtual Try-On.}\quad The basic objective of image-based virtual try-on is to synthesize a photo-realistic new image by overlaying a desired product image seamlessly onto the corresponding region of a clothed person. To achieve this goal, various image-based virtual try-on methods based on generative models have been proposed, Conditional Analogy Generative Adversarial Network (CA-GAN)~\cite{jetchev2017conditional}, Virtual Try-On Network (VITON)~\cite{han2018viton}, Characteristic-Preserving Virtual Try-On (CP-VTON) network~\cite{wang2018toward}, CP-VTON+~\cite{minar2020cp}, Disentangled Cycle-consistency Try-On Network (DCTON)~\cite{ge2021disentangled}, ClothFlow~\cite{han2019clothflow}, SieveNet~\cite{jandial2020sievenet}, Adaptive Content Generating and Preserving Network (ACGPN)~\cite{yang2020towards}, and Cloth Interactive Transformer (CIT)~\cite{ren2021cloth}. While these methods aim to handle complex textures on clothes and reduce artifacts in the final try-on results, they fail when the visual difference between the person image and target clothing is significant (e.g., changing long sleeve clothing items with short sleeve) and also tend to generate distorted arm regions. Furthermore, they fail to tackle person images with complex poses.
\medskip\noindent\textbf{Masked Data Modeling.}\quad Masked data modeling has proven effective in natural language processing and computer vision~\cite{mikolov2013efficient,devlin2018bert,liu2018image,suvorov2022resolution}. Existing masked data modeling approaches include Context Encoders~\cite{pathak2016context} and Masked Autoencoders (MAE)~\cite{he2021masked}. Our work differs from existing methods in two main aspects. First, we make predictions at a pixel level compared to predicting visual tokens~\cite{bao2021beit}. Second, our encoder network is purely convolutional by design, and is not based on vision transformers, which have been shown to perform well only when pre-trained on large-scale image datasets such as the JFT-300M dataset~\cite{dosovitskiy2020vit}.
\section{Proposed Method} \label{method}
\noindent\textbf{Problem Statement.}\quad Image-based virtual try-on aims to synthetically fit a target clothing onto a reference person while preserving photo-realistic details such as identity, pose and shape of the person, as well as texture and embroidery of the target clothing. More precisely, given a reference person image and a clothing image, the goal of our proposed FIFA model is to synthesize a new image of the same person wearing the target clothing such that the shape and pose of the person, as well as the details of the clothing are preserved.
\subsection{Fill in Fabrics for Virtual Try-On}
The proposed FIFA framework consists of a Fabricator and a unified pipeline consisting of a Segmenter, Warper and Fuser for virtual try-on, as shown in Figure~\ref{fig:fifa}. Given a partial input, we first use the Fabricator to reconstruct the full clothing details and learn the overall structure of the clothing (i.e. texture, full and half sleeve). This is used as a pretext task for the Warper. Second, we use Segmenter to predict the mask of the body parts of the reference person, as well as the masked target clothing regions. Third, we employ Warper to warp the target clothing image such that it fits the masked clothing region with the aim to capture the pose and shape of the reference person. Finally, Fuser integrates the outputs from Segmenter and Warper in order to synthesize the final try-on image.
\begin{figure}[!htb]
\centering
\includegraphics[scale=.58]{Figure1.pdf}
\caption{Schematic layout of the proposed FIFA framework for virtual try-on. Given a person image $\bm{I}$ and a clothing image $\bm{T}_{c}$, FIFA synthesizes a try-on image $\bm{I}_{t}$, where the person in image $\bm{I}$ is wearing the target clothing $\bm{T}_{c}$. STN refers to the spatial transformer network, and $\oplus$ denotes concatenation.}
\label{fig:fifa}
\end{figure}
\medskip\noindent\textbf{Fabricator.}\quad The Fabricator aims to reconstruct (i.e. fill in fabrics) the full target clothing image $\hat{\bm{T}}_{c}$, given the partial target clothing $\bm{T}_\text{partial}$. To this end, the Fabricator learns to represent the overall structure of the clothing while reconstructing the missing regions (i.e. fill in correct pixels that make sense in the context). Inspired by the concept of image inpainting (i.e. the task of filling in holes in an image) using partial convolutions, where the convolution is masked and re-normalized to be conditioned on only valid pixels~\cite{liu2018image}, we construct $\bm{T}_\text{partial}$ from $\bm{T}_{c}$ using masks of random streaks and holes of arbitrary shapes. In contrast to image inpainting, we formulate our objective as a masked cloth modeling problem, which can be regarded as a form of self-supervised pre-training for the downstream task of virtual try-on. More specifically, we train an encoder-decoder network $\mathcal{F}_s$ to produce the reconstructed target clothing $\hat{\bm{T}}_{c}$ from a given $\bm{T}_\text{partial}$, with the goal that it be close to the original target clothing image $\bm{T}_{c}$ (i.e. non-masked clothing) by minimizing the $L_1$ error $\mathcal{E}=\Vert\hat{\bm{T}}_{c}-\bm{T}_{c}\Vert_{1}$.
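The masking and reconstruction objective above can be sketched as follows. This is a minimal NumPy illustration: the encoder-decoder network $\mathcal{F}_s$ itself is omitted, and the \texttt{make\_partial} helper, hole counts and hole sizes are illustrative assumptions rather than the paper's exact masking procedure.

```python
import numpy as np

def make_partial(t_c, n_holes=4, max_size=8, seed=0):
    """Construct T_partial from T_c by zeroing random rectangular holes.

    t_c is an H x W x C clothing image; returns (T_partial, valid_mask),
    where valid_mask is 1 on kept pixels and 0 on masked-out pixels.
    Hole count and sizes are illustrative, not the paper's exact scheme.
    """
    rng = np.random.default_rng(seed)
    h, w = t_c.shape[:2]
    mask = np.ones((h, w), dtype=t_c.dtype)
    for _ in range(n_holes):
        hh = int(rng.integers(1, max_size + 1))
        ww = int(rng.integers(1, max_size + 1))
        y = int(rng.integers(0, h - hh + 1))
        x = int(rng.integers(0, w - ww + 1))
        mask[y:y + hh, x:x + ww] = 0  # zero out a random rectangle
    return t_c * mask[..., None], mask

def l1_reconstruction_error(t_hat, t_c):
    """The Fabricator objective E = ||T_hat - T_c||_1."""
    return float(np.abs(t_hat - t_c).sum())
```

During self-supervised pretraining, $\mathcal{F}_s$ would map \texttt{T\_partial} back to a reconstruction whose $L_1$ error against the unmasked $\bm{T}_c$ is minimized.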
\medskip\noindent\textbf{Segmenter.}\quad The goal of the Segmenter is to preserve the body parts of the person during the synthesis process and also to accurately predict the semantic layout of the target clothing regions that are necessary for the Warper. Given a reference person image $\bm{I}$ and its associated mask $\bm{M}$ obtained via a publicly available human parser~\cite{li2020self}, the arms and torso regions are merged to form a fused map $\bm{M}_\text{fused}$. A conditional generative adversarial network (CGAN) $G_{p}$ is then trained to generate a different person body part mask $\bm{M}_\text{bp}$, which is conditioned on $\bm{M}_\text{fused}$, the 18-keypoint pose heatmap $\bm{M}_\text{pose}$ obtained using an out-of-the-box 2D pose estimator~\cite{8765346,cao2017realtime}, and the target clothing image $\bm{T}_{c}$. To generate the target clothing region $\bm{M}_\text{cloth}$, another CGAN $G_{c}$ is trained by combining $\bm{M}_\text{bp}$, $\bm{M}_\text{pose}$ and $\bm{T}_{c}$. Hence, in the Segmenter there are two CGANs in which the discriminator is similar to pix2pixHD~\cite{wang2018high} and the generator is a Residual U-Net architecture~\cite{zhang2018road} built on top of the U-Net model~\cite{ronneberger2015u} with residual connections~\cite{he2016deep}. This not only helps retain fine-grained features and predict accurate body part masks, but also helps generate better try-on results.
\begin{equation}
\begin{split}
\mathcal{L}_\text{CGAN} = \mathbb{E}_{\bm{x}\sim p_{\text{data}}(\bm{x})}[\log D(\bm{x}\vert\bm{y})] +
\mathbb{E}_{\bm{z}\sim p_{z}(\bm{z})}[\log( 1 - D(G(\bm{z}\vert\bm{y})))],
\end{split}
\label{eq:advloss}
\end{equation}
where $G$ and $D$ are the generator and discriminator, $\bm{x}$ and $\bm{y}$ are the input and ground-truth mask, and $\bm{z}$ is a noise prior drawn from a standard normal distribution. A CGAN is a type of GAN that takes advantage of auxiliary information during the training process. To train a CGAN, we train the generator and discriminator simultaneously to maximize the performance of both. In simple terms, the goal of the generator is to generate data that the discriminator classifies as ``real'', whereas the objective of the discriminator is to not be ``fooled'' by the generator. In other words, the generator and discriminator follow the two-player min-max game with $\mathcal{L}_\text{CGAN}$ as a function of $G$ and $D$.
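As a concrete illustration, the value in Eq.~\eqref{eq:advloss} can be estimated from discriminator scores alone. The following NumPy sketch assumes the scores $D(\bm{x}\vert\bm{y})$ and $D(G(\bm{z}\vert\bm{y}))$ have already been produced by the networks, which are not modeled here.

```python
import numpy as np

def cgan_value(d_real, d_fake, eps=1e-12):
    """Monte-Carlo estimate of the CGAN value in Eq. (1).

    d_real: scores D(x | y) on real samples, in (0, 1).
    d_fake: scores D(G(z | y)) on generated samples, in (0, 1).
    The discriminator ascends this value; the generator descends it.
    eps guards against log(0) for saturated scores.
    """
    real_term = np.log(np.asarray(d_real, dtype=float) + eps).mean()
    fake_term = np.log(1.0 - np.asarray(d_fake, dtype=float) + eps).mean()
    return float(real_term + fake_term)
```

A confident discriminator (\texttt{d\_real} near 1, \texttt{d\_fake} near 0) drives the value toward its maximum of 0, while a discriminator fooled by the generator makes it strongly negative, matching the two-player min-max reading above.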
In order to enforce consistency at the pixel-level, we also use the pixel-wise cross-entropy loss $\mathcal{L}_\text{CE}$ for better semantic segmentation results from the generator. Therefore, the overall objective is defined as
\begin{equation}
\mathcal{L}_\text{mask} = \alpha_{1} \mathcal{L}_\text{CGAN} + \alpha_{2} \mathcal{L}_\text{CE},
\label{eq:seg_loss}
\end{equation}
where $\alpha_{1}$ and $\alpha_{2}$ are nonnegative regularization parameters, which control the contribution of each loss term. Following previous work~\cite{yang2020towards}, we set $\alpha_{1}$ and $\alpha_{2}$ to 1 and 10, respectively, in our experiments.
\medskip\noindent\textbf{Warper.}\quad We employ Warper to naturally deform the target clothing to fit the mask of the clothing region with respect to the pose of the person, as well as to preserve the texture and embroidery of the target clothing. While the Adaptive Content Generating and Preserving Network (ACGPN)~\cite{yang2020towards} for virtual try-on has been shown effective at predicting the semantic layout of the reference image, it fails, however, to preserve complex poses, logo, texture and embroidery of the target clothing. This is largely due to the fact that ACGPN employs the Spatial Transformer Network (STN)~\cite{jaderberg2015spatial} with Thin Plate Splines (TPS)~\cite{duchon1977splines} and an additional refinement network U-Net~\cite{ronneberger2015u}. To address these limitations, we design a masked cloth modeling objective (MCM) when training the Warper to better preserve logo, texture and embroidery of the target clothing. More specifically, we transfer the learned representations in $\mathcal{F}_{s}$ from Fabricator to the refinement network. We also incorporate a multi-scale structural constraint (MSC) to enforce global context at multiple scales for better warping of the target clothing according to the pose and shape of the person. Our strategy of training Warper yields better warped target clothes, which have fine details (i.e. logo, texture and embroidery), and is especially effective at handling complex poses.
Given the target clothing region $\bm{M}_\text{cloth}$ and target clothing image $\bm{T}_{c}$, the goal of Warper is to deform $\bm{T}_{c}$ such that it fits $\bm{M}_\text{cloth}$. STN first warps the clothing to $\bm{T}_\text{warped}$. This is further refined using $\bm{T}_\text{warped}$ as input to the refinement network in order to generate more details (i.e. logo, texture, embroidery). In a similar vein to~\cite{wang2018toward,yang2020towards}, composition is then performed on the output of the refinement network with $\bm{M}_\text{cloth}$ to output the final refined clothing $\bm{T}_\text{refined}$. The overall loss for the STN in Warper is an unweighted combination of the $\mathcal{L}_\text{CGAN}$ loss and a second-order difference constraint~\cite{yang2020towards}. The losses for the refinement network (which is initialized from the pre-trained encoder-decoder network $\mathcal{F}_{s}$) are $\mathcal{L}_\text{CGAN}$ and the perceptual $\mathcal{L}_\text{VGG}$ loss~\cite{johnson2016perceptual}. This VGG perceptual loss helps ensure the target clothing and its warped version contain the same semantic content. In addition, we introduce a multi-scale structural constraint to enforce global context at multiple scales during training. Therefore, the overall loss function is defined as
\begin{equation}
\mathcal{L}_\text{refined} = \beta_{1} \mathcal{L}_\text{CGAN} + \beta_{2} \mathcal{L}_\text{VGG} + \beta_{3} \mathcal{L}_\text{MS-SSIM},
\label{eq:warp_loss}
\end{equation}
where $\beta_{1}$, $\beta_{2}$ and $\beta_{3}$ are regularization parameters, which are set to 0.2, 20 and 15, respectively, in our experiments. $\mathcal{L}_\text{MS-SSIM}$ is the multi-scale structural similarity constraint~\cite{zhao2016loss}. The Warper benefits from the MCM objective and is able to better preserve the logo, texture and embroidery of the target clothing. It also benefits from MSC to enforce global context in order to ensure better warping of the target clothing according to the pose and shape of the person. This in turn helps produce improved try-on results in Fuser.
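The combination in Eq.~\eqref{eq:warp_loss} is a plain weighted sum; a minimal sketch with the reported weights follows, where the individual loss values are placeholders for the actual network outputs rather than anything computed here.

```python
def refined_loss(l_cgan, l_vgg, l_ms_ssim, betas=(0.2, 20.0, 15.0)):
    """Eq. (3): L_refined = b1*L_CGAN + b2*L_VGG + b3*L_MS-SSIM,
    with (b1, b2, b3) = (0.2, 20, 15) as used in the experiments.
    """
    b1, b2, b3 = betas
    return b1 * l_cgan + b2 * l_vgg + b3 * l_ms_ssim
```

The same weighted-sum pattern applies to the Segmenter loss in Eq.~\eqref{eq:seg_loss} and the Fuser loss, with their respective weights.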
\medskip\noindent\textbf{Fuser.}\quad The Fuser merges the target clothing region, refined clothing image, a composited body part mask and a body part image with original clothing region masked out in order to produce the final try-on image. First, the Fuser generates a composited body part mask to remove or preserve the non-target body parts, which correspond, in most cases, to the arms of the person. This is then used in the second stage to determine which parts to preserve or generate when synthesizing the final try-on results. Given the original body part mask $\bm{M}_\text{obp}$, the clothing mask $\bm{M}_\text{oc}$ from $\bm{M}$ (i.e. head, arms, torso removed), $\bm{M}_\text{bp}$ and $\bm{M}_\text{cloth}$ from Segmenter, the composited body part mask $\bm{M}_\text{comp}$ is given by
\begin{equation}
\bm{M}_\text{comp} = ((\bm{M}_\text{bp} \odot \bm{M}_\text{oc}) + \bm{M}_\text{obp}) \odot (\bm{J} - \bm{M}_\text{cloth}),
\label{eq:composition1}
\end{equation}
where $\odot$ denotes element-wise multiplication and $\bm{J}$ is an all-ones matrix. As this step takes an input from Segmenter, it is crucial to produce accurate segmentation maps of $\bm{M}_\text{bp}$ and $\bm{M}_\text{cloth}$ for better compositing. We also perform compositing on $\bm{I}$ to obtain the body part image $\bm{I}_\text{nc}$, in which the original clothing region is masked out, as follows:
\begin{equation}
\begin{split}
\bm{I}_\text{nc} &= (\bm{I} - \bm{M}_\text{oc}) \odot (\bm{J} - \bm{M}_\text{cloth}).
\end{split}
\label{eq:composition2}
\end{equation}
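Eqs.~\eqref{eq:composition1} and~\eqref{eq:composition2} are pure element-wise mask arithmetic, so they can be sketched directly in NumPy; the binary $H\times W$ mask shapes and the channel-broadcasting convention for $\bm{I}$ are illustrative assumptions.

```python
import numpy as np

def composite_masks(m_bp, m_oc, m_obp, m_cloth):
    """Eq. (4): M_comp = ((M_bp elementwise M_oc) + M_obp)
    elementwise (J - M_cloth), with J the all-ones matrix."""
    j = np.ones_like(m_cloth)
    return (m_bp * m_oc + m_obp) * (j - m_cloth)

def masked_body_image(i_img, m_oc, m_cloth):
    """Eq. (5): I_nc = (I - M_oc) elementwise (J - M_cloth),
    broadcasting the H x W masks over the channel axis of the
    H x W x C person image I."""
    j = np.ones_like(m_cloth)
    return (i_img - m_oc[..., None]) * (j - m_cloth)[..., None]
```

In words: pixels covered by the target clothing region are cleared everywhere, body-part pixels that used to lie under the original clothing are taken from the predicted mask $\bm{M}_\text{bp}$, and the remaining body-part pixels come from the original mask $\bm{M}_\text{obp}$.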
Hence, given $\bm{T}_\text{refined}$ from Warper, $\bm{M}_\text{cloth}$ from Segmenter, $\bm{M}_\text{comp}$ and $\bm{I}_\text{nc}$, we train a CGAN $G_{m}$ to predict the final try-on image $\bm{I}_{t}$ by minimizing the following loss function
\begin{equation}
\mathcal{L}_\text{fuser} = \gamma_{1} \mathcal{L}_\text{CGAN} + \gamma_{2} \mathcal{L}_\text{VGG},
\label{eq:fuser_loss}
\end{equation}
where the hyper-parameters $\gamma_{1}$ and $\gamma_{2}$ are set to 1 and 10, respectively, in our experiments.
\section{Experiments} \label{experiments}
We conduct extensive experiments to assess the performance of the proposed FIFA framework in comparison with competing baseline models for virtual try-on. Experimental details and additional results and ablation studies are provided in the supplementary material. Code is available at: \textcolor{blue}{https://github.com/hasibzunair/fifa-tryon}
\subsection{Experimental Setup}
\noindent\textbf{Datasets.}\quad We demonstrate and analyze the performance of our model on two virtual try-on datasets: VITON and DecaWVTON.
\begin{itemize}
\item \textbf{VITON:} This dataset consists of 16,253 pairs of front-view women images and front-view top clothing images split into a training set of 14,221 pairs and a test set of 2,032 pairs. To evaluate the capability of virtual try-on methods in handling different poses of a person, we divide the VITON test set into three subsets of easy, medium and hard cases according to the human pose in the reference images. These test subsets are denoted as VITON-E, VITON-M and VITON-H for easy, medium and hard, respectively~\cite{yang2020towards}.
\item \textbf{DecaWVTON:} To demonstrate the generalizability of FIFA to in-the-wild images, we use DecaWVTON, a proprietary dataset comprising images with complex poses and clothing not present in the VITON dataset (e.g., turtle neck). Also, the clothing images are rotated, whereas VITON consists of only front-view clothing images. In many cases, the head portion is cut out (i.e. either fully or partially), whereas in VITON the person images consist of full faces.
\end{itemize}
\medskip\noindent\textbf{Baselines.}\quad We evaluate the performance of our proposed virtual try-on model against recent state-of-the-art techniques, including CA-GAN~\cite{jetchev2017conditional}, VITON~\cite{han2018viton}, CP-VTON~\cite{wang2018toward}, CP-VTON+~\cite{minar2020cp}, SieveNet~\cite{jandial2020sievenet}, and segmentation based methods such as VTNFP~\cite{yu2019vtnfp} and ACGPN~\cite{yang2020towards}, as well as flow based methods such as ClothFlow~\cite{han2019clothflow}. We also compare our model against a cycle-consistency based approach DCTON~\cite{ge2021disentangled} and a transformer based method CIT~\cite{ren2021cloth}.
\medskip\noindent\textbf{Evaluation Metrics.}\quad Following previous work~\cite{minar2020cp,yang2020towards}, we use the Structural SIMilarity (SSIM) that captures image-level similarity and the Frechet Inception Distance (FID) that captures the distributional similarity. Both metrics are commonly used for benchmarking virtual try-on methods to quantify the visual difference between the generated and real reference images. Higher scores of SSIM and lower scores of FID indicate higher quality of the synthesized results. It is important to mention that while computing the SSIM and FID metrics, the target clothing items are the same as those worn by the reference person, as it is not possible to acquire ground-truth images for try-on results.
\medskip\noindent\textbf{Implementation Details.}\quad All experiments are performed on a Linux workstation with a 4.8GHz CPU, 64GB RAM and a single NVIDIA RTX 3080 GPU. Experiments are conducted using the Python programming language and the PyTorch deep learning framework. A full training of FIFA, along with the Fabricator on the VITON dataset, takes roughly seven days. During training, the target clothing item is the same as the one in the reference person image, as it is not possible to acquire triplets to compute the loss with respect to the ground truth.
\subsection{Qualitative Results}
In Figure~\ref{fig:sota_vis}, we visually compare the performance of our proposed model with CP-VTON+ and ACGPN, which are state-of-the-art virtual try-on baselines. Each row shows a person virtually trying on different clothing items. As can be seen in the first row of Figure~\ref{fig:sota_vis}, when the pose of the reference person is complex (i.e. standing with arms behind the body), the baseline models either remove body regions, fail to warp short sleeve shirt, or add unrealistic body parts. These baselines are also unable to capture the global structure and semantics, which are needed for warping short sleeve shirts when the reference person is wearing a long sleeve shirt. This is due, in large part, to the limited capability of the warping strategies used in these baselines. The second and third rows of Figure~\ref{fig:sota_vis} show cases where the target clothing items are of complex texture (i.e. printed patterns, long sleeve, and shirt with logo) and embroidery (i.e. stripes). In these cases, CP-VTON+ fails to distinguish between the front and back part of the clothing regions, does not preserve the logo of the target clothes, and yields blurry results at the clothing and person body boundaries. While ACGPN produces non-blurry results, it fails to preserve the complex embroidery of the target clothing, and does not accurately warp long sleeve target clothing items, resulting in incomplete sleeves. In the last row of Figure~\ref{fig:sota_vis}, we can observe artifacts and mix-up of front and back part of the clothing in the images generated by the baseline methods. Also, both CP-VTON+ and ACGPN fail to capture the v-shaped structure of the target clothing, and do not accurately warp tank tops with very thin straps, resulting in either blurry or distorted clothing structure. Overall, these baselines fail to preserve the complex pose of the reference person, the complex texture and embroidery of the target clothing, and also the complex clothing types.
\begin{figure}[!htb]
\centering
\includegraphics[height=4.8cm]{Figure2.pdf}
\caption{Given a pair of a reference person image and a target clothing image, our FIFA model successfully synthesizes virtual try-on images. Compared to the baselines, FIFA is able to better handle complex poses and also retains photo-realistic details such as logo, texture, embroidery and structure (e.g., collar shape) of the target clothing.}
\label{fig:sota_vis}
\end{figure}
By comparison, our FIFA method is able to warp the target clothing in the case of complex poses, and preserves well the body parts. It benefits from the synergy between the MCM objective and the MSC constraint, which help preserve the pose of a person, capture the fine details of the target clothing (i.e. logo and embroideries), as well as the global structure of the clothing (i.e. front and back part of clothing, v-shaped collar). Moreover, FIFA benefits from residual blocks (RBs) to better predict the semantic layout of the body parts, resulting in realistic try-on results. In summary, this not only helps preserve the logo, texture, embroidery and type of the target clothing, but also yields an output with fewer artifacts and clear body parts, achieving more realistic try-on results. We also find that FIFA better preserves the skin color of the person and accurately synthesizes the person's body parts that were initially occluded. In addition, it can distinguish between the front and back part of clothing items.
It is worth pointing out that some examples in Figure~\ref{fig:sota_vis} appear to have a color mismatch between the synthesized clothes and the target. We hypothesize that this might be attributed to the Warper, which in some cases produces blurry target clothing outputs. While using the multi-scale structural constraint (MSC) in the Warper helps output fine details of clothing, we argue that designing perceptually motivated loss functions may further improve the results.
\subsection{Quantitative Results}
Table~\ref{table:sota} shows that FIFA consistently outperforms all baselines, achieving relative improvements of 4.85\%, 4.22\%, 4.64\% and 4.47\% over the strongest ACGPN baseline on all (VITON), easy, medium and hard cases in terms of the SSIM metric. FIFA also outperforms ACGPN with a substantial relative improvement of 19.11\% in terms of FID.
\begin{table}[!htb]
\setlength{\tabcolsep}{4pt}
\begin{center}
\caption{Performance comparison of FIFA and state-of-the-art methods on the VITON, VITON-E, VITON-M and VITON-H test sets using SSIM and FID scores. FIFA consistently outperforms the baselines across easy, medium and hard cases. Boldface numbers indicate the best performance, whereas the best baselines are underlined.}
\medskip
\label{table:sota}
\begin{tabular}{llllll}
\hline\noalign{\smallskip}
& \multicolumn{4}{c}{\bf SSIM ($\uparrow$)} & \\
\cline{2-5}
\noalign{\smallskip}
\bf Method & VITON & VITON-E & VITON-M & VITON-H & \bf FID ($\downarrow$)\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
CA-GAN~\cite{jetchev2017conditional} & 0.740 & - & - & - & 47.34 \\
VITON~\cite{han2018viton} & 0.783 & 0.787 & 0.779 & 0.779 & 55.71 \\
CP-VTON~\cite{wang2018toward} & 0.745 & 0.753 & 0.742 & 0.729 & 24.43 \\
VTNFP~\cite{yu2019vtnfp} & 0.803 & 0.810 & 0.801 & 0.788 & - \\
ClothFlow~\cite{han2019clothflow} & 0.843 & - & - & - & 23.68 \\
CP-VTON+~\cite{minar2020cp} & 0.750 & - & - & - & 21.08 \\
SieveNet~\cite{jandial2020sievenet} & 0.837 & - & - & - & 26.67 \\
ACGPN~\cite{yang2020towards} & \underline{0.845} & \underline{0.854} & \underline{0.841} & \underline{0.828} & 16.64 \\
DCTON~\cite{ge2021disentangled} & 0.830 & - & - & - & \underline{14.82} \\
CIT~\cite{ren2021cloth} & 0.827 & - & - & - & - \\
\bf FIFA (Ours) & \textbf{0.886} & \textbf{0.890} & \textbf{0.880} & \textbf{0.865} & \textbf{13.46} \\
\hline
\end{tabular}
\end{center}
\end{table}
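The relative improvements quoted in this section can be reproduced directly from the table entries. The short sketch below is illustrative (not part of the original evaluation code) and handles both SSIM, where higher is better, and FID, where lower is better.

```python
def rel_improvement(ours, baseline, higher_is_better=True):
    """Relative improvement (%) of `ours` over `baseline`."""
    if higher_is_better:
        return 100.0 * (ours - baseline) / baseline
    return 100.0 * (baseline - ours) / baseline

# Entries from Table 1: FIFA vs. the strongest baselines.
print(round(rel_improvement(0.886, 0.845), 2))         # SSIM vs. ACGPN -> 4.85
print(round(rel_improvement(13.46, 16.64, False), 2))  # FID  vs. ACGPN -> 19.11
print(round(rel_improvement(0.886, 0.843), 2))         # SSIM vs. ClothFlow -> 5.1
print(round(rel_improvement(13.46, 23.68, False), 2))  # FID  vs. ClothFlow -> 43.16
```

These match the 4.85\% and 19.11\% gains over ACGPN and the 5.10\% and 43.16\% gains over ClothFlow reported in the text.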
Interestingly, our FIFA model yields significant relative improvements of 5.10\% and 43.16\% over ClothFlow in terms of SSIM and FID, respectively. It is worth pointing out that ClothFlow operates on streams (i.e. optical flow maps) to predict the movement of clothes and is computationally expensive, while our model is a purely image-based virtual try-on approach operating on image pixels. Our method also outperforms the transformer based CIT baseline with a relative improvement of 7.13\% in terms of SSIM. This better performance of our approach is significant because transformers are built on self-attention operations and are quite strong in modeling the global context between the person and target clothing. In addition, transformers in computer vision tasks perform well only when pre-trained on a large cohort of images such as the JFT-300M dataset, which comprises 18K classes and 303M high-resolution images~\cite{dosovitskiy2020vit}.
\subsection{Ablation Study}
\noindent\textbf{Effectiveness of Masked Cloth Modeling (MCM).}\quad Figure~\ref{fig:mcm_abl} illustrates the benefit of using the MCM objective in preserving the pose and logo, as well as in accurately warping the target clothing. As can be seen, MCM is able to preserve the logo of the target clothing, whereas without MCM the logo is completely lost. MCM also helps in accurately preserving or synthesizing body parts in complex poses (i.e. body aware), as well as in accurately warping the target clothing. Notice that without MCM, there is a problem of unnecessarily editing regions of the target clothing (i.e. turning a half sleeve shirt into a full sleeve one). This is largely attributed to the richer learning signal provided by MCM, compared to using only the supervised objective of predicting the warped clothing that fits the reference person; this richer signal enables our approach to accurately model the interactions between the target clothing and the clothing of the reference person.
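As an illustration of the kind of objective MCM represents, the sketch below computes a reconstruction loss restricted to a masked clothing region. The function name and the simple L1 form are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def masked_cloth_loss(pred, target, mask):
    """Illustrative masked-region objective: L1 reconstruction loss
    evaluated only inside the masked clothing region.

    pred, target: (H, W, C) images; mask: (H, W) binary array (1 = masked).
    """
    m = mask[..., None].astype(float)          # broadcast mask over channels
    denom = m.sum() * pred.shape[-1]           # number of masked scalar values
    return float(np.abs((pred - target) * m).sum() / max(denom, 1.0))
```

Unmasked pixels contribute nothing, so the model is only rewarded for reconstructing the hidden clothing region from its surrounding context.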
\begin{figure}[!htb]
\centering
\includegraphics[height=4.65cm]{Figure3.pdf}
\caption{Warped target clothing results, demonstrating the effectiveness of the MCM objective in Warper. Warper with MCM is capable of handling complex poses (i.e. body-aware) and preserving the logo and embroidery of the clothing.}
\label{fig:mcm_abl}
\end{figure}
\medskip\noindent\textbf{Effectiveness of Multi-Scale Structural Constraint (MSC).}\quad Figure~\ref{fig:msc_abl} shows that the use of per-pixel-based and perceptual-based loss functions~\cite{wang2018toward,minar2020cp,yang2020towards,choi2021viton,jandial2020sievenet,ren2021cloth} is not enough to capture the global context and semantics, which are needed for preserving the shape of a person and also for realistically synthesizing body parts. The per-pixel-based loss function $\mathcal{L}_1$ measures the distance between pixels and does not enforce any global constraint. On the other hand, the perceptual loss $\mathcal{L}_\text{VGG}$ quantifies the similarity between the reconstructed and ground-truth images, but only at a latent representation level (i.e. computes the distance of the features extracted by VGG-19~\cite{simonyan2014very}). Also, it tends to generate artifacts~\cite{johnson2016perceptual}, which is in line with our findings. We show that by adding MSC, our model is able to better tackle these issues and learns to exploit context at different scales, while the CP-VTON+ and ACGPN baselines introduce artifacts and do not preserve well the shape of the person.
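A minimal sketch of a multi-scale structural constraint is given below, assuming a simple image-pyramid form (L1 distances accumulated over successively average-pooled copies of the images); the actual MSC may differ in its scales and weighting.

```python
import numpy as np

def avg_pool2(img):
    """2x2 average pooling; assumes even height and width."""
    h, w = img.shape[:2]
    return img.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

def multiscale_l1(pred, target, n_scales=3):
    """Accumulate per-pixel L1 distances over a pyramid of scales,
    so that coarse (global) structure is penalized as well as fine detail."""
    loss = 0.0
    for _ in range(n_scales):
        loss += float(np.abs(pred - target).mean())
        pred, target = avg_pool2(pred), avg_pool2(target)
    return loss
```

The coarser levels of the pyramid compare local averages rather than individual pixels, which is one way to inject the global context that a purely per-pixel $\mathcal{L}_1$ lacks.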
\begin{figure}[!htb]
\centering
\includegraphics[height=4.65cm]{Figure4.pdf}
\caption{Try-on results, demonstrating the effectiveness of MSC in Warper. Warper with MSC helps capture global context of the target clothing and preserves the shape of the person.}
\label{fig:msc_abl}
\end{figure}
\subsection{Generalization to In-The-Wild Virtual Try-On}
To test the generalizability of virtual try-on models to in-the-wild images, we set up a challenging task where the results would better reflect the robustness on unseen data. We compare FIFA against the state-of-the-art ACGPN model~\cite{yang2020towards} by training both methods on VITON and testing them on DecaWVTON. Results presented in the supplementary material demonstrate that FIFA yields substantial improvements over ACGPN in terms of SSIM and FID, indicating that FIFA is more robust to in-the-wild images for virtual try-on.
\section{Conclusion} \label{conclusion}
We introduced a body-aware self-supervised inpainting framework for image-based virtual try-on with a focus on tackling complex poses, learning the overall structure of clothing and incorporating global context. Our proposed FIFA model achieves significant improvements in the synthesized try-on image not only by retaining the logo, texture and embroidery of the clothing, but also by better handling complex poses, indicating that it is body aware, a crucial feature for photo-realistic virtual try-on. By combining the strengths of masked cloth modeling, the multi-scale structural constraint and residual blocks, FIFA outperforms strong baselines on the VITON dataset across all, easy, medium and hard cases. In addition, we set up an evaluation framework for testing the robustness of virtual try-on models to in-the-wild images and found that FIFA outperforms previous state-of-the-art methods by a significant margin.
\bibliographystyle{bmvc2k_natbib}
\section{Introduction}
Collective behavior is exhibited across many different scales in animal groups from microscopic bacteria or algae colonies \cite{Zhang2010Collective, Cates2012Diffusive,Marchetti2013Hydrodynamics} to macroscopic insects \cite{Buhl2006disorder,Kelley2013Emergent,Puckett2015Searching,Attanasi2014Information}, schools of fish \cite{Berdahl2013Emergent,Tunstrom2013Collective,Puckett2018Collective} and flocks of birds \cite{Ballerini2008Interaction,Ballerini2008Empirical,Bialek2012Statistical}.
Collective behavior arises from self-organizing interactions between individuals \cite{Couzin2002Collective,Sumpter2010Collective} and can give rise to complex emergent group behaviors which surpass an individual's ability in navigation \cite{Berdahl2013Emergent,Puckett2018Collective} and foraging success \cite{Pitcher1982Fish,Bazazi2012Vortex}.
In nature, groups can exhibit several morphologies or ``states" of collective animal behavior from disordered swarms to ordered flocks and mills.
While individual animals may exhibit a range of behaviors and personalities \cite{Herbert-Read2013Role,Jolles2017Consistent}, early work showed that these collective `states' can be effectively modeled by active self-propelled particles (SPP) following uniform behavioral rules \cite{Vicsek1995Novel,Couzin2002Collective}.
Moreover, the simplest and perhaps the most studied model, the Vicsek model \cite{Vicsek1995Novel}, exhibits a continuous phase transition between disordered (swarm) and ordered (flock) states \cite{Gregoire2004Onset,Szabo2006Phase,Chate2008Collective,Aldana2009Phase,Vicsek2012Collective}.
In experiments, transitions between ordered and disordered phases have also been observed as the density of individuals increases \cite{Buhl2006disorder,Tunstrom2013Collective}.
As collective animal systems share many analogous features to nonliving active matter systems e.g., granular rods \cite{Ramaswamy2003Active,Narayan2007LongLived} and self-propelled colloids \cite{Palacci2013Living}, recent work has sought to build a unified framework to model soft matter systems.
Thus far, the theoretical approach has largely focused on studying active brownian particles (ABPs) which provide an ideal system to first construct a nonequilibrium thermodynamics for active matter \cite{Marchetti2013Hydrodynamics,Ramaswamy2010Mechanics,Patch2017Kinetics}.
In a thermal system, the ideal gas law, $P = \rho kT$, relates the mechanical pressure of the system confining the gas with the number density and temperature, where $\rho$ is the number density, $k$ Boltzmann's constant, and $T$ is the equilibrium temperature.
However, whether or not the concepts of state variables such as pressure and temperature can be directly applied to nonequilibrium systems is an open question \cite{Cugliandolo2011effective}.
Remarkably, simulated ABPs and experimental colloids have been shown to obey an equation of state much like their equilibrium counterparts \cite{Yang2014Aggregation,Mallory2014Anomalous,Takatori2014Swim,Takatori2015Thermodynamics, Ginot2015Nonequilibrium}, though the equation of state appears to only hold for spherical particles with torque-free interactions \cite{Solon2015Pressure,Solon2015Pressurea,Fily2017Mechanical}.
Furthermore, in a recent experiment, researchers found that mechanical pressure does not equilibrate in a system of polar discs with different packing fractions and is therefore not a state variable \cite{Junot2017Active}.
However, the pressure in an active matter system is still a useful quantity \cite{Omar2020Microscopic}, as even in anisotropic systems, the bulk pressure (swim stress) is self-consistent with the surface pressure \cite{Yan2018Anisotropic}.
An alternate approach taken in recent experiments is to borrow concepts and techniques from statistical mechanics and the physics of materials to test the effectiveness of state variable-like descriptions directly in animal groups.
Recent experiments on insect groups have detected a density dependent phase transition \cite{Buhl2006disorder}, measured aggregate surface tension and viscosity \cite{Mlot2012Dynamics}, and found that groups of fire ants can form viscoelastic shear-thinning materials \cite{Tennenbaum2016Mechanics}.
In bird flocks, experimental observations have found scale-free behavioral correlations, \cite{Cavagna2010Scalefree} a signature of a continuous phase transition, and have shown that flocks can be modeled using a maximum entropy approach \cite{Bialek2012Statistical,Cavagna2017Dynamic,Cavagna2018Physics}.
Experiments on flying insect swarms have measured the susceptibility \cite{Attanasi2014Collective}, observed linear response to external perturbations (sound) \cite{Ni2015Intrinsic}, found that swarms have a finite modulus \cite{Ni2016tensile}, and observed phase co-existence \cite{Sinhuber2017Phase}.
Recently, experiments on midges have shown that a swarm responding to variable light stimuli contracts along an isotherm \cite{Sinhuber2019Response}.
\begin{figure}[t]
{\centering \includegraphics[width=0.95\linewidth]{figure01.pdf}}
\caption{(a) Schematic of our experimental setup. The experimental arena (a confined circular area 100 cm in diameter) lies inside a larger 122 by 244 cm tank. Video footage is captured by a camera fitted with an infrared filter.
(b) Sample (static) visual stimulus used in experiments with the silhouettes of individual fish ($N=100$) overlaid for scale.}\label{fig:expt}
\end{figure}
In this work, we conduct a novel experiment aimed at investigating laboratory schools of rummy-nose tetra ({\it{Hemigrammus bleheri}}) in a thermodynamic framework.
Since the tetra are negatively phototactic and prefer to be in dark regions of their environment, we project a dark circular disk onto the center of a large quasi-two-dimensional tank, as shown in \fref{fig:expt}(ab).
By controlling the radius $\rproj$ of the projected dark spot, we confine the fish to a certain region of the tank as shown in \fref{fig:expt}(b).
Note that the fish are not mechanically confined, only weakly visually confined, and therefore can swim out of the central dark region without experiencing any mechanical force.
We project both static and dynamic light fields and investigate the response of the fish to the different perturbations.
In the static light field experiments, the radius of the projected dark disk $\rproj$ is constant during each experimental trial.
In the dynamic light field experiments, we start with a radius $\rproj$ much larger than the unperturbed radii of the schools and reduce the radius of the projected dark disk with time.
We then investigate the group level kinematic statistics of the schools and compare static and dynamic light perturbations.
\section{Methods and Results}
We filmed schooling events of groups of tetras (body length, BL=$3.4 \pm0.5$ cm) in a circular experimental arena (29.4 BL in diameter, 6~cm water depth, $T = 25\pm0.5^\circ C$).
The arena (a clear acrylic wall in a circular shape) lies inside a larger 36BL by 72BL outer rectangular tank which houses water filtration and heating elements.
The outer tank provides a constant temperature heat bath to maintain the temperature of the inner circular tank.
The projected visual stimulus extends beyond the inner tank and covers the entire outer tank, as fish can see through the clear inner wall. Fish were randomly selected from one of two home tanks (each tank having $N_{\text{home}} = 150$) and transferred to the experimental arena.
Experiments were scheduled such that no individual was used in an experiment on two consecutive days.
Before data collection, fish were given one hour to acclimatize to the experimental arena.
\begin{figure*}[t!]
\includegraphics[width=0.95\linewidth]{figure02.pdf}
\caption{(a) The two-dimensional probability distribution for schools of $N = 100$ tetras and with a constant projected radius of $\rproj = 5.3$ BL.
Depicts probability of finding an individual in a given region of the experimental arena.
(b) Radial probability distributions for $N=100$ and $\rproj = 1.6, 2.6, 3.8, 5.3, 6.9, 8.8$ BL.
(c) Number density $\rho$ of schools as a function of the ratio of the group size $N$ and the volume of water within the projected dark disk for groups of $N = 25, 50, 100$ individuals.
The maximum measured number density occurs at \NEW{$\rho_{\text{proj}} = 0.64$}, and is marked by the dashed vertical line.
(d) \NEW{The swim pressure-like $\pswim$} is shown as a function of average number density $\rho$ in trials with varying static $R_{\text{proj}}$ for group sizes of $N = 25, 50, 100$.
The fit line represents an isotherm with an effective temperature for the static experiments of $kT_\text{static} \approx 2.45$.
}
\label{fig2}
\end{figure*}
The fish schools were observed using an IR camera (PointGrey GS3-U3-41C6NIR-C) placed 180~cm over the tank, which records images at 4Mpx and 30~Hz, as shown in \fref{fig:expt}(a).
The tank was illuminated from below with an array of IR LEDs, which are invisible to the fish but visible to our camera.
Further details on video analysis, individual fish tracking, and information on fish husbandry are reported in a previous work \cite{Puckett2018Collective}.
As shown in \fref{fig:expt}(ab), a projector located 226~cm over the experimental arena casts a light field at 30~Hz onto the bottom of the tank.
Throughout this work, the light field consist of a black disk (10 lux) with radius $\rproj$ on a grey background (150 lux).
A cropped sample image of the light field is shown in \fref{fig:expt}(b), with an overlay of the silhouette of a school of tetras for scale.
\subsection{Static experiments}
In our first series of experiments, the projected light fields consisted of a black disk with a constant radius $\rproj$.
We investigate the effect of confinement by the visual field as a function of six different static radii $\rproj = 1.65, 2.61, 3.82, 5.26, 6.9, 8.81$~BL for three group sizes $N = 25, 50$ and 100.
For each trial, the static field was displayed for 1 min to allow the fish to reach a steady state, after which the camera recorded a video for 2 minutes.
We repeated experimental trials with randomly selected $\rproj$ and $N$ to generate 10 replicates for each $\rproj$ and $N$ combination.
We found that the visual field provides an effective confinement as shown in \fref{fig2}(a), where the two-dimensional probability distribution indicates that the time-averaged structure of the school is axisymmetric.
The radial probability distributions ${\cal P}(r)$ for different $\rproj$ are shown for $N=100$ in \fref{fig2}(b).
For the largest $\rproj$, we find that the experimental schools are entirely contained within $\rproj$.
However, for smaller $\rproj$, individuals may freely swim beyond $\rproj$, though
as expected, fish are more likely to be found near the center of the dark spot.
As shown in \fref{fig2}(b), we find ${\cal P}(r)$ decreases with $r$, quickly for small $\rproj$ and more gradually and approximately linearly for larger $\rproj$.
In \fref{fig2}(c), we show that the light field provides a weak confinement which competes with overcrowding, as the fish do not exceed a maximal density.
The projected dark area weakly confines the fish and is our control parameter, where the ratio of the number of fish to the projected dark area is $\rho_\text{proj} = N/ (A_\text{proj} h_\text{water})$, where $h_\text{water} \approx 1.75$ BL.
The school changes size based on the projected dark area, where individual fish must balance an effective attraction to the dark area and an effective repulsion from overcrowding.
The measured number density of the school $\rho$ is determined by finding the quasi-two-dimensional area $A_\text{school}$ of the school and multiplying it by the water depth all in units of average body length.
The area of the school is the area of the covariance error ellipse determining the 95\% confidence interval, which was found to be less susceptible to the locations of outlying fish compared to a simple convex hull area.
When $\rproj$ is large enough so that $\rho_{\text{proj}} \lesssim 0.5$, we see that the school adjusts its density roughly independent of $N$.
We also find that the school has a maximum number density per fish $\rho_{\text{max}} \approx 1.1$ fish/BL$^3$ to which the school will compress.
At smaller $\rproj$ (large $\rho_\text{proj}$), large schools ($N=100$) extend beyond $\rproj$, where fish outside the projected dark disk are more disperse, decreasing $\rho$. Using the visual light field, we can control the number density of the school $\rho$ by approximately a factor of two.
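The control parameter can be evaluated directly from the static radii. The sketch below is illustrative, using the paper's $h_\text{water} \approx 1.75$~BL; note that for $N = 100$ at $\rproj = 5.3$~BL the value lands near the $\rho_\text{proj} = 0.64$ marked in \fref{fig2}(c).

```python
import math

H_WATER = 1.75  # water depth in body lengths (BL)

def rho_proj(n_fish, r_proj):
    """Control parameter: fish per unit volume of water
    within the projected dark disk of radius r_proj (in BL)."""
    return n_fish / (math.pi * r_proj**2 * H_WATER)

for r in (1.6, 2.6, 3.8, 5.3, 6.9, 8.8):
    print(r, round(rho_proj(100, r), 2))
```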
Since the fish are visually (and weakly) confined within the projected light field, one cannot calculate a mechanical pressure since there is no momentum exchange between individuals and a container.
The pressure for an ideal active matter system can be derived as $\Pi_\text{swim} = n \zeta U_0^2 \tau_R /2$ for two dimensional systems\cite{Takatori2014Swim}, where $\zeta$ is the hydrodynamic drag and $U_0 \tau_R$ is the run length in a reorientation time.
However, in many experiments, either due to finite-size effects or insufficient trajectory length, these terms are not measurable.
Similar quantities to the swim pressure have been derived from a virial equation \cite{Gorbonos2016Longrange} and were shown to relate to thermodynamic phases in other collective animal systems~\cite{Sinhuber2017Phase, Sinhuber2019Response}, even though these ``pressures" are not equivalent to the mechanical pressure.
We define a ``pressure" similar to previous experiments, where the swim pressure-like quantity (per unit mass) is related to the ratio of kinetic energy per unit volume,
\begin{equation}
\begin{array}{l}
\pswim = \left\langle \dfrac{N}{V} \cdot \dfrac{1}{N} \sum\limits_{i=1}^N \dfrac{1}{2} \bvec{v}_i^2 \right\rangle_t
\end{array}
\label{eqn1}
\end{equation}
where $N$ is the number of individuals, $V$ is the volume occupied by the school, $\bvec{v}_i$ is the velocity of individual of fish $i$, and the notation $\langle \rangle_t$ denotes a time-average.
Since the system is quasi-two-dimensional, $V$ is measured by $A_\text{school} h_\text{water}$.
This quantity is the average ratio of the total kinetic energy to the volume of the school.
Written more compactly below in terms of number density $\rho=N/V$ and the rms velocity, we have,
\begin{equation}
\begin{array}{l}
\pswim = \left\langle\rho \vrms^2 \right\rangle_t,
\end{array}
\label{eqn2}
\end{equation}
where both $\rho$ and $v_{\text{rms}}$ are functions of time.
This equation is analogous to the classical definition $P = \rho kT$, where $\rho=N/V$, $k$ is Boltzmann's constant, and $T$ is the temperature.
While $\Pi$ and $\vrms$ are positively correlated, both $\rho$ and $\vrms$ are time dependent quantities, so $\Pi$ can not be interchanged with or depend solely on $\vrms$.
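The quantity in \eref{eqn2} can be estimated frame by frame from tracked positions and velocities. The sketch below is a minimal illustration, assuming the 95\% covariance-error-ellipse area described above for $A_\text{school}$.

```python
import numpy as np

CHI2_95_2DOF = 5.991   # chi-square 95% quantile with 2 degrees of freedom
H_WATER = 1.75         # water depth in body lengths

def swim_pressure(positions, velocities, n_fish):
    """Pi_swim = rho * v_rms^2 for a single frame (positions in BL).

    The school area is the 95% covariance error ellipse,
    area = pi * chi2 * sqrt(det(cov)), which is less sensitive
    to outlying fish than a convex hull.
    """
    cov = np.cov(positions.T)                       # 2x2 position covariance
    area = np.pi * CHI2_95_2DOF * np.sqrt(np.linalg.det(cov))
    rho = n_fish / (area * H_WATER)                 # quasi-2D number density
    v_rms_sq = np.mean(np.sum(velocities**2, axis=1))
    return rho * v_rms_sq
```

Averaging this per-frame quantity over time gives $\pswim$ as defined in \eref{eqn1}.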
In \fref{fig2}(d), we show that $\pswim$ increases linearly as function of $\rho$ ($F_{1,16}=27.15$, $p=8.6\times10^{-5}$) for schools of different $N$ and $\rproj$.
The slope of $\pswim(\rho)$ yields an effective temperature of $k \teff = 2.45$ for our set of static experiments.
While this result may at first glance appear trivial, that the fish swim at roughly the same speed independent of the density or group size (e.g., $\vrms^2 = $ const.), this is not the case as our subsequent dynamic experiments show.
\subsection{Dynamic experiments}
In our second series of experiments, projected light fields consisted of a light background and dark disks, where the radius of the disk, $\rproj(t) = R_\text{0} \left( 1 - t/\tau \right)$, decreases linearly in time from $R_0=14.1$~BL to 0 in $\tau$ seconds.
We conducted trials with $\tau = 120, 60$, and $30$~s and for group sizes $N = 50$ and $100$, each with 10 replicates.
In \fref{fig3}(a), we show the number density $\rho$ of the school as a function of time with $N=100$ for each $\tau$, where time is rescaled by $\tau$.
We find that the number density $\rho$ of the schools increases roughly linearly with time as $\rproj$ decreases, until about halfway through the trial, which corresponds approximately to the same maximum number density $\rho_\text{max}$ noted in the static experiments.
After the maximum number density is reached, the area of the school grows and subsequently begins to shoal freely as the area of the dark disk is much smaller than the area of the school.
Since the school is not further compressed after $\tau / 2$, we do not use this data for any subsequent results.
\begin{figure}[t!]
\centering \includegraphics[width=0.95\linewidth]{figure03.pdf}
\caption{(a) The number density of $N = 100$ tetra schools as a function of time normalized by $\tau$ for $\tau = 30, 60, \text{ and } 120$ s. We see that the school is compressible until it reaches the maximum density and afterward decreases in density and shoals freely.
(b) \NEW{The swim pressure-like quantity $\pswim$} plotted as a function of number density for group size $N = 100$ and $\tau = 30, 60$ and $120$ s. For comparison, this data is overlaid on the data for the static experiments. The effective temperature of the school is computed via a least squares linear fit.
(c) Average change in effective temperature of the system between the dynamic and static systems as a function of compression time $\tau$. The fit line is a nonlinear least squares fit to \eref{eqn5}.}
\label{fig3}
\end{figure}
In \fref{fig3}(b), we compute the pressure-like quantity $\pswim$ as a function of $\rho$ with $N=100$ for each compression time $\tau = 30,~60$ and $120$ s.
Note that $\pswim$ here is no longer time averaged, since $\rho$ changes with time.
Even for a slow compression $\tau=120$s, the pressure $\pswim$ increases faster as a function of $\rho$ compared to the static case.
Therefore, the effective temperature $\teff$ (the slope of the isotherm on $\pswim(\rho)$) increases as the compression times $\tau$ decrease.
The school not only adjusts its number density $\rho$ in response to the shrinking projected disk, but also changes its kinetic energy ($\frac{1}{2} \vrms^2$) based on the rate of change of the dark area.
When the system is compressed, the size of the school decreases but there is also a change in the velocities of the fish, where shorter compression times $\tau$ result in larger increases in kinetic energy.
In \fref{fig3}(c), we show the change in effective temperature $k \Delta \teff$ as a function of $\tau$ for different group sizes $N$, where $k \Delta \teff = k\teff - k\teffstatic$.
Here, we find that $k \Delta \teff$ increases with decreasing $\tau$.
Although the school is far from equilibrium, for a first attempt to understand the relationship between $k \Delta \teff$ and $\tau$, we begin with borrowing ideas from classical equilibrium statistical mechanics. The first law of thermodynamics relates the heat added to the system $Q$ and the work done on the system $W$ to the change in internal energy
\begin{align}
\Delta U = Q+W
\label{eqn3}
\end{align}
where the change in internal energy is proportional to the change in temperature, giving $\Delta U = \alpha \Delta \teff$, where $\alpha$ is a constant.
Since all dynamic experiments start with the same $\rho_\text{initial}$ and end with $\rho_\text{final}$, they all do the same amount of work.
Furthermore, to reach the density $\rho_\text{final} \approx \rho_\text{max}$, it takes approximately half the compression time, $\frac{ \tau }{2}$, as shown in \fref{fig3}(a).
While this work, like the change in internal energy is extensive, for the sake of simplicity of the model, we ignore the dependence of each on $N$, and instead concentrate on the time dependence of heat loss.
Given a finite power of heat loss in the system, a thermodynamic process doing the same amount of work in less time should increase the temperature of the system. Therefore, the heat lost by the system as it is compressed to twice the initial density is $Q =\overline{P}_\text{heat} \frac{ \tau }{2}$, where the rate of heat loss $\overline{P}_\text{heat}$ is proportional to the temperature difference $\Delta \teff$.
We can rewrite $Q = -\beta \tau \Delta \teff $.
Using each of these relationships, we can rewrite \eref{eqn3} as
\begin{align}
\alpha \Delta \teff &= W - \beta \tau \Delta \teff .
\label{eqn4}
\end{align}
Solving for $\Delta \teff$, we have
\begin{equation}
\begin{array}{l}
\Delta \teff = \dfrac{W}{ \alpha + \beta \tau} = \dfrac{1}{a+b\tau}
\end{array}
\label{eqn5}
\end{equation}
There are then two fit parameters, $a = \alpha/W$ and $b = \beta/W$, for 57 dynamic runs for both $N=50$ and $100$ and $\tau=30$, $60$, and $120$s. Using nonlinear least squares fitting on all the dynamic data ($N=50$ and $100$), we find $a = 0.037 \pm 0.063$ and $b=0.0044\pm 0.0018$ s$^{-1}$.
While the derivation of \eref{eqn5} was made with numerous idealized assumptions, remarkably, we find the uncertainty in $b$ is small compared to its value, and the model is in good agreement with the experimental data.
Certainly, there are several ways to treat this derivation more rigorously including: considerations of $N$ dependence for $\Delta U$, $Q$ and $W$, or accounting for the rate of work done during the compression in the expression for the rate of heat loss.
However, given this simplified and arguably na\"ive treatment, our results are consistent with the argument for a `finite heat flux' of the school, where a faster dynamic compression yields a larger change in effective temperature.
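Because \eref{eqn5} implies $1/\Delta \teff = a + b\tau$ is linear in $\tau$, the two parameters can be checked with an ordinary linear fit. The sketch below evaluates the model at the reported fit values; the synthetic "measurements" are only a stand-in for the experimental points.

```python
import numpy as np

def delta_teff(tau, a, b):
    """Eq. (5): Delta T_eff = 1 / (a + b * tau)."""
    return 1.0 / (a + b * tau)

a, b = 0.037, 0.0044          # reported nonlinear least-squares fit values
taus = np.array([30.0, 60.0, 120.0])
measured = delta_teff(taus, a, b)     # stand-in for experimental averages

# 1 / Delta T_eff is linear in tau, so a and b follow from a degree-1 fit.
b_est, a_est = np.polyfit(taus, 1.0 / measured, 1)
print(a_est, b_est)
```

On real data the linearized fit weights the points differently than the nonlinear fit of \eref{eqn5}, but it provides a quick consistency check on $a$ and $b$.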
\section{Conclusions and Future Work}
Our findings show that rummy-nose tetra ({\it{Hemigrammus bleheri}}) can be visually confined with projected light fields and that their number density $\rho$ can be controlled without applying any physical force.
Using static light fields, we find that $\pswim$ depends linearly on the number density $\rho$ consistent with the behavior of an isotherm, and that schools exhibit a common effective temperature that is independent of the degree of static confinement imposed on the system.
In response to dynamic light fields, the school adjusts its number density $\rho$ to fit within the projected dark area, balancing its preference for the dark area against overcrowding.
While $\pswim$ and $\rho$ are still linearly related, we find that the slope or effective temperature is dependent on the compression time, with faster compression leading to larger $T_\text{eff}$.
Note that while $\rho_\text{proj}$ grows quadratically with time, we find that $\rho$ increases approximately linearly with $t/\tau$.
This may indicate that $\rho$ is rate limited by biology or physiology.
Additionally, fish on the boundary may respond more strongly to the shrinking dark disk, making $\rho$ nonuniform and larger at the boundary.
Furthermore, our dynamic studies show that there is a finite rate of heat loss during these dynamic compressions. This heat loss drives the system back to $\teffstatic$, likely because the fish have a preferred swim speed.
As stated above, in the experiments utilizing static light fields, the effective temperature remained constant for all confining areas.
We propose that this is due to the fish consistently achieving their preferred swim speed in the static experiments.
This observation is consistent with \eref{eqn5} given $\tau \rightarrow \infty$, where an infinitely slow compression should yield the same effective temperature observed in the static experiments.
However, for the dynamic compression experiments, the compression time $\tau$ determines both the number density $\rho(t)$ and the swim speed $v_\text{rms}(t)$.
For decreasing $\tau$, the effective temperature increases as work is done but there is less time for heat to dissipate.
Many possible biological or physiological reasons exist for this finite rate of heat loss which may include finite response times to either the visual stimuli (mainly concerning fish on the perimeter) or inter-individual kinetics.
Future work may explore various extensions including investigating larger schools or a wider range of $\tau$.
In particular, one could explore new thermodynamic processes on schools of social fish (e.g., effective adiabatic compression) or extend these methods to other social animals.
In this regard, investigating the response of group behavior to perturbations may lead to the construction of proper definitions for pressure-like and temperature-like variables for collective animal behavior and other active matter systems.
\vspace{0.5em}
\begin{acknowledgments}
We thank N. Ouellette and N. Gov for illuminating discussions.
This work is also supported by Gettysburg College and by the Cross-Disciplinary Science Institute at Gettysburg College (X-SIG).
\end{acknowledgments}
\section{Introduction}
Traversable wormholes, first conjectured by Morris and Thorne
\cite{MT88}, are handles or tunnels in the spacetime topology
connecting different regions of our Universe or of different
universes altogether. Interest in traversable wormholes has
increased in recent years due to an unexpected development,
the discovery that our Universe is undergoing an accelerated
expansion \cite{aR98, sP99}. This acceleration is due to
the presence of \emph{dark energy}, a kind of negative
pressure, implying that $\ddot{a}>0$ in the Friedmann equation
$\ddot{a}/a=-\frac{4\pi}{3}(\rho+3p)$. In the equation of
state $p=w\rho$, the range of values $-1<w<-1/3$ results in
$\ddot{a}>0$. This range is referred to as
\emph{quintessence dark energy}. Smaller values of $w$ are
also of interest. Thus $w=-1$ corresponds to Einstein's
cosmological constant \cite{mC01}. The case $w<-1$ is
referred to as \emph{phantom energy} \cite{sS05, oZ05,
fL05a, pK06, RKSG06, KRG10}. Here we have $\rho+p<0$, in
violation of the null energy condition. As a result,
phantom energy could, in principle, support wormholes
and thereby cause them to occur naturally.
Sections 2-4 discuss a combined model of quintessence
matter and ordinary matter that could support a wormhole
in Einstein-Maxwell gravity, once again suggesting that
such wormholes could occur naturally. The theoretical
construction by an advanced civilization is also an
inviting prospect since the model allows the assumption
of zero tidal forces. Sec. 5 considers the effect of
eliminating the electric field. A wormhole solution
can still be obtained but only by introducing a
redshift function that results in enormous radial
tidal forces, suggesting that some black holes may
actually be wormholes fitting the conditions discussed
in this paper and so may be capable of transmitting
signals, a possibility that can in principle be tested.
\section{The model}\label{S:model}
Our starting point for a static spherically symmetric
wormhole is the line element
\begin{equation}\label{E:line1}
ds^{2}=-e^{\Phi(r)}dt^{2}+e^{\Lambda(r)}dr^{2}+r^{2}
(d\theta^{2}+\text{sin}^{2}\theta\, d\phi^{2}),
\end{equation}
where $e^{\Lambda(r)}=1/(1-b(r)/r)$. Here $b=b(r)$ is the
\emph{shape function} and $\Phi=\Phi(r)$ is the \emph{
redshift function}, which must be everywhere finite to prevent
an event horizon. For the shape function, $b(r_0)=r_0$,
where $r=r_0$ is the radius of the \emph{throat} of the
wormhole. Another requirement is the flare-out condition,
$b'(r_0)<1$ (in conjunction with $b(r)<r$), since it
indicates a violation of the weak energy condition, a
primary prerequisite for the existence of wormholes
\cite{MT88}.
In this paper the model proposed for supporting the wormhole
consists of a quintessence field and a second field with
(possibly) anisotropic pressure representing normal matter.
Here the Einstein field equations take on the following form
(assuming $c=1$):
\begin{equation}
G_{\mu\nu}= 8 \pi G ( T_{\mu\nu}+ \tau_{\mu\nu}),
\label{E:field}
\end{equation}
where $\tau_{\mu\nu}$ is the energy momentum tensor of the
quintessence-like field, which is characterized by a free
parameter $w_q$ such that $-1<w_q<-1/3$. Following
Kiselev \cite{vK03}, the components of this tensor satisfy
the following conditions:
\begin{equation}
\tau_t^t= \tau_r^r = -\rho_q,
\label{E:tau1}
\end{equation}
\begin{equation}
\tau_\theta^\theta= \tau_\phi^\phi = \frac{1}{2}(
3w_q+1)\rho_q.
\label{E:tau2}
\end{equation}
Furthermore, the most general energy momentum tensor
compatible with spherical symmetry is
\begin{equation}
T_\nu^\mu= ( \rho + p_t)u^{\mu}u_{\nu}
- p_t g^{\mu}_{\nu}+ (p_r -p_t )\xi^{\mu}\xi_{\nu}
\label{E:Tmunu}
\end{equation}
with $u^{\mu}u_{\mu} = -1 $. The Einstein-Maxwell
field equations for the above metric, corresponding
to a combined model of ordinary and quintessential
matter, are stated next
\cite{RKCKD11, URRRNKH}. Here $E$ is the electric
field strength, $\sigma$ the electric charge density,
and $q$ the electric charge.
\begin{equation}\label{E:Einstein1}
e^{-\Lambda}\left(\frac{\Lambda'}{r}
-\frac{1}{r^2}\right)+\frac{1}{r^2}=
8\pi G\rho+8\pi G\rho_q +E^2,
\end{equation}
\begin{equation}\label{E:Einstein2}
e^{-\Lambda}\left(\frac{\Phi'}{r}+\frac{1}{r^2}\right)
-\frac{1}{r^2}=8\pi Gp_r-8\pi G\rho_q-E^2,
\end{equation}
\begin{equation}\label{E:Einstein3}
\frac{1}{2}e^{-\Lambda}\left(\frac{1}{2}(\Phi')^2
+\Phi''-\frac{1}{2}\Lambda'\Phi'+\frac{1}{r}(\Phi'
-\Lambda')\right)=8\pi G\left(p_t+\frac{1}{2}(3w_q+1)
\rho_q\right)+E^2,
\end{equation}
\begin{equation}\label{E:Max4}
(r^2E)'=4\pi r^2\sigma e^{\Lambda /2}.
\end{equation}
Eq. (\ref{E:Max4}) can also be expressed in the form
\begin{equation}\label{E:Max5}
E(r)=\frac{1}{r^2}\int^r_04\pi (r')^2\sigma
e^{\Lambda /2}dr'=\frac{q(r)}{r^2},
\end{equation}
where $q(r)$ is the total charge on the sphere
of radius $r$.
\section{Solutions}\label{S:solutions}
We assume that for the normal-matter field we have the
following equation of state for the radial pressure
\cite{RKR09}:
\begin{equation}\label{E:EoS1}
p_r = m \rho, \quad -1/3<m<1.
\end{equation}
For the lateral pressure we assume the equation of state
\begin{equation}\label{E:EoS2}
p_t=n\rho, \quad -1/3<n <1.
\end{equation}
Generally, $p_r$ is not equal to $p_t$, unless, of course,
$m=n$.
Following Ref. \cite{RKR09}, the factor $\sigma
e^{\Lambda/2}$ is assumed to have the form $\sigma_0r^s$,
where $s$ is an arbitrary constant and $\sigma_0$ is the
charge density at $r=0$. As a result,
\begin{equation}\label{E:E}
E(r)=4\pi\sigma_0\frac{r^{s+1}}{s+3},
\end{equation}
\begin{equation}\label{E:E2}
E^2(r)=16\pi^2\sigma_0^2\frac{r^{2s+2}}{(s+3)^2},
\end{equation}
and
\begin{equation}\label{E:q}
q^2(r)=16\pi^2\sigma_0^2\frac{r^{2s+6}}{(s+3)^2}.
\end{equation}
The next step is to obtain the shape function $b(r)$
by deriving a differential equation that can be
solved for $e^{-\Lambda(r)}$. The easiest way to
accomplish this is to solve Eq. (\ref{E:Einstein1})
for $8\pi G\rho$ and substitute the resulting
expression in Eq. (\ref{E:Einstein2}), which, in
turn, is solved for $8\pi G\rho_q$. After
substituting this result in Eq. (\ref{E:Einstein3})
and making use of Eqs. (\ref{E:EoS1}) and
(\ref{E:EoS2}), we obtain the simplified form
\begin{equation}\label{E:diffequ}
(e^{-\Lambda})^{\prime} +\frac{\alpha e^{-\Lambda}}{r} =
\frac{\beta}{r}+rE^2\gamma,
\end{equation}
where $\alpha$, $\beta$, and $\gamma$ are dimensionless
quantities given by
\begin{equation}\label{E:1alpha}
\alpha=\frac{-n\Phi'r/(m+1)+\frac{1}{2}(3w_q+1)
+\frac{1}{2}(3w_q+1)\Phi'r/(m+1)+r^2\frac{1}{4}(\Phi')^2
+\frac{1}{2}r^2\Phi''+\frac{1}{2}r\Phi'}{\frac{1}{2}
(3w_q+1)m/(m+1)+n/(m+1)+\frac{1}{4}r\Phi'
+\frac{1}{2}},
\end{equation}
\begin{equation}\label{E:1beta}
\beta=\frac{\frac{1}{2}(3w_q+1)}{\frac{1}{2}
(3w_q+1)m/(m+1)+n/(m+1)+\frac{1}{4}r\Phi'+\frac{1}{2}}
\end{equation}
and
\begin{equation}\label{E:gamma}
\gamma=\frac{1-\frac{1}{2}(3w_q+1)}{\frac{1}{2}
(3w_q+1)m/(m+1)+n/(m+1)+\frac{1}{4}r\Phi'+\frac{1}{2}}.
\end{equation}
Eq. (\ref{E:diffequ}) is linear and would readily yield
an exact solution provided that $\alpha$ and $\beta$ are
constants. This can only happen if $\Phi'=\eta/r$ for
some constant $\eta$. In the first part of this
paper we will assume that $\eta\equiv 0$, leading to the
\emph{zero-tidal-force} solution \cite{MT88}.
Whether occurring naturally or constructed by an
advanced civilization, such a wormhole would be suitable
for humanoid travelers.
Returning to Eq. (\ref{E:diffequ}) and using Eq.
(\ref{E:E2}), the integrating factor
$e^{\alpha\ln r}=r^{\alpha}$ yields the solution
\begin{equation}\label{E:solution1}
e^{-\Lambda}=\frac{\beta}{\alpha}+
\gamma(16\pi^2\sigma_0^2)\frac{r^{2s+4}}
{(s+3)^2(2s+4+\alpha)}+\frac{C}{r^{\alpha}},
\end{equation}
where $C$ is an integration constant. From
$e^{-\Lambda}=1-b(r)/r$ in Sec. \ref{S:model}, we
obtain the shape function
\begin{equation}\label{E:shape1}
b(r)=r\left[1-\frac{\beta}{\alpha}-
\gamma(16\pi^2\sigma_0^2)\frac{r^{2s+4}}
{(s+3)^2(2s+4+\alpha)}-\frac{C}{r^{\alpha}}
\right].
\end{equation}
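As a sanity check (added here, not part of the original derivation), the solution (\ref{E:solution1}) can be substituted back into Eq. (\ref{E:diffequ}) numerically, writing $E^2=Kr^{2s+2}$ with $K=16\pi^2\sigma_0^2/(s+3)^2$. The sketch uses illustrative constant values of $\alpha$, $\beta$, $\gamma$, and $C$ (those arising in the next section for $w_q=-2/3$, $m=n=0.5$, $s=-3.8$, $\sigma_0=1$):

```python
import math

# Illustrative constants; alpha = beta holds for the zero-tidal-force case
ALPHA, BETA, GAMMA = -0.75, -0.75, 2.25
S, C = -3.8, -0.19
K = 16 * math.pi**2 / (S + 3)**2          # sigma_0 = 1

def y(r):
    # e^{-Lambda} from Eq. (E:solution1)
    return (BETA / ALPHA
            + GAMMA * K * r**(2*S + 4) / (2*S + 4 + ALPHA)
            + C / r**ALPHA)

def residual(r, h=1e-6):
    # Left side minus right side of Eq. (E:diffequ)
    dy = (y(r + h) - y(r - h)) / (2 * h)
    return dy + ALPHA * y(r) / r - (BETA / r + r * GAMMA * K * r**(2*S + 2))
```

The residual vanishes (to finite-difference accuracy) at arbitrary radii, confirming the integration.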
\section{Wormhole structure}
In Eq. (\ref{E:solution1}), $C$ is an integration
constant, so mathematically $e^{-\Lambda}$ is a
solution for every $C$, leading to $b(r)$ in Eq.
(\ref{E:shape1}). Physically, however, $b(r)$
satisfies the requirements of a shape
function only for a range of values of $C$. This
problem can best be approached graphically by
assigning some typical values to the various
parameters and adjusting the value of $C$, as
exemplified by Fig. 1. First observe that
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.6\textwidth]{Maxwellfig1.eps}
\end{center}
\caption{The shape function.}
\end{figure}
if $\eta\equiv 0$, then $\alpha=\beta$. For the
given values $w_q=-2/3$, $m=0.5$, $n=0.5$, $\sigma_0
=1$, and $s=-3.8$, a suitable value for $C$
is $-0.19$, as we will see. Substituting in Eq.
(\ref{E:shape1}), we obtain
\begin{equation}\label{E:shape2}
b(r)=127.62r^{-2.6}+0.19r^{1.75}.
\end{equation}
To locate the throat $r=r_0$ of the wormhole,
we define the function $B(r)=b(r)-r$ and determine
where $B(r)$ intersects the $r$-axis, as shown in
Fig. 2. Observe that Fig. 2 indicates that for
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.6\textwidth]{Maxwellfig2.eps}
\end{center}
\caption{$B(r)=b(r)-r$ intersects the $r$-axis at $r=r_0$.}
\end{figure}
$r>r_0$, $B(r)<0$, so that $b(r)<r$ for $r>r_0$,
an essential requirement for a shape function.
Furthermore, $B(r)$ is a decreasing function near
$r=r_0$; so $B'(r)<0$, which implies that
$b'(r_0)<1$, the flare-out condition. With the
flare-out condition now satisfied, the shape
function has produced the desired wormhole
structure. For completeness let us note that
$r_0=5.143$ and $b'(r_0)=0.223$. (Suitable
choices for $C$ corresponding to other parameters
will be discussed at the end of the section.)
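The quoted throat location can be reproduced numerically (a cross-check added here, not part of the original analysis) by bisection on $B(r)=b(r)-r$ for the shape function of Eq. (\ref{E:shape2}):

```python
def b(r):
    # Eq. (E:shape2)
    return 127.62 * r**-2.6 + 0.19 * r**1.75

def b_prime(r):
    return -2.6 * 127.62 * r**-3.6 + 1.75 * 0.19 * r**0.75

def throat(lo=4.0, hi=6.0, tol=1e-9):
    """Bisection for the root of B(r) = b(r) - r, assuming a sign change on [lo, hi]."""
    B = lambda r: b(r) - r
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if B(lo) * B(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

r0 = throat()   # ~5.143, with b'(r0) ~ 0.223 < 1 (flare-out)
```

The result agrees with the quoted values $r_0=5.143$ and $b'(r_0)=0.223$.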
To the right of $r=r_0$, $b(r)$ keeps rising,
but at $r=6.6$, $b'(r)$ is still less than
unity. So at $r_1=6.6$, the interior shape
function, Eq. (\ref{E:shape2}), can be joined
smoothly to the exterior function
\[
b_{\text{ext}}(r)=5.123\sqrt{r}-7.054.
\]
To check this statement, observe that
\[
b_{\text{int}}(6.6)=b_{\text{ext}}(6.6)=6.107,
\]
while
\[
b'_{\text{int}}(6.6)=b'_{\text{ext}}(6.6)
=0.997.
\]
To the right of $r=r_1$, $b(r)/r\rightarrow
0$ as $r\rightarrow \infty$, so that in
conjunction with the constant redshift function,
the wormhole spacetime is asymptotically flat.
(The components $g_{\hat{\theta}\hat{\theta}}$
and $g_{\hat{\phi}\hat{\phi}}$ are already
continuous for the exterior and interior
components, respectively
\cite{LLQ03, LL04, pK09}.)
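The quoted matching values at $r_1=6.6$ can also be checked numerically (an added cross-check): the interior and exterior shape functions agree in value and, to the quoted precision, in slope:

```python
import math

def b_int(r):
    # Interior shape function, Eq. (E:shape2)
    return 127.62 * r**-2.6 + 0.19 * r**1.75

def b_ext(r):
    # Exterior shape function quoted in the text
    return 5.123 * math.sqrt(r) - 7.054

def deriv(f, r, h=1e-6):
    # Central-difference derivative
    return (f(r + h) - f(r - h)) / (2 * h)

R1 = 6.6
```

Both functions evaluate to 6.107 at $r_1$, and both derivatives are 0.997 to the precision quoted in the text.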
Returning to Eq. (\ref{E:shape1}), an example
of an anisotropic case is $m=0.6$, $n=0.3$,
$w_q=-2/3$, $\sigma_0=1$, and $s=-3.8$; a
suitable choice for $C$ is $-0.12$. The
result is
\[
b(r)=160.92r^{-2.6}+0.12r^2.
\]
Here $r_0=5.58$ and $b'(r_0)=0.48$.
An example of a value of $w_q$ closer to $-1$,
the lower end of the quintessence range, is the
following: $w_q=-0.8$, $m=n=0.5$, $\sigma_0=1$,
and $s=-3.5$. Letting $C=-0.04$, the shape
function is
\[
b(r)=429.52r^{-2}+0.04r^{13/6}.
\]
This time $r_0=10.32$ and $b'(r_0)=0.53$.
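The two additional examples can be verified in the same way (a numerical cross-check, not in the original): at the quoted throat radii, $b(r_0)\approx r_0$ and the flare-out condition $b'(r_0)<1$ holds:

```python
# (b(r), b'(r), quoted throat radius) for the two additional examples
examples = [
    (lambda r: 160.92 * r**-2.6 + 0.12 * r**2,
     lambda r: -2.6 * 160.92 * r**-3.6 + 0.24 * r,
     5.58),
    (lambda r: 429.52 * r**-2 + 0.04 * r**(13 / 6),
     lambda r: -2.0 * 429.52 * r**-3 + 0.04 * (13 / 6) * r**(7 / 6),
     10.32),
]
results = [(abs(b(r0) - r0), bp(r0)) for b, bp, r0 in examples]
```

The computed slopes at the throats match the quoted values 0.48 and 0.53 to two decimal places.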
\\
\\
\textbf{Summary:} The emphasis in this paper is
on the isotropic case $m=n$ since a cosmological
setting assumes a homogeneous distribution of
matter. For $w_q=-1$, which is
considered to be the best model for dark energy
\cite{rB08}, $\alpha$, $\beta$, and $\gamma$,
and hence $b=b(r)$, are all independent of
$m$ and $n$. This independence can yield a
valid solution to the field equations that
is consistent with Eqs. (\ref{E:EoS1}) and
(\ref{E:EoS2}) describing ordinary matter.
In particular, if $p=p_r=p_t$, then
$\rho+p>0$ under fairly general conditions.
\section{Could the electric field be eliminated?}
The purpose of this section is to study
conditions under which a combined model of
quintessential and ordinary matter may be
sufficient without the electric field $E$.
If $E$ is eliminated, then the assumption of
zero tidal forces becomes too restrictive. So
we assume that $\Phi'=\eta/r$ for some nonzero
constant $\eta$. This, in turn, means that
\begin{equation}\label{E:redshift}
e^{\Phi}=A_1r^{\eta}.
\end{equation}
Now Eq. (\ref{E:diffequ}) yields
\begin{equation}
e^{-\Lambda}=\frac{\beta}{\alpha}
+\frac{A_2}{r^{\alpha}}.
\end{equation}
Both $A_1$ and $A_2$ are positive integration
constants. (The reason that $A_2$ has to be
positive is that $\beta$ is close to zero
whenever $w_q$ is close to $-1/3$.) The
parameters $\alpha$ and $\beta$ now become
(for $\eta\neq 0$)
\begin{equation}\label{E:2alpha}
\alpha = \frac{\frac{1}{2}(3w_q+1) +
\frac{\eta^2}{4}+\eta[\frac{1}{2}(3w_q+1)
-n]/(m+1)}
{\frac{\eta}{4}+ \frac{1}{2}+
[\frac{1}{2}(3w_q+1)m+n]/(m+1)}
\end{equation}
and
\begin{equation}\label{E:2beta}
\beta = \frac{\frac{1}{2}(3w_q+1)}
{\frac{\eta}{4}+ \frac{1}{2}+
[\frac{1}{2}(3w_q+1)m+n]/(m+1)}.
\end{equation}
The last two equations are similar to those in
Ref. \cite{RKCKD11}, which deals with galactic
rotation curves.
As noted in Sec. \ref{S:model}, the shape function $b=b(r)$
is obtained from $e^{-\Lambda(r)}$, so that
\begin{equation}\label{E:bprime}
b(r)=r(1-e^{-\Lambda(r)})=r\left(1-\frac{\beta}{\alpha}
-\frac{A_2}{r^{\alpha}}\right).
\end{equation}
To meet the condition $b(r_0)=r_0$, we must have
\[
1-\frac{\beta}{\alpha}-\frac{A_2}{r_0^{\alpha}}=1.
\]
Solving for $r_0$, we obtain the radius of the throat:
\begin{equation}\label{E:rzero}
r_0=\left(-\frac{\alpha}{\beta}A_2\right)^{1/\alpha}.
\end{equation}
Since $A_2>0$, $\alpha$ and $\beta$ must have opposite
signs. From
$b(r)=r(1-\beta/\alpha -A_2/r^{\alpha})$, we have
\begin{equation*}
b'(r_0)=1-\frac{\beta}{\alpha}-A_2(1-\alpha)
r_0^{-\alpha}
\end{equation*}
and, after substituting Eq. (\ref{E:rzero}),
\[
b'(r_0)=1-\frac{\beta}{\alpha}-A_2(1-\alpha)
\left(-\frac{\beta}{\alpha A_2}\right),
\]
which simplifies to $b'(r_0)=1-\beta$. It
follows immediately that if $\beta<0$, then
$b'(r_0)>1$, so that the flare-out condition
cannot be met. To get a value for $\beta$
between 0 and 1, the exponent $\eta$ in the
redshift function, Eq. (\ref{E:redshift}),
has to be negative and sufficiently large in
absolute value. Such a value will cause
$\alpha$ to be negative, which can best be
seen from a simple numerical example: for
convenience, let us choose $w_q=-1$, the
lower end of the quintessence range, and
$m=n=0.1$. Then we must have $\eta<-6$.
The result is a large positive numerator
in Eq. (\ref{E:2alpha}) because the last
term is positive and $\eta^2/4$ is large.
So $\alpha$ and $\beta$ have opposite
signs, as expected. (Observe that for the
isotropic case, if $w_q=-1$, then the
values of $\alpha$ and $\beta$ are
independent of $m$ and $n$.)
Continuing the numerical example, if we
let $A_2=1$ and $\eta=-7$, then $\beta=0.8$,
$\alpha= -14.6$, and
\[
b(r)=1.055r-r^{15.6}.
\]
From Eq. (\ref{E:rzero}), $r_0= 0.820$,
while $b'(r_0)= 1-\beta=0.2$. As we have
seen, $b'(r_0)$ is independent of $A_2$.
So we are free to choose a smaller value of
$A_2$ in Eq. (\ref{E:rzero}) to obtain a
larger throat size.
We conclude that we can readily find an
interior wormhole solution around $r=r_0$
without $E$, provided that we are willing
to choose a sufficiently large (and
negative) value for $\eta$, resulting in
what may be called an unpalatable redshift
function:
$\Phi= \ln A_1+\eta\,\ln r$.
At the throat, $|\Phi'|=|\eta/r_0|$, which
indicates the presence of an enormous
radial tidal force, even for large throat
sizes. (Recall that from Ref. \cite{MT88},
to meet the tidal constraint, we must have
roughly $|\Phi'|<(10^8\,\text{m})^{-2}$.)
Such a wormhole would not be suitable for
a humanoid traveler, but it may still be
useful for sending probes or for
transmitting signals.
The enormous tidal force is actually
comparable to that of a solar-mass black
hole of radius 2.9 km near the event horizon,
making the solution physically plausible:
since we have complete control over $b'(r_0)$
and $r_0$, we are not only able to satisfy the
flare-out condition but we can place the throat
wherever we wish. Moreover, the assumption
$w_q=-1$ is equivalent to Einstein's
cosmological constant, the best model for
dark energy \cite{rB08}. As noted at the
end of Sec. 4, also physically
desirable in a cosmological setting is the
assumption of isotropic pressure, i.e.,
$m=n$ in the respective
equations of state. As we have seen, in
the isotropic case our conclusions are
independent of $m$ and $n$. So by
placing the throat just outside the event
horizon of a suitable black hole, it is
possible in principle to construct a
``transmission station" for transmitting
signals to a distant advanced civilization
and, conversely, receiving them. If such a
wormhole were to exist, it would be
indistinguishable from a black hole at a
distance. This suggests a possibility in
the opposite direction: a black hole could
conceivably be a wormhole fitting our
description. The easiest way to test this
hypothesis is to listen for signals,
artificial or natural, emanating from a
(presumptive) black hole.
\section{Conclusion}
This paper discusses a class of wormholes supported
by a combined model consisting of quintessential
matter and ordinary matter, first in
Einstein-Maxwell gravity and then in Einstein
gravity, that is, in the absence of an electric
field. To obtain an exact solution, it was necessary
to assume that the redshift function has the form
$e^{\Phi(r)}=A_1r^{\eta}$ for some constant $\eta$.
In the Einstein-Maxwell case, this constant could
be taken as zero, thereby producing a
zero-tidal-force solution, which, in turn, would
make the wormhole traversable for humanoid travelers.
Without the electric field $E$, the exponent $\eta$
has to be nonzero and leads to a less desirable
solution with large tidal forces. Concerning the
exact solution, it is shown in Ref. \cite{pK11}
that the existence of an exact solution implies
the existence of a large set of additional
solutions, suggesting that wormholes of the
type discussed in this paper could occur
naturally.
It is argued briefly in the Einstein case with
a quintessential-dark-energy background that
some black holes may actually be wormholes with
enormous tidal forces, a hypothesis that may
be testable.
\section{Introduction}
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{detector_loc_v2.png}
\caption{A bird's-eye view of J-PARC MLF and the JSNS$^2$\, detector location. The red rectangle in the figure indicates the place where the (anti-)neutrino detector is located. \label{fig:MLF_nu}}
\end{figure}
JSNS$^2$\, (J-PARC Sterile Neutrino Search at J-PARC Spallation Neutron Source) \cite{Harada:2013} is an experiment to search for neutrino oscillations with $\Delta m^2$\, $\sim 1$ $\mathrm{eV}^2$, as reported by the LSND experiment with 3.8$\sigma$ significance in 1998 \cite{Aguilar:2001}, via the observation of $\bar{\nu}_{\mu} \to \bar{\nu}_{e}$~ appearance oscillation.
The experimental setup of JSNS$^2$\, consists of an (anti-)neutrino detector placed 24 m away from the mercury target in the Materials and Life Science Experimental Facility (MLF) of J-PARC, as shown in figure \ref{fig:MLF_nu}. The detector contains 17 tons of gadolinium (Gd) loaded liquid scintillator in a neutrino target (NT) volume to detect $\bar{\nu}_{e}$ \, via the inverse beta-decay (IBD) reaction $\bar{\nu}_{e} + p \to e^{+} + n$~ with a delayed coincidence method. The positron yields scintillation instantaneously, which is detected as a prompt signal. When the neutron, after thermalization, is captured by Gd, several gamma-rays with a total energy of around 8 MeV are emitted. These gamma-rays generate scintillation observed as a delayed signal around 30 $\mu \mathrm{s}$\, behind the prompt signal. The delayed coincidence of the prompt and delayed signals identifies the $\bar{\nu}_{e}$ \, signal.
The JSNS$^2$\, detector is composed of three layers of two types of liquid scintillator in a stainless steel tank whose volume is around 60 $\mathrm{m}^3$\, (shown in figure \ref{fig:Detector}). The innermost layer of the detector is an ultraviolet-transparent acrylic vessel containing Gd-loaded liquid scintillator (Gd-LS) for detecting $\bar{\nu}_{e}$. The space between the stainless steel tank and the acrylic vessel is filled with Gd-unloaded liquid scintillator (LS), which is optically separated into two layers by black boards (optical separators). The inner layer is a gamma catcher (GC) to absorb the energy of gamma-rays from Gd. Inner photomultiplier tubes (PMTs) are attached on the black boards to observe light from both the NT and the GC. The outer LS layer serves as a cosmic-ray anti-counter.
\begin{figure}[htbp]
\centering
\includegraphics[width=.7\textwidth]{Detector_Detail_v2.png}
\caption{A cutaway view of a detailed 3D model of the JSNS$^2$\, detector. The blue cylindrical container placed at the center of the detector is the acrylic vessel for Gd-LS. PMTs and black boards surround the vessel. Parts of the support structures for PMTs, colored pink, are welded to the inner surface of the stainless steel tank. \label{fig:Detector}}
\end{figure}
In addition to the delayed coincidence technique for the IBD detection, the JSNS$^2$\, detector has a pulse shape discrimination (PSD) capability for particle identification of neutral particles, especially to separate cosmic-ray-induced fast neutrons from the neutrino signal, as described in \cite{Ajimura:2017}. In general, because of the oxygen quenching effect, it is crucial to properly remove dissolved oxygen and to maintain an environment preventing oxygen contamination in order to keep the optical properties of the liquid scintillator, such as the light yield and the PSD capability.
This paper describes the stainless steel tank and its related tests, including liquid leakage and gas-tightness of the tank. We developed a new sealing scheme for a large flange with poor flatness, as well as a quantitative measurement technique and evaluation method for gas-tightness using the decrease of relative pressure. They are described in detail for future work or similar types of detectors, e.g., reactor neutrino monitors.
\section{Tank Design and Structure}
The detailed structure of the JSNS$^2$\, detector is explained elsewhere \cite{Ajimura:2017}. Therefore, this section concentrates on the design and the structure of the stainless steel tank. Detailed drawings were developed by Morimatsu Industrial Co. Ltd., based on a conceptual design from JSNS$^2$\, collaborators \cite{web:Morimatsu}.
\subsection{Main Tank, Top Lid and Anti Oil-Leak Tank Design}
The stainless steel tank of the JSNS$^2$\, detector consists of two parts: a main tank and a top lid. There is an anti oil-leak tank surrounding the tank, which prevents liquid from spreading out in case of a leak from the main tank. The side view drawings and the detailed 3D model of the entire tank are shown in figure \ref{fig:Drawings} and figure \ref{fig:3DModel}, respectively.
\begin{figure}[htbp]
\centering
\includegraphics[width=.8\textwidth]{Tank_Morimatsu_All_v3.png}
\caption{A side view of the tank and the anti oil-leak tank. The red circles correspond to the LS buffer space, which forms a large ring, preventing a change of the liquid level due to thermal expansion of the LS. \label{fig:Drawings}}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=.7\textwidth]{sstank_3dmodel_v4.png}
\caption{A detailed 3D model of the drawing without balustrades on the decks. The anti oil-leak tank is shown as a cutaway of the green cylinder, with decks on top for operators to step on. A cutaway view of the dark blue object corresponds to the main tank of the stainless steel tank. The top lid, colored light blue, has a large flange structure around the circumference for connection onto the main tank. \label{fig:3DModel}}
\end{figure}
The main tank is cylindrical, with a diameter of 4.6 m and a height of 4.4 m. The thickness of the stainless steel is 5 mm. On the top of the main tank, there is a large flange structure along the circumference, indicated in figure \ref{fig:3DModel}, which serves as the connection flange; the top lid rests on it through a sealing material to prevent gas or liquid leakage from the main tank.
As indicated in figures \ref{fig:Detector} and \ref{fig:Drawings}, the top lid forms the LS buffer space along the circumference of the tank to absorb thermal expansion of the LS. The base area of this LS buffer ring is approximately 4 $\mathrm{m}^2$\,, which corresponds to a capability of absorbing an LS level change of $\pm 10$ cm for a $\pm 10$ $^{\circ}\mathrm{C}$\, temperature change.
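The quoted buffer capacity is consistent with a rough estimate. The liquid volume and volumetric expansion coefficient below are assumptions typical of organic liquid scintillators, not numbers from the text:

```python
# Assumed inputs (not from the text): ~45 m^3 of LS feeding the buffer and a
# volumetric expansion coefficient of ~9e-4 per degC, typical of organic LS.
V_LS = 45.0          # m^3, assumed liquid volume
BETA_EXP = 9e-4      # 1/degC, assumed expansion coefficient
DT = 10.0            # degC, temperature excursion from the text
A_BUFFER = 4.0       # m^2, buffer-ring base area from the text

dV = V_LS * BETA_EXP * DT              # expanded volume, m^3
dh_cm = dV / A_BUFFER * 100.0          # level change in the buffer ring, cm
```

Under these assumptions the level change is about 10 cm, matching the quoted capacity.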
The top surface of the buffer ring has eight flange ports, which are used as feed-throughs for PMT cables, for mounting LS filling pipes, and as ports of a nitrogen purging system.
There are eight reinforcing beams along the radial direction and sixteen reinforcing plates in the polar direction on the top lid.
Thanks to this reinforcing structure, the stainless steel tank can tolerate up to 0.2 atm relative pressure with respect to the outside pressure.
\subsection{Sealing around the connection flange}
To compensate for the poor flatness of the connection flange, we decided to use Herme-seal No.800, an oil-resistant liquid-type gasket provided by Nihon Hermetics Co. Ltd., as the sealing material instead of an O-ring or a rubber gasket \cite{web:Hermetics}. A merit of liquid-type gaskets is their capability of filling gaps caused by the poor flatness.
Herme-seal No.800 loses elasticity as it dries. Thus, it is necessary to avoid desiccation before the lid closure is done. This was achieved by mixing 10 w\% of a water-based special diluent produced by the company into the liquid-type gasket. As a result, the desiccation time was extended to more than 30 min, which is enough for our purpose.
\begin{figure}[htbp]
\centering
\includegraphics[width=.5\textwidth]{sealing_photo.png}
\caption{A photo showing the situation when Herme-seal was heaped on the connection flange. \label{fig:SealPhoto}}
\end{figure}
For the actual sealing work, we put a 3 mm thick layer of the diluted Herme-seal on the flange, as shown in figure \ref{fig:SealPhoto}. The lid was closed as soon as possible after this application. The sealing capability for gas (gas-tightness) was examined after this work, as described in section~\ref{sec:gasleaktest}.
\section{Construction}
The construction of the stainless steel tank began in December 2017 and finished at the end of February 2018. Morimatsu was in charge of the construction of the tank. All construction processes were done at J-PARC in the open air. The production of the components had been done at the Morimatsu factory in advance.
\subsection{Liquid Leak Check}
\begin{figure}[htbp]
\centering
\includegraphics[width=.8\textwidth]{schematics_LiqLeakTest.pdf}
\caption{A schematic of a liquid level during the liquid leak test. \label{fig:FigLiqLeak}}
\end{figure}
After construction of the main tank and the top lid, we filled the tank with water to check for liquid leakage from the welded joints and flanges. The liquid level was set above the flanges of the top lid, as shown in figure \ref{fig:FigLiqLeak}. To detect leakage, we left the tank in this situation overnight, and then searched for leakage points and checked the liquid level. As a result, no serious liquid leakage was found.
\subsection{Transportation in J-PARC}
The tank was transported from the construction place (M1) to the HENDEL building for storage and further construction, mainly dedicated to work inside the tank, such as PMT mounting. As described in \cite{Ajimura:2017}, the empty JSNS$^2$\, detector is planned to be transported between HENDEL and MLF every summer. Therefore, this tank transportation serves as a rehearsal for the planned detector transportation.
\begin{figure}[htbp]
\centering
\includegraphics[width=.7\textwidth]{trans_map.pdf}
\caption{The route to transport the stainless steel tank from M1 (the construction place) to HENDEL building (the storage building). \label{fig:TransMap}}
\end{figure}
The orange arrows in figure \ref{fig:TransMap} show the course of the tank transportation towards the HENDEL building, which follows the same route as the actual detector transportation after the junction to MLF. The tank was transported by a low-bed trailer truck (shown on the right of figure \ref{fig:TransPhoto}). The truck went along the course slowly (less than 20 km/h) for safety. We attached an acceleration sensor at the point displayed on the left of figure \ref{fig:TransPhoto} to measure the acceleration at the top of the detector.
We had already performed a mock-up test of the transportation of the detector components, such as PMTs and their support structures, using a mini-truck on the same road course \cite{HinoJPS:2017}. The test was conducted under a more severe acceleration condition and showed no damage to the detector components; therefore, a direct comparison of the acceleration measurements between the mock-up test and this stainless steel tank transportation can be made.
\begin{figure}[htbp]
\centering
\includegraphics[width=.9\textwidth]{trans_photo_v2.png}
\caption{Left: the position of the acceleration sensor. Right: a photo of the tank and the low bed trailer truck. \label{fig:TransPhoto}}
\end{figure}
Figure \ref{fig:TransPlot} shows plots of the acceleration during the transportation. The left plot is the result of the tank transportation, and the right one is that of the mock-up test.
The largest acceleration was around 1 $\mathrm{m/sec^2}$\, during the tank transportation; in contrast, the mock-up had been exposed to more than 1 $\mathrm{m/sec^2}$\, for the entire duration of the round trip.
This result leads to the conclusion that the detector components will suffer no damage during the actual JSNS$^2$\, detector transportation.
\begin{figure}[htbp]
\centering
\includegraphics[width=.9\textwidth]{trans_plot_v3.png}
\caption{Plots of the acceleration sensor output. Left: the history of the acceleration on the top lid during transportation. Right: the data from the detector mock-up from \cite{HinoJPS:2017}. Note that the duration of the tank transportation was longer than that of the mock-up test; however, it is obvious that the tank was exposed to smaller acceleration during the transportation compared to the mock-up. \label{fig:TransPlot}}
\end{figure}
\section{Gas-tightness test \label{sec:gasleaktest}}
As described above, to obtain the full performance of the IBD detection, it is crucial to keep the PSD capability in the entire volume of the NT and GC for fast neutron background rejection. Maintaining a nitrogen atmosphere in the gas phases of the detector is essential to prevent oxygen contamination of the Gd-LS and LS.
Figure \ref{fig:N2Flow} illustrates the concept of a bubbler system to keep a nitrogen atmosphere in the gas phases. Nitrogen gas is continuously supplied from gas cylinders to the gas phases of the ring buffer space and the chimney independently, and then goes to the bubbler. In the bubbler, the end of the pipe is immersed in the liquid. The depth is 1 cm below the liquid level, which is equivalent to about a 0.1 kPa pressure difference between the atmosphere and the gas in the tank. This nitrogen system prevents air intrusion into the gas phase of the detector.
In order to keep the positive pressure, it is important that the flanges on the detector have adequate sealing capability.
If the flow rate can be set to 100 mL/min, a nitrogen gas cylinder can supply nitrogen for 1.5 months.
Therefore, we set the tolerance level of total gas leakage to 100 mL/min, comparable to the nitrogen flow.
\begin{figure}[htbp]
\centering
\includegraphics[width=.8\textwidth]{schematics_N2Purging_v2.pdf}
\caption{A concept of nitrogen gas flow system. Nitrogen gas is continuously supplied from gas cylinders to the gas phases of the JSNS$^2$\, detector, and then goes to the bubblers.
The end of the pipe of the nitrogen system is immersed 1 cm below the surface of the liquid to prevent air counter-flow. \label{fig:N2Flow}}
\end{figure}
\subsection{Concept of the test}
If the stainless steel tank contains gas at a higher pressure than the atmosphere, the relative pressure of the gas with respect to the atmospheric pressure ($\Delta P$) decreases as a function of time if the tank has a leak and no supplemental gas. The leak rate is proportional to the instantaneous $\Delta P$. Therefore, the time evolution of $\Delta P$ follows an exponential function:
\begin{equation}
\label{eq:leak_concept}
\begin{split}
\frac{\mathrm{d} (\Delta P)}{\mathrm{d} t} \propto - \Delta P \quad \Leftrightarrow \quad \Delta P(t) = \Delta P_0 \exp \left( -\frac{t}{\tau} \right) \,,
\end{split}
\end{equation}
\noindent where $\Delta P_0$ is the relative pressure at $t=0$. This equation shows that the time constant $\tau$ characterizes the leakage of the system. It corresponds to a time evolution of the number of gas molecules contributing to $\Delta P$, denoted as $n_{\mathrm{cor}}(t)$:
\begin{equation}
\label{eq:n_correspond}
\begin{split}
n_{\mathrm{cor}}(t) = n_{\mathrm{cor}}(0) \exp \left( -\frac{t}{\tau} \right) \,, \quad \mathrm{where} \quad n_{\mathrm{cor}}(0) = \frac{\Delta P_0 V}{RT} \,.
\end{split}
\end{equation}
Since a 100 mL/min flow rate will be maintained to keep a nitrogen ambience during the physics runs of JSNS$^2$\,, if we assume that the leak rate in number of molecules equals that of the nitrogen flow, the time constant corresponding to the tolerance level of leakage is calculated as follows:
\begin{equation}
\label{eq:n_leak}
\begin{split}
\left. \frac{\mathrm{d} n_{\mathrm{cor}}(t)}{\mathrm{d} t} \right|_{t=0} &= \frac{\Delta P_{\mathrm{g.p.}} V_{\mathrm{g.p.}}}{RT\tau} \sim \frac{1 \times 0.1 \left[ \mathrm{atm \cdot L/min}\right]}{RT} \,. \\
\tau &\sim \frac{0.001 \times 2000}{1 \times 0.1} = 20 \, \left[ \mathrm{min} \right] \,,
\end{split}
\end{equation}
\noindent where $\Delta P_{\mathrm{g.p.}}$, the relative pressure in the gas phases of the detector during physics runs, is kept at about 0.1 kPa ($\sim 0.001$ atm) by the nitrogen flow system, and $V_{\mathrm{g.p.}}$ is the total volume of the gas phases, around 2 $\mathrm{m}^3$\,.
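The order-of-magnitude estimate in eq. \eqref{eq:n_leak} can be reproduced with a few lines of arithmetic. The sketch below uses only the values quoted in the text; the helper name is ours.

```python
def tolerance_time_constant(delta_p_atm, volume_l, flow_atm_l_per_min):
    """Time constant tau at which the initial molecular leak rate
    dP*V/(R*T*tau) equals the reference flow (in atm*L/min):
    tau = dP * V / flow, with pressures in atm and volumes in litres."""
    return delta_p_atm * volume_l / flow_atm_l_per_min

# dP_g.p. ~ 0.1 kPa ~ 0.001 atm, V_g.p. ~ 2 m^3 = 2000 L,
# tolerated leak = nitrogen flow of 0.1 L/min at ~1 atm.
tau_ref = tolerance_time_constant(0.001, 2000.0, 1.0 * 0.1)
print(tau_ref)  # -> 20.0 (minutes), as in eq. (n_leak)
```

The 30-fold volume scaling of the next subsection then follows from the same proportionality, $\tau \propto V$.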
\subsection{Experimental Setup}
The gas-tightness test was performed in June 2018. After closing the top lid with the Herme-seal No.800 sealing, we supplied dry air and set the relative pressure to $\Delta P = 14.6$ kPa. To measure the relative pressure as a function of time, a digital relative pressure sensor GC31 \cite{web:Nagano} with $\sim 0.5$ kPa resolution was mounted on one of the small flanges on the top lid chimney. In addition, three different types of thermocouples were used for monitoring the temperatures of the gas and the tank surface. Since a variation of the atmospheric pressure affects the relative pressure measurement as well, an atmospheric pressure and temperature logging module TR-73U was placed outside of the tank \cite{web:TandD}.
Analog outputs from each sensor, except for the TR-73U, were acquired using a data logger GL840 with 1/60 $\mathrm{Hz}$\, sampling \cite{web:Graphtec}. The measurement continued for about 3 days.
The time constant corresponding to the leak level of this setup is independent of $\Delta P$. However, the total gas volume changes the time constant proportionally. Because the gas volume of this setup is 30 times greater than that of the gas phases remaining after the detector is filled with Gd-LS/LS, e.g., during physics runs, the tolerance-level time constant becomes
\begin{equation}
\label{eq:tau_tolerance}
\begin{split}
\tau^{\mathrm{test}}_{\mathrm{tol}} \sim \tau^{\mathrm{ref}}_{\mathrm{tol}} \times 30 = 600 \, \left[ \mathrm{min} \right],
\end{split}
\end{equation}
\noindent where $\tau^{\mathrm{test}}_{\mathrm{tol}}$ represents the time constant equivalent to the tolerance level of leakage in this setup, and $\tau^{\mathrm{ref}}_{\mathrm{tol}}$ is the time constant computed in eq. \eqref{eq:n_leak}.
\begin{figure}[htbp]
\centering
\includegraphics[width=.8\textwidth]{schematics_GasLeakTest_v3.png}
\caption{A schematic view of the gas-tightness test for the JSNS$^2$\, detector. A relative pressure sensor and three thermocouples were mounted on the top lid chimney flange. To monitor the atmospheric pressure and temperature during the measurement, a TR-73U sensor and logger module was used. Because the total volume is about 60 $\mathrm{m}^3$\,, the tolerance level of leakage corresponds to the time constant $\tau^{\mathrm{test}}_{\mathrm{tol}} \sim 600$ min in this setup. \label{fig:GasLeak}}
\end{figure}
The measured relative pressure as a function of time is shown in figure \ref{fig:DataP}, where the horizontal axis represents the time interval from the beginning of the test. The markers in the plot indicate the relative pressure values. As an uncertainty from the resolution of the sensor, we assigned a $\pm 0.5$ kPa systematic uncertainty to each point, shown as vertical error bars.
\subsection{Prediction for Fit}
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{deltaP_data_v2.pdf}
\caption{The measured $\Delta P$ history during the measurement. The vertical axis shows the relative pressure with respect to the atmospheric pressure in units of kPa. The origin of the horizontal axis corresponds to the time when the measurement started. The vertical error bars come from the resolution of the pressure sensor, and the horizontal ones correspond to the root mean square of each step. \label{fig:DataP}}
\end{figure}
A change of environmental conditions, such as the atmospheric pressure and the gas temperature in the tank, changes the relative pressure reading on the sensor even in the absence of a leak, and leads to a systematic error in the time constant measurement.
Therefore, in order to extract the time constant from the data shown in figure \ref{fig:DataP}, we developed a prediction model simulating the relative pressure as a function of time, based on the supplemental data.
A discrete representation of eq. \eqref{eq:leak_concept} can be written as
\begin{equation}
\label{eq:euler_dp}
\begin{split}
\Delta P^{n+1} = \left( 1-\frac{\Delta t}{\tau} \right) \Delta P^n \quad (n=0,1,\cdots) \,,
\end{split}
\end{equation}
\noindent where $\Delta P^n$ is the relative pressure at each step, and $\Delta t$ denotes the time interval corresponding to one step. Once an initial value $\Delta P_0$ is given, the time evolution of $\Delta P$ can be predicted successively (left of figure \ref{fig:FitModel}); this corresponds to Euler's method for the numerical solution of differential equations.
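The Euler iteration of eq. \eqref{eq:euler_dp} can be sketched in a few lines; the function name is ours, and the numbers ($\Delta P_0 = 14.6$ kPa, $\tau = 3500$ min, $\Delta t = 1$ min) are those used in the text and figures.

```python
import math

def predict_no_correction(dp0, tau, n_steps, dt=1.0):
    """Euler iteration of the leak equation:
    dP^{n+1} = (1 - dt/tau) * dP^n, n = 0, 1, ..."""
    dp = [dp0]
    for _ in range(n_steps):
        dp.append((1.0 - dt / tau) * dp[-1])
    return dp

# With dt = 1 min << tau = 3500 min, the iteration closely tracks the
# analytic solution dP0 * exp(-t/tau) of eq. (leak_concept).
pred = predict_no_correction(14.6, 3500.0, 4000)
analytic = 14.6 * math.exp(-4000.0 / 3500.0)
print(abs(pred[-1] - analytic) < 0.01)  # -> True
```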
\begin{figure}[htbp]
\centering
\includegraphics[width=.9\textwidth]{fitmodel_comp.pdf}
\caption{A comparison of predictions of $\Delta P$. The left plot shows the prediction without the environmental effects (computed based on eq. \eqref{eq:euler_dp}). The right plot shows the prediction including the environmental effects shown in figure \ref{fig:PatmTin}. $\tau = 3500 \, \mathrm{min}$ is assumed in both plots as an example. \label{fig:FitModel}}
\end{figure}
To include the effects of the atmospheric pressure and the inner gas temperature changes in the prediction, their logged data were used in the algorithm explained below.
\begin{itemize}
\item First, the absolute pressure at step $n$ is calculated using the $P_{\mathrm{atm}}$ data shown as a blue line in figure \ref{fig:PatmTin}. The initial value of $\Delta P$ is set to the applied pressure $\Delta P_0$; therefore, $\Delta P^0 = \Delta P_0 = 14.6$ kPa.
\begin{equation}
P_{\mathrm{abs}}^n = \Delta P^n + P_{\mathrm{atm}}^{n} \,.
\end{equation}
\item The effect of a change of $T_{\mathrm{in}}$ is applied to $P_{\mathrm{abs}}$ based on Boyle--Charles's law to obtain the corrected relative pressure $\Delta P_{\mathrm{corr}}$. The inner gas temperature $T_{\mathrm{in}}$ was measured using the long thermocouples shown in figure \ref{fig:GasLeak}, and is shown as a light green line in figure \ref{fig:PatmTin}.
\begin{equation}
\begin{split}
\Delta P_{\mathrm{corr}}^{n} &= P_{\mathrm{abs}}^{n} \times \frac{T_{\mathrm{in}}^{n}}{T_{\mathrm{in}}^{n-1}} - P_{\mathrm{atm}}^{n} \quad (n \ge 1) \,, \\
\Delta P_{\mathrm{corr}}^{n} &= P_{\mathrm{abs}}^{n} - P_{\mathrm{atm}}^{n} \quad (n = 0) \,.
\end{split}
\end{equation}
\item Finally, $\Delta P$ at the next step $n+1$ is obtained following eq. \eqref{eq:euler_dp} by substituting $\Delta P_{\mathrm{corr}}$ for $\Delta P$ to include the correction.
\begin{equation}
\Delta P^{n+1} = \left( 1 - \frac{\Delta t}{\tau} \right) \Delta P_{\mathrm{corr}}^{n} \,.
\end{equation}
\end{itemize}
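The three steps above can be sketched as a single loop. This is a sketch with names of our own choosing; the constant-environment data at the end are purely illustrative, not the logged data of figure \ref{fig:PatmTin}.

```python
def predict_with_correction(dp0, tau, p_atm, t_in, dt=1.0):
    """dP prediction including the environmental corrections.
    p_atm: logged atmospheric pressure [kPa]; t_in: logged inner gas
    temperature [K]; both sampled at the step interval dt."""
    dp = [dp0]
    for n in range(len(p_atm) - 1):
        p_abs = dp[n] + p_atm[n]                 # step 1: absolute pressure
        if n == 0:
            dp_corr = p_abs - p_atm[n]
        else:                                    # step 2: Boyle-Charles correction
            dp_corr = p_abs * t_in[n] / t_in[n - 1] - p_atm[n]
        dp.append((1.0 - dt / tau) * dp_corr)    # step 3: Euler leak step
    return dp

# With a constant environment the correction vanishes and the prediction
# reduces to the plain Euler decay dP0 * (1 - dt/tau)^n.
pred = predict_with_correction(14.6, 3500.0, [101.3] * 100, [293.15] * 100)
```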
The right of figure \ref{fig:FitModel} displays the $\Delta P$ prediction including the environmental effects.
Note that $\Delta t$ represents 1 minute because the data points were logged at 1-minute intervals. Both environmental data sets contain 4000 data points. Thus, the prediction of $\Delta P$ can be obtained up to 4000 min, enough to fit the entire range of the $\Delta P$ data.
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{20180615_PatmTin_v3.pdf}
\caption{History of the atmospheric pressure $P_{\mathrm{atm}}$ (blue) and the inner gas temperature $T_{\mathrm{in}}$ (green) in the tank. These data are used for the corrections to the $\Delta P$ prediction. \label{fig:PatmTin}}
\end{figure}
\subsection{Fit and Result}
Using the prediction including the correction with the environmental data, we calculated $\chi^{2}$ while varying $\tau$ as a free fit parameter. The $\chi^2$ is calculated by the formula
\begin{equation}
\chi^2 = \sum_{i} \left( \frac{\Delta P^{\mathrm{data}}_i - \Delta P^{\mathrm{pred}}(t_i)}{\sigma^{\Delta P}_i} \right)^2 \,
\end{equation}
\noindent where $\Delta P^{\mathrm{data}}_i \, \mathrm{and} \, t_i$ represent $i$-th data point in figure \ref{fig:DataP}, and $\sigma^{\Delta P}_i$ corresponds to their error bars respectively. As the prediction is computed in the discrete process explained above, a spline interpolation function denoted as $\Delta P^{\mathrm{pred}}(t_i)$ is used to get values between each point.
Figure \ref{fig:BestFit} shows the best-fit curve of the prediction with $\chi^2/\mathrm{ndf} = 3.7/10$, where $\mathrm{ndf}$ is the number of degrees of freedom of the $\chi^2$. The error of $\tau$ is estimated from the values at which $\Delta \chi^2 = 1$, where
\begin{equation}
\Delta \chi^2 = \chi^2 - \chi^2_{\mathrm{min}} \,.
\end{equation}
As a result of the fit, we obtained the time constant $\tau = 3410 \pm 144 \, \mathrm{min}$, whose central value and lower limit are more than 5 times larger than the tolerance level $\tau^{\mathrm{test}}_{\mathrm{tol}} = 600$ min.
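The fit procedure amounts to a one-parameter $\chi^2$ scan. The sketch below uses names of our own choosing and synthetic noise-free data in place of the processed $\Delta P$ points.

```python
import math

def chi2(ts, dps, sigmas, model):
    """chi^2 between the measured dP points and a model curve dP(t)."""
    return sum(((dp - model(t)) / s) ** 2
               for t, dp, s in zip(ts, dps, sigmas))

def scan_tau(ts, dps, sigmas, taus, model_for_tau):
    """Scan tau; return best tau, chi2_min and the dchi2 <= 1 interval."""
    chis = [chi2(ts, dps, sigmas, model_for_tau(tau)) for tau in taus]
    chi_min = min(chis)
    best = taus[chis.index(chi_min)]
    inside = [tau for tau, c in zip(taus, chis) if c - chi_min <= 1.0]
    return best, chi_min, (min(inside), max(inside))

# Synthetic noise-free data generated with tau = 3400 min.
ts = [0.0, 500.0, 1000.0, 1500.0, 2000.0, 2500.0, 3000.0]
dps = [14.6 * math.exp(-t / 3400.0) for t in ts]
sigmas = [0.5] * len(ts)
model_for_tau = lambda tau: (lambda t: 14.6 * math.exp(-t / tau))
best, chi_min, (lo, hi) = scan_tau(ts, dps, sigmas,
                                   list(range(3000, 3801, 25)), model_for_tau)
print(best)  # -> 3400
```

In the actual analysis the model curve is the spline-interpolated environmental prediction rather than a pure exponential, but the scan logic is the same.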
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\textwidth]{ModelFit_ErrImp_20190403.pdf}
\caption{The result of the prediction fit showing the best fit curve (blue) at $\tau = 3410 \, \mathrm{min} \,$ with the processed data (green). \label{fig:BestFit}}
\end{figure}
\section{Summary}
The design, construction and tests for the stainless steel tank of the JSNS$^2$\, detector have been reported.
The acceleration measurements and the mock-up test prove that the annual detector transportation between MLF and HENDEL can be done without problems.
We also developed a new sealing technique with a liquid gasket for a flange with poor flatness, and applied it to the connection flange of the tank. The gas-sealing capability was examined by observing the decrease in positive relative pressure as a function of time. Using a fit technique based on the prediction of the pressure decline, we obtained a time constant $\tau = 3410 \pm 144 \, \mathrm{min}$, more than 5 times larger than the tolerance level $\tau^{\mathrm{test}}_{\mathrm{tol}} = 600$ min.
These test results guarantee the stable performance of the JSNS$^2$\, detector for the sterile neutrino search at J-PARC MLF.
\acknowledgments
We warmly thank J-PARC and KEK for various kinds of support. We also appreciate Morimatsu Industry Co., Ltd. for the collaboration.
This work is also supported by the JSPS grants-in-aid (Grant Number 16H06344, 16H03967), Japan.
\label{sec1}
The imminent arrival of quantum computing devices opens up new possibilities for utilizing quantum machine learning (QML)~\citep{biamonte2017quantum, schuld2015introduction, schuld2019quantum} to improve the efficiency of classical machine learning algorithms in many new scientific domains like drug discovery~\citep{smalley2017ai} and efficient solar conversion~\citep{romero2014quantum}. Although the exploitation of quantum computing hardware to carry out QML is still in its early exploratory stages, the rapid development of quantum hardware has motivated advances in quantum neural networks (QNN) deployable on noisy intermediate-scale quantum (NISQ) devices~\citep{power_data, Preskill2018quantumcomputingin, huggins2019towards, huang2021power, kandala2017hardware}, where not enough qubits can be spared for quantum error correction and the imperfect qubits have to be directly employed at the physical layer~\citep{ball2021real, egan2021fault, guo2021testing}. Nevertheless, a compromise QNN has been proposed by employing a quantum-classical hybrid model that relies on the optimization of a variational quantum circuit (VQC)~\citep{benedetti2019parameterized, mitarai2018quantum}. The resilience of the VQC to certain types of quantum noise errors and its high flexibility concerning coherence time and gate requirements allow the VQC to be applied to many promising applications on NISQ devices~\citep{chen2020variational, yang2021decentralizing, qi2021classical, qi2021qtn, yang2022bert, huang2021experimental, du2021grover, du2020expressive, noormandipour2022matching}.
Although many empirical studies of the VQC for quantum machine learning have been reported, its theoretical understanding requires further investigation in terms of representation and generalization powers, particularly when a non-linear operator is employed for dimensionality reduction. This work introduces a tensor-train network (TTN) on top of the VQC model to implement a TTN-VQC. The TTN is a non-linear operator mapping high-dimensional features into low-dimensional ones. The resulting low-dimensional features then go through the VQC framework. Compared with a hybrid model in which the dimensionality reduction is performed by a classical neural network (NN)~\citep{abiodun2018state}, the TTN can be genuinely realized by utilizing universal quantum circuits, so an end-to-end quantum neural network can truly be set up.
In this work, we discuss the theoretical performance of TTN-VQC in the context of functional regression. Functional regression refers to building a vector-to-vector operator such that the regression output can approximate a target operator. In more detail, given a $Q$-dimensional input vector space $\mathbb{R}^{Q}$ and a measurable $U$-dimensional output vector space $\mathbb{R}^{U}$, the TTN-VQC based vector-to-vector regression aims to find a TTN-VQC operator $f: \mathbb{R}^{Q} \rightarrow \mathbb{R}^{U}$ such that the output vectors of $f$ can approximate desirable target ones.
In particular, this work concentrates on the error performance analysis for TTN-VQC-based functional regression by leveraging the error decomposition technique~\citep{mohri2018foundations} to factorize an expected loss over the TTN-VQC operator into the sum of the approximation error, the estimation error, and the training error. We separately upper bound each error component by harnessing statistical machine learning theory. More specifically, we define $\mathbb{F}_{TV}$ as the TTN-VQC hypothesis space, which represents a collection of TTN-VQC operators. Then, given a data distribution $\mathcal{D}$, a smooth target function $h_{\mathcal{D}}^{*}$, and a set of $N$ training data drawn independently and identically distributed from $\mathcal{D}$, for a loss function $\ell$ and an optimal TTN-VQC operator $f_{\mathcal{D}}^{*} \in \mathbb{F}_{TV}$, an expected loss is defined as:
\begin{equation}
\mathcal{L}_{\mathcal{D}}(f_{\mathcal{D}}^{*}) := \mathbb{E}_{\textbf{x} \sim \mathcal{D}} \left[ \ell(h_{\mathcal{D}}^{*}(\textbf{x}), f_{\mathcal{D}}^{*}(\textbf{x})) \right],
\end{equation}
which can be minimized by using an empirical loss as:
\begin{equation}
\mathcal{L}_{S}(f_{\mathcal{D}}^{*}) := \frac{1}{N} \sum\limits_{n=1}^{N} \ell(h_{\mathcal{D}}^{*}(\textbf{x}_{n}), f_{\mathcal{D}}^{*}(\textbf{x}_{n})).
\end{equation}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{error.pdf}
\caption{{\it An illustration of the error decomposition technique. $h_{\mathcal{D}}^{*}$ is a smooth target function in the family of all functions $\mathcal{Y}^{\mathcal{X}}$ over a data distribution $\mathcal{D}$; $\mathbb{F}_{TV}$ denotes the family of TTN-VQC operators as shown in the dashed square; $f_{\mathcal{D}}^{*}$ represents the optimal hypothesis from the space of TTN-VQC operators over the distribution $\mathcal{D}$; $f_{S}^{*}$ denotes the best empirical hypothesis over the set of training samples $S$; $\bar{f}_{S}$ is the actual returned hypothesis based on the training dataset $S$.}}
\label{fig:error}
\end{figure}
We set the loss function $\ell$ as the mean absolute error (MAE)~\citep{chai2014root} because the MAE ensures the property of 1-Lipschitz continuity~\citep{qi2020mean}. Furthermore, we separately define $f_{\mathcal{D}}^{*}$, $f_{S}^{*}$, and $\bar{f}_{S}$ as the optimal TTN-VQC operator, the empirical optimal operator, and the actually returned operator. Then, as shown in Figure~\ref{fig:error}, the error decomposition technique~\citep{mohri2018foundations} factorizes the expected loss $\mathcal{L}_{\mathcal{D}}(\bar{f}_{S})$ into three error components as:
\begin{equation}
\begin{split}
\mathcal{L}_{\mathcal{D}}(\bar{f}_{S}) &= \underbrace{\mathcal{L}_{\mathcal{D}}(f_{\mathcal{D}}^{*})}_{Approximation\hspace{0.5mm} Error} + \underbrace{\mathcal{L}_{\mathcal{D}}(f_{S}^{*}) - \mathcal{L}_{\mathcal{D}}(f_{\mathcal{D}}^{*})}_{Estimation\hspace{0.5mm} Error} + \underbrace{ \mathcal{L}_{\mathcal{D}}(\bar{f}_{S}) - \mathcal{L}_{\mathcal{D}}(f_{S}^{*})}_{Training\hspace{0.5mm} Error} \\
&\le \mathcal{L}_{\mathcal{D}}(f_{\mathcal{D}}^{*}) + 2\sup\limits_{f \in \mathbb{F}_{TV}} \vert \mathcal{L}_{\mathcal{D}}(f) - \mathcal{L}_{S}(f) \vert + \mathcal{L}_{\mathcal{D}}(\bar{f}_{S}) - \mathcal{L}_{\mathcal{D}}(f_{S}^{*}) \\
& \le \mathcal{L}_{\mathcal{D}}(f_{\mathcal{D}}^{*}) + 2\mathcal{\hat{R}}_{S}(\mathbb{F}_{TV}) + \nu,
\end{split}
\end{equation}
where $\mathcal{L}_{\mathcal{D}}(f_{\mathcal{D}}^{*})$ is associated with the approximation error, $\hat{\mathcal{R}}_{S}(\mathbb{F}_{TV})$ is the empirical Rademacher complexity~\citep{bousquet2002complexity} of the family $\mathbb{F}_{TV}$, and $\nu$ refers to the training error that results from the optimization bias of gradient-based algorithms. In this work, our theoretical results concentrate on the error analysis by upper bounding each error component, and our empirical results are presented to corroborate the theoretical results.
\subsection{Main Results}
Our main theoretical results and the significance of TTN-VQC based functional regression are summarized as follows:
\begin{itemize}
\item Representation power: our upper bound on the approximation error is derived as $\frac{\Theta(1)}{\sqrt{U}} + \mathcal{O}(\frac{1}{\sqrt{M}})$, where $U$ and $M$ separately denote the number of qubits and the number of quantum measurements. The result suggests that the expressive capability of the TTN-VQC is mainly determined by the number of qubits, while the quality of the expressiveness is also affected by the number of quantum measurements. Furthermore, our results reflect the fact that more algorithmic qubits and a longer decoherence time are necessarily required to ensure a stronger representation power of the TTN-VQC.
\item Generalization power: we derive an upper bound on the estimation error in terms of the empirical Rademacher complexity $\mathcal{\hat{R}}_{S}(\mathbb{F}_{TV})$, which is further upper bounded by the constant $\frac{2P}{\sqrt{N}}(\sum_{k=1}^{K} \Lambda_{k} + \Lambda')$. Here, $P$, $N$, and $K$ separately denote the input power, the amount of training data, and the order of the multi-dimensional tensor; $\Lambda_{k}$ and $\Lambda'$ refer to the upper bounds on the Frobenius norms of the TTN parameters. This result suggests that, given the training data and model structure, additive noise corresponds to a larger value of $P$, which results in a looser upper bound and hence a weaker generalization capability.
\item Optimization bias: the Polyak--{\L}ojasiewicz (PL) condition is employed to initialize the TTN-VQC parameters, and the training error converges exponentially to a small loss value. Since the barren plateau is a major unsolved issue in the training process of quantum neural networks~\citep{mcclean2018barren}, our model setting and the optimization configuration based on the PL condition could be beneficial to the improvement of TTN-VQC training.
\end{itemize}
Besides, our empirical experiments on functional regression are designed to corroborate the corresponding theoretical results on representation and generalization powers, and the analysis of the optimization performance.
\subsection{Related Work}
The related work comprises theoretical and technical aspects. On the theoretical side, Du \emph{et al.}~\citep{du2021learnability} analyze the learnability of quantum neural networks with parameterized quantum circuits and a gradient-based classical optimizer. A theoretical comparison between this work and Du \emph{et al.}~\citep{du2021learnability} is shown in Table~\ref{tab:comp}, where our theoretical results mainly follow the error decomposition method~\citep{mohri2018foundations, qi2020analyzing}. More specifically, in this work, we factorize an expected loss based on the MAE over a TTN-VQC operator into three error components: the approximation error, the estimation error, and the training error. We separately derive upper bounds on each error component, and the results are summarized in Table~\ref{tab:comp}.
\begin{table}[tpbh]\footnotesize
\center
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c||c|c|}
\hline
\textbf{Category} & \textbf{This work} & \textbf{Du \emph{et al.}~\citep{du2021learnability}} \\
\hline
Learning problem & Regression & Classification \\
\hline
Dimensionality reduction & TTN & N/A \\
\hline
Representation power & $\frac{\Theta(1)}{\sqrt{U}} + \mathcal{O}(\frac{1}{\sqrt{M}})$ & N/A \\
\hline
Generalization power & $\frac{2P}{\sqrt{N}}(\sum_{k=1}^{K} \Lambda_{k} + \Lambda')$ & N/A \\
\hline
Conditions for Optimization bias & $\mu$-PL $+$ 1-Lipschitz & $\mu$-PL + $\beta$-smooth \\
\hline
\end{tabular}
\caption{A comparison of learning theory for VQC between this work and Du \emph{et al.}~\citep{du2021learnability}}
\label{tab:comp}
\end{table}
Besides, the techniques in this work rely on the TTN and VQC models. The TTN, also known as the matrix product state (MPS), was first put forth by Novikov \emph{et al.}~\citep{novikov2015tensorizing} in machine learning applications. Chen \emph{et al.}~\citep{chen2021end} employ the MPS to extract low-dimensional features for the VQC.
Although this work leverages the TTN for feature dimensionality reduction, we rebuild the TTN as a parallel neural network architecture, where a sigmoid activation function is separately imposed upon each neural network. Moreover, since VQC models have been widely used in the domain of quantum machine learning~\citep{dunjko2018machine, ostaszewski2021reinforcement, cerezo2021variational}, we follow the standard VQC pipeline such that our theoretical results can be employed for the general VQC model.
\section{RESULTS}
\label{sec2}
\subsection{Preliminaries}
Before we delve into the detailed architecture of the TTN-VQC, we first introduce the basic components of TTN and VQC, which have been previously proposed and widely used in quantum machine learning.
\subsubsection{Variational Quantum Circuit}
\label{sec:vqc}
As shown in Figure~\ref{fig:pqc}, we first introduce a VQC which is composed of three components: (1) Tensor Product Encoding (TPE); (2) Parametric Quantum Circuit (PQC); (3) Measurement.
The TPE model, which is shown in Figure~\ref{fig:pqc} (a), was proposed in~\citep{stoudenmire2016supervised}; it aims at converting a classical data vector $\textbf{x}$ into a quantum state $\vert \textbf{x} \rangle$ by adopting a one-to-one mapping:
\begin{equation}
\label{eq:tpe}
\vert \textbf{x} \rangle = \left(\otimes_{i=1}^{U} R_{Y}(\frac{\pi}{2} x_{i}) \right) \vert 0 \rangle^{\otimes U} = \begin{bmatrix} \cos(\frac{\pi}{2} x_{1}) \\ \sin(\frac{\pi}{2} x_{1}) \end{bmatrix} \otimes \begin{bmatrix} \cos(\frac{\pi}{2} x_{2}) \\ \sin(\frac{\pi}{2} x_{2}) \end{bmatrix} \otimes \cdot\cdot\cdot \otimes \begin{bmatrix} \cos(\frac{\pi}{2} x_{U}) \\ \sin(\frac{\pi}{2} x_{U}) \end{bmatrix},
\end{equation}
where each $x_{i}$ is strictly restricted to the domain $[0, 1]$ such that the conversion between $\textbf{x}$ and $\vert \textbf{x} \rangle$ is a reversible one-to-one mapping.
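As a small pure-Python illustration of eq. \eqref{eq:tpe} (the helper names are ours), the $U$-qubit state can be built as a chain of Kronecker products of the single-qubit amplitude pairs:

```python
import math
from functools import reduce

def kron(a, b):
    """Kronecker product of two real amplitude vectors."""
    return [x * y for x in a for y in b]

def tpe_state(x):
    """|x> of eq. (tpe): tensor product of the single-qubit states,
    written directly as [cos(pi/2 * x_i), sin(pi/2 * x_i)]."""
    return reduce(kron, ([math.cos(math.pi / 2 * xi),
                          math.sin(math.pi / 2 * xi)] for xi in x))

# Three features in [0, 1] -> a normalized 2^3-amplitude state; since
# each x_i in [0, 1] fixes its angle uniquely, x is exactly recoverable.
psi = tpe_state([0.2, 0.5, 0.9])
print(len(psi))  # -> 8
```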
\begin{figure}
\includegraphics[width=0.95\textwidth]{pqc.pdf}
\caption{{\it A VQC model consists of three components: (a) Tensor Product Encoding (TPE); (b) Parametric Quantum Circuit (PQC); (c) Measurement. The TPE employs a series of $R_{Y}(\frac{\pi}{2} x_{i})$ gates to transform classical data into quantum states. The PQC is composed of CNOT gates and single-qubit rotation gates $R_{X}$, $R_{Y}$, $R_{Z}$ with free model parameters $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$, and $\boldsymbol{\gamma}$. The CNOT gates impose the characteristic of quantum entanglement among qubits, and the gates $R_{X}$, $R_{Y}$ and $R_{Z}$ are adjustable during the training stage. The PQC model in the green dashed square is repeatedly copied to build a deeper model. The measurement converts the quantum states $\vert z_{1}\rangle, \vert z_{2}\rangle, ..., \vert z_{U}\rangle$ into the corresponding expectation values $\langle z_{1} \rangle, \langle z_{2} \rangle, ..., \langle z_{U} \rangle$. The measured outputs are connected to a loss function, and gradient descent algorithms can be used to update the VQC parameters.}}
\label{fig:pqc}
\end{figure}
The PQC framework is illustrated in Figure~\ref{fig:pqc} (b), where $U$ quantum channels correspond to the $U$ qubits currently accessible on NISQ devices. Here, the controlled-NOT (CNOT) gates realize the quantum entanglement, and the single-qubit rotation gates $R_{X}$, $R_{Y}$, and $R_{Z}$ compose the PQC model with free model parameters $\boldsymbol{\alpha} = \{\alpha_{1}, \alpha_{2}, ..., \alpha_{U}\}$, $\boldsymbol{\beta} = \{\beta_{1}, \beta_{2}, ..., \beta_{U}\}$ and $\boldsymbol{\gamma} = \{\gamma_{1}, \gamma_{2}, ..., \gamma_{U}\}$. The PQC model corresponds to a linear operator $\mathcal{T}_{\boldsymbol{\theta}_{vqc}}$ that transforms the quantum input state $\vert \textbf{x} \rangle$ into the output one $\vert \textbf{z} \rangle$. The PQC model in the green dashed square is repeatedly copied to compose a deeper architecture.
The measurement framework, as shown in Figure~\ref{fig:pqc} (c), outputs expectation values with respect to the Pauli-Z operators, namely $\langle z_{1} \rangle$, $\langle z_{2} \rangle$, ..., $\langle z_{U} \rangle$, which results in the output vector $\textbf{z} = [\langle z_{1} \rangle, \langle z_{2} \rangle, ..., \langle z_{U} \rangle]^{T}$. The expectation vector $\textbf{z}$ is classical data and is connected to the operation of functional regression.
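For intuition, and ignoring entanglement between qubits, the Pauli-Z expectation of a single real-amplitude qubit $a\vert 0\rangle + b\vert 1\rangle$ is $a^2 - b^2$; for a TPE-encoded feature this reduces to a deterministic function of the classical input. A minimal sketch with names of our own choosing:

```python
import math

def pauli_z_expectation(state):
    """<Z> of a real single-qubit state a|0> + b|1>: a^2 - b^2."""
    a, b = state
    return a * a - b * b

# For the TPE qubit [cos(pi/2 * x), sin(pi/2 * x)] this reduces to
# cos^2 - sin^2 = cos(pi * x), a classical value in [-1, 1].
x = 0.3
state = [math.cos(math.pi / 2 * x), math.sin(math.pi / 2 * x)]
print(abs(pauli_z_expectation(state) - math.cos(math.pi * x)) < 1e-12)  # -> True
```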
\subsubsection{Tensor-Train Network}
\label{sec:ttn}
TTN, also known as the matrix product state (MPS)~\citep{kolezhuk1997matrix}, refers to a tensor network aligned in a 1-dimensional array and is generated by repeatedly applying singular value decomposition (SVD)~\citep{stewart1993early} to a many-body wave function~\citep{ran2020tensor}. A multi-dimensional tensor $\mathcal{W}$ associated with the TTN can be faithfully factorized into a multiplication of $4$-order core tensors $\mathcal{W}_{k}$. Mathematically, given a set of TT-ranks $\{R_{0}, R_{1}, ..., R_{K}\}$, for an input tensor $\mathcal{X} \in \mathbb{R}^{I_{1} \times I_{2} \times \cdot\cdot\cdot \times I_{K}}$ and an output tensor $\mathcal{Y} \in \mathbb{R}^{J_{1} \times J_{2} \times \cdot\cdot\cdot \times J_{K}}$, the core tensors $\mathcal{W}_{k} \in \mathbb{R}^{R_{k-1} \times I_{k} \times J_{k} \times R_{k}}$ associated with the TTN achieve
\begin{equation}
\begin{split}
\mathcal{Y}(j_{1}, j_{2}, ..., j_{K}) &= \sum\limits_{i_{1}=1}^{I_{1}} \cdot\cdot\cdot \sum\limits_{i_{K}=1}^{I_{K}} \mathcal{W}((i_{1}, j_{1}), ..., (i_{K}, j_{K})) \mathcal{X}(i_{1}, i_{2}, ..., i_{K}) \\
&= \sum\limits_{i_{1}=1}^{I_{1}} \sum\limits_{i_{2} =1}^{I_{2}} \cdot\cdot\cdot \sum\limits_{i_{K}=1}^{I_{K}} \left(\prod\limits_{k=1}^{K} \mathcal{W}_{k}(i_{k}, j_{k})\right) \cdot \prod\limits_{k=1}^{K}\mathcal{X}_{k}(i_{k}) \\
&= \prod\limits_{k=1}^{K} \left( \sum\limits_{i_{k}=1}^{I_{k}} \mathcal{W}_{k}(i_{k}, j_{k}) \mathcal{X}_{k}(i_{k}) \right) \\
&= \prod\limits_{k=1}^{K} \mathcal{Y}_{k}(j_{k}),
\end{split}
\end{equation}
where the $K$-order tensors $\mathcal{X}$ and $\mathcal{Y}$ are separately decomposed into multiplications of lower-order tensors $\mathcal{X}_{k}$ and $\mathcal{Y}_{k}$, and the condition $\prod_{k} j_{k} < \prod_{k}i_{k}$ should be met to perform the dimensionality reduction. The output of the TTN, $\mathcal{Y}(j_{1}, j_{2}, ..., j_{K})$, would normally be processed by a non-linear activation like the sigmoid function. In this work, however, we deliberately impose the sigmoid activation on each $\mathcal{Y}_{k}$ such that a parallel neural network architecture can be built.
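A minimal sketch of this contraction, with all TT-ranks set to 1 so that each core $\mathcal{W}_{k}$ reduces to an $I_k \times J_k$ matrix (the names and example numbers are ours, and the full rank-$R_k$ case adds extra bond indices):

```python
def tt_layer(x_factors, w_cores):
    """Rank-1 TT contraction: with X = X_1 x ... x X_K (each X_k a
    length-I_k vector) and cores W_k of shape (I_k, J_k), the output
    factors are Y_k(j) = sum_i W_k(i, j) X_k(i), so that
    Y(j_1, ..., j_K) = prod_k Y_k(j_k)."""
    out = []
    for xk, wk in zip(x_factors, w_cores):
        n_in, n_out = len(wk), len(wk[0])
        out.append([sum(wk[i][j] * xk[i] for i in range(n_in))
                    for j in range(n_out)])
    return out

# K = 2 with I = (4, 4) -> J = (2, 2): a 16-dim input mapped to 4-dim,
# satisfying the reduction condition prod_k j_k < prod_k i_k.
x = [[1.0, 0.0, 2.0, 0.0], [0.5, 0.5, 0.0, 0.0]]
w = [[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]],
     [[2.0, 0.0], [0.0, 2.0], [1.0, 1.0], [1.0, 1.0]]]
y = tt_layer(x, w)
print(y)  # -> [[3.0, 2.0], [1.0, 1.0]]
```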
\subsection{Theoretical Results}
This section first exhibits the architecture of TTN-VQC, and then we analyze the upper bounds on the representation and generalization powers and the optimization performance.
\subsubsection{The Architecture of TTN-VQC}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{vqc_ttn.pdf}
\caption{{\it An illustration of the TTN-VQC architecture. (a) Tensor-Train Network (TTN); (b) Variational Quantum Circuit (VQC); (c) Functional Regression. $\mathcal{T}_{\boldsymbol{\theta}_{ttn}}$ and $\mathcal{T}_{\boldsymbol{\theta}_{vqc}}$ represent the TTN and VQC operators with trainable parameters $\boldsymbol{\theta}_{ttn}$ and $\boldsymbol{\theta}_{vqc}$, respectively. $\mathcal{T}_{\boldsymbol{y}}$ refers to a reversible classical-to-quantum mapping. The VQC model in the green dashed square can be repeatedly copied to generate a deep parametric model. The framework of functional regression outputs loss values and evaluates gradients of the loss function to update the model parameters $\boldsymbol{\theta}_{vqc}$ and $\boldsymbol{\theta}_{ttn}$. $\mathcal{T}_{lr}$ refers to a fixed regression matrix.}}
\label{fig:vqc_ttn}
\end{figure}
The TTN-VQC pipeline is shown in Figure~\ref{fig:vqc_ttn}, where (a) denotes the framework of the TTN, (b) is associated with the VQC model, and (c) represents the operation of functional regression. The VQC model is based on the standard architecture shown in \ref{sec:vqc}, and the TTN is designed according to the framework in \ref{sec:ttn}. To introduce non-linearity into the TTN model, a sigmoid activation function $\sigma$ is applied to each $\mathcal{Y}_{k}(j_{k})$ such that
\begin{equation}
\mathcal{Y}(j_{1}, j_{2}, ..., j_{K}) = \prod\limits_{k=1}^{K} \sigma \left( \mathcal{Y}_{k}(j_{k}) \right),
\end{equation}
which introduces the non-linearity to the TTN features and corresponds to a parallel neural network structure.
The parallel DNN structure is illustrated in Figure~\ref{fig:parallel}, where a $K$-order tensor $\mathcal{X}(i_{1}, i_{2}, ..., i_{K})$ is first decomposed into $4$-order tensors $\mathcal{X}_{1}(i_{1})$, $\mathcal{X}_{2}(i_{2})$, ..., $\mathcal{X}_{K}(i_{K})$, and each $\mathcal{X}_{k}(i_{k})$ goes through $\mathcal{W}_{k}(i_{k}, j_{k})$. The resulting $\mathcal{Y}_{1}(j_{1})$, $\mathcal{Y}_{2}(j_{2})$, ..., $\mathcal{Y}_{K}(j_{K})$ are non-linearly activated by the sigmoid function before being multiplied together into a $K$-order tensor $\mathcal{Y}(j_{1}, j_{2}, ..., j_{K})$.
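As an illustration, the parallel structure can be sketched in a few lines of NumPy; the rank-1 factors, dimensions, and random weights below are hypothetical toy values rather than the paper's actual configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def parallel_ttn_forward(x_factors, weights):
    """Rank-1 toy sketch of the parallel TTN structure: each factor
    X_k (length i_k) is mapped by W_k (i_k x j_k), passed through a
    sigmoid, and the K activated outputs are combined by an outer
    product into a K-order tensor Y."""
    y_factors = [sigmoid(W.T @ x) for x, W in zip(x_factors, weights)]
    Y = y_factors[0]
    for y in y_factors[1:]:
        Y = np.multiply.outer(Y, y)  # combine the K paths
    return Y

rng = np.random.default_rng(0)
dims_in, dims_out = [7, 16, 7], [2, 2, 2]   # prod(j_k)=8 < prod(i_k)=784
xs = [rng.standard_normal(i) for i in dims_in]
ws = [rng.standard_normal((i, j)) for i, j in zip(dims_in, dims_out)]
Y = parallel_ttn_forward(xs, ws)
print(Y.shape)  # (2, 2, 2)
```

Note that every entry of $\mathcal{Y}$ is a product of sigmoid outputs and therefore lies in $(0, 1)$.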
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{ttn_sn.pdf}
\caption{{\it An illustration of the parallel DNN structure for the TTN. $\mathcal{W}_{1}$, $\mathcal{W}_{2}$, ..., $\mathcal{W}_{K}$ are parametric tensors associated with the TTN model. The $K$-order tensor $\mathcal{X}$ is decomposed into $4$-order tensors $\mathcal{X}_{1}$, $\mathcal{X}_{2}$, ..., $\mathcal{X}_{K}$. The generated tensors $\mathcal{Y}_{1}$, $\mathcal{Y}_{2}$, ..., $\mathcal{Y}_{K}$ are combined into a resulting tensor $\mathcal{Y}$. The condition $\prod_{k=1}^{K} j_{k} < \prod_{k=1}^{K} i_{k}$ must be met to achieve the feature dimensionality reduction.}}
\label{fig:parallel}
\end{figure}
More significantly, the non-linearity introduced by the sigmoid function sets up a parallel DNN structure for the TTN and helps to build a one-to-one mapping in the TPE framework, because the sigmoid function compresses the functional values into the interval $(0, 1)$. Proposition~\ref{prop:prop1} states that the sigmoid activation function ensures a one-to-one mapping from the classical data to the quantum state.
\begin{prop}
\label{prop:prop1}
The sigmoid activation function applied to the TTN ensures that the TPE acts as a linear unitary operator $ \vert \textbf{y} \rangle \mapsto \mathcal{T}_{\textbf{y}}(\vert 0 \rangle^{\otimes U})$ such that a quantum state $\vert\textbf{y} \rangle$ can be generated from a classical vector $\textbf{y}$. Conversely, the classical vector $\textbf{y}$ can be exactly recovered from the operator $\mathcal{T}_{\textbf{y}}$.
\end{prop}
Proposition~\ref{prop:prop1} can be justified based on Eq.~(\ref{eq:tpe}), where $\cos(\frac{\pi}{2} x_{i})$ and $\sin(\frac{\pi}{2}x_{i})$ are invertible one-to-one functions because each $x_{i} \in (0, 1)$. Hence, we can recover the original classical vector $\textbf{y}$ from the quantum state $\vert \textbf{y} \rangle$.
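A minimal numerical sketch of this reversibility, assuming one amplitude pair $(\cos\frac{\pi}{2}x_{i}, \sin\frac{\pi}{2}x_{i})$ per entry as suggested by the $\cos$/$\sin$ form above:

```python
import numpy as np

def tpe_encode(y):
    """Map each x_i in (0, 1) to the amplitude pair
    (cos(pi/2 * x_i), sin(pi/2 * x_i)), one qubit per entry."""
    return np.stack([np.cos(np.pi / 2 * y), np.sin(np.pi / 2 * y)], axis=1)

def tpe_decode(amps):
    """Invert the encoding: x_i = (2/pi) * atan2(sin, cos)."""
    return 2.0 / np.pi * np.arctan2(amps[:, 1], amps[:, 0])

y = np.array([0.2, 0.5, 0.9])
assert np.allclose(tpe_decode(tpe_encode(y)), y)  # exact round trip
```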
The VQC outputs a classical vector $\textbf{z} = [\langle z_{1} \rangle, \langle z_{2} \rangle, ..., \langle z_{U} \rangle]^{T}$, which is then fed into the functional regression framework, where a fixed linear regression operator $\mathcal{T}_{lr}$ further transforms $\textbf{z}$ into the output vector. The MAE is used to measure the loss value, and the gradients of the loss function are used to update the parameters of both the VQC and TTN models.
\subsubsection{Upper Bounds On the Approximation Error}
Theorem~\ref{thm:thm1} gives an upper bound on the approximation error, which relies on the inherent parallel structure of the TTN model and the universal approximation theory for neural networks~\citep{barron1993universal, cybenko, hornik}. The theorem suggests that the representation power of the linear operator $\mathcal{M} \circ \mathcal{T}_{\boldsymbol{\theta}_{vqc}} \circ \mathcal{T}_{\textbf{y}}$ is strengthened by applying the non-linear operator $\mathcal{T}_{\boldsymbol{\theta}_{ttn}} (\textbf{x})$.
\begin{theorem}
\label{thm:thm1}
Given a smooth target function $h_{\mathcal{D}}^{*} : \mathbb{R}^{Q} \rightarrow \mathbb{R}^{U}$ and a classical input $\textbf{x}$, there exists a TTN-VQC $g(\textbf{x}; \boldsymbol{\theta}_{vqc}, \boldsymbol{\theta}_{ttn}) = \mathcal{M} \circ \mathcal{T}_{\boldsymbol{\theta}_{vqc}} \circ \mathcal{T}_{\textbf{y}} \circ \mathcal{T}_{\boldsymbol{\theta}_{ttn}} (\textbf{x})$ such that
\begin{equation}
\label{eq:thm1}
\begin{split}
\mathcal{L}_{\mathcal{D}}(f_{\mathcal{D}}^{*}) = \left\lVert h_{\mathcal{D}}^{*}(\textbf{x}) - \mathcal{T}_{lr} \left( \mathbb{E}\left[ g(\textbf{x}; \boldsymbol{\theta}_{vqc}, \boldsymbol{\theta}_{ttn}) \right]\right) \right\rVert_{1} \le \frac{\Theta(1)}{\sqrt{U}} + \mathcal{O}\left(\frac{1}{\sqrt{M}}\right),
\end{split}
\end{equation}
where $U$ and $M$ refer to the number of qubits and the number of quantum measurements, respectively, and $\mathbb{E}[g(\textbf{x}; \boldsymbol{\theta}_{vqc}, \boldsymbol{\theta}_{ttn})]$ represents the expectation value of the output measurement.
\end{theorem}
The upper bound in Eq.~(\ref{eq:thm1}) implies that the number of qubits $U$ and the number of measurements $M$ jointly determine the representation power of TTN-VQC, and larger values of $U$ and $M$ are expected to lower the upper bound.
\subsubsection{Upper Bounds on the Estimation Error}
Theorem~\ref{thm:thm2} gives an upper bound on the estimation error, which is derived from the empirical Rademacher complexity $\mathcal{\hat{R}}_{S}(\mathbb{F}_{TV})$, defined as:
\begin{equation}
\label{eq:rad}
\mathcal{\hat{R}}_{S}(\mathbb{F}_{TV}) := \mathbb{E}_{\boldsymbol{\epsilon}}\left[ \sup\limits_{f\in \mathbb{F}_{TV}} \frac{1}{N} \sum\limits_{n=1}^{N} \epsilon_{n} f(\textbf{x}_{n}) \right],
\end{equation}
where $S = \{ \textbf{x}_{1}, \textbf{x}_{2}, ..., \textbf{x}_{N} \}$ is a set of $N$ samples, and $\boldsymbol{\epsilon} = \{ \epsilon_{1}, \epsilon_{2}, ..., \epsilon_{N} \}$ refers to a set of $N$ Rademacher random variables taking the values $1$ and $-1$ with equal probability.
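For intuition, the expectation in this definition can be estimated by Monte-Carlo sampling over a finite hypothesis class; the class size, sample count, and random outputs below are purely illustrative:

```python
import numpy as np

def empirical_rademacher(outputs, trials=2000, seed=0):
    """Monte-Carlo sketch of the definition above: `outputs` is an
    (F, N) array holding f(x_n) for each hypothesis f in a finite
    class; we average sup_f (1/N) sum_n eps_n f(x_n) over random
    Rademacher sign draws eps."""
    rng = np.random.default_rng(seed)
    _, N = outputs.shape
    total = 0.0
    for _ in range(trials):
        eps = rng.choice([-1.0, 1.0], size=N)  # Rademacher signs
        total += np.max(outputs @ eps) / N     # sup over the class
    return total / trials

rng = np.random.default_rng(1)
outputs = rng.standard_normal((8, 50))  # 8 hypotheses, N = 50 samples
r_hat = empirical_rademacher(outputs)
print(r_hat)  # roughly of order 1/sqrt(N)
```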
\begin{theorem}
\label{thm:thm2}
Based on the TTN-VQC setup in Theorem~\ref{thm:thm1}, the estimation error is upper bounded by the empirical Rademacher complexity $2 \mathcal{\hat{R}}_{S}(\mathbb{F}_{TV})$, which is
\begin{equation}
\label{eq:est}
\begin{split}
2 \hat{\mathcal{R}}_{S}(&\mathbb{F}_{TV}) \le 2 \mathcal{\hat{R}}_{S}(\mathbb{F}_{TTN}) + 2 \mathcal{\hat{R}}_{S}(\mathbb{F}_{VQC}) \le \sum\limits_{k=1}^{K} \frac{2 P \Lambda_{k}}{\sqrt{N}} + \frac{2 P \Lambda'}{\sqrt{N}} \\
&s.t., \hspace{0.5mm} \lVert \textbf{x} \rVert_{2} \le P, \hspace{0.5mm} \lVert \textbf{W}(\mathcal{T}_{\boldsymbol{\theta}_{vqc}})\rVert_{F} \le \Lambda', \hspace{0.5mm} \lVert \mathcal{W}_{k}(\mathcal{T}_{\boldsymbol{\theta}_{ttn}}) \rVert_{F} \le \Lambda_{k}, k\in [K], \\
\end{split}
\end{equation}
where $\mathbb{F}_{TTN}$ and $\mathbb{F}_{VQC}$ denote the function families of the TTN and the VQC, respectively; $P$, $\Lambda'$ and $\Lambda_{k}$ are constants; $\textbf{W}(\mathcal{T}_{\boldsymbol{\theta}_{vqc}})$ refers to a matrix associated with the operator $\mathcal{T}_{\boldsymbol{\theta}_{vqc}}$; $\mathcal{W}_{k}(\mathcal{T}_{\boldsymbol{\theta}_{ttn}})$ corresponds to a $4$-order tensor of the TTN; and $\lVert \textbf{W} \rVert_{F}$ and $\lVert \mathcal{W}_{k} \rVert_{F}$ represent the Frobenius norms of a matrix and a tensor, respectively.
\end{theorem}
The upper bound on the estimation error in Eq.~(\ref{eq:est}) shows that, for a given input $\textbf{x}$ and an initialized TTN-VQC model, a sufficiently large number of training samples $N$ is needed to lower the bound. On the other hand, noise perturbation on the input corresponds to a larger input power $P$, which implies a larger upper bound on the estimation error and hence weaker generalization power.
\subsubsection{Upper Bounds on Optimization Error}
A QNN system always suffers from the problem of Barren Plateaus~\citep{mcclean2018barren}, which arises from optimizing a non-convex objective function whose gradients may vanish almost everywhere during training. To alleviate this problem, we introduce a new initialization strategy based on the Polyak-Lojasiewicz (PL) condition~\citep{wang2017differentially, karimi2016linear, nouiehed2019solving}. More specifically, given the set of model parameters $\boldsymbol{\theta} = \{\boldsymbol{\theta}_{ttn}, \boldsymbol{\theta}_{vqc}\}$ of TTN-VQC, if an empirical loss function $\mathcal{L}_{S}$ satisfies the $\mu$-PL condition, the $L_{2}$-norm of the first-order gradient $\nabla \mathcal{L}_{S}$ with respect to $\boldsymbol{\theta}$ satisfies the inequality:
\begin{equation}
\frac{1}{2} \lVert\nabla \mathcal{L}_{S}(\boldsymbol{\theta}) \rVert_{2}^{2} \ge \mu \mathcal{L}_{S}(\boldsymbol{\theta}).
\end{equation}
\begin{theorem}
\label{thm:pl}
If a $1$-Lipschitz loss function $\mathcal{L}$ over the set of TTN-VQC parameters $\boldsymbol{\theta}$ satisfies the PL condition, the gradient descent algorithm with a learning rate of $1$ leads to an exponential convergence rate. More specifically, at epoch $T$, we have
\begin{equation}
\mathcal{L}_{S}(\boldsymbol{\theta}_{T}) \le \exp\left(-\mu T \right)\mathcal{L}_{S}(\boldsymbol{\theta}_{0}),
\end{equation}
where $\boldsymbol{\theta}_{0}$ and $\boldsymbol{\theta}_{T}$ denote the parameters at the initial stage and at epoch $T$, respectively. Furthermore, given a radius $r = 2\sqrt{2 \mathcal{L}_{S}(\boldsymbol{\theta}_{0})} / \mu$ of a closed ball $B(\boldsymbol{\theta}_{0}, r)$, there exists a global minimum hypothesis $\boldsymbol{\theta}^{*} \in B(\boldsymbol{\theta}_{0}, r)$ such that the optimization error becomes sufficiently small.
\end{theorem}
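The claimed rate can be checked numerically on a toy $\mu$-PL objective; the quadratic below, with Hessian eigenvalues in $[\mu, 1]$ so that the Lipschitz and $\mu$-PL assumptions hold, is an illustrative stand-in for the TTN-VQC loss rather than the actual model:

```python
import numpy as np

# A quadratic loss L(t) = 0.5 * t^T A t with eigenvalues of A in
# [mu, 1] satisfies the mu-PL condition (0.5*||grad L||^2 >= mu*L)
# and has Hessian norm at most 1, so gradient descent with a
# learning rate of 1 should contract it at least as fast as
# exp(-mu * T), as the theorem claims.
mu = 0.2
A = np.diag([0.2, 0.5, 1.0])            # lambda_min = mu
theta = np.array([1.0, -2.0, 0.5])

loss = lambda t: 0.5 * t @ A @ t
L0 = loss(theta)
for T in range(1, 21):
    theta = theta - A @ theta           # one gradient step, eta = 1
    assert loss(theta) <= np.exp(-mu * T) * L0
print(loss(theta) / L0)
```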
Furthermore, Proposition~\ref{prop:pl} gives a necessary condition for a TTN-VQC operator $f\in \mathbb{F}_{TV}$ to satisfy the $\mu$-PL setup of $\mathcal{L}_{S}(\boldsymbol{\theta})$, which is related to the tangent kernel of the operator $f$.
\begin{prop}
\label{prop:pl}
For a TTN-VQC operator $f \in \mathbb{F}_{TV}$, we define the tangent kernel $\mathcal{K}_{f}$ as $\nabla f(\boldsymbol{\theta}) \nabla f(\boldsymbol{\theta})^{T}$. If a $1$-Lipschitz loss function $\mathcal{L}_{S}(\boldsymbol{\theta})$ satisfies the $\mu$-PL condition, the smallest eigenvalue $\lambda_{\min}(\mathcal{K}_{f})$ of $\mathcal{K}_{f}$ satisfies:
\begin{equation}
\lambda_{\min}\left(\mathcal{K}_{f} \right) \ge \mu.
\end{equation}
\end{prop}
Theorem~\ref{thm:pl} suggests that the $\mu$-PL condition for the TTN-VQC ensures an exponential convergence rate, so the training loss can approach $0$. Proposition~\ref{prop:pl} allows one to check whether the $\mu$-PL condition can be met by computing the tangent kernel.
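The check suggested by Proposition~\ref{prop:pl} can be sketched as follows; the Jacobian here is a random placeholder, not the gradient of an actual TTN-VQC:

```python
import numpy as np

def tangent_kernel_min_eig(jac):
    """Given the Jacobian J = grad f(theta) (outputs x parameters),
    form the tangent kernel K_f = J J^T and return its smallest
    eigenvalue (eigvalsh returns eigenvalues in ascending order)."""
    K = jac @ jac.T
    return np.linalg.eigvalsh(K)[0]

rng = np.random.default_rng(2)
J = rng.standard_normal((4, 30))   # hypothetical: 4 outputs, 30 parameters
mu = 0.1
lam_min = tangent_kernel_min_eig(J)
print(lam_min >= mu)               # necessary condition for mu-PL
```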
\subsubsection{Putting It All Together}
Under the $\mu$-PL condition, the upper bounds on the error components derived above can be combined into an aggregated upper bound:
\begin{equation}
\begin{split}
\label{eq:agg}
\mathcal{L}_{\mathcal{D}}(\bar{f}_{S}) &\le \mathcal{L}_{\mathcal{D}}(f_{\mathcal{D}}^{*}) + 2 \hat{\mathcal{R}}_{S}(\mathbb{F}_{TV}) + \nu \\
&\le \frac{\Theta(1)}{\sqrt{U}} + \mathcal{O}\left( \frac{1}{\sqrt{M}} \right) + \sum\limits_{k=1}^{K} \frac{2P\Lambda_{k}}{\sqrt{N}} + \frac{2P\Lambda'}{\sqrt{N}} \\
&s.t., \hspace{4mm} \lVert \textbf{x} \rVert_{2} \le P, \\
& \lVert \textbf{W}(\mathcal{T}_{\boldsymbol{\theta}_{vqc}})\rVert_{F} \le \Lambda', \hspace{1mm} \lVert \mathcal{W}_{k}(\mathcal{T}_{\boldsymbol{\theta}_{ttn}}) \rVert_{F} \le \Lambda_{k}, k\in [K]. \\
\end{split}
\end{equation}
The aggregated upper bound in Eq.~(\ref{eq:agg}) shows that the optimization error $\nu$ can be reduced nearly to $0$ under the $\mu$-PL condition, so the expected loss is mainly determined by the upper bounds on the approximation and estimation errors.
\subsection{Empirical Results}
To corroborate our theoretical analysis of the TTN-VQC, our experiments comprise two groups: (1) to evaluate the representation power, the training and test datasets are drawn from the same environment; (2) to assess the generalization power of TTN-VQC, the test data are mixed with additive Gaussian and Laplacian noises, respectively, where the SNR levels are set to $8$dB and $12$dB. Our baseline system is a linear PCA-VQC model employing principal component analysis (PCA)~\citep{abdi2010principal}, a standard method that reduces data dimensionality through a linear transformation in an unsupervised manner. Our experiments compare the performance of the TTN-VQC and PCA-VQC models, and particularly aim at verifying the following points:
\begin{enumerate}
\item The TTN-VQC can lead to better performance than PCA-VQC in both matched and unmatched environmental settings.
\item Increasing the number of qubits can improve the representation power of TTN-VQC.
\item Exponential convergence rates demonstrate that our configurations of the TTN-VQC model satisfy the $\mu$-PL condition.
\end{enumerate}
We evaluate the performance of TTN-VQC on the standard MNIST dataset~\citep{deng2012mnist}, which targets the classification of $10$ handwritten digits and provides $60,000$ training examples and $10,000$ test examples. In our experiments, we randomly sample $10,000$ examples from the training data and $2,000$ from the test data. Both training and test data are corrupted with noise at different SNR levels, and the generated noisy data are taken as the input to the quantum-based models. The target of the models is set to the clean data during the training stage, where the model-enhanced data are expected to be as close as possible to the target. In the test stage, we measure the model performance by calculating the $L_{1}$-norm loss between the enhanced data and the target.
As the experimental baseline, a hybrid PCA-VQC model is built, where PCA serves as a simple feature extractor followed by the VQC. The PCA-VQC represents a linear VQC model, in contrast to the non-linear TTN-VQC model. We include $4$ PQC blocks in the VQC employed in the experiments. As for the TTN-VQC experiments, the image data are reshaped into a $3$-order $7 \times 16 \times 7$ tensor. Given a set of ranks $\textbf{R} = \{1,\hspace{1mm} 3, \hspace{1mm} 3, \hspace{1mm} 1\}$, we set $3$ trainable tensors as: $\mathcal{W}_{1} \in \mathbb{R}^{1 \times 7 \times U_{1} \times 3}$, $\mathcal{W}_{2} \in \mathbb{R}^{3 \times 16 \times U_{2} \times 3}$, and $\mathcal{W}_{3} \in \mathbb{R}^{3\times 7 \times U_{3} \times 1}$, where $U = \prod_{k=1}^{3}U_{k}$ is the number of qubits. In particular, we separately assess the models with $8$ qubits and $12$ qubits, where the parameters $(U_{1}, U_{2}, U_{3})$ are set to $(2, \hspace{1mm} 2, \hspace{1mm} 2)$ for $8$ qubits and $(2, \hspace{1mm} 3,\hspace{1mm} 2)$ for $12$ qubits. Stochastic gradient descent (SGD)~\citep{bottou1991stochastic} with an Adam optimizer~\citep{kingma2014adam} is used for training, with a mini-batch size of $50$ and a learning rate of $1$. The $1$-Lipschitz continuous MAE loss is taken to meet the PL condition.
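The shape bookkeeping of this setup can be verified with a short sketch (assuming the standard $28 \times 28$ MNIST image size):

```python
import numpy as np

# Consistency sketch for the experimental setup: a 28x28 MNIST image
# (784 pixels) reshapes to the 3-order 7x16x7 tensor, and the TTN
# output dimensions (U1, U2, U3) determine the qubit count
# U = U1 * U2 * U3.
image = np.zeros((28, 28))
tensor = image.reshape(7, 16, 7)          # valid since 7*16*7 = 784

configs = {8: (2, 2, 2), 12: (2, 3, 2)}   # qubits -> (U1, U2, U3)
for qubits, (u1, u2, u3) in configs.items():
    assert u1 * u2 * u3 == qubits
print(tensor.shape)  # (7, 16, 7)
```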
\subsubsection{Experiments for Representation Power of TTN-VQC}
To corroborate Theorem~\ref{thm:thm1} on the representation power of TTN-VQC, both training and test data are mixed with Gaussian noise at a 15dB SNR level, and we compare the performance of TTN-VQC with PCA-VQC in the generated noisy settings. Figure~\ref{fig:res1} demonstrates the related empirical results, where TTN-VQC$\_8$ and TTN-VQC$\_12$ represent the TTN-VQC models with $8$ and $12$ qubits, and PCA-VQC$\_8$ and PCA-VQC$\_12$ denote the PCA-VQC models with $8$ and $12$ qubits, respectively. Our experiments show that the TTN-VQC significantly outperforms the PCA-VQC counterparts in terms of lower training and test loss values. Moreover, our results also suggest that more qubits improve the empirical performance of both TTN-VQC and PCA-VQC models. Table~\ref{tab:res1} presents the final results on the test dataset. The TTN-VQC$\_12$ model has more parameters than the TTN-VQC$\_8$ model (0.636Mb vs. 0.452Mb), but the former attains better empirical performance in terms of a lower MAE score ($0.0156$ vs. $0.0597$) on the test dataset.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{result1.pdf}
\caption{\it Empirical results of the vector-to-vector regression on the MNIST dataset to evaluate the representation power of TTN-VQC. TTN-VQC$\_8$Qubit and TTN-VQC$\_12$Qubit represent the TTN-VQC models with $8$ and $12$ qubits, respectively; PCA-VQC$\_8$Qubit and PCA-VQC$\_12$Qubit separately denote the PCA-VQC models with $8$ and $12$ qubits.}
\label{fig:res1}
\end{figure}
\begin{table}[tpbh]\footnotesize
\center
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c||c|c|c|c|}
\hline
Models & Qubits & Params $(\text{Mb})$ & MAE \\
\hline
TTN-VQC$\_8$ & $8$ & 0.452 & 0.0597 \\
\hline
TTN-VQC$\_12$ & $12$ & 0.636 & 0.0156 \\
\hline
PCA-VQC$\_8$ & $8$ & 0.080 & 0.3847 \\
\hline
PCA-VQC$\_12$ & $12$ & 0.120 & 0.2939 \\
\hline
\end{tabular}
\caption{Empirical results of TTN-VQC and PCA-VQC models on the test dataset.}
\label{tab:res1}
\end{table}
\subsubsection{Experiments for Generalization Power of TTN-VQC}
To assess the generalization power of TTN-VQC, the test data are mixed with additive Gaussian and Laplacian noises at 8dB and 12dB SNR levels, respectively. Based on the well-trained TTN-VQC and PCA-VQC models with $8$ qubits, we further assess their performance on the test data under the Gaussian and Laplacian noisy conditions. Figure~\ref{fig:res2} suggests that the TTN-VQC models significantly outperform the PCA-VQC counterparts in the two noisy settings, and Table~\ref{tab:res2} shows the MAE scores, where the TTN-VQC models achieve much lower MAE than the PCA-VQC ones under all noisy environments.
\begin{table}[tpbh]\footnotesize
\center
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c||c|c|c|c|}
\hline
Models & Noise Type & Params $(\text{Mb})$ & MAE \\
\hline
TTN-VQC$\_$Gauss-8dB & Gaussian (8dB) & 0.452 & 0.1703 \\
\hline
TTN-VQC$\_$Gauss-12dB & Gaussian (12dB) & 0.452 & 0.1078 \\
\hline
PCA-VQC$\_$Gauss-8dB & Gaussian (8dB) & 0.080 & 0.5151 \\
\hline
PCA-VQC$\_$Gauss-12dB & Gaussian (12dB) & 0.080 & 0.4546 \\
\hline
\hline
TTN-VQC$\_$Laplace-8dB & Laplacian (8dB) & 0.452 & 0.1684 \\
\hline
TTN-VQC$\_$Laplace-12dB & Laplacian (12dB) & 0.452 & 0.1327 \\
\hline
PCA-VQC$\_$Laplace-8dB & Laplacian (8dB) & 0.080 & 0.4651 \\
\hline
PCA-VQC$\_$Laplace-12dB & Laplacian (12dB) & 0.080 & 0.4396 \\
\hline
\end{tabular}
\caption{Empirical results of TTN-VQC and PCA-VQC models on the test dataset with either Gaussian or Laplacian noise.}
\label{tab:res2}
\end{table}
\begin{figure}
\includegraphics[width=0.95\textwidth]{result2.pdf}
\caption{\it Empirical results of the vector-to-vector regression on the MNIST dataset to evaluate the generalization power of TTN-VQC and PCA-VQC with $8$ qubits. There are two noisy settings on the test dataset to evaluate the performance of the TTN-VQC and PCA-VQC models: (a) Gauss-8dB and Gauss-12dB separately denote the Gaussian noisy conditions of 8dB and 12dB SNR levels; (b) Laplace-8dB and Laplace-12dB refer to the Laplacian noisy settings of 8dB and 12dB SNR levels, respectively.}
\label{fig:res2}
\end{figure}
\section{DISCUSSION}
This work focuses on the theoretical error performance analysis of VQC-based functional regression, particularly when the TTN is employed for dimensionality reduction. Our theoretical results provide upper bounds on the representation and generalization powers of TTN-VQC. They suggest that the approximation error is inversely proportional to the square root of the number of qubits, which means that increasing the number of qubits leads to better representation power of TTN-VQC. The estimation error of TTN-VQC is related to its generalization power and is upper bounded via the empirical Rademacher complexity. The optimization error can be lowered to a small value by leveraging the PL condition to obtain exponential convergence of the SGD algorithm. To the best of our knowledge, no prior work has delivered such a complete error characterization.
Our experiments of vector-to-vector regression on the MNIST dataset are designed to corroborate the theoretical results. We first compare the representation power of the TTN-VQC models with the PCA-VQC counterparts, and observe that more qubits and the non-linearity of TTN-VQC improve the empirical performance, which matches our theoretical analysis. Further, we assess the generalization power of TTN-VQC by taking different noisy inputs into account, and demonstrate that more mismatched and noisier inputs worsen the generalization power. Besides, the non-linear TTN-VQC models outperform the linear PCA-VQC models in terms of representation and generalization powers, which implies that the non-linearity of TTN-VQC contributes greatly to the improvement of VQC performance.
We also note that the TTN-VQC models attain exponential convergence rates and the optimization error is eventually reduced nearly to $0$ in the training process, which corresponds to the PL condition in our theoretical analysis. Moreover, the empirical results on the test dataset consistently exhibit a decreasing trend. Our future work will discuss how to set up the TTN-VQC to satisfy the PL condition.
\section{DATA AND CODE AVAILABILITY}
The MNIST dataset can be downloaded via our released code, and is also available at http://yann.lecun.com/exdb/mnist/. Our code consists of two parts: the implementation of TTN models at https://github.com/uwjunqi/Pytorch-Tensor-Train-Network, and the experiments of TTN-VQC and PCA-VQC at https://github.com/uwjunqi/TTN-VQC.
\section{COMPETING INTERESTS}
The authors declare no Competing Financial or Non-Financial Interests.
\section{AUTHOR CONTRIBUTIONS}
Jun Qi and Chao-Han Yang conceived the project. Jun Qi and Min-Hsiu Hsieh completed the theoretical analysis. Jun Qi, Chao-Han Yang, and Pin-Yu Chen designed the experimental work. Min-Hsiu Hsieh and Pin-Yu Chen provided high-level advice on the paper, and Jun Qi wrote the manuscript.
\section{METHOD}
This section provides detailed proofs of our theoretical results. We first present the upper bound on the representation power, then derive the upper bound on the generalization power, followed by the analysis of the optimization performance.
\subsection{Proof for Theorem~\ref{thm:thm1}}
The derivation of Theorem~\ref{thm:thm1} is mainly based on the classical universal approximation theorem~\citep{barron1993universal, cybenko, hornik} and the parallel structure of the TTN. We first denote by $g_{m}(\textbf{x}; \boldsymbol{\theta}_{vqc}, \boldsymbol{\theta}_{ttn})$ the $m$-th measurement of the TTN-VQC operator $g(\textbf{x}; \boldsymbol{\theta}_{vqc}, \boldsymbol{\theta}_{ttn})$, and $\sum_{m=1}^{M}g_{m}(\textbf{x}; \boldsymbol{\theta}_{vqc}, \boldsymbol{\theta}_{ttn})$ is given by:
\begin{equation*}
\begin{split}
\sum\limits_{m=1}^{M} g_{m}(\textbf{x}; \boldsymbol{\theta}_{vqc}, \boldsymbol{\theta}_{ttn}) &= \sum\limits_{m=1}^{M} \mathcal{M}_{m} \circ \mathcal{T}_{\boldsymbol{\theta}_{\text{vqc}}} \circ \mathcal{T}_{\textbf{y}} \circ \mathcal{T}_{\boldsymbol{\theta}_{\text{ttn}}} (\textbf{x}) \\
&= \mathcal{M}' \circ \mathcal{T}_{\boldsymbol{\theta}_{\text{vqc}}} \circ \mathcal{T}_{\textbf{y}} \circ \mathcal{T}_{\boldsymbol{\theta}_{\text{ttn}}} (\textbf{x}) \\
&= \mathcal{M}' \circ \mathcal{H} \circ \mathcal{T}_{\boldsymbol{\theta}_{\text{ttn}}} (\textbf{x}),
\end{split}
\end{equation*}
where the operator $\mathcal{H} = \mathcal{T}_{\boldsymbol{\theta}_{vqc}} \circ \mathcal{T}_{\textbf{y}}$ corresponds to a unitary matrix, $\mathcal{M}_{m}$ denotes the $m$-th measurement, and $\mathcal{M}' = \sum_{m=1}^{M} \mathcal{M}_{m}$. Moreover, $\mathcal{H}^{-1}$ is the inverse of the linear unitary operator $\mathcal{H}$, and $g_{m}$ refers to the function after the quantum measurement. Next, we can further derive that
\begin{equation*}
\begin{split}
&\hspace{4mm} \left\lVert \hat{f}(\textbf{x}) - \mathcal{T}_{lr}(\mathbb{E}[g(\textbf{x}; \boldsymbol{\theta}_{vqc}, \boldsymbol{\theta}_{ttn})]) \right\rVert_{1} \\
&\le \left\lVert \hat{f}(\textbf{x}) - \mathcal{T}_{lr}\left( \frac{1}{M}\sum\limits_{m=1}^{M}g_{m}(\textbf{x}; \boldsymbol{\theta}_{vqc}, \boldsymbol{\theta}_{ttn}) \right) \right\rVert_{1} \hspace{30mm} \text{(Triangle Ineq.)} \\
&\hspace{10mm} + \left\lVert \mathcal{T}_{lr} \left( \frac{1}{M}\sum\limits_{m=1}^{M}g_{m}(\textbf{x}; \boldsymbol{\theta}_{vqc}, \boldsymbol{\theta}_{ttn}) \right) - \mathcal{T}_{lr}\left(\mathbb{E}[g(\textbf{x}; \boldsymbol{\theta}_{vqc}, \boldsymbol{\theta}_{ttn})]\right) \right\rVert_{1} \\
&= \left\lVert \mathcal{T}_{lr}\left( \mathcal{T}_{lr}^{-1}(\hat{f}(\textbf{x})) - \frac{1}{M}\sum\limits_{m=1}^{M}g_{m}(\textbf{x}; \boldsymbol{\theta}_{vqc}, \boldsymbol{\theta}_{ttn}) \right) \right\rVert_{1} \\
&\hspace{10mm} + \left\lVert \mathcal{T}_{lr}\left( \frac{1}{M}\sum\limits_{m=1}^{M} g_{m}(\textbf{x}; \boldsymbol{\theta}_{vqc}, \boldsymbol{\theta}_{ttn}) - \mathbb{E}[g(\textbf{x}; \boldsymbol{\theta}_{vqc}, \boldsymbol{\theta}_{ttn})] \right)\right\rVert_{1} \\
&\le \left\lVert \mathcal{T}_{lr}\left( \mathcal{M}'\circ \mathcal{H}\circ \mathcal{H}^{-1} \circ \mathcal{T}_{lr}^{-1}(\hat{f}(\textbf{x})) - \mathcal{M}'\circ \mathcal{H}\circ \mathcal{T}_{\boldsymbol{\theta}_{ttn}}(\textbf{x}) \right) \right\rVert_{1} \\
&\hspace{10mm} + \mathcal{O}\left(\frac{1}{\sqrt{M}} \right) \cdot \left\lVert \mathcal{T}_{lr}(1) \right\rVert_{1} \hspace{40mm} \text{(Central Limit Theorem)} \\
&\le \prod\limits_{k=1}^{K}\frac{1}{\sqrt{U_{k}}} \cdot \lVert \mathcal{T}_{lr} \circ \mathcal{M}' \circ \mathcal{H}(1) \rVert_{1} + \mathcal{O}\left(\frac{1}{\sqrt{M}} \right)\cdot \lVert \mathcal{T}_{lr}(1) \rVert_{1} \hspace{7mm} (\text{Universal Approx.}) \\
&= \frac{\Theta(1)}{\sqrt{U}} + \mathcal{O}\left(\frac{1}{\sqrt{M}} \right) \hspace{68mm} (\prod\limits_{k=1}^{K}U_{k} = U).
\end{split}
\end{equation*}
\subsection{Proof for Theorem~\ref{thm:thm2}}
Since the family $\mathbb{F}_{TV}$ is composed of $\mathbb{F}_{TTN}$ and $\mathbb{F}_{VQC}$, based on the properties of the Rademacher complexity, we first derive that $\hat{\mathcal{R}}_{S}(\mathbb{F}_{TV})$ in (\ref{eq:rad}) can be upper bounded by $\hat{\mathcal{R}}_{S}(\mathbb{F}_{TTN}) + \hat{\mathcal{R}}_{S}(\mathbb{F}_{VQC})$. Figure~\ref{fig:parallel} demonstrates a parallel DNN structure with $K$ DNN paths. The $k$-th DNN path admits an upper bound of $\frac{P \Lambda_{k}}{\sqrt{N}}$, which results in an upper bound of $\sum_{k=1}^{K} \frac{P \Lambda_{k}}{\sqrt{N}}$ for $\hat{\mathcal{R}}_{S}(\mathbb{F}_{TTN})$. On the other hand, since $\lVert \textbf{y} \rVert_{2} \le P$ because of the non-linearity of the TTN, we have $\mathcal{\hat{R}}_{S}(\mathbb{F}_{VQC}) \le \frac{P\Lambda'}{\sqrt{N}}$.
\subsection{Proof for Theorem~\ref{thm:pl}}
Assume the gradient descent algorithm runs within the closed ball $B(\boldsymbol{\theta}_{0}, R)$ with $R = 2\sqrt{2\mathcal{L}_{S}(\boldsymbol{\theta}_{0})}/\mu$, and the loss function $\mathcal{L}_{S}(\boldsymbol{\theta})$ has the following properties: (1) $\mathcal{L}_{S}(\boldsymbol{\theta})$ is $\mu$-PL; (2) $\mathcal{L}_{S}(\boldsymbol{\theta})$ is $1$-Lipschitz; (3) the norm of the Hessian $H$ is bounded by $1$.
Then, we need to prove the following properties: (a) there exists a global minimum $\boldsymbol{\theta}^{*} \in B(\boldsymbol{\theta}_{0}, R)$; (b) the gradient descent algorithm converges at an exponential rate: $\mathcal{L}_{S}(\boldsymbol{\theta}_{t+1}) \le (1 - \eta(2-\eta)\mu)^{t+1} \mathcal{L}_{S}(\boldsymbol{\theta}_{0})$. By applying the Taylor expansion, we obtain
\begin{equation*}
\begin{split}
\mathcal{L}_{S}(\boldsymbol{\theta}_{t+1}) &= \mathcal{L}_{S}(\boldsymbol{\theta}_{t}) + (\boldsymbol{\theta}_{t+1} - \boldsymbol{\theta}_{t})^{T} \nabla f(\boldsymbol{\theta}_{t}) + \frac{1}{2} (\boldsymbol{\theta}_{t+1} - \boldsymbol{\theta}_{t})^{T} H(\boldsymbol{\theta}')(\boldsymbol{\theta}_{t+1} - \boldsymbol{\theta}_{t}) \\
&= \mathcal{L}_{S}(\boldsymbol{\theta}_{t}) + (-\eta) \nabla \mathcal{L}_{S}(\boldsymbol{\theta}_{t})^{T} \nabla \mathcal{L}_{S}(\boldsymbol{\theta}_{t}) + \frac{1}{2} (-\eta) \nabla\mathcal{L}_{S}(\boldsymbol{\theta}_{t})^{T} H(\boldsymbol{\theta}')(-\eta)\nabla \mathcal{L}_{S}(\boldsymbol{\theta}_{t}) \\
&= \mathcal{L}_{S}(\boldsymbol{\theta}_{t}) - \eta \lVert \nabla \mathcal{L}_{S}(\boldsymbol{\theta}_{t}) \lVert_{2}^{2} + \frac{\eta^{2}}{2} \nabla\mathcal{L}_{S}(\boldsymbol{\theta}_{t})^{T} H(\boldsymbol{\theta}') \nabla\mathcal{L}_{S}(\boldsymbol{\theta}_{t}) \\
&\le \mathcal{L}_{S}(\boldsymbol{\theta}_{t}) - \eta(1 - \frac{\eta}{2}) \lVert \nabla\mathcal{L}_{S}(\boldsymbol{\theta}_{t}) \lVert_{2}^{2} \hspace{41mm} \text{(by Assumption 3)} \\
&\le \mathcal{L}_{S}(\boldsymbol{\theta}_{t}) - \eta(2 - \eta)\mu \mathcal{L}_{S}(\boldsymbol{\theta}_{t}) \hspace{40mm} \text{(by $\mu$-PL Assumption)} \\
&= \left( 1 - 2\eta\mu + \eta^{2}\mu \right)\mathcal{L}_{S}(\boldsymbol{\theta}_{t}) \\
&\le \left( 1 - 2\eta\mu + \eta^{2}\mu \right)^{t+1} \mathcal{L}_{S}(\boldsymbol{\theta}_{0}).
\end{split}
\end{equation*}
Next, we show that $\boldsymbol{\theta}_{t}$ does not leave the ball $B$. Based on the derivation above, we have $\mathcal{L}(\boldsymbol{\theta}_{t}) - \mathcal{L}(\boldsymbol{\theta}_{t+1}) \ge \frac{\eta}{2} \lVert \nabla \mathcal{L}(\boldsymbol{\theta}_{t})\rVert^{2}_{2}$, which leads to $\lVert \nabla\mathcal{L}(\boldsymbol{\theta}_{t}) \rVert_{2} \le \sqrt{\frac{2}{\eta}\left(\mathcal{L}(\boldsymbol{\theta}_{t}) - \mathcal{L}(\boldsymbol{\theta}_{t+1})\right)}$. Then, we further derive that
\begin{equation*}
\begin{split}
\lVert \boldsymbol{\theta}_{t+1} - \boldsymbol{\theta}_{0} \rVert_{2} &= \eta \left\lVert \sum\limits_{\tau=0}^{t} \nabla \mathcal{L}(\boldsymbol{\theta}_{\tau}) \right\rVert_{2} \\
&\le \eta \sum\limits_{\tau=0}^{t} \lVert \nabla \mathcal{L}(\boldsymbol{\theta}_{\tau}) \rVert_{2} \\
&\le \eta \sum\limits_{\tau=0}^{t} \sqrt{2 \left( \mathcal{L}(\boldsymbol{\theta}_{\tau}) - \mathcal{L}(\boldsymbol{\theta}_{\tau+1}) \right)} \hspace{36mm} \text{(by Continuity)}\\
&\le \eta \sum\limits_{\tau=0}^{t} \sqrt{2 \mathcal{L}(\boldsymbol{\theta}_{\tau})} \\
&\le \eta\sqrt{2} \sum\limits_{\tau=0}^{t}\sqrt{(1-2\eta\mu + \eta^{2}\mu)^{\tau}\mathcal{L}(\boldsymbol{\theta}_{0})} \hspace{8mm} \text{(by Geometric Convergence)}\\
&= \eta\sqrt{2\mathcal{L}(\boldsymbol{\theta}_{0})} \sum\limits_{\tau=0}^{t}(1 - 2\eta\mu + \eta^{2}\mu)^{\tau/2} \\
&\le \frac{\eta \sqrt{2 \mathcal{L}(\boldsymbol{\theta}_{0})}}{1 - \sqrt{1 - 2\eta\mu + \eta^{2}\mu}} \\
&= \frac{\sqrt{2\mathcal{L}(\boldsymbol{\theta}_{0})} (1 + \sqrt{1 - 2\eta\mu + \eta^{2}\mu})}{\mu (2 - \eta)} \\
&\le \frac{2\sqrt{2\mathcal{L}(\boldsymbol{\theta}_{0})}}{\mu} \hspace{55mm} \text{(By Setting $\eta = 1$)}.
\end{split}
\end{equation*}
The inequality $\lVert \boldsymbol{\theta}_{t+1} - \boldsymbol{\theta}_{0}\rVert_{2} \le \frac{2\sqrt{2\mathcal{L}(\boldsymbol{\theta}_{0})}}{\mu}$ shows that the gradient descent algorithm keeps the updated points within a ball of radius $\frac{2\sqrt{2\mathcal{L}(\boldsymbol{\theta}_{0})}}{\mu}$, and a larger $\mu$ leads to a faster convergence rate within a smaller ball.
\section{Introduction}
\par Current video coding standards (e.g., H.264 and HEVC)\cite{H.264}\cite{HEVC} are able to provide good compression using high-complexity encoders. At the encoder, motion estimation (using block matching) is applied between adjacent frames to exploit the temporal redundancy. Each reference and residual frame (motion-compensated difference) is then divided into non-overlapping blocks (block sizes may vary from 8$\times$8 to 64$\times$64 pixels), and transform coding (e.g., DCT) is applied to each block to exploit the spatial redundancy. Motion estimation and transform coding account for nearly 70$\%$ of the total complexity of the encoder \cite{Richardson}. Moreover, block-wise transform coding leads to blocking artifacts in the motion-compensated frame, which may be reduced by a deblocking filter; however, this further increases the complexity of the encoder. In contrast, the decoder complexity is very low: its main function is to reconstruct the video frames using the reference frames, motion-compensated residuals, and motion vectors. Such codecs are well suited for broadcasting applications, where a high-complexity encoder supports thousands of low-complexity decoders. However, conventional video coding schemes are not suitable for applications that require low-complexity encoders, such as mobile phones and camcorders, which demand low-complexity, low-power, and low-cost devices. A high-complexity encoder increases the compression ratio but also the power consumption. Therefore, to increase battery life in mobile devices, a low-complexity encoder with good coding efficiency is highly desirable.
\par In a mobile video broadcast network (wireless network), a video source is broadcast to multiple receivers, which may have different channel capacities, display resolutions, or computing facilities. It is necessary to encode and transmit the video source once, yet allow any subset of the bit stream to be successfully decoded by a receiver. To reduce the error rate in a wireless broadcast network, error-correction coding such as Reed--Solomon (RS) codes and convolutional codes has been widely used. However, this type of channel coding is not flexible: it can correct bit errors only if the error rate is smaller than a given threshold, so it is hard to find a single channel code suitable for different channels with different capacities. In broadcast applications, without feedback from individual receivers, the sender cannot selectively re-transmit the data that would help every receiver. These requirements are indeed difficult and challenging for traditional channel coding design.
From the above requirements, it is desirable to have an encoder that is low in complexity, offers good coding efficiency and error resilience, is scalable, and supports real-time applications.
\par
This paper introduces a new VLSI architecture for a scalable, low-complexity encoder based on the 3-D DWT and compressed sensing. Fig.~\ref{blockdia_1}(a) shows the block diagram of the low-complexity video codec (encoder and decoder). The encoder has the 3-D DWT and CS as its main functional modules, as shown in Fig.~\ref{blockdia_1}(b). The 3-D DWT module provides scalability through the levels of decomposition and also exploits the spatial and temporal redundancies of the video frames; it replaces the transform coding, motion estimation, and deblocking filter of current video coding systems. The CS module exploits the sparse nature of the wavelet coefficients, projecting them onto random Bernoulli matrices to select the measurements at the encoder, and relies on the approximate message passing algorithm for reconstruction at the decoder. The CS module provides a good compression ratio and improves error resilience. As a result, the proposed architecture enjoys low complexity at the encoder and only marginal complexity at the decoder.
\par Over the last two decades, several hardware designs have been reported for 2-D DWT and 3-D DWT implementations in different applications. Most designs fall into three categories, viz. (i) convolution based, (ii) lifting based, and (iii) B-spline based. Most of the existing architectures suffer from large memory requirements, low throughput, and complex control circuitry. In general, circuit complexity has two major components, viz. arithmetic and memory. The arithmetic component includes adders and multipliers, whereas the memory component consists of temporal memory and transpose memory. The complexity of the arithmetic component depends entirely on the DWT filter length, while the size of the memory component depends on the image dimensions. As image resolutions continue to increase (HD to UHD), the image dimensions become very large compared with the DWT filter length; as a result, the memory component occupies the major share of the overall complexity of a DWT architecture.
\par Convolution-based implementations \cite{3D_conv_dai}-\cite{3D_conv_mohanty} produce outputs quickly but require a large amount of arithmetic resources, are memory intensive, and occupy a large area. Lifting-based implementations require less memory, have lower arithmetic complexity, and can be implemented in parallel; however, they suffer from a long critical path, and many recent contributions aim to reduce it. A general lifting-based structure \cite{dwt_1} has a critical path of 4$T_{m}+8T_{a}$; introducing a 4-stage pipeline cuts it down to $T_{m}+2T_{a}$. In \cite{3D_flip}, Huang et al. introduced a flipping structure that further reduced the critical path to $T_{m}+T_{a}$. Although flipping reduces the critical path delay of lifting-based implementations, memory efficiency still needs improvement. Most designs implement the 2-D DWT by first applying the 1-D DWT row-wise and then column-wise, which requires a large memory to store the intermediate coefficients. To reduce this memory requirement, several DWT architectures based on line-based scanning have been proposed \cite{3D_huang}-\cite{3D_xiong}. Huang et al. \cite{3D_huang}-\cite{3D_huang2} detailed a B-spline-based 2-D IDWT implementation, discussed the memory requirements of different scan techniques, and proposed an efficient overlapped strip-based scan to reduce the internal memory size. Several parallel architectures have been proposed for lifting-based 2-D DWT \cite{3D_huang2}-\cite{3D_yusong2}. The design of Y. Hu et al. \cite{3D_yusong2}, based on a modified strip-based scan and a parallel 2-D DWT architecture, is the most memory-efficient among existing 2-D DWT designs, requiring only 3N + 24P words of on-chip memory for an N$\times$N image with $P$ parallel processing units (PUs).
Several lifting-based 3-D DWT architectures have been reported in the literature \cite{3D_zheng}-\cite{3D_darji} to reduce the critical path of the 1-D DWT stage and to decrease the memory requirement of the 3-D architecture. Among the best existing 3-D DWT designs, that of Darji et al. \cite{3D_darji} achieves the best results, reducing the memory requirement while delivering a throughput of 4 results/cycle; still, it requires $4N^{2}+10N$ words of on-chip memory.
\par Based on the ideas of compressed sensing (CS) \cite{CS_11}-\cite{CS_13}, several new video codecs \cite{CS_10}-\cite{CS_plonka} have been proposed in the last few years. Wakin \textit{et al.} \cite{CS_10} introduced compressive imaging and video encoding through the single-pixel camera, establishing that the 3-D wavelet transform is a better choice for video than the 2-D (two-dimensional) wavelet transform. Y. Hou and F. Liu \cite{32} proposed a low-complexity system in which sparsity is extracted from the residuals of successive non-key frames and CS is applied to those frames; key frames are fully sampled, which increases the bit-rate, and performing motion estimation and compensation to predict the non-key frames increases the encoder complexity. S. Xiang and L. Cai \cite{33} proposed a CS-based scalable video coding scheme in which the base layer is composed of a small set of DCT coefficients while the enhancement layer is composed of compressed-sensed measurements; it uses the DCT for I frames and the undecimated DWT (UDWT) for CS measurements, which greatly increases the decoder complexity. Jiang \textit{et al.} \cite{31} proposed CS-based scalable video coding using the total variation of the coefficients of the temporal DCT; scalability is enabled by multi-resolution measurements, while the video signal is reconstructed by total-variation minimization with augmented Lagrangian and alternating-direction algorithms (TVAL3) \cite{tval3} at the decoder. However, this increases the decoder complexity, making hardware implementation quite difficult. J. Ma \textit{et al.} \cite{CS_plonka} introduced fast and simple on-line encoding and decoding via a forward-backward splitting algorithm; although the encoder complexity is low, scalability is not achieved and the decoder complexity is very high.
Most of the recently proposed video codecs \cite{CS_10}-\cite{CS_plonka} assume uniform sparsity across all video frames, so a fixed number of measurements is transmitted to the decoder for every frame. However, sparsity may change with the content of a video frame, and a fixed number of measurements may increase the bit-rate (decrease the compression ratio).
\par This paper introduces a new compressed-sensing-based low-complexity encoder architecture using the 3-D DWT. The proposed method uses random Bernoulli sequences at the encoder to select the measurements and the approximate message passing algorithm for reconstruction at the decoder. The major contributions of the present work are as follows. Firstly, the proposed framework revises the MCTF-based SVC model \cite{CS_4} by introducing compressed sensing to increase the compression ratio and reduce the complexity; as a result, the framework ensures low complexity at the encoder and marginal complexity at the decoder. Secondly, we propose a new architecture for the 3-D DWT that requires only $2(3N + 40P)$ words of on-chip memory and achieves a throughput of 8 results/cycle. Thirdly, we propose an efficient architecture for the compressed sensing module.
\par The paper is organized as follows. Fundamentals of the 3-D DWT and compressed sensing are presented in Section II. Detailed descriptions of the proposed architectures for the 3-D DWT and compressed sensing modules are provided in Sections III and IV, respectively. Results and comparisons are given in Section V. Finally, concluding remarks are given in Section VI.
\begin{figure}
\centering
\includegraphics [height=100mm,width=80mm]{cs_encoder.pdf}
\caption {(a) Block diagram for CS based Scalable Video Codec (b)Detailed block diagram of Encoder}
\label{blockdia_1}
\end{figure}
\section{Theoretical Framework}
This section presents the theoretical background of wavelets and compressed sensing. The 3-D DWT is used to exploit the spatial and temporal redundancies of the video, thereby eliminating complex operations such as ME, MC, and the deblocking filter. Compressed sensing is used to provide error resilience and coding efficiency.
\subsection{Discrete Wavelet Transform}
\begin{figure}
\centering
\includegraphics [height=50mm,width=40mm]{threedimensional.pdf}
\caption {3-D wavelet by combining 2-D spatial and 1-D temporal}
\label{3dDWT}
\end{figure}
\par The lifting-based wavelet transform is designed using a series of matrix decompositions specified by Daubechies and Sweldens in \cite{dwt_1}. By applying flipping \cite{3D_flip} to the lifting scheme, the multipliers in the longest delay path are eliminated, resulting in a shorter critical path. The original data on which the DWT is applied is denoted by $X[n]$; the 1-D DWT outputs are the detail coefficients $H[n]$ and the approximation coefficients $L[n]$. For an image (2-D), the above process is performed along rows and columns. Eqns. (1)-(6) are the design equations for the flipping-based lifting (9/7) 1-D DWT \cite{3D_flip2}, and the same equations are used to implement the proposed row processor (1-D DWT) and column processor (1-D DWT).
\begin{figure}[H]
\label{lift_2d_eq}
\begin{align}
{H_1}[n] &\leftarrow a'*X[2n-1]+\{X[2n]+X[2n-2]\} \ldots P1\\
{L_1}[n]&\leftarrow b'*X[2n]+\{{H_1}[n]+{H_1}[n-1]\} \ldots U1\\
{H_2}[n] &\leftarrow c'*{H_1}[n]+\{{L_1}[n]+{L_1}[n-1]\}\ldots P2\\
{L_2}[n] &\leftarrow d'*{L_1}[n]+\{{H_2}[n]+{H_2}[n-1]\}\ldots U2\\
H[n] &\leftarrow K0* \{{H_2}[n]\}\\
L[n] &\leftarrow K1* \{{L_2}[n]\}
\end{align}
\end{figure}
where $ a'=1/\alpha $, $ b'=1/\alpha\beta $, $ c'=1/\beta\gamma $, $ d'=1/\gamma\delta $, $ K0= \alpha\beta\gamma/\zeta$, and $ K1= \alpha\beta\gamma\delta\zeta$ \cite{dwt_1}. The lifting-step coefficients $\alpha$, $\beta$, $\gamma$, $\delta$ and the scaling coefficient $\zeta$ are the constants $\alpha = -1.586134342$, $\beta = -0.052980118$, $\gamma = 0.8829110762$, $\delta = 0.4435068522$, and $\zeta = 1.149604398$.
Lifting-based wavelets are memory efficient and easy to implement in hardware. The lifting scheme consists of three steps to decompose the samples, namely splitting, prediction (eqns. (1) and (3)), and update (eqns. (2) and (4)).
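As an illustration, the flipped lifting steps (1)-(6) can be prototyped in a few lines of NumPy. This is a behavioural sketch only, not the hardware datapath: boundaries are wrapped periodically for brevity (the architecture uses symmetric extension instead), and the function name \texttt{dwt97\_flipped} is ours.

```python
import numpy as np

# Lifting and scaling constants of eqns. (1)-(6); the primed values are
# the reciprocals that appear in the flipping structure.
ALPHA, BETA = -1.586134342, -0.052980118
GAMMA, DELTA, ZETA = 0.8829110762, 0.4435068522, 1.149604398
A_, B_ = 1 / ALPHA, 1 / (ALPHA * BETA)
C_, D_ = 1 / (BETA * GAMMA), 1 / (GAMMA * DELTA)
K0 = ALPHA * BETA * GAMMA / ZETA
K1 = ALPHA * BETA * GAMMA * DELTA * ZETA

def dwt97_flipped(x):
    """One level of the flipping-based lifting (9/7) 1-D DWT.

    Boundaries are wrapped periodically here for brevity; the hardware
    uses symmetric extension.
    """
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    h1 = A_ * odd + even + np.roll(even, -1)   # P1: predict odd samples
    l1 = B_ * even + h1 + np.roll(h1, 1)       # U1: update even samples
    h2 = C_ * h1 + l1 + np.roll(l1, 1)         # P2: second predict
    l2 = D_ * l1 + h2 + np.roll(h2, 1)         # U2: second update
    return K0 * h2, K1 * l2                    # H[n], L[n]
```

For a constant input, the detail coefficients vanish (the 9/7 high-pass filter has vanishing moments) and the approximation coefficients equal $\sqrt{2}$ times the input, which is a quick sanity check on the constants.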
The Haar wavelet transform is orthogonal, simple to construct, and fast to compute. Exploiting these advantages, the proposed architecture uses the Haar wavelet to perform the 1-D DWT in the temporal direction (between two adjacent frames). Sweldens \textit{et al.} \cite{daub_Haar} developed a lifting-based Haar wavelet; the lifting equations for the Haar wavelet transform are shown in eqn.(\ref{eq2}).
\begin{equation}
\label{eq2}
\left[ \begin{array}{l}
L\\
H
\end{array} \right] = \left( {\begin{array}{*{20}{c}}
{\sqrt 2 }&0\\
0&{\frac{1}{{\sqrt 2 }}}
\end{array}} \right)\left( {\begin{array}{*{20}{c}}
1&{S(z)}\\
0&1
\end{array}} \right)\left( {\begin{array}{*{20}{c}}
1&0\\
{ - P(z)}&1
\end{array}} \right)\left( \begin{array}{l}
{X_0}(z)\\
{X_1}(z)
\end{array} \right)
\end{equation}
\begin{equation}
\label{eq3}
\begin{array}{l}
L = {\textstyle{1 \over {\sqrt 2 }}}({X_0} + {X_1})\\
H = {\textstyle{1 \over {\sqrt 2 }}}({X_1} - {X_0})
\end{array}
\end{equation}
Eqn.(\ref{eq3}) is obtained by substituting the predict value $P(z) = 1$ and the update value $S(z) = 1/2$ in eqn.(\ref{eq2}); it is used to develop the temporal processor that applies the 1-D DWT in the temporal direction ($3^{rd}$ dimension), where $L$ and $H$ are the low- and high-frequency coefficients, respectively.
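A minimal Python sketch of this Haar lifting step (predict $P(z)=1$, update $S(z)=1/2$, then scaling); the function name is ours, and the code is illustrative rather than the temporal-processor RTL.

```python
import math

def haar_lifting(x0, x1):
    """Lifting-based Haar step: predict P(z)=1, update S(z)=1/2, then scale."""
    d = x1 - x0                  # predict: detail x1 - x0
    s = x0 + d / 2.0             # update: average (x0 + x1)/2
    return s * math.sqrt(2.0), d / math.sqrt(2.0)   # (L, H) of eqn. (8)
```

Because the transform is orthogonal, energy is preserved: for inputs 3 and 7, $L^2 + H^2 = 3^2 + 7^2$.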
\par The process shown in Fig. \ref{3dDWT} represents one level of decomposition in space and time. Among all the sub-bands, only the LLL sub-band (LL band of the L-frames) is fully sampled and transmitted without applying any CS technique, because it represents the image at low resolution (the base layer in the SVC domain) and is not sparse. All the other sub-bands (3-D wavelet coefficients) exhibit approximate sparsity (values near zero), and hard thresholding is applied (a coefficient is considered zero if its value is below the threshold). After this step, conventional encoders use EZW coding to encode the wavelet coefficients, which is complex to implement in hardware. In the proposed framework, EZW coding is replaced by CS, which exploits the sparsity-preserving nature of the random Bernoulli matrix by projecting the wavelet coefficients onto it. The DWT version of each frame consists of four sub-bands; the LL sub-bands of the L-frames have large wavelet coefficients, while the remaining three sub-bands of the L-frames and all four sub-bands of the H-frames exhibit sparsity, and compressed sensing is applied to them.
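The hard-thresholding step described above amounts to zeroing sub-band coefficients below the threshold; a short NumPy sketch (the function name is ours):

```python
import numpy as np

def hard_threshold(band, t):
    """Treat wavelet coefficients with magnitude below t as zero, as done
    before applying CS to the sparse sub-bands."""
    out = np.asarray(band, dtype=float).copy()
    out[np.abs(out) < t] = 0.0
    return out
```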
\subsection{Compressed Sensing}
Compressed sensing is an innovative scheme that enables sampling below the Nyquist rate without (or with only a small) drop in reconstruction quality. The basic principle behind compressed sensing is to exploit the sparsity of the signal in some domain; in the proposed work, CS is applied in the wavelet domain.
Let $x = \lbrace x[1], \ldots, x[N]\rbrace$ be a set of $N$ real, discrete-time samples, and let $s$ be the representation of $x$ in the $\Psi$ (transform) domain, that is:
\begin{equation}
x = \Psi s = \sum\limits_{i = 1}^N {\Psi _i}{{s_i}}
\end{equation}
where $ s = [s_1, \ldots, s_N]$ is the coefficient vector, with $s_i = \left\langle {x,{\Psi _i}} \right\rangle$, and $\Psi = [{\Psi _{1}}|{\Psi _{2}}|\ldots|{\Psi _N}]$ is an $N \times N$ basis matrix. Assume that the vector $x$ is $K$-sparse in the domain $\Psi$ ($K$ coefficients of $s$ are non-zero), with $K \ll N$. To obtain the sparse representation of $x$, conventional transform coding is applied to the whole signal (all $N$ samples) via $s = \Psi^{T}x$, giving $N$ transform coefficients; $N-K$ or more of them are discarded because they carry negligible energy, and the remainder are encoded. The basic idea of CS is to remove this ``sampling redundancy'' by taking only $M$ samples of the signal, where $K < M \ll N$. Let $y$ be an $M$-element measurement vector given by $y = \Phi x$ or $y = \Phi \Psi s$, with $y \in \mathbb{R}^{M}$; the rows of $\Phi \in \mathbb{R}^{M\times N}$ are non-adaptive linear projections of the signal $x \in \mathbb{R}^{N}$, with typically $M \ll N$.
Recovering the original signal $x$ means solving an under-determined linear system, which in general has no unique solution. However, the signal $x$ can be recovered losslessly from $M \ge K$ measurements if the measurement matrix $\Phi$ is designed so that it preserves the geometry of sparse signals and each of its $M \times K$ sub-matrices has full rank. This property is called the Restricted Isometry Property (RIP); mathematically, it ensures that $\|x_1-x_2\|_2 \approx \|\Phi x_1-\Phi x_2\|_2$, where $\|y\|_2$ denotes the $\ell_2$-norm of the vector $y$. It has been observed that random matrices drawn from independent and identically distributed (i.i.d.) Gaussian or Bernoulli distributions satisfy the RIP with high probability.
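The measurement process $y = \Phi x$ with a $\pm 1$ Bernoulli matrix can be sketched as follows (the sizes, sparsity pattern, and random seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 64, 16                        # signal length and number of measurements
# +/-1 Bernoulli measurement matrix; such matrices satisfy the RIP
# with high probability
Phi = rng.choice([-1.0, 1.0], size=(M, N))

# a K-sparse coefficient vector (K = 3 here)
x = np.zeros(N)
x[[3, 17, 42]] = [4.0, -2.5, 1.0]

y = Phi @ x                          # M << N non-adaptive measurements
```

Only the $K$ non-zero entries of $x$ contribute to $y$, so $y$ equals the corresponding combination of three columns of $\Phi$.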
\par The problem of signal recovery from CS measurements has been studied extensively in recent years, and a host of algorithms has been proposed, such as Orthogonal Matching Pursuit (OMP) \cite{26}-\cite{28}, Iterative Hard Thresholding (IHT) \cite{IHT}, and Iterative Soft Thresholding (IST) \cite{IST}. Although the recently introduced Approximate Message Passing (AMP) algorithm \cite{dono_1} has a structure similar to IHT and IST, it exhibits faster convergence. The literature \cite{dono_1},\cite{dono_2} shows that AMP performs excellently for many deterministic and highly structured matrices.
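As a concrete reference point for the recovery side, here is a minimal Iterative Soft-Thresholding (IST) sketch; AMP adds an Onsager correction term and per-iteration threshold tuning on top of essentially this structure. Function names and parameter values are ours.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator (proximal map of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist_recover(Phi, y, lam=1e-3, iters=2000):
    """IST for min_x 0.5*||y - Phi x||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2    # 1 / Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = soft(x + step * (Phi.T @ (y - Phi @ x)), step * lam)
    return x
```

On an easy instance (N = 32, M = 16, K = 2, normalized Bernoulli matrix) the iterate reproduces the measurements almost exactly and identifies the true support.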
\section{Proposed architecture for 3-D DWT}
The proposed architecture for the 3-D DWT, comprising two parallel spatial processors (2-D DWT) and four temporal processors (1-D DWT), is depicted in Fig. \ref{blockdia_1}(b). After applying the 2-D DWT to two consecutive frames, each spatial processor (SP) produces four sub-bands, viz. LL, HL, LH, and HH, which are fed to the inputs of four temporal processors (TPs) to perform the temporal transform. The outputs of these TPs are a low-frequency frame (L-frame) and a high-frequency frame (H-frame). Architectural details of the spatial and temporal processors are discussed in the following sections.
\subsection{Architecture for Spatial Processor}
In this section, we propose a new parallel and memory-efficient lifting-based 2-D DWT architecture, denoted the spatial processor (SP), which consists of a row processor and a column processor. The proposed SP is a revised version of the architecture developed by Y. Hu et al. \cite{3D_yusong2}. It uses strip-based scanning \cite{3D_yusong2} to trade off external memory against internal memory. To reduce the critical path in each stage, the flipping model \cite{3D_flip}-\cite{3D_flip2} is used to develop the processing elements (PEs), each of which replaces multipliers with shift-and-add logic. The lifting-based (9/7) 1-D DWT is performed by the processing unit (PU), which is designed with five pipeline stages; replacing the multipliers with shift-and-add reduces the CPD to $2T_{a}$ (two adder delays). Fig.~\ref{3d_2}(a) shows the data flow graph (DFG) of the proposed PU, and Fig.~\ref{3d_2}(b) depicts its internal architecture. The number of inputs to the spatial processor equals $2P+1$, which is also the width of the strip, where $P$ is the number of parallel processing units (PUs) in the row processor as well as the column processor. We have designed the proposed architecture with two parallel processing units ($P = 2$); the same structure can be extended to $P$ = 4, 8, 16, or 32, depending on the external bandwidth. As soon as the row processor produces intermediate results, the column processor starts to process them. The row processor takes 5 clocks to produce the intermediate results, after which the column processor takes 5 more clocks to give the 2-D DWT output; finally, the temporal processor takes 2 more clocks after the 2-D DWT results are available to produce the 3-D DWT output.
In summary, the proposed 2-D DWT and 3-D DWT architectures have constant latencies of 10 and 12 clock cycles, respectively, regardless of the image size $N$ and the number of parallel PUs ($P$). Details of the row and column processors are given in the following sub-sections.
\begin{figure}
\centering
\includegraphics [height=100mm,width=100mm]{DFGandPU.pdf}
\caption{(a) Data Flow Graph of modified 1-D DWT architecture (b)Structure of Processing Unit }
\label{3d_2}
\end{figure}
\begin{figure*}
\centering
\includegraphics [height=120mm,width=130mm]{rowandcol.pdf}
\caption{(a)Row Processor (b) Column Processor}
\label{3d_1}
\end{figure*}
\begin{figure}
\centering
\includegraphics [height=70mm,width=50mm]{trans_arrange.pdf}
\caption{(a) Transpose Register (Ref:\cite{3D_yusong2}) (b) Re-arrange Unit}
\label{3d_3}
\end{figure}
\subsubsection{Row Processor (RP)}
Let X be an image of size N$\times$N, extended by one column using symmetric extension, so that the image size becomes N$\times$(N+1). Refer to \cite{3D_yusong2} for the structure of the strip-based scanning method. The proposed architecture initiates the DWT row-wise through the row processor (RP) and then performs the column DWT through the column processor (CP). Fig.~\ref{3d_1}(a) shows the generalized structure of a row processor with P PUs; P = 2 is considered in our design. In the first clock cycle, the RP gets pixels X(0,0) to X(0,2P) simultaneously; in the second clock, it gets pixels X(1,0) to X(1,2P) from the next row, and the same procedure continues each clock until the bottom row, X(N,0) to X(N,2P), is reached. The RP then moves to the next strip, getting pixels X(0,2P) to X(0,4P), and continues this procedure over the entire image. Each PU consists of five pipeline stages, each processed by one processing element (PE), as depicted in Fig.~\ref{3d_2}(b). The first stage (shift$\_$PE) provides partial results required by the $2^{nd}$ stage (PE$\_$alpha); likewise, PE$\_$alpha to PE$\_$delta ($2^{nd}$ to $5^{th}$ stages) produce partial results along with their main outputs. For example, in PU-1, PE$\_$alpha provides the output corresponding to eqn.(1) ($H_{1}[n]$) and, along with it, the partial output $X'[2n]$ required by PE$\_$beta. The PE structures in Fig.~\ref{3d_2}(b) show that multiplication is replaced with shift-and-add logic. The original multiplier values and the values realized through shift-and-add are listed in Table \ref{tab1}; the deviation between them is extremely small. The maximum CPD of these PEs is $2T_{a}$.
The outputs $H_{1}[n+P-1]$, $L_{1}[n+P-1]$, and $H_{2}[n+P-1]$, corresponding to PE$\_$alpha and PE$\_$beta of the last PU and PE$\_$gama of the last PU, are saved in the memories Memory$\_$alpha, Memory$\_$beta, and Memory$\_$gama, respectively. These stored outputs are used as inputs for the subsequent columns of the same rows. For an N$\times$N image, the number of rows is N, so each memory is of size N$\times$1 words, and the total row memory required to store these outputs equals 3N. The outputs of each PU undergo scaling before producing H and L, which are fed to the transposing unit. The transpose unit has P transpose registers (one per PU); Fig.~\ref{3d_3}(a) shows the structure of a transpose register, which supplies two H and two L data alternately to the column processor.
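The overlapped strip-based scan order consumed by the RP can be summarized with a small Python sketch (our own illustration, assuming N is a multiple of 2P; column index N is the symmetric-extension column):

```python
def strip_scan_order(N, P):
    """Per-clock (row, columns) windows read by the row processor.

    The image is scanned in vertical strips of width 2P+1, one row per
    clock; consecutive strips overlap by one column.
    """
    order = []
    for strip in range(N // (2 * P)):
        left = 2 * P * strip
        for row in range(N):
            order.append((row, tuple(range(left, left + 2 * P + 1))))
    return order
```

With N = 8 and P = 2 there are two strips of width 5 sharing column 4, i.e., N$\cdot$N/(2P) = 16 clock cycles of input.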
\subsubsection{Column Processor (CP)}
The structure of the column processor (CP) is shown in Fig.~\ref{3d_1}(b). To match the RP throughput, the CP is also designed with two PUs. Each transpose register produces a pair of H and L values in alternating order, which are fed to the inputs of one PU of the CP. The partial results produced are consumed by the next PE after two clock cycles; as such, shift registers of length two are needed within the CP between pipeline stages (except between the $1^{st}$ and $2^{nd}$ stages) for caching the partial results. At the output of the CP, four sub-bands are generated in an interleaved pattern, i.e., (HL,HH), (LL,LH), (HL,HH), (LL,LH), and so on. The CP outputs are fed to the re-arrange unit, whose architecture is shown in Fig.~\ref{3d_3}(b); it provides the outputs in sub-band order, i.e., LL, LH, HL, and HH simultaneously, using P registers and 2P multiplexers. For multilevel decomposition, the same DWT core can be used in a folded architecture with an external frame buffer for the LL sub-band coefficients.
\begin{table}[]
\centering
\caption{Original and adopted values for multiplication}
\label{tab1}
\centering
\begin{tabular}{|l|l|l|}
\hline
& Original & Multiplier \\
PE & Multiplier & value through \\
& Value & shift and add \\
\hline
PE$ \_ $alpha & $ a'$=-0.6305 & $ a'$=-0.6328 \\
\hline
PE$ \_ $beta & $ b'$=11.90 &$ b'$=12 \\
\hline
PE$ \_ $gama &$ c'$=-21.378 & $ c'$=-21.375 \\
\hline
PE$ \_ $delta & $ d'$=2.55 & $ d'$=2.565 \\
\hline
\end{tabular}
\end{table}
\subsection{Architecture for Temporal Processor (TP)}
Eqn.(\ref{eq3}) shows that the Haar wavelet transform depends on two adjacent pixel values. As soon as the spatial processors provide the 2-D DWT results, the temporal processors start processing them to produce the 3-D DWT results. Fig.~\ref{blockdia_1}(b) shows that no temporal buffer is required, since the sub-band coefficients of the two spatial processors are directly connected to the four temporal processors; however, since each TP is designed with 2 pipeline stages, 8 pipeline registers are required per TP. The same-frequency sub-bands of the two spatial processors are fed to each temporal processor, i.e., the LL, HL, LH, and HH sub-bands of spatial processors 1 and 2 are given as inputs to temporal processors 1, 2, 3, and 4, respectively. Each temporal processor applies the 1-D Haar wavelet to the sub-band coefficients and produces a low-frequency and a high-frequency sub-band as output. Combining the low-frequency and high-frequency sub-bands of all temporal processors yields the 3-D DWT output in the form of an L-frame and an H-frame (2-D DWT by the spatial processors and 1-D DWT by the temporal processors).
\section{Architecture for Compressed Sensing Module}
\begin{figure}
\centering
\includegraphics [height=80mm,width=90mm]{cs_mod.pdf}
\caption{Internal architecture of CS module}
\label{cs_mod}
\end{figure}
\par The proposed 3-D DWT module works simultaneously on two video frames of size $N\times N$ and provides eight 3-D DWT sub-bands as output. As shown in Fig. \ref{blockdia_1}(b), CS is applied to all sub-bands of the 3-D DWT output except the LLL band (LL band of the L-frame), with one CS module per sub-band. For one level of decomposition, the size of each sub-band is N/2$\times$N/2. The main function of the CS module is to calculate the measurement vector $y$ from $\Phi$ and $x$ using the CS equation $y = \Phi x$, where $x$ is the input vector. The size of $x$ equals P*N/2 (N/2 being the height of a single column in a sub-band), because the proposed 3-D DWT works simultaneously on P columns owing to the P PUs in the spatial processor. The proposed architecture is designed with P = 2, so in each clock the 3-D DWT module provides coefficients from alternating columns for each sub-band. With P = 2, the size of $x$ is [2*N/2]$\times$1 = N$\times$1, and $\Phi$ is a randomly generated Bernoulli matrix of size M$\times$N, with $M \ge cK\log (N/K)$ for some small constant c, where $K$ is the sparsity ($\ell_0$-norm) of the input vector $x$. We have tested different video sequences of sizes 512$\times$512 and 1024$\times$1024 with different threshold values (a wavelet coefficient below the threshold is considered zero) and observed that the value of K is never more than N/8 for a given $x$ of size N$\times$1. Based on these observations, the value of $M$ has been fixed at N/4.
\par Fig. \ref{cs_mod} shows the internal architecture of the CS module. The proposed CS-based encoder has seven CS modules, one for each sub-band except the LLL sub-band; all seven have the same structure and work simultaneously. A single Bernoulli matrix is used for all seven CS modules and is stored in a ROM denoted Bern$\_$mat. The size of Bern$\_$mat is M$\times$N: each of the N locations holds M bits representing one entire column. The Bernoulli matrix is generated using the `binornd' function in Matlab ($\Phi$ = binornd(1,0.5,M,N)), with equal probability of 0 and 1; here bit `0' represents the value `+1' and bit `1' represents `-1'. The generated Bernoulli matrix is loaded into the Bern$\_$mat (ROM) locations and used by all CS modules. As shown in Fig. \ref{cs_mod}, the input of a CS module is data$\_$in, which is a sub-band output of the 3-D DWT; one 15-bit data$\_$in arrives every clock (alternating columns per clock). In $y = \Phi x$, $y$ is a column vector of size M$\times$1, represented as $y$ = [$y_{0}$, $y_{1}$, $y_{2}$, $y_{3}$, \ldots, $y_{M-1}$]$^{T}$ with $y_{i}$=$\sum\limits_{k = 1}^N {{\Phi _{ik}}{x_k}}$; it can also be computed iteratively, with $y_{i}(n+1)$ = $y_{i}(n)$ + $\Phi _{ik}{x_k}(n+1)$ at the $(n+1)^{th}$ clock, requiring N clocks to complete since $k$ runs from 0 to N-1.
\par The proposed architecture uses M adders, one per measurement $y_{i}$. One adder input is $\Phi _{ik}{x_k}$, the output of a multiplexer: since $\Phi _{ik}$ is either 0 or 1, if $\Phi _{ik}$ is 0 then $x_{k}$ is multiplied by +1 ($\Phi _{ik}x_{k} = $ data$\_$in), otherwise by -1 ($\Phi _{ik}x_{k}$ = 2's complement of data$\_$in). This is realized by connecting $\Phi _{ik}$ to the selection line of the multiplexer, whose first and second inputs are $x_{k}$ and $-x_{k}$, respectively. The second adder input is the partial result of $y_{i}$ from the previous clock. The CS module uses two registers to store the M measurements ($y$), namely Y$\_$msr1 and Y$\_$msr2, each of capacity M*16 bits (16 bits per measurement). Y$\_$msr1 stores the partial results of $y_{i}$ over clocks 0 to N-1; just after N clocks, the measurements are ready in Y$\_$msr1, and the control circuit transfers the Y$\_$msr1 data to Y$\_$msr2 and clears Y$\_$msr1 for the next set of measurements. This procedure is repeated for all the columns of a sub-band while, at the same time, the computed 16-bit measurements $y_{i}$ are output (Y$\_$out) from Y$\_$msr2 by shifting 16 bits per clock. The same procedure is followed for all seven sub-bands. Each measurement vector $y$ is sent to the entropy coding (Golomb--Rice coding) block, and the coded bit streams are transmitted over the channel. The LLL sub-band is coded directly by the entropy coding block and transmitted as the base layer. Entropy coding is beyond the scope of this paper and is not discussed further.
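The multiplier-free accumulation performed by the M adders can be modelled as follows (a behavioural sketch, not the RTL; the function name is ours, and ROM bit 0 encodes +1 while bit 1 encodes -1, as described above):

```python
import numpy as np

def cs_measure_column(bern_bits, x):
    """Accumulate y_i <- y_i + (+x_k or -x_k), one input sample per clock.

    bern_bits: (M, N) 0/1 ROM contents (0 -> +1, 1 -> -1)
    x:         length-N column of sub-band samples (data_in stream)
    """
    M, N = bern_bits.shape
    y = np.zeros(M)                                        # Y_msr1 partial sums
    for k in range(N):                                     # clocks 0 .. N-1
        y += np.where(bern_bits[:, k] == 0, x[k], -x[k])   # mux + adder
    return y
```

The result matches the dense product $y = \Phi x$ with $\Phi = 1 - 2\cdot\texttt{bern\_bits}$, but needs no multipliers.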
\section{Results and Performance Comparison}
\subsection{Simulation Results}
The proposed encoder has been simulated in Matlab, and its functionality has been verified on the $cyclone$ (downloaded from the NASA website) and $clock$ video sequences of 512$\times$512 resolution and on the $viplane$ and $foreman$ video sequences of 256$\times$256 resolution. After applying the 3-D DWT, the HL, LH, and HH sub-bands of the L-frames and the LL, HL, LH, and HH sub-bands of the H-frames are sent to CS. The CS measurements of the 3-D DWT coefficients are then passed through the entropy coder (Golomb--Rice coding + run-length encoding). The percentage of measurements is calculated before entropy coding. The compression ratio is the ratio of the total number of bits in the input frame to the number of bits after entropy coding. Table \ref{table_1} shows that the performance of the proposed framework competes with the existing IBMCTF \cite{CS_4} and H.264 \cite{H.264}. The compression ratio and PSNR of the proposed encoder and decoder for the $clock$, $cyclone$, and $viplane$ video sequences, from level 1 to level 3, are listed in Table \ref{table_2}.
\begin{table}
\begin{center}
\caption{Performance of Proposed Framework with IBMCTF and H.264}
\label{table_1}
\begin{tabular}{|l|l|c|c|}
\hline
CODEC & Video & Compression Ratio & PSNR (dB) \\ \hline
\multirow{3}{*}{Proposed} & clock & 24.24 & 44.01 \\ \cline{2-4}
& cyclone & 16.85 & 34.2 \\ \cline{2-4}
& viplane & 20.96 & 37.5 \\ \hline
\multirow{3}{*}{IB-MCTF \cite{CS_4}} & clock & 7.33 & 46.2 \\ \cline{2-4}
& cyclone & 5.08 & 40.6 \\ \cline{2-4}
& viplane & 5.28 & 47.33 \\ \hline
\multirow{3}{*}{H.264 \cite{H.264}} & clock & 62.33 & 42.65 \\ \cline{2-4}
& cyclone & 22.1 & 38.4 \\ \cline{2-4}
& viplane & 37.8 & 40.57 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table*}
\begin{center}
\caption{Performance of the proposed framework for different video sequences and different levels}
\label{table_2}
\begin{tabular}{|l|c|c|c|c|}
\hline
\multirow{2}{*}{Video clip} & \multirow{2}{*}{level} & \multirow{2}{*}{PSNR} & \multirow{2}{*}{Compression Ratio} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}\% of measurements \\ before Entropy coding\end{tabular}} \\
& & & & \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Clock\\ 512x512\\ (Slow motion)\end{tabular}} & 1 & 44 & 24.24 & 34.99 \\ \cline{2-5}
& 2 & 33.2 & 41.67 & 23.82 \\ \cline{2-5}
& 3 & 30.12 & 53.23 & 20.52 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Cyclone\\ 512x512\\ (High motion)\end{tabular}} & 1 & 34 & 16.85 & 43.7 \\ \cline{2-5}
& 2 & 29 & 20.56 & 38.61 \\ \cline{2-5}
& 3 & 25.5 & 23.3 & 36.6 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Viplane\\ 256x256\\ (Medium motion)\end{tabular}} & 1 & 37.5 & 20.96 & 32.7 \\ \cline{2-5}
& 2 & 31.5 & 35.63 & 23.5 \\ \cline{2-5}
& 3 & 28 & 65.54 & 18.12 \\ \hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Synthesis Results}
The proposed architecture for the CS-based low-complexity video encoder has been described in Verilog HDL. Simulation results have been verified using the Xilinx ISE simulator. We have simulated a Matlab model equivalent to the proposed architecture and verified the 3-D DWT coefficients and CS measurements; the RTL simulation results exactly match the Matlab simulation results. The Verilog RTL code is synthesized using the Xilinx ISE 14.2 tool and mapped to a Xilinx FPGA 7z020clg484 (Zynq board) with speed grade -3. Table \ref{FPGA_results} shows the device utilization summary of the proposed architecture, which operates at a maximum frequency of 265 MHz. The proposed architecture has also been synthesized using the Synopsys Design Compiler with a 90-nm CMOS standard-cell library. Synthesis results for the proposed encoder are provided in Table \ref{enc_synopsys}: it consumes 90.08 mW of power and occupies an area equivalent to 416.799 K gates at a frequency of 158 MHz.
\begin{table}
\centering
\caption{Device utilization summary of the proposed Encoder}
\label{FPGA_results}
\begin{tabular}{|l|c|c|c|}
\hline
Logic utilized & Used & Available & Utilization (\%) \\ \hline
Slice Registers & 15917 & 106400 & 14\% \\ \hline
Number of Slice LUTs & 47303 & 53200 & 88\% \\ \hline
Number of fully & \multirow{2}{*}{15523} & \multirow{2}{*}{47697} & \multirow{2}{*}{32\%} \\
used LUT-FF pairs & & & \\ \hline
Number of Block RAM & 3 & 140 & 2\% \\ \hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Synthesis Results for Proposed Encoder}
\label{enc_synopsys}
\begin{tabular}{|l|c|}
\hline
Combinational Area & 1072673 $ \mu m^{2} $ \\ \hline
Non Combinational Area & 915778 $ \mu m^{2} $ \\ \hline
Total Cell Area & 1988451 $ \mu m^{2} $ \\ \hline
Interconnect area & 316449 $ \mu m^{2} $ \\ \hline
Operating Voltage & 1.2 V \\ \hline
Total Dynamic Power & 80.17 mW \\ \hline
Cell Leakage Power & 9.90 mW \\ \hline
\end{tabular}
\end{table}
\subsection{Comparison}
Table \ref{3dcompare} compares the proposed 3-D DWT architecture with existing 3-D DWT architectures. The proposed design has a lower memory requirement, higher throughput, less computation time and minimal latency compared with \cite{3D_weeks}, \cite{3D_tagavi}, \cite{3D_swapna}, and \cite{3D_darji}. Although the proposed 3-D DWT architecture is at a small disadvantage in area and frequency compared to \cite{3D_swapna}, it has a clear advantage in all remaining aspects.
Table \ref{3d_asic} compares the synthesis results of the proposed 3-D DWT architecture with those of \cite{3D_darji}. The proposed design appears to occupy more cell area, but this figure includes the total on-chip memory, whereas on-chip memory is not included in \cite{3D_darji}. The power consumption of the proposed 3-D architecture is much lower than that of \cite{3D_darji}.
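To illustrate the memory advantage, the formulas in Table \ref{3dcompare} can be evaluated directly. The frame width $N$ and parallelism $P$ used below are assumed example values, not parameters reported in the paper.

```python
def mem_proposed(n, p):
    """Proposed 3-D DWT: 2*(3N + 40P) words of internal memory
    (memory-requirement row of the comparison table)."""
    return 2 * (3 * n + 40 * p)

def mem_darji(n):
    """Darji et al.: 4N^2 + 10N words."""
    return 4 * n * n + 10 * n

# Assumed example: N = 512 frame width, P = 4 parallel processing units
n, p = 512, 4
saving_factor = mem_darji(n) / mem_proposed(n, p)  # on-chip memory reduction
```

For these example values the proposed formula gives 3392 words against over a million words for the $4N^2+10N$ scheme, which is why the frame-sized buffers dominate the comparison.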
\begin{table}
\centering
\caption{Comparison of proposed 3-D DWT architecture with existing architectures (for 1-level)}
\label{3dcompare}
\resizebox{\columnwidth}{!}
{\begin{tabular}{|l|l|l|l|l|l|}
\hline
Parameters & Weeks \cite{3D_weeks} & Taghavi \cite{3D_tagavi} & A.Das \cite{3D_swapna} & Darji \cite{3D_darji} & Proposed \\ \hline
\hline
Memory requirement & $6N^{2} $+$ 6l$ & $5N^{2} $ & $5N^{2} $ + 5N & $4N^{2} $ + 10N & 2*(3N+40P) \\ \hline
Throughput/cycle & - & 1 result & 2 results & 4 results & 8 results \\ \hline
Computing time & \multirow{2}{*}{$2N^{2}$ + $ 3l$/2 } & \multirow{2}{*}{$6N^{2} $} & \multirow{2}{*}{$3N^{2} $} & \multirow{2}{*}{$3N^{2} $} & \multirow{2}{*}{ $N^{2}$/2P } \\ For 2 Frames & & & & & \\ \hline
Latency & $2.5N^{2} $ + 0.5$ l $ & $4N^{2} $ cycles & $2N^{2} $ cycles & $3N^{2} $/2 cycles & 12 cycles \\ \hline
Area & - & - & 1825 slices & 2490 slices & 2852 slice LUTs \\ \hline
Operating & \multirow{2}{*}{200 MHz (ASIC)} & \multirow{2}{*}{-} & 321 MHz & 91.87 MHz & 265 MHz \\
Frequency & & & (FPGA) & (FPGA) & (FPGA) \\ \hline
Multipliers & - & - & Nil & 30 & Nil \\ \hline
Adders & $ 6l$ MACs & - & 78 & 48 & 176 \\ \hline
Filter bank & $l$-length & D-9/7 & D-9/7 & D-9/7 & D-9/7 (2-D) + Haar (1-D) \\ \hline
\end{tabular}}
\end{table}
\begin{table}
\centering
\caption{Synthesis Results (Design Vision) Comparison of Proposed 3-D DWT architecture with existing}
\label{3d_asic}
\begin{tabular}{|l|c|c|}
\hline
Parameters & Darji et al.,\cite{3D_darji} & Proposed \\ \hline
Combinational Area & 61351 $ \mu m^{2} $ & 526419 $ \mu m^{2} $ \\ \hline
Non Combinational Area & 807223 $ \mu m^{2} $ & 553078 $ \mu m^{2} $ \\ \hline
Total Cell Area & 868574 $ \mu m^{2} $ & 1079498 $ \mu m^{2} $ \\ \hline
Operating Voltage & 1.98 V & 1.2 V \\ \hline
Total Dynamic Power & 179.75 mW & 38.56 mW \\ \hline
Cell Leakage Power & 46.87 $ \mu W $ & 4.86 mW \\ \hline
\end{tabular}
\end{table}
\section{Conclusions}
In this paper, we have proposed a memory-efficient and high-throughput architecture for a CS-based low-complexity encoder. The proposed architecture is implemented on a 7z020clg484 FPGA target of the Zynq family, and is also synthesized with the Synopsys Design Compiler for ASIC implementation. The efficient design of the 2-D spatial processor and 1-D temporal processor reduces the internal memory, latency, CPD and control-unit complexity, and increases the throughput. Compared with the existing architectures, the proposed scheme shows higher performance at the cost of a slight increase in area. The proposed encoder architecture is capable of computing 60 UHD (3840$ \times $2160) frames per second and is also suitable for scalable video coding. In addition, the complexity of the encoder is reduced to a great extent, making the proposed encoder suitable for applications including satellite communication, wireless transmission and data compression in high-speed cameras.
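The 60-fps UHD claim can be sanity-checked with back-of-the-envelope arithmetic, assuming the 8-results-per-cycle throughput and the 265 MHz FPGA clock reported above; treating every clock cycle as producing 8 output coefficients is our simplifying assumption, ignoring latency and control overhead.

```python
def max_frame_rate(clock_hz, results_per_cycle, width, height):
    """Upper-bound frame rate when each cycle yields `results_per_cycle`
    output coefficients (throughput figure from the comparison table)."""
    cycles_per_frame = width * height / results_per_cycle
    return clock_hz / cycles_per_frame

fps_bound = max_frame_rate(265e6, 8, 3840, 2160)  # about 255 fps >> 60 fps
```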
\section{Introduction}
Plasmas with massive charged dust grains are of interest both in space (e.g., cometary tails, planetary rings, the interstellar medium, Earth's magnetosphere and upper atmosphere, and Earth's D and lower E regions) \cite{shukla2002} and in laboratory plasmas \cite{geortz1989,merlino1998}. Furthermore, dusty plasmas containing both positive and negative ions and a very small percentage of free electrons (or almost electron-free) are found both in naturally occurring plasmas (e.g., Earth's D and lower E regions, the F-ring of Saturn, the nighttime polar mesosphere) \cite{geortz1989,narcisi1971,rapp2005} and in plasmas used for technological applications \cite{choi1993,yabe1994}. There are several motivations for investigating such plasmas and the associated waves and instabilities, owing to their potential applications in both laboratory and space plasma environments (see, e.g., Refs. \onlinecite{kim2006,kim2013,rosenberg2007,misra2012,misra2013,rehman2012,ghosh2013,oohara2005,saleem2007}). In dusty plasmas, the size of the dust grains may vary in the range $0.05-10\mu$m, their mass is about $10^6-10^{12}$ times the ion mass, and they have charge numbers $z_d$ in the range $10^3-10^5$ \cite{shukla2002}. In typical dusty plasmas with electrons and ions, dust grains are negatively charged owing to the high mobility of electrons onto the dust grain surface \cite{shukla2002}. However, recent laboratory experiments \cite{kim2006,kim2013,rosenberg2007} suggest that in dusty negative-ion plasmas, where positive ions are the more mobile species, dusts can be positively charged when $m_n>m_p$, $n_n\gg n_e$ and $T_p>T_n$ are satisfied, where $m_j$, $n_j$ and $T_j$ are the mass, number density and thermodynamic temperature of $j$-species particles, with $j=e,p,n$ standing for electrons, positive ions and negative ions, respectively. On the other hand, when charged dust grains collect all the electrons from the background plasma, they become negatively charged.
Such electron density depletion, associated with the capture of electrons by aerosol particles, has been observed in the summer polar mesosphere at about 85 km altitude \cite{reid1990}. Furthermore, positively charged nanometer-sized particles were observed in the nighttime polar mesosphere in a region (altitude range 80--90 km) dominated by positive and negative ions with a very small percentage of electrons \cite{rapp2005}.
It has been found that the presence of charged dust grains modifies the plasma wave phenomena; in particular, they give rise to a new low-frequency eigenmode, the dust-acoustic wave (DAW), which was first theoretically predicted by Rao \textit{et al.} \cite{rao1990}. However, the presence of negative ions in dusty plasmas can significantly modify not only the dispersion properties of DAWs but also some nonlinear localized structures (for recent theoretical developments and experiments in negative-ion plasmas, readers are referred to Refs. \onlinecite{misra2012,misra2013,rehman2012,ghosh2013}). On the other hand, these waves can undergo collisionless damping due to the resonance of (trapped and/or free) particles with the wave, i.e., when the particle velocity is nearly equal to the wave phase velocity \cite{landau1946}. The linear electron Landau damping of nonlinear ion-acoustic solitary waves was first studied by Ott and Sudan \cite{ott1969,ott1970}, who neglected particle trapping on the assumption that the particle trapping time is much longer than the Landau damping time. They derived a Korteweg-de Vries (KdV) equation with a source term that models the lowest-order effects of resonant particles. It was demonstrated that an initial wave form may or may not steepen, depending on the relative size of the nonlinearity compared to the Landau damping. The latter was also shown to cause decay of the wave amplitude with time.
In the past, several authors have studied the effects of Landau damping on the nonlinear propagation of electrostatic solitary waves in different plasma systems. For example, Bandyopadhyay {\it et al.} \cite{bandyo2002a,bandyo2002b} investigated the propagation characteristics of ion-acoustic solitary waves with Landau damping in nonthermal plasmas. Ghosh {\it et al.} \cite{ghosh2011} considered the effects of linear Landau damping on nonlinear ion-acoustic solitary waves in electron-positron-ion plasmas. They studied the properties of the wave phase velocity, the solitary wave amplitude and the Landau damping rate, including the important effects of positron density and temperature. Motivated by these recent theoretical developments as well as experimental observations of low-frequency electrostatic waves in pair-ion plasmas \cite{kim2006,kim2013,rosenberg2007,misra2012,misra2013,rehman2012,ghosh2013,oohara2005,saleem2007}, we investigate the Landau damping of dust-acoustic (DA) solitary waves in dusty pair-ion plasmas (quite distinct from electron-positron-ion plasmas and dusty electron-ion plasmas). We show that for typical plasma parameters relevant to laboratory and space environments, the Landau damping (and the nonlinear) effect is stronger than the finite Debye length (dispersive) effect, so that the KdV soliton theory is not applicable to DAWs in dusty pair-ion plasmas. The Landau damping is shown to cause the wave amplitude to decay with time. It is found that, in contrast to electron-ion or electron-positron-ion plasmas, the nonlinearity in the KdV equation can never vanish in dusty pair-ion plasmas with positively or negatively charged dusts; hence a modified KdV equation is not required for the evolution of DAWs.
The properties of the linear phase velocity, the solitary wave amplitude (in the presence and absence of Landau damping) and the Landau damping rate are also analyzed with the effects of the positive ion to dust density ratio $(\mu_{pd})$ as well as the ratios of positive to negative ion temperatures $(\sigma)$ and masses $(m)$.
\section{Basic Equations}
We consider the nonlinear propagation of electrostatic DAWs in an unmagnetized collisionless dusty plasma consisting of singly charged adiabatic positive and negative ions, and positively or negatively charged mobile dusts. The latter are assumed to have equal mass and charge, which are treated as constants. It is to be noted that the dust charge fluctuation process may introduce a new low-frequency eigenmode as well as another dissipative effect (wave damping) into the system. However, we neglect this charge fluctuation effect on the propagation of dust-acoustic waves on the assumption that the charging rate of the dust grains is very high compared with the dust plasma oscillation frequency. Collisions of all particles are also neglected in the considered time interval. Furthermore, in dusty plasmas the charge-to-mass ratio of the dust grains remains much smaller than those of both positive and negative ions. It is also assumed that the size of the grains is small compared with the average distance between them. The unperturbed state is overall neutral, so the internal electric field is zero. Also, the curl of the electric field vanishes (i.e., the field is electrostatic), and the perturbation about the equilibrium state is weak. At equilibrium, the overall charge neutrality condition reads
\begin{equation}
n_{p0}+\zeta z_dn_{d0}=n_{n0}, \label{charge-neutrality}
\end{equation}
where $n_{j0}$ is the unperturbed number density of species $j$ ($j$=$p$, $n$, $d$ respectively stand for positive ions, negative ions, and dynamical charged dusts), $z_d$ ($>0$) is the unperturbed dust charge state and $\zeta=\pm1$ according to when dusts are positively or negatively charged.
The cold dust is described by a set of fluid equations \eqref{cont-eqn}-\eqref{montm-eqn}, whereas the two species of ions (positive and negative) are described by the kinetic Vlasov equation \eqref{Vlasov-eqn}. The electric potential $\phi$ is described by the Poisson equation \eqref{Poisson-eqn}. Thus, the basic equations for the dynamics of charged particles in one space dimension are
\begin{equation}
\frac{\partial n_d}{\partial t}+\frac{\partial (n_du_d)}{\partial x}=0, \label{cont-eqn}
\end{equation}
\begin{equation}
\frac{\partial u_d}{\partial t}+u_d\frac{\partial u_d}{\partial x}=-\frac{q_d}{m_d}\frac{\partial \phi}{\partial x}, \label{montm-eqn}
\end{equation}
\begin{equation}
\frac{\partial f_j}{\partial t}+v\frac{\partial f_j}{\partial x}-\frac{q_j}{m_j}\frac{\partial \phi}{\partial x}\frac{\partial f_j}{\partial v}=0, \label{Vlasov-eqn}
\end{equation}
\begin{equation}
\frac{\partial^2\phi}{\partial x^2}=-4\pi e(n_p-n_n+\zeta z_dn_d), \label{Poisson-eqn}
\end{equation}
where $j$ stands for $p$, $n$ denoting, respectively, the positive and negative ions. The ion densities are given by
\begin{equation}
n_j=\int_{-\infty}^{\infty} f_j dv. \label{density-eqn}
\end{equation}
In equations \eqref{cont-eqn}-\eqref{density-eqn} $n_d$, $u_d$, $q_d(=\pm z_de)$, $m_d$, respectively, denote the number density, fluid velocity, charge and mass of dust grains. Also, $v$ is the particle's velocity, and $f_j$, $m_j$ and $n_j$, respectively, denote the velocity distribution function, mass and number densities of $j$-species ions. Furthermore, $\phi$ is the electrostatic potential and $x$ and $t$ are the space and time coordinates.
Equations \eqref{cont-eqn}-\eqref{density-eqn} can be recast in terms of dimensionless variables. We normalize the physical quantities according to $u_d\rightarrow u_d/c_s$, $\phi\rightarrow e\phi/k_BT_p$, $n_j\rightarrow n_j/n_{j0}$, $n_d\rightarrow n_d/n_{d0}$, $f_j\rightarrow f_jv_{tj}/n_{j0}$, $v\rightarrow v/v_{tp}$, where $c_s=\sqrt {z_dk_BT_p/m_d}=\omega_{pd}\lambda_D$ is the DA speed with $\omega_{pd}=\sqrt{4\pi n_{d0}z^2_d e^2/m_d}$ and $\lambda_D=\sqrt{k_BT_p/4\pi n_{d0}z_d e^2}$ denoting, respectively, the dust plasma frequency and the plasma Debye length. Here, $k_B$ is the Boltzmann constant, $T_j$ is the thermodynamic temperature and $v_{tj}(=\sqrt{k_BT_j/m_j})$ is the thermal velocity of $j$-species ions. The space and time variables are normalized by $L$ and $L/c_s$ respectively, where $L$ is the characteristic scale length for variations of $n_j,~u_d,~\phi,~f_j$ etc.
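As a quick numerical illustration of these normalizing scales, one may evaluate $\omega_{pd}$, $\lambda_D$ and $c_s$ directly; the parameter values below are typical orders of magnitude for laboratory dusty plasmas, assumed for illustration rather than taken from the references.

```python
import math

# CGS constants
e_cgs = 4.8032e-10   # elementary charge, statcoulomb
k_B = 1.3807e-16     # Boltzmann constant, erg/K
m_p = 1.6726e-24     # proton mass, g

def dusty_scales(n_d0, z_d, T_p, m_d):
    """Dust plasma frequency, Debye length and dust-acoustic speed
    used in the normalization (CGS units):
      omega_pd = sqrt(4 pi n_d0 z_d^2 e^2 / m_d),
      lambda_D = sqrt(k_B T_p / (4 pi n_d0 z_d e^2)),
      c_s      = sqrt(z_d k_B T_p / m_d)."""
    omega_pd = math.sqrt(4 * math.pi * n_d0 * (z_d * e_cgs) ** 2 / m_d)
    lambda_D = math.sqrt(k_B * T_p / (4 * math.pi * n_d0 * z_d * e_cgs ** 2))
    c_s = math.sqrt(z_d * k_B * T_p / m_d)
    return omega_pd, lambda_D, c_s

# Assumed values: n_d0 = 1e4 cm^-3, z_d = 1e3, T_p = 300 K, m_d = 1e9 m_p
w_pd, l_D, c_s = dusty_scales(1e4, 1e3, 300.0, 1e9 * m_p)
```

Note that the identity $c_s=\omega_{pd}\lambda_D$ stated above holds exactly for any choice of parameters, which serves as a consistency check on the three formulas.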
Thus, from equations \eqref{cont-eqn}-\eqref{density-eqn}, we obtain the following set of equations in dimensionless form:
\begin{equation}
\frac{\partial n_d}{\partial t}+\frac{\partial (n_du_d)}{\partial x}=0, \label{cont-eqn-nond}
\end{equation}
\begin{equation}
\frac{\partial u_d}{\partial t}+u_d\frac{\partial u_d}{\partial x}=-\zeta\frac{\partial \phi}{\partial x},\label{montm-eqn-nond}
\end{equation}
\begin{equation}
\delta\frac{\partial f_j}{\partial t}+v\frac{\partial f_j}{\partial x}-\zeta_j\frac{m_p}{m_j}\frac{\partial \phi}{\partial x}\frac{\partial f_j}{\partial v}=0, \label{Vlasov-eqn-nond}
\end{equation}
\begin{equation}
\frac{\lambda_D^2}{L^2} \frac{\partial^2\phi}{\partial x^2}=\mu_{nd}n_n-\mu_{pd}n_p-\zeta n_d, \label{Poisson-eqn-nond}
\end{equation}
\begin{equation}
n_j=\sqrt{\frac{m_j}{m_p}\frac{T_p}{T_j}}\int_{-\infty}^{\infty} f_j dv, \label{density-eqn-nond}
\end{equation}
together with the charge neutrality condition given by
\begin{equation}
\mu_{pd}+\zeta=\mu_{nd}. \label{charge-neutrality-nond}
\end{equation}
In Eq. \eqref{Vlasov-eqn-nond}, $\zeta_j=\pm1$ for positive $(j=p)$ and negative $(j=n)$ ions and $\delta=\sqrt{z_dm_p/m_d}$. Also, in Eq. \eqref{Poisson-eqn-nond}, $\mu_{jd}=n_{j0}/z_dn_{d0}$ for $j=p$ and $n$. The basic parameters can be defined as follows:
\begin{itemize}
\item{ $\delta\equiv\sqrt{z_dm_p/m_d}$ (for positive ions) and $m\delta$ (for negative ions with $m=m_n/m_p$), which represent the effects due to ion inertias, and, in particular, Landau damping by both the positive and negative ions.}
\item{$n_{d1}/n_{d0}$, the ratio of perturbed density to its equilibrium value. This measures the strength of the nonlinearity in electrostatic disturbances. }
\item{$\lambda_D^2/L^2$: This is a measure of the strength of the wave dispersion due to the deviation from quasineutrality. Here, $L$ represents the characteristic scale length for variations of the physical quantities, namely $n_d,~u_d,~\phi$, etc. This parameter disappears from the left-hand side of Eq. \eqref{Poisson-eqn-nond} if one normalizes $x$ by $\lambda_D$ instead of $L$. }
\end{itemize}
Note that if the ions are inertialess compared to the massive charged dusts, the ratios $\delta$ and $m\delta$ can be neglected in Eq. \eqref{Vlasov-eqn-nond}, and one can replace this equation (hence disregarding the Landau damping effects) by the Boltzmann distributions of ions. On the other hand, when the dust grains are considered cold, i.e., $T_d=0$, the Landau damping is provided solely by the ions and the damping rate is proportional to $\delta$ or $m\delta$. Since one of our main interests is to study the interplay among the nonlinearity, the dispersion and the Landau damping effects, we consider \cite{ott1969}
\begin{itemize}
\item{$\delta=\alpha_1\epsilon$,}
\item{$n_{d1}/n_{d0}=\alpha_2\epsilon$,}
\item{$\lambda_D^2/L^2=\alpha_3\epsilon$,}
\end{itemize}
where $\epsilon(>0)$ is a smallness parameter and each $\alpha_j$ ($j=1,2,3$) is a constant assumed to be of order unity. Then from Eq. \eqref{Vlasov-eqn-nond} we obtain the following two equations for the positive and negative ions:
\begin{equation}
\alpha_1\epsilon\frac{\partial f_p}{\partial t}+v\frac{\partial f_p}{\partial x}-\frac{\partial\phi}{\partial x}\frac{\partial f_p}{\partial v}=0,\label{p_Vlasov-eqn-nond}
\end{equation}
and
\begin{equation}
\alpha_1\epsilon\frac{\partial f_n}{\partial t}+v\frac{\partial f_n}{\partial x}+\frac1m\frac{\partial\phi}{\partial x}\frac{\partial f_n}{\partial v}=0.\label{n_Vlasov-eqn-nond}
\end{equation}
\section{Derivation of KdV equation with Landau damping}
We note that in the limit of $\epsilon\rightarrow0$ (i.e., in the small-amplitude limit in which ions are inertialess and the characteristic scale length is much larger than the Debye length), Eqs. \eqref{cont-eqn-nond}-\eqref{Poisson-eqn-nond} yield the simple linear dispersion law (in nondimensional form):
\begin{equation}
v_p\equiv\omega/k=\left(\mu_{pd}+\sigma\mu_{nd}\right)^{-1/2}, \label{disp-relation}
\end{equation}
where $\omega$ and $k$ are the wave frequency and the wave number of plane wave perturbations. Equation \eqref{disp-relation} shows that the wave becomes dispersionless with the phase speed smaller than the dust-acoustic speed $c_s$. In other words, in a frame moving at the speed $v_p$, the time derivatives of all physical quantities should vanish. Thus, for a finite $\epsilon$ with $0<\epsilon\lesssim1$, we can expect slow variations of the wave amplitude in the moving frame of reference, and so introduce the stretched coordinates as \cite{taniuti1969}
\begin{equation}
\xi=\epsilon^{1/2} (x-Mt),~\tau=\epsilon^{3/2}t, \label{stretching}
\end{equation}
where $M$ is the nonlinear wave speed (relative to the frame) normalized by $c_s$, to be shown to be equal to $v_p$ later.
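In the stretched coordinates \eqref{stretching} the space and time derivatives transform, by the chain rule, as
\begin{equation}
\frac{\partial}{\partial x}=\epsilon^{1/2}\frac{\partial}{\partial\xi},\qquad \frac{\partial}{\partial t}=\epsilon^{3/2}\frac{\partial}{\partial\tau}-\epsilon^{1/2}M\frac{\partial}{\partial\xi},
\end{equation}
so that each spatial derivative carries a factor $\epsilon^{1/2}$ and the slow time derivative is smaller by a further factor of $\epsilon$.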
The dependent variables are expanded in powers of $\epsilon$ about the equilibrium state as
\begin{eqnarray}
n_d&&=1+\alpha_2\epsilon n^{(1)}_d+\alpha_2^2\epsilon^2 n^{(2)}_d+\cdots,\notag \\
u_d&&=\alpha_2\epsilon u^{(1)}_d+\alpha_2^2\epsilon^2 u^{(2)}_d+\cdots,\notag \\
\phi&&=\alpha_2\epsilon\phi^{(1)}+\alpha_2^2\epsilon^2\phi^{(2)}+\cdots,\label{expantions}\\
n_j&&=1+\alpha_2\epsilon n^{(1)}_j+\alpha_2^2\epsilon^2 n^{(2)}_j+\cdots,\notag \\
f_j&&=f^{(0)}_j+\alpha_2\epsilon f^{(1)}_j+\alpha_2^2\epsilon^2 f^{(2)}_j+\cdots,\notag
\end{eqnarray}
where $f^{(0)}_j$, for $j=p,n$, are assumed to be the Maxwellian given by
\begin{equation}
f^{(0)}_j=\sqrt{{1}/{2\pi}}\exp\left[\left(m_jT_p/m_pT_j\right)\left(-v^2/2\right)\right].\label{f_0 Maxwellian distribution}
\end{equation}
For convenience, we temporarily drop the constant $\alpha_2$ in the subsequent expressions and equations; we keep in mind, however, that $\alpha_1$, $\alpha_2$ and $\alpha_3$ will explicitly appear in the coefficients of the Landau damping, nonlinear and dispersive terms of the KdV equation. In what follows, we substitute the expressions from Eqs. \eqref{stretching} and \eqref{expantions} into Eqs. \eqref{cont-eqn-nond}, \eqref{montm-eqn-nond}, \eqref{Poisson-eqn-nond}, \eqref{density-eqn-nond}, \eqref{p_Vlasov-eqn-nond} and \eqref{n_Vlasov-eqn-nond}, and equate successively different powers of $\epsilon$. The results are given in the following subsections.
\subsection*{First-order perturbations and nonlinear wave speed}
Equating the coefficients of $\epsilon^{3/2}$ from Eqs. \eqref{cont-eqn-nond} and \eqref{montm-eqn-nond}, the coefficients of $\epsilon$ from Eqs. \eqref{Poisson-eqn-nond} and \eqref{density-eqn-nond}, and the coefficients of $\epsilon^{3/2}$ from Eqs. \eqref{p_Vlasov-eqn-nond} and \eqref{n_Vlasov-eqn-nond}, we successively obtain
\begin{equation}
n^{(1)}_d=u^{(1)}_d/M, \label{u^(1)-n^(1)}
\end{equation}
\begin{equation}
u^{(1)}_d=\zeta\frac{\phi^{(1)}}{M}, \label{u^(1)-phi^(1)}
\end{equation}
\begin{equation}
0=\mu_{nd}n^{(1)}_n-\mu_{pd}n^{(1)}_p-\zeta n^{(1)}_d, \label{mu_nd-mu_bd}
\end{equation}
\begin{equation}
n^{(1)}_j=\sqrt{\frac{m_j}{m_p}\frac{T_p}{T_j}}\int_{-\infty}^{\infty}f^{(1)}_jdv,\label{n^(1)_p-n^(1)_n}
\end{equation}
\begin{equation}
v\frac{\partial f^{(1)}_p}{\partial\xi}+vf^{(0)}_p\frac{\partial\phi^{(1)}}{\partial\xi}=0,\label{v_p-1}
\end{equation}
\begin{equation}
v\frac{\partial f^{(1)}_n}{\partial\xi}-\sigma vf^{(0)}_n\frac{\partial\phi^{(1)}}{\partial\xi}=0.\label{v_n-1}
\end{equation}
From Eqs.\eqref{u^(1)-n^(1)}, \eqref{u^(1)-phi^(1)} we obtain
\begin{equation}
n^{(1)}_d=\zeta\frac{\phi^{(1)}}{M^2}.\label{n^(1)-phi^(1)}
\end{equation}
Equation \eqref{v_p-1} yields \cite{ott1969}
\begin{equation}
\frac{\partial f^{(1)}_p}{\partial\xi}=-f^{(0)}_p\frac{\partial\phi^{(1)}}{\partial\xi}+\lambda(\xi,\tau)\delta(v),\label{for unique soln v_p}
\end{equation}
where $\delta(v)$ is the Dirac delta function and $\lambda(\xi, \tau)$ is an arbitrary function of $\xi$ and $\tau$. We find that the above solution for ${\partial f^{(1)}_p}/{\partial\xi}$ involves the arbitrary function $\lambda(\xi, \tau)$, and hence is not unique. Thus, for a unique solution to exist we follow Ref. \cite{ott1969} and include an extra higher-order term $\epsilon^{7/2}\alpha_1\left({\partial f^{(1)}_p}/{\partial\tau}\right)$, originating from the term $\epsilon^{5/2}\alpha_1\left({\partial f_p}/{\partial\tau}\right)$ in Eq. \eqref{p_Vlasov-eqn-nond} after substituting the expressions \eqref{stretching} and \eqref{expantions}. Thus, we write Eq. \eqref{v_p-1} as
\begin{equation}
\alpha_1\epsilon^2\frac{\partial f^{(1)}_{p\epsilon}}{\partial\tau}+v\frac{\partial f^{(1)}_{p\epsilon}}{\partial\xi}=-vf^{(0)}_p\frac{\partial\phi^{(1)}}{\partial\xi}.\label{v_p modfd-1}
\end{equation}
Similarly, from Eq. \eqref{v_n-1}, we have
\begin{equation}
\alpha_1\epsilon^2\frac{\partial f^{(1)}_{n\epsilon}}{\partial\tau}+v\frac{\partial f^{(1)}_{n\epsilon}}{\partial\xi}=\sigma vf^{(0)}_n\frac{\partial\phi^{(1)}}{\partial\xi}.\label{v_n modfd-1}
\end{equation}
The solutions of the initial value problems \eqref{v_p modfd-1} and \eqref{v_n modfd-1} are now unique, and $f^{(1)}_j$ can be found, once $f^{(1)}_{j\epsilon}$ ($j=p,n$) are known, by letting $\epsilon\rightarrow 0$ as
\begin{equation}
f^{(1)}_j=\lim_{\epsilon\rightarrow 0} f^{(1)}_{j\epsilon}.\label{unique sol_1}
\end{equation}
Next, taking the Fourier transform of Eq. \eqref{v_p modfd-1} with respect to $\xi$ and $\tau$ according to the formula
\begin{equation}
\hat{f}(\omega, k)=\int^{\infty}_{-\infty}\int^{\infty}_{-\infty}f(\xi, \tau)e^{i(k\xi-\omega\tau)}d\xi d\tau, \label{FT}
\end{equation}
we obtain
\begin{equation}
\hat{f}^{(1)}_{p\epsilon}=-\left(\frac{kvf^{(0)}_p}{kv-\epsilon^2\alpha_1\omega}\right)\hat{\phi}^{(1)}. \label{1FT f_p}
\end{equation}
We note that a singularity appears in Eq. \eqref{1FT f_p}. To avoid it, we replace $\omega$ by $\omega+i\eta$, where $\eta~(>0)$ is small, to obtain
\begin{equation}
\hat{f}^{(1)}_{p\epsilon}=-\left[\frac{kvf^{(0)}_p}{(kv-\epsilon^2\alpha_1\omega)-i\eta\alpha_1\epsilon^2}\right]\hat{\phi}^{(1)}. \label{2FT f_p}
\end{equation}
Proceeding to the limit $\epsilon\rightarrow 0$ and using the Plemelj formula
\begin{equation}
\lim_{\eta\rightarrow 0^+}\frac{1}{x+i\eta}=-i\pi\delta(x)+P\left(\frac{1}{x}\right),\label{Plmj formla}
\end{equation}
where $P$ and $\delta$, respectively, denote the Cauchy principal value and the Dirac delta function, we obtain
\begin{equation}
\hat{f}^{(1)}_p=-f^{(0)}_p\hat{\phi}^{(1)},\label{3FT f_p}
\end{equation}
in which we have used the properties $xP(1/x)=1$, $x\delta(x)=0$. Next, taking Fourier inversion of Eq. \eqref{3FT f_p}, we have
\begin{equation}
f^{(1)}_p=-f^{(0)}_p\phi^{(1)}.\label{4FT f_p}
\end{equation}
Proceeding in the same way as for the positive ions, we obtain from Eq. \eqref{v_n modfd-1} for the negative ions
\begin{equation}
f^{(1)}_n=\sigma f^{(0)}_n\phi^{(1)}.\label{4FT f_n}
\end{equation}
From Eqs. \eqref{4FT f_p} and \eqref{4FT f_n} and using Eq. \eqref{n^(1)_p-n^(1)_n} we obtain
\begin{equation}
n^{(1)}_p=-\phi^{(1)},\label{n^(1)_p}
\end{equation}
\begin{equation}
n^{(1)}_n=\sigma\phi^{(1)}.\label{n^(1)_n}
\end{equation}
Substituting the expressions for $n_j^{(1)}$ from Eqs. \eqref{n^(1)-phi^(1)}, \eqref{n^(1)_p} and \eqref{n^(1)_n} into Eq.\eqref{mu_nd-mu_bd}, we obtain the following expression for the nonlinear wave speed
\begin{equation}
M=\left(\mu_{pd}+\sigma\mu_{nd}\right)^{-1/2}\equiv\left[(1+\sigma)\mu_{pd}\pm\sigma\right]^{-1/2}.\label{phase-velocity}\end{equation}
As expected, the expression for $M$ is the same as that obtained in the linear dispersion law \eqref{disp-relation} for plane wave perturbations. The signs $\pm$ correspond to positively or negatively charged dusts. For typical laboratory \cite{kim2006,kim2013,rosenberg2007} and space plasma parameters \cite{rapp2005}, $\sigma\gtrsim1$. For positively charged dusts the expression inside the square root, $(1+\sigma)\mu_{pd}+\sigma$, clearly exceeds unity; for negatively charged dusts the charge neutrality condition \eqref{charge-neutrality-nond} requires $\mu_{pd}>1$, so that $(1+\sigma)\mu_{pd}-\sigma=\mu_{pd}+\sigma(\mu_{pd}-1)$ again exceeds unity. Hence $M<1$, i.e., the nonlinear wave speed is always lower than the dust-acoustic speed. Figure \ref{fig:fig1} shows that for plasmas with positively charged dusts (the solid, dashed and dotted lines) the value of $M$ decreases with increasing values of the positive to negative ion temperature ratio $\sigma$ as well as the density ratio $\mu_{pd}$. However, the value of $M$ is higher in the case of negatively charged dusts (compare the solid and dash-dotted lines).
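The trends shown in Fig. \ref{fig:fig1} can be reproduced numerically from Eq. \eqref{phase-velocity}; the parameter values below are illustrative choices, not the exact values used in the figure.

```python
def phase_speed(mu_pd, sigma, zeta):
    """Nonlinear DA wave speed M = [(1+sigma)*mu_pd + zeta*sigma]^(-1/2),
    with zeta = +1 for positively and -1 for negatively charged dust.
    For zeta = -1, charge neutrality requires mu_pd > 1."""
    return ((1 + sigma) * mu_pd + zeta * sigma) ** -0.5

# Illustrative values: mu_pd = 1.5, sigma = 2
m_pos = phase_speed(1.5, 2.0, +1)  # positively charged dust
m_neg = phase_speed(1.5, 2.0, -1)  # negatively charged dust, larger M
```

Consistent with the discussion above, both values are below the dust-acoustic speed ($M<1$), $M$ decreases as $\sigma$ increases, and the negatively charged dust case yields the larger phase speed.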
\begin{figure}[ht]
\centering
\includegraphics[height=2.5in,width=3.6in]{fig1}
\caption{The phase velocity $M$ of the nonlinear DAW [Eq. \eqref{phase-velocity}] is plotted against the temperature ratio $\sigma~(=T_p/T_n)$ for different values of the density ratio $\mu_{pd}~(=n_{p0}/z_dn_{d0})$ as shown in the figure. The acronym PCD (NCD) stands for the case of positively (negatively) charged dusts.}
\label{fig:fig1}
\end{figure}
\begin{figure*}[ht]
\centering
\subfigure[]{
\includegraphics[height=2.5in,width=3.3in]{fig2a}
\label{fig:fig2a}}
\quad
\subfigure[]{
\includegraphics[height=2.5in,width=3.3in]{fig2b}
\label{fig:fig2b}}
\caption{The linear Landau damping decrement $|\gamma|$ [Eq. \eqref{damping decrement}] is shown with the variations of the temperature ratio $\sigma~(=T_p/T_n)$ for different values of (a) the mass ratio $m~(=m_n/m_p)$ (left panel) and (b) the density ratios $\mu_{pd}~(=n_{p0}/z_dn_{d0})$ (right panel), keeping one parameter fixed at a time, as in the figure. The acronym PCD (NCD) stands for the case of positively (negatively) charged dusts.}
\label{fig:fig2}
\end{figure*}
\subsection*{Second-order perturbations}
Equating the coefficients of $\epsilon^{5/2}$ from Eqs. \eqref{cont-eqn-nond} and \eqref{montm-eqn-nond}, the coefficients of $\epsilon^2$ from Eqs. \eqref{Poisson-eqn-nond} and \eqref{density-eqn-nond}, and the coefficients of $\epsilon^{5/2}$ from Eqs. \eqref{p_Vlasov-eqn-nond} and \eqref{n_Vlasov-eqn-nond}, we successively obtain
\begin{equation}
-M\frac{\partial n^{(2)}_d}{\partial\xi}+\frac{\partial u^{(2)}_d}{\partial\xi}+\frac{\partial n^{(1)}_d}{\partial\tau}+\frac{\partial(u^{(1)}_dn^{(1)}_d)}{\partial\xi}=0,\label{u^(2)-n^(2)}
\end{equation}
\begin{equation}
-M\frac{\partial u^{(2)}_d}{\partial\xi}+u^{(1)}_d\frac{\partial u^{(1)}_d}{\partial\xi}+\frac{\partial u^{(1)}_d}{\partial\tau}=-\zeta\frac{\partial\phi^{(2)}}{\partial\xi},\label{u^(2)-phi^(2)}
\end{equation}
\begin{equation}
\frac{\partial^2\phi^{(1)}}{\partial\xi^2}=\mu_{nd}n^{(2)}_n-\mu_{pd}n^{(2)}_p-\zeta n^{(2)}_d, \label{mu_nd-mu_bd-2}
\end{equation}
\begin{equation}
n^{(2)}_j=\sqrt{\frac{m_j}{m_p}\frac{T_p}{T_j}}\int_{-\infty}^{\infty}f^{(2)}_jdv,\label{n^(2)_p-n^(2)_n}
\end{equation}
\begin{eqnarray}
&&-\alpha_1M\frac{\partial f^{(1)}_p}{\partial\xi}+v\frac{\partial f^{(2)}_p}{\partial\xi}-\frac{\partial \phi^{(1)}}{\partial\xi}\frac{\partial f^{(1)}_p}{\partial v}\notag \\
&&+vf^{(0)}_p\frac{\partial \phi^{(2)}}{\partial\xi}=0,\label{v_p-2}
\end{eqnarray}
\begin{eqnarray}
&&-\alpha_1M\frac{\partial f^{(1)}_n}{\partial\xi}+v\frac{\partial f^{(2)}_n}{\partial\xi}+\frac1m\frac{\partial \phi^{(1)}}{\partial\xi}\frac{\partial f^{(1)}_n}{\partial v}\notag \\
&&-\sigma vf^{(0)}_n\frac{\partial \phi^{(2)}}{\partial\xi}=0.\label{v_n-2}
\end{eqnarray}
Substituting the expressions for $f_j^{(1)}$ from Eqs. \eqref{4FT f_p} and \eqref{4FT f_n} into Eqs. \eqref{v_p-2} and \eqref{v_n-2} we successively obtain
\begin{equation}
v\frac{\partial f^{(2)}_p}{\partial\xi}+vf^{(0)}_p\frac{\partial \phi^{(2)}}{\partial\xi}=(D_{pa}+vD_{pb})f^{(0)}_p,\label{v_p-3}
\end{equation}
\begin{equation}
v\frac{\partial f^{(2)}_n}{\partial\xi}-\sigma vf^{(0)}_n\frac{\partial \phi^{(2)}}{\partial\xi}=(D_{na}+vD_{nb})f^{(0)}_n,\label{v_n-3}
\end{equation}
where
\begin{eqnarray}
&&D_{pa}=-{\alpha_1M}\frac{\partial\phi^{(1)}}{\partial\xi},~D_{pb}=\frac{1}{2}\frac{\partial(\phi^{(1)})^2}{\partial\xi},\notag\\
&&D_{na}=\alpha_1M\sigma\frac{\partial\phi^{(1)}}{\partial\xi},~D_{nb}=\frac{\sigma^2}{2}\frac{\partial(\phi^{(1)})^2}{\partial\xi}.\label{D_j(a,b)}
\end{eqnarray}
As before, to get the unique solutions for $f^{(2)}_j$ for positive $(j=p)$ and negative $(j=n)$ ions, we introduce an extra higher-order term $\epsilon^{9/2}\alpha_1\left({\partial f^{(1)}_j}/{\partial\tau}\right)$ originating from the term $\epsilon^{5/2}\alpha_1\left({\partial f_j}/{\partial\tau}\right)$ in Eqs. \eqref{p_Vlasov-eqn-nond} and \eqref{n_Vlasov-eqn-nond} after the expressions \eqref{stretching} and \eqref{expantions} have been substituted. Thus, we rewrite Eqs. \eqref{v_p-3} and \eqref{v_n-3} as
\begin{eqnarray}
&&\alpha_1\epsilon^2\frac{\partial f^{(2)}_{p\epsilon}}{\partial\tau}+v\frac{\partial f^{(2)}_{p\epsilon}}{\partial\xi}+vf^{(0)}_p\frac{\partial \phi^{(2)}}{\partial\xi}\notag\\
&&=(D_{pa}+vD_{pb})f^{(0)}_p,\label{v_p-4}
\end{eqnarray}
\begin{eqnarray}
&&\alpha_1\epsilon^2\frac{\partial f^{(2)}_{n\epsilon}}{\partial\tau}+v\frac{\partial f^{(2)}_{n\epsilon}}{\partial\xi}-\sigma vf^{(0)}_n\frac{\partial \phi^{(2)}}{\partial\xi}\notag\\
&&=(D_{na}+vD_{nb})f^{(0)}_n.\label{v_n-4}
\end{eqnarray}
So, the unique solutions can be found, once $f^{(2)}_{j\epsilon}$ for $j=p,n$ are known, by letting $\epsilon\rightarrow 0$ as
\begin{equation}
f^{(2)}_j=\lim_{\epsilon\rightarrow 0} f^{(2)}_{j\epsilon}.\label{unique sol_2}
\end{equation}
Next, introducing the Fourier transform in Eq. \eqref{v_p-4} with respect to $\xi$ and $\tau$ according to the formula \eqref{FT}, we have
\begin{eqnarray}
\hat{f}^{(2)}_{p\epsilon}=-\left(\frac{kvf^{(0)}_p}{kv-\epsilon^2\alpha_1\omega}\right)\hat{\phi}^{(2)}\notag\\
-i\left(\frac{\hat{D}_{pa}+v\hat{D}_{pb}}{kv-\epsilon^2\alpha_1\omega}\right)f^{(0)}_p. \label{5FT f_p}
\end{eqnarray}
As before, to avoid the wave singularity, $\omega$ will have a small positive imaginary part. So, we replace $\omega$ by $\omega+i\eta$, where $\eta>0$, to obtain from Eq. \eqref{5FT f_p}
\begin{eqnarray}
\hat{f}^{(2)}_{p\epsilon}=-\left[\frac{kvf^{(0)}_p}{(kv-\epsilon^2\alpha_1\omega)-i\eta\alpha_1\epsilon^2}\right]\hat{\phi}^{(2)}\notag\\
-i\left[\frac{(\hat{D}_{pa}+v\hat{D}_{pb})}{(kv-\epsilon^2\alpha_1\omega)-i\eta\alpha_1\epsilon^2}\right]f^{(0)}_p. \label{6FT f_p}
\end{eqnarray}
Proceeding to the limit as $\epsilon\rightarrow 0$ and using Plemelj's formula \eqref{Plmj formla}, we obtain from Eq. \eqref{6FT f_p}
\begin{eqnarray}
&&\hat{f}^{(2)}_p+f^{(0)}_p\hat{\phi}^{(2)}=-i\left[\text{P}\left(\frac{1}{kv}\right)+i\pi\frac{\text{sgn}(k)}{k}\delta(v)\right]\times\notag\\
&&(\hat{D}_{pa}+v\hat{D}_{pb})f^{(0)}_p,\label{7FT f_p}
\end{eqnarray}
where we have used the properties $x\text{P}(1/x)=1$, $x\delta(x)=0$ and $\delta(kv)=[\text{sgn}(k)/k]\delta(v)$.
We multiply both sides of Eq.\eqref{7FT f_p} by $ik$ and then integrate over $v$ to obtain
\begin{equation}
ik\left(\hat{n}^{(2)}_p+\hat{\phi}^{(2)}\right)=\hat{D}_{pb}+i\sqrt{\frac{\pi}{2}}\text{sgn}(k)\hat{D}_{pa}.\label{n^(2)_p}
\end{equation}
The Fourier inverse transform of Eq.\eqref{n^(2)_p} yields
\begin{eqnarray}
&&\frac{\partial n^{(2)}_p}{\partial\xi}+\frac{\partial\phi^{(2)}}{\partial\xi}=\frac{1}{2}\frac{\partial(\phi^{(1)})^2}{\partial\xi}\notag\\
&&+\sqrt{\frac{\pi}{2}}F^{-1}\left[i~\text{sgn}(k)\hat{D}_{pa}\right].\label{n^(2)_p-phi^(2)}
\end{eqnarray}
Then using the convolution theorem of Fourier transform, we have from Eq. \eqref{n^(2)_p-phi^(2)}
\begin{eqnarray}
&&\frac{\partial n^{(2)}_p}{\partial\xi}+\frac{\partial\phi^{(2)}}{\partial\xi}=\frac{1}{2}\frac{\partial(\phi^{(1)})^2}{\partial\xi}\notag\\
&&+\alpha_1M\frac{1}{\sqrt{2\pi}}\text{P}\int^{\infty}_{-\infty}\frac{\partial\phi^{(1)}}{\partial\xi'}\frac{d\xi'}{\xi-\xi'},\label{n^(2)_p-phi^(2)-P}
\end{eqnarray}
where we have used $F^{-1}[i~\text{sgn}(k)]=-(1/\pi)\text{P}\left({1}/{\xi}\right)$.
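The identity $F^{-1}[i~\text{sgn}(k)]=-(1/\pi)\text{P}\left({1}/{\xi}\right)$ is the Fourier-multiplier form of the Hilbert transform, and it can be cross-checked numerically. The sketch below (the $\text{sech}^2$ test profile and the evaluation point are arbitrary illustrative choices, not part of the derivation) compares a direct principal-value quadrature against the FFT-based Hilbert transform:

```python
import numpy as np
from scipy.integrate import quad
from scipy.signal import hilbert

def f(z):
    """Smooth, rapidly decaying test profile."""
    return np.cosh(z) ** -2

z0 = 0.7  # arbitrary evaluation point

# Direct route: (1/pi) P int f(z')/(z0 - z') dz' via scipy's Cauchy weight,
# which computes P int f(z')/(z' - z0) dz' (hence the minus sign below).
pv, _ = quad(f, -30.0, 30.0, weight='cauchy', wvar=z0)
direct = -pv / np.pi

# Spectral route: the same transform through the multiplier i sgn(k)
# (imaginary part of scipy's analytic signal).
zg = np.linspace(-200.0, 200.0, 2 ** 15, endpoint=False)
spectral = float(np.interp(z0, zg, np.imag(hilbert(f(zg)))))
```

The two routes agree to grid accuracy, confirming the Fourier-inversion step used above.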
Proceeding in the same way as above for the positive ions, we obtain from Eq. \eqref{v_n-4} for the negative ions
\begin{eqnarray}
&&\frac{\partial n^{(2)}_n}{\partial\xi}-\sigma\frac{\partial\phi^{(2)}}{\partial\xi}=\frac{\sigma^2}{2}\frac{\partial(\phi^{(1)})^2}{\partial\xi}\notag\\
&&-\alpha_1Mm^{1/2}\sigma^{3/2}\frac{1}{\sqrt{2\pi}}\text{P}\int^{\infty}_{-\infty}\frac{\partial\phi^{(1)}}{\partial\xi'}\frac{d\xi'}{\xi-\xi'}.\label{n^(2)_n-phi^(2)-P}
\end{eqnarray}
\begin{figure*}[ht]
\centering
\subfigure[]{
\includegraphics[height=2.5in,width=3.3in]{fig3a}
\label{fig:fig3a}}
\subfigure[]{
\includegraphics[height=2.5in,width=3.3in]{fig3b}
\label{fig:fig3b}}
\caption{The KdV soliton solution $n$ [Eq. \eqref{sol of k-dv}] is shown with respect to $\xi$ at $\tau=0$ for different values of (a) the temperature ratio $\sigma~(=T_p/T_n)$ (left panel) and (b) the density ratio $\mu_{pd}~(=n_{p0}/z_dn_{d0})$ (right panel), keeping one parameter fixed at a time, as in the figure. The acronym PCD (NCD) stands for the case of positively (negatively) charged dusts.}
\label{fig:fig3}
\end{figure*}
\subsection*{KdV Equation with Landau damping}
In order to obtain the required KdV equation, we first eliminate ${\partial u^{(2)}_d}/{\partial\xi}$ and ${\partial n^{(2)}_d}/{\partial\xi}$ from Eqs. \eqref{u^(2)-n^(2)}-\eqref{mu_nd-mu_bd-2}, and then eliminate ${\partial n^{(2)}_j}/{\partial\xi}$ by using Eqs. \eqref{n^(2)_p-phi^(2)-P} and \eqref{n^(2)_n-phi^(2)-P}. In the resulting equation we also substitute the expressions for $n_d^{(1)}$ and $u_d^{(1)}$ from Eqs. \eqref{u^(1)-n^(1)} and \eqref{u^(1)-phi^(1)}. Thus, we obtain the following KdV equation (recall that the constants $\alpha_1,~\alpha_2$ and $\alpha_3$ enter the Landau damping, nonlinear and dispersive terms, respectively)
\begin{equation}
\frac{\partial n}{\partial\tau}+a\text{P}\int^{\infty}_{-\infty}\frac{\partial n}{\partial\xi'}\frac{d\xi'}{\xi-\xi'}
+bn\frac{\partial n}{\partial\xi}+c\frac{\partial^3 n}{\partial\xi^3}=0,\label{K-dV}
\end{equation}
where $n\equiv n^{(1)}_d$ and the coefficients of the Landau damping, nonlinear and dispersive terms, respectively, are
\begin{equation}
a=\frac{\alpha_1}{\sqrt{8\pi}}\frac{\sigma^{-1/2}}{\sigma_1^2}\left[\zeta\sqrt{m}+(\sqrt{m}+\sigma^{-3/2})\mu_{pd}\right],\label{c}
\end{equation}
\begin{equation}
b=\frac{\alpha_2}{2\sqrt{\sigma\sigma_1}}\left[3-\zeta\frac{\sigma_2}{\sigma_1^2}\right],\label{a}
\end{equation}
\begin{equation}
c=\frac{\alpha_3}{2}(\sigma\sigma_1)^{-3/2},\label{b}
\end{equation}
with $\sigma_1=\zeta+\left(1+\sigma^{-1}\right)\mu_{pd}$, $\sigma_2=\zeta+\left(1-\sigma^{-2}\right)\mu_{pd}$, and $\zeta=\pm1$ denoting positively and negatively charged dusts, respectively.
Equation \eqref{K-dV} is the required KdV equation which describes the weakly nonlinear and weakly dispersive dust-acoustic waves in an unmagnetized dusty pair-ion plasma with the effects of Landau damping. Inspecting the coefficients $a,~b$ and $c$, we find that for $\sigma>1$ and $\alpha_1,~\alpha_2,~\alpha_3\sim O(1)$, we have $\sigma_1\gtrsim\sigma_2$ and $b>a,c$ for typical laboratory \cite{merlino1998,kim2006,kim2013} and space \cite{rapp2005} plasma parameters.
Also, if we set $\delta=0$, i.e. if we neglect the strength of the Landau damping associated with the ions, then Eq. \eqref{K-dV} reduces to the usual KdV equation which governs the small but finite amplitude nonlinear DAWs in unmagnetized dusty pair-ion plasmas.
It is of interest to examine the range of values of the parameters for which the KdV equation (without the Landau damping) is applicable to DAWs, and also to see the competition between nonlinearity and Landau damping in determining whether or not an initial wave steepens. To this end we consider typical plasma parameters that are relevant to laboratory and space plasmas. For example, for laboratory plasmas \cite{merlino1998,kim2006,kim2013} (in which the light positive ions are singly ionized potassium $K^+$ and the heavy negative ions are $SF_6^-$), we can consider $m_p=6.5\times10^{-23}$ g, $m_n=2.4\times10^{-22}$ g, $T_p=0.2~$eV$\,=2321~$K,
$T_n=0.025~$eV$\,=290.12~$K, $n_{n0}=2\times10^9$ cm$^{-3}$, $n_{p0}=1.3\times10^9$ cm$^{-3}$, $n_{d0}=2\times10^6$ cm$^{-3}$, $\phi_s=0.1$ V$\,=3.3\times10^{-4}$ statV,
$R=5.04~\mu$m$\,=5.04\times10^{-4}~$cm and $z_d\sim R\phi_s/e\sim350$, so that we have [assuming $\alpha_1,~\alpha_2,~\alpha_3\sim O(1)$] $a=0.0412~(0.1028)$, $b=0.2719~(0.6263)$ and $c=0.0041~(0.0194)$ for positively (negatively) charged dusts. This implies that the nonlinear and the Landau damping effects are larger than the finite Debye length (dispersive) effects, so that the KdV soliton theory is not applicable to DAWs. This is, in fact, true for the experiment described in Ref. \citep{andersen1967}.
On the other hand, for space plasma parameters \cite{rapp2005} (e.g., a dusty region at an altitude of about 95 km) in which $m_p=28m_{\text{proton}}=4.7\times10^{-23}$ g, $m_n=300m_{\text{proton}}=5.02\times10^{-22}$ g, $T_p=200~$K, $T_n=200~$K, $n_{n0}=2\times10^6$ cm$^{-3}$, $n_{p0}=10^6$ cm$^{-3}$, $z_dn_{d0}=10^6$ cm$^{-3}$, $\phi_s=0.7$ V$\,=2.3\times10^{-3}$ statV, $R=0.6~$nm$\,=6\times10^{-8}~$cm, we have [assuming $\alpha_1,~\alpha_2,~\alpha_3\sim O(1)$] $a=0.1670~(0.1995)$, $b=0.8340~(1)$ and $c=0.09~(0.5)$ for positively (negatively) charged dusts. In this case, although the Landau damping coefficient is lower than the nonlinear one, it is larger than or comparable with the dispersive coefficient. Thus, both in laboratory and space plasma environments the Landau damping effects on DAWs can no longer be neglected, but may play a crucial role in reducing the wave amplitude, as will be shown shortly.
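The coefficient values quoted above follow directly from Eqs. \eqref{c}--\eqref{b}. As a numerical cross-check, the sketch below simply sets the order-one factors $\alpha_1=\alpha_2=\alpha_3=1$ (the assumption already made in the text) and evaluates $a$, $b$ and $c$ for both parameter sets:

```python
import math

def kdv_coeffs(sigma, m, mu_pd, zeta, a1=1.0, a2=1.0, a3=1.0):
    """Landau-damping (a), nonlinear (b) and dispersive (c) coefficients
    of the KdV equation; alpha_1 = alpha_2 = alpha_3 = 1 is assumed,
    consistent with the O(1) estimates in the text."""
    s1 = zeta + (1.0 + 1.0 / sigma) * mu_pd            # sigma_1
    s2 = zeta + (1.0 - 1.0 / sigma**2) * mu_pd         # sigma_2
    a = (a1 / math.sqrt(8.0 * math.pi)) * (sigma**-0.5 / s1**2) * (
        zeta * math.sqrt(m) + (math.sqrt(m) + sigma**-1.5) * mu_pd)
    b = (a2 / (2.0 * math.sqrt(sigma * s1))) * (3.0 - zeta * s2 / s1**2)
    c = (a3 / 2.0) * (sigma * s1) ** -1.5
    return a, b, c

# Laboratory case (K+ / SF6-): sigma = 0.2/0.025, m = m_n/m_p,
# mu_pd = n_p0/(z_d n_d0) = 1.3e9/(350 * 2e6)
lab = dict(sigma=8.0, m=2.4e-22 / 6.5e-23, mu_pd=1.3e9 / (350 * 2e6))
a_p, b_p, c_p = kdv_coeffs(zeta=+1, **lab)   # positively charged dusts
a_n, b_n, c_n = kdv_coeffs(zeta=-1, **lab)   # negatively charged dusts

# Space case (~95 km altitude): sigma = 1, m = 300/28, mu_pd = 1
a_sp, b_sp, c_sp = kdv_coeffs(sigma=1.0, m=300.0 / 28.0, mu_pd=1.0, zeta=+1)
```

With these inputs the laboratory case reproduces $a=0.0412~(0.1028)$, $b=0.2719~(0.6263)$ and $c=0.0041~(0.0194)$ for $\zeta=\pm1$, and the space case gives $a\simeq0.167$, $b\simeq0.834$, $c\simeq0.096$ for $\zeta=+1$.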
Next, to obtain the regular Landau damping of DAWs in plasmas we set $b=c=0$. Then Eq. \eqref{K-dV} reduces to
\begin{equation}
\frac{\partial n}{\partial\tau}+a\text{P}\int^{\infty}_{-\infty}\frac{\partial n}{\partial\xi'}\frac{d\xi'}{\xi-\xi'}=0.\label{regular LD}
\end{equation}
Taking the Fourier transform of Eq. \eqref{regular LD} according to the formula \eqref{FT} and using the result that the inverse transform of $[i~\text{sgn}(k)]$ is $-(1/\pi)\text{P}\left(1/\xi\right)$, we have
\begin{eqnarray}
\omega=&&-ik\alpha_1\sqrt{\frac{\pi}{8}}\frac{\sigma^{-1/2}}{\sigma_1^2}\left[\zeta\sqrt{m}+(\sqrt{m}+\sigma^{-3/2})\mu_{pd}\right]\notag\\
\equiv&&-i\pi ka.\label{Dispersion relation}
\end{eqnarray}
Thus, the DAWs become damped due to the finite positive and negative ion inertial effects as $\alpha_1\propto\sqrt{m_j/m_d}$, and the damping decrement (nondimensional) $\gamma$ is given by
\begin{eqnarray}
|\gamma|\sim&&\sqrt{\frac{m_pz_d}{m_d}}\sqrt{\frac{\pi}{8}}\frac{\sigma^{-1/2}}{\sigma_1^2}\left[\zeta\sqrt{m}+(\sqrt{m}+\sigma^{-3/2})\mu_{pd}\right]\notag\\
\equiv&&\pi a.\label{damping decrement}
\end{eqnarray}
The variation of $|\gamma|$ with respect to $\sigma$ is shown in Fig. \ref{fig:fig2} for different values of the mass ratio $m$ [Fig. \ref{fig:fig2a}] and the density ratio $\mu_{pd}$ [Fig. \ref{fig:fig2b}]. It is seen that the value of $|\gamma|$ slowly decreases with an increase of the temperature ratio $\sigma$ for plasmas with positively charged dusts. However, for plasmas with negatively charged dusts, the value of $|\gamma|$ initially increases until $\sigma$ reaches a critical value, and then decreases with increasing values of $\sigma$. These may be consequences of typical laboratory plasmas (see above) in which the positive ion temperature is higher than that of the negative ions. Also, as the ratio $m$ $(\mu_{pd})$ increases, the value of $|\gamma|$ increases (decreases) [see the solid and dashed lines for positively charged dusts, and the dotted and dash-dotted lines for negatively charged dusts]. Thus, dusty plasmas with a higher concentration of positive ions (and hence of negative ions, in order to maintain charge neutrality) than of charged dusts exhibit a reduced linear Landau damping rate of DAWs. This may be true for some laboratory plasmas as described above. Also, since the mass difference between positive and negative ions can be larger in space plasmas, as mentioned above, an enhancement of the damping rate is more likely to occur there. Furthermore, a higher value of $|\gamma|$ is seen to occur for plasmas with negatively charged dusts (compare the solid and dotted lines or the dotted and dash-dotted lines).
\begin{figure*}[ht]
\centering
\subfigure[]{
\includegraphics[height=2.2in,width=3.2in]{fig4a}
\label{fig:fig4a}}
\quad
\subfigure[]{\includegraphics[height=2.2in,width=3.2in]{fig4b}
\label{fig:fig4b}}
\caption{Typical forms of the soliton solutions given by Eqs. \eqref{sol of k-dv} and \eqref{final-sol} are shown. The left (right) panel shows the solution without (with) the Landau damping effect. The parameter values are for plasmas with positively charged dusts with $m=2,~\mu_{pd}=1.2$, $\sigma=1.2$, $\alpha_1=0.6,~\alpha_2=0.8$ and $\alpha_3=8$ so that $b\sim c\gg a$. Also, for the left panel (a) $U_0=0.06$, and for the right panel (b) $U_0=40$ and $N_0=0.4$. }
\label{fig:fig4}
\end{figure*}
\section{Solitary wave solution of the KdV equation with Landau damping}
We note that in the absence of the Landau damping effect (i.e. $a=0$), Eq. \eqref{K-dV} reduces to the usual KdV equation, whose solitary wave solution is given by
\begin{equation}
n=N~\text{sech}^2\left(\frac{\xi-U_0\tau}{W}\right),\label{sol of k-dv}
\end{equation}
where $N=3U_0/b$ is the amplitude, $W=\left({12c}/{Nb}\right)^{1/2}\equiv\sqrt{4c/U_0}$ is the width, and $U_0=Nb/3$ is the constant phase speed (normalized by $c_s$) of the solitary wave.
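These relations among $N$, $W$ and $U_0$ can be verified symbolically: the $\text{sech}^2$ pulse annihilates the KdV operator of Eq. \eqref{K-dV} with the Landau-damping term switched off ($a=0$). A sketch using sympy, with $b$, $c$ and $U_0$ kept symbolic:

```python
import sympy as sp

xi, tau = sp.symbols('xi tau', real=True)
b, c, U0 = sp.symbols('b c U_0', positive=True)

N = 3 * U0 / b                    # soliton amplitude, N = 3 U_0 / b
W = sp.sqrt(4 * c / U0)           # soliton width,    W = sqrt(4 c / U_0)
n = N * sp.sech((xi - U0 * tau) / W) ** 2

# KdV residual with the Landau-damping (a-) term set to zero:
# n_tau + b n n_xi + c n_xixixi
kdv = sp.diff(n, tau) + b * n * sp.diff(n, xi) + c * sp.diff(n, xi, 3)
residual = sp.simplify(kdv.rewrite(sp.exp))
```

The residual vanishes identically, confirming Eq. \eqref{sol of k-dv}.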
To find the solitary wave solution of Eq. \eqref{K-dV} with the effect of a small amount of Landau damping, we follow Ref. \cite{ott1969}. Thus, integrating Eq. \eqref{K-dV} with respect to $\xi$ one can obtain
\begin{equation}
\frac{\partial}{\partial\tau}\int^{+\infty}_{-\infty}n~d\xi=0,
\end{equation}
i.e., Eq. \eqref{K-dV} conserves the total number of particles. Furthermore, multiplying Eq. \eqref{K-dV} by $n$ and integrating over $\xi$ yields
\begin{equation}
\frac{\partial}{\partial\tau}\int_{-\infty}^{\infty}n^2(\xi, \tau) d\xi\leq 0,\label{positive definite, H theorem}
\end{equation}
where the equality sign holds only when $n=0$ for all $\xi$. Equation \eqref{positive definite, H theorem} states that an initial perturbation of the form \eqref{sol of k-dv} for which
\begin{equation}
\int^{+\infty}_{-\infty}n^2~d\xi<\infty,
\end{equation}
will decay to zero. That is, the wave amplitude $N$ is not a constant but decreases slowly with time. In what follows we perform a perturbation analysis of Eq. \eqref{K-dV} assuming that $a~(\gg\epsilon)$ is a small parameter with $1\sim b\sim c\gg a$. The latter may be satisfied for plasmas (e.g., laboratory plasmas in which pair-ions can be $Ar^+SF_6^-$) with $m>1$ and $\mu_{pd}\sim\sigma\sim1$. So, we introduce a new space coordinate $z$ in a frame moving with the solitary wave and normalized to its width as
\begin{equation}
z=\left(\xi-\frac {b}3\int_{0}^{\tau}Nd\tau\right)/W,\label{space coordinate}
\end{equation}
where $N$ is assumed to vary slowly with time and $N=N(a, \tau)$. Also, assume that $n\equiv n(z, \tau)$. Under this transformation Eq. \eqref{K-dV} becomes
\begin{eqnarray}
&&\frac{\partial n}{\partial\tau}+\frac{a}{W}P\int_{-\infty}^{\infty}\frac{\partial n}{\partial z'}\frac{dz'}{z-z'}-\left[\frac{Nb}{3W}-\frac{z}{2N}\left(\frac{dN}{d\tau}\right)\right]\frac{\partial n}{\partial z}\notag\\
&&+\frac{b}{W}n\frac{\partial n}{\partial z}+\frac{c}{W^3}\frac{\partial^3n}{\partial z^3}=0,\label{after subst SP}
\end{eqnarray}
where we have used ${\partial n}/{\partial z'}={\partial n}/{\partial z}$ at $z=z'$.
Next, to investigate the solution of Eq. \eqref{after subst SP}, we follow Ref. \citep{ott1969} and generalize the multiple time scale analysis with respect to $a$. Thus, we consider the solution as \cite{bandyo2002a,bandyo2002b}
\begin{equation}
n(z, \tau)=n^{(0)}+an^{(1)}+a^2n^{(2)}+a^3n^{(3)}+\cdots,\label{multiple time scale}
\end{equation}
where $n^{(i)},~i=0, 1, 2, 3,\cdots$, are functions of $\tau=\tau_0, \tau_1, \tau_2, \tau_3,\cdots$ in which $\tau_i$ are given by
\begin{equation}
\tau_i=a^{i}\tau,\label{tau_i}
\end{equation}
where $i=0, 1, 2, 3,\cdots$.
Substituting \eqref{multiple time scale} into Eq. \eqref{after subst SP}, we obtain
\begin{eqnarray}
&&\left(\frac{\partial n^{(0)}}{\partial\tau}+a\frac{\partial n^{(0)}}{\partial\tau_1}+\cdots\right)+a\left(\frac{\partial n^{(1)}}{\partial\tau}+a\frac{\partial n^{(1)}}{\partial\tau_1}+\cdots\right)\notag\\
&&+\left[-\frac{Nb}{3W}+\frac{z}{2N}\left(\frac{\partial N}{\partial\tau}+a\frac{\partial N}{\partial\tau_1}+\cdots\right)\right]\times\notag\\
&&\frac{\partial}{\partial z}\left(n^{(0)}+an^{(1)}+\cdots\right)+\frac{a}{W}P\int_{-\infty}^{\infty}\frac{\partial n^{(0)}}{\partial z'}\frac{dz'}{z-z'}\notag\\
&&+\frac{b}{W}n^{(0)}\frac{\partial n^{(0)}}{\partial z}+\frac{ab}{W}\left(n^{(1)}\frac{\partial n^{(0)}}{\partial z}+n^{(0)}\frac{\partial n^{(1)}}{\partial z}\right)\notag\\
&&+\frac{c}{W^3}\frac{\partial^3n^{(0)}}{\partial z^3}+\frac{ac}{W^3}\frac{\partial^3n^{(1)}}{\partial z^3}+\cdots=0.\label{big equation}
\end{eqnarray}
Equating the coefficients of the zeroth and first order in $a$, we successively obtain from Eq. \eqref{big equation}
\begin{equation}
\beta\left[\frac{\partial}{\partial\tau}+\frac{z}{2N}\frac{\partial N}{\partial\tau}\frac{\partial}{\partial z}\right]n^{(0)}+M\frac{\partial n^{(0)}}{\partial z}=0,\label{0th order eq}
\end{equation}
\begin{equation}
\beta\left[\frac{\partial}{\partial\tau}+\frac{z}{2N}\frac{\partial N}{\partial\tau}\frac{\partial}{\partial z}\right]n^{(1)}+\frac{\partial}{\partial z}\left(Mn^{(1)}\right)=\beta Rn^{(0)},\label{1st order eq}
\end{equation}
where
\begin{equation}
\beta=\frac{W^3}{c}=24\sqrt{\frac{3c}{{b}^3}}N^{-3/2},\label{beta}
\end{equation}
\begin{equation}
M=\frac{\partial^2}{\partial z^2}+4\left(3\frac{n^{(0)}}{N}-1\right),\label{M}
\end{equation}
\begin{eqnarray}
Rn^{(0)}=-\left[\frac{\partial n^{(0)}}{\partial\tau_1}+\frac{z}{2N}\frac{\partial N}{\partial \tau_1}\frac{\partial n^{(0)}}{\partial z}\right.\notag\\
\left.+\frac{1}{W}\text{P}\int_{-\infty}^{\infty}\frac{\partial n^{(0)}}{\partial z'}\frac{dz'}{z-z'}\right].\label{Rq^(0)}
\end{eqnarray}
Next, imposing the boundary conditions, namely, $n^{(0)}$, ${\partial n^{(0)}}/{\partial z}$, ${\partial^2 n^{(0)}}/{\partial z^2}\rightarrow 0$ as $z\rightarrow\pm\infty$, it can easily be shown that $n^{(0)}=N~\text{sech}^2z$ is the soliton solution of the equation $M{\partial n^{(0)}}/{\partial z}=0$. Hence $n^{(0)}=N~\text{sech}^2z$ will be the soliton solution of Eq. \eqref{0th order eq} if and only if \cite{bandyo2002a,bandyo2002b}
\begin{equation}
\frac{\partial N}{\partial\tau}=0.\label{condition 1}
\end{equation}
Under the condition \eqref{condition 1}, Eq. \eqref{1st order eq} reduces to
\begin{equation}
\beta\left[\frac{\partial n^{(1)}}{\partial\tau}\right]+\frac{\partial}{\partial z} \left(Mn^{(1)}\right)=\beta Rn^{(0)}.\label{1st order eq-AC}
\end{equation}
In order that the solution of Eq. \eqref{1st order eq-AC} exists, it is necessary that $Rn^{(0)}$ be orthogonal to all solutions $g(z)$ of $L^{+}[g]=0$ which satisfy $g(\pm\infty)=0$, where $L^{+}$ is the operator adjoint to $L\equiv{\partial}\left(M\,\cdot\right)/{\partial z}$ [cf. Eq. \eqref{1st order eq-AC}], defined by
\begin{equation}
\int_{-\infty}^{\infty}\psi_1(z)L[\psi_2(z)]dz=\int_{-\infty}^{\infty}\psi_2(z)L^{+}[\psi_1(z)]dz,\label{adjt oprtr}
\end{equation}
where $\psi_1(\pm\infty)=\psi_2(\pm\infty)=0$, and the only solution of $L^{+}[g]=0$ is $g(z)=\text{sech}^2z$. Thus, we have
\begin{equation}
\int_{-\infty}^{\infty} Rn^{(0)}\text{sech}^2z dz=0,\label{eqn}
\end{equation}
which gives
\begin{eqnarray}
&&\frac{\partial N}{\partial \tau}+\frac12a\sqrt{\frac{b}{3c}}N^{3/2}\times\notag\\
&&\text{P}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{\text{sech}^2z}{z-z'}\frac{\partial}{\partial z'}\left(\text{sech}^2z'\right)dzdz'=0.\label{1st ordr diffnal eq}
\end{eqnarray}
Equation \eqref{1st ordr diffnal eq} is a first-order differential equation for the solitary wave amplitude $N(a, \tau)$, the solution of which is
\begin{equation}
N(a, \tau)=N_0\left(1+\frac{\tau}{\tau_0}\right)^{-2},\label{solution}
\end{equation}
where $N=N_0$ at $\tau=0$ and $\tau_0$ is given by
\begin{equation}
\tau_0^{-1}=\frac{a}{4}\sqrt{\frac{bN_0}{3c}}\text{P}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{\text{sech}^2z}{z-z'}\frac{\partial}{\partial z'}\left(\text{sech}^2z'\right)dzdz'.\label{tau'}
\end{equation}
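That Eq. \eqref{solution} with Eq. \eqref{tau'} indeed solves the amplitude equation \eqref{1st ordr diffnal eq} can be verified symbolically; in the sketch below the (positive) value of the double principal-value integral is kept as an abstract symbol $I$:

```python
import sympy as sp

tau, a, b, c, N0 = sp.symbols('tau a b c N_0', positive=True)
Ival = sp.Symbol('I', positive=True)  # value of the PV double integral

# Eq. (tau'): 1/tau_0 = (a/4) sqrt(b N_0 / (3 c)) * I
tau0 = 4 / (a * Ival * sp.sqrt(b * N0 / (3 * c)))

# Eq. (solution): N(a, tau) = N_0 (1 + tau/tau_0)^{-2}
N = N0 * (1 + tau / tau0) ** -2

# Residual of the amplitude equation:
# dN/dtau + (a/2) sqrt(b/(3c)) N^{3/2} * I = 0
residual = sp.simplify(
    sp.diff(N, tau) + (a / 2) * sp.sqrt(b / (3 * c)) * N ** sp.Rational(3, 2) * Ival)
```

The residual vanishes identically for all positive parameter values.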
Taking the Fourier transform of $\text{sech}^2z$ and making use of the identity
\begin{equation}
\text{P}\int_{-\infty}^{\infty}\frac{\exp(ikz)}{z-z'}dz=i\pi~\text{sgn}(k)\exp(ikz'),
\end{equation}
one obtains
\begin{equation}
\text{P}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{\text{sech}^2z}{z-z'}\frac{\partial}{\partial z'}\left(\text{sech}^2z'\right)dzdz'=\frac{24}{\pi^2}\zeta(3)\approx2.92,
\end{equation}
where $\zeta$ here denotes the Riemann zeta function (not to be confused with $\zeta=\pm1$ introduced earlier). Thus, from Eq. \eqref{tau'} we have
\begin{equation}
\tau_0\approx\frac{1.37}{a}\sqrt{\frac{3c}{bN_0}}.\label{tau-final}
\end{equation}
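The constant $24\zeta(3)/\pi^2$ can be cross-checked numerically: Parseval's theorem, together with the Fourier transform of $\text{sech}^2z$ (namely $\pi k/\sinh(\pi k/2)$), reduces the double principal-value integral to a single convergent integral. The sketch below also evaluates the resulting decay time of Eq. \eqref{tau-final} for the laboratory coefficients quoted earlier (the value $N_0=0.2$ is an illustrative choice):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

# Fourier-space form of the double PV integral:
# P intint sech^2(z)/(z-z') d/dz'[sech^2(z')] dz dz'
#   = pi^2 * int_0^inf k^3 / sinh(pi k/2)^2 dk  =  24 zeta(3) / pi^2
val, _ = quad(lambda k: np.pi**2 * k**3 / np.sinh(np.pi * k / 2) ** 2, 0, np.inf)
exact = 24 * zeta(3.0, 1.0) / np.pi**2

# Decay time tau_0 for the laboratory coefficients (positively charged
# dusts) quoted earlier, with an illustrative initial amplitude N_0 = 0.2
a_, b_, c_, N0 = 0.0412, 0.2719, 0.0041, 0.2
tau0 = (4.0 / (a_ * val)) * np.sqrt(3 * c_ / (b_ * N0))
```

The quadrature agrees with $24\zeta(3)/\pi^2\approx2.92$, and the resulting $\tau_0$ matches the estimate $(1.37/a)\sqrt{3c/bN_0}$.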
The final soliton solution of Eq. \eqref{K-dV} with the Landau damping is then given by
\begin{eqnarray}
n=&&N_0\left(1+\frac{\tau}{\tau_0}\right)^{-2}~\text{sech}^2\left[\left(\xi-\frac {b}3\int_{0}^{\tau}Nd\tau\right)/W\right]\notag\\
&&+O(a).\label{final-sol}
\end{eqnarray}
This shows that the amplitude of DA solitary waves decays slowly with time under the effect of a small amount of Landau damping. From Eq. \eqref{final-sol}, it is also seen that as the wave amplitude decreases, the propagation speed slows down and the pulse width broadens.
\begin{figure}[ht]
\centering
\includegraphics[height=2.5in,width=3.6in]{fig5}
\caption{The decay of the solitary wave amplitude [Eq. \eqref{solution}] with time is shown as a result of the Landau damping effect for different values of the parameters $m,~\mu_{pd}$ and $\sigma$ as relevant for plasmas with positively charged dusts. The other values are taken as $\alpha_1=0.6,~\alpha_2=0.8$, $\alpha_3=8$ and $N_0=0.2$ so that $b\sim c\gg a$ is satisfied. The qualitative behaviors remain the same for plasmas with negatively charged dusts. }
\label{fig:fig5}
\end{figure}
We numerically investigate the properties of the soliton solutions given by Eqs. \eqref{sol of k-dv} and \eqref{final-sol} with different plasma parameters. Figure \ref{fig:fig3} exhibits the characteristics of the KdV soliton for different values of (a) $\sigma$ [Fig. \ref{fig:fig3a}] and (b) $\mu_{pd}$ [Fig. \ref{fig:fig3b}] for plasmas with positively (PCD) and negatively charged dusts (NCD). The changes in the amplitude and width are more pronounced in the case of plasmas with negatively charged dusts: the solitons are wider and their amplitudes remain smaller than in the positively charged dust case. From Fig. \ref{fig:fig3a} it is seen that as the temperature ratio $\sigma$ increases, both the amplitude and width of the soliton decrease. On the other hand, as the ratio $\mu_{pd}$ increases, an increase of the amplitude and a decrease of the width are found to occur [Fig. \ref{fig:fig3b}].
Typical forms of the KdV soliton [Fig. \ref{fig:fig4a}] and of the soliton with the effect of Landau damping [Fig. \ref{fig:fig4b}] are shown in Fig. \ref{fig:fig4}. The parameter values are for typical laboratory plasmas (as mentioned before) with positively charged dusts. The features are similar in the case of negatively charged dusts. Clearly, the wave amplitude decays with time under the effect of a small amount of Landau damping. Such a decay of the wave amplitude with time [Eq. \eqref{solution}] is also exhibited in Fig. \ref{fig:fig5} for different parameter values. The latter are for plasmas with positively charged dusts with $m>1$ and $\mu_{pd}\sim\sigma\sim1$. We find that as the mass and temperature ratios increase (relevant for the typical laboratory dusty plasmas mentioned above) the wave amplitude is reduced. Such a reduction of the amplitude is more pronounced with a small enhancement of $\sigma$. Furthermore, an increase of the wave amplitude with the density ratio $\mu_{pd}$ is also found to occur.
\section{Conclusion}
We have investigated the Landau damping effects of both positive and negative ions on small but finite amplitude electrostatic solitary waves in dusty negative-ion plasmas consisting of mobile charged dusts and both positive and negative ions. A Korteweg-de Vries (KdV) equation with a nonlocal integral term (Landau damping) is derived which governs the dynamics of weakly nonlinear and weakly dispersive DAWs. It is found that for typical laboratory \cite{merlino1998,kim2006,kim2013} and space plasmas \cite{rapp2005}, the Landau damping (and the nonlinear) effects for both positive and negative ions become dominant over the finite Debye length (dispersive) effects, for which the KdV soliton theory is no longer applicable to DAWs in dusty pair-ion plasmas. In such cases and in the presence of Landau damping, the soliton amplitude is found to decay with time. The wave amplitude also decreases with an increase of the ratios of negative to positive ion masses $(m)$ and the positive to negative ion temperatures $(\sigma)$. However, the amplitude may be increased with increasing values of the positive ion to dust number density ratio $(\mu_{pd})$. On the other hand, the amplitude and width of the KdV soliton are found to decrease with an increase of $\sigma$, whereas its amplitude increases and the width decreases with an increase of $\mu_{pd}$. The results should be useful for understanding the evolution of dust-acoustic solitary waves in dusty negative ion plasmas such as those in laboratory \cite{merlino1998,kim2006,kim2013} and space environments \cite{geortz1989,rapp2005}.
\section*{Acknowledgement}
{A. B. is thankful to the University Grants Commission (UGC), Govt. of India, for a Rajiv Gandhi National Fellowship with Ref. No. F1-17.1/2012-13/RGNF-2012-13-SC-WES-17295/(SA-III/Website). This research was partially supported by the SAP-DRS (Phase-II), UGC, New Delhi, through sanction letter No. F.510/4/DRS/2009 (SAP-I) dated 13 Oct., 2009, and by the Visva-Bharati University, Santiniketan-731 235, through Memo No. REG/Notice/156 dated January 7, 2014.}
\section{Introduction}
The nuclear stellar disc (NSD) is a flat dense stellar structure that roughly outlines the Galactic centre with a radius of $\sim150$\,pc and a scale height of $\sim40$\,pc \citep[e.g.][]{Launhardt:2002nx,gallego-cano2019,Sormani:2020aa,Sormani:2022wv}. Its study is hampered by the extreme source crowding and the high extinction towards the innermost regions of the Galaxy \citep[e.g.][]{Nishiyama:2006tx,Nogueras-Lara:2021wj}. The analysis of its structure and stellar population is therefore mainly limited to near-infrared high-resolution photometry, while spectroscopy is restricted to very small regions of high interest \citep[e.g.][]{Lohr:2018aa,Clark:2018aa}.
A photometric analysis of the innermost $\sim1600$\,pc$^2$ of the NSD using the GALACTICNUCLEUS catalogue \citep[a high-angular-resolution ($\sim0.2''$) photometric survey in the near-infrared,][]{Nogueras-Lara:2018aa,Nogueras-Lara:2019aa,Nogueras-Lara:2019ad}, determined that the NSD is mainly dominated by old stars (more than 80\,\% of its total stellar mass was formed more than 8\,Gyr ago). A similar analysis of Sgr\,B1, a region of intense HII emission located $\sim$100\,pc away from the centre of the NSD \citep[e.g.][]{Simpson:2021ti}, showed evidence of an intermediate-age stellar population ($\sim40$\,\% of the stellar mass was formed $\sim2-7$\,Gyr ago) in addition to the old stars \citep[$\sim40\,\%$ of the stellar mass is older than 7\,Gyr,][]{Nogueras-Lara:2022ua}. This is consistent with the inside-out formation scenario proposed for the NSD \citep[based on recent studies of nearby spiral galaxies,][]{Gadotti:2020aa,Bittner:2020aa}, which would give rise to a radial age gradient. In this picture, the formation of the NSD is related to the Galactic bar, which funnels gas from the Galactic disc towards the innermost regions of the Galaxy \citep[e.g.][]{Sormani:2019aa}, producing an accumulation of gas and dust that grows the NSD from the inside out.
In this letter, we follow up on previous work that studied stellar populations at different lines of sight \citep{Nogueras-Lara:2022ua} and investigate the age distribution of stellar populations located at different NSD radii \emph{along} the line of sight. We aim to assess the presence of an age gradient that would support the inside-out formation scenario proposed for the NSD.
\section{Data}
\subsection{Photometry}
\label{satu}
\begin{figure*}[h!]
\includegraphics[width=\linewidth]{scheme}
\caption{Spitzer false colour image using 3.6, 4.5, 5.8\,$\mu$m, as red, green and blue \citep{Stolovy:2006fk}. The white shaded boxes indicate the target region. The position of Sgr\,B1 as well as the nuclear star cluster (NSC) are indicated. The circular region indicates the effective radius of the nuclear star cluster \citep[$\sim5$\,pc, e.g.][]{gallego-cano2019}.}
\label{scheme}
\end{figure*}
We used $H$ and $K_s$ photometry from the GALACTICNUCLEUS survey \citep{Nogueras-Lara:2018aa,Nogueras-Lara:2019aa}. This is a high-angular ($\sim0.2''$) resolution near-infrared catalogue specially designed to observe the NSD. It contains accurate point spread function photometry for more than three million sources. The zero point systematic uncertainty is below 0.04\,mag in all bands, whereas the statistical uncertainties are below 0.05\,mag at $H\sim19$\,mag and $K_s\sim18$\,mag.
We corrected potentially saturated sources in $K_s$ \citep[$K_s<11.5$\,mag, e.g.][]{Nogueras-Lara:2019aa} using $K_s$ data from the SIRIUS IRSF survey \citep{Nishiyama:2008qa}. We also included saturated sources which were not detected in the GALACTICNUCLEUS catalogue.
Figure\,\ref{scheme} indicates the target region, which was chosen to overlap with a high-precision proper motion catalogue of the Galactic centre \citep{Libralato:2021td}. We excluded regions close to the nuclear star cluster because its different star formation history \citep[e.g.][]{Schodel:2020aa,Nogueras-Lara:2021wm} could affect our analysis.
\subsection{Proper motions}
We used the Galactic centre proper motion catalogue obtained by \citet{Libralato:2021td} using the Wide-Field Camera 3 (WFC3/IR, filter F153M) at the HST. This catalogue contains absolute high-precision proper motions for more than 800,000 stars calibrated with Gaia DR2 \citep{Gaia-Collaboration:2016uw,Gaia-Collaboration:2018aa}.
\section{Colour-magnitude diagram and target selection}
Figure\,\ref{CMD} shows the colour magnitude diagram (CMD) $K_s$ vs. $H-K_s$ of the target region. To remove foreground stars from the Galactic disc and bulge/bar, we applied a colour cut $H-K_s\gtrsim1.3$\,mag \citep[e.g.][]{Nogueras-Lara:2021uz,Nogueras-Lara:2021wj}.
\begin{figure}
\includegraphics[width=\linewidth]{CMD}
\caption{CMD $K_s$ versus $H-K_s$. The blue and salmon coloured regions indicate the two target groups of stars with different reddening, dominated on average by stars from the closest edge of the NSD (NSD outer region) and by stars deeper inside the NSD (NSD inner region), respectively. The dashed boxes show the reference stars used to build the extinction maps for each of the stellar groups analysed (see Sect.\,\ref{extinct}). The black arrow indicates the reddening vector.}
\label{CMD}
\end{figure}
Recent results on the NSD structure indicate that the extinction within the NSD correlates with the position of the stars along the line of sight \citep{Nogueras-Lara:2022aa}. Hence, stars belonging to the closest edge of the NSD suffer a lower reddening than stars from its farthest edge. It is then possible to statistically distinguish between stars located at different NSD radii along the line of sight by applying a colour cut in the CMD $K_s$ vs. $H-K_s$. We selected two target groups corresponding to stars from the closest edge of the NSD and stars deeper inside the NSD (hereafter NSD outer and inner regions, respectively) by applying the colour cuts indicated in Fig.\,\ref{CMD}.
To check our target selection and assess whether the target stellar groups are dominated by stars located at different NSD radii, we cross-correlated the photometry from the GALACTICNUCLEUS survey \citep{Nogueras-Lara:2018aa,Nogueras-Lara:2019aa} with the proper motion catalogue by \cite{Libralato:2021td}. Figure\,\ref{proper} shows the proper motion distribution of stars from each of the groups. We clearly distinguish a different distribution of the proper motion component parallel to the Galactic plane ($\mu_l$) for each of the groups. We computed a mean value of $\mu_{l} = -4.66\pm0.02$\,mas/yr for the group of stars with the bluest colour (i.e. the lowest extinction), where the uncertainty corresponds to the standard error of the distribution. This value is in agreement with the rotation of stars from the closest edge of the NSD, $\mu_l = -4.70\pm0.03$\,mas/yr, obtained in \citet{Nogueras-Lara:2022aa}. On the other hand, we obtained a mean value of $\mu_{l} = -5.77\pm0.03$\,mas/yr for the group of stars with the reddest colour (i.e. the highest extinction). This value is lower in absolute value than the one computed in previous work for stars from the farthest edge of the NSD \citep[$\mu_{l} = -7.65\pm0.03$\,mas/yr, see Table\,1 in ][]{Nogueras-Lara:2022aa}, but significantly larger than the result for stars from the closest edge. This indicates that the stars in this group mainly belong to the inner regions of the NSD, lying at more internal radii than stars from the farthest edge of the NSD and than the outer group previously defined.
We found that the mean of the proper motion distribution of the component perpendicular to the Galactic plane is $\langle\mu_b\rangle\sim 0$\,mas/yr, in agreement with previous work \citep{Shahzamanian:2021wu,Martinez-Arranz:2022uf,Nogueras-Lara:2022aa}. Small differences in the shape of the $\mu_b$ distribution of the target groups might be caused by a different contribution of contaminant stars from the Galactic bulge/bar, which can depend on the applied colour cut for the sample selection.
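The group statistics quoted above (mean $\mu_l$ and its standard error for a colour-selected sample) reduce to a few lines; a minimal sketch, assuming numpy and placeholder arrays for the cross-matched colours and proper motions:

```python
import numpy as np

def group_mean_mu_l(colour, mu_l, lo, hi):
    """Mean proper motion parallel to the Galactic plane (mu_l, mas/yr)
    and its standard error, for stars selected by an H - Ks colour
    cut [lo, hi)."""
    sel = (colour >= lo) & (colour < hi)
    mu = mu_l[sel]
    # Standard error of the mean: sample std / sqrt(N)
    return mu.mean(), mu.std(ddof=1) / np.sqrt(mu.size)
```

Applied to the two colour slices of the CMD, this yields the per-group mean $\mu_l$ and standard error; the function and variable names here are illustrative, not those of the original pipeline.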
\begin{figure}
\includegraphics[width=\linewidth]{proper_motions}
\caption{Proper motion distribution of stars from each of the target groups. The blue and salmon dots indicate stars from the NSD outer and inner regions, respectively. Only a fraction of the stars is shown to avoid overcrowding the plot.}
\label{proper}
\end{figure}
\section{Stellar population analysis}
To analyse the stellar population in each of the target groups, we created $K_s$ luminosity functions and fitted them with a linear combination of theoretical models, following the method in \citet{Nogueras-Lara:2019ad,Schodel:2020aa,Nogueras-Lara:2022ua}.
\subsection{Reddening correction}
\label{extinct}
We de-reddened the stars in each of the target groups by creating dedicated extinction maps following the approach described in \citet{Nogueras-Lara:2021wj} and using the extinction curve $A_H/A_{K_s} = 1.84\pm0.03$ \citep{Nogueras-Lara:2020aa}. To build the extinction maps, we used red clump stars \citep[giant stars in their helium core burning sequence, e.g.][]{Girardi:2016fk} and other red giant stars with similar brightness (dashed boxes in Fig.\,\ref{CMD}) as reference stars, and assumed that they have an intrinsic colour of $(H-K_s)_0=0.10\pm0.01$ \citep{Nogueras-Lara:2021wj}. We defined a pixel size of $\sim10''$ and used the 5 closest reference stars within a maximum radius of $15''$ from the centre of each pixel to compute the extinction value, applying an inverse distance weight method \citep[for details see appendix A.2 of ][]{Nogueras-Lara:2021wj}. We only computed an extinction value for a given pixel if at least 5 reference stars were found within the maximum radius.
Figure\,\ref{ext_maps} shows the obtained extinction maps. We computed mean extinction values of $A_{K_s}\sim1.7$\,mag and $A_{K_s}\sim2.4$\,mag for the extinction maps corresponding to stars from the NSD outer and inner regions, respectively. We applied the extinction maps to de-redden the photometry of each of the target groups.
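The map construction described above can be sketched as follows. This is a simplified illustration assuming numpy: the conversion from the reference-star colour excess to $A_{K_s}$ uses the quoted extinction curve, the inverse-distance weight exponent is an assumption, and all names are placeholders:

```python
import numpy as np

def aks_from_colour(h_ks, intrinsic=0.10, ah_aks=1.84):
    """A_Ks of a reference star from its colour excess:
    E(H - Ks) = A_H - A_Ks = (A_H/A_Ks - 1) * A_Ks."""
    return (h_ks - intrinsic) / (ah_aks - 1.0)

def idw_extinction(ref_xy, ref_aks, pixel_centres, n_ref=5, r_max=15.0):
    """Inverse-distance-weighted A_Ks per pixel: use the n_ref closest
    reference stars; pixels whose n_ref-th star lies beyond r_max
    arcsec get no value (NaN), as in the text."""
    out = np.full(len(pixel_centres), np.nan)
    for i, (px, py) in enumerate(pixel_centres):
        d = np.hypot(ref_xy[:, 0] - px, ref_xy[:, 1] - py)
        near = np.argsort(d)[:n_ref]
        if len(d) < n_ref or d[near].max() > r_max:
            continue  # not enough reference stars within r_max
        w = 1.0 / np.maximum(d[near], 1e-6) ** 2  # weight exponent assumed
        out[i] = np.sum(w * ref_aks[near]) / np.sum(w)
    return out
```

With the quoted intrinsic colour, a reference star observed at $H-K_s\simeq1.53$ maps to $A_{K_s}\simeq1.7$, consistent with the mean extinction of the NSD outer region given above.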
\begin{figure}
\includegraphics[width=\linewidth]{extinction_maps}
\caption{Extinction maps obtained for the stars from the NSD outer (upper panel) and inner regions (lower panel). The black star indicates the position of the supermassive black hole at the centre of the Galaxy. The white pixels indicate that there is no associated extinction value due to the lack of reference stars.}
\label{ext_maps}
\end{figure}
\subsection{Luminosity function}
We created a $K_s$ luminosity function using the de-reddened photometry for each of the target groups. We only used stars within the colour cut defined in Fig.\,\ref{CMD} for the group of stars from the NSD outer region, whereas for the group of stars from the inner region of the NSD we also included, for the sake of completeness, stars that were detected in $K_s$ but not in $H$. The non-detection of these stars in $H$ indicates that they are affected by a larger extinction and thus belong to the target group.
To avoid over-de-reddened stars, we excluded stars with a de-reddened $H-K_s$ colour more than 2$\sigma$ bluer than the mean value of the de-reddened distribution of the red clump features \citep[e.g.][]{Nogueras-Lara:2019ad,Nogueras-Lara:2022ua}. To build the luminosity functions, we chose the bin width that maximises the Freedman-Diaconis \citep{Freedman1981} and Sturges \citep{doi:10.1080/01621459.1926.10502161} estimators, using the ``auto'' option in the python function numpy.histogram \citep{Harris:2020aa}. We assumed Poisson uncertainties (i.e. the square root of the number of stars) for each luminosity function bin.
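The binning step above is directly reproducible with numpy, whose ``auto'' rule takes the larger of the Freedman-Diaconis and Sturges bin counts; a minimal sketch:

```python
import numpy as np

def ks_luminosity_function(ks_dereddened):
    """Ks luminosity function with numpy's 'auto' bin-width rule
    (maximum of the Freedman-Diaconis and Sturges bin-count
    estimators) and Poisson uncertainties, i.e. sqrt(N) per bin."""
    counts, edges = np.histogram(ks_dereddened, bins='auto')
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, counts.astype(float), np.sqrt(counts)
```

The input array stands for the de-reddened $K_s$ magnitudes of one target group.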
The faint end of the luminosity functions is significantly affected by incompleteness due to the extreme source crowding in the NSD \citep[e.g.][]{Nogueras-Lara:2019ad,Nogueras-Lara:2022ua}. We therefore computed a crowding completeness solution by determining the critical distance at which a star can be detected around a brighter star, following the method described in \citet{Eisenhauer:1998tg}. We divided the analysed field into small subregions of $2'\times1.4'$ and computed a completeness solution for each of them. We then combined the results and computed their mean and standard deviation to obtain the final completeness and its uncertainty, as explained in \citet{Nogueras-Lara:2020aa}. We used our completeness solution to correct the luminosity functions, setting a lower limit of $70\,$\% of data completeness.
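Once the completeness solution is in hand, applying it to the luminosity function amounts to the following sketch (completeness given per magnitude bin as a fraction; names are placeholders):

```python
import numpy as np

def apply_completeness(counts, errors, completeness, limit=0.70):
    """Scale each luminosity-function bin by 1/completeness and drop
    (set to NaN) bins whose completeness falls below the 70% limit."""
    comp = np.asarray(completeness, dtype=float)
    good = comp >= limit
    safe = np.where(good, comp, 1.0)  # avoid dividing by small values
    corrected = np.where(good, counts / safe, np.nan)
    corr_err = np.where(good, errors / safe, np.nan)
    return corrected, corr_err
```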
Due to possible saturation and incompleteness of the bright end of the luminosity function, even after using the SIRIUS IRSF catalogue to correct saturated stars (see Sect.\,\ref{satu}), we restricted the bright end of the luminosity function to $K_s=8.5$\,mag for the subsequent analysis \citep[e.g.][]{Nogueras-Lara:2022ua}.
\subsection{Stellar populations}
Luminosity functions contain fundamental information about the star formation history. In this way, the presence of particular features such as the asymptotic giant branch bump, the red clump, or the red giant branch bump, and their relative weights, allows us to characterise their stellar population \citep[e.g.][]{Nogueras-Lara:2018ab,Nogueras-Lara:2019ad,Schodel:2020aa}.
We fitted the obtained $K_s$ luminosity functions with a linear combination of theoretical models to determine the stellar populations present in each target group, as explained in \citet{Nogueras-Lara:2019ad,Nogueras-Lara:2022ua}. We used 14 Parsec models\footnote{generated by CMD 3.6 (http://stev.oapd.inaf.it/cmd)} \citep{Bressan:2012aa,Chen:2014aa,Chen:2015aa,Tang:2014aa,Marigo:2017aa,Pastorelli:2019aa,Pastorelli:2020wz}, choosing ages that homogeneously sample the possible stellar populations in the analysed luminosity functions (14, 11, 8, 6, 3, 1.5, 0.6, 0.4, 0.2, 0.1, 0.04, 0.02, 0.01, 0.005 Gyr). The age spacing between the models was selected to account for the differences that appear in the luminosity function for stellar populations of different ages. In this way, we increased the number of models towards younger ages, where the luminosity function is more affected by age variations \citep[see Sect. Methods in][]{Nogueras-Lara:2022ua}.
We assumed a Kroupa initial mass function corrected for unresolved binaries \citep{Kroupa:2013wn}, and twice solar metallicity (Z=0.03) for our models, in agreement with previous results for the NSD \citep[e.g.][]{Schultheis:2019aa,Nogueras-Lara:2019ad,Schultheis:2021wf,Nogueras-Lara:2022ua}. Our fitting algorithm included a parameter to consider the distance towards the target stellar populations, and also a Gaussian smoothing factor accounting for possible distance differences between stars and also some residual uncorrected differential extinction. Figure\,\ref{KLFs} shows the best fits obtained by minimising a $\chi^2$ for each of the luminosity functions, where we combined models with similar ages to minimise the degeneracy.
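The core of the fitting step can be sketched as follows. This is only an illustration: scipy's non-negative least squares stands in for the actual fitting algorithm, and the distance and Gaussian-smoothing parameters described above are held fixed here:

```python
import numpy as np
from scipy.optimize import nnls

def fit_luminosity_function(lf_obs, lf_err, model_lfs):
    """Fit an observed LF as a non-negative linear combination of
    theoretical model LFs (one model per column of model_lfs) by
    error-weighted least squares. Returns the model weights and the
    reduced chi^2 of the fit."""
    A = model_lfs / lf_err[:, None]  # error-weighted design matrix
    b = lf_obs / lf_err
    weights, _ = nnls(A, b)
    resid = (lf_obs - model_lfs @ weights) / lf_err
    dof = max(len(lf_obs) - np.count_nonzero(weights), 1)
    return weights, np.sum(resid**2) / dof
```

Grouping models with similar ages, as done for Fig.\,\ref{KLFs}, then amounts to summing the corresponding weights.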
\begin{figure}
\includegraphics[width=\linewidth]{KLFs}
\caption{Analysis of the de-reddened luminosity functions corresponding to the NSD outer (left panel) and inner (right panel) regions. The reduced $\chi^2$ of the fit is shown in each panel. The 14 theoretical models used are grouped into 5 broader age bins to decrease the degeneracy between models with similar ages.}
\label{KLFs}
\end{figure}
We resorted to Monte Carlo simulations to determine the contribution of each model to the luminosity function and estimate the uncertainties, as explained in \citet{Nogueras-Lara:2022ua}. We created 1000 Monte Carlo samples, recomputing the number of stars per bin in each luminosity function by randomly varying the original value according to a Gaussian distribution with a standard deviation equal to the bin uncertainty. We then fitted each Monte Carlo sample following our method and averaged over the results to compute a mean value and its standard deviation.
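The Monte Carlo error propagation reduces to a short loop; a sketch assuming numpy, where `fit_fn` stands for the luminosity-function fitting routine:

```python
import numpy as np

def monte_carlo_uncertainties(lf_obs, lf_err, fit_fn, n_samples=1000, seed=0):
    """Resample each LF bin with a Gaussian of width equal to its
    uncertainty, refit every sample with fit_fn, and summarise the
    fitted quantities by their mean and standard deviation."""
    rng = np.random.default_rng(seed)
    fits = np.array([fit_fn(rng.normal(lf_obs, lf_err))
                     for _ in range(n_samples)])
    return fits.mean(axis=0), fits.std(axis=0)
```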
We repeated the process also using MIST models \citep{Paxton:2013aa,Dotter:2016aa,Choi:2016aa} with similar ages and a Salpeter initial mass function, to account for possible systematic effects caused by the choice of different stellar evolutionary tracks \citep[e.g.][]{Nogueras-Lara:2022ua}. We obtained small differences between the analyses carried out with Parsec and MIST models, as shown in Fig.\,\ref{SFH}. The final results were computed by combining the values obtained with both sets of models (Fig.\,\ref{SFH}). The uncertainties were calculated by quadratically propagating those obtained for each age bin with Parsec and MIST models.
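The combination of the Parsec- and MIST-based results per age bin can be sketched as follows (taking the simple mean of the two values is an assumption on our part; the quadratic propagation of the uncertainties follows the text):

```python
import numpy as np

def combine_tracks(frac_parsec, err_parsec, frac_mist, err_mist):
    """Per-age-bin mass fractions from two stellar-evolution model
    sets, combined as their mean; the uncertainty of a mean of two
    values propagates quadratically as 0.5 * sqrt(s1^2 + s2^2)."""
    frac = 0.5 * (np.asarray(frac_parsec) + np.asarray(frac_mist))
    err = 0.5 * np.hypot(err_parsec, err_mist)
    return frac, err
```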
From Fig.\,\ref{SFH}, we conclude that the stellar populations in each of the target regions are significantly different. While most stars across the NSD have ages above 7\,Gyr, we find that in the outer region of the NSD there is a factor 3 more stars with ages between 2 and 7\,Gyr than in its inner region, strongly suggesting the presence of an age gradient.
\begin{figure*}
\includegraphics[width=\linewidth]{res_fit}
\caption{Stellar populations present in the NSD outer region (left panel), inner region (central panel), and all the NSD stars in the sample (right panel). The numbers indicate the percentage of mass due to a given age bin and its standard deviation.}
\label{SFH}
\end{figure*}
\subsection{Systematic uncertainties}
To assess our results, we studied potential sources of systematic uncertainties:
\begin{enumerate}
\item {\it Luminosity function bin width}. We repeated the described process for each of the target populations assuming half and double the previously computed bin width. We did not observe any significant difference within the uncertainties.
\item {\it Completeness solution and faint end of the luminosity function}. We created two additional completeness solutions considering the completeness values per magnitude bin, subtracting and adding the corresponding uncertainty. We corrected the luminosity functions applying these two completeness solutions and repeated the fitting process. We slightly changed the faint end of the luminosity function to avoid problems related to incompleteness due to sensitivity. We did not observe any significant variation within the obtained uncertainties.
We also repeated the process assuming a completeness limit of 75\,\% instead of 70\,\% without any significant change.
\item {\it Bright end of the luminosity function}. We set a bright limit of $K_s=8.5$\,mag (de-reddened) for our study. Nevertheless, we also repeated our analysis including stars up to $K_s=8$\,mag and conclude that it does not affect our results in any significant way. Only a somewhat larger contribution ($\lesssim2$\,\% variations) is observed for some young age bins (<2\,Gyr) in the sample from the NSD outer region.
\item {\it Metallicity of the stellar population}. We assumed twice solar metallicity for our analysis, in agreement with the super-solar metallicities found for NSD stars \citep[e.g.][]{Schultheis:2021wf,Nogueras-Lara:2022ua,Feldmeier-Krause:2022vm}. Nevertheless, we also explored solar and 1.5 solar metallicity using Parsec models. In both cases we found that the old contribution is shifted towards the age bin 2-7\,Gyr for the NSD outer region. On the other hand, there is no variation in the results obtained for the NSD inner region. This result still agrees with the presence of different stellar populations at different NSD radii and is also compatible with an inside-out formation of the NSD. However, we found that the reduced $\chi^2$ is larger for both target populations when considering solar or 1.5 solar metallicity, indicating that assuming twice solar metallicity gives a better result.
\item {\it Influence of the extinction maps}. We repeated the process for each target group assuming different parameters to build the extinction map (7 reference stars, and a maximum radius of $20''$ to select reference stars). We did not observe any significant change within the uncertainties.
\item {\it Width of the colour cut for target selection}. We selected stars in the NSD outer region by applying a colour cut of width $\Delta(H-K_s)=0.3$\,mag, whereas the width of the colour cut was larger for the NSD inner region (Fig.\,\ref{CMD}). Our choice was motivated by the lower completeness when considering the stars from the inner regions of the NSD. To assess our results, we also tested the effect of assuming a colour cut with a similar width ($\Delta(H-K_s)=0.3$\,mag) for stars in the NSD inner region. We repeated the whole process assuming a completeness limit of $75\,\%$ to avoid completeness limitations imposed by the $H$ data. We obtained that the contribution of stars older than $7$\,Gyr is $\sim70$\,\%, whereas the contribution from the age bin $2-7$\,Gyr is $\sim16$\,\%. Therefore, our results are compatible with an age gradient in which stars in the colour range $H-K_s\sim 1.8-2.1$\,mag correspond to an intermediate case between the previously analysed ones (Fig.\,\ref{SFH}).
\item {\it Initial mass function}. Recent results on the known young clusters in the NSD point towards a top-heavy initial mass function \citep[e.g.][]{Hosek:2019va}. Nevertheless, our stellar samples are dominated by giant stars with a mass of $\gtrsim1$\,M$_\odot$ and therefore assuming a top-heavy initial mass function does not have any observable effect on our results \citep[e.g.][]{Schodel:2020aa}.
\item {\it Contamination from the Galactic bulge/bar in the target groups}. We used a colour cut $H-K_s\gtrsim1.3$\,mag to remove foreground stars from the NSD sample. However, a residual contamination from the innermost regions of the Galactic bulge/bar could remain in our target groups \citep[$\lesssim20$\,\% of the stars in the sample, as estimated by ][]{Sormani:2022wv}. Given that the stellar population from the innermost bulge/bar is mainly old and metal rich \citep[e.g.][]{Clarkson:2011ys,Nogueras-Lara:2018ab,Renzini:2018aa}, its presence would not affect the detected radial age gradient, and could only increase the number of stars in the oldest age bin, without affecting our conclusions.
\end{enumerate}
\section{Discussion and conclusion}
A previous study of the central regions of the NSD (the innermost $\sim20$\,pc\,$\times\,90$\,pc excluding the nuclear star cluster), showed that it is dominated by old stars \citep[more than 80\,\% of the total stellar mass was found to be older than 8\,Gyr,][]{Nogueras-Lara:2019ad}. In this letter, we analysed a similarly concentrated region towards the centre of the NSD, but considered stars located at different NSD radii along the line of sight. We selected our target samples of stars by using their different extinction, and found that they are placed on average at different Galactocentric NSD radii by analysing their proper motion distribution. Hence, the current study was only possible after \citet{Nogueras-Lara:2022aa} found that the extinction within the NSD correlates with the position of the stars along the line of sight.
Our results indicate that the stellar populations at different NSD radii along the line of sight are significantly different. We detected an age gradient in the NSD along the line of sight, such that the stellar population from the innermost regions is significantly older than that from the outer regions of the NSD. Specifically, we found a significant fraction of stars with ages between 2 and 7\,Gyr in the NSD outer region that is not present in our sample deeper inside the NSD, where $\sim90\,$\% of the originally formed stellar mass is older than 7\,Gyr. We also observed that the contribution of even younger stars (age bins 0.5-2\,Gyr and 0-0.06\,Gyr) is larger in the NSD outer region than in the inner one.
To compare our results with the predominantly old stellar population detected in the central region of the NSD \citep{Nogueras-Lara:2019ad}, we also studied the $K_s$ luminosity function derived from all the stars in the analysed region. Figure\,\ref{SFH} (right panel) shows that the stellar mass is dominated by old stars (>7\,Gyr), in agreement with previous studies \citep{Nogueras-Lara:2019ad,Nogueras-Lara:2022ua}. The contribution from the intermediate age stellar population (2-7\,Gyr) is not significant when analysing all the stars in the region because the majority of stars in our sample belong to the innermost regions of the NSD. Therefore, the stars from the closest NSD edge do not contribute much to the total luminosity function. To further assess this, we computed the stellar mass from the NSD outer region in the age bin $2-7$\,Gyr, using Parsec models \citep[see Sect. Methods in][]{Nogueras-Lara:2022ua}. We ended up with a total mass of $(1.5\pm0.2)\cdot10^7$\,M$_\odot$. Comparing this value with the total stellar mass obtained when using all the stars in the target region, $(1.3\pm0.1)\cdot10^8$\,M$_\odot$, we conclude that the mass of stars in the age bin $2-7$\,Gyr only accounts for $\sim4$\,\% of the total stellar mass, which is compatible with the results in Fig.\,\ref{SFH} (right panel), and also with previous work, where the old stellar component dominates \citep{Nogueras-Lara:2019ad,Nogueras-Lara:2022ua}. This is in agreement with an exponential radial scale length of the NSD, in which the stellar mass density increases towards its innermost regions \citep[e.g.][]{Sormani:2022wv}.
Our results, in combination with the detection of a significant stellar mass in the age bin of 2-7\,Gyr in the Sgr\,B1 region (belonging to the NSD and located at $\sim100$\,pc in projection from its centre), indicate the presence of an age gradient in the NSD. Therefore the age distribution of the NSD stellar population is similar to the one found for other NSDs in external spiral galaxies \citep{Bittner:2020aa}, and supports an inside-out formation of the NSD \citep{Gadotti:2020aa,Bittner:2020aa}.
\begin{acknowledgements}
This work is based on observations made with ESO Telescopes at the La Silla Paranal Observatory under program ID195.B-0283. The data employed in this work can be downloaded from the ESO Archive Facility. F. N.-L. gratefully acknowledges the sponsorship provided by the Federal Ministry for Education and Research of Germany through the Alexander von Humboldt Foundation. D. G. acknowledges support by STFC grant ST/T000244/1.
\end{acknowledgements}
\section{Introduction}
This manuscript draws inspiration from recent trends in fundamental physics and computer science.
{\em A view from physics.} A striking feature of most quantum gravity approaches is that spacetime can be in a superposition of macroscopic (semi)classical geometries. This genuinely quantum gravitational effect can already arise from quantum manipulations of weakly gravitating, non-relativistic matter \cite{Christodouloupossibilitylaboratoryevidence2019}.
The past few years have seen a concentrated effort to devise and implement experiments that ought to confirm or invalidate this phenomenon \cite{BoseSpinEntanglementWitness2017,BoseMASSIVE2018,MarshmanLocalityEntanglementTableTop2019,Christodouloupossibilityexperimentaldetection2018b}.
Superpositions of macroscopic geometries are the subject of recent intense discussion also in the context of quantum causal structures \cite{ChiribellaQuantumcomputationsdefinite2013,ZychBellTheoremTemporal2017,PaunkovicCausalordersquantum2019}.
These lines of research bring a quantum informational perspective to quantum gravitational physics. This cross fertilisation deserves to be explored further.
Several approaches to quantum gravity employ graphs as the topological skeleton on which the theory is built. Notable examples are loop quantum gravity, causal dynamical triangulations, causal sets and tensor models. The description of a superposition of spacetimes would then involve a superposition of graphs. The question arises, what is the appropriate mathematical framework to describe graph superpositions, and in particular, how should invariance under changes of coordinates be imposed? The manner in which this question is answered, as we will see, can have profound consequences for the structure of the theory and its predictions.
Typically, invariance under changes of coordinates is attempted by either embedding the graphs in an ambient continuous space, a manifold, or by defining the localisation on a graph using auxiliary physical fields, or by working at the level of a state space defined with equivalence classes of graphs under renamings. These strategies all present difficulties for arriving at a meaningful notion of superpositions of graphs. Here, we propose an alternative strategy tailor-made for this purpose that we argue to be conceptually clear and powerful: to directly use node names as natively discrete coordinates, forgo embedding in the continuum, and impose invariance under renamings at the level of observables rather than at the level of states. We show in particular that directly working at the level of invariant states misses crucial physics, while using auxiliary fields to rectify the situation can introduce instantaneous signalling.
{\em The view from Computer Science.} The fundamental question in Computer Science is `What is a computer?' That is, what features of the natural world are available as resources for computing and how can we capture these resources into a mathematical definition, a `model of computation'? The Turing machine was initially believed to be the ultimate such model in the first half of the previous century. In the 1960's, spatial parallelism came to be recognised as a major additional resource, captured into distributed models of computing: dynamical networks of interacting automata. In the 1990's, it became clear that quantum parallelism is another powerful computational resource. This was again captured into models of quantum computing, the quantum Turing machine. Therefore, the current `ultimate' answer to the question `What is a computer?' appears to be a distributed model of quantum computing: a dynamical network of interacting quantum automata \cite{ArrighiOverview,ArrighiQCGD}. The intuitive idea of a network of quantum computers has been otherwise coined as the `quantum internet' \cite{QuantumNetworksKimble,QuantumNetworksCirac,QuantumNetworksBianconi}. However, it seems we are not done yet, as it is natural to consider the dynamics of the network itself to be quantum dynamics. Then, the connectivity/topological structure of the network itself could be in a superposition, a potentially powerful computational resource. Indeed, if spacetime can be found in a quantum superposition in nature as often postulated by quantum gravity theory, this resource is in principle available.
{\em Contributions to common grounds.} We provide a robust notion of quantum superposition of graphs. This can serve as the basis on which to build the state space for quantum gravity theories, as well as for quantum circuit paradigms that admit superpositions of circuits. We import techniques from the paradigms of Quantum Walks and their multi-particle regime of Quantum Cellular Automata \cite{ArrighiOverview}, together with their recent extension to dynamical graphs, namely Quantum Causal Graph Dynamics \cite{ArrighiQCGD}. On the quantum gravity theory front, we mainly draw inspiration from formulations of Loop Quantum Gravity (LQG) \cite{RovelliQuantumGravity2004}, whose kinematical state space is spanned by coloured graphs, called the spin network states.
The main subtlety lies in the treatment of a central symmetry we identify: renaming invariance.
\section{Graph State Space: Names Matter}
We consider quantum theories built on a kinematical state space that is the span of orthonormal basis states corresponding to graphs. Dynamics are defined as a (unitary) operator over this state space. Precise definitions of these notions are given in Section \ref{sec:labelledGraphs}. The subtlety in the construction lies in how to treat node names, which we now discuss.
Consider a Hilbert space $\mathcal{H}_\mathcal{G}$ defined as that generated by a countably infinite orthonormal basis $\mathcal{G}$, where the elements of $\mathcal{G}$ are \emph{graphs}, denoted as $\ket{G},\ket{G'},\ldots \! \in \mathcal{G}$. That is, each graph labels a different unit vector $\ket{G}$, and a generic (pure) state in $\mathcal{H}_\mathcal{G}$ is a superposition of graphs:
$$\ket{\psi} = \alpha_G \ket{G} + \alpha_{G'} \ket{G'} + \ldots$$
with $\alpha_G,\alpha_{G'}, \ldots \in \mathbb{C}$. The inner product on $\mathcal{H}_\mathcal{G}$ is defined by linearity and by
\begin{equation}
\langle G \vert G' \rangle = \delta_{GG'}.\label{eq:inner}
\end{equation}
where $\delta_{GG'}$ is unit if $G=G'$ and vanishes otherwise.
Now, two graphs $\ket{G}$ and $\ket{G'}$ can differ merely by the names given to their nodes. Anticipating notation, we denote this as $\ket{G'} = R\ket{G}$, where $R$ is a renaming. Clearly, these two graphs are physically equivalent. On this basis, whenever two graphs differ only by a renaming, it appears that we should take
\begin{equation}
\langle G \vert R G \rangle=1 \ \ ?
\end{equation}
In fact, as we will see in the next section, it is imperative that we take them as orthogonal, taking
\begin{equation}
\langle G \vert R G \rangle=0.
\end{equation}
\renewcommand{\vec}[1]{#1}
An analogy can be made with plane waves in quantum mechanics and translation invariance. Consider the plane wave state $ \ket{p,0} = \int e^{i px} dx$ and the same state shifted in position, $ \ket{p,\Delta}= \int e^{i p(x+\Delta)} dx$, with $p$ the wave momentum and $x$ the position variable. In empty space there is no physical sense in which the two plane waves are different: the shift in position is immaterial, as the plane wave homogeneously spreads across space. Yet, the inner product $\langle \vec{p,0} \vert \vec{p,\Delta} \rangle $ need not be taken to be unit. In fact, it is imperative to distinguish mathematically between $\ket{p,0}$ and $\ket{p,\Delta}$ if we wish to do quantum mechanics. For instance, as a particle propagates, its plane-wave components typically evolve from $\ket{p,0}$ to $\ket{p,\Delta}$. Thus, whilst $\ket{p,0}$ and $\ket{p,\Delta}$ alone do not hold any physically relevant position information, their difference does: it carries physically relevant relative position information. To avoid confusion we emphasize that in this analogy there does exist an ambient spacetime geometry (Newtonian spacetime) with respect to which distances are defined, while in our treatment of quantum graphs we define superpositions of graphs already at the pre-geometrical, topological level, using only the naming of nodes on the graph.
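The point can be made explicit with the (non-normalisable) plane waves above: formally,

```latex
\langle p,0 \vert p,\Delta \rangle
  \;\propto\; \int e^{-ipx}\, e^{ip(x+\Delta)}\, dx
  \;=\; e^{ip\Delta} \int dx ,
```

so the shifted wave differs from the original only by the relative phase $e^{ip\Delta}$, which is exactly the relative-position information that would be lost by identifying the two states.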
The heart of the matter then lies in the following observation. If we do not further constrain the theory, having taken $\braket{G|G'}=0$ for graphs only differing in their naming, names will in principle be observable. This is clearly unreasonable. In the continuum, observables that read out the coordinates of a point on the manifold are excluded by the requirement of diffeomorphism invariance, also known as general covariance. This central insight of general relativity ensures that no prediction of the theory depends on the coordinate system in use.
Invariance of graphs under renamings should be recognized as a symmetry of similar importance for theories employing graphs rather than manifolds as their underlying topological space. Furthermore, the definition of invariance under renamings carries through effortlessly to quantum superpositions of graphs.
\section{Named Graphs or Instantaneous Signalling}\label{sec:signalling}
In this section we present our main argument for using node names, rather than a physical field, to define the vicinity of nodes, and for working at the level of a state space spanned by states corresponding to graphs, rather than a state space spanned by equivalence classes of graphs under renamings (``anonymous graphs''). We may also call the latter `name invariant states' and the former `name variant states', where invariance under renamings is to be understood as a type of gauge invariance of the state space.
In particular, we show that employing node names ensures we do not inadvertently introduce instantaneous signalling. We demonstrate this point with a simple toy theory.
Consider the state space to be the span of circular graphs having $n$ nodes and links. Each of these nodes has a unique name (e.g. $w, x, y, z$) and can be in any of the following internal states (colours): empty, occupied by an $a$--moving particle, occupied by a $b$--moving particle, or occupied by both. Nodes have ports $:\!\!a$ and $:\!\!b$, upon which the neighbouring nodes are attached. An $a$--moving particle is depicted as a half filled disk on port $a$'s side, and similarly for $b$. Thus, each node has the state space $\mathbb{C}^4$. The global Hilbert space is ${\cal H}=\bigotimes_{v \in V_G} \mathbb{C}^4$, where $\bigotimes_{v \in V_G}$ denotes the tensor product over the nodes of graph $G$.
We take the dynamics to be the simplest known quantum walk, the Hadamard quantum walk \cite{Kempe}. A quantum walk is a unitary operator driving a particle on a lattice in steps. Many quantum algorithms can be expressed in this manner; the Hadamard quantum walk in particular has been implemented on a variety of substrates, such as an array of beamsplitters traversed by photons \cite{Sciarrino}. Mathematically, evolution is implemented by applying an operator $U=TH$ on the graph state, the alternation of steps $H$ and $T$.
The step $H$ is the application of the Hadamard gate to the internal state of each node. Formally, $H=\bigotimes_n \textrm{Hadamard}$ with
\begin{align*}
\textrm{Hadamard}=\left(
\begin{array}{cccc}
1& 0& 0& 0\\
0& \frac{1}{\sqrt{2}} &\frac{1}{\sqrt{2}} &0\\
0& \frac{1}{\sqrt{2}} &-\frac{1}{\sqrt{2}} &0\\
0& 0& 0& 1
\end{array}
\right).
\end{align*}
Henceforth, we adopt an easily tractable pictorial notation:
\begin{align*}
H \ket{\splittednode{white}{white}} &= \ket{\splittednode{white}{white}}\\
H \ket{\splittednode{white}{black}} &= \frac{1}{\sqrt{2}} \left( \ket{\splittednode{white}{black}} + \ket{\splittednode{black}{white}} \right) \\
H \ket{\splittednode{black}{white}} &= \frac{1}{\sqrt{2}} \left( \ket{\splittednode{white}{black}} - \ket{\splittednode{black}{white}} \right)\\
H \ket{\splittednode{black}{black}} &= \ket{\splittednode{black}{black}}
\end{align*}
Once $H$ is applied, step $T$ moves the `particles' through port $a$ or $b$ to the adjacent node according to their species, all at once. For instance
\begin{eqnarray}
&T \ket{\raisebox{-6pt}{\resizebox{70pt}{20pt}{\tikz{
\draw (-0.5,0) -- (2.5,0);
\node at (0,0) [draw,rotate=90, circle, circle split part fill={white,white}]{};
\node at (0,-0.4) {$u$};
\node at (-0.3,0)[above]{\scriptsize :$a$};
\node at (0.3,0)[above]{\scriptsize :$b$};
\node at (1,0) [draw,rotate=90, circle, circle split part fill={black,white}]{};
\node at (1,-0.4) {$v$};
\node at (0.7,0)[above]{\scriptsize :$a$};
\node at (1.3,0)[above]{\scriptsize :$b$};
\node at (2,0) [draw,rotate=90, circle, circle split part fill={white,white}]{};
\node at (2,-0.4) {$w$};
\node at (1.7,0)[above]{\scriptsize :$a$};
\node at (2.3,0)[above]{\scriptsize :$b$};
}}}} = \ket{\raisebox{-6pt}{\resizebox{70pt}{20pt}{\tikz{
\draw (-0.3,0) -- (2.3,0);
\node at (0,0) [draw,rotate=90, circle, circle split part fill={black,white}]{};
\node at (0,-0.4) {$u$};
\node at (-0.3,0)[above]{\scriptsize :$a$};
\node at (0.3,0)[above]{\scriptsize :$b$};
\node at (1,0) [draw,rotate=90, circle, circle split part fill={white,white}]{};
\node at (1,-0.4) {$v$};
\node at (0.7,0)[above]{\scriptsize :$a$};
\node at (1.3,0)[above]{\scriptsize :$b$};
\node at (2,0) [draw,rotate=90, circle, circle split part fill={white,white}]{};
\node at (2,-0.4) {$w$};
\node at (1.7,0)[above]{\scriptsize :$a$};
\node at (2.3,0)[above]{\scriptsize :$b$};
}}}} \nonumber
\\
&T \ket{\raisebox{-6pt}{\resizebox{70pt}{20pt}{\tikz{
\draw (-0.5,0) -- (2.5,0);
\node at (0,0) [draw,rotate=90, circle, circle split part fill={white,white}]{};
\node at (0,-0.4) {$u$};
\node at (-0.3,0)[above]{\scriptsize :$a$};
\node at (0.3,0)[above]{\scriptsize :$b$};
\node at (1,0) [draw,rotate=90, circle, circle split part fill={white,black}]{};
\node at (1,-0.4) {$v$};
\node at (0.7,0)[above]{\scriptsize :$a$};
\node at (1.3,0)[above]{\scriptsize :$b$};
\node at (2,0) [draw,rotate=90, circle, circle split part fill={white,white}]{};
\node at (2,-0.4) {$w$};
\node at (1.7,0)[above]{\scriptsize :$a$};
\node at (2.3,0)[above]{\scriptsize :$b$};
}}}} = \ket{\raisebox{-6pt}{\resizebox{70pt}{20pt}{\tikz{
\draw (-0.3,0) -- (2.3,0);
\node at (0,0) [draw,rotate=90, circle, circle split part fill={white,white}]{};
\node at (0,-0.4) {$u$};
\node at (-0.3,0)[above]{\scriptsize :$a$};
\node at (0.3,0)[above]{\scriptsize :$b$};
\node at (1,0) [draw,rotate=90, circle, circle split part fill={white,white}]{};
\node at (1,-0.4) {$v$};
\node at (0.7,0)[above]{\scriptsize :$a$};
\node at (1.3,0)[above]{\scriptsize :$b$};
\node at (2,0) [draw,rotate=90, circle, circle split part fill={white,black}]{};
\node at (2,-0.4) {$w$};
\node at (1.7,0)[above]{\scriptsize :$a$};
\node at (2.3,0)[above]{\scriptsize :$b$};
}}}}
\end{eqnarray}
Nothing in particular happens if particles cross over each other or land on the same node (no collisions).
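The one-particle sector of the walk just defined can be sketched in a few lines of code on a named circular graph. The left/right port convention and the assignment of the Hadamard signs to the $a$ and $b$ species below are one possible reading of the pictorial rules, not the only one:

```python
from collections import defaultdict

def hadamard_step(psi):
    """Apply the Hadamard gate to the internal state of each branch.
    Sign convention (one possible choice): H|b> = (|a>+|b>)/sqrt(2),
    H|a> = (|a>-|b>)/sqrt(2)."""
    s = 2 ** -0.5
    out = defaultdict(float)
    for (node, species), amp in psi.items():
        if species == 'b':
            out[(node, 'a')] += s * amp
            out[(node, 'b')] += s * amp
        else:  # species == 'a'
            out[(node, 'a')] += s * amp
            out[(node, 'b')] -= s * amp
    return dict(out)

def translate_step(psi, names):
    """Move a-particles through port a (here: left) and b-particles
    through port b (right) on the circular graph with the given names."""
    n = len(names)
    idx = {name: i for i, name in enumerate(names)}
    out = defaultdict(float)
    for (node, species), amp in psi.items():
        shift = -1 if species == 'a' else +1
        out[(names[(idx[node] + shift) % n], species)] += amp
    return dict(out)

names = ['u', 'v', 'w', 'x']      # a named circular graph
psi = {('v', 'b'): 1.0}           # one b-moving particle at node v
for _ in range(2):                # two steps of U = T H
    psi = translate_step(hadamard_step(psi), names)
for state, amp in sorted(psi.items()):
    print(state, round(amp, 3))
# ('v', 'a') 0.5, ('v', 'b') -0.5, ('x', 'a') 0.5, ('x', 'b') 0.5
```

With the names kept, the final state has four branches in which positions and colours are entangled; under this sign convention the two $b$--branches carry opposite signs, matching the structure of Fig.~\ref{fig:UU}.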
Now, let us apply $U$ twice on an initial state featuring a single $b$--moving particle. The computation can be followed pictorially in Fig. \ref{fig:UU}.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{figs/Hadamard.pdf}
\includegraphics[width=\columnwidth]{figs/TH.pdf}
\includegraphics[width=\columnwidth]{figs/THTH.pdf}
\caption{\label{fig:UU}Twice the Hadamard quantum walk on circular graphs}
\end{figure}
In the final state the node names, or `particle positions', and their colours are \emph{entangled}. Note that the first two branches of the superposition and the last two branches of the superposition differ only by a renaming of the nodes ($u \leftrightarrow w$, $x \leftrightarrow v$). The last two branches are those that include a $b$--moving particle, and they come with opposite signs. Thus, if names were not present we would be left with radically different physics: a state that has only $a$--moving particles, and no entanglement.
To drive this point home, we switch to a different Hilbert space ${\cal H}'$, defined as the span of name invariant states or `anonymous graphs'. Technically, anonymous graphs can be defined as equivalence classes of graphs up to arbitrary renamings, see Section \ref{sec:labelledGraphs}. A pictorial depiction of this equivalence relation $\sim$ is as below:
\newcommand{\!\!\!\textrm{\raisebox{0.2684pt}{---}}\!\!\!\!}{\!\!\!\textrm{\raisebox{0.2684pt}{---}}\!\!\!\!}
\scalebox{0.75}{
\tikz {
\tikzset{
>=stealth',
punkt/.style={
rectangle,
rounded corners,
draw=black, very thick,
text width=6.5em,
minimum height=2em,
text centered},
pil/.style={
->,
thick,
shorten <=3pt,
shorten >=3pt,}
}
\node[] at(0,0){\lineoffour{white}{white}{white}{black}{white}{white}{white}{white}};
\node[] at(3,0){$\neq$};
\node[] at(6,0){\lineoffour{white}{white}{white}{white}{white}{black}{white}{white}};
\node[] at(0,-2){\lineoffourmodulo{white}{white}{white}{black}{white}{white}{white}{white}};
\node[] at(3,-2.1){$ \sim $};
\node[] at(6,-2){\lineoffourmodulo{white}{white}{white}{white}{white}{black}{white}{white}};
\draw[pil,>=latex] (0,-0.3)--(0,-1.7);
\draw[pil,>=latex](6,-0.3)--(6,-1.7);
}}
That is, pictorially we `erase' the names. Then,
because of the above identification, shifting a particle through an $a$-- or $b$--port is to do nothing: $T$ acts as the identity in the one-particle sector. Since $H^2$ is the identity, $U^2$ also reduces to the identity in the one-particle sector of the name invariant Hilbert space. Pictorially, dropping the names $w, x, y, z$ in Fig. \ref{fig:UU}, we obtain Fig. \ref{fig:UU2}. The last two terms \emph{cancel} and the first two terms \emph{sum}. Two steps of the dynamics get us back to where we started. Therefore, with anonymous graphs and their span ${\cal H}'$ we are unable to express one of the simplest quantum walks.
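The cancellation can be checked numerically. Below, the final named amplitudes are written as (node name, species) pairs, with signs following one possible reading of the pictorial rules (only the relative sign of the two $b$--branches matters):

```python
# One-particle amplitudes after two steps of U on a named ring.
psi = {('x', 'a'): 0.5, ('v', 'a'): 0.5, ('x', 'b'): 0.5, ('v', 'b'): -0.5}

# On a plain circular graph every rotation is induced by a renaming, so
# anonymizing a one-particle state retains only the particle species.
anonymous = {}
for (node, species), amp in psi.items():
    anonymous[species] = anonymous.get(species, 0.0) + amp

print(anonymous)  # the a-branches sum, the b-branches cancel
```

The $a$--amplitudes add up to $1$ while the $b$--amplitudes cancel to $0$: in the anonymous state space $U^2$ is the identity on this sector.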
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{figs/THTH_modulo.pdf}
\caption{\label{fig:UU2}Twice the Hadamard quantum walk on circular anonymous graphs}
\end{figure}
{\em `Remote star' detection.} Now, if one insists on doing away with names, one may try to remedy this lack of descriptive power by providing relative position information in the graphs. After all, a `landmark' will be available in most practical situations, whether the laboratory walls or the `fixed stars'. Attempting this ad hoc fix is instructive, as it unveils an even more severe issue: depending on whether some arbitrarily remote landmark is present or not, the physics of the final state changes instantaneously.
Let us illustrate this point through a gedanken experiment. We can model the presence of the landmark by introducing a new colour, bright green say, that is not affected by the dynamics. We will toggle this colour at some far away node $x$. Indeed, the calculation of Figure \ref{fig:UU} carries through when we consider circular graphs with an arbitrarily large number of nodes.\footnote{The same effect would be seen if we considered an infinite chain of nodes and edges. The generalisation to infinite graphs is however beyond the scope of this paper; it would require the use of $C^*$-algebras for the definition of the inner product, as was done for cellular automata in \cite{SchumacherWerner}.}
If the bright green star is present, it forbids the `unwanted identifications' between states. The correct behaviour of the dynamics is recovered, and a $b$--moving particle will be present in the state after applying $U$ twice, see Fig. \ref{fig:star} ({\em top}). When the bright green star is not present, the unwanted identifications cause the destructive interference of the $b$--moving degrees of freedom, see Fig. \ref{fig:star} ({\em bottom}). \emph{This change in spin particle behaviour is perfectly observable locally to the particle}. We thus obtain an arbitrarily remote `star' detector, by observing the local statistics of the behaviour of spin particles.
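The effect of the landmark on the identifications can be sketched as follows. We label positions by (hypothetical) integers relative to the particle's starting node, with the `star' sitting at some arbitrary faraway position; the amplitudes and signs again follow one reading of the pictorial rules:

```python
# Named one-particle amplitudes after two steps, keyed by
# (position relative to the starting node, species).
psi = {(-2, 'a'): 0.5, (0, 'a'): 0.5, (2, 'b'): 0.5, (0, 'b'): -0.5}

def anonymize(psi, star=None):
    """Merge branches related by a renaming.  Without a landmark every
    translation of the ring is induced by a renaming, so only the species
    survives; a star at a fixed node makes the position relative to the
    star invariant as well, blocking the identifications."""
    out = {}
    for (pos, species), amp in psi.items():
        key = species if star is None else (pos - star, species)
        out[key] = out.get(key, 0.0) + amp
    return out

print(anonymize(psi))             # {'a': 1.0, 'b': 0.0}: b destroyed
print(anonymize(psi, star=1000))  # four surviving branches: b observable
```

Toggling the star at an arbitrarily remote node thus switches the local $b$--particle statistics on and off in a single application of $U^2$.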
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{figs/THTH_star.pdf}
\includegraphics[width=\columnwidth]{figs/THTH_without_star.pdf}
\caption{\label{fig:star} Anonymous graphs can lead to instantaneous signalling. See the text for how this leads to a `remote star' detector. The graph segments shown here are understood as part of a large circular graph, or of an infinite chain of nodes. \\
{\em Top.} Local spin behaviour when star exists.\\
{\em Bottom.} Local spin behaviour when star does not exist.}
\end{figure}
This is a type of instantaneous signalling, although we cautiously use this heavily connotative term in the following sense. In the parlance of quantum information operational theories, a far away agent $A$ can produce a bright green landmark, and this event is then signalled to an agent $B$ that is performing spin measurements after two time steps, regardless of the number of nodes and links between $A$ and $B$. $B$'s measurement does not require position or distance information, only the capacity to discriminate the `spin' ($a$-- or $b$--moving) degree of freedom of the particle.
One might argue that the two agents at two specific nodes should be taken to be part of the state space, acting much like the landmarks considered previously. However, we can consider each node to be equipped with identical local measurement devices that are triggered synchronously at $t=2$. Similarly, each node can be equipped with a quantum operation producing or not producing a landmark at $t=0$. Then, if any of the local observers observes a $b$--moving particle at $t=2$, they know for a fact that at least one landmark was produced at $t=0$, regardless of where this occurred.
This instantaneous signalling is in contradiction with the basic principle of locality of interactions that underpins physics. If we hold on to this principle as being fundamental, then we must ask for formalisms that disallow instantaneous signalling---even in a gedanken experiment. Taking the alternative route of employing both anonymous graphs and landmarks, it is likely that by positing `unremovable' landmarks on a case-by-case basis, appropriately sprinkled to break every symmetry of the application at hand, one might eventually remove unwanted identifications, forbidding instantaneous signalling. This ad hoc approach of checking how many `landmarks' would be sufficient for every dynamics and initial condition of the theory at hand would become a daunting task for arbitrary graphs and arbitrary dynamics, potentially graph changing. The only way to be sure instantaneous signalling of this type is not present is to place distinguishable landmarks on all nodes of every graph---which is not unlike naming the nodes to begin with. In our opinion, grappling with the theory in such a way would be like trying to hide the culprit under the carpet instead of recognising that it is a simple and elegant symmetry.
This brings us back to the discussion of the previous section. Names are fiducial constructs that in the quantum theory are crucial for the purpose of aligning a superposition, and keeping the alignment through the dynamical evolution. Instead of trying to hide names under the carpet, we should explicitly allow for them, and posit invariance under renamings as a symmetry imposed on the theory. Conversely, having recognised invariance under renamings as a symmetry that is akin to a gauge invariance, we conclude that to avoid instantaneous signalling quantum theories built on graphs need to be based on a state space spanned by gauge variant states rather than on the gauge invariant space. Gauge invariance should be imposed later, at the level of the evolution and observables.
\section{Superpositions of Graphs: Names vs Colours}
\label{sec:labelledGraphs}
Moving to rigorous definitions, we clarify a simple but crucial point: the \emph{names} of graphs are a distinct, more primitive concept than the \emph{colouring} of graphs. While it is true that when colouring a graph we may implicitly induce a renaming, it is conceptually imperative to separate the two. This is similar, in the continuum, to separating the procedure of laying coordinates on a manifold from that of adding physical fields that live on the manifold. We conclude the section with the definition of a Hilbert space built on graphs.
We start by postulating a countable space ${\cal V}$ of possible names for the vertices of graphs. It suffices to think of them as the integers or `words' generated by a finite alphabet. We fix in addition $\pi$ to be a finite set of {\em ports} per node of the graph. We say that a partial function $f$ is a partial involution when, whenever $f(x)$ is defined and equal to $y$, $f(y)$ is defined and equal to $x$.
\begin{definition}[Graph]\label{def:graphs}
Let $\pi$ be a finite set of {\em ports}. A {\em graph} $G$ is given by a pair $(V_G, \Gamma_G)$, defined as
\begin{itemize}
\item[$\bullet$] A finite subset $V_G$ of ${\cal V}$.
\item[$\bullet$] A partial involution $\Gamma_G$ from $V_G\!\!:\!\!\pi$ to $V_G\!\!:\!\!\pi$ thereby describing a set $E_G$:
$$\left\{\ \{u\!\!:\!\! a,\Gamma_G(u\!\!:\!\! a)\}\ |\ u\!\!:\!\! a\in V_G\!\!:\!\!\pi\ \right\}.$$
\end{itemize}
$V_G$ are the vertices, $E_G$ the edges and $\Gamma_G$ is the adjacency function of the graph. The set of all graphs is written ${\cal G}$.
\end{definition}
\begin{definition}[Coloured Graph]\label{def:cgraphs}
A {\em coloured graph} $(G,C_G)$ is a graph $G$ with the addition of $C_G \equiv (\sigma_G, \delta_G)$:
\begin{itemize}
\item[$\bullet$] A partial function $\sigma_G$ from $V_G$ to a set $\Sigma$.
\item[$\bullet$] A partial function $\delta_G$ from $E_G$ to a set $\Delta$.
\end{itemize}
The sets $\Sigma$ and $\Delta$ are the internal states, a.k.a. node/vertex colours and link/edge colours, of the theory. The set of coloured graphs is written ${\cal G}_C$.
\end{definition}
With these two definitions, we see that \emph{assigning names} to the vertices of a graph is but a mathematical tool for describing them, similar to a coordinate system on a manifold.
The \emph{colouring}, on the other hand, serves the purpose of encoding a `field'---depending on the application at hand---and comes on top of the naming of nodes. For instance, as discussed in Section \ref{sec:QG}, a colouring of graphs can code for quantum spacetime geometry. For matter, quite often fermionic fields are modelled as living on the nodes, whereas bosonic fields are modelled as living on the edges \cite{ArrighiQED}. In summary, names and colours should not be confused. Names play the role of coordinates on the graph and are essential for describing the graph itself. Colours play the role of the physical field living on top of the graph. In particular, it can of course happen that a field (colouring) takes the same value $\sigma(u)=\sigma(v)$ at two different places $u\neq v$, which is not allowed for names.
These observations suggest that approaches dealing with superpositions of manifolds that label points by the values of fields, such as \cite{hardy2020implementation}, may suffer from the instantaneous signalling of the previous section. Indeed, in this path integral approach diffeomorphisms are allowed to act independently on each quantum branch, which suggests that the underlying kinematical state space considered is spanned by gauge invariant states.
For the sake of clarifying the argument of Section \ref{sec:signalling}, let us now define anonymous graphs. This is done by considering equivalence classes of named graphs modulo isomorphism:
\begin{definition}[Anonymous graphs]\label{def:pointedmodulo}
The graphs $G, H\in {\cal G}$ are said to be isomorphic if and only if there exists an injective function $R(.)$ from ${\cal V}$ to ${\cal V}$ which is such that $V_H=R(V_G)$ and for all $u,v\in V_G, a,b\in\pi$, $R(v)\!\!:\!\! b=\Gamma_H(R(u)\!\!:\!\! a)$ if and only if $v\!\!:\!\! b=\Gamma_G(u\!\!:\!\! a)$. Additionally, if the graphs are coloured $\sigma_H\circ R=\sigma_G$ and for all $u,v\in V_G, a,b\in\pi$, $\delta_H(\{R(u)\!\!:\!\! a,R(v)\!\!:\!\! b\})=\delta_G(\{u\!\!:\!\! a,v\!\!:\!\! b\})$. We then write $G\sim H$. Consider $G\in{\cal G}$. The {\em anonymous graph} $\widetilde{G}$ is the equivalence class of $G$ with respect to the equivalence relation $\sim$. Similarly if $G\in{\cal G}_C$. The space of anonymous graphs is denoted $\tilde{\mathcal{G}}$.
\end{definition}
Notice how, in the above definitions, anonymous graphs arise from graphs---not the other way round. In fact, we are not aware of a way to mathematically define a generic anonymous graph first, without any reference to a (named) graph, and then assign names to its nodes. Graphs, a concept that presupposes naming the nodes, are the primitive mathematical notion, from which anonymous graphs can be derived.
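As a minimal illustration of Definitions \ref{def:graphs} and \ref{def:pointedmodulo}, a graph can be encoded as its adjacency partial involution, with renamings acting on the names only. The dictionary encoding below is our own and purely illustrative:

```python
def rename(gamma, R):
    """Apply an injective renaming R (here a dict on node names) to the
    adjacency function Gamma_G, encoded as a dict (node, port) -> (node, port)."""
    return {(R[u], a): (R[v], b) for (u, a), (v, b) in gamma.items()}

# A three-node line graph u -:b a:- v -:b a:- w, as a partial involution.
gamma = {('u', 'b'): ('v', 'a'), ('v', 'a'): ('u', 'b'),
         ('v', 'b'): ('w', 'a'), ('w', 'a'): ('v', 'b')}

R = {'u': 'x', 'v': 'y', 'w': 'z'}
gamma_renamed = rename(gamma, R)

# The two graphs are isomorphic -- they define the same anonymous graph --
# yet as basis states |G> and |RG> of H they are distinct and orthogonal.
assert gamma_renamed != gamma
assert gamma_renamed[('x', 'b')] == ('y', 'a')
```

The anonymous graph $\widetilde{G}$ would be the orbit of `gamma` under all such renamings; nothing in the encoding requires defining that orbit first.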
We argued in the previous section that quantum superpositions of graphs can and must be defined as the span of named graphs. Therefore, we construct a state space based solely on Definition \ref{def:graphs}:
\begin{definition}[Superpositions of graphs]~\label{def:HCfbis} We define ${\cal H}$, the Hilbert space of graphs, as the Hilbert space having $\{\ket{G}\}_{G\in{\cal G}_C}$ as its canonical orthonormal basis.
\end{definition}
This definition captures the main point of this paper: ${\cal G}_C$ is the set of coloured graphs of Definition \ref{def:cgraphs}, which inherits its names from Definition \ref{def:graphs} of graphs, on which it is based. We have argued that it is this state space that should be the basis of a theory that manipulates superpositions of graphs, rather than a state space based on the name invariant space $\tilde{{\cal G}}$ of Definition \ref{def:pointedmodulo}. Thus, we posit:
\begin{postulate}
Physically relevant quantum superpositions of graphs are elements of ${\cal H}$.
\end{postulate}
As usual in quantum theory, states can either be `state vectors' (pure states), i.e. unit vectors $\ket{\psi}$ in ${\cal H}$, or `density matrices' (possibly mixed states), i.e. unit trace non-negative operators $\rho$ over ${\cal H}$. Evolutions can be prescribed by unitary operators $U$ over ${\cal H}$, taking $\ket{\psi}$ into $U\ket{\psi}$, or alternatively $\rho$ into $U\rho U^\dagger$.
\section{Renamings, Observables and Evolution}
\label{sec:observables}
We now proceed to treat renamings as a symmetry group of our quantum state space. In the previous section, the definition of `renamings' (graph isomorphism) was inlined inside Definition \ref{def:pointedmodulo} of anonymous graphs. Having argued that we should not work at the level of name invariant graphs, but at the level of (name variant) graphs, renaming invariance remains to be enforced. First, we define renamings as a standalone notion acting on the state space $\cal{H}$ of graphs:
\begin{definition}[Renaming]\label{def:graph renaming}
Consider $R$ an injective function from ${\cal V}$ to ${\cal V}$.
Renamings act on elements of ${\cal G}$ by renaming each vertex, and are extended to act on ${\cal H}$ by linearity, i.e. $R\ket{G}=\ket{RG}$ and $\bra{G}R^\dagger=\bra{RG}$.
\end{definition}
\medskip
\noindent {\em Observables.} Physically relevant observables must be name-invariant, so that probabilities or expected values given by the Born rule are unaltered under renaming. Thus, we must demand that {\em global} observables satisfy
$$
\mathrm{Tr}\left(O R\ket{G}\!\bra{G}R^\dagger \right)=\mathrm{Tr}\left(R^\dagger O R \ket{G}\!\bra{G} \right)=\mathrm{Tr}\left(O \ket{G}\!\bra{G} \right)
$$
which holds for all $G$ if and only if $[R,O]=0$, where $[\cdot,\cdot]$ is the commutator of operators on $\mathcal{H}$. However, a physically relevant {\em local} observable $O(u)$ may be defined at a given name or `location' $u$, in which case its Born rule does depend upon $u$. For instance, we may be interested in knowing `the temperature at $u$'. Renaming $u$ to $v$ should not matter. Thus, we refine our requirement to
$$
\mathrm{Tr}\left(O(u)\ket{G}\!\bra{G}\right)=\mathrm{Tr}\left(O(R(u))R\ket{G}\!\bra{G}R^\dagger\right).
$$
We are then led to the following definition.
\begin{definition}[Renaming invariance]
An operator
$O(u)$ is said to be renaming invariant if and only if for all $G\in \cal G$ and for all renamings $R$,
$O(R(u))R=R O(u)$.
\end{definition}
This generalizes to $n$-point observables: $u$ can be understood as a list of nodes $u_1,\ldots,u_n$ in the above definition.
With the above definition we are effectively forbidding `observing' the names or `graph coordinates'. For instance, say that nodes were numbered by a counter $i$. Measuring $O(i)=i\mathbf{I}$ at $i$, with $\mathbf{I}$ the identity operation, would read out the coordinate $i$. This is not an observable, as it is not name-invariant, which is seen by taking $j=R(i)\neq i$:
$$ R O(i) R^\dagger = R i \mathbf{I} R^\dagger = i \mathbf{I} \neq j\mathbf{I}=O(j)=O(R(i)).$$
An example of a valid local observable is one which reads out the ratio between the number of second neighbours and the number of first neighbours of a node $u$---this is often thought of as a discrete analogue of the Ricci scalar curvature for graphs. An example of a valid global observable is the total number of links and nodes of the graph.
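The invariance condition is easy to check for diagonal local observables. Reusing an adjacency-dictionary encoding of graphs (our own, illustrative encoding), the value of a neighbour-counting observable at $R(u)$ on the renamed graph equals its value at $u$ on the original graph, whereas a name read-out fails the condition:

```python
def rename(gamma, R):
    """Apply an injective renaming R (a dict on node names) to an
    adjacency function encoded as a dict (node, port) -> (node, port)."""
    return {(R[u], a): (R[v], b) for (u, a), (v, b) in gamma.items()}

def degree(u, gamma):
    """Diagonal local observable: number of occupied ports at node u."""
    return sum(1 for (v, a) in gamma if v == u)

# A three-node line graph and a renaming.
gamma = {('u', 'b'): ('v', 'a'), ('v', 'a'): ('u', 'b'),
         ('v', 'b'): ('w', 'a'), ('w', 'a'): ('v', 'b')}
R = {'u': 'x', 'v': 'y', 'w': 'z'}

# Renaming invariance, O(R(u))R = R O(u), for a diagonal observable:
assert degree(R['v'], rename(gamma, R)) == degree('v', gamma)

# A name read-out is not renaming invariant: its value changes under R.
name_readout = lambda u: u
assert name_readout(R['v']) != name_readout('v')
```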
\medskip
\noindent {\em Evolutions.} Physically relevant evolutions need to be insensitive to the names of the vertices. Thus, we must demand that any global evolution $U$ be renaming invariant in the following sense: for any renaming $R$, we must have $UR = RU$. Similarly, a local evolution will satisfy $U(R(u))R = RU(u)$. Examples of valid evolutions were given in Sec. \ref{sec:signalling} (over named graphs). We thus posit:
\begin{postulate}
Physically relevant observables over quantum superpositions of graphs are renaming invariant operators over ${\cal H}$.
\end{postulate}
A keen reader will ask whether this last postulate is compatible with the possibility that an evolution may create/destroy nodes. There is indeed a subtlety here. One of the authors \cite{ArrighiCGD,ArrighiRCGD} has shown that in the classical, reversible setting, and when using straightforward naming conventions for the nodes, renaming invariance enforces node preservation. However, node creation/destruction becomes possible again when we adopt slightly more elaborate naming schemes, as shown by two of the authors in \cite{ArrighiDestruction}.
\section{Relevance for Quantum Gravity}
\label{sec:QG}
Diffeomorphism invariance is the central symmetry underlying general relativity. Its physical content is to ensure that predictions of the theory do not depend on the choice of a coordinate system. This is because (smooth) changes of coordinates are in a one--to--one correspondence with diffeomorphisms. Because of this mathematical equivalence, changes of coordinates are called `passive diffeomorphisms' while the more abstract notion of a diffeomorphism as a map on the manifold is called an `active diffeomorphism'. Similarly, graph renamings can be thought of as the `passive' way of describing graph isomorphisms.
Diffeomorphisms are a primitive concept defined already at the pre-geometrical level of a manifold, before we consider the metric tensor that describes the spacetime or matter fields. In this work we identified renamings of graphs as a discrete analogue to coordinate changes on a manifold. The, now discrete, pre-geometrical space is the graph. The assignment of names to nodes is the assignment of `coordinates' to the set of points of this space, the graph nodes. Links are to be understood as an adjacency relation in the topological, pre-geometrical, sense.
Let us discuss our findings in the context of Loop Quantum Gravity (LQG). In this well-developed candidate theory of quantum gravity, a central result is that the kinematical state space decomposes into Hilbert spaces, each corresponding to a graph, spanned by a certain basis of colourings of this graph called the spin-network states. The graph can be understood as dual to a `triangulation' of three-dimensional space with quantum tetrahedra (see for instance \cite{PhysRevD.83.044035}). This is a quantum geometry in the sense that geometrical observables such as areas and angles do not commute and are thus undetermined, satisfying uncertainty relations.
In the literature, we find two ways to enforce spatial diffeomorphism invariance at the quantum level in this program. One way \cite{rovelli_vidotto_2014} is to embed the graphs in a manifold, such that links become curves on the manifold, and then take the equivalence class resulting from acting with spatial diffeomorphisms on the embedded graph. This is a tedious procedure leading to a number of complications such as the creation of knots. Another strategy \cite{thiemann_2007}, at odds with the former approach, is to work directly at the level of non-embedded graphs, also called `abstract graphs'. The latter method is claimed to completely do away with the need to impose the (spatial) diffeomorphism invariance constraint at the quantum level, as there is no embedding manifold to begin with. This is a minimalistic and much simpler point of view.
Our analysis suggests that both points of view are partly correct and partly misplaced. In the former approach, where the graph is embedded in a manifold, it seems superfluous to consider an additional continuous background space if the theory is to be based on graphs: graphs already serve the role of a natively discrete topological ambient space on which fields can then be defined. In the latter approach, which employs `abstract' or non-embedded graphs, as we do here, it is misplaced to claim that the invariance under changes of coordinates, a central symmetry of the classical theory, has disappeared altogether on the grounds that an embedding manifold is not present. Whilst the embedding continuous topological space is appropriately dropped, on the discrete topological structure that remains---the graph---there still remains the possibility to rename its nodes. Thus, in the context of LQG, we may introduce renaming invariance as implementing spatial diffeomorphism invariance at the quantum level.
The above relates to the description of a superposition of spacetimes as follows. We have seen that the names used in the definition of graphs play a crucial role in aligning superpositions of graphs. In LQG, a superposition of (quantum) spacetimes is represented as a superposition of appropriately coloured graphs. The conceptually simplest case is to consider two coloured graph states $\ket{G}$ and $\ket{H}$, each corresponding to a distinct semiclassical spatial geometry, so that their sum is a superposition of two `wavepackets' of geometry (see for instance \cite{phdM} for an introduction), each peaked on the 3-geometry of a spacelike surface. Consider then the state
\begin{equation}
\ket{G}+\ket{H}
\end{equation}
and let us momentarily consider embedding the two graphs $G$ and $H$ in a three-dimensional Riemannian manifold. We consider a node $u$ which exists both in $G$ and $H$. The graph embedding can be defined in a common coordinate system $(x^\alpha)$. There are two different metric fields $g(x^\alpha)$ and $h(x^\alpha)$ defined on the manifold, on which the state of each graph is correspondingly peaked. In particular, the colouring of the node $u$ will be different in the two graphs. In the manifold, the node has coordinates $x^\alpha_u$. A diffeomorphism $\phi$ will change these coordinates to $\phi^* x^\alpha_u$. This is simply a change of name for the node. Of course, diffeomorphisms would give a continuous range of possible names, making the use of real numbers necessary; working in the discrete, a countable range of names is enough. The key point is that the induced renaming is the same on both $G$ and $H$. From Definition 5, it acts on all branches of the superposition, as in
\begin{equation}
R \ket{G}+ R \ket{H}.
\end{equation}
This is the non-trivial content of renaming invariance we have seen above in Section \ref{sec:signalling}. The renaming $R$ acts as a sort of `quantum diffeomorphism', because it is acting on a superposition of graphs. It preserves the `alignment' of the superposition, regardless of whether a colouring is present or not. Having control of this, we may proceed to colour the graph appropriately with discrete fields that admit a physical interpretation, as above, and describe interesting physics such as a superposition of spacetimes. That is, in the example above, it will remain the case under arbitrary renamings that the renamed node $u$ carries the same name in the two graph states; thus, the two values of the geometrical data captured in $G$ and $H$ at that node will remain superposed but aligned. We stress once again that embedding the graphs in an ambient manifold is a superfluous procedure that was employed here for the purpose of demonstration. It is sufficient to work at the level of graphs, recognising that they carry names on their nodes by their very definition. Then, renaming invariance naturally arises as a native discrete analogue of diffeomorphism invariance, which can be carried through at the quantum level.
\bigskip
\section{Conclusions}
{\em Summary of contributions.} We provided a robust notion of quantum superposition of graphs, to serve as the state space for various applications, ranging from distributed models of Quantum Computing and the `quantum internet' to Quantum Gravity.
The main difficulty lay in the treatment of names. While node names are part of the definition of graphs, they need to be done away with. We have shown that getting rid of them by working at the level of `anonymous' graphs is problematic, as it leads to instantaneous signalling. We pointed out that the underlying reason is that names play an essential role in keeping quantum superpositions of graphs `aligned' with respect to one another.
We then proceeded to define renamings as a symmetry of the quantum state space and postulated that observables and evolutions satisfy renaming invariance, i.e. that they commute with renamings. We pointed out that renamings on graphs are the native discrete analogue of passive diffeomorphisms on a manifold---both relabel the points of a topological space. In this sense, we followed here the standard prescription of General Relativity: use coordinates to define the physical situation being studied, and then demand that statements of physical relevance be invariant under changes of coordinates. Having furthermore straightforwardly extended graph renamings to the quantum level, we suggest them as a strategy for implementing discrete diffeomorphism invariance at the quantum level.
{\em Some perspectives.} In the continuous setting, passive diffeomorphisms (aka differentiable changes of coordinates) and active diffeomorphisms (aka differentiable displacements of the points in space) coincide: they are but two sides of the same coin. In the discrete this is by no means obvious: whilst the former amount to graph renamings, as we have seen, the latter have often been interpreted as changing the shape of the graph, to other triangulations of the same underlying manifold. Discrete analogues of active diffeomorphisms, and the Hamiltonian constraints they generate, were studied e.g. in \cite{dittrich2009diffeomorphism,hohn2015canonical,hohn2014quantization}. The invariances that were found were not exact, except in flat space. Which of the passive or active pictures is physically relevant in the discrete? Can one build on the exact renaming invariance described here, as well as on the passive/active analogy of the continuum, to work towards exact retriangulation invariance?
Whilst this paper focused on graphs (understood as a basis for 3-dimensional quantum space in LQG), the same line of argument could be carried through on `higher-dimensional' graphs, e.g. 2-dimensional cellular complexes, which correspond to a 4-dimensional quantum spacetime in LQG. Renaming invariance will then serve as a discrete analogue of full, 4-dimensional diffeomorphism invariance.
Having a robust notion of quantum superposition of graphs allows us to define quantum observables even at the pre-geometric level, before any spacetime geometry emerges or a matter field is considered. The von Neumann entropy is one such observable, implying that information can be stored at the pre-geometric level. This potentially vast storage space could be holding the key to solving the black hole information loss paradox, as suggested in \cite{Perez:2014xca}. Quantum superpositions of graphs may also provide a discrete, reference frame-independent formalism for the recent continuous theories of quantum coordinate systems \cite{Hardy:2019cef} and quantum reference frames \cite{Castro-Ruiz:2019nnl, Giacomini:2017zju}. These are perspectives from theoretical physics. A perspective from quantum computing is to encode indefinite causal orders \cite{OreshkovQuantumOrder} within quantum superpositions of directed acyclic graphs.
\begin{acknowledgments}
The authors thank \c{C}aslav Brukner, Lucien Hardy, Aristotelis Panagiotopoulos, Alejandro Perez and Carlo Rovelli, for insights and discussions on this work. We acknowledge support from the Templeton Foundation, The Quantum Information Structure of Spacetime (QISS) Project (qiss.fr). The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of the John Templeton Foundation (Grant Agreement No. 61466).
This work is supported by the National Science Foundation of China through Grant No. 11675136 and by the Hong Kong Research Grant Council through Grant No. 17300918, and by the Croucher Foundation.
\end{acknowledgments}
\bibliographystyle{JHEPs}
\section{Summary and concluding remarks}
\label{conclusions}
In the present paper we devised an analysis method to uniformly investigate a set of 12 OCs, located in the Sagittarius spiral arm. We took advantage of public Washington $CT_1$ photometric data combined with GAIA DR2 astrometry. The use of GAIA data proved to be an essential tool, since these objects are projected against dense stellar fields, which results in severe contamination in their CMDs.
Once we estimated the structural parameters from King profile fitting to the OCs' RDPs, we searched for statistically significant concentrations of data in the 3D astrometric space, in order to assign membership likelihoods. PARSEC isochrones covering a wide range of astrophysical parameter values were automatically fitted to the photometric data of high-membership stars and the basic astrophysical parameters ($(m-M)_{0}$, $E(B-V)$, log $t$ and $[Fe/H]$) were derived. Then, OCs' mass functions were built and their total masses were estimated.
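For reference, the structural-parameter step can be sketched as follows: a minimal Python version of an empirical King (1962)-type radial density profile on top of a constant field-star background (the function and parameter names are illustrative, not part of our pipeline):

```python
import math

def king_profile(r, n0, r_c, r_t, n_bg):
    # Empirical King (1962) surface density: a central component with core
    # radius r_c, truncated at the tidal radius r_t, on top of a constant
    # field-star background n_bg.
    if r >= r_t:
        return n_bg
    term = (1.0 / math.sqrt(1.0 + (r / r_c) ** 2)
            - 1.0 / math.sqrt(1.0 + (r_t / r_c) ** 2))
    return n_bg + n0 * term ** 2

def concentration(r_c, r_t):
    # Concentration parameter c = log10(r_t / r_c).
    return math.log10(r_t / r_c)
```

The structural parameters $(n_0, r_c, r_t, n_{bg})$ follow from a least-squares fit of this profile to the observed RDP (e.g. with \texttt{scipy.optimize.curve\_fit}); the concentration parameter is then $c=\log(r_t/r_c)$.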
We confirmed BH\,150 as a genuine OC, as judged from the outcomes of our decontamination procedure showing real concentrations of stars in the astrometric space, which allowed the identification of clearer evolutive sequences in the object's CMD. Its physical nature had been under debate, because purely photometric analyses, or photometric data combined with lower-quality astrometry, could not unambiguously disentangle the OC from field populations.
The studied OCs have similar Galactocentric distances, and hence are affected by nearly the
same Galactic gravitational potential. For this reason, we speculate that any
difference in the OC dynamical stages is caused by differences in the internal dynamical evolution.
Based on this assumption, we split the OC sample in three groups according to their $r_{h}/R_J$ ratios, which turned out to be a good indicator of the relative OC dynamical evolution stages. Particularly, we found that the larger the $r_{h}/R_J$ ratio, the less dynamically evolved the OC.
The set of investigated OCs are not in an advanced stage of dynamical evolution, since their
concentration parameters span the lower part of the $c$ regime ($c\lesssim0.75$). In general, the
studied OCs present $c$ values which are among the smallest for OCs of similar core radii.
Their tidal radii reveal that they are relatively small OCs as compared to the sizes of previously
studied OCs. We verified a general trend in which the higher the concentration parameter, the higher the age/$t_{\textrm{rh}}$ ratio. The relatively more dynamically evolved OCs have apparently
experienced more significant low-mass star loss.
\section{Data collection and reduction}
\label{data_collection_reduction}
\begin{table*}
\small
\caption{Observations log of the studied OCs.}
\label{log_observations}
\begin{tabular}{lccccccc}
\hline
Cluster & $\rmn{RA}$ & $\rmn{DEC}$ & $\ell$ & $b$ & Filter & Exposure & Airmass \\
& ($\rmn{h}$:$\rmn{m}$:$\rmn{s}$) & ($\degr$:$\arcmin$:$\arcsec$) & ($^{\circ}$) & ($^{\circ}$) & & (s) & \\
\hline
Collinder\,258 & 12:27:16 & -60:46:42 & 299.9843 & 01.9550 & $C$ & 15,150 & 1.2,1.3 \\
& & & & & $R$ & 2,20 & 1.3,1.3 \\
NGC\,6756 & 19:08:45 & 04:43:01 & 39.1046 & -01.6865 & $C$ & 30,90,90,900 & 1.2,1.2,1.2,1.2 \\
& & & & & $R$ & 20,20,150,150 & 1.2,1.2,1.2,1.2 \\
Czernik\,37 & 17:53:12 & -27:22:00 & 2.2053 & -0.6255 & $C$ & 30,450 & 1.0,1.0 \\
& & & & & $R$ & 5,45 & 1.0,1.0 \\
NGC\,5381 & 14:00:40 & -59:36:18 & 311.5940 & 02.0975 & $C$ & 90,120,600,600 & 1.1,1.1,1.2,1.2 \\
& & & & & $R$ & 30,30,120,120 & 1.2,1.2,1.2,1.2 \\
Trumpler\,25 & 17:24:24 & -39:01:01 & 349.1460 & -01.7594 & $C$ & 30,300 & 1.0,1.0 \\
& & & & & $R$ & 5,300 & 1.0,1.0 \\
BH\,150 & 13:38:04 & -63:20:42 & 308.1334 & -0.9451 & $C$ & 60,60,600,600 & 1.2,1.2,1.2,1.2 \\
& & & & & $R$ & 5,15,15,90,90 & 1.2,1.2,1.2,1.2,1.2 \\
Ruprecht\,111 & 14:36:00 & -59:58:48 & 315.6658 & 0.2811 & $C$ & 30,45,450 & 1.2,1.2,1.2 \\
& & & & & $R$ & 15,15 & 1.2,1.2 \\
Ruprecht\,102 & 12:13:34 & -62:43:48 & 298.6088 & -0.1766 & $C$ & 45,450 & 1.2,1.2 \\
& & & & & $R$ & 7,45 & 1.2,1.2 \\
NGC\,6249 & 16:57:36 & -44:49:00 & 341.5242 & -01.1772 & $C$ & 30,30 & 1.0,1.0 \\
& & & & & $R$ & 5,5 & 1.0,1.0 \\
Basel\,5 & 17:52:24 & -30:06:00 & 359.7616 & -01.8636 & $C$ & 30,30,600,600 & 1.0,1.0,1.0,1.0 \\
& & & & & $R$ &15,15,120,120 & 1.0,1.0,1.0,1.0 \\
Ruprecht\,97 & 11:57:28 & -62:43:00 & 296.7920 & -0.4901 & $C$ & 60,90,900 & 1.3,1.3,1.3 \\
& & & & & $R$ & 60,180,180 & 1.3,1.3,1.3 \\
ESO\,129-SC32 & 11:44:11 & -61:03:29 & 294.8851 & 0.7587 & $C$ & 60,60,450,450 & 1.3,1.3,1.3,1.3 \\
& & & & & $R$ & 10,10,60,120 & 1.3,1.3,1.3,1.3 \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_Cr258.jpg}
\includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_NGC6756.jpg}
\includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_Czernik37.jpg}
\includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_NGC5381.jpg}
\includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_Trumpler25.jpg}
\includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_BH150.jpg}
\includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_Ruprecht111.jpg}
\includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_Ruprecht102.jpg}
\includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_NGC6249.jpg}
\includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_Basel5.jpg}
\includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_Ruprecht97.jpg}
\includegraphics[width=0.325\textwidth]{DSS2_nearIR_14x14_arcmin_ESO129-32.jpg}
\caption{ DSS2 near-IR images of the OCs (from top-left to bottom-right) Collinder\,258, NGC\,6756, Czernik\,37, NGC\,5381, Trumpler\,25, BH\,150, Ruprecht\,111, Ruprecht\,102, NGC\,6249, Basel\,5, Ruprecht\,97 and ESO\,129-SC32. Image sizes are $14^{\arcmin}\times14^{\arcmin}$. North is up and East to the left. }
\label{images_clusters_parte1}
\end{center}
\end{figure*}
Images taken in the $R$ (Kron-Cousins) and $C$ (Washington) filters were downloaded from the National Optical Astronomy Observatory (NOAO) public archive\footnote[1]{http://www.noao.edu/sdm/archives.php}. Observations were carried out during the nights of 2008\,May\,08 to 2008\,May\,12 with the Tek2K CCD imager (scale of 0.4 arcsec\,pixel$^{-1}$, which provides a $13.6\times13.6$\,arcmin field of view) attached to the 0.9-m telescope at the Cerro Tololo Inter-American Observatory (CTIO, Chile; programme no. 2008A-0001, PI: Clari\'a). The observations log is shown in Table \ref{log_observations}. For illustration, images of the 12 selected OCs are shown in Figure \ref{images_clusters_parte1}.
Calibration (bias, dome and sky flats in both $C$ and $R$ filters) and standard star field images (SA\,101, SA\,107, SA\,110; \citeauthor{Landolt:1992}\,\,\citeyear{Landolt:1992}; \citeauthor{Geisler:1996}\,\,\citeyear{Geisler:1996}) were also downloaded together with the images for the investigated OCs. The data reduction steps followed the standard procedures employed for optical CCD photometry: overscan and bias subtraction and division by normalized flat fields. All procedures were implemented with the use of the {\fontfamily{ptm}\selectfont QUADRED} package in {\fontfamily{ptm}\selectfont IRAF}\footnote[2]{{\fontfamily{ptm}\selectfont
IRAF} is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation.}. Images were taken with four amplifiers and were properly mosaicked into a single image extension. Bad pixel masks were also built and defective regions were corrected via linear interpolation performed on the images.
The photometry was performed on the reduced images using a point spread function (PSF)-fitting algorithm. We used a modified version of the STARFINDER code \citep{Diolaiti:2000}, which draws empirical PSFs from pre-selected stars on the images (with high signal-to-noise ratios and relatively isolated from nearby sources) and cross-correlates them with every point source detected above a defined threshold. The main modification consisted of automating the code, minimising user intervention during the choice of proper sources for PSF modelling. This allowed us to deal with a relatively large number of images taken in crowded fields. In this step, we only kept in the photometric catalogues those stars for which the correlation coefficients between the measured profile and the modelled PSF were greater than 0.7. This criterion minimised the introduction of spurious detections and, at the same time, allowed the detection of faint stars contaminated by the background noise.
Astrometric solutions were computed for the whole set of images by mapping the positions of stars in each CCD frame on to the corresponding coordinates as given in the GAIA DR2 catalogue for the observed region. A set of linear equations was calibrated, which allowed the transformation between the CCD reference system and the equatorial system (see \citeauthor{Caetano:2015}\,\,\citeyear{Caetano:2015} and references therein for further details) with an astrometric precision better than $\sim0.1\,$mas. Finally, our pipeline builds a final master table (containing the instrumental $c$ and $r$ magnitudes) by registering the fainter stars detected in the longer exposure frames and successively including the brighter sources identified in shorter exposures, thus avoiding the inclusion of saturated objects.
Nearly 70 standard star magnitudes per filter per night were measured in order to calibrate the transformation equations between the instrumental and standard systems. The standard star field SA\,101 was observed repeatedly over a wide range of airmass ($\sim1.1-2.5$), which allowed the determination of the extinction coefficients. We used the following calibration equations:
\begin{align}
c\, & =\,c_{1} + C + c_{2}\times X_{C} + c_{3}\times(C-T_{1}), \\
r\, & =\,t_{11} + T_{1} + t_{12}\times X_{T_{1}} + t_{13}\times(C-T_{1}),
\end{align}
\noindent
where $c,r$ represent instrumental magnitudes and $C,T_1$ are the standard ones; $c_1$ and $t_{11}$ are the zero-point coefficients, $c_2$ and $t_{12}$ are the extinction coefficients, and $c_3$ and $t_{13}$ are the colour terms. The airmasses in both filters are represented as $X_C$ and $X_{T_{1}}$. The coefficients were obtained via multiple linear regression as implemented in the {\fontfamily{ptm}\selectfont IRAF} task {\fontfamily{ptm}\selectfont FITPARAMS}. The results are presented in Table \ref{results_FITPARAMS}. Note that we measured instrumental $r$ magnitudes rather than $t_1$ ones, since \cite{Geisler:1996} showed that the $R$ filter is an excellent substitute for the $T_1$ filter owing to its increased transmission at all wavelengths.
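Since the two calibration equations are linear in $C$ and $T_1$, inverting them amounts to solving a $2\times2$ linear system per star. A minimal sketch using the mean coefficients of Table \ref{results_FITPARAMS} (the function and variable names are illustrative, not those of our pipeline):

```python
# Mean fitted coefficients for this data set
C1, C2, C3 = 3.884, 0.282, -0.173      # C filter: zero point, extinction, colour term
T11, T12, T13 = 3.306, 0.089, -0.041   # T1 filter

def to_standard(c_inst, r_inst, X_C, X_T1):
    # Invert  c = C1 + C + C2*X_C  + C3*(C - T1)
    #         r = T11 + T1 + T12*X_T1 + T13*(C - T1)
    # for the standard magnitudes C and T1 (Cramer's rule on a 2x2 system).
    b1 = c_inst - C1 - C2 * X_C
    b2 = r_inst - T11 - T12 * X_T1
    det = (1.0 + C3) * (1.0 - T13) + C3 * T13
    C = ((1.0 - T13) * b1 + C3 * b2) / det
    T1 = ((1.0 + C3) * b2 - T13 * b1) / det
    return C, T1
```

Applying the forward equations and then this inversion recovers the input standard magnitudes exactly, which is the consistency check performed per star.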
\begin{table}
\begin{minipage}{85mm}
\caption{Mean values of the fitted coefficients and residuals for the presently calibrated $CT_1$ photometric data set.}
\label{results_FITPARAMS}
\begin{tabular}{lcccc}
\hline
Filter & Zero & Extinction & Colour & Residual \\
& point & coefficient & term & (mag) \\
\hline
$C$ & 3.884$\pm$0.023 & 0.282$\pm$0.002 & -0.173$\pm$0.011 & 0.011 \\
$T_{1}$ & 3.306$\pm$0.024 & 0.089$\pm$0.002 & -0.041$\pm$0.004 & 0.010 \\
\hline
\end{tabular}
\end{minipage}
\end{table}
\begin{table}
\tiny
\begin{minipage}{85mm}
\caption{Stars in the field of NGC\,5381: Identifiers, coordinates, magnitudes and photometric uncertainties.}
\label{excerpt_NGC5381}
\begin{tabular}{ccccc}
\hline
Star ID & $\rmn{RA}$ & $\rmn{DEC}$ & $C$ & $T_{1}$ \\
& ($^{\circ}$) & ($^{\circ}$) & (mag) & (mag) \\
\hline
1 & 210.1042480 & -59.5857430 & 13.429$\pm$0.001 & 12.631$\pm$0.001 \\
2 & 210.1034546 & -59.5751610 & 13.619$\pm$0.001 & 13.024$\pm$0.001 \\
3 & 210.1103973 & -59.5492859 & 14.035$\pm$0.001 & 12.383$\pm$0.002 \\
4 & 210.0230560 & -59.6616936 & 16.398$\pm$0.007 & 14.996$\pm$0.004 \\
5 & 210.3296967 & -59.5831680 & 17.027$\pm$0.010 & 15.274$\pm$0.006 \\
$-$ & $-$ & $-$ & $-$ & $-$ \\
\hline
\end{tabular}
\end{minipage}
\end{table}
Finally, the above equations were inverted in order to convert instrumental magnitudes to the standard system and obtain photometric uncertainties properly propagated into the final magnitudes, according to the STARFINDER algorithm. This step was implemented via the {\fontfamily{ptm}\selectfont IRAF INVERTFIT} task. For each OC, the photometric catalogue consists of an identifier for each star (ID), equatorial coordinates ($\rmn{RA}$ and $\rmn{DEC}$), magnitudes in filters $C$ and $T_1$ with their respective photometric uncertainties. An excerpt of the final table for the OC NGC\,5381 is presented in Table \ref{excerpt_NGC5381}. Our typical uncertainties are illustrated in Figure \ref{photerrors_C_T1_NGC5381}. We also derived the completeness level of our photometry at different magnitudes. To accomplish this, we performed artificial star tests on our images. The detailed procedure and results are described in section 2 of \cite{Angelo:2018}.
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{phot_errors_NGC5381.pdf}
\caption{Photometric errors as a function of the $T_1$ mag for stars in the field of NGC\,5381, which
are typical in our photometric catalogues.}
\label{photerrors_C_T1_NGC5381}
\end{center}
\end{figure}
\section{Discussion}
\label{discussion}
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{Rgal_versus_age.png}
\caption{ Panel (a): Galactocentric distance $R_{\textrm{G}}\,$ versus log($t$/yr) for the studied OC sample. Symbols' colours were assigned according to the different $r_{h}/R_J$
ranges described in the text. Red symbols: Collinder\,258 (\textcolor{red}{$\CIRCLE$}), NGC\,6756 (\textcolor{red}{$\blacktriangle$}), Czernik\,37 (\textcolor{red}{$\blacklozenge$}) and NGC\,6249 (\textcolor{red}{$\blacksquare$}); black symbols: Trumpler\,25 ($\CIRCLE$), BH\,150 ($\blacktriangle$), Ruprecht\,111 ($\blacksquare$), Basel\,5 ($\blacklozenge$); blue symbols: NGC\,5381 (\textcolor{blue}{$\CIRCLE$}), Ruprecht\,102 (\textcolor{blue}{$\blacktriangle$}), Ruprecht\,97 (\textcolor{blue}{$\blacklozenge$}) and ESO\,129-SC32 (\textcolor{blue}{$\blacksquare$}). Panel (b): Radial metallicity $[Fe/H]$ distribution. The continuous line is the relationship derived by Netopil et al. (2016). The dashed lines represent its upper and lower limits. Panel (c): Distribution of the OCs projected on to the Galactic plane. The spiral pattern was taken from Vall\'ee\,(2008). Panel (d): Distribution of OCs perpendicular to the Galactic plane (horizontal line). The grey dots represent OCs taken from Kharchenko et al. (2013) and Dias et al. (2002).}
\label{Rgal_versus_age}
\end{center}
\end{figure*}
The OC sample studied in this work presents nearly similar ages and Galactocentric distances, with the sole exception of BH\,150 (see Fig.~\ref{Rgal_versus_age}, panel (a)). The OCs are located close to the Galactic plane ($\vert Z\vert\leq75\,$pc) and are part of the Sagittarius arm (see panels (c) and (d)). Their colour excesses $E(B-V)$ vary from $\sim0.2$ up to $\sim1.6$ (Table \ref{astroph_params}), reflecting, as expected, the different amounts of dust and gas distributed along their lines of sight.
As shown in Fig.~\ref{Rgal_versus_age}, panel (b), they span metallicities from slightly sub-solar up to moderately more metal-rich than the Sun, most of them being of solar metal content. In this panel we superimposed the $[Fe/H]$-$R_{G}$ relation (continuous line; slope $-0.086\,$dex\,kpc$^{-1}$) as derived in \citeauthor{Netopil:2016}\,\,(\citeyear{Netopil:2016}; their table 3) by fitting the radial metallicity distribution for a set of 88 OCs in the range $R_G<12\,$kpc. The upper and lower limits of this relation, as derived from the quoted parameter uncertainties, are represented by dashed lines. OCs from the samples of \cite{Kharchenko:2013} and \cite{Dias:2002} are also shown (grey filled circles) for comparison purposes. Considering the uncertainties, our cluster sample -- except possibly NGC\,5381 -- agrees well with the results of \cite{Netopil:2016}.
NGC\,5381 ($R_G=6.8\pm0.5\,$kpc) departs from the expected relation towards lower metallicity. Despite this, other clusters also show relatively low metallicity values. Indeed, the metallicity distribution derived by \cite{Netopil:2016} (their figure 10) shows azimuthal variations in $[Fe/H]$ from $\sim-0.2$ to $\sim+0.2$ for $R_G$ in the range $\sim6.3-7.3\,$kpc.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{plots_radius_age_mphot_parte1.png}
\caption{ Relationships between different OC properties. Symbols' colours are as
in Fig.~\ref{Rgal_versus_age}. Small dots represent OCs from the literature.}
\label{plots_radius_age_mphot_parte1}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{plots_radius_age_mphot_parte2.png}
\caption{ Relationships between different OC astrophysical parameters.
Symbols and colours are as in Fig.~\ref{Rgal_versus_age}. Small dots represent
Joshi et al.'s (2016) OC sample, while the dashed line represents their
derived relationship.}
\label{plots_radius_age_mphot_parte2}
\end{center}
\end{figure*}
In the subsequent analysis, we employ parameters associated with the dynamical evolution, namely mass, age, and core, half-light, tidal and Jacobi radii, in order to characterise the dynamical stages of the investigated sample. Panel (a) of Fig.~\ref{plots_radius_age_mphot_parte1} shows the $r_{h}/R_J$ versus $R_G$ plane, which gives us some hints on the dynamical evolution of the investigated OCs \citep{Baumgardt:2010}. Because of the similar $R_G$ values, the Galactic gravitational potential is
not expected to produce differential tidal effects \citep{Lamers:2005,Piatti:2018}. Therefore, we interpret any difference in the OC dynamical stages as being caused mainly by internal dynamical evolution. Consequently, the larger the $r_{h}/R_J$ ratio in panel (a), the relatively less dynamically relaxed the OC.
In the light of this correlation, we split our OC sample in three groups, distinguished by red ($r_h/R_J\lesssim0.15$), black ($0.15\lesssim r_h/R_J\lesssim0.21$) and blue symbols ($r_h/R_J\gtrsim0.21$), respectively, as also indicated in Fig.~\ref{Rgal_versus_age}. OCs in the blue group (NGC\,5381, Ruprecht\,102, Ruprecht\,97 and ESO\,129-SC32) are relatively less evolved; the black ones (Trumpler\,25, Ruprecht\,111 and Basel\,5; BH\,150 included) are at a relatively intermediate stage of dynamical evolution, while the red ones (Collinder\,258, NGC\,6756, Czernik\,37 and NGC\,6249) are the most advanced in dynamical two-body relaxation. It is noticeable that all investigated OCs present $r_h/R_J<0.5$, which makes these stellar aggregates stable against rapid dissolution. Some studies (e.g., \citeauthor{Portegies-Zwart:2010}\,\,\citeyear{Portegies-Zwart:2010} and references therein) have extensively investigated the combined effects of mass loss by stellar evolution and dynamical evolution in the tidal field of the host galaxy, and showed that, when clusters expand to a radius of $\sim$0.5\,$R_J$, they lose equilibrium and most of their stars overflow $R_J$.
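The grouping adopted above reduces to a simple threshold rule on the $r_h/R_J$ ratio, sketched below (the thresholds are the ones adopted in this work; the group labels and the $\sim0.5$ instability limit of Portegies Zwart et al. 2010 are descriptive shorthands, not standard nomenclature):

```python
def dynamical_group(r_h, R_J):
    # Relative internal dynamical stage of an OC from its r_h/R_J ratio;
    # a larger ratio indicates a less dynamically evolved cluster.
    ratio = r_h / R_J
    if ratio >= 0.5:
        return "overflowing"   # beyond the ~0.5 R_J stability limit
    if ratio > 0.21:
        return "less evolved"  # blue group
    if ratio > 0.15:
        return "intermediate"  # black group
    return "more evolved"      # red group
```

For example, a cluster with $r_h/R_J=0.18$ falls in the black (intermediate) group, while one with $r_h/R_J=0.30$ falls in the blue (less evolved) group.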
In panel (b) of Fig.~\ref{plots_radius_age_mphot_parte1} we plot the concentration parameter $c$\,(=log($r_t/r_c$)) as a function of age for our OC sample and compare them with those in the literature. As can be seen, the OCs studied here are within those with smaller
$c$ values for their ages. Likewise,
there is a hint of relatively different degrees of mass segregation (e.g., \citeauthor{de-La-Fuente-Marcos:1997}\,\,\citeyear{de-La-Fuente-Marcos:1997}; \citeauthor{Portegies-Zwart:2001}\,\,\citeyear{Portegies-Zwart:2001}), in the sense that the smaller the $c$ value, the less dynamically evolved the OC. Note that the selected OCs have $r_c$ values that show a trend with $c$ (see panel (c)) following a much tighter relationship than that observed for the vast majority of other known OCs. We can see that the less evolved OCs in our sample present less compact cores. This is an expected trend: as internal relaxation transports energy from the (dynamically) warmer central core to the cooler outer regions, the core contracts as it loses energy \citep{Portegies-Zwart:2010}. Furthermore, the OCs in our sample are relatively small, as judged by their tidal radii ($r_t$, see panel (d)), and their stellar content lies well within their respective Jacobi radii (Fig.~\ref{plots_radius_age_mphot_parte2}, panel (d)).
The classification scheme proposed in Fig.~\ref{plots_radius_age_mphot_parte1}, panel (a), is supported by the results presented in panel (b) of Fig.~\ref{plots_radius_age_mphot_parte2}, where $c$ is plotted as a function of age/$t_{\textrm{rh}}$. As can be seen, those OCs with smaller $c$ values show smaller age/$t_{\textrm{rh}}$ ratios. In this sense, \cite{Vesperini:2009} provided an evolutionary picture from results of $N$-body simulations of star clusters in a tidal field. They showed that two-body relaxation causes star clusters to progressively lose memory of their initial structure (e.g., initial density profile and concentration), while the concentration parameter $c$ increases steadily with time. This overall trend was also verified, although with considerable scatter, by \cite{Piatti:2016} and \cite{Angelo:2018} (figure 14 in both papers), who compared the concentration parameters of a set of investigated Galactic OCs with a sample of 236 OCs analysed homogeneously by \cite{Piskunov:2007}.
Indeed, the more evolved OCs (red symbols) present larger age/$t_{\textrm{rh}}$ ratios compared to the less evolved ones (blue symbols). This is consistent with the overall evaporation scenario, in which the larger the age/$t_{\textrm{rh}}$ ratio, the more depleted the lower mass content. This can be seen in our clusters' mass functions (Fig.~\ref{mass_func_parte1}), where the ones from the red group show systematically higher depletion of their lowest mass bin when compared to those from the blue group. In all cases, the mass function slopes for the higher mass bins do not show noticeable deviations from the linear trends exhibited by the Kroupa and Salpeter IMFs (log\,$\phi(m)\propto-2.3\,$log\,$m$ for $m>0.5\,$M$_{\odot}$). Since our 12 OCs are located at almost the same $R_G$, we expect no differential impact of the tidal field on the clusters' mass function depletion.
As stated by \cite{Bonatto:2004a}, at advanced evolutionary stages OCs will retain only a core, with most of the low-mass stars dispersed into the background. In this sense, it is noticeable that the smaller OCs are the relatively more evolved ones, so that they have had the chance to lose low-mass stars, while the more massive stars have concentrated toward the OC centres. Consequently, their $r_t/R_J$ ratios are smaller than those of relatively less evolved OCs, which have mainly expanded within the Jacobi volume (see panel (d) of Fig.~\ref{plots_radius_age_mphot_parte2}).
Assuming $M_{\textrm{Kroupa}}$ (Table \ref{astroph_params}) as a rough estimate of the initial OC mass, the studied OCs may have lost more than $\sim60$\,percent of their initial mass. Note that Fig.~\ref{mass_func_parte1} shows that the OCs' mass functions depart from the linear trend at different stellar masses, which could be linked with their relatively different dynamical stages, in the sense that the more evolved a system, the higher the mass at which this departure occurs. On the other hand, the estimated OC masses are
within the range of those obtained by \cite{Joshi:2016}, as shown in panel (a) of
Fig.~\ref{plots_radius_age_mphot_parte2}. The dashed line is a linear fit performed by \citeauthor{Joshi:2016}\,\,(\citeyear{Joshi:2016}, their equation 8) to the mean masses within age bins of $\Delta$log($t$/yr)=0.5.
For completeness, we include in panel (c) the OC masses as a function of age/$t_{\textrm{rh}}$, which follows eq. (9). For all investigated clusters, the derived ages are larger than the corresponding $t_{\textrm{rh}}$ ($7\lesssim$ age/$t_{\textrm{rh}}$ $\lesssim164$), which means that they have had enough time to evolve dynamically. This statement holds even if we consider $M_{\textrm{Kroupa}}$ (Table \ref{astroph_params}) to determine $t_{\textrm{rh}}$ (in which case we have $5\lesssim$ age/$t_{\textrm{rh}}$ $\lesssim140$). Fig.~\ref{plots_radius_age_mphot_parte2} shows that the larger the age/$t_{\textrm{rh}}$, the smaller the cluster mass. This is an expected result, since clusters lose stars as their ages surpass their $t_{\textrm{rh}}$ many times.
From the present analysis we advocate that the studied OCs are in different dynamical states. Assuming that these objects have been subject to the same Galactic tidal effects -- they have similar ages (except BH\,150) and Galactocentric distances -- the differences in their dynamical stages tell us about the wide family of OCs formed in the Sagittarius spiral arm.
\section{Method}
\label{method}
\subsection{Structural parameters determination}
\label{center_RDPs_struct_params}
The first step in our analysis was to determine the OCs' central coordinates, simultaneously with
their core ($r_c$) and tidal ($r_t$) radii. In each case, we used
a uniformly spaced square grid (steps of 0.25\,arcmin), centred on the literature coordinates and with full extension equal to $\sim$2$-$4 times the reported limiting radius. These grids typically contained $\sim$200$-$400 square cells. We used each of these cells as a putative centre and built radial density profiles (RDPs) by performing stellar counts in concentric rings with widths varying from 0.50 to 1.50\,arcmin, in steps of 0.25\,arcmin. The background levels were estimated by taking into account
the average of the stellar densities corresponding to the external rings, where the stellar densities fluctuate around a nearly constant value.
The background subtracted RDPs were then fitted using a \cite{King:1962} model:
\begin{equation}
\sigma(r) \propto \left( \frac{1}{\sqrt{1+(r/r_c)^2}} - \frac{1}{\sqrt{1+(r_t/r_c)^2}} \right)^2
.\end{equation}
\noindent
A grid of $r_t$ and $r_c$ values spanning the whole range of radii \citep{Piskunov:2007} was employed and we searched for the values that minimized $\chi^2$. The final centres were taken to be the coordinates that produced the smoothest stellar RDPs and, at the same time, the highest density in the innermost region. This procedure is analogous to that employed by \cite{Bica:2011} in their study of strongly field-contaminated OCs. The best
King's model fits are plotted in Fig.~\ref{RPDs_parte1} with blue lines, while the corresponding $r_t$ and $r_c$ values (converted to pc according to the distance moduli; see Section \ref{members_selection}) are
shown in Table \ref{struct_params}. The determined central coordinates are also shown in the same table. Additionally, we fitted a \cite{Plummer:1911} profile to each RDP, for comparison purposes:
\begin{equation}
\sigma(r) \propto \frac{1}{\left[1+(r/a)^2\right]^2}
.\end{equation}
\noindent
These fits are represented in Fig.~\ref{RPDs_parte1} with red lines. As can be seen, both profiles are nearly indistinguishable in the OCs' inner regions ($r\lesssim r_c$). The parameter $a$ is the Plummer radius, which is related to the half-light radius $r_{\textrm{h}}$ by $r_{\textrm{h}}\sim1.3a$.
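The grid-based $\chi^2$ minimisation of the King profile can be illustrated with a short sketch. The snippet below is not the actual pipeline code: it assumes the King model above with a free normalisation $\sigma_0$ (solved analytically at each grid point, since the model is linear in $\sigma_0$), and the grid steps and error weighting are purely illustrative.

```python
import numpy as np

def king_profile(r, rc, rt, sigma0=1.0):
    """King (1962) surface-density profile (background already subtracted)."""
    shape = (1.0 / np.sqrt(1.0 + (r / rc) ** 2)
             - 1.0 / np.sqrt(1.0 + (rt / rc) ** 2)) ** 2
    return sigma0 * shape

def fit_king(r, sigma, sigma_err, rc_grid, rt_grid):
    """Brute-force chi^2 minimisation over a grid of (rc, rt) pairs.
    For each pair the best normalisation sigma0 follows from weighted
    linear least squares."""
    best = (np.inf, None, None, None)
    w = 1.0 / sigma_err ** 2
    for rc in rc_grid:
        for rt in rt_grid:
            if rt <= rc:            # tidal radius must exceed core radius
                continue
            shape = king_profile(r, rc, rt)
            sigma0 = np.sum(w * shape * sigma) / np.sum(w * shape ** 2)
            chi2 = np.sum(w * (sigma - sigma0 * shape) ** 2)
            if chi2 < best[0]:
                best = (chi2, rc, rt, sigma0)
    return best                      # (chi2_min, rc, rt, sigma0)
```

Applied to each putative centre of the coordinate grid, the $(r_c, r_t)$ pair with the lowest $\chi^2$, together with the smoothness of the resulting RDP, would then drive the selection described above.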
\begin{figure*}
\begin{center}
\parbox[c]{1.0\textwidth}
{
\includegraphics[width=0.333\textwidth]{rprofile_Cr258.pdf}
\includegraphics[width=0.333\textwidth]{rprofile_NGC6756.pdf}
\includegraphics[width=0.333\textwidth]{rprofile_Czernik37.pdf}
\includegraphics[width=0.333\textwidth]{rprofile_NGC5381.pdf}
\includegraphics[width=0.333\textwidth]{rprofile_Trumpler25.pdf}
\includegraphics[width=0.333\textwidth]{rprofile_BH150.pdf}
\includegraphics[width=0.333\textwidth]{rprofile_Ruprecht111.pdf}
\includegraphics[width=0.333\textwidth]{rprofile_Ruprecht102.pdf}
\includegraphics[width=0.333\textwidth]{rprofile_NGC6249.pdf}
\includegraphics[width=0.333\textwidth]{rprofile_Basel5.pdf}
\includegraphics[width=0.333\textwidth]{rprofile_Ruprecht97.pdf}
\includegraphics[width=0.333\textwidth]{rprofile_ESO129-32.pdf}
}
\caption{ Normalized RDPs before and after background subtraction, drawn with open and filled symbols, respectively.
Poisson error bars are shown. The vertical continuous and dotted lines represent the OC limiting radius and its uncertainty, respectively. The horizontal continuous line represents the mean background density. The blue and red curves represent the fitted King\,(1962) and Plummer\,(1911) profiles, respectively.}
\label{RPDs_parte1}
\end{center}
\end{figure*}
\begin{table*}
\caption{ Determined central coordinates, Galactocentric distances and structural parameters of the studied OCs. }
\label{struct_params}
\begin{tabular}{lccccccc}
\hline
Cluster &$\rmn{RA}$ &$\rmn{DEC}$ & R$_{\textrm{GC}}^{*}$ & $r_c$ & $r_{h}^{\dag}$ & $r_t$ & R$_J^{\dag\dag}$ \\
&($\rmn{h}$:$\rmn{m}$:$\rmn{s}$) & ($\degr$:$\arcmin$:$\arcsec$) & (kpc) & (pc) & (pc) & (pc) & (pc) \\
\hline
Collinder\,258 & 12:27:17 & -60:46:45 &7.4\,$\pm$\,0.5 &0.50\,$\pm$\,0.19 &0.65\,$\pm$\,0.12 &1.23\,$\pm$\,0.46 & 4.77\,$\pm$\,0.44 \\
NGC\,6756 & 19:08:44 & 04:42:53 &6.6\,$\pm$\,0.5 &0.68\,$\pm$\,0.14 &0.99\,$\pm$\,0.15 &2.10\,$\pm$\,0.40 & 7.73\,$\pm$\,0.65 \\
Czernik\,37 & 17:53:15 & -27:22:53 &6.5\,$\pm$\,0.6 &0.72\,$\pm$\,0.16 &1.01\,$\pm$\,0.15 &2.03\,$\pm$\,0.41 & 7.12\,$\pm$\,0.66 \\
NGC\,5381 & 14:00:45 & -59:35:12 &6.8\,$\pm$\,0.5 &1.88\,$\pm$\,0.49 &2.05\,$\pm$\,0.40 &2.86\,$\pm$\,0.61 & 7.33\,$\pm$\,0.59 \\
Trumpler\,25 & 17:24:29 & -39:00:48 &6.3\,$\pm$\,0.6 &1.57\,$\pm$\,0.30 &1.91\,$\pm$\,0.20 &3.13\,$\pm$\,0.51 &10.04\,$\pm$\,0.89 \\
BH\,150 & 13:38:04 & -63:20:27 &6.6\,$\pm$\,0.5 &0.79\,$\pm$\,0.18 &1.20\,$\pm$\,0.17 &2.90\,$\pm$\,0.70 & 7.82\,$\pm$\,0.69 \\
Ruprecht\,111 & 14:36:04 & -59:59:21 &6.8\,$\pm$\,0.5 &0.67\,$\pm$\,0.22 &0.94\,$\pm$\,0.22 &1.83\,$\pm$\,0.44 & 5.40\,$\pm$\,0.47 \\
Ruprecht\,102 & 12:13:37 & -62:42:55 &7.1\,$\pm$\,0.6 &1.29\,$\pm$\,0.37 &1.73\,$\pm$\,0.24 &3.40\,$\pm$\,0.83 & 6.37\,$\pm$\,0.55 \\
NGC\,6249 & 16:57:38 & -44:48:15 &6.9\,$\pm$\,0.5 &0.37\,$\pm$\,0.10 &0.65\,$\pm$\,0.13 &2.00\,$\pm$\,0.67 & 4.94\,$\pm$\,0.44 \\
Basel\,5 & 17:52:27 & -30:05:32 &6.3\,$\pm$\,0.6 &0.61\,$\pm$\,0.15 &1.05\,$\pm$\,0.20 &2.88\,$\pm$\,0.76 & 5.41\,$\pm$\,0.54 \\
Ruprecht\,97 & 11:57:35 & -62:43:20 &7.2\,$\pm$\,0.5 &3.01\,$\pm$\,0.85 &3.67\,$\pm$\,0.49 &5.55\,$\pm$\,1.22 & 8.54\,$\pm$\,0.65 \\
ESO\,129-SC32 & 11:44:06 & -61:05:56 &7.3\,$\pm$\,0.5 &1.73\,$\pm$\,0.54 &2.39\,$\pm$\,0.42 &4.65\,$\pm$\,1.08 & 7.78\,$\pm$\,0.60 \\
\hline
\multicolumn{8}{l}{ \textit{Note}: To convert 1 arcmin into pc we used the expression $(\pi\,/10800)\times10^{[(m-M)_{0}+5]/5}$, where $(m-M)_{0}$ }\\
\multicolumn{8}{l}{ is the OC distance modulus (see Table \ref{astroph_params}). } \\
\multicolumn{8}{l}{ $^{*}$ The $R_G$ were obtained from the distances in Table \ref{astroph_params}, assuming that the Sun is located at 8.0\,$\pm$\,0.5\,kpc } \\
\multicolumn{8}{l}{ from the Galactic centre \citep{Reid:1993a}. }\\
\multicolumn{8}{l}{ $^{\dag}$ Half-light radius (Section \ref{mass_functions}). } \\
\multicolumn{8}{l}{ $^{\dag\dag}$ Jacobi radius (Section \ref{mass_functions}). }
\end{tabular}
\end{table*}
\subsection{Membership determination}
\label{memberships}
After the structural analysis, we used the Vizier service\footnote[3]{http://vizier.u-strasbg.fr/viz-bin/VizieR} to extract astrometric data from the GAIA DR2 catalogue \citep{Gaia-Collaboration:2018} for stars in a large circular area of radius 20\,arcmin centred on each target. This region is large enough to encompass completely the field of view of our Washington images. For each OC, we cross-matched our photometric catalogues with GAIA and executed a routine that explores the three-dimensional (3D) parameter space of proper motions and parallaxes ($\mu_{\alpha}$, $\mu_{\delta}$, $\varpi$) corresponding to stars in the OC area ($r\lesssim r_t$) and in a control field (stars in the region $r\gtrsim r_t$). The routine is devised to detect and evaluate statistically the overdensity of OC stars relative to the field in each part of the parameter space. This was a critical step in our analysis, since the studied OCs are located at low Galactic latitudes ($\vert b\vert\leq2^{\circ}$) and are thus projected against dense stellar fields.
The routine is fully described in \cite{Angelo:2019a}. Briefly, the procedure consists of dividing the astrometric space into cells with widths proportional to the sample mean uncertainties (\,$\langle\Delta\mu_{\alpha}\rangle$, $\langle\Delta\mu_{\delta}\rangle$ and $\langle\Delta\varpi\rangle$\,) in each astrometric parameter. Cell widths are typically 1.0\,mas\,yr$^{-1}$ and 0.1\,mas for the proper motion components and parallax, respectively. These values correspond to $\sim10\times\langle\Delta\mu_{\alpha}\rangle$, $\sim10\times\langle\Delta\mu_{\delta}\rangle$ and $\sim1\times\langle\Delta\varpi\rangle$. These cell sizes allow us to accommodate a significant number of stars inside each cell while remaining small enough to properly sample the fluctuations across the 3D space.
Inside each cell, we determined membership likelihoods for stars in the OC sample ($l_{\textrm{star}}$) by employing a multivariate gaussian:
\begin{equation}
\begin{aligned}
l_{\textrm{star}} = \frac{\textrm{exp}\left[-\frac{1}{2}(\boldsymbol{X}-\boldsymbol{\mu})^{\textrm{T}}\boldsymbol{\Sigma}^{-1}(\boldsymbol{X}-\boldsymbol{\mu})\right]}{\sqrt{(2\pi)^3\vert\boldsymbol{\Sigma}\vert}}
\end{aligned}
\label{likelihood_formula}
,\end{equation}
\noindent
where $\boldsymbol{X}$ is the column vector ($\mu_{\alpha}$,\,$\mu_{\delta}$,\,$\varpi$) for a given star and $\boldsymbol{\mu}$ is the mean vector for the sample of OC stars contained within the cell. $\boldsymbol{\Sigma}$ is the full covariance matrix, which incorporates the uncertainties and the correlations between the astrometric parameters (see equation 2 of \citeauthor{Angelo:2019a}\,\,\citeyear{Angelo:2019a}). The same calculation was then performed for stars in the control field. For each sample inside a given cell, the total likelihood was taken multiplicatively: $\mathcal{L}=\prod_{i} l_i$.
In order to compare statistically the dispersion of data for both samples (OC and control field) in a given cell, we employed the objective function:
\begin{equation}
S = -\textrm{log}\,\mathcal{L}
\label{func_entropia}
.\end{equation}
\noindent
Then we searched for cells for which $S_{\textrm{clu}}<S_{\textrm{field}}$ and the corresponding OC stars were flagged (``1'') as member candidates (stars in cells that do not satisfy this criterion were flagged ``0''). In these cases, the ``entropy'' of the parameter space for stars in the OC sample is locally smaller than that for the field stars.
The final membership likelihoods were assigned to stars inside the cells flagged as ``1'' according to the relation:
\begin{equation}
L_{\textrm{star}} \propto\,\textrm{exp}\left(-\frac{\langle N_{\textrm{clu}}\rangle}{N_{\textrm{clu}}}\right)
\label{termo_exponencial}
,\end{equation}
\noindent
where $\langle N_{\textrm{clu}}\rangle$ is the average number of stars inside cells for a given grid size. This exponential factor is necessary to make sure that only stars within cells where $N_{\textrm{clu}}$ is considerably greater than $\langle N_{\textrm{clu}}\rangle$ will receive large membership likelihoods.
Finally, the cell sizes were varied by one third of their original sizes in each direction and the procedure stated above was repeated. For each star the algorithm determines 27 different likelihoods and registers the median of this set of values. The maximum likelihood (equation \ref{termo_exponencial}) for the complete sample of stars in the OC region is then normalized to unity.
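The core of one pass of this cell-by-cell comparison can be sketched as follows. This is an illustrative reimplementation, not the code of \cite{Angelo:2019a}: for brevity it estimates each cell's Gaussian from the sample covariance rather than the full per-star uncertainty covariance, and it omits the cell-size perturbations and the final exponential weighting of equation \ref{termo_exponencial}.

```python
import numpy as np

def cell_log_likelihood(X):
    """Summed log-likelihood of a multivariate Gaussian fitted to the
    points X (n_stars x 3: pmra, pmdec, parallax) inside one cell."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(3)   # regularise
    inv, det = np.linalg.inv(cov), np.linalg.det(cov)
    d = X - mu
    logl = (-0.5 * np.einsum('ij,jk,ik->i', d, inv, d)
            - 0.5 * np.log((2.0 * np.pi) ** 3 * det))
    return logl.sum()        # log of the product of per-star likelihoods

def flag_cells(clu, field, widths, min_stars=3):
    """Flag cluster stars lying in cells where S_clu < S_field,
    i.e. where the cluster sample is locally more 'ordered' than
    the control-field sample."""
    edges = [np.arange(lo, hi + w, w)
             for (lo, hi), w in zip(zip(clu.min(axis=0), clu.max(axis=0)),
                                    widths)]
    keys_c = list(zip(*[np.digitize(clu[:, k], edges[k]) for k in range(3)]))
    keys_f = list(zip(*[np.digitize(field[:, k], edges[k]) for k in range(3)]))
    member_mask = np.zeros(len(clu), dtype=bool)
    for cell in set(keys_c):
        in_c = np.array([k == cell for k in keys_c])
        in_f = np.array([k == cell for k in keys_f])
        if in_c.sum() < min_stars or in_f.sum() < min_stars:
            continue                              # too few stars to compare
        S_clu = -cell_log_likelihood(clu[in_c])   # S = -log L
        S_field = -cell_log_likelihood(field[in_f])
        if S_clu < S_field:
            member_mask[in_c] = True              # flag "1"
    return member_mask
```

A tight astrometric clump of cluster stars yields a much larger summed log-likelihood (hence smaller $S$) than a dispersed field sample in the same cell, so its stars are flagged as member candidates.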
\section{Results}
\label{results}
\subsection{Members selection}
\label{members_selection}
\begin{figure*}
\begin{center}
\parbox[c]{1.0\textwidth}
{
\includegraphics[width=0.333\textwidth]{T1_CT1_Cr258.pdf}
\includegraphics[width=0.333\textwidth]{T1_CT1_NGC6756.pdf}
\includegraphics[width=0.333\textwidth]{T1_CT1_Czernik37.pdf}
\includegraphics[width=0.333\textwidth]{T1_CT1_NGC5381.pdf}
\includegraphics[width=0.333\textwidth]{T1_CT1_Trumpler25.pdf}
\includegraphics[width=0.333\textwidth]{T1_CT1_BH150.pdf}
}
\caption{ $T_{1}\,\times\,(C-T_1)$ CMDs after applying the decontamination procedure described in Section \ref{memberships}. Filled and open symbols represent member and non-member stars, respectively. Colours were assigned according to the
astrometric membership scale, as indicated by the colour bars. Small black dots represent stars in a control field. The continuous lines are the best-fitted PARSEC isochrones and the dotted ones are shifted by $-0.75$\,mag in $T_1$ to match the loci of unresolved binaries with equal-mass components. The basic astrophysical parameters are indicated. The filled black squares represent stars without GAIA data and with $\mathcal{L}_{\textrm{phot}}\ge0.1$ (see text for details). }
\label{CMDs_parte1}
\end{center}
\end{figure*}
\setcounter{figure}{3}
\begin{figure*}
\begin{center}
\parbox[c]{1.0\textwidth}
{
\includegraphics[width=0.333\textwidth]{T1_CT1_Ruprecht111.pdf}
\includegraphics[width=0.333\textwidth]{T1_CT1_Ruprecht102.pdf}
\includegraphics[width=0.333\textwidth]{T1_CT1_NGC6249.pdf}
\includegraphics[width=0.333\textwidth]{T1_CT1_Basel5.pdf}
\includegraphics[width=0.333\textwidth]{T1_CT1_Ruprecht97.pdf}
\includegraphics[width=0.333\textwidth]{T1_CT1_ESO129-32.pdf}
}
\caption{(continued).}
\label{CMDs_parte2}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.31\textwidth]{VPD_Cr258.pdf}
\includegraphics[width=0.31\textwidth]{VPD_NGC6756.pdf}
\includegraphics[width=0.31\textwidth]{VPD_Czernik37.pdf}
\includegraphics[width=0.31\textwidth]{VPD_NGC5381.pdf}
\includegraphics[width=0.31\textwidth]{VPD_Trumpler25.pdf}
\includegraphics[width=0.31\textwidth]{VPD_BH150.pdf}
\includegraphics[width=0.31\textwidth]{VPD_Ruprecht111.pdf}
\includegraphics[width=0.31\textwidth]{VPD_Ruprecht102.pdf}
\includegraphics[width=0.31\textwidth]{VPD_NGC6249.pdf}
\includegraphics[width=0.31\textwidth]{VPD_Basel5.pdf}
\includegraphics[width=0.31\textwidth]{VPD_Ruprecht97.pdf}
\includegraphics[width=0.31\textwidth]{VPD_ESO129-32.pdf}
\caption{VPDs after applying the decontamination procedure of Section \ref{memberships}. Colours were assigned according to the membership scale and big filled circles represent member stars. }
\label{VPDs_T1plx_parte1}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.3\textwidth]{T1mag_plx_Cr258.pdf}
\includegraphics[width=0.3\textwidth]{T1mag_plx_NGC6756.pdf}
\includegraphics[width=0.3\textwidth]{T1mag_plx_Czernik37.pdf}
\includegraphics[width=0.3\textwidth]{T1mag_plx_NGC5381.pdf}
\includegraphics[width=0.3\textwidth]{T1mag_plx_Trumpler25.pdf}
\includegraphics[width=0.3\textwidth]{T1mag_plx_BH150.pdf}
\includegraphics[width=0.3\textwidth]{T1mag_plx_Ruprecht111.pdf}
\includegraphics[width=0.3\textwidth]{T1mag_plx_Ruprecht102.pdf}
\includegraphics[width=0.3\textwidth]{T1mag_plx_NGC6249.pdf}
\includegraphics[width=0.3\textwidth]{T1mag_plx_Basel5.pdf}
\includegraphics[width=0.3\textwidth]{T1mag_plx_Ruprecht97.pdf}
\includegraphics[width=0.3\textwidth]{T1mag_plx_ESO129-32.pdf}
\caption{ $\varpi$\,versus $T_1$ plots for the studied OCs. Symbols and colours are the same of those in Fig.~\ref{VPDs_T1plx_parte1}. }
\label{T1mag_plx}
\end{center}
\end{figure*}
After applying the procedure described in the previous section, we built decontaminated CMDs for the whole studied OC sample, as shown in Fig.~\ref{CMDs_parte1}. We also built the vector-point diagrams (VPDs) and the $\varpi\,$versus $T_1\,$ plots for the complete sample (Figs.~\ref{VPDs_T1plx_parte1} and \ref{T1mag_plx}, respectively). We can see that higher membership stars (typically $P\gtrsim0.6-0.7$) define recognizable evolutionary sequences in the CMDs, although some interlopers remain. These stars also define ``clumps'' of data in the astrometric plots, since they share common proper motions and compatible parallaxes.
We then took the photometric data of the higher membership stars and, in a first step, performed preliminary estimates of the basic astrophysical parameters ($(m-M)_0$,\,log($t/$yr) and $E(B-V)$\,) for each OC. To accomplish this, we performed visual fits of solar-metallicity PARSEC isochrones \citep{Bressan:2012} to each decontaminated CMD. A first guess for the distance modulus was obtained from the relation $(m-M)_{0}=5\times\,$log(100/$\langle\varpi\rangle$), where $\langle\varpi\rangle$ is the mean parallax (in mas) taken over the high membership stars. The isochrone was then reddened and vertically shifted in order to provide reasonable matches to key evolutionary CMD features, such as the main sequence, the subgiant branch, the red clump (when present) and the red giant branch.
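The parallax-based first guess for the distance modulus is a one-line computation; a minimal sketch (assuming $\langle\varpi\rangle$ in mas, so that $100/\langle\varpi\rangle$ is the distance in units of 10\,pc):

```python
import math

def dist_modulus_from_parallax(mean_plx_mas):
    """(m - M)_0 = 5 log10(100 / <parallax[mas]>),
    equivalent to 5 log10(d_pc) - 5 with d_pc = 1000 / parallax[mas]."""
    return 5.0 * math.log10(100.0 / mean_plx_mas)
```

For instance, a mean parallax of 0.5\,mas corresponds to $(m-M)_0 \simeq 11.5$\,mag.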
In a second step, we used a set of PARSEC isochrones covering metallicity values $[Fe/H]$ in the range [-1.0\,,\,+0.65]\,dex (the upper limit corresponding to the maximum value covered by the models) and ran the ASTECA code \citep{Perren:2015} in order to perform automatic isochrone fits, employing the visually estimated parameters as a first guess and the Washington photometry of high-membership stars ($L\gtrsim0.6-0.7$; equation \ref{termo_exponencial}). The isochrone fitting process in ASTECA is based on the generation of synthetic OCs from theoretical isochrones and the selection of the best fit through a genetic algorithm (see section 2.9.1 of \citeauthor{Perren:2015}\,\,\citeyear{Perren:2015} and references therein). We restricted the parameter space covered by the models according to the following intervals:
\begin{itemize}
\item ($(m-M)_{0,\textrm{pre}}-0.5)\leq(m-M)_0\leq((m-M)_{0,\textrm{pre}}+0.5$), in steps of 0.1\,mag;
\item ($E(B-V)_{\textrm{pre}}-0.15)\leq E(B-V)\leq (E(B-V)_{\textrm{pre}}+0.15$), in steps of 0.01\,mag;
\item (log($t/$yr)$_{\textrm{pre}}$$-$0.2) $\leq$ log($t$/yr) $\leq$ (log($t$/yr)$_{\textrm{pre}}$$+$0.2);
\item overall metallicity: 0.007 $\leq$ Z $\leq$ 0.04, in steps of 0.002,
\end{itemize}
\noindent
where $(m-M)_{0,\textrm{pre}}$, $E(B-V)_{\textrm{pre}}$ and log($t/$yr)$_{\textrm{pre}}$ are our first estimates for the distance modulus, interstellar reddening and age, respectively, as described above.
The continuous lines in Fig.~\ref{CMDs_parte1} represent the best-fitted isochrones, while the dotted lines correspond to the same isochrones shifted by -0.75 mag in $T_1$ to match the loci of unresolved binaries with equal mass components.
Filled symbols represent member stars and colours correspond to the membership likelihoods, as shown in the colour bars. Open circles are non-members and small black dots are stars in a control field.
Additionally, we ran the photometric decontamination procedure described in \cite{Maia:2010} in order to identify possible member candidates without astrometric information in GAIA. For this procedure, we used the photometric data for the same groups of stars (OC and control field) employed in Section \ref{memberships}.
Stars with photometric membership likelihoods $\mathcal{L}_{\textrm{phot}}\ge0.60$, but without available astrometry, are plotted as filled black squares in the CMDs of BH\,150, Czernik\,37, NGC\,6756, Ruprecht\,111 and Trumpler\,25. For reference, we circled the member stars (filled symbols) for which $\mathcal{L}_{\textrm{phot}}\ge0.10$. In the case of NGC\,6249, we identified three bright stars, labelled \#1, \#2 and \#586 in Fig.~\ref{CMDs_parte2} and located inside the OC's $r_t$, which could have been considered member stars in a purely photometric analysis. Their astrometric data $\mu_{\alpha}$(mas\,yr$^{-1}$), $\mu_{\delta}$(mas\,yr$^{-1}$) and $\varpi$(mas) are: (-4.182$\pm$0.088, -7.707$\pm$0.068, 1.7587$\pm$0.0435)$_{\#1}$, (-8.139$\pm$0.099, -15.823$\pm$0.083, 1.4016$\pm$0.0524)$_{\#5}$ and (1.447$\pm$0.103, -3.183$\pm$0.083, 0.9423$\pm$0.0512)$_{\#586}$. These stars received membership likelihoods smaller than 0.3, since their parallaxes are incompatible with the group of members and/or their proper motion components are inconsistent with the bulk motion of the OC. The complete set of astrophysical parameters for the studied sample is shown in Table \ref{astroph_params}. In this table, $M_{\textrm{Kroupa}}$ and $N_{\textrm{Kroupa}}$ are upper limits for the OC's mass and number of stars, respectively.
\begin{table*}
\begin{center}
\caption{Fundamental parameters, half-light relaxation times and photometric masses ($M_{\textrm{phot}}$) for the studied OCs. $M_{\textrm{Kroupa}}$ and $N_{\textrm{Kroupa}}$ are upper limits for OC mass and number of stars, respectively.}
\label{astroph_params}
\footnotesize
\begin{tabular}{lccccccccc}
\hline
Cluster & (m-M)$_{0}$ & $d$ & $E(B-V)$ & log($t$/yr) & $[Fe/H]$ & $t_{rh}$ & $M_{\textrm{phot}}$ & $M_{\textrm{Kroupa}}$ & $N_{\textrm{Kroupa}}$ \\
& (mag) & (kpc) & (mag) & & (dex) & (Myr) & ($M_{\odot}$) & ($M_{\odot}$) & \\
\hline
Collinder\,258 &10.60\,$\pm$\,0.50 &1.32\,$\pm$\,0.30 &0.19\,$\pm$\,0.15 &8.15\,$\pm$\,0.30 & 0.18\,$\pm$\,0.22 & 1.10\,$\pm$\,0.32 & 80\,$\pm$\,15 & 244 & 524 \\
NGC\,6756 &11.45\,$\pm$\,0.30 &1.95\,$\pm$\,0.27 &1.09\,$\pm$\,0.10 &8.35\,$\pm$\,0.20 & 0.00\,$\pm$\,0.17 & 3.13\,$\pm$\,0.70 & 481\,$\pm$\,33 &1596 &3617 \\
Czernik\,37 &10.95\,$\pm$\,0.40 &1.55\,$\pm$\,0.28 &1.54\,$\pm$\,0.10 &8.50\,$\pm$\,0.15 & 0.00\,$\pm$\,0.17 & 3.35\,$\pm$\,0.73 & 403\,$\pm$\,31 &1835 &4202 \\
NGC\,5381 &11.60\,$\pm$\,0.30 &2.09\,$\pm$\,0.29 &0.74\,$\pm$\,0.05 &8.60\,$\pm$\,0.10 &-0.32\,$\pm$\,0.06 &10.72\,$\pm$\,3.10 & 376\,$\pm$\,26 &1016 &2400 \\
Trumpler\,25 &11.20\,$\pm$\,0.30 &1.74\,$\pm$\,0.24 &1.20\,$\pm$\,0.10 &8.10\,$\pm$\,0.10 & 0.05\,$\pm$\,0.15 & 8.80\,$\pm$\,1.37 &1211\,$\pm$\,58 &4111 &8837 \\
BH\,150 &12.40\,$\pm$\,0.30 &3.02\,$\pm$\,0.42 &1.57\,$\pm$\,0.10 &7.35\,$\pm$\,0.10 & 0.00\,$\pm$\,0.23 & 1.97\,$\pm$\,0.42 & 503\,$\pm$\,52 &1962 &3740 \\
Ruprecht\,111 &11.40\,$\pm$\,0.20 &1.91\,$\pm$\,0.17 &0.88\,$\pm$\,0.10 &8.50\,$\pm$\,0.10 &-0.13\,$\pm$\,0.23 & 2.47\,$\pm$\,0.86 & 152\,$\pm$\,19 & 714 &1638 \\
Ruprecht\,102 &12.50\,$\pm$\,0.40 &3.16\,$\pm$\,0.58 &0.84\,$\pm$\,0.15 &8.45\,$\pm$\,0.25 & 0.22\,$\pm$\,0.14 & 6.35\,$\pm$\,1.32 & 221\,$\pm$\,22 & 820 &1880 \\
NGC\,6249 &10.30\,$\pm$\,0.40 &1.15\,$\pm$\,0.21 &0.44\,$\pm$\,0.10 &8.30\,$\pm$\,0.25 & 0.10\,$\pm$\,0.18 & 1.22\,$\pm$\,0.37 & 109\,$\pm$\,15 & 303 & 689 \\
Basel\,5 &11.20\,$\pm$\,0.40 &1.74\,$\pm$\,0.32 &0.66\,$\pm$\,0.10 &8.35\,$\pm$\,0.15 & 0.22\,$\pm$\,0.14 & 2.75\,$\pm$\,0.78 & 193\,$\pm$\,19 & 509 &1136 \\
Ruprecht\,97 &12.55\,$\pm$\,0.30 &3.24\,$\pm$\,0.45 &0.56\,$\pm$\,0.15 &8.10\,$\pm$\,0.15 & 0.31\,$\pm$\,0.11 &18.85\,$\pm$\,3.78 & 512\,$\pm$\,34 &1491 &3234 \\
ESO\,129-SC32 &12.85\,$\pm$\,0.30 &3.71\,$\pm$\,0.51 &0.75\,$\pm$\,0.10 &8.30\,$\pm$\,0.15 & 0.00\,$\pm$\,0.11 &10.49\,$\pm$\,2.78 & 369\,$\pm$\,29 &1141 &2560 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Mass functions}
\label{mass_functions}
\begin{figure*}
\begin{center}
\parbox[c]{1.0\textwidth}
{
\includegraphics[width=0.333\textwidth]{massfunction_Cr258.pdf}
\includegraphics[width=0.333\textwidth]{massfunction_NGC6756.pdf}
\includegraphics[width=0.333\textwidth]{massfunction_Czernik37.pdf}
\includegraphics[width=0.333\textwidth]{massfunction_NGC5381.pdf}
\includegraphics[width=0.333\textwidth]{massfunction_Trumpler25.pdf}
\includegraphics[width=0.333\textwidth]{massfunction_BH150.pdf}
\includegraphics[width=0.333\textwidth]{massfunction_Ruprecht111.pdf}
\includegraphics[width=0.333\textwidth]{massfunction_Ruprecht102.pdf}
\includegraphics[width=0.333\textwidth]{massfunction_NGC6249.pdf}
\includegraphics[width=0.333\textwidth]{massfunction_Basel5.pdf}
\includegraphics[width=0.333\textwidth]{massfunction_Ruprecht97.pdf}
\includegraphics[width=0.333\textwidth]{massfunction_ESO129-32.pdf}
}
\caption{ Mass functions for the studied OC sample. The scaled IMFs of Salpeter\,(1955) and Kroupa\,(2001)
are overplotted with black and red lines, respectively. }
\label{mass_func_parte1}
\end{center}
\end{figure*}
Individual masses for member stars were estimated by interpolation from the observed $T_1$ magnitude along the best-fitted isochrone, properly shifted according to the OC reddening and distance modulus (Fig.~\ref{CMDs_parte1} and Table \ref{astroph_params}). The OCs' mass functions ($\phi(m)=dN/dm$) were then built by counting the number of stars in linear mass bins, which were then converted to the logarithmic scale, as shown in Fig.~\ref{mass_func_parte1}. Star counts inside each mass bin were weighted by the membership likelihoods and properly corrected for photometric completeness. The total photometric OC masses were obtained from a discrete sum over the mass bins, while the corresponding uncertainties were determined from Poisson statistics.
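The construction of the weighted, completeness-corrected mass function can be outlined as below. This is an illustrative sketch rather than the exact pipeline: the per-star completeness division, the bin edges and the Poisson-like error propagation through the sum of squared weights are assumptions made for demonstration.

```python
import numpy as np

def mass_function(masses, memb_weights, completeness, bin_edges):
    """Membership-weighted, completeness-corrected mass function
    phi(m) = dN/dm in linear mass bins, plus the total mass."""
    corrected = memb_weights / completeness      # each star counts w / c
    counts, edges = np.histogram(masses, bins=bin_edges, weights=corrected)
    dm = np.diff(edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    phi = counts / dm
    # Poisson-like uncertainty from the sum of squared weights per bin
    sq, _ = np.histogram(masses, bins=bin_edges, weights=corrected ** 2)
    phi_err = np.sqrt(sq) / dm
    total_mass = float(np.sum(corrected * masses))
    return centers, phi, phi_err, total_mass
```

The returned $\phi(m)$ would then be plotted in logarithmic scale, as in Fig.~\ref{mass_func_parte1}.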
For comparison purposes, we overplotted in Fig.~\ref{mass_func_parte1} the initial mass functions (IMF) of \citeauthor{Salpeter:1955}\,\,(\citeyear{Salpeter:1955}, continuous black lines) and \citeauthor{Kroupa:2001}\,\,(\citeyear{Kroupa:2001}, red dashed lines), scaled according to the OC total photometric mass (Table \ref{astroph_params}). The signal of low-mass star depletion seen in some OCs is a consequence of preferential evaporation of the lowest mass stars during the OCs' dynamical evolution (see Section 5). Other OCs (e.g., ESO\,129-SC32, NGC\,5381) present less noticeable depletion in the lower mass bins, since their observed mass functions are more compatible with the Kroupa and Salpeter IMFs.
Finally, we estimated Jacobi radii from the expression:
\begin{equation}
R_J = \left(\frac{M_{\textrm{clu}}}{3\,M_G}\right)^{1/3}\times R_G
\end{equation}
\noindent
where $M_{\textrm{clu}}$ is the OC's photometric mass (Table \ref{astroph_params}). This formula assumes a circular orbit around a point mass galaxy ($M_{G}\sim1.0\times10^{11}\,M_{\odot}$; \citeauthor{Carraro:1994}\,\,\citeyear{Carraro:1994}; \citeauthor{Bonatto:2005}\,\,\citeyear{Bonatto:2005}; \citeauthor{Taylor:2016}\,\,\citeyear{Taylor:2016}). With the estimated OCs' masses, we also derived their half-light relaxation times (\citeauthor{Spitzer:1971}\,\,\citeyear{Spitzer:1971}):
\begin{equation}
t_{\textrm{rh}}=8.9\times10^5\,\textrm{yr}\times\frac{M_{\textrm{clu}}^{1/2}\,r_{\textrm{h}}^{3/2}}{\langle m\rangle\,\textrm{log}_{10}(0.4M_{\textrm{clu}}/\langle m\rangle)}
\end{equation}
\noindent
where $\langle m\rangle$ is the mean mass of the OC stars.
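For reference, both quantities above are straightforward to evaluate; a minimal sketch (adopting the point-mass Galaxy $M_G = 1.0\times10^{11}\,M_{\odot}$ from the text, with $\langle m\rangle$ and the input masses as illustrative values):

```python
import math

M_GALAXY = 1.0e11   # point-mass Galaxy in solar masses, as adopted above

def jacobi_radius_pc(m_clu, r_gal_kpc):
    """R_J = (M_clu / 3 M_G)^(1/3) x R_G, with R_G in kpc and R_J in pc."""
    return (m_clu / (3.0 * M_GALAXY)) ** (1.0 / 3.0) * r_gal_kpc * 1.0e3

def t_rh_myr(m_clu, r_h_pc, mean_mass):
    """Half-light relaxation time from the expression above, in Myr."""
    t_yr = (8.9e5 * m_clu ** 0.5 * r_h_pc ** 1.5
            / (mean_mass * math.log10(0.4 * m_clu / mean_mass)))
    return t_yr / 1.0e6
```

For Collinder\,258 ($M_{\textrm{phot}}=80\,M_{\odot}$, $R_G=7.4$\,kpc), for example, the first function gives $R_J\simeq4.8$\,pc, consistent with Table \ref{struct_params}.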
\subsection{Comments on individual clusters}
The present OC sample was also investigated by \cite{Kharchenko:2013}, who employed proper motions from the PPMXL catalogue \citep{Roeser:2010} and photometric data from the near-infrared 2MASS catalogue \citep{Skrutskie:2006}. Their analysis was based on a dedicated data-processing pipeline that determines kinematic and photometric membership probabilities for stars in the cluster regions. In this section, we point out some similarities and differences between their results and ours. Some other previous studies are also highlighted.
We found a very good agreement with the results of \cite{Kharchenko:2013} for Collinder\,258, NGC\,6756, Czernik\,37, NGC\,5381, Trumpler\,25, Ruprecht\,111 and NGC\,6249, while some
differences arise for BH\,150, Ruprecht\,102, Basel\,5, Ruprecht\,97 and ESO\,129-SC32.
For instance, distance moduli differ by $\sim1\,$mag (Basel\,5) up to $\sim2\,$mag
(ESO\,129-SC32); differences in log($t$) vary from $0.2$ (BH\,150) to $\sim1$
(Ruprecht\,102); and our $E(B-V)$ colour excesses are typically $\sim0.3\,$mag lower.
We speculate that the main reason for these differences is related to the membership assignment procedures used: Kharchenko et al.'s method is based on lower-quality proper motion data, and parallaxes are not available in the PPMXL catalogue. Likewise,
the photometric completeness of 2MASS data in regions projected close to the Galactic centre
may also play a role. Indeed, our CMDs (see Fig.~\ref{CMDs_parte1}) exhibit deeper main
sequences, reaching nearly 3--4\,mag below the clusters' main sequence turnoffs.
\subsubsection{NGC\,6756}
NGC\,6756 was studied by \cite{Netopil:2013}, who employed publicly available Str\"omgren $uvby$ data in their analysis, together with previously calibrated empirical relations between effective temperatures and the reddening-free index $[u-b]$. The cluster parameters were derived by means of the method proposed by \cite{Pohnl:2010}, which is based on the fit of differential evolutionary tracks (normalised to the zero age main-sequence, ZAMS) for a variety of metallicity/age combinations.
The derived metallicity ($[Fe/H]=0.10\pm0.14$), colour excess ($E(B-V)=1.03\pm0.05\,$mag) and age (log($t$/yr)=$8.10$) are in agreement with our results (see Table \ref{astroph_params}), considering uncertainties. Their derived distance modulus ($(m-M)_0=12.3\,$mag), however, does not agree with our analysis. This discrepancy may be partially attributed to the different sets of isochrones employed in both studies and, more importantly, to the adopted criteria in the selection of member stars. In the analysis of NGC\,6756, \cite{Netopil:2013} employed only photometric data (see their table 2). Despite this, as stated above, the distance modulus obtained by \cite{Kharchenko:2013} ($(m-M)_0=11.45\,$mag) is in excellent agreement with our result.
Additionally, NGC\,6756 is present in the sample of \cite{Santos:2004}, who obtained equivalent widths from integrated spectra and presented homogeneous scales of ages and metallicities for clusters younger than 10\,Gyr. They derived $[Fe/H]=0.0\pm0.2$ and log($t$/yr)=$8.48\pm0.15$, which are in fair agreement with our analysis (Table \ref{astroph_params}).
\subsubsection{Czernik\,37 and NGC\,5381}
Both OCs have also been investigated by \cite{Marcionni:2014}, who used the same
public images as in Section \ref{data_collection_reduction}. They derived $(m-M)_0=10.8\pm1.3\,$mag, log$(t/$yr)=$8.40\pm0.14$ and $E(B-V)=1.47\pm0.25\,$mag for Czernik\,37, and $(m-M)_0=12.1\pm0.3\,$mag,
log$(t/$yr)=$8.40\pm0.10$ and $E(B-V)=0.46\pm0.25\,$mag for NGC\,5381.
Within the quoted uncertainties, our values are in good agreement with theirs, except in the case of
NGC\,5381's colour excess. Nevertheless, we would like to point out that they did not employ
GAIA DR2 data. Consequently, we provide a larger and more reliable
list of member stars.
\subsubsection{NGC\,6249}
\cite{Mermilliod:2008} determined the mean cluster radial velocity. Two stars ($\alpha=254^{\circ}\!\!.41346$, $\delta=-44^{\circ}\!\!.79994$ and $\alpha=254^{\circ}\!\!.42387$, $\delta=-44^{\circ}\!\!.78798$) observed by them are also present in our sample; they are identified as $\#1$ and $\#5$ in Fig.~\ref{CMDs_parte2}. Both stars could be considered members in a purely photometric analysis, but they were discarded as cluster members because they received very low membership likelihoods ($L<1$ per cent) due to very discrepant parallaxes ($\varpi_{\#1}=1.7587\pm0.0435\,$mas and $\varpi_{\#5}=1.4016\pm0.0524\,$mas) compared to the bulk of member stars (Fig.~\ref{T1mag_plx}).
\subsubsection{Ruprecht\,97}
\cite{Claria:2008} obtained Washington $CMT_1T_2$ photoelectric data for 6 red giant candidates of Ruprecht\,97. Their sample was defined based on the previous photometric study of \cite{Moffat:1975}. These 6 stars were also observed in the present study. The photometry from both studies proved to be mutually consistent, the mean differences between our magnitudes and the literature ones being $\langle C_{\textrm{our}}-C_{\textrm{lit}}\rangle=0.011\pm0.058\,$mag and $\langle T_{\textrm{1,our}}-T_{\textrm{1,lit}}\rangle=-0.062\pm0.019\,$mag; that is, only a relatively small systematic offset was found for the $T_{1}$ magnitudes.
By making use of Washington metallicity-sensitive indices \citep{Geisler:1991}, they identified stars \#4 and \#11 as members (their table 6), for which $[Fe/H]=-0.03\pm0.03$. Both stars were discarded as cluster members in our membership analysis, which could explain the difference in the derived cluster metallicity. Indeed, star \#4 is located at $\sim6{\arcmin}$ from the cluster centre (Table \ref{struct_params}), beyond the cluster limiting radius and therefore in a region dominated by field stars (Fig.~\ref{RPDs_parte1}). After running our decontamination method, star \#11 received a membership likelihood of $\sim5$ per cent, because its parallax ($\varpi=0.5922\pm0.0298\,$mas) is incompatible with the bulk of cluster members (Figure \ref{T1mag_plx}). These results highlight the need for a proper characterization scheme to refine the selection of cluster members, thus leading to a more reliable determination of astrophysical parameters.
\subsubsection{BH\,150}
The physical nature of BH\,150 has until now been under debate. \cite{Kharchenko:2013} classified this object as ``dubious''. On the other hand, \cite{Carraro:2005a} -- based only on the analysis of $V\times(B-V)$ and $V\times{(V-I)}$ CMDs -- could neither derive any
astrophysical parameters nor draw any conclusion on its existence as a genuine physical
system. Due to the large field star contamination along the line of sight, their method did not allow a reliable disentanglement of the object from the composite field population. Here, once we applied the decontamination procedure described in Section \ref{memberships}, we could identify a group of stars with compatible kinematics and parallaxes and reveal clearer evolutionary sequences in its CMD (Fig.~\ref{CMDs_parte1}). Our results favour the hypothesis of a real OC.
\section{Introduction}
It is known that the majority of stars are born embedded within giant molecular clouds \citep{Lada:2003} and form stellar aggregates named associations or open clusters (OCs). While the former are loose and gravitationally unbound groups (typical dissolution times between $\sim10-100\,$Myr; \citeauthor{Moraux:2016}\,\,\citeyear{Moraux:2016}), the latter are long-lived stellar structures, and their diversity in terms of age, stellar content and metallicity makes them ideal tracers of the Galactic structure, providing information regarding its kinematical evolution and chemical enrichment.
The initial evolutionary stages of OCs have a critical impact on their subsequent dynamical evolution, since the early gas-expulsion process (caused by, e.g., supernovae and stellar winds during the first $\sim3\,$Myr; \citeauthor{Portegies-Zwart:2010}\,\,\citeyear{Portegies-Zwart:2010}) causes fewer than 10 per cent of embedded OCs to survive emergence from molecular clouds \citep{Lada:2003}. Those that are massive enough to survive this initial phase enter subsequent phases, in which dynamical timescales become continuously smaller than the mass-loss timescales due to stellar evolution. The investigation of these structures helps to constrain the initial conditions assumed by, e.g., $N-$body simulations aimed at reproducing observable quantities of OCs at later evolutionary stages.
During the long-term evolution, the interplay between several destructive effects (e.g., tidal stripping, collisions with molecular clouds, evaporation of stars due to internal relaxation) causes the OC's stellar content to be gradually depleted until its dissolution amongst the general Galactic field. How stellar OCs dissolve is a debated topic (\citeauthor{Pavani:2007}\,\,\citeyear{Pavani:2007}; \citeauthor{Bica:2001}\,\,\citeyear{Bica:2001}; \citeauthor{Piatti:2017a}\,\,\citeyear{Piatti:2017a}) and uniform characterizations of evolved OCs, possibly covering different parameters and positions within the Galaxy, are important to constrain evolutionary models and thus to clarify this subject.
In this context, the second release of the GAIA catalogue \citep{Gaia-Collaboration:2018} inaugurated a new era in astronomy. The availability of high-precision astrometric information (typically $\lesssim$\,0.1\,mas and $\lesssim$\,0.1\,mas\,yr$^{-1}$ for parallaxes and proper motion components, respectively) allowed the discovery of new OCs (\citeauthor{Cantat-Gaudin:2018a}\,\,\citeyear{Cantat-Gaudin:2018a}; \citeauthor{Borissova:2018}\,\,\citeyear{Borissova:2018}; \citeauthor{Ferreira:2019}\,\,\citeyear{Ferreira:2019}) and a more precise characterization of already catalogued ones (e.g., \citeauthor{Kos:2018}\,\,\citeyear{Kos:2018}; \citeauthor{Dias:2018a}\,\,\citeyear{Dias:2018a}).
In the present paper, we took full advantage of GAIA DR2 data combined with mostly unpublished Washington $CT_1$ photometry to select member stars and analyse the dynamical stage of a set of 11 OCs (namely Collinder\,258, NGC\,6756, Czernik\,37, NGC\,5381, Ruprecht\,111, Ruprecht\,102, NGC\,6249, Basel\,5, Ruprecht\,97, Trumpler\,25 and ESO\,129-SC32). We also discuss the case of BH\,150, whose existence as a physical system could not be confirmed in previous studies \citep{Carraro:2005a} and which was classified as a dubious object by inspection of photographic data (\citeauthor{Kharchenko:2013}\,\,\citeyear{Kharchenko:2013} and references therein). It is currently classified as an asterism in the \cite{Dias:2002} catalogue. We have included this object in our sample in order to establish its physical nature conclusively. Since these 12 objects are projected against dense stellar populations of the Galactic disc, their colour-magnitude diagrams (CMDs) are severely contaminated, which makes the disentanglement of OC and field stars and the search for evolutionary sequences non-trivial tasks. We have combined astrometric and photometric data with a decontamination algorithm that searches for overdensities across the 3-dimensional astrometric space ($\mu_{\alpha}, \mu_{\delta}, \varpi$) and checks their significance by statistical comparison with the dispersion of data from field stars. To the best of our knowledge, this is the first work that employs GAIA DR2 data in the analysis of the present sample.
Interestingly, 11 of the investigated OCs share similar ages and Galactocentric distances ($R_G$, uncertainties considered), which allows us to explore the relationships between parameters associated with their dynamical evolution. Besides, their concentration parameters are relatively low ($c$=log$(r_t/r_c)\lesssim0.75$), which places them in the lower regime of $c$ values for OCs with similar ages (Section \ref{discussion}). This characteristic makes the present sample a unique one. Although they are not in advanced dynamical stages, according to their $c$ parameters, our OCs present signs of dynamical evolution, as evidenced by their age/$t_{\textrm{rh}}$ ratios, where $t_{\textrm{rh}}$ is the half-light relaxation time, and by signs of low-mass star depletion in their mass functions (Section \ref{mass_functions}).
This paper is organized as follows: in Section \ref{data_collection_reduction}, we present our sample and give some details about the data reduction steps. In Section \ref{method}, we present the analysis procedure (structural parameters and membership determination). The results are presented in Section \ref{results} and discussed in Section \ref{discussion}. In Section \ref{conclusions}, we summarize our conclusions.
\input data_collection_reduction.tex
\input method_and_results.tex
\input discussion.tex
\input conclusions.tex
\section{Acknowledgments}
This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
\bibliographystyle{mn2e}
\section{Introduction}
In 2005, Berndtsson \cite{Bern06} found that the functional version of the classical Brunn-Minkowski inequality, i.e. the Prekopa theorem \cite{Prekopa73}, can be seen as a special case of the subharmonicity property of the Bergman kernel (see \cite{Maitani84} and \cite{MY04} for early results with a different point of view). This opened another door (the so-called \emph{complex Brunn-Minkowski theory}) for studying complex geometry by using the Brunn-Minkowski theory from convex geometry. Below, we shall give a short account of the complex Brunn-Minkowski theory, and a simple example to show our motivation for writing this paper.
Heuristically speaking, the Prekopa theorem can be seen as a version of inverse H\"older inequality. In \cite{Bern13a}, Berndtsson gave another form of the classical H\"older inequality:
\medskip
\textbf{Theorem A} [H\"older inequality]: Let $\phi(t, x)$ be convex in $t$. Then
\begin{equation}
t \mapsto \log\int_{\mathbb R^n} e^{\phi(t,x)} dx, \ dx=dx^1\wedge \cdots \wedge dx^n,
\end{equation}
is convex (if the integral is convergent).
\medskip
The proof follows by differentiating with respect to $t$:
\begin{equation}
\left(\log\int e^{\phi} \right)_{tt}=\left(\int e^{\phi} \right)^{-2} \left(\int e^{\phi} \int (\phi_{tt}e^{\phi} +\phi_t^2 e^\phi)-\big(\int \phi_t e^\phi\big)^2 \right).
\end{equation}
Notice that, by the Cauchy-Schwarz inequality, we have
\begin{equation}
\big(\int \phi_t e^\phi\big)^2 \leq \int e^{\phi} \int \phi_t^2 e^\phi.
\end{equation}
Thus $\phi_{tt} \geq 0$ implies that $\left(\log\int e^{\phi} \right)_{tt}\geq 0$.
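This convexity is also easy to confirm numerically. In the sketch below we pick the convex weight $\phi(t,x)=tx-x^2$ (our choice, for illustration), for which $\int_{\mathbb R}e^{\phi}\,dx=\sqrt{\pi}\,e^{t^2/4}$, so the log-integral has second derivative exactly $1/2$:

```python
import math

def F(t, n=4001, lo=-12.0, hi=12.0):
    """Trapezoidal estimate of log of the integral of exp(phi(t, x)) over the
    real line, for the illustrative convex choice phi(t, x) = t*x - x**2."""
    h = (hi - lo) / (n - 1)
    s = 0.0
    for i in range(n):
        x = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        s += w * math.exp(t * x - x * x)
    return math.log(s * h)

# Exactly, F(t) = log(sqrt(pi)) + t**2/4, so F'' = 1/2; a centred second
# difference of the numerical F should reproduce that value.
dt = 0.1
second_diff = (F(0.5 + dt) - 2.0 * F(0.5) + F(0.5 - dt)) / dt**2
```

The positive second difference is exactly the convexity asserted by Theorem A.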
\medskip
The following result is due to Prekopa:
\medskip
\textbf{Theorem B} [Prekopa theorem]: Let $\phi(t, x)$ be convex in $t$ and $x$. Then
\begin{equation}
t \mapsto -\log\int_{\mathbb R^n} e^{-\phi(t,x)} dx,
\end{equation}
is convex.
\medskip
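Before discussing proofs, it may help to note a simple closed-form instance of Theorem B (our example, added for illustration). Take $\phi(t,x)=(x-t)^2+t^2$, which is jointly convex since its Hessian has entries $\phi_{tt}=4$, $\phi_{xx}=2$, $\phi_{tx}=-2$ and is therefore positive definite. Then
\begin{equation*}
-\log\int_{\mathbb R} e^{-\phi(t,x)}\,dx=-\log\Big(e^{-t^2}\int_{\mathbb R} e^{-(x-t)^2}\,dx\Big)=t^2-\tfrac{1}{2}\log\pi,
\end{equation*}
which is visibly convex in $t$.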
There are many ways to prove Theorem B. A famous observation of Brascamp-Lieb (see \cite{BL76}) is that one may use weighted $L^2$-estimates of the $d$-operator to prove Theorem B.
In \cite{Bern98}, Berndtsson showed that one may also use H\"ormander's weighted $L^2$-estimates of the $\ensuremath{\overline\partial}$-operator (see \cite{Hormander65}) to prove Theorem B. Moreover, in \cite{Bern06}, he established the following complex version of Theorem B:
\medskip
\textbf{Theorem C} [Berndtsson's theorem]: Let $\phi(t, z)$ be a plurisubharmonic function on a pseudoconvex domain $D \subset \ensuremath{\mathbb{C}}^m_t\times \ensuremath{\mathbb{C}}^n_z$. Then
\begin{equation}
(t,z) \mapsto \log K^t(z,z),
\end{equation}
is plurisubharmonic or equal to $-\infty$ identically on $D$, where each $K^t$ denotes the weighted Bergman kernel associated to the fibre $D_t:=D\cap( \{t\}\times \ensuremath{\mathbb{C}}^n)$ and the weight $\phi^t:=\phi|_{D_t}$.
\medskip
In \cite{Bern06}, Berndtsson gave two proofs of Theorem C. A crucial step in his first proof is also H\"ormander's $L^2$-estimates for the $\ensuremath{\overline\partial}$-operator. Later, in \cite{Bern09}, he pointed out that it is more natural to look at Theorem C as a curvature property of the direct image bundle (see Theorem 1.1 and Theorem 1.2 in \cite{Bern09}). This is a milestone in the complex Brunn-Minkowski theory (see \cite{Bern13b}).
The complex Brunn-Minkowski theory has proved to be very useful in several complex variables and complex geometry (see \cite{Bern09a}, \cite{Bern13}, \cite{Bern14}, \cite{BL14}, \cite{BB14}, \cite{BernPaun08} and references therein). This paper is an attempt to study the curvature formula of the direct image bundle associated to general Stein fibrations (see \cite{Tsuji05}, \cite{Sch12}, \cite{LiuYang13}, \cite{GS15}, \cite{MT07} and \cite{MT08} for other generalizations and related results). The new results are the \emph{boundary term} of the curvature formula and its relation to \emph{interpolation families} of convex bodies.
Let us start by looking at an almost trivial case of Theorem B. Let
\begin{eqnarray}
\mathcal F:= \{[a(t),b(t)]\}_{0\leq t\leq 1},
\end{eqnarray}
be a family of line segments. Let
\begin{equation*}
D:=\{(t,x)\in \mathbb R^2: a(t)<x<b(t), \ 0<t<1\},
\end{equation*}
be the total space. Assume that $b(t)> a(t)$ for each $0\leq t\leq 1$ and $a, b$ are smooth on a neighborhood of $[0,1]$. Put
\begin{equation*}
\theta(a)=\frac{d^2a}{dt^2},\ \theta(b)=-\frac{d^2b}{dt^2}.
\end{equation*}
Let us introduce the following definitions:
\begin{definition} We call $\theta$ the geodesic curvature of $\mathcal F$.
\end{definition}
\begin{definition}
We call $\mathcal F$ an interpolation family if $\theta\equiv 0$.
\end{definition}
\textbf{Remark:} $\mathcal F$ is an interpolation family if and only if both $a$ and $b$ are affine functions.
\begin{definition} We call $\mathcal F$ a trivial family if there exists a real constant $c$ such that for every $0< t<1$, $[a(t),b(t)]=[a(0),b(0)]+ct$.
\end{definition}
Put
\begin{equation}
\phi(t,x)=0, \ \text{on} \ D, \ \ \phi(t,x)=\infty,\ \text{on} \ \mathbb R^2 \backslash D,
\end{equation}
then convexity of $D$ is equivalent to convexity of $\phi$. Thus Theorem B implies that if $D$ is convex then
\begin{equation}
\Phi: t \mapsto -\log (b(t)-a(t)) =-\log\int_{\mathbb R} e^{-\phi(t,x)} dx
\end{equation}
is convex on $(0,1)$. Moreover, by direct computation,
\begin{equation}
\ddot \Phi=\frac{(b-a)(\ddot a-\ddot b)+(\dot a-\dot b)^2}{(b-a)^2}, \ \ddot \Phi:=\frac{d^2\Phi}{dt^2}, \ \dot a:=\frac{da}{dt}.
\end{equation}
We call
\begin{equation}
Geo:=\frac{(b-a)(\ddot a-\ddot b)}{(b-a)^2}=\frac{\theta(a)+\theta(b)}{b-a},
\end{equation}
the geodesic term in $\ddot \Phi$ and
\begin{equation}
R:=\frac{(\dot a-\dot b)^2}{(b-a)^2}
\end{equation}
the remaining term in $\ddot \Phi$. Thus we have:
\begin{proposition}\label{pr:curvature-real} The remaining term in $\ddot\Phi$ is always non-negative. Moreover, if the total space $D$ is convex then the geodesic term in $\ddot\Phi$ is also non-negative.
\end{proposition}
\begin{proposition}\label{pr:trivial-real} Assume that the total space $D$ is convex. Then affineness of $\Phi$ is equivalent to triviality of $\mathcal F$.
\end{proposition}
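The decomposition $\ddot\Phi=Geo+R$ is easy to check numerically. The family below ($a(t)=t^2$, $b(t)=2+t$, our illustrative choice, for which the total space is convex) compares the closed-form expression against a centred finite difference:

```python
import math

a = lambda t: t * t       # lower endpoint, convex in t
b = lambda t: 2.0 + t     # upper endpoint, affine, so the total space D is convex
Phi = lambda t: -math.log(b(t) - a(t))

def Phi_tt(t):
    """Closed-form second derivative: geodesic term plus remaining term."""
    ad, bd = 2.0 * t, 1.0   # first derivatives of a and b
    add, bdd = 2.0, 0.0     # second derivatives
    w = b(t) - a(t)
    geo = (add - bdd) / w            # = (theta(a) + theta(b)) / (b - a)
    rem = (ad - bd) ** 2 / w ** 2    # the remaining term, always >= 0
    return geo + rem

t0, h = 0.3, 1e-3
fd = (Phi(t0 + h) - 2.0 * Phi(t0) + Phi(t0 - h)) / h**2
```

Both terms are non-negative here, as Proposition \ref{pr:curvature-real} predicts for a convex total space.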
In this paper, we shall study the counterparts of the above notions in complex geometry. In the next section, we shall define the notion of geodesic curvature (see Definition \ref{de:geodesic-c-bdy}) for a smooth family of smoothly bounded Stein domains (see Definition \ref{de:smoothfamily}). Then Definition \ref{de:inter-psc}, \emph{interpolation family of Stein domains}, can be seen as a generalization of Definition 1.2; and Definition \ref{de:trivial}, \emph{trivial family of Stein domains}, can be seen as a generalization of Definition 1.3.
Our main result, Theorem \ref{th:CF}, is a curvature formula associated to variation of Stein manifolds. Let $\{D_t\}$ be a smooth family of smoothly bounded $n$-dimensional Stein domains. Let $\mathcal L$ be a holomorphic line bundle on the total space $D$. Let $h$ be a smooth Hermitian metric on $\mathcal L$. We shall consider the associated family of Bergman spaces
\begin{equation*}
\mathcal H:=\{\mathcal H_t\},
\end{equation*}
where each $\mathcal H_t$ is the space of $L^2$-holomorphic $\mathcal L|_{D_t}$-valued $(n,0)$-forms on $D_t$. Then Theorem \ref{th:CF} reads as follows:
\medskip
\textbf{Main Theorem}: \emph{Assume that $\mathcal L$ is flat or relatively ample. Then the curvature of $\mathcal H$ contains two terms: the geodesic term and the remaining term. The remaining term is always semi-positive in the sense of Nakano. Moreover, if the total space is pseudoconvex and $\mathcal L$ is semi-positive on the total space, then the geodesic term is also semi-positive in the sense of Nakano.}
\medskip
Thus our main result can be seen as a generalization of Proposition \ref{pr:curvature-real}. In section 2.5, we shall give a definition of holomorphic sections of the \textbf{dual} of $\mathcal H$ (see Definition \ref{de:holomorphic-dual-new}).
Then our main application, Corollary \ref{co:dual-psh}, can be stated as follows:
\medskip
\textbf{Application}: \emph{If the total space is pseudoconvex and $\mathcal L$ is non-negative on the total space then $\log||f||$ is plurisubharmonic for every holomorphic section $f$ of the dual of $\mathcal H$.}
\medskip
Corollary \ref{co:dual-psh} can be seen as a generalization of Theorem C (see the remark behind Theorem 1.1 in \cite{Bern09}). In section 5.2, we shall also use Corollary \ref{co:dual-psh} to study variation of the Bergman projection of currents with compact support. In particular, we shall give a variation formula for the derivatives of the Bergman kernel (see Theorem \ref{th:VF}).
In section 6, we shall discuss the counterparts of Proposition \ref{pr:trivial-real} in the complex case. We shall show that, under some assumptions (see Theorem \ref{th:flat}), flatness of $\mathcal H$ and triviality of $D$ are equivalent. As a direct corollary, we shall give a triviality criterion for a class of holomorphic motions (see Corollary \ref{co:last}) of planar domains.
\medskip
\textbf{Acknowledgement}: I would like to thank Bo-Yong Chen for introducing me to this topic, and Bo Berndtsson for many inspiring discussions on the curvature of direct images, his useful comments on this paper and his suggestions on the writing. Thanks are also given to Qing-Chun Ji for his constant support and encouragement.
I would also like to thank Laszlo Lempert for pointing out a mistake in the first version of this paper regarding the assumptions of Lemma \ref{le:key-lemma}. After the first version of this manuscript was completed, Laszlo Lempert kindly sent me a preprint \cite{Tran} of Dat Vu Tran. In \cite{Tran}, Dat Vu Tran gave a careful study of the curvature of Hilbert fields (see also \cite{LS14}) and showed that the curvature operator associated to a family of planar domains with pseudoconvex total space is semi-positive, which covers our main theorem in the case where each fibre is one-dimensional.
Last but not least, thanks are due to the referee for pointing out several inaccuracies in this paper and hopefully making this paper more readable.
\section{Basic definitions and results}
\subsection{List of notations} \ \
\medskip
1. $\pi: \mathcal X \to \mathbb B$ is a holomorphic submersion.
2. $D$ is an open subset in $\mathcal X$, $D_t:=D\cap \pi^{-1}(t).$
3. $\mathcal L$ is a holomorphic line bundle over $\mathcal X$ and $L_t:=\mathcal L|_{D_t}$.
4. $\mathcal H_t$ is the space of $L^2$ holomorphic $L_t$-valued $(n,0)$-forms on $D_t$.
5. $\mathcal H:=\{\mathcal H_t\}_{t\in \mathbb B}$.
6. $i_t$: the inclusion mapping $D_t\hookrightarrow D$.
7. $t$: coordinate system on $\mathbb B$, $t^j$ components of $t$, $\partial_{t^j}=\partial/\partial t^j$, $\ensuremath{\overline\partial}_{t^k}=\partial/\partial \bar t^k$.
8. $\sum d\bar t^j\otimes \ensuremath{\overline\partial}_{t^j}$ is the $\ensuremath{\overline\partial}$-operator on $\mathcal H$.
9. $\sum dt^j\otimes D_{t^j}$ is the $(1,0)$-part of the Chern connection on $\mathcal H$.
10. $\Theta_{j\bar k}:=[D_{t^j}, \ensuremath{\overline\partial}_{t^k}]$: curvature operators on $\mathcal H$.
11. $\theta_{j\bar k}(\rho)$: geodesic curvature of $\{\partial D_t\}$.
12. $\eta, \zeta, \mu$: local coordinate system on a fixed fibre $D_t$ of $D$, $\mu^\lambda$: components of $\mu$.
13. $e^{-\phi}$: local representative of a Hermitian metric $h$ on a line bundle $\mathcal L$.
14. $\phi_j:=\partial\phi/\partial t^j$, $\phi_{j\lambda}:=\partial^2\phi/\partial t^j\partial \mu^{\lambda}$, $\phi_{\lambda\bar\nu}:=\partial^2\phi/\partial \mu^\lambda\partial \bar\mu^{\nu}$.
15. $(\rho^{\bar\lambda\nu})$, $(\phi^{\bar\lambda\nu})$: inverse matrix of $(\rho_{\lambda\bar\nu})$, $(\phi_{\lambda\bar\nu})$ respectively.
16. $\delta_{V}:=V~\lrcorner ~$ means contraction of a form with a vector field $V$.
17. $\alpha,\beta\in \mathbb N^n$, $|\alpha|:=\alpha_1+\cdots+\alpha_n$, $f_\alpha:=\partial^{|\alpha|}f/(\partial\mu^1)^{\alpha_1}\cdots
(\partial\mu^n)^{\alpha_n}$.
\subsection{Set up}
Let $\pi:\mathcal X\to \mathbb B$ be a holomorphic submersion from an $(n+m)$-dimensional complex manifold $\mathcal X$ to the unit ball $\mathbb B$ in $\mathbb C^m$. Let $D$ be an open subset of $\mathcal X$. Put
\begin{equation*}
D_t=D\cap \pi^{-1}(t).
\end{equation*}
The following assumption will be used throughout this paper.
\medskip
\textbf{A1:} \emph{The restriction of $\pi$ to the closure of $D$ (with respect to the topology of the total space $\mathcal X$) is \textbf{proper}, and $D_t$ is non-empty for every $t$ in $\mathbb B$.}
\medskip
Let $\mathcal L$ be a holomorphic line bundle over $\mathcal X$. Put
\begin{equation*}
L_t:=\mathcal L|_{D_t}.
\end{equation*}
We shall consider the following family of vector spaces associated to $\{D_t\}_{t\in\mathbb B}$:
\begin{equation*}
\mathcal H_t:=\{f\in H^0(D_t, K_{D_t}+L_t): \int_{D_t}i^{n^2}\{f,f\} <\infty \},
\end{equation*}
where $\{\cdot, \cdot\}$ is the canonical sesquilinear pairing (see page 268 in \cite{Demailly12}) with respect to a fixed smooth Hermitian metric $h$ on $\mathcal L$. If we fix a local holomorphic frame, say $e$, of $\mathcal L$, and write $h(e,e)=e^{-\phi}$, then
\begin{equation*}
\{f(z)\otimes e,f(z)\otimes e\}=e^{-\phi(z)} f(z)\wedge \overline{f(z)}.
\end{equation*}
Put
\begin{equation*}
\mathcal H=\{\mathcal H_t\}_{t\in\mathbb B}.
\end{equation*}
\medskip
\textbf{Remark:} We know that each fibre $\mathcal H_t$ is an infinite dimensional Hilbert space; moreover, the elements of $\mathcal H_t$ may not be smooth up to the boundary. Thus it is not easy to give a good definition of the curvature operator on $\mathcal H$ (see \cite{LS14} for a careful study of this subject). In order to make things easier, \emph{we shall only define the curvature operator on sections that are smooth up to the boundary.}
\medskip
Let $i_t$ be the inclusion mapping
\begin{equation}
i_t: D_t\hookrightarrow D.
\end{equation}
We shall introduce the following definition:
\begin{definition}\label{de:smooth} We call $u: t\mapsto u^t \in \mathcal H_t$
a smooth section of $\mathcal H$ if there exists an $\mathcal L$-valued $(n,0)$-form, say $\mathbf u$, such that
\begin{equation}
i_t^* \mathbf u=u^t, \ \forall\ t\in\mathbb B,
\end{equation}
and $\mathbf u$ is \textbf{smooth up to the boundary} of $D$. We shall denote by $\Gamma(\mathcal H)$ the space of smooth sections of $\mathcal H$.
\end{definition}
\textbf{Remark}: One may choose $\mathbf u$ in the above definition such that $\mathbf u$ is smooth \textbf{on the total space $\mathcal X$}.
\medskip
In order to give a precise definition of the $\ensuremath{\overline\partial}$-operator on $\mathcal H$, we shall introduce the following definition (see \cite{Bern11}, see also Page 46 in \cite{KSp} or \cite{Wang13} for the admissible coordinate method):
\begin{definition}\label{de:re} We call a smooth $\mathcal L$-valued $(n,0)$-form $\textbf{u}$ on $\mathcal X$ a \textbf{representative} of $u\in\Gamma (\mathcal H)$ if $i_t^* (\textbf{u})=u^t$ for all $t\in\mathbb B$.
\end{definition}
\medskip
\textbf{$\ensuremath{\overline\partial}$-operator on $\mathcal H$:} Let $\mathbf u$ be a representative of $u\in\Gamma(\mathcal H)$. Then one may write
\begin{equation}
\ensuremath{\overline\partial} \mathbf u=\sum dt^j\wedge \eta_j +\sum d\bar t^j\wedge \nu_j.
\end{equation}
Since $\mathbf u\wedge dt$, $dt:=dt^1\wedge \cdots \wedge dt^m$, does not depend on the choice of $\mathbf u$ and
\begin{equation}
\ensuremath{\overline\partial} (\mathbf u\wedge dt)= \sum d\bar t^j\wedge \nu_j \wedge dt,
\end{equation}
we know that each $i_t^*\nu_j$ does not depend on the choice of $\nu_j$. Let us define
\begin{equation}\label{eq:def-dbar25}
\ensuremath{\overline\partial}_{t^j} u: t\mapsto i_t^*\nu_j.
\end{equation}
Then the $\ensuremath{\overline\partial}$-operator on $\mathcal H$ can be defined as
\begin{equation*}
\ensuremath{\overline\partial} u:=\sum d\bar t^j\otimes \ensuremath{\overline\partial}_{t^j} u.
\end{equation*}
From the definition, we know that $\ensuremath{\overline\partial} u\equiv 0$ on $\mathbb B$ if and only if $\mathbf u \wedge dt$ is holomorphic on $D$. We shall introduce the following definition:
\begin{definition}\label{de: holo-section-H} Let $u$ be a smooth section of $\mathcal H$. We call $u$ a holomorphic section of $\mathcal H$ if $\mathbf u\wedge dt$ is holomorphic on $D$.
\end{definition}
\textbf{Chern connection on $\mathcal H$:} We shall write the $(1,0)$-part of the Chern connection on $\mathcal H$ as $\sum dt^j\otimes D_{t^j}$. By definition, each $D_{t^j}$ should satisfy
\begin{equation}\label{eq:chern-connection}
\partial_{t^j}(u,v)=(D_{t^j} u, v) + (u, \ensuremath{\overline\partial}_{t^j} v), \ \forall\ u, v\in \Gamma(\mathcal H),
\end{equation}
where $\partial_{t^j}=\partial/\partial t^j$ and $(\cdot,\cdot)$ denotes the inner product on $\mathcal H_t$.
\begin{definition}\label{de:cc} We say that the Chern connection is well defined on $\mathcal H$ if for every $1\leq j\leq m$, there exists a $\ensuremath{\mathbb{C}}$-linear operator $D_{t^j} : \Gamma (\mathcal H) \to \Gamma (\mathcal H)$ such that \eqref{eq:chern-connection} is true.
\end{definition}
\textbf{Remark:} By using Hamilton's theorem (see \cite{Hamilton79}), in section 4 (see Proposition \ref{pr:chern}), we shall prove that the Chern connection is well defined on $\mathcal H$ if $D$ satisfies $\mathbf{A1}$ and the following assumption:
\medskip
\textbf{A2:} \emph{There is a smooth real valued function $\rho$ on $\mathcal X$ such that for each $t\in\mathbb B$, $\rho|_{\pi^{-1}(t)}$ is strictly plurisubharmonic in a neighborhood of the closure $($with respect to the topology on $\pi^{-1}(t))$ of $D_t$. Moreover, $D_t=\{\rho <0\}\cap\pi^{-1}(t)$ and the gradient of $\rho|_{\pi^{-1}(t)}$ has no zero point on $\partial D_t$. }
\medskip
\textbf{Remark:} In section 3.2, we shall prove that $\mathbf A1$ and $\mathbf A2$ together imply that every smooth vector field on the base $\mathbb B$ has a smooth lift to $\mathcal X$ that is tangent to the boundary of $D$. Thus, in this case, $\{D_t\}$ is locally trivial as a smooth family.
\begin{definition}\label{de:smoothfamily} $\{D_t\}$ is said to be a smooth family of smoothly bounded Stein domains if $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$.
\end{definition}
Assume that $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$. Then we can define the curvature operators on $\mathcal H$ as follows:
\begin{equation}\label{eq:new-1}
\Theta_{j\bar k}u:=[D_{t^j}, \ensuremath{\overline\partial}_{t^k}]u=D_{t^j}\ensuremath{\overline\partial}_{t^k}u-\ensuremath{\overline\partial}_{t^k} D_{t^j}u, \ \forall \ u\in \Gamma(\mathcal H).
\end{equation}
\subsection{Previous results}
We shall give a short account of Berndtsson's results on the\ \textquotedblleft geodesic\textquotedblright\ formula for $\Theta_{j\bar k}$. Let us recall the following notions in his curvature formula.
\medskip
\textbf{Geodesic curvature in the space of K\"ahler metrics}: Let us denote by $\Theta(\mathcal L,h)$ the curvature of $(\mathcal L,h)$. If we write $h$ locally as $e^{-\phi}$ then we have
\begin{equation}
\Theta(\mathcal L,h) = \partial\ensuremath{\overline\partial} \phi.
\end{equation}
If $D$ is a product, say $D=D_0 \times\mathbb B$, and
\begin{equation}
i\partial\ensuremath{\overline\partial} \phi|_{D_0\times \{t\}} >0, \ \forall\ t\in\mathbb B,
\end{equation}
then $\{i\partial\ensuremath{\overline\partial} \phi|_{D_0\times \{t\}} \}$ can be seen as a family of K\"ahler metrics on $D_0$. Assume further that $m=1$. Then there exists a smooth function, say $c(\phi)$, such that
\begin{equation}
\frac{(i\partial\ensuremath{\overline\partial} \phi)^{n+1}}{(n+1)!}= c(\phi)\frac{(i\partial\ensuremath{\overline\partial} \phi)^{n}}{n!} \wedge idt\wedge d\bar t.
\end{equation}
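In local coordinates, $c(\phi)$ can be written as the Schur complement of the fibre block of the complex Hessian of $\phi$ (a standard computation, stated here for $m=1$; the convention $z^0=t$, $z^\lambda=\mu^\lambda$ for the fibre directions $1\leq\lambda\leq n$ is ours):

```latex
c(\phi)=\phi_{t\bar t}-\sum \phi_{t\bar\nu}\,\phi^{\bar\nu\lambda}\,\phi_{\lambda\bar t}
       =\frac{\det\left(\partial^2\phi/\partial z^a\partial\bar z^b\right)_{0\le a,b\le n}}
             {\det\left(\phi_{\lambda\bar\nu}\right)_{1\le \lambda,\nu\le n}}.
```

In particular, $c(\phi)\equiv 0$ is precisely the homogeneous complex Monge--Amp\`ere equation for $\phi$ on $D_0\times\mathbb B$.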
By Proposition 3 in \cite{Donaldson99}, if $\{i\partial\ensuremath{\overline\partial} \phi|_{D_0\times \{t\}} \}$ is $S^1$-invariant then
\medskip
\emph{The path $\{i\partial\ensuremath{\overline\partial} \phi|_{D_0\times \{t\}} \}$ defines a geodesic in the space of K\"ahler metrics on $D_0$ if and only if $c(\phi)\equiv 0$.}
\medskip
In general, $c(\phi)$ is called the \emph{geodesic curvature} in the space of K\"ahler metrics. The geodesic curvature plays a crucial role in the variation of K\"ahler metrics on projective manifolds; see \cite{Mabuchi86}, \cite{Semmes88} and \cite{Donaldson99}, to cite just a few. Another way to look at the geodesic curvature is through the notion of \emph{horizontal lift}.
\medskip
\textbf{Horizontal lift}: The notion of horizontal lift was introduced by Siu in \cite{Siu86}. In \cite{Bern11}, Berndtsson found that one may also define the notion of horizontal lift with respect to a relative K\"ahler form (a smooth $d$-closed $(1,1)$-form that is positive on each fibre). Let us recall his definition:
\medskip
\emph{Let $\omega$ be a relative K\"ahler form on $\mathcal X$. A $(1,0)$-vector field $V$ on $\mathcal X$ is said to be horizontal with respect to $\omega$ if $\langle V, W\rangle_{\omega}=0$ for every $(1,0)$-vector field $W$ such that $\pi_*(W)=0$. Let $v$ be a vector field on $\mathbb B$. We call $V$ a \emph{horizontal lift} of $v$ if $V$ is horizontal with respect to $\omega$ and $\pi_*V=v$.}
\medskip
By Berndtsson's formula (see page 3 in \cite{Bern11}), if we write $\omega=i\partial\ensuremath{\overline\partial}\phi$ locally then for each $1\leq j\leq m$, $\partial/\partial t^j$ has a unique horizontal lift, say $V_j$, as follows:
\begin{equation}
V_j=\partial/\partial t^j-\sum \phi_{j\bar \nu}\phi^{\bar \nu \lambda} \partial/\partial \mu^\lambda.
\end{equation}
Moreover, if $m=1$ then
\begin{equation}
\langle V_1, V_1\rangle_{i\partial\ensuremath{\overline\partial}\phi}=c(\phi).
\end{equation}
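As a quick symbolic sanity check of the last two formulas in the case $m=n=1$ (not part of the argument; the weight $\phi$ below is an arbitrary sample, and $\bar t,\bar\mu$ are treated as independent variables in the Wirtinger sense), one can verify that $V_1$ is orthogonal to the fibre direction and that $\langle V_1,V_1\rangle_{i\partial\ensuremath{\overline\partial}\phi}=c(\phi)$:

```python
# Symbolic sanity check for m = n = 1 (a sketch; phi is an arbitrary sample).
import sympy as sp

t, tb, mu, mub = sp.symbols('t tb mu mub')
d = sp.diff

# sample real weight: phi = |mu|^2 + |t|^2 + |t|^2 |mu|^2
phi = mu*mub + t*tb + t*tb*mu*mub

phi_ttb,  phi_tmub  = d(phi, t, tb),  d(phi, t, mub)
phi_mutb, phi_mumub = d(phi, mu, tb), d(phi, mu, mub)

# c(phi): determinant of the full complex Hessian over the fibre Hessian,
# read off from (i ddbar phi)^2/2 = c(phi) (i ddbar phi)|_fibre ^ i dt ^ dtbar.
c_phi = sp.simplify(sp.Matrix([[phi_ttb,  phi_tmub],
                               [phi_mutb, phi_mumub]]).det() / phi_mumub)

# horizontal lift V_1 = d/dt + b d/dmu, with b = -phi_{t mubar}/phi_{mu mubar}
b, bb = -phi_tmub/phi_mumub, -phi_mutb/phi_mumub   # bb = conjugate of b

horiz = sp.simplify(phi_tmub + b*phi_mumub)        # <V_1, d/dmu> must vanish
norm2 = sp.simplify(phi_ttb + b*phi_mutb + bb*phi_tmub + b*bb*phi_mumub)

print(horiz, sp.simplify(norm2 - c_phi))           # 0 0
```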
By this formula, it is natural to define the notion of geodesic curvature for general base dimension $m$ and general fibration $\pi$.
\medskip
\textbf{Geodesic curvature of $\{h|_{L_t}\}$}: Let us assume that
\begin{equation*}
i\Theta(\mathcal L,h)|_{D_t}>0, \ \forall \ t\in\mathbb B, \ \text{or} \ \Theta(\mathcal L,h)\equiv 0 \ \text{on}\ D.
\end{equation*}
In case $i\Theta(\mathcal L,h)|_{D_t}> 0$ for all $t\in\mathbb B$, let $V_j^h$ be the horizontal lift of the base vector field $\partial/\partial t^j$ with respect to $i\Theta(\mathcal L,h)$. We shall define the geodesic curvature of $\{h|_{L_t}\}$ as:
\begin{equation}\label{eq:cjkh}
c_{j\bar k}(h):=\langle V_j^h, V_k^h\rangle_{i\Theta(\mathcal L,h)}; \ \text{and} \ c_{j\bar k}(h):=0 \ \text{if} \ \Theta(\mathcal L,h)\equiv 0 \ \text{on} \ D.
\end{equation}
Another notion in Berndtsson's curvature formula is the following:
\medskip
\textbf{Remaining term in H\"ormander's $L^2$-estimate (product case)}: Assume that $D=D_0\times \mathbb B$. Then the vector fields $\partial/\partial t^j$ are well defined on $\mathcal X$. Fix \begin{equation}\label{eq:new-2}
u_j\in\Gamma(\mathcal H), \ 1\leq j\leq m, \ \ \text{(see Definition \ref{de:smooth})}.
\end{equation}
Let $a$ be the $L^2$-minimal solution of
\begin{equation*}
\ensuremath{\overline\partial}^t(\cdot)=c,
\end{equation*}
where $\ensuremath{\overline\partial}^t$ denotes the Cauchy-Riemann operator on $D_t=D_0\times \{t\}$ and
\begin{equation}
c:=\sum ( \partial/\partial t^j \lrcorner ~\Theta(\mathcal L,h))|_{D_t} \wedge u_j=\sum\ensuremath{\overline\partial}^t\phi_{j}\wedge u_j.
\end{equation}
Then we have the following remaining term in H\"ormander's $L^2$-estimate
\begin{equation*}
R:=||c||^2_{i\Theta(\mathcal L,h)|_{D_t}} -||a||^2.
\end{equation*}
By H\"ormander's theorem (see \cite{Hormander65}), if $D_0$ is pseudoconvex then $R$ is non-negative. Thus in our case, $R$ is always non-negative. Now we can state the following theorem of Berndtsson (see \cite{Bern06} and Theorem 1.1 in \cite{Bern09}):
\begin{theorem}\label{th:CF-06} Assume that $D=D_0\times \mathbb B$, where $D_0$ is a strongly pseudoconvex domain with smooth boundary. If $i\Theta(\mathcal L,h)|_{D_0\times \{t\}}>0, \ \forall \ t\in\mathbb B$, then we have
\begin{equation}\label{eq:final-CF-06}
\sum(\Theta_{j\bar k} u_j, u_k)= \sum(c_{j\bar k}(h) u_j, u_k) + R, \ R\geq 0.
\end{equation}
If $\Theta(\mathcal L,h)\equiv 0 \ \text{on}\ D$ then
\begin{equation}\label{eq:final-CF-06-1}
\sum(\Theta_{j\bar k} u_j, u_k)\equiv 0.
\end{equation}
\end{theorem}
\textbf{Remark}: \eqref{eq:final-CF-06} can be found in the proof of Theorem 1.1 in \cite{Bern09}. \eqref{eq:final-CF-06-1} is a direct application of formula $(2.4)$ in \cite{Bern09}. A special case of \eqref{eq:final-CF-06} for variation of the Bergman kernel is given in \cite{Bern06}.
\medskip
The counterpart of Theorem \ref{th:CF-06} for a proper K\"ahler fibration was given in \cite{Bern11} by Berndtsson. Let us recall the following notions in his formulae:
\medskip
\textbf{Remaining term in H\"ormander's $L^2$-estimate (Polarized fibration)}: Let
\begin{equation}
\pi: D \to \mathbb B,
\end{equation}
be a proper holomorphic submersion. Assume that $\mathcal L$ is a relatively ample line bundle on $D$, i.e. $i\Theta(\mathcal L,h)|_{D_t}>0, \ \forall \ t\in\mathbb B$. By the Ohsawa-Takegoshi extension theorem (see \cite{OT87} and \cite{Siu02}), we know that the dimension of $\mathcal H_t:=H^0(D_t, K_{D_t}+L_t)$ does not depend on $t$ and our bundle $\mathcal H$ is just the holomorphic vector bundle associated to the zeroth direct image sheaf $\pi_*\mathcal O(K_{D/\mathbb B}+\mathcal L)$. For each $j$, let $V_j^h$ be the horizontal lift of $\partial/\partial t^j$ with respect to $i\Theta(\mathcal L,h)$. Let us denote by
\begin{equation}
\kappa: T_{\mathbb B} \to \{H^1(D_t, T_{D_t})\}_{t\in\mathbb B},
\end{equation}
the Kodaira-Spencer map associated to the holomorphic fibration $\pi$. By Theorem 5.4 in \cite{Kodaira86}, we know that each $(\ensuremath{\overline\partial} V_j^h)|_{D_t}$ can be seen as a representative of the Kodaira-Spencer class $\kappa(\partial/\partial t^j)$. For each $1\leq j\leq m$, let $u_j$ be a smooth section of $\mathcal H$. Put
\begin{equation*}
b=\sum (\ensuremath{\overline\partial} V_j^h)|_{D_t}\lrcorner ~u_j.
\end{equation*}
Let $a$ be the $L^2$-minimal solution of
\begin{equation*}
\ensuremath{\overline\partial}^t(\cdot)=\partial^t_\phi b,
\end{equation*}
where $\partial^t_\phi $ is the restriction to $D_t$ of the $(1,0)$-part of the Chern connection of $\mathcal L$. Then we have the following H\"ormander-type remaining term:
\begin{equation*}
R^h:=||b||^2_{i\Theta(\mathcal L,h)|_{D_t}} -||a||^2\geq 0.
\end{equation*}
\medskip
\textbf{Remaining term in H\"ormander's $L^2$-estimate (K\"ahler fibration)}: In this case, let us assume that the total space of the proper holomorphic submersion $\pi: D \to \mathbb B$ possesses a K\"ahler form $\omega$. Let $\mathcal L$ be a flat line bundle over $D$, i.e.
$\Theta(\mathcal L,h)\equiv 0 \ \text{on}\ D$. By Theorem 8.1 in \cite{Bern09}, we know that $\pi_*\mathcal O(K_{D/\mathbb B}+\mathcal L)$ is locally free. Let $\mathcal H$ be the associated vector bundle. For each $j$, let $u_j$ be a smooth section of $\mathcal H$. Put
\begin{equation*}
b=\sum (\ensuremath{\overline\partial} V_j^\omega)|_{D_t}\lrcorner ~u_j,
\end{equation*}
where each $V_j^{\omega}$ is the horizontal lift of $\partial/\partial t^j$ with respect to the K\"ahler form $\omega$. Consider
\begin{equation*}
\ensuremath{\overline\partial}^t(a)=\partial^t_\phi b,
\end{equation*}
where $a$ is the $L^2$-minimal solution. Then the associated H\"ormander remaining term is
\begin{equation*}
R^{\omega}:=||b||^2_{\omega|_{D_t}} -||a||^2\geq 0.
\end{equation*}
Now we can state the following theorem of Berndtsson (see \cite{Bern11}):
\begin{theorem}\label{th:CF-11} Let $\pi: D \to \mathbb B$ be a proper holomorphic submersion. Let $(\mathcal L,h)$ be a holomorphic line bundle over $D$. If $\mathcal L$ is relatively ample then we have
\begin{equation}\label{eq:final-CF-11}
\sum(\Theta_{j\bar k} u_j, u_k)= \sum(c_{j\bar k}(h) u_j, u_k) + R^h.
\end{equation}
If $\mathcal L$ is flat then
\begin{equation}\label{eq:final-CF-11-1}
\sum(\Theta_{j\bar k} u_j, u_k)\equiv R^{\omega}.
\end{equation}
\end{theorem}
\textbf{Remark}: If $\mathcal L$ is trivial then \eqref{eq:final-CF-11-1} is just Griffiths' formula (see page 33 in \cite{Griffiths84}). In general, if the total space $D$ is K\"ahler and there is a smooth Hermitian metric on $\mathcal L$ with non-negative curvature then by Theorem 1.2 in \cite{Bern09}, we know that
\begin{equation*}
\sum(\Theta_{j\bar k} u_j, u_k)\geq 0.
\end{equation*}
\medskip
If the boundary of each fibre is non-empty then, in general, a boundary term should also appear in the curvature formula (see \cite{MY04}). Our main result is a study of the curvature of $\mathcal H$ for fibrations with boundary.
\subsection{Basic notions for fibrations with boundary} Let $D=\{D_t\}_{t\in\mathbb B}$ be a smooth family of smoothly bounded Stein domains (see Definition \ref{de:smoothfamily}). We shall define the notion of \textquotedblleft geodesic curvature\textquotedblright\ of $\{\partial D_t\}$ by using the notion of \emph{horizontal lift with respect to the Levi-form on the boundary} of $D$. Let $\rho$ be the defining function in $\mathbf{A2}$. We call a $(1,0)$-\emph{tangent} vector field $V$ on $\partial D$ horizontal with respect to the Levi-form if
\begin{equation*}
\langle V, W \rangle_{i\partial\ensuremath{\overline\partial}\rho}=0, \ \text{on}\ \partial D,
\end{equation*}
for every $(1,0)$-\emph{tangent} vector field $W$ on $\partial D$ such that $\pi_*(W)=0$.
\medskip
\textbf{Remark}: From the above definition, the notion of horizontal lift with respect to the Levi-form on the boundary is compatible with the usual notion of horizontal lift if we only consider the category of tangent vector fields on $\partial D$.
As before, we shall also define the notion of geodesic curvature in the following sense:
\begin{definition}\label{de:geodesic-c-bdy} Assume that $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$ and that for each $j$, $\partial/\partial t^j$ has a horizontal lift, say $V_j^\rho$, to $\partial D$ with respect to the Levi-form $i\partial\ensuremath{\overline\partial}\rho$ on $\partial D$. Then
we call
\begin{equation}\label{eq:cjkrho}
\theta_{j\bar k}(\rho):=\langle V_j^\rho, V_k^\rho\rangle_{i\partial\ensuremath{\overline\partial}\rho},
\end{equation}
the geodesic curvature of $\{\partial D_t\}_{t\in\mathbb B}$ with respect to the Levi form $i\partial\ensuremath{\overline\partial}\rho$ on $\partial D$.
\end{definition}
Now a natural question is whether each base vector field has a horizontal lift (with respect to the Levi-form) to $\partial D$ or not. We have the following lemma:
\begin{lemma}[Key Lemma]\label{le:key-lemma} Assume that $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$. Put
\begin{equation}
\omega:=i\partial\ensuremath{\overline\partial}(-\log-\rho),
\end{equation}
where $\rho$ is the defining function in $\mathbf{A2}$. For each $j$, let $V_j$ be the horizontal lift (on $D$) of $\partial/\partial t^j$ with respect to $\omega$. Then each $V_j$ is smooth up to the boundary of $D$ and $V_j|_{\partial D}$ is horizontal with respect to the Levi form $i\partial\ensuremath{\overline\partial}\rho$ on $\partial D$. In particular, every smooth base vector field has a unique smooth horizontal lift with respect to the Levi form and the geodesic curvature $\theta_{j\bar k}(\rho)$ is well defined on $\partial D$.
\end{lemma}
As a generalization of Definition 1.2, we shall introduce the following definition:
\begin{definition}\label{de:inter-psc} Assume that $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$. We call $\{D_t\}_{t\in\mathbb B}$ an interpolation family in $\mathcal X$ if $\theta_{j\bar k}(\rho) \equiv 0$ on $\partial D$.
\end{definition}
\textbf{Remaining term in H\"ormander's $L^2$-estimate (fibration with boundary)}: For each $1\leq j\leq m$, let $u_j$ be a smooth section of $\mathcal H$ (see Definition \ref{de:smooth}).
Put
\begin{equation}\label{eq:cb}
c=\sum (V_j ~ \lrcorner ~\Theta(\mathcal L,h))|_{D_t} \wedge u_j, \ b=\sum(\ensuremath{\overline\partial} V_j)|_{D_t} ~\lrcorner~ u_j,
\end{equation}
where each $V_j$ is the vector field in Lemma \ref{le:key-lemma}. Let $a$ be the $L^2$-minimal solution of
\begin{equation*}
\ensuremath{\overline\partial}^t(\cdot)=\partial^t_\phi b+c,
\end{equation*}
in $L^2(D_t, K_{D_t}+L_t)$. Put
\begin{equation*}
\omega^t:=i\partial\ensuremath{\overline\partial}(-\log-\rho)|_{D_t}.
\end{equation*}
Then we shall define
\begin{equation}\label{eq:new-3}
R:=||c||^2_{i\Theta(\mathcal L,h)|_{D_t}}+||b||^2_{\omega^t}-||a||^2_{\omega^t},
\end{equation}
if $i\Theta(\mathcal L,h)|_{D_t}>0$; and define
\begin{equation*}
R:=||b||^2_{\omega^t}-||a||^2_{\omega^t},
\end{equation*}
if $\Theta(\mathcal L,h)\equiv 0$. We shall show in Theorem \ref{th:L2} that $R$ is always non-negative.
\subsection{Main theorem}
\begin{theorem}\label{th:CF} Assume that $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$. If
\begin{equation}\label{eq:CF)}
i\Theta(\mathcal L,h)|_{D_t}>0, \ \forall \ t\in\mathbb B, \ \text{or} \ \Theta(\mathcal L,h)\equiv 0 \ \text{on}\ D,
\end{equation}
then, in the notation introduced above (see \eqref{eq:new-1}, \eqref{eq:cjkh}, \eqref{eq:cjkrho}, \eqref{eq:new-3}), we have the following curvature formula for $\mathcal H$:
\begin{equation}\label{eq:final-CF}
\sum(\Theta_{j\bar k} u_j, u_k)= \sum\int_{\partial D_t} \theta_{j\bar k}(\rho) \langle u_j,u_k\rangle d \sigma
+ \sum(c_{j\bar k}(h) u_j, u_k) + R,
\end{equation}
where $\langle \cdot, \cdot \rangle$ denotes the point-wise inner product with respect to $i\partial\ensuremath{\overline\partial}\rho|_{D_t}$ and $h$, and the surface measure $d\sigma$ with respect to $i\partial\ensuremath{\overline\partial}\rho|_{D_t}$ is defined as
\begin{equation*}
d\sigma:= \frac{\sum \rho_{\bar\lambda}\rho^{\bar\lambda\nu}\partial/\partial \mu^\nu}{\sum \rho_{\bar\lambda}\rho^{\bar\lambda\nu}\rho_{\nu}} ~ \lrcorner ~ \frac{(i\partial\ensuremath{\overline\partial}\rho|_{D_t})^n}{n!}.
\end{equation*}
\end{theorem}
\subsection{Applications} \ \
\medskip
We shall show how to use our main theorem to study the complex-geometry counterparts of Theorem 1.4. Let us first give some positivity criteria for $\mathcal H$. Recall that $\mathcal H$ is said to be semi-positive in the sense of Nakano if $\sum(\Theta_{j\bar k} u_j, u_k) \geq 0$ for all smooth sections $u_1, \cdots, u_m$ of $\mathcal H$. As a direct consequence of our main theorem, we shall prove that:
\begin{corollary}\label{co:nakano} Assume that $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$. If $D$ is Stein and $ i\Theta(\mathcal L,h)\geq 0$ on $D$ then $\mathcal H$ is semi-positive in the sense of Nakano.
\end{corollary}
Another very useful notion of positivity is Griffiths positivity. Recall that $\mathcal H$ is said to be Griffiths semi-positive if $\sum(\Theta_{j\bar k} u, u) \xi_j\bar\xi_k \geq 0$ for every smooth section $u$ of $\mathcal H$ and every $\xi\in\ensuremath{\mathbb{C}}^m$. It is known that a \emph{finite rank} vector bundle is Griffiths semi-positive if and only if its dual bundle is Griffiths semi-negative. Moreover, the following is true:
\medskip
\textbf{Criterion for Griffiths semi-positivity}: A \emph{finite rank} vector bundle is Griffiths semi-positive if and only if the $\log$-norms of the holomorphic sections of its dual bundle are plurisubharmonic.
\medskip
Then a natural question is whether there is a similar criterion in our case, i.e. for the infinite rank vector bundle $\mathcal H$. As a first step, we have to define the notion of a \emph{holomorphic section of the \textbf{dual} of $\mathcal H$}.
\begin{definition}\label{de:smooth-dual-new} For each $t\in \mathbb B$, let $f^t$ be a $\ensuremath{\mathbb{C}}$-linear mapping from $\mathcal H_t$ to $\ensuremath{\mathbb{C}}$. We call $f: t \mapsto f^t$ a smooth section of the \textbf{dual} of $\mathcal H$ if there exists a smooth section, say $P(f)$, of $\mathcal H$ such that
\begin{equation}
f^t(u^t)=(u^t, P(f)^t),
\end{equation}
for every $u^t\in \mathcal H_t$ and every $t\in\mathbb B$. We shall write the norm of $f^t$ as $||f^t||:=||P(f)^t||$.
\end{definition}
\begin{definition}\label{de:holomorphic-dual-new} Let $f: t \mapsto f^t$ be a smooth section of the \textbf{dual} of $\mathcal H$. We call $f$ a holomorphic section if
\begin{equation}
t \mapsto f^t(u^t)
\end{equation}
is holomorphic for every holomorphic section $u$ of $\mathcal H$.
\end{definition}
\textbf{Remark:} Inspired by \cite{BL14}, we shall give a careful study of those holomorphic sections of the dual of $\mathcal H$ defined by a family of \emph{currents with compact support} in the fibres.
\medskip
Now we are ready to state the main application of our main theorem:
\begin{corollary}\label{co:dual-psh} Assume that $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$. If $D$ is Stein and $i\Theta(\mathcal L,h)\geq 0$ on $D$ then
\begin{equation}
\log||f||: t \mapsto \log ||f^t||
\end{equation}
is plurisubharmonic for every holomorphic section $f$ of the dual of $\mathcal H$.
\end{corollary}
\textbf{Remark:} If we choose $f^t$ as a fixed Dirac measure then we get the plurisubharmonicity of the Bergman kernel \cite{Bern06}. In section 5, we shall also use Corollary \ref{co:dual-psh} to study the variation of the derivatives of the Bergman kernel.
\medskip
\textbf{Triviality and flatness:} In case every fibre $D_t$ is compact without boundary, we know that criteria for flatness of $\mathcal H$ are quite useful in the study of uniqueness problems for extremal K\"ahler metrics (see \cite{Bern09a} and \cite{Bern14}). In our case, we call $\mathcal H$ a \emph{flat bundle} if
\begin{equation}
\Theta_{j\bar k} u \equiv 0, \ \forall \ 1\leq j,k\leq m,
\end{equation}
for every smooth section $u$ of $\mathcal H$. From Theorem 1.4, one may guess that flatness of $\mathcal H$ should be related to triviality of the fibration $\pi$. Let us introduce the following definition as a generalization of Definition 1.3.
\begin{definition}\label{de:trivial} Assume that $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$. We call $\{D_t\}_{t\in\mathbb B}$ a trivial family, or $D$ is trivial, if there exists a biholomorphic mapping $\Phi: D_0\times \mathbb B \to D$ such that
\begin{equation}
\Phi(D_0\times\{t\})=D_t, \ \forall \ t\in\mathbb B,
\end{equation}
and $\Phi_{*}(\partial/\partial t^j)$ extends to a smooth $(1,0)$-vector field on $\mathcal X$ for every $1\leq j\leq m$.
\end{definition}
The following theorem can be seen as a generalization of Proposition 1.5:
\begin{theorem}\label{th:flat} Assume that $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$. Assume further that the total space $D$ is Stein. If $K_{\mathcal X/\mathbb B}+\mathcal L$ is trivial on each fibre of $\pi$ and $\Theta(\mathcal L,h)\equiv 0$ on $D$ then flatness of $\mathcal H$ and triviality of $D$ are equivalent.
\end{theorem}
As a direct corollary of Theorem \ref{th:flat}, we have
\begin{corollary}\label{co:last} Let $D_0$ be a smooth domain in $\ensuremath{\mathbb{C}}$. Let
\begin{equation}
F:(t,z)\mapsto(t,z+a(t)\bar z)
\end{equation}
be a mapping from $\mathbb B\times D_0$ to $\mathbb B\times \ensuremath{\mathbb{C}}$. Assume that $a$ is holomorphic on $\mathbb B$ and
\begin{equation}
|a| <1, \ \text{on}\ \mathbb B, \ a(0)=0.
\end{equation}
Then $\{F(\{t\}\times D_0)\}_{t\in\mathbb B}$ is a trivial family if and only if $a\equiv0$ on $\mathbb B$.
\end{corollary}
\textbf{Remark:} One may also give a direct proof of Corollary \ref{co:last} by introducing the notion of Kodaira-Spencer class for deformations with boundary; we leave this to the interested reader.
\section{Geodesic curvature and interpolation family}
In this section, we shall give two proofs of Lemma \ref{le:key-lemma} and show that our definition of an interpolation family (see Definition \ref{de:inter-psc}) is compatible with the usual definition of an interpolation family of Hermitian norms on $\mathbb C^n$ for $n\geq 1$.
\medskip
\textbf{Relation with Levi-flatness}: By the definition of the geodesic curvature $\theta_{j\bar k}(\rho)$ (see Definition \ref{de:geodesic-c-bdy}) of $\{\partial D_t\}$, we have:
\medskip
\emph{ Assume that every fibre $D_t$ is one dimensional. Then $\{D_t\}$ is an interpolation family if and only if the boundary of $D$ is Levi-flat.}
\medskip
In the case of higher fibre dimension, the criterion for being an interpolation family (see \cite{Semmes88} and references therein) is not so obvious. We will give a study of it in section 3.3. Let us prove our Key Lemma first.
\subsection{First proof of Lemma \ref{le:key-lemma}}\ \
\medskip
Since $V_j$ is a lift of $\partial/\partial t^j$, one may write locally
\begin{equation}\label{eq:vj-1}
V_j=\partial/\partial t^j-\sum v_j^\lambda\partial/\partial \mu^\lambda.
\end{equation}
Put $\psi=-\log-\rho$. By definition, we know that each $V_j$ is determined by
\begin{equation*}
\langle V_j, \partial/\partial \mu^\nu\rangle_{i\partial\ensuremath{\overline\partial}\psi}\equiv 0, \ \text{on} \ D, \ \forall \ 1\leq \nu\leq n,
\end{equation*}
thus
\begin{equation}\label{eq:vj-2}
v_j^\lambda=\sum\psi_{j\bar\nu}\psi^{\bar\nu\lambda}, \ \text{on} \ D.
\end{equation}
\textbf{Fibre dimension one case}: If $n=1$, by direct computation, we have
\begin{equation}\label{eq:keylemma}
V_j:=\frac{\partial}{\partial t^j}-\frac{\rho_j\rho_{\bar\mu}-\rho\rho_{j\bar\mu}}{|\rho_\mu|^2-\rho\rho_{\mu\bar\mu}}
\frac{\partial}{\partial \mu}.
\end{equation}
By our assumption $\mathbf{A2}$, $\rho_\mu$ does not vanish near the boundary and $\rho_{\mu\bar\mu} >0$ near the boundary; thus $V_j$ is smooth up to the boundary of $D$. Furthermore, \eqref{eq:keylemma} implies that $V_j(\rho)=0$ on $\{\rho=0\}$. Notice that, in case $n=1$, every tangent vector field on $\partial D$ is horizontal with respect to the Levi-form on $\partial D$. Thus we know that if $n=1$ then $V_j|_{\partial D}$ is the horizontal lift of $\partial/\partial t^j$ with respect to the Levi-form.
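The one-dimensional computation above can be verified symbolically (a sketch, not part of the proof; $\bar t,\bar\mu$ are treated as independent variables and the defining function $\rho$ below is an arbitrary sample with non-vanishing $\rho_\mu$ on the boundary):

```python
# Symbolic check of the n = 1 formula for V_j (a sketch; rho is a sample).
import sympy as sp

t, tb, mu, mub = sp.symbols('t tb mu mub')
d = sp.diff

# sample defining function: rho = |mu + t|^2 - |t|^2 - 1
rho = mu*mub + t*mub + tb*mu - 1
psi = -sp.log(-rho)

# coefficient of the psi-horizontal lift: v = psi_{t mubar} / psi_{mu mubar}
v = sp.simplify(d(psi, t, mub) / d(psi, mu, mub))

# the closed rational formula for the coefficient of d/dmu in V_j
v_formula = (d(rho, t)*d(rho, mub) - rho*d(rho, t, mub)) \
            / (d(rho, mu)*d(rho, mub) - rho*d(rho, mu, mub))
print(sp.simplify(v - v_formula))          # 0

# V_j(rho) = rho_t - v rho_mu is a smooth multiple of rho,
# hence it vanishes on {rho = 0}:
Vrho = sp.simplify(d(rho, t) - v*d(rho, mu))
print(sp.cancel(Vrho/rho))                 # regular up to the boundary
```

For this sample the quotient $V_j(\rho)/\rho$ is smooth up to the boundary, so $V_j(\rho)=0$ on $\{\rho=0\}$, as claimed.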
\medskip
\textbf{General case}: The general case can also be proved by direct computation (see the second proof below), but there is a simpler argument: if $n\geq2$, fix $x_0\in\partial D_0$; then by our assumption $\mathbf{A2}$, $(\rho_{\lambda\bar \nu})(x_0)$ is a positive definite matrix, so one may choose local coordinates around $x_0$ such that
\begin{equation*}
\left(\rho_{\lambda\bar \nu}(x_0)\right)=I_n, \ \rho_\nu(x_0)=0, \ \forall \ \nu\geq2,
\end{equation*}
where $I_n$ is the identity matrix. Now we have
\begin{equation*}
v_j^1(x_0)=\frac{\rho_j\rho_{\bar1}-\rho\rho_{j\bar1}}{|\rho_1|^2-\rho}(x_0)=\frac{\rho_j}{\rho_1}(x_0), \ v_j^\lambda(x_0)=\rho_{j\bar \lambda}(x_0), \ \forall \ \lambda\geq2.
\end{equation*}
By assumption $\mathbf{A2}$, we know that $\rho_1(x_0)\neq 0$, thus $V_j$ is smooth up to the boundary and
\begin{equation*}
V_j(\rho)(x_0)=\rho_j(x_0)-\sum v_j^\lambda\rho_\lambda(x_0)=\rho_j(x_0)-\rho_j(x_0)=0.
\end{equation*}
Moreover, we have
\begin{equation*}
\langle V_j, \partial/\partial \mu^\lambda \rangle_{i\partial\ensuremath{\overline\partial}\rho}(x_0)=\rho_{j\bar\lambda}(x_0)-v_j^{\lambda}(x_0)=0 , \ \forall \ \lambda\geq 2,
\end{equation*}
which implies that each $V_j$ is horizontal with respect to the Levi form. The proof is complete.
\subsection{Second proof of Lemma \ref{le:key-lemma}}\ \
\medskip
In this subsection, we will give an explicit formula for each $V_j$. Here we shall use some computations from \cite{Choi15}. Put
\begin{equation*}
\rho^\alpha=\sum \rho^{\bar\beta\alpha}\rho_{\bar\beta}, \ \ \ |\partial\rho|^2=\sum\rho^{\alpha}\rho_{\alpha}.
\end{equation*}
By \eqref{eq:vj-1} and \eqref{eq:vj-2}, we have
\begin{equation}\label{eq:vj-4}
V_j=\partial/\partial t^j-\sum\psi_{j\bar\nu}\psi^{\bar\nu\lambda} \partial/\partial \mu^\lambda,
\end{equation}
where $\psi=-\log-\rho$. One may verify that (see \cite{Choi15})
\begin{equation}\label{eq:vj-5}
\psi^{\bar\alpha\beta}=(-\rho)\left( \rho^{\bar\alpha\beta}+\frac{\rho^{\bar\alpha}\rho^{\beta}}{\rho-|\partial\rho|^2}
\right).
\end{equation}
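As a consistency check, for $n=1$ the identity \eqref{eq:vj-5} can be verified symbolically for a generic $\rho$; here $|\partial\rho|^2=\sum\rho^\alpha\rho_\alpha$ is understood as the squared norm of $\partial\rho$ in the fibre metric $i\partial\ensuremath{\overline\partial}\rho$, which for $n=1$ is $|\rho_\mu|^2/\rho_{\mu\bar\mu}$ (a sketch):

```python
# Symbolic check of the inverse-metric identity for n = 1 and generic rho.
import sympy as sp

t, tb, mu, mub = sp.symbols('t tb mu mub')
rho = sp.Function('rho')(t, tb, mu, mub)
d = sp.diff

psi_mumub = d(-sp.log(-rho), mu, mub)      # fibre metric of psi = -log(-rho)

r_mu, r_mub, r_mumub = d(rho, mu), d(rho, mub), d(rho, mu, mub)
inv       = 1/r_mumub                      # rho^{mubar mu}
rho_up    = r_mub/r_mumub                  # rho^mu
rho_upbar = r_mu/r_mumub                   # rho^{mubar}
grad2     = r_mu*r_mub/r_mumub             # |d rho|^2 in the metric i ddbar rho

formula = (-rho)*(inv + rho_upbar*rho_up/(rho - grad2))
print(sp.simplify(1/psi_mumub - formula))  # 0
```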
By direct computation, we have
\begin{equation}\label{eq:vj-6}
\sum\psi_{j\bar\alpha}\psi^{\bar\alpha\beta}=-\frac{\rho^{\beta}\rho_{j}}{\rho-|\partial\rho|^2}+
\sum \left(\rho^{\bar\alpha\beta}\rho_{j\bar\alpha}+
\frac{\rho^{\bar\alpha}\rho^{\beta}\rho_{j\bar\alpha}}{\rho-|\partial\rho|^2}\right).
\end{equation}
From \eqref{eq:vj-6}, we know that each $V_j$ is smooth up to the boundary of $D$ and is tangent to the boundary of $D$. By a direct computation, we also have that each $V_j$ is horizontal with respect to the Levi-form of the boundary of $D$. Thus the proof of Lemma \ref{le:key-lemma} is complete.
\subsection{Geodesic curvature for $\{\partial D_t\}$}\ \
\medskip
Denote by $\hat V_j$ the horizontal lift of $\partial/\partial t^j$ with respect to $i\partial\ensuremath{\overline\partial}\rho$. By the computation leading to \eqref{eq:vj-2}, we have
\begin{equation}\label{eq:vj-7}
\hat V_j= \frac{\partial}{\partial t^j}- \sum \rho_{j\bar\alpha} \rho^{\bar\alpha\beta}\frac{\partial}{\partial \mu^{\beta}} .
\end{equation}
By \eqref{eq:vj-4} and \eqref{eq:vj-6} and a direct computation, we get
\begin{equation}\label{eq:vj-8}
\theta_{j\bar k} (\rho)=\langle V_j, V_k\rangle_{i\partial\ensuremath{\overline\partial}\rho} =c_{j\bar k}(\rho)+\frac{|\partial \rho|^2 \hat V_j(\rho)\overline{\hat V_k(\rho)} }{(\rho-|\partial\rho|^2)^2},
\end{equation}
where
\begin{equation}\label{eq:vj-9}
c_{j\bar k}(\rho):= \langle \hat V_j, \hat V_k\rangle_{i\partial\ensuremath{\overline\partial}\rho}.
\end{equation}
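The passage from \eqref{eq:vj-6} to \eqref{eq:vj-8} can be organized as follows (a sketch in the notation above, writing $N:=\sum\rho^\beta\,\partial/\partial\mu^\beta$ for the vertical gradient field of $\rho$):

```latex
% Comparing \eqref{eq:vj-6} with \eqref{eq:vj-7} gives
V_j=\hat V_j+\frac{\hat V_j(\rho)}{\rho-|\partial\rho|^2}\,N .
% N is vertical and each \hat V_k is horizontal with respect to
% i\partial\ensuremath{\overline\partial}\rho, so the cross terms drop out, while
% \langle N,N\rangle_{i\partial\ensuremath{\overline\partial}\rho}
%   =\sum\rho^{\bar\lambda}\rho_{\bar\lambda}=|\partial\rho|^2 . Hence
\langle V_j,V_k\rangle_{i\partial\ensuremath{\overline\partial}\rho}
 =\langle\hat V_j,\hat V_k\rangle_{i\partial\ensuremath{\overline\partial}\rho}
 +\frac{|\partial\rho|^2\,\hat V_j(\rho)\overline{\hat V_k(\rho)}}{(\rho-|\partial\rho|^2)^2}.
```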
Thus we have:
\begin{proposition}\label{pr:theta-c} Let $\{D_t\}$ be a smooth family of smoothly bounded Stein domains. Then \begin{equation}
\sum \theta_{j\bar k}(\rho) \xi^j\bar\xi^k \geq \sum c_{j\bar k}(\rho)\xi^j\bar\xi^k, \ \ \forall \ \xi\in \ensuremath{\mathbb{C}}^m,
\end{equation}
on $\partial D$. Moreover, $\theta_{j\bar k}(\rho)\equiv c_{j\bar k}(\rho)$ if and only if each $\hat V_j$ is tangent to the boundary of $D$.
\end{proposition}
\subsection{Relation with interpolation of norms}\ \
\medskip
Let $h$ be a smooth Hermitian norm on the trivial vector bundle $\mathbb B\times \mathbb C^n$. Then for each $t\in\mathbb B$, $h^t:=h|_{t\times\mathbb C^n}$ defines a Hermitian norm on $\mathbb C^n$. It is known that $\{h^t\}$ defines an interpolation family if and only if the curvature of $h$ vanishes identically on $\mathbb B$ (see Semmes \cite{Semmes88}). Denote by $N_t$ the unit ball in $\mathbb C^n$ defined by $h^t$. We shall prove that:
\begin{proposition}\label{pr:interpolation} $\{h^t\}$ defines an interpolation family if and only if the geodesic curvature of $\{\partial N_t\}$ vanishes identically.
\end{proposition}
\begin{proof} Let us write
\begin{equation*}
h^t(z)=\sum h_{\alpha\bar\beta}(t)z^\alpha\bar z^{\beta}.
\end{equation*}
By definition,
\begin{equation*}
\rho(t,z):=h^t(z)-1
\end{equation*}
is a defining function for $\{\partial N_t\}$. By direct computation, $\hat V_j(\rho)$ vanishes identically. Thus, by Proposition \ref{pr:theta-c}, the geodesic curvature $\theta_{j\bar k}(\rho)$ of $\{\partial N_t\}$ equals $c_{j\bar k}(\rho)$. By \eqref{eq:vj-9},
\begin{equation}\label{eq:vj-10}
c_{j\bar k}(\rho)= \langle \hat V_j, \hat V_k\rangle_{i\partial\ensuremath{\overline\partial}\rho}=\rho_{j\bar k}-\sum \rho_{j\bar\alpha} \rho^{\bar\alpha\beta}\rho_{\bar k\beta},
\end{equation}
thus we have
\begin{equation}\label{eq:vj-11}
\theta_{j\bar k}(\rho)= \sum \left(h_{\alpha\bar\beta,j\bar k}-\sum h_{\alpha\bar\lambda,j}h^{\bar\lambda\nu}h_{\nu\bar\beta,\bar k} \right) z^{\alpha}\bar z^{\beta}.
\end{equation}
Thus $\theta_{j\bar k}(\rho)$ vanishes identically if and only if
\begin{equation}\label{eq:vj-12}
h_{\alpha\bar\beta,j\bar k}-\sum h_{\alpha\bar\lambda,j}h^{\bar\lambda\nu}h_{\nu\bar\beta,\bar k} \equiv 0, \ \ \text{on}\ \mathbb B.
\end{equation}
Notice that \eqref{eq:vj-12} says precisely that the curvature of $h$ vanishes identically. The proof is complete.
\end{proof}
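In the simplest case $n=1$, the proof can be replayed symbolically: for a scalar norm $h^t(z)=h(t)|z|^2$ (the function $h$ below is a generic placeholder), $\hat V_1(\rho)$ vanishes and the geodesic curvature reduces to $|z|^2\,h\,(\log h)_{t\bar t}$, which vanishes exactly when the curvature of $h$ does (a sketch):

```python
# Symbolic replay of the proof for n = 1 (a sketch; h is a generic norm).
import sympy as sp

t, tb, z, zb = sp.symbols('t tb z zb')
h = sp.Function('h')(t, tb)
d = sp.diff

rho = h*z*zb - 1                 # defining function of the unit ball N_t

# hat V_1 = d/dt - (rho_{t zbar}/rho_{z zbar}) d/dz; it kills rho:
coef = d(rho, t, zb)/d(rho, z, zb)
hatV_rho = sp.simplify(d(rho, t) - coef*d(rho, z))

# geodesic curvature reduces to c(rho) by the proposition above:
theta = sp.simplify(d(rho, t, tb) - d(rho, t, zb)*d(rho, z, tb)/d(rho, z, zb))

# theta = |z|^2 h (log h)_{t tbar}: vanishes iff the curvature of h does
print(hatV_rho, sp.simplify(theta - z*zb*h*d(sp.log(h), t, tb)))   # 0 0
```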
\section{Curvature formula}
\subsection{Definition of the Chern connection}\ \
\medskip
By Definition \ref{de:cc}, it suffices to find a linear operator $D_{t^j}$ from $\Gamma (\mathcal H)$ to $\Gamma (\mathcal H)$ such that
\begin{equation}\label{eq:chern-connection-1}
\partial_{t^j}(u,v)=(D_{t^j} u, v) + (u, \ensuremath{\overline\partial}_{t^j} v), \ \forall\ u, v\in \Gamma(\mathcal H),
\end{equation}
holds. By Definition \ref{de:re}, the left-hand side of \eqref{eq:chern-connection-1} can be written as
\begin{equation}\label{eq:variation}
\partial_{t^j} (\pi_*(c_n\{\textbf{u},\textbf{v}\})),
\end{equation}
where $\mathbf u, \mathbf v$ are arbitrary representatives (see Definition \ref{de:re}) of $u,v$ and $c_n= i^{n^2}$, so that $c_n\{\textbf{u},\textbf{u}\}$ is a positive $(n,n)$-form on the total space.
Assume that $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$. Let $V_j$ be the vector fields in Lemma \ref{le:key-lemma}. Since $V_j(\rho)=0$ on $\partial D$, by Corollary \ref{co:vfi}, we have
\begin{equation}\label{eq:Lie-first}
\partial_{t^j} (\pi_*\{\textbf{u},\textbf{v}\})=\pi_*(L_{V_j}\{\textbf u, \textbf v\}).
\end{equation}
Let $d^{\mathcal L}$ be the Chern connection on $\mathcal L$. Then we have
\begin{equation*}
d\{\textbf u, \textbf v\}=\{d^{\mathcal L} \textbf u, \textbf v\}+(-1)^n \{\textbf u,d^{\mathcal L} \textbf v\}.
\end{equation*}
Using Cartan's formula,
\begin{equation*}
L_{V_j}=d\delta_{V_j}+\delta_{V_j}d,
\end{equation*}
we get that
\begin{equation*}
L_{V_j}\{\textbf u, \textbf v\}=\{L_j \textbf u, \textbf v\}+\{\textbf u,L_{\bar j}\textbf v\},
\end{equation*}
where
\begin{equation*}
L_j :=d^{\mathcal L} \delta_{V_j}+\delta_{V_j}d^{\mathcal L},
\end{equation*}
and
\begin{equation*}
L_{\bar j}:=d^{\mathcal L} \delta_{\bar V_j}+\delta_{\bar V_j}d^{\mathcal L}.
\end{equation*}
Since $ \textbf v$ is an $(n,0)$-form, we have
\begin{equation}\label{eq:chern01}
L_{\bar j} \textbf v=\delta_{\bar V_j} \ensuremath{\overline\partial} \textbf v.
\end{equation}
By \eqref{eq:def-dbar25}, we know that $ L_{\bar j} \textbf v$ is a representative of $\ensuremath{\overline\partial}_{t^j} v$. Thus we have
\begin{equation}\label{eq:chern-connection-11}
\partial_{t^j}(u,v)=\pi_*(c_n\{L_j \textbf{u},\textbf{v}\}) + (u, \ensuremath{\overline\partial}_{t^j} v), \ \forall\ u, v\in \Gamma(\mathcal H).
\end{equation}
Notice that the $(n,0)$-part of $L_j \textbf u$ can be written as
\begin{equation}\label{eq:(n,0)}
(\partial_\phi \delta_{V_j} +\delta_{V_j}\partial_{\phi}) \textbf u,
\end{equation}
where $\partial_\phi$ denotes the $(1,0)$-component of $d^{\mathcal L}$. Thus we have
\begin{equation*}
\pi_*(c_n\{L_j \textbf{u},\textbf{v}\})(t) =\left(i_t^*(\partial_\phi \delta_{V_j} +\delta_{V_j}\partial_{\phi}) \textbf u, v^t\right).
\end{equation*}
Our assumption $\mathbf{A2}$ implies that
\begin{equation}
\{v^t: v\in\Gamma(\mathcal H)\}
\end{equation}
is dense in the Hilbert space $\mathcal H_t$ (see the proof of Lemma \ref{le:extension} below). Thus there is a \textbf{unique} element, say $\sigma^t$, in $\mathcal H_t$ such that
\begin{equation}\label{eq:riesz-repre}
\left(i_t^*(\partial_\phi \delta_{V_j} +\delta_{V_j}\partial_{\phi}) \textbf u, v^t\right)=(\sigma^t, v^t),
\end{equation}
which implies that there is a \textbf{unique} element, $\sigma^t$, in $\mathcal H_t$ such that
\begin{equation}\label{eq:chern-connection-111234}
\partial_{t^j}(u,v)=(\sigma^t, v^t) + (u, \ensuremath{\overline\partial}_{t^j} v), \ \forall\ u, v\in \Gamma(\mathcal H).
\end{equation}
Thus by Definition \ref{de:cc}, $\mathcal H$ has a Chern connection if and only if
\begin{equation*}
\sigma: t\mapsto \sigma^t
\end{equation*}
defines a smooth section of $\mathcal H$, i.e.
\begin{equation*}
\sigma \in \Gamma(\mathcal H).
\end{equation*}
By \eqref{eq:riesz-repre}, $\sigma^t$ is the Bergman projection to $\mathcal H_t$ of $i_t^*((\partial_\phi \delta_{V_j} +\delta_{V_j}\partial_{\phi}) \textbf u)$. Thus by Hamilton's theorem (see \cite{Hamilton77}, \cite{Hamilton79}, \cite{GreeneK82} or Appendix \ref{ss:SBK}), if $\{D_t\}$ is a smooth family of smoothly bounded Stein domains then $\sigma\in \Gamma(\mathcal H)$ and
\begin{equation}\label{eq:41000}
D_{t^j}u=\sigma.
\end{equation}
Thus we have:
\begin{proposition}\label{pr:chern} If $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$ then the Chern connection is well defined on $\mathcal H$.
\end{proposition}
Now we are ready to compute the curvature of the Chern connection of $\mathcal H$. First we shall show how to get a curvature formula for \textbf{holomorphic} sections of $\mathcal H$.
\subsection{Curvature formula for holomorphic sections}\ \
\medskip
Let $u_1, \cdots, u_m$ be holomorphic sections (see Definition \ref{de: holo-section-H}) of $\mathcal H$. By definition of the Chern connection and \eqref{eq:new-1}, we have
\begin{equation}\label{eq:curvature2}
(\Theta_{j\bar k} u_j,u_k)=(D_{t^j}u_j,D_{t^k}u_k)- (u_j,u_k)_{j\bar k}.
\end{equation}
By \eqref{eq:chern-connection-11}, we have
\begin{equation}\label{eq:jbark-11}
(u_j,u_k)_{j\bar k}=\ensuremath{\overline\partial}_{t^k}\pi_*(c_n\{L_j \textbf u_j,\textbf u_k\})= \pi_*(c_n\{L_j \textbf u_j,L_k\textbf u_k\})+ \pi_*(c_n\{L_{\bar k} L_j \textbf u_j,\textbf u_k\}).
\end{equation}
Since each $u_j$ is a holomorphic section, we have $i_t^*(L_{\bar k} \textbf u_j)=(\ensuremath{\overline\partial}_{t^k}u_j)(t)\equiv 0$. Thus
\begin{equation}\label{eq:jbark-1}
\pi_*\{L_j L_{\bar k} \textbf u_j,\textbf u_k\}=
\partial_{t^j} \pi_*\{L_{\bar k}\textbf u_j,\textbf u_k\}
-\pi_*\{L_{\bar k}\textbf u_j,L_{\bar j}\textbf u_k\}\equiv 0,
\end{equation}
which implies that
\begin{equation}\label{eq:jbark}
(u_j,u_k)_{j\bar k}=\pi_*(c_n\{L_j \textbf u_j,L_k\textbf u_k\}) -\pi_*(c_n\{[ L_j, L_{\bar k}]\textbf u_j, \textbf u_k\}).
\end{equation}
We shall use the following formula:
\begin{proposition}\label{pr:key} $ [L_j,L_{\bar k}]= d^{\mathcal L} \delta_{[V_j,\bar V_k]}+\delta_{[V_j,\bar V_k]} d^{\mathcal L}+ \langle V_j, V_k\rangle_{i\Theta(\mathcal L,h)}.$
\end{proposition}
\begin{proof} By definition, locally we have
\begin{equation*}
L_j=L_{V_j}-V_j(\phi), \ L_{\bar k}= L_{\bar V_k}.
\end{equation*}
Thus
\begin{equation*}
[L_j,L_{\bar k}]=[L_{V_j}, L_{\bar V_k}] -[V_j(\phi),L_{\bar V_k}]=L_{[V_j,\bar V_k]}+ \bar V_k V_j(\phi).
\end{equation*}
By Cartan's formula, we have
\begin{equation*}
L_{[V_j,\bar V_k]}=d^{\mathcal L} \delta_{[V_j,\bar V_k]}+\delta_{[V_j,\bar V_k]} d^{\mathcal L}+ \delta_{[V_j,\bar V_k]} \partial\phi.
\end{equation*}
Thus
\begin{equation*}
[L_j,L_{\bar k}]-\left(d^{\mathcal L} \delta_{[V_j,\bar V_k]}+\delta_{[V_j,\bar V_k]} d^{\mathcal L}\right)=
\delta_{[V_j,\bar V_k]} \partial\phi+\bar V_k V_j(\phi).
\end{equation*}
By direct computation, we have
\begin{equation}\label{eq:strange}
\delta_{[V_j,\bar V_k]} \partial\phi+\bar V_k V_j(\phi)=\langle V_j, V_k\rangle_{i\Theta(\mathcal L,h)}.
\end{equation}
Thus this proposition follows.
\end{proof}
Now we can prove the following:
\begin{lemma}\label{le:boundary} If $i\Theta(\mathcal L,h)|_{D_t}>0$, $\forall \ t\in \mathbb B$, then
\begin{equation}\label{eq:boundary^1}
\sum \pi_*(c_n\{[ L_j, L_{\bar k}]\textbf u_j, \textbf u_k\})=||c||^2_{i\Theta(\mathcal L,h)|_{D_t}}+ B+ \sum(c_{j\bar k}(h) u_j, u_k),
\end{equation}
where $c$ is defined by \eqref{eq:cb} and $B$ is the boundary term defined by
\begin{equation*}
B:=\sum\int_{\partial D_t} \theta_{j\bar k}(\rho) \langle u_j,u_k\rangle d \sigma.
\end{equation*}
If $i\Theta(\mathcal L,h) \equiv 0$ on $D$ then
\begin{equation}\label{eq:boundary^2}
\sum \pi_*(c_n\{[ L_j, L_{\bar k}]\textbf u_j, \textbf u_k\})=B.
\end{equation}
\end{lemma}
\begin{proof} By the above proposition, we have
\begin{equation}\label{eq:Lie-bern-1}
\sum \pi_*(c_n\{[ L_j, L_{\bar k}]\textbf u_j, \textbf u_k\})=\sum c_n\int_{\partial D_t}\{\delta_{[V_j,\bar V_k]}u_j, u_k\}+I,
\end{equation}
where
\begin{equation}
I:=\sum (\langle V_j, V_k\rangle_{i\Theta(\mathcal L,h)} u_j, u_k).
\end{equation}
Now the boundary term can be written as
\begin{equation*}
\sum c_n\int_{\partial D_t}\{\delta_{[V_j,\bar V_k]}u_j, u_k\}= \sum \int_{\partial D_t} ( \delta_{[V_j,\bar V_k]}\partial \rho) \langle u_j, u_k \rangle d\sigma.
\end{equation*}
We shall prove $\delta_{[V_j,\bar V_k]} \partial\rho = \theta_{j\bar k}(\rho)$ on $\partial D$. In fact, by \eqref{eq:strange}, we have
\begin{equation*}
\delta_{[V_j,\bar V_k]} \partial\rho+\bar V_k V_j(\rho)=\langle V_j, V_k\rangle_{i\partial\ensuremath{\overline\partial}\rho},
\end{equation*}
and by our key lemma, $V_j|_{\partial D}=V_j^\rho$, thus $\bar V_k V_j(\rho)\equiv 0$ on $\partial D$ and
\begin{equation*}
\delta_{[V_j,\bar V_k]} \partial\rho \equiv \theta_{j\bar k}(\rho),\ \text{on}\ \partial D.
\end{equation*}
Thus
\begin{equation}\label{eq:4.19}
B=\sum c_n\int_{\partial D_t}\{\delta_{[V_j,\bar V_k]}u_j, u_k\},
\end{equation}
and \eqref{eq:Lie-bern-1} implies \eqref{eq:boundary^2}.
Now let us prove \eqref{eq:boundary^1}: By \eqref{eq:cjkh}, we have
\begin{equation*}
\langle V_j, V_k\rangle_{i\Theta(\mathcal L,h)}=c_{j\bar k}(h)+\langle V_j-V_j^h, V_k-V_k^h\rangle_{i\Theta(\mathcal L,h)|_{D_t}}.
\end{equation*}
Since
\begin{equation*}
(V_j-V_j^h)~ \lrcorner ~ ( i\Theta(\mathcal L,h)|_{D_t})= ((V_j-V_j^h)~ \lrcorner ~i\Theta(\mathcal L,h))|_{D_t}=(V_j~ \lrcorner ~i\Theta(\mathcal L,h))|_{D_t},
\end{equation*}
by \eqref{eq:cb}, we have
\begin{equation}\label{eq:Lie-bern-2}
I =\sum (c_{j\bar k}(h)u_j, u_k)+ ||c||^2_{i\Theta(\mathcal L,h)|_{D_t}}.
\end{equation}
Thus \eqref{eq:boundary^1} follows.
\end{proof}
By \eqref{eq:curvature2} and \eqref{eq:jbark}, we have
\begin{equation}\label{eq:curvature21}
(\Theta_{j\bar k} u_j,u_k)=\pi_*(c_n\{[ L_j, L_{\bar k}]\textbf u_j, \textbf u_k\}) +(D_{t^j}u_j,D_{t^k}u_k)-\pi_*(c_n\{L_j \textbf u_j,L_k\textbf u_k\}) .
\end{equation}
Let $a^j$ be the $(n,0)$-part of $i_t^*(L_j \textbf u_j)$ and $b^j$ be the $(n-1,1)$-part of $i_t^*(L_j \textbf u_j)$, i.e.
\begin{equation*}
a^j=i_t^*(\partial_\phi \delta_{V_j}+\delta_{V_j}\partial_\phi) \textbf u_j=i_t^*[\partial_\phi,\delta_{V_j}] \textbf u_j,
\end{equation*}
and
\begin{equation*}
b^j=i_t^*(\ensuremath{\overline\partial} \delta_{V_j}+\delta_{V_j}\ensuremath{\overline\partial})\textbf u_j=(\ensuremath{\overline\partial} V_j)|_{D_t} ~ \lrcorner ~ u_j.
\end{equation*}
Then we have
\begin{equation}\label{eq:curvature22}
||\sum D_{t^j}u_j||^2-\sum \pi_*(c_n\{L_j \textbf u_j,L_k\textbf u_k\})=-||a||^2-\pi_*(c_n\{b,b\}),
\end{equation}
where $b$ is defined in \eqref{eq:cb} and
\begin{equation*}
a=\sum ((D_{t^j}u_j)(t)-a^j).
\end{equation*}
We shall prove that:
\begin{proposition}\label{pr:abc-fun} $a$ is the $L^2$-minimal solution of
\begin{equation}\label{eq:dbar-equation-3}
\ensuremath{\overline\partial}^t(a)=\partial_\phi^t b+c,
\end{equation}
where $b$ and $c$ are defined in \eqref{eq:cb}.
\end{proposition}
\begin{proof} Since $i_t^*(\ensuremath{\overline\partial} \textbf u_j )\equiv 0$ and $i_t^*(\partial_\phi \textbf u_j )\equiv 0$, we have
\begin{equation}\label{eq:dbar-equation}
\ensuremath{\overline\partial}^t a^j+\partial_\phi^t b^j= i_t^*(\ensuremath{\overline\partial}[\partial_\phi,\delta_{V_j}]+\partial_\phi[\ensuremath{\overline\partial}, \delta_{V_j}]) \textbf u_j=i_t^*([\ensuremath{\overline\partial},[\partial_\phi,\delta_{V_j}]]+[\partial_\phi,[\ensuremath{\overline\partial}, \delta_{V_j}]])\textbf u_j.
\end{equation}
Since, by the graded Jacobi identity for the odd operators $\ensuremath{\overline\partial}$, $\partial_\phi$ and $\delta_{V_j}$,
\begin{equation}\label{eq:dbar-equation-1}
[\ensuremath{\overline\partial},[\partial_\phi,\delta_{V_j}]]+[\partial_\phi,[\ensuremath{\overline\partial}, \delta_{V_j}]]+[\delta_{V_j}, [\ensuremath{\overline\partial},\partial_\phi]]\equiv 0,
\end{equation}
and $[\ensuremath{\overline\partial},\partial_\phi]\equiv\Theta(\mathcal L, h)$, we get that
\begin{equation}\label{eq:dbar-equation-2}
\ensuremath{\overline\partial}^t a^j+\partial_\phi^t b^j=-(V_j~\lrcorner~ \Theta(\mathcal L, h))|_{D_t} \wedge u_j.
\end{equation}
Recall that by \eqref{eq:41000}, each $D_{t^j} u_j$ is just the Bergman projection to $\mathcal H_t$ of $a^j$. Thus this proposition follows from \eqref{eq:dbar-equation-2}.
\end{proof}
By Lemma \ref{le:boundary}, \eqref{eq:curvature21} and \eqref{eq:curvature22}, we have
\begin{equation}\label{eq:curvature31}
\sum(\Theta_{j\bar k} u_j, u_k)= \sum\int_{\partial D_t} \theta_{j\bar k}(\rho) \langle u_j,u_k\rangle d \sigma
+ \sum(c_{j\bar k}(h) u_j, u_k) + R',
\end{equation}
where
\begin{equation*}
R'=||c||^2_{i\Theta(\mathcal L,h)|_{D_t}}-\pi_*(c_n\{b,b\})-||a||^2.
\end{equation*}
Now let us prove that $R'=R$. It is enough to prove that
\begin{equation*}
-\pi_*(c_n\{b,b\})=||b||^2_{\omega^t}.
\end{equation*}
This follows directly from the following lemma:
\begin{lemma}\label{le:primitive} $b$ is primitive with respect to $\omega^t:=i\partial\ensuremath{\overline\partial}(-\log(-\rho))|_{D_t}$.
\end{lemma}
\begin{proof} Recall that
\begin{equation*}
b=\sum (\ensuremath{\overline\partial} V_j)|_{D_t} ~ \lrcorner ~ u_j.
\end{equation*}
Since $b$ is an $(n-1,1)$-form, by definition of primitivity, it suffices to show that
\begin{equation*}
\omega^t\wedge b\equiv 0, \ \text{on} \ D_t.
\end{equation*}
Thus it is enough to prove that
\begin{equation}\label{eq:primitive}
((\ensuremath{\overline\partial} V_j) ~ \lrcorner ~ i\partial\ensuremath{\overline\partial}(-\log(-\rho)) )|_{D_t}\equiv 0, \ \forall \ 1\leq j\leq m.
\end{equation}
By definition of $V_j$ in our Key-Lemma, $(V_j~ \lrcorner ~ i\partial\ensuremath{\overline\partial}(-\log(-\rho)))|_{D_t}=0$. Thus \eqref{eq:primitive} is true.
\end{proof}
\textbf{Remark}: We now know that Theorem \ref{th:CF} is true if each $u_j$ is a \textbf{holomorphic section} of $\mathcal H$. For finite rank vector bundles, the curvature operators are always pointwise defined, so in the finite rank case it suffices to have a curvature formula for holomorphic sections. One may guess that the same argument also works for general infinite rank vector bundles. In the next subsection, we shall prove that at least the curvature operators of our bundle $\mathcal H$ are pointwise defined, so that \eqref{eq:curvature31} is also true for \textbf{general smooth sections} of $\mathcal H$.
\subsection{Curvature formula for general sections}\ \
\medskip
By the above remark, we need to prove that the curvature operators $\Theta_{j\bar k}$ on $\mathcal H$ are pointwise defined. We shall use the following two lemmas.
\begin{lemma}\label{le:symmetry} $(\Theta_{j\bar k}u,v)=(u, \Theta_{k\bar j} v)$, $\forall \ u,v\in\Gamma(\mathcal H)$.
\end{lemma}
\begin{proof} By \eqref{eq:chern-connection}, we have
\begin{equation*}
(u,v)_{\bar k j}=\ensuremath{\overline\partial}_{t^k}((D_{t^j}u,v)+(u, \ensuremath{\overline\partial}_{t^j}v))=(\ensuremath{\overline\partial}_{t^k}D_{t^j}u,v)+(D_{t^j}u,D_{t^k}v)+
(\ensuremath{\overline\partial}_{t^k}u, \ensuremath{\overline\partial}_{t^j}v)+ (u, D_{t^k}\ensuremath{\overline\partial}_{t^j}v).
\end{equation*}
On the other hand,
\begin{equation*}
(u,v)_{j\bar k}=\partial_{t^j}((\ensuremath{\overline\partial}_{t^k}u,v)+(u, D_{t^k}v))=(D_{t^j}\ensuremath{\overline\partial}_{t^k}u,v)+(D_{t^j}u,D_{t^k}v)+
(\ensuremath{\overline\partial}_{t^k}u, \ensuremath{\overline\partial}_{t^j}v)+ (u, \ensuremath{\overline\partial}_{t^j}D_{t^k}v).
\end{equation*}
Since $(u,v)_{\bar k j}\equiv (u,v)_{j\bar k}$, the lemma follows by comparing the two equalities above.
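Explicitly, the common terms $(D_{t^j}u,D_{t^k}v)$ and $(\ensuremath{\overline\partial}_{t^k}u, \ensuremath{\overline\partial}_{t^j}v)$ cancel, and equating the two expansions gives
\begin{equation*}
\left((D_{t^j}\ensuremath{\overline\partial}_{t^k}-\ensuremath{\overline\partial}_{t^k}D_{t^j})u,v\right)=\left(u,(D_{t^k}\ensuremath{\overline\partial}_{t^j}-\ensuremath{\overline\partial}_{t^j}D_{t^k})v\right),
\end{equation*}
which is the asserted symmetry (with the commutator convention $\Theta_{j\bar k}=[D_{t^j},\ensuremath{\overline\partial}_{t^k}]$; the identity is unaffected by the overall sign convention for $\Theta_{j\bar k}$).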
\end{proof}
\begin{lemma}\label{le:extension} Assume that $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$. Fix $u\in\Gamma(\mathcal H)$ and $t_0\in\mathbb B$. Then $u|_{D_{t_0}}$ can be approximated by holomorphic sections of $\mathcal H$ in the following sense:
For every $0<s<1$, there exists a holomorphic section $u^{(s)}$ of $\mathcal H$ over an open neighborhood of $t_0$ (which may depend on $s$) such that
\begin{equation}
(\eta,s)\mapsto u^{(s)}|_{D_{t_0}} (\eta)\ \ \text{for}\ 0<s<1, \qquad (\eta,0)\mapsto u|_{D_{t_0}}(\eta),
\end{equation}
is smooth up to the boundary of $D_{t_0}\times[0,1)$.
\end{lemma}
\begin{proof} Fix a sufficiently small $\varepsilon >0$ and consider
\begin{equation*}
D_{t_0}^s:=\{\zeta\in \pi^{-1}(t_0): \rho(t_0, \zeta)<\varepsilon s\}.
\end{equation*}
Let us define $u^{(s)}$ as the Bergman projection to the space of $L^2$-holomorphic forms on $D_{t_0}^s$ of $u|_{D_{t_0}^s}$. By Siu's theorem \cite{Siu76}, for every $0<s<1$, $D_{t_0}^s$ has a Stein neighborhood in $\mathcal X$. Thus by Cartan's theorem, every $u^{(s)}$ extends to a holomorphic section (also denoted by $u^{(s)}$) of $\mathcal H$ over an open neighborhood of $t_0$. The regularity properties of $\{u^{(s)}\}$ follow directly from Hamilton's theorem (see Appendix \ref{ss:SBK}).
\end{proof}
Now let us finish the proof of Theorem \ref{th:CF}. By the above two lemmas, for every $u_j\in\Gamma(\mathcal H)$, $1\leq j\leq m$, $t_0\in\mathbb B$, we have
\begin{eqnarray*}
(\Theta_{j\bar k}u_j,u_k)(t_0) &=& \lim_{s_1\to 0}(\Theta_{j\bar k}u_j,u_k^{(s_1)})(t_0)=\lim_{s_1\to 0}(u_j,\Theta_{k\bar j}u_k^{(s_1)})(t_0) \\
&=& \lim_{s_1\to 0}\lim_{s_2\to 0} (u_j^{(s_2)}, \Theta_{k\bar j}u_k^{(s_1)})(t_0)=
\lim_{s_1\to 0}\lim_{s_2\to 0} (\Theta_{j\bar k} u_j^{(s_2)}, u_k^{(s_1)})(t_0) \\
&=& \lim_{s\to 0}(\Theta_{j\bar k} u_j^{(s)}, u_k^{(s)})(t_0),
\end{eqnarray*}
where $u_j^{(s)}$ are holomorphic sections of $\mathcal H$ defined in Lemma \ref{le:extension}. By our curvature formula for holomorphic sections, we have
\begin{equation*}
\sum(\Theta_{j\bar k} u_j^{(s)}, u_k^{(s)})= \sum\int_{\partial D_t} \theta_{j\bar k}(\rho) \langle u_j^{(s)},u_k^{(s)}\rangle d \sigma
+ \sum(c_{j\bar k}(h) u_j^{(s)}, u_k^{(s)}) + R(s),
\end{equation*}
where
\begin{equation*}
R(s)=||c(s)||^2_{i\Theta(\mathcal L,h)|_{D_t}}+||b(s)||^2_{\omega^t}-||a(s)||^2_{\omega^t}.
\end{equation*}
Since $a(s)$, $b(s)$ and $c(s)$ only depend on $u_j^{(s)}|_{D_{t_0}}$, by Lemma \ref{le:extension}, letting $s\to 0$ we conclude that \eqref{eq:final-CF} is true at $t_0$. Since $t_0$ is an arbitrary point in $\mathbb B$, the proof of Theorem \ref{th:CF} is complete.
\subsection{Proof of Corollary \ref{co:nakano}}\ \
\medskip
For any fixed $t_0\in\mathbb B$, one may choose a sufficiently large positive constant $A$ such that $\rho+A|t|^2$ is strictly plurisubharmonic in a neighborhood of the closure of $D\cap\pi^{-1}(U)$, where $U$ is a small neighborhood of $t_0$. Now for every $\varepsilon >0$,
\begin{equation*}
h^\varepsilon:=he^{-\varepsilon(\rho+A|t|^2)},
\end{equation*}
defines a smooth Hermitian metric on $\mathcal L$ with positive curvature on a neighborhood of the closure of $D\cap\pi^{-1}(U)$. Denote by $\mathcal H^\varepsilon$ the associated family of Hilbert spaces with respect to $h^\varepsilon$, and by $\Theta_{j\bar k}^{\varepsilon}$ the associated curvature operator on $\mathcal H^\varepsilon$. Since the total space $D$ is Stein, we know that $\theta_{j\bar k}$ is semi-positive. By the construction of $h^\varepsilon$, we know that $c_{j\bar k}(h^\varepsilon)$ is positive on $D\cap\pi^{-1}(U)$. Thus our main theorem implies that $\mathcal H^\varepsilon$ is Nakano positive on $U$. By Hamilton's theorem, we have
\begin{equation*}
\sum(\Theta_{j\bar k} u_j, u_k)(t_0)= \lim_{\varepsilon\to 0} \sum (\Theta_{j\bar k}^{\varepsilon}u_j, u_k)(t_0)\geq 0, \ \forall \ u_1,\cdots,u_m\in\Gamma(\mathcal H).
\end{equation*}
Thus $\mathcal H$ is Nakano semi-positive at $t_0$. Since $t_0$ is an arbitrary point in $\mathbb B$, we know that $\mathcal H$ is Nakano semi-positive.
\section{Curvature of the dual family}
In this section, we shall prove our main application Corollary \ref{co:dual-psh}. As a direct application, we shall give a plurisubharmonicity property of the derivatives of the Bergman kernel, which can be seen as a generalization of Theorem C. In the last part of this section, based on a remarkable idea of Berndtsson and Lempert \cite{BL14}, we shall show how to use Corollary \ref{co:dual-psh} to study plurisubharmonicity properties of the Bergman projection of currents with compact support.
\subsection{Proof of Corollary \ref{co:dual-psh}}\ \
\medskip
Let $f$ be a holomorphic section of the dual of $\mathcal H$. By Definition \ref{de:smooth-dual-new}, we know that there is a smooth section, say $P(f)$, of $\mathcal H$, such that
\begin{equation}
f^t(u^t)=(u^t, P(f)^t),
\end{equation}
for every $u^t\in \mathcal H_t$. Moreover, by Definition \ref{de:holomorphic-dual-new}, we know that
\begin{equation}
f(u): t\mapsto f^t(u^t),
\end{equation}
is a holomorphic function of $t$ if $u$ is a holomorphic section of $\mathcal H$. Thus we have
\begin{equation}
0\equiv \ensuremath{\overline\partial}_{t^j}f(u) = (u, D_{t^j}P(f)),
\end{equation}
for every holomorphic section $u$ of $\mathcal H$. By Lemma \ref{le:extension}, we know that
\begin{equation}\label{eq:new-important}
D_{t^j}P(f) \equiv 0,
\end{equation}
which implies that
\begin{equation}
\partial_{t^j}\ensuremath{\overline\partial}_{t^k}(||P(f)||^2)=(\ensuremath{\overline\partial}_{t^k}P(f), \ensuremath{\overline\partial}_{t^j}P(f))+(\Theta_{j\bar k}P(f), P(f)).
\end{equation}
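In more detail (assuming the commutator convention $\Theta_{j\bar k}=[D_{t^j},\ensuremath{\overline\partial}_{t^k}]$), \eqref{eq:new-important} gives
\begin{equation*}
\ensuremath{\overline\partial}_{t^k}(||P(f)||^2)=(\ensuremath{\overline\partial}_{t^k}P(f),P(f))+(P(f), D_{t^k}P(f))=(\ensuremath{\overline\partial}_{t^k}P(f),P(f)),
\end{equation*}
and applying $\partial_{t^j}$ yields
\begin{equation*}
\partial_{t^j}\ensuremath{\overline\partial}_{t^k}(||P(f)||^2)=(D_{t^j}\ensuremath{\overline\partial}_{t^k}P(f),P(f))+(\ensuremath{\overline\partial}_{t^k}P(f),\ensuremath{\overline\partial}_{t^j}P(f)),
\end{equation*}
where $D_{t^j}\ensuremath{\overline\partial}_{t^k}P(f)=\Theta_{j\bar k}P(f)$, again by \eqref{eq:new-important}.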
By Corollary \ref{co:nakano}, we have
\begin{equation*}
\sum (\Theta_{j\bar k}(\xi_j P(f)),\xi_k P(f))
\geq 0,
\end{equation*}
for every $\xi\in\ensuremath{\mathbb{C}}^m$. Thus we have
\begin{equation*}
\sum \ensuremath{\overline\partial}_{t^k}\partial_{t^j}(\log||P(f)||^2) \xi_j\bar\xi_k \geq
\frac{||\sum\bar\xi_k \ensuremath{\overline\partial}_{t^k}P(f)||^2}{||P(f)||^2}-
\frac{|(P(f), \sum\bar\xi_k \ensuremath{\overline\partial}_{t^k}P(f) )|^2}{||P(f)||^4},
\end{equation*}
on
\begin{equation}
U_f:= \{t\in\mathbb B: ||P(f)^t||>0\}.
\end{equation}
By the Cauchy-Schwarz inequality $|(P(f), \sum\bar\xi_k \ensuremath{\overline\partial}_{t^k}P(f) )|\leq ||P(f)||\,||\sum\bar\xi_k \ensuremath{\overline\partial}_{t^k}P(f)||$, the right hand side above is non-negative, thus $\sum \ensuremath{\overline\partial}_{t^k}\partial_{t^j}(\log||P(f)||^2) \xi_j\bar\xi_k \geq 0$ on $U_f$. Notice that
\begin{equation}
||P(f)||: t\mapsto ||P(f)^t||=||f^t||
\end{equation}
is a smooth function on $\mathbb B$. Thus $\log||P(f)||=\log||f||$ is plurisubharmonic on $\mathbb B$. The proof is complete.
\subsection{Variation of the derivatives of the Bergman kernel}\ \
\medskip
For simplicity, we shall only consider the following case:
\medskip
\textbf{Pseudoconvex family in $\ensuremath{\mathbb{C}}^n$}: In this case, $\mathcal X$ is $\mathbb C^n \times \mathbb B$ and $\pi$ is just the natural projection to $\mathbb B$. Assume that $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$. One may look at $D=\{D_t\}_{t\in\mathbb B}$ as a smooth family of smoothly bounded strongly pseudoconvex domains in $\ensuremath{\mathbb{C}}^n$. Moreover, we shall assume that $\mathcal L$ is a trivial line bundle over $\mathcal X$ with Hermitian metric $h=e^{-\phi}$.
\medskip
\textbf{Variation formula of the derivatives of the Bergman kernel}: Fix $\eta\in D_0$; replacing $\mathbb B$ by a smaller ball if necessary, one may assume that
\begin{equation}
\eta\in D_t, \ \ \forall\ t\in\mathbb B.
\end{equation}
Let us consider
\begin{equation*}
{D^\alpha\eta}: f\mapsto f_\alpha (\eta):=\frac{\partial^{|\alpha|}f}{(\partial\mu^1)^{\alpha_1}\cdots(\partial\mu^n)^{\alpha_n}}(\eta), \ \ \forall \ f=f(\mu)d\mu\in \mathcal H_t,
\end{equation*}
where $\alpha\in\mathbb N^n, \ |\alpha|:=\alpha_1+\cdots+\alpha_n$ and $d\mu$ is short for $d\mu^1\wedge \cdots\wedge d\mu^n$. By Definition \ref{de:holomorphic-dual-new}, we know that every ${D^\alpha\eta}$ defines a holomorphic section of the dual of $\mathcal H$. Put
\begin{equation*}
{\cdot_\alpha\eta}:=P(D^\alpha\eta),
\end{equation*}
i.e., ${\cdot_\alpha\eta}$ is the unique smooth section of $\mathcal H$ such that
\begin{equation}\label{eq:def-point}
( f, {\cdot_\alpha\eta} )=f_\alpha (\eta), \ \ \forall \ f\in \mathcal H_t.
\end{equation}
Let $K^t(\zeta,\eta)d\zeta\otimes\overline{d\eta}$ be the Bergman reproducing kernel of $\mathcal H_t$. Then \eqref{eq:def-point} implies that
\begin{equation}\label{eq:reproducing-kernel}
\cdot_0\eta=K^t(\mu,\eta)d\mu, \ ( {\cdot_\beta\eta}, {\cdot_\alpha\zeta} )
=({\cdot_\beta\eta})_\alpha(\zeta)=\overline{({\cdot_\alpha\zeta})_\beta(\eta)}=K^t_{\alpha\bar\beta}(\zeta,\eta),
\end{equation}
where
\begin{equation*}
K^t_{\alpha\bar\beta}(\zeta,\eta)=
\frac{\partial^{|\alpha|+|\beta|}K^t}{(\partial\zeta^1)^{\alpha_1}\cdots(\partial\zeta^n)^{\alpha_n}
(\partial\bar\eta^1)^{\beta_1}\cdots(\partial\bar\eta^n)^{\beta_n}}(\zeta,\eta).
\end{equation*}
By \eqref{eq:new-important}, we have
\begin{equation}\label{eq:vf-pre}
K^t_{j\bar k\alpha\bar\beta}(\zeta,\eta)=\partial_{t^j}\ensuremath{\overline\partial}_{t^k}({\cdot_\beta\eta}, {\cdot_\alpha\zeta})
=( ({\cdot_\beta\eta})_{\bar k}, ({\cdot_\alpha\zeta})_{\bar j})
+ (\Theta_{j\bar k}({\cdot_\beta\eta}), {\cdot_\alpha\zeta}).
\end{equation}
By \eqref{eq:final-CF}, we have
\begin{equation}\label{eq:vf-curvature}
(\Theta_{j\bar k}({\cdot_\beta\eta}), {\cdot_\alpha\zeta})=\int_{\partial D_t} \theta_{j\bar k}(\rho) \langle {\cdot_\beta\eta},{\cdot_\alpha\zeta}\rangle d \sigma
+ (c_{j\bar k}(h) {\cdot_\beta\eta}, {\cdot_\alpha\zeta}) + R,
\end{equation}
where
\begin{equation}\label{eq:vf-R}
R=(c,c')_{i\partial\ensuremath{\overline\partial}\phi|_{D_t}}+(b,b')_{\omega^t}-(a,a')_{\omega^t},
\end{equation}
if $i\partial\ensuremath{\overline\partial}\phi|_{D_t}>0$, and
\begin{equation*}
R=(b,b')_{\omega^t}-(a,a')_{\omega^t},
\end{equation*}
if $i\partial\ensuremath{\overline\partial}\phi\equiv 0$. Here
\begin{equation*}
\omega^t=i\partial\ensuremath{\overline\partial}(-\log(-\rho))|_{D_t},
\end{equation*}
and $(a,b,c)$ (resp. $(a',b',c')$) are the forms associated to ${\cdot_\beta\eta}$ (resp. ${\cdot_\alpha\zeta}$). Moreover,
\begin{equation*}
\ensuremath{\overline\partial}^t a=\partial^t_\phi b+c, \ \ensuremath{\overline\partial}^t a'=\partial^t_\phi b'+c'.
\end{equation*}
\textbf{Remark:} Theorem \ref{th:L2} implies that $R$ is non-negative as a Hermitian form. Later we shall give an explicit expression of the H\"ormander remainder term $R$ in the case $i\partial\ensuremath{\overline\partial}\phi\equiv 0$ (so that $c=c'\equiv 0$).
\medskip
\textbf{H\"ormander remaining term for flat weight}: Let us assume that $i\partial\ensuremath{\overline\partial}\phi\equiv 0$. By definition, then we have $c=c'\equiv 0$. Put
\begin{equation}
\square'=\partial^t_{\phi}(\partial^t_{\phi})^*+(\partial^t_{\phi})^*\partial^t_{\phi},
\end{equation}
We shall prove that:
\begin{lemma}\label{le:R-expression} If $i\partial\ensuremath{\overline\partial}\phi\equiv 0$ on $D$ then
\begin{equation}\label{eq:R-expression}
R=(\mathbb H b,\mathbb H b')_{\omega^t},
\end{equation}
where $\mathbb H b$ denotes the $\square'$-harmonic part of $b$.
\end{lemma}
\begin{proof} Since $i\partial\ensuremath{\overline\partial}\phi\equiv 0$ and $\omega^t$ is complete K\"ahler, we know that the $\ensuremath{\overline\partial}$-Laplacian $\square''$ is equal to $\square'$. Denote by $G$ the associated Green operator. Omitting $\omega^t$ in $(\cdot,\cdot)_{\omega^t}$, we have
\begin{equation*}
(a,a')=((\ensuremath{\overline\partial}^t)^*G\partial^t_\phi b, a')=(G\partial^t_\phi b, \partial^t_\phi b').
\end{equation*}
Since $b$ is primitive and $\ensuremath{\overline\partial}^t$-closed, we know that $b$ is $(\partial^t_\phi)^*$-closed. Thus $b$ can be written as
\begin{equation*}
b=\mathbb H b + (\partial^t_\phi)^*f, \ \ \partial^t_\phi f=0.
\end{equation*}
Now
\begin{equation*}
(a,a')=(G\partial^t_\phi(\partial^t_\phi)^*f, \partial^t_\phi b')=(f,\partial^t_\phi b')=(b-\mathbb H b,b')=(b,b')-(\mathbb H b, \mathbb H b').
\end{equation*}
Thus
\begin{equation*}
R=(b,b')-(a,a')=(\mathbb H b, \mathbb H b').
\end{equation*}
\end{proof}
Recall that
\begin{equation}
b=(\ensuremath{\overline\partial} V_j)|_{D_t} ~ \lrcorner ~ ({\cdot_\beta\eta}), \ b'=(\ensuremath{\overline\partial} V_k)|_{D_t} ~ \lrcorner ~ ({\cdot_\alpha\zeta}).
\end{equation}
Thus Lemma \ref{le:R-expression} implies that:
\begin{theorem}[Variation Formula of the Bergman Kernel]\label{th:VF} The first order variation formula of the Bergman kernel can be written as
\begin{equation}\label{eq:VF1}
K^t_{j\alpha\bar\beta}(\zeta,\eta)=i^{n^2}\int_{D_t} \phi_j\{{\cdot_\beta\eta},{\cdot_\alpha\zeta}\}
-i^{n^2}\int_{\partial D_t} \delta_{V_j}\{{\cdot_\beta\eta},{\cdot_\alpha\zeta}\}.
\end{equation}
Moreover, if $i\partial\ensuremath{\overline\partial}\phi\equiv 0$ on $D$ then
\begin{equation}\label{eq:VF2}
K^t_{j\bar k \alpha\bar\beta}(\zeta,\eta)
=( ({\cdot_\beta\eta})_{\bar k}, ({\cdot_\alpha\zeta})_{\bar j})
+\int_{\partial D_t} \theta_{j\bar k}(\rho) \langle {\cdot_\beta\eta}, {\cdot_\alpha\zeta}\rangle d \sigma
+ (\mathbb H b,\mathbb H b'),
\end{equation}
where $b=(\ensuremath{\overline\partial} V_j)|_{D_t} ~ \lrcorner ~ ({\cdot_\beta\eta}), \ b'=(\ensuremath{\overline\partial} V_k)|_{D_t} ~ \lrcorner ~ ({\cdot_\alpha\zeta})$.
\end{theorem}
\begin{proof} Notice that \eqref{eq:VF2} is a direct consequence of Lemma \ref{le:R-expression}. Thus it suffices to prove \eqref{eq:VF1}. Notice that \eqref{eq:Lie-first} implies that
\begin{equation*}
K^t_{j\alpha\bar\beta}(\zeta,\eta)=i^{n^2} \int_{D_t} L^t_{V_j}\{ {\cdot_\beta\eta}, {\cdot_\alpha\zeta}\}.
\end{equation*}
By Cartan's formula
\begin{equation*}
L^t_{V_j}=i_t^*(d\delta_{V_j}+\delta_{V_j}d),
\end{equation*}
thus we have
\begin{equation*}
K^t_{j\alpha\bar\beta}(\zeta,\eta)=i^{n^2}\int_{\partial D_t} \delta_{V_j}\{{\cdot_\beta\eta},{\cdot_\alpha\zeta}\}+
i^{n^2}\int_{D_t} \frac{\partial}{\partial t^j}\{{\cdot_\beta\eta},{\cdot_\alpha\zeta}\}.
\end{equation*}
By the reproducing formula,
\begin{equation*}
i^{n^2}\int_{D_t} \frac{\partial}{\partial t^j}\{{\cdot_\beta\eta},{\cdot_\alpha\zeta}\}=2K^t_{j\alpha\bar\beta}(\zeta,\eta)-i^{n^2}\int_{D_t} \phi_j\{{\cdot_\beta\eta},{\cdot_\alpha\zeta}\},
\end{equation*}
which implies \eqref{eq:VF1}.
\end{proof}
\textbf{Remark}: If $\alpha=\beta=0$ and $\phi\equiv0$ then \eqref{eq:VF1} is Komatsu's formula (see \cite{Komatsu82}). Recently, Berndtsson \cite{Bern15} showed that \eqref{eq:VF1} can be used to study the comparison principle for Bergman kernels. In fact, if $D$ is a product then \eqref{eq:VF1} is just (2.2) in \cite{Bern15}.
\subsection{Variation of the Bergman projection of currents}\ \
\medskip
In the previous subsection, we discussed the plurisubharmonicity properties of the Bergman projection of the derivatives of the Dirac measure. Recently, it has become clear that the plurisubharmonicity properties of the Bergman projection of other kinds of currents are also very useful (see \cite{BL14}). In this subsection, we shall show how to use Corollary \ref{co:dual-psh} to study the variation of the Bergman projection of general currents with compact support.
\medskip
\textbf{Smooth family of currents with compact support}: Denote by $A_t$ the space of smooth sections of $K_{D_t}+L_t$ over $D_t$. Put
\begin{equation*}
\mathcal A =\{A_t\}_{t\in\mathbb B}.
\end{equation*}
We shall introduce the notion of the dual of $\mathcal A$ by using the language of currents. Denote by $A'_t$ the dual space of $A_t$, that is, the space of $L_t^*$-valued degree $(0,n)$-currents with \textbf{compact support} in $D_t$. For fixed $f^t\in A'_t$, we shall \emph{formally} write
\begin{equation*}
f^t(u^t)= \int_{D_t} f^t\wedge u^t,\ \forall \ u^t\in A_t,
\end{equation*}
even though the $(n,n)$-current $ f^t\wedge u^t$ may not be integrable in general. Put
\begin{equation*}
\mathcal A'=\{A'_t\}_{t\in\mathbb B}.
\end{equation*}
Denote by ${\rm Supp}\, f^t$ the support of $f^t$, and by $K_{\mathcal X/\mathbb B}$ the relative canonical line bundle associated to $\pi$; recall that
\begin{equation}\label{eq:relative-canonical}
K_{\mathcal X/\mathbb B}:=K_{\mathcal X}-\pi^*K_{\mathbb B}, \ K_{\mathcal X/\mathbb B}|_{D_t} \simeq K_{D_t}.
\end{equation}
We shall introduce the following definition:
\begin{definition} We call $f: t\to f^t\in A'_t$ a smooth family of currents with compact support if
\begin{equation}\label{eq:smooth-dual1}
\bigcup_{t\in K} {\rm Supp} f^t \Subset D, \ \forall \ K\Subset\mathbb B,
\end{equation}
and for every smooth section $\kappa$ of $(K_{\mathcal X/\mathbb B}+\mathcal L)\boxtimes (\overline{K_{\mathcal X/\mathbb B}}+\mathcal L^*) $ over
\begin{equation*}
\mathcal X\times_{\pi} \mathcal X:= \{(x,y)\in\mathcal X\times \mathcal X : \pi(x)=\pi(y)\},
\end{equation*}
there exists a smooth section, say $u_{f,\kappa}$, of $\overline{K_{\mathcal X/\mathbb B}}+\mathcal L^*$ over $\mathcal X$ such that
\begin{equation}\label{eq:smooth-dual2}
f^t(\kappa^t(v^t))=u_{f,\kappa}^t(v^t), \ \forall \ v\in C^\infty(\mathcal X, K_{\mathcal X/\mathbb B}+\mathcal L), \ t\in \mathbb B.
\end{equation}
\end{definition}
\textbf{Remark}: Let us explain the meaning of \eqref{eq:smooth-dual2}. The right hand side is clear, that is
\begin{equation*}
u_{f,\kappa}^t(v^t):=\int_{D_t} u_{f,\kappa}^t\wedge v^t.
\end{equation*}
For the left hand side, by our assumption \textbf{A1}, the restriction of $\pi$ to the closure of $D$ is proper; thus we know that
\begin{equation*}
\kappa^t(v^t): x \mapsto \int_{D_t} \kappa^t(x,\cdot)\wedge v^t(\cdot), \ \forall \ x\in \pi^{-1}(t),
\end{equation*}
defines a section in $A_t$. Thus $f^t(\kappa^t(v^t))$ is well defined. Hence \eqref{eq:smooth-dual2} means that \emph{the current defined by $f(\kappa)$ is smooth up to the boundary of $D$}.
\medskip
\textbf{Bergman projection of smooth family of currents with compact support}: We shall prove the following proposition:
\begin{proposition} Assume that $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$. Let $f: t\to f^t\in A'_t$ be a smooth family of currents with compact support. Then
\begin{equation}
f^t: u^t \mapsto f^t(u^t), \ \ \forall \ u^t\in\mathcal H_t,
\end{equation}
defines a smooth section of the dual of $\mathcal H$ in the sense of Definiton \ref{de:smooth-dual-new}.
\end{proposition}
\begin{proof} \eqref{eq:smooth-dual1} implies that there exists a smooth real function, say $\chi$, on $D$ such that
\begin{equation*}
\chi \equiv 1 \ \text{on} \ \bigcup_{t\in\mathbb B} {\rm Supp} f^t, \ \ {\rm Supp} ( \chi|_{D_t} )\Subset D_t, \ \forall\ t\in\mathbb B.
\end{equation*}
Denote by $K^t$ the Bergman kernel of $\mathcal H_t$. Put
\begin{equation*}
\chi K: (x,y) \mapsto \chi(x) K^{\pi(x)}(x,y), \ \forall \ (x,y)\in D\times_{\pi}D.
\end{equation*}
By Hamilton's theorem (see Appendix \ref{ss:SBK}), assumptions \textbf{A1} and \textbf{A2} imply that $\chi K$ is smooth up to the boundary, i.e., it extends to a smooth section of $(K_{\mathcal X/\mathbb B}+\mathcal L)\boxtimes (\overline{K_{\mathcal X/\mathbb B}}+\mathcal L^*) $ over
$\mathcal X\times_{\pi} \mathcal X$. By the reproducing property of $K^t$, we have
\begin{equation*}
(\chi K)^t(v^t)=(\chi v)^t, \ \forall\ v\in \Gamma(\mathcal H).
\end{equation*}
Thus by \eqref{eq:smooth-dual2}, we have
\begin{equation}\label{eq:fv??}
f^t(v^t)=f^t((\chi v)^t)=f^t((\chi K)^t(v^t))=u^t_{f, \chi K}(v^t)=\int_{D_t} u^t_{f, \chi K} \wedge v^t, \ \forall \ v^t\in\mathcal H_t.
\end{equation}
Let us write
\begin{equation*}
u^t_{f, \chi K} \wedge v^t =i^{n^2} \{v^t,P(f)^t\}, \ \forall\ v^t\in C^{\infty}(D_t, K_{D_t}+L_t).
\end{equation*}
Thus we have
\begin{equation}\label{eq:formal-good}
f^t((\chi K)^t(v^t))=(v^t, P(f)^t), \ \forall\ v^t\in C^{\infty}(D_t, K_{D_t}+L_t).
\end{equation}
Since $(\chi K)^t(\mathcal H_t^{\bot})=0$, we have $P(f)^t\in \mathcal H_t$. Thus $P(f)\in\Gamma(\mathcal H)$, and by \eqref{eq:fv??}, we have
\begin{equation}\label{eq:formal-good-1}
f^t(v^t)=(v^t, P(f)^t), \ \forall \ v\in \Gamma(\mathcal H).
\end{equation}
By Definition \ref{de:smooth-dual-new}, we know that $f$ defines a smooth section of the dual of $\mathcal H$.
\end{proof}
\textbf{Remark} By Lemma \ref{le:extension}, if $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$ then $\mathcal H_t$ is equal to the closure of $\{u^t\in\mathcal H_t: u\in\Gamma(\mathcal H)\}$. Thus \eqref{eq:formal-good-1} implies that
\begin{equation}\label{eq:norm-dual}
||P(f)||^2(t)= \sup\{|f^t(u)|^2: u\in \mathcal H_t , \ i^{n^2}\int_{D_t}\{u,u\}=1\}.
\end{equation}
By this extremal property, one may generalize Corollary \ref{co:dual-psh} to the case that the metric $h$ on $\mathcal L$ is singular.
\section{Triviality and flatness}
In this section, we shall prove Theorem \ref{th:flat} and use it to study triviality of holomorphic motions.
\subsection{Proof of Theorem \ref{th:flat}}\ \
\medskip
\textbf{Triviality implies flatness}: By definition, if $D$ is trivial then one may assume that the vector fields $\partial/\partial t^j$ are well defined on $D$, tangent to the boundary of $D$ and can be extended to smooth vector fields on $\mathcal X$. Thus we have
\begin{equation}
\theta_{j\bar k}(\rho)\equiv 0,
\end{equation}
on $\partial D$. Moreover, in this case $b\equiv 0$. If $\Theta(\mathcal L, h)\equiv 0$ then we also have $c\equiv 0$. Thus $a\equiv 0$ and $R\equiv 0$. By our main theorem, we know that $\mathcal H$ is flat.
\medskip
\textbf{Flatness implies triviality}: By Theorem \ref{th:CF} and our assumption, we have
\begin{equation*}
\sum(\Theta_{j\bar k} u_j, u_k)= \sum\int_{\partial D_t} \theta_{j\bar k}(\rho) \langle u_j,u_k\rangle d \sigma + R,
\end{equation*}
where $R\geq 0$. Moreover, since $D$ is Stein, we have
\begin{equation*}
\sum\int_{\partial D_t} \theta_{j\bar k}(\rho) \langle u_j,u_k\rangle d \sigma \geq 0.
\end{equation*}
Thus if $\Theta_{j\bar k}\equiv 0$ then $R\equiv 0$ and
\begin{equation}\label{eq:-log-rho-1}
\theta_{j\bar k}(\rho)=\langle V_j^{\rho}, V_k^{\rho} \rangle_{i\partial\op\rho}\equiv 0 \ \text{on} \ \partial D.
\end{equation}
Since
\begin{equation}\label{eq:-log-rho}
\langle V_j, V_k \rangle_{i\partial\op(-\log-\rho)}=\frac{\langle V_j, V_k \rangle_{i\partial\op\rho}}{-\rho}+
\frac{V_j(\rho) \overline{V_k(\rho)}}{\rho^2},
\end{equation}
and $V_j=V_j^\rho$ on $\partial D$, by \eqref{eq:-log-rho} and \eqref{eq:-log-rho-1}, we know that $\langle V_j, V_k \rangle_{i\partial\op(-\log-\rho)}$ is smooth up to the boundary of $D$. We shall use the following lemma, which follows from
\begin{equation}
V_j^\psi=\partial/\partial t^j-\sum \psi_{j\bar\lambda}\psi^{\bar\lambda\nu}\partial/\partial \mu^\nu,
\end{equation}
by direct computation.
\begin{lemma}\label{le:geodesic} Let $\psi$ be a smooth function on $D$. Assume that $\psi$ is strictly plurisubharmonic on each fibre of $D$. Denote by $V_j^\psi$ the horizontal lift of $\partial/\partial t^j$ with respect to $i\partial\ensuremath{\overline\partial}\psi$. Then
\begin{equation*}
[V_j^\psi, V_k^\psi]=0, \ \ [V_j^\psi, \overline{V_k^\psi}]=\sum c_{j\bar k}(\psi)_{\bar\lambda}\psi^{\bar\lambda\nu}\partial/\partial\mu^\nu-c_{j\bar k}(\psi)_\nu\psi^{\bar\lambda\nu}\partial/\partial\bar\mu^{\lambda},
\end{equation*}
where $c_{j\bar k}(\psi):=\langle V_j^\psi, V_k^\psi \rangle_{i\partial\ensuremath{\overline\partial}\psi}$.
\end{lemma}
Let us apply this lemma to $\psi=-\log-\rho$. By the definition of $V_j$ in Lemma \ref{le:key-lemma}, we have $V_j \equiv V_j^\psi$. Since $(-\log-\rho)^{\bar\lambda\nu} \equiv 0$ on $\partial D$, by \eqref{eq:-log-rho} and the above lemma, we have
\begin{equation}\label{eq:flat-bdy}
[V_j, V_k]=0, \ \text{on} \ D, \ \text{and} \ \ [V_j, \overline{V_k}]=0,\ \text{on} \ \partial D.
\end{equation}
Moreover, we shall prove the following lemma.
\begin{lemma}\label{le:equality1} Assume that $D$ satisfies $\mathbf{A1}$ and $\mathbf{A2}$. Assume further that $K_{\mathcal X/\mathbb B} +\mathcal L$ is trivial on each fibre of $\pi$ and $\Theta(\mathcal L, h)\equiv 0$ on $D$. If $R\equiv0$ then each $V_j^\rho$ has a smooth extension, say $\tilde V_j$, that is holomorphic on fibres and smooth up to the boundary of $D$.
\end{lemma}
If the above lemma is true then by \eqref{eq:flat-bdy}, we have
\begin{equation*}
[ \tilde V_j,\tilde V_k]=[\tilde V_j,\overline{ \tilde V_k}]=0, \ \ \text{on}\ \partial D.
\end{equation*}
Since $\tilde V_j$ are holomorphic on fibres, we have
\begin{equation*}
[\tilde V_j,\overline{ \tilde V_k}] = \frac{\partial}{\partial t^j} \overline{ \tilde V_k} - \frac{\partial}{\partial \bar t^k} \tilde V_j.
\end{equation*}
Thus
\begin{equation*}
\frac{\partial}{\partial \bar t^k} \tilde V_j \equiv 0, \ \text{on}\ \partial D.
\end{equation*}
Since $\frac{\partial}{\partial \bar t^k} \tilde V_j$ are holomorphic on each fibre, we have
\begin{equation*}
\frac{\partial}{\partial \bar t^k} \tilde V_j \equiv 0, \ \text{on}\ D.
\end{equation*}
Thus each $\tilde V_j$ is a holomorphic vector field on $D$. Moreover,
\begin{equation*}
[\tilde V_j,\tilde V_k]\equiv 0,\ \text{on}\ \partial D.
\end{equation*}
Thus $D$ is trivial. The proof of Theorem \ref{th:flat} is complete.
\medskip
Now let us prove Lemma \ref{le:equality1}:
\medskip
\textbf{Proof of Lemma \ref{le:equality1}}: We shall prove that $R\equiv 0$ implies that every $V_j|_{\partial D}(=V_j^\rho)$ has a holomorphic extension to $D$. Notice that the proof of Lemma \ref{le:R-expression} implies that
\begin{equation*}
R=||\mathbb H (\sum(\ensuremath{\overline\partial} V_j)|_{D_t} ~ \lrcorner~ u_j)||^2_{\omega^t},
\end{equation*}
where $\omega^t=i\partial\ensuremath{\overline\partial}(-\log-\rho)|_{D_t}$. Thus $R\equiv 0$ implies that $\mathbb H b \equiv 0$. Since $\omega^t$ is $d$-bounded in the sense of Gromov (see \cite{Gromov91}, \cite{DF83} or \cite{Bern96}), and
\begin{equation*}
b:=\sum(\ensuremath{\overline\partial} V_j)|_{D_t} ~ \lrcorner~ u_j
\end{equation*}
is $\ensuremath{\overline\partial}$-closed, we know that there exists a smooth $L_t$-valued $(n-1,0)$-form $u^t$ such that $\ensuremath{\overline\partial}^t u^t=b$ and
\begin{equation*}
||u^t||_{\omega^t} \leq 2 ||b||_{\omega^t}<\infty.
\end{equation*}
We claim that $u: (\eta, t)\mapsto u^t(\eta)$ is smooth up the boundary of $D$ and $u=0$ on $\partial D$. In fact, if we can show
\begin{equation}\label{eq:equality1}
\int_{D_t} \{f,b\}=0,
\end{equation}
for every $\partial^t_\phi$-closed $L_t$-valued $(n-1,1)$-form $f$ that is smooth up to the boundary of $D_t$, then $*b\in {\rm Im} (\partial^t_\phi)^*$, where $*$ is the Hodge-de Rham operator with respect to $(i\partial\ensuremath{\overline\partial} \rho)|_{D_t}$ and $(\partial^t_\phi)^*$ is the adjoint of $\partial^t_\phi$ with respect to $(i\partial\ensuremath{\overline\partial} \rho)|_{D_t}$. By the regularity property of the $\ensuremath{\overline\partial}$-Neumann problem (in fact, in our case, it is a Dirichlet problem), one may solve
\begin{equation*}
(\partial^t_\phi)^*v^t=*b,
\end{equation*}
where $v: (\eta,t) \mapsto v^t(\eta)$ is smooth up to the boundary of $D$. Since $(\partial_\phi^t)^*=-*\ensuremath{\overline\partial}^t*$, we have
\begin{equation*}
\ensuremath{\overline\partial}^t(-*v^t)=b.
\end{equation*}
Since $v^t\in {\rm Dom} (\partial_\phi^t)^*$, we have $v^t=0$ on $\partial D_t$. Thus $||-*v^t||_{\omega^t}<\infty$. Since there are no nonzero $L^2$ (with respect to $\omega^t$) holomorphic $L_t$-valued $(n-1,0)$-forms on $D_t$, we have $u^t=-*v^t$. Thus our claim follows from \eqref{eq:equality1}.
Now let us prove \eqref{eq:equality1}. Put $\rho^t=\rho|_{D_t}$. Since $b$ is smooth up to the boundary, we have
\begin{equation}\label{eq:equality2}
\int_{D_t} \{f, b\}=\lim_{r\to 0-}\int_{\{\rho^t<r\}}\{f, b\}= \lim_{r\to 0-} (-1)^n
\int_{\{\rho^t=r\}}\{f,u^t\}.
\end{equation}
Since
\begin{equation*}
\omega^t\geq \frac{(i\partial\ensuremath{\overline\partial}\rho)|_{D_t}}{-\rho},
\end{equation*}
we know that $||u^t||_{\omega^t}<\infty$ implies that
\begin{equation*}
\int_{D_t}\frac{|u^t|_{i\partial\ensuremath{\overline\partial}\rho|_{D_t}}^2}{-\rho} \frac{(i\partial\ensuremath{\overline\partial}\rho)^n|_{D_t}}{n!} <\infty.
\end{equation*}
Thus
\begin{equation*}
\liminf_{r\to 0-}\int_{\{\rho^t=r\}} |u^t|^2_{i\partial\ensuremath{\overline\partial}\rho|_{D_t}} d\sigma = 0.
\end{equation*}
Since $f$ is smooth up to the boundary, we know that
\begin{equation}\label{eq:equality3}
\lim_{r\to 0-} (-1)^n
\int_{\{\rho^t=r\}}f\wedge \bar u^t=0.
\end{equation}
Thus \eqref{eq:equality1} follows from \eqref{eq:equality2} and \eqref{eq:equality3}, and our claim is proved.
By our assumption, $K_{\mathcal X/\mathbb B} +\mathcal L$ is trivial on $\pi^{-1}(t)$, thus there exists a holomorphic section, say $e$, of $K_{\mathcal X/\mathbb B} +\mathcal L$, that has no zero point in $\pi^{-1}(t)$. Now fix $t\in\mathbb B$, $1\leq j\leq m$. Put
\begin{equation*}
u_j=e, \ \ u_k=0, \ \forall \ k\neq j.
\end{equation*}
By our claim, one may solve $\ensuremath{\overline\partial}^t u^t=(\ensuremath{\overline\partial} V_j)|_{D_t} ~\lrcorner~ e$ such that $u^t=0$ on the boundary. Since $e$ has no zero point in $\pi^{-1}(t)$, one may write
\begin{equation*}
u^t=V ~\lrcorner~ e, \ \text{on}\ D_t.
\end{equation*}
Thus we have
\begin{equation*}
\ensuremath{\overline\partial}^t(V_j-V)=0, \ \text{on}\ D_t; \ V_j-V=V_j \ \text{on}\ \partial D_t.
\end{equation*}
Thus $V_j|_{\partial D_t}$ has a holomorphic extension, say $\tilde{V_j}|_{D_t}$, to $D_t$. The regularity property of $\tilde{V_j}$ follows from the regularity property of $u^t$.
The proof is complete.
\subsection{Triviality of holomorphic motions}\ \
\medskip
We shall show how to use Theorem \ref{th:flat} to study triviality of holomorphic motions.
\medskip
\textbf{Basic notions on holomorphic motion}: Recall that if every fibre $D_t$ is a domain in $\ensuremath{\mathbb{C}}$ then:
\medskip
\emph{ $\partial D$ is Levi-flat if and only if $\theta_{j\bar k}(\rho)\equiv 0$.}
\medskip
It is known that the boundary of the total space of a holomorphic motion of a planar domain is Levi-flat. Recall that, by definition, a homeomorphism
\begin{equation}
F:(z,t)\mapsto(f(z,t),t),
\end{equation}
from $D_0\times\mathbb B$ to $D$ is called a \emph{holomorphic motion} (see \cite{MSS83}) of $D_0$ (with total space $D$) if $f(\cdot,0)$ is the identity mapping and $f(z,\cdot)$ is holomorphic for every fixed $z\in D_0$.
\medskip
\textbf{Curvature formula for holomorphic motions}: Assume that $D_0$ is a smooth domain in $\ensuremath{\mathbb{C}}$ and $F$ is smooth up to the boundary. Assume further that $\mathcal L$ is trivial and $\phi\equiv 0$ on $D$. Put
\begin{equation*}
V_j^F:=F_*(\partial/\partial t^j)=\partial/\partial t^j+f_j(z,t)\partial/\partial \zeta.
\end{equation*}
By definition, $V_j^F(\rho)\equiv0$ on $\partial D$. Thus
\begin{equation*}
V_j^F=V_j=V_j^\rho, \ \text{on}\ \partial D.
\end{equation*}
Hence we have
\begin{equation*}
||\sum (V^F_j-V_j)~\lrcorner~ u_j||_{\omega^t}<\infty,
\end{equation*}
which implies that
\begin{equation*}
\sum(\Theta_{j\bar k} u_j, u_k)=||\mathbb H (\sum ( \ensuremath{\overline\partial} V_j)|_{D_t}~\lrcorner~ u_j)||^2=||\mathbb H (\sum ( \ensuremath{\overline\partial} V_j^F)|_{D_t}~\lrcorner~ u_j)||^2.
\end{equation*}
\textbf{Criterion for $\Theta_{j\bar k}\equiv 0$ by using the Bergman kernel}: Put
\begin{equation*}
J=f_{\bar z}/f_z.
\end{equation*}
Since
\begin{equation*}
\partial/\partial \bar\zeta=z_{\bar\zeta}\partial/\partial z+ \overline{z_{\zeta}} \partial/\partial \bar z,
\end{equation*}
and
\begin{equation*}
\overline{z_{\zeta}}=\frac{f_z}{|f_z|^2-|f_{\bar z}|^2}, \ z_{\bar\zeta}=\frac{-f_{\bar z}}{|f_z|^2-|f_{\bar z}|^2},
\end{equation*}
we have
\begin{equation}\label{eq:abz}
(\ensuremath{\overline\partial} V_j^F)|_{D_t}=(f_{jz}z_{\bar\zeta}+f_{j\bar z}\overline{z_{\zeta}})d\bar\zeta\otimes \frac{\partial}{\partial \zeta}=\frac{(f_z)^2J_j}{|f_z|^2(1-|J|^2)} d\bar\zeta\otimes \frac{\partial}{\partial \zeta}.
\end{equation}
Thus $\Theta_{j\bar k}\equiv 0$ is equivalent to
\begin{equation}\label{eq:trivialcondition}
\int_{D_t} K^t(\zeta,\bar\eta) \left(\frac{(f_z)^2J_j}{|f_z|^2(1-|J|^2)}\right)(z(\zeta,t),t)\ id\zeta\wedge d\bar\zeta=0,
\end{equation}
for every $(\eta,t)$ in $D$ and every $j$.
\begin{proof}[Proof of Corollary \ref{co:last}] Since $J=a(t)$ now, by \eqref{eq:trivialcondition}, we know that $\Theta_{j\bar k}\equiv 0$ is equivalent to
\begin{equation*}
\frac{a_j}{1-|a|^2} \int_{D_t} K^t(\zeta,\bar\eta) \ id\zeta\wedge d\bar\zeta=0,
\end{equation*}
for every $(\eta,t)$ in $D$ and every $j$. But notice that
\begin{equation*}
\int_{D_t} K^t(\zeta,\bar\eta) \ id\zeta\wedge d\bar\zeta\equiv1.
\end{equation*}
Thus $\Theta_{j\bar k}\equiv 0$ is equivalent to $a_j\equiv 0$ for every $j$. Since $a(0)=0$, we know that $\Theta_{j\bar k}\equiv 0$ is equivalent to $a\equiv 0$.
\end{proof}
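The normalization $\int_{D_t} K^t(\zeta,\bar\eta)\, id\zeta\wedge d\bar\zeta\equiv1$ used in the proof above can be sanity-checked numerically for the unit disc, whose Bergman kernel with respect to Lebesgue area measure is $K(\eta,\bar\zeta)=\pi^{-1}(1-\eta\bar\zeta)^{-2}$. The sketch below is our own illustration (the paper's normalization may differ from the Lebesgue one by a constant coming from $id\zeta\wedge d\bar\zeta=2\,dx\wedge dy$); it integrates the kernel with a midpoint rule in polar coordinates and the grid sizes are arbitrary.

```python
import numpy as np

def bergman_disc(eta, zeta):
    # Bergman kernel of the unit disc w.r.t. Lebesgue area measure
    return 1.0 / (np.pi * (1.0 - eta * np.conj(zeta)) ** 2)

def disc_integral(func, nr=600, nt=600):
    # midpoint rule in polar coordinates, dA = r dr dtheta
    r = (np.arange(nr) + 0.5) / nr
    t = (np.arange(nt) + 0.5) * 2 * np.pi / nt
    R, T = np.meshgrid(r, t, indexing="ij")
    Z = R * np.exp(1j * T)
    return np.sum(func(Z) * R) * (1.0 / nr) * (2 * np.pi / nt)

for eta in [0.0, 0.3, 0.5 + 0.2j]:
    print(eta, disc_integral(lambda z: bergman_disc(eta, z)))  # each value ~ 1
```

The value $1$ for every $\eta$ reflects the reproducing property applied to the constant function.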
\textbf{Remark}: In \cite{Liurenshan}, Ren-Shan Liu showed that if $f=z+t^2\bar z$, then $F(\mathbb D\times \mathbb D)$ is not biholomorphically equivalent to the bidisc, where $\mathbb D$ denotes the unit disc. Interested readers can find more information on holomorphic motions in \cite{Sl91} and \cite{ST86}.
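For Liu's example $f=z+t^2\bar z$, the coefficient $J=f_{\bar z}/f_z$ equals $t^2$, which depends on $t$ only and so matches the setting of Corollary \ref{co:last} with $a(t)=t^2$. The sketch below (our own illustration, not from \cite{Liurenshan}) approximates the Wirtinger derivatives $f_z=\tfrac12(f_x-if_y)$ and $f_{\bar z}=\tfrac12(f_x+if_y)$ by central differences and recovers this value.

```python
def f(z, t):
    # Liu's motion: f(z, t) = z + t^2 * conj(z)
    return z + t**2 * z.conjugate()

def wirtinger(func, z, h=1e-6):
    # central-difference approximation of the Wirtinger derivatives
    fx = (func(z + h) - func(z - h)) / (2 * h)
    fy = (func(z + 1j * h) - func(z - 1j * h)) / (2 * h)
    return 0.5 * (fx - 1j * fy), 0.5 * (fx + 1j * fy)   # (f_z, f_zbar)

z0, t0 = 0.3 - 0.4j, 0.5
f_z, f_zbar = wirtinger(lambda z: f(z, t0), z0)
J = f_zbar / f_z
print(J)  # ~ t0**2 = 0.25, independently of z0
```

Since $a(0)=0$ but $a\not\equiv0$, the motion is nontrivial, consistent with Liu's result.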
\section{Appendix}
\subsection{Variation of fibre integrals}\label{ss:VFI} \ \
\medskip
Let $\ensuremath{\mathbb{B}}$ be the unit ball in $\ensuremath{\mathbb{R}}^m$. Let $\{D_t\}_{t\in\ensuremath{\mathbb{B}}}$ be a family of smoothly bounded domains in $\ensuremath{\mathbb{R}}^n$. Put
\begin{equation*}
D:=\{(t,x)\in\ensuremath{\mathbb{R}}^{m+n}: x\in D_t, \ t\in\ensuremath{\mathbb{B}}\}.
\end{equation*}
Assume that there is a real valued function $\rho$ on $\mathbb B\times \mathbb R^n$ such that for each $t$ in $\ensuremath{\mathbb{B}}$, $\rho|_{D_t}$ is a smooth defining function of $D_t$.
We call $\{D_t\}_{t\in\ensuremath{\mathbb{B}}}$ a smooth family if there exists a fibre preserving diffeomorphism $\Phi$ from $\mathbb B\times D_0$ onto $D$ such that for each $1\leq j\leq m$, $\Phi_*(\partial/\partial t^j)$ extends to a smooth vector field on $\mathbb R^n$. Put
\begin{equation}\label{eq:boundary}
[D]:=\overline D\cap(\ensuremath{\mathbb{B}}\times\ensuremath{\mathbb{R}}^n), \ \delta D:=\partial D\cap(\ensuremath{\mathbb{B}}\times\ensuremath{\mathbb{R}}^n).
\end{equation}
Let $dx:=dx^1\wedge\cdots\wedge dx^n$ be the Euclidean volume form on $\ensuremath{\mathbb{R}}^n$. Fix a smooth function $f$ on a neighborhood of $[D]$. If $\{D_t\}_{t\in\ensuremath{\mathbb{B}}}$ is a smooth family then the fibre integrals
\begin{equation*}
F(t):=\int_{D_t}f(t,x)dx
\end{equation*}
depend smoothly on $t\in \ensuremath{\mathbb{B}}$. We shall introduce a natural way to compute the derivatives of $F(t)$ (see \cite{Sch12} for related results). For every fixed
$j\in\{1,\cdots, m\}$, let
\begin{equation*}
V_j:=\frac{\partial}{\partial t^j}-\sum v^{\lambda}_j\frac{\partial}{\partial x^\lambda}
\end{equation*}
be a smooth vector field on a neighborhood of $[D]$. We shall prove that:
\begin{theorem}\label{th:vfi} Let $\{D_t\}_{t\in\ensuremath{\mathbb{B}}}$ be a smooth family of smoothly bounded domains in $\ensuremath{\mathbb{R}}^n$. Assume that $V_j(\rho)=0$ on $\delta D$. Then we have
\begin{equation}\label{eq:vfi}
\frac{\partial F}{\partial t^j}(t)=\int_{D_t}L^t_{V_j}\left(f(t,x)dx\right)=\int_{D_t}L_{V_j}\left(f(t,x)dx\right),
\end{equation}
for every $t$ in $\ensuremath{\mathbb{B}}$, where $L^t_{V_j}:=i_t^*(L_{V_j})$.
\end{theorem}
\begin{proof} Without loss of generality, one may assume that $t=0$ and $j=1$. Since $V_1(\rho)$ vanishes on $\delta D$, the motion
\begin{equation*}
\Phi:(-1,1)\times D_0\rightarrow\ensuremath{\mathbb{R}}^{n}
\end{equation*}
of $D_0$ associated to $V_1$ is compatible with $\{D_t\}$, i.e.
\begin{equation*}
\Phi(a\times D_0)=D_{a\nu}, \ \nu=(1,0,\cdots,0)\in\ensuremath{\mathbb{R}}^m,
\end{equation*}
for every $a\in(-1,1)$. Since for every fixed $a\in(-1,1)$,
\begin{equation*}
\Phi^a:x\mapsto \Phi(a,x)
\end{equation*}
is a $C^{\infty}$ isomorphism from $D_0$ to $D_{a\nu}$, we have
\begin{equation}\label{eq:derivative}
\frac{\partial F}{\partial t^1}(0)=\lim_{0\neq a\to 0}\int_{D_0}\frac{f(a\nu,\Phi^a(x))d\Phi^a(x)-f(0,x)dx}{a}.
\end{equation}
Since $V_1$ and $f$ are smooth up to the boundary, we have
\begin{equation}\label{eq:derivative1}
\frac{\partial F}{\partial t^1}(0)=\int_{D_0}\lim_{0\neq a\to 0}\frac{f(a\nu,\Phi^a(x))d\Phi^a(x)-f(0,x)dx}{a}.
\end{equation}
By definition of Lie derivative,
\begin{equation}\label{eq:derivative2}
L_{V_1}\left(f(t,x)dx\right)(0,x)=\lim_{0\neq a\to 0}\frac{[(\Psi^a)^*(fdx)](0,x)-f(0,x)dx}{a},
\end{equation}
where
\begin{equation*}
\Psi^a:(b\nu,\Phi^b(x))\mapsto(b\nu+a\nu,\Phi^{b+a}(x)), \ (b,x)\in (-1+|a|,1-|a|)\times D_0.
\end{equation*}
Since
\begin{equation*}
i_0^*\left\{[(\Psi^a)^*(fdx)](0,x)-f(a\nu,\Phi^a(x))d\Phi^a(x)\right\}=0,
\end{equation*}
\eqref{eq:vfi} follows from \eqref{eq:derivative1} and \eqref{eq:derivative2}.
\end{proof}
Now assume that $m=2$, put
\begin{equation*}
\frac{\partial}{\partial t}:=\frac12\left(\frac{\partial}{\partial t^1}-i\frac{\partial}{\partial t^2}\right), \
\frac{\partial}{\partial \bar t}:=\frac12\left(\frac{\partial}{\partial t^1}+i\frac{\partial}{\partial t^2}\right).
\end{equation*}
Let
\begin{equation*}
V=\frac{\partial}{\partial t}-\sum v^\lambda\frac{\partial}{\partial x^\lambda}
\end{equation*}
be a smooth vector field on a neighborhood of $[D]$. If $V(\rho)$ vanishes on $\delta D$, then both $2{\rm Re}V$ and $-2{\rm Im}V$ satisfy the assumption of Theorem~\ref{th:vfi}. Thus we have:
\begin{corollary}\label{co:vfi} If $V(\rho)$ vanishes on $\delta D$ then
\begin{equation*}
\frac{\partial F}{\partial t}(t)=\int_{D_t} L^t_{V}\left(f(t,x)dx\right), \
\frac{\partial F}{\partial \bar t}(t)=\int_{D_t} L^t_{\overline V}\left(f(t,x)dx\right),
\end{equation*}
for every $t\in \mathbb B$.
\end{corollary}
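In the simplest one-dimensional situation, Theorem \ref{th:vfi} reduces to the classical Leibniz rule for differentiating an integral over a moving interval: the Lie-derivative term splits into an interior $\partial_t$-term and a boundary term carried by the moving endpoint. The sketch below checks this numerically; the fibres $D_t=(0,1+t)$ and the integrand $f(t,x)=\sin(t+x)$ are our own choices for illustration.

```python
import math

# F(t) is the fibre integral of f(t, x) = sin(t + x) over D_t = (0, 1 + t)
def F(t, n=20000):
    a = 1.0 + t                        # moving endpoint of D_t
    h = a / n
    return h * sum(math.sin(t + (i + 0.5) * h) for i in range(n))

t0, dt = 0.7, 1e-5
numeric = (F(t0 + dt) - F(t0 - dt)) / (2 * dt)   # dF/dt by central differences

# Leibniz rule (the 1-D case of the theorem): interior term + boundary term
a = 1.0 + t0
h = a / 20000
interior = h * sum(math.cos(t0 + (i + 0.5) * h) for i in range(20000))
boundary = math.sin(t0 + a)            # the endpoint moves with unit speed
print(numeric, interior + boundary)    # the two values agree
```

Here the closed form is $F'(t)=2\sin(1+2t)-\sin t$, and both computations reproduce it.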
\subsection{Stability of the Bergman kernel}\label{ss:SBK} \ \
\medskip
We shall give a short account of Hamilton's theory on regularity properties of families of non-coercive boundary value problems. By Lemma 2.1 in \cite{Bern06}, stability of Bergman kernels follows directly from stability of solutions $u^t$ of a family of $\ensuremath{\overline\partial}$-Neumann problems $\square^t(\cdot)=f^t$. But it is not easy to prove regularity of $u^t$ by the standard method. In fact, if we want to use
\begin{equation}\label{eq:stability}
||\square^t(u^t-u^s)||=||f^t-f^s-(\square^t-\square^s)u^s||,
\end{equation}
to estimate $||u^t-u^s||$ then we have to find a natural connection between the domain of $\square^t$ and the domain of $\square^s$ (i.e., $u^s$ may not be in the domain of $\square^t$).
Hamilton \cite{Hamilton79} found a more natural way to study the regularity properties of families of non-coercive boundary value problems (not only for the $\ensuremath{\overline\partial}$-Neumann problem). For the reader's convenience we give a sketch of Hamilton's idea.
Instead of considering $\square^t$ (whose domain satisfies the so called $\ensuremath{\overline\partial}$-Neumann condition), Hamilton considered the full Laplace operator $\widetilde{\square^t}$ (whose domain contains all forms smooth up to the boundary). Let $u^t$ be a form smooth up to the boundary. In general, the Sobolev norm of $\widetilde{\square^t}(u^t)$ cannot control the Sobolev norm of $u^t$; for that, $u^t$ has to be in the domain of $\square^t$ (see \cite{Folland-Kohn72}). Thus two more operators (sending forms on $\overline{D_t}$ to forms on the boundary of $D_t$) are used in Hamilton's paper, i.e., he considered the full $\ensuremath{\overline\partial}$-Neumann problem
\begin{equation}\label{eq:full-Neumann}
\mathfrak{S}^t(\cdot):=\left(\widetilde{\square^t},(\ensuremath{\overline\partial}^t\rho)\vee,(\ensuremath{\overline\partial}^t\rho)\vee\ensuremath{\overline\partial}^t\right)(\cdot)=f^t,
\end{equation}
where $$(\ensuremath{\overline\partial}^t\rho)\vee:=(\ensuremath{\overline\partial}^t\rho\wedge\cdot)^*.$$ Now the domain of $\mathfrak{S}^t$ is $C^{\infty}_{\bullet,\bullet}(\overline{D_t})$ for each $t$. Choose a $C^{\infty}$ trivialization mapping
$$\mathbb B\times D_0\simeq D,$$
then the domain of $\mathfrak{S}^t$ can be seen as a fixed space $C^{\infty}_{\bullet,\bullet}(\overline{D_0})$. Moreover, by \cite{Hamilton79}, the constant in the basic estimates for $\mathfrak{S}^t$ can be chosen to be independent of $t\in \mathbb B$. Thus \eqref{eq:stability} applies. The interested reader is referred to that paper for further information and a clear proof.
\subsection{$L^2$-estimate for $\ensuremath{\overline\partial} a=\partial_{\phi} b+c$}\label{ss:dbar} \ \
\medskip
We shall prove a generalization of Demailly's theorem (see \cite{Demailly82}, \cite{Hormander65} or \cite{Bern10}) in this section.
\begin{theorem}\label{th:L2} Let $(L, h)$ be a Hermitian line bundle over an $n$-dimensional complete K\"ahler manifold $(X,\omega)$. Let $v$ be a smooth $\ensuremath{\overline\partial}$-closed $L$-valued $(n,1)$-form. Assume that
\begin{equation*}
i\Theta(L,h) >0 \ \text{on} \ X,\ (resp. \ i\Theta(L,h) \equiv 0 \ \text{on} \ X )
\end{equation*}
and
\begin{equation*}
I(v):= \inf_{v=\partial_\phi b+c} ||b||^2_{\omega} + ||c||^2_{i\Theta(L,h)} <\infty, \ (resp. \ I(v):= \inf_{v=\partial_\phi b} ||b||^2_{\omega}<\infty ) .
\end{equation*}
Then there exists a smooth $L$-valued $(n,0)$-form $a$ on $X$ such that $\ensuremath{\overline\partial} a=v$ and
\begin{equation}\label{eq:L2q}
||a||_{\omega}^2 \leq I(v).
\end{equation}
\end{theorem}
\begin{proof} We shall only prove the $i\Theta(L,h) >0$ case, since the $i\Theta(L,h) \equiv 0$ case can be proved by a similar argument. By H\"ormander's theorem and the standard density lemma for complete K\"ahler manifold, it suffices to prove that,
\begin{equation}\label{eq:hormander}
|(\partial_\phi b+c,g)_\omega|^2\leq (||b||^2_\omega + ||c||^2_{i\Theta(L,h)})(||\ensuremath{\overline\partial}^*g||_\omega^2+||\ensuremath{\overline\partial} g||^2_\omega),
\end{equation}
for every smooth $L$-valued $(n,1)$-form $g$ with compact support in $X$. Notice that
\begin{equation*}
(\partial_\phi b+c,g)_\omega=(b,\partial_\phi^*g)_\omega+(c,g)_\omega.
\end{equation*}
Hence
\begin{equation*}
|(\partial_\phi b+c,g)_\omega|^2\leq (||b||^2_\omega + ||c||^2_{i\Theta(L,h)})(||\partial_\phi^*g||^2_{\omega}+([i\Theta(L,h),\Lambda_\omega]g,g)_{\omega}),
\end{equation*}
where $\Lambda_\omega$ denotes the adjoint of $\omega\wedge$. Thus \eqref{eq:hormander} follows from the Bochner-Kodaira-Nakano formula. The proof is complete.
\end{proof}
\label{app:mult_to_add}
Here we show how to obtain a multiplicative approximation for SFM
from our $\tilde{O}(n^{5/3}\cdot\time/\varepsilon^{2})$ additive-approximate
SFM algorithm. Because the minimizer of $f$ is scale- and additive-invariant,
it is necessary to make certain regularity assumptions on $f$ to
get a nontrivial result. This is akin to submodular function maximization
where constant factor approximation is possible only if $f$ is nonnegative
everywhere \cite{BFNS12,FMV07}. For SFM, by considering $f-\OPT$
we see that finding a multiplicative-approximate solution and an exact
solution are equivalent for general $f$. (Indeed most submodular
optimization problems permit multiplicative approximation only in
terms of the range of values.)
Similar to submodular maximization, we assume $f$ to be \textit{nonpositive}.
Then $f'=f/\OPT$ has range $[-1,0]$ and has minimum value -1 so
our additive-approximate algorithm immediately yields multiplicative
approximation. This requires knowing $\OPT$ (or some constant factor
approximation of it). Alternatively we can ``binary search'' to get factor-2
close to $\OPT$ by trying different powers of 2. This would lead
to a blowup of $O(\log\OPT)$ in the running time.
\section{Approximate SFM via Fujishige-Wolfe }
\label{app:approx_sfm_wolfe}
Here we show how Frank-Wolfe and Wolfe can give $\eps$-additive approximations
for SFM. We know that both algorithms in $O(1/\delta)$ iterations
can return a point $x\in B_{f}$, the base polyhedron associated with
$f$, such that $x^{\top}x\leq p^{\top}p+\delta$ for all $p\in B_{f}$.
Here we are using the fact implied by Lemma \ref{lem:bounded-subgradients}
that the diameter of the base-polytope for functions with bounded
range is bounded (note that vertices of the base polytope correspond
to gradients of the Lovasz extension). The robust Fujishige Theorem
(Theorem 5, \cite{CJK14}) implies that we can get a set $S$ such
that $f(S)\le\OPT+2\sqrt{n\delta}.$ Setting $\delta=\eps^{2}/4n$
gives the additive approximation in $O(n\eps^{-2})$ gradient calls.
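A minimal sketch of the pipeline described above: Frank-Wolfe iterations toward the min-norm point of the base polytope $B_f$, with Edmonds' greedy algorithm as the linear optimization oracle, followed by Fujishige-style threshold rounding. The tiny submodular function (a path cut minus a modular term), the iteration count, and the step sizes are our own illustrative choices.

```python
import itertools

# Toy submodular f on V = {0, 1, 2} (our own example): cut of the path 0-1-2
# minus the modular term b = [1.5, -3, 0]; note f(emptyset) = 0 as required.
EDGES = [(0, 1), (1, 2)]
B = [1.5, -3.0, 0.0]
N = 3

def f(S):
    S = set(S)
    cut = sum(1.0 for a, b in EDGES if (a in S) != (b in S))
    return cut - sum(B[i] for i in S)

def greedy_vertex(x):
    # Edmonds' greedy algorithm: the vertex of B_f minimizing <x, q> is read
    # off the marginals of f along the coordinates of x sorted increasingly.
    order = sorted(range(N), key=lambda i: x[i])
    q, prefix = [0.0] * N, []
    for i in order:
        q[i] = f(prefix + [i]) - f(prefix)
        prefix.append(i)
    return q

# Frank-Wolfe toward the min-norm point of B_f
x = greedy_vertex([0.0] * N)
for t in range(1, 2000):
    q = greedy_vertex(x)
    gamma = 2.0 / (t + 2)
    x = [(1 - gamma) * xi + gamma * qi for xi, qi in zip(x, q)]

# Fujishige-style rounding: try every threshold set of the (near) min-norm point
order = sorted(range(N), key=lambda i: x[i])
best = min((order[:k] for k in range(N + 1)), key=f)
brute = min(f(S) for r in range(N + 1) for S in itertools.combinations(range(N), r))
print(best, f(best), brute)  # the rounded set matches the brute-force optimum
```

On this instance the min-norm point is $(-\tfrac12,1,1)$ and rounding returns the true minimizer $\{0\}$.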
\section{Faster Algorithm for Directed Minimum Cut}
\label{sec:minimum_cut}
Here we show how to easily obtain faster approximate submodular minimization
algorithms in the case where the function is an explicitly given
$s$-$t$ cut function. This provides a short illustration of the reasonable
fact that, when given more structure, our submodular minimization
algorithms can be improved.
For the rest of this section, let $G=(V,E,w)$ be a graph with vertices
$V$, directed edges $E\subseteq V\times V$, and edge weights $w\in\R_{\geq0}^{E}$.
Let $s,t\in V$ be two special vertices, $A\defeq V\setminus\{s,t\}$,
and for all $S\subseteq A$ let $f(S)$ be defined as the total weight
of the edges leaving the set $S\cup\{s\}$, i.e.\ where the tail
of the edge is in $S\cup\{s\}$ and the head of the edge is in $V\setminus(S\cup\{s\})$.
The function $f$ is a well known submodular function and minimizing
it corresponds to computing the minimum $s$-$t$ cut, or correspondingly
the maximum $s$-$t$ flow.
Note that clearly, $f(S)\leq W$ where $W=\sum_{e\in E}w_{e}$. Furthermore,
we may assume without loss of generality that no edge enters $s$ or leaves
$t$, since such edges never leave $S\cup\{s\}$. If we pick $S$ by including
each vertex of $A$ independently with probability $\frac{1}{2}$, then each
edge leaves $S\cup\{s\}$ with probability at least $\frac{1}{4}$, so
$\E f(S)\geq\frac{1}{4}W$. Consequently, $\frac{1}{4}W\leq\max_{S\subseteq A}f(S)\leq W$,
and if we want to scale $f$ to make it have values in $[-1,1]$ we
need to divide by something that is $W$ up to a factor of $4$.
Now, note that we can easily extend this problem to a continuous problem
over the reals. Let $x^{+}$ denote $x$ if $x\ge0$ and $0$ otherwise.
Furthermore, for all $x\in\R^{A}$ let $y(x)\in\R^{V}$ be given by
$y(x)_{i}=x_{i}$ if $i\in A$, $y(x)_{s}=0$, $y(x)_{t}=1$, and
let
\[
g(x)\defeq\sum_{(a,b)\in E}w_{ab}(y(x)_{b}-y(x)_{a})^{+}\,.
\]
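On a toy instance one can check that $g$ really extends $f$: with the convention $y(x)_{s}=0$, an element $i$ belongs to $S$ exactly when the corresponding coordinate is $0$, and then $g$ at the resulting $0/1$ vector equals the weight of the edges leaving $S\cup\{s\}$. The graph below is our own example; only the sign convention, which matches $y(x)_{s}=0$ and $y(x)_{t}=1$ above, is being illustrated.

```python
import itertools

# Toy digraph (our own example): s = 0, t = 3, A = {1, 2}; (tail, head, weight)
EDGES = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 2, 1.0), (1, 3, 2.0)]
A = [1, 2]
S_NODE, T_NODE = 0, 3

def f(S):
    # weight of the edges leaving S together with s
    inside = set(S) | {S_NODE}
    return sum(w for a, b, w in EDGES if a in inside and b not in inside)

def g(x):
    # continuous extension: y_s = 0, y_t = 1, and x supplies y on A;
    # x_i = 0 marks membership of i in S under this sign convention.
    y = {S_NODE: 0.0, T_NODE: 1.0, **{i: xi for i, xi in zip(A, x)}}
    return sum(w * max(y[b] - y[a], 0.0) for a, b, w in EDGES)

for r in range(len(A) + 1):
    for S in itertools.combinations(A, r):
        x = [0.0 if i in S else 1.0 for i in A]
        assert abs(g(x) - f(S)) < 1e-12
print("g agrees with f on all indicator vectors")
```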
Clearly, minimizing $g(x)$ over $[0,1]^{A}$ is equivalent to minimizing
$f(S)$. Furthermore the subgradient for $g$ decomposes into subgradients
for each edge $(a,b)\in E$ each of which is a vector with 2 non-zero
entries and norm at most $O(w_{ab})$. If we picking a random edge
with probability proportional to $w_{ab}$ and output its subgradient
scaled by $W/w_{ab}$ subgradient this yields a stochastic subgradient
oracle $\tilde{g}(x)$ with $\E\norm{\tilde{g}(x)}_{2}^{2}=O(\sum_{(a,b)\in E}\frac{w_{ab}}{W}((W/w_{ab})\cdot w_{ab})^{2})=O(W^{2})$.
Consequently, by Theorem~\ref{thm:subgradient_descent} setting $R^{2}=O(|V|)$
we see that we can compute $z$ with $g(z)-\min_{x}g(x)\leq W\epsilon$
in $O(|V|\epsilon^{-2})$ iterations. Thus, if we scaled $g$ to make it $[-1,1]$
valued, the time to compute an $\epsilon$-approximate solution would
be $O(|V|\epsilon^{-2})$.
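The edge-sampling estimator described above can be checked for unbiasedness on a toy graph: sampling an edge with probability $w_{ab}/W$ and rescaling its per-edge subgradient by $W/w_{ab}$ gives a stochastic subgradient whose empirical mean approaches the exact subgradient of $g$. The graph, the base point, and the sample count below are our own choices for illustration.

```python
import random

# Toy digraph (our own example): s = 0, t = 3, A = {1, 2}; (tail, head, weight)
EDGES = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 2, 1.0), (1, 3, 2.0)]
A = [1, 2]
W = sum(w for _, _, w in EDGES)
WEIGHTS = [w for _, _, w in EDGES]

def edge_subgrad(edge, x):
    # subgradient, w.r.t. the coordinates on A, of the term w * (y_b - y_a)^+
    a, b, w = edge
    y = {0: 0.0, 3: 1.0, 1: x[0], 2: x[1]}
    g = {i: 0.0 for i in A}
    if y[b] > y[a]:
        if b in g: g[b] += w
        if a in g: g[a] -= w
    return [g[i] for i in A]

def full_subgrad(x):
    tot = [0.0] * len(A)
    for e in EDGES:
        tot = [u + v for u, v in zip(tot, edge_subgrad(e, x))]
    return tot

random.seed(0)
x0 = [0.3, 0.7]
exact = full_subgrad(x0)                 # [-1.0, -1.0] on this graph
n_samples = 100000
avg = [0.0] * len(A)
for _ in range(n_samples):
    e = random.choices(EDGES, weights=WEIGHTS)[0]
    scale = W / e[2]                     # importance weight: keeps E[sample] = exact
    avg = [u + scale * v / n_samples for u, v in zip(avg, edge_subgrad(e, x0))]
print(exact, avg)  # avg ~ exact
```

Each sample has norm at most $\sqrt2\,W$, matching the $\E\norm{\tilde g(x)}_2^2=O(W^2)$ bound above.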
This shows that an explicit instance of minimum $s$-$t$ cut does
not highlight the efficacy of the approach in this paper. Instantiating
our algorithm naively would give an $\otilde(|E|\cdot|V|^{5/3}\cdot\epsilon^{-2})$
time algorithm to achieve additive error $\epsilon$. Nevertheless, if instead
we were simply given access to an $\time$-time evaluation oracle for $f$
and the graph was dense, then even in this instance, without knowing the
structure a priori, we do not know how to improve upon the
$O(\time\cdot|V|^{5/3}\epsilon^{-2})$ time bound achieved in this paper
(though no serious attempt was made to do this). In short, there may be a
gap between explicitly given structured instances of submodular functions
and algorithms that work with general evaluation oracles, as focused on in
this paper.
\section{Certificates for Approximate SFM }
\label{app:approx_sfm_certificates}
The only certificate we know to prove that the optimum value of SFM
is $\geq F$ is to show a certain vector $x$ lies in the base polyhedron.
For example, one proof via Edmonds' Theorem \cite{E70} is by demonstrating
$x\in B_{f}$ whose negative entries sum to $\geq F$. The only way
to do this is via Carath\'eodory's Theorem which requires $n$ vertices
of $B_{f}$, each of which requires $n$ function evaluations. For
approximate SFM, one thought might be to use approximate Carath\'eodory
Theorems \cite{Berman15,MPVW15} to describe a nearby point $x'$.
Unfortunately, for $\eps$-additive SFM approximation, one needs $x'$ and
$x$ to be close in $\ell_{1}$-norm and approximate Carath\'eodory
works only for $\ell_{2}$-norm and higher. If one uses the $\ell_{2}$-norm
approximation, then unfortunately one doesn't get anything better
than quadratic. More precisely, approximate Carath\'eodory states that
one can obtain $||x'-x||_{2}\leq\delta$ with support of $x'$ being
only $O(1/\delta^{2})$-sparse. But to get $\ell_{1}$ approximations,
we need to set $\delta=\eps\sqrt{n}$ leading to linear sized support
for $x'$. The approximate Carath\'eodory Theorems are tight \cite{MPVW15}
for general polytopes. Whether one can get better theorems for the
base polyhedron is an open question.
\section{Pseudocodes for Our Algorithms}
\label{app:algorithm_pseudocode}
We provide guiding pseudocodes for our two algorithms.
\label{alg:exact}
\begin{algorithm}[H]
\label{alg:exact-1}\textbf{Initialization.}
\begin{itemize}
\item \setlength{\itemsep}{-1mm}$x^{(1)}\defeq0^{n}$
\item Evaluate $g^{(1)}$, the Lovasz subgradient at $x^{(1)}$. \emph{(Takes
$O(n\cdot\time)$ time. Store as (coordinate, value) pair in set $S^{(1)}.$
$|S^{(1)}|\leq3M.$ )}
\item Store $x^{(1)}$ in a balanced Binary search tree. At each node store
the \textbf{value} that is the sum of the gradient coordinates corresponding to its children in the tree. \emph{(Takes $O(n)$ time to build.)}
\item Set $T\defeq20nM^{2}.$ Set $\eta\defeq\frac{\sqrt{n}}{18M}$.
\end{itemize}
\textbf{For }$t=1,2,\ldots,T:$
\begin{itemize}
\item \setlength{\itemsep}{-1mm}Define $e^{(t)}$ which is non-zero in
coordinates corresponding to $S^{(t)}$. \emph{(Takes time $|S^{(t)}|\le3M$.)}
\begin{itemize}
\item if $g_{i}^{(t)}>0,$ then $e_{i}^{(t)}=\min(x_{i}^{(t)},\eta g_{i}^{(t)})$
\item if $g_{i}^{(t)}<0,$ then $e_{i}^{(t)}=\max(x_{i}^{(t)}-1,\eta g_{i}^{(t)})$
\end{itemize}
\item \textbf{Update}($x^{(t)},e^{(t)},S^{(t)}$) to get $(x^{(t+1)},g^{(t+1)},S^{(t+1)})$,
where $g^{(t+1)}$ is stored as (coordinate, value) pairs in $S^{(t+1)}$,
as described in Lemma \ref{lem:subgradient-update}. \emph{(Update
takes time $O(M\log n+M\cdot\time+M\cdot\time\log n)$)}
\end{itemize}
Obtain the $O(n)$ sets given the order of $x_{T}$,
that is, if $P$ is the permutation corresponding to $x_{T}$, then
the sets are $\{P[1]\},\{P[1],P[2]\},\ldots,\{P[1],\ldots,P[n]\}$.
Return the minimum valued set among them.
\caption{Near Linear Time Exact SFM Algorithm. }
\end{algorithm}
\begin{algorithm}[H]
\textbf{Initialization}
\begin{itemize}
\item \setlength{\itemsep}{-1mm}Set $N\defeq10n\log^{2}n\eps^{-2},$ $T=\ceil{n^{1/3}}$
\item Initialize $x$ as the all zeros vector and store it in a BST.
\end{itemize}
For $i=1,2,\ldots N/T:$
\begin{itemize}
\item \setlength{\itemsep}{-1mm}$x^{(1)}\defeq$the current $x$.
\item Compute $g^{(1)}$, the gradient to the Lovasz extension given $x^{(1)}.$
\emph{//This takes $O(n\time)$ time.}
\item Sample $z^{(1)}$ by picking $j\in[n]$ with probability proportional
to $\abs{g_{j}^{(1)}}$ and returning $z^{(1)}\defeq\norm{g^{(1)}}_{1}sign(g_{j}^{(1)})\cdot\mathbf{1}_{j}$.
\emph{//This takes $O(n\time)$ time.}
\item Set $\tilde{g}^{(1)}\defeq z^{(1)}.$
\item For $t=1,2,\ldots,T:$
\begin{itemize}
\item \setlength{\itemsep}{-1mm}Define $e^{(t)}$ as in Algorithm 1 using
$\tilde{g}^{(t)}$ instead of $g^{(t)}.$\emph{ //This takes time
$O(\supp(\tilde{g}^{(t)}))$ which will be $O(t^{2})$}
\item Obtain $z^{(t)}$ using \textbf{Sample}($x^{(t)},e^{(t)},\ell=t)$
where \textbf{Sample} is the randomized procedure described in Lemma
\ref{lem:subgradient-update_additive-1}. \emph{//This takes $O(t^{2}\time\log n)$
time.}
\item Update $\tilde{g}^{(t+1)}\defeq\sum_{s\leq t}z^{(s)}$. \emph{//This
takes $O(t^{2}\log n)$ time to update the relevant BSTs.}
\end{itemize}
\item Set current $x$ to $x_{T}.$
\end{itemize}
Obtain the $O(n)$ candidate sets given by the order of the final
$x$, that is, if $P$ is the permutation corresponding to $x$, then
the candidate sets are $P[1],\ldots,P[n]$. Return the minimum-valued set
among them.
\caption{Subquadratic Approximate SFM Algorithm.}
\end{algorithm}
\section{Introduction}
Submodular functions are set functions that prescribe a value to every
subset of a finite universe $U$ and have the following diminishing
returns property: for every pair $S\subseteq T\subseteq U$, and for
every element $i\notin T$, $f(S\cup\{i\})-f(S)\geq f(T\cup\{i\})-f(T)$.
Such functions arise in many applications. For instance, the utility
functions of agents in economics are often assumed to be submodular,
the cut functions in directed graphs or hypergraphs are submodular,
the entropy of a given subset of random variables is submodular, etc.
Submodular functions have been extensively studied for more than
five decades \cite{Choquet55,E70,Lovasz83,fujishigeB,Mccormick06}.
One of the most important problems in this area is submodular function
minimization (SFM, henceforth) which asks to find the set $S$ minimizing
$f(S)$. Note that submodular functions need not be monotone and therefore
SFM is non-trivial. In particular, SFM generalizes the minimum cut
problem in directed graphs and hypergraphs, and is a fundamental problem
in combinatorial optimization. More recently, SFM has found many
applications in areas such as image segmentation \cite{BVZ01,KKT08,KT10}
and speech analysis \cite{LB10,LB11}. Owing to these large scale
problems, fast SFM algorithms are highly desirable.
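As a quick sanity check of the diminishing-returns definition, the sketch below (a toy example of ours, not part of any algorithm in this paper) verifies it exhaustively for the cut function of a small directed graph:

```python
from itertools import combinations

def cut_value(arcs, S):
    # Directed cut function: number of arcs leaving the set S.
    return sum(1 for (u, v) in arcs if u in S and v not in S)

def is_submodular(f, universe):
    # Brute-force check of f(S + i) - f(S) >= f(T + i) - f(T)
    # for all S subseteq T and i not in T (feasible only for tiny universes).
    n = len(universe)
    subsets = [set(c) for r in range(n + 1) for c in combinations(universe, r)]
    for S in subsets:
        for T in subsets:
            if not S <= T:
                continue
            for i in universe:
                if i in T:
                    continue
                if f(S | {i}) - f(S) < f(T | {i}) - f(T):
                    return False
    return True
```

The same checker rejects, e.g., $f(S)=|S|^{2}$, whose marginal gains increase.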
We assume access to an \emph{evaluation oracle }for the submodular
function, and use $\runtime$ to denote the time taken per evaluation.
An amazing property of submodular functions is that SFM can be solved
exactly with polynomially many queries and in polynomial time. This was
first established via the ellipsoid algorithm \cite{GLS81} in 1981,
and the first polynomial combinatorial algorithms were obtained \cite{Cun85,IFF01,S00,IO09}
much later.
The current fastest algorithms for SFM are by the second, third, and
fourth authors of this paper \cite{LSW15} who give $O(n^{2}\log nM\cdot\time+n^{3}\log^{O(1)}nM)$
time and $O(n^{3}\log^{2}n\cdot\time+n^{4}\log^{O(1)}n)$ time algorithms
for SFM. Here $M$ is the largest absolute value of the integer-valued
function. The former running time is a (weakly) polynomial running
time, i.e. it depends polylogarithmically on $M,$ while the latter
is a strongly polynomial running time, i.e. it does not depend on
$M$ at all. Although good in theory, known implementations of the
above algorithms are slow in practice \cite{FHI06,FI11,Bac13,CJK14}.
A different algorithm, the so-called Fujishige-Wolfe algorithm \cite{Wolfe76,Fujishige80}
seems to have the best empirical performance \cite{Bac13,JLB11,Bilmes15}
among general purpose SFM algorithms. Recently the Fujishige-Wolfe
algorithm and variants were shown \cite{CJK14,LJ15} to run in $O((n^{2}\cdot\time+n^{3})M^{2})$
time, proving them to be \emph{pseudopolynomial} time algorithms,
that is having running time $O(\poly(n,\time,M))$.
In this paper we also consider approximate SFM. More precisely, for
submodular functions whose values are in the range $[-1,+1]$ (which
is without loss of generality by scaling), we want to obtain \emph{additive
}approximations\footnote{We also show in Appendix~\ref{app:mult_to_add} how to obtain a multiplicative
approximation under a mild condition on $f$. Such a condition is
necessary as multiplicative approximation is ill defined in general.}, that is, return a set $S$ with $f(S)\leq\OPT+\eps.$
Although approximate SFM has not been explicitly studied before, previous
works \cite{LSW15,Bac13,CJK14} imply $O(n^{2}\time\log^{O(1)}(n/\eps))$-time
and $O((n^{2}\cdot\time+n^{3})/\eps^{2})$-time algorithms. Table
\ref{tab:time} summarizes the above discussion.
\begin{table}[h]
\centering{}%
\begin{tabular}{|c||c|c|c|}
\hline
Regime & {\small{}Previous Best Running Time} & {\small{}Our Result} & {\small{}Techniques}\tabularnewline
\hline
\hline
{\small{}Strongly Polynomial} & \multicolumn{2}{c|}{{\small{}$O(n^{3}\log^{2}n\cdot\time+n^{4}\log^{O(1)}n)$ \cite{LSW15}}} & {\small{}Cutting Plane + Dimension Collapsing}\tabularnewline
\hline
{\small{}Weakly Polynomial} & \multicolumn{2}{c|}{{\small{}$O(n^{2}\log nM\cdot\time+n^{3}\log^{O(1)}nM)$\cite{LSW15}}} & {\small{}Cutting Plane}\tabularnewline
\hline
{\small{}Pseudo Polynomial} & {\small{}$O((n^{2}\cdot\time+n^{3})M^{2})$\cite{CJK14,LJ15}} & \textbf{\small{}$\tilde{{O}}(nM^{3}\cdot\time)$} & {\small{}See Section~}\ref{sec:technique_overview}\tabularnewline
\hline
\multirow{1}{*}{{\small{}$\eps$-Approximate}} & \multirow{1}{*}{{\small{}$O(n^{2}\cdot\time/\varepsilon^{2})$ \cite{CJK14,LJ15,Bac13}}} & \textbf{\small{}$\tilde{O}(n^{5/3}\cdot\time/\varepsilon^{2})$} & {\small{}See Section~}\ref{sec:technique_overview}\tabularnewline
\hline
\end{tabular}\caption{\label{tab:time} {\small{}Running times for minimizing a submodular
function defined on a universe of size $n$ that takes integer values
between $-M$ and $M$ (except for $\varepsilon$-approximate algorithms
we assume the submodular function is real-valued with range in $[-1,1]$).
$\text{EO}$ denotes the time to evaluate the submodular function
on a set.}}
\end{table}
In particular, the best known dependence on $n$ is \emph{quadratic}
even when the exact algorithms are allowed to be pseudopolynomial,
or when the $\eps$-approximation algorithms are allowed to have a
polynomial dependence on $\eps$. This quadratic dependence seems
to be a barrier. For exact SFM, the smallest known non-deterministic
proof \cite{E70,Cun85} that certifies optimality requires $\Theta(n^{2})$
queries, and even for the approximate case, nothing better is known
(see Appendix~\ref{app:approx_sfm_certificates}). Furthermore, in
this paper we \emph{prove} that a large class of algorithms, which
includes the Fujishige-Wolfe algorithm \cite{Wolfe76,Fujishige80}
and the cutting plane algorithms of Lee et al.~\cite{LSW15}, as stated
need to make $\Omega(n^{2})$ queries. More precisely, these algorithms
do not exploit the full power of submodularity and work even with
the weaker model of having access only to the ``subgradients of the
Lovasz Extension'' where each subgradient takes $\Theta(n)$ queries.
We prove that any algorithm must make $\Omega(n)$ subgradient calls
implying the quadratic lower bound for this class of algorithms. Furthermore,
our lower bound holds even for functions with range $\{-1,0,1\}$,
and so the lower bound trivially extends to approximate SFM as
well.
\subsection{Our Results}
In this paper, we describe exact and approximate algorithms for SFM
which run in time subquadratic in the dimension $n$. Our first result
is a pseudopolynomial time exact SFM algorithm with\emph{ nearly linear
dependence }on $n$. More precisely, for any integer valued submodular
function with maximum absolute value $M$, our algorithm returns the
optimum solution in $O(nM^{3}\log n\cdot\time)$ time. This
has a few consequences for the complexity theory of SFM. First, this
gives a better dependence on $n$ for pseudopolynomial time algorithms.
Second, this shows that to get a super-linear lower bound on the query
complexity of SFM, one needs to consider a function with super-constant
function values.\footnote{Conversely, \cite[Thm 5.7]{harvey2008matchings} shows that we need
at least $n$ evaluation oracle queries to minimize a submodular
function with range in $\{0,1,2\}$.} Third, this completes the following picture on the complexity of
SFM: the best known strongly polynomial time algorithms have query
complexity $\tilde{O}(n^{3})$, the best known (weakly) polynomial
time algorithms have query complexity $\tilde{O}(n^{2})$, and our
result implies the best pseudopolynomial time algorithm has query
complexity $\tilde{O}(n)$.
Our second result is a \emph{subquadratic approximate SFM }algorithm\emph{.
}More precisely, we give an algorithm which in $\tilde{O}(n^{5/3}\time/\eps^{2})$
time, returns an $\eps$-additive approximate solution. To break
the quadratic barrier, which arises from the need to compute $\Omega(n)$
subgradients, each of which individually we do not know how to compute
faster than $\Omega(n\cdot\time)$ time, we wed continuous optimization
techniques with properties deduced from submodularity and simple
data structures. These allow us to compute and use gradient updates
in a much more economical fashion. We believe that the ability
to obtain subquadratic approximate algorithms for approximate submodular
minimization is an interesting structural result that could have further
implications.\footnote{Note that simple graph optimization problems, such as directed minimum
$s$-$t$ cut, are not among these (see Appendix~\ref{sec:minimum_cut}).}
Finally, we show how to improve upon these results further if we know
that the optimal solution is sparse. This may be a regime of interest
for certain applications where the solution space is large (e.g. structured
predictions have exponentially large candidate sets \cite{prasad2014submodular}),
and as far as we are aware, no other algorithm gives sparsity-dependent
results.
\subsection{Overview of Techniques}
\label{sec:technique_overview}
In a nutshell, all our algorithms are projected, stochastic subgradient
descent algorithms on the Lovasz extension $\hat{f}$ of a submodular
function with economical subgradient updates. The latter crucially
uses submodularity and serves as the point of departure from previous
black-box continuous optimization based methods. In this section,
we give a brief overview of our techniques.
The Lovasz extension $\hat{f}$ of a submodular function is a non-smooth
convex function whose (approximate) minimizers lead to (approximate)
SFM. Subgradient descent algorithms maintain a current iterate $x^{(t)}$ and
take a step in the negative direction of a subgradient $g(x^{(t)})$
at $x^{(t)}$ to get the next iterate $x^{(t+1)}$. In general, the
subgradient of a Lovasz extension takes $O(n\time)$ to compute. As
stated above, the $\Omega(n)$ lower bound on the number of iterations
needed implies that if we naively recompute the subgradients at every
iteration, we cannot beat the quadratic barrier. Our main technical
contribution is to exploit submodularity so that $g(x^{(t+1)})$ can
be computed in sublinear time given $x^{(t)}$ and $g(x^{(t)})$.
The first implication of submodularity is the observation (also made
by \cite{JB11,HK12}) that $\ell_{1}$-norms of the subgradients are
bounded by $O(M)$ if the submodular function is in $[-M,M]$. When
the function is integer valued, this implies that the subgradients
are sparse and have only $O(M)$ non-zero entries. Therefore, information
theoretically, we need only $O(M)$ bits to get $g(x^{(t+1)})$ from
$g(x^{(t)})$. However, we need an algorithm to find the positions
at which they differ. To do so, we use submodularity again. We observe
that given any point $x$ and a non-negative, $k$-sparse vector $e,$
the difference vector $d:=g(x+e)-g(x)$ is non-positive at coordinates
in the support of $e$ and non-negative everywhere else.
Furthermore, on a ``contiguous set'' of coordinates, the sum of these
entries in $d$ can be computed in $O(\time)$ time. Armed with this,
we create a binary search tree (BST) type data structure to find the
$O(M)$ non-zero coordinates of $d$ in $O(M\cdot\time\log n)$ time
(as opposed to $O(n\cdot\time)$ time). This, along with standard
subgradient descent analysis yields our $O(nM^{3}\time\log n)$-algorithm.
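The pruning idea behind that search can be sketched as follows (our own simplified illustration, ignoring the support of $e$ and the incremental BST maintenance): on a non-negative vector, a zero range sum certifies an all-zero range, so locating $k$ non-zeros among $n$ coordinates takes $O(k\log n)$ range-sum queries rather than $n$ point queries.

```python
def find_nonzeros(rangesum, lo, hi):
    # Entries are assumed non-negative, so rangesum(lo, hi) == 0 certifies
    # that the whole half-open range [lo, hi) is zero and can be pruned.
    if lo >= hi or rangesum(lo, hi) == 0:
        return []
    if hi - lo == 1:
        return [lo]
    mid = (lo + hi) // 2
    return find_nonzeros(rangesum, lo, mid) + find_nonzeros(rangesum, mid, hi)
```

In the actual algorithm each range sum is itself computed with $O(1)$ evaluation-oracle calls, giving the stated $O(M\cdot\time\log n)$ bound.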
When the submodular function is real valued between $[-1,1]$, although
the $\ell_{1}$-norm is small the subgradient can have full support.
Therefore, we cannot hope to evaluate the gradient in sublinear time.
We resort to stochastic subgradient descent where one moves along
a direction whose expected value is the negative subgradient and whose
variance is bounded. Ideally, we would have liked a fast one-shot
random estimation of $g(x^{(t+1)})$; unfortunately we do
not know how to do this. What we can do is obtain fast estimates of the difference
vector $d$ mentioned above. As discussed above, the vector $d$ has
$O(k)$ ``islands'' of non-negative entries peppered with $O(k)$
non-positive entries. We maintain a data-structure which with $O(k\time\log n)$
preprocessing time can evaluate the sums of the entries in these islands
in $O(\time\log n)$ time. Given this, we can sample a coordinate
$j\in[n]$ with probability proportional to $|d_{j}|$ in a similar
time. Thus we get a random estimate of the vector $d$ whose variance
is bounded by a constant.
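The sampling step can be sketched with plain prefix sums (our own simplification: the paper's data structure maintains these sums in a BST so they need not be rebuilt, but the sampling distribution and the one-sparse unbiased estimate are the same):

```python
import bisect, random

def make_sampler(d):
    # Prefix sums of |d_j|; index j is then sampled with
    # probability |d_j| / ||d||_1 by inverting the cumulative sums.
    prefix, total = [], 0.0
    for v in d:
        total += abs(v)
        prefix.append(total)
    def sample(rng=random):
        return bisect.bisect_left(prefix, rng.uniform(0.0, total))
    return sample, total

def one_sparse_estimate(d, j, total):
    # The one-sparse estimate ||d||_1 * sign(d_j) * 1_j; over the sampling
    # distribution its expectation is exactly d.
    return j, total * (1.0 if d[j] > 0 else -1.0)
```

Since each sampled estimate has $\ell_{1}$-norm $\|d\|_{1}=O(1)$, its variance is bounded by a constant, as claimed above.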
To get the stochastic subgradient, however, we need to add these difference
vectors and this accumulates the variance. To keep the variance in
control, we run the final algorithm in batches. In each batch, as
we progress we take more samples of the $d$-vector to keep the variance
in check. This however increases the sparsity (the $k$ parameter),
and one needs to balance the effects of the two. At the end of each
batch, we spend $O(n\time)$ time computing the deterministic subgradient
and start the process over. Balancing the number of iterations and
length of batches gives us the $\tilde{O}(n^{5/3}\time\eps^{-2})$-time
algorithm for $\eps$-approximate SFM.
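To see where the exponent $5/3$ comes from, one can tally the per-step costs annotated in Algorithm 2 (a heuristic count of ours, suppressing constants and logarithmic factors): each batch costs $O(n\cdot\time)$ for the exact subgradient plus $\sum_{t\leq T}O(t^{2}\time\log n)=O(T^{3}\time\log n)$ for the inner loop, so the total over $N/T$ batches is
\[
\frac{N}{T}\cdot\tilde{O}\big((n+T^{3})\time\big)=\tilde{O}\Big(\frac{n\eps^{-2}}{T}\big(n+T^{3}\big)\time\Big),
\]
which is balanced up to logarithmic factors at $T=n^{1/3}$, yielding $\tilde{O}(n^{5/3}\time\eps^{-2})$ since $N=\tilde{O}(n\eps^{-2})$.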
\subsection{Related Work}
Submodularity, and indeed SFM, has a rich body of work and we refer
the reader to the surveys of Fujishige \cite{fujishigeB} and McCormick \cite{Mccormick06}
for a detailed account of pre-2006 work. Here we mention a few subsequent
related works which were mostly inspired by application in machine
learning.
Motivated by applications in computer vision \cite{BVZ01,BK04} which
require fast algorithms for SFM, researchers focused on minimization
of \emph{decomposable }submodular functions which are expressible
as sum of ``simple'' submodular functions. It is assumed that simple
submodular functions can be minimized fast (either in practice or
in theory). Such a study was initiated by Stobbe and Krause \cite{SK10}
and Kolmogorov \cite{Kolmogorov12} who gave faster (than general
SFM) algorithms for such functions. More recently, motivated by work
of Jegelka et al. \cite{JBS13}, algorithms with \emph{linear }convergence\emph{
}rates \cite{NJJ14,EN15} have been obtained. That is, they get $\eps$-approximate
algorithms whose dependence on $\eps$ is $\log(1/\eps)$.
We end our introductory discussion by mentioning the complexity of
\emph{constrained }SFM where one wishes to minimize over sets satisfying
some constraints. In general constrained SFM is much harder than unconstrained
SFM. For instance the minimum cut problem with cardinality constraints
becomes the balanced partitioning problem which is APX-hard. More
generally, Svitkina and Fleischer \cite{SF11} show that a large class
of constrained SFM problems cannot be approximated to better than
$\tilde{O}(\sqrt{n})$ factors without making exponentially many queries.
In contrast, Goemans and Soto \cite{GS13} prove that symmetric submodular
functions can be minimized over a large class of constraints. Inspired
by machine learning applications, Iyer et al. \cite{IJB13,IB13} give
algorithms for a large class of constrained SFM problems which have
good approximation guarantees if the \emph{curvature} of the functions
are small.
\begin{comment}
Submodular function minimization is important. Well studied and popular.
Unfortunately minimizing is difficult. Many classic algorithms and
heuristics. Recent theoretical result to $O(n^{2})$ weakly polynomial
and $O(n^{3})$ strongly polynomial. Is this tight? Yes and no. First
we show that in the oracle model where can only compute subgradients
of Lovasz extension (or equivalently greedy over base polytope) then
need $\Omega(n)$ queries. Since this typically takes $\Omega(n)$
submodular function evaluations, means would need different way of
accessing problem. In short, there is lower bound on things like Frank
Wolfe, Wolfe, and Cutting plane methods for the current way they access
the problem. However, we also show how to circumvent this lower bound
in various settings. We provide $O(nM^{3})$ pseudopolynomial algorithm,
the first \emph{nearly linear} time SFM in any regime. We also provide
$O(n^{1.5}\epsilon^{-2})$ algorithm to get $f(x)-f(x_{*})\leq\epsilon$
if $|f(S)|\leq1$, breaking the $o(n^{2})$ barrier for the first
time.
\end{comment}
\section{Lower Bound}
It is well known that $\Omega(n)$ evaluation oracle calls are needed
to minimize a submodular function. On the other hand, the best way
we know of for certifying minimality takes $\Theta(n)$ subgradient
oracle calls (or equivalently, vertices of the base polyhedron). A
natural question is whether $\Theta(n)$ subgradient oracle calls
are in fact needed to minimize a submodular function. In this section
we answer this in the affirmative. Since each subgradient oracle call needs
$n$ evaluation oracle calls, this gives an $\Omega(n^{2})$ lower
bound on the number of evaluations required for algorithms which only
access the function via subgradient oracles. As mentioned in the introduction,
these include the Fujishige-Wolfe heuristic \cite{Wolfe76,Fujishige80},
various versions of conditional gradient or Frank-Wolfe \cite{frankWolfe,linearConvergentCondGrad},
and the new cutting plane methods \cite{LSW15}. Note that there
are known lower bounds for subgradient descent that have a somewhat
submodular structure \cite{Nesterov2003}, and this suggests that such
a lower bound should be possible; however, we are unaware of a previous
information-theoretic lower bound such as the one we provide.
To prove our lower bound, we describe a distribution over a collection
of hard functions and show that any algorithm must make $\Omega(n)$
subgradient calls in expectation\footnote{One can also prove a high probability version of the same result, but
for simplicity we do not do so.} and by Yao's minimax principle this will give an $\Omega(n)$ lower
bound on the expected query complexity of any randomized SFM algorithm.
The distribution is the following. Choose $R$ to be a random set
with each element of the universe selected independently with probability
$1/2$. Given $R$, define the function
\[
f_{R}(S)=\begin{cases}
-1 & \text{if }S=R\\
0 & \text{if }S\subsetneq R\text{ or }R\subsetneq S\\
1 & \textrm{otherwise}.
\end{cases}
\]
Clearly the minimizer of $f_{R}$ is the set $R.$ Any SFM algorithm
is equivalent to an algorithm for recognizing the set $R$ via subgradient
queries to $f_{R}$. A subgradient $g$ of $f_{R}$ at any point $x$
corresponds to a permutation $P$ of $\{1,2,\ldots,n\}$ (the sorted
order of $x$). Recall the notation $P[i]:=\left\{ P_{1},P_{2},\ldots,P_{i}\right\} $.
The following claim describes the structure of subgradients.
\begin{lem}
\label{lem:sub_r}Let $i$ be the smallest index such that $P[i]$
\textbf{is} \textbf{not} a subset of $R$ and $j$ be the smallest
index such that $P[j]$ \textbf{is} a superset of $R$. Then $g(i)=1,$
$g(j)=-1$, and $g(k)=0$ for all $k\in[n]\setminus\{i,j\}.$\end{lem}
\begin{proof}
To see $g(i)=1$, note that $A:=P[i-1]$ is a subset of $R$.
Two cases arise: either $A=R$, in which case $P[i]$ is a strict
superset of $R$ and so $f_{R}(A)=-1$ and $f_{R}(P[i])=0$,
implying $g(i)=1$; or $A$ is a strict subset of $R$, in which case
$P[i]$ is neither a subset nor a superset, implying $f_{R}(A)=0$
and $f_{R}(P[i])=1$. Similarly, to see $g(j)=-1,$ note that $B:=P[j-1]$
is not a superset of $R$. Two cases arise: either $B$ is a strict
subset of $R$ in which case $P[j]=R$ and we have $f_{R}(B)=0$ and
$f_{R}(P[j])=-1$; or $B$ is neither a subset nor a superset in which
case $P[j]$ is a strict superset of $R$ and we have $f_{R}(B)=1$
and $f_{R}(P[j])=0$.
For any other $k,$ we have either both $P[k]$ and $P[k-1]$ are
strict subsets of $R$ (if $k<\min(i,j)$), or both $P[k]$ and $P[k-1]$
are strict supersets of $R$ (if $k>\max(i,j)$), or both are neither
supersets nor subsets. In all three cases, $g(k)=0$.
\end{proof}
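Lemma \ref{lem:sub_r} can be checked mechanically; the sketch below (our own illustration, with names of our choosing) constructs $f_{R}$ and the Lovasz subgradient of a permutation, and reproduces the claimed pattern: a single $+1$ at position $i$, a single $-1$ at position $j$, and zeros elsewhere.

```python
def f_R(R, S):
    # The hard instance: -1 on R, 0 on strict sub/supersets of R, 1 otherwise.
    R, S = frozenset(R), frozenset(S)
    if S == R:
        return -1
    if S < R or R < S:
        return 0
    return 1

def lovasz_subgradient(f, P):
    # g_{P_k} = f(P[k]) - f(P[k-1]), with P[0] the empty set.
    g, S, prev = {}, set(), f(set())
    for p in P:
        S.add(p)
        cur = f(S)
        g[p] = cur - prev
        prev = cur
    return g
```

For $R=\{2,4\}$ and $P=(2,1,3,4,5)$ we have $i=2$ (first prefix not inside $R$) and $j=4$ (first prefix containing $R$), and the subgradient is $+1$ on element $1=P_{2}$ and $-1$ on element $4=P_{4}$.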
Intuitively, any gradient call gives the following information regarding
$R$: we know elements in $P[i-1]$ lie in $R,$ $P_{i}$ doesn't
lie in $R,$ $P_{j}$ lies in $R,$ and all $P_{k}$ for $k>j$ do
not lie in $R.$ Thus we get $i+n-j+1$ ``bits'' of information.
If $R$ is random, then the expected value of this can be shown to
be $O(1)$, and so $\Omega(n)$ queries are required. We make the
above intuitive argument formal below.
\begin{comment}
not involving $S^{*}$ takes on the following two patterns (ordered
in terms of the permutation for the subgradient). The first line is
the values of the submodular function while the second line shows
the corresponding elements. It is possible to have no 1's at the beginning
in Case 1. In fact, Case 2 can be thought of a degenerate Case 1 with
no 2's in the middle.
\[
1,\dots,1,2,2,\dots,2,1,1,\dots,1\qquad1,\dots,1,0,1,\dots,1\qquad
\]
\[
p,\dots,q,r,s\dots,t,u,v,\dots,w\qquad\qquad\qquad\qquad
\]
For an arbitrarily subgradient we have to be extremely lucky to have
the second case which actually reveals $S^{*}$ right away. In the
first case, the subgradient call only tells us information about very
few elements of $U$. From above we can deduce that $p,\dots,q\in S^{*}$,
$r\notin S^{*}$, $u\in S^{*}$ and $v,\dots,w\notin S^{*}$.
This observation suggests that a general subgradient reveals information
about only the two ``ends''. For an unknown $S^{*}$, we generally
expect that its elements are scattered across the subgradient and
hence the ends should be rather short. In other words, identities
of only $O(1)$ elements are revealed in the ideal scenario. Our proof
relies heavily on this intuition.
A common barrier to establishing oracle lower bounds is that the oracle
queries can be adaptive and make it difficult to reason about the
knowledge of the algorithm. To overcome this, we show that each of
the subgradient oracle queries can be ``decoupled'' thanks to our
choice of $f$.
We call an element good if it is in $S^{*}$ and bad otherwise. The
next lemma shows that good and bad elements can effectively be ``ignored''
in future oracle calls.
\end{comment}
Suppose that at some point in time, the algorithm knows a set $A\subseteq R$
and a set $B$ with $B\cap R=\emptyset.$ The following lemma shows that one
may assume wlog that subsequent subgradient calls are at points $x$
whose corresponding permutation $P$ contains the elements of $A$
as a ``prefix'' and elements of $B$ as a ``suffix''.
\begin{lem}
\label{lem:known}Suppose we know $A\subseteq R$ and $B\cap R=\emptyset$.
Let $g$ be a subgradient and $g'$ be obtained from $g$ by moving
$A$ and $B$ to the beginning and end of the permutation respectively.
Then one can compute $g$ from $g'$ without making any more oracle
calls.\end{lem}
\begin{proof}
Easy by case analysis and Lemma \ref{lem:sub_r}. Let $P$ be the
permutation corresponding to $g$. We show that given $g'$ and $P$,
we can evaluate $g.$ Suppose we are interested in evaluating $g_{P_{k}}$
and say $P_{k}=a.$ Lemma \ref{lem:sub_r} states that this is 1 iff
$P[k-1]\subseteq R$ and $P[k]$ is not. Now, if $P[k-1]\cap B\neq\emptyset,$
then we know $g_{P_{k}}=0$. Otherwise, $g_{P_{k}}=1$ iff $(P[k-1]\setminus B)\cup A\subseteq R$
and $(P[k]\setminus B)\cup A$ is not, since $A\subseteq R.$ Therefore,
$g_{P_{k}}=1$ iff $g'_{a}=1$ and $P[k-1]\cap B=\emptyset.$ Whether
$g_{P_{k}}=-1$ or not can be determined analogously.
\end{proof}
For an algorithm, let $h(k)$ be the expected number of subgradient
calls required to minimize $f_{R}$ when the universe is of size $k$
(note $R$ is chosen randomly by picking each element with probability
1/2). For convenience we also define $h(k)=0$ for $k\leq0$.
\begin{lem}
\label{lem:recur}For $k\geq1$, $h(k)\geq1+\E_{X,Y}[h(k-X-Y)]$,
where $X,Y$ are independent geometric random variables, i.e. $\Pr[X=i]=1/2^{i}$
for $i\geq1$.\end{lem}
\begin{proof}
By our observation above, a subgradient of $f_{R}$ reveals the identities
of $\min\{X+Y,k\}$ elements, where $X=i$ and $Y=n-j+1$ ($i,j$
as defined in Lemma \ref{lem:sub_r}), so that $X-1$ and $Y-1$ are the lengths of the streaks
of 0's at the beginning and end of the subgradient.
Note that $X$ simply follows a geometric distribution because $\Pr[P[i-1]\subseteq R,P_{i}\notin R]=1/2^{i}$.
Similarly, $Y$ follows the same geometric distribution. In the
case of $X+Y>k$, we have $R$ as a prefix of the permutation.
Finally, as a subgradient call reveals no information about the intermediate
elements in the permutation, by Lemma \ref{lem:known} we are then
effectively left with the same problem of size $k-X-Y$. More formally,
this is because the value of the subgradient queried is independent
of the identities of the elements $P_{i+1},\ldots,P_{j-1}$.\end{proof}
\begin{thm}
$h(n)\geq n/4$, i.e. any algorithm for SFM requires at least $\Omega(n)$
subgradient calls.\end{thm}
\begin{proof}
We show by induction that $h(k)\geq k/4$. By Lemma \ref{lem:recur}
and the induction hypothesis,
\begin{eqnarray*}
h(k) & \geq & 1+\E_{X,Y}[h(k-X-Y)]\\
 & \geq & 1+\E_{X,Y}[(k-X-Y)/4]\\
 & = & 1+k/4-\E[X]/4-\E[Y]/4\\
& = & k/4
\end{eqnarray*}
as desired.
\end{proof}
Readers may have noticed that the proofs of the preceding two lemmas
essentially imply that $h(k)$ is roughly the expected number of geometric
random variables needed to sum up to $k$. One can use this property
together with some concentration inequality for geometric random variables
to establish a high probability version of our lower bound.
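The recursion of Lemma \ref{lem:recur} can also be unrolled numerically: treating its inequality as an equality (a sketch of ours; terms with $i+j\geq k$ vanish since $h(m)=0$ for $m\leq0$, so the sum is finite), the resulting values indeed stay above $k/4$.

```python
def h_values(kmax):
    # Solve h(k) = 1 + sum_{i,j >= 1} 2^{-(i+j)} h(k - i - j),
    # with h(m) = 0 for m <= 0, so only pairs with i + j < k contribute.
    h = [0.0] * (kmax + 1)
    for k in range(1, kmax + 1):
        s = 1.0
        for i in range(1, k):
            for j in range(1, k - i):
                s += 2.0 ** (-(i + j)) * h[k - i - j]
        h[k] = s
    return h
```

For instance $h(1)=h(2)=1$ and $h(3)=1+\frac{1}{4}h(1)=1.25$, matching the bound $k/4$ with room to spare for small $k$.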
\section*{Acknowledgments}
This work was partially supported by NSF awards 0843915, 1111109,
CCF0964033, CCF1408635 and Templeton Foundation grant 3966. Part of
this work was done while the first three authors were visiting the
Hausdorff Research Institute for Mathematics in Bonn for the Workshop
on Submodularity, and the last three authors were visiting the Simons
Institute for the Theory of Computing in Berkeley. We thank the organizers
of the workshop for inviting us. We thank Elad Hazan and Dan Garber
for helpful preliminary discussions regarding approximate SFM. We
thank the anonymous reviewers for their helpful comments and in particular
for pointing us to needed references and previous work as well as
pointing us to the relationship between our work and graph optimization,
encouraging us to write Appendix~\ref{sec:minimum_cut}. A special
thanks to Bobby Kleinberg for asking the question about approximate
SFM.
\bibliographystyle{plain}
\section{Preliminaries}
Here we introduce notations and general concepts used throughout this
paper.
\begin{comment}
In\textbf{ }Section\textbf{~\ref{sec:prelim:notation}}: we set some
general notation that is used throughout the paper; in Section\textbf{~\ref{sec:prelim:submodular_functions}},
we cover basic definitions regarding submodular functions; in Section\textbf{~\ref{sec:prelim:lovasz_extension}}
we provide the necessary facts about the Lovasz extension, and finally
in Section\textbf{~\ref{sec:prelim:subgradient_descent}}, we give
necessary details of subgradient descent for convex optimization.
\end{comment}
\subsection{General Notation \label{sec:prelim:notation}}
We let $[n]\defeq\{1,...,n\}$ and $[0,1]^{n}\defeq\{x\in\R^{n}\,:\,x_{i}\in[0,1]\,\,\,\forall i\in[n]\}$.
Given a permutation $P=(P_{1},...,P_{n})$ of $[n]$, let $P[j]\defeq\{P_{1},P_{2},...,P_{j}\}$
be the set containing the first $j$ elements of $P$. Any point $x\in\R^{n}$
defines the permutation \emph{$P_{x}$ consistent with $x$ }where
$x_{P_{1}}\geq x_{P_{2}}\geq...\geq x_{P_{n}}$ with ties broken lexicographically.
We denote by $\mathbf{1}_{i}\in\R^{n}$ the indicator vector for coordinate
$i$, i.e. $\mathbf{\mathbf{1}}_{i}$ has a $1$ in coordinate $i$
and a $0$ in all other coordinates. We call a vector $s$-\textit{sparse}
if it has at most $s$ non-zero entries.
\subsection{Submodular Functions \label{sec:prelim:submodular_functions}}
Throughout this paper $f\,:\,2^{U}\rightarrow\R$ denotes a submodular
function on a ground set $U$. For notational convenience we assume
without loss of generality that $U=[n]$ for some positive integer
$n$ and that $f(\emptyset)=0$ (as this can be enforced by subtracting
$f(\emptyset)$ from the value of all sets while preserving submodularity).
Recall that $f$ is submodular if and only if it obeys the property
of diminishing marginal returns: for all $S\subseteq T\subseteq[n]$
and $i\notin T$ we have
\[
f(S\cup\{i\})-f(S)\geq f(T\cup\{i\})-f(T)\,.
\]
We let $\OPT\defeq\min_{S\subseteq[n]}f(S)$ be the minimum value
of $f$. We denote by $\runtime$ the time it takes to evaluate $f$
on a set $S$. More precisely, we assume that given a linked list storing
a permutation $P$ of $[n]$ and a position $k$, we can evaluate
$f(P[k])$ in $\time$ time.
\begin{comment}
Although we do not use it, we define the base polytope $B_{f}\defeq{x\in\R^{n}:x(S)\leq f(S),\forall S\subsetneq U,\sim x(U)=f(U)}$.
It is well known that vertices of $B_{f}$ correspond to permutations
of $[n];$ given a permutation $P$ the point $g$ whose $P_{i}$the
coordinate is $f(P[i])-f(P[i-1])$ is a vertex.
\end{comment}
\subsection{The Lovasz Extension \label{sec:prelim:lovasz_extension}}
Our results make extensive use of the Lovasz extension, a convex,
continuous extension of a submodular function to the interior of the
$n$-dimensional hypercube, i.e. $[0,1]^{n}$.
\begin{defn}[Lovasz Extension]
Given a submodular function $f$, the Lovasz extension of $f$,
denoted as $\hat{f}\,:\,[0,1]^{n}\rightarrow\R$, is defined for all
$x\in[0,1]^{n}$ by $\hat{f}(x)=\sum_{j\in[n]}(f(P[j])-f(P[j-1]))x_{P_{j}}$
where $P=P_{x}=(P_{1},...,P_{n})$ is the permutation consistent with
$x$.
\end{defn}
Note that since $f(\emptyset)=0$ this definition is equivalent to
\begin{equation}
\hat{f}(x)=f(P[n])x_{P_{n}}+\sum_{j\in[n-1]}f(P[j])(x_{P_{j}}-x_{P_{j+1}})\,.\label{eq:lovasz_non_negative}
\end{equation}
\begin{comment}
From this we see that the Lovasz extension is well-defined since if
$x_{i_{j}}=x_{i_{j+1}}$then $f(P[j])$ contributes nothing to the
sum.
\end{comment}
{} We make use of the following well known facts regarding submodular
functions (see e.g. \cite{Lovasz83,fujishigeB}).
\begin{thm}[Lovasz Extension Properties]
\label{thm:lovasz_extension_properties} The following are true
for all $x\in[0,1]^{n}$:%
\begin{comment}
and permutations $P=(i_{1},...,i_{n})$ consistent with $x$:
\end{comment}
\end{thm}
\begin{itemize}
\item \setlength{\itemsep}{-1mm}\textbf{Convexity}: The Lovasz extension
is convex.
\item \textbf{Consistency}: For $x\in\{0,1\}^{n}$ we have $\hat{f}(x)=f(S(x))$
where $S(x)=\{i\in[n]\,:\,x_{i}=1\}$.
\item \textbf{Minimizers}: $\min_{x\in[0,1]^{n}}\hat{f}(x)=\min_{S\subseteq[n]}f(S)$.
\item \textbf{Subgradients}: The vector $g(x)\in\R^{n}$ defined by $g(x)_{P_{k}}\defeq f(P[k])-f(P[k-1])$
is a subgradient of $\hat{f}$ at $x$, where $P=P_{x}$ is the permutation
consistent with $x$. Let us call this the Lovasz subgradient. %
\begin{comment}
Thus subgradients of $\hat{f}$ are in 1-to-1 correspondence with
vertices of the base polytope.
\end{comment}
\end{itemize}
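To make the Subgradients property concrete, the sketch below (our own illustration) evaluates $\hat{f}$ and $g(x)$ together; by the definition, $\hat{f}(x)=g(x)^{\top}x$, and on $\{0,1\}^{n}$ points this reduces to the Consistency property.

```python
def lovasz(f, x):
    # Sort coordinates decreasingly (ties lexicographic) to get P = P_x, then
    # g_{P_k} = f(P[k]) - f(P[k-1]) and hat f(x) = g(x)^T x.
    n = len(x)
    P = sorted(range(n), key=lambda i: (-x[i], i))
    g = [0.0] * n
    val, prev, S = 0.0, 0.0, set()   # prev = f(empty) = 0 by convention
    for p in P:
        S.add(p)
        cur = f(S)
        g[p] = cur - prev
        val += g[p] * x[p]
        prev = cur
    return val, g
```

As a test function we use $f(S)=|S|(3-|S|)$ on $[3]$, which is concave in $|S|$ and hence submodular.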
We conclude with a few straightforward computational observations
regarding the Lovasz extension and its subgradients. First note that
for $x\in[0,1]^{n}$ we can evaluate $\hat{f}(x)$ or compute $g(x)$
in time $O(n\time+n\log n)$ simply by sorting the coordinates of
$x$ and evaluating $f$ at the $n$ desired sets. Also, note that
by (\ref{eq:lovasz_non_negative}) the Lovasz extension evaluated
at $x\in[0,1]^{n}$ is a non-negative combination of the value of
$f$ at $n$ sets. Therefore computing the smallest of these sets
gives a set $S\subseteq[n]$ such that $f(S)\leq\hat{f}(x)$ and we
can clearly compute this, again in $O(n\time+n\log n)$ time. Therefore
for any algorithm which approximately minimizes the Lovasz extension
with some (additive) error $\eps$, we can always find a set $S$
achieving the same error on $f$ by just paying an additive $O(n\time+n\log n)$
in the running time.
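To make these observations concrete, here is a minimal Python sketch (all function names, and the example cut function \texttt{f\_cut}, are ours for illustration): it evaluates $\hat{f}(x)$ and the Lovasz subgradient by one sort of the coordinates of $x$ and $n$ evaluations of $f$, and it rounds a fractional point to a set $S$ with $f(S)\leq\hat{f}(x)$ by taking the best prefix of the sorted order.

```python
# Ground set is {0, ..., n-1}; f maps frozensets to numbers with f(empty) = 0.
def f_cut(S):
    """Example submodular function: cut function of the 4-cycle 0-1-2-3-0."""
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    return sum(1 for u, v in edges if (u in S) != (v in S))

def lovasz(f, x):
    """Return (f_hat(x), g) where g is the Lovasz subgradient at x,
    using one sort of the coordinates and n evaluations of f."""
    n = len(x)
    P = sorted(range(n), key=lambda i: -x[i])  # permutation consistent with x
    g = [0.0] * n
    val, prefix, f_prev = 0.0, set(), f(frozenset())
    for j in P:                                # j runs over P_1, ..., P_n
        prefix.add(j)
        f_cur = f(frozenset(prefix))
        g[j] = f_cur - f_prev                  # g_{P_k} = f(P[k]) - f(P[k-1])
        val += g[j] * x[j]
        f_prev = f_cur
    return val, g

def round_to_set(f, x):
    """Return S with f(S) <= f_hat(x): the best prefix of the sorted order
    (including the empty set), as in the discussion above."""
    n = len(x)
    P = sorted(range(n), key=lambda i: -x[i])
    best, prefix = frozenset(), set()
    for j in P:
        prefix.add(j)
        if f(frozenset(prefix)) < f(best):
            best = frozenset(prefix)
    return best
```

On indicator vectors this reproduces $f$ itself (consistency), and the subgradient entries telescope so that they sum to $f([n])-f(\emptyset)$.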
\subsection{Subgradient Descent \label{sec:prelim:subgradient_descent}}
Our algorithmic results make extensive use of subgradient descent
(or mirror descent) and their stochastic analogs. Recall that for
a convex function $h\,:\,\chi\rightarrow\R$, where $\chi\subseteq\R^{n}$
is a compact convex set, a vector $g\in\R^{n}$ is a \emph{subgradient}
of $h$ at $x\in\chi$ if for all $y\in\chi$ we have
\[
h(y)\geq h(x)+g^{\top}(y-x)\,.
\]
For such an $h$ we let $\partial h(x)$ denote the set of subgradients
of $h$ at $x$. An algorithm that on input $x$ outputs $\tilde{g}(x)\in\partial h(x)$
is a \emph{subgradient oracle} for $h$. Similarly, an algorithm that
on input $x$ outputs a \emph{random} $\tilde{g}(x)$ such that $\E\tilde{g}(x)\in\partial h(x)$
is a \emph{stochastic subgradient oracle} for $h$.
One of our main algorithmic tools is the well known fact that given
a (stochastic) subgradient oracle we can minimize a convex function
$h$. Such algorithms are called\emph{ (stochastic) subgradient descent}
algorithms and fall into a more general framework of algorithms known
as \emph{mirror descent}. These algorithms are very well studied and
there is a rich literature on the topic. Below we provide one specific
form of these algorithms adapted from \cite{Bubeck15} that suffices
for our purposes.
\begin{thm}[Projected (Stochastic) Subgradient Descent\footnote{This is Theorem~6.1 from \cite{Bubeck15} restated, where we used
the ``ball setup'' with $\Phi(x)=\frac{1}{2}\norm x_{2}^{2}$ so
that $\dist=\R^{n}$ and $D_{\Phi}(x,y)=\frac{1}{2}\norm{x-y}_{2}^{2}$.
We also used that $\argmin_{x\in\chi}\eta g^{\top}x+\frac{1}{2}\norm{x-x_{t}}_{2}^{2}=\argmin_{x\in\chi}\norm{x-(x_{t}-\eta g)}_{2}^{2}$.}]
\label{thm:subgradient_descent} Let $\chi\subseteq\R^{n}$ denote
a compact convex set, $h\,:\,\chi\rightarrow\R$ be a convex function,
$\tilde{g}$ be a (stochastic) subgradient oracle for which $\E\norm{\tilde{g}(x)}_{2}^{2}\leq B^{2}$
for all $x\in\chi$, and let $R^{2}\defeq\sup_{x\in\chi}\frac{1}{2}\norm x_{2}^{2}$.
Now consider the iterative algorithm starting with
\[
x^{(1)}:=\argmin_{x\in\chi}\norm x_{2}^{2}
\]
and for all $s$ we compute
\[
x^{(s+1)}:=\argmin_{x\in\chi}\norm{x-(x^{(s)}-\eta\tilde{g}(x^{(s)}))}_{2}^{2}
\]
Then for $\eta=\frac{R}{B}\sqrt{\frac{2}{t}}$ we have
\[
\E h\left(\frac{1}{t}\sum_{s\in[t]}x^{(s)}\right)-\min_{x\in\chi}h(x)\leq RB\sqrt{\frac{2}{t}}\,.
\]
We refer to this algorithm as projected stochastic subgradient descent
when $\tilde{g}$ is stochastic and as projected subgradient descent
when $\tilde{g}$ is deterministic, though we often omit the term
\emph{projected }for brevity. Note that when $\tilde{g}$ is deterministic
the results are achieved exactly rather than in expectation. \end{thm}
\section{Faster Submodular Function Minimization\label{sec:fast_sfm}}
In this section we provide faster algorithms for SFM. In particular
we provide the first nearly linear time pseudopolynomial algorithm
for SFM and the first subquadratic additive approximation algorithm
for SFM. Furthermore, we show how to obtain even faster running times
when the SFM instance is known to have a sparse solution.
All our algorithms follow the same broad algorithmic framework of
using subgradient descent with a specialized subgradient oracle. Where
they differ is in how the structure of the submodular functions is
exploited in implementing these oracles. The remainder of this section
is structured as follows: in Section~\ref{sec:fast_sfm:framework}
we provide the algorithmic framework we use for SFM, in Section~\ref{sec:fast_sfm:subgrad}
we prove structural properties of submodular functions that we use
to compute subgradients, in Section~\ref{sec:fast_sfm:pseudopoly}
we describe our nearly linear time pseudopolynomial algorithm, in
Section~\ref{sec:fast_sfm:subquad} we describe our subquadratic
additive approximation algorithm, and in Section~\ref{sec:fast_sfm:sparse}
we show how to improve these results when SFM has a sparse solution.
We make minimal effort to control logarithmic factors throughout this
section and note that some of these factors come from sorting and therefore
may be removable depending on the desired computational model.
\subsection{Algorithmic Framework \label{sec:fast_sfm:framework}}
All our algorithms for SFM follow the same broad algorithmic framework.
We consider the Lovasz extension $\hat{f}:[0,1]^{n}\rightarrow\R$,
and perform projected (stochastic) subgradient descent on $\hat{f}$
over the convex domain $\chi=[0,1]^{n}$. While the subgradient oracle
construction differs between algorithms (and additional care is needed
to improve the running time when the solution is sparse, i.e. Section~\ref{sec:fast_sfm:sparse}),
the rest of the algorithm is identical for Section~\ref{sec:fast_sfm:pseudopoly}
and Section~\ref{sec:fast_sfm:subquad}.
In the following Lemma~\ref{lem:framework} we encapsulate this
framework, bounding the performance of projected (stochastic) subgradient
descent applied to the Lovasz extension, i.e. applying Theorem~\ref{thm:subgradient_descent}
to $\hat{f}$ over $\chi=[0,1]^{n}$. Formally, we abstract away the
properties of a subgradient oracle data structure that we need to
achieve a fast algorithm. With this lemma in place the remainder of
the work in Section~\ref{sec:fast_sfm:subgrad}, Section~\ref{sec:fast_sfm:pseudopoly},
and Section~\ref{sec:fast_sfm:subquad} is to show how to efficiently
implement the subgradient oracle in the particular setting.
\begin{lem}
\label{lem:framework} Suppose that there exists a procedure which
maintains $(x^{(i)},\tilde{g}^{(i)})$ satisfying the invariants:
(a) $\tilde{g}^{(i)}$ is $k$-sparse, (b) $\E[\tilde{g}^{(i)}]=g(x^{(i)})$
is the Lovasz subgradient at $x^{(i)}$, and (c) $\E\norm{\tilde{g}^{(i)}}_{2}^{2}\leq B^{2}$.
Furthermore, suppose given any $e^{(i)}$ which is $k$-sparse, the
procedure can update to $(x^{(i+1)}=x^{(i)}+e^{(i)},\tilde{g}^{(i+1)})$
in time $\mathrm{T_{g}}$. Then, for any $\eps>0,$ we can compute a set $S$
with $\E[f(S)]\le\OPT+\eps$ in time $O(nB^{2}\eps^{-2}\mathrm{T_{g}}+n\time+n\log n)$.
If invariants (b) and (c) hold without the expectation, then so does
our guarantee on $f(S)$.
\begin{comment}
Let $T\geq nB^{2}\epsilon^{-2}$ and suppose that for any sequence
of $x^{(1)},..,x^{(T)}$ such that that $x^{(i+1)}-x^{(i)}$ is $k$-sparse
we can implement a subgradient oracle for $\hat{f}$ at $x^{(i)},$
denoted $\tilde{g}(x^{(i)})$, that is $k$-sparse and obeys $\E\norm{\tilde{g}(x)}_{2}^{2}\leq B^{2}$.
Then in time time $O(n(\runtime+\log n+B^{2}\epsilon^{-2}k))$ we
can compute a set $S$ such that $\E f(S)-f_{*}\leq\epsilon$. If
the subgradient oracle is deterministic then the reuslt holds without
the expectation.
\end{comment}
\end{lem}
\begin{proof}
We invoke Theorem~\ref{thm:subgradient_descent} on the convex function
$\hat{f}\,:\,[0,1]^{n}\rightarrow\R$ over the convex domain $\chi=[0,1]^{n}$,
using the given subgradient oracle to obtain the iterates. Clearly
\[
x^{(1)}=\argmin_{x\in[0,1]^{n}}\frac{1}{2}\norm x_{2}^{2}=0\in\R^{n}
\]
and
\[
R^{2}=\sup_{x\in[0,1]^{n}}\frac{1}{2}\norm x_{2}^{2}=\frac{1}{2}\norm 1_{2}^{2}=\frac{n}{2}.
\]
Consequently, if we run the method for $T=O(nB^{2}\eps^{-2})$
steps (each step requiring $\mathrm{T_{g}}$ time), then Theorem~\ref{thm:subgradient_descent}
yields
\[
\E\hat{f}\left(\frac{1}{T}\sum_{i\in[T]}x^{(i)}\right)-\min_{x\in\chi}\hat{f}(x)\leq RB\sqrt{\frac{2}{T}}\leq\sqrt{\frac{nB^{2}}{T}}\leq\eps\,.
\]
Furthermore, as we argued in Section~\ref{sec:prelim:lovasz_extension}
we can compute $S$ with $f(S)\leq\hat{f}(\frac{1}{T}\sum_{i\in[T]}x^{(i)})$
in the time it takes to compute $\frac{1}{T}\sum_{i\in[T]}x^{(i)}$
plus additional $O(n\runtime+n\log n)$ time. To prove the lemma all
that remains is to reason about the complexity of computing the projection,
i.e. $x^{(t+1)}$, given that all the subgradients we compute are
$k$-sparse. However, $x^{(t+1)}=\argmin_{x\in[0,1]^{n}}\norm{x-(x^{(t)}-\eta\tilde{g}(x^{(t)}))}_{2}^{2}$
decouples coordinate-wise: $x^{(t+1)}=\mathrm{median}\{0\,,\,x^{(t)}-\eta\tilde{g}(x^{(t)})\,,\,1\}$,
i.e. we subtract $\eta\tilde{g}(x^{(t)})$ from $x^{(t)}$, and if any coordinate
is less than $0$ we set it to $0$ and if any coordinate is larger
than $1$ we set it to $1$. Thus the edit vector $e^{(i)}$ has
sparsity $\leq k$. Combining these facts yields the described running
time.
\end{proof}
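Putting Lemma~\ref{lem:framework} together with the naive $O(n\time+n\log n)$-per-iteration subgradient computation gives the following end-to-end sketch of the framework (our illustrative code, including the toy function \texttt{f\_example}; the faster algorithms below exist precisely to replace the naive inner \texttt{subgrad} with specialized data-structure oracles):

```python
import math

def sfm_by_subgradient_descent(f, n, B, eps):
    """Minimize a submodular f over subsets of {0,...,n-1} (with f(empty)=0)
    by projected subgradient descent on the Lovasz extension over [0,1]^n,
    then round the average iterate to a prefix set.  This sketch recomputes
    the Lovasz subgradient naively with n evaluations per step."""
    t = max(1, math.ceil(n * B * B / (eps * eps)))  # so R*B*sqrt(2/t) <= eps
    eta = math.sqrt(n / 2.0) / B * math.sqrt(2.0 / t)

    def subgrad(x):  # Lovasz subgradient at x via one sort and n evaluations
        P = sorted(range(n), key=lambda i: -x[i])
        g, prefix, f_prev = [0.0] * n, set(), f(frozenset())
        for j in P:
            prefix.add(j)
            f_cur = f(frozenset(prefix))
            g[j], f_prev = f_cur - f_prev, f_cur
        return g

    x, avg = [0.0] * n, [0.0] * n   # x^(1) = argmin ||x||^2 over [0,1]^n
    for _ in range(t):
        for i in range(n):
            avg[i] += x[i] / t
        g = subgrad(x)
        x = [min(1.0, max(0.0, x[i] - eta * g[i])) for i in range(n)]

    # Round: the best prefix set S of the average iterate satisfies
    # f(S) <= f_hat(avg) <= OPT + eps.
    P = sorted(range(n), key=lambda i: -avg[i])
    best, prefix = frozenset(), set()
    for j in P:
        prefix.add(j)
        if f(frozenset(prefix)) < f(best):
            best = frozenset(prefix)
    return best

# Tiny example (ours): f(S) = cut_{0-1}(S) - 2*[0 in S] on n = 2 elements;
# f is submodular, integer valued with |f| <= 2, and min f = f({0,1}) = -2.
def f_example(S):
    return (1 if (0 in S) != (1 in S) else 0) - 2 * (1 if 0 in S else 0)
```

On the integer-valued example, running with $\eps=1/2$ and the crude bound $B=3M=6$ recovers the exact minimizer, exactly as in the pseudopolynomial algorithm below.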
\subsection{Subgradients of the Lovasz Extension\label{sec:fast_sfm:subgrad}}
Here we provide structural results of submodular functions that we
leverage to compute subgradients of the Lovasz extension in $o(n)$
time on average. %
\begin{comment}
Note that many of these bounds have appeared in prior work on submodular
functions, however we include them and there short proofs for completeness
\end{comment}
{} First, in Lemma~\ref{lem:bounded-subgradients} we state a result
due to Jegelka and Bilmes \cite{JB11} (see also Hazan and Kale \cite{HK12})
which puts an upper bound on the $\ell_{1}$ norm of subgradients
of the Lovasz extension provided we have an upper bound on the maximum
absolute value of the function. We provide a short proof for completeness.
\begin{comment}
Note that this is the same as saying that $\tilde{f}$ is Lipshitz
continuous in $\ellInf$ {[}CITE{]}, however we will not need to use
this equivalence.
\end{comment}
{} %
\begin{comment}
Instead we will use this to show that either subgradients are sparse
(Section~\ref{sec:fast_sfm:pseudopoly}) or that they are sparsely
approximatable by sampling (Section~\ref{sec:fast_sfm:subquad}).
\end{comment}
{}
\begin{lem}[Subgradient Upper Bound]
\label{lem:bounded-subgradients} If $|f(S)|\leq M$ for all $S\subseteq[n]$,
then $\norm{g(x)}_{1}\leq3M$ for all $x\in[0,1]^{n}$ and for all
subgradients $g$ of the Lovasz extension.\end{lem}
\begin{proof}
For notational simplicity suppose without loss of generality (by renaming
the coordinates) that $P_{x}=(1,2,...,n)$, i.e. $x_{1}\geq x_{2}\geq...\geq x_{n}$.
Therefore, for any $i\in[n]$, we have $g_{i}=f([i])-f([i-1])$. Let
$r_{1}\leq r_{2}\leq...\leq r_{R}$ denote all the coordinates such
that $g_{r_{i}}>0$ and let $s_{1}\leq s_{2}\leq...\leq s_{S}$ denote
all the coordinates such that $g_{s_{i}}<0$.
We begin by bounding the contribution of the positive coordinates,
the $g_{r_{i}}$, to the norm of the gradient, $\norm g_{1}$. For
all $k\in[R]$ let $R_{k}\defeq\left\{ r_{1},...,r_{k}\right\} $
with $R_{0}=\emptyset$, so that $f(R_{0})=f(\emptyset)=0$.
By submodularity, i.e. diminishing marginal returns,
and since $R_{i-1}\subseteq[r_{i}-1]$, we know that for all $i\in[R]$
\[
f(R_{i})-f(R_{i-1})\geq f([r_{i}])-f([r_{i}-1])=g_{r_{i}}=\left|g_{r_{i}}\right|\,.
\]
Consequently $f(R_{R})-f(R_{0})=\sum_{i\in[R]}f(R_{i})-f(R_{i-1})\geq\sum_{i\in[R]}\left|g_{r_{i}}\right|$.
Since $f(R_{0})=0$ and $f(R_{R})\leq M$ by assumption we have that
$\sum_{i\in[R]}\left|g_{r_{i}}\right|\leq M\,.$
Next, we bound the contribution of the negative coordinates, the
$g_{s_{i}}$, similarly. For all $k\in[S]$ let $S_{k}\defeq\left\{ s_{1},...,s_{k}\right\} $
with $S_{0}=\emptyset$, and define $V\defeq[n]\setminus S_{S}$.
Note that for all $i\in[S]$, the set $V\cup S_{i-1}$ is a superset
of $[s_{i}-1]$ and does not contain $s_{i}$. Therefore, submodularity
gives us for all $i\in[S]$,
\[
f(V\cup S_{i})-f(V\cup S_{i-1})\leq f([s_{i}])-f([s_{i}-1])=g_{s_{i}}=-\abs{g_{s_{i}}}\,.
\]
Summing over all $i$, we get $f([n])-f(V)\leq-\sum_{i\in[S]}\left|g_{s_{i}}\right|$.
Since $f([n])\geq-M$ and $f(V)\leq M$ we have that $\sum_{i\in[S]}\left|g_{s_{i}}\right|\leq2M\,.$
Combining these yields that $\norm g_{1}=\sum_{i\in[n]}\left|g_{i}\right|=\sum_{i\in[R]}\left|g_{r_{i}}\right|+\sum_{i\in[S]}\left|g_{s_{i}}\right|\leq3M\,.$
\end{proof}
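As a quick numerical sanity check of Lemma~\ref{lem:bounded-subgradients} (our illustrative code, not part of the paper's algorithms), the following brute-forces $M$ for a small submodular cut function and verifies $\norm{g(x)}_{1}\leq3M$ at random points:

```python
import itertools
import random

def f_cut(S):
    """Example submodular function: cut function of a 5-vertex graph
    (the 5-cycle plus the chord (0,2)); here M = max |f| = 5."""
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
    return sum(1 for u, v in edges if (u in S) != (v in S))

def lovasz_subgrad(f, x):
    """Lovasz subgradient of f at x (one sort plus n evaluations)."""
    n = len(x)
    P = sorted(range(n), key=lambda i: -x[i])
    g, prefix, f_prev = [0.0] * n, set(), f(frozenset())
    for j in P:
        prefix.add(j)
        f_cur = f(frozenset(prefix))
        g[j], f_prev = f_cur - f_prev, f_cur
    return g

def check_l1_bound(f, n, trials=200, seed=0):
    """Brute-force M = max_S |f(S)|, then check ||g(x)||_1 <= 3M at random x."""
    M = max(abs(f(frozenset(S)))
            for k in range(n + 1)
            for S in itertools.combinations(range(n), k))
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.random() for _ in range(n)]
        g = lovasz_subgrad(f, x)
        assert sum(abs(v) for v in g) <= 3 * M + 1e-9
    return M
```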
Next, in Lemma~\ref{lem:subgrad_monotonicity} we provide a simple
but crucial monotonicity property of the subgradient of $\hat{f}$.
In particular we show that if we add (or remove) a positive vector
from $x\in[0,1]^{n}$ to obtain $y\in[0,1]^{n}$ then the gradients
of the \emph{untouched} coordinates all decrease (or increase).
\begin{lem}[Subgradient Monotonicity]
\label{lem:subgrad_monotonicity} Let $x,y\in[0,1]^{n}$ and let $d\in\R_{\geq0}^{n}$
be such that $y=x+d$ (resp. $y=x-d$). Let $S$ denote the non-zero
coordinates of $d$. Then for all $i\notin S$ we have $g(x)_{i}\geq g(y)_{i}$
(resp. $g(x)_{i}\leq g(y)_{i}$).\end{lem}
\begin{proof}
We only prove the case of $y=x+d$ as the proof of the $y=x-d$ case
is analogous. Let $P^{(x)}$ and $P^{(y)}$ be the permutations consistent
with $x$ and $y$. Note that $P^{(y)}$ can be obtained from $P^{(x)}$
by moving a subset of the elements of $S$ to the left, while the relative
ordering of elements \emph{not in }$S$ remains the same. Therefore,
for any $i\notin S$, if $r$ is its rank in $P^{(x)}$, that is,
$P_{r}^{(x)}=i$, and $r'$ is its rank in $P^{(y)}$, then we must
have $P^{(y)}[r']\supseteq P^{(x)}[r]$. By submodularity, $g(y)_{i}=f(P^{(y)}[r'])-f(P^{(y)}[r'-1])\leq f(P^{(x)}[r])-f(P^{(x)}[r-1])=g(x)_{i}.$
\end{proof}
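Lemma~\ref{lem:subgrad_monotonicity} can be checked numerically on a small example (our illustrative code): pushing one coordinate of $x$ upward can only decrease the subgradient entries of the untouched coordinates.

```python
def f_cut(S):
    """Cut function of the 4-cycle 0-1-2-3-0 (submodular)."""
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    return sum(1 for u, v in edges if (u in S) != (v in S))

def lovasz_subgrad(f, x):
    n = len(x)
    # Consistent tie-breaking (by index), as the lemma's setup requires.
    P = sorted(range(n), key=lambda i: (-x[i], i))
    g, prefix, f_prev = [0.0] * n, set(), f(frozenset())
    for j in P:
        prefix.add(j)
        f_cur = f(frozenset(prefix))
        g[j], f_prev = f_cur - f_prev, f_cur
    return g

x = [0.9, 0.6, 0.4, 0.1]
d = [0.0, 0.0, 0.5, 0.0]      # push coordinate 2 up past coordinate 1
y = [xi + di for xi, di in zip(x, d)]
gx, gy = lovasz_subgrad(f_cut, x), lovasz_subgrad(f_cut, y)
untouched = [i for i in range(4) if d[i] == 0.0]
```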
Lastly, we provide Lemma~\ref{lem:subgrad_interval} giving a simple
formula for the sum of multiple coordinates in the subgradient.
\begin{comment}
Combining these lemmas allows us to keep track of changes in the subgradient
efficiently.
\end{comment}
\begin{lem}[Subgradient Intervals]
\label{lem:subgrad_interval} Let $x\in[0,1]^{n}$ and let $P$
be the permutation consistent with $x$. For any positive integers
$a\leq b$, we have $\sum_{i=a}^{b}g(x)_{P_{i}}=f(P[b])-f(P[a-1])$.\end{lem}
\begin{proof}
This follows immediately from the definition of $g(x)$: $\sum_{i=a}^{b}g(x)_{P_{i}}=\sum_{i=a}^{b}(f(P[i])-f(P[i-1]))=f(P[b])+\sum_{i=a}^{b-1}f(P[i])-\sum_{i=a}^{b-1}f(P[i])-f(P[a-1])\,.$
\end{proof}
\subsection{Nearly Linear in $n$, Pseudopolynomial Time Algorithm \label{sec:fast_sfm:pseudopoly}}
Here we provide the first nearly linear time pseudopolynomial algorithm
for submodular function minimization. Throughout this section we assume
that our submodular function $f$ is integer valued with $|f(S)|\leq M$
for all $S\subseteq[n]$. Our goal is to deterministically produce
an exact minimizer of $f$. The primary result of this section is
showing the following, that we can achieve this in $\otilde(nM^{3}\runtime)$
time:
\begin{thm}
\label{thm:runtime_pseudopoly} Given an integer valued submodular
function $f$ with $|f(S)|\leq M$ for all $S\subseteq[n]$ in time
$O(nM^{3}\time\log n)$ we can compute a minimizer of $f$.
\end{thm}
We prove the theorem by describing how to maintain $(x^{(i)},\tilde{g}(x^{(i)}))$
as in Lemma~\ref{lem:framework}. In fact, in this case, $\tilde{g}$
will \emph{deterministically }be the subgradient of the Lovasz extension.
In Lemma~\ref{lem:sparse_subgradients}, we prove that the Lovasz
subgradient is $O(M)$-sparse and so $\|g\|_{2}^{2}\leq O(M^{2})$.
Thus, invariants (a), (b), and (c) are satisfied with $B^{2}=O(M^{2})$.
The main contribution of this section is Lemma \ref{lem:subgradient-update},
where we show that $\mathrm{T_{g}}=O(M\log n\cdot\time)$, that is, the subgradient
can be updated in this much time. Pseudocode for the full algorithm
can be found in Section~\ref{alg:exact-1}.
\begin{comment}
First we provide Lemma~\ref{lem:sparse_subgradients} proving that
subgradients of the Lovasz extension in this setting are sparse. This
essentially follows directly from Lemma~\ref{lem:bounded-subgradients},
our bound on the $\ellOne$ norm of the subgradients of the Lovasz
extension. T
\end{comment}
\begin{comment}
Next, using the fact that the subgradients are sparse we provide an
efficient data structure for maintaining this sparse gradient with
Lemma~\ref{lem:subgradient-update}. In particular we show that in
$\otilde(n\time)$ we can build a data structure that after any $s$-sparse
change in $x$ we can exactly compute a subgradient of the Lovasz
extension after the change in $\otilde((s+k)\time)$ time. Here we
leverage, Lemma~\ref{lem:subgrad_monotonicity}, the monotonicity
of the subgradient, and Lemma~\ref{lem:subgrad_interval}, that we
can compute the sum of many coordiantes of the subgradient cheaply.
Together they imply that if we add a vector of the same sign, then
for all the coordinates that don't change they change monotonically
and thus with $O(1)$ evaluations of $f$ we can compute the total
change over an contiguous interval of these coordinates. Consequently,
we can efficiently binary search for the changed coordinate over any
contiguous interval of unchanged coordinates quickly. Breaking the
$s$-sparse change into an all positive change followed by an all
negative change and using a data structure to keep track of these
changes ultimately yields Lemma~\ref{lem:subgradient-update-old}.
Finally, using this data structure for computing the subgradient and
with Lemma~\ref{lem:framework} we prove Theorem~\ref{thm:runtime_pseudopoly}.
\end{comment}
\begin{lem}
\label{lem:sparse_subgradients} For integer valued $f$ with $|f(S)|\leq M$
for all $S$ the subgradient $g(x)$ has at most $3M$ non-zero entries
for all $x\in[0,1]^{n}$.\end{lem}
\begin{proof}
By Lemma~\ref{lem:bounded-subgradients} we know that $\norm{g(x)}_{1}\leq3M$.
However, since $g(x)_{P_{i}}=f(P[i])-f(P[i-1])$ and since $f$ is
integer valued, we know that either $g(x)_{P_{i}}=0$ or $\left|g(x)_{P_{i}}\right|\geq1$.
Consequently, there are at most $3M$ values of $i$ for which $g(x)_{i}\neq0$.\end{proof}
\begin{lem}
\label{lem:subgradient-update} With $O(n\cdot\time)$ preprocessing
time the following data structure can be maintained. Initially, the
data structure is given $x^{(0)}\in[0,1]^{n}$ and $g(x^{(0)})$. Henceforth,
for all $i$, given $g(x^{(i)})$ and a vector $e^{(i)}$ which is $k$-sparse,
in $O(k\log n+k\time+M\time\log n)$ time one can update $g(x^{(i)})$
to the subgradient $g(x^{(i+1)})$ for $x^{(i+1)}=x^{(i)}+e^{(i)}$.\end{lem}
\begin{proof}
The main idea is the following. Suppose $e^{(i)}$ is non-negative
(we later show how to easily reduce to the case where all coordinates
in $e^{(i)}$ have the same sign, and the non-positive case is similar).
Then, by Lemma~\ref{lem:subgrad_monotonicity}, for all coordinates
not in the support of $e^{(i)}$, the gradient only goes down. By Lemma
\ref{lem:sparse_subgradients}, the total number of changes is $O(M)$,
and since we can evaluate the sum of gradients on intervals by Lemma
\ref{lem:subgrad_interval}, a binary search procedure allows us to
find \emph{all }the gradient changes in $O(M\log n\cdot\time)$ time.
We now give full details of this idea.
We store the coordinates of $x^{(i)}$ in a balanced binary search
tree (BST) with a node for each $j\in[n]$ keyed by the value of $x_{j}^{(i)}$;
ties are broken consistently, e.g. by using the actual value of $j$.
We take the order of the nodes $j\in[n]$ in the binary search tree
to define the permutation $P^{(i)}$ which we also store explicitly
in a linked list, so we can evaluate $f(P^{(i)}[k])$ in $O(\time)$
time for any $k$. Note that each node of the BST corresponds
to a subinterval of $P^{(i)}$ given by the subtree rooted at that node.
At each node of the BST, we store the sum of $g(x^{(i)})_{j}$
over all nodes $j$ in its subtree, and call it the \emph{value }of
the node. Note by Lemma \ref{lem:subgrad_interval} each individual
such sum can be computed with 2 calls to the evaluation oracle. Finally,
in a linked list, we keep all indices $j$ such that $g(x^{(i)})_{j}$
is non-zero and we keep pointers to them from their corresponding
nodes in the binary search tree. Using the binary search tree and the
linked list, one can clearly output the subgradient. Also, given $x^{(0)}$,
in $O(n\cdot\time)$ time one can obtain the initialization. What
remains is to describe the update procedure.
We may assume that all non-zero entries of $e^{(i)}$ have the same
sign; otherwise write $e^{(i)}:=e_{+}^{(i)}+e_{-}^{(i)}$ and perform
two updates. Without loss of generality we assume the sign is $+$ (the
other case is analogous). Let $S$ be the set of indices of $e^{(i)}$
which are non-zero.
First, we change the key for each $j\in[n]$ such that $x_{j}^{(i+1)}\neq x_{j}^{(i)}$
and update the BST. Since we chose a consistent tie breaking rule
for keying, only these elements $j\in[n]$ will change position in
the permutation $P^{(i+1)}$. Furthermore, performing this update
while maintaining the subtree labels can be done in $O(k\log n)$
time as it is easy to see how to implement binary search trees that
maintain the subtree values even under rebalancing. For the time being,
we retain the old values as is.
For brevity, let $g^{(i)}$ and $g^{(i+1)}$ denote the gradients
$g(x^{(i)})$ and $g(x^{(i+1)})$, respectively. Since we assume all
non-zero changes in $e^{(i)}$ are positive, by Lemma \ref{lem:subgrad_monotonicity}
we know that $g_{j}^{(i+1)}\le g_{j}^{(i)}$ for all $j\notin S$.
First, since $|S|\leq k$, for all $j\in S$ we go ahead and compute
$g_{j}^{(i+1)}$ in $O(k\time)$ time. For each such $j$ we update
the value of the nodes from $j$ to the root, by adding the difference
$(g_{j}^{(i+1)}-g_{j}^{(i)})$ to each of them. Next, we perform the
following operation top-down starting at the root: at each node we compare
the current subtree value stored at this node with what the value
actually should be under $g^{(i+1)}$. Note that since we know $P^{(i+1)}$,
the latter can be computed with 2 evaluation queries. The simple but
crucial observation is that if at any node $j$ these two values match,
then we are guaranteed that $g_{l}^{(i+1)}=g_{l}^{(i)}$ for all $l$
in the subtree rooted at $j$ and we do not need to recurse on the children
of this node. The reason is that for all $l$ in the subtree
we must have $g_{l}^{(i+1)}\leq g_{l}^{(i)}$ by Lemma \ref{lem:subgrad_monotonicity},
and so if the sums are equal then we must have equality everywhere.
Since at most $O(M)$ coordinates change, this takes $O(M\time\log n)$
time to update \emph{all }the changes to $g^{(i+1)}$ in the binary
search tree. During the whole process, whenever a coordinate changes from
non-zero to zero or from zero to non-zero, we update the linked list
accordingly.
\end{proof}
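The binary search at the heart of this proof can be sketched in isolation (our simplified, illustrative code: it recurses on index intervals of a fixed array rather than on the BST, and it is handed the new interval sums directly, whereas the real data structure obtains each such sum with 2 evaluation-oracle calls via Lemma~\ref{lem:subgrad_interval}). The key point is that when all coordinate-wise differences have one sign, an interval whose sum is unchanged is unchanged entirely, so we only recurse where the sums differ.

```python
def find_changes(old, interval_sum, lo, hi, out):
    """Append to `out` every index i in [lo, hi) where the (hidden) new
    vector differs from old[i], assuming new[i] <= old[i] for every i
    (a one-signed change) and that interval_sum(a, b) returns the sum of
    the new vector over [a, b).  Intervals with matching sums are never
    explored, so only O(#changes * log n) interval-sum queries are made.
    (In the real data structure the old sums are stored at BST nodes
    rather than recomputed via sum(old[lo:hi]) as in this sketch.)"""
    if lo >= hi:
        return
    if interval_sum(lo, hi) == sum(old[lo:hi]):
        return  # equal sums + one-signed differences => nothing changed here
    if hi - lo == 1:
        out.append(lo)  # isolated a single changed coordinate
        return
    mid = (lo + hi) // 2
    find_changes(old, interval_sum, lo, mid, out)
    find_changes(old, interval_sum, mid, hi, out)
```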
\begin{comment}
Using the observations in the previous two sections we can produce
our nearly linear pseudpolynomial time algorithm for submodular function
minimization. To do this the main result we will use is standard bounds
on mirror descent to minimize a function with a subgradient oracle.
\end{comment}
\begin{proof}[Proof of Theorem~\ref{thm:runtime_pseudopoly}]
We apply Lemma~\ref{lem:framework}, which gives the precise requirements
of our subgradient oracle. We know that the subgradients we produce
are always $O(M)$-sparse by Lemma~\ref{lem:sparse_subgradients}
and satisfy $B^{2}=O(M^{2})$. Consequently, we can simply instantiate
Lemma~\ref{lem:subgradient-update} with $k=O(M)$ to obtain our
algorithm. Furthermore, since $f$ is integral we know that so long
as the set has additive error less than $1$, i.e. $\epsilon<1$,
the set is a minimizer. Consequently, we can minimize $f$ in the time
given by Lemma~\ref{lem:framework}, where the initialization and
per-iteration update costs come from Lemma~\ref{lem:subgradient-update}
and the number of iterations is $T=O(nM^{2})$, yielding
\[
O\left(n\time+n\log n+\left(M\log n+M\time\log n\right)\cdot nM^{2}\right)=O(nM^{3}\time\log n)\,.
\]
\end{proof}
\subsection{Subquadratic Additive Approximation Algorithm \label{sec:fast_sfm:subquad}}
Here we provide the first subquadratic additive approximation algorithm
for submodular function minimization. Throughout this section we assume
that $f$ is real valued with $|f(S)|\leq1$ for all $S\subseteq[n]$.
Our goal is to provide a randomized algorithm that produces a set
$S\subseteq[n]$ such that $\E f(S)\leq\OPT+\epsilon$. The primary result
of this section is showing the following, that we can achieve this
in $O(n^{5/3}\epsilon^{-2}\time\log^{4}n)$ time:
\begin{thm}
\label{thm:runtime_additive} Given a submodular function $f:2^{[n]}\rightarrow\R$
with $|f(S)|\leq1$ for all $S\subseteq[n]$, and any $\eps>0$,
we can compute a random set $S$ such that $\E f(S)\leq\OPT+\epsilon$
in time $O(n^{5/3}\epsilon^{-2}\time\log^{4}n)$.
\end{thm}
The proof of this theorem has two parts. Note that the difficulty
in the real-valued case is that we can no longer assume the Lovasz
subgradients are sparse, and so we cannot do naive updates. Instead,
we use the fact that the subgradient has small $\ell_{2}$ norm to get
sparse \emph{estimates }of the subgradient. This is the first part, where
we describe a sampling procedure which, given any point $x$ and a
$k$-sparse vector $e$, returns a good and sparse estimate of the
\emph{difference }between the Lovasz subgradients at $x+e$ and $x$.
The second issue we need to deal with is that if we naively keep using
this estimator, then the error (variance) starts to accumulate. The
second part then shows how to use the sampling procedure in a ``batched
manner'' so as to keep the total variance under control, restarting
the whole procedure with a certain frequency. Pseudocode for the
full algorithm can be found in Section \ref{alg:exact-1}.
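The key primitive in the first part is importance sampling of the difference vector $d=g(x+e)-g(x)$: sample a coordinate $j$ with probability $|d_{j}|/\norm d_{1}$ and reweight by $\pm\norm d_{1}$, giving an unbiased $1$-sparse estimate; averaging $\ell$ independent samples gives an $O(\ell)$-sparse estimate whose variance is at most $\norm d_{1}^{2}/\ell$, which is $O(1/\ell)$ here since $\norm d_{1}=O(1)$ by Lemma~\ref{lem:bounded-subgradients}. A naive $O(n)$-time sketch of this estimator follows (our illustrative code; the point of the BST machinery in the lemma below is to draw each such sample with only polylogarithmically many evaluations):

```python
import random

def sample_difference_estimate(diff, ell, rng):
    """Unbiased, (<= ell)-sparse estimate z of the vector `diff`:
    the average of ell independent importance samples, each picking
    coordinate j with probability |diff[j]| / ||diff||_1 and contributing
    sign(diff[j]) * ||diff||_1 on that coordinate.  Then E[z] = diff and
    E||z - diff||_2^2 <= ||diff||_1^2 / ell."""
    n = len(diff)
    l1 = sum(abs(v) for v in diff)
    z = [0.0] * n
    if l1 == 0.0:
        return z
    weights = [abs(v) for v in diff]
    for _ in range(ell):
        j = rng.choices(range(n), weights=weights)[0]
        z[j] += (l1 if diff[j] > 0 else -l1) / ell
    return z
```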
\begin{lem}
\label{lem:subgradient-update_additive-1} Suppose a vector $x\in[0,1]^{n}$
is stored in a BST sorted by value. Given a $k$-sparse vector $e$
which is either non-negative or non-positive, and an integer $\ell\geq1$,
there is a randomized sampling procedure which returns a vector $z$
with the following properties: (a) $\E[z]=g(x+e)-g(x)$, (b) $\E[\norm{z-\E[z]}_{2}^{2}]=O(1/\ell)$,
and (c) the number of non-zero coordinates of $z$ is $O(\ell)$.
The time taken by the procedure is $O((k+\ell)\cdot\time\log^{2}n)$.\end{lem}
\begin{proof}
We assume that each non-zero value of $e$ is positive as the other
case is analogous. Note that $x$ is stored in a BST, and the permutation
$P_{x}$ consistent with $x$ is stored in a doubly linked list. Let
$S$ be the set of positive coordinates of $e$ with $|S|=k$ and
let $y$ denote the vector $x+e$. We compute $P_{y}$ in $O(k\log n)$
time.
Let $I_{1},\ldots,I_{2k}\subseteq[n]$ denote the subsets of the coordinates
that correspond to the intervals which are contiguous in both $P_{x}$
and $P_{y}$. Note that there are at most $2k$ such intervals, and
some of them can be empty.
We store the pointers to the endpoints of each interval in the BST.
This can be done in $O(k\log n)$ time as follows. First compute the
coarse intervals which are contiguous in $P_{x}$ in $O(k)$ time.
These intervals will be refined when we obtain $P_{y}$. In $O(k\log n)$
time, update the BST so that for every node we can figure out which
coarse interval it lies in, in $O(\log n)$ time. This is done by walking
up the BST from every endpoint of all the $k$ intervals and storing
which ``side'' of the interval they lie in. Given a query node,
we can figure out which interval it lies in by walking up the BST
to the root. Finally, for all nodes in $S$, when we update the BST
in order to obtain $P_{y}$, using the updated data structure we figure
out, in $O(\log n)$ time, which coarse interval each lies in and refine
that interval.
For each $j\in S$, we compute $d_{j}\defeq g(y)_{j}-g(x)_{j}$ explicitly.
This can be done in $O(k\time)$ time using $P_{x}$ and $P_{y}$. For
$r\in[2k]$, we define $D_{r}:=\sum_{j\in I_{r}}(g(y)_{j}-g(x)_{j})$.
Since each $I_{r}$ is a contiguous interval in both $P_{x}$ and
$P_{y},$ Lemma~\ref{lem:subgrad_interval} implies that we can store
all $D_{r}$ in $O(k\cdot\time)$ time in look-up tables. Note that
by the monotonicity property (Lemma~\ref{lem:subgrad_monotonicity}), each summand
in $D_{r}$ has the same sign, and therefore summing the absolute
values of the $D_{r}$'s and $d_{j}$'s gives $\norm{g(y)-g(x)}_{1}.$
We store this value of the $\ell_{1}$ norm.
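As an aside, the look-up tables support sampling an index proportionally to the stored absolute values via standard prefix sums. The following sketch is our illustration, not the paper's BST-based routine, with the random draw `u` passed explicitly so the routine is deterministic:

```python
import bisect

def build_cumulative(weights):
    """Prefix sums over the non-negative weights (the |d_j|'s and |D_r|'s);
    the final entry equals the total l1 mass ||g(y) - g(x)||_1."""
    cum, total = [], 0.0
    for w in weights:
        total += w
        cum.append(total)
    return cum

def sample_index(cum, u):
    """Map u, uniform in [0, total), to index i with probability w_i / total."""
    return bisect.bisect_right(cum, u)
```

In the actual procedure the first draw chooses between a coordinate $j\in S$ and an interval $I_{r}$; for that, the two weight lists would simply be concatenated.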
Now we can state the randomized algorithm which returns the vector
$z$. We start by sampling either a coordinate $j\in S$ with probability
proportional to $|d_{j}|$, or an interval $I_{r}$ with probability
proportional to $\abs{D_{r}}$. If we sample an interval, then iteratively
sample sub-intervals $I'\subset I_{r}$ proportional to $\sum_{j\in I'}(g(y)_{j}-g(x)_{j})$
till we reach a single coordinate $j\notin S.$ Note that any $j\in[n]$
is sampled with probability proportional to $\abs{g(y)_{j}-g(x)_{j}}$.
We now show how to do this iterative sampling in $O((\time+\log n)\log n)$
time. Given $I_{r}$, we start from the root of the BST and find a
node closest to the root which lies in $I_{r}$. More precisely, for
every ancestor of the endpoints of $I_{r}$ that does not belong
to the interval, we store which ``side'' of the tree $I_{r}$ lies
in; hence one can start from the root and walk down to a node inside
$I_{r}$. This partitions $I_{r}$ into two subintervals, and we randomly
select $I'$ proportional to $\sum_{j\in I'}(g(y)_{j}-g(x)_{j})$.
Since sub-intervals are contiguous in $P_{y}$ and $P_{x}$, this
is done in $O(\time)$ time. We then update the information at every
ancestor node of the endpoints of the sampled $I'$ in $O(\log n)$
time. Since each iteration decreases the height of the least common
ancestor of the endpoints of $I'$, in $O(\log n)$ iterations (that
is the height of the tree), we will sample a singleton $j\notin S$.
In summary, we can sample $j\in[n]$ with probability proportional
to $\abs{g(y)_{j}-g(x)_{j}}$ in $O((\time+\log n)\log n)$ time. If we
sample $j,$ we return the (random) vector
\[
z:=\norm{g(y)-g(x)}_{1}\cdot\sign(g(y)_{j}-g(x)_{j})\cdot\mathbf{1}_{j}
\]
where recall $\mathbf{1}_{j}$ is the vector with 1 in the $j$th
coordinate and zero everywhere else. Note that given $j,$ computing
$z$ takes $O(\time+\log n)$ time since we have to evaluate $g(y)_{j}$ and
$g(x)_{j}$. Recall that we already know the $\ell_{1}$ norm. Also note
by construction, $\E[z]$ is precisely the vector $g(y)-g(x).$ To
upper bound the variance, note that
\[
\E[\norm{z-\E z}_{2}^{2}]\leq\E[\norm z_{2}^{2}]=\norm{g(y)-g(x)}_{1}^{2}\leq9\cdot\max_{S\subseteq V}|f(S)|\leq9
\]
by Lemma~\ref{lem:bounded-subgradients} and the fact that $|f(S)|\leq1$.
Also observe that $z$ is $1$-sparse.
Given $\ell$, we sample $\ell$ such random $z$'s independently
and return their \emph{average}. The expectation remains the same,
but the variance scales down by a factor of $\ell$. The sparsity is at most
$\ell$. The total running time is $O(k(\time+\log n)+\ell(\time+\log n)\log n)$.
This completes the proof of the lemma.
\end{proof}
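For intuition, the estimator constructed in the lemma can be mimicked directly when one has explicit access to the difference vector $g(y)-g(x)$ (which the lemma's BST machinery deliberately avoids forming). The sketch below is ours, for illustration only; the function name and interface are hypothetical:

```python
import numpy as np

def one_sparse_average(diff, ell, rng):
    """Average of ell i.i.d. 1-sparse unbiased estimators of `diff`: each
    sample picks j with probability |diff_j| / ||diff||_1 and contributes
    ||diff||_1 * sign(diff_j) * e_j, so the mean is `diff` itself and the
    variance of the average scales down as 1/ell."""
    diff = np.asarray(diff, dtype=float)
    l1 = np.abs(diff).sum()
    if l1 == 0.0:
        return np.zeros_like(diff)
    js = rng.choice(len(diff), size=ell, p=np.abs(diff) / l1)
    z = np.zeros_like(diff)
    np.add.at(z, js, l1 * np.sign(diff[js]) / ell)  # accumulate repeated picks
    return z
```

Each individual sample is a scaled signed indicator of $\ell_{2}$ norm $\norm{\mathrm{diff}}_{1}$, matching properties (a)--(c) of the lemma.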
We now complete the proof of Theorem~\ref{thm:runtime_additive}.
\begin{proof}[Proof of Theorem~\ref{thm:runtime_additive}]
The algorithm runs in batches (as mentioned before, the pseudocode is in Section~\ref{alg:exact-1}).
At the beginning of each batch, we have our current vector $x^{(0)}$
as usual stored in a BST. We also compute the Lov\'{a}sz gradient $g^{(0)}=g(x^{(0)})$,
spending $O(n\time\log n)$ time. The batch runs for $T=\Theta(n^{1/3})$
steps. At each step $t\in[T],$ we need to specify an estimate $\tilde{g}^{(t)}$
to run the (stochastic) subgradient procedure as discussed in Lemma
\ref{lem:framework}. For $t=0$, since we know $g^{(0)}$ explicitly,
we get $\tilde{g}^{(0)}$ by returning $\norm{g^{(0)}}_{1}\mathrm{sign}(g_{j}^{(0)})\mathbf{1}_{j}$
with probability proportional to $\abs{g_{j}^{(0)}}$. This is a 1-sparse,
unbiased estimator of $g^{(0)}$ with $O(1)$ variance. Define $z^{(0)}:=\tilde{g}^{(0)}$.
Henceforth, for every $t\geq0,$ the subgradient descent step suggests
a direction $e^{(t)}$ in which to move whose sparsity is at most
the sparsity of $\tilde{g}^{(t)}.$ We partition $e^{(t)}=e_{+}^{(t)}+e_{-}^{(t)}$
into its positive and negative components. We then apply Lemma \ref{lem:subgradient-update_additive-1}
twice: once with $x=x^{(t)}$, $e=e_{+}^{(t)}$, and $\ell=t$, to
obtain a random vector $z_{+}^{(t)}$ of sparsity $O(t)$, and then with
$x=x^{(t)}+e_{+}^{(t)}$, $e=e_{-}^{(t)}$, and $\ell=t$, to obtain
a random vector $z_{-}^{(t)}$ of sparsity $O(t)$. The estimate of
the gradient at time $t$ is the cumulative sum of these random vectors. That
is, for all $t\geq1$, define $\tilde{g}^{(t)}:=z^{(0)}+\sum_{1\leq s\leq t}(z_{+}^{(s)}+z_{-}^{(s)})$.
By property (a) of Lemma \ref{lem:subgradient-update_additive-1},
$\tilde{g}^{(t)}$ is a valid (unbiased) stochastic subgradient and can be
fed into the framework of Lemma \ref{lem:framework}. Note that for
any $t\in[T]$, the sparsity of $\tilde{g}^{(t)}$ is $O(t^{2})$ and
so is the sparsity of $e^{(t)}$ suggested by the stochastic subgradient
routine. Thus, the $t$th step of estimating $z_{+}^{(t)}$ and $z_{-}^{(t)}$
requires time $O(t^{2}\time\log^{2}n)$, implying we can run $T$
steps of the above procedure in $O(T^{3}\time\log^{2}n)$ time.
Finally, to argue about the number of iterations required to get $\eps$-close,
we need to upper bound $\E[\norm{\tilde{g}^{(t)}}_{2}^{2}]$ for every
$t$. Since $\E[\tilde{g}^{(t)}]=g^{(t)}$, the true subgradient at
$x^{(t)}$, and since $\norm{g^{(t)}}_{2}^{2}=O(1)$ by Lemma \ref{lem:bounded-subgradients},
it suffices to upper bound $\E[\norm{\tilde{g}^{(t)}-\E[\tilde{g}^{(t)}]}_{2}^{2}]$.
But this follows since $\tilde{g}^{(t)}$ is just a sum of independent
$z$-vectors:
\[
\E[\norm{\tilde{g}^{(t)}-\E[\tilde{g}^{(t)}]}_{2}^{2}]=\sum_{s\leq t}\E[\norm{z_{+}^{(s)}-\E[z_{+}^{(s)}]}_{2}^{2}]+\sum_{s\leq t}\E[\norm{z_{-}^{(s)}-\E[z_{-}^{(s)}]}_{2}^{2}]=O\left(\sum_{s\leq t}1/s\right)=O(\log n)
\]
The second-to-last equality follows from property (b) of Lemma \ref{lem:subgradient-update_additive-1}.
And so, $\E[\norm{\tilde{g}^{(t)}}_{2}^{2}]=\E[\norm{\tilde{g}^{(t)}-\E[\tilde{g}^{(t)}]}_{2}^{2}]+\norm{g^{(t)}}_{2}^{2}=O(\log n)$.
Therefore, we can apply the framework in Lemma \ref{lem:framework}
with $B=O(\log n)$, implying that the total number of steps to get an $\eps$-approximate
solution is $N=O(n\log^{2}n\eps^{-2})$. Furthermore, since each batch takes
time $O((n+T^{3})\time\log^{2}n)$ and there are $N/T$ batches, we
get that the total running time is at most
\[
O\left(n\time\log^{4}n\eps^{-2}\left(\frac{n+T^{3}}{T}\right)\right)=\tilde{O}(n^{5/3}\eps^{-2}\time)
\]
for $T=n^{1/3}$. This completes the proof of Theorem \ref{thm:runtime_additive}.
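The $O(\log n)$ bound in the display above rests only on the harmonic sum $\sum_{s\le t}1/s\le 1+\ln t$; a quick numerical sanity check (ours, assumption-free):

```python
import math

def harmonic(t):
    """H_t = sum_{s=1}^{t} 1/s, the quantity controlling the accumulated
    variance of the estimates across a batch of t steps."""
    return sum(1.0 / s for s in range(1, t + 1))
```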
\end{proof}
\subsection{Improvements when Minimizer is Sparse \label{sec:fast_sfm:sparse}}
Here we discuss how to improve our running times when the submodular
function $f$ is known to have a sparse minimizer, that is, when some set
minimizing $f$ has at most $s$ elements. Throughout this section
we assume that $s$ is known.
\begin{thm}
\label{thm:sparse} Let $f$ be a submodular function with an $s$-sparse
minimizer. Then if $f$ is integer valued with $|f(S)|\leq M$ for
all $S\subseteq[n]$ we can compute the minimizer deterministically
in time $O((n+sM^{3})\log n\cdot\time)$. Furthermore if $f$ is real
valued with $|f(S)|\leq1$ for all $S\subseteq[n]$, then there is
a randomized algorithm which in time $\tilde{O}((n+sn^{2/3})\time\eps^{-2})$
returns a set $S$ such that $\E[f(S)]\le\OPT+\eps,$ for any $\eps>0$.
\end{thm}
Therefore, if we know that the sparsity of the optimal solution is,
say, ${\tt polylog}(n)$, then there is a near-linear-time approximate
algorithm achieving constant additive error.
To obtain these running times we leverage the same data structures for
maintaining subgradients presented in Section~\ref{sec:fast_sfm:pseudopoly}
and Section~\ref{sec:fast_sfm:subquad}; what changes is that we specialize
the framework presented in Section~\ref{sec:fast_sfm:framework}.
In particular, we simply leverage the fact that rather than minimizing the Lov\'{a}sz
extension over $[0,1]^{n}$, we can minimize over $S_{s}\defeq\{x\in[0,1]^{n}\,|\,\sum_{i\in[n]}x_{i}\leq s\}$.
This preserves the minimum value, but improves
the convergence of projected (stochastic) subgradient descent (because
the quantity $R$ improves from $n$ to $s$). To show this formally, we
simply need to show that the projection step does not asymptotically hurt
the performance of our algorithm.
We break the proof into three parts. First, in Lemma~\ref{lem:projection_step}
we show how to project onto $S_{s}$. Then, in Lemma~\ref{lem:sparse_framework},
we show how to update our framework. Using these, we prove Theorem~\ref{thm:sparse}.
\begin{lem}
\label{lem:projection_step} For $k\geq0$ and $y\in\R^{n}$ let $S=\{x\in[0,1]^{n}\,|\,\sum_{i}x_{i}\leq k\}$
and
\[
z=\argmin_{x\in S}\frac{1}{2}\norm{x-y}_{2}^{2}.
\]
Then, we have that for all $i\in[n]$
\[
z_{i}=\text{median}(0,y_{i}-\lambda,1)
\]
where $\lambda$ is the smallest non-negative number such that $\sum_{i}z_{i}\leq k$. \end{lem}
\begin{proof}
By the method of Lagrange multipliers, we know that there is some $\lambda\geq0$
such that
\[
z=\argmin_{x\in[0,1]^{n}}\frac{1}{2}\norm{x-y}_{2}^{2}+\lambda\sum_{i\in[n]}x_{i}\,.
\]
Since the variables in this problem are decoupled from each other,
we can solve it coordinate-wise and obtain, for all $i\in[n]$,
\[
z_{i}=\text{median}(0,y_{i}-\lambda,1).
\]
Since $\sum_{i\in[n]}z_{i}$ decreases as $\lambda$ increases, we
know that $\lambda$ is the smallest non-negative number such that
$\sum_{i\in[n]}z_{i}\leq k$.
\end{proof}
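Numerically, the lemma translates into a simple bisection on $\lambda$, since $\sum_{i}z_{i}$ is non-increasing in $\lambda$. The sketch below is ours, for illustration only; the sorted-order data structure of Lemma~\ref{lem:sparse_framework} is the efficient realisation:

```python
def project_capped_simplex(y, k, tol=1e-9):
    """Euclidean projection of y onto {x in [0,1]^n : sum_i x_i <= k},
    realising z_i = median(0, y_i - lam, 1) with lam found by bisection."""
    med = lambda v: min(1.0, max(0.0, v))
    if sum(med(v) for v in y) <= k:
        return [med(v) for v in y]            # lam = 0 is already feasible
    lo, hi = 0.0, max(y)                      # at lam = max(y) the sum is 0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if sum(med(v - lam) for v in y) > k:
            lo = lam                          # constraint violated: shift more
        else:
            hi = lam                          # feasible: try a smaller lam
    return [med(v - hi) for v in y]
```

Returning the shift `hi` end of the bracket guarantees the output is feasible up to the bisection tolerance.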
In particular, we provide Lemma~\ref{lem:sparse_framework}, an improvement
on Lemma~\ref{lem:framework}.
\begin{lem}
\label{lem:sparse_framework} Suppose that for $N\geq sB^{2}\epsilon^{-2}$
and any sequence $x^{(1)},\ldots,x^{(N)}$ such that $x^{(i+1)}-x^{(i)}$
is $k^{(i)}$-sparse (up to modifications that do not affect the additive
distance between non-zero coordinates), with $K\defeq\sum_{i\in[N]}k^{(i)}=O(Nk)$,
we can implement a subgradient oracle for $f$ and $x^{(i)}$, denoted
$\tilde{g}(x^{(i)})$, that is $k$-sparse and obeys $\E\norm{\tilde{g}}_{2}^{2}\leq B^{2}$.
Then in time $O(n(\runtime+\log n)+Nk\log n)$ we can compute a set
$S$ such that $\E f(S)\leq\OPT+\epsilon$ (and if the subgradient
oracle is deterministic then the result holds without the expectation).\end{lem}
\begin{proof}
The proof is the same as before, except that the bound $R$ improves to
$s$ and we need to handle the new projection step. In
the projection step, we set all the coordinates that are less than
$0$ to $0$ and then keep subtracting uniformly (stopping a coordinate whenever
it reaches $0$) until the coordinates sum to at most $s$.
We can do this efficiently by maintaining an additive offset
together with the coordinate values in sorted order. Then we only need to
know the number of coordinates above a given threshold, together with the maximum
and the minimum non-zero coordinate, to determine how much to subtract
before the minimum non-zero coordinate reaches $0$. This is easily done in $O(\log n)$
time. Note that we do not count movements that fail to set a coordinate
to $0$, since such movements do not change the additive distances between the non-zero
coordinates. Consequently, an iteration may move many coordinates
only if it sets many of them to $0$; this cost, however, is paid for by the movements
that created those coordinates, so we need only $O(Nk\log n)$ time in total for all
the updates.
\end{proof}
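The additive-offset idea can be sketched as follows (our illustrative code, which handles only the uniform subtraction with clamping at $0$; the threshold counting and the cap at $1$ are omitted):

```python
import bisect

class OffsetSortedVector:
    """Positive coordinates are kept sorted by stored value; the true value of
    a stored coordinate is (stored - offset). Subtracting u uniformly from all
    positive coordinates is one offset update, O(log n) plus O(1) amortised
    per coordinate that reaches 0 (each such coordinate is removed once)."""

    def __init__(self, values):
        self.offset = 0.0
        self.vals = sorted(v for v in values if v > 0)
        self.zeros = len(values) - len(self.vals)

    def subtract_uniform(self, u):
        self.offset += u
        cut = bisect.bisect_right(self.vals, self.offset)  # these reach 0
        self.zeros += cut
        self.vals = self.vals[cut:]

    def true_values(self):
        """Current coordinates (zeros first, then the positive ones in order)."""
        return [0.0] * self.zeros + [v - self.offset for v in self.vals]
```

A coordinate that would go negative is clamped at $0$ exactly once and then never moves again, which is the amortisation used in the proof.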
We now have everything we need to prove Theorem~\ref{thm:sparse}.
\begin{proof}[Proof of Theorem~\ref{thm:sparse}]
(Sketch) The proof is the same as in Section~\ref{sec:fast_sfm:pseudopoly}
and Section~\ref{sec:fast_sfm:subquad}. We just use Lemma~\ref{lem:sparse_framework}
instead of the previous framework lemma. To invoke the first data
structure, we simply perform the updates in batches; the second data structure
was already designed for this setting. \end{proof}
\section{Introduction}
Fractional kinetics is finding increasingly wide application to physics, chemistry, biology and interdisciplinary complexity science \cite{KlagesBook,KlafterEA1996,Scalas2004,ZaslavskyBook,BalescuBook,Mura2008}. One reason for this is the link between ``strange'' kinetics and observed non-Brownian anomalous diffusion, motivating the use of fractional dynamical models of transport processes, including those based on fractional calculus. \citet{Scalas2004} give numerous applications; we will just note space plasmas \citep{MilovanovZelenyi2001}, magnetically confined laboratory plasmas \citep{Balescu1995,Carreras2001,CastilloNegrete2005,SanchezEA2005}, fluid turbulence \citep{KlafterEA1996} and the travels of dollar bills \cite{BrockmannEA2006}.
Equally widespread in its application, and evolving in parallel with the theory of anomalous diffusion, is the theory of anomalous time series. The corresponding models, particularly in the mathematics and statistics literature, have often been based on stable self-similar processes \cite{SamTaqBook,EmbrechtsMaejima2002,Sottinen2003}. Stability here means the property whereby the shape of a probability density function (pdf) remains unchanged under convolution, to within a rescaling (cf. chapter 4 of \citet{MantegnaStanleyBook}). It is an attractive feature in modelling, particularly when one anticipates that a signal represents a sum of random processes. In particular, stable self-similar processes, a development in the wider field of stochastic processes \cite{Kolmogorov1956,Billingsley1979}, can model two effects which are often seen in real data records. The first, Mandelbrot's ``Noah'' effect \cite{Mandelbrot1963}, describes non-Gaussian ``heavy-tailed'' pdfs, while the second, his ``Joseph'' effect \cite{MandelbrotRSbook}, manifests itself as long-ranged temporal memory. The many applications have included hydrology \cite{MandelbrotVanNess1968}, finance \cite{Mandelbrot1963}, magnetospheric activity as measured by the auroral indices \cite{Watkins2002,WatkinsEA2005}, in-situ solar wind quantities \cite{WatkinsEA2005}, and solar flares \cite{BurneckiEA2008}.
The existence of two rich, parallel, but intersecting literatures means that it is not yet completely known which techniques from one will apply to a given problem in the other. It is for example not always clear {\it a priori} what type of kinetic equations will apply in a given context. The right class of kinetic equation for reversible microphysical transport need not also be the right one for an evolving time series taken from a macroscopic variable. The problem of model choice is an important and timely one, both in physics and more general complexity research. Because different models can predict subtly different observable scaling behaviours, distinguishing them may require measuring several exponents, as any individual exponent may be identical across several models, a point recently emphasised by \citet{Lutz2001}.
It is now increasingly recognised that much natural data is of the type that \citet{BrockmannEA2006} dubbed ``ambivalent''. In such systems heavy tailed jumps and long-ranged temporal memory compete to determine whether transport is effectively superdiffusive or subdiffusive. The ambivalent process they used for illustration, and fitted to data, was the well-studied fully fractional continuous time random walk \cite{KlagesBook} which incorporates both effects via fractional orders of the spatial and temporal derivatives in its kinetic equation. \citet{ZaslavskyEA2007,ZaslavskyEA2008} also advocated use of the same process in space physics for modelling the auroral index time series. They explicitly contested \cite{ZaslavskyEA2008} the applicability of a time series model \cite{WatkinsEA2005} based on a self-similar stable process-linear fractional stable motion (lfsm) \cite{SamTaqBook,EmbrechtsMaejima2002}-in this role. We note that, rather than being purely a mathematical abstraction, lfsm has been linked to physics via the propagation of activity fronts in extremal models \cite{KrishnamurthyEA2000}. A comparison of these two approaches, leading to a better understanding of their structure and their similarities and differences, thus seems to us to be highly topical. It will be the first of two main topics of this Paper. Although we are fully aware that the kinetic equation we obtain on its own cannot fully specify a non-Markovian process, and, importantly, will not be unique to lfsm, we nonetheless believe that our comparison of the kinetic equations for the two paradigms is of value, particularly as a source of physical insight (see also sections 1 and 2 of \cite{WatkinsEA2008}).
To make the comparison we first briefly recap (Table 1) the main kinetic equations corresponding to the modelling of time series by stable processes, and of anomalous diffusion by the continuous time random walk (CTRW), respectively. In particular we highlight (following \citep{Lutz2001}) the difference between fractional Brownian motion (fBm) and the fractional time process (ftp) which has sometimes led to confusion, at least in the physics and complexity literature (e.g. \cite{WatkinsEA2005}). We illustrate the potential value of this comparison with reference to a surprising gap in the physics literature, the absence of a kinetic equation corresponding to lfsm, analogous to the one given for fBm by \citet{WangLung1990}. We give a simple derivation by direct differentiation using the characteristic function. The kinetic equation can be obtained by methods as diverse as a transformation $t^{\alpha H}$ of the time variable in the space fractional diffusion equation, and a path integral \cite{CalvoEA2008}.
Our second main topic is the potential relevance of lfsm to physics as a toy model for ``calibrating'' diagnostics of intermittency (cf. \cite{MercikEA2003}). As a frequent attribute of nonequilibrium and nonlinear systems, intermittency has been a particular stimulus to physicists and time series modellers \cite{SornetteBook}. In particular the paradigm of self-organised criticality (SOC) \cite{SornetteBook} has been one framework for this, embodying the hypothesis of avalanches of activity in nonequilibrium complex systems. We investigate the scaling of intermittent bursts in lfsm, using the burst size and duration measures which have very often been used as direct diagnostics of SOC. Such measures have been previously studied on, among many others, magnetospheric and solar wind time series \cite{FreemanEA2000}. We follow several earlier conjectures \cite{FreemanEA2000,Watkins2002,CarboneEA2004,Bartolozzi2007} and make simple scaling arguments building on a result of \citet{KearneyMajumdar2005} which suggest that lfsm could indeed be one candidate model for observed power laws in such bursts. We test our arguments with numerics using the algorithm of Stoev and Taqqu \cite{StoevTaqqu2004}. We confirm the earlier numerical results of \citet{CarboneEA2004} and the more recent work of \citet{Rypdal2008}. These papers considered just the $\alpha=2$, fBm case, using a running average threshold and a fixed threshold respectively.
However we find numerically that for lfsm our simple scaling argument, while giving a good approximation to the dependence of the burst size exponent on the self similarity exponent $H$ as $\alpha$ is progressively reduced from 2, becomes much less accurate as $\alpha$ reaches 1. We conclude by offering some suggestions as to the reasons for this, and describing future work.
\section{The kinetic equation of lfsm}
\subsection{Limit theorems and stochastic processes}
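As a preview of the derivation mentioned in the Introduction, the boldfaced entry of Table 1 follows by direct differentiation of the marginal characteristic function of lfsm. In the sketch below $\nabla^{\alpha}$ denotes the Riesz fractional derivative with Fourier symbol $-|k|^{\alpha}$, and the coefficient $D$ is set to 1 as in the table:

```latex
\[
\hat{p}(k,t)=\exp\left(-|k|^{\alpha}t^{\alpha H}\right)
\quad\Longrightarrow\quad
\partial_{t}\hat{p}(k,t)=-\alpha H\,t^{\alpha H-1}|k|^{\alpha}\,\hat{p}(k,t),
\]
and inverting the Fourier transform gives
\[
\alpha H\,t^{\alpha H-1}\nabla^{\alpha}p=\partial_{t}p,
\]
which reduces to the fBm entry of Table 1 when $\alpha=2$.
```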
\begin{table}
\[
\begin{array}{*{20}c}
\hline
& \alpha &\vline & H &\vline & {{\rm{Stable \ process}}} & {{\rm{CTRW}}} \\
& & \vline & & \vline & & \\
& & \vline & & \vline & H=1/\alpha+d& d^{'}= \alpha H\\
\hline
& {\alpha {\rm{ = 2}}} &\vline & {H = 1/2} &\vline & {BWBm} & & \\
& {} &\vline & {} &\vline & \nabla ^2 p = \partial_t p & {} & \\
\hline
& {0 < \alpha \le 2} &\vline & {H = 1/\alpha} &\vline & {oLm} & & \\
& {} &\vline & {} &\vline & \nabla ^\alpha p = \partial_t p & {} & \\
\hline
& {\alpha = 2} &\vline & 0 \le H \le 1 &\vline & {fBm} & {ftp} & \\
& {} &\vline & {} &\vline & 2Ht^{2H-1} \nabla ^2 p = \partial_t p & \nabla ^2 P = \partial_t^{\alpha H} p & \\
\hline
& {0 \le \alpha < 2} &\vline & 0 \le H \le 1 &\vline & {\bf lfsm} & {ap} & \\
& {} &\vline & {} &\vline & {\bf \alpha H t^{\alpha H-1} \nabla ^\alpha p = \partial_t p } & \nabla ^\alpha p = \partial_t^{\alpha H} p & \\
\hline
\end{array}
\]
\caption{Kinetic equations for the main classes of process used to study anomalous time series and transport beyond the Bachelier-Wiener Brownian paradigm. Ordinary L\'{e}vy motion (oLm), parameterised by a stability exponent $\alpha$, relaxes the finite variance assumption of the central limit theorem. The fractional time process (ftp) and the ambivalent process (ap), i.e. the fully fractional continuous time random walk, add a fractional derivative of order $\alpha H$ to the kinetic equations for WBm and oLm respectively. A different way of introducing temporal memory effects is via stable self-similar processes with memory kernels: fractional Brownian motion (fBm) and linear fractional stable motion (lfsm) respectively. Although not fully specified by them, the stable processes nonetheless have kinetic equations with time-dependent diffusion coefficients. To our knowledge the (boldfaced) kinetic equation for lfsm had not been derived before our preprint \cite{WatkinsEA2008}; it was subsequently arrived at by path integral methods in \cite{CalvoEA2008}. In the self-similar stable processes the self-similarity parameter $H$ depends on a memory exponent $d$ and on the stability exponent $\alpha$ via $H=1/\alpha+d$; in the CTRW case, by contrast, the standard memory exponent is $d'=\alpha H$. In all cases there is a coefficient $D$ with appropriate dimensionality on the left-hand side which we have set to 1; in the BWBm case this is simply the familiar diffusion coefficient.}
\end{table}
In Table 1 we collect the kinetic equations for the pdf $p=p(x,t)$ of some processes which have been proposed in the various literatures on time series analysis and anomalous diffusion. For clarity we concentrate on the simplest examples from the family of stable processes and from the CTRW. In the table the (statistical) self-similarity exponent $H$ is defined using dilation in time, where $\Delta t$ goes to $\lambda \Delta t$:
\begin{equation}
x(\lambda \Delta t) = \lambda^H x(\Delta t)
\end{equation}
where the equality is in distribution.
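As a quick numerical illustration of this definition (our own sketch, not from the paper: the walk length, dilation factor and sample counts below are arbitrary choices), ordinary Brownian motion with $H=1/2$ has finite variance, so the distributional scaling can be read off from $\mathrm{Var}[x(\lambda \Delta t)] = \lambda^{2H}\,\mathrm{Var}[x(\Delta t)]$:

```python
import random

def brownian_displacement(n_steps, rng):
    # x(n_steps) for a discrete Brownian walk: sum of i.i.d. unit Gaussian steps
    return sum(rng.gauss(0.0, 1.0) for _ in range(n_steps))

def variance(samples):
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

rng = random.Random(0)
n, lam, trials = 16, 4, 5000
v_short = variance([brownian_displacement(n, rng) for _ in range(trials)])
v_long = variance([brownian_displacement(lam * n, rng) for _ in range(trials)])
# Self-similarity with H = 1/2 predicts v_long / v_short = lam^{2H} = lam
ratio = v_long / v_short
```

With the seed fixed the ratio comes out close to $\lambda^{2H}=4$. For the stable processes with $\alpha<2$ discussed below the variance is infinite, so the same dilation statement holds only in distribution, not through second moments.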
The first block of data rows in Table 1 corresponds to Bachelier-Wiener Brownian motion (BWBm), with the familiar diffusion equation beneath it; here we have abbreviated $\partial / \partial t$ to $\partial_t$. BWBm is of course a manifestation of the central limit theorem (CLT) \cite{Levy1937,Meerschaert2001}. The solution $p(x,t)$ is of Gaussian form with width spreading as $t^{1/2}$, and its characteristic function is also a Gaussian in $k$ ($\sim \exp (-|k|^2 t)$) with stability exponent $\alpha=2$; the process has power spectrum $S(f) \sim f^{-2}$.
\subsection{Anomalous diffusion and the extended Central Limit Theorem}
Similarly, the oLm rows of Table 1 correspond to relaxing the assumption of finite variance, by allowing a stability exponent $0 < \alpha < 2$. The corresponding probability density functions (pdfs) $p_\alpha(x,t)$ form the $\alpha$-stable class, with power law tails decaying as $x^{-(\alpha+1)}$. Following Mandelbrot, we refer to these as ``L\'{e}vy flights" or ordinary L\'{e}vy motion (oLm).
The corresponding kinetic equation
\begin{equation}
\frac{\partial p_{\alpha}(x,t)}{\partial t} = \frac{\partial^{\alpha} p_{\alpha}(x,t)}{\partial |x|^{\alpha}} \label{OLM_KE}
\end{equation}
has a symmetric Riesz fractional derivative in space, $\partial^{\alpha}/\partial |x|^{\alpha}$, which in the Table is given in three-dimensional form and abbreviated to $\nabla^{\alpha}$.
The Riesz derivative is a pseudodifferential operator with symbol $-|k|^{\alpha}$, and $p_{\alpha}(x,t)$ has characteristic function $\hat{p}_\alpha(k,t)= \exp(-|k|^{\alpha}t)$.
Unlike the cases we now go on to discuss, the kinetic equation for oLm is still unambiguously Markovian and an expression of the extended CLT. Due to infinite divisibility, in this specific case the pdf $p_{\alpha}(x,t)$ alone is enough to uniquely characterise the stochastic process, which we will call $Z_{\alpha}(t)$.
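The characteristic function statement can be made concrete by sampling: the Chambers--Mallows--Stuck recipe (a standard sampler, not part of this paper; the values of $\alpha$, $k$ and the sample count are illustrative) produces standard symmetric $\alpha$-stable variates whose empirical characteristic function should match $\exp(-|k|^{\alpha})$ at $t=1$:

```python
import math
import random

def symmetric_stable(alpha, rng):
    # Chambers-Mallows-Stuck sampler for a standard symmetric alpha-stable
    # variate (alpha != 1): U ~ Uniform(-pi/2, pi/2), W ~ Exp(1)
    u = (rng.random() - 0.5) * math.pi
    w = rng.expovariate(1.0)
    return (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

alpha, k, n = 1.5, 0.7, 50000
rng = random.Random(1)
draws = [symmetric_stable(alpha, rng) for _ in range(n)]
# For a symmetric law the characteristic function is real: E[exp(ikX)] = E[cos(kX)]
ecf = sum(math.cos(k * x) for x in draws) / n
target = math.exp(-abs(k) ** alpha)
```

Note that although the draws themselves have heavy tails, $\cos(kX)$ is bounded, so the empirical characteristic function converges at the usual $1/\sqrt{n}$ rate.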
\subsection{Relaxing independence through temporal memory: fractional Brownian motion vs. the fractional time process}
The fBm and ftp rows of Table 1 describe the case when the iid assumption is relaxed rather than the finite variance one. This case is more subtle than the previous two. Relaxing independence is one way to break the iid assumption and is the situation we consider. It can be done in several ways; we will discuss just two.
One of the ways which has been employed in the CTRW formalism is to take a power law pdf of waiting times $p(\tau) \sim \tau^{-(1+\alpha H)}$ \citep{LindenbergWest1986}. This became known as the fractional time process (ftp, see also \cite{Lutz2001}). The waiting times themselves are still iid, but their infinite mean is assumed to be a consequence of dependence due to microscale physics. The kinetic equation that corresponds to the ftp \citep{Balakrishnan1986, MetzlerKlafter2000,Lutz2001} can be seen in the CTRW column of the fBm/ftp rows of Table 1. We may define a temporal exponent by $d^{'}=\alpha H$.
The fractional derivative in time, of order $\alpha H=d^{'}$, corresponds physically to the power law in waiting times. The prime indicates that this exponent is not identical to the memory parameter $d$ in the case of fBm or FARIMA \cite{BurneckiEA2008}. $d^{'}$ runs from $0$ to $1$ and is, for example, the same as the temporal exponent defined by Brockmann et al (their ``$\alpha$''; our $\alpha$ is their ``$\beta$''). In all the following cases $D$ is no longer the Brownian diffusion constant but simply ensures dimensional correctness in a given equation. Note that we do not include the term describing the power law decay of the initial value here or in subsequent CTRW equations (it is retained and discussed in \cite{MetzlerKlafter2000}; see their eqn.~40).
Another way to relax independence is to introduce global long range dependence, as pioneered by \citet{MandelbrotVanNess1968}. They used a self-affine process with a memory kernel, originally due to Kolmogorov and called by them ``fractional Brownian motion".
Contradictory statements exist in the physics literature concerning the kinetic equation for fBm equivalent to that for ftp. It has sometimes been asserted \citep{ZaslavskyBook,WatkinsEA2005} that the equations are the same, while ftp has sometimes been labelled ``fBm" (c.f. the supplementary material in \cite{BrockmannEA2006}). However the solution of the equation for ftp is now known to be non-Gaussian \cite{MetzlerKlafter2000}, and can be given in terms of Fox functions. Conversely the pdf of fBm is by definition \citep{Mandelbrot1982,McCauley2004} Gaussian, but with a variance which ``stretches" with time as $t^{2H}$. The correct kinetic equation for fBm must thus \cite{Lutz2001} be local in time. It is shown in the stable-process column of the fBm rows of Table 1. Given, to our knowledge, first by \citet{WangLung1990}, it can be seen by trial solution to have a solution of the required form.
The difference between ftp and fBm is striking: although both include temporal correlations, the kinetic equation for the ftp is non-local in time while that for fBm is local. This distinction disappears when we go to a Langevin description, where both processes explicitly require fractional derivatives \cite{Lutz2001}. We are grateful to our referee for pointing out that the kinetic equation for fBm also corresponds to a transformation of time to $t^{2H}$ in the ordinary diffusion equation for BWBm, which we may contrast with the fractional derivative in time in the kinetic equation for ftp. We remark that if one rescales BWBm with time, the resulting increments would not be stationary, whereas fBm with the same kinetic equation has stationary increments. This illustrates the point that fBm shares its kinetic equation with several other stochastic processes; a full specification of the process thus requires more than the kinetic equation.
\subsection{Combining memory with infinite variance: ``ambivalence" vs. lfsm.}
\subsubsection{``ambivalent processes" and the fully fractional continuous time random walk}
Questions similar to those in the previous section have been asked in the physics literature about the natural generalisation of the fractional time process to allow for both L\'{e}vy distributions of jump lengths and power-law distributed waiting times. The resulting fully fractional kinetic equation is fractional in both space and time:
\begin{equation}
\frac {\partial^{\alpha H}}{\partial t^{\alpha H} } P_{ap}(x,t) = D \frac{\partial^{\alpha}}{\partial |x|^{\alpha}} P_{ap}(x,t)
\end{equation}
and was used by \citet{BrockmannEA2006} to exemplify the ``ambivalent process". Analogously to the ftp, the solution for this process is known \citep{Kolokoltsov} not to be a L\'{e}vy-stable (or stretched L\'{e}vy-stable) distribution but rather a convolution of such distributions.
\subsubsection{A self-similar stable alternative to the a.p.: Linear fractional stable motion}
Analogously to the generalisation of the ftp to the ambivalent process, there are several $H$-self-similar symmetric L\'{e}vy $\alpha$-stable processes, described in \cite{WeronEA2005}. We here consider the simplest one, linear fractional stable motion (lfsm), which generalises fractional Brownian motion to the infinite variance case. We emphasise that lfsm is, for example, not the fractional L\'{e}vy motion referred to by \citep{Huillet1999}. We can describe lfsm through a stochastic integral:
\begin{equation}
S_{H}(t|\alpha,b_1,b_2)=\int_{-\infty}^{\infty} K_{H,\alpha}(t-s) Z_{\alpha}(ds)
\end{equation}
where the memory kernel $K_{H,\alpha}$ is given by
\begin{eqnarray}
K_{H,\alpha} & = & b_1 \left[ (t-s)^{H-1/\alpha}_{+} - (-s)^{H-1/\alpha}_{+} \right]\nonumber \\
& + & b_2 \left[ (t-s)^{H-1/\alpha}_{-} - (-s)^{H-1/\alpha}_{-} \right]
\citet{BurneckiEA1997} showed how mixed linear fractional stable motion can be obtained from $Z_{\alpha}(t)$ using the Lamperti transformation \cite{Lamperti1962,EmbrechtsMaejima2002}, a more general result which enables any self-similar process to be obtained from its corresponding stationary stochastic process. We are concerned here, however, simply with obtaining the kinetic equation. This can be found by direct differentiation of the characteristic function with respect to time (c.f. \cite{PaulBaschnagel1999}).
As with the simpler stable processes the pdf $p_{lfsm}$ of lfsm can be expressed via the Fourier transform of the characteristic function (e.g. \cite{SamTaqBook,LaskinEA2002})
\begin{equation}
p_{lfsm}=\int e^{ikx} \exp (-\bar{\sigma} |k|^{\alpha} t^{\alpha H})\, dk \label{LFSM}
\end{equation}
We see that the characteristic function $\hat{p}= \exp (-\bar{\sigma} |k|^{\alpha} t^{\alpha H})$
generalises the oLm case. Because $\alpha$ is no longer equal to $1/H$, the effective width parameter now grows like $t^{\alpha H}$. The characteristic function has the correct fBm limit: when $\alpha=2$, at any given $t$ it is a Gaussian whose width grows as $t^{2H}$. We can see that lfsm is a general stable self-affine process by taking $k'=k t^H$, which gives
$p_{lfsm}=t^{-H} \phi_{\alpha}(x/t^H)$, a stable distribution of index $\alpha$ with a prefactor ensuring $H$-selfsimilarity in time \cite{KrishnamurthyEA2000}.
Direct differentiation of this pdf gives
\begin{equation}
\frac{\partial}{\partial t} p_{lfsm} = - \alpha H \bar{\sigma} t^{\alpha H -1} \int_{-\infty}^{\infty} e^{ikx} |k|^{\alpha} \exp (-\bar{\sigma} |k|^{\alpha} t^{\alpha H})\, dk
\end{equation}
which, absorbing the constant $\bar{\sigma}$ and factors of $2\pi$ into $D$, can be recognised as
\begin{equation}
\frac{\partial }{\partial t} p_{lfsm} = \alpha H t^{\alpha H-1} D
\frac{\partial^{\alpha} }{\partial |x|^{\alpha}} p_{lfsm} \label{LFSM_KE}
\end{equation}
using the above definition of the Riesz derivative. Surprisingly, the kinetic equation of lfsm seems not to have been given explicitly before in either the physics or mathematics literature. \citet{KrishnamurthyEA2000} quoted an equation of motion for integrated activity in lfsm; this has a more complicated structure, presumably due to additional memory effects arising from the integration process.
We note that $\alpha H t^{\alpha H-1}=\partial_t t^{\alpha H}$. This factor arises because (\ref{LFSM_KE}) could also be obtained from the space fractional diffusion equation (\ref{OLM_KE}) by a simple transformation of the time variable: $t$ is replaced by $t^{\alpha H}$. The appropriate limits may be easily checked; in particular $\alpha=2$ gives the kinetic equation of fBm.
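The derivation can be checked mechanically in Fourier space, where the Riesz derivative acts as multiplication by $-|k|^{\alpha}$: a finite-difference time derivative of $\hat{p}(k,t)=\exp(-\bar{\sigma}|k|^{\alpha}t^{\alpha H})$ should match the right-hand side of the kinetic equation. A sketch (the parameter values below are arbitrary, chosen only for illustration):

```python
import math

def p_hat(k, t, alpha, H, sigma=1.0):
    # characteristic function of lfsm: exp(-sigma |k|^alpha t^{alpha H})
    return math.exp(-sigma * abs(k) ** alpha * t ** (alpha * H))

alpha, H, k, t, h = 1.6, 0.7, 0.9, 2.0, 1e-5
# central-difference time derivative of the characteristic function
lhs = (p_hat(k, t + h, alpha, H) - p_hat(k, t - h, alpha, H)) / (2.0 * h)
# kinetic equation in Fourier space:
# d/dt p_hat = -alpha*H * t^{alpha H - 1} * |k|^alpha * p_hat
rhs = -alpha * H * t ** (alpha * H - 1.0) * abs(k) ** alpha * p_hat(k, t, alpha, H)
```

Setting $\alpha=2$ in the same check reproduces the fBm limit discussed above.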
We also remark that lfsm should be a special case of the nonlinear shot noise process studied by \citet{EliazarKlafter2006} which may allow further generalisation of the kinetic equation we have presented.
\section{lfsm as a model of intermittent bursts}
Intermittency is a frequently observed property of complex systems, and can be studied within several paradigms. One such paradigm, of continuing interest, has been Bak et al.'s self-organized criticality (SOC), a key postulate of which is that slowly driven, interaction-dominated, thresholded dynamical systems will establish long-ranged correlations via ``avalanches" of spatiotemporal activity. The avalanches are found to obey power laws in size and duration. In consequence, many papers have sought to measure ``bursts" of activity in natural time series, most typically by means of a fixed threshold. The duration $\tau$ and size $s$ of a burst are then respectively defined as the interval between the $i$-th upward crossing time ($t_i$) and the next downward crossing time ($t_{i+1}$) of the threshold, and the integrated area above the threshold between these times.
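The burst extraction just described reduces to a few lines of code. The sketch below is our own illustration of the definition, not the exact implementation used for the figures:

```python
def bursts(series, threshold):
    """Return (duration, size) for each excursion of `series` above `threshold`:
    duration is the number of samples between an upward crossing and the next
    downward crossing, size is the accumulated area above the threshold."""
    out, start, area = [], None, 0.0
    for t, y in enumerate(series):
        if y > threshold:
            if start is None:          # upward crossing t_i
                start = t
            area += y - threshold
        elif start is not None:        # downward crossing t_{i+1}
            out.append((t - start, area))
            start, area = None, 0.0
    if start is not None:              # burst still open at end of record
        out.append((len(series) - start, area))
    return out

# toy record with two bursts above threshold 1.0
record = [0.0, 2.0, 3.0, 0.5, 1.5, 0.2]
result = bursts(record, 1.0)           # → [(2, 3.0), (1, 0.5)]
```

Applied to a simulated lfsm path, the first and second elements of each pair supply the samples for the duration and size histograms respectively.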
The search for SOC in the magnetosphere and solar wind has used this approach among others (e.g. \cite{FreemanEA2000,Watkins2002}). The similarity of observed burst size and duration distributions in solar wind and magnetospheric quantities to those from models of turbulence and SOC led \citet{FreemanEA2000} and \citet{Watkins2002} to conjecture that, at least qualitatively, such behaviour might simply be an artifact of a self similar (or multifractal) time series, rather than unique to a given mechanism. In particular this was in distinction to the idea that one could use the presence or absence of power laws in waiting times defined similarly to the above as a distinguishing feature between SOC and turbulence. One of the present authors has thus elsewhere \cite{Watkins2002} advocated the testing of avalanche diagnostics using controllable self similar models. Similar points have been made subsequently by \citet{CarboneEA2004} for fBm and \citet{Bartolozzi2007} for the multifractal random walk.
In this section we thus present a preliminary investigation of the ability of lfsm to qualitatively mimic SOC signatures in data. As the kinetic equation we have derived is not unique to lfsm and is insufficient to specify all its properties, in what follows we have used a numerical simulation of the process $S_H$ using the algorithm of \citet{StoevTaqqu2004}, together with analytic arguments after those of \citet{KearneyMajumdar2005}, to predict the scaling of the tail of the pdf of burst size $s$ and duration $\tau$ in lfsm for large $s,\tau$. Rather than estimate the exponents from plots of the pdf or empirical complementary cdf, we have elected to use maximum likelihood estimation, as implemented in the algorithms of A. Clauset and co-workers (http://www.santafe.edu/~aaronc/powerlaws/). Detailed comparison with measured experimental exponents is not attempted at this stage and will be the subject of future work.
\begin{figure}
\label{figure1}
\begin{center}
\includegraphics*[width=9cm,angle=0]{figure1.eps}
\end{center}
\caption{Dependence of exponent $\beta$ for the pdf $p(\tau)$ of a burst of duration $\tau$ on $H$ for simulated lfsm in the fBm, $\alpha=2$, limit. The average of 7 trials was taken.}
\end{figure}
The expected behaviour of burst duration and sizes for lfsm has, as far as we know, not been investigated.
Dealing first with durations, we make use of the fact that for a fractal curve of self-similarity exponent $H$ and dimension $D=2-H$, the points $\{ t_i \}$ have dimension $1-H$. In consequence the probability of crossings over a time interval $\tau$ goes as $\tau^{1-H}$, giving an inter-event probability scaling like $\tau^{-(1-H)}$. The pdf for inter-event intervals in the isoset thus scales as:
\begin{equation}
p(\tau) \sim \tau^{-\beta} \label{pdf_beta}
\end{equation}
where
\begin{equation}
\beta=2-H \label{exponent_beta}
\end{equation}
giving the same exponent $3/2$ as for the first passage distribution in the Brownian case.
For symmetric processes this scaling is retained by the subset of the isoset that corresponds to burst ``durations". We expect this to be independent of the detailed nature of the model and so should, in particular, also apply to lfsm.
To establish the behaviour of burst ``sizes" we note first that \citet{KearneyMajumdar2005} considered the zero-drift Wiener Brownian motion (BWBm) case. Rather than their full analytic treatment, we recap their heuristic argument for a burst size (area) $A$ defined using the first-passage time $t_f$. This may then be adapted to burst sizes defined using isosets, and thence to lfsm. They first noted that for BWBm the instantaneous value of the random walk $y(t)\sim t^{1/2}$ for large $t$. Then, defining $A$ by
\begin{equation}
A=\int_{t_i}^{t_f} y(t') dt'
\end{equation}
the integration implies that large $A$ scales as $t_f^{3/2}$. Simple inversion of this expression implies that $t_f$ must scale as $t_f \sim A^{2/3}$. We independently have the standard result for the first passage time of BWBm:
$P(t_f) \sim t_f^{-3/2}$. To get $P(t_f)$ as a function of $A$, i.e. $P(t_f(A))$, one inserts the expression for $t_f$ as a function of $A$ into the above equation and, in addition, includes a Jacobian $|dt_f/dA|$. After these manipulations $P(A)\sim A^{-4/3}$ \cite{KearneyMajumdar2005}.
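Written out, the change of variables behind this result is a one-line computation (our explicit restatement of the step just described):
\[
P(A) = P(t_f)\left|\frac{dt_f}{dA}\right| \sim \left(A^{2/3}\right)^{-3/2} A^{-1/3} = A^{-1}\,A^{-1/3} = A^{-4/3}.
\]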
In the zero-drift but non-Brownian case we will still argue that $y(t)\sim t^{H}$ for large $t$. As our application uses the above-mentioned isoset-based burst size $s$ rather than one based on first passage times, we define
$s$ by:
\begin{equation}
s=\int_{t_i}^{t_{i+1}} y(t') dt'
\end{equation}
The rest of the argument goes as before but using (\ref{pdf_beta}). We find:
\begin{equation}
P(s) \sim s^{\gamma}\label{pdf_gamma}
\end{equation}
where
\begin{equation}
\gamma=-2/(1+H)\label{exponent_gamma}
\end{equation}
which we can check in the Brownian case where $H=1/2$ to retrieve $P(s) \sim s^{-4/3}$.
The same exponents, $\beta$ and $\gamma$, but with the bursts defined using a DFA-like moving average rather than a fixed threshold, were earlier investigated, for the fBm case only, by \citet{CarboneEA2004}. The format of our figures for the fBm and lfsm cases has been chosen to allow comparison with theirs. They found numerically the same dependences of $\beta$ and $\gamma$ on $H$ as we have in equations (\ref{exponent_beta}) and (\ref{exponent_gamma}) above, which is intuitively reasonable with hindsight because the choice of fixed or running threshold should not change the asymptotic scaling behaviour. For a fixed threshold, the burst size and duration exponents for fBm have also very recently been presented by \cite{Rypdal2008}.
Our numerics confirm that, using the fixed threshold definition, the expressions for $\beta$ and $\gamma$ also describe fBm well, although the scatter, from a single trial for each value of $\beta$ and $\gamma$ shown in Figures 3 and 4, seems relatively high. We have reduced the scatter in Figures 1 and 2 by plotting the average of the exponents over a small number of trials (here 7). The assumption used in our heuristic derivation above, that burst size $s$ grows with duration $\tau$, can be seen to be reasonable for the fixed threshold, fBm case in Figure 5.
Perhaps more surprisingly, the expressions also seem to hold reasonably well when the stability exponent is reduced, first to 1.8 (Figures 6 and 7) and then to 1.6 (Figures 8 and 9). Again we note that these are averages of 7 trials in each case. By the $\alpha=1$ case presented in Figures 10 and 11, however, the expressions can be seen to fail. In this parameter regime, for any given $H$, they are seen to consistently underestimate both burst exponents. It has been suggested to us that this could be because $y \sim t^H$ ceases to be a good estimate of characteristic displacement when the increments of the walk are very heavy tailed (S. Majumdar, Personal Communication, 2006), but we have so far been unable to find a suitable alternative expression.
\begin{figure}
\label{figure2}
\begin{center}
\includegraphics*[width=9cm,angle=0]{figure2.eps}
\end{center}
\caption{Dependence of exponent $\gamma$ for the pdf $p(s)$ of a burst of size $s$ on $H$ for simulated lfsm in the fBm limit. Again an average of 7 trials was taken.}
\end{figure}
\begin{figure}
\label{figure3}
\begin{center}
\includegraphics*[width=9cm,angle=0]{figure3.eps}
\end{center}
\caption{As Figure 1, dependence of duration exponent $\beta$ on $H$, but 1 trial only.}
\end{figure}
\begin{figure}
\label{figure4}
\begin{center}
\includegraphics*[width=9cm,angle=0]{figure4.eps}
\end{center}
\caption{As Figure 2, dependence of size exponent $\gamma$ on $H$, again 1 trial only.}
\end{figure}
\begin{figure}
\label{figure5}
\begin{center}
\includegraphics*[width=9cm,angle=0]{figure5.eps}
\end{center}
\caption{Dependence of exponent $\psi$ on $H$ in the fBm case. $\psi$ captures the growth of burst size $s$ with duration $\tau$.}
\end{figure}
\begin{figure}
\label{figure6}
\begin{center}
\includegraphics*[width=9cm,angle=0]{figure6.eps}
\end{center}
\caption{As Figure 1 (burst duration exponent $\beta$ vs $H$, 7 trials), but for $\alpha=1.8$.}
\end{figure}
\begin{figure}
\label{figure7}
\begin{center}
\includegraphics*[width=9cm,angle=0]{figure7.eps}
\end{center}
\caption{As Figure 2 (burst size exponent $\gamma$ vs $H$, 7 trials), but for $\alpha=1.8$.}
\end{figure}
\begin{figure}
\label{figure8}
\begin{center}
\includegraphics*[width=9cm,angle=0]{figure8.eps}
\end{center}
\caption{As Figure 1 (burst duration exponent $\beta$ vs $H$, 7 trials), but for $\alpha=1.6$.}
\end{figure}
\begin{figure}
\label{figure9}
\begin{center}
\includegraphics*[width=9cm,angle=0]{figure9.eps}
\end{center}
\caption{As Figure 2 (burst size exponent $\gamma$ vs $H$, 7 trials), but for $\alpha=1.6$.}
\end{figure}
\begin{figure}
\label{figure10}
\begin{center}
\includegraphics*[width=9cm,angle=0]{figure10.eps}
\end{center}
\caption{As Figure 1 (burst duration exponent $\beta$ vs $H$, 7 trials), but for $\alpha=1$.}
\end{figure}
\begin{figure}
\label{figure11}
\begin{center}
\includegraphics*[width=9cm,angle=0]{figure11.eps}
\end{center}
\caption{As Figure 2 (burst size exponent $\gamma$ vs $H$, 7 trials), but for $\alpha=1$.}
\end{figure}
\section{Conclusions}
In this paper we studied the question of whether one would expect the same equation to describe a time series as an anomalous diffusive process. A codification of diffusion-like equations showed that a kinetic equation was ``missing" from the literature; the one corresponding to lfsm. We gave a simple derivation for it by direct differentiation of the well-known characteristic function of lfsm. We then made a preliminary exploration of how lfsm could model the ``burst" sizes and durations previously measured on magnetospheric and solar wind time series. We made simple scaling arguments building on a result of \citet{KearneyMajumdar2005} to show how lfsm could be one candidate explanation for such ``apparent SOC" behaviour and made preliminary comparison with numerics. These arguments fail when the tails of the pdf of increments become very heavy, and further work is needed on this topic.
In future we also plan to consider other stochastic processes, both FARIMA (c.f. \citep{BurneckiEA2008}) and nonlinear shot noises, to allow generalisation of the above initial investigations into burst size and duration. The prevalence of natural processes showing heavy tails and/or long ranged persistence suggests a relevance well beyond our initial area of application in space physics.
We thank in particular Alex Weron, Kristof Burnecki and Marcin Magdziarz for many helpful comments on earlier versions of the paper. NWW is also grateful to Mikko Alava, Robin Ball, Tom Chang, Aleksei Chechkin, Joern Davidsen, Mervyn Freeman, Bogdan Hnat, Mike Kearney, Khurom Kiyani, Yossi Klafter, Vassili Kolokoltsev, Eric Lutz, Satya Majumdar, Martin Rypdal, Dimitri Vyushin and Lev Zelenyi for valuable interactions. NWW acknowledges the stimulating environments of the Newton Institute programme PDS03 and the KITP programme ``The Physics of Climate Change". Research was carried out in part at Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for U.S. DOE under contract number DE-AC05-00OR22725. This research was supported in part by the EPSRC, STFC and NSF under grant number NSF PHY05-51164.
In digital and especially wireless communications, the channels introduce distortions that can hamper accurate signal recovery at receivers. In particular, single-input single-output (SISO) dispersive channels can incur inter-symbol interference (ISI). Besides ISI, the signals transmitted in multi-input multi-output (MIMO) systems suffer from co-channel interference (CCI) from other data streams. Both of these require channel equalization at the receiver to avoid errors in data detection \cite{qureshi1985adaptive,proakisdigital}. Among various channel equalization methods, linear equalization admits the simplest form by applying feed-forward linear filters to compensate the channel distortions. Typically, the transmitters need to insert known pilot symbols within data frames for channel estimation or equalizer training. However, pilot symbols are not available in some scenarios, and training decreases the overall system throughput. By eliminating training data and maximizing channel capacity for data-bearing transmissions, blind channel equalization presents a bandwidth-efficient solution to the distortion compensation.
There has been a plethora of research papers on blind channel equalization and related topics, e.g., blind source separation (BSS); cf. \cite{ding2001blind}. A vast amount of early work was done under the framework of statistical analysis. Specifically, many blind equalization schemes are achieved by exploiting second-order \cite{tong1994blind,abed1997prediction} or higher-order statistics (HOS) \cite{mendel1991tutorial,chi2003batch}. These statistics-based algorithms, such as the well-known constant modulus algorithm (CMA) \cite{johnson1998blind} and the super-exponential algorithm (SEA) \cite{shalvi1993super}, directly minimize special non-convex cost functions, and thus tend to exhibit local convergence \cite{ding1992whereabouts,li1995convergence}. Even though HOS algorithms may provide satisfactory performance in certain cases, a relatively large number of samples is needed due to the nature of higher-order statistics. This drawback limits their applications when the environment is fast time-varying.
As a recent advancement in the field of optimization, many non-convex and NP-hard quadratically constrained quadratic programs (QCQPs) can be reformulated as semidefinite programs (SDPs) by relaxing the rank-1 constraint, which leads to the so-called semidefinite relaxation (SDR) \cite{luo2010semidefinite,palomar2010convex}.
To tackle the issues of local convergence and large sample requirements, this SDR technique is widely used in detection and equalization, since it can generate a provably approximately optimal solution in randomized polynomial time.
SDR MIMO detection for BPSK and QPSK shows promising performance \cite{wang2018non,tan2001application,wang2018iterative} and has been extended to other constellations \cite{ma2004semidefinite,mobasher2007near}. Specifically, blind channel equalization, blind source separation and blind MIMO detection are treated in \cite{li2001blind,li2003blind,ma2006blind}, respectively. In \cite{li2001blind}, the authors take advantage of the minimum mean square error (MMSE) criterion to formulate the blind channel equalization problem as a quadratic optimization with binary constraints. Later on, this idea was extended to the BSS problem in \cite{li2003blind} by exploiting the known input alphabets. On another front, efficient high-performance implementations for blind maximum-likelihood (ML) detection of orthogonal space-time block codes (OSTBC) are investigated in \cite{ma2006blind}.
In this work, we intend to utilize the \emph{constant modulus} (CM) criterion. Scanning the literature, we note that a similar work by Mariere \textit{et al.} converts the original CM cost function into an SDP by equating the corresponding polynomial coefficients \cite{mariere2003blind}. The resulting SDP has high notational and computational complexity, and requires an alternating projection algorithm as post-processing to compute the final equalizer vector. To deal with the complexity, we modify the original CM objective function by changing the $\ell_2$ norm into the $\ell_1$ norm, which leads to a substantially simpler form of SDP. Moreover, in the blind equalization community, it is generally acknowledged that scalar multiplicative and phase rotational ambiguities are inherent in the equalized symbols. Furthermore, blind algorithms may recover a different data stream in MIMO detection, even when prior information about the stream of interest is provided \cite{chi2003batch}.
The aforementioned algorithms, including both HOS-based and optimization-based methods, however, do not address these critical issues.
The scalar multiplicative ambiguity is relatively easy to resolve by rescaling the power of the equalizer output, whereas the phase ambiguity is quite challenging, since almost all blind cost functions are insensitive to phase rotations. The usual way of fixing the phase is to utilize a reference symbol \cite{zia2010linear}, which essentially also reduces the information data rate.
Since most current wireless systems already employ forward error correction (FEC) codes, we plan to exploit the information embedded in the FEC code (LDPC code in our case). One noticeable property of \emph{asymmetric} LDPC code is that the negation of any valid codeword does not belong to the code \cite{scherb2003phase,scherb2005phase}.
To take advantage of the code information, we will use the relaxed code constraints in real/complex domain \cite{feldman2005using}.
Indeed, this set of code constraints has been widely used in our previous works:
space-time code is concatenated with LDPC code in \cite{wang2014joint,wang2015joint};
partial channel information is treated in \cite{wang2015diversity,wang2016diversity};
multi-user scenarios with different channel codes or different interleaving patterns are handled in \cite{wang2016robust,wang2016diversity};
an SDR-based MIMO detector that can approach ML performance is proposed in \cite{wang2018integrated}.
In contrast to previous works \cite{wang2017galois}, the asymmetry property will be exploited in this work to resolve the phase ambiguity.
\section{SDP Formulations}
Here we consider a transmission that takes the form
\begin{equation}
\mathbf{x}[n] = \mathbf{H} \mathbf{s}[n] + \mathbf{v}[n], \quad 1 \leq n \leq N
\end{equation}
where $\mathbf{H}$ is the (equivalent) communication channel, $\mathbf{s}[n]$ is the transmitted signal, $\mathbf{x}[n]$ is the received signal, and $\mathbf{v}[n]$ is the additive white Gaussian noise (AWGN) at the receiver. These vectors and the matrix are of appropriate sizes. Note that this system model is quite general in the sense that it incorporates spatial multiplexing MIMO, space-time coded MIMO in the \emph{equivalent spatial diversity} model, and SISO transmission over a frequency-selective channel with $\mathbf{H}$ being a Toeplitz matrix.
To begin the algorithm development, we denote the desired equalizer vector by $\mathbf{w}$. In MIMO detection, this equalizer targets a certain stream, and its length equals the number of receive antennas; for SISO ISI equalization, the equalizer length is related to the channel delay spread. Let $y[n]$ be the equalized symbol, that is, $y[n] = \mathbf{w}^H \mathbf{x}[n]$ for the flat-fading MIMO channel and $y[n] = w[n] \ast x[n]$ for the SISO ISI channel.
In the sequel, we only consider the SISO ISI channel for derivation simplicity. Without loss of generality, assume the equalizer has length $(L+1)$. To obtain a compact form for the SISO ISI case, denote $\mathbf{x}_n = [x_n, x_{n-1}, \ldots, x_{n-L}]^T$ and thus $y[n] = \mathbf{w}^H \mathbf{x}_n$.
\subsection{Basic CM-based SDP Formulation}
If perfect equalization is achieved, the sequence $y[n]$ will have the same modulus as the channel input signal $s[n]$. For BPSK or QPSK, the modulus of $y[n]$ is expected to be 1, and consequently a natural formulation of blind CM equalization is
\begin{equation} \label{CM_cost}
\text{min.} \quad \mathbf{J}(\mathbf{w})
= \frac{1}{N} \sum_{n=1}^{N} \left| |y[n]|^2 - 1\right|
= \frac{1}{N} \sum_{n=1}^{N} \left| \mathbf{w}^H \mathbf{X}_n \mathbf{w} - 1 \right|
\end{equation}
where the second equality follows from
\begin{equation}
\vert y[n] \vert ^2 = \mathbf{w}^H \mathbf{x}_n \mathbf{x}_n^H \mathbf{w} = \mathbf{w}^H \mathbf{X}_n \mathbf{w},
\end{equation}
with $\mathbf{X}_n \triangleq \mathbf{x}_n \mathbf{x}_n^H$.
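As a quick numerical sanity check, the identity $|y[n]|^2 = \mathbf{w}^H \mathbf{X}_n \mathbf{w}$ can be verified directly; the equalizer and regressor values below are arbitrary illustrative numbers, not data from any experiment:

```python
# Numerical check of |y[n]|^2 = w^H X_n w with X_n = x_n x_n^H.
# The equalizer w and regressor x_n below are arbitrary illustrative values.

def hermitian_dot(a, b):
    """Inner product a^H b for lists of complex numbers."""
    return sum(ai.conjugate() * bi for ai, bi in zip(a, b))

def quad_form(w, X):
    """Quadratic form w^H X w for a matrix X given as a list of rows."""
    Xw = [sum(X[i][j] * w[j] for j in range(len(w))) for i in range(len(X))]
    return hermitian_dot(w, Xw)

# equalizer of length L + 1 = 3 and one regressor x_n = [x_n, x_{n-1}, x_{n-2}]
w = [0.5 + 0.1j, -0.2j, 0.3 + 0j]
x_n = [1 - 1j, 0.4 + 0.2j, -0.7 + 0j]

y_n = hermitian_dot(w, x_n)                                  # y[n] = w^H x_n
X_n = [[xi * xj.conjugate() for xj in x_n] for xi in x_n]    # X_n = x_n x_n^H
lhs = abs(y_n) ** 2
rhs = quad_form(w, X_n)                                      # w^H X_n w
```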
By introducing auxiliary variables $\tau_n$, we can transform the unconstrained problem (\ref{CM_cost}) into the following constrained problem
\begin{equation} \label{cm_qcqp}
\begin{aligned}
& \underset{\mathbf{w}}{\text{min.}}
& & \frac{1}{N} \sum_{n=1}^N \tau_n \\
& \text{s.t.}
& & \mathbf{w}^H \mathbf{X}_n \mathbf{w} - \tau_n \leq 1, \; 1 \leq n \leq N \\
&
& & \mathbf{w}^H \mathbf{X}_n \mathbf{w} + \tau_n \geq 1, \; 1 \leq n \leq N
\end{aligned}
\end{equation}
where $\mathbf{w}$ and the $\tau_n$'s are optimization variables. However, notice that the second set of quadratic constraints does not define a convex set, and therefore this QCQP is non-convex. By letting $\mathbf{W} = \mathbf{w} \mathbf{w}^H$ and dropping the non-convex rank-1 requirement on $\mathbf{W}$, this QCQP can be relaxed into the following convex SDP
\begin{equation} \label{cm_sdp}
\begin{aligned}
& \underset{\mathbf{W}}{\text{min.}}
& & \frac{1}{N} \sum_{n=1}^N \tau_n \\
& \text{s.t.}
& & \text{tr} (\mathbf{X}_n \mathbf{W}) - \tau_n \leq 1, \; 1 \leq n \leq N \\
&
& & \text{tr} (\mathbf{X}_n \mathbf{W}) + \tau_n \geq 1, \; 1 \leq n \leq N \\
&
& & \mathbf{W} \succeq 0
\end{aligned}
\end{equation}
where $\mathbf{W}$ and the $\tau_n$'s are optimization variables. Upon obtaining the optimal solution $\mathbf{W}^*$, a rank-1 approximation or randomization can be used to recover the desired equalizer $\mathbf{w}^*$ within $\epsilon$-accuracy \cite{ye1999approximating}.
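One common way to carry out the rank-1 approximation step is to take the leading eigenvector of $\mathbf{W}^*$ scaled by the square root of the leading eigenvalue. A minimal sketch via power iteration follows; the $2\times 2$ matrix is a made-up stand-in for a solver output, not an actual SDP solution:

```python
# Rank-1 approximation of a Hermitian PSD matrix W* by power iteration:
# return w = sqrt(lambda_max) * v_max so that w w^H approximates W*.
# W_star below is a fabricated example (exactly rank 1, W = u u^H, u = [2, 1]).

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def norm(v):
    return sum(abs(x) ** 2 for x in v) ** 0.5

def rank1_approx(W, iters=200):
    """Return w such that w w^H is the best rank-1 approximation of W."""
    v = [1.0] * len(W)                      # simple deterministic start
    for _ in range(iters):
        v = mat_vec(W, v)
        n = norm(v)
        v = [x / n for x in v]
    # Rayleigh quotient gives the leading eigenvalue
    lam = sum(vi.conjugate() * wi for vi, wi in zip(v, mat_vec(W, v))).real
    return [lam ** 0.5 * x for x in v]

W_star = [[4.0, 2.0], [2.0, 1.0]]
w_hat = rank1_approx(W_star)                # expected close to [2, 1]
```

In practice the recovered $\mathbf{w}^*$ is only defined up to a phase, which is exactly the ambiguity the code constraints below are meant to resolve.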
\subsection{Integration of LDPC Code Constraints}
The linear programming decoding proposed by Feldman \textit{et al.} opens a gateway for the unification of the detection and decoding processes \cite{feldman2005using}.
Specifically, consider an LDPC code $\mathcal{C}$. Let $\mathcal{M}$ and $\mathcal{N}$ be the sets of check nodes and variable nodes of the parity check matrix $\mathbf{H}$ (not to be confused with the channel matrix), respectively.
Denote the set of neighbors of the $m$-th check node as $\mathcal{N}_m$. For a subset $\mathcal{F} \subseteq \mathcal{N}_m$ with odd cardinality $|\mathcal{F}|$, the explicit constraints on the coded bits $f[n]$ are given by the following parity check inequalities
\begin{equation} \label{eq:parity_ineq}
\sum_{ n \in \mathcal{F} } f[n] - \sum_{ n \in (\mathcal{N}_m \backslash \mathcal{F})} f[n] \leq |\mathcal{F}| - 1, \quad \forall m \in \mathcal{M}, \mathcal{F} \subseteq \mathcal{N}_m, |\mathcal{F}| \, \text{odd}
\end{equation}
and box constraints
\begin{equation} \label{eq:box_ineq}
0 \leq f[n] \leq 1, \quad \forall n \in \mathcal{N}.
\end{equation}
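For integral (0/1) assignments, the parity inequalities over all odd-cardinality subsets $\mathcal{F}$ are satisfied exactly when the parity check itself is satisfied (an even number of ones among the neighbors). The small sanity check below enumerates this for a hypothetical 3-neighbor check node:

```python
# Check that the parity inequalities, taken over all odd-|F| subsets of a
# check node's neighbors, hold for a binary assignment iff the check's
# parity is even. The 3-neighbor check node here is a made-up toy example.
from itertools import combinations, product

def parity_ineqs_hold(f, neighbors):
    """f: dict variable index -> bit; neighbors: indices in N_m."""
    for size in range(1, len(neighbors) + 1, 2):        # odd |F| only
        for F in combinations(neighbors, size):
            rest = [n for n in neighbors if n not in F]
            if sum(f[n] for n in F) - sum(f[n] for n in rest) > len(F) - 1:
                return False
    return True

neighbors = [0, 1, 2]
for bits in product([0, 1], repeat=3):
    f = dict(zip(neighbors, bits))
    assert parity_ineqs_hold(f, neighbors) == (sum(bits) % 2 == 0)
```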
It is worthwhile to note that the negation of a valid codeword may still be valid for a generic LDPC code $\mathcal{C}$, and thus the above code constraints alone cannot fix the phase rotation. However, one special class of LDPC codes with the asymmetry property can prevent such bad configurations. Its definition is stated in \cite{scherb2003phase} and restated here.
\begin{definition}
A channel code is called asymmetric if the negation of an arbitrary valid codeword is not a valid codeword, i.e. $\mathbf{c} \in \mathcal{C} \Rightarrow \overline{\mathbf{c}} \notin \mathcal{C}$.
\end{definition}
Obviously, a code is asymmetric if some parity check node has an odd number of neighbors, that is, $| \mathcal{N}_m |$ is odd for some $m \in \mathcal{M}$. This observation is formalized in the following theorem \cite{scherb2005phase}.
\begin{theorem}
If there exists a row, or a linear combination of rows, of $\mathbf{H}$ in which the number of 1's is odd, then the code is asymmetric.
\end{theorem}
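The condition is easy to test in practice. An observation (ours, not from the cited papers): since $\overline{\mathbf{c}} = \mathbf{c} \oplus \mathbf{1}$, the code is asymmetric iff the all-ones word fails some parity check, i.e. iff some row of $\mathbf{H}$ has odd weight; and because Hamming-weight parity adds under XOR, an odd-weight row combination exists iff an odd-weight row exists. A toy check with fabricated parity check matrices:

```python
# Test the asymmetry condition: the code is asymmetric iff some row of the
# parity check matrix H has odd weight (equivalently, 1...1 is not a
# codeword). The two small matrices below are made-up examples.

def is_asymmetric(H):
    """H: parity check matrix as a list of 0/1 rows."""
    return any(sum(row) % 2 == 1 for row in H)

H_odd = [[1, 1, 1, 0],      # row weight 3 (odd) -> asymmetric
         [0, 1, 1, 0]]
H_even = [[1, 1, 0, 0],     # all row weights even -> 1...1 is a codeword
          [0, 0, 1, 1]]
```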
However, to further integrate the LDPC code constraints, we need the variable $\mathbf{w}$ to appear explicitly in the optimization problem. Ideally, we want to impose the constraint $\mathbf{W} = \mathbf{w} \mathbf{w}^H$. Nonetheless, this constraint is not convex. Inspired by \cite{vandenberghe1996semidefinite}, we approximate the exact constraint by the convex constraint $\mathbf{W} \succeq \mathbf{w} \mathbf{w}^H$, which is equivalent to
\begin{equation} \label{eq:psd}
\begin{bmatrix}
\mathbf{W} & \mathbf{w} \\
\mathbf{w}^H & 1
\end{bmatrix}
\succeq 0.
\end{equation}
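In the scalar case this Schur-complement equivalence is easy to see: for a one-dimensional $\mathbf{w}$, the block matrix is PSD exactly when $W \ge w^2$, i.e. $W \ge ww^H$. A toy check (illustrative only):

```python
# Scalar illustration of the Schur-complement equivalence above:
# for 1-D w, [[W, w], [w, 1]] is PSD iff W >= 0 and W - w^2 >= 0,
# i.e. exactly the relaxed constraint W >= w w^H.

def block_psd(W, w):
    # a 2x2 symmetric matrix is PSD iff its diagonal entries and its
    # determinant are nonnegative; here the (2,2) entry is fixed to 1
    return W >= 0 and W * 1 - w * w >= 0
```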
The last step of the integration is to add the squeezing box constraints and the symbol-to-bit mapping constraints. In particular, for BPSK, the two kinds of constraints are
\begin{equation} \label{eq:squeeze}
\left| \mathbf{w}^H \mathbf{x}_n - z[n] \right| \leq t_n, \quad 1 \leq n \leq N
\end{equation}
and
\begin{equation} \label{eq:mapping}
z[n] = 2 f[n] - 1, \quad 1 \leq n \leq N
\end{equation}
where $z[n]$ is a dummy variable representing a point on the data constellation, and $t_n$ is a slack variable lifted into the cost function to squeeze the box. For brevity, the constraints (\ref{eq:psd}), (\ref{eq:squeeze}) and (\ref{eq:mapping}) are categorized as connection constraints.
Putting everything together, the SDP with code constraints for fixing the phase rotation is as follows
\begin{equation} \label{cm_code_sdp}
\begin{aligned}
& \underset{\mathbf{W},\, \mathbf{w},\, \tau_n,\, t_n,\, z[n],\, f[n]}{\text{min.}}
& & \frac{1}{N} \sum_{n=1}^N \tau_n + \sum_{n=1}^N t_n \\
& \text{s.t.}
& & \text{tr} (\mathbf{X}_n \mathbf{W}) - \tau_n \leq 1, \; 1 \leq n \leq N \\
&
& & \text{tr} (\mathbf{X}_n \mathbf{W}) + \tau_n \geq 1, \; 1 \leq n \leq N \\
&
& & [\text{Connection Constraints (\ref{eq:psd}), (\ref{eq:squeeze}) and (\ref{eq:mapping})}] \\
&
& & [\text{LDPC Code Constraints (\ref{eq:parity_ineq}) and (\ref{eq:box_ineq})}]
\end{aligned}
\end{equation}
\section{Summary}
In this letter, we first reformulate the non-convex CM cost function as a convex SDP via rank-1 relaxation.
We then address the inherent phase-rotation issue by integrating LDPC code constraints.
Using the prior information embedded in an asymmetric LDPC code, instead of reference symbols, is a novel and seemingly promising approach.
Remaining work includes extending this formulation to higher-order modulations, which is non-trivial, and exploiting other a priori information in the transmission; preliminary tests are yet to come.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\IEEEPARstart{I}{n} Federated Learning (FL), clients cooperatively train a Machine Learning (ML) model with their decentralized datasets under the coordination of a central server\cite{mcmahan2017communication}. One of the typical settings of FL is cross-silo FL\cite{kairouz2019advances} where a neutral third-party agent acts as the central server and clients are a group of organizations, aiming to jointly train an optimal ML model for their respective use. In this case, these organizations are also the owners of the global model and can utilize the well-trained global model to further process tasks for their own interests.
An optimal global model with high performance requires the organizations in cross-silo FL to collaborate efficiently so as to bring considerable benefits to all participants, which can be regarded as the maximization of the social welfare. In fact, there are many studies on optimizing the social welfare in cross-silo FL by directly improving the model performance\cite{huang2021personalized,wang2021efficient,nandury2021cross,majeed2020cross}, increasing the convergence speed\cite{marfoq2020throughput}, reducing the communication cost\cite{zhang2020batchcrypt}, protecting privacy\cite{heikkila2020differentially,li2021practical,long2021federated} and security\cite{jiang2021flashe}, etc.
However, since every organization in cross-silo FL can obtain the final global model regardless of its contribution, the well-trained model becomes a public good, which is non-excludable and non-rivalrous for all organizations\cite{tang2021incentive}. This leads to selfish behaviors that some organizations may only consider their own interests via inactively participating in local training to obtain the final global model for free or at a lower cost. The spread of this behavior can result in huge loss of the social welfare, and then none of the organizations can get the optimal model, which compromises the long-term stability and sustainability of cross-silo FL.
Most of the existing studies motivate organizations to fully contribute to cross-silo FL by designing an incentive mechanism\cite{tang2021incentive,tang2018multi,zhao2018dynamic,feng2019joint,shao2019multimedia}. However, this requires extra negotiation costs, since organizations need to reach a consensus on the mechanism in advance, and demands additional running costs, since a distributed algorithm runs over all organizations, clearly adding more burden to organizations. Recently, a game-theoretic approach has been applied to investigate participation behaviors in cross-silo FL\cite{zhang2022enabling}, which needs additional information transmission between the server and organizations and thus may cause potential privacy issues.
In this paper, we take a brand-new approach using the Multi-player Multi-action Zero-Determinant (MMZD) strategy\cite{7381622} to maximize the social welfare in cross-silo FL without causing additional costs and transmission for organizations. Moreover, the MMZD Alliance (MMZDA) formed by multiple MMZD players is able to control the maximum social welfare more effectively. Another outstanding advantage of our methods is that they can control the social welfare in any cross-silo FL scenario, no matter what strategies or actions other organizations adopt. In summary, our contributions include the following (a preliminary version of this paper was presented at ICASSP 2022\cite{chen2022social}):
\begin{itemize}
\item We model the interactions among organizations in cross-silo FL as a public goods game for the first time, focusing on the organization's strategy rather than designing an extra mechanism to solve the social welfare maximization problem.
\item We reveal the existence of the social dilemma in cross-silo FL by mathematical proof for the first time, which demonstrates the adverse effect of selfish behaviors in cross-silo FL in the view of game theory. This can be used as a theoretical basis for exploring organizations' behaviors in cross-silo FL.
\item We overcome the social dilemma by employing the MMZD strategy from the perspectives of individual organization and alliance. Specifically, any organization can unilaterally maximize the social welfare, which ensures the social welfare in cross-silo FL at a certain level and maintains the stability of the system.
\item We further study the scenario in which multiple organizations adopt the same MMZD strategy, forming the MMZDA. We have mathematically proved that the maximum social welfare controlled by the MMZDA can get improved. This approach can also extend the application domains of the MMZD.
\item Experiments prove the effectiveness of the MMZD strategy in maximizing the social welfare. The maximum value of social welfare is able to be enlarged by the MMZDA strategy.
\end{itemize}
The rest of the paper is organized as follows. The related work about cross-silo FL is summarized in Section II. In Section III, we formulate the cross-silo FL game to model the interactions among organizations in cross-silo FL, and further discover the social dilemma in the game. We propose a method based on MMZD strategy for individual organization to control the social welfare in Section IV. Section V studies the MMZD strategy employed by multiple organizations, namely MMZDA, and validates that the MMZD alliance can enlarge the maximum value of social welfare compared to the individual MMZD. Simulation results are reported in Section VI, followed by the conclusion in Section VII.
\section{Related Work}
Existing works related to cross-silo FL can be classified into three categories: the optimization of aggregation algorithm, security and privacy protection, and the incentive mechanism design.
Aggregation algorithms are developed to enhance the performance of cross-silo FL. Based on the original FedAvg algorithm\cite{mcmahan2017communication}, various algorithms to improve the convergence speed, accuracy, and security were proposed in cross-silo FL setting. Marfoq et al. introduced practical algorithms to design an averaging policy under a decentralized model for achieving the fastest convergence\cite{marfoq2020throughput}. And Huang et al. proposed FedAMP to overcome non-iid challenges\cite{huang2021personalized}. Zhang et al. reduced the encryption and communication overhead caused by additively homomorphic encryption, which lowered the cost of aggregation as well\cite{zhang2020batchcrypt}.
Security and privacy protection issues also received attention in cross-silo FL. Heikkilä et al. combined additively homomorphic secure summation protocols with differential privacy to guarantee strict privacy for individual data subjects in the cross-silo FL setting\cite{heikkila2020differentially}. Li et al. proposed a brand-new one-shot algorithm that can flexibly achieve differential privacy guarantees\cite{li2020practical}. Jiang et al. designed FLASHE, an optimized homomorphic encryption scheme to meet requirements of semantic security and additive homomorphism in cross-silo FL\cite{jiang2021flashe}. Chu et al. proposed a federated estimation method to accurately estimate the fairness of a model, namely avoiding the bias of data, without infringing the data privacy of any party\cite{chu2021fedfair}.
Many studies used incentive mechanisms to encourage organizations to participate in training since the performance of cross-silo FL is affected by organizations' behaviors. Tang et al. formulated a social welfare maximization problem and proposed an incentive mechanism for cross-silo FL to address the organization heterogeneity and the public goods characteristics\cite{tang2021incentive}. Many incentive mechanisms based on auction\cite{tang2018multi}, contract\cite{zhao2018dynamic,feng2019joint}, and pricing\cite{shao2019multimedia} were proposed for FL. Unfortunately, they cannot be adapted to the cross-silo FL. First of all, most of the existing incentive mechanisms focus on encouraging more clients to participate in FL, instead of improving the performance of the global model from the perspective of social welfare. Secondly, the global model in cross-silo FL has the non-exclusive nature of public goods, leading to potential free-riding behaviors, which is not considered in the existing incentive mechanisms. Last but not least, current incentive mechanisms usually require participants or servers to spend additional computing resources. As a result, complex designs and additional computing costs can bring a burden on the organizations.
Recently, Zhang et al. explored the long-term participation of organizations in cross-silo FL by deploying a game-theoretic approach\cite{zhang2022enabling}, where a strategy and an algorithm were presented for organizations to reduce free riders and increase the amount of local data for global training. However, the proposed algorithm requires extra information interactions between the organizations and the server, which leads to the risk of information leakage.
In light of the above analysis, the following aspects distinguish our work from the existing approaches. First, our study reveals the social dilemma in the cross-silo FL. Second, we are committed to improving the final performance of the FL model, namely the social welfare. Moreover, we adopt the MMZD strategy without additional cost to control the maximum value of social welfare, and we use the MMZD alliance to further expand the ability to control the maximum value of social welfare.
\section{System Model} \label{Modeling}
We consider a cross-silo FL scenario with a set of organizations, denoted as $\mathcal{N}=\{1,2,...,N\}$. All organizations rely on a central server to collaboratively conduct global model training for a specific task, where each of them has their own data for local training. The goal of organizations is to obtain an optimal global model, minimizing the loss based on all datasets. The central server collects the results of local model updates from all organizations, aggregates to obtain the global model, and then distributes it to everyone for the next round of training.
In each round of local training, every organization performs $K$ iterations of model training. We denote the number of global communication rounds for aggregation as $r$. For the current task, the action of organization $i\in\mathcal{N}$, denoted as $y_i\in\{0,1,...,r\}$, represents the number of communication rounds it participates in the task. Then, $\mathbf{y}=(y_1,...,y_i,...,y_N)$ denotes the action vector of all organizations. Here we assume that all organizations in this cross-silo FL may participate in fewer global aggregations due to laziness or selfishness, but they do not carry out malicious attacks, such as model poisoning attacks.
According to the cross-silo FL model, all organizations get the same model in return. Inspired by \cite{tang2021incentive}, we define the revenue of organization $i$ as:
\begin{equation}\label{eq:revenue}
\Phi_i(\mathbf{y})=m_i(\chi_0-\chi(\mathbf{y})),
\end{equation}
where $m_i$ (in dollars per unit of precision function) denotes the unit revenue of organization $i$ by using the returned final model, $\chi_0$ denotes the precision of the untrained model, and $\chi(\mathbf{y})$ denotes the precision of the trained global model after the actions of organizations in the action vector $\mathbf{y}$. Specifically, $\chi(\mathbf{y})$ can be modeled as
\begin{equation}
\chi(\mathbf{y})=\frac{\theta_0}{\theta_1+K \sum_{i\in N} y_i},
\end{equation}
with positive coefficients $\theta_0$ and $\theta_1$\cite{li2019convergence} being derived from the loss function, neural network, and local datasets. In particular, we have $\chi_0=\frac{\theta_0}{\theta_1}$. The revenue of each organization is proportional to the loss reduction achieved by training, i.e., the difference between the untrained-model loss (i.e., $\chi_0$) and the expected loss after $r$ rounds of aggregation (i.e., $\chi(\mathbf{y})$)\cite{tang2021incentive}. As the number of total participation rounds increases, the marginal reduction diminishes.
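The loss model and its diminishing marginal gain can be evaluated directly; all parameter values below ($\theta_0$, $\theta_1$, $K$, $m_i$) are made up for illustration, not taken from the paper:

```python
# Illustrative evaluation of the loss model chi(y) and the revenue Phi_i.
# theta0, theta1, K and m_i are fabricated values for illustration only.
theta0, theta1 = 10.0, 2.0
K, m_i = 5, 1.0

def chi(total_rounds):
    """chi(y) as a function of the total participation sum_i y_i."""
    return theta0 / (theta1 + K * total_rounds)

chi0 = theta0 / theta1            # untrained-model loss, chi at zero rounds
revenue = m_i * (chi0 - chi(4))   # Phi_i when 4 rounds are trained in total
```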
We define the cost of organization $i$ as:
\begin{equation}
\Psi_i(y_i)=C^{i}_p+C^{i}_m.
\end{equation}
The cost is composed of the organization's computation cost $C^{i}_p$ and its communication cost $C^{i}_m$. On the one hand, the computation cost $C^{i}_p=\beta_i K y_i$, where $\beta_i$ is a positive parameter, denoting the computation cost of each iteration in organization $i$'s local training\footnote{As \cite{tran2019federated} shows, $\beta_i=\frac{\alpha_i}{2} f^2_i D_i S_i$, where $\frac{\alpha_i}{2}$ is the effective capacitance coefficient of organization $i$'s computing chipset, $f_i$ denotes the calculation processing capacity, $D_i$ denotes the number of data units, and $S_i$ denotes the number of CPU cycles required by organization $i$ to process one data unit.}. On the other hand, the communication cost $C^{i}_m$ is defined as a parameter since we assume that every organization uploads its updates in each global aggregation round. If
it chooses not to participate in global aggregation, it will submit a zero vector as updates.
Then the utility of organization $i$ is defined as
the difference between its revenue and cost:
\begin{equation}\label{eq:utility}
U^i(\mathbf{y})=\Phi_i(\mathbf{y})-\Psi_i(y_i).
\end{equation}
According to the previous statements, we model the interactions among organizations as a \emph{cross-silo FL game}.
\begin{definition}
(Cross-silo FL game). In the cross-silo FL game, organizations participating in cross-silo FL act as players, where their actions and utilities are $y_i$ and $U^i(\mathbf{y})$, respectively.
\end{definition}
The cross-silo FL game can be iterative since the organizations in cross-silo FL usually cooperate for a long time to finish multiple FL tasks. Each game round in the cross-silo FL game corresponds to a certain FL task. Moreover, the social welfare in the cross-silo FL game is the total utility of all organizations, namely $\sum^N_{i=1} U^i(\mathbf{y})$. In the cross-silo FL game, we find that a social dilemma occurs if $\Phi_i(\mathbf{y})-C^{i}_p<0$, which is summarized below.
\begin{theorem}\label{th:SD}
(Social dilemma). If $\Phi_i(\mathbf{y})-C^{i}_p<0$, there exists a social dilemma in the cross-silo FL game.
\end{theorem}
\begin{proof}
The social dilemma forms when the Nash equilibrium is not the point of maximum social welfare. First, we study the Nash equilibrium of the cross-silo FL game. Referring to (\ref{eq:utility}), we can derive the derivative of $U^i$ as $m_i\frac{K \theta_0}{(\theta_1+K \Sigma y_i)^2}- \beta_i K$. Given $\Phi_i(\mathbf{y})-C^{i}_p<0$, we have
\begin{equation*}
m_i\frac{K y_i\theta_0}{(\theta_1+K y_i)\theta_1}-\beta_i K y_i<0,
\end{equation*}
which leads to
\begin{equation*}
m_i\frac{K \theta_0}{(\theta_1+K \Sigma y_i )^2}<m_i\frac{K\theta_0}{(\theta_1+K y_i)\theta_1}<\beta_i K.
\end{equation*}
Thus, the derivative of $U^i$ is negative, and the utility function decreases monotonically with $y_i$. The Nash equilibrium strategy of each organization is $y_i=0$, so the Nash equilibrium point is $\mathbf{y}^{NE}=(0,0,...,0)$. Note that a natural and necessary premise of FL is that the utility of the well-trained model is positive. Thus, the point $\mathbf{y}^{r}=(y_i=r,i\in\mathcal{N})$ achieves the social welfare
\begin{equation*}
\sum^N_{i=1}U^i(\mathbf{y}^{r})=\sum \Phi_i(\mathbf{y}^{r})-\sum C^i_p-\sum C^i_m>0,
\end{equation*}
which is higher than that at the Nash equilibrium point,
\begin{equation*}
\sum^N_{i=1}U^i(\mathbf{y}^{NE})=-\sum C^i_m<0.
\end{equation*}
So the social dilemma exists if $\Phi_i(\mathbf{y})-C^{i}_p<0$.
\end{proof}
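A concrete numeric instance makes the dilemma tangible; every parameter value below is fabricated for illustration (none comes from the paper), chosen so that the theorem's condition holds:

```python
# Numeric instance of the social dilemma: training alone is unprofitable,
# free-riding beats full participation, yet full participation yields far
# higher social welfare than the all-defect Nash equilibrium.
# All parameter values are made up for illustration.
N, r, K = 10, 1, 1
theta0, theta1 = 100.0, 5.0
m, beta, C_m = 1.0, 5.0, 1.0

def chi(total):
    return theta0 / (theta1 + K * total)

def utility(y_i, others_total):
    revenue = m * (chi(0) - chi(y_i + others_total))
    return revenue - beta * K * y_i - C_m

phi_solo = m * (chi(0) - chi(r))             # revenue if i trains alone
welfare_full = N * utility(r, (N - 1) * r)   # everyone participates fully
welfare_nash = N * utility(0, 0)             # Nash: nobody participates
free_ride = utility(0, (N - 1) * r)          # deviate while others train
```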
The condition $\Phi_i(\mathbf{y})-C^{i}_p<0$ in the above theorem indicates that if any organization $i\in\mathcal{N}$ only trains the local model using its local dataset, the utility is negative. In fact, this condition strengthens organizations' motivation to participate in global training in cross-silo FL game.
Thus, a social dilemma exists in the cross-silo FL game, and a problem follows. If organizations only pursue their own interests and do not participate in the communication rounds, the accuracy of the global model decreases, which in turn lowers the social welfare. It is therefore important to optimize the public good in the cross-silo FL game, i.e., the accuracy of the global model, which is equivalent to maximizing the social welfare.
For reference, we summarize key notations used in the system model in Table I.
\begin{table}[htbp]\label{notation}
\centering
\caption{Key Notations.}
\centering
\begin{tabular}{|c|l|}
\hline
Notation & Meaning \\ \hline
$N$ &The number of organizations \\ \hline
$K$ &The number of local training iterations for every organization\\ \hline
$r$ &The number of global communication rounds for aggregation \\ \hline
$y_i$ & \begin{tabular}[c]{@{}l@{}}The number of communication rounds organization $i$\\ participates in the current task\end{tabular}\\ \hline
$\mathbf{y}$ &The action vector of all organizations\\ \hline
$\Phi_i$ &The revenue of organization $i$\\ \hline
$m_i$ &The unit revenue of organization $i$ by using the final model\\ \hline
$\chi(\mathbf{y})$ &\begin{tabular}[c]{@{}l@{}} The precision of the trained global model with the\\corresponding action vector $\mathbf{y}$\end{tabular}\\ \hline
$\Psi_i$ &The cost of organization $i$\\ \hline
$C^{i}_p$ &The computation cost of organization $i$\\ \hline
$C^{i}_m$ &The communication cost of organization $i$\\ \hline
$\beta_i$ & \begin{tabular}[c]{@{}l@{}}The computation cost of each iteration in organization $i$'s\\ local training\end{tabular}\\ \hline
$U^i(\mathbf{y})$ & \begin{tabular}[c]{@{}l@{}} The utility of organization $i$ with the corresponding\\ action vector $\mathbf{y}$\end{tabular}\\ \hline
\end{tabular}
\end{table}
\section{Social welfare maximization by MMZD}\label{maxwelfare}
According to the analysis above, we can see that the underlying cause of the social dilemma is selfishness, leading to the loss of all organizations, namely low social welfare. Aiming to solve this problem, we resort to the Multi-player Multi-action Zero-Determinant (MMZD) strategy for social welfare maximization in this section. In each game round, any organization can choose an action $y_i\in\{0,1,...,r\}$, so there are $(r+1)^N$ possible outcomes for each game round. We assume that the organizations have one-round memory, since a long-memory player has no advantage over players with shorter memory\cite{7381622}. Fig. \ref{fig:illustration} describes an example of the cross-silo FL game with two organizations and three actions, i.e., $N=2$ and $r=2$, in which all possible outcomes can be denoted as $(y_1,y_2)\in\{(0,0),(0,1),(0,2),(1,0),(1,1),(1,2),(2,0),(2,1),(2,2)\}$. For an arbitrary organization $i\in\mathcal{N}$, its mixed strategy $\textbf{p}^i$ is defined as:
\begin{equation}
\textbf{p}^i=[p^i_{1,0},p^i_{1,1},...,p^i_{1,r},p^i_{2,0},...,p^i_{j,g},...,p^i_{(r+1)^{N},r}]^T,
\end{equation}
where $p^i_{j,g} (j\in\{1,2,...,(r+1)^N\},g\in\{0,1,...,r\})$ represents the probability of organization $i$ choosing action $y_i=g$ in the current game round and other organizations choosing the same actions as $j$-th outcome of the previous game round. As presented in Fig. \ref{fig:illustration}, for the previous outcome $(y_1,y_2)=(0,2)$, the conditional probability of organization 1 to select action $y_1=1$ is $p^1_{3,1}$ and the conditional probability of organization 2 to select action $y_2=0$ is $p^2_{3,0}$. In addition, the corresponding utility vector $\mathbf{u}^i$ is denoted as:
\begin{equation}
\mathbf{u}^i=[u^i_{1,0},u^i_{1,1},...,u^i_{1,r},u^i_{2,0},...,u^i_{j,g},...,u^i_{(r+1)^{N},r}]^T,
\end{equation}
where each utility $u^i_{j,g}$ of organization $i$ choosing action $y_i=g$ in the $j$-th outcome can be calculated by $u^i_{j,g}=U^i(\mathbf{y}^{(j,g)})$, with $\mathbf{y}^{(j,g)}$ denoting the action vector $\mathbf{y}$ corresponding to the $j$-th outcome but $y_i=g$. Fig. \ref{fig:example}(a) presents the strategy vectors and utility vectors based on the example shown in Fig. \ref{fig:illustration}.
\begin{figure}
\centering
\includegraphics[scale=0.7]{MMZDexampleFg1.pdf}
\caption{Illustration of the cross-silo FL game with two organizations and three actions ($N=2$ and $r=2$).}
\label{fig:illustration}
\end{figure}
In the cross-silo FL model, an organization's current move depends only on its last action and the action vector $\mathbf{y}$ in the last game round. We can construct a Markov matrix $\mathbf{M}=[M_{vw}]_{(r+1)^{N} \times (r+1)^{N}}$, with each element $M_{vw}$ denoting the one-step transition probability from state $v$ to $w$. Fig. \ref{fig:example}(b) shows the Markov matrix $\mathbf{M}$ for the case of $r=2$ and $N=2$. For example, the element $p^1_{3,1}p^2_{3,0}$ is at the 3rd row and 4th column in Fig. \ref{fig:example}, which represents the probability of transitioning from the 3rd outcome $(0,2)$ in the previous round to the 4th outcome $(1,0)$ in the current round. Then, we define $\textbf{M}'\equiv \textbf{M}-\textbf{I}$, where $\textbf{I}$ is the identity matrix. Let $\pi$ be the stationary vector of $\textbf{M}$. Since $\pi^T\textbf{M}=\pi^T$, it follows that $\pi^T\textbf{M}'=\mathbf{0}$. According to Cramer's rule, we have
\begin{equation}
Adj(\textbf{M}')\textbf{M}'=\det(\textbf{M}')\textbf{I}=0,
\end{equation}
where $Adj(\textbf{M}')$ denotes the adjugate matrix of $\textbf{M}'$. Thus, every row of $Adj(\textbf{M}')$ is proportional to $\pi$. Hence, for any vector $\textbf{a}=(a_1,a_2,\dots,a_{(r+1)^N})^T$, we have $\pi^T\cdot\textbf{a}=\det (\textbf{p}^1,\dots,\textbf{p}^N,\textbf{a})$ according to \cite{press2012iterated}.
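The construction of $\mathbf{M}$ and its stationary vector can be sketched for the $N=2$, $r=2$ example; the one-round-memory strategies below are arbitrary placeholders (organization 1 biased, organization 2 uniform), not ZD strategies:

```python
# Markov matrix for the two-organization, three-action example (N=2, r=2):
# state v is the previous outcome (y1, y2), and M[v][w] is the product of
# the two organizations' conditional action probabilities.
from itertools import product

actions = [0, 1, 2]
states = list(product(actions, actions))     # the 9 outcomes (y1, y2)

# p_i[j][g]: probability that organization i plays g after outcome j
p1 = {j: {0: 0.5, 1: 0.3, 2: 0.2} for j in states}
p2 = {j: {g: 1.0 / 3 for g in actions} for j in states}

# M[v][w]: one-step transition probability from outcome v to outcome w
M = [[p1[v][w[0]] * p2[v][w[1]] for w in states] for v in states]
row_sums = [sum(row) for row in M]

# stationary vector pi with pi^T M = pi^T, by power iteration
pi = [1.0 / len(states)] * len(states)
for _ in range(100):
    pi = [sum(pi[v] * M[v][w] for v in range(len(states)))
          for w in range(len(states))]
```

With these memoryless placeholder strategies all rows of $\mathbf{M}$ coincide, so the stationary vector equals that common row; general one-round-memory strategies make the rows differ while the same construction applies.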
\begin{figure*}
\includegraphics[scale=0.75]{MMZDexample1.pdf}
\caption{The strategy vectors, utility vectors, Markov matrix, and determinant of $\pi^T\cdot\textbf{a}$ after elementary transformations in the cross-silo FL game example with two organizations and three actions shown in Fig. \ref{fig:illustration}. (a) The strategy vectors and the corresponding utility vectors of the cross-silo FL game. (b) The Markov matrix $\mathbf{M}$ of the cross-silo FL game. (c) After several elementary column operations on $\det(\textbf{p}^1,\textbf{p}^2,\textbf{a})$, the dot product of the stationary vector $\pi$ and an arbitrary vector $\textbf{a}=(a_1,a_2,\dots,a_9)^T$ is equal to $\det(\textbf{p}^1,\textbf{p}^2,\textbf{a})$, where the first columns $\hat{\textbf{p}}^1$ is only controlled by organization 1.}
\label{fig:example}
\end{figure*}
In particular, when $\textbf{a}=\textbf{u}^i$, organization $i$'s expected utility in the stationary state is:
\begin{equation}\label{eq:u_a}
E^i=\frac{\pi^T\cdot\textbf{u}^i}{\pi^T\cdot\textbf{1}}=\frac{\det (\textbf{p}^1,\dots,\textbf{p}^N,\textbf{u}^i)}{\det (\textbf{p}^1,\dots,\textbf{p}^N,\mathbf{1})},
\end{equation}
Taking a linear combination of all organizations' expected utilities then yields the following equation:
\begin{equation}\label{eq:zd_equation}
\sum_{x=1}^{N}\alpha_x E^x+\alpha_0=\frac{\det (\textbf{p}^1,\dots,\textbf{p}^N,\sum_{x=1}^{N}\alpha_x \textbf{u}^x+\alpha_0\mathbf{1})}{\det (\textbf{p}^1,\dots,\textbf{p}^N,\mathbf{1})}.
\end{equation}
In the above equation, $\alpha_0$ and $\alpha_x, x\in\mathcal{N},$ are constants for the linear combination. Moreover, after some elementary column operations on $\det(\textbf{p}^1, \textbf{p}^2,\dots,\textbf{p}^N,\mathbf{a})$, there exists a column that is solely controlled by a single organization. Fig. \ref{fig:example}(c) displays the determinant of $\pi^T\cdot\textbf{a}$ based on the example in Fig. \ref{fig:illustration}, in which the first column is solely determined by organization 1. Thus, when organization $i$ chooses a strategy that satisfies
\begin{equation}\label{eq:zd_stratrgy}
\hat{\textbf{p}}^i=\phi(\sum_{x=1}^{N}\alpha_x \mathbf{u}^x+\alpha_0\mathbf{1}),
\end{equation}
where $\phi$ is a non-zero constant and $\hat{\textbf{p}}^i$ is the column under the control of organization $i$, the column corresponding to $\hat{\textbf{p}}^i$ and the last column of $\det (\textbf{p}^1,\dots,\textbf{p}^N,\sum_{x=1}^{N}\alpha_x \textbf{u}^x+\alpha_0\mathbf{1})$ will be proportional, so the determinant vanishes. Hence, (\ref{eq:zd_equation}) reduces to:
\begin{equation}\label{eq:zd_zero}
\sum_{x=1}^{N}\alpha_x E^x+\alpha_0=0.
\end{equation}
We further study the social welfare maximization problem with the help of the MMZD in this circumstance. Take organization $1$ performing the MMZD strategy as an example. According to (\ref{eq:zd_zero}), by setting $\alpha_x=1, x\in\mathcal{N}$, the social welfare can be calculated as $\sum_{x=1}^{N} E^x=-\alpha_0$. Thus, the issue of maximizing the social welfare is equivalent to the following optimization problem:
\begin{equation*}\label{eq:min_gamma}
\begin{split}
&\min \alpha_0,\\
&\,s.t.
\begin{cases}
0\le p^1_{j,g}\le 1,j\in\{1,2,...,(r+1)^N\},g\in\{0,...,r\},\\
\hat{\textbf{p}}^1=\phi(\sum_{x=1}^{N} \mathbf{u}^x+\alpha_0\mathbf{1}),\\
\phi \neq 0.\\
\end{cases}
\end{split}
\end{equation*}
We denote by $u^x_k$, $k\in\{1,2,...,(r+1)^{N+1}\}$, the $k$-th element of $\textbf{u}^x$; then we can solve the above optimization problem by considering the following two cases:
\subsection{$\phi>0$}
To meet the constraint $p^1_{j,g}\ge0$, we can get the lower bound of $\alpha_0$ as follows:
\begin{align*}
& {\alpha_0}_{min}=\max(\Lambda_k),\forall{k}\in\{1,2,...,(r+1)^{N+1}\},\\
& \Lambda_k=
\begin{cases}
-\sum_{x=1}^{N} u^x_k-\frac{1}{\phi}, k=1,2,...,(r+1)^{N-1},\\
-\sum_{x=1}^{N} u^x_k, k=(r+1)^{N-1}+1,...,(r+1)^{N+1}.\\
\end{cases}
\end{align*}
To meet the constraint $p^1_{j,g}\le1$, we can get the upper bound of $\alpha_0$ as follows:
\begin{align*}
& {\alpha_0}_{max}=\min(\Lambda_l),\forall{l}\in\{(r+1)^{N+1}+1,...,2(r+1)^{N+1}\},\\
& \Lambda_l= \Lambda_{k+(r+1)^{N+1}}\\
& =
\begin{cases}
-\sum_{x=1}^{N} u^x_k, &k=1,2,...,(r+1)^{N-1},\\
-\sum_{x=1}^{N} u^x_k+\frac{1}{\phi}, & k=(r+1)^{N-1}+1,...,(r+1)^{N+1}.\\
\end{cases}
\end{align*}
Only if ${\alpha_0}_{min}\le{\alpha_0}_{max}$ can $\alpha_0$ have a feasible solution, which is equivalent to $\max(\Lambda_k)\le \min(\Lambda_l),\forall{k}\in\{1,2,...,(r+1)^{N+1}\},\forall{l}\in\{(r+1)^{N+1}+1,...,2(r+1)^{N+1}\}$. If there exists $\phi>0$ satisfying the above constraint, we can obtain the minimum value of $\alpha_0$ as follows:
\begin{multline}\label{eq:case1_alphamin}
{\alpha_0}_{min}=\max\{-\sum_{x=1}^{N} u^x_1-\frac{1}{\phi},...,-\sum_{x=1}^{N} u^x_{(r+1)^{N-1}}-\frac{1}{\phi},\\
-\sum_{x=1}^{N} u^x_{(r+1)^{N-1}+1},...,-\sum_{x=1}^{N} u^x_{(r+1)^{N+1}}\}.
\end{multline}
\subsection{$\phi<0$}
Similarly, when $p^1_{j,g}\ge0$, we have ${\alpha_0}_{min}=\max(\Lambda_l),\forall{l}\in\{(r+1)^{N+1}+1,...,2(r+1)^{N+1}\}$; considering $p^1_{j,g}\le1$, we have ${\alpha_0}_{max}=\min(\Lambda_k),\forall{k}\in\{1,2,...,(r+1)^{N+1}\}$. In addition, $\alpha_0$ is feasible only when ${\alpha_0}_{min}\le{\alpha_0}_{max}$, i.e., $\max(\Lambda_l)\le \min(\Lambda_k),\forall{k}\in\{1,2,...,(r+1)^{N+1}\},\forall{l}\in\{(r+1)^{N+1}+1,...,2(r+1)^{N+1}\}$. Finally, we can get the following result:
\begin{multline}\label{eq:case2_alphamin}
{\alpha_0}_{min}=\max\{-\sum_{x=1}^{N} u^x_1,...,-\sum_{x=1}^{N} u^x_{(r+1)^{N-1}},\\
-\sum_{x=1}^{N} u^x_{(r+1)^{N-1}+1}+\frac{1}{\phi},...,-\sum_{x=1}^{N} u^x_{(r+1)^{N+1}}+\frac{1}{\phi}\}.
\end{multline}
In summary, by (\ref{eq:case1_alphamin}) and (\ref{eq:case2_alphamin}), organization $1$ can unilaterally set the expected social welfare $\sum_{x=1}^{N} E^x$ with the MMZD strategy $\textbf{p}^1$ meeting $\hat{\textbf{p}}^1=\phi(\sum_{x=1}^{N} \mathbf{u}^x+\alpha_0\mathbf{1})$, with each element of $\textbf{p}^1$ calculated by:
\begin{align*}\label{eq:p_i_calculate}
p^1_h=
\begin{cases}
\sum_{x=1}^{N} u^x_h+{\alpha_0}_{min}+1, \\\qquad\qquad\qquad h=1,2,...,(r+1)^{N-1},\\
\sum_{x=1}^{N} u^x_h+{\alpha_0}_{min},\\\qquad\qquad\qquad h=(r+1)^{N-1}+1,...,(r+1)^{N+1},\\
\end{cases}
\end{align*}
where $p^1_h$ denotes the $h$-th element in $\textbf{p}^1$.
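To make the case analysis concrete, the sketch below computes ${\alpha_0}_{min}$ and the resulting strategy entries for the case $\phi>0$. All numbers are hypothetical: $N$, $r$, $\phi$, and the summed utilities $\sum_x u^x_k$ are made up, and the strategy entries are formed directly from the constraint $\hat{\textbf{p}}^1=\phi(\sum_{x} \mathbf{u}^x+\alpha_0\mathbf{1})$ with the unit offset on the first block of indices:

```python
import numpy as np

# Hypothetical setting: N = 2 organizations, r = 1, so vectors have
# (r+1)**(N+1) = 8 entries; the summed utilities s_k are made-up numbers.
N, r, phi = 2, 1, 0.5
split = (r + 1) ** (N - 1)  # index at which the case definition changes
s = np.array([0.1, 0.2, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])  # sum_x u^x_k

# Candidates from p >= 0 (lower bounds) and p <= 1 (upper bounds) on alpha_0:
lower = np.concatenate([-s[:split] - 1 / phi, -s[split:]])
upper = np.concatenate([-s[:split], -s[split:] + 1 / phi])
alpha0_min, alpha0_max = lower.max(), upper.min()
assert alpha0_min <= alpha0_max  # feasibility of the MMZD strategy

# Strategy entries: phi * (s_h + alpha_0), plus 1 on the first index block.
p1 = phi * (s + alpha0_min) + (np.arange(len(s)) < split)
assert np.all((0 <= p1) & (p1 <= 1))
# The maximum controllable social welfare is -alpha0_min:
assert abs(-alpha0_min - 0.5) < 1e-12
```

With these made-up utilities the binding candidate comes from the second block, giving ${\alpha_0}_{min}=-0.5$ and a controllable social welfare of $0.5$.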
\section{Social Welfare Maximization by MMZD Alliance}
In the previous section, we proved that a single organization is able to maximize the social welfare within a certain range. According to (\ref{eq:zd_stratrgy}), every organization can deploy the MMZD strategy to control the social welfare, so it is possible for multiple organizations to utilize the MMZD strategy at the same time. In this section, we therefore explore multiple organizations that play the MMZD strategy and form an alliance in the cross-silo FL game. We call them MMZD Alliance (MMZDA) organizations (denoted as $\mathcal{A}$), assuming that all MMZDA organizations use the same MMZD strategy to prevent low social welfare. Besides, we define the other organizations as outsider organizations (denoted as $\mathcal{N}\backslash\mathcal{A}$). Outsider organizations may participate inactively in communication rounds, merely hoping to obtain the model trained by other organizations for free.
The current challenge is how the MMZDA controls the maximum value of social welfare. Specifically, does the MMZDA have stronger control and enlarge the maximum social welfare compared with the individual MMZD strategy?
For the sake of convenience, we assume that the organizations $i\in\mathcal{A}$, with $|\mathcal{A}|=N^{\mathcal{A}}$, form an alliance in which each member performs the same MMZD strategy as the leader organization $a\in\mathcal{A}$. As for an outsider organization $j\in\mathcal{N}\backslash\mathcal{A}$, $|\mathcal{N}\backslash\mathcal{A}|=N-N^{\mathcal{A}}$, it may not participate in any communication round.
In this new scenario based on the MMZDA, we pay more attention to strategies and behaviors rather than the organizations themselves. Since the MMZDA members take the same actions, we treat them as an entity represented by the leader organization $a$. In each game round, any outsider organization or the alliance leader (organization $a$) can choose the action $y_i\in\{0,1,...,r\}$, so there are $(r+1)^{N-N^{\mathcal{A}}+1}$ possible outcomes for each game round. We assume that the organizations have one-round memory, and we define $\mathcal{N}^c=\mathcal{N}\backslash\mathcal{A}\cup\{a\}$ as the set of players of the cross-silo FL game based on the MMZDA. For an arbitrary organization $i\in\mathcal{N}^c$, its mixed strategy $\textbf{q}^i$ is defined as:
\begin{equation}
\textbf{q}^i=[q^i_{1,0},q^i_{1,1},...,q^i_{1,r},q^i_{2,0},...,q^i_{j,g},...,q^i_{(r+1)^{N-N^{\mathcal{A}}+1},r}]^T,
\end{equation}
where $q^i_{j,g} (j\in\{1,2,...,(r+1)^{N-N^{\mathcal{A}}+1}\},g\in\{0,1,...,r\})$ represents the probability of organization $i$ choosing action $y_i=g$ in the current game round conditioning on the $j$-th outcome of the previous game round. In addition, the corresponding utility vector $\mathbf{v}^i$ is denoted as:
\begin{equation}
\mathbf{v}^i=[v^i_{1,0},v^i_{1,1},...,v^i_{1,r},v^i_{2,0},...,v^i_{j,g},...,v^i_{(r+1)^{N-N^{\mathcal{A}}+1},r}]^T,
\end{equation}
where each utility $v^i_{j,g}$ of organization $i$ choosing action $y_i=g$ in the $j$-th outcome can be calculated by $v^i_{j,g}=U^i(\mathbf{y}^{(j,g)})$, with $\mathbf{y}^{(j,g)}$ denoting the action vector $\mathbf{y}$ corresponding to the $j$-th outcome but with $y_i=g$. Moreover, if $i=a$, then $v^a_{j,g}=\sum_{x\in{\mathcal{A}}}U^x(\mathbf{y}^{(j,g)})$. In Section \ref{maxwelfare}, we showed that the linear combination of all organizations' expected utilities can be represented as in (\ref{eq:u_a}) and (\ref{eq:zd_equation}).
Similarly, we can derive that
\begin{multline}
\label{eq:zd_equation_a}
\sum_{x\in{\mathcal{N}^c}}\gamma_x F^x+\gamma_0\\
=\frac{\det (\textbf{q}^1,...,\textbf{q}^i,...,\textbf{q}^{N-N^{\mathcal{A}}+1},\sum_{x\in{\mathcal{N}^c}}\gamma_x \textbf{v}^x+\gamma_0\mathbf{1})}{\det (\textbf{q}^1,...,\textbf{q}^i,...,\textbf{q}^{N-N^{\mathcal{A}}+1},\mathbf{1})},
\end{multline}
where $\gamma_0\in\mathbb{R}$ and $\gamma_x\in\mathbb{R},x\in{\mathcal{N}^c},$ are constants. Thus, when organization $i$ chooses a strategy that satisfies $\hat{\textbf{q}}^i=\phi(\sum_{x\in{\mathcal{N}^c}}\gamma_x \mathbf{v}^x+\gamma_0\mathbf{1})$, where $\phi$ is a non-zero constant and $\hat{\textbf{q}}^i$ is under the control of organization $i$, the column related to $\hat{\textbf{q}}^i$ and the last column of $\det (\textbf{q}^1,...,\textbf{q}^i,...,\textbf{q}^{N-N^{\mathcal{A}}+1},\sum_{x\in{\mathcal{N}^c}}\gamma_x \textbf{v}^x+\gamma_0\mathbf{1})$ will be proportional. Then (\ref{eq:zd_equation_a}) can be converted to:
\begin{equation}\label{eq:zd_zero_a}
\sum_{x\in{\mathcal{N}^c}}\gamma_x F^x+\gamma_0=0.
\end{equation}
In order to investigate the social welfare maximization problem by MMZDA, we rewrite the optimization problem as:
\begin{equation*}\label{eq:min_gamma_2}
\begin{split}
&\min \gamma_0,\\
&\,s.t.
\begin{cases}
0\le q^a_{j,g}\le 1,\\
j\in\{1,2,...,(r+1)^{N-N^{\mathcal{A}}+1}\},g\in\{0,...,r\},\\
\hat{\textbf{q}}^a=\phi(\sum_{x\in{\mathcal{N}^c}} \mathbf{v}^x+\gamma_0\mathbf{1}),\\
\phi \neq 0.\\
\end{cases}
\end{split}
\end{equation*}
Similar to Section \ref{maxwelfare}, we denote by $v^x_k$, $k\in\{1,2,...,(r+1)^{N-N^{\mathcal{A}}+2}\}$, the $k$-th element of $\textbf{v}^x$; then we can solve this optimization problem by considering the following two cases:\\
(1) $\phi>0:$\\
When $q^a_{j,g}\ge0$, we have ${\gamma_0}_{min}=\max(\Lambda_k),\forall{k}\in\{1,2,...,(r+1)^{N-N^{\mathcal{A}}+2}\}$; given $q^a_{j,g}\le1$, we have ${\gamma_0}_{max}=\min(\Lambda_l),\forall{l}\in\{(r+1)^{N-N^{\mathcal{A}}+2}+1,...,2(r+1)^{N-N^{\mathcal{A}}+2}\}$. In addition, $\gamma_0$ is feasible only when ${\gamma_0}_{min}\le{\gamma_0}_{max}$, i.e., $\max(\Lambda_k)\le \min(\Lambda_l),\forall{k}\in\{1,2,...,(r+1)^{N-N^{\mathcal{A}}+2}\},\forall{l}\in\{(r+1)^{N-N^{\mathcal{A}}+2}+1,...,2(r+1)^{N-N^{\mathcal{A}}+2}\}$. Finally, we can get the following result:
\begin{multline}\label{eq:case1_gammamin}
{\gamma_0}_{min}=\max\{-\sum_{x\in{\mathcal{N}^c}} v^x_1-\frac{1}{\phi},...,-\sum_{x\in{\mathcal{N}^c}} v^x_{(r+1)^{N-N^{\mathcal{A}}}}-\frac{1}{\phi},\\
-\sum_{x\in{\mathcal{N}^c}} v^x_{(r+1)^{N-N^{\mathcal{A}}}+1},...,-\sum_{x\in{\mathcal{N}^c}} v^x_{(r+1)^{N-N^{\mathcal{A}}+2}}\}.
\end{multline}
(2) $\phi<0:$\\
Given $q^a_{j,g}\ge0$, we have ${\gamma_0}_{min}=\max(\Lambda_l),\forall{l}\in\{(r+1)^{N-N^{\mathcal{A}}+2}+1,...,2(r+1)^{N-N^{\mathcal{A}}+2}\}$; while $q^a_{j,g}\le1$, we can get ${\gamma_0}_{max}=\min(\Lambda_k),\forall{k}\in\{1,2,...,(r+1)^{N-N^{\mathcal{A}}+2}\}$. In addition, $\gamma_0$ is feasible only when ${\gamma_0}_{min}\le{\gamma_0}_{max}$, i.e., $\max(\Lambda_l)\le \min(\Lambda_k),\forall{k}\in\{1,2,...,(r+1)^{N-N^{\mathcal{A}}+2}\},\forall{l}\in\{(r+1)^{N-N^{\mathcal{A}}+2}+1,...,2(r+1)^{N-N^{\mathcal{A}}+2}\}$. Finally, we can get the following result:
\begin{multline}\label{eq:case2_gammamin}
{\gamma_0}_{min}=\max\{-\sum_{x\in{\mathcal{N}^c}} v^x_1,...,-\sum_{x\in{\mathcal{N}^c}} v^x_{(r+1)^{N-N^{\mathcal{A}}}},\\
-\sum_{x\in{\mathcal{N}^c}} v^x_{(r+1)^{N-N^{\mathcal{A}}}+1}+\frac{1}{\phi},...,-\sum_{x\in{\mathcal{N}^c}} v^x_{(r+1)^{N-N^{\mathcal{A}}+2}}+\frac{1}{\phi}\}.
\end{multline}
In summary, by (\ref{eq:case1_gammamin}) and (\ref{eq:case2_gammamin}), the alliance can unilaterally set the expected social welfare $\sum_{x\in{\mathcal{N}^c}} F^x$ with the MMZD strategy $\textbf{q}^a$ satisfying $\hat{\textbf{q}}^a=\phi(\sum_{x\in{\mathcal{N}^c}} \mathbf{v}^x+\gamma_0\mathbf{1})$, with each element of $\textbf{q}^a$ calculated by:
\begin{align*}\label{eq:q_a_calculate}
q^a_h=
\begin{cases}
\sum_{x\in{\mathcal{N}^c}} v^x_h+{\gamma_0}_{min}+1,\\\qquad\quad h=1,2,...,(r+1)^{N-N^{\mathcal{A}}},\\
\sum_{x\in{\mathcal{N}^c}} v^x_h+{\gamma_0}_{min},\\\qquad\quad h=(r+1)^{N-N^{\mathcal{A}}}+1,...,(r+1)^{N-N^{\mathcal{A}}+2},\\
\end{cases}
\end{align*}
where $q^a_h$ denotes the $h$-th element in $\textbf{q}^a$.
\begin{theorem}\label{th:MMZDalliance}
The social welfare can achieve a larger maximum value under the MMZDA than under a single MMZD organization.
\end{theorem}
\begin{proof}
In the same cross-silo FL game, an individual organization using the MMZD strategy can achieve a maximum social welfare of $\sum_{x=1}^{N} E^x=-\alpha_0$. Meanwhile, setting $\gamma_x=1, x\in\mathcal{N}^c$, the MMZDA can achieve a maximum social welfare of $\sum_{x\in{\mathcal{N}^c}} F^x=-\gamma_0$.
\subsection{$\phi>0$}
In this case, we have:
\begin{align*}
&{\alpha_0}_{min}=\max\{-\sum_{x=1}^{N} u^x_1-\frac{1}{\phi},...,-\sum_{x=1}^{N} u^x_{(r+1)^{N-1}}-\frac{1}{\phi},\\
&-\sum_{x=1}^{N} u^x_{(r+1)^{N-1}+1},...,-\sum_{x=1}^{N} u^x_{(r+1)^{N+1}}\}.\\
&{\gamma_0}_{min}=\max\{-\sum_{x\in{\mathcal{N}^c}} v^x_1-\frac{1}{\phi},...,-\sum_{x\in{\mathcal{N}^c}} v^x_{(r+1)^{N-N^{\mathcal{A}}}}-\frac{1}{\phi},\\
&-\sum_{x\in{\mathcal{N}^c}} v^x_{(r+1)^{N-N^{\mathcal{A}}}+1},...,-\sum_{x\in{\mathcal{N}^c}} v^x_{(r+1)^{N-N^{\mathcal{A}}+2}}\}.
\end{align*}
The value of ${\alpha_0}_{min}$ is the maximum of $(r+1)^{N+1}$ candidate values, which we denote as the set $X_1$, while ${\gamma_0}_{min}$ is generated from $(r+1)^{N-N^{\mathcal{A}}+2}$ candidate values, which we denote as the set $X_2$. Note that the $(r+1)^{N+1}$ values cover all possible outcomes, but the $(r+1)^{N-N^{\mathcal{A}}+2}$ values do not cover all outcomes, since the MMZDA organizations employ the same strategy. Given an element $-\sum_{x\in{\mathcal{N}^c}} v^x_{k}-\frac{1}{\phi} \in X_2$, $k\in \{1,2,...,(r+1)^{N-N^{\mathcal{A}}}\}$, as $v^a_{j,g}=\sum_{x\in{\mathcal{A}}}U^x(\mathbf{y}^{(j,g)})$, we have:
\begin{multline}
-\sum_{x\in{\mathcal{N}^c}} v^x_{k}-\frac{1}{\phi}=-\sum_{x\in{\mathcal{A}}}U^x(\mathbf{y}^{(j,g)})-\sum_{x\in{\mathcal{N}\backslash\mathcal{A}}}U^x(\mathbf{y}^{(j,g)})-\frac{1}{\phi}\\
=-\sum^{N}_{x=1}U^x(\mathbf{y}^{(j,g)})-\frac{1}{\phi}
=-\sum_{x=1}^{N} u^x_{k'}-\frac{1}{\phi},
\end{multline}
where $k'$ denotes the index, in the original $N$-player game, of the element corresponding to the outcome $(j,g)$ of the alliance game.
Otherwise, given an element $-\sum_{x\in{\mathcal{N}^c}} v^x_{k} \in X_2$, $k\in \{(r+1)^{N-N^{\mathcal{A}}}+1,...,(r+1)^{N-N^{\mathcal{A}}+2}\}$, we can similarly derive
\begin{align*}
-\sum_{x\in{\mathcal{N}^c}} v^x_{k}&=-\sum_{x\in{\mathcal{A}}}U^x(\mathbf{y}^{(j,g)})-\sum_{x\in{\mathcal{N}\backslash\mathcal{A}}}U^x(\mathbf{y}^{(j,g)})\\
&=-\sum^{N}_{x=1}U^x(\mathbf{y}^{(j,g)})
=-\sum_{x=1}^{N} u^x_{k'},
\end{align*}
where $k'$ again denotes the corresponding element index in the original game.
As deduced above, the elements of $X_2$ all appear in $X_1$, so $X_2$ is a subset of $X_1$. Thus, ${\alpha_0}_{min}\geq {\gamma_0}_{min}$ holds true.
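The comparison step can be illustrated with a tiny numeric sketch, using made-up candidate values: if every candidate for ${\gamma_0}_{min}$ also appears among the candidates for ${\alpha_0}_{min}$, then the maximum over the smaller set cannot exceed the maximum over the larger one.

```python
# Hypothetical candidate sets; X2 being a subset of X1 is the proof's key fact.
X1 = {-0.5, -0.9, -1.3, -2.0, -2.4}  # candidates for alpha_0_min (all outcomes)
X2 = {-0.9, -2.0}                    # candidates for gamma_0_min (alliance game)
assert X2 <= X1                      # X2 is a subset of X1
alpha0_min, gamma0_min = max(X1), max(X2)
assert gamma0_min <= alpha0_min      # hence -gamma0_min >= -alpha0_min,
                                     # i.e. the alliance bound is at least as large
```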
\subsection{$\phi<0$}
When $\phi<0$, the situation is quite similar.
\begin{align*}
&{\alpha_0}_{min}=\max\{-\sum_{x=1}^{N} u^x_1,...,-\sum_{x=1}^{N} u^x_{(r+1)^{N-1}},\\
&-\sum_{x=1}^{N} u^x_{(r+1)^{N-1}+1}+\frac{1}{\phi},...,-\sum_{x=1}^{N} u^x_{(r+1)^{N+1}}+\frac{1}{\phi}\}.\\
&{\gamma_0}_{min}=\max\{-\sum_{x\in{\mathcal{N}^c}} v^x_1,...,-\sum_{x\in{\mathcal{N}^c}} v^x_{(r+1)^{N-N^{\mathcal{A}}}},\\
&-\sum_{x\in{\mathcal{N}^c}} v^x_{(r+1)^{N-N^{\mathcal{A}}}+1}+\frac{1}{\phi},...,-\sum_{x\in{\mathcal{N}^c}} v^x_{(r+1)^{N-N^{\mathcal{A}}+2}}+\frac{1}{\phi}\}.
\end{align*}
We denote the $(r+1)^{N+1}$ candidate values of ${\alpha_0}_{min}$ as $X_3$, while ${\gamma_0}_{min}$ is generated from $(r+1)^{N-N^{\mathcal{A}}+2}$ candidate values, which we denote as $X_4$. As above, we can prove that each value in $X_4$ can be found in $X_3$. So ${\alpha_0}_{min}\geq {\gamma_0}_{min}$ holds true as well.
In summary, we conclude that ${\alpha_0}_{min}\geq {\gamma_0}_{min}$, which is equivalent to $-{\alpha_0}_{min}\leq -{\gamma_0}_{min}$. So the social welfare can achieve a larger maximum value under the MMZDA than under a single MMZD organization.
\end{proof}
The above theorem and the previous derivation prove that the MMZDA further enhances the ability of non-selfish organizations to maximize the social welfare. As more organizations join the MMZDA, the upper boundary of the controllable social welfare continues to increase, making the entire system more stable.
\section{Experimental Evaluation}
In this section, we present the experimental results of our study for the individual MMZD strategy and the MMZD Alliance (MMZDA) strategy in social welfare maximization. All experiments are implemented using Matlab R2021a on a laptop with a 2.3 GHz Intel Core i5-8300H processor. In all experiments, unless otherwise specified, we set $\phi=0.01$, $K=200$, $r=33$. Parameters $\theta_0=23271.584$ and $\theta_1=50193.243$ are derived based on the simulation dataset \cite{li2019convergence}. For every control group with different strategy settings, we repeat the experiment 100 times and take the average value as the final expected social welfare.
\subsection{Evaluation of the MMZD Strategy}
First, we evaluate the performance of the MMZD strategy used by an individual organization to maximize the social welfare based on simulation experiments. We set $N=10$ since the number of organizations in cross-silo FL is usually small.
\begin{figure}[h]
\centering
\subfigure[MMZD]{
\label{fig:subfig:a}
\includegraphics[scale=0.281]{strZD-eps-converted-to.pdf}}
\subfigure[ALLD]{
\label{fig:subfig:b}
\includegraphics[scale=0.281]{strALLD-eps-converted-to.pdf}}
\subfigure[ALLC]{
\label{fig:subfig:c}
\includegraphics[scale=0.281]{strALLC-eps-converted-to.pdf}}
\subfigure[Rand]{
\label{fig:subfig:d}
\includegraphics[scale=0.281]{strRand-eps-converted-to.pdf}}
\caption{The maximum of social welfare under different strategy combinations of organization 1 and others.}
\label{fig:strategies}
\end{figure}
In order to verify the effectiveness of the MMZD strategy in maximizing the social welfare, we compare it with five other classical strategies, simulating the entire cross-silo FL process for 20 game rounds. In Fig. \ref{fig:strategies}, organization 1 adopts the MMZD, all-defection (ALLD) \cite{hu2019quality}, all-cooperation (ALLC) \cite{hu2019quality}, and random (Rand) \cite{hu2019quality} strategies, while the other organizations use the ALLD, ALLC, Rand, Tit-For-Tat (TFT) \cite{nowak1993strategy}, and mixed (Mixed) strategies. Specifically, the ALLD strategy is defined as follows: the organization does not perform local training at all and only submits a zero vector in the global aggregation. The ALLC strategy means that the organization participates in all $r$ global aggregations with its local updates in every game round. Organizations adopting the Rand strategy participate in each possible number of global aggregation rounds from $0$ to $r$ with probability $\frac{1}{r+1}$. The TFT strategy is defined as follows: an organization randomly chooses the number of global aggregations it participates in from $0$ to $\lfloor \frac{r}{2} \rfloor$ when the sum of global aggregation rounds in the last game round is less than $\frac{Nr}{2}$; otherwise, it randomly chooses from $\lfloor \frac{r+1}{2} \rfloor$ to $r$. We define the Mixed strategy as adopting a specific strategy chosen from ALLC, ALLD, Rand, and TFT. By comparing Fig. \ref{fig:strategies}(a) with the other three figures, we find that the MMZD strategy can effectively control the social welfare, which demonstrates that free-riding behavior can be mitigated.
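As a reference, the baseline strategies described above can be sketched as simple action rules. This is an illustrative reimplementation rather than the paper's Matlab code, and the values of $N$ and $r$ below are small hypothetical settings:

```python
import random

N, r = 4, 3  # hypothetical small game: actions are 0..r aggregation rounds

def alld(prev_actions):
    # ALLD: never participate, contribute nothing to global aggregation
    return 0

def allc(prev_actions):
    # ALLC: participate in all r global aggregations every game round
    return r

def rand_strategy(prev_actions):
    # Rand: uniform over 0..r, each with probability 1/(r+1)
    return random.randint(0, r)

def tft(prev_actions):
    # TFT: cooperate more when last round's total participation was high
    if sum(prev_actions) < N * r / 2:
        return random.randint(0, r // 2)
    return random.randint((r + 1) // 2, r)

assert alld([r] * N) == 0 and allc([0] * N) == r
assert 0 <= tft([0] * N) <= r // 2
assert (r + 1) // 2 <= tft([r] * N) <= r
```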
\begin{figure}
\centering
\includegraphics[scale=0.5]{Round-eps-converted-to.pdf}
\caption{Evolution of the expected social welfare.}
\label{fig:ZDconverge}
\end{figure}
Fig. \ref{fig:ZDconverge} plots the expected social welfare in each game round, as organization 1 adopts the MMZD strategy and the other organizations employ different strategies. The converged values in Fig. \ref{fig:ZDconverge} correspond to the final results displayed in Fig. \ref{fig:strategies}(a), which indicates that no matter what strategies the other organizations adopt, the social welfare finally converges to a fixed value, verifying the power of the proposed social welfare maximization scheme.
\subsection{Evaluation of the MMZDA Strategy}
In this subsection, we present the performance of the MMZDA in maximizing the social welfare through a series of simulation experiments, and we compare it with the result of a single organization's MMZD strategy. Meanwhile, we also consider the relative maximum of social welfare in order to further analyze the control ability of the MMZDA.
In Fig. \ref{fig:MMZDalliance vs indi}(a), we set $N^{\mathcal{A}}=4$ and randomly choose four organizations to form an MMZDA, with the other parameters unchanged in order to compare with the previous experiment (Fig. \ref{fig:strategies}). From this figure, it is clear that no matter what strategies the other organizations adopt, the MMZDA strategy expands the maximum value of the social welfare compared with the MMZD strategy performed by a single organization. This experimental result verifies Theorem \ref{th:MMZDalliance}: the social welfare can achieve a larger maximum value under the MMZDA than under a single MMZD organization.
Based on the same initial settings, Figs. \ref{fig:MMZDalliance vs indi}(b)-(f) display the evolution of the expected social welfare. The red line represents the case where the MMZDA strategy is used, while the blue line shows the impact of a single organization using the MMZD strategy on the maximum of social welfare. By comparison, we find that as the number of game rounds increases, no matter what strategies the other organizations adopt, the MMZDA strategy always enables the maximum social welfare to gradually converge to a larger fixed value, although it does not converge faster.
\begin{figure}[h]
\centering
\subfigure[The maximum of social welfare under MMZDA and MMZD]{
\label{fig:subfig-2:a}
\includegraphics[scale=0.26]{compare-eps-converted-to.pdf}}
\subfigure[ALLD]{
\label{fig:subfig-2:b}
\includegraphics[scale=0.281]{roundalld-eps-converted-to.pdf}}
\subfigure[ALLC]{
\label{fig:subfig-2:c}
\includegraphics[scale=0.281]{roundallc-eps-converted-to.pdf}}
\subfigure[Rand]{
\label{fig:subfig-2:d}
\includegraphics[scale=0.281]{roundrand-eps-converted-to.pdf}}
\subfigure[TFT]{
\label{fig:subfig-2:e}
\includegraphics[scale=0.281]{roundtft-eps-converted-to.pdf}}
\subfigure[Mixed]{
\label{fig:subfig-2:f}
\includegraphics[scale=0.281]{roundmix-eps-converted-to.pdf}}
\caption{Evolution of the expected social welfare under different strategy combinations, under comparison between the MMZDA strategy and the MMZD individual strategy.}
\label{fig:MMZDalliance vs indi}
\end{figure}
In Fig. \ref{fig:changeNAkeepN}, we set $N=10$. In Fig. \ref{fig:changeNAkeepN}(a), we explore the changes of the expected social welfare as the number of organizations in the alliance increases while the total number of organizations remains unchanged. Whenever $N^{\mathcal{A}}$ takes a different value, we randomly select the MMZDA organizations from these 10 organizations. From the histogram, we can conclude that when the total number of organizations does not change, the more organizations join the MMZD alliance, the higher the maximum social welfare that can be controlled. This also confirms our analysis of the MMZDA strategy, because the increase of $N^{\mathcal{A}}$ expands the range of candidate values in (\ref{eq:case1_gammamin}) and (\ref{eq:case2_gammamin}), thereby increasing the maximum value of social welfare. Besides, in Fig. \ref{fig:changeNAkeepN}(b), we take the social welfare of all organizations participating in all communication rounds as the absolute maximum of social welfare, and study the ratio of the social welfare controlled by the MMZDA to this value, which we call the relative maximum of social welfare. In this case, $N^{\mathcal{A}}$ and the relative maximum of social welfare are also positively correlated. Together with Fig. \ref{fig:changeNAkeepN}(a), this shows that when $N$ is constant, the MMZDA's ability to control the maximum value of social welfare increases with the number of MMZDA organizations.
\begin{figure}
\centering
\subfigure[The absolute maximum of social welfare]{
\label{fig:changeNAkeepN:abs}
\includegraphics[scale=0.252]{changeNA-eps-converted-to.pdf}}
\subfigure[The relative maximum of social welfare]{
\label{fig:changeNAkeepN:rel}
\includegraphics[scale=0.265]{changeNAr-eps-converted-to.pdf}}
\caption{The absolute maximum of social welfare and the relative maximum of social welfare as the number of organizations in the MMZDA increases with the total number of organizations unchanged.}
\label{fig:changeNAkeepN}
\end{figure}
Correspondingly, we continue to investigate the impact of $N$ while $N^{\mathcal{A}}$ remains unchanged. In fact, changing the total number of organizations $N$ brings a series of differences, including adding new organizations' parameters and changing the organizations' local datasets, which in turn changes the coefficients $\theta_0$ and $\theta_1$ and thus the utility function. For verification, we generate a new simulation dataset and randomly select $N$ organizations in different situations. We then randomly select $N^{\mathcal{A}}=5$ MMZDA organizations from the $N$ organizations. According to the new dataset, the corresponding coefficients $\theta_0$ and $\theta_1$ are generated to calculate the final social welfare. For each distinct $N$, we repeat the process of randomly selecting $N$ organizations 10 times, and for each group of selected organizations, we repeat the process of randomly selecting five MMZDA organizations 10 times. Finally, we take the average value as the expected social welfare. As shown in Fig. \ref{fig:changeNkeepNA}(a), we find that simply changing $N$ does not monotonically change the maximum value of social welfare, because the impact of newly joined organizations on social welfare varies. More specifically, when $N$ is small or even close to $N^{\mathcal{A}}$ (i.e., the scenario of $N=5$), the maximum value of social welfare is limited by the total number of organizations. When $N$ is large (i.e., the scenario of $N=25$), the small MMZD alliance has a reduced ability to control the maximum of social welfare and cannot bring a large increase of the maximum value. In Fig. \ref{fig:changeNkeepNA}(b), however, $N$ and the relative maximum of social welfare are negatively correlated. This reflects that the increase in $N$ weakens the MMZDA organizations' control over the maximum value of social welfare, although this does not mean a decrease in the absolute maximum value of social welfare.
\begin{figure}
\centering
\subfigure[The absolute maximum of social welfare]{
\label{fig:changeNkeepNA:abs}
\includegraphics[scale=0.252]{changeN-eps-converted-to.pdf}}
\subfigure[The relative maximum of social welfare]{
\label{fig:changeNkeepNA:rel}
\includegraphics[scale=0.265]{changeNr-eps-converted-to.pdf}}
\caption{The absolute maximum of social welfare and the relative maximum of social welfare as the total number of organizations increases with the number of organizations in the MMZDA unchanged.}
\label{fig:changeNkeepNA}
\end{figure}
In Fig. \ref{fig:keepNdivideNA}, we use the same dataset as in the previous experiment (Fig. \ref{fig:changeNkeepNA}). In this experiment, we keep the ratio of the number of alliance organizations to the total number of organizations unchanged; specifically, we set the number of alliance organizations to be $\frac{1}{3}$ of the total number of organizations. In Fig. \ref{fig:keepNdivideNA}(a), the maximum value of social welfare clearly increases as $N$ increases. This is because more organizations participate in the cross-silo FL game, so the controllable social welfare grows when the proportion of MMZDA organizations remains unchanged. Fig. \ref{fig:keepNdivideNA}(b) implies that under the same ratio, the relative maximum of social welfare fluctuates around $0.6$ within a certain range. There is not much change overall, and the fluctuations come from the heterogeneity of the organizations. In fact, the control ability of the MMZDA also depends on the utility vectors $\mathbf{v}$ of the alliance organizations. During the experiments, we randomly select the alliance organizations to more objectively reflect the expected control ability of the MMZDA.
\begin{figure}
\centering
\subfigure[The absolute maximum of social welfare]{
\label{fig:keepNdivideNA:abs}
\includegraphics[scale=0.252]{keepk-eps-converted-to.pdf}}
\subfigure[The relative maximum of social welfare]{
\label{fig:keepNdivideNA:rel}
\includegraphics[scale=0.265]{keepkr-eps-converted-to.pdf}}
\caption{The absolute maximum of social welfare and the relative maximum of social welfare as the total number of organizations increases with the rate of MMZDA organizations unchanged.}
\label{fig:keepNdivideNA}
\end{figure}
\section{Conclusion}
In this paper, we model the cross-silo FL game among organizations as a public goods game, theoretically revealing the social dilemma in the cross-silo FL game. In order to overcome this social dilemma, we propose a brand-new method using the MMZD to solve the social welfare maximization problem. By means of the MMZD, an individual organization can unilaterally control the social welfare at a certain level, regardless of other organizations' strategies. Meanwhile, we explore the MMZDA consisting of multiple MMZD organizations, which further improves the control of the maximum social welfare. Moreover, our approaches can maintain the stability and sustainability of the system without extra cost. Simulation results prove that the MMZD strategy can efficiently and effectively control the social welfare, reducing the loss from selfish behaviors.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Policy evaluation and, in particular, temporal difference (TD) learning is a key ingredient in reinforcement learning.
Here the expected value of future rewards is estimated from simulations of a given policy.
When a stationary policy is fixed, the simulated process is a time-homogeneous Markov chain.
The convergence of the policy evaluation algorithm is analyzed using stochastic approximation techniques under asynchronous Markovian updates.
There is a well-developed theory of stochastic approximation that establishes the convergence of a variety of policy evaluation schemes.
However, less attention has been placed on the convergence of the Markov chain that is estimated.
Usually, theoretical results analyze a time-homogeneous Markov chain, which corresponds to evaluating a fixed policy. A mixing time condition, which specifies the rate of convergence to the chain's equilibrium distribution, is typically assumed in advance.
For irreducible time-homogeneous Markov chains, geometric mixing often holds.
However, when training a reinforcement learning algorithm the policy of interest is rarely fixed.
Usually, the policy evaluation algorithm is combined with a policy improvement step.
Here the set of actions taken by the policy are updated and consequently the transition matrix and equilibrium distribution of the policy of interest evolves in time.
This essential component of the reinforcement learning problem, in turn, invalidates the time-homogeneous assumption and mixing time conditions assumed in the theory of policy evaluation.
In this work, we do not investigate the policy improvement mechanisms used in reinforcement learning.
Instead, we wish to understand the ability of policy evaluation algorithms to accurately assess a current policy as that policy is changed in time according to some separate mechanism.
One should expect --as it is implicitly assumed-- that a good policy evaluation algorithm such as TD learning should be able to track the current policy as it changes in time. Thus, finite-time bounds derived in recent years for policy evaluation, should be applicable for policy tracking.
In this article, we analyze the policy tracking problem. That is, we fix a sequence of transition matrices and we analyze how effectively TD learning tracks a policy as it evolves and converges on some final policy.
A key component of our analysis is a new adiabatic theorem for time-inhomogeneous Markov chains.
Adiabatic results are commonly used in physics (see \cite{griffiths2017introduction}).
These results study the Hamiltonian subject to changing external conditions, and prove that if the evolution is sufficiently slow, then the system state will be close to the ground state of the final Hamiltonian. Dynamics with this property are known as adiabatic. However when subjected to rapidly varying (diabatic) conditions there is insufficient time for the functional form to adapt and thus the state of the system is influenced by its initial configuration.
Adiabatic results have had a key role to play in the learning and stability of stochastic systems, see \cite{rajagopalan2009network}, and
results of this type are clearly applicable in the context of stochastic approximation and reinforcement learning.
If we slowly update our policy over time then our time-inhomogeneous Markov chain should remain close to equilibrium and a temporal difference algorithm should be able to successfully evaluate the current policy as it evolves. However, if the policy varies too quickly then these changes will begin to dominate the rate of convergence of the policy evaluation algorithm.
Although the intuition above is clear, new results are required to quantify this convergence.
Here we prove a general adiabatic theorem for irreducible, finite state-space, time-inhomogeneous Markov chains. We prove that, so long as the component-wise change in transition probabilities goes to zero, the time-inhomogeneous chain remains close to the stationary distribution of the most recent transition matrix.
This theoretical result on the convergence of time-inhomogeneous Markov chains may well be of independent interest.
As an application, we show how our results apply to asynchronous stochastic approximation, and thus to TD-learning and Q-learning. The problem setting and our findings can be informally described as follows.
We consider an asynchronous Robbins-Monro scheme, $R_t$, with learning rate $\alpha_t = t^{-\gamma_{\alpha}}$ where $\gamma_{\alpha}>0$. We let $P^{(t)}$ be a sequence of Markov transition matrices for a time-inhomogeneous Markov chain. Similar to $\gamma_\alpha$, we let $\gamma_P$ be the rate of change in the entries of these matrices, and we let $\gamma_{\pi}$ be the rate of change of the smallest state probability (under the equilibrium distribution of $P^{(t)}$).
Under the asynchronous scheme, the components of $R_t$ are updated according to the transitions of this time-inhomogeneous chain.
We assess the ability of this scheme to approximate a fixed point $R^{\star}_T$ which depends on $P^{(T)}$. We find a bound of the form
\begin{align*}
|| R_T - R^{\star}_T||_\infty = O\left( \frac{1}{T^{\gamma_P-\gamma_\alpha - \gamma_{\pi}}} + \frac{\sqrt{\log T}}{T^{\gamma_\alpha/2- 3 \gamma_\pi/2}}\right)\, .
\end{align*}
That is, under the conditions $\gamma_P> \gamma_\alpha + \gamma_{\pi}$ and $\gamma_\alpha > 3 \gamma_\pi$, the chain is adiabatic and the stochastic approximation scheme provides a good assessment of the performance of the current policy.
These results transfer over in a straightforward manner to give convergence guarantees for tabular TD-learning and $Q$-learning under time-inhomogeneous change.
Although our goal is not to assess a static policy, we note that if the Markov chain's transition matrix $P$ does not change in time, then $\gamma_P=\infty$ and $\gamma_{\pi}=0$, and we obtain a bound of order $O(\sqrt{\log T}/T^{\gamma_{\alpha}/2})$, which is consistent with the best performance bounds for off-policy TD-learning and $Q$-learning (see \cite{qu2020finite}). From this we further see that if $\gamma_P-\gamma_\alpha-\gamma_\pi > (\gamma_\alpha-3\gamma_\pi)/2$ then the order of the convergence rate is the same as for a time-homogeneous learning problem.
The results above are considered for stochastic approximation and tabular reinforcement learning. However, in future work, we will show the framework above can be developed using different methodology applicable to online convex optimization and consequently applicable to linear function approximation frameworks in reinforcement learning.
\subsection{Relevant Literature.}
Convergence of the algorithm for tabular learning was first established by \cite{sutton1988learning}. A general proof for $TD(\lambda)$ is given by \cite{dayan1992convergence}.
Convergence results generally require the convergence of an asynchronous stochastic approximation scheme. \cite{Tsitsiklis1994} provides a general criterion for the convergence of a fully asynchronous stochastic approximation scheme. However, more commonly, updates are considered to be formed from a time-homogeneous Markov chain. Here the book of
\cite{benveniste2012adaptive} provides a general treatment.
More recently there has been an increased focus on finite-time bounds for these schemes as a means of comparing and quantifying performance. The work of \cite{Bach2011} provides a relatively general convergence result for stochastic approximation in the setting of convex optimization. Such results can then be applied in the context of stochastic approximation and reinforcement learning. The paper of \cite{qu2020finite} provides a recent instance of this approach, and we develop parts of their analysis to derive our bounds for tabular temporal difference learning. A distinct but not entirely unrelated area of interest is off-policy evaluation. This considers a different setting where there is also a disparity between the Markov chain used for training and the target policy that is to be evaluated. See \cite{precup2000eligibility} and \cite{duan2020minimax} for two key references in the tabular and function approximation settings.
The works discussed above tend to focus on finite-time bounds under independent or time-homogeneous Markov feedback.
Here results on mixing times are generally required. Mixing times results for time-homogeneous Markov chains are given in the following texts \cite{aldous-fill-2014,levin2017markov,montenegro2006mathematical}.
However, as discussed, in reinforcement learning we often cannot assume the time homogeneity of our Markov chain.
Thus we focus on time-inhomogeneous Markov input. Here we require a result that will replace commonplace mixing time assumptions.
Adiabatic results focus on the ability of a time-inhomogeneous Markov chain to approximate the stationary distribution of a time-homogeneous chain. The results of \cite{bradford2011adiabatic} and \cite{rajagopalan2009network} give two such examples for time-inhomogeneous Markov chains in specific application areas.
Our analysis relies on the results of \cite{seneta1988perturbation,seneta1993sensitivity,seneta2006non} who derives contraction properties in order to develop a sensitivity analysis of time-inhomogeneous Markov chains. Also see the review of \cite{ipsen2011ergodicity}. We apply this style of analysis along with recursions similar to those in stochastic approximation to establish our bounds. These are then applied to temporal difference learning to develop the finite-time bounds.
\subsection{Outline.} In Section \ref{sec:model}, we present our main mathematical models, assumptions and notation.
In Section \ref{sec:adabiatic}, we prove our Adiabatic Theorem. Here Theorem \ref{thrm:adiabatic} gives a mixing time result for time-inhomogeneous Markov chains.
In Section \ref{sec:async}, we first apply this to asynchronous stochastic approximation in Theorem \ref{thrm:asych}.
In Section \ref{sec:TD}, we then use this to study the problem of policy tracking for tabular TD(0) learning.
\section{Model and Assumptions.}\label{sec:model}
Below in Section \ref{sec:Notation}, we give several definitions for vectors, probability distributions and distances between them.
Then, in Section \ref{sec:MC}, we give definitions for time-inhomogeneous Markov chains, including the definition of the coefficient of ergodicity, which is an object of central interest in the paper. In Section \ref{TabTD}, we define temporal difference learning and in Section \ref{sec:TabQ} we define Q-learning.
\subsection{Mathematical Definitions and Notation.}
\label{sec:Notation}
We let $\mathbb Z_+=\{0,1,2,...\}$ denote the non-negative integers.
We define $\bm 1 = (1 : x \in \mathcal X)$ to be the vector of all ones and $\bm e_x$ to be the $x$-th unit vector. For two probability distributions, $\lambda$ and $\pi$, defined on a finite state space $\mathcal X$, the total variation distance between $\lambda$ and $\pi$ is defined by\footnote{Often the total variation distance is written with the subscript $TV$, i.e. $|| \mu -\pi ||_{TV}$. However, we will primarily be using this norm and so omit the subscript.}
\[
||\lambda - \pi ||
:=
\frac{1}{2}\sum_{x\in \mathcal X} | \lambda_x - \pi_x | \, .
\]
For two matrices $P$ and $P'$, we define
\[
|| P - P'|| = \max_{x\in\mathcal X} || P_{x\cdot} - P'_{x\cdot} || \,.
\]
Specifically, for two Markov chain transition matrices (as below in Section \ref{sec:MC}), this is the maximum total variation distance between the transitions.
We let $|| \cdot ||_\infty$ denote the supremum norm, that is, $|| z ||_\infty = \max_{x\in\mathcal X} |z_x| $ for $z \in \mathbb R^{\mathcal X}$. We say that an operation $z \mapsto F z$, where $Fz \in\mathbb R^{\mathcal X}$, is a $\beta$-contraction with respect to the supremum norm if
\[
|| Fz - Fz' ||_{\infty} \leq \beta || z- z' ||_\infty\, .
\]
We will frequently use the subscript $\max$ (respectively $\min$) to denote the modulus of the largest (resp. smallest) element in a set. For instance if $\theta \in \Theta$ denotes a set of parameters then
$
\theta_{\max} = \max_{\theta \in \Theta} ||\theta||
$ and, for probability distribution $\pi$, $\pi_{\min}$ is the smallest probability event, $\pi_{\min}= \min_{x\in\mathcal X} \pi_x$.
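For concreteness, the distances defined above can be computed directly. The following minimal Python sketch (the helper names are ours, purely for illustration and not part of the formal development) implements the total variation distance, the induced matrix distance, and the supremum norm on a finite state space indexed $0,\dots,n-1$.

```python
def tv_dist(lam, pi):
    """Total variation distance: 0.5 * sum_x |lam_x - pi_x|."""
    return 0.5 * sum(abs(l - p) for l, p in zip(lam, pi))

def mat_dist(P, Q):
    """||P - Q||: maximum over rows x of the TV distance between P_x. and Q_x. ."""
    return max(tv_dist(Px, Qx) for Px, Qx in zip(P, Q))

def sup_norm(z):
    """Supremum norm ||z||_inf = max_x |z_x|."""
    return max(abs(v) for v in z)
```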
\subsection{Markov chains}\label{sec:MC}
We discuss Markov chains and their mixing time properties. \cite{montenegro2006mathematical} provides a good general treatment of the mixing time of non-reversible Markov chains.
We let $(\hat x_t : t \in \mathbb Z_+)$ be a discrete-time Markov chain with states in the finite set $\mathcal X$. We let $n=|\mathcal X|$.
The transition matrix of a (time-homogeneous) Markov chain is a non-negative matrix, $P$, whose rows sum to one, i.e. $\sum_{y\in\mathcal X} P_{xy}=1$ for all $ x\in \mathcal X$.
We consider time-inhomogeneous Markov chains, that is a Markov chain whose transition matrix changes in time.
We let $P^{(t)}$ denote the transition matrix of the $t$-th transition of the Markov chain $\hat x$. Thus the Markov chain $\hat x$ obeys the Markov property:
\[
\mathbb P (
\hat x_{t+1} = y | \hat x_t = x, \hat x_{t-1},...,\hat x_0)
=
\mathbb P(\hat x_{t+1} = y | \hat x_t =x )
=
P^{(t)}_{xy}\, .
\]
Throughout the paper we will apply the convention that $\mathbb E_x [f(\hat x) ] = \mathbb E [f(\hat x_1) | \hat x_0 = x]$.
We assume each transition matrix, $P$ (and $P^{(t)}$), is irreducible, meaning that there is a positive probability of transitioning between any pair of states in some finite time.
It is known that, for finite state-space chains, irreducibility implies there is a unique probability distribution $ \pi= (\pi_x: x \in \mathcal X)$ satisfying
\[
\pi P = \pi \, .
\]
This is called the equilibrium distribution (or stationary distribution) of $P$.
All eigenvalues of the matrix necessarily have modulus less than or equal to $1$. Thus irreducibility (together with aperiodicity) implies that the modulus of the second-largest eigenvalue, $\rho_2$, is less than $1$.
Since the distribution at time $t$ of a time-homogeneous Markov chain evolves according to powers of the matrix $P$, the second-largest eigenvalue determines the rate of convergence to equilibrium; specifically, results of the form
\[
||\mathbb P ( \hat x_T \in \cdot ) - \pi (\cdot) || \leq C \rho_2^T \, ,
\]
are common-place. For instance, see Proposition 2.12 of \cite{montenegro2006mathematical}.
Bounds such as the above hold for time-homogeneous (reversible) Markov chains, whereas we wish to consider time-inhomogeneous chains, which are not reversible. In this case the spectrum of the product of transition matrices is not tractable. We instead consider the \emph{coefficient of ergodicity}, which is defined as follows:
\begin{equation}\label{eqn:ergodic}
\rho(P)
:=
\sup_{
\substack{ \lambda: || \lambda|| =1\\ \lambda \cdot \bm 1 = 0} }
|| \lambda P ||
\end{equation}
Shortly, in Proposition \ref{prop:ergprop}, we will see that
$\rho(P)$ is the maximum total variation distance between the rows of $P$:
\[
\rho(P) =\max_{x,x'} \big\| P_{x\cdot} - P_{x'\cdot} \big\|\,.
\]
There are several key properties of the coefficient of ergodicity that make it a good alternative to the modulus of the second-largest eigenvalue. These known results are summarized in Proposition \ref{prop:ergprop} in the next section.
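As an illustration of the row-based expression for $\rho(P)$ above, the following Python sketch (our own, for illustration only) evaluates both the row total-variation form and the equivalent overlap form $1 - \min_{x,x'}\sum_y \min\{P_{x,y},P_{x',y}\}$ on a small example, and checks that they agree.

```python
def rho_rows(P):
    # max over pairs of rows of the total variation distance
    n = len(P)
    return max(
        0.5 * sum(abs(a - b) for a, b in zip(P[i], P[j]))
        for i in range(n) for j in range(n)
    )

def rho_overlap(P):
    # 1 minus the minimum overlap between any pair of rows
    n = len(P)
    return 1 - min(
        sum(min(a, b) for a, b in zip(P[i], P[j]))
        for i in range(n) for j in range(n)
    )

P = [[0.9, 0.1], [0.2, 0.8]]
assert abs(rho_rows(P) - rho_overlap(P)) < 1e-9  # the two forms agree
```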
\subsection{Tabular Temporal Difference Learning}\label{TabTD}
For a Markov chain $(\hat x_t: t\in\mathbb Z_+)$ with transition matrix $P$, the aim of the temporal difference algorithm is to estimate the reward function
\begin{equation} \label{reward_function}
R(x) = R(x;P) := \mathbb E_x \left[
\sum_{t=0}^\infty \beta^t r(\hat x_t)
\right]
\end{equation}
where $r:\mathcal X \rightarrow \mathbb R$ is a bounded instantaneous reward function and $\beta \in (0,1)$.
The transition probabilities of the Markov chain are assumed to be unknown; we seek to calculate $R(x)$ by sampling from the Markov chain. Monte Carlo simulation will work but requires an entire sample path and can have high variance. An alternative is to bootstrap from past estimates. Specifically, $R(x)$ satisfies the identity
\begin{equation}
0= r(x) + \beta \mathbb E_x [R(\hat x_1)] - R(x).
\end{equation}
Thus we can seek a fixed point through an asynchronous Robbins-Monro scheme:
\begin{align*}
R_{t+1}(\hat x_t) =
R_{t}(\hat x_t)
+
\alpha_t
\left[
r(\hat x_t)
+ \beta
R_t(\hat x_{t+1}) - R_t(\hat x_t)
\right]
\end{align*}
and $R_{t+1}(x) = R_t(x)$ for all $x\neq \hat x_t$. A proof of convergence is given by \cite{Tsitsiklis1994}. The above algorithm is known as $TD(0)$. There are other variants of this algorithm, such as $TD(\lambda)$ and $n$-step $TD$ algorithms. The analysis given in this paper will, almost certainly, transfer to these cases; however, for concreteness we focus on $TD(0)$.
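As an informal illustration of the $TD(0)$ recursion above, the following Python sketch (a toy two-state chain with constant reward; all names and constants are ours, not part of the formal development) runs the asynchronous update with step size $\alpha_t = t^{-0.6}$. Since the reward is constant, the fixed point is $r/(1-\beta) = 2$ in every state, regardless of $P$.

```python
import random

random.seed(0)
P = [[0.5, 0.5], [0.5, 0.5]]   # irreducible transition matrix
r = [1.0, 1.0]                 # constant instantaneous reward
beta = 0.5                     # discount; fixed point is r/(1 - beta) = 2
R = [0.0, 0.0]
x = 0
for t in range(1, 20001):
    alpha = t ** -0.6                              # step size alpha_t
    x_next = 0 if random.random() < P[x][0] else 1  # sample a transition
    # TD(0): bootstrap from the current estimate at the sampled next state;
    # only the visited component is updated
    R[x] += alpha * (r[x] + beta * R[x_next] - R[x])
    x = x_next
```

Both components of $R$ end close to the fixed point value $2$.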
A key property is that the operator $F$ defined by
\begin{equation}\label{Fcontract}
FR(x) = r(x) + \beta \mathbb E_x [ R(\hat x)] \,
\end{equation}
is a $\beta$-contraction with respect to the supremum norm: that is for all $R= (R(x) : x \in \mathcal X)$ and $R' = ( R'(x) : x\in\mathcal X)$ it holds that
\[
|| FR - FR' ||_\infty \leq \beta || R - R' ||_\infty \, .
\]
(see Lemma \ref{lem:contraction} in the appendix for a proof.)
We let $F_t$ be the transition operator defined by \eqref{Fcontract} where the expectation is taken with respect to the transition matrix $P^{(t)}$.
\subsection{Tabular Q-learning}\label{sec:TabQ}
We now consider a Markov chain that chooses states $s \in \mathcal S$ and actions $a \in \mathcal A$ according to a Markov chain $\hat x = ((\hat s_t, \hat a_t) : t\in\mathbb Z_+)$. We let $P^{(t)}$ be the transition matrix of this chain. That is $P^{(t)}_{(s,a),(s',a')}$ is the probability of taking action $a'$ in state $s'$ after being in state $s$ and taking action $a$ at time $t$.
Given a transition matrix $P$, a bounded function $r(s,a)$ and constant $\beta\in (0,1)$, the task of $Q$-learning is to evaluate the optimal $Q$-factor, which satisfies the fixed point equation
\[
0= r(s,a) + \beta \mathbb E_{s,a} \Big[ \max_{ a'\in\mathcal A} Q( \hat s, a')\Big] -Q(s,a)
\]
where here $\hat s$ is the first state reached under transition matrix $P$ after taking action $a$ in state $s$. As with the reward function $R(x;P)$ in \eqref{reward_function}, we sometimes wish to make explicit the dependence on the transition matrix, $P$, in which case we write $Q(s,a;P)$.
Like TD-learning, $Q$-learning is an asynchronous Robbins-Monro scheme defined as follows:
\[
Q_{t+1} (\hat s_{t}, \hat a_{t})
=
Q_t(\hat s_{t}, \hat a_{t})
+
\alpha_t
\left[
r(\hat s_t,\hat a_t)
+ \beta
\max_{a\in\mathcal A} Q_t(\hat s_{t+1},a) - Q_t(\hat s_t, \hat a_t)
\right] \, .
\]
and $Q_{t+1}(s,a) = Q_t(s,a)$ for all $(s,a)$ such that $s\neq \hat s_t$ or $a \neq \hat a_t$. Again, a key property of $Q$-learning is that the corresponding update operator is a $\beta$-contraction.
That is, given a transition matrix $P$, the operator $G$ defined by
\begin{equation}\label{Qcontract}
GQ(x,a) = r(x,a) + \beta \mathbb E_{x,a} \Big[ \max_{a'\in\mathcal A}Q(\hat x,a') \Big]\,
\end{equation}
is a $\beta$-contraction with respect to the supremum norm. That is, for all $Q= (Q(s,a) : s \in \mathcal S, a \in \mathcal A)$ and $Q'= (Q'(s,a) : s \in \mathcal S, a \in \mathcal A)$ it holds that
\[
|| GQ - GQ'||_\infty \leq \beta || Q - Q' ||_\infty \, .
\]
(See Lemma \ref{lem:contraction} in the appendix.) We let $G_t$ be the operator \eqref{Qcontract} defined with transition matrix $P^{(t)}$.
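For illustration, the following Python sketch (a single-state, two-action toy instance of our own, not the general scheme) runs the asynchronous $Q$-learning recursion with a uniform behaviour policy. In this instance the fixed point can be computed by hand: $Q^\star(1) = 1 + \beta Q^\star(1)$ gives $Q^\star(1)=2$, and $Q^\star(0) = 0 + \beta Q^\star(1) = 1$.

```python
import random

random.seed(1)
r = [0.0, 1.0]       # r(s, a) for the single state s and actions a = 0, 1
beta = 0.5           # discount; fixed point is Q*(0) = 1, Q*(1) = 2
Q = [0.0, 0.0]
for t in range(1, 20001):
    a = random.randrange(2)    # action chosen by the uniform behavior policy
    alpha = t ** -0.6          # step size alpha_t
    # Q-learning: bootstrap from the maximum over next-step actions;
    # only the visited (s, a) component is updated
    Q[a] += alpha * (r[a] + beta * max(Q) - Q[a])
```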
\section{Adiabatic Theorem}\label{sec:adabiatic}
Our main result given here is Theorem \ref{thrm:adiabatic}.
The result determines how, for a time-inhomogeneous Markov chain, the rate of change in the sequence of transition matrices $\{ P^{(t)} : t=0,...,T\}$ determines the closeness to the stationary distribution of a time-homogeneous Markov chain with transition matrix $P^{(T)}$.
To prove Theorem \ref{thrm:adiabatic}, we first require a supporting result, namely Proposition \ref{prop:ergprop}, as well as some standard lemmas given in the appendix.
We then discuss the impact of mixing times and changes in the transition matrix on the sum of discounted rewards.
Proposition \ref{prop:ergprop} determines key properties of the coefficient of ergodicity, $\rho(P)$, defined by \eqref{eqn:ergodic}. The proposition collects together a number of results given by \cite{seneta1988perturbation,seneta2006non} and reviewed by \cite{ipsen2011ergodicity}. For completeness, a proof is given in the appendix.
\begin{proposition}\label{prop:ergprop}
For two irreducible transition matrices $P$ and $\tilde P$, with respective stationary distributions $\pi$ and $\tilde \pi$, and probability distributions $\lambda$ and $\mu$:\\
a)
\[
\rho_2 \leq \rho(P)
\]
where $\rho_2$ is the modulus of the second-largest eigenvalue of $P$.
\noindent b)
\[
\rho(P\tilde P) \leq \rho(P) \rho(\tilde P)\, .
\]
\noindent c)
\[
|| \lambda P - \mu P || \leq \rho(P) || \lambda - \mu || \,.
\]
\noindent d)
\begin{align*}
\rho(P)
&
=
\max_{x_1,x_2}
|| P_{x_1,\cdot} - P_{x_2,\cdot}||\, .
\\
&
=
1 -
\min_{x_1,x_2} \sum_{y \in \mathcal X} \min \left\{ P_{x_1,y} , P_{x_2,y} \right\} \, .
\end{align*}
\noindent e)
\[
|| \pi - \tilde \pi || \leq \frac{1}{1- \rho(P)} \| P -\tilde P\| \, .
\]
\end{proposition}
Part a) above shows that the coefficient of ergodicity is a valid proxy for the second-largest eigenvalue, which determines the rate of convergence to equilibrium. Part b) can be used to obtain geometric convergence to equilibrium. Part d) gives an expression for the coefficient of ergodicity that can be directly calculated. Parts c) and e) will be useful in proofs.
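Parts a) and b) can be checked numerically on small examples. The following pure-Python sketch (our own $2\times 2$ examples; for a $2\times 2$ stochastic matrix the second eigenvalue is $\mathrm{trace}(P)-1$) verifies the eigenvalue bound and submultiplicativity.

```python
def rho(P):
    # coefficient of ergodicity via the row total-variation formula (part d)
    n = len(P)
    return max(0.5 * sum(abs(a - b) for a, b in zip(P[i], P[j]))
               for i in range(n) for j in range(n))

def matmul(P, Q):
    # plain matrix product of two row-stochastic matrices
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

P = [[0.9, 0.1], [0.3, 0.7]]
Q = [[0.6, 0.4], [0.2, 0.8]]
rho2 = abs(P[0][0] + P[1][1] - 1)          # |second eigenvalue| of 2x2 P
assert rho2 <= rho(P) + 1e-12              # part a)
assert rho(matmul(P, Q)) <= rho(P) * rho(Q) + 1e-12   # part b)
```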
The following gives the main result of this section.
\begin{theorem}[Adiabatic Theorem] \label{thrm:adiabatic}
Let $P,P^{(1)},...,P^{(T)}$ be irreducible transition matrices. Let $\pi^{(T)}$ be the stationary distribution of $P^{(T)}$ and $\rho(T)$ be the coefficient of ergodicity of $P^{(T)}$. Let
$\mu$ and $\lambda$ be two probability distributions. The following holds:\\
a)
\[
|| \lambda P^{(1)}...P^{(T)} - \mu P^{T} ||
\leq
|| \lambda - \mu || \rho(P)^T
+
\sum_{t=1}^T
|| P^{(t)} - P || \rho(P)^{T-t}\, .
\]
b) If
\begin{equation}\label{eqn:P_Conv}
|| P^{(t)} - P^{(t-1)}|| \xrightarrow[t\rightarrow \infty]{} 0\,
\end{equation}
and
\[
\limsup_{T\rightarrow\infty} \rho(T) < 1
\]
then
\begin{equation}\label{eqn:PtoPi}
|| \lambda P^{(1)}...P^{(T)} - \pi^{(T)} ||
\xrightarrow[T\rightarrow \infty]{} 0 \, .
\end{equation}
c) If
\[
||
P^{(t)} - P^{(t-1)}
||
\leq \phi_t
\]
for a positive decreasing sequence $\phi_t$, then
\[
|| \lambda P^{(1)}...P^{(T)} - \pi^{(T)} ||
\leq
\phi_{T/2}
\frac{\rho(T)}{(1-\rho(T))^2}
+
\frac{
\rho(T)^{T/2+1}
}{
1-\rho(T)
}
\sum_{t=1}^{T/2}
\phi_t
+
\|
\lambda - \pi^{(T)}
\|
\rho(T)^T \, .
\]
\end{theorem}
Before giving a proof, let us briefly interpret the three parts of the above result.
Note that the probability distribution at time $T$ of a time-homogeneous Markov chain with initial distribution $\mu$ and transition probabilities $P$ is $\mu P^T$, while for a time-inhomogeneous chain with initial distribution $\lambda$ and transition probabilities $P^{(1)},...,P^{(T)}$ it is $\lambda P^{(1)}...P^{(T)}$.
Thus part a) determines how close these marginal distributions are at time $T$, given the distance in total variation between $P$ and $P^{(t)}$ and between $\mu$ and $\lambda$.
Part b) gives a simple yet general adiabatic result which concerns the closeness to stationarity under change.
We show that, provided the coefficient of ergodicity is eventually bounded away from $1$, it suffices for the change in the components of $P^{(t)}$ to go to zero in order for the marginal distribution at time $t$ to converge to $\pi^{(t)}$.
Notice, importantly, this does not require convergence of $P^{(t)}$.
In other words, the sequence $P^{(t)}$ can vary over the set of irreducible matrices with $\rho(t)< \rho < 1$. So long as the change in $P^{(t)}$ is vanishingly small, then the current marginal distribution is close to the stationary distribution of the current chain.
Conversely, it is clear that if the quantity in \eqref{eqn:P_Conv} does not converge to zero then, in general, we cannot expect the result \eqref{eqn:PtoPi} to hold.
Of course, it is possible for two transition matrices to have the same stationary distribution, so the full converse cannot hold.
Nonetheless, it should be clear that \eqref{eqn:P_Conv} is close to being a necessary condition for \eqref{eqn:PtoPi} to hold.
Part c) improves on part b) by more accurately establishing rates of convergence. It should be clear from the statement of part c) that if $\phi_t= t^{-\eta}$ and $\rho(T)< \rho <1$ then the rate of convergence to equilibrium is of the order $O(T^{-\eta})$. Another, perhaps more important, point is that the proof can be used to trade off the rate of convergence determined by the sequence $P^{(t)}$ against the mixing rate suggested by the coefficient of ergodicity $\rho$.
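The behaviour described in parts b) and c) can be illustrated numerically. The following Python sketch (a two-state chain of our own construction, with entries drifting at rate $O(t^{-3/2})$ and coefficient of ergodicity near $0.2$) computes the marginal distribution $\lambda P^{(1)}\cdots P^{(T)}$ exactly and checks that it is close to $\pi^{(T)}$.

```python
def P_at(t):
    # entries drift at rate O(t**-1.5), so ||P(t) - P(t-1)|| -> 0,
    # while rho(P(t)) = |1 - p - q| stays near 0.2, well below 1
    p = 0.3 + 0.3 * t ** -0.5
    q = 0.5 - 0.2 * t ** -0.5
    return [[1 - p, p], [q, 1 - q]]

T = 2000
lam = [1.0, 0.0]                 # initial distribution
for t in range(1, T + 1):
    P = P_at(t)
    lam = [lam[0] * P[0][0] + lam[1] * P[1][0],
           lam[0] * P[0][1] + lam[1] * P[1][1]]

P = P_at(T)
p, q = P[0][1], P[1][0]
pi = [q / (p + q), p / (p + q)]  # stationary distribution of P(T)
tv = 0.5 * (abs(lam[0] - pi[0]) + abs(lam[1] - pi[1]))
assert tv < 1e-3                 # marginal is adiabatically close to pi(T)
```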
\begin{proof}
\noindent a) Recall, from Proposition \ref{prop:ergprop}, that
\[
\|\lambda' P - \mu' P \| \leq \rho(P) \| \lambda' - \mu' \|\,,
\]
and it follows from the definition of the total variation distance that
\[
\| \lambda' (P'-P) \|
\leq
\max_{x\in\mathcal X}\| P'_{x \cdot} - P_{x \cdot} \| = || P' - P ||\, .
\]
Applying the triangle inequality and the two inequalities above gives
\begin{align*}
\|
\lambda P^{(1)} ...P^{(T)} - \mu P^T
\|
&
\leq
\|
\lambda P^{(1)} ... P^{(T-1)} (P^{(T)} -P)
\|
+
\| \lambda P^{(1)} ... P^{(T-1)} P - \mu P^{T-1} P \|
\\
&
\leq
\|
P^{(T)} -P
\|
+
\rho(P)
\|
\lambda P^{(1)} ... P^{(T-1)} - \mu P^{T-1}
\|\, .
\end{align*}
By repeatedly iterating the above inequality, we have that
\[
\|
\lambda P^{(1)} ... P^{(T)} - \mu P^T
\|
\leq
\|
\lambda - \mu
\|
\rho(P)^T
+
\sum_{t=1}^T
\|
P^{(t)}
-
P
\|
\rho(P)^{T-t}
\]
as required.
\medskip
\noindent b) If we let $P=P^{(T)}$ and $\mu = \pi^{(T)}$, where $\pi^{(T)}$ is the stationary distribution of $P^{(T)}$, then part a) gives that
\begin{equation}\label{eqn:express}
\|
\lambda P^{(1)} ... P^{(T)} - \pi^{(T)}
\|
\leq
\|
\lambda - \pi^{(T)}
\|
\rho (T) ^T
+
\sum_{t=1}^T \|
P^{(t)} - P^{(T)}
\|
\rho(T)^{T-t}\, .
\end{equation}
By the triangle inequality we have that
\[
\|
P^{(t)}- P^{(T)}
\|
\leq
\sum_{s=t+1}^T
\|
P^{(s)} - P^{(s-1)}
\|
\,
.
\]
Applying this gives
\begin{align}
\|
\lambda P^{(1)}...P^{(T)} - \pi^{(T)}
\|
&
\leq
\|
\lambda - \pi^{(T)}
\|
\rho(T)^T
+
\sum_{t=1}^T \sum_{s=t+1}^T
\|
P^{(s)}
-
P^{(s-1)}
\|
\rho(T)^{T-t}
\notag
\\
&
=
\|
\lambda
- \pi^{(T)}
\|
\rho(T)^T
+
\sum_{s=1}^T
\|
P^{(s)}
-
P^{(s-1)}
\|
\rho(T)^T
\frac{\rho(T)^{-s} - 1}{\rho(T)^{-1} -1} \, .
\label{eq:Prop_Bound}
\end{align}
In the equality above we reorder the double summation and sum the resulting geometric series.
Since we assume
\[
\limsup_{T\rightarrow \infty} \rho(T)< 1 ,
\]
there exist $\rho <1$ and $T_0$ such that for all $T>T_0$, it holds that $\rho(T)< \rho < 1$.
For such values of $T$, the above expression \eqref{eqn:express} becomes
\[
\|
\lambda P^{(1)} ... P^{(T)} - \pi^{(T)}
\|
\leq
\|
\lambda
- \pi^{(T)}
\|
\rho^T
+
\sum_{s=1}^T
\|
P^{(s)}
-
P^{(s-1)}
\|
\rho^T
\frac{\rho^{-s} - 1}{\rho^{-1} -1} \, .
\]
Because
\[
\| P^{(s)} - P^{(s-1)} \| \xrightarrow[s \rightarrow \infty]{} 0 \, ,
\]
the remainder of the proof of part b) follows by a dominated convergence argument. Specifically, for $\eta >0$ there exists $s_\eta$ such that for all $s \geq s_{\eta}$,
\[
\|
P^{(s)}- P^{(s-1)}
\|
\leq \eta
\]
and of course for all $s \leq s_{\eta}$, $\max_{x\in\mathcal X}
\|
P^{(s)}_{x \cdot} - P^{(s-1)}_{x \cdot}
\|
\leq 1 $.
Applying this to the summation in \eqref{eq:Prop_Bound} gives
\begin{align*}
\sum_{s=1}^T
\|
P^{(s)}- P^{(s-1)}
\|
\rho^{T-s}
\leq
\eta \sum_{s=s_{\eta}}^T \rho^{T-s}
+
\sum_{s=1}^{s_{\eta}-1}
\rho^{T-s}
\leq
\frac{\eta}{1-\rho} + \rho^T \sum_{s=1}^{s_{\eta}-1} \rho^{-s} \, .
\end{align*}
Therefore, applying the above bound to \eqref{eq:Prop_Bound} gives
\[
\limsup_{T\rightarrow \infty} \;\;
\|
\lambda P^{(1)}...P^{(T)} - \pi^{(T)}
\|
\leq
\frac{\eta}{1-\rho} \, .
\]
Since $\eta$ can be made arbitrarily small the result for part b) holds.
\medskip
\noindent c)
We now perform a closer analysis of the bound \eqref{eq:Prop_Bound}.
We assume that
\[
\|
P^{(t)} - P^{(t-1)}
\|
\leq
\phi_t \, .
\]
Applying this bound to the sum in \eqref{eq:Prop_Bound} gives
\begin{align*}
\sum_{t=1}^T
\|
P^{(t)} - P^{(t-1)}
\|
\rho(T)^{T-t}
&\leq
\sum_{t=1}^T {\rho(T)^{T-t}} \phi_t
\\
&
=
\sum_{t=T/2+1}^{T} {\rho(T)^{T-t}} \phi_t
+
\sum_{t=1}^{T/2} {\rho(T)^{T-t}} \phi_t
\\
&
\leq
\phi_{T/2} \sum_{t=T/2+1}^T {\rho(T)^{T-t}}
+
\rho(T)^{T/2} \sum_{t=1}^{T/2}
\phi_t
\\
&
\leq
\frac{\phi_{T/2}}{
1-\rho(T)
}
+
\rho(T)^{T/2} \sum_{t=1}^{T/2}
\phi_t \, .
\end{align*}
Now substituting this back into the bound \eqref{eq:Prop_Bound} gives
\begin{align*}
\|
\lambda P^{(1)} ... P^{(T)} - \pi^{(T)}
\|
&
\leq
\|
\lambda - \pi^{(T)}
\|
\rho(T)^T
+
\frac{1}{\rho(T)^{-1} -1}
\left[
\frac{\phi_{T/2}}{
1-\rho(T)
}
+
\rho(T)^{T/2} \sum_{t=1}^{T/2}
\phi_t
\right]
\\
&
\leq
\phi_{T/2}
\frac{\rho(T)}{(1-\rho(T))^2}
+
\frac{
\rho(T)^{T/2+1}
}{
1-\rho(T)
}
\sum_{t=1}^{T/2}
\phi_t
+
\|
\lambda - \pi^{(T)}
\|
\rho(T)^T
\end{align*}
as required.
\end{proof}
\subsection{Discounted Reward Processes}
In this subsection, we focus on the sensitivity of the cumulative rewards to changes in the transition matrix. We recall the definition of the reward function $R(x)=R(x;P)$ from \eqref{reward_function}.
The following lemma is a consequence of Theorem \ref{thrm:adiabatic}. It shows that the reward function $R(x;P)$ and the $Q$-function $Q(s,a;P)$ are Lipschitz continuous in $P$.
\begin{lemma}\label{RLem}
For a discounted program:\\ a)
\[
\|
R(\cdot,P)
-
R(\cdot, \tilde P) \|_\infty
\leq \frac{\beta r_{\max}}{(1-\beta)^2}
\| P- \tilde P\|
\]
b)
\[
\| Q(\cdot,\cdot; P) - Q(\cdot, \cdot ; \tilde P)\|_\infty
\leq
\frac{\beta r_{\max}}{(1-\beta)^2}
\| P- \tilde P\|
\]
\end{lemma}
The following holds as a consequence of the above. It shows that the rewards of a Markov chain (and thus of a discounted program) can be expressed in terms of a Markov chain with ergodicity coefficient strictly less than $1$.
\begin{lemma}\label{DiscRho}
If $\hat x_t$ is a time-homogeneous Markov chain then for $\tilde \beta \in ( \beta , 1)$ there exists a positive recurrent time-homogeneous Markov chain $\tilde x_t$ whose transition matrix, $\tilde P$, satisfies
\[
\rho(\tilde P) < \frac{\beta}{\tilde \beta} <1
\qquad \text{
and}
\qquad
R(x;\tilde P)
=
\frac{1-\beta}{1-\tilde \beta}
R(x; P)
\,.
\]
\end{lemma}
The result ensures that fast mixing can be achieved uniformly across all discounted Markov decision processes, which is important for a temporal difference learning algorithm to converge quickly.
\section{Asynchronous Stochastic Approximation} \label{sec:async}
We apply our adiabatic result to asynchronous stochastic approximation. We consider an asynchronous approximation problem where the target fixed point is changing in a time-dependent way, as is the time-inhomogeneous Markov chain that determines which components are updated. Because of the time-varying setting, we must carefully account for dependence on ergodicity coefficients and the minimum stationary probability.
At each time $t$ we are given an operator $R \mapsto F_tR$, where $F_t : \mathbb R^n \rightarrow \mathbb R^n$ is a $\beta$-contraction with respect to the supremum norm. We assume that $||F_t R||_\infty \leq \beta || R ||_\infty + F_{\max} $ for some positive constant $F_{\max}$. (We will shortly see that this property holds in the case of TD-learning.) We focus on the task of tracking $R^\star_t$ a fixed point
\begin{equation}\label{eq:Fix}
F_{t}R^\star_t= R^\star_t.
\end{equation}
Here we suppose that $(\hat x(t) : t \in \mathbb Z_+)$ is a time-inhomogeneous Markov chain whose $t$-th transition has irreducible transition matrix $P^{(t)}$. We suppose that the coefficient of ergodicity is bounded above:
\[
\sup_{t\in\mathbb Z_+}\rho(P^{(t)}) < \rho \,
\]
for some $\rho<1$. Recall that by Lemma \ref{DiscRho}, any discounted program can be simulated by a Markov chain satisfying this property. We let $\pi^{(t)} = (\pi^{(t)}(x) : x \in\mathcal X)$ be the stationary distribution of $P^{(t)}$. We assume that
\[
\pi^{(t)}_{\min} \geq \frac{C_{\pi}}{t^{\gamma_{\pi}}}
\]
for positive constants $C_{\pi}$ and $\gamma_{\pi}$.
We assume that the sequence $(F_t : t\in\mathbb Z_+)$ is independent of the Markov chain $(\hat x(t):t\in\mathbb Z_+)$. In this sense we focus on tracking the fixed point as it changes over time rather than influencing its location.
We consider a stochastic approximation algorithm which at time $t$ maintains a vector $R_t = (R_t(x) : x \in\mathcal X)$.
We take $R_0(x)=0$ for $x\in\mathcal X$ and we update $R_t$ according to the rule
\begin{equation} \label{SA:update}
R_{t+1}(x)
=
R_t(x)
+
\alpha_t
\left[
F_t R_t(x) - R_t(x) + \epsilon_t
\right]
\qquad\text{ for }x = \hat x(t)
\end{equation}
and $R_{t+1}(x) = R_t(x)$ if $x\neq \hat x(t)$. Here $\epsilon_t$ is a bounded martingale difference sequence with respect to the filtration $\mathcal F_t$ generated by the past states $\hat x_s$, $s\leq t$.
We assume that $\alpha_t$ is a power function, that is,
\[
\alpha_t= \frac{C_\alpha }{t^{\gamma_\alpha}} \,
\]
for constants $C_{\alpha} \in (0,1)$ and $\gamma_\alpha \in (0,1)$.
We make the assumption that
\[
\gamma_\alpha + \gamma_{\pi} < 1.
\]
We also assume that the change in $P^{(t)}$ is bounded above as follows:
\begin{equation}\label{RP}
|| P^{(t+1)} - P^{(t)}||
\leq \frac{C_P}{t^{\gamma_P}} \,
\end{equation}
for positive constants $C_P$ and $\gamma_P$.
We assume that $F_t$ depends on $P^{(t)}$ in that $R_t^\star$ is Lipschitz in $P^{(t)}$. That is,
\[
|| R^\star_{t+1} - R^\star_t ||_\infty \leq
K || P^{(t+1)} - P^{(t)} || \,.
\]
We recall Lemma \ref{RLem} for justification of this Lipschitz assumption in the context of dynamic programming.
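As an informal illustration of this setting, the following Python sketch (a toy two-state instance of our own, with $\gamma_\alpha = 0.6$, $\gamma_P = 1.5$ and $\gamma_\pi = 0$, so that $\gamma_P > \gamma_\alpha + \gamma_\pi$) runs the asynchronous update over a drifting chain, with the $TD(0)$ operator playing the role of $F_t$ and the bootstrap sample supplying the noise, and compares the iterate with the exact fixed point $R^\star_T$.

```python
import random

random.seed(2)
beta, r = 0.5, [0.0, 1.0]

def P_at(t):
    # ||P(t+1) - P(t)|| = O(t**-1.5), i.e. gamma_P = 1.5 in our notation
    p = 0.3 + 0.3 * t ** -0.5
    q = 0.5 - 0.2 * t ** -0.5
    return [[1 - p, p], [q, 1 - q]]

T = 20000
R, x = [0.0, 0.0], 0
for t in range(1, T + 1):
    P = P_at(t)
    alpha = t ** -0.6                               # gamma_alpha = 0.6
    x_next = 0 if random.random() < P[x][0] else 1
    R[x] += alpha * (r[x] + beta * R[x_next] - R[x])  # only visited state updates
    x = x_next

# exact fixed point R*_T solves (I - beta * P^(T)) R = r; solve the 2x2 system
P = P_at(T)
a, b = 1 - beta * P[0][0], -beta * P[0][1]
c, d = -beta * P[1][0], 1 - beta * P[1][1]
det = a * d - b * c
R_star = [(d * r[0] - b * r[1]) / det, (a * r[1] - c * r[0]) / det]
err = max(abs(R[i] - R_star[i]) for i in range(2))
assert err < 0.3   # the scheme tracks the drifting fixed point
```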
\begin{theorem}\label{thrm:asych}
The distance between the fixed point \eqref{eq:Fix} and the stochastic approximation \eqref{SA:update}, as described above, obeys the following bound with probability greater than $1-\delta$:
\begin{align}
\| R_{T+1}(x) - R_{T+1}^\star(x) \|_\infty
\leq
&
2 R_{\max}
e^{(1-\beta)\tau}
\exp\left\{
-
(T^{1-\gamma_\alpha-\gamma_\pi}-1)/(1-\gamma_\alpha-\gamma_\pi)
\right\}
\label{ada1}
\\
&
+
\frac{{D_a} }{1-\beta}
\sqrt{\tau \log \left(
\frac{2T \tau }{\delta}
\right)}
\frac{1}{T^{(\gamma_\alpha-3\gamma_{\pi})/2}}
\label{ada2}
\\
&
+
\frac{{D_{b}}K}{(1-\beta)}
\frac{1}{T^{\gamma_P-\gamma_\alpha - \gamma_{\pi}}}
\label{ada3}
\\
&
+
\frac{2 R_{\max}D_{b'}}{(1-\rho)^2(1-\beta)}\frac{ 1}{T^{\gamma_P-\gamma_{\pi}}}
+
\frac{8 R_{\max}}{(1-\rho)^2}
\frac{\log T}{T^3}
+
\frac{2R_{\max}}{T^3}\,,
\label{ada4}
\end{align}
where $\tau := 4 \frac{\log T }{|\log \rho|}$, and $D_a$, $D_{b}$, $D_{b'}$ are constants depending only on $\gamma_P$, $C_P$, $\gamma_\alpha$, $C_\alpha$, $\gamma_\pi$, $C_\pi$.
\end{theorem}
Before proceeding with a proof we interpret the terms in Theorem \ref{thrm:asych}.
As with stochastic gradient descent, the term \eqref{ada1} corresponds to the exponential rate at which we forget the initial condition. The term \eqref{ada2} accounts for the mixing/adiabatic time of the Markov chain. The bound requires $\gamma_\alpha > 3 \gamma_\pi$ for convergence. A conjecture is that the correct dependence should be $\gamma_\alpha > \gamma_\pi$. In either case, we require step sizes to converge at a faster rate than the rate at which we avoid states in the chain.
The term \eqref{ada3} is the most important term. We see that if
\[
\gamma_P > \gamma_\alpha + \gamma_\pi
\]
then we can track the fixed point solution; thus the stochastic approximation scheme is adiabatic. However, if $\gamma_P < \gamma_\alpha + \gamma_\pi$ then we do not expect the stochastic approximation scheme to converge on the current fixed point, and the scheme is thus diabatic.
The term \eqref{ada4} is dominated by the earlier terms and does not determine our rate of convergence; however, it does include the dependencies on the coefficient of ergodicity. Further, we note that the $O(T^{-3})$ can be replaced with $O(T^{-n})$ for arbitrary $n$. Finally, we note that if $\gamma_P-\gamma_\alpha-\gamma_\pi > (\gamma_\alpha-3\gamma_\pi)/2$ then the order of the convergence rate is the same as for the time-homogeneous stationary learning problem typically considered in asynchronous stochastic approximation.
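Purely as a reading aid (this helper and its names are invented here, not part of the paper), the exponent conditions discussed above can be encoded directly:

```python
def rate_conditions(gamma_P, gamma_alpha, gamma_pi):
    """Encode the three exponent conditions discussed in this section."""
    return {
        # the martingale-difference term decays
        "martingale_decays": 2 * gamma_alpha > 3 * gamma_pi,
        # adiabatic regime: the scheme tracks the moving fixed point
        "adiabatic": gamma_P > gamma_alpha + gamma_pi,
        # convergence rate matches the stationary asynchronous setting
        "stationary_rate": (gamma_P - gamma_alpha - gamma_pi
                            > (gamma_alpha - 3 * gamma_pi) / 2),
    }

print(rate_conditions(1.5, 0.6, 0.1))
```

For instance, with $\gamma_P = 1.5$, $\gamma_\alpha = 0.6$ and $\gamma_\pi = 0.1$ all three conditions hold, whereas lowering $\gamma_P$ below $\gamma_\alpha + \gamma_\pi$ moves the scheme to the diabatic regime.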
A number of supporting lemmas are required in the proof of Theorem \ref{thrm:asych}, specifically, Lemmas \ref{lem:zcomp}, \ref{lemma:alpha_sum}, \ref{prodbound}, \ref{Lem:Atoa}, \ref{zboundLem}, \ref{AHbound} and \ref{lem7}. These are stated immediately after the proof of Theorem \ref{thrm:asych} in Section \ref{sec:thrmlem}, and proofs are given in the appendix in Section \ref{app:4}.
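Before the proof, a purely illustrative numerical aside. The sketch below simulates the asynchronous update \eqref{SA:update} in the simplest special case: a time-homogeneous $3$-state chain with $F$ a discounted Bellman operator, as in Section \ref{sec:TD}. Every concrete value (the chain, rewards, and step-size constants) is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented 3-state example: transition matrix P, rewards r, discount beta.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
r = np.array([1.0, 0.0, 0.5])
beta = 0.9

# With F R = r + beta P R, the fixed point R* solves (I - beta P) R* = r.
R_star = np.linalg.solve(np.eye(3) - beta * P, r)

R = np.zeros(3)
x = 0
for t in range(1, 100001):
    alpha = 0.5 / t ** 0.6              # alpha_t = C_alpha / t^{gamma_alpha}
    x_next = rng.choice(3, p=P[x])
    # update only the currently visited state, using an unbiased
    # estimate of F R_t(x) - R_t(x):
    R[x] += alpha * (r[x] + beta * R[x_next] - R[x])
    x = x_next

print(np.max(np.abs(R - R_star)))       # small after many iterations
```

Only the visited coordinate is updated at each step, which is exactly the asynchronous feature that forces the $\pi^{(t)}_{\min}$ terms into the bound above.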
\begin{proof}[Proof of Theorem \ref{thrm:asych}]
We can rewrite the update \eqref{SA:update} as
\begin{align*}
R_{t+1}(x)
&=
R_{t}(x)
+
\alpha_t \mathbb I [\hat x_t =x ]
\Big[
F_tR_t(x) - R_t(x) + \epsilon_t
\Big]\,.
\end{align*}
Using the fact that $F_t R_t^\star(x) = R_t^\star(x)$ and adding \& subtracting terms, the above can be rewritten as
\begin{align}
R_{t+1}(x) - R^\star_{t+1}(x)
=
&
(1-\alpha_t \pi^{(t)}(x))
\left[
R_{t}(x) - R^\star_{t}(x)
\right]
\notag
\\
&
+
\alpha_t \pi^{(t)}(x) \left[
F_t R_t(x) - F_t R^\star_t(x)
\right]
\notag
\\
&
+ \left[ R_t^\star (x) - R_{t+1}^\star (x) \right]
\notag
\\
&
+
\alpha_t
\left[
\mathbb P( \hat x_t = x |\mathcal F_{t-\tau})
-\pi^{(t)}(x)
\right]
\left[
F_tR_t(x) -R_t(x)
\right]
\notag
\\
&
+
\alpha_t\left[
\mathbb I [\hat x_t = x]
-
\mathbb P(\hat x_t = x | \mathcal F_{t-\tau} )
\right] \left[
F_t R_t(x) - R_t(x)
\right]
\notag
\\
&
+ \alpha_t \mathbb I [\hat x_t = x] \epsilon_t \,.
\label{RstarBound}
\end{align}
Given the expression above, we define
\begin{align*}
c_t(x) & := \left[
F_t R_t(x) - F_t R^\star_t(x)
\right]
\\
b_t(x) & := \left[ R_t^\star (x) - R_{t+1}^\star (x) \right]
\\
b'_t(x) & :=
\left[
\mathbb P( \hat x_t = x |\mathcal F_{t-\tau})
-\pi^{(t)}(x)
\right]
\left[
F_tR_t(x) -R_t(x)
\right]
\\
\epsilon'_t(x)& := \left[ \mathbb I [\hat x_t = x]
-
\mathbb P(\hat x_t = x | \mathcal F_{t-\tau} ) \right]\left[
F_t R_t(x) - R_t(x)
\right] \, .
\end{align*}
Thus the expression above, \eqref{RstarBound}, can be more compactly written as
\[
R_{t+1}(x) - R^\star_{t+1}(x)
=
(1-\alpha_t \pi^{(t)}(x))
[R_t(x) - R^\star_t(x)]
+\alpha_t \pi^{(t)} (x) c_t(x) + b_t(x) + \alpha_t b'_t(x) + \alpha_t \epsilon_t + \alpha_t \epsilon'_t(x) \, .
\]
In other words, $R_{t+1}(x)- R^\star_{t+1}(x)$ is a combination of one contraction term $c_t(x)$, two bias terms $b_t(x)$ and $b'_t(x)$, and two martingale difference terms $\epsilon_t$ and $\epsilon'_t(x)$. (By assumption $\epsilon_t$ is bounded and, by Lemma \ref{Lem:Bound}, proved in the appendix, $R_t(x)$ is bounded, so $\epsilon'_t(x)$ is bounded. We let $\epsilon_{\max}$ be an upper bound on their sum.)
Ideally we would apply the contraction property to $c_t$ at this point and then take a supremum over $x$; however, the martingale difference sequence considered depends on $x$, and thus we would not be able to apply concentration inequalities to it. To avoid this difficulty, we first expand the iterations of the expression above in order to apply the Azuma--Hoeffding inequality, and only then apply the contraction property.
Expanding the recursion using Lemma \ref{lem:zcomp} gives:
\begin{align}
R_{T+1}(x)
- R^\star_{T+1}(x)
=
&
[R_\tau(x) - R_\tau^\star(x)]
\prod_{t=\tau}^T
\left(
1 - \alpha_t \pi^{(t)}(x)
\right)
\notag\\
&
+
\sum_{t=\tau}^T
\left[
\alpha_t \pi^{(t)}(x)c_t(x)
+
b_t(x)
+
\alpha_t b'_t(x)
\right]
\prod_{s=t+1}^T
(1- \alpha_s \pi^{(s)}(x))
\notag
\\
&
+
\sum_{t=\tau}^T
\alpha_t \left[
\epsilon_t
+
\epsilon'_t(x)
\right]
\prod_{s=t+1}^T
(1- \alpha_s \pi^{(s)}(x)) \, .
\label{Req}
\end{align}
We now bound each of the terms above to obtain our result.
We start with the martingale difference terms. By the Azuma--Hoeffding inequality given in Lemma \ref{AHbound}, with probability at least $1-\delta$ it holds that, for all $t$ with $\tau \leq t \leq T$,
\begin{align*}
\Bigg|
\sum_{s=\tau}^{t}
\alpha_s
\left[
\epsilon_s
+
\epsilon'_s(x)
\right]
\prod_{u=s+1}^{t}
(1- \alpha_u \pi^{(u)}(x))
\Bigg|
&
\leq
\sqrt{2\tau \sum_{s=1}^{t}
\left[ \epsilon_{\max}^2 \alpha^2_{s}\prod_{u=s+1}^{t}
(1- \alpha_u \pi_{\min}^{(u)})^2 \right] \log \left(\frac{2T \tau }{\delta}\right)}
\\
&
\leq
\frac{\epsilon_{\max} D_\alpha}{t^{(\gamma_\alpha-\gamma_{\pi})/2}}
\sqrt{ \tau\log \left( \frac{2T\tau}{\delta}\right)}\,
=: A_{t}.
\end{align*}
In the final inequality, we simplify the expression by applying Lemma \ref{prodbound} (noting that $(1-\alpha)\geq (1-\alpha)^2$). Here $D_\alpha$ is a constant only depending on $\gamma_\alpha$, $C_\alpha$, $\gamma_\pi$, $C_\pi$.
Given $A_{t}$ as defined above and recalling Lemma \ref{Lem:Atoa}, we can define
\[
a_t = \frac{A_t - A_{t-1}}{\alpha_t} + A_{t-1} \,
\]
and by Lemma \ref{Lem:Atoa},
\begin{equation} \label{aDef:Stuff}
a_t \leq
\frac{\epsilon_{\max} D'}{t^{(\gamma_\alpha-\gamma_{\pi})/2}}
\sqrt{ \tau\log \left( \frac{2T\tau}{\delta}\right)} \, ,
\end{equation}
where $D'$ is a constant depending only on $\gamma_\alpha$, $C_\alpha$, $\gamma_\pi$ and $C_{\pi}$.
We can also bound the other terms $c_t(x)$, $b_t(x)$ and $b'_t(x)$. For $c_t(x)$ we apply the assumed contraction property that is
\begin{equation}\label{cBound}
||c_t(\cdot)||_\infty=
\|
F_t R_t - F_t R^\star_t
\|_\infty
\leq
\beta
\| R_t - R^\star_t
\|_\infty \, .
\end{equation}
For $b_t(x)$, we have by the assumed Lipschitz property (see also Lemma \ref{RLem}) that
\begin{equation}\label{bstar}
||
R_t^\star - R^\star_{t+1}
||_\infty
\leq
K
\| P^{(t)} - P^{(t+1)}\|
\leq
K
\frac{C_P}{t^{\gamma_P}}
=: b^\star_t .
\end{equation}
For $b'_t(x)$, we have by the Adiabatic Theorem
and more specifically by Lemma \ref{lem7} that:
\begin{align*}
\left\|
\mathbb P( \hat x_t = \cdot |\mathcal F_{t-\tau})
-\pi^{(t)}(\cdot)
\right\|
&
\leq
\frac{1}{(1-\rho)^2}\frac{D_P}{t^{\gamma_P}}
+
\frac{4}{(1-\rho)^2}
\frac{\log T}{T^4}
+
\frac{1}{T^4} \,,
\end{align*}
where $
\tau := 4 {\log T }/{|\log \rho|}.
$
Thus
\begin{align}
|| b'_t(\cdot) ||_\infty
&
\leq 2R_{\max}
\big\|
\mathbb P( \hat x_t = \cdot |\mathcal F_{t-\tau})
-\pi^{(t)}(\cdot)
\big\|
\notag
\\
&
\leq
2R_{\max}
\left[
\frac{1}{(1-\rho)^2}\frac{D_P}{t^{\gamma_P}}
+
\frac{4}{(1-\rho)^2}
\frac{\log T}{T^4}
+
\frac{1}{T^4}
\right]
=: b'^\star_t \, .
\label{b_dash_star}
\end{align}
Given $a_t$ in \eqref{aDef:Stuff},
the bound for $c_t$ in \eqref{cBound} and the definitions of $b_t^\star$ in \eqref{bstar} and $b'^\star_t$ in \eqref{b_dash_star}, the equality \eqref{Req} now becomes the bound:
\begin{align*}
|R_{T+1}(x) - R^\star_{T+1}(x) |
\leq
&
|R_{\tau}(x) - R^\star_{\tau}(x)|
\prod_{t=\tau}^T
\left(
1 - \alpha_t \pi^{(t)}(x) \right)
\\
&
+ \sum_{t=\tau}^T
\alpha_t \beta
\| R_t - R^\star_t
\|_\infty
\prod_{s=t+1}^T
(1- \alpha_s \pi^{(s)}(x))
\\
&+
\sum_{t=\tau}^T
\left[
\alpha_t a_t
+
b_t^{\star}
+
\alpha_t b_t'^{\star}
\right]
\prod_{s=t+1}^T
(1- \alpha_s \pi^{(s)}(x)) \,.
\end{align*}
Notice that we have now removed the martingale terms, and have bounded the remaining terms using the Adiabatic Theorem (via Lemma \ref{lem7}). We now focus on re-introducing the $\beta$-contraction term.
By Lemma \ref{zboundLem}
\[
|R_{T+1}(x) - R^\star_{T+1}(x)|
\leq z_{T+1}
\]
where $z_{t}$ obeys the recursion
\begin{align*}
z_{t+1}
&
= (1-\alpha_t\pi^{(t)}(x)) z_t + \alpha_t \beta \pi^{(t)}(x) z_t
+
\alpha_t a_t
+
b^\star_t + \alpha_t b'^{\star}_t
\\
&
=
(1-\alpha_t (1-\beta ) \pi^{(t)}(x))
z_{t}
+
\alpha_t a_t
+
b^\star_t + \alpha_t b'^{\star}_t
\\
&
\leq
(1-\alpha_t (1-\beta ) \pi^{(t)}_{\min})
z_{t}
+
\alpha_t a_t
+
b^\star_t + \alpha_t b'^{\star}_t \, .
\end{align*}
for $t \geq \tau$
and $z_\tau = \| R_{\tau}(x) - R_{\tau}^\star(x) \|_\infty$.
Thus expanding this recursion using Lemma \ref{lem:zcomp}, we have that
\begin{align}\label{expander2}
&
\| R_{T+1}(x) - R_{T+1}^\star(x) \|_\infty
\notag
\\
&
\leq
\| R_{\tau}(x) - R_{\tau}^\star(x) \|_\infty
\prod_{t=\tau}^T (1- \alpha_t (1-\beta) \pi^{(t)}_{\min})
+
\sum_{t=\tau}^T
\left(
\alpha_t a_t + b^\star_t + \alpha_t b'^\star_t
\right)
\prod_{s=t+1}^T
(1- \alpha_s (1-\beta) \pi^{(s)}_{\min})\, .
\end{align}
We bound the above terms involving $a_t$, $b^\star_t$, $b'^\star_t$ by applying Lemma \ref{prodbound}. Specifically, recalling \eqref{aDef:Stuff},
\begin{align}\label{abound}
\sum_{t=\tau}^T \alpha_t a_t
\prod_{s=t+1}^T (1-\alpha_s (1-\beta ) \pi^{(s)}_{\min})
&
\leq
{\epsilon_{\max} D'}
\sqrt{\tau \log \left(
\frac{2T \tau }{\delta}
\right)}
\sum_{t=\tau}^T \
\frac{1}{t^{(\gamma_\alpha-\gamma_{\pi})/2}}
\alpha_t
\prod_{s=t+1}^T (1-\alpha_s (1-\beta ) \pi^{(s)}_{\min})
\notag
\\
&
=
\frac{\epsilon_{\max}{D_a} }{1-\beta}
\sqrt{\tau \log \left(
\frac{2T \tau }{\delta}
\right)}
\frac{1}{T^{(\gamma_\alpha-3\gamma_{\pi})/2}}
\end{align}
where $D_a$ is a constant depending on $\gamma_\alpha$, $C_\alpha$, $C_{\pi}$, and $\gamma_\pi$;
also, recalling \eqref{bstar} and again applying Lemma \ref{prodbound}
\begin{align}
\sum_{t=\tau}^T
b^\star_t
\prod_{s=t+1}^T
(1-\alpha_s(1-\beta)\pi^{(s)}_{\min})
&
\leq
{{C_P}K}
\sum_{t=\tau}^T
\frac{1}{t^{\gamma_P-\gamma_\alpha}}
\alpha_t
\prod_{s=t+1}^T (1-\alpha_s (1-\beta )\pi^{(s)}_{\min})
\notag
\\
&
\leq
{D_{b} K}
\frac{1}{T^{\gamma_P-\gamma_\alpha - \gamma_{\pi}}}
\label{b_bound}
\end{align}
where $D_{b}$ is a constant depending on $\gamma_\alpha$, $C_\alpha$, $\gamma_\pi$, $C_\pi$, $\gamma_P$ and $C_P$;
and
\begin{align}\label{b2bound}
&\sum_{t=\tau}^T
\alpha_t
b'^\star_t
\prod_{s=t+1}^T
(1-\alpha_s(1-\beta)\pi^{(s)}_{\min})
\notag
\\
&=
2R_{\max}
\sum_{t=\tau}^T
\left[
\frac{1}{(1-\rho)^2}\frac{D_P}{t^{\gamma_P}}
+
\frac{4}{(1-\rho)^2}
\frac{\log T}{T^4}
+
\frac{1}{T^4}
\right]\alpha_t
\prod_{s=t+1}^T
(1- \alpha_s (1-\beta) \pi^{(s)}_{\min})
\notag
\\
&
\leq
\frac{2 R_{\max}D_{b'}}{(1-\rho)^2(1-\beta)}\frac{ 1}{T^{\gamma_P-\gamma_{\pi}}}
+
\frac{8 R_{\max}}{(1-\rho)^2}
\frac{\log T}{T^3}
+
\frac{2R_{\max}}{T^3}\,,
\end{align}
where $D_{b'}$ depends on $\gamma_P$, $C_P$, $\gamma_\alpha$, $C_\alpha$, $\gamma_\pi$, and $C_\pi$.
Further, we can bound the term corresponding to forgetting the estimate at time $\tau$:
\begin{align}\label{initbound}
&
\| R_{\tau}(x) - R_{\tau}^\star(x) \|_\infty
\prod_{t=\tau}^T (1- \alpha_t (1-\beta) \pi^{(t)}_{\min})
\notag
\\
\leq
&
2 R_{\max}
\exp \left\{
- (1-\beta) \sum_{t=\tau}^T \alpha_t \pi^{(t)}_{\min}
\right\}
\notag
\\
\leq
&
2 R_{\max}
e^{(1-\beta)\tau}
\exp\left\{
-(1-\beta)\sum_{t=1}^T \frac{C_{\alpha} C_{\pi} }{t^{\gamma_\alpha+ \gamma_\pi}}
\right\}
\notag
\\
\leq
&
2 R_{\max}
e^{(1-\beta)\tau}
\exp\left\{
-C_{\alpha} C_{\pi}(1-\beta)
(T^{1-\gamma_\alpha-\gamma_\pi}-1)/(1-\gamma_\alpha-\gamma_\pi)
\right\}
\notag
\\
\leq
&
2 R_{\max}
e^{(1-\beta)\tau}
\exp\left\{
-
(T^{1-\gamma_\alpha-\gamma_\pi}-1)/(1-\gamma_\alpha-\gamma_\pi)
\right\}\, .
\end{align}
Above we apply the bound $(1-z) \leq e^{-z}$; we add and subtract, and then bound, the first $\tau$ terms in the summation; we apply Lemma \ref{lemma:alpha_sum}; and we then simplify by observing that $C_\alpha$, $C_\pi$, and $(1-\beta)$ are all less than $1$.
Thus applying the above inequalities \eqref{abound}, \eqref{b_bound}, \eqref{b2bound} and \eqref{initbound} to \eqref{expander2} gives the bound:
\begin{align*}
\| R_{T+1}(x) - R_{T+1}^\star(x) \|_\infty
\leq
&
2 R_{\max}
e^{(1-\beta)\tau}
\exp\left\{
-
(T^{1-\gamma_\alpha-\gamma_\pi}-1)/(1-\gamma_\alpha-\gamma_\pi)
\right\}
\\
&
+
\frac{{D_a} }{1-\beta}
\sqrt{\tau \log \left(
\frac{2T \tau }{\delta}
\right)}
\frac{1}{T^{(\gamma_\alpha-3\gamma_{\pi})/2}}
\\
&
+
\frac{{D_{b}}K}{(1-\beta)}
\frac{1}{T^{\gamma_P-\gamma_\alpha - \gamma_{\pi}}}
\\
&
+
\frac{2 R_{\max}D_{b'}}{(1-\rho)^2(1-\beta)}\frac{ 1}{T^{\gamma_P-\gamma_{\pi}}}
+
\frac{8 R_{\max}}{(1-\rho)^2}
\frac{\log T}{T^3}
+
\frac{2R_{\max}}{T^3}\, ,
\end{align*}
as required.
\end{proof}
\subsection{Lemmas for Theorem \ref{thrm:asych}}\label{sec:thrmlem}
We now list the additional lemmas that are required for Theorem \ref{thrm:asych}. We only state these lemmas below; we restate and then prove them in Section \ref{app:4} of the appendix.
Lemma \ref{lem:zcomp} is a standard expansion commonly used in stochastic approximation.
\begin{lemma}\label{lem:zcomp}
Suppose that $z_n$ is a positive real valued sequence such that
\[
z_{n+1} \leq z_n (1 - a_n) + c_n
\]
then
\[
z_{n+1} \leq z_0 \prod_{k=0}^n (1-a_k) + \sum_{j=0}^n c_j \prod_{k=j+1}^n (1 - a_k)\, .
\]
\end{lemma}
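As a quick sanity check (not part of the paper), the expansion in Lemma \ref{lem:zcomp} can be verified numerically when the recursion holds with equality; the sequences below are random and invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
a = rng.uniform(0.0, 0.5, size=n + 1)    # a_k, k = 0..n, inside (0, 1)
c = rng.uniform(0.0, 1.0, size=n + 1)    # c_k
z0 = 2.0

# Run the recursion with equality: z_{k+1} = z_k (1 - a_k) + c_k.
z = z0
for k in range(n + 1):
    z = z * (1.0 - a[k]) + c[k]

# Closed-form expansion from the lemma.
prods = np.ones(n + 2)                   # prods[k] = prod_{j=k}^{n} (1 - a_j)
for k in range(n, -1, -1):
    prods[k] = prods[k + 1] * (1.0 - a[k])
expansion = z0 * prods[0] + sum(c[j] * prods[j + 1] for j in range(n + 1))

print(abs(z - expansion))                # agrees to floating-point accuracy
```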
Lemma \ref{lemma:alpha_sum} is a well-known integral bound.
\begin{lemma}\label{lemma:alpha_sum}
For $\gamma \in (0,\infty)$ and integers $1 \leq s \leq t$, we have
\[
\frac{t^{1-\gamma} - s^{1-\gamma}}{ 1-\gamma }
\leq \sum_{n=s}^t \frac{1}{n^{\gamma}}
\leq
\frac{1}{s^{\gamma}} + \frac{t^{1-\gamma} - s^{1-\gamma} }{ 1-\gamma } \, ,
\]
where for $\gamma = 1$, we define
$\;
{(t^{1-\gamma} - 1)}/{(1-\gamma )} := \log t
$.
\end{lemma}
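Again as an illustrative check (not from the paper), the bounds of Lemma \ref{lemma:alpha_sum} for $\gamma \neq 1$ can be tested numerically:

```python
def bounds_hold(gamma, s, t):
    """Check sum_{n=s}^{t} n^{-gamma} against the lemma's bounds (gamma != 1)."""
    total = sum(1.0 / n ** gamma for n in range(s, t + 1))
    integral = (t ** (1 - gamma) - s ** (1 - gamma)) / (1 - gamma)
    # lower bound is the integral; upper bound adds the first term s^{-gamma}
    return integral <= total <= 1.0 / s ** gamma + integral

print(all(bounds_hold(g, 2, 500) for g in (0.3, 0.7, 1.5)))
```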
The following lemma is a commonly used bound that combines Lemma \ref{lem:zcomp} and Lemma \ref{lemma:alpha_sum}.
\begin{lemma}\label{prodbound}
For positive sequences $a_t$ and $b_t$ with $a_t \in(0,1)$ and $b_t$ decreasing
\[
\sum_{t=1}^T a_t b_t \prod_{s=t+1}^T (1-a_s)
\leq
b_{T/2}
+
e^{-\sum_{t=T/2}^T a_t}
\sum_{t=1}^{T/2} a_t b_t
\]
Moreover if $a_t = \frac{C_a}{t^{\gamma_a}}$ and $b_t = \frac{C_b}{t^{\gamma_b}}$ for $\gamma_a \in(0,1)$ and $\gamma_b \geq 0$ then
\[
\sum_{t=1}^T a_t b_t \prod_{s=t+1}^T (1-a_s)
\leq
\frac{D_{a,b}}{T^{\gamma_b}}
\, ,
\]
where $D_{a,b}$ is a constant depending on $C_a,C_b,\gamma_a$ and $\gamma_b$.
\end{lemma}
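The second claim of Lemma \ref{prodbound} can likewise be probed numerically; the sketch below (with invented constants $C_a$, $C_b$, $\gamma_a$, $\gamma_b$) checks that the weighted sum scales like $T^{-\gamma_b}$.

```python
import numpy as np

def weighted_sum(T, Ca=0.5, ga=0.6, Cb=1.0, gb=0.8):
    """Compute sum_{t=1}^T a_t b_t prod_{s=t+1}^T (1 - a_s) exactly."""
    t = np.arange(1, T + 1)
    a = Ca / t ** ga
    b = Cb / t ** gb
    # suffix[i] = prod of (1 - a) over 0-based indices i..T-1, so the
    # tail product for time t = i + 1 is suffix[i + 1] (suffix[T] = 1).
    suffix = np.ones(T + 1)
    for i in range(T - 1, -1, -1):
        suffix[i] = suffix[i + 1] * (1.0 - a[i])
    return float(np.sum(a * b * suffix[1:]))

Ts = [200, 400, 800, 1600]
vals = [weighted_sum(T) for T in Ts]
scaled = [v * T ** 0.8 for v, T in zip(vals, Ts)]   # should stay bounded
print(scaled)
```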
The following lemma is a straightforward converse to Lemma \ref{lem:zcomp} and appears to be less commonly used.
\begin{lemma}\label{Lem:Atoa}
For any sequence $A_T$ we can write $A_T$ as
\[
A_T
=
\sum_{t=1}^T
a_t \alpha_t
\prod_{s=t+1}^T (1-\alpha_s)
\]
where
\[
a_t = \frac{A_t - A_{t-1}}{\alpha_t} + A_{t-1}\,.
\]
Thus if $A_t = C_A/ t^{\gamma_A}$ and $\alpha_t = C_\alpha/ t^{\gamma_\alpha}$ for $\gamma_{\alpha} \in (0,1]$ then
\[
a_t \leq \frac{D}{t^{\gamma_A}}
\]
where $D$ is a positive constant depending on $\gamma_{\alpha}$, $\gamma_A$, $C_{\alpha}$, $C_A$.
\end{lemma}
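The identity in Lemma \ref{Lem:Atoa} is a telescoping argument and can be checked directly on random data; here we assume the convention $A_0 := 0$, which is not stated explicitly in the lemma.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 40
alpha = rng.uniform(0.1, 0.9, size=T + 1)          # alpha_t at indices 1..T
A = np.concatenate(([0.0], rng.normal(size=T)))    # A_0 := 0 (assumed), A_1..A_T

a = (A[1:] - A[:-1]) / alpha[1:] + A[:-1]          # a_t for t = 1..T

total = 0.0
for t in range(1, T + 1):
    tail = np.prod(1.0 - alpha[t + 1:T + 1])       # prod_{s=t+1}^{T} (1 - alpha_s)
    total += a[t - 1] * alpha[t] * tail

print(abs(total - A[T]))                           # the identity holds exactly
```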
Lemma \ref{zboundLem}, below, is also a slightly less standard bound involving the expansion from Lemma \ref{lem:zcomp}.
\begin{lemma}\label{zboundLem}For positive sequence $z_t$ and $\alpha_t \in (0,1)$ if
\[
z_{t+1}
\leq
z_0 \prod_{s=1}^t (1-\alpha_s)
+
\sum_{s=1}^t \beta z_s
\prod_{u=s+1}^t (1-\alpha_u)
+
\sum_{s=1}^t c_s
\prod_{u=s+1}^t (1-\alpha_u)
\]
then $z_t \leq \tilde z_t$ where $\tilde z_t$ solves the recursion
\[
\tilde z_{t+1} = (1-\alpha_t (1-\beta)) \tilde z_t + c_t
\]
with $\tilde z_0\geq z_0$.
\end{lemma}
Lemma \ref{AHbound} is a shifted Azuma--Hoeffding bound which can be found in \cite{qu2020finite}.
\begin{lemma}\label{AHbound}
If $\epsilon_t$ is adapted with $\mathbb E[ \epsilon_t | \mathcal F_{t-\tau}]=0$ and $|\epsilon_t| \leq \epsilon_{\max}$ then with probability greater than $1-\delta$ it holds that, for all $t$ with $\tau \leq t \leq T$,
\[
\left|
\sum_{s=\tau}^t
\alpha_s
\epsilon_s
\prod_{u=s+1}^t (1-\alpha_u \pi_u)
\right|
\leq
\sqrt{
2 \tau
\left[\sum_{s=1}^t \epsilon^2_{\max} \alpha_s^2
\prod_{u=s+1}^t
(1-\alpha_u\pi_u)^2
\right]
\log \bigg(\frac{2\tau T}{\delta}\bigg)
} \, .
\]
\end{lemma}
Lemma \ref{lem7} is a mixing time result that applies the adiabatic theorem in the context of asynchronous stochastic approximation.
\begin{lemma}\label{lem7}
For $\tau = 4 \frac{\log T }{|\log \rho|}$ and $t$ such that $\tau \leq t \leq T$ it holds that
\[
\|
\mathbb P( \hat x_t = \cdot | \mathcal F_{t-\tau})
-
\pi^{(t)}(\cdot)
\|
\leq
\frac{1}{(1-\rho)^2}\frac{D_P}{t^{\gamma_P}}
+
\frac{4}{(1-\rho)^2}
\frac{\log T}{T^4}
+
\frac{1}{T^4}\, ,
\]
where $D_P =C_P 2^{\gamma_P}$.
\end{lemma}
\begin{lemma}\label{Lem:Bound}
The sequences $R_t$, defined in \eqref{SA:update}, and $F_tR_t$
are bounded in $t$.
\end{lemma}
\section{Application to Tabular Reinforcement Learning}
We now show how the above results apply in the context of temporal difference learning and also $Q$-learning.
\subsection{Temporal Difference Learning}\label{sec:TD}
We consider tabular temporal difference learning as described in Section \ref{TabTD}.
We assume the transition matrix evolves in time. We let $P^{(t)}$ be the transition matrix of the $t$-th transition.
We assume
\[
|| P^{(t+1)} - P^{(t)}||
\leq \frac{C_P}{t^{\gamma_P}} \, .
\]
We then seek to evaluate the fixed point equation:
\[
R^\star_t(x) = r(x) + \beta P^{(t)}R_t^\star(x) \, .
\]
Note that the operation $F_t$ such that
$
F_tR (x) = r(x) + \beta P^{(t)}R(x)
$
is a $\beta$-contraction (see Lemma \ref{lem:contraction}) and by definition $F_tR^\star_t = R^\star_t$. By Lemma \ref{RLem}
\[
\|
R^\star_t
-
R^\star_{t+1}\|_\infty
\leq \frac{\beta r_{\max}}{(1-\beta)^2}
\| P^{(t)} - P^{(t+1)}\| \leq
\frac{\beta r_{\max}}{(1-\beta)^2}
\frac{C_P}{t^{\gamma_P}}
=:
\frac{C_R}{t^{\gamma_R}} \,.
\]
We suppose that $(\hat x_t : t \in\mathbb Z_+)$ is a time-inhomogeneous Markov chain with transition matrix $P^{(t)}$ at time $t$. We consider the tabular temporal difference update:
\begin{align*}
R_{t+1}(\hat x_t) =
R_{t}(\hat x_t)
+
\alpha_t
\left[
r(\hat x_t)
+ \beta
R_t(\hat x_{t+1}) - R_t(\hat x_t)
\right]
\end{align*}
and $R_{t+1}(x) = R_t(x)$ for all $x\neq \hat x_t$. As before we assume
$
\alpha_t = \frac{C_{\alpha}}{t^{\gamma_\alpha}}
$
and $\pi^{(t)}$ the stationary distribution of $P^{(t)}$ satisfies
$
\pi^{(t)}_{\min} \geq \frac{C_{\pi}}{t^{\gamma_{\pi}}} \, .
$
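As a hypothetical numerical illustration (all concrete values below are invented), the tabular TD update above can be simulated on a slowly drifting chain satisfying the assumptions, here with $\gamma_P = 1.2$, $\gamma_\alpha = 0.6$ and $\gamma_\pi = 0$ (the stationary distributions are bounded away from zero), so that the adiabatic condition $\gamma_P > \gamma_\alpha + \gamma_\pi$ holds.

```python
import numpy as np

rng = np.random.default_rng(3)
beta = 0.8
r = np.array([1.0, -0.5, 0.2])

P0 = np.array([[0.7, 0.2, 0.1],
               [0.1, 0.7, 0.2],
               [0.2, 0.2, 0.6]])
P1 = np.array([[0.3, 0.4, 0.3],
               [0.4, 0.3, 0.3],
               [0.3, 0.3, 0.4]])

def P_at(t, gamma_P=1.2):
    # w_{t+1} - w_t = O(t^{-gamma_P}), so ||P^{(t+1)} - P^{(t)}|| = O(t^{-gamma_P})
    w = 1.0 - t ** (1.0 - gamma_P)
    return (1.0 - w) * P0 + w * P1

def R_star(P):
    return np.linalg.solve(np.eye(3) - beta * P, r)

R = np.zeros(3)
x = 0
T = 80000
for t in range(1, T + 1):
    Pt = P_at(t)
    alpha = 0.5 / t ** 0.6                  # gamma_alpha = 0.6
    x_next = rng.choice(3, p=Pt[x])
    R[x] += alpha * (r[x] + beta * R[x_next] - R[x])
    x = x_next

err = float(np.max(np.abs(R - R_star(P_at(T)))))
print(err)                                  # the iterate tracks the moving fixed point
```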
Identifying the above terms with the statement of Theorem \ref{thrm:asych} gives the following result.
\begin{theorem}
With probability greater than $1-\delta$,
\begin{align*}
\| R_{T+1}(x) - R_{T+1}^\star(x) \|_\infty
\leq
&
2 R_{\max}
e^{(1-\beta)\tau}
\exp\left\{
-
(T^{1-\gamma_\alpha-\gamma_\pi}-1)/(1-\gamma_\alpha-\gamma_\pi)
\right\}
\\
&
+
\frac{{D_a} }{1-\beta}
\sqrt{\tau \log \left(
\frac{2T \tau }{\delta}
\right)}
\frac{1}{T^{(\gamma_\alpha-3\gamma_{\pi})/2}}
\\
&
+
\frac{{D_{b}}\beta r_{\max}}{(1-\beta)^3}
\frac{1}{T^{\gamma_P-\gamma_\alpha - \gamma_{\pi}}}
\\
&
+
\frac{2 R_{\max}D_{b'}}{(1-\rho)^2(1-\beta)}\frac{ 1}{T^{\gamma_P-\gamma_{\pi}}}
+
\frac{8 R_{\max}}{(1-\rho)^2}
\frac{\log T}{T^3}
+
\frac{2R_{\max}}{T^3}\,.
\end{align*}
\end{theorem}
\subsection{Q-Learning}
We now consider $Q$-learning, which is a variant of temporal difference learning. Here we consider the fixed point:
\[
Q^\star_t = G^{(t)} Q^\star_t
\]
where
\[
G^{(t)}Q(s,a) = r(s,a) + \beta \mathbb E^{(t)} \left[ \max_{a' \in\mathcal A} Q(\hat s,a') \right] \, .
\]
We perform the $Q$-learning update with respect to the time-inhomogeneous Markov chain $(\hat s^{(t)}, \hat a^{(t)})$ with transition probabilities $P^{(t)}$ at time $t$. That is
\[
Q_{t+1} (\hat s_{t}, \hat a_{t})
=
Q_t(\hat s_{t}, \hat a_{t})
+
\alpha_t
\left[
r(\hat s_t,\hat a_t)
+ \beta
\max_{a\in\mathcal A} Q_t(\hat s_{t+1},a) - Q_t(\hat s_t, \hat a_t)
\right] \,,
\]
and $Q_{t+1}(s,a) = Q_t(s,a)$ for all $(s,a)$ such that $s\neq \hat s_t$ or $a \neq \hat a_t$. Then as above
we assume
$
\alpha_t = \frac{C_{\alpha}}{t^{\gamma_\alpha}}
$
and $\pi^{(t)}$ the stationary distribution of $P^{(t)}$ satisfies
$
\pi^{(t)}_{\min} \geq \frac{C_{\pi}}{t^{\gamma_{\pi}}} \, .
$
We assume
\[
|| P^{(t+1)} - P^{(t)}||
\leq \frac{C_P}{t^{\gamma_P}} \, .
\]
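For a hypothetical concrete illustration (everything below is invented, and for simplicity the kernel is held fixed, i.e.\ the time-homogeneous special case of the above), asynchronous tabular $Q$-learning with uniformly random exploration can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(4)
beta = 0.8
n_s, n_a = 3, 2
r = rng.uniform(0.0, 1.0, size=(n_s, n_a))
# Invented kernel P[s, a] over next states (held fixed for simplicity).
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))

def bellman_opt(Q):
    V = Q.max(axis=1)
    return r + beta * np.einsum('sax,x->sa', P, V)

Q_star = np.zeros((n_s, n_a))
for _ in range(500):                 # value iteration for the fixed point Q*
    Q_star = bellman_opt(Q_star)

Q = np.zeros((n_s, n_a))
s = 0
for t in range(1, 120001):
    a = int(rng.integers(n_a))       # uniformly random exploration
    s_next = rng.choice(n_s, p=P[s, a])
    alpha = 0.5 / t ** 0.6
    Q[s, a] += alpha * (r[s, a] + beta * Q[s_next].max() - Q[s, a])
    s = s_next

print(np.max(np.abs(Q - Q_star)))
```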
By Lemma \ref{RLem} we have that
\[
\|
Q^\star_t - Q^\star_{t+1}
\|
\leq \frac{\beta r_{\max}}{(1-\beta)^2}|| P^{(t+1)} - P^{(t)} || \, .
\]
Identifying the above terms with the statement of Theorem \ref{thrm:asych}, we can also obtain the following analogous result for $Q$-learning:
\begin{theorem}
With probability greater than $1-\delta$,
\begin{align*}
\| Q_{T+1}- Q_{T+1}^\star \|_\infty
\leq
&
2 R_{\max}
e^{(1-\beta)\tau}
\exp\left\{
-
(T^{1-\gamma_\alpha-\gamma_\pi}-1)/(1-\gamma_\alpha-\gamma_\pi)
\right\}
\\
&
+
\frac{{D_a} }{1-\beta}
\sqrt{\tau \log \left(
\frac{2T \tau }{\delta}
\right)}
\frac{1}{T^{(\gamma_\alpha-3\gamma_{\pi})/2}}
\\
&
+
\frac{{D_{b}}\beta r_{\max}}{(1-\beta)^3}
\frac{1}{T^{\gamma_P-\gamma_\alpha - \gamma_{\pi}}}
\\
&
+
\frac{2 R_{\max}D_{b'}}{(1-\rho)^2(1-\beta)}\frac{ 1}{T^{\gamma_P-\gamma_{\pi}}}
+
\frac{8 R_{\max}}{(1-\rho)^2}
\frac{\log T}{T^3}
+
\frac{2R_{\max}}{T^3}\,.
\end{align*}
\end{theorem}
\section{Conclusions and Future Work}
As discussed in the introduction, theoretical results usually assume that policy evaluation algorithms are trained with respect to a fixed reference policy. Consequently, mixing time assumptions have generally been made in advance. This leads to important results. However, in practice this is very rarely the case: to make even simple algorithms converge, the reference policy of interest is usually changed over time.
This work takes a more in-depth look at the effect of mixing times and adiabatic properties, which affect the ability of reinforcement learning algorithms to converge on a target process as it changes in time. We prove a new mixing time result, which could be of independent interest, and with this we highlight issues concerning the conditioning of the stationary distribution and effects that occur from changes in the learning target.
The results proven give a better indication of the robustness of stochastic approximation and temporal difference learning to changes in the transition probability distribution. From this we can see that the key condition for adiabatic TD-learning is that $\gamma_P > \gamma_\alpha + \gamma_\pi$, i.e.\ the changes in $P$ decay faster than the product of the learning rate and the probability of the least likely equilibrium state. Similarly, the condition $\gamma_P-\gamma_\alpha-\gamma_\pi > (\gamma_\alpha-3\gamma_\pi)/2$ is required for the convergence rates to be the same as for the stationary asynchronous stochastic approximation scheme.
There are certainly a number of directions in which this work can be generalized and developed. In this paper we only consider the policy evaluation process; we separate the changes in the transition matrix $P$ from the updates in the temporal difference learning algorithm. It is of course possible to allow the changes in $P$ to depend on the convergence of the temporal difference learning algorithm, as in actor-critic algorithms. Such results may depend on the specific form in which the update of $P$ depends on the temporal difference method, and techniques such as the use of belief states (which are required for a Markov description in the POMDP setting) are likely necessary for such an analysis. This introduces technical difficulties which would complicate the analysis. Nonetheless, one would anticipate conclusions similar to the results proved in this paper.
In this paper we consider stochastic approximation and tabular temporal difference learning. However, it is also important to consider function approximation in reinforcement learning. The Adiabatic Theorem, Theorem \ref{thrm:adiabatic}, is applicable in the case of temporal difference learning with linear function approximation. The set of techniques required are somewhat different to those applied here. In particular, we need to use online convex optimization methodology rather than stochastic approximation bounds.
Given the differences in techniques, we leave this work on adiabatic bounds in online convex optimization and linear temporal difference learning as forthcoming work.
\section{Introduction}
The order parameter symmetry of high-$T_c$ compounds
is believed to be predominantly of $d$-wave type.
However a subdominant $s$-wave component of the order
parameter may also occur, especially in materials
with orthorombic distortions.
One of such compounds is YBCO, exhibiting
strong structural distortion and a substantial
anisotropy in the London penetration depth in the $a$-$b$
plane.\cite{Basov95}
Raman scattering\cite{Limonov98} provided evidence
for a $5\%$ admixture of $s$-wave component
while thermal conductivity measurements in rotating
magnetic field\cite{Aubin97} placed an upper limit
of $10\%$.
Angle-resolved photoemission spectroscopy\cite{Lu2001}
on monocrystalline YBCO gave the ratio of 1.5 for gap amplitudes
in the $a$ and $b$ directions in the $CuO_2$.
Measurements of a Josephson current between monocrystalline
\textrm{YBa$_2$Cu$_3$O$_7$} and $s$-wave Nb showed that
the obtained anisotropy could be explained
by a $83\%$ $d$-wave with a $17\%$ $s$-wave
component.\cite{Smilde2005}
In another experiment on YBCO/Nb junction rings the $s$ to $d$ gap
ratio in optimally doped YBCO was estimated to be 0.1.\cite{Kirtley2006}
Inelastic neutron scattering on monocrystalline and untwinned
samples of YBCO led to magnetic susceptibilities
with intensities and line shapes
breaking the tetragonal symmetry.\cite{Mook2000,Stock2004,Stock2005,Hinkov2004}
It was shown that these data may be interpreted
within an anisotropic band model with an order parameter of mixed
$d$ and $s$ symmetry.\cite{Sigrist2006}
Superconducting states with mixed symmetry are also considered
in other classes of compounds, e.g.
in the recently discovered ferropnictides.\cite{Hirschfeld2009}
The effects of dilute concentrations
of magnetic and nonmagnetic point defects
on a BCS superconductor of pure $s$ or $d$-wave symmetry
were intensively studied in the past and are well known.
In a $d$-wave superconductor with lines of order parameter
nodes any amount of disorder induces a nonzero density
of states at the Fermi level. In an $s$-wave system only
magnetic impurities change the response of the superconducting
state. For sufficiently strong coupling between
the impurity and the conduction band bound states may appear
in the energy gap.\cite{Borkowski1994}
In an earlier work on nonmagnetic
impurities in a {$d+s$}-wave superconductor it was shown
that in the unitary limit
a nonzero density of states (DOS) at the Fermi level
appears above certain critical impurity concentration,
depending on the size of an $s$-wave component.\cite{kim}
However the low-energy DOS is mostly featureless since
the $s$-wave component prevents a buildup of states
due to nonmagnetic scattering.
In contrast the presence of magnetic impurities
in a superconductor with an $s$-wave component
may result in sharp peaks in the low-energy DOS
for small concentration of defects, provided
the energy scale associated with the impurity resonance
is small.
\section{Model and Results}
We consider the order parameter of $d+s$ symmetry on a cylindrical
Fermi surface.
\begin{equation}
\Delta(\hat k) = \Delta_s+e^{i\theta}\Delta_d(\hat k) \, ,
\end{equation}
where $\Delta_s$ and $\Delta_d$ are amplitudes of $s$- and $d$-wave component
respectively. We assume $\theta=0$ and $\Delta_s \ll \Delta_d$.
The superconductor is treated in the BCS approximation.
The magnetic scatterer is modelled as
an Anderson impurity treated within the slave boson mean field
approach.\cite{Borkowski2008}
The low energy physics is dominated by the presence
of strongly scattering impurity resonance. The self-consistent
self-energy equations describing the interplay between the superconducting
and magnetic degrees of freedom have the following form,
\begin{equation}
\label{omtil}
\widetilde{\omega} = \omega + {\frac{nN}{2\pi N_0}}
\Gamma \frac{\bar{\omega}}{(-\bar{\omega}^2+\epsilon^2_f)} \quad ,
\end{equation}
\begin{equation}
\label{dtil}
\widetilde{\Delta} = \Delta_s + {\frac{nN}{2\pi N_0}}
\Gamma \frac{\bar{\Delta}}{(-\bar{\omega}^2+\epsilon^2_f)} \quad ,
\end{equation}
\begin{equation}
\bar{\omega} = \omega + \Gamma
\langle \frac{\widetilde{\omega}}{\left(\widetilde{\Delta}^2(k)
- {\widetilde{\omega}}^2\right)^{1/2}}\rangle \quad ,
\end{equation}
\begin{equation}
\bar{\Delta}=\Gamma
\langle\frac{\widetilde{\Delta}}
{\left(\widetilde{\Delta}^2(k)
- {\widetilde{\omega}}^2\right)^{1/2}}
\rangle \quad ,
\end{equation}
where $\widetilde{\omega} (\bar{\omega}), \widetilde{\Delta}
(\bar{\Delta})$ is the renormalized frequency and order parameter
of conduction electrons (impurity) respectively.
We assume $\Delta_d/D=0.01$,
where $2D$ is the bandwidth of the conduction electron band.
In the equations above $\Gamma$ is the hybridization energy
between the impurity and the conduction band and
$\epsilon_f$ is the resonant level energy. Here we assume
constant density of states in the normal state
$N_0=1/2D$ and do the calculations for a nondegenerate
impurity, $N=2$.
Brackets denote average over the Fermi surface.
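As an illustrative sketch only (all parameter values below are invented, in units of the half-bandwidth $D=1$), Eqs.~(\ref{omtil})--(\ref{dtil}) together with the two unlabeled equations for $\bar\omega$ and $\bar\Delta$ can be solved by damped fixed-point iteration. To avoid the branch choice of the square root we evaluate at an imaginary frequency $\omega = iu$, and we assume that only the $s$-wave part of the gap is renormalized, $\widetilde\Delta(k) = \widetilde\Delta + \Delta_d\cos 2\varphi$ (the Fermi-surface average of the $d$-wave part vanishes for isotropic impurity scattering).

```python
import numpy as np

# Invented parameters, units of the half-bandwidth D = 1 (a sketch only).
Delta_d, Delta_s = 0.01, 0.002
Gamma, eps_f = 0.001, 1.0e-4
n_imp, N, N0 = 0.005, 2, 0.5
c = n_imp * N / (2.0 * np.pi * N0)       # prefactor nN / (2 pi N_0)

phi = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)

def fs_avg(f):                            # Fermi-surface (angular) average
    return float(np.mean(f))

u = 0.1 * Delta_d                         # imaginary frequency omega = i*u
u_t, d_t = u, Delta_s                     # tilde (conduction-electron) quantities

def bar_quantities(u_t, d_t):
    gap_k = d_t + Delta_d * np.cos(2.0 * phi)   # assumed: d-part unrenormalized
    root = np.sqrt(gap_k ** 2 + u_t ** 2)       # real on the imaginary axis
    u_b = u + Gamma * fs_avg(u_t / root)        # bar (impurity) quantities
    d_b = Gamma * fs_avg(gap_k / root)
    return u_b, d_b

for _ in range(2000):                     # damped fixed-point iteration
    u_b, d_b = bar_quantities(u_t, d_t)
    denom = u_b ** 2 + eps_f ** 2
    u_t = 0.5 * u_t + 0.5 * (u + c * Gamma * u_b / denom)
    d_t = 0.5 * d_t + 0.5 * (Delta_s + c * Gamma * d_b / denom)

# self-consistency residuals at the final iterate
u_b, d_b = bar_quantities(u_t, d_t)
denom = u_b ** 2 + eps_f ** 2
res_u = abs(u_t - (u + c * Gamma * u_b / denom))
res_d = abs(d_t - (Delta_s + c * Gamma * d_b / denom))
print(res_u, res_d, u_t / u)
```

The damping factor of $1/2$ is a standard stabilization choice for such coupled equations; retarded (real-frequency) quantities would additionally require a careful branch choice for the square root.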
Initial results for the density of states of this system
were presented in an earlier paper.\cite{Borkowski2008a}
For small impurity concentration $n$, such that $\Delta_s$
and $\Delta_d$ are not significantly affected,
there are two peaks located symmetrically
near the gap center, provided $\epsilon_f \ll \Gamma \ll \Delta_s$.
For larger $n$ the two peaks
merge into one peak centrally located at the Fermi energy.
\small
\begin{figure}[th]
\centering
\includegraphics[width=20pc]{fig1-dresden-revised.eps}
\caption{Logarithm of the density of states at the Fermi level
as a function of impurity concentration for several values
of the ratio $\Delta_s/\Delta_{d0}$. The resonant impurity level
$\epsilon_f$ is close to the Fermi level. $\Gamma/D$ is fixed
at 0.001.
}
\label{fig1}
\end{figure}
\normalsize
\small
\begin{figure}[th]
\centering
\includegraphics[width=20pc]{fig2-dresden-revised.eps}
\caption{
Position of the peak in the conduction electron DOS as a function
of impurity concentration for $d+s$ superconductor for several
values of the resonant level energy. Energy is scaled by half
of the conduction electron bandwidth $D$ and $\Gamma/D=0.001$.
The suppression of $\Delta_s$ and $\Delta_d$
was not taken into account.}
\label{fig2}
\end{figure}
\normalsize
Fig. 1 shows the dependence of the density of states $N(0)$
at $E_F$ as a function of impurity concentration.
At small $n$, $N(0)$ is exponentially small,
$N(0)/N_0 \sim (\Delta_s/\Delta_d)
\exp(-\alpha (\Delta_s/\Delta_d)^2/n)$, where
$\alpha$ is a numerical factor.
The critical concentration $n_0$ for the discontinuous
transition to $N(0)/N_0 \simeq \Delta_s/\Delta_d$
is a quadratic function of $\Delta_s/\Delta_d$.
For $n > n_0$, $N(0)$ approaches DOS
of a $d$-wave superconductor in both the magnitude and its
functional dependence on $n$.
These relations are valid when $\epsilon_f \ll \Gamma, \Delta_s$.
Qualitatively similar scaling of $N(0)$ as a function
of $\Delta_s/\Delta_d$ in presence
of nonmagnetic impurities in the unitary
limit was obtained in ref. \cite{kim}.
The difference between magnetic and nonmagnetic
defects in a superconductor with $\Delta_s/\Delta_d \ll 1$
is the character of the transition from the exponentially small $N(0)$
to finite $N(0)$ in the strong scattering limit.
In contrast to nonmagnetic impurities, the transition caused
by resonantly scattering magnetic impurities is discontinuous,
and there are two sharp peaks
of $N(\omega)$ on both sides of the transition.\cite{Borkowski2008a}
The position of peaks as a function of impurity concentration
for different values of $\epsilon_f$ is shown in Fig.~2.
These peaks strongly alter the low-energy and low-frequency
response and may be detected in thermodynamic or transport measurements.
The increase of $\Delta_s$ shifts the resonances towards $E_F$
and makes them narrower.
The $d$-wave component of the superconducting order
parameter is more sensitive to pair breaking
by magnetic impurities in the limit $T \ll T_K$,
where $T_K=\sqrt{\Gamma^2+\epsilon_f^2}$,
than the $s$-wave part.
Depending on the relative size of $\Delta_d$ and $\Delta_s$
there may be another impurity
transition for larger $n$, when the order
parameter nodes disappear due to vanishing of the $d$-wave
component and a full
gap opens up. The impurity peak is then split
and $N(0)$ falls to zero again.
However, a detailed description of this possibility requires
a careful analysis involving four energy scales:
$\Delta_d$, $\Delta_s$, $\Gamma$, and $\epsilon_f$.
Can such a transition be observed experimentally?
The transition may be tuned either by varying impurity concentration
or the ratio $\Delta_s/\Delta_d$.
The density of states of the normal state
at $E_F$, $N_0$, also has an effect.
It appears in equations~\ref{omtil} and \ref{dtil}.
One should bear in mind, however, that
both the superconducting transition temperature
and the impurity resonance energy scale $T_K$
are exponentially sensitive to changes of $N_0$.
Experiments conducted at very low temperatures
may give different signatures of low-energy behavior
depending on the location of the system on the phase diagram
relative to the impurity critical point.
In a $d+s$-superconductor in the limit
of high concentration
of point defects the $d$-wave component vanishes
while the $s$-wave part of the order parameter remains
unaffected.
The situation is different in presence of magnetic
scatterers. If the $s$-wave component is small
and the energy scale of the resonance due to impurity
scattering is at most of the order $\Delta_s$,
increasing impurity concentration
may drive $\Delta_s \rightarrow 0$ while
$\Delta_d$ will be reduced and finite.
This follows from the fact that the largest pair breaking
occurs when the energy scales of the impurity resonance
and the order parameter are comparable.
If $\Delta_d \gg T_K$,
the rate of suppression
of $\Delta_d$ with increasing $n$ is small.
The $d+s$ superconductor with a subdominant $s$-wave
component doped with magnetic impurities
has a different phase diagram in the $T$-$n$ plane
compared to the same superconductor with nonmagnetic defects.
If the impurity resonance scale $T_K$ is comparable to $\Delta_s$
and $\Delta_s \ll \Delta_d$, the $s$-wave component
may vanish at $T \simeq T_K$.
\section{Conclusions}
The $T=0$ impurity transition
discussed in this work
may be detected in the low temperature limit.
While the slave boson mean field formalism used in this paper
cannot be applied at temperatures exceeding $T_K$,
we may still qualitatively describe the expected behavior of the system
as a function of temperature.
At finite but small impurity concentration
there may be as many as four phase transitions as a function
of temperature: normal to $d$-wave, $d$-wave to $d+s$-wave,
$d+s$ to $d$-wave, and $d$-wave to $d+s$-wave.
Due to the difference in magnitudes,
the small $\Delta_s$ may
be driven to zero with increasing $n$ faster than $\Delta_d$.
Nonmonotonic behavior around $T \sim T_K$
is a consequence of strong scattering by the resonance
state forming on the impurity site.
As $T \rightarrow 0$, the impurity scattering
becomes weaker and the $s$-wave component may appear again.
Whether this particular scenario is realized depends
on the relative size of energy scales:
$\Delta_s/\Delta_d$, $\Delta_s/T_K$,
and $\epsilon_f/\Gamma$.
\begin{acknowledgement}
Some of the computations were performed in the Computer Center
of the Tri-city Academic Computer Network in Gdansk.
\end{acknowledgement}
\section{Introduction}
In this paper we present multiscale models of cancer tumor invasion,
and the scientific-computing methodology for solving the model
equations. The specific model treated here has components at the
molecular level (incorporated via diffusion and taxis processes), the
cellular level (incorporated via a cell age variable),
and the tissue level (incorporated via spatial variables). The tumor
consists of populations of
proliferating and quiescent cells. Proliferating cells are capable of
growing, dividing, entering quiescence, and becoming necrotic. We consider one mutation class of
proliferating and quiescent cells. The different physical scales
cause the model to have widely different time scales. The fully
continuous model treated in detail in this paper depends on variables
representing time, age, and two spatial dimensions. We present this
system as a simplification of a more general system that depends on
time, age, size, and three spatial dimensions, and has an arbitrary
number of mutation classes for proliferating and quiescent cells,
with increasingly aggressive invasion characteristics. Mathematical
modeling of all phases of cancer tumor development, angiogenesis, and
metastasis is a very broad and active area of mathematical biology
\cite{AdamBellomo,AndersonChaplain,AraujoMcElwain,cancerSurvey,HornWebb,cancerModelling}.
This paper focuses on the invasion of nearby tissue by a vascular
tumor, under the assumption that the surrounding tissue is the source of the vasculature. Our fully continuous models have components that are based on
hybrid discrete-continuous (HDC) models
\cite{Sandy2003b,Sandy2003c,SandyHybridCancer,Sandy2000,Sandy2003a}
which use a discrete lattice to represent
space. We use a physiological variable, age, to model aging in the
proliferating and quiescent tumor cell populations
\cite{DysonWebb,GyllenbergWebb}. The models in this paper belong to
the class of so-called structured population models in which
individuals in a population are tracked by properties such as age,
size, maturity, and other quantifiable variables. Diffusion and
haptotaxis terms account for the spatial dynamics of the system in
the models under study. Age, size and/or space structure has also been used in models of tumor cords \cite{BertuzziD'OnofrioFasanoGandolfi,BertuzziFasanoGandolfiMarangi,BertuzziGandolfi,DysonVBWebb}.
Computational and software considerations often limit scientists from
incorporating physiological structure directly into a model. We
discuss the combination of effective computational methodologies for
integration over the time, age and space variables; we use a
moving-grid Galerkin method for the age variable, an adaptive
step-doubling method for the time variable, and an alternating
direction implicit (ADI) scheme for the space variables.
This paper is organized into three main sections. The first develops
the models and presents their biological justifications. The second
section presents computed solutions to the model equations and
discusses their significance. The third section discusses the
computational methodology. We close with a section on conclusions
and further research.
\section{Model Equations}
We extend the hybrid discrete continuous (HDC)
tumor invasion model discussed in \cite{SandyHybridCancer} to
fully deterministic models. In particular, as with \cite{SandyHybridCancer}, we focus on four key variables implicated in the invasion process: tumor cells, surrounding tissue (extracellular matrix), matrix-degradative enzymes and oxygen. Tumor cell motion in the HDC model is driven by a mixture of biased and unbiased migration, where the biased migration is assumed to arise from haptotaxis (in response to gradients in the surrounding tissue) and the unbiased migration is random motility; we make the same assumptions here. As in \cite{SandyHybridCancer}, we assume that tumor cells produce matrix-degrading enzymes which in turn degrade the surrounding tissue, creating gradients for the cells to respond to haptotactically. Oxygen production is assumed to be proportional to the tissue density, and oxygen is consumed by the tumor (see \cite{SandyHybridCancer} and references therein for a more detailed explanation of the HDC model derivation).
One of the important features of the model proposed in \cite{SandyHybridCancer} was the implementation of tumor heterogeneity, i.e., the tumor is made up of many different sub-populations with different phenotypes. These phenotypes allow us to model sub-populations with different invasive capacities. We use the same idea here by considering multiple populations of tumor cells with potentially different parameter values.
Since the models we present here are continuous in all variables, individual processes of the tumor cells (such as division) are also considered to be continuous. These are modeled according to cell age in the simplified model used in the computations, and cell age and size in
the more general system. As with the HDC model, these models are
based on the populations of proliferating and quiescent tumor
cells, the density of surrounding tissue macromolecules, the concentration of matrix degradative enzyme, and the concentration of oxygen.
The general class of partial differential equations for
diffusion and age structure considered in
this paper has a long history. Among the
first classic works are Skellam (1951) \cite{skellam} (who
considered the effects of diffusion on populations), and
Sharpe and Lotka (1911) \cite{SharpeLotka} and McKendrick (1926) (who
considered
population models with linear age structure) \cite{McKendrick,Webb}.
More recently, Gurtin and MacCamy \cite{GnM1} considered models with
nonlinear age structure. Rotenberg \cite{roten} and Gurtin
\cite{gurtin} posed
models dependent on both age and space. Gurtin and MacCamy
\cite{GnM2}
differentiated between two kinds of diffusion in these models:
diffusion due to random dispersal, and diffusion toward an area of
less crowding. Existence
and uniqueness results can be found for various forms of these models
in Busenberg and Iannelli
\cite{BnI}, di Blasio \cite{diblasio}, di Blasio and Lamberti
\cite{DnL}, Langlais \cite{langlais1}, MacCamy
\cite{maccamy}, and Webb \cite{Webb80}. Further analysis has been
done by several authors
\cite{huang,KnL,langlais2,marcati}.
\subsection{The Age-, Space- and Size-structured model of Tumor
Invasion}
The tumor is contained in a region of tissue $\Omega$. The tumor is
composed of proliferating cells (cells that are transiting
the cell cycle to mitosis) and quiescent cells (cells that
are arrested in the cycle, but are capable of resuming
progress). We assume that proliferating cells are motile in
space, but quiescent cells are not, and that both proliferating
and quiescent cells consume oxygen, with quiescent cells at
a lower rate (as in \cite{SandyHybridCancer}). Cells, both proliferating and quiescent, are distinguished
by their position $x \in \Omega$, their age $a$ between $0$
(newly divided) and $a_{M}$ (maximum possible
age), their size $s$ between $s_{m}$ (minimum possible size) and
$s_{M}$ (maximum possible size), and their state in the mutation sequence. Cell age, for both proliferating cells and quiescent cells,
is the time since the cell was newly divided. For
proliferating cells, cell age correlates to phase of the cell cycle
(first gap $G_{1}$, synthesis $S$, second gap $G_{2}$, and mitosis).
An illustration of a distribution of division ages is given in
Figure \ref{cell.cycle}. Cells are also distinguished by cell size,
which can be interpreted as mass, diameter, volume, or
other measurable property. The inclusion of cell age and cell size allows
description of the growth of the tumor mass to be understood at the
level of individual cells, as they double their size and divide
to two new daughter cells. For example, the inclusion of age and size in the diffusivities represents a means by which growing and dividing cells increase total tumor size.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=4.0in]{cell.cycle.figure.eps}
\caption{Schematic of the phases $G_{1}$ (first gap), $S$ (synthesis), $G_{2}$ (second gap), and $M$ (mitosis) of the cell cycle
correlated to cell age. The graph over the mitotic phase corresponds
to the distribution of cell ages at division (the response of the function $\theta$ to age.)}
\label{cell.cycle}
\end{center}
\end{figure}
In the HDC model in \cite{SandyHybridCancer}, the behavior
of individual cells is tracked cell by cell on a spatial lattice. This
discrete formulation relates detailed information about fundamental
processes at the cellular level, such as cell-cell adhesion,
entry to and from quiescence, division, apoptosis, and phenotype
mutation, to behavior of the tumor mass. In the
continuous age-size structured model of this paper,
behavior at the population level is also related to behavior at the
individual cell level, with cell age and size dependent densities
providing the connection to these processes. The use of continuous
densities constitutes a local averaging of individual traits.
The dependent variables of the model are:
\begin{itemize}
\item $p_{i}(x,a,s,t)$ = density of proliferating tumor cells of type
$i$ in the tumor at
position $x$, age $a$, and size $s$ at time $t$, where $i=0$
corresponds to a mutated type p53 gene, and $i=1,2,\dots,n$
corresponds to a linear sequence of
mutated phenotypes of increasing aggressiveness. The number of
mutations can be very large, with successive phenotypes possessing
greater proliferative characteristics and capacity for spatial movement.
\item $q_{i}(x,a,s,t)$ = density of quiescent tumor cells of type $i$
in the tumor at
position $x$, age $a$, size $s$, and mutation phenotype $i=0,1,\dots,n$
at time $t$.
\item $f(x,t)$ = surrounding tissue macromolecule (MM) density at
position $x$ at time $t$. It is assumed that these macromolecules are
distributed heterogeneously in $\Omega$, but immobile in $\Omega$.
\item $m(x,t)$ = matrix degradative enzyme (MDE) concentration at
position $x$ at time $t$. MDE is produced by the tumor cells and
diffuses in $\Omega$.
\item $c(x,t)$ = oxygen concentration at position $x$ at time $t$.
Oxygen is produced by the extracellular MM, diffuses in
$\Omega$, and is consumed by the tumor cells.
\item $P(x,t) = \sum_{i=0}^{n} \, \int_{0}^{a_{M}}
\int_{s_{m}}^{s_{M}}
\, p_{i}(x,a,s,t) \,ds \, da =$ the total population density in $x$
of proliferating cells
of all types at time $t$.
\item $Q(x,t) = \sum _{i=0}^{n} \, \int _{0}^{a_{M}}
\int_{s_{m}}^{s_{M}} \, q_{i}(x,a,s,t) \, ds \, da =$ the total
population density in $x$ of quiescent cells of all types at time $t$.
\item $N(x,t) = P(x,t) + Q(x,t)=$ total tumor population density in
$x$ of all cell types at time $t$.
\end{itemize}
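The total densities $P$ and $Q$ are double integrals of the structured densities over age and size. A minimal numerical sketch at a single spatial point, using an illustrative-only separable density (the model's actual densities come from solving the PDE system):

```python
import numpy as np

def trap(y, x):
    """Composite trapezoidal rule along the last axis of y."""
    dx = np.diff(x)
    return np.sum((y[..., 1:] + y[..., :-1]) * dx / 2.0, axis=-1)

# Illustrative density p(a, s) = a e^{-a} s on [0, a_M] x [s_m, s_M]
a_M, s_m, s_M = 2.0, 0.5, 1.0
ages = np.linspace(0.0, a_M, 201)
sizes = np.linspace(s_m, s_M, 201)
A, S = np.meshgrid(ages, sizes, indexing="ij")
p = A * np.exp(-A) * S

# P = \int_0^{a_M} \int_{s_m}^{s_M} p(a, s) ds da: integrate over size, then age
inner = trap(p, sizes)
P = trap(inner, ages)

# the integrand is separable, so compare with the product of 1-D integrals
exact = (1.0 - 3.0 * np.exp(-2.0)) * (s_M**2 - s_m**2) / 2.0
assert abs(P - exact) < 1e-4
```

In the full computation this quadrature would be carried out at every spatial node and summed over the mutation classes $i$.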
The equations governing the proliferating-cell densities of the tumor
are
\begin{subequations}
\begin{align}\frac{\partial}{\partial t} p_{i}(x,a,s,t) = &
- \, \underbrace{\frac {\partial}{\partial a} p_{i}(x,a,s,t)}
_{\mbox{\tiny cell aging}} \, - \,
\underbrace{\frac {\partial}{\partial s} (\kappa_{i} (a,s,c)
p_{i}(x,a,s,t))}_{\mbox{\tiny cell growth}} \label{p_i} \\
&+
\underbrace{
\nabla \cdot (D_{p_{i}}(x,a,s,N) \nabla
p_{i}(x,a,s,t))}_{\mbox{\tiny diffusion}} - \underbrace{\chi_{i}
\nabla \cdot (p_{i}(x,a,s,t)
\nabla f(x,t))}_{\mbox{\tiny haptotaxis}} \nonumber \\
& -
\underbrace{\rho_{i}(x,a,s,c,N) p_{i}(x,a,s,t)}_{\mbox{\tiny cell death
from insufficient oxygen}} - \underbrace{\theta_{i}(x,a,s,c,N)
p_{i}(x,a,s,t)}_{\mbox{\tiny division with sufficient oxygen}}
\nonumber \\
& - \underbrace{\sigma_{i}(x,a,s,c,N) p_{i}(x,a,s,t)}_
{\mbox{\tiny exit to quiescence}}
+ \underbrace{\tau_{i}(x,a,s,c,N) q_{i}(x,a,s,t)}_{\mbox{\tiny entry
from
quiescence}}, \nonumber
\end{align}
\noindent with age-boundary conditions
\begin{align}
\underbrace{p_{i}(x,0,s,t)}_{\mbox{\tiny newborn type $i$ cells}} = &
\, 4 (1 \, -\psi_{i} \,) \, \underbrace{\int_{0}^{a_M}
\theta_{i}(x,a,2s,c,N(x,t)) p_{i}(x,a,2s,t) \, da}_
{\mbox{\tiny type $i$ cell division}} \label{p_birth} \\
&+ \, 4 \, \psi_{i-1} \, \underbrace{\int_{0}^{a_M}
\theta_{i-1}(x,a,2s,c,N(x,t))
p_{i-1}(x,a,2s,t) \, da}_
{\mbox{\tiny type $i-1$ cell division}}, \nonumber
\end{align}
\noindent where $\psi_i$ is the fraction of type $i$ cells with type
$i+1$ mutation. For cells that have undergone only one
primary cancer forming mutation (such as a p53 mutation), we
set $i=0$ and $\psi_{-1}=0$. The coefficient of 4, rather than the
more intuitive splitting value of 2, results from the assumption of
even cell division; uneven cell division would require a mitosis
kernel and integration over the size variable, $s$, in equation
(\ref{p_birth}) \cite{TuckerNZimmerman,Webb89}.
The equations governing the quiescent-cell densities are
\begin{align}
\frac {\partial}{\partial t} q_{i}(x,a,s,t) = & -
\underbrace{\frac {\partial}{\partial a} q_{i}(x,a,s,t)}
_{\mbox{\tiny cell aging}} -
\underbrace{\nu_{i}(x,a,s,c,N(x,t)) q_{i}(x,a,s,t)}_{\mbox{\tiny cell death
from insufficient oxygen}} \label{q_i} \\
&+
\underbrace{\sigma_{i}(x,a,s,c,N(x,t)) p_{i}(x,a,s,t)}_{\mbox{\tiny
entry from proliferation}} -
\underbrace{\tau_{i}(x,a,s,c,N(x,t)) q_{i}(x,a,s,t)}_{\mbox{\tiny exit to
proliferation}}. \nonumber
\end{align}
The quiescent-cell populations lack a boundary condition in age since
they are ``born'' when proliferating cells of the same mutation class
become quiescent.
The equations governing tissue macromolecule, matrix degradative
enzyme, and oxygen densities are precisely those used in
\cite{SandyHybridCancer}:
\begin{align}
\frac {\partial}{\partial t} f(x,t) \, = \,& - \,
\underbrace{\delta m(x,t) f(x,t)}_{\mbox{\tiny degradation}},
\label{f} \\
\frac {\partial}{\partial t} m(x,t) \, = \, &
\underbrace{D_{m} \nabla ^{2} m(x,t)}_{\mbox{\tiny diffusion}}
\, + \, \underbrace{\mu P(x,t)}_{\mbox{\tiny production}}
\, - \, \underbrace{\lambda m(x,t)}_{\mbox{\tiny decay}}, \label{m} \\
\frac{\partial}{\partial t} c(x,t) \, = \, &
\underbrace{D_{c} \nabla ^{2} c(x,t)}_{\mbox{\tiny diffusion}}
\, + \, \underbrace{\beta f(x,t)}_{\mbox{\tiny production}} \, - \,
\underbrace{\gamma P(x,t)-\eta Q(x,t)}_{\mbox{\tiny uptake}} \,
\, - \, \underbrace{\alpha c(x,t)}_{\mbox{\tiny decay}}. \label{c}
\end{align}
\end{subequations}
\noindent Equations (\ref{p_i})-(\ref{c}) are combined with initial
conditions and
no-flux boundary conditions on the boundary $\partial \Omega$ of
$\Omega$.
Equation (\ref{p_i}) balances the way cells age, grow, and move in time.
The first term on the right-side of equation (\ref{p_i}) accounts for
the aging of cells, which is one-to-one with advancing time. In the second
term in equation (\ref{p_i}), $\kappa_{i}(a,s,c)$ is the rate at which
proliferating
cells increase size, i.e., $\int_{s_{1}}^{s_{2}}
\frac{1}{\kappa_{i}(a,s,c)}ds$ is the time required for a cell of type
$i$ to grow
from size $s_{1}$ to size $s_{2}$. The diffusion term in equation
(\ref{p_i}) accounts for cell movement due to random
motility, interphase drag, the interaction between cells,
volume displacement due to cell division, and cell-cell adhesion
\cite{AraujoMcElwain}. The diffusion
coefficient $D_{p_{i}}(x,a,s,N(x,t))$ can be allowed to
depend on the independent and dependent
variables to incorporate mechanistic features of these processes.
For example, cells in higher mutation phenotype classes may have
smaller cell-cell adhesion properties, and thus have a larger coefficient.
Dividing cells of larger size may exert
greater force of volume displacement, and thus have a larger
coefficient.
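The growth-time integral $\int_{s_{1}}^{s_{2}} ds/\kappa_{i}(a,s,c)$ above can be evaluated numerically for any growth law; a sketch assuming, purely for illustration, size-proportional growth $\kappa(s) = k s$, for which the exact time to grow from $s_1$ to $s_2$ is $\ln(s_2/s_1)/k$:

```python
import numpy as np

def growth_time(kappa, s1, s2, n=10_000):
    """Time for a cell to grow from size s1 to size s2,
    i.e. \int_{s1}^{s2} ds / kappa(s), via the midpoint rule."""
    s = np.linspace(s1, s2, n + 1)
    mid = 0.5 * (s[:-1] + s[1:])
    return np.sum(np.diff(s) / kappa(mid))

# assumed growth law for illustration: kappa(s) = k*s (exponential growth)
k = 0.7
t = growth_time(lambda s: k * s, 1.0, 2.0)
assert abs(t - np.log(2.0) / k) < 1e-6
```

The same quadrature applies unchanged to age- and oxygen-dependent $\kappa_{i}(a,s,c)$ along a characteristic.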
In equation (\ref{p_i}), the haptotaxis term represents directed
movement of cells toward concentrations of MM, which is the source of
oxygen necessary for tumor cell growth, and is degraded by tumor cell
produced MDE.
The coefficient $\rho_{i}(x,a,s,c,N(x,t))$ of
proliferating cell loss in equation (\ref{p_i}) is dependent
on the density of cells in competition for the supply of oxygen.
In equation (\ref{p_birth}), $\theta_{i}(x,a,s,c,N(x,t))$ is the
rate at which cells of type $i$, age $a$, and size $s$ divide at $x$
per unit time, where it is assumed that a mother cell divides into two
daughter cells of equal size (unequal division can also be modeled
\cite{Webb89}). The division rate $\theta_{i}(x,a,s,c,N(x,t))$
depends on the age of cells, the supply of oxygen,
as well as on the density of cells, with reduced capacity
for division as the oxygen supply decreases and the density increases.
The negative sign in front of $\theta_{i}(x,a,s,c,N(x,t))$ reflects
the loss of cells due to the division process. The mother cell of age
$a$ and size $s$ is replaced by two daughter cells, each having age
$0$ and half the size of the mother cell, as described in the
boundary condition (\ref{p_birth}).
The coefficients $\sigma_{i}(x,a,s,c,N(x,t))$ and $\tau_{i}(x,a,s,c,N(x,t))$
of transition to and from quiescence in equation (\ref{p_i})
depend on the supply of oxygen and the density of tumor cells. Lower
oxygen and higher density results in increased entry to quiescence
and higher oxygen and lower density results in increased recruitment
from quiescence.
The equation (\ref{q_i}) governing the quiescent cells is interpreted
similarly, where it is assumed that quiescent cells are not motile.
In this model, we represent the properties of individual cell behavior
as rates of transition dependent on cell spatial position, age, and
size. The inclusion of cell age and size
structure allows incorporation of cell level processes without
tracking of each cell history, cell by cell (as is done in \cite{SandyHybridCancer}). The hybrid and continuum
modeling approaches have complementarity in development,
analysis, and computability, in which advantages of each can be
exploited.
\subsection{A Simplified Two-Dimensional Model with No Size Structure}
The following model is a version of the model above with no size
structure, two spatial dimensions (denoted by $(x,y)\in \Omega$), and
one compartment each of
proliferating- and quiescent-cell types. The equations governing the
two classes of cell densities of the tumor are
\begin{subequations}
\begin{align}\frac{\partial}{\partial t} p(x,y,a,t) = &
- \underbrace{\frac {\partial}{\partial a} p(x,y,a,t)}
_{\mbox{\tiny cell aging}} \\ & +
\underbrace{D_{p}\nabla ^{2} p(x,y,a,t)}_{\mbox{\tiny diffusion}}
- \underbrace{\chi \nabla \cdot \big( p(x,y,a,t) \nabla f(x,y,t)
\big)}_{\mbox{\tiny haptotaxis}} \label{p} \\
& -
\underbrace{\rho(x,y,a,c) p(x,y,a,t)}_{\mbox{\tiny cell death
from insufficient oxygen}} - \underbrace{\theta(x,y,a,c)
p(x,y,a,t)}_{\mbox{\tiny division with sufficient oxygen}} \nonumber
\\
& - \underbrace{\sigma(x,y,a,c,N(x,y,t)) p(x,y,a,t)}_
{\mbox{\tiny exit to quiescence}}
+ \underbrace{\tau(x,y,a,c) q(x,y,a,t)}_{\mbox{\tiny entry from
quiescence}}, \nonumber \\
\frac {\partial}{\partial t} q(x,y,a,t) = & -
\underbrace{\frac {\partial}{\partial a} q(x,y,a,t)}
_{\mbox{\tiny cell aging}} -
\underbrace{\nu(x,y,a,c) q(x,y,a,t)}_{\mbox{\tiny cell death
from insufficient oxygen}} \label{q} \\
&+
\underbrace{\sigma(x,y,a,c,N(x,y,t)) p(x,y,a,t)}_{\mbox{\tiny entry
from proliferation}} -
\underbrace{\tau(x,y,a,c) q(x,y,a,t)}_{\mbox{\tiny exit to
proliferation}}, \nonumber
\end{align}
\noindent with age-boundary conditions
\begin{align}
\underbrace{p(x,y,0,t)}_{\mbox{\tiny newborn cells}} \, = \, 2
\underbrace{\int_{0}^{a_M} \theta(x,y,a,c) p(x,y,a,t) \ da}_
{\mbox{\tiny division rate}}. \label{ps_birth}
\end{align}
\end{subequations}
The equations governing tissue macromolecule ($f$), matrix degradative
enzyme ($m$), and oxygen ($c$) densities remain as defined in
equations
(\ref{f})-(\ref{c}). All equations are combined with initial
conditions and
zero flux boundary conditions on an $(x,y)$-rectangle $\Omega$.
\section{Computations of Cancer Tumor Invasion}
We can demonstrate some aspects of the behavior of the reduced system
defined by equations (\ref{p})-(\ref{ps_birth}) and equations
(\ref{f})-(\ref{c}) through computations using
parameters and functional forms chosen for illustrative purpose
rather than biological foundation. Take the spatial domain
$\Omega=[-5,5]\times[-5,5]$ and take
\begin{subequations}
\begin{equation} \label{paramStart}
D_p = 0.0005, \,
\chi = 0.01, \,
D_m = 0.01, \,
D_c = 0.05,
\end{equation}
\begin{equation}
\rho(x,y,c) = 0.1 \, \max\{1.0 - c, 0\}, \,
\nu(x,y,c) = 2.0 \, \max\{1.0 - c, 0\},
\end{equation}
\begin{equation}
\delta(x,y) = 50.0, \,
\mu(x,y) = 1.0, \,
\lambda(x,y) = 0.0, \,
\beta(x,y) = 0.5,
\end{equation}
\begin{equation}
\gamma(x,y) = 0.57, \,
\eta(x,y) = 0.0, \,
\alpha(x,y) = 0.025,
\end{equation}
\begin{equation}
\sigma(x,y,c) = 10.0 \, \max\{1.0 - c, 0\}, \,
\tau(x,y,c) = 2.0 \, c.
\end{equation}
The distribution of division ages is assumed to have the
form of an offset integrand of the Gamma function (see Figure \ref{cell.cycle}),
\begin{equation}
\theta(x,y,a,c) = \left\{ \begin{array}{rl} 10.0 \, c \exp(-10(a-1))
\ (2a-1)^5, &a > 0.5, \\ 0, &a \leq 0.5, \end{array} \right.
\end{equation}
where 0.5 is the minimum age at which a cell can divide. The initial
conditions are
\begin{eqnarray}
p(x,y,a,0) &=& 5.0 \, G(\sqrt{(x-5)^{2}+(y-5)^{2}},0,0.5),\\
q(x,y,a,0) &=& 0.5 \, p(x,y,a,0),\\
f(x,y,0) &=& 0.2 \, \cos(0.4 \, x^{2}) \, \sin(0.2\,y^{2})+0.2,\\
m(x,y,0) &=& 2.5 \, G(\sqrt{(x-5)^{2}+(y-5)^{2}},0,0.5), \\
c(x,y,0) &=& 10.0 \, f(x,y,0), \label{paramEnd}
\end{eqnarray}
\end{subequations}
\noindent where
$$G(z,z_\mu,z_\sigma) = \frac{\exp(-\frac{(z-z_\mu)^{2}}{2 \,
z_\sigma^{2}})}{\sqrt{2 \, \pi} \, z_\sigma}.$$
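The division-rate profile $\theta$ and the Gaussian profile $G$ above translate directly into code; a sketch of the functional forms only, as given in equations (\ref{paramStart})-(\ref{paramEnd}):

```python
import numpy as np

def theta(a, c):
    """Division rate used in the computations: zero below the minimum
    division age 0.5, then 10 c exp(-10(a-1)) (2a-1)^5."""
    a = np.asarray(a, dtype=float)
    return np.where(a > 0.5,
                    10.0 * c * np.exp(-10.0 * (a - 1.0)) * (2.0 * a - 1.0)**5,
                    0.0)

def G(z, z_mu, z_sigma):
    """Gaussian profile used in the initial conditions."""
    return (np.exp(-(z - z_mu)**2 / (2.0 * z_sigma**2))
            / (np.sqrt(2.0 * np.pi) * z_sigma))

# theta vanishes below the minimum division age and is continuous at a = 0.5
assert theta(0.4, 1.0) == 0.0
assert theta(0.5 + 1e-9, 1.0) < 1e-6
# the normalized Gaussian integrates to ~1 over a wide interval
z = np.linspace(-5.0, 5.0, 20001)
mass = np.sum(G(z, 0.0, 0.5)) * (z[1] - z[0])
assert abs(mass - 1.0) < 1e-3
```

The factor $(2a-1)^5$ makes $\theta$ rise smoothly from zero at the minimum division age, while the exponential cuts it off at large ages, giving the unimodal division-age distribution sketched in Figure \ref{cell.cycle}.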
Numerical computations of the proliferating-cell density and macromolecule density for the simplified model are illustrated in Figures
\ref{outputP}-\ref{outputf} as snapshots in time\footnote{Animations can be found
online at http://faculty.smu.edu/ayati/cancer.html}. The
simulation in Figures \ref{outputP}-\ref{outputf} demonstrates the temporal development of
spatial heterogeneity in the tumor mass from a radially symmetric
initial condition of tumor cells and heterogeneous initial condition
of surrounding macromolecules. The macromolecule tissue is displaced
by the tumor tissue as a consequence of haptotactic movement of the
tumor cells, driven by the matrix degradative enzyme they produce.
The interior core of the tumor mass becomes necrotic, because of its
increasing distance from the oxygen supply provided by the
macromolecule matrix source. One aspect of this computation is that the tumor edge consists of an outer layer of proliferating cells and an inner layer of quiescent cells.
\begin{figure}[t]
\begin{center}
\epsfig{file=age_P_bw.eps,width=5.0in}
\caption{Proliferating-cell density ($P$) for the system defined by equations
(\ref{p})-(\ref{ps_birth}) and equations (\ref{f})-(\ref{c}). The
parameters used in this computation are defined in equations
(\ref{paramStart})-(\ref{paramEnd}).}
\label{outputP}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\epsfig{file=age_Q_bw.eps,width=5.0in}
\caption{Quiescent-cell density ($Q$) for the system defined by equations
(\ref{p})-(\ref{ps_birth}) and equations (\ref{f})-(\ref{c}). The
parameters used in this computation are defined in equations
(\ref{paramStart})-(\ref{paramEnd}).}
\label{outputQ}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\epsfig{file=age_f_bw.eps,width=5.0in}
\caption{Macromolecule density ($f$) for the system defined by equations
(\ref{p})-(\ref{ps_birth}) and equations (\ref{f})-(\ref{c}). The
parameters used in this computation are defined in equations
(\ref{paramStart})-(\ref{paramEnd}).}
\label{outputf}
\end{center}
\end{figure}
\section{Computational Methodology}
Computational robustness and efficiency is vital for the methods used
to solve the
high-dimensional, multiscale models developed in this paper. The
primary issue is the age discretization and how to decouple it from
the time discretization without ignoring the fact that age and time
advance together. This approach foreshadows how one may wish to
handle size structure. The decoupling of the age and time
discretizations allows for adaptivity in the time variable; we
discuss a particularly effective method for the time integration
called step-doubling. The third computational consideration is how
we solve the system in the space variables. We use an
alternating direction implicit method, which is a novel approach when
incorporated into the step-doubling method for time.
There is a plethora of numerical methods for solving
models with just age or size structure
\cite{AnguloL-M99,AnguloL-M04,chiu,FnL-M,InKnP,KnC,L-M,sulsky}.
These methods use uniform age and time steps that are equal to one
another in the case of age structure or, in the context of size
structure, do the equivalent by introducing a new size node at every
time step. This approach does not work well
for problems with multiple time scales
because the fastest time scale tends to be in the spatial variables.
To understand the nature of this problem, consider a fixed, uniform
age discretization. Solving the system along characteristics would
require the age interval width to equal the time step. This would
result in many more age nodes than are needed to accurately solve the
problem in the age variable because of the small time step. For size
structure, the analogous situation is to introduce many more size
nodes at the birth boundary than are needed. An additional concern
with size structure is that characteristic curves in the size-time
plane can converge, resulting in unnecessarily narrow size
intervals. Regridding was used in \cite{sulsky} and
\cite{AnguloL-M04} to adjust for the effects of narrowing gaps
between characteristics, but they do not address the issue of narrow
size intervals at the birth boundary. For example, the method proposed
in \cite{AnguloL-M04} has an advantage of simplicity -- the idea is
to merge the narrowest size interval with one of its neighbors after
each time step -- but is not a satisfactory solution because small
size intervals can arise continuously at the birth boundary while
elsewhere size intervals continue to narrow due to the nature of the
characteristic curves. Moreover, regridding comes at a computational
cost. A natural solution to this problem lies in using a finite
element space with a moving reference frame in age or size, which is
the approach we use in this paper.
Previous numerical methods designed explicitly for models with
dependence on age, time and space were developed outside the context
of an application and required uniform age and time discretizations
with the age step chosen to equal the time step \cite{kim,KnP,LnT}.
In contrast, the methods used to obtain the computational results
presented in this paper
\cite{age-pwconst-paper,age-general-paper} were motivated by models
of {\em Proteus mirabilis} swarm-colony development where the need to
decouple age and time discretizations was clear from the problem
\cite{proteus-tech,EnS,rauprich}. In the process of applying these
methods to the system defined in \cite{EnS}, it became clear that the
numerical methods and software used previously were not merely
inefficient, but also gave qualitatively incorrect answers (although
these methods did decouple the age and time discretizations, they did
so by not moving the age discretization along characteristics; see
the appendix in \cite{proteus-tech} for a discussion). This is a
critical pitfall to avoid and highlights the importance of using
methods with known convergence properties for a particular system.
We use Galerkin finite element methods that use a moving grid to
allow for independent, nonuniform age and time discretizations and
whose development has focused on robustness as well as computational
efficiency. The important property of these methods is that the age
step need not equal the time
step. Instead, the positions of the age nodes are adjusted by the time
step. The methods preserve the important fact that age and time
advance together. The methods in \cite{kim,KnP,LnT} also discretized
along
characteristics, but they did so simultaneously in age and time and
thus imposed the often
crippling constraint that the time and age steps be both constant and
equal. The difficulty with this approach is twofold. First, the
use of constant age and time
steps prevents adaptivity of the discretization in age or,
especially, time. Second, and more
importantly, the coupling of the age and time meshes can cause great
losses of efficiency since only rarely will the dynamics in time be on
the same scale as the dynamics in age. This is particularly the case
when space is involved since sharp moving fronts can require small
time steps,
whereas the behavior in the age variable can remain relatively
smooth. The age discretizations presented in \cite{kim,KnP,LnT} can be
viewed as special cases of the methods
presented in \cite{age-pwconst-paper,age-general-paper}, obtained by
setting the time and age
meshes to be constant and equal and using a backward Euler
discretization in time and a piecewise constant finite element space
in age.
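To make the moving-grid idea concrete, the following sketch (in Python, with all names illustrative rather than taken from the actual software) shows how age nodes can advance along characteristics by the current time step while the age mesh itself remains nonuniform and fully decoupled from the time step:

```python
def advance_age_grid(age_nodes, dt, birth_age=0.0):
    """Advance every age node along the characteristic da/dt = 1 by
    one time step and insert a new node at the birth boundary.
    The age mesh stays nonuniform and independent of dt."""
    moved = [a + dt for a in age_nodes]
    return [birth_age] + moved

# A nonuniform age mesh advanced by a time step of 0.25:
grid = advance_age_grid([0.0, 0.5, 1.5, 4.0], dt=0.25)
# grid is now [0.0, 0.25, 0.75, 1.75, 4.25]
```

Because the nodes move with the characteristics, no differencing in the age direction is ever performed against a fixed age grid, which is precisely what distinguishes these methods from the equal-step schemes discussed above.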
Step-doubling \cite{step-doubling-paper,gear,shampine85} is a
conceptually simple, yet quite effective method for the adaptive time
integration of differential equations. Over a time step, we compute
one solution over the entire time step, and then a second solution
over two successive half steps. These two solutions serve two
purposes. First, comparing them gives an estimate of the accuracy
of the approximation, which drives adaptivity of the time step.
Second, combining them yields an approximation that is typically
second-order accurate, even when each step in the
step-doubling process is only first-order accurate.
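The procedure can be sketched as follows, here as a minimal Python illustration on a scalar ODE $y' = f(y)$ with a backward Euler base step; the fixed-point solver stands in for the Newton iteration used in practice, and all names are ours:

```python
def backward_euler_step(f, y, dt, iters=50):
    """Solve y_new = y + dt*f(y_new) by fixed-point iteration
    (a stand-in for the Newton solver used in practice)."""
    y_new = y
    for _ in range(iters):
        y_new = y + dt * f(y_new)
    return y_new

def step_double(f, y, dt):
    """One step-doubling step: a full step, two half steps,
    an error estimate from their difference, and a combined
    (Richardson-extrapolated) second-order value."""
    y_full = backward_euler_step(f, y, dt)
    y_half = backward_euler_step(f, y, dt / 2)
    y_half = backward_euler_step(f, y_half, dt / 2)
    err = abs(y_half - y_full)        # drives time-step adaptivity
    y_combined = 2 * y_half - y_full  # cancels the leading error term
    return y_combined, err
```

Comparing `y_full` and `y_half` supplies the error estimate for the step controller, while the combination `2*y_half - y_full` is the extrapolated value that is typically second-order accurate.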
To solve the model equations in the spatial variables, we use an ADI
method (also called operator splitting) where we first solve the
equations in just the $x$ derivatives and zero-order terms, and then
in just the $y$ derivatives
\cite{DouglasDupontADI,KarlsenSplitting,McLachlanQuispel,StrangSplitting,ThomasVol1}.
This approach reduces our two-dimensional problem in space to a set
of more easily solved one-dimensional spatial problems; we need to
solve a series of block tridiagonal linear systems instead of a more
computationally expensive wide-banded linear system. Because ADI
methods are time ordered, the ADI step must be embedded within
the step-doubling algorithm.
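A minimal sketch of one ADI step for a two-dimensional diffusion operator illustrates both the directional splitting and why only tridiagonal solves are needed (Python; the Thomas algorithm and the simple zero-Dirichlet boundary handling are our illustrative choices, not the production code):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system (sub-, main-, super-diagonals a, b, c)
    in O(n) -- the building block that makes each ADI sweep cheap."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, dt, h, kappa=1.0):
    """One ADI step for u_t = kappa*(u_xx + u_yy) with zero boundary
    values: an implicit sweep in x, then an implicit sweep in y."""
    ny, nx = len(u), len(u[0])
    r = kappa * dt / h**2
    # sweep 1: implicit in x (one tridiagonal solve per interior row)
    half = [row[:] for row in u]
    for j in range(1, ny - 1):
        a = [-r] * nx; b = [1 + 2 * r] * nx; c = [-r] * nx
        b[0] = b[-1] = 1.0; c[0] = a[-1] = 0.0
        d = u[j][:]; d[0] = d[-1] = 0.0
        half[j] = thomas_solve(a, b, c, d)
    # sweep 2: implicit in y (one tridiagonal solve per interior column)
    out = [row[:] for row in half]
    for i in range(1, nx - 1):
        a = [-r] * ny; b = [1 + 2 * r] * ny; c = [-r] * ny
        b[0] = b[-1] = 1.0; c[0] = a[-1] = 0.0
        d = [half[j][i] for j in range(ny)]; d[0] = d[-1] = 0.0
        col = thomas_solve(a, b, c, d)
        for j in range(ny):
            out[j][i] = col[j]
    return out
```

Each row or column solve costs $O(n)$, so a full ADI step is linear in the number of grid points; embedding this step inside step-doubling simply means calling it once with the full time step and twice with half steps.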
The combination of these methods results in the following breakdown
of the model equations. First, the moving-grid Galerkin methods in
age reduce the age-, time-, and space-dependent equations to systems
of differential equations that depend on time and two spatial
variables. We then solve each of these equations by a combination of
step-doubling and ADI methods; we take a step in the $x$ direction
and zero-order terms, followed by a step in the $y$-direction, within
each substep of step-doubling. This integrated stepping is
illustrated in Figure \ref{step-doubling-ADI}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=2.5in]{step-doubling-ADI.eps}
\caption{Schematic of the combination of the step-doubling and ADI
methods to advance the solution of a time and space dependent system
from time $t$ to time $t+\Delta t$. The operators $L_{x+0}$ and
$L_y$ represent the $x$ derivatives plus zero-order terms, and the
$y$ derivatives, respectively.}
\label{step-doubling-ADI}
\end{center}
\end{figure}
The software used to generate the computed solutions in this paper
has a similar structure to BuGS \cite{BuGS}. The age methods
presented in \cite{age-pwconst-paper,age-general-paper} use
discontinuous piecewise polynomials as basis functions for the age
space, which results in a distinct system of parabolic partial
differential equations for each age interval if we keep time and
space continuous. This in turn results in a distinct linear system
for each age interval when we fully discretize the equations. In the
tumor invasion software, we use piecewise constant functions in age
with post-processing to
continuous piecewise linear functions. As mentioned above, the tumor
invasion software works by updating the age discretization at the
beginning of a time step and then applying the step-doubling method
to the subsystems corresponding to each age interval, splitting the
spatial operator into two separate operators, one over each dimension.
As in BuGS, the tumor invasion software requires the user to define
the spatial discretization of the equations by writing a residual
function based on first-order backward differences in time. The
software then uses the implementation of the step-doubling method
described in \cite{step-doubling-paper} to get a second-order
accurate in time implicit finite difference scheme. The software also
features step control for the convergence of Newton's method and
automatic approximation of the Jacobian matrix.
\section{Conclusions and Further Research}
In this paper we presented physiologically and spatially structured
continuous deterministic models of cancer tumor invasion. We
presented a general model whose equations depend on variables
representing size, age, space and time. We then treated a simplified
model without size structure and with only two spatial dimensions.
The simplified model contained one mutation class of proliferating
and quiescent cells. The aim of this approach is to move tumor
invasion modeling away from phenomenological models toward more
mechanistic, biologically informed, and reliably predictive models.
These more complex models required a
more sophisticated computational methodology to investigate the
computationally intensive model equations numerically.
The most immediate extension of this work is to determine the model
parameters and functional forms from biological data and
experiments. The current methodology and software is sufficient to
handle multiple mutation classes of proliferating and quiescent
cells, but a deeper understanding of the biology is needed to benefit
from this extension. Computational results from more biologically
detailed models are expected, in turn, to contribute to a deeper
understanding of the underlying biology.
The most important mathematical extension of the methodology is to
develop size-time finite elements to handle size-structured
equations. Rather than being developed for general forms of
transport, extensions
of the existing methods for age structure to size structure will use
the specific nature of physiological
change in tumor cells to allow the incorporation of size structure
into a model at a low cost in terms of computational resources.
Anticipated complications in handling size structure include birth in
a size-structured context with respect to both the numerical methods
and their analyses. Since the
characteristic curves in the size-time plane are no longer lines with
slope one, as was the case for age structure, some important
questions are: what types of characteristic curves should we consider
and how do
we handle situations where these curves become asymptotically close
within the moving grid framework? What happens if they meet and
shocks form?
Two immediate concerns must be addressed for the
problem of size dependence in tumor invasion models. The first issue
is the introduction of new size nodes at the birth boundary, and the
second is the handling of size intervals that contract due to the
convergence of size-time characteristic curves. We expect the major
complication in the size nodes to occur when growth slows as cells
reach a certain size. However, because of the nonlinearities in the
problem, it is insufficient to merely assume that a size interval
will strictly decrease in length. Addressing these two concerns will
lay the foundation for methods that handle more complicated
characteristics, including the shocks that can form in
situations where growth has complex dependencies on the physiological
traits of an individual as well as the external environment.
As in the methods for age-structured systems, the moving grid
formulation is expected to account for the growth of individuals,
taking the place of direct differencing of the size variable. And as
in the case of age structure, the use of a space of discontinuous
piecewise polynomials as the basis functions in size is expected to
allow each size interval to be treated with a separate linear
system. If the system has dependence on both age and size, we would
have a two-dimensional array of independent linear systems at each
time step.
An important benefit of using size-time Galerkin finite elements is
having one mathematical framework define many methods with
higher-order accuracy. Because of the need to keep computational
costs down in each dimension of the high-dimension systems under
study, without sacrificing robustness, the ability to choose the
order of convergence of the method is quite useful.
A major extension of the software and methodology is to add a third
space dimension through an additional sub-operator in the ADI
method. This methodology for handling three space dimensions is
expected to be sufficient for generating initial results that aim to
extend our understanding of tumor invasion beyond the 2D-space
models. Other ADI methods that may work within this framework are
Douglas-Gunn \cite{ThomasVol1} and Strang Splitting
\cite{StrangSplitting}.
Although we have provided a specific mathematical treatment of the
spatial dynamics of tumor invasion, we remark that modeling spatial
dynamics can be more complicated in biological systems than in
physical systems. A broad examination of
different modeling approaches is required, including the continuous
approach
in this paper, and how it relates to other approaches, such as the
hybrid discrete-continuous formulation discussed in
\cite{SandyHybridCancer}. Multiscale models of the type considered
in this paper have different time scales for the dynamics at the
different physical scales. For example, in the system defined in
equations (\ref{p_i})-(\ref{c}), the cellular scale gives rise to
time scales in the age and size variables, whereas the tumor scale
gives rise to a different time scale in the spatial variables.
Independent of the specific type of spatial representation used,
decoupling time from age or size is critical for effective solution
of the model equations.
Many of the features of the cancer models, such as taxis, aging and
growth, are seen in other biological systems; prior work on {\em
Proteus mirabilis} swarm-colony development is but one example
\cite{proteus-tech}. Biological systems abound where either spatial
dynamics induce the behavior of interest, or where the spatial
dynamics is the behavior of interest. In the same manner, the
behavior of interest in a biological system can depend on the
distribution of physiological traits such as age or size, or those
distributions are the topic of interest. We hope that the
methodology presented in this paper will provide a template for
handling a broader range of biological problems.
\newpage
\bibliographystyle{siam}
\section{Introduction}
\paragraph{}
Lie algebra expansion \cite{Hatsuda:2001pp,deAzcarraga:2002xi,deAzcarraga:2007et,Bergshoeff:2019ctr} is a powerful tool to generate interesting gravitational theories starting from the first order formulation of the (cosmological) Einstein-Hilbert action. On the one hand, massive gravity theories that are consistent with the holographic c-theorem have been shown to arise from the truncation of an infinite-dimensional Lie algebra that is closely connected to the Lie algebra expansion of the $\rm AdS$ algebra and the cosmological Einstein-Hilbert action \cite{Bergshoeff:2021tbz}. On the other hand, this procedure was the main tool to construct the action principle for Newtonian gravity in first-order formulation (see \cite{VandenBleeken:2017rij,Hansen:2019pkl} for its second order formulation and the relevant $1/c^2$ expansion) as well as establishing new, extended, two and three-dimensional non/ultra-relativistic gravity models \cite{Papageorgiou:2009zc, Bergshoeff:2016lwr,Hartong:2016yrf,Aviles:2018jzw, Ozdemir:2019orp, deAzcarraga:2019mdn,Concha:2019lhn, Penafiel:2019czp, Gomis:2019fdh, Ozdemir:2019tby, Gomis:2019sqv, Gomis:2019nih, Kasikci:2020qsj, Concha:2020sjt, Concha:2020ebl, Concha:2020eam, Concha:2020tqx, Concha:2021jos, Gomis:2022spp, Grumiller2020, Gomis2020, Ravera:2022buz, Concha:2022you, Concha:2021llq, Ravera:2019ize, Ali:2019jjp, Concha:2021jnn}.
In fact, the massive gravity models of \cite{Bergshoeff:2021tbz} arise as scaling limits of ghost-free bi-gravity models \cite{Paulos:2012xe,Afshar:2014dta} which later led to the discovery that there exist trajectories in the parameter space of bi-gravity theories that connect the central charges of bulk/boundary unitary three-dimensional bi-gravity models to non-unitary massive gravity theories by a continuous change of scaling parameter \cite{Bergshoeff:2013xma,Ozkan:2019iga,Sevim:2019scg}. It is, thus, a natural question whether one can unify the Lie algebra expansion and the scaling limit together to define a non/ultra-relativistic limit for bimetric and multimetric models of gravity to establish similar connections between physical quantities. In this paper, we will show that this is indeed the case by presenting a systematic procedure that relates the space-time decomposed multimetric gravity to extended non-relativistic and ultra-relativistic models of gravity.
In building extended non/ultra-relativistic gravity, the main motivation comes from the formulation of an action principle for Newtonian gravity. This construction requires one to go beyond the standard Bargmann algebra by an extension with additional three new generators. This extension, which was originally formulated by $1/c^2$ expansion of Einstein-Hilbert action \cite{Hansen:2019pkl}, separates the strong gravitational effects from the relativistic effects and it has been shown that in the presence of a \textit{twistless torsion}, the Newtonian gravity action can successfully explain the three classical tests of general relativity \cite{VandenBleeken:2019gqa, Ergen:2020yop, Hansen:2019vqf, Hansen:2020pqs}.
The main procedure to find extended non/ultra-relativistic algebras and corresponding gravity models is the Lie algebra expansion. It is based on the splitting of the generators of a Lie algebra into even and odd classes followed by their series expansion with respect to the class that they belong to. If the algebra under consideration is chosen to be the space-time split Poincar\'e algebra, then the expansion yields either extended non-relativistic or extended ultra-relativistic algebras \cite{Bergshoeff:2019ctr}. The corresponding gravity models can also be found in the same spirit, that is, one can start with the first-order formulation of the Einstein-Hilbert action, perform the space-time splitting and expand the fields in accordance with their corresponding generator \cite{Bergshoeff:2019ctr}. This procedure successfully gives rise to the action principle for the Newtonian gravity (see Appendix \ref{AppA} for the equivalence of the first and the second order formulations) and various two and three-dimensional models have been constructed/reconstructed in the recent literature. The transformation rules for the matter fields can also be found by this methodology \cite{Kasikci:2021atn} (see \cite{Hansen:2020pqs} for the corresponding $1/c^2$-expansion) including the rigid supermultiplets of extended superalgebras. The local transformation rules for supersymmetric models remain an open problem to date. In particular, in three dimensions, the nature of expansion does not even allow for the rigid supermultiplets for the algebras where the supergravity actions can be written \cite{Kasikci:2021atn}.
In a separate development, it has been found that the massive gravity models that admit the holographic c-theorem arise from the truncation of an infinite dimensional Lie algebra and its corresponding gauge theory of gravity \cite{Bergshoeff:2021tbz}. This infinite-dimensional Lie algebra is a cosmological algebra and if the cosmological parameter is set to zero, it simply becomes the Lie algebra expansion of the $D$-dimensional Poincar\'e algebra. The inclusion of the cosmological constant is equivalent to the infinite-dimensional expansion of the $\rm (A)dS$ algebra given the fact that the cosmological constant is scaled with the expansion parameter, i.e. $\Lambda \to \Lambda/\lambda^2$ where $\Lambda$ is the cosmological constant and $\lambda$ represents the expansion parameter. The truncation of the resulting infinite-dimensional Lie algebra gives rise to gravity models with a set of auxiliary fields, which, when solved and substituted back into the action, becomes massive gravity models that are compatible with the holographic c-theorem. These models include the new massive gravity \cite{Bergshoeff:2009hq} and its various extensions \cite{Sinha:2010ai,Paulos:2010ke}, all of which have been shown to be related to multimetric gravity by means of a scaling limit \cite{Paulos:2012xe,Afshar:2014dta}.
In this paper, we investigate the connection between these two seemingly unrelated subjects. We begin in Section \ref{AlgAndAct} by reminding the reader about the basics of the Lie algebra expansion and present the general formulation of extended non-relativistic and ultra-relativistic actions, giving particular attention to three and four dimensions. In Section \ref{MultiGrav}, we show that the non-relativistic gravity models with larger symmetries are scaling limits of multimetric theories with a Lorentzian signature. This point is one of our key results, so let us be more precise with our statement. As mentioned, there is a direct connection between the Lie algebra expanded (A)dS algebra and the massive theories of gravity. When the cosmological constant is set to zero, a model that comes from a consistent truncation of the infinite-dimensional algebra does not describe massive gravity but it is a theory of gravity that is coupled to a set of gauge fields \cite{Bergshoeff:2021tbz}. These models arise from the scaling limit of a multimetric theory in the absence of potential terms for the vielbein. Nevertheless, as they are directly relevant to the expansion of the Poincar\'e algebra, we first relate the multimetric models with no potential to non-relativistic and ultra-relativistic gravity models. For example, the scaling limit of a bimetric gravity without potential terms is the gauge theory formulation of Newtonian gravity with no source. Based on our result for how to take the scaling limit, we then turn on the potential terms and establish their contribution to the non-relativistic and ultra-relativistic models. In the case of bi-gravity, the potential terms give rise to a constant background mass density for non-relativistic gravity. In Section \ref{URChapter}, we show that there is an analog construction for the ultra-relativistic gravity models. 
We show that ultra-relativistic gravity with extended symmetries also arises as a different limit of the same multi-gravity models, which resembles the Galilei / Carroll limits of General Relativity.
We give our comments and conclusions in Section \ref{Discussion}.
\section{Algebras and Actions}\label{AlgAndAct}
\paragraph{}
The Lie algebra expansion is a method to generate higher-dimensional Lie algebras starting from a lower-dimensional core Lie algebra. As we will discuss the details momentarily, it is based on a series expansion of Maurer-Cartan one-forms of the dual algebra. Thus, the expansion that generates larger Lie algebras also generates action principles by expanding a core action that is invariant under the core Lie algebra. In particular, for the space-time decomposed Poincar\'e (or (A)dS) algebra and the corresponding first-order formulation of the (cosmological) Einstein-Hilbert action, the expansion yields non/ultra-relativistic gravity models at each order in expansion parameter \cite{Bergshoeff:2019ctr}. For example,
\begin{eqnarray}
\mathcal{L}_{\rm GR} &=& \lambda \mathcal{L}_{1} + \lambda^3 \mathcal{L}_{3} + \lambda^5 \mathcal{L}_{5} + \ldots \,,
\label{Structure}
\end{eqnarray}
is the structure of the non-relativistic expansion where $\lambda$ is the expansion parameter and $\mathcal{L}_n$ represents the Lagrangian at the relevant $\lambda^{n}$ order. Note that each of these actions is invariant under the corresponding order of the expanded core Lie algebra. As can be seen from the structure of the expanded Lagrangian, the expansion and the scaling limit $(\lambda \to 0)$ yield the same result at the lowest order. For instance, in the case of \eqref{Structure}, the lowest order Lagrangian in the expansion, $\mathcal{L}_1$, can also be found by the same expansion of the gauge fields, then rescaling the core Lagrangian $\mathcal{L}_{\rm core}$ by a factor of $\lambda^{-1}$ and finally taking the limit $\lambda \to 0$ in which case the coefficients of all $\mathcal{L}_n$ with $n >1$ vanish. However, it is not possible to single out a Lagrangian $\mathcal{L}_n$ with $n>1$ in this way as rescaling the core Lagrangian with $\lambda^{-n}$ would yield divergences in the coefficients of lower order terms. One way to isolate a higher-order Lagrangian to perform the scaling limit is to consider multiple copies of the same core algebra and combine the core Lagrangians to cancel out any lower-order terms that would cause divergences. In the case of the Poincar\'e algebra, this means that we must consider multiple copies of the Einstein-Hilbert action to obtain a proper scaling limit. This is the leading technical notion of the present paper, which then describes the non/ultra-relativistic scaling limits of multi-gravity models.
Thus, this section aims to discuss the non-relativistic and ultra-relativistic expansions of the Poincar\'e algebra and the Einstein-Hilbert action in first-order formulation to set the stage for multimetric models and their non/ultra-relativistic scaling limits.
\subsection{Lie Algebra Expansion and the Poincar\'e Algebra}
\paragraph{}
The Lie algebra expansion is a method that takes a core Lie algebra $\mathfrak{g}$ and produces new, higher-dimensional algebras as long as $\mathfrak{g}$ can be written as a direct sum of two subspaces $V_0$ and $V_1$ that satisfy the following relations
\begin{align}
\left[V_0, V_0 \right] & \subset V_0 \,, & \left[V_0, V_1 \right] & \subset V_1 \,, & \left[V_1, V_1 \right] & \subset V_0 \,.
\end{align}
Based on these relations, $V_0$ represents the even class of generators while $V_1$ represents the odd class. The direct sum structure of the core Lie algebra suggests that we may also assign a gauge field to each of the generators
\begin{eqnarray}
A_\mu = A_\mu^i X_i + A_\mu^\alpha Y_\alpha \,,
\end{eqnarray}
where $X_i$ represents the even subset of generators while $Y_\alpha$ represents the odd ones. In the next step, we expand the gauge fields with an expansion parameter $\lambda$ with respect to the class that they belong to
\begin{align}
A^i & = \sum_{n=0}^{N_0} \lambda^{2n} A^i_{(2n)} \,, & A^\alpha & = \sum_{n=0}^{N_1} \lambda^{2n+1} A^\alpha_{(2n+1)} \,.
\label{ExpansionCore}
\end{align}
Here, the sum can be extended to infinity to produce an infinite-dimensional Lie algebra. The consistent truncation at order $g=(N_0,N_1)$ requires that either $N_0 = N_1$ or $N_0 = N_1 + 1$ is satisfied. With this expansion in hand, one can start with the Maurer-Cartan equations of the core Lie algebra $\mathfrak{g}$, expand the gauge fields with respect to \eqref{ExpansionCore} and read off the structure constants of the expanded algebra from the expanded Maurer-Cartan equations at each order. Equivalently, based on their even/odd character, we may expand the generators $\{X_i, Y_\alpha\}$ with $X_i \in V_0$ and $Y_\alpha \in V_1$ as follows
\begin{align}
X_i^{(2n)} & = \lambda^{2n} \otimes X_i\,, & Y_\alpha^{(2n+1)} &= \lambda^{2n+1} \otimes Y_\alpha \,.
\label{GeneratorExpansion}
\end{align}
Then, using the commutation relations of the core algebra
\begin{align}
\left[X_i, X_j \right] & = f_{ij}{}^k X_k \,, & \left[X_i, Y_\alpha \right] & = f_{i\alpha}{}^\beta Y_\beta \,, & \left[Y_\alpha, Y_\beta\right] & = f_{\alpha \beta}{}^i X_i \,,
\end{align}
we may give the commutation relations for the expanded algebra as \cite{Gomis:2019nih}
\begin{align}
\left[X_i^{(2m)}, X_j^{(2n)} \right] & = f_{ij}{}^k X_k^{(2m+2n)}\,, & \left[X_i^{(2m)}, Y_\alpha^{(2n+1)} \right] & = f_{i\alpha}{}^\beta Y_\beta^{(2m+2n+1)} \,,\nonumber\\
\left[Y_\alpha^{(2m+1)}, Y_\beta^{(2n+1)}\right] & = f_{\alpha \beta}{}^i X_i^{(2m+2n+2)} \,.
\label{GenExp}
\end{align}
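As a quick consistency check of the grading (a direct consequence of \eqref{GeneratorExpansion}), take two odd generators at the lowest level:
\begin{align}
\left[Y_\alpha^{(1)}, Y_\beta^{(1)}\right] = \left[\lambda \otimes Y_\alpha, \lambda \otimes Y_\beta\right] = \lambda^{2} \otimes \left[Y_\alpha, Y_\beta\right] = f_{\alpha\beta}{}^{i}\, X_i^{(2)} \,,
\end{align}
so the commutator of two level-one odd generators lands in the level-two even sector; in a truncation that keeps no even generators at level two, this commutator is consistently set to zero.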
With this result in hand, let us now turn our attention to the space-time split Poincar\'e algebra and its non/ultra-relativistic Lie algebra expansions. The $D$-dimensional Poincar\'e algebra consists of translations $(P_A)$ and Lorentz transformations $M_{AB}$ with the following non-vanishing commutation relations
\begin{align}
\left[M_{AB}, P_C \right] & = 2 \eta_{C[B} P_{A]} \,, & \left[M_{AB}, M_{CD}\right] & = 4 \eta_{[A[C} M_{D]B]} \,.
\end{align}
The space-time decomposition can be achieved by decomposing the $D$-dimensional index $A$ as $A= (0,a)$ in which case the generators are split as
\begin{align}
M_{AB} & = \{M_{0a} \equiv G_a, J_{ab} \}\,, & P_A & = \{P_0 \equiv H, P_a \} \,.
\label{DecompGen}
\end{align}
In this case, the Poincar\'e algebra decomposes as
\begin{align}
\left[G_a, P_b\right] &= \delta_{a b} H\,, & \left[G_a, H\right] & = P_a \,, &\left[J_{a b}, P_c\right] &= \delta_{b c} P_a - \delta_{a c} P_b\,, \nonumber\\
\left[J_{a b}, G_{c}\right] &= \delta_{b c} G_a - \delta_{a c} G_b \,, &\left[J_{a b}, J_{c d}\right] &= 4 \delta_{[a[c}J_{d]b]}\,, & \left[G_a, G_b\right] & = J_{a b}\,.
\label{decomposealgebra}
\end{align}
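For instance, the last commutator follows directly from the relativistic algebra: with the mostly-plus signature $\eta_{AB} = \mathrm{diag}(-1,+1,\ldots,+1)$ implied by the Euclidean $\delta_{ab}$ above,
\begin{align}
\left[G_a, G_b\right] = \left[M_{0a}, M_{0b}\right] = 4\, \eta_{[0[0} M_{b]a]} = \eta_{00}\, M_{ba} = M_{ab} = J_{ab} \,,
\end{align}
since all terms proportional to $\eta_{0a}$ vanish.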
Based on the spacetime decomposed Poincar\'e algebra, we may discuss the non-relativistic and ultra-relativistic Lie algebra expansions and action principles.
\subsection{Non-Relativistic Algebras and Actions}\label{NRPrelim}
\paragraph{}
The non-relativistic higher-dimensional algebras are achieved with the following choice for the generators \cite{Bergshoeff:2019ctr}
\begin{align}
V_0 & = \{J_{ab}, H\}\,, & V_1 & = \{P_a, G_a\} \,.
\end{align}
With this choice of generators, we can follow the prescription that we presented in \eqref{GenExp}. Thus, the Lie algebra expansion of the spacetime decomposed Poincar\'e algebra is given by
\begin{align}
\left[G_a^{(2m+1)}, P_b^{(2n+1)}\right] &= \delta_{a b} H^{(2m+2n+2)}\,, & \left[G_a^{(2m+1)}, H^{(2n)}\right] & = P_a^{(2m+2n+1)} \,, \nonumber\\
\left[J_{a b}^{(2m)}, P_c^{(2n+1)}\right] &=2 \delta_{c[b} P_{a]}^{(2m + 2n+1)} \,, & \left[J_{a b}^{(2m)}, G_c^{(2n+1)}\right] &=2 \delta_{c[b} G_{a]}^{(2m + 2n+1)} \,, \nonumber\\
\left[J_{a b}^{(2m)}, J_{c d}^{(2n)}\right] &= 4 \delta_{[a[c}J_{d]b]}^{(2m+2n)}\,, & \left[G_a^{(2m+1)}, G_b^{(2n+1)}\right] & = J_{a b}^{(2m+2n+2)}\,.
\label{NRInfinite}
\end{align}
To provide the well-known non-relativistic algebras that arise as a consistent truncation of this infinite-dimensional algebra, let us first focus on the simplest case where we have two even and two odd generators, $P_a^{(1)}, G_a^{(1)}, H^{(0)}, J_{ab}^{(0)}$. These generators satisfy the Galilei algebra
\begin{align}
\left[G_a, H\right] & = P_a \,, &\left[J_{a b}, P_c\right] &= \delta_{b c} P_a - \delta_{a c} P_b\,, \nonumber\\
\left[J_{a b}, G_{c}\right] &= \delta_{b c} G_a - \delta_{a c} G_b \,, &\left[J_{a b}, J_{c d}\right] &= 4 \delta_{[a[c}J_{d]b]}\,,
\label{GalAlb}
\end{align}
where we relabeled the generators as $P_a^{(1)} = P_a, G_a^{(1)} = G_a, H^{(0)} = H$ and $J_{ab}^{(0)} = J_{ab}$. According to the consistent truncation conditions, we may now add two more generators that belong to $V_0$, i.e., $H^{(2)}$ and $J_{ab}^{(2)}$. However, as we will discuss momentarily, this truncation does not have a corresponding invariant action principle that can be achieved by the expansion of the Einstein-Hilbert action in four and higher dimensions. The next consistent truncation requires two additional generators of odd character, $P_a^{(3)}$ and $G_a^{(3)}$, in which case the algebra becomes identical to the one that underlies the first-order formulation of Newtonian gravity \cite{Hansen:2019vqf}
\begin{align}
\left[G_a, H\right] & = P_a \,, &\left[J_{a b}, P_c\right] &= \delta_{b c} P_a - \delta_{a c} P_b\,, & \left[J_{a b}, G_{c}\right] &= \delta_{b c} G_a - \delta_{a c} G_b \,, \nonumber\\
\left[J_{a b}, T_c\right] &= \delta_{b c} T_a - \delta_{a c} T_b\,, & \left[J_{a b}, B_c\right] &= \delta_{b c} B_a - \delta_{a c} B_b\,, & \left[J_{a b}, J_{c d}\right] &= 4 \delta_{[a[c}J_{d]b]}\,, \nonumber\\
\left[J_{a b}, S_{c d}\right] &= 4 \delta_{[a[c}S_{d]b]}\,, & \left[G_a, P_b\right] &= \delta_{a b} M \,, & \left[B_a, H\right] & = T_a \,,\nonumber\\
\left[G_a, M\right] & = T_a \,, & \left[G_a, G_b\right] & = S_{a b}\,, & \left[S_{a b}, P_c\right] &= \delta_{b c} T_a - \delta_{a c} T_b \,,\nonumber\\
\left[S_{a b}, G_c\right] &= \delta_{b c} B_a - \delta_{a c} B_b\,,
\label{NewtAlg}
\end{align}
where we labeled $H^{(2)} = M$, $J_{ab}^{(2)} = S_{ab}, P_a^{(3)} = T_a$ and $G_a^{(3)} = B_a$.
For the construction of an action principle, we first need to space-time decompose the gauge fields of the Poincar\'e algebra, namely vielbein $E_\mu{}^A$ and the spin connection $\Omega_\mu{}^{AB}$, i.e.
\begin{align}
E^A & = \{E^0 = T, E^a\}\,, & \Omega^{AB} & = \{\Omega^{0a} = \Omega^a, \Omega^{ab}\} \,.
\label{DecompFields}
\end{align}
This step can then be followed by their expansion in line with the expansion of their corresponding generator \eqref{DecompGen}.
\begin{align}
T & = \sum_{n=0}^{N_0} \lambda^{2n} \tau_{(2n)} \,, & E^a & = \sum_{n=0}^{N_1} \lambda^{2n+1} e^a_{(2n+1)} \,,\nonumber\\
\Omega^{ab} & = \sum_{n=0}^{N_0} \lambda^{2n} \omega^{ab}_{(2n)} \,, & \Omega^a & = \sum_{n=0}^{N_1} \lambda^{2n+1} \omega^a_{(2n+1)} \,.
\label{ExpGF}
\end{align}
These expressions can be used in the space-time decomposed Einstein-Hilbert action in the first-order formulation to generate invariant non-relativistic gravity models. To perform the expansion explicitly, let us focus on three and four dimensions; however, our arguments are dimension-independent. In four dimensions, the space-time decomposed action is given by
\begin{eqnarray}
\mathcal{L}_{EH} &=& \epsilon_{ABCD}R^{AB}\wedge E^{C}\wedge E^{D} = 2\epsilon_{a b c} \left(R(\Omega^{a})\wedge E^b \wedge E^c - R(\Omega^{a b})\wedge E^c \wedge T \right) \,.
\end{eqnarray}
Here $R^{AB}$ refers to the group-theoretical curvature of the spin connection and $R(\Omega^a)$ and $R(\Omega^{ab})$ refer to its spacetime decomposition with respect to \eqref{DecompFields}. While these curvatures can simply be read off from the Poincar\'e algebra, it is useful to present their expressions explicitly for future purposes
\begin{align}
R(\Omega^a) &= d \Omega^a + \Omega^{ab}\wedge \Omega_b\,, & R(\Omega^{ab}) &= d\Omega^{ab} - \Omega^{ac} \wedge \Omega^b{}_c - \Omega^a \wedge \Omega^b \,.
\label{DecompCurvatures}
\end{align}
The expanded action is then given by
\begin{eqnarray}
\mathcal{L} &=& -2 \epsilon_{abc} \sum_{m=0}^{N_0} \sum_{n=0}^{N_1} \sum_{\ell=0}^{N_0} \lambda^{2m+2n+2\ell +1} R(\omega^{ab}_{(2m)}) \wedge e^c_{(2n+1)} \wedge \tau_{(2\ell)} \nonumber\\
&& + 2 \epsilon_{abc} \sum_{m=0}^{N_1} \sum_{n=0}^{N_1} \sum_{\ell=0}^{N_1} \lambda^{2m+2n+2\ell +3} R(\omega^a_{(2m+1)}) \wedge e^b_{(2n+1)} \wedge e^c_{(2\ell +1)} \,.
\label{4dNRExp}
\end{eqnarray}
As mentioned, the consistent truncation of the algebra as well as the action requires a certain relation between $N_0$ and $N_1$. In the case of the algebra, the necessary condition was that either $N_0 = N_1$ or $N_0 = N_1 +1$ must be satisfied. In the case of the action, the situation is more subtle. For example, let us consider the expansion to order $(N_0,N_1) = (1,1)$
\begin{eqnarray}
\mathcal{L} &=& \lambda \left(-2 \epsilon_{abc} R^{ab}(\omega) \wedge e^c \wedge \tau \right) \nonumber\\
&& + \lambda^3 \Big( -2 \epsilon_{abc} \left(R^{ab}(\omega) \wedge e^c \wedge m + R^{ab}(s) \wedge e^c \wedge \tau + R^{ab}(\omega) \wedge t^c \wedge \tau \right)\nonumber\\
&& \qquad \,\,\, + 2 \epsilon_{abc} R^a(\Omega) \wedge e^b \wedge e^c \Big) \,,
\end{eqnarray}
where we set
\begin{align}
\tau_{(0)} & = \tau \,, & \tau_{(2)} & = m \,, & \omega^{ab}_{(0)} &= \omega^{ab} \,, & \omega^{ab}_{(2)} & = s^{ab} \,,\nonumber\\
e^a_{(1)} &= e^a \,, & e^a_{(3)} &= t^a \,, & \omega^a_{(1)} & = \Omega^a & \omega^a_{(3)} & = b^a \,.
\label{NewtonExpand}
\end{align}
Furthermore, the group theoretical curvatures $R^{ab}(\omega)$, $R^{ab}(s)$ and $R^a(\omega)$ can easily be read off by expanding the curvatures \eqref{DecompCurvatures} to the $\lambda^3$ order
\begin{align}
R^{ab}(\omega) &= d\omega^{ab} - \omega^{ac} \wedge \omega^b{}_c \,, & R^{ab}(s) & = ds^{ab} - 2 \omega^{ac} \wedge s^b{}_c - \omega^a \wedge \omega^b \,,\nonumber\\
R^{a}(\omega) & = d\omega^a + \omega^{ab} \wedge \omega_b \,.
\label{NewtonianCurvatures}
\end{align}
In this Lagrangian, the $\lambda$-order model is Galilei gravity, which is invariant under the Galilei algebra \eqref{GalAlb}. The $\lambda^3$-order Lagrangian is the first-order formulation of Newtonian gravity, which is invariant under \eqref{NewtAlg}. While we truncated the algebra and the expansion at order $(N_0,N_1) = (1,1)$, we could have stopped at order $(N_0,N_1) = (1,0)$. In that case, the $\lambda$-order Lagrangian remains unaltered, but the $\lambda^3$-order Lagrangian lacks the $R^{ab}(\omega) \wedge t^c \wedge \tau$ term, which spoils its gauge invariance. Although we provide a four-dimensional example here, the argument holds in general. It is best to think of the expansion as a truncation of the infinite-dimensional algebra and the action: unless all terms that contribute to a certain $\lambda$-order are taken into account, the action is not gauge-invariant \cite{Bergshoeff:2019ctr}. In four dimensions, the expanded Lagrangian \eqref{4dNRExp} shows that a consistent truncation requires $N_0 = N_1$.
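This completeness bookkeeping can be checked mechanically. The following Python sketch (our own illustration, not drawn from the references) enumerates the monomials $(m,n,\ell)$ of the expanded Lagrangian \eqref{4dNRExp} and tests whether a truncation $(N_0,N_1)$ retains every term at a given $\lambda$-order:

```python
from itertools import product

def terms_4d_nr(N0, N1):
    """Monomials of the expanded 4d Lagrangian (4dNRExp): the first sum carries
    (even curvature, odd vielbein, even clock form); the second sum carries
    (odd curvature, odd vielbein, odd vielbein). Entries record lambda-levels."""
    terms = set()
    for m, n, l in product(range(N0 + 1), range(N1 + 1), range(N0 + 1)):
        terms.add(("R(w_ab)", 2 * m, 2 * n + 1, 2 * l))
    for m, n, l in product(range(N1 + 1), repeat=3):
        terms.add(("R(w_a)", 2 * m + 1, 2 * n + 1, 2 * l + 1))
    return terms

def complete_at(N0, N1, k):
    """True if the (N0, N1) truncation keeps every lambda^k monomial of the
    untruncated expansion (approximated here by the generous cutoff (k, k))."""
    full = {t for t in terms_4d_nr(k, k) if sum(t[1:]) == k}
    kept = {t for t in terms_4d_nr(N0, N1) if sum(t[1:]) == k}
    return kept == full

print(complete_at(1, 1, 3))  # True: (1,1) is complete at lambda^3
print(complete_at(1, 0, 3))  # False: the R(w) ^ t ^ tau monomial is missing
```

Running the sketch reproduces the statement above: the truncation $(1,0)$ misses exactly the $t$-dependent monomial at order $\lambda^3$, while $(1,1)$ is complete.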
In three dimensions, by contrast, the argument above implies that a consistent truncation requires $N_0 = N_1 + 1$. To see this, consider the spacetime decomposed Einstein-Hilbert Lagrangian \cite{Bergshoeff:2019ctr}
\begin{eqnarray}
\mathcal{L} &=& 2 R(\Omega) \wedge T + 2 \epsilon_{ab} R(\Omega^a) \wedge E^b \,,
\label{Decomposed3D}
\end{eqnarray}
where
\begin{align}
R(\Omega) &= d \Omega - \epsilon_{ab} \Omega^a \wedge \Omega^b \,, & R(\Omega^a) & = d\Omega^a - \epsilon^a{}_b \, \Omega \wedge \Omega^b \,,
\end{align}
where we used the fact that in two dimensions $\Omega^{ab}$ can be written as $\Omega^{ab} = \epsilon^{ab} \Omega$. The expansion of the gauge fields \eqref{ExpGF} gives rise to the following Lagrangian
\begin{eqnarray}
\mathcal{L} &=& 2 \sum_{m=0}^{N_0} \sum_{n=0}^{N_0} \lambda^{2m+2n} R(\omega_{(2m)}) \wedge \tau_{(2n)} + 2 \epsilon_{ab} \sum_{m=0}^{N_1} \sum_{n=0}^{N_1} \lambda^{2m+2n+2} R(\omega^a_{(2m+1)}) \wedge e^b_{(2n+1)} \,. \qquad
\end{eqnarray}
This form of the Lagrangian suggests that unless $N_0 = N_1+1$ we cannot obtain all contributions at a given $\lambda^{2n}$-order. For instance, let us consider the expansion of the Lagrangian to $\lambda^2$-order
\begin{eqnarray}
\mathcal{L} &=& \left(2 R(\omega) \wedge \tau \right) + \lambda^2 \left(2 \left(R(\omega) \wedge m + R(s) \wedge \tau + \epsilon_{ab} R^a(\omega) \wedge e^b \right) \right) \,,
\end{eqnarray}
where
\begin{align}
R(\omega) & = d\omega \,, & R(s) & = ds - \epsilon_{ab} \omega^a \wedge \omega^b \,, & R^a(\omega) & = d\omega^a - \epsilon^{a}{}_b \, \omega \wedge \omega^b \,.
\label{Curvatures3d}
\end{align}
The zeroth-order action is the three-dimensional Galilei gravity. The $\lambda^2$-order theory is known as extended Bargmann gravity \cite{Papageorgiou:2009zc}. The complete $\lambda^2$ contribution to the Lagrangian is obtained precisely when $N_0 = 1$ and $N_1 = 0$. Consequently, the necessary transformation rules for the gauge fields can be found by truncating the infinite-dimensional algebra \eqref{NRInfinite} while keeping two more even generators than odd generators. In the case of extended Bargmann gravity, the corresponding truncation is known as the extended Bargmann algebra \cite{Papageorgiou:2009zc}
\begin{align}
\left[G_a, H\right] & = P_a \,, &\left[J, P_a\right] &= -\epsilon_{ab} P^b \,, &\left[J, G_a\right] &= -\epsilon_{ab} G^b \,, \nonumber\\
\left[G_a, P_b \right] & = \delta_{ab} M \,, & \left[G_a, G_b\right] & = - \epsilon_{ab} S \,,
\label{3dEBA}
\end{align}
where we set $J_{ab} = \epsilon_{ab} J$ and $S_{ab} = \epsilon_{ab} S$. At the next order of consistent truncation, one obtains the extended Newtonian gravity \cite{Ozdemir:2019orp}, whose details we defer to Section \ref{D3NR}.
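As a quick sanity check (our own, with a basis labeling we chose for the illustration), the Jacobi identity for the extended Bargmann algebra \eqref{3dEBA} can be verified numerically from its structure constants:

```python
import numpy as np

# Basis for the 3d extended Bargmann algebra, with eps_{12} = +1 and the
# central charges M, S; the ordering below is our own choice.
names = ["J", "H", "G1", "G2", "P1", "P2", "M", "S"]
idx = {n: i for i, n in enumerate(names)}
dim = len(names)

# Nonzero brackets [X, Y] (X listed first); all other brackets vanish.
table = {
    ("G1", "H"): {"P1": 1}, ("G2", "H"): {"P2": 1},
    ("J", "P1"): {"P2": -1}, ("J", "P2"): {"P1": 1},
    ("J", "G1"): {"G2": -1}, ("J", "G2"): {"G1": 1},
    ("G1", "P1"): {"M": 1}, ("G2", "P2"): {"M": 1},
    ("G1", "G2"): {"S": -1},
}

# Structure constants f[i, j, k]: [T_i, T_j] = f[i, j, k] T_k.
f = np.zeros((dim, dim, dim))
for (x, y), rhs in table.items():
    for z, c in rhs.items():
        f[idx[x], idx[y], idx[z]] = c
        f[idx[y], idx[x], idx[z]] = -c  # antisymmetry of the bracket

# Jacobiator [[T_i,T_j],T_k] + [[T_j,T_k],T_i] + [[T_k,T_i],T_j] must vanish.
jac = (np.einsum("ijm,mkl->ijkl", f, f)
       + np.einsum("jkm,mil->ijkl", f, f)
       + np.einsum("kim,mjl->ijkl", f, f))
print(np.allclose(jac, 0))  # True: the brackets close into a Lie algebra
```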
\subsection{Ultra-Relativistic Algebras and Actions}
\paragraph{}
The ultra-relativistic higher-dimensional algebras are obtained with the following choice for the generators: $V_0 = \{J_{ab}, P_a\}$, $V_1 = \{H, G_a\}$.
Thus, the Lie algebra expansion of the spacetime decomposed Poincar\'e algebra is given by
\begin{align}
\left[G_a^{(2m+1)}, P_b^{(2n)}\right] &= \delta_{a b} H^{(2m+2n+1)}\,, & \left[G_a^{(2m+1)}, H^{(2n+1)}\right] & = P_a^{(2m+2n+2)} \,, \nonumber\\
\left[J_{a b}^{(2m)}, P_c^{(2n)}\right] &=2 \delta_{c[b} P_{a]}^{(2m + 2n)} \,, & \left[J_{a b}^{(2m)}, G_c^{(2n+1)}\right] &=2 \delta_{c[b} G_{a]}^{(2m + 2n+1)} \,, \nonumber\\
\left[J_{a b}^{(2m)}, J_{c d}^{(2n)}\right] &= 4 \delta_{[a[c}J_{d]b]}^{(2m+2n)}\,, & \left[G_a^{(2m+1)}, G_b^{(2n+1)}\right] & = J_{a b}^{(2m+2n+2)}\,.
\label{URInfinite}
\end{align}
The simplest truncation of this infinite-dimensional algebra is known as the Carroll algebra \cite{Duval:2014uoa, Duval:2014uva, Bergshoeff:2014jla, Duval:2014lpa, Hartong:2015xda,Bergshoeff:2017btm}
\begin{align}
[C_a, P_b] & = \delta_{ab} H \,, &\left[J_{a b}, P_c\right] &= \delta_{b c} P_a - \delta_{a c} P_b\,, \nonumber\\
\left[J_{a b}, C_{c}\right] &= \delta_{b c} C_a - \delta_{a c} C_b \,, &\left[J_{a b}, J_{c d}\right] &= 4 \delta_{[a[c}J_{d]b]}\,,
\label{CarAlb}
\end{align}
where we set $G_a^{(1)} = C_a$ to indicate that it is the generator of Carrollian boosts. Due to the change of the expansion character of the generators, the spacetime decomposed vielbein and the spin-connection are now given by
\begin{align}
T & = \sum_{n=0}^{N_1} \lambda^{2n+1} \tau_{(2n+1)} \,, & E^a & = \sum_{n=0}^{N_0} \lambda^{2n} e^a_{(2n)} \,,\nonumber\\
\Omega^{ab} & = \sum_{n=0}^{N_0} \lambda^{2n} \omega^{ab}_{(2n)} \,, & \Omega^a & = \sum_{n=0}^{N_1} \lambda^{2n+1} \omega^a_{(2n+1)} \,.
\label{ExpUR}
\end{align}
As a result, the spacetime decomposed Einstein-Hilbert action is expanded as
\begin{eqnarray}
\mathcal{L} &=& -2 \epsilon_{abc} \sum_{m=0}^{N_0} \sum_{n=0}^{N_0} \sum_{\ell=0}^{N_1} \lambda^{2m+2n+2\ell +1} R(\omega^{ab}_{(2m)}) \wedge e^c_{(2n)} \wedge \tau_{(2\ell+1)} \nonumber\\
&& + 2 \epsilon_{abc} \sum_{m=0}^{N_1} \sum_{n=0}^{N_0} \sum_{\ell=0}^{N_0} \lambda^{2m+2n+2\ell +1} R(\omega^a_{(2m+1)}) \wedge e^b_{(2n)} \wedge e^c_{(2\ell)} \,.
\label{4dURExp}
\end{eqnarray}
Note that this case is much simpler than the non-relativistic one: all terms in the Lagrangian are expanded at the same $\lambda$-orders. Hence, all possible contributions at any given $\lambda$-order are guaranteed to be included as long as we have an equal number of even and odd generators. At the lowest order $(\lambda)$ we have Carroll gravity \cite{Ravera:2019ize, Ali:2019jjp, Hartong:2015xda,Bergshoeff:2017btm}
\begin{eqnarray}
\mathcal{L} &=& -2 \epsilon_{abc} R^{ab}(\omega) \wedge e^c \wedge \tau + 2 \epsilon_{abc} R^a(\omega) \wedge e^b \wedge e^c \,,
\end{eqnarray}
where we set
\begin{align}
\tau_{(1)} & = \tau \,, &e^a_{(0)} &= e^a \,, & \omega^{ab}_{(0)} &= \omega^{ab} \,, & \omega^a_{(1)} & = \omega^a \,.
\label{CarrollExp}
\end{align}
For the next-to-leading order Lagrangian $(\lambda^3)$, we have
\begin{eqnarray}
\mathcal{L} &=& -2 \epsilon_{abc} R^{ab}(s) \wedge e^c \wedge \tau -2 \epsilon_{abc} R^{ab}(\omega) \wedge t^c \wedge \tau -2 \epsilon_{abc} R^{ab}(\omega) \wedge e^c \wedge m \nonumber\\
&& + 2 \epsilon_{abc} R^a(b) \wedge e^b \wedge e^c + 4 \epsilon_{abc} R^a(\omega) \wedge e^b \wedge t^c \,,
\end{eqnarray}
where we have
\begin{align}
\tau_{(1)} & = \tau \,, & \tau_{(3)} & = m \,, & \omega^{ab}_{(0)} &= \omega^{ab} \,, & \omega^{ab}_{(2)} & = s^{ab} \,,\nonumber\\
e^a_{(0)} &= e^a \,, & e^a_{(2)} &= t^a \,, & \omega^a_{(1)} & = \omega^a \,, & \omega^a_{(3)} & = b^a \,.
\label{BeyondCarrollExp}
\end{align}
Here, the group theoretical curvatures read
\begin{align}
R^{ab}(\omega) & = d\omega^{ab} - \omega^{ac} \wedge \omega^b{}_c \,, & R^a(\omega) & = d\omega^a + \omega^{ab} \wedge \omega_b \,, \nonumber\\
R^{ab}(s) & = ds^{ab} - 2 \omega^{ac} \wedge s^b{}_c - \omega^a \wedge \omega^b \,, & R^a(b) & = db^a + \omega^{ab} \wedge b_b + s^{ab} \wedge \omega_b \,.
\end{align}
This Lagrangian is invariant under the following extension of the Carroll algebra
\begin{align}
[C_a, P_b] & = \delta_{ab} H \,, &\left[J_{a b}, P_c\right] &= \delta_{b c} P_a - \delta_{a c} P_b\,, & \left[J_{a b}, C_{c}\right] &= \delta_{b c} C_a - \delta_{a c} C_b \,, \nonumber\\
\left[J_{a b}, J_{c d}\right] &= 4 \delta_{[a[c}J_{d]b]}\,, & [B_a, P_b] & = \delta_{ab} M \,, & [C_a, T_b] & = \delta_{ab} M \,, \nonumber\\
[C_a, H] & =T_a \,, & \left[J_{a b}, T_c\right] &= \delta_{b c} T_a - \delta_{a c} T_b\,, & \left[S_{a b}, P_c\right] &= \delta_{b c} T_a - \delta_{a c} T_b\,, \nonumber\\
\left[J_{a b}, B_c\right] &= \delta_{b c} B_a - \delta_{a c} B_b\,, & \left[S_{a b}, C_c\right] &= \delta_{b c} B_a - \delta_{a c} B_b\,, & \left[J_{a b}, S_{c d}\right] &= 4 \delta_{[a[c}S_{d]b]}\,,\nonumber\\
\left[C_a, C_b\right] & = S_{a b}\,,
\label{ExtCarAlb}
\end{align}
where we labeled $H^{(3)} = M$ and $P_a^{(2)} = T_a$. To our knowledge, it is still an open problem to show the equivalence of the first-order action given above and the second-order beyond-Carrollian action of \cite{Hansen:2021fxi}. Nevertheless, we refer to this model as beyond-Carrollian gravity. Note that three dimensions is not an exceptional case here, unlike in the non-relativistic expansion: all terms in the spacetime decomposed three-dimensional Einstein-Hilbert Lagrangian \eqref{Decomposed3D} still appear at the same order in the expansion. Thus, the argument that we presented for the consistency of the truncation also applies to three dimensions, and we will not treat $D=3$ as an exceptional case.
\section{Non-Relativistic Scaling Limit of Multimetric Gravity}\label{MultiGrav}
\paragraph{}
In this section, we introduce a non/ultra-relativistic scaling limit for multimetric gravity models based on the Lie algebra expansion that we discussed in the previous section. As mentioned, the key to a well-defined scaling limit is to use multiple copies of the Einstein-Hilbert action in order to cancel the divergent lower-order Lagrangian(s). To provide a concrete example, let us consider the non-relativistic expansion of the Einstein-Hilbert action to order $\lambda$, i.e.
\begin{eqnarray}
\mathcal{L} &=& M_1^2 \lambda \left(-2 \epsilon_{abc} R^{ab}(\omega) \wedge e^c \wedge \tau \right) + \mathcal{O}(\lambda^3)
\end{eqnarray}
where $M_1$ is a mass parameter that multiplies the Einstein-Hilbert action. Note that we expand the fields based on their expansion character \eqref{ExpGF}
\begin{align}
\tau_{(0)} & = \tau \,, & e^a_{(1)} & = e^a \,, & \omega^{ab}_{(0)} & = \omega^{ab} \,, & \omega^a_{(1)} & = \omega^a \,.
\end{align}
If we rescale the mass parameter $M_1^2$ as $M_1^2 \to M_1^2 / \lambda$ and take the limit $\lambda \to 0$, we precisely recover Galilei gravity as a non-relativistic limit of General Relativity. Let us now consider the next order in expansion
\begin{eqnarray}
\mathcal{L} &=&M_1^2 \lambda \left(-2 \epsilon_{abc} R^{ab}(\omega) \wedge e^c \wedge \tau \right) \nonumber\\
&& + M_1^2 \lambda^3 \Big( -2 \epsilon_{abc} \left(R^{ab}(\omega) \wedge e^c \wedge m + R^{ab}(s) \wedge e^c \wedge \tau + R^{ab}(\omega) \wedge t^c \wedge \tau \right)\nonumber\\
&& \qquad \qquad + 2 \epsilon_{abc} R^a(\omega) \wedge e^b \wedge e^c \Big) + \mathcal{O}(\lambda^5) \,,
\end{eqnarray}
where the fields are expanded in accordance with \eqref{NewtonExpand}. Clearly, to single out the $\lambda^3$ action it is not sufficient to rescale the mass parameter as $M_1^2 \to M_1^2/\lambda^3$ and take the $\lambda \to 0$ limit, since the coefficient of the Galilei gravity term diverges in that limit. Thus, we must first cancel the Galilei gravity action and only then perform the appropriate scaling of the mass parameter and the limit. This can be achieved by considering two copies of the Einstein-Hilbert action
\begin{eqnarray}
\mathcal{L} &=& M_1^2 \left[2\epsilon_{a b c} \left(R(\Omega^{a})\wedge E^b \wedge E^c - R(\Omega^{a b})\wedge E^c \wedge T \right)\right] \nonumber\\
&& + M_2^2 \left[2\epsilon_{a b c} \left(R(\bar \Omega^{a})\wedge \bar E^b \wedge \bar E^c - R(\bar\Omega^{a b})\wedge \bar E^c \wedge \bar T \right)\right] \,,
\end{eqnarray}
where the second set of gauge fields is $\{\bar T\,, \bar E^a\,, \bar \Omega^a\,, \bar \Omega^{ab} \}$. To cancel the Galilei gravity, we may keep the expansion of the first set of fields as in \eqref{NewtonExpand} but expand the second set as in \eqref{ExpGF} with the following definitions
\begin{align}
\bar \tau_{(0)} & = - \tau \,, & \bar \tau_{(2)} & = m \,, &\bar \omega^{ab}_{(0)} &= \omega^{ab} \,, &\bar \omega^{ab}_{(2)} & =- s^{ab} \,,\nonumber\\
\bar e^a_{(1)} &= e^a \,, & \bar e^a_{(3)} &= - t^a \,, & \bar \omega^a_{(1)} & = \omega^a & \bar \omega^a_{(3)} & = b^a \,.
\label{NewtonExpandC2}
\end{align}
With this choice of fields, the Galilei gravity term comes with an opposite sign and cancels the contribution from the first copy of the Einstein-Hilbert action as long as $M_1^2 = M_2^2$. On the other hand, the $\mathcal{O}(\lambda^3)$ terms come with the same numerical factors and signs. Consequently, if we adopt the following scaling of the mass parameters
\begin{align}
M_1^2 = M_2^2 = \frac{M^2}{2\lambda^3} \,,
\end{align}
we precisely recover the first-order formulation of Newtonian gravity after taking the limit $\lambda \to 0$. In this section, our purpose is to systematize the non/ultra-relativistic scaling limit of multimetric gravity along these lines. Before proceeding to the actual computation, we remind the reader that invariant non-relativistic actions in dimensions $D\geq 4$ require an equal number of even and odd fields in the expansion, while $D=3$ is an exception, requiring two more even fields than odd fields \cite{Bergshoeff:2019ctr}. Thus, we shall investigate these two cases separately.
The spacetime decomposed Einstein-Hilbert action in $D$ dimensions takes the following form
\begin{eqnarray}
\mathcal{L}_{EH} &=& 2 \epsilon_{a_1 a_2 \ldots a_{d-1} a_d} \, E^{a_1} \wedge \ldots \wedge E^{a_{d-1}} \wedge R(\Omega^{a_d}) \nonumber\\
&& + (d-1) \epsilon_{a_1 a_2 \ldots a_{d-1} a_d} \, T \wedge E^{a_1} \wedge \ldots \wedge E^{a_{d-2}} \wedge R(\Omega^{a_{d-1} a_d}) \,,
\label{DExp}
\end{eqnarray}
where $d = D-1$ is the number of spatial dimensions. For the non-relativistic expansion, the form of the relativistic fields \eqref{ExpGF} suggests that we should investigate the $d=2$ and $d > 2$ cases separately, since for $d=2$ all $E^a$ factors drop out of the second term of the Lagrangian. The reason is the following. Consider the expansion of the fields to order $(N_0, N_1)$, where either $N_0 = N_1$ or $N_0 = N_1 + 1$. It is sufficient to track the appearance of the highest-order even and odd fields to see which choice includes all terms at a given order $\mathcal{O}(\lambda^n)$. If the highest odd field is of order $\lambda^{2N+1}$, then we have two choices for the highest even field, depending on the chosen truncation condition:
\begin{itemize}
\item {The highest even field is $\mathcal{O}(\lambda^{2N})$: In this case, the second term in the Lagrangian \eqref{DExp} shows that both the highest even and the highest odd fields appear at order $\lambda^{d - 2 + 2N}$.}
\item{The highest even field is $\mathcal{O}(\lambda^{2N+2})$: In this case, the second term in the Lagrangian \eqref{DExp} shows that the highest odd field appears at order $\lambda^{d - 2 + 2N}$ while the highest even field appears at order $\lambda^{d + 2N}$. This choice cannot define a complete invariant Lagrangian: since we terminate the odd fields at $\mathcal{O}(\lambda^{2N+1})$, the contributions of the $\mathcal{O}(\lambda^{2N+3})$ odd fields to the action at order $\lambda^{d+2N}$ are missed.}
\end{itemize}
Note that the situation changes dramatically for $d=2$. In that case, the second term in \eqref{DExp} involves no odd field and, as explained in Section \ref{NRPrelim}, the consistent truncation requires $N_0 = N_1 + 1$. Thus, we shall investigate $d=2$ and $d > 2$ separately. For $d > 2$ we choose $D=4$ as a representative example; however, our arguments generalize straightforwardly to $D > 4$.
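The order counting in the two bullet points can be sketched mechanically; a minimal illustration (the function name is ours) is:

```python
def first_order_in_second_term(d, level, parity):
    """Lowest lambda-order of the second term of (DExp),
    T ^ E^{a_1} ^ ... ^ E^{a_{d-2}} ^ R(Omega^{a_{d-1} a_d}), containing a
    field of the given expansion level: even fields (tau, omega^{ab}) start
    at lambda^0, each of the d-2 odd vielbeine at lambda^1."""
    if parity == "even":
        return level + (d - 2)   # the d-2 vielbeine each contribute lambda^1
    if parity == "odd":
        if d == 2:
            return None          # no odd field occurs in this term at all
        return level + (d - 3)   # one vielbein at `level`, the rest at lambda^1

# d = 3 (i.e. D = 4), highest odd field at lambda^{2N+1} with N = 1:
print(first_order_in_second_term(3, 2, "even"))  # 3 = d - 2 + 2N: same order
print(first_order_in_second_term(3, 3, "odd"))   # 3 = d - 2 + 2N
print(first_order_in_second_term(3, 4, "even"))  # 5 = d + 2N: appears too late
print(first_order_in_second_term(2, 1, "odd"))   # None: d = 2 is special
```

The outputs reproduce the bullet-point claims: for $d>2$ the choice $N_0 = N_1$ makes the highest even and odd fields enter at the same order, while for $d=2$ the second term contains no odd field at all.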
\subsection{$D=4$}\label{NR4}
\paragraph{}
In $D=4$, the spacetime decomposed Einstein-Hilbert Lagrangian takes the following form
\begin{eqnarray}
\mathcal{L}_{EH} &=& 2M^2 \epsilon_{a b c} \left(R(\Omega^{a})\wedge E^b \wedge E^c - R(\Omega^{a b})\wedge E^c \wedge T \right) \,.
\label{4dEH}
\end{eqnarray}
As mentioned, the Lie algebra expansion of this Lagrangian is formally given by
\begin{eqnarray}
\mathcal{L}_{\rm EH} &=& \lambda M^2 \mathcal{L}_{1} + \lambda^3 M^2 \mathcal{L}_{3} + \lambda^5 M^2 \mathcal{L}_{5} + \ldots \,,
\label{NREH}
\end{eqnarray}
where $\mathcal{L}_{n}$ represents the Lagrangian at $\lambda^n$-order. This form of the Lagrangian implies that if we want to single out the $\mathcal{L}_{2N+1}$ Lagrangian by eliminating all lower-order terms, rescaling the mass parameter by $\lambda^{-(2N+1)}$ and performing the $\lambda \to 0$ scaling limit, we need $N+1$ copies of the Einstein-Hilbert action. An example of non-interacting bi-metric gravity and its Newtonian-gravity limit was given at the beginning of this section, where we kept the expansion \eqref{ExpGF} for the first set of gauge fields and chose the second set so as to cancel the $\mathcal{O}(\lambda)$ action. For a more general treatment, consider a non-interacting multimetric gravity model that involves $N$ sets of spacetime decomposed vielbeine and spin connections
\begin{eqnarray}
\mathcal{L} &=& \sum_{i=1}^N \left[2M_{(i)}^2 \epsilon_{a b c} \left(R(\Omega^{a}_{(i)})\wedge E^b_{(i)} \wedge E^c_{(i)} - R(\Omega^{a b}_{(i)})\wedge E_{(i)}^c \wedge T_{(i)} \right) \right] \,,
\end{eqnarray}
where $i,j=1,\ldots, N$ label the sets of fields $\{T_{(i)},E_{(i)}, \Omega^{ab}_{(i)}, \Omega^a_{(i)}\}$. Following our example for Newtonian gravity, we keep the expansion of the first set of fields $\{T_{(1)}, E_{(1)}, \Omega^a_{(1)}, \Omega^{ab}_{(1)}\}$ the same as \eqref{ExpGF}, that is,
\begin{align}
T_{(1)} & = \sum_{n=0}^{N_0} \lambda^{2n} \tau_{(2n)} \,, & E^a_{(1)} & = \sum_{n=0}^{N_1} \lambda^{2n+1} e^a_{(2n+1)} \,,\nonumber\\
\Omega^{ab}_{(1)} & = \sum_{n=0}^{N_0} \lambda^{2n} \omega^{ab}_{(2n)} \,, & \Omega^a_{(1)} & = \sum_{n=0}^{N_1} \lambda^{2n+1} \omega^a_{(2n+1)} \,.
\label{ExpGF1}
\end{align}
However, for $i > 1$, we can incorporate dimensionless free parameters $\alpha_{i}$ as follows
\begin{align}
T_{(i)} & = \sum_{n=0}^{N_0} \lambda^{2n} \alpha_{i}^{2n} \tau_{(2n)} \,, & E^a_{(i)} & = \sum_{n=0}^{N_1} \lambda^{2n+1} \alpha_{i}^{2n+1} e^a_{(2n+1)} \,,\nonumber\\
\Omega^{ab}_{(i)} & = \sum_{n=0}^{N_0} \lambda^{2n} \alpha_{i}^{2n} \omega^{ab}_{(2n)} \,, & \Omega^a_{(i)} & = \sum_{n=0}^{N_1} \lambda^{2n+1} \alpha_{i}^{2n+1} \omega^a_{(2n+1)} \,.
\label{AlphaExpansion}
\end{align}
where $\alpha_i \neq \alpha_j$ for $i \neq j$. This expansion yields the following form for $\mathcal{L}_{EH,(i)}$, the Einstein-Hilbert Lagrangian for the $i$-th set of fields $\{T_{(i)}, E_{(i)}, \Omega^a_{(i)}, \Omega^{ab}_{(i)}\}$:
\begin{eqnarray}
\mathcal{L}_{EH,(i)} &=& \lambda \alpha_{i} M_{i}^2 \mathcal{L}_{1} + \lambda^3 \alpha_{i}^3 M_{i}^2 \mathcal{L}_{3} + \lambda^5 \alpha_{i}^5 M_{i}^2 \mathcal{L}_{5} + \ldots \,.
\end{eqnarray}
Note that the structure of the Lagrangian at each order in $\lambda$ is unchanged by the inclusion of the free parameters; we simply pick up a coefficient $\alpha_{i}^{2n+1}$ in front of each $\mathcal{L}_{2n+1}$, which is exactly what is needed for the cancellations. Let us now work out some examples and then provide the general formalism.
\begin{enumerate}
\item {\textbf{Galilei Gravity:} The Galilei gravity appears as $\mathcal{L}_1$ in the expansion of Einstein-Hilbert action. As mentioned, it is sufficient to consider a single copy of Einstein-Hilbert action and express the relativistic fields and the mass parameter as
\begin{align}
T &= \tau \,, & E^a &=\lambda e^a \,,& \Omega^{ab} & = \omega^{ab} \,, & \Omega^a & = \lambda \omega^a \,, & M^2 & = \frac{m^2}{\lambda} \,.
\end{align}
Upon taking the $\lambda \to 0$ limit, this choice would lead to the Galilei gravity.}
\item{\textbf{Newtonian Gravity:} The Newtonian gravity appears at the $\lambda^3$-order in the Lie algebra expansion. Thus, we shall consider two copies of Einstein-Hilbert action. For the first copy, we keep the same form as the Lie algebra expansion, i.e.
\begin{align}
T_{1} &= \tau + \lambda^2 m \,, & E^a_{1} &= \lambda e^a + \lambda^3 t^a \,,& \Omega^{ab}_{1} & = \omega^{ab} + \lambda^2 s^{ab} \,, & \Omega^a_{1} & = \lambda \omega^a + \lambda^3 b^a \,.
\end{align}
For the second copy, we incorporate the free parameter $\alpha_2$
\begin{align}
T_{2} &= \tau + \lambda^2 \alpha_2^2 m \,, & E^a_{2} &= \lambda \alpha_2 e^a + \lambda^3 \alpha_2^3 t^a \,,\nonumber\\
\Omega^{ab}_{2} & = \omega^{ab} + \lambda^2 \alpha_2^2 s^{ab} \,, & \Omega^a_{2} & = \lambda \alpha_2 \omega^a + \lambda^3 \alpha_2^3 b^a \,.
\end{align}
The combination of these two Lagrangians with mass parameters $M_{1,2}$ gives rise to
\begin{eqnarray}
\mathcal{L} &=& \lambda M^2 \left(1 + \frac12 \alpha_2\right) \mathcal{L}_1 + \lambda^3 M^2 \left(1+ \frac12 \alpha_2^3\right) \mathcal{L}_3 + \mathcal{O}(\lambda^5) \,,
\end{eqnarray}
where we set $M_1^2 = 2M_2^2 = M^2$ for simplicity. To cancel the first term, we fix $\alpha_2 = -2$, whereupon the coefficient of $\mathcal{L}_3$ becomes $-3$. Then, the following scaling of the mass parameter
\begin{eqnarray}
M^2 = \frac{1}{3 \lambda^3}\, m^2 \,,
\end{eqnarray}
along with the scaling limit $\lambda \to 0$, recovers the action principle for Newtonian gravity up to an overall minus sign.
}
\item{\textbf{Beyond the Newtonian Gravity:} The next-to-Newtonian gravity action arises at order $\mathcal{O}(\lambda^5)$. To perform a proper scaling limit, we need three copies of the Einstein-Hilbert action. Using the expansion of the relativistic fields \eqref{AlphaExpansion}, we obtain the following Lagrangian
\begin{eqnarray}
\mathcal{L} &=& \lambda \left(M_1^2 + M_2^2 \alpha_2 + M_3^2 \alpha_3 \right) \mathcal{L}_1 + \lambda^3 \left(M_1^2 + M_2 ^2\alpha_2^3 + M_3 ^2\alpha_3^3\right) \mathcal{L}_3 \nonumber\\
&& + \lambda^5 \left(M_1^2 + M_2 ^2\alpha_2^5 + M_3 ^2\alpha_3^5 \right) \mathcal{L}_5 + \mathcal{O}(\lambda^7) \,.
\end{eqnarray}
Here, $\mathcal{L}_5$ represents the next-to-Newtonian gravity Lagrangian. For convenience, let us choose $M_1^2 = 2 M_2^2 = \tfrac23 M_3^2 = M^2$. In that case, to cancel the coefficients of the Galilei and Newtonian gravity terms, we need to solve two equations
\begin{align}
0 & = 1 + \frac12 \alpha_2 + \frac32 \alpha_3 \,, & 0 & = 1 + \frac12 \alpha_2^3 + \frac32 \alpha_3^3 \,.
\end{align}
These two equations determine $\alpha_2$ and $\alpha_3$ as $\alpha_2 = -5/4$ and $\alpha_3 = -1/4$. Then, the following scaling of the mass parameter
\begin{eqnarray}
M^2 = \frac{256}{135 \lambda^5}\, m^2 \,,
\end{eqnarray}
along with the scaling limit $\lambda \to 0$, recovers the Lagrangian for next-to-Newtonian gravity up to an overall minus sign.
}
\end{enumerate}
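The coefficient arithmetic in the Newtonian and next-to-Newtonian examples above can be verified symbolically; a minimal sympy check (our own sketch) reads:

```python
import sympy as sp

a2, a3 = sp.symbols("alpha_2 alpha_3")

# Newtonian gravity: M_1^2 = 2 M_2^2 = M^2; cancel the lambda-order coefficient.
assert sp.solve(1 + a2 / 2, a2) == [-2]
assert 1 + sp.Rational(1, 2) * (-2) ** 3 == -3   # surviving coefficient of L_3

# Next-to-Newtonian: M_1^2 = 2 M_2^2 = (2/3) M_3^2 = M^2; cancel lambda, lambda^3.
eqs = [1 + a2 / 2 + sp.Rational(3, 2) * a3,
       1 + a2 ** 3 / 2 + sp.Rational(3, 2) * a3 ** 3]
sols = sp.solve(eqs, [a2, a3], dict=True)
# Discard the degenerate branch alpha_2 = 1, which collides with the first copy.
good = [s for s in sols if s[a2] != 1]
sol = good[0]
assert sol[a2] == sp.Rational(-5, 4) and sol[a3] == sp.Rational(-1, 4)

# Surviving coefficient of L_5, absorbed by M^2 = 256 m^2 / (135 lambda^5):
c5 = 1 + sol[a2] ** 5 / 2 + sp.Rational(3, 2) * sol[a3] ** 5
print(c5)  # -135/256
```

The degenerate branch illustrates why the $\alpha_i$ must be mutually distinct and different from the implicit $\alpha_1 = 1$ of the first copy.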
With these three examples, it is now evident that to single out the Lagrangian at order $\lambda^{2N+1}$ we need to introduce $N+1$ copies of the Einstein-Hilbert action. Using the expansion of the fields, we obtain
\begin{eqnarray}
\mathcal{L} &=& \sum_{n=0}^{N} \lambda^{2n+1} \left( M_1^2 + \sum_{i=2}^{N+1} \alpha_i^{2n+1} M_i^2 \right) \mathcal{L}_{2n+1} + \mathcal{O}(\lambda^{2N+3}) \,.
\end{eqnarray}
To single out the order-$(2N+1)$ Lagrangian, one needs to solve the following $N$ algebraic equations, which determine the values of $\alpha_i$ for $i = 2, 3, \ldots, N+1$:
\begin{eqnarray}
0&=& M_1^2 + \sum_{i=2}^{N+1} \alpha_i^{2n+1} M_i^2 \,, \qquad \text{for} \qquad n = 0,1,\ldots, N-1 \,.
\end{eqnarray}
These values can finally be used in the coefficient of the order-$(2N+1)$ Lagrangian, along with the scaling of the mass parameters $M_i^2 \to M_i^2/\lambda^{2N+1}$. Finally, performing the $\lambda \to 0$ limit removes all higher-order terms and yields the desired non-relativistic model.
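For generic $N$ these steps can be automated. The helper below (our own sketch; the function name is ours, and the masses are assumed to be given in units of $M^2$) solves the cancellation conditions and returns the surviving coefficient of $\mathcal{L}_{2N+1}$:

```python
import sympy as sp

def surviving_coefficient(masses):
    """Given masses = [M_1^2, ..., M_{N+1}^2], solve the N cancellation
    conditions for alpha_2, ..., alpha_{N+1} (the first copy has alpha_1 = 1)
    and return (solution dict, coefficient of L_{2N+1})."""
    N = len(masses) - 1
    alphas = sp.symbols(f"alpha2:{N + 2}")   # alpha2, ..., alpha{N+1}
    eqs = [masses[0] + sum(masses[i + 1] * alphas[i] ** (2 * n + 1)
                           for i in range(N)) for n in range(N)]
    for sol in sp.solve(eqs, list(alphas), dict=True):
        vals = [sol[a] for a in alphas]
        # keep only the branch with mutually distinct alphas, none equal to 1
        if len(set(vals + [sp.Integer(1)])) == N + 1:
            coeff = masses[0] + sum(masses[i + 1] * vals[i] ** (2 * N + 1)
                                    for i in range(N))
            return sol, sp.simplify(coeff)

# Newtonian (N = 1) and next-to-Newtonian (N = 2) cases from the text:
print(surviving_coefficient([1, sp.Rational(1, 2)])[1])                     # -3
print(surviving_coefficient([1, sp.Rational(1, 2), sp.Rational(3, 2)])[1])  # -135/256
```

For larger $N$ the system is of increasingly high polynomial degree, so one would in practice switch to a numerical root finder; the structure of the conditions is unchanged.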
Although we have so far discussed the non-relativistic limit of non-interacting multimetric models, we can also turn on potential terms that give rise to interactions among the $E_{(i)}^a$. Upon spacetime decomposition, such terms typically take the following form in four dimensions
\begin{eqnarray}
\mathcal{L}_{CC} = \epsilon_{abc} J_{ijkl} T_{i} \wedge E^a_j \wedge E^b_k \wedge E^c_l \,,
\label{CosmologicalConstant}
\end{eqnarray}
where $J_{ijkl}$ is a set of constant coefficients of dimension mass-squared. Its off-diagonal elements determine the interaction between the temporal and spatial vielbeine of different gravity sectors. Upon implementing the $\alpha$-expansion, \eqref{ExpGF1} and \eqref{AlphaExpansion}, we notice that the lowest-order contribution arises at $\mathcal{O}(\lambda^3)$, i.e.
\begin{eqnarray}
\mathcal{L} &\sim& \lambda^3 \epsilon_{abc} \tau \wedge e^a \wedge e^b \wedge e^c + \mathcal{O}(\lambda^5) \,,
\end{eqnarray}
where we assumed that the coefficients $J_{ijkl}$ and the parameters $\alpha_{i}$ are not tuned to cancel each other. This is the same expansion order as Newtonian gravity. As we discuss in detail in Appendix \ref{AppA}, this implies that the non-relativistic limit of bi-gravity with a non-vanishing potential is Newtonian gravity with a constant background mass density. For the three-metric model, we need to go to order $\mathcal{O}(\lambda^5)$
\begin{eqnarray}
\mathcal{L} \sim \lambda^5 \epsilon_{abc} \left( m \wedge e^a \wedge e^b \wedge e^c + 3 \tau \wedge t^a\wedge e^b\wedge e^c\right) + \mathcal{O}(\lambda^7) \,.
\end{eqnarray}
Note that there are more than enough free parameters to cancel the order-$\lambda^3$ Lagrangian, so we do not present the details.
\subsection{$D=3$}\label{D3NR}
\paragraph{}
In three dimensions, the spacetime decomposed Einstein-Hilbert Lagrangian \eqref{Decomposed3D} takes the following form
\begin{eqnarray}
\mathcal{L} &=& 2 M \left( R(\Omega) \wedge T + \epsilon_{ab} R(\Omega^a) \wedge E^b \right) \,,
\end{eqnarray}
which implies that upon Lie algebra expansion \eqref{ExpGF} we have the following structure for the Lagrangian
\begin{eqnarray}
\mathcal{L}_{EH} &=& \lambda^0 M \mathcal{L}_0 + \lambda^2 M \mathcal{L}_2 + \lambda^4 M \mathcal{L}_4 + \ldots \,.
\end{eqnarray}
Let us start our investigation with the zeroth-order Lagrangian which describes the three-dimensional Galilei gravity
\begin{eqnarray}
\mathcal{L}_0 &=& 2 \tau \wedge d\omega \,.
\end{eqnarray}
This model only involves the gauge fields of time translations and rotations which, in three dimensions, constitute a $U(1) \times U(1)$ gauge theory. Therefore, this model can be viewed as an off-diagonal $U(1) \times U(1)$ Chern-Simons gauge theory
\begin{eqnarray}
\mathcal{L} &=& 2 Z_1 \wedge d Z_2 \,,
\end{eqnarray}
with the identification $Z_1 = \tau$ and $Z_2 = \omega$, corresponding to the off-diagonal invariant metric $g(Z_1,Z_2) = 1$. At this stage, it is clear that eliminating lower-order Lagrangians to single out a higher-order one is more subtle in $D=3$: it is not sufficient to consider multiple copies of the Einstein-Hilbert action, but the theory must also be extended with a $U(1) \times U(1)$ Chern-Simons gauge theory to cancel the lowest-order Lagrangian; see \cite{Bergshoeff:2016lwr,Ozdemir:2019orp,Ozdemir:2019tby} for particular examples.
Let us now move on to the next order, $\mathcal{L}_2$. In this case, we have
\begin{eqnarray}
\mathcal{L} &=& M \left(2 R(\omega) \wedge \tau \right) + \lambda^2 M \left(2 \left(R(\omega) \wedge m + R(s) \wedge \tau + \epsilon_{ab} R^a(\omega) \wedge e^b \right) \right) + \mathcal{O}(\lambda^4) \,,
\end{eqnarray}
where the curvatures are as defined in \eqref{Curvatures3d}. To obtain this model as a scaling limit, we begin with the three-dimensional Einstein-Hilbert action supplemented by a $U(1) \times U(1)$ Chern-Simons gauge theory
\begin{eqnarray}
\mathcal{L} &=& 2M R(\Omega) \wedge T + 2M \epsilon_{ab} R(\Omega^a) \wedge E^b + 2 M Z_1 \wedge d Z_2 \,.
\end{eqnarray}
The zeroth-order Galilei gravity can then be annihilated by a proper choice of $Z_1$ and $Z_2$, i.e.
\begin{align}
T & = \tau + \lambda^2 m \,, & E^a & = \lambda e^a \,, & \Omega & = \omega + \lambda^2 s \,, & \Omega^a & = \lambda \omega^a \,,\nonumber\\
Z_1 & = - \tau\,, & Z_2 & = \omega \,,
\end{align}
which precisely recovers the $\mathcal{O}(\lambda^2)$ Lagrangian, known as extended Bargmann gravity \cite{Papageorgiou:2009zc}, upon rescaling the mass parameter $M \to M/\lambda^2$ and taking the limit $\lambda \to 0$. Note that this contraction is much simpler than the one presented in \cite{Bergshoeff:2016lwr}, thanks to the Lie algebra expansion. Similarly, moving on to the next order in the Lagrangian, we have
\begin{eqnarray}
\mathcal{L} &=& M \left(2 R(\omega) \wedge \tau \right) + \lambda^2 M \left(2 R(\omega) \wedge m + 2 R(s) \wedge \tau - 2 R^a(\omega) \wedge e_a \right) \nonumber\\
&& + \lambda^4 M \left( R(s) \wedge m + R(z) \wedge \tau - R(\omega) \wedge y - R^a(\omega) \wedge t_a - R^a(b) \wedge e_a \right) + \mathcal{O}(\lambda^6) \,,
\end{eqnarray}
where the $\mathcal{O}(\lambda^4)$ Lagrangian is known as extended Newtonian gravity \cite{Ozdemir:2019orp}. Note that the Lagrangian involves the redefinition of the fields
\begin{align}
e^a & \to \epsilon^{ab} e_b\,, & t^a & \to \epsilon^{ab} t_b \,, & y & \to - y \,,
\end{align}
in the Lie algebra expansion of the fields, to match the existing literature \cite{Bergshoeff:2019ctr}. As in four dimensions, we can reproduce this Lagrangian by considering a three-dimensional bi-gravity model with an additional $U(1) \times U(1)$ Chern-Simons gauge theory
\begin{eqnarray}
\mathcal{L} &=& 2 M_1 R(\Omega_1) \wedge T_1 + 2 M_1 \epsilon_{ab} R(\Omega^a_1) \wedge E^b_1 + 2 M_2 R(\Omega_2) \wedge T_2 \nonumber\\
&& + 2 M_2 \epsilon_{ab} R(\Omega^a_2) \wedge E^b_2 + 2 M_3 Z_1 \wedge d Z_2 \,.
\end{eqnarray}
Using the standard expansion for the first set of fields and implementing the $\alpha$-expansion in the second set of fields
\begin{align}
T_1 & = \tau + \lambda^2 m - \lambda^4 y \,, &\Omega_1 & = \omega + \lambda^2 s + \lambda^4 z \,, & \Omega^a_1 & = \lambda \omega^a + \lambda^3 b^a \,,\nonumber\\
E^a_1 & = \lambda \epsilon^{ab} e_b + \lambda^3 \epsilon^{ab} t_b \,, & Z_1 & = \beta_1 \tau \,, & Z_2 & = \omega \,,\nonumber\\
T_2 & = \tau + \alpha_2^2 \lambda^2 m - \alpha_2^4 \lambda^4 y \,, &\Omega_2 & = \omega +\alpha_2^2 \lambda^2 s +\alpha_2^4 \lambda^4 z \,, & \Omega^a_2 & = \alpha_2 \lambda \omega^a +\alpha_2^3 \lambda^3 b^a \,,\nonumber\\
E^a_2 & = \alpha_2 \lambda \epsilon^{ab} e_b +\alpha_2^3 \lambda^3 \epsilon^{ab} t_b \,,
\label{NewtonianScaling}
\end{align}
we obtain
\begin{eqnarray}
\mathcal{L} &=& \left(M_1 + \beta_1 M_3 + M_2 \right) \left(2 R(\omega) \wedge \tau \right) \nonumber\\
&& + \lambda^2 \left(M_1 + \alpha_2^2 M_2 \right) \left(2 R(\omega) \wedge m + 2 R(s) \wedge \tau - 2 R^a(\omega) \wedge e_a \right) \nonumber\\
&& + \lambda^4 \left(M_1 + \alpha_2^4 M_2 \right) \left( R(s) \wedge m + R(z) \wedge \tau - R(\omega) \wedge y- R^a(\omega) \wedge t_a - R^a(b) \wedge e_a \right) \nonumber\\
&& + \mathcal{O}(\lambda^6) \,.
\end{eqnarray}
Note that the three-dimensional case is no more complicated than the scaling limit in four dimensions. The $U(1) \times U(1)$ gauge theory only annihilates the zeroth-order Lagrangian and does not interfere with the higher-order ones. Consequently, $\beta_1$ can be solved for by using the coefficient of the Galilei gravity, and the remainder of the problem is the same as in the four-dimensional case. To present a solution, let us set $M_1 = M_3 = - 4 M_2 = M$. In that case, we can eliminate the coefficients of the Galilei and the extended Bargmann gravity by the following choices for $\alpha_2$ and $\beta_1$
\begin{align}
\alpha_2 &= 2 \,, & \beta_1 & = - \frac{3}{4} \,.
\end{align}
Finally, rescaling the mass parameter $M$ as
\begin{eqnarray}
M = - \frac{m}{3 \lambda^4} \,,
\end{eqnarray}
and performing the scaling limit $\lambda \to 0$ precisely recovers the extended Newtonian gravity \cite{Ozdemir:2019orp}. Once again, the scaling limit presented in \eqref{NewtonianScaling} is much simpler than the one found in \cite{Ozdemir:2019orp}.
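The arithmetic behind these choices can be checked directly. The following short sketch (our own illustrative check, with the overall scale set to $M=1$) verifies that the Galilei and extended Bargmann coefficients vanish while the extended Newtonian one survives:

```python
from fractions import Fraction as F

# Check of the three-dimensional example above with M1 = M3 = -4*M2 = M,
# setting M = 1 for simplicity.
M1, M2, M3 = F(1), F(-1, 4), F(1)

# beta_1 cancels the zeroth-order (Galilei) coefficient M1 + beta1*M3 + M2:
beta1 = -(M1 + M2) / M3
print(beta1)                      # -> -3/4
assert M1 + beta1 * M3 + M2 == 0

# alpha_2 = 2 cancels the lambda^2 (extended Bargmann) coefficient:
alpha2 = 2
assert M1 + alpha2**2 * M2 == 0

# The lambda^4 (extended Newtonian) coefficient survives as -3*M, which the
# rescaling M = -m/(3*lambda^4) turns into +m/lambda^4:
print(M1 + alpha2**4 * M2)        # -> -3
```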
With these three examples, the systematics of the three-dimensional non-relativistic scaling limit are clear. To start with, one needs $N$ copies of the Einstein-Hilbert action with an additional $U(1) \times U(1)$ Chern-Simons gauge theory
\begin{eqnarray}
\mathcal{L} &=& \sum_{i=1}^N \left[ 2M_{(i)} R(\Omega_{(i)}) \wedge T_{(i)} + 2M_{(i)} \epsilon_{ab} R(\Omega^a_{(i)}) \wedge E^b_{(i)} \right] + 2 M_{N+1} Z_1 \wedge d Z_2 \,.
\end{eqnarray}
The fields can then be expanded using the standard expansion \eqref{ExpGF} for the first set of fields and the $\alpha$-expansion \eqref{AlphaExpansion} for the remaining sets. In addition, the gauge fields $Z_1$ and $Z_2$ must be chosen as
\begin{align}
Z_1 = &\beta_1 \tau \,, & Z_2 & = \omega \,.
\end{align}
These choices lead to the following Lagrangian for the $N$-metric theory
\begin{eqnarray}
\mathcal{L} &=& \left( \beta_1 M_{N+1} + \sum_{i=1}^N M_{i} \right) \mathcal{L}_0 + \sum_{n=1}^{N} \lambda^{2n} \left( M_1 + \sum_{i=2}^{N} \alpha_i^{2n} M_i \right) \mathcal{L}_{2n} + \mathcal{O}(\lambda^{2N+2}) \,.
\end{eqnarray}
The solution for $\beta_1$ is simply
\begin{eqnarray}
\beta_1 = - \frac{1}{M_{N+1}} \sum_{i=1}^N M_{(i)} \,.
\end{eqnarray}
For the remaining coefficients, we can follow the same steps as in $D=4$ and set the coefficients of the Lagrangians up to order $\mathcal{O}(\lambda^{2N-2})$ to zero to single out the Lagrangian at order $\lambda^{2N}$. In the last step, we rescale the mass parameters $M_{(i)} \to M_{(i)} / \lambda^{2N}$, perform the scaling limit $\lambda \to 0$, and obtain the desired non-relativistic Lagrangian.
\subsection{Non-Relativistic Algebras as a Contraction of Multiple Poincar\'e Algebras}
\paragraph{}
The Lie algebra expansion is not just an expansion of the gauge fields but also an expansion of the generators \eqref{GeneratorExpansion}. Thus, it is expected that the scaling limit that we establish here for the Lagrangians can be extended to a relation between $N$ copies of the Poincar\'e algebra and the non-relativistic algebra at the relevant order. The Galilei and the Bargmann algebra have been known to arise from the contraction of the Poincar\'e and Poincar\'e $\oplus$ U(1) algebras, respectively \cite{Andringa:2010it}. In this section, we show that the higher-order non-relativistic algebras that admit an invariant Lagrangian arise from the contraction of multiple copies of the Poincar\'e algebra. There is of course an exception in three dimensions, which requires the addition of a $U(1) \times U(1)$ sector to cancel out divergences.
Let us start our discussion with two representative examples: The Galilei algebra \eqref{GalAlb} as a contraction of the Poincar\'e algebra and the non-relativistic algebra of Newtonian gravity \eqref{NewtAlg} as a contraction of two copies of the Poincar\'e algebra. Consider the spacetime decomposed Poincar\'e algebra \eqref{decomposealgebra}. If we rescale the odd generators as
\begin{align}
P_a & \to \lambda P_a \,, & G_a & \to \lambda G_a \,,
\end{align}
and take the $\lambda \to \infty$ limit, the $[P_a, G_b]$ and $[G_a, G_b]$ commutators vanish, giving rise to the Galilei algebra \eqref{GalAlb}. Next, consider two copies of the spacetime decomposed Poincar\'e algebra, the first set labeled as $\{H^1, P_a^{1}, G_a^{1}, J_{ab}^{1}\}$ and the second as $\{H^{2}, P_a^{2}, G_a^{2}, J_{ab}^2 \}$. Expressing the non-relativistic generators as
\begin{align}
J_{ab} & = J_{ab}^{1} + J_{ab}^{2}\,, & S_{ab} &= \lambda^2 \left( \alpha_1^2 J_{ab}^{1} + \alpha_2^2 J_{ab}^{2}\right) \,,\nonumber\\
H &= H^{1} + H^{2} \,, & M & = \lambda^2 \left( \alpha_1^2 H^{1} + \alpha_2^2 H^{2} \right) \,,\nonumber\\
P_a & = \lambda \left( \alpha_1 P_a^{1} + \alpha_2 P_a^{2} \right) \,, & T_a & = \lambda^3 \left( \alpha_1^3 P_a^{1} + \alpha_2^3 P_a^{2} \right) \,,\nonumber\\
G_a & = \lambda \left( \alpha_1 G_a^{1} + \alpha_2 G_a^{2} \right) \,, & B_a & = \lambda^3 \left( \alpha_1^3 G_a^{1} + \alpha_2^3 G_a^{2} \right) \,,
\end{align}
we precisely recover the commutation relations of the algebra \eqref{NewtAlg} in the limit $\lambda \to 0$, as long as $\alpha_{1}, \alpha_2 \neq 1$ and $\alpha_1 \neq \alpha_2$. Note that we perform the $\lambda \to 0$ limit rather than $\lambda \to \infty$ since the relations are inverted. We can invert these relations to express the generators of the Poincar\'e algebras in terms of the generators of the algebra \eqref{NewtAlg}, in which case the $\lambda \to \infty$ limit must be taken. With the relations in the form given above, however, it is much easier to see that the necessary commutation relations are satisfied. As a matter of fact, the $N$-th order non-relativistic algebra \eqref{NRInfinite} can be established by using $N$ copies of the Poincar\'e algebra
\begin{align}
J_{ab}^{(2n)} & = \lambda^{2n} \bigoplus\limits_{i=1}^{N_0} \alpha_i^{2n} J_{ab}^{i} \,, & H^{(2n)} & = \lambda^{2n} \bigoplus\limits_{i=1}^{N_0} \alpha_i^{2n} H^{i} \,, \nonumber\\
P_a^{(2n+1)} & = \lambda^{2n+1} \bigoplus\limits_{i=1}^{N_1} \alpha_i^{2n+1} P_{a}^{i} \,, & G_a^{(2n+1)} & = \lambda^{2n+1} \bigoplus\limits_{i=1}^{N_1} \alpha_i^{2n+1} G_{a}^{i} \,,
\end{align}
once the $\lambda \to 0$ limit is taken, given that $N_0 = N_1$. Note that this expansion makes sense as long as we impose $\alpha_i \neq 1$ and $\alpha_i \neq \alpha_j$ for $i \neq j$. In this generator expansion, the $\alpha$ parameters are otherwise free and need not be fixed. Once again, these relations can be inverted to construct the elements of the Poincar\'e algebras in terms of the generators of the non-relativistic algebra. However, with this form of the relations, it is trivial to see that the commutation relations \eqref{NRInfinite} are satisfied in the $\lambda \to 0$ limit.
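As a quick consistency check, assuming that $[G_a^i, P_b^i] = \delta_{ab} H^i$ holds in each copy and that generators of different copies commute, the expansion above gives
\begin{align}
\left[ G_a^{(2m+1)}, P_b^{(2n+1)} \right] & = \lambda^{2(m+n+1)} \bigoplus\limits_{i=1}^{N_1} \alpha_i^{2(m+n+1)} \left[ G_{a}^{i}, P_{b}^{i} \right] = \delta_{ab} \, H^{(2(m+n+1))} \,,
\end{align}
which is the expected commutator of \eqref{NRInfinite} and holds exactly, even before the $\lambda \to 0$ limit is taken.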
In three-dimensions, the spacetime decomposed Poincar\'e algebra takes a simpler form
\begin{align}
\left[G_a, H\right] & = P_a \,, &\left[J, P_a\right] &= -\epsilon_{ab} P^b \,, &\left[J, G_a\right] &= -\epsilon_{ab} G^b \,,\nonumber\\
\left[G_a, P_b \right] & = \delta_{ab} H \,, & \left[G_a, G_b\right] & = - \epsilon_{ab} J \,.
\label{3dPoincare}
\end{align}
To obtain the extended Bargmann algebra \eqref{3dEBA}, we need to introduce two central generators $M$ and $S$. Then, if we make the following scaling and redefinition
\begin{align}
H & \to H + \lambda^2 M\,, & J & \to J + \lambda^2 S \,, & P_a & \to \lambda P_a \,, & G_a & \to \lambda G_a \,,
\end{align}
we precisely recover the extended Bargmann algebra. For the next order extended Newtonian algebra \cite{Ozdemir:2019orp}, we need to consider two copies of the Poincar\'e algebra along with two $U(1)$ generators $Y$ and $Z$. In this case, the following definitions precisely recover the extended Newtonian algebra
\begin{align}
J & = J^{1} + J^{2}\,, & S &= \lambda^2 \left( \alpha_1^2 J^{1} + \alpha_2^2 J^{2}\right) \,,\nonumber\\
H &= H^{1} + H^{2} \,, & M & = \lambda^2 \left( \alpha_1^2 H^{1} + \alpha_2^2 H^{2} \right) \,,\nonumber\\
P_a & = \lambda \left( \alpha_1 P_a^{1} + \alpha_2 P_a^{2} \right) \,, & T_a & = \lambda^3 \left( \alpha_1^3 P_a^{1} + \alpha_2^3 P_a^{2} \right) \,,\nonumber\\
G_a & = \lambda \left( \alpha_1 G_a^{1} + \alpha_2 G_a^{2} \right) \,, & B_a & = \lambda^3 \left( \alpha_1^3 G_a^{1} + \alpha_2^3 G_a^{2} \right) \,,\nonumber\\
Y &= \lambda^4 \left( \alpha_1^4 H^{1} + \alpha_2^4 H^{2} \right) \,, & Z & = \lambda^4 \left( \alpha_1^4 J^{1} + \alpha_2^4 J^{2} \right) \,.
\end{align}
Once again, these relations can be generalized as
\begin{align}
J^{(2n)} & = \lambda^{2n} \bigoplus\limits_{i=1}^{N_0} \alpha_i^{2n} J^{i} \,, & H^{(2n)} & = \lambda^{2n} \bigoplus\limits_{i=1}^{N_0} \alpha_i^{2n} H^{i} \,, \nonumber\\
P_a^{(2n+1)} & = \lambda^{2n+1} \bigoplus\limits_{i=1}^{N_1} \alpha_i^{2n+1} P_{a}^{i} \,, & G_a^{(2n+1)} & = \lambda^{2n+1} \bigoplus\limits_{i=1}^{N_1} \alpha_i^{2n+1} G_{a}^{i} \,,
\end{align}
where the truncation condition is now $N_0 = N_1 + 1$, and we impose that $\alpha_i \neq 1$ and $\alpha_i \neq \alpha_j$ for $i \neq j$. Note that $J^{(2N_1 + 2)}$ and $H^{(2N_1 + 2)}$ are the central charges of the extended non-relativistic algebra that we inherit from the $U(1) \times U(1)$ extension of the $N$ copies of the Poincar\'e algebra.
\section{Ultra-Relativistic Scaling Limit of Multimetric Gravity}\label{URChapter}
\paragraph{}
The main difference between the non-relativistic and ultra-relativistic expansions is the change in the character of the generators of time translations, $H$, and spatial translations, $P_a$. In the ultra-relativistic expansion of the Poincar\'e algebra, $H$ is expanded in odd powers of $\lambda$ while $P_a$ is expanded in even powers. This is also reflected in the expansion of the corresponding gauge fields, see \eqref{ExpUR}. As a result, the $D$-dimensional Einstein-Hilbert action captures all $\mathcal{O}(\lambda^{2N+1})$ terms, giving rise to an invariant Lagrangian as long as $N_0 = N_1$. This statement is true for $D=3$ as well; hence we do not need to present a separate treatment for $D=3$ but can instead provide a representative example in $D=4$ which can be generalized to arbitrary dimensions.
In four dimensions, the structure of the spacetime decomposed Einstein-Hilbert action \eqref{4dEH} implies that the ultra-relativistic expansion with \eqref{ExpUR} yields the following structure
\begin{eqnarray}
\mathcal{L}_{\rm EH} &=& \lambda M^2 \mathcal{L}_{1} + \lambda^3 M^2 \mathcal{L}_{3} + \lambda^5 M^2 \mathcal{L}_{5} + \ldots \,.
\label{URLag}
\end{eqnarray}
The lowest order Lagrangian can be isolated by using the lowest order expressions for the relativistic fields \eqref{CarrollExp}, followed by rescaling the mass parameter $M^2 \to M^2/\lambda$ and taking the scaling limit $\lambda \to 0$. To isolate the higher-order Lagrangians, we need an $\alpha$-expansion of the type we discussed previously for the non-relativistic models. Following the same steps, we start with a non-interacting $(N+1)$-metric theory
\begin{eqnarray}
\mathcal{L} &=& \sum_{i=1}^{N+1} \left[2M_{(i)}^2 \epsilon_{a b c} \left(R(\Omega^{a}_{(i)})\wedge E^b_{(i)} \wedge E^c_{(i)} - R(\Omega^{a b}_{(i)})\wedge E_{(i)}^c \wedge T_{(i)} \right) \right] \,,
\end{eqnarray}
where $i=1,\ldots, N+1$ labels the sets of fields $\{T_{(i)},E_{(i)}, \Omega^{ab}_{(i)}, \Omega^a_{(i)}\}$. Then, we keep the expansion of the first set of fields $\{T_{(1)}, E_{(1)}, \Omega^a_{(1)}, \Omega^{ab}_{(1)}\}$ the same as \eqref{ExpUR}
\begin{align}
T_{(1)} & = \sum_{n=0}^{N_1} \lambda^{2n+1} \tau_{(2n+1)} \,, & E^a_{(1)} & = \sum_{n=0}^{N_0} \lambda^{2n} e^a_{(2n)} \,,\nonumber\\
\Omega^{ab}_{(1)} & = \sum_{n=0}^{N_0} \lambda^{2n} \omega^{ab}_{(2n)} \,, & \Omega^a_{(1)} & = \sum_{n=0}^{N_1} \lambda^{2n+1} \omega^a_{(2n+1)} \,,
\label{ExpUR1}
\end{align}
while for $i > 1$, we incorporate the dimensionless parameters $\alpha_{(i)}$
\begin{align}
T_{(i)} & = \sum_{n=0}^{N_1} \lambda^{2n+1} \alpha_{(i)}^{2n+1} \tau_{(2n+1)} \,, & E^a_{(i)} & = \sum_{n=0}^{N_0} \lambda^{2n} \alpha_{(i)}^{2n} e^a_{(2n)} \,,\nonumber\\
\Omega^{ab}_{(i)} & = \sum_{n=0}^{N_0} \lambda^{2n} \alpha_{(i)}^{2n} \omega^{ab}_{(2n)} \,, & \Omega^a_{(i)} & = \sum_{n=0}^{N_1} \lambda^{2n+1} \alpha_{(i)}^{2n+1} \omega^a_{(2n+1)} \,,
\label{AlphaExpansionUR}
\end{align}
where $\alpha_i \neq 1$ and $\alpha_i \neq \alpha_j$ for $i \neq j$. This yields the following expansion of $\mathcal{L}_{EH(i)}$, the Einstein-Hilbert Lagrangian for the $i$-th set of fields $\{T_{(i)}, E_{(i)}, \Omega^a_{(i)}, \Omega^{ab}_{(i)}\}$
\begin{eqnarray}
\mathcal{L}_{EH (i)} &=& \lambda \alpha_{i} M_{i}^2 \mathcal{L}_{1} + \lambda^3 \alpha_{i}^3 M_{i}^2 \mathcal{L}_{3} + \lambda^5 \alpha_{i}^5 M_{i}^2 \mathcal{L}_{5} + \ldots \,.
\end{eqnarray}
Keeping the first set of fields with the standard expansion while using the $\alpha$-expansion for the remainder of the fields, the Einstein-Hilbert action becomes
\begin{eqnarray}
\mathcal{L}_{EH} &=& \sum_{n=0}^{N} \lambda^{2n+1} \left( M_1^2 + \sum_{i=2}^{N+1} \alpha_i^{2n+1} M_i^2 \right) \mathcal{L}_{2n+1} + \mathcal{O}(\lambda^{2N+3}) \,.
\end{eqnarray}
Once again, we isolate the order-$\lambda^{2N+1}$ Lagrangian by solving the $N$ algebraic equations that determine the values of $\alpha_i$ for $i = 2, 3, \ldots, N+1$, i.e.
\begin{eqnarray}
0&=& M_1^2 + \sum_{i=2}^{N+1} \alpha_i^{2n+1} M_i^2 \,, \qquad \text{for} \qquad n = 0,1,\ldots, N-1 \,.
\end{eqnarray}
Finally, by rescaling the mass parameters $M_i^2 \to M_i^2/\lambda^{2N+1}$ and performing the $\lambda \to 0$ limit, we eliminate any lower-order divergences and higher-order Lagrangians to obtain the desired ultra-relativistic model. Let us now provide examples.
\begin{enumerate}
\item {\textbf{Carroll Gravity:} The Carroll gravity appears as $\mathcal{L}_1$ in the ultra-relativistic expansion of the Einstein-Hilbert action \eqref{URLag}. It is, thus, sufficient to consider the Poincar\'e algebra, a single set of its gauge fields, and the Einstein-Hilbert action. Using the lowest order definitions for the relativistic fields
\begin{align}
T &= \lambda \tau \,, & E^a &= e^a \,,& \Omega^{ab} & = \omega^{ab} \,, & \Omega^a & = \lambda \omega^a \,, & M^2 & = \frac{m^2}{\lambda} \,,
\end{align}
we obtain the Carroll gravity
\begin{eqnarray}
\mathcal{L}_1 &=& -2 \epsilon_{abc} R^{ab}(\omega) \wedge e^c \wedge \tau + 2 \epsilon_{abc} R^a(\omega) \wedge e^b \wedge e^c \,,
\label{CarrollGravity}
\end{eqnarray}
upon taking the scaling limit $\lambda \to 0$.
}
\item {\textbf{Beyond the Carroll Gravity:} The next order ultra-relativistic model appears at the $\lambda^3$-order in the Lie algebra expansion. Thus, we shall consider two copies of the Einstein-Hilbert action. As for the non-relativistic scaling limit, we keep the Lie algebra expansion in its original form for the first set of fields,
\begin{align}
T_{1} &= \lambda \tau + \lambda^3 m \,, & E^a_{1} &= e^a + \lambda^2 t^a \,,& \Omega^{ab}_{1} & = \omega^{ab} + \lambda^2 s^{ab} \,, & \Omega^a_{1} & = \lambda \omega^a + \lambda^3 b^a \,.
\end{align}
For the second copy, we incorporate the free parameter $\alpha_2$
\begin{align}
T_{2} &= \lambda \alpha_2 \tau + \lambda^3 \alpha_2^3 m \,, & E^a_{2} &= e^a + \lambda^2 \alpha_2^2 t^a \,,\nonumber\\
\Omega^{ab}_{2} & = \omega^{ab} + \lambda^2 \alpha_2^2 s^{ab} \,, & \Omega^a_{2} & = \lambda \alpha_2 \omega^a + \lambda^3 \alpha_2^3 b^a \,.
\end{align}
The combination of these two Lagrangians with mass parameters $M_{1,2}$ gives rise to
\begin{eqnarray}
\mathcal{L} &=& \lambda M^2 \left(1 + \frac12 \alpha_2\right) \mathcal{L}_1 + \lambda^3 M^2 \left(1+ \frac12 \alpha_2^3\right) \mathcal{L}_3 + \mathcal{O}(\lambda^5) \,,
\end{eqnarray}
where we set $M_1^2 = 2 M_2^2 = M^2$ for simplicity. Here, $\mathcal{L}_1$ refers to the Carroll gravity \eqref{CarrollGravity} and $\mathcal{L}_3$ is what we refer to as the beyond the Carroll gravity
\begin{eqnarray}
\mathcal{L}_3 &=& -2 \epsilon_{abc} R^{ab}(s) \wedge e^c \wedge \tau -2 \epsilon_{abc} R^{ab}(\omega) \wedge t^c \wedge \tau -2 \epsilon_{abc} R^{ab}(\omega) \wedge e^c \wedge m \nonumber\\
&& + 2 \epsilon_{abc} R^a(b) \wedge e^b \wedge e^c + 4 \epsilon_{abc} R^a(\omega) \wedge e^b \wedge t^c \,.
\label{BeyondCarrollGravity}
\end{eqnarray}
Note that this takes the same form as the non-relativistic limit, which is not surprising since, in four dimensions, both the non-relativistic and the ultra-relativistic expansions of the Einstein-Hilbert action take the same form as series expansions in $\lambda$, see \eqref{NREH} and \eqref{URLag}. We remind the reader that this is no longer true for $D \neq 4$.
Nevertheless, we can fix $\alpha_2 = - 2$, which cancels the coefficient of the Carroll gravity and fixes the coefficient of $\mathcal{L}_3$ to be $-3$. Then, the following rescaling of the mass parameter
\begin{eqnarray}
M^2 = \frac{1}{3 \lambda^3} m^2 \,,
\end{eqnarray}
along with the scaling limit $\lambda \to 0$ recovers the beyond the Carroll gravity model up to an overall minus sign.
}
\end{enumerate}
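Explicitly, in the second example above, substituting $\alpha_2 = -2$ and the rescaled mass parameter into the surviving term gives
\begin{eqnarray}
\lambda^3 M^2 \left(1+ \frac12 \alpha_2^3\right) \mathcal{L}_3 &=& \lambda^3 \, \frac{m^2}{3 \lambda^3} \, \left(-3\right) \mathcal{L}_3 = - m^2 \mathcal{L}_3 \,,
\end{eqnarray}
which makes the overall minus sign explicit.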
The multi-gravity models that we have discussed so far need to include a potential term for the vielbein to be physically viable. Unlike the non-relativistic case, however, these terms appear in the $\mathcal{O}(\lambda)$ Lagrangian due to the changed expansion character of the spatial and the temporal vielbein
\begin{eqnarray}
\mathcal{L}_{CC} &=& \lambda \left( \epsilon_{abc} \tau \wedge e^a \wedge e^b \wedge e^c \right) \nonumber\\
&& + \lambda^3 \left( \epsilon_{abc} m \wedge e^a \wedge e^b \wedge e^c + 3 \epsilon_{abc} \tau \wedge e^a \wedge e^b \wedge t^c \right) + \mathcal{O}(\lambda^5)\,.
\end{eqnarray}
This is precisely the same expansion as in the non-relativistic models, except that the Lagrangians are now shifted by $\lambda^{-2}$. Nevertheless, our arguments for the elimination of the lower-order cosmological terms still hold, since the structure of the potential terms \eqref{CosmologicalConstant} contains more than the necessary number of free coefficients.
We end this section with a brief discussion on how to obtain the ultra-relativistic higher-order algebras by contracting multiple copies of the Poincar\'e algebra. As with the non-relativistic algebras for $D \geq 4$, the ultra-relativistic algebras that admit an invariant action formulation arise from the Lie algebra expansion with $N_0 = N_1$. This implies that an algebra of order $(N_1,N_1)$ has the same number of generators as $(N_1 + 1)$ copies of the Poincar\'e algebra. Furthermore, the expansion does not change the structure constants of the smaller core algebra, indicating that we can combine the generators of multiple copies of the Poincar\'e algebra in a linearly independent way to exhibit the generators of the larger non/ultra-relativistic algebras. Based on the $\alpha$-expansion of the relativistic fields, we introduce the following expressions for the ultra-relativistic generators in terms of the generators of the Poincar\'e algebra
\begin{align}
J_{ab}^{(2n)} & = \lambda^{2n} \bigoplus\limits_{i=1}^{N_0} \alpha_i^{2n} J_{ab}^{i} \,, & H^{(2n+1)} & = \lambda^{2n+1} \bigoplus\limits_{i=1}^{N_1} \alpha_i^{2n+1} H^{i} \,, \nonumber\\
P_a^{(2n)} & = \lambda^{2n} \bigoplus\limits_{i=1}^{N_0} \alpha_i^{2n} P_{a}^{i} \,, & G_a^{(2n+1)} & = \lambda^{2n+1} \bigoplus\limits_{i=1}^{N_1} \alpha_i^{2n+1} G_{a}^{i} \,,
\end{align}
where $\alpha_i \neq 1$ and $\alpha_i \neq \alpha_j$ for $i \neq j$. Note that in this case these $\alpha$ parameters are free and need not be fixed. This direct sum structure trivially satisfies the algebra \eqref{URInfinite}. Once again, these relations can be inverted to express the relativistic generators in terms of the ultra-relativistic ones. Furthermore, we remind the reader that the truncation condition is $N_0 = N_1$.
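For instance, assuming that $[G_a^i, P_b^i] = \delta_{ab} H^i$ holds in each copy and that generators of different copies commute, one finds
\begin{align}
\left[ G_a^{(2m+1)}, P_b^{(2n)} \right] & = \lambda^{2(m+n)+1} \bigoplus\limits_{i=1}^{N_1} \alpha_i^{2(m+n)+1} \left[ G_{a}^{i}, P_{b}^{i} \right] = \delta_{ab} \, H^{(2(m+n)+1)} \,,
\end{align}
which holds exactly, without even taking the $\lambda \to 0$ limit.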
\section{Discussion}\label{Discussion}
\paragraph{}
In this work, we have presented non-relativistic and ultra-relativistic scaling limits of multimetric gravity theories. Given the field content of multimetric gravity, these models are expected to contain more degrees of freedom than the Galilei/Carroll gravity that arises as a limit of General Relativity. We have shown that the limits of multimetric gravity correspond to non/ultra-relativistic gravity with extended symmetries. In particular, we have shown that the non-relativistic limit of bimetric gravity reproduces the recent formulation of an action principle for Newtonian gravity when no potential terms are present. On the other hand, turning on the potential terms yields a constant background mass density in the non-relativistic sector. We expect that the scaling limits provided in this paper will be helpful in phenomenological studies of both multigravity and non/ultra-relativistic gravity.
The work we present can be regarded as a starting point for various further studies. On the technical side, it would be interesting to extend the analysis presented here to the string limit and the corresponding algebras, i.e., the string limit of the Einstein-Hilbert action, see \cite{Bergshoeff:2019ctr,Bergshoeff:2018vfn}. Furthermore, here we focus on three and higher dimensions. The reason is that two-dimensional gravitational theories in their $BF$-formalism cannot be established just from the gauge fields of the Poincar\'e (or (A)dS) algebra, but require the presence of matter fields transforming in the coadjoint representation. We believe that the same scaling limits can be defined in the presence of the matter sector; however, a careful analysis is certainly essential.
A rather interesting continuation of our work would be to include supersymmetry. Although the non/ultra-relativistic superalgebras are now well understood thanks to the Lie algebra expansion, we also need theories that contain both gauge fields and matter fields. At present, the existing techniques, which are based on the Lie algebra expansion, can produce reducible representations for the matter multiplets of supersymmetric theories \cite{Kasikci:2021atn}. Furthermore, these multiplets are rigid, and their extension to local theories has remained an open problem. We hope that the scaling limit that we present in this paper can shed light on this open problem.
Another interesting point is that although we have achieved the scaling limit with multiple copies of the Poincar\'e algebra, the same limit can also be established with the Poincar\'e and the Euclidean algebras, see \cite{Ozdemir:2019orp} for a three-dimensional example. When the Euclidean algebra is considered, the difference arises in the commutation relations that include $H \equiv P_0$ and $G_a \equiv J_{0a}$, i.e.
\begin{align}
\left[G_a, P_b\right] &= \delta_{a b} H\,, & \left[G_a, H\right] & = - P_a \,, &\left[J_{a b}, P_c\right] &= \delta_{b c} P_a - \delta_{a c} P_b\,, \nonumber\\
\left[J_{a b}, G_{c}\right] &= \delta_{b c} G_a - \delta_{a c} G_b \,, &\left[J_{a b}, J_{c d}\right] &= 4 \delta_{[a[c}J_{d]b]}\,, & \left[G_a, G_b\right] & = - J_{a b}\,.
\label{EuclideanAlgebra}
\end{align}
In the contraction process to obtain the non/ultra-relativistic algebras, the minus signs that appear in the $[G_a, H]$ and $[G_a, G_b]$ commutators can be handled by introducing appropriate signs in the definitions of the non/ultra-relativistic generators of the extended algebras. For instance, in the case of non-relativistic extended algebras, we have the following definitions for the generators
\begin{align}
J^{(2n)}_{a b} & = \lambda^{2n} \bigoplus\limits_{i=1}^{N_0} \alpha_i^{2n} \sigma_i^{n} J^i_{a b} \,, & H^{(2n)} & = \lambda^{2n} \bigoplus\limits_{i=1}^{N_0} \alpha_i^{2n} \sigma_i^{n} H^i \,, \nonumber \\
P^{(2n+1)}_{a} & = \lambda^{2n+1} \bigoplus\limits_{i=1}^{N_1} \alpha_i^{2n+1} \sigma_i^{n+1} P^i_{a} \,, & G^{(2n+1)}_{a} & = \lambda^{2n+1} \bigoplus\limits_{i=1}^{N_1} \alpha_i^{2n+1} \sigma_i^{n} G^i_{a}\,,
\end{align}
where $\alpha_i \neq 1$ and $\alpha_i \neq \alpha_j$ for $i \neq j$. The necessary sign is introduced by $\sigma_i$, which is defined as $\sigma_i=1$ for Lorentzian and $\sigma_i=-1$ for Euclidean algebras. With this relation in hand, it is tempting to propose a bimetric theory of gravity that is the sum of the Einstein-Hilbert actions in the Lorentzian and the Euclidean signatures as a simple, worthwhile phenomenological model. To our knowledge, such a model has not been investigated in the literature. We hope to study the phenomenological aspects of this model in the near future.
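As a quick check of this $\sigma_i$ bookkeeping, assuming that $[G_a^i, H^i] = \sigma_i P_a^i$ holds in each copy and that generators of different copies commute, one finds
\begin{align}
\left[ G^{(2m+1)}_{a}, H^{(2n)} \right] & = \lambda^{2(m+n)+1} \bigoplus\limits_{i=1}^{N_1} \alpha_i^{2(m+n)+1} \sigma_i^{m+n+1} P^i_{a} = P^{(2(m+n)+1)}_{a} \,,
\end{align}
which is consistent with the fact that $P^{(2n+1)}_a$ carries one more power of $\sigma_i$ than $G^{(2n+1)}_a$.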
As an important remark, in Appendix \ref{AppA} we have explicitly shown the equivalence of the first- and second-order formulations of the Newtonian gravity. It would be interesting to see if the same is true for the ultra-relativistic beyond the Carroll gravity action \eqref{BeyondCarrollGravity} and the metric formulation \cite{Hansen:2021fxi}. Finally, the coadjoint Poincar\'e (or AdS) algebra, which is a Lie algebra expansion of the Poincar\'e (or AdS) algebra for $\{P_A, J_{AB}\} \subset V_0$, is also known to reproduce certain three- and four-dimensional extended non-relativistic algebras \cite{Bergshoeff:2020fiz}. It would be interesting to see if there is a relation between the coadjoint algebras and multiple copies of the Poincar\'e and Euclidean algebras.
\vspace{0.5cm}
{\bf Acknowledgements}
We thank Eric Bergshoeff and Johannes Lahnsteiner for discussions. The work of O.K. and C.B.S. is supported by TUBITAK grant 121F064. M.O. is supported in part by TUBITAK grant 121F064 and Istanbul Technical University Research Fund under grant number TGA-2020-42570. M.O. acknowledges the support by the Distinguished Young Scientist Award BAGEP of the Science Academy. M.O. also acknowledges the support by the Outstanding Young Scientist Award of the Turkish Academy of Sciences (TUBA-GEBIP). U.Z. is supported by TUBITAK - 2218 National Postdoctoral Research Fellowship Program with grant number 118C512.
\section{Socialist Science}
\label{sec:SocialistScience}
Drug company Q reads a scholarly article in a distinguished journal authored by academic A. Motivated in significant part by A's findings, Q starts a new research effort in this seemingly promising direction.
One year and one million dollars later, Q realizes A's findings are basically wrong~\cite{begley2012drug,prinz2011believe,freedman2015economics,chalmers2009avoidable,scott2008design,begley2013unappreciated,steckler2015preclinical,de2015failure,kola2004can,macleod2014biomedical,kyzas2007almost,hirst2014need,Hyman155cm11,miller2010pharma,schoenfeld2013everything}.
Variants of this story may be invoked by a convenient buzz phrase: science's ``replication crisis,'' ``replicability crisis,'' or ``reproducibility crisis.'' Q is not always a drug company. A is usually an academic.
Q, the sucker in this story, realizes he is continually getting shafted~\cite{ioannidis2005most,ioannidis2013s,pankevich2014improving,lowenstein2009uncertainty,henderson2013threats,dirnagl2012international,dirnagl2009stroke,dirnagl2006bench,rosenblatt2016incentive}. Systematic attempts to repeat many published studies fail~\cite{mobley2013survey,open2015estimating,ioannidis2005contradicted,ioannidis2007non,steward2012replication,chang2015economics}, spreading awareness to the masses and stoking discontent. Debates erupt over the scale of the problem~\cite{maxwell2015psychology,anderson2016response,stroebe2014alleged,klein2014investigating,pashler2012replicability,camerer2016evaluating,etz2016bayesian,michalek2010costs}. Some hope extermination of psychology departments will be sufficient~\cite{fanelli2010positive,lilienfeld2012public,cesario2014priming,Pashler01112012,bakker2012rules,Bones01052012,wagenmakers2011psychologists,ferguson2015everybody}. Others work to re(de)fine the notion of ``replication''~\cite{goodman2016does,brandt2014replication,clemens2015meaning}. 
Incremental improvements to the current system are proposed~\cite{ioannidis2014make,collins2014nih,vesterinen2010improving,asendorpf2013recommendations,nosek2015promoting,cumming2013new,iqbal2016reproducible,holdren2013increasing,miguel2014promoting,roche2014troubleshooting,chalmers2014increase,ioannidis2014increasing,salman2014increasing,yordanov2015avoidable,moher2015four,ioannidis2014assessing,vasilevsky2013reproducibility,international2004international,kilkenny2010improving,macleod2009reprint,tooth2005quality,festing2002guidelines,moher2010guidance,casadevall2012reforming,nosek2012scientificI,nosek2012scientificII,everett2015tragedy,nekrutenko2012next,sandve2013ten,ioannidis2011improving,pusztai2013reproducibility,valentine2011replication,kidwell2016badges,koole2012rewarding,landis2012call,moher2010consort,moseley2014beyond,peers2012search,peers2014can,ram2013git,schooler2014metascience}~\footnote{{\em{Open science}}, or {\em{even-more-socialist science}}, is a common theme among proposed incremental improvements. The idea of encouraging more transparent access to data and analysis code is an attractive one. We ourselves pushed it in particle physics -- hard, and for many years. Unfortunately, if unsurprisingly, incentives are simply too misaligned for it to work. If we, as a society, want more of something -- like apples, say, or knowledge about how nature works -- we may be better off making it easy for people who produce apples to sell them than mandating that they make their orchards available for anyone to pick.}. Most are never enacted~\cite{baker2014two}. The rest have limited impact~\cite{vanpaemel2015we,roche2015public,vines2014availability,wicherts2006poor,moher2016increasing,grant2013reporting,joseph2013open,song2010dissemination,prayle2012compliance,clarke2010clinical,florez2016bias,bramhall2015quality,plint2006does}.
The incentive structures contributing to the replication crisis are observed to be deeply entrenched features of the ecosystem within which science is done~\cite{begley2015reproducibility,simmons2011false,fiedler2011voodoo,ferguson2012vast,young2008current,tsilidis2013evaluation,brembs2013deep,steen2013has,oboyle2014chrysalis,stamatakis2013undue,sena2010publication,mathieu2009comparison,hannink2013comparison,crowe2015patients,blumle2016fate,chan2014increasing,glasziou2014reducing,glasziou2014role,hoffmann2012scatter,franco2014publication,button2013power,dwan2008systematic,ioannidis2008most,van2010can,ioannidis2012science,stroebe2012scientific,wagenmakers2012agenda,john2012measuring,fanelli2011negative,alberts2014rescuing,ioannidis2014publication,chambers2014instead} and subsequently applied~\cite{glasziou2005paths,duff2010adequacy,dancey2010quality,kilkenny2009survey,mcglynn2003quality,lemon2005surveying,ioannidis2001completeness,savovic2012influence,turner2008selective,bero1998closing,ramagopalan2014prevalence,williamson2012driving}. This ecosystem comprises institutions (including funding agencies, industry, universities, and academic journals), culture, accepted practice (including procedures by which grant money is awarded, articles published, promotions granted, and accolades bestowed), and the bureaucracy required to support all of these. The ecosystem is complex, multifaceted, and not easily changed. The incentive structures contributing to the replication crisis are, similarly, not easily changed. A timely solution to the replication crisis probably requires a new ecosystem.
For convenience, we refer to the current ecosystem~\cite{stephan1996economics} -- funded by taxpayers, with published results available to all citizens -- as {\em{socialist science}}~\footnote{The phrase ``socialist science'' is obviously a grotesquely crude caricature of the intricate and often nuanced set of incentives joining the actors in the current science ecosystem. We intend the phrase as a neutral description of an aspect of the current ecosystem germane to the present discussion. A reader who dislikes the phrase is encouraged to mentally replace it with ``the ecosystem within which science is currently carried out,'' or some alternative reference thereto.}~\footnote{We focus on the incentive flaws common to all socialist science, ignoring differences among scientific disciplines. Rather than treat the symptoms, which express differently in the social sciences, life sciences, and physical sciences, we focus on the underlying disease.}.
\section{Capitalist Science}
\label{sec:CapitalistScience}
Perhaps we can construct a different ecosystem.
Lacking imagination, we look to see what other professions do. Many professions exhibit a very interesting behavior. There occur {\em{pairwise transactions}}, in which one party {\em{sells}} something of value to another party. One party (A) gives something of value to some other party (Q). In return, Q gives A something called {\em{money}}.
Perhaps our academic (A) can sell what he learns to those (Q) who find A's information valuable. Many deep-pocketed industries, including the pharmaceutical industry, make high-stakes decisions motivated by academic research. A should have little difficulty finding customers Q.
To incentivize accuracy, we need some sort of transaction that causes A to lose money if he turns out to be wrong. We therefore need transactions for which the (in)accuracy of A's information can eventually be objectively determined, and we need a reliable, low-cost procedure for making that determination.
Suppose Q has a specific question and is willing to pay for a useful answer. Q can ensure the usefulness of any answer he receives by specifying, explicitly or categorically, the set of allowed possible answers. A third party (X) brokers the transaction, holds money from Q and A in escrow, and acts as arbiter. If A provides an answer outside Q's set of allowed possible answers, A loses money.
In many cases of practical interest, Q can be tasked with determining the accuracy of A's answer. If Q wants to know what chemical matter binds to a particular target and A answers with a specific molecule, Q will eventually know whether A's answer was right or wrong, and Q can provide evidence to this effect to arbiter X. X collects a deposit from Q at the beginning of the transaction, and X returns this deposit to Q when Q provides sufficient evidence by a previously agreed upon date. Q pays the same amount whether A's answer turns out to be right or wrong. X's fee for brokering the transaction is independent of whether A's answer turns out to be right or wrong and whether or not Q provides evidence X considers sufficient. A has an explicit monetary incentive to be accurate and an explicit monetary disincentive to provide information unless he is pretty sure he is correct. Q has an explicit monetary incentive to ask a question that will be directly useful to him, and for which he can eventually determine, with reasonable proof, the accuracy of whatever answer A provides. X, as an ongoing business interest, has a clear monetary incentive to thoughtfully serve its role as a reasonable and objective adjudicator of the evidence Q provides. Q, A, and X all have an interest in ensuring Q's question is carefully specified to avoid subsequent ambiguities as to A's correctness. This transaction protocol enforces a set of incentives facilitating paid, arm's length transfer of useful, bluntly honest information from A to Q.
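The payoff structure of this three-party protocol can be condensed into a toy settlement function. The sketch below is purely illustrative: the monetary amounts, the parameter names, and the destination of forfeited funds are assumptions of ours, not details fixed by the protocol described above.

```python
def settle(answer_in_allowed_set, answer_correct, q_provided_evidence,
           price=100, a_stake=50, q_deposit=20, x_fee=10):
    """Net monetary outcome of one Q/A/X transaction.

    An illustrative reading of the protocol only: the concrete amounts,
    and where forfeited money ends up, are hypothetical choices."""
    x = x_fee  # X's fee is independent of the outcome and of Q's evidence
    # Q pays the same whether A turns out right or wrong; Q's deposit is
    # returned only if Q backs its accuracy determination with evidence.
    q = -price - x_fee - (0 if q_provided_evidence else q_deposit)
    if not answer_in_allowed_set or not answer_correct:
        a = -a_stake   # useless or wrong answer: A loses money
    else:
        a = price      # A profits only by being right
    return {"A": a, "Q": q, "X": x}

print(settle(True, True, True))    # A profits, Q pays, X earns its fee
print(settle(True, False, True))   # wrong answer: A forfeits its stake
```

The point of the toy model is only that the signs of A's payoff track correctness, while Q's and X's payoffs do not depend on it.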
Using this transaction protocol, introduced for the first time in Ref.~\cite{knuteson2016knowledge} and implemented at Ref.~\footnote{Kn-X; \url{http://kn-x.com}. Patent pending.}, scientists can sell what they learn from their research.
For convenience, we refer to this new ecosystem as {\em{capitalist science}}~\footnote{The phrase ``capitalist science'' is intended as a neutral description of a salient feature of this new ecosystem. A reader who dislikes the phrase is encouraged to mentally replace it with ``the new ecosystem,'' or something similar.}~\footnote{The ``capitalist science'' in this article supersedes that of Ref.~\cite{knuteson2011capitalist}, which in retrospect is more of a hybrid between socialist and capitalist science.}~\footnote{To be clear, our use of the term ``capitalist'' does not arise from a belief, seemingly held by many, that free markets are the optimal solution to all problems, nor from a belief that global financial markets at the time of this writing function well and should be emulated. (You have no idea.) The complexity and frequent opacity of today's capitalism highlights the glaring need for a mechanism facilitating useful, bluntly honest information transfer between remote parties. Given the embarrassing, hidden-in-plain-sight, farcically tragic comedy of errors that is recent financial history, a mechanism facilitating useful, bluntly honest, arm's length information transfer may turn out to be our best shot at saving capitalism \ldots\ or at least at sending a few of those responsible to prison next time around.}.
In capitalist science, A makes money only if he turns out to be correct. A loses money if he turns out to be wrong. Every transaction includes a monetary incentive for Q to determine the accuracy of A's answer and to back this determination with evidence deemed sufficient by an objective third party (X). These features directly address socialist science's reproducibility crisis. These incentives, present in capitalist science, explicitly and directly reward accuracy and penalize inaccuracy. The absence of such explicit and direct incentives in socialist science is the reason socialist science is suffering a reproducibility crisis.
Capitalist science is the solution to socialist science's reproducibility crisis.
\section{Summary}
\label{sec:Summary}
Now that the solution to socialist science's reproducibility crisis has been found, we can roll up our sleeves \ldots\ sit back, relax, and let events unfold.
Socialist science will continue much as it has, producing results of similar quality. Serious, thoughtful, and impassioned attempts will be made to save it. Some of these may appear to work for a while, but ultimately they will fail. The reproducibility crisis has been stewing for decades~\cite{lykken1968statistical,elms1975crisis,greenwald1975consequences,rosenthal1979file,altman1994scandal,hackam2006translation,pocock1987statistical,sterling1995publication,vul2009puzzlingly,easterbrook1991publication,kerr1998harking}. The relevant malincentives are too firmly embedded in its large and unwieldy ecosystem for socialist science to produce a meaningful solution from within.
The key to change is Q, the socialist science sucker from Section~\ref{sec:SocialistScience}. Q is smart. Q is a socialist science sucker only because socialist science fails to sufficiently discourage the publication of inaccurate information by failing to hold A sufficiently accountable for being wrong. Q is a socialist science sucker only because Q has had no other option.
Q now has another option. Before launching a one year, one million dollar research project motivated by a result produced by socialist science, Q can spend ten days and ten thousand dollars on a few questions to check it out. Before investing ten million dollars on a new materials science innovation, venture capitalists can anonymously conduct due diligence with a convenience and at a depth hitherto unimaginable. Relying on free information of questionable accuracy produced by socialist science can be very costly. Q is smart; Q is accustomed to paying for things of value; and Q's stakes are high.
Over time, Q increasingly relies on information obtained from capitalist science, where A has skin in the game. When A has truly valuable information, he finds himself more interested in selling it than publishing it. For less robust results, A is more inclined to publish, avoiding the harsh penalty imposed by capitalist science for providing information that turns out to be wrong. Over time, useful, robust information slowly migrates to capitalist science, where it is appropriately valued. Socialist science continues to publish the results A is willing to give away for free.
The forces in capitalist science's favor are so strong that socialist science should be allowed to fade into irrelevance in its natural course~\footnote{Although the information market unleashed by capitalist science could create millions of new science-related jobs, it would be irresponsible to reduce funding to socialist science until that promise has been realized.}. Socialist science has been a remarkable institution. The knowledge it has provided has been extraordinary in power and in scope. Its use of an incentive structure spurned by the most developed of today's modern economies only makes its achievements all the more impressive.
We fully hope and expect socialist science will linger among us for many years. At some deep level, the purpose and goals of socialist science are fundamentally different from those of capitalist science. Capitalist science is designed to facilitate useful, accurate information transfer, and to provide A with a strong incentive to learn information some Q will find valuable. Socialist science, in contrast, is not.
\section{Introduction}
\indent Let $\mathbb{RP}^3$ be the three-dimensional real projective space, and $\mathcal{F}$ denote the non-empty set consisting of all embedded surfaces in $\mathbb{RP}^3$ that are diffeomorphic to the two-dimensional projective plane $\mathbb{RP}^2$. Given a Riemannian metric $g$ on $\mathbb{RP}^3$, we define
\begin{equation*}
\mathcal{A}(\mathbb{RP}^{3},g) = \inf_{\Sigma \in \mathcal{F}}area(\Sigma,g).
\end{equation*}
\noindent In this paper, the geometric invariant above will be called the \textit{two-systole} of $(\mathbb{RP}^3,g)$. The term has been used to name slightly different invariants in the literature, depending on the choice of the set $\mathcal{F}$ (\textit{cf}. \cite{Gro}, Sections 1 and 4.A.7, and \cite{Ber}, Section 5). The first systematic study of such invariants was done by Berger in \cite{Ber}, where he sought generalisations of Pu's inequality \cite{Pu} for the (one)-systole of real projective planes, \textit{i.e.} the smallest length of a non-trivial loop in $(\mathbb{RP}^2,g)$. Berger computed that the two-systole of the standard round metric $g_1$ on $\mathbb{RP}^3$, with constant sectional curvature one, is equal to $2\pi$ (see \cite{Ber}, Th\'eor\`eme 7.1). This number is precisely the area of the totally geodesic projective planes in $(\mathbb{RP}^3,g_1)$. \\
\indent In \cite{BraBreEicNev}, Bray, Brendle, Eichmair and Neves studied how the two-systole behaves under the Ricci flow, proving along the way a sharp upper bound for $\mathcal{A}(\mathbb{RP}^3,g)$ in terms of the minimum value of the scalar curvature of $(\mathbb{RP}^3,g)$ (see \cite{BraBreEicNev}, Theorems 1.1 and 1.2). An important part of their analysis was to show that the infimum defining $\mathcal{A}(\mathbb{RP}^{3},g)$ is actually attained by an embedded area-minimising projective plane in $(\mathbb{RP}^{3},g)$ (see \cite{BraBreEicNev}, Proposition 2.3). \\
\indent In this paper, we investigate how large the \textit{normalised two-systole},
\begin{equation*}
\frac{\mathcal{A}(\mathbb{RP}^3,g)}{vol(\mathbb{RP}^3,g)^{\frac{2}{3}}},
\end{equation*}
can be among metrics inside a conformal class defined by a homogeneous metric on $\mathbb{RP}^3$, \textit{i.e.} a Riemannian metric whose isometry group acts transitively. The result we obtain is the following:
\begin{thm} \label{thm-main}
\textit{Let $\overline{g}$ be a homogeneous Riemannian metric on $\mathbb{RP}^3$. If $g$ is a Riemannian metric on $\mathbb{RP}^3$ that is conformal to $\overline{g}$, then
\begin{equation*}
\frac{\mathcal{A}(\mathbb{RP}^3,g)}{vol(\mathbb{RP}^3,g)^{\frac{2}{3}}} \leq \frac{\mathcal{A}(\mathbb{RP}^3,\overline{g})}{vol(\mathbb{RP}^3,\overline{g})^{\frac{2}{3}}}.
\end{equation*}
Moreover, equality holds if and only if $g$ is a constant multiple of $\overline{g}$.}
\end{thm}
\indent Our proof of Theorem \ref{thm-main} is based on the classification of immersed minimal two-spheres in homogeneous three-spheres $(S^3,g)$, which has been obtained by Meeks, Mira, P\'erez and Ros \cite{MeeMirPerRos}. In a few words, up to ambient isometries, there exists a unique immersed minimal sphere, which is actually embedded and invariant under the antipodal map (see Theorem \ref{thm-minimal-spheres} for a more detailed statement). Denoting by $\mathcal{G}^{+}$ the set of oriented minimal two-spheres in a homogeneous $(S^3,g)$, we verify that $\mathcal{G}^{+}$ can be identified with $S^3$ itself, and that the following integral-geometric formula holds:
\begin{equation} \label{eq-formula-intgeo}
\int_{\mathcal{G}^{+}} \left(\Xint-_{\Sigma} f dA_{g} \right) d\mathcal{G}^{+}_{g} = \int_{S^3} f dV_{g} \quad \text{for all} \quad f\in C^{0}(S^3).
\end{equation}
Formula \eqref{eq-formula-intgeo} is well-known in the case of the round three-sphere, where minimal two-spheres are the totally geodesic equators (\textit{cf}. Santal\'o \cite{San}). From this point, a proof of Theorem \ref{thm-main} can be given following essentially the same argument, based on the Uniformisation Theorem, used by Pu and Loewner to establish their theorems about systoles of projective planes and two-tori, respectively (see for example \cite{Gro}, Section 1.B). \\
\indent We remark that the relevance of integral-geometric formulae similar to \eqref{eq-formula-intgeo} in this sort of maximisation problem for one-systoles was already recognised, notably by Gromov and Bavard \cite{Bav}. \\
\indent Restricting our attention to the conformal class of the round metrics, we can thus state the following
\begin{cor}
If $g$ is a Riemannian metric on $\mathbb{RP}^3$ that is conformal to the round metric $g_1$, then
\begin{equation*}
\mathcal{A}(\mathbb{RP}^3,g) \leq \frac{2}{{\sqrt[\leftroot{-1}\uproot{2}\scriptstyle 3]\pi}}vol(\mathbb{RP}^3,g)^{\frac{2}{3}},
\end{equation*}
and equality holds if and only if $g$ is a constant multiple of $g_1$.
\end{cor}
\indent More generally, it seems to be an arduous task to calculate the actual value of the two-systole of all homogeneous metrics, which belong to a two-parameter family up to scaling (see Section \ref{Sec-2}), except perhaps in the case of the family of Berger metrics $g_{\rho}$, $\rho>0$, where explicit formulae can be deduced and a numeric computation is feasible. In Section \ref{Sec-5}, we show that the normalised two-systole of $(\mathbb{RP}^3,g_\rho)$ attains a strict local \textit{minimum} at the round metric ($\rho=1$), and diverges to infinity as the parameter goes to either $0$ or $+\infty$. In a way, one can speak very concretely about systolic freedom (\textit{cf.} \cite{CroKat}, Section 4): the normalised two-systole considered here is unbounded on the space of Riemannian metrics on $\mathbb{RP}^3$, even among homogeneous metrics with positive Ricci curvature. \\
\indent In a companion paper \cite{AmbMon}, we study the \textit{Simon-Smith width} \cite{SimSmi} of three-spheres $(S^3,g)$. When a metric on $S^3$ is the pull-back of a metric $g$ on $\mathbb{RP}^3$ by the canonical projection $\pi : S^3 \rightarrow \mathbb{RP}^3$, and satisfies extra geometric assumptions, the width of $(S^3,\pi^{*}g)$ provides a lower bound to twice the value of the two-systole of $(\mathbb{RP}^3,g)$; for instance, this assertion holds for metrics admitting no stable minimal two-spheres. In \cite{AmbMon}, we interpret the integral-geometric formula \eqref{eq-formula-intgeo} as an evidence that the homogeneous metrics on the three-sphere should be local maxima of the normalised widths in their conformal classes as well. In fact, it was this expectation that led us to investigate the topics discussed here.
\section{Minimal two-spheres in homogeneous three-spheres} \label{Sec-2}
\indent Let $S^3$ denote the unit sphere in $\mathbb{R}^4\simeq \mathbb{C}^2$, centred at the origin,
\begin{equation*}
S^3 = \{(z,w) \in \mathbb{C}^2,\, |z|^2+|w|^2 =1\}.
\end{equation*}
\indent The three-sphere $S^3$ can be identified with the Lie group $SU(2)$ of the special unitary transformations of $\mathbb{C}^2$, which are represented by the two-by-two complex matrices of the form
\begin{equation*}
\left[ \begin{matrix}
z & -\overline{w} \\
w & \overline{z} \end{matrix} \right] ,\quad \text{where} \quad |z|^2 + |w|^2=1.
\end{equation*}
Under this identification, the group operation is given by
\begin{equation*}
(z,w)\cdot(u,v) = (zu-\overline{w}v,wu+\overline{z}v).
\end{equation*}
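As a quick sanity check of this formula (an illustration, not part of the paper), one can verify numerically that the operation above is exactly matrix multiplication of the corresponding $SU(2)$ matrices, and that left translation by $(-1,0)$ is the antipodal map:

```python
def su2(z, w):
    # The SU(2) matrix [[z, -conj(w)], [w, conj(z)]], as nested tuples.
    return ((z, -w.conjugate()), (w, z.conjugate()))

def matmul(A, B):
    # Plain 2x2 complex matrix multiplication.
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def group_op(zw, uv):
    # The operation (z,w)·(u,v) = (zu - conj(w)v, wu + conj(z)v) from the text.
    (z, w), (u, v) = zw, uv
    return (z*u - w.conjugate()*v, w*u + z.conjugate()*v)

# Two sample points of S^3 (|z|^2 + |w|^2 = 1); the values are arbitrary.
a = (complex(0.6, 0.0), complex(0.0, 0.8))
b = (complex(0.0, 1.0), complex(0.0, 0.0))

# (z,w)·(u,v) corresponds to the product of the associated SU(2) matrices.
assert matmul(su2(*a), su2(*b)) == su2(*group_op(a, b))
# Left translation by (-1,0) sends (u,v) to (-u,-v), i.e. the antipodal map.
assert group_op((complex(-1), complex(0)), b) == (-b[0], -b[1])
```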
\indent The left (resp. right) multiplication by an element $(z,w)\in S^3$ will be denoted by $\mathcal{L}_{(z,w)}$ (resp. $\mathcal{R}_{(z,w)}$). Notice that $\mathcal{L}_{(1,0)} : S^3 \rightarrow S^3$ is the identity map, whereas $\mathcal{L}_{(-1,0)} : S^3 \rightarrow S^3$ is the antipodal map. \\
\indent The antipodal map commutes with all left translations. We can identify the quotient of $S^3$ by the antipodal map, \textit{i.e.} the three-dimensional real projective space $\mathbb{RP}^3$, with the Lie group $SO(3)$ of the special orthogonal transformations of $\mathbb{R}^3$. \\
\indent For every Riemannian metric $g$ on $S^3$ that is invariant under left translations there exists an orthonormal basis $\{E_1, E_2, E_3\}$ of left-invariant vector fields, and real numbers $c_1$, $c_2$ and $c_3$, such that
\begin{equation*}
[E_2,E_3] = c_1E_1, \quad [E_3,E_1] = c_2E_2, \quad \text{and} \quad [E_1,E_2] = c_3E_3.
\end{equation*}
See \cite{Mil}, Section 4, for more details. The canonical metric on $S^3$ corresponds to the parameters $c_1=c_2=c_3=2$. The Berger metrics $g_{\rho}$, where $\rho\neq 1$ is a positive real number, correspond to the parameters $c_1=2\sqrt{\rho}$ and $c_2=c_3=2/\sqrt{\rho}$. Up to a choice of orientation of $S^3$, the constants $c_i$ can be taken to be all positive. \\
\indent The isometry group of a left-invariant metric on $S^3$ may contain transformations other than left translations; in particular, it can have dimension three, four (Berger metrics) or six (round metric). \\
\indent Any compact simply connected (locally) homogeneous Riemannian three-manifold is isometric to $S^3$ endowed with some left-invariant metric (see, for example, Theorem 2.4 in \cite{MeePer}). Since the antipodal map is a left translation, the pull-back by the canonical projection $\pi : S^3 \rightarrow \mathbb{RP}^3$ establishes a bijective correspondence between homogeneous metrics on $\mathbb{RP}^3$ and homogeneous metrics on $S^3$. We will therefore use the terms ``homogeneous" and ``left-invariant" interchangeably in this paper. \\\\
\indent The geometry of immersed two-spheres with constant mean curvature in a homogeneous three-sphere has been extensively studied by Meeks, Mira, P\'erez and Ros \cite{MeeMirPerRos}. The next theorem summarises those properties of minimal two-spheres that we will need for the applications we have in mind:
\begin{thm}(cf. \cite{MeeMirPerRos}, Theorems 1.3 and 7.1) \label{thm-minimal-spheres}
\\ \indent Let $g$ be a left-invariant metric on $S^3$.
\begin{itemize}
\item[$i)$] There exists an embedded index one minimal sphere $\Sigma_0$ in $(S^3,g)$.
\item[$ii)$] Every immersed minimal sphere in $(S^3,g)$ is a left translation of $\Sigma_0$. In particular, every immersed minimal sphere in $(S^3,g)$ is an embedded index one minimal sphere isometric to $\Sigma_0$.
\item[$iii)$] The antipodal map leaves every minimal sphere in $(S^3,g)$ invariant.
\item[$iv)$] If a left translation leaves a minimal sphere invariant, then it is either the identity map or the antipodal map.
\end{itemize}
\end{thm}
\indent The results of \cite{MeeMirPerRos} are actually much more general and detailed, and the interested reader is encouraged to study their paper. For the sake of convenience, we will briefly sketch some steps of the proof of the above statement here, taking a slightly different path from the one described in the aforementioned work. In particular, based on recent progress in min-max theory, one can prove $i)$ directly; properties $iii)$ and $iv)$, which are important for us later, will be explained by different arguments.
\begin{proof}
First, we observe that, due to the homogeneity of the metric $g$, any immersed minimal two-sphere $\Sigma$ in $(S^3,g)$ has nullity three (see Section 4 in \cite{MeeMirPerRos}). In particular, zero is an eigenvalue of the Jacobi operator of $\Sigma$ with multiplicity three; since the first eigenvalue of the Jacobi operator is always simple, zero cannot be the first eigenvalue. Thus, no immersed minimal sphere in $(S^3,g)$ is stable. It follows that any left-invariant metric on $S^3$ satisfies the assumptions of the min-max Theorem 3.4 of Marques and Neves \cite{MarNev-Duke}. Hence, there exists an embedded index one minimal two-sphere $\Sigma_0$ in $(S^3,g)$, confirming $i)$. \\
\indent The next (and most important) step involves the study of the left-invariant Gauss map of an index one minimal two-sphere. A key point is to show that this map must be a diffeomorphism, and here the index one property is used in a crucial way. Then, a Hopf differential type argument is used to prove that all immersed minimal spheres are actually obtained by a left translation of the index one minimal sphere $\Sigma_0$. Because details are involved, we refer the reader to the proof of items $(1)$ and $(2)$ of Theorem 4.1 in \cite{MeeMirPerRos} (the paper \cite{DanMir} by Daniel and Mira contains an insightful discussion on the ideas at the origin of the argument). \\
\indent Recall that the antipodal map $\mathcal{L}_{(-1,0)}$ commutes with all left translations. Thus, in view of $ii)$, in order to prove $iii)$ it is enough to show that $(S^3,g)$ contains a minimal two-sphere that is invariant under the antipodal map. As the antipodal map is an isometry of $(S^3,g)$, we can pass to the quotient and look for minimal projective planes contained in $(\mathbb{RP}^3,g)$, for the inverse image of any such surface will be a minimal sphere in $(S^3,g)$ with the required property. The existence of such a surface can be shown, for example, by using the Meeks--Simon--Yau Theorem to find the element of $\mathcal{F}$ with the least possible area, as indicated in Remark 7.2 in \cite{MeeMirPerRos} (a detailed argument is presented in \cite{BraBreEicNev}, Proposition 2.3). \\
\indent Finally, we prove item $iv)$ as follows. Fix an orientation of $\Sigma_0$ by defining a normal unit vector field $N$. Let $G : \Sigma_0 \rightarrow S^2$ denote the left-invariant Gauss map of $\Sigma_0$: it assigns to each point $p\in \Sigma_0$ the unique unit vector $G(p)$ in $(T_{(1,0)}S^3,g)$ such that $D\mathcal{L}_{p}(G(p)) = N(p)\in T_{p}S^3$. As observed above, $G$ is a diffeomorphism. Moreover, it is immediate to check that $G(\mathcal{L}_{(-1,0)}(p))= - G(p)$ for every $p\in \Sigma_{0}$. \\
\indent If $\mathcal{L}_{(a,b)}(\Sigma_0)=\Sigma_0$, then the map $\Phi = G\circ \mathcal{L}_{(a,b)}\circ G^{-1}$ is a diffeomorphism of $S^2 \subset (T_{(1,0)}S^3,g)$, which we can use to define the vector field
\begin{equation*}
X : q \in S^{2} \mapsto \Phi(q) - g(\Phi(q),q)q \in T_{(1,0)}S^3.
\end{equation*}
For every $q\in S^2$, the vector $X(q)$ is tangent to $S^2$. Since every tangent vector field on $S^2$ must vanish somewhere, there exists $q_0$ such that $X(q_0)=0$. By the equality case of the Cauchy--Schwarz inequality, it is immediate to conclude that either $\Phi(q_0)=q_0$ or $\Phi(q_0)=-q_0$. In the first case, $\mathcal{L}_{(a,b)}$ has a fixed point, and thus $(a,b)=(1,0)$. In the second case, the composition $\mathcal{L}_{(-a,-b)}=\mathcal{L}_{(-1,0)}\circ \mathcal{L}_{(a,b)}$ has a fixed point, because $(G\circ\mathcal{L}_{(-a,-b)}\circ G^{-1})(q_0)=(G\circ \mathcal{L}_{(-1,0)} \circ G^{-1})(\Phi(q_0))= -\Phi(q_0) = q_0$. It follows that $(-a,-b)=(1,0)$, or equivalently $(a,b)=(-1,0)$, as claimed.
\end{proof}
\begin{rmk} An immersed minimal two-sphere in the round three-sphere must be an embedded totally geodesic equator, as proven by Almgren \cite{Alm} and Calabi \cite{Cal} using the holomorphic differential technique pioneered by Hopf. Much later, Abresch and Rosenberg \cite{AbrRos} constructed a new holomorphic differential on surfaces in Berger spheres and used it to show that immersed minimal two-spheres are rotationally invariant and unique up to ambient isometries. In \cite{MeeMirPerRos}, Section 7, the (unique) minimal two-sphere in an arbitrary homogeneous three-sphere is constructed explicitly by geodesic reflection of certain Plateau discs along their boundaries, and its geometry is described in detail.
\end{rmk}
\section{The integral-geometric formula} \label{Sec-3}
\indent The result stated in the previous Section allows us to understand completely the space of all minimal two-spheres in a three-sphere endowed with a homogeneous metric $g$. In fact, let $\mathcal{G}^{+}$ be the set of all oriented immersed minimal spheres in $(S^3,g)$. By items $ii)$ and $iv)$ of Theorem \ref{thm-minimal-spheres}, $\mathcal{G}^{+}$ consists of embedded minimal spheres, and $S^3$ acts transitively and effectively on $\mathcal{G}^{+}$ by left translations. Thus, $\mathcal{G}^{+}$ can be identified with $S^{3}$ itself: choosing any $\Sigma_0$ in $\mathcal{G}^{+}$, the map
\begin{equation*}
(a,b) \in S^3 \mapsto \mathcal{L}_{(a,b)}(\Sigma_0) \in \mathcal{G}^{+}
\end{equation*}
is a bijection. We use this map to endow $\mathcal{G}^{+}$ with the Riemannian metric $g$ and all derived structures (metric, topology, volume element). Notice in particular that the natural topology of $\mathcal{G}^{+}$ (smooth graphical convergence) coincides with the topology induced by the above identification.\\
\indent In the next theorem, we prove the integral-geometric formula \eqref{eq-formula-intgeo}.
\begin{thm} \label{thm-integral-formula}
Let $g$ be a homogeneous Riemannian metric on $S^3$, and $\mathcal{G}^{+}$ denote the set of all oriented minimal two-spheres in $(S^3,g)$. For every continuous function $f$ on $S^3$, the following formula holds:
\begin{equation} \label{eq-formula}
\int_{\mathcal{G}^{+}} \left(\Xint-_{\Sigma} f dA_{g} \right) d\mathcal{G}^{+}_{g} = \int_{S^3} f dV_{g}.
\end{equation}
\end{thm}
\begin{proof}
Let $f$ be a continuous function on $S^3$, and fix $\Sigma_0$ in $\mathcal{G}^{+}$. Given $\Sigma$ in $\mathcal{G}^{+}$, let $(a,b)$ be the unique point in $S^{3}$ such that $\mathcal{L}_{(a,b)}(\Sigma_0)=\Sigma$. Since $\mathcal{L}_{(a,b)}$ is an orientation-preserving isometry, we can compute the integral of $f$ over $\Sigma$ by
\begin{equation*}
\int_{\Sigma} f dA_g = \int_{\Sigma_0} f(\mathcal{L}_{(a,b)}(p,q))dA_{g}(p,q).
\end{equation*}
In the above formula, $(p,q)$ denotes the integration variable, which is an arbitrary point of $\Sigma_0$. \\
\indent Clearly, $\Sigma$ and $\Sigma_0$ have the same area in $(S^3,g)$. Given the identification between $\mathcal{G}^{+}$ and $S^3$, we can now use Fubini's Theorem to compute
\begin{align*}
\int_{\mathcal{G}^{+}} \left(\Xint-_{\Sigma} f dA_{g} \right) d\mathcal{G}^{+}_{g} & = \int_{S^3} \left(\Xint-_{\Sigma_0} f(\mathcal{L}_{(a,b)}(p,q))dA_{g}(p,q) \right) dV_{g}(a,b) \\
& = \Xint-_{\Sigma_0} \left( \int_{S^3} f(\mathcal{L}_{(a,b)}(p,q))dV_{g}(a,b) \right) dA_{g}(p,q) \\
& = \Xint-_{\Sigma_0} \left( \int_{S^3} f(\mathcal{R}_{(p,q)}(a,b))dV_{g}(a,b) \right)dA_{g}(p,q).
\end{align*}
\indent Like any compact Lie group, $S^3$ is unimodular: the volume form $dV_g$ of the left-invariant metric $g$ must also be invariant under right translations. Thus, for every $(p,q)$ in $\Sigma_0$,
\begin{equation*}
\int_{S^3} f(\mathcal{R}_{(p,q)}(a,b))dV_{g}(a,b) = \int_{S^3} f dV_g.
\end{equation*}
\noindent Therefore
\begin{equation*}
\int_{\mathcal{G}^{+}} \left(\Xint-_{\Sigma} f dA_{g} \right) d\mathcal{G}^{+}_{g} = \Xint-_{\Sigma_0} \left( \int_{S^3} f dV_g\right) dA_g(p,q) = \int_{S^3} f dV_g.
\end{equation*}
\end{proof}
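For the round metric $g_1$, where the oriented minimal two-spheres are the great equators $S^3\cap n^{\perp}$ parametrised by their unit normals $n$, formula \eqref{eq-formula} can be sanity-checked by Monte Carlo integration. The following sketch is only an illustration; the test function, sample size, and sampling scheme are arbitrary choices.

```python
import math
import random

random.seed(0)

def rand_unit4():
    # Uniform point on the round S^3: normalised Gaussian 4-vector.
    v = [random.gauss(0, 1) for _ in range(4)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def f(p):
    # Any continuous test function on S^3; this choice is arbitrary.
    return p[0] ** 2 + 0.5 * p[1]

def rand_point_on_equator(n):
    # Uniform point on the equator S^3 ∩ n^⊥: Gaussian 4-vector
    # projected onto n^⊥ and normalised.
    v = [random.gauss(0, 1) for _ in range(4)]
    d = sum(a * b for a, b in zip(v, n))
    w = [a - d * b for a, b in zip(v, n)]
    m = math.sqrt(sum(x * x for x in w))
    return [x / m for x in w]

N = 200_000
# Left side of the formula: average over equators of the equator average of f.
lhs = sum(f(rand_point_on_equator(rand_unit4())) for _ in range(N)) / N
# Right side: average of f over S^3 (here E[x1^2] = 1/4, E[x2] = 0).
rhs = sum(f(rand_unit4()) for _ in range(N)) / N
print(lhs, rhs)   # both ≈ 1/4, up to Monte Carlo noise
```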
\section{The two-systole of homogeneous metrics} \label{Sec-4}
\indent The next Lemma is a direct consequence of formula \eqref{eq-formula}.
\begin{lemm} \label{lemma}
Let $\overline{g}$ be a homogeneous Riemannian metric on $S^{3}$, $\mathcal{G}^{+}$ denote the set of all oriented minimal spheres in $(S^3,\overline{g})$, and $w(\overline{g})$ be the common value of the area of each element in $\mathcal{G}^+$. If $g$ is a Riemannian metric on $S^{3}$ that is conformal to $\overline{g}$, then
\begin{equation*}
\min_{\Sigma \in \mathcal{G}^{+}}area(\Sigma,g) \leq \frac{w(\overline{g})}{vol(S^{3},\overline{g})^{\frac{2}{3}}} vol(S^{3},g)^{\frac{2}{3}}.
\end{equation*}
Moreover, equality holds if and only if $g$ is a constant multiple of $\overline{g}$.
\end{lemm}
\begin{proof}
\indent Write $g=\phi \overline{g}$ for some positive smooth function $\phi$ on $S^{3}$. For every $\Sigma$ in $\mathcal{G}^{+}$, we have
\begin{equation*}
area(\Sigma,g) = \int_{\Sigma} \phi dA_{\overline{g}} \quad \Rightarrow \quad area(\Sigma,g) = w(\overline{g})\Xint-_{\Sigma} \phi dA_{\overline{g}}.
\end{equation*}
Therefore, the integral-geometric formula \eqref{eq-formula} gives
\begin{multline*}
\Xint-_{\mathcal{G}^{+}}area(\Sigma,g)dV_{\overline{g}} = w(\overline{g})\Xint-_{S^{3}}\phi dV_{\overline{g}} \\
\leq w(\overline{g})\left(\Xint-_{S^{3}}\phi^{\frac{3}{2}} dV_{\overline{g}}\right)^{\frac{2}{3}} = \frac{w(\overline{g})}{vol(S^{3},\overline{g})^{\frac{2}{3}}}vol(S^3,g)^{\frac{2}{3}},
\end{multline*}
where we used H\"older's inequality and the fact that $dV_g=\phi^{\frac{3}{2}}dV_{\overline{g}}$. Equality holds if and only if $\phi$ is a positive constant. \\
\indent Since $\mathcal{G}^{+}$ is compact and the map $\Sigma\in \mathcal{G}^{+} \mapsto area(\Sigma,g) \in \mathbb{R}$ is continuous, the lemma follows.
\end{proof}
\begin{rmk} \label{rmk-value}
From the proof of item $iii)$ of Theorem \ref{thm-minimal-spheres}, it should be clear that the value of $w(\overline{g})$ in Lemma \ref{lemma} is equal to twice the value of $\mathcal{A}(\mathbb{RP}^3,\widetilde{g})$, where $\widetilde{g}$ is the unique homogeneous metric on $\mathbb{RP}^3$ such that the canonical projection $\pi : (S^3,\overline{g}) \rightarrow (\mathbb{RP}^3,\widetilde{g})$ is a local isometry.
\end{rmk}
\indent We are now ready to prove Theorem \ref{thm-main}:
\begin{proof}
By Theorem \ref{thm-minimal-spheres}, each minimal sphere in the homogeneous $(S^3,\pi^{*}\overline{g})$ is embedded and invariant under the antipodal map. Thus, every element of $\mathcal{G}^+$ projects down to $\mathbb{RP}^3$ as an element of $\mathcal{F}$. The result is now a direct consequence of the definition of the two-systole, Lemma \ref{lemma} and Remark \ref{rmk-value}.
\end{proof}
\section{The two-systole of Berger metrics} \label{Sec-5}
\indent In this section, we compute the value of the normalised two-systole of Berger spheres. We follow the nice exposition of Torralbo in \cite{Tor} and \cite{Tor2}. Given $\rho>0$, the Berger metric on $S^3=\{(z,w)\in \mathbb{C}^2;\, |z|^2+|w|^2=1\}$ is defined by
\begin{equation*}
g_{\rho}(X,Y) = \langle X, Y \rangle + (\rho^2-1)\langle X,\xi\rangle \langle Y,\xi\rangle \quad \text{for all} \quad X, Y \in \mathcal{X}(S^3),
\end{equation*}
where $\langle-,-\rangle$ denotes the Euclidean metric on $\mathbb{R}^4\simeq \mathbb{C}^2$ and $\xi: (z,w) \in S^3 \mapsto (iz,iw) \in \mathbb{C}^2$ is the vector field generating the Hopf action of $S^1$ on $S^3$. Notice that $g_{\rho}(\xi,\xi)$ is constant and equal to $\rho^2$, and that $g_{\rho}$ coincides with the standard metric on the orthogonal complement of $\xi$. The metric $g_{1}$ is the standard metric on the unit three-sphere $S^3$. \\
\indent The volume of $(S^3,g_{\rho})$ is equal to
\begin{equation*}
vol(S^3,g_{\rho}) = \rho vol(S^3,g_1) = 2\pi^2\rho.
\end{equation*}
\indent For all values of $\rho$, the vector field $\xi$ is an eigenvector of the Ricci tensor of $g_{\rho}$ associated to the eigenvalue $2\rho^2$. When $\rho\neq 1$, the other eigenvalue has multiplicity two and is equal to $4-2\rho^2$. In particular, $(S^3,g_{\rho})$ has positive Ricci curvature when $0 < \rho < \sqrt{2}$. \\
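The eigenvalues above can be double-checked with Milnor's formulas for the Ricci curvature of left-invariant metrics on unimodular Lie groups. The sketch below is an independent illustration, not the computation used in the references; note that in an orthonormal frame adapted to $g_\rho$ the structure constants are $c_1=2\rho$ and $c_2=c_3=2/\rho$, a normalisation of the parameter different from that of Section \ref{Sec-2}.

```python
def ricci_eigenvalues(c1, c2, c3):
    """Principal Ricci curvatures of a left-invariant metric on S^3 with
    structure constants [E2,E3] = c1 E1 (and cyclic permutations), via
    Milnor's formulas: r_i = 2 * s_j * s_k, s_i = (c_j + c_k - c_i)/2."""
    s = [(-c1 + c2 + c3) / 2, (c1 - c2 + c3) / 2, (c1 + c2 - c3) / 2]
    return (2 * s[1] * s[2], 2 * s[2] * s[0], 2 * s[0] * s[1])

# Round metric: c1 = c2 = c3 = 2 gives Ricci = 2 in every direction.
print(ricci_eigenvalues(2, 2, 2))

# Berger metric g_rho of this section: xi has length rho, so an adapted
# orthonormal frame has c1 = 2*rho and c2 = c3 = 2/rho.
rho = 0.7
r1, r2, r3 = ricci_eigenvalues(2 * rho, 2 / rho, 2 / rho)
print(r1, r2, r3)   # 2*rho^2, then 4 - 2*rho^2 twice, as in the text
```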
\indent As observed in \cite{Tor2}, Section 3, the horizontal two-sphere
\begin{equation*}
\Sigma_0 = \{(z,w)\in S^3;\, w=\overline{w}\}
\end{equation*}
is precisely the unique minimal two-sphere in $(S^3,g_{\rho})$ up to ambient isometries, for all values of $\rho>0$. An explicit formula for its area is given in \cite{Tor}, Proposition 2. We perform the computation differently: using standard polar coordinates $(s,\theta)$ based at the north pole $(0,0,1,0)$ to parametrise $\Sigma_0$, it is straightforward to calculate
\begin{equation*}
area(\Sigma_0,g_{\rho}) = 2\pi \int_{0}^{\pi} \sqrt{\sin^{2}(s)+(\rho^2-1)\sin^{4}(s)} ds.
\end{equation*}
\indent The two-systole of $(\mathbb{RP}^3,g_{\rho})$ is equal to half the area of $\Sigma_0$ in $(S^3,g_{\rho})$. Thus, up to the constant factor $1/{\sqrt[\leftroot{-1}\uproot{2}\scriptstyle 3]\pi}$, the normalised two-systole of $(\mathbb{RP}^3,g_{\rho})$ is computed by the function
\begin{equation*}
F : \rho \in (0,+\infty) \mapsto \frac{1}{\rho^{\frac{2}{3}}}\int_{0}^{\pi}\sin(s) \sqrt{ (1-\sin^{2}(s)) + \rho^2\sin^{2}(s)} ds \in \mathbb{R}.
\end{equation*}
\noindent It is possible to check that
\begin{equation*}
F'(1)=0, \quad F''(1)>0 \quad \text{and} \quad \lim_{\rho\rightarrow 0} F(\rho)= \lim_{\rho\rightarrow +\infty} F(\rho) = +\infty
\end{equation*}
rather easily. In words: among Berger metrics, the normalised two-systole attains a strict local minimum at the round metric $g_{1}$ and diverges to infinity either as the size of the Hopf orbits increases beyond all bounds, or as they collapse to zero. \\
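These claims are easy to check numerically; the following sketch (ours, not part of the original argument) evaluates $F$ by composite Simpson quadrature and approximates its derivatives at $\rho=1$ by central differences. One finds $F(1)=2$ and $F''(1)=32/45>0$.

```python
import math

def F(rho, n=4000):
    # F(rho) = rho^(-2/3) * Integral_0^pi sin(s) sqrt(cos^2 s + rho^2 sin^2 s) ds,
    # evaluated with composite Simpson's rule (n must be even).
    h = math.pi / n
    def integrand(s):
        sn2 = math.sin(s) ** 2
        return math.sin(s) * math.sqrt(1.0 - sn2 + rho * rho * sn2)
    acc = integrand(0.0) + integrand(math.pi)
    for k in range(1, n):
        acc += (4 if k % 2 else 2) * integrand(k * h)
    return rho ** (-2.0 / 3.0) * acc * h / 3.0

eps = 1e-4
dF = (F(1 + eps) - F(1 - eps)) / (2 * eps)           # ~ F'(1) = 0
d2F = (F(1 + eps) - 2 * F(1) + F(1 - eps)) / eps**2  # ~ F''(1) = 32/45 > 0
```

Evaluating $F$ at very small and very large $\rho$ also confirms the divergence at both ends of the Berger family.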
\noindent \textbf{Acknowledgements:} The investigations that led to the writing of this paper were initiated while L.A. was a Research Fellow at the University of Warwick, supported by the EPSRC Programme Grant `Singularities of Geometric Partial Differential Equations', reference number EP/K00865X/1. L.A. would also like to thank Fernando Marques for the invitation to visit Princeton University in April 2018, during which the first conversations about this project between the authors took place.
The AdS/CFT correspondence\cite{Maldacena:1997re} is widely used to study the dynamics of strongly correlated systems. It dictates that the dynamics of the boundary theory can be studied by considering the fields in the dual gravity theory in the bulk AdS spacetime. A systematic application of these ideas to QCD started with the work of Sakai and Sugimoto \cite{Sakai:2004cn}.
Alternatively, the phenomenological models of QCD \cite{Erlich:2005qh} make use of some known features of QCD and try to incorporate these features in the bulk gravity theory. In these models, phenomena like chiral symmetry breaking are introduced by bifundamental fields dual to the chiral condensate, and confinement is realized by introducing an IR cut-off. This approach was further modified to include the Regge trajectories of mesons and is popularly known as the soft wall model of QCD \cite{Karch:2006pv}. In this model, the role of the IR cut-off is played by a dynamical wall.
These models have been studied further to investigate the phase structure and thermodynamics of QCD \cite{Sachan:2011iy}. Transport properties like the AC and DC conductivity and the diffusion constant have also been investigated in these models \cite{Son:2002sd,Kovtun:2003wp,Erlich:2009me,Jain:2009bi,Edalati:2010hk,Kim:2010ag, Albrecht:2012ek,Donos:2012js}. The renormalization group (RG) flow of transport coefficients in theories dual to charged black holes has been studied using the membrane paradigm \cite{Iqbal:2008by,Faulkner:2010jy,Heemskerk:2010hk,Bredberg:2010ky,Sin:2011yh,Matsuo:2009yu,Matsuo:2011fk,Lee:2011qu,Ge:2014eba}, where the dynamics of vector and tensor perturbations is used to derive the flow equations for the conductivity and the diffusion constant. Here we adapt this formalism \cite{Iqbal:2008by,Matsuo:2011fk,Ge:2014eba} to investigate the renormalization flow of the AC conductivity in the soft wall model of QCD. This is further generalized to include the effects of Gauss-Bonnet couplings in the bulk.
The paper is organized as follows. First, we consider the charged black hole solution of Einstein-Maxwell theory and the perturbation equations for the bulk metric and gauge field in the soft wall model. We calculate the AC conductivity using the membrane paradigm, and explicit results are given in the near-horizon limit. We also plot the full solution for the AC conductivity as a function of frequency, together with the results in the probe limit. Second, we obtain the AC conductivity in the presence of higher-order gravity corrections to the action, known as Gauss-Bonnet corrections. We also give an explicit expression for the DC conductivity at the cut-off surface in the appendix.
\section{Transport Coefficients in Einstein-Maxwell Theory}
Let us consider the Einstein-Maxwell action in 5-dimensions,
\begin{equation}\label{eq:action0}
S=\int~d^5x~\sqrt{-g}e^{-\phi}~\left\lbrace\frac{1}{2\kappa^2}\left(R-2\Lambda\right)-\frac{1}{4g^2}F^2\right\rbrace,
\end{equation}
where $F^2=F_{mn}F^{mn}$ is the Lagrangian density of the Maxwell field and $\phi$ is the dilaton field. The constant $\kappa^2$ is related to the five-dimensional Newton's constant $G_5$ as $\kappa^2=8\pi G_5$, and the cosmological constant is related to the AdS radius $l$ as $\Lambda=-6/l^2$. The AdS/QCD correspondence relates the five-dimensional gravitational constant $2\kappa^2$ and the five-dimensional gauge coupling to the rank of the color group ($N_c$) and the number of flavours ($N_f$) in the boundary theory,
$\frac{1}{2\kappa^2}=\frac{N_c^2}{8\pi^2}$,~~~ $\frac{1}{2g^2}=\frac{N_cN_f}{4\pi^2}$.
The charged black hole solution\cite{Chamblin:1999tk,Ge:2008ak,Matsuo:2011fk} of Einstein-Maxwell gravity in five dimensions with negative cosmological constant is given by,
\begin{eqnarray}
ds^2&=&\frac{r^2}{l^2}(-f(r)dt^2+\sum_{i=1}^3 dx^idx^i)+\frac{l^2}{r^2f(r)}dr^2, \\
A_t&=&\mu(1-\frac{r_+^2}{r^2}) \nonumber
\end{eqnarray}
where
$f(r)=1+a\frac{r_+^6}{r^6}-(1+a)\frac{r_+^4}{r^4}$, and the charge of the black hole is related to the parameter `$a$' and the chemical potential $\mu$ as $a=\frac{l^2\kappa^2Q^2 }{6g^2}$, $Q=\frac{2\mu}{r_+}$. \\ ~~~\\
Defining $u=\frac{r^2_+}{r^2}$ for simplicity, the above metric can be written as,
\begin{equation}
ds^2=\dfrac{r_{+}^2}{l^2u}(-f(u)dt^2+\sum_{i=1}^3 dx^idx^i)+\dfrac{l^2du^2}{4u^2f}
\end{equation}
where $f(u)=(1-u)(1+u-au^2)$.
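As a quick sanity check (ours, not part of the paper): $f(0)=1$ at the boundary, $f(1)=0$ at the horizon, $f>0$ on $(0,1)$ for $a<2$, and $f'(1)=a-2$, so the horizon becomes extremal at $a=2$. A minimal sketch:

```python
def f(u, a):
    # Blackening factor of the 5d charged black hole, u = r_+^2 / r^2.
    return (1.0 - u) * (1.0 + u - a * u * u)

def fprime(u, a, eps=1e-6):
    # Central finite difference, enough to probe (non-)extremality at u = 1.
    return (f(u + eps, a) - f(u - eps, a)) / (2.0 * eps)
```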
We take the dilaton field in the soft wall model as $\phi=cu$ with the constant $c=0.388$ GeV$^2$ \cite{Karch:2006pv}.
The equation of motion for the gauge field in the soft wall model can be written as,
\begin{equation}\label{eq:gauge}
\frac{1}{\sqrt{-g}}\partial_m(\sqrt{-g}e^{-\phi}\, F^{mn})~=~0. \\
\end{equation}
Let us consider the metric and gauge field perturbations as,
\begin{eqnarray}
g_{mn}~=~g^0_{mn}+\tilde{h}_{mn}\\
\textit{A}_{m}~=~A_{m}^0+A_{m}.
\end{eqnarray}
We scale the metric perturbation as $\tilde{h}_{mn}=e^{\phi}{h}_{mn}$
and take the Fourier decomposition of the fields as follows,
\begin{eqnarray} \label{eq:eqft}
h_{mn}(t,z,u)~=~\int\frac{d^4k}{(2\pi)^4}e^{-i\omega t+ikz}h_{mn}(k,u)\\
A_m(t,z,u)~=~\int\frac{d^4k}{(2\pi)^4}e^{-i\omega t+ikz}A_m(k,u).
\end{eqnarray}
Now, focusing on the linearised theory for $h_{mn}$ and for the vector field $A_m$ propagating in the charged black hole background, with the gauge conditions $h_{un}=0$ and $A_u=0$, one gets the equations of motion for the vector modes of the metric perturbations, $h^x_z$ and $h^x_t$,
\begin{eqnarray}
0=kfh'^{x}_z+\omega h'^{x}_t-3a\omega uA_x \label{eq:perti} \\
0=h''^{x}_t-\frac{1}{u}h'^{x}_t-\frac{b^2}{uf}(\omega kh^x_z+k^2h^x_t)-3auA'_x\label{eq:pert1} \\
0=h''^{x}_z+\frac{(u^{-1}f)'}{u^{-1}f}h'^x_z+\frac{b^2}{uf^2}(\omega^2h^x_z+\omega kh'^{x}_t) \label{eq:pertii}
\end{eqnarray}
where, $b=\frac{l^2}{2r_+}$.
Similarly, the gauge field equation \eqref{eq:gauge} becomes,
\begin{equation}
0=A''_x+(\frac{f'}{f}-c) A'_x+\frac{b^2}{uf^2}(\omega^2-k^2f)A_x-e^{\phi}\frac{A'_t}{f}h'^{x}_t \label{eq:cd}.
\end{equation}
The gauge field equation is coupled to the metric perturbations in the charged black hole background, and one has to resort to numerical methods to solve these equations in order to calculate the AC conductivity. However, in the near-horizon regime an exact solution can be obtained.
We consider the AC conductivity flow in the soft wall model within the membrane paradigm \cite{Iqbal:2008by}, defining
\begin{equation}\label{eq:cond}
\sigma_{A}(\omega,u)~=~\dfrac{J^x}{i\omega A_{x}},
\end{equation}
where the current density is given as,
\begin{equation}\label{eq:gauge1}
J^{x}~=~\dfrac{-1}{g^2}\sqrt{-g}e^{-\phi}F^{ux}+\frac{g_{xx}}{2\kappa^2}\sqrt{-g}A'_th^x_t.
\end{equation}
Now, using Eqs.~\eqref{eq:gauge}, \eqref{eq:cond} and \eqref{eq:gauge1}, the conductivity flow equation can be written as,
\begin{eqnarray}
\dfrac{\partial_{u_{c}}\sigma_{A}}{i\omega}-\dfrac{g^2\sigma_{A}^2}{\sqrt{-g}e^{-\phi}g^{uu}g^{xx}}-\dfrac{2\kappa^2g^{xx}\sqrt{-g}}{g^4\omega^2}\dfrac{4 u^3(A'_t)^2 }{r_+^2}\nonumber \\
-\dfrac{\sqrt{-g}e^{-\phi}g^{xx}g^{tt}}{g^2}~=~0.
\end{eqnarray}
\begin{figure}[H]
\centering
\includegraphics[width=0.45\textwidth]{fig1_1.eps}
\includegraphics[width=0.45\textwidth]{fig1_2.eps}
\caption{The radial flow of the AC conductivity ($\sigma$) at fixed frequency ($\omega=0.999$) with varying chemical potential ($\mu=0.01$ (black), $0.05$ (magenta), $0.1$ (dotted blue), $0.25$ (dashed red))}
\label{fig:fig1}
\end{figure}
In the near-horizon limit, $\sigma_A$ is constant and can be evaluated by applying the regularity condition at the horizon ($u=1$),
\begin{equation}
\sigma_A(u=1)~=~\dfrac{e^{-c}}{g^2}\dfrac{r_+}{l}.
\end{equation}
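As an illustration (ours, not from the paper), the flow equation can be integrated numerically in the probe limit ($A'_t\to 0$, $k=0$, charge parameter $a\to 0$), in units $r_+=l=g=1$ with $c=0.388$. The relative sign of the $\sigma_A^2$ and $g^{tt}$ terms is chosen so that their near-horizon divergences cancel at the horizon value $\sigma_A(1)=e^{-c}$ obtained above:

```python
import math

C = 0.388  # dilaton slope c of the soft wall model

def f(u):
    # Blackening factor in the probe limit (charge parameter a -> 0).
    return (1.0 - u) * (1.0 + u)

def rhs(u, sigma, omega):
    # Probe-limit radial flow d(sigma)/du in units r_+ = l = g = 1; the sign
    # of the last term is fixed by horizon regularity, sigma(1) = e^{-c}.
    fu = f(u)
    return 1j * omega * (sigma * sigma * math.exp(C * u) / (2.0 * fu)
                         - math.exp(-C * u) / (2.0 * u * fu))

def flow(omega, u_stop=0.5, u_start=1.0 - 1e-4, steps=20000):
    # RK4 integration from just outside the horizon toward the boundary,
    # starting from the local regular value sigma = e^{-c u} / sqrt(u).
    u = u_start
    sigma = math.exp(-C * u) / math.sqrt(u)
    h = (u_stop - u_start) / steps
    for _ in range(steps):
        k1 = rhs(u, sigma, omega)
        k2 = rhs(u + h / 2, sigma + h * k1 / 2, omega)
        k3 = rhs(u + h / 2, sigma + h * k2 / 2, omega)
        k4 = rhs(u + h, sigma + h * k3, omega)
        sigma += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        u += h
    return sigma
```

At the horizon the two divergent terms cancel at the regular value, and the flow remains finite as it is integrated toward the cut-off surface.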
The RG flow plots for the AC conductivity are shown in Fig.~\ref{fig:fig1}. The frequency dependence of the AC conductivity has been evaluated numerically and is shown in Figs.~3, 4, 5 and 6. In the probe brane limit, taking $f(u)=1-u^2$, the plots in Fig.~3 show a striking similarity with condensed matter systems \cite{Hartnoll:2008vx}.
\section{Transport Coefficients with Gauss Bonnet Corrections}
We study the effect of the Gauss-Bonnet (GB) coupling on the RG flow of the conductivity in the soft wall model. The modified action with the GB term is given by,
\begin{equation}\label{eq:action1}
S~=~\int~d^5x~\sqrt{-g}e^{-\phi}~\left\lbrace\frac{1}{2\kappa^2}\left(R-2\Lambda+\alpha
R_{GB}\right)-\frac{1}{4g^2}F^2\right\rbrace,
\end{equation}
where $R_{GB}~=~R^2-4R_{MN}R^{MN}+R^{MNPQ}R_{MNPQ}$ is the Gauss-Bonnet term
and `$\alpha$' is the Gauss-Bonnet coupling constant.
We consider the solution of the Einstein-Maxwell-Gauss-Bonnet (EMGB) system \cite{Cai:2001dz,Cai:2009zv,Buchel:2009sk,Ge:2011fb,Hu:2011ze},
\begin{equation}
ds^2~=~\dfrac{r_{+}^2}{l^2u}(-f(u)N^2dt^2+\sum_{i=1}^3 dx^idx^i)+\dfrac{l^2du^2}{4u^2f},
\end{equation}
where
\begin{eqnarray}
N^2&=&\frac{1}{2} \left(\sqrt{1-4 \alpha }+1\right) \nonumber \\
f(u)&=&\dfrac{1}{2\lambda}\left(1-\sqrt{1-4\lambda(1-u)(1+u-au^2)}\right), \nonumber
\end{eqnarray}
and $\lambda$ is related to the Gauss-Bonnet coupling as $\lambda~=~\frac{\alpha}{l^2}$.
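Two quick consistency checks on this solution (our sketch, taking $l=1$ so that $\lambda=\alpha$): the constant $N^2$ satisfies $N^2 f(0)=1$, which normalises the boundary speed of light to one, and $f(u)$ reduces to the Einstein-Maxwell blackening factor $(1-u)(1+u-au^2)$ as $\lambda\to 0$:

```python
import math

def f_einstein(u, a):
    # Einstein-Maxwell blackening factor in the u = r_+^2/r^2 coordinate.
    return (1.0 - u) * (1.0 + u - a * u * u)

def f_gb(u, a, lam):
    # Gauss-Bonnet blackening factor, lam = alpha / l^2 (here l = 1).
    return (1.0 - math.sqrt(1.0 - 4.0 * lam * f_einstein(u, a))) / (2.0 * lam)

def N2(lam):
    # N^2 = (1 + sqrt(1 - 4 lam)) / 2, fixing N^2 * f_gb(0) = 1.
    return (1.0 + math.sqrt(1.0 - 4.0 * lam)) / 2.0
```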
Using the membrane paradigm as explained above for the charged black hole case, and perturbing the metric and gauge fields as in the previous section, the modified equations of motion for the vector perturbations are given by,
\begin{equation}
0= \omega h'^{x}_t-\frac{uM'}{M}fN^2kh'^x_z-\frac{3aN^2A_x\omega}{M}
\end{equation}
\begin{equation}
0= h''^{x}_t+\frac{M'}{M}h'^{x}_t+\frac{l^4b^2}{4f}\frac{M'}{M}(\omega kh^x_z+k^2h^{x}_t)-\frac{3aN^2A'_x}{M}
\end{equation}
\begin{equation}
\begin{split}
0= h''^{x}_z-\frac{\frac{1}{u}-\frac{f'}{f}+2\lambda[\frac{f}{u}+uf''+u\frac{f'^2}{f}-2f']}{1+2\lambda(uf'-f)}h'^x_z \\+ \frac{l^4b^2}{4N^2uf^2}(\omega^2h^x_z+\omega kh^{x}_t),
\end{split}
\end{equation}
\begin{equation}\label{eq:gauge3}
0= A''_x+(\frac{f'}{f}-c)A'_x+\frac{l^4}{4N^2f^2r_+^2u}(\omega^2-k^2f)A_x-e^\phi\frac{1}{N^2f}h'^{x}_t
\end{equation}
where $M~=~\frac{1-2\lambda f(u)}{u}$.
In order to determine the flow equation for the AC conductivity, we consider the current density with the GB corrections, defined as,
\begin{equation}
J^x=-\frac{1}{g^2}\sqrt{-g}g^{uu}g^{xx}e^{-\phi}\partial_uA_x +\frac{1}{g^2}\sqrt{-g}\frac{4u^3}{r_+^2 N^2}A'_th^x_t
\end{equation}
The corresponding RG flow equation for the AC conductivity (using definition \eqref{eq:cond}) becomes,
\begin{equation}\label{eq:gauge2}
\begin{split}
\dfrac{\partial_{u_{c}}\sigma_{A}}{i\omega}-\dfrac{g^2\sigma_{A}^2}{\sqrt{-g}e^{-\phi}g^{uu}g^{xx}}-\dfrac{2\kappa^2 g^{xx}\sqrt{-g}}{g^4\omega^2}(A'_t)^2 \dfrac{4 u^2}{Mr_+^2} \\
-\dfrac{\sqrt{-g}e^{-\phi}g^{xx}g^{tt}}{g^2}=0
\end{split}
\end{equation}
In the near-horizon limit, $u_c~=~1$ (the cut-off at the horizon), we can get an exact expression for the AC conductivity,
\begin{eqnarray}
\sigma_A(u_c=1)~=~\dfrac{r_+}{g^2}\dfrac{e^{-c}}{l}
\end{eqnarray}
The RG flow plots for the AC conductivity with GB corrections are shown in Fig.~\ref{fig:fig3}, and we notice that the qualitative features of the flow are similar to the case without the Gauss-Bonnet correction. \\ ~~~\\
\begin{figure}[H]
\centering
\includegraphics[width=0.45\textwidth]{fig3_1.eps}
\includegraphics[width=0.45\textwidth]{fig3_2.eps}
\caption{The radial flow of the AC conductivity ($\sigma$) with GB corrections ($\lambda =0.01$) at fixed frequency ($\omega=0.999$) with different $\mu=0.01$ (blue), $0.05$ (red), $0.1$ (dashed magenta), $0.25$ (dashed blue)}
\label{fig:fig3}
\end{figure}
\begin{figure*}
\includegraphics[width=0.45\textwidth]{fig8_1.eps}
\includegraphics[width=0.45\textwidth]{fig8_2.eps}
\caption{Frequency dependence of the AC conductivity ($\sigma$) in the probe limit with varying chemical potential ($\mu=0.01$ (blue), $0.1$ (black), $0.9$ (red), $1.5$ (brown))}
\label{fig:fig8}
\end{figure*}
\section{Conclusions}
The soft wall model of holographic QCD is used here to gain insight into transport properties such as the AC and DC conductivity in the strongly coupled regime of QCD. The flow equations for the AC and DC conductivity are considered for different values of the chemical potential. The numerical solution of these equations enables us to calculate the real and imaginary parts of the AC conductivity, and the results agree with models that consider the dynamics of the condensate, suggesting that the soft wall model successfully captures the same features. This has also been noticed recently and independently in \cite{Afonin:2015fga}. In the probe limit our results (Fig.~3) agree with existing results in the literature \cite{Hartnoll:2008vx,Hartnoll:2009sz,Matsuo:2011fk}. The results at high frequency (Fig.~4, Fig.~5) show an oscillatory behavior of the AC conductivity, reminiscent of the Shubnikov-de Haas effect. At low frequency, Drude behavior \cite{Donos:2012js} is observed (Fig.~6). The Gauss-Bonnet coupling does not change the results significantly. \\
\section{Acknowledgement}
We acknowledge financial support from the DST, Govt. of India, through a Young Scientist project.
Polymer gels are widely used in food products such as yogurt, tofu, and jelly~\cite{Baziwane2003, Saha2010, Peng2015} and in biomaterials such as anti-adhesion agents, hemostatic agents, and soft contact lenses~\cite{Yeo2007, Gaharwar2014, Calo2015}.
For these applications, it is important to control the stiffness of polymer gels.
For example, flexible polymer gels are used in artificial vitreous substitutes and food for dysphagia, and stiff polymer gels are used in hemostatic agents and artificial cartilage.
By optimizing the polymer gel stiffness for its intended use, the quality of life (QOL) can be improved in various situations.\\
Despite the importance of controlling the stiffness, it is an open question how the stiffness of a polymer gel is determined by its microscopic network structure.
The elastic behavior of polymer gels, which are usually regarded as rubber containing a large amount of solvent, has been conventionally analyzed and predicted based on models of classical rubber elasticity theories, such as the affine~\cite{Flory1953}, phantom~\cite{James1953}, and junction affine network models~\cite{Flory1977}.
However, it is difficult to verify the applicability of these microscopic models to the macroscopic properties of polymer gels because conventional polymer gels inherently have inhomogeneous network structures~\cite{Shibayama1998}.
Thus, the determination of the appropriate microscopic model describing polymer-gel elasticity remains to be achieved~\cite{Patel1992, Hild1998}.\\
In recent years, we overcame the difficulty of inhomogeneity by developing a tetra-arm poly(ethylene glycol) (PEG) hydrogel (tetra gel)~\cite{Sakai2008} with a homogeneous network structure~\cite{Matsunaga2009} (Fig.~\ref{fig:1}a).
In the tetra gel, we can independently and systematically control the structure of the polymer network, as shown in Fig.~\ref{fig:1}b.
Using tetra gels, we have studied the linear elasticity of polymer gels in the as-prepared state by various experimental techniques~\cite{Sakai2008, Akagi2013, Nishi2017, Yoshikawa2019}.\\
Until recently, we analyzed our experiments using the existing models of the classical rubber elasticity theories, but we observed inconsistencies in the interpretation of the experimental results, as described in Sec.~\ref{sec:history}.
A very recent discovery revealed that polymer gels have ``negative energy elasticity"~\cite{Yoshikawa2021}, namely, a significant negative internal energy contribution to the shear modulus originating from the solvent.
This is not considered in the classical rubber elasticity theories, which assume that the elastic modulus is mainly determined from the entropy contribution.
Because the internal energy contribution is significant and negative, the (hidden) entropy contribution is large compared to the total modulus.
Our discovery challenges the conventional notion that the elasticity of polymer gels can be understood within the classical rubber elasticity theories.\\
In this focus review, we describe how past experimental results on the linear elasticity of polymer gels can be successfully explained by the existence of negative energy elasticity.
The review is organized as follows.
First, we briefly review the experimental study on the linear elasticity of polymer gels.
Second, we present a current state-of-the-art unified formula for the linear elasticity of polymer gels with various network topologies and densities.
Third, using this formula, we re-examine the past experimental results.
Finally, we present a summary and an outlook of these investigations.\\[47pt]
\begin{figure}[t!]
\centering
\includegraphics[width=0.89\linewidth]{Fig1.pdf}
\caption{
\textbf{a}
Tetra gel synthesized by AB-type cross-end coupling of two kinds of precursors of equal size in a water solvent.
These precursors are tetra-arm poly(ethylene glycol) (PEG) chains whose terminal functional groups (A and B) are mutually reactive.\\
\textbf{b}
Control parameters of the network density of the tetra gel.
In the polymer network after completion of the chemical reaction, the molar mass of precursors $M$ corresponds to the molar mass between the crosslinks ($M/2$), and the polymer concentration $c$ represents the number density of crosslinks $n$ as $n=cN_{A}/M$.
Here, $N_{A}$ is the Avogadro constant.
}
\label{fig:1}
\end{figure}
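For orientation, the crosslink number density $n=cN_{A}/M$ and the resulting order of magnitude of the classical shear-modulus predictions can be estimated as follows (our illustrative numbers, not measured values; complete conversion is assumed, so that a tetra-arm precursor carries $\nu=2$ effective chains and $\xi/\nu=1/2$):

```python
N_A = 6.02214076e23   # Avogadro constant, 1/mol
k_B = 1.380649e-23    # Boltzmann constant, J/K

def n_crosslinks(c, M):
    # n = c * N_A / M; c in g/L (numerically equal to kg/m^3), M in kg/mol,
    # so n comes out in 1/m^3.
    return c * N_A / M

n = n_crosslinks(60.0, 20.0)        # ~1.8e24 crosslinks per m^3
G_affine = 2.0 * n * k_B * 298.0    # nu = 2 effective chains per precursor
G_phantom = 0.5 * G_affine          # xi / nu = 1/2 for a tetra-arm network
```

This gives a shear modulus of order 10 kPa, the typical scale for tetra gels at these network densities.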
\begin{figure}[t!]
\centering
\includegraphics[width=0.89\linewidth]{Fig2.pdf}
\caption{
\textbf{a}
Control parameters of the network topology of the tetra gel.
The connectivity $p$ increases monotonically with time and (ideally) reaches $p=2q$ after completion of the reaction, where $q$ is the molar mixing fraction of the precursors of the minor group.\\
\textbf{b}
Dynamic process (DP) of gelation, where two precursors are mixed in a stoichiometrically balanced ratio ($q=1/2$).\\
\textbf{c}
Static replicas (SR) of DP, where two precursors are mixed in a stoichiometrically imbalanced (and balanced) ratio ($0\leq q\leq 1/2$).
Here, the connectivity $p$ after completion of the reaction is tuned as $p=2q$.
}
\label{fig:2}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{Fig3.pdf}
\caption{
Representative experimental results before the discovery of negative energy elasticity.\\
\textbf{a} Normalized shear modulus ($G/G_\mathrm{affine}$) as a function of the overlap parameter ($c/c^*$).
The open gray symbols represent the data from the original paper~\cite{Akagi2013}, which are inaccurate due to the samples and measurement method.
The filled black symbols represent the data from Ref.~\cite{Yoshikawa2021} that are more accurate in terms of the samples and measurement method (see main text).
Rhombuses, circles, squares, and triangles represent $M=5,10,20$, and $40$ kg/mol, respectively.
The blue and red dashed lines show $G/G_\mathrm{affine}=0.5$ (the prediction of the phantom network model) and $1$ (the prediction of the affine network model), respectively.
The overlap parameter in the horizontal axis is converted from the polymer volume fraction $\phi/\phi^*$ (in the original paper~\cite{Akagi2013}) to concentration $c/c^*$.\\
\textbf{b} Normalized shear modulus ($G(p)/G(1)$) as a function of the connectivity of the polymer network ($p$).
The polymer concentrations are $c=40,60,80,100$, and $120$~g/L.
The molar mass of the precursors is $M=20$ kg/mol, and the corresponding overlap concentration is $40$ g/L.
We calculate the dashed lines from the affine and phantom network models with the Bethe approximation.
The data are taken from Ref.~\cite{Nishi2017}.\\
\textbf{c} Shear modulus ($G$) as a function of the connectivity of the polymer network ($p$) in the dynamic gelation process (DP) and the static replica (SR).
The polymer concentrations are $c=30$, $60$, and $120$~g/L, and the molar mass of the precursors is $M=20$ kg/mol.
The data are taken from Ref.~\cite{Yoshikawa2019}.
We note that the data in \textbf{b} and \textbf{c} were measured accurately in the same way as Ref.~\cite{Yoshikawa2021}.
}
\label{fig:3}
\end{figure*}
\section{Past Experimental Results of Linear Elasticity in Tetra Gels}
\label{sec:history}
In this section, we briefly review, in chronological order, our four experiments~\cite{Akagi2013,Nishi2017,Yoshikawa2019,Yoshikawa2021} that investigated the linear elasticity of polymer gels in the as-prepared state using tetra gels (Fig.~\ref{fig:1}a).
As shown in Fig.~\ref{fig:1}b, by tuning the molar mass $M$ and concentration $c$ of precursor solutions, we could independently and systematically control the network density, i.e., the molar mass between the crosslinks $M/2$ and the number density of crosslinks $n=cN_{A}/M$ in tetra gels.
Here, $N_{A}$ is the Avogadro constant, and $c$ is defined as the precursor weight divided by the solvent volume rather than by the solution volume (see Sec. S1 in Ref.~\cite{Yasuda2020}).
In addition, we could control the network topology by tuning the following two parameters:
(i) the connectivity $p$ ($0\leq p\leq1$), i.e., the fraction of reacted terminal functional groups among all the terminal functional groups, and (ii) the molar mixing fraction $q$ of the minor precursors to all precursors, defined by $[\mathrm{A}]:[\mathrm{B}]=q:1-q$ for $0\leq q\leq 1/2$.
Each of these experiments~\cite{Akagi2013,Nishi2017,Yoshikawa2019,Yoshikawa2021} involved different network topologies and is summarized in Fig.~\ref{fig:2}a.
Akagi et al.~\cite{Akagi2013} investigated networks with $p\simeq 1$ (after completion of the reaction) and $q=1/2$ (stoichiometrically balanced mixing), as shown by the orange circle in Fig.~\ref{fig:2}a.
Nishi et al.~\cite{Nishi2017} investigated networks with $q=1/2$, as shown by the blue arrow in Fig.~\ref{fig:2}a and Fig.~\ref{fig:2}b.
Yoshikawa et al.~\cite{Yoshikawa2019} compared networks with $q=1/2$ (blue arrow in Fig.~\ref{fig:2}a and Fig.~\ref{fig:2}b) and $p=2q$ (red filled circle in Fig.~\ref{fig:2}a and Fig.~\ref{fig:2}c).
Yoshikawa et al.~\cite{Yoshikawa2021} investigated networks with $q=1/2$ and $p=2q$ (red filled circle in Fig.~\ref{fig:2}a and Fig.~\ref{fig:2}c).
We describe the details of these studies in the following.
\newpage
The first two studies (Akagi et al.~\cite{Akagi2013} and Nishi et al.~\cite{Nishi2017}) investigated the applicability of the classical rubber theories to polymer-gel elasticity.
The representative models are the affine~\cite{Flory1953} and phantom~\cite{James1953} network models, which predict the shear modulus $G$ as
\begin{equation}
G_\mathrm{affine}=\nu nk_{B}T
\label{eq:affine}
\end{equation}
and
\begin{equation}
G_\mathrm{phantom}=\xi nk_{B}T,
\label{eq:phantom}
\end{equation}
respectively.
Here, $n$, $k_{B}$, and $T$ are the number density of crosslinks, Boltzmann constant, and absolute temperature, respectively.
In Eqs.~(\ref{eq:affine}) and (\ref{eq:phantom}), $\xi\equiv \nu-\mu$ is the difference between the number per precursor of the elastically effective chains ($\nu$) and the crosslinks ($\mu$).
We cannot experimentally observe $\nu$ and $\xi$.
However, $p$ and $q$ can be observed and used to calculate the functions $\nu=\nu(p,q)$, $\mu=\mu(p,q)$, and $\xi=\xi(p,q)$ using the Bethe (i.e., tree) approximation~\cite{Macosko1976, Miller1976,Yoshikawa2019}.
The difference between these models (Eqs.~(\ref{eq:affine}) and (\ref{eq:phantom})) is the way they address the fluctuation of crosslinks.
The affine network model assumes that the crosslinks are fixed to the gel and that the deformation of a chain follows macroscopic deformation.
On the other hand, the phantom network model assumes that the crosslinks fluctuate and that the deformation of a chain is attenuated.
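The Bethe-approximation functions can be sketched concretely; the following is a minimal illustration (ours) of the Macosko-Miller recursion for the stoichiometric case $q=1/2$, where an arm fails to reach the infinite network with probability $x$ solving $x=1-p+px^3$ and the gel point is $p_c=1/3$:

```python
from math import comb

def dangling_prob(p, tol=1e-14, max_iter=10000):
    # Probability x that a given arm does NOT lead to the infinite network,
    # solving x = 1 - p + p*x**3 (tetra-functional network, q = 1/2).
    x = 0.5
    for _ in range(max_iter):
        x_new = 1.0 - p + p * x ** 3
        if abs(x_new - x) < tol:
            break
        x = x_new
    return min(1.0, x_new)

def nu_mu_xi(p):
    # Per-precursor counts: effective chains (nu), effective crosslinks (mu),
    # and their difference xi = nu - mu; a junction is elastically effective
    # if at least 3 of its 4 arms lead to the infinite network.
    x = dangling_prob(p)
    mu = sum(comb(4, m) * (1 - x) ** m * x ** (4 - m) for m in (3, 4))
    nu = 0.5 * sum(m * comb(4, m) * (1 - x) ** m * x ** (4 - m) for m in (3, 4))
    return nu, mu, nu - mu
```

At complete conversion this reproduces $\nu=2$ and $\mu=1$, hence $G_\mathrm{phantom}/G_\mathrm{affine}=\xi/\nu=1/2$, while below the gel point no elastically effective material remains.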
\newpage
Akagi et al.~\cite{Akagi2013} measured the $c$ and $M$ dependences of the shear modulus $G$ through stretching measurements of the network with $p\simeq 1$ and $q=1/2$.
(Strictly speaking, the connectivity $p$ of all the completely reacted gel samples was almost constant, $p\simeq 0.9$.)
Figure~\ref{fig:3}a demonstrates that all the data of the $c/c^{*}$ dependence of $G/G_\mathrm{affine}$ with different $M$ collapse onto a single master curve.
Here, $c^{*}$ is the overlap concentration of precursors obtained by viscosity measurement.
However, in the original paper~\cite{Akagi2013}, the measurement results (open gray symbols) were
inaccurate for the following two reasons:
(i) a lower elastic modulus than expected was observed because the tetra gels were prepared using precursors with the terminal functional groups (amine and $N$-hydroxysuccinimide) that undergo hydrolysis over time;
(ii) the elastic modulus was measured by stretching measurement, which causes a large error.
To enable an accurate discussion, Fig.~\ref{fig:3}a also shows the accurately remeasured data from Ref.~\cite{Yoshikawa2021} (filled black symbols), overcoming the above two problems;
we (i) used tetra gels prepared using the precursors with terminal functional groups (maleimide and thiol) that do not cause hydrolysis and (ii) measured their elastic modulus by dynamic rheological measurement.
Here, the normalization factors $c^*=c^*(M)$ are different;
the original paper~\cite{Akagi2013} used $c^*=120, 75, 40, 15$ g/L for $M=5, 10, 20, 40$ kg/mol, respectively, whereas Ref.~\cite{Yoshikawa2021} used $c^*=60, 40, 30$ g/L for $M=10, 20, 40$ kg/mol, respectively.
We note that the following data (Figs.~\ref{fig:3}b and c below) were also accurately measured in the same way as Ref.~\cite{Yoshikawa2021}.\\
From our present understanding, Akagi et al. misinterpreted the results of Fig.~\ref{fig:3}a as indicating that a crossover from the phantom network model to the affine network model occurs in polymer gels.
In Fig.~\ref{fig:3}a, the horizontal line showing $G/G_\mathrm{affine} = 0.5$ can be regarded as the prediction of the phantom network model because
\begin{equation}
\frac{G_\mathrm{phantom}}{G_\mathrm{affine}}
=\frac{\xi(p,1/2)}{\nu(p,1/2)}
\simeq 0.5,
\end{equation}
for a tetra-arm network at $p\simeq 1$.
For $c\simeq c^{*}$,
$G$ agrees well with $G_\mathrm{phantom}$, and for $c<c^{*}$, the values of $G$ are smaller than those of $G_\mathrm{phantom}$.
This is probably due to an increase in ineffective connections for $c<c^{*}$~\cite{Yoshikawa2019}.
On the other hand, for $c>c^{*}$, $G/G_\mathrm{affine}$ increases to approach $1$ as $c$ increases.
Previously, it was considered for conventional polymer gels that an increase in $G/G_\mathrm{affine}$ with an increase in $c$ is due to the presence of trapped entanglements.
However, the stress-elongation curve obeying the neo-Hookean model~\cite{Sakai2014} and the fracture energy obeying the Lake-Thomas model~\cite{Akagi2013f} strongly suggest that this is not the case.
Therefore, Akagi et al. interpreted the result in Fig.~\ref{fig:3}a as indicating a crossover from the phantom network model to the affine network model with increasing $c$.
However, this crossover is negated by the following results on the $p$ dependence.\\
\begin{figure*}[t!]
\centering
\includegraphics[width=0.64\linewidth]{Fig4.pdf}
\caption{
\textbf{a-b} Decomposition of entropy and energy contributions to shear modulus in (a) vulcanized natural rubber and (b) tetra gel.
We obtain the gray solid line from a least-squares fit to the temperature dependence of the shear modulus $G$ (black symbols).
According to Eq.~(\ref{eq:G-vantHoff}), we have the entropy contribution $G_{S}$ (blue dashed line) and the energy contribution $G_{E}$ (red dashed line), which corresponds to the intercept of the gray solid line.
The data are taken from Refs.~\cite{Anthony1942} and \cite{Yoshikawa2021} for \textbf{a} and \textbf{b}, respectively.
Notably, the shear modulus of vulcanized natural rubber is proportional to the absolute temperature ($G\simeq aT$), while that of the tetra gel is a linear function with a negative intercept [$G=a(T-T_0)$].
Here, the sample of tetra gel is synthesized by equal-weight mixing of the two kinds of precursors whose molar mass $M$ and concentration $c$ are $20$ kg$/$mol and $60$ g$/$L, respectively.
}
\label{fig:4}
\end{figure*}
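The decomposition in Fig.~\ref{fig:4} amounts to an ordinary least-squares fit of $G=a(T-T_0)$: the slope gives the entropy contribution $G_S=aT$ and the (negative) intercept the energy contribution $G_E=-aT_0$. A minimal sketch with synthetic data (the numbers below are illustrative, not the measured values):

```python
def fit_line(T, G):
    # Ordinary least squares for G = a*T + b; then G_S = a*T, G_E = b,
    # and the vanishing temperature is T0 = -b/a.
    n = len(T)
    mT = sum(T) / n
    mG = sum(G) / n
    a = sum((t - mT) * (g - mG) for t, g in zip(T, G)) \
        / sum((t - mT) ** 2 for t in T)
    b = mG - a * mT
    return a, b

# Synthetic data mimicking a gel with a = 30 Pa/K and T0 = 210 K (illustrative).
T_data = [278.0, 283.0, 288.0, 293.0, 298.0]
G_data = [30.0 * (t - 210.0) for t in T_data]
a, b = fit_line(T_data, G_data)
T0 = -b / a   # recovered vanishing temperature
```

The fit recovers the slope and the negative intercept exactly on noiseless data; for real measurements the same procedure yields Fig.~\ref{fig:4}b.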
\begin{figure*}[t!]
\centering
\includegraphics[width=0.95\linewidth]{Fig5.pdf}
\caption{
Experimental evidence for the existence of negative energy elasticity in a polymer gel.
All panels show the temperature ($T$) dependence of the shear modulus $G$.
We obtain each gray line from a least-square fit of each sample, which is characterized by the three parameters of the precursors:
the molar mass $M$, the concentration $c$ and the connectivity $p$.
All gray lines that have the same $M$ and $c$ pass through a vanishing temperature $T_0$ on the $T$ axis, which leads to Eq.~(\ref{eq:Gbya}).
The value of $T_0$ in each graph is the average of the four samples with different values of $p$, and the values in parentheses represent the standard deviation.
(Reprinted from Ref.~\cite{Yoshikawa2021}; CC BY 4.0.)
}
\label{fig:5}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.95\linewidth]{Fig6.pdf}
\caption{
\textbf{a} The concentration ($c$) dependence of the vanishing temperature $T_0$ that governs the energy contribution of polymer-gel elasticity.
The blue diamonds, red circles, and black squares represent $M=10$, $20$, and $40$ kg/mol, respectively.
Each symbol represents the average of four samples taken from Fig.~\ref{fig:5} (i.e., the data are taken from Ref.~\cite{Yoshikawa2021}).
\textbf{b, c} The master curve of $T_0$ obtained by normalizing the concentration.
Here, we set $c^*= 60$, $40$, and $30$ g/L for $M=10$, $20$, and $40$ kg/mol, respectively.
The green dashed curve shows the scaling law $T_0\sim (c/c^*)^{-1/3}$ in the dilute regime ($c/c^*<1$).
As $(c/c^*)^{-1}\to 0$ (the dense limit), $T_0$ decreases, approaching nearly zero.
}
\label{fig:6}
\end{figure*}
Nishi et al.~\cite{Nishi2017} investigated $p$ dependence of $G$ in the range $c^{*}< c$ for a dynamic process (DP) in which a network is formed from two precursor solutions in a stoichiometric ratio ($q=1/2$), as shown by the blue arrow in Fig.~\ref{fig:2}b.
Just after mixing two precursor solutions, we measured the time ($t$) courses of (i) $G$ by rheological measurements and (ii) $p$ by ultraviolet-visible light spectroscopy.
Combining $G=G(t)$ and $p=p(t)$, we obtained $G(p)$.
Figure~\ref{fig:3}b shows $G(p)/G(1)$ as a function of $p$, where $G(1)$ is the extrapolation of $G(p)$ at $p = 1$ based on the percolated network model~\cite{Nishi2015}.
Figure~\ref{fig:3}b demonstrates that all the data of the $p$ dependence of $G(p)/G(1)$ with different $c$ (in the range of $c^{*}< c$) collapse onto a single master curve, corresponding to the prediction of the phantom network model under the Bethe approximation, $G_\mathrm{phantom}(p)/G_\mathrm{phantom}(1)$.\\
Yoshikawa et al.~\cite{Yoshikawa2019} compared two methodologies to measure the connectivity ($p$) dependence of shear modulus $G$ in AB-type polymerization.
The first is to measure $G$ during the dynamic process (DP) of gelation in a stoichiometric ratio ($q=1/2$), as shown in Fig.~\ref{fig:2}b.
The second is to measure $G$ of samples whose $p$ after completion of the reaction is tuned by mixing two precursors in stoichiometrically imbalanced ($q<1/2$) and balanced ($q=1/2$) ratios.
Here, assuming the complete reaction of a minor group, we have $p=2q$.
This methodology can be regarded as a static replica (SR) of the DP, as shown in Fig.~\ref{fig:2}c.
In the former (DP), we obtain continuous $p$ dependence by monitoring the time evolution of the same sample, whereas in the latter (SR), we obtain discrete $p$ dependence by using different samples.
The advantage of the SR over the DP is that the SR can accurately measure various physical properties of samples with different $p$ over time because the system is static.
Figure~\ref{fig:3}c demonstrates that $G=G(p)$ in the two methodologies (DP and SR) agrees well in high $p$ (i.e., $p > 0.75$).
[In low $p$, close to the gelation point, the differences in the structural parameters become more pronounced, reflecting the differences in the topology of the DP (e.g., $\xi(p,1/2)$) and SR (e.g., $\xi(p,p/2)$).
See Fig.~4 in Ref.~\cite{Yoshikawa2019}.]
Note that these behaviors of the $p$-dependence of $G$ in the DP and SR are well reproduced by the phantom network model under the Bethe approximation.\\
The above series of studies~\cite{Akagi2013,Nishi2017,Yoshikawa2019} are based on the longstanding basic assumption that the polymer-gel elasticity is mainly determined by the entropy contribution.
Under this assumption, polymer-gel elasticity has been evaluated with the classical rubber elasticity theories~\cite{Flory1953,James1953,Flory1977} that predict that the shear modulus is proportional to the absolute temperature ($G\simeq aT$), such as Eqs.~(\ref{eq:affine}) and (\ref{eq:phantom}).
As shown in Fig.~\ref{fig:4}a, many experimental studies on natural and synthetic rubbers~\cite{Meyer1935,Anthony1942,Mark1965,Mark1976} have confirmed that $G\simeq aT$, which means that $G$ is mainly determined by the entropy contribution.
However, in the case of polymer gels, the results analyzed under this assumption ($G\simeq aT$) were found to be mutually inconsistent, even for measurements of $G$ at a single temperature (room temperature).
For example, the results shown in Figs.~\ref{fig:3}b and c seem to be inconsistent with the result of Fig.~\ref{fig:3}a;
while Fig.~\ref{fig:3}a shows the crossover between the phantom and affine network models depending on $c$, Figs.~\ref{fig:3}b and c are consistent with the phantom network model not depending on $c$.
We cannot reconcile this inconsistency as long as we assume $G\simeq aT$.\\
Yoshikawa et al.~\cite{Yoshikawa2021} examined whether the premise of $G\simeq aT$
[e.g., Eqs.~(\ref{eq:affine}) and (\ref{eq:phantom})] holds for polymer gels by measuring the temperature ($T$) dependence of the shear modulus $G$.
Taking advantage of the SR, we prepared various gel samples with different network densities (various $M$ and $c$) and network topologies (various $p$).
As shown in Fig.~\ref{fig:4}b, we found that $G=aT+b$ with a significant negative value of $b$, contrary to the premise of the classical rubber elasticity theories.
As we explain in the next section, the first and second terms ($aT$ and $b$) correspond to entropy and internal energy contributions to $G$, respectively.
Thus, the negative value of $b$ is interpreted as ``negative energy elasticity''.
In Ref.~\cite{Yoshikawa2021}, we confirm the above conclusions with more than 50 different network topologies and densities (Fig.~\ref{fig:5}).
In the next section, we explain the negative energy elasticity based on thermodynamics and provide a self-contained description of the unified formula, which can explain all the experimental results of the linear elasticity of PEG hydrogels with various network topologies and densities~\cite{Akagi2013,Nishi2017,Yoshikawa2019,Yoshikawa2021}.
\section{Unified Formula for Linear Elasticity of Polymer Gels}
This section provides a self-contained summary of the state-of-the-art understanding of the elasticity of isotropic, incompressible gels in the as-prepared state, as obtained using tetra gels with a homogeneous network.
In general, for such homogeneous isotropic linear elastic materials, the elastic properties are uniquely determined by the shear modulus $G$, i.e., any of the other elastic moduli can be calculated from $G$.
For example, Young's modulus $E$ can be calculated as $E=3G$, and the bulk modulus $K$ is considered infinite ($K \gg G$).
Therefore, the elasticity of polymer gels in the as-prepared state is entirely determined by $G$.\\
We consider the thermodynamics of the deformation of polymer gels in the as-prepared state.
The total differential of the Helmholtz free energy $F$ of an elastic body under an applied shear strain $\gamma$ is given by
$dF = -SdT -PdV + V\sigma d\gamma$
at temperature $T$ and external pressure $P$~\cite{LandauLifshitz,Flory1953}.
Here, $S$, $V$, and $\sigma$ are the entropy, volume, and shear stress, respectively.
Polymer gels are considered to be incompressible, i.e., the relative volume change $\Delta V/V$ is negligible because the bulk modulus (on the order of GPa) is significantly larger than the shear modulus (on the order of kPa).
(We present detailed analysis of $\Delta V/V$ in Appendices~B and C in Ref.~\cite{Yoshikawa2021}.)
Thus, we have
\begin{equation}
df = -sdT + \sigma d\gamma,
\label{eq:dF}
\end{equation}
where $f\equiv F/V$ and $s\equiv S/V$ are the Helmholtz free energy and entropy densities, respectively.\\
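The incompressibility argument above admits a one-line order-of-magnitude check. The moduli below are illustrative values at the GPa/kPa scales quoted in the text, not measured data:

```python
# Relative volume change of a sheared gel: |dV/V| ~ sigma / K.
G = 10e3          # shear modulus, Pa (~10 kPa, typical for a polymer gel)
K = 1e9           # bulk modulus, Pa (~1 GPa, set by the solvent)
sigma = G * 0.1   # shear stress at 10% strain, Pa

dV_over_V = sigma / K
print(f"|dV/V| ~ {dV_over_V:.0e}")  # ~1e-6: volume changes are negligible
assert dV_over_V < 1e-5
```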
In polymer physics, $f$ of a polymer gel is often written in the form of two separate contributions as~\cite{Flory1953}
\begin{equation}
f\left(T,\gamma\right)
=f_{\mathrm{mix}}\left(T\right)
+f_{\mathrm{el}}\left(T,\gamma\right),
\label{eq:Ftot}
\end{equation}
where
$f_{\mathrm{mix}}\left(T\right)\equiv f\left(T,0\right)$ and
$f_{\mathrm{el}}\left(T,\gamma\right) \equiv f\left(T,\gamma\right)-f_{\mathrm{mix}}\left(T\right)$
are the mixing and elastic free energy densities, respectively.
Here, $f_{\mathrm{mix}}$ is independent of the applied shear strain $\gamma$ because the volume $V$ does not change with deformation.
We emphasize that Eq.~(\ref{eq:Ftot}) does not provide any new information in the as-prepared state; it merely defines $f_{\mathrm{mix}}$ and $f_{\mathrm{el}}$.
Equation~(\ref{eq:dF}) gives the shear stress as $\sigma (T,\gamma) = \partial f (T,\gamma)/\partial \gamma$ in an isothermal process.
Thus, the shear modulus
($G(T)\equiv \lim_{\gamma\to 0} \partial \sigma (T,\gamma)/\partial \gamma$)
is related to the free energy as
\begin{equation}
G\left(T\right)
\equiv
\lim_{\gamma\to 0} \frac{\partial^{2} f}{\partial \gamma^{2}} (T,\gamma)
=
\lim_{\gamma\to 0}
\frac{\partial^{2} f_{\mathrm{el}}}{\partial \gamma^{2}} (T,\gamma).
\label{eq:G-Fel}
\end{equation}
Equation~(\ref{eq:G-Fel}) indicates that $f_{\mathrm{mix}}$ does not contribute to the shear modulus $G$.\\
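That $f_{\mathrm{mix}}$ drops out of Eq.~(\ref{eq:G-Fel}) follows directly from its strain independence; a minimal symbolic check (a sketch, not part of the original analysis):

```python
import sympy as sp

T, gamma = sp.symbols("T gamma")
f_mix = sp.Function("f_mix")(T)        # depends on T only (no strain)
f_el = sp.Function("f_el")(T, gamma)   # elastic part
f = f_mix + f_el                       # Eq. (Ftot)

# The second strain derivative of f equals that of f_el alone,
# so f_mix does not contribute to the shear modulus.
assert sp.simplify(sp.diff(f, gamma, 2) - sp.diff(f_el, gamma, 2)) == 0
```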
Recently, we obtained a unified expression of the shear modulus of tetra gels as a function of the microscopic structure of the polymer network as~\cite{Yoshikawa2021}
\begin{equation}
G(T;c,M,p,q)=a(c,M,p,q) \left[ T-T_0\left(\frac{c}{c^*(M)}\right)\right],
\label{eq:Gbya}
\end{equation}
where $c$, $M$, $p$, and $q$ are the polymer concentration, molar mass of precursors, connectivity, and molar mixing fraction, respectively.
(The definitions of $p$ and $q$ are given in the previous section.)
Figures~\ref{fig:5}, \ref{fig:6}a, and \ref{fig:6}b experimentally validate Eq.~(\ref{eq:Gbya}) as follows.
First, Fig.~\ref{fig:5} shows that $G$ is a nearly linear function of $T$ [i.e., $G=aT+b=a(T-T_0)$, where $T_{0}=-b/a$] over the measured range ($278\, \mathrm{K}\leq T \leq 298\, \mathrm{K}$).
Second, Figs.~\ref{fig:5} and \ref{fig:6}a show that $T_0$ does not depend on the network topology ($p$ and $q$) but depends on the network density ($c$ and $M$).
Finally, Fig.~\ref{fig:6}b demonstrates that the dependence of $T_{0}$ on $c$ and $M$ is governed by $T_{0}=T_{0}(c/c^*(M))$.
Here, $c^*(M)$ is the normalization factor chosen to construct the master curve.
We note that $c^*(M)$ is in close agreement with the overlap concentration of the precursors $c_{\mathrm{vis}}^*(M)$ obtained by the viscosity measurement~\cite{Akagi2013}.\\
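As a sketch of how $a$ and $T_0$ are extracted in practice, one can fit $G=aT+b$ and set $T_0=-b/a$. The values below are synthetic, chosen only to mimic kPa-scale moduli over the measured temperature window, not the paper's data:

```python
import numpy as np

# Synthetic data generated from G = a*(T - T0) with hypothetical a and T0.
T = np.array([278.0, 283.0, 288.0, 293.0, 298.0])  # K, measured window
a_true, T0_true = 0.10, 150.0                      # kPa/K and K (hypothetical)
G = a_true * (T - T0_true)                         # "measured" moduli, kPa

a_fit, b_fit = np.polyfit(T, G, 1)                 # linear fit G = a*T + b
T0_fit = -b_fit / a_fit                            # vanishing temperature
print(f"a = {a_fit:.2f} kPa/K, T0 = {T0_fit:.0f} K")
```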
The first and second terms in Eq.~(\ref{eq:Gbya}) correspond to the entropy and energy elasticity, respectively.
The Helmholtz free energy density satisfies $f=e-Ts$, where $e$ is the internal energy density and $s$ is the entropy density.
Thus, on the basis of Eq.~(\ref{eq:G-Fel}), we define the energy contribution $G_E$ and the entropy contribution $G_S$ to the shear modulus $G=G_E+G_S$ as
$G_{E} \equiv\lim_{\gamma\to 0}\left(\partial^2 e/\partial \gamma^2\right)_{T,V}$
and
$G_{S} \equiv -\lim_{\gamma\to 0}T\left(\partial^2 s/\partial \gamma^2\right)_{T,V}$, respectively.
Here, $G_{S}$ and $G_{E}$ are defined under a constant-volume condition.
According to the Maxwell relation $\left(\partial s/\partial \gamma\right)_{T,V} = -\left(\partial \sigma/\partial T\right)_{\gamma,V}$, we have
\begin{equation}
G_{S} (T)=T\frac{dG}{dT} (T),
\label{eq:G-vantHoff}
\end{equation}
which enables us to determine the entropy and energy contributions from the temperature ($T$) dependence of the shear modulus $G$ under a constant-volume condition~\cite{Fermi1937,Flory1953, Rubinstein2003}.
Substituting Eq.~(\ref{eq:Gbya}) into Eq.~(\ref{eq:G-vantHoff}), we have $G_{S}=a T$ and $G_{E}=G-G_{S}=-aT_{0}$.\\
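The decomposition $G_{S}=aT$ and $G_{E}=-aT_{0}$ can be verified symbolically from Eqs.~(\ref{eq:Gbya}) and (\ref{eq:G-vantHoff}); a minimal check:

```python
import sympy as sp

T, a, T0 = sp.symbols("T a T_0", positive=True)
G = a * (T - T0)                    # unified formula, Eq. (Gbya)

G_S = T * sp.diff(G, T)             # entropy contribution, Eq. (G-vantHoff)
G_E = G - G_S                       # energy contribution

assert sp.simplify(G_S - a * T) == 0    # G_S = a*T
assert sp.simplify(G_E + a * T0) == 0   # G_E = -a*T0 (negative for T0 > 0)
```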
All measured samples shown in Fig.~\ref{fig:5} have a significant negative $G_{E}$, which indicates that the undeformed state is unstable in terms of the internal energy.
Because the (total) shear moduli of stable materials are generally bound to be positive ($G>0$), $G_{S}=aT$ must be larger than $\left|G_{E}\right|=aT_{0}$.\\
We discuss Eq.~(\ref{eq:Gbya}) in the dilute, semidilute, and dense regimes, respectively.
In the dilute regime ($c/c^*(M)<1$), Fig.~\ref{fig:6}b demonstrates the scaling law $T_0\sim \left(c/c^*(M)\right)^{-1/3}$, which would be key to further understanding the microscopic origin of negative energy elasticity in the future.
We present further discussion in Ref.~\cite{Yoshikawa2021}.\\
In the semidilute regime ($1\lesssim c/c^*(M)\lesssim 4$) and sufficiently high $p$ (i.e., $p > 0.75$), our experiment~\cite{Yoshikawa2019,Yoshikawa2021} shows that the entropy contribution $G_S$ of the tetra gel is phenomenologically well reproduced by a constant multiple of the phantom network model [Eq.~(\ref{eq:phantom})] as $G_S \simeq 2.4G_\mathrm{phantom}\equiv 2.4 \xi nk_{B}T$,
where $n=n(c,M)=cN_{A}/M$ is the number density of the tetra-arm precursors.
This implies that the prefactor $a(c,M,p,q)$ is approximately separable into the product of the network density contribution (with control parameters $c$ and $M$, Fig.~\ref{fig:1}b) and the network topology contribution (with control parameters $p$ and $q$, Fig.~\ref{fig:2}a) as
\begin{equation}
a(c,M,p,q) \simeq 2.4 k_{B} n(c,M) \xi(p,q).
\label{eq:phantom-like}
\end{equation}
We note that Eq.~(\ref{eq:phantom-like}) is valid in the semidilute regime ($1\lesssim c/c^*(M)\lesssim 4$) and sufficiently high $p$.
In fact, $a(c,M,p,q)$ is a complex function in the dilute regime ($c/c^*(M)<1$) or low $p$ [see Fig.~5(d) in Ref.~\cite{Yoshikawa2021}].\\
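For orientation, Eq.~(\ref{eq:phantom-like}) can be evaluated numerically. The inputs below ($c$, $M$, $p$, and $T_0$) are hypothetical values inside the semidilute range, and $\xi$ is approximated by the leading-order form $2p-1$ purely for illustration:

```python
# Order-of-magnitude evaluation of the prefactor a ~ 2.4 k_B n xi.
N_A = 6.022e23          # Avogadro constant, 1/mol
k_B = 1.381e-23         # Boltzmann constant, J/K
c = 60.0e3              # polymer concentration, g/m^3 (i.e. 60 g/L)
M = 20.0e3              # precursor molar mass, g/mol (i.e. 20 kg/mol)
p = 0.9                 # connectivity
xi = 2 * p - 1          # leading-order estimate of xi(p, 1/2)

n = c * N_A / M                     # number density of precursors, 1/m^3
a = 2.4 * k_B * n * xi              # prefactor, Pa/K
G = a * (300.0 - 150.0)             # shear modulus at 300 K, assuming T0 = 150 K
print(f"a ~ {a:.1f} Pa/K, G ~ {G / 1e3:.1f} kPa")  # kPa scale, as in experiments
```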
In the dense regime, Fig.~\ref{fig:6}c indicates that $T_0$ decreases, approaching nearly zero as $(c/c^{*})^{-1} \to 0$, which means that the solvent is removed.
This result is consistent with experimental results on vulcanized natural rubber and synthetic rubbers without solvent; the absolute value of $G_{E}$ is much smaller than the value of $G_{S}$~\cite{Meyer1935,Mark1965,Mark1976}.
In other words, this result suggests that the negative energy elasticity in the polymer gels originates from the solvent.
\section{Re-examination of Past Experimental Results by the Unified Formula}
Based on the unified formula in Eq.~(\ref{eq:Gbya}) together with the phenomenological expression of the prefactor in Eq.~(\ref{eq:phantom-like}), we re-examine past experimental results on polymer-gel elasticity~\cite{Akagi2013,Nishi2017,Yoshikawa2019}.
First, we consider Akagi et al.~\cite{Akagi2013}, which investigated the network with $p\simeq 1$ and $q=1/2$, as shown by the orange circle in Fig.~\ref{fig:2}a.
Substituting Eq.~(\ref{eq:phantom-like}) into Eq.~(\ref{eq:Gbya}) and using Eq.~(\ref{eq:affine}), we have
\begin{equation}
\begin{split}
\frac{G\left(T;c,M,1,\frac{1}{2}\right)}{G_\mathrm{affine}\left(T;c,M,1,\frac{1}{2}\right)}
& \simeq \frac{2.4\xi\left(1,\frac{1}{2}\right)
\left[ T-T_0\left(\frac{c}{c^*(M)}\right)\right]}{\nu\left(1,\frac{1}{2}\right)T}\\
& \simeq 1.2 \left[1-\frac{T_0\left(\frac{c}{c^*(M)}\right)}{T}\right].
\label{eq:PRX-Akagi}
\end{split}
\end{equation}
Here, we use $\xi\left(1,1/2\right)=1$ and $\nu\left(1,1/2\right)=2$.
Equation~(\ref{eq:PRX-Akagi}) shows that $G/G_\mathrm{affine}$ depends only on $c/c^*$ under isothermal conditions (i.e., $T$ is constant), which elucidates Fig.~\ref{fig:3}a.
We emphasize that the ``crossover'' in Fig.~\ref{fig:3}a originates not from a phantom-affine crossover but from the concentration dependence of the negative energy elasticity (i.e., the dependence of $T_0$ on $c/c^*$).
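The concentration dependence encoded in Eq.~(\ref{eq:PRX-Akagi}) can be illustrated numerically. Here the dilute-regime scaling $T_0\sim(c/c^*)^{-1/3}$ is extrapolated over the whole range, and the normalization $T_0(1)=180$ K is hypothetical:

```python
import numpy as np

T = 298.0                                  # K (isothermal condition)
x = np.array([0.5, 1.0, 2.0, 4.0])         # c/c*
T0 = 180.0 * x ** (-1.0 / 3.0)             # K, hypothetical T0(1) = 180 K
ratio = 1.2 * (1.0 - T0 / T)               # Eq. (PRX-Akagi)

print(np.round(ratio, 3))                  # grows monotonically with c/c*
```

The monotonic growth of $G/G_\mathrm{affine}$ with $c/c^*$ at fixed $T$ mimics the apparent crossover of Fig.~\ref{fig:3}a without any change of network model.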
\newpage
Second, we consider the findings of Nishi et al.~\cite{Nishi2017} on the network with $q=1/2$, as shown by the blue arrow in Fig.~\ref{fig:2}a.
As shown in Fig.~\ref{fig:3}b, Nishi et al.~\cite{Nishi2017} considered $G(p)/G(1)$, i.e., the shear modulus $G(p)$ normalized by the modulus of the network with $p= 1$ and $q=1/2$.
To generalize this, we consider $G(p,q)/G(1,1/2)$, i.e., the shear modulus $G(p,q)$ with general $q$ normalized by the modulus of the network with $p=1$ and $q=1/2$.
Substituting Eq.~(\ref{eq:phantom-like}) into Eq.~(\ref{eq:Gbya}), we have
\begin{equation}
\frac{G(T;c,M,p,q)}{G(T;c,M,1,1/2)}
=\frac{a(c,M,p,q)}{a(c,M,1,1/2)}
\simeq \xi(p,q).
\label{eq:PRX-Nishi}
\end{equation}
Equation~(\ref{eq:PRX-Nishi}) (with $q=1/2$) fully explains the result of Fig.~\ref{fig:3}b.
The ratio $G(T;c,M,p,q)/G(T;c,M,1,1/2)$ does not depend on $T$, $c$, and $M$, and is explained by $\xi(p,q)$, corresponding to the prediction of the phantom network model under the Bethe approximation.
In the previous section, we mentioned that the results of Akagi et al.~\cite{Akagi2013} and Nishi et al.~\cite{Nishi2017} seem to be inconsistent with each other.
However, the unified formula [Eq.~(\ref{eq:Gbya}) with Eq.~(\ref{eq:phantom-like})]
can explain both in a consistent manner.\\
Third, we consider Yoshikawa et al.~\cite{Yoshikawa2019}, which compared the networks with $q=1/2$ (blue arrow in Fig.~\ref{fig:2}a) and $p=2q$ (red filled circle in Fig.~\ref{fig:2}a).
The former is the DP, as shown in Fig.~\ref{fig:2}b, and the latter is the SR, as shown in Fig.~\ref{fig:2}c.
[In Ref.~\cite{Yoshikawa2019}, we referred to the SR as imbalanced mixing (IM).]
From the unified formula [Eq.~(\ref{eq:Gbya}) with Eq.~(\ref{eq:phantom-like})], we derive that the connectivity ($p$) dependence of the shear modulus of the DP ($G_\mathrm{DP}(p)$) and that of the SR ($G_\mathrm{SR}(p)$) agree well in high $p$ (i.e., $p > 0.75$), as shown in Fig.~\ref{fig:3}c.
Because $G_\mathrm{DP}(p)\equiv G(T;c,M,p,1/2)$ and $G_\mathrm{SR}(p)\equiv G(T;c,M,p,p/2)$, we obtain
\begin{equation}
\frac{G_\mathrm{SR}(p)}{G_\mathrm{DP}(p)}
\simeq \frac{\xi_\mathrm{SR}(p)}{\xi_\mathrm{DP}(p)},
\label{eq:PRX-Softmatter}
\end{equation}
where
\begin{gather}
\xi_\mathrm{DP}(p)\equiv \xi(p,1/2)=2p-1+O\left(\left(1-p \right)^{2}\right)
\label{eq:PRX-Softmatter-DP}\\
\xi_\mathrm{SR}(p)\equiv \xi(p,p/2)=2p-1+O\left(\left(1-p \right)^{2}\right).
\label{eq:PRX-Softmatter-SR}
\end{gather}
Here, we use the Bethe approximation~\cite{Yoshikawa2019}.
From Eqs.~(\ref{eq:PRX-Softmatter}), (\ref{eq:PRX-Softmatter-DP}), and (\ref{eq:PRX-Softmatter-SR}), we have $G_\mathrm{DP}(p) \simeq G_\mathrm{SR}(p)$ for $p\simeq 1$.
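A quick numerical check that $G_\mathrm{SR}(p)/G_\mathrm{DP}(p)\to 1$ as $p\to 1$, using the common leading term $2p-1$ with hypothetical (and deliberately different) $O\left((1-p)^2\right)$ coefficients for the two topologies:

```python
import numpy as np

p = np.array([0.80, 0.90, 0.95, 0.99])
xi_DP = 2 * p - 1 + 0.5 * (1 - p) ** 2    # illustrative correction term
xi_SR = 2 * p - 1 + 0.8 * (1 - p) ** 2    # illustrative correction term

ratio = xi_SR / xi_DP                      # ~ G_SR/G_DP by Eq. (PRX-Softmatter)
print(np.round(ratio, 4))                  # approaches 1 as p -> 1
```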
The essence of this result relies on the fact that the prefactor $a(c,M,p,q)$ is separable as in Eq.~(\ref{eq:phantom-like}), which leads to
\begin{equation}
\frac{G(T;c,M,p,q)}{\xi(p,q)}
\simeq 2.4 k_{B} n(c,M) \left[ T-T_0\left(\frac{c}{c^*(M)}\right)\right].
\end{equation}
Thus, $G/\xi$ is independent of network topology ($p$ and $q$).
In fact, Yoshikawa et al.~\cite{Yoshikawa2019} experimentally demonstrated that
$G_\mathrm{DP}(p)/\xi_\mathrm{DP}(p)=G_\mathrm{SR}(p)/\xi_\mathrm{SR}(p)$ holds and that $G_\mathrm{DP}(p)/\xi_\mathrm{DP}(p)$ (and $G_\mathrm{SR}(p)/\xi_\mathrm{SR}(p)$) does not depend on $p$ in the semidilute regime ($1\lesssim c/c^*(M)\lesssim 4$) at sufficiently high $p$.
\section{Summary and Future Prospects}
In this article, we have described how past experimental results~\cite{Akagi2013,Nishi2017,Yoshikawa2019} on the linear elasticity of polymer gels can be successfully explained by considering negative energy elasticity coexisting with entropy elasticity~\cite{Yoshikawa2021}.
First, we have reviewed the experimental research~\cite{Akagi2013,Nishi2017,Yoshikawa2019,Yoshikawa2021} on the linear elasticity of polymer gel in the as-prepared state using tetra gels.
Each of these experiments involves a different network topology (Fig.~\ref{fig:2}a).
Figure~\ref{fig:3} shows representative experimental results.
Second, we have provided the unified formula in Eq.~(\ref{eq:Gbya}) for the linear elasticity of polymer gels, which has been revealed in a series of studies~\cite{Akagi2013,Nishi2017,Yoshikawa2019,Yoshikawa2021}.
In addition, in the semidilute regime ($1\lesssim c/c^*(M)\lesssim 4$) and at sufficiently high $p$, the prefactor $a(c,M,p,q)$ is separable as in Eq.~(\ref{eq:phantom-like}).
Finally, using this unified formula~(\ref{eq:Gbya}) together with the phenomenological expression of the prefactor in Eq.~(\ref{eq:phantom-like}), we have explained the past experimental results~\cite{Akagi2013,Nishi2017,Yoshikawa2019} and have reconciled the past results that seem to be inconsistent with each other~\cite{Akagi2013,Nishi2017}.\\ \\
The discovery of negative energy elasticity~\cite{Yoshikawa2021} is one of the most significant recent advances in the field of linear elasticity of polymer gels.
This negative energy elasticity, which vanishes when the solvent is removed, is the critical factor that differentiates gels from rubbers.
The models of classical rubber elasticity theories (such as the affine, phantom, and junction affine network models) are inapplicable to polymer gels; the relative change in shear modulus due to changes in temperature is several times greater than predicted by these models.
Thus, the negative energy elasticity is of great practical importance because polymer gels are used at various temperatures.\\ \\
The elucidation of the governing law of the shear modulus is expected to improve our understanding of the static and dynamic properties of swelling of polymer gels in solvents.
This is because the swelling pressure ($\Pi_{\mathrm{tot}}$) is determined by the shear modulus and osmotic pressure as $\Pi_{\mathrm{tot}}=\Pi_{\mathrm{mix}}+\Pi_{\mathrm{el}}$, where $\Pi_{\mathrm{mix}}$ and $\Pi_{\mathrm{el}}$ are the solvent-polymer mixing ($\Pi_{\mathrm{mix}}$) and elastic ($\Pi_{\mathrm{el}}$) contributions, respectively.
As an example of the static properties of swelling, we recently discovered the governing law of osmotic pressure throughout the gelation process involving both the sol and gel states~\cite{Yasuda2020}.
Here, the osmotic pressure was obtained by controlling the swelling pressure of gels with an external solution and subtracting the elastic contribution ($\Pi_{\mathrm{el}}$).
As an example of the dynamic properties of swelling, we demonstrated a significant dependence of the elastic modulus on the collective diffusion coefficient of the polymer networks~\cite{Fujiyabu2019,Kim2020}.
Accordingly, elasticity is the basis of both static and dynamic properties of polymer gels, prompting us to revisit previous studies.
\newpage
A complete understanding of the linear elasticity of polymer gels is still elusive, and it is important to address the following three points.
First, the microscopic origin of the negative energy elasticity needs to be clarified.
In Ref.~\cite{Yoshikawa2021}, we inferred that the origin is the interaction between the polymer and the solvent.
However, because we performed only macroscopic measurements~\cite{Akagi2013,Nishi2017,Yoshikawa2019,Yoshikawa2021}, we require further investigations to clarify the microscopic origin.
For example, molecular-scale experiments (such as light scattering and single-chain experiments~\cite{Nakajima2006,Liang2017}) and numerical simulations (such as molecular dynamics simulations) will reveal the microscopic origin.
A future theory that explains the origin of negative energy elasticity from the microscopic point of view should predict the vanishing temperature $T_{0}$ from the network structure.
To construct such a theory, the scaling law $T_0\sim \left(c/c^*(M)\right)^{-1/3}$ in the dilute regime ($c/c^*(M)<1$) in Fig.~\ref{fig:6}b would play a key role.\\
Second, in connection with the prefactor $a(c,M,p,q)$, we need to revisit the classical rubber elasticity theories, which are used to calculate the entropy elasticity of various network models.
The entropy elasticity ($G_{S}= G-G_{E}$), rather than the shear modulus itself ($G$), may be explained by one of the classical rubber elasticity theories.
However, according to our experiments~\cite{Yoshikawa2021}, $G_{S}$ is $2.4$ times that of the phantom network model, as in Eq.~(\ref{eq:phantom-like}).
The universality and meaning of the factor $2.4$ are an important open question.
For example, a recently proposed theory~\cite{Zhong2016} that predicts $G_{S}$ from the number of topological loop defects does not appear to explain our result because it predicts a lower $G_{S}$ than the phantom network model.
Third, it is important to investigate whether our findings extend to other polymer gels as well:
(i) homogeneous gels with other polymer-solvent systems, such as PEG-acetonitrile~\cite{Li2019}, poly(acrylic acid) (PAA)-water~\cite{Oshima2014}, poly($N$-isopropylacrylamide) (PNIPA)-methanol~\cite{Okaya2020}, and poly($n$-butyl acrylate) (PBA)-$N$,$N$-dimethylformamide (DMF)~\cite{Huang2020,Nakagawa2021} systems;
(ii) gels with other network structures, such as near-critical gels~\cite{Aoyama2021}
and topological gels~\cite{Ito2007}; and
(iii) gels synthesized by other types of polymerization, such as radical polymerization.
We emphasize that $G_{S}$ and $G_{E}$ are defined (and Eq.~(\ref{eq:G-vantHoff}) holds) under constant-volume conditions.
Thus, it is necessary to confirm that the effect of volume changes due to temperature changes is negligible.
It is difficult for a single group to perform experiments on all of these polymer gels, so it is essential that various groups investigate their linear elasticity.
\begin{acknowledgments}
This work was supported by the Japan Society for the Promotion of Science (JSPS) through Grants-in-Aid for
Early Career Scientists grant number 19K14672 to N.S.,
JSPS Fellows grant number 21J13478 to Y.Y.,
Scientific Research (B) grant number 18H02027 to T.S.,
Scientific Research (A) grant number 21H04688 to T.S.,
and Transformative Research Areas grant number 20H05733 to T.S.
This work was also supported by the Japan Science and Technology Agency (JST) CREST grant number JPMJCR1992 to T.S.\\
\end{acknowledgments}
\section{Introduction}
Our best understanding of how particles communicate via three of the fundamental forces, namely the strong, electromagnetic, and weak interactions, is encapsulated in the Standard Model (SM) of particle physics.
Over time and through many experiments, the SM has proven to be a well-established theory.
Despite its spectacular success at explaining the data, the SM is believed to be incomplete, as it fails to account for several challenging observations: the dominance of matter over antimatter in the present universe, neutrino masses, the hierarchy problem, dark matter and dark energy, and the unification of gravity
with the other three fundamental forces. To solve these open puzzles, the quest for physics beyond the SM (BSM) is of prime importance. In this context, the B factories have been an excellent testing ground for exploring new physics (NP) beyond the SM through low-energy experiments. They have witnessed hints of the breaking of lepton flavor universality in the charged (neutral) current decays mediated by $b \to c$ ($b \to s$) quark level transitions. Experimentally, the $b \to c \tau \nu$ quark level transitions exhibit lepton non-universality anomalies in the exclusive $B \to D^{(*)}\tau \nu$ and $B \to J/\psi \tau \nu$ decays, with tensions of $1.4 \sigma$ ($2.5 \sigma$) and $1.8 \sigma$, respectively, as reported by the HFLAV group~\cite{HFLAV:2022pwe}. The $\tau$ polarization fraction and the longitudinal polarization fraction of the $D^*$ meson in $B \to D^* \tau \nu$ show $1.6 \sigma$~\cite{Belle:2016dyj, Belle:2017ilt} and $1.6 \sigma$~\cite{Belle:2019ewo} deviations, respectively. Similarly, several measurements in $b \to s \ell ^+ \ell ^-$ transitions, such as the angular observable $P^{\prime}_5$ in $B \to K^{*} \mu ^+ \mu ^-$ in the bins
$q^2 \in[4.0, 6.0]$, [4.3, 6.0] and [4.0, 8.0] from ATLAS~\cite{ATLAS:2018gqc}, LHCb~\cite{LHCb:2013ghj,LHCb:2015svh}, CMS~\cite{CMS:2017rzx},
Belle~\cite{Belle:2016xuo}
respectively deviate at $3.3\sigma$, $1\sigma$ and $2.1\sigma$ from the SM expectations~\cite{Descotes-Genon:2012isb,Descotes-Genon:2013vna,Descotes-Genon:2014uoa}.
The updated measurement of the branching fraction of $B_s \to \phi \mu ^+ \mu ^-$~\cite{LHCb:2021zwz,LHCb:2013tgx,LHCb:2015wdu}
in the $q^2\in[1.1, 6.0]$ region shows a discrepancy at the level of $3.6\sigma$ from the SM expectations~\cite{Aebischer:2018iyb,Bharucha:2015bzk}. However, the recent updates by the LHCb Collaboration~\cite{LHCb, LHCb1} in the measurements of
$R_K=\mathcal{B}(B \to K \mu ^+ \mu^-)/\mathcal{B}(B \to K e ^+e ^-)$ and $R_{K^*}=\mathcal{B}(B \to K^{*}\, \mu^+ \mu ^-)/\mathcal{B}(B \to K^{*} e ^+ e ^-)$,
in the bin ranges $q^2\in [0.1, 1.1]$ and $[1.1, 6.0]$, are consistent with the SM predictions. On the other hand, lepton flavor violating (LFV) decays are forbidden at tree level in the SM and can in principle occur only via neutrino mixing in loop and box diagrams. Because such mixing is tiny, the predicted rates are considerably below current or future experimental sensitivities. This mixing between different generations of leptons gives rise to flavor-changing neutral current (FCNC) transitions. Keeping this in mind, leptonic LFV decays such as $\tau \to \mu \mu \mu$, $\tau \to eee$, $\mu \to eee$, etc. have been analysed in various NP models, although only experimental upper limits exist~\cite{ParticleDataGroup:2022pth}. The mechanism underlying FCNC LFV transitions in the quark sector could be similar to that in the lepton sector. In this regard, we explore the exclusive LFV decays of the quark sector, such as the $B_s \to \ell \ell ^{\prime}$ and $B_{(s)} \to (K^{(*)}, \phi, f_2^{\prime}, K_2^*) \ell \ell ^{\prime}$ decays, which occur via the $b \to s \ell \ell ^{\prime}$ quark level transition.
Experimentally, these decay channels have not yet been observed, but upper limits on a few observables exist. The leptonic $B_s \to \mu e$ and $B_s \to \tau \mu$ processes have upper limits of $5.4 \times 10^{-9}$ and $4.2 \times 10^{-5}$, respectively, from the LHCb Collaboration~\cite{LHCb:2017hag, LHCb:2019ujz}. Similarly, the upper bounds on the branching ratios of the $B \to K \ell \ell ^{\prime}$ processes measured by the LHCb and BaBar Collaborations are $\mathcal{B}(B \to K e \mu)<7.0 \times 10^{-9}$~\cite{LHCb:2019bix}, $\mathcal{B}(B \to K \tau \mu)<1.5 \times 10^{-5}$~\cite{BaBar:2012azg}, and $\mathcal{B}(B \to K e \tau)<4.5 \times 10^{-5}$~\cite{BaBar:2012azg}, respectively. The upper limit on the branching ratio of the $B^0 \to K^* e \mu$ channel is $1.8 \times 10^{-7}$ from the Belle Collaboration~\cite{Belle:2018peg}. We analyse the $B \to K_2^* \ell \ell ^{\prime}$ process because a good understanding of the $B \to K_2^* \gamma$ channel has been provided by the BaBar Collaboration in Ref.~\cite{BaBar:2003aji}. On the other hand, the $B \to f_2 ^{\prime} \ell \ell ^{\prime}$ process has received much less attention in both theory and experiment; thus, it can be studied similarly to the $B \to K_2 ^* \ell \ell ^{\prime}$ process.
It is interesting to see if the associated observables could be enhanced in some new physics models that could simultaneously explain the observed $b\to s \ell \ell$ data.
In this analysis, we consider a simplified non-universal $Z^{\prime}$ model in which the NP effects originate from a $U(1)^{\prime}$ abelian extension of the SM gauge symmetry. This extension provides a heavy new gauge boson $Z'$ of mass $m_{Z^{\prime}}$ with generic couplings to quarks and leptons, and induces FCNC transitions at tree level. Motivated by the available upper limits, we study the above LFV decays in the presence of the non-universal $Z^{\prime}$ model.
The outline of the paper is as follows. In section \ref{TT}, we discuss the theoretical toolkit, which includes the most general effective weak Hamiltonian with $b \to s \ell \ell ^{\prime}$ NP operators. We also report the relevant formulas for all the decay observables pertaining to the $B_s \to \ell \ell ^{\prime}$ and $B_{(s)} \to (K^{(*)}, \phi, f_2^{\prime}, K_2^*) \ell \ell ^{\prime}$ decay channels. In section \ref{NP}, we discuss the new physics analysis in the presence of the non-universal $Z'$ model, using the updated experimental limits on the $b \to s \ell \ell$ data. In section \ref{NAD}, we discuss the numerical analysis of the aforementioned observables of rare (semi)leptonic LFV decays. We conclude with the summary of
our results in section \ref{CON}.
\newpage
\section{Theoretical Toolkit:}\label{TT}
\subsection{Effective Hamiltonian}
In this section, we focus on the exclusive lepton flavor violating $b \to s \ell \ell^{\prime}$ ($\ell, \ell ^{\prime} =e, \mu , \tau$) transition processes. In the SM, the leptons $\ell$ and $\ell^{\prime}$ must have the same flavor, whereas in NP models the non-universal $Z^{\prime}$ boson can couple to leptons of different flavors. The most general weak effective Hamiltonian describing the $b\to s\ell \ell^{\prime}$ processes can be represented as~\cite{Dedes:2008iw, Becirevic:2016zri, Crivellin:2015era},
\begin{eqnarray}
\mathcal{H}^{eff} =- \frac{G_F \alpha}{2\sqrt{2}\pi} V_{tb} V_{ts}^* \sum _{m=9,10} C_m^{NP}O_m +h.c.,
\label{effH}
\end{eqnarray}
where $G_F$ ($\alpha$) represents the Fermi (electromagnetic) coupling constant and $V_{tb}V_{ts}^*$ is the product of CKM matrix elements. The primed counterparts of the operators are obtained by the replacement $P_L \rightleftharpoons P_R$. The transition is most sensitive to the semileptonic operators $O_9$ and $O_{10}$, given by
\begin{eqnarray}
O_9 = [\bar{s}\gamma_\mu (1-\gamma_5)b] [\bar{\ell} \gamma^\mu \ell^{\prime}], \hspace{1cm}
O_{10} = [\bar{s}\gamma_\mu (1-\gamma_5)b] [\bar{\ell} \gamma^\mu \gamma_5 \ell^{\prime}].
\end{eqnarray}
The standard decomposition of the hadronic matrix elements is given as
\begin{eqnarray}
\langle 0|\bar{b}\gamma_\mu P_{L(R)}s|B_s(p)\rangle = \pm \frac{i}{2}p_\mu f_{B_s}, \nonumber\\
\langle 0|\bar{b}P_{L(R)}s|B_s(p)\rangle = \pm \frac{i}{2}\frac{M_{B_s}^2 f_{B_s}}{m_b+m_s},
\end{eqnarray}
where $f_{B_s}$ and $p_\mu$ are the decay constant and the momentum of the $B_s$ meson, respectively.
\subsection{Decay observables of (semi)leptonic LFV $b \to s \ell \ell ^{\prime}$ processes:}
\subsubsection{$B_s \to \ell \ell ^{\prime}$}
From the effective Hamiltonian (\ref{effH}), one can obtain the amplitude of the $B_s \rightarrow \ell \ell^{\prime}$ process; the associated branching ratio is given as \cite{Becirevic:2016zri}
\begin{align}
\label{Bsformula}
&
{\cal B}(B_s\to \ell \ell ^{\prime}) =\dfrac{\tau_{B_s}}{64 \pi^3}\frac{\alpha^2 G_F^2}{m_{B_s}^3} f_{B_s}^2 |V_{tb}V_{ts}^*|^2\lambda^{1/2}(m_{B_s},m_\ell,m_{\ell^{\prime}})\nonumber \\
&\qquad\times \Bigg{\lbrace}[m_{B_s}^2-(m_\ell+m_{\ell^{\prime}})^2]\cdot\left|(C_9^{NP}-C_9')(m_\ell-m_{\ell^{\prime}})\right|^2\nonumber\\
&\hspace{1cm}+[m_{B_s}^2-(m_\ell-m_{\ell^{\prime}})^2]\cdot\left|(C_{10}^{NP}-C_{10}')(m_\ell+m_{\ell^{\prime}})\right|^2\Bigg{\rbrace},
\end{align}
where $\lambda(a,b,c)=[a^2-(b-c)^2][a^2-(b+c)^2]$.
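Eq.~(\ref{Bsformula}) can be coded and evaluated directly. The Python sketch below implements the Källén function and the branching ratio; the default numerical inputs ($f_{B_s}$, $\tau_{B_s}$, $|V_{tb}V_{ts}^*|$, $\alpha$) are illustrative placeholders rather than the fitted values used in this work.

```python
import math

def kallen(a, b, c):
    """Kallen function lambda(a,b,c) = [a^2-(b-c)^2][a^2-(b+c)^2]."""
    return (a**2 - (b - c)**2) * (a**2 - (b + c)**2)

def br_bs_llp(C9, C10, C9p, C10p, ml, mlp,
              mBs=5.36692, fBs=0.2303, tauBs=2.31e12,
              GF=1.1663787e-5, alpha=1.0/133.28, Vtbts=0.0398):
    """B(Bs -> l l') following Eq. (Bsformula); GeV units, tauBs in GeV^-1.
    Default inputs are illustrative placeholders."""
    pref = (tauBs / (64.0 * math.pi**3)) * alpha**2 * GF**2 / mBs**3 \
           * fBs**2 * Vtbts**2 * math.sqrt(kallen(mBs, ml, mlp))
    t9 = (mBs**2 - (ml + mlp)**2) * abs((C9 - C9p) * (ml - mlp))**2
    t10 = (mBs**2 - (ml - mlp)**2) * abs((C10 - C10p) * (ml + mlp))**2
    return pref * (t9 + t10)
```

Note that the $C_9$ term is proportional to $(m_\ell - m_{\ell'})^2$ and therefore drops out for same-mass leptons, as the structure of Eq.~(\ref{Bsformula}) requires.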
\subsubsection{$B\to K \ell \ell ^ \prime$}
The semileptonic $B \to K \ell \ell^ \prime$ decay mode involves the $b \to s$ quark level transition mediated by the effective Hamiltonian (\ref{effH}). The kinematic variables of Ref.~\cite{Korner:1989qb} are defined such that the main decay axis, denoted by $z$, is fixed in the rest frame of the $B$ meson, with the $K$ meson and the lepton pair $\ell \ell ^{\prime}$ travelling in opposite directions. The polar angle $\theta_\ell$ is the angle between the $K$ meson and the lepton $\ell$ in the $\ell \ell ^ \prime$ rest frame.
The standard parametrizations for hadronic matrix elements are provided by
\begin{align}
\langle \bar K(k)|\bar{s}\gamma_\mu b|\bar B(p)\rangle &= \Big{[}(p+k)_\mu- \frac{m_B^2-m_K^2}{q^2}q_\mu \Big{]}f_+(q^2)+\frac{m_B^2-m_K^2}{q^2} q_\mu f_0(q^2),\\
\langle \bar K(k)|\bar{s}\sigma_{\mu\nu} b|\bar B(p)\rangle &= -i (p_\mu k_\nu-p_\nu k_\mu)\frac{2 f_T(q^2,\mu)}{m_B+m_K}.
\end{align}
The hadronic form factors (FFs) $f_{+}(q^2)$, $f_{0}(q^2)$ and $f_{T}(q^2)$ are functions of $q^2$, which lies between $(m_1 +m_2)^2$ and $(m_B-m_K)^2$, with $m_{1,2}$ the lepton masses.
By employing the above definitions, the differential decay rate can be written as~\cite{Becirevic:2016zri},
\begin{align}
\label{eq:semilepPP}
\dfrac{\mathrm{d}{\cal B}}{\mathrm{d}q^2}(\bar B \to \bar K \ell_1^- \ell_2^+)& = \vert{\cal M}_{K}(q^2)\vert^2\times\Big\lbrace
\varphi_7(q^2) |C_7+C_{7}'|^2 + \varphi_{9}(q^2) |C_{9}+C_{9}'|^2 + \varphi_{10}(q^2) |C_{10} +C_{10}'|^2 \nonumber \\
& + \varphi_S(q^2) |C_S+C_{S}'|^2+ \varphi_P(q^2) |C_P+C_{P}'|^2 + \varphi_{79}(q^2) \mathrm{Re}[(C_7+ C_7^\prime) (C_{9}+ C_9^\prime)^*] \nonumber \\
&+ \varphi_{9S}(q^2) \mathrm{Re}[(C_{9}+ C_9^\prime) (C_S+ C_S^\prime)^*]+ \varphi_{10P}(q^2) \mathrm{Re}[(C_{10} +C_{10} ^\prime)(C_P+C_P^\prime)^*] \Big\rbrace ,
\end{align}
where the $\varphi_{i}(q^2)$ depend on kinematical quantities and on the form factors, and are shown in Appendix~\ref{BtoKparameters}.
The normalization factor given in eq.~(\ref{eq:semilepPP}) reads
\begin{equation}
\vert {\cal M}_{K}(q^2)\vert^2=\tau_{B_d}\dfrac{\alpha^2 G_F^2 |V_{tb}V_{ts}^*|^2}{512 \pi^5 m_B^3}\dfrac{\lambda^{1/2}(\sqrt{q^2},m_1,m_2)}{q^2}\lambda^{1/2}(\sqrt{q^2},m_B,m_K),
\end{equation}
whereas the kinematic factor is given as
\begin{eqnarray}
\lambda=m_B^4+m_K^4+q^4-2(m_B^2m_K^2+m_K^2q^2+m_B^2q^2).\label{knfactor}
\end{eqnarray}
\subsubsection{$B\to K^\ast \ell \ell ^{\prime}$ and $B_s\to \phi \ell \ell ^{\prime}$}
Here we focus on $B \to V \ell \ell ^{\prime}$ ($V= K^*, \phi$) decays proceeding via the $b \to s \ell \ell ^{\prime}$ process, where the vector mesons further decay as $K^* \to K \pi$ and $\phi \to K\bar{K}$, respectively. We express the angular distributions for the $B \to K^*(\to K \pi) \ell \ell ^{\prime}$ process; the distributions associated with the $B_s \to \phi$ transition can be obtained by a trivial replacement of the form factors and the masses of the particles involved. We adopt the details concerning the kinematics from Ref.~\cite{Korner:1989qb}. In the angular conventions of Ref.~\cite{Becirevic:2016zri}, $\theta _ \ell$ is the angle between the lepton $\ell$ and the decay axis in the lepton-pair rest frame, and $\theta _K$ is the angle made by the decay axis with the direction of flight of the $K$ meson in the rest frame of the $K^*$ vector meson. The angle $\phi$ is spanned between the $K \pi$ and $\ell \ell ^ {\prime}$ decay planes, as shown in Fig.~\ref{Angdist}.
\begin{figure}
\includegraphics[scale=0.25]{Angdist.png}
\caption{Angular distribution of $B \to K^* (\to K \pi) \ell \ell ^ {\prime}$ processes}
\label{Angdist}
\end{figure}
The transition amplitude of the exclusive $B\to K^* \ell \ell ^{\prime}$ decay mode is governed by the hadronic matrix elements, which are parametrized in terms of form factors as~\cite{Becirevic:2016zri}
\begin{align}\label{def:FFV}
\langle \bar{K}^\ast(k)|\bar{s}\gamma^\mu(1-\gamma_5) b|\bar{B}(p)\rangle &= \varepsilon_{\mu\nu\rho\sigma}\varepsilon^{\ast\nu}p^\rho k^\sigma \frac{2 V(q^2)}{m_B+m_{K^\ast}}-i \varepsilon_\mu^\ast(m_B+m_{K^\ast})A_1(q^2)\\[.3em]
&+i(p+k)_\mu (\varepsilon^\ast \cdot q)\frac{A_2(q^2)}{m_B+m_{K^\ast}}+i q_\mu(\varepsilon^\ast \cdot q) \frac{2 m_{K^\ast}}{q^2}[A_3(q^2)-A_0(q^2)],\nonumber\\[.7em]
\langle \bar{K}^\ast(k)|\bar{s}\sigma_{\mu\nu} q^\nu(1-\gamma_5) b|\bar{B}(p)\rangle &= 2 i \varepsilon_{\mu\nu\rho\sigma} \varepsilon^{\ast\nu}p^\rho k^\sigma T_1(q^2)+[\varepsilon_\mu^\ast(m_B^2-m_{K^\ast}^2)-(\varepsilon^\ast \cdot q)(2p-q)_\mu]T_2(q^2)\nonumber\\[.3em]
&+(\varepsilon^\ast \cdot q)\Big{[}q_\mu - \frac{q^2}{m_B^2-m_{K^\ast}^2}(p+k)_\mu \Big{]}T_3(q^2),
\end{align}
where $\varepsilon_\mu$ is the polarization vector of the $K^\ast$ meson. The form factor $A_3(q^2)$ is a combination of the $A_{1}(q^2)$ and $A_{2}(q^2)$ form factors,
\begin{eqnarray} 2 m_{K^\ast} A_3(q^2)=(m_B+m_{K^\ast})A_1(q^2)-(m_B-m_{K^\ast})A_2(q^2).
\end{eqnarray}
The full angular distribution of the $B \to K^* \ell \ell ^\prime$ decay mode can be written as
\begin{equation}
\dfrac{\mathrm{d}^4 {\cal B} ({B}\to {K}^{\ast}(\to K\pi) \ell \ell ^{\prime})}{\mathrm{d}q^2\mathrm{d}\cos \theta_\ell \mathrm{d}\cos \theta_K \mathrm{d}\phi} = \dfrac{9}{32\pi}I(q^2,\theta_\ell,\theta_K,\phi),
\end{equation}
where
\begin{align}
I(q^2,\theta_\ell,\theta_K,\phi) = &I_1^s(q^2)\sin^2\theta_K + I_1^c(q^2)\cos^2\theta_K+[I_2^s(q^2)\sin^2\theta_K+I_2^c(q^2)\cos^2\theta_K]\cos 2\theta_\ell\nonumber\\[.4em]
&+I_3(q^2)\sin^2\theta_K \sin^2\theta_\ell \cos 2\phi+I_4(q^2)\sin 2\theta_K \sin 2\theta_\ell \cos \phi \nonumber \\[.4em]
&+ I_5(q^2) \sin 2\theta_K\sin \theta_\ell\cos\phi+[I_6^s(q^2)\sin^2\theta_K+I_6^c(q^2)\cos^2\theta_K]\cos \theta_\ell \nonumber \\[.4em]
&+I_7(q^2)\sin 2\theta_K \sin \theta_\ell \sin \phi + I_8(q^2)\sin 2\theta_K \sin 2\theta_\ell \sin\phi \nonumber\\[.4em]
&+I_9(q^2) \sin^2\theta_K \sin^2\theta_\ell \sin 2 \phi.
\end{align}
The $q^2$-dependent differential branching fraction, after integrating over the physical region of the phase space $\theta _K$, $\theta _ \ell$ and $\phi$, is simply given as
\begin{eqnarray}
{\mathrm{d}{\cal B}\over \mathrm{d}q^2}=\frac{1}{4}\left[3 I_1^c(q^2)+6 I_1^s(q^2)-I_2^c(q^2)-2I_2^s(q^2)\right]\,
\end{eqnarray}
with
\begin{eqnarray}
(m_i+m_j)^2 \leq q^2 \leq (m_B-m_{K^*})^2, \hspace{0.25cm} -1 \leq \cos\theta_\ell \leq 1, \hspace{0.25 cm}
-1 \leq \cos\theta_K \leq 1,
\hspace{0.25cm} 0 \leq \phi \leq 2\pi.
\end{eqnarray}
Here the angular coefficients $I^i_j(q^2)$ ($i = s, c$; $j = 1, 2, \ldots, 9$) are defined in terms of the transversity amplitudes
and given in Appendix~\ref{BtoKstarparameters}.
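The consistency between the four-fold angular distribution and the $q^2$-dependent branching fraction above can be checked numerically: integrating $\tfrac{9}{32\pi}I(q^2,\theta_\ell,\theta_K,\phi)$ over the full angular phase space must reproduce $\tfrac14[3 I_1^c + 6 I_1^s - I_2^c - 2 I_2^s]$. A Python sketch with arbitrary placeholder values for the angular coefficients at one $q^2$ point:

```python
import math

# Illustrative placeholder values of the angular coefficients I_j at one q^2 point;
# in the actual analysis they are built from the transversity amplitudes.
I1s, I1c, I2s, I2c = 0.30, 0.40, 0.08, -0.35
I3, I4, I5, I6s, I6c, I7, I8, I9 = 0.02, 0.05, 0.10, 0.12, 0.0, 0.01, 0.02, 0.01

def d4B(cl, cK, phi):
    """(9/32pi) * I(q^2, theta_l, theta_K, phi), written in cos(theta) variables."""
    sl, sK = math.sqrt(1 - cl * cl), math.sqrt(1 - cK * cK)
    s2l, s2K, c2l = 2 * sl * cl, 2 * sK * cK, 2 * cl * cl - 1
    val = (I1s * sK**2 + I1c * cK**2 + (I2s * sK**2 + I2c * cK**2) * c2l
           + I3 * sK**2 * sl**2 * math.cos(2 * phi)
           + I4 * s2K * s2l * math.cos(phi) + I5 * s2K * sl * math.cos(phi)
           + (I6s * sK**2 + I6c * cK**2) * cl
           + I7 * s2K * sl * math.sin(phi) + I8 * s2K * s2l * math.sin(phi)
           + I9 * sK**2 * sl**2 * math.sin(2 * phi))
    return 9.0 / (32.0 * math.pi) * val

# Midpoint-rule integration over cos(theta_l), cos(theta_K) in [-1,1], phi in [0,2pi]
n, num = 40, 0.0
for i in range(n):
    cl = -1 + (i + 0.5) * 2.0 / n
    for j in range(n):
        cK = -1 + (j + 0.5) * 2.0 / n
        for k in range(n):
            phi = (k + 0.5) * 2.0 * math.pi / n
            num += d4B(cl, cK, phi)
num *= (2.0 / n) * (2.0 / n) * (2.0 * math.pi / n)

analytic = 0.25 * (3 * I1c + 6 * I1s - I2c - 2 * I2s)   # dB/dq^2 from the text
```

The terms proportional to $\cos\phi$, $\sin\phi$, $\cos 2\phi$, $\sin 2\phi$ and $\cos\theta_\ell$ integrate to zero, so only $I_1^{s,c}$ and $I_2^{s,c}$ survive, in agreement with the quoted combination.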
\subsubsection{$B \to T \{K_2 ^* ,f_2 ^{\prime}\} \ell \ell ^{\prime}$ processes}
In addition to the modes discussed in the previous sub-sections, we study the exclusive $B\to T \ell \ell^{\prime}$ ($T = K_2^*, f_2 ^{\prime}$) decay modes mediated via the $b\to s$ quark level transition.
The long-distance contributions, encoded in the hadronic matrix elements of the $B\to K_2^*$ transition, are given as~\cite{Wang:2010ni, Yang:2010qd}
\begin{eqnarray}
\langle K_2^*(k, \epsilon ^*)|\bar s\gamma^{\mu}b|\overline B(p)\rangle
&=&-\frac{2V(q^2)}{m_B+m_{K_2^*}}\epsilon^{\mu\nu\rho\sigma} \epsilon ^*_{T\nu} p_{\rho}k_{\sigma}, \nonumber\\
\langle K_2^*(k,\epsilon ^*)|\bar s\gamma^{\mu}\gamma_5 b|\overline B(p)\rangle
&=&2im_{K_2^*} A_0(q^2)\frac{\epsilon^*_{T } \cdot q }{ q^2}q^{\mu} + i(m_B+m_{K_2^*})A_1(q^2)\left[ \epsilon^{*\mu}_{T}
-\frac{\epsilon ^*_{T } \cdot q }{q^2}q^{\mu} \right] \nonumber\\
&&-iA_2(q^2)\frac{\epsilon ^*_{T} \cdot q }{ m_B+m_{K_2^*} }
\left[ (p+k)^{\mu}-\frac{m_B^2-m_{K_2^*}^2}{q^2}q^{\mu} \right],
\end{eqnarray}
where $p$ ($k$) is the four momentum of $B$ ( $K_2^{*}$) meson.
For the $B_{(s)}$ to light $J^{PC}=2^{++}$ tensor meson ($T$) transitions, we use the form factors derived from the light-cone sum rule (LCSR) approach.
Within this technique, the $q^2$-dependent form factors are parameterized as \cite{Yang:2010qd}:
\begin{equation}
F^{B_{(s)}T}(q^2)=\frac{F^{B_{(s)}T}(0)}{1-a_T(q^2/m_{B_q}^2)+b_T(q^2/m_{B_q}^2)^2},
\end{equation}
where $F=V,A_{0,1,2}$ and $T_{1,2,3}$ are the transition form factors.
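This parametrization can be coded directly; in the sketch below the inputs $F(0)=0.3$, $a_T=1.2$, $b_T=0.5$ are hypothetical placeholders (the actual values for each form factor are taken from Ref.~\cite{Yang:2010qd}).

```python
def ff_tensor(q2, F0, aT, bT, mBq=5.27934):
    """B_(s) -> tensor-meson form factor in the LCSR parametrization:
    F(q^2) = F(0) / (1 - a_T s + b_T s^2), with s = q^2 / m_Bq^2."""
    s = q2 / mBq**2
    return F0 / (1.0 - aT * s + bT * s**2)

# Hypothetical inputs: F(0) = 0.3, a_T = 1.2, b_T = 0.5
f0 = ff_tensor(0.0, 0.3, 1.2, 0.5)   # F(0) is recovered at q^2 = 0
```

For $a_T > 0$ and moderate $b_T$, the form factor rises monotonically with $q^2$ over the physical region, as expected from the pole-like denominator.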
The $B \to K_2^* \ell \ell^{\prime}$ decay distribution, which arises from the $b\to s \ell \ell^{\prime}$ quark level transition, can be expressed in terms of the leptonic polar angle $\theta_\ell$ and the dilepton invariant mass squared $q^2$. The angle $\theta_\ell$ is the angle made by the lepton $\ell$ with respect to the di-lepton momentum in the rest frame of the $B$ meson. The two-fold differential decay distribution in the variables $\theta _\ell$ and $q^2$ is given as follows~\cite{Kumbhakar:2022szr}
\begin{equation}
\frac{d^2\Gamma}{dq^2 d \cos \theta_\ell}=A(q^2) + B(q^2) \cos \theta_\ell + C(q^2) \cos ^2\theta_\ell,
\label{dist}
\end{equation}
where the $q^2$-dependent functions $A(q^2)$, $B(q^2)$ and $C(q^2)$ include the form factors and Wilson coefficients. The detailed expressions are given in Appendix~\ref{appenb}.
Now after integrating Eq.~(\ref{dist}) over $\theta_\ell$, we obtain the differential branching ratio as
\begin{equation}
\frac{d\mathcal{B}}{d q^2}= 2\tau_B \left(A + \frac{C}{3}\right),
\end{equation}
and the lepton forward-backward asymmetry is represented as
\begin{equation}
A_{\rm FB}(q^2)= \frac{1}{d\Gamma/dq^2}\left(\int_0^1 d\cos\theta_\ell\frac{d\Gamma}{d\cos\theta_\ell d q^2}-\int_{-1}^0 d\cos\theta_\ell \frac{d\Gamma}{d\cos\theta_\ell d q^2 }\right) = \frac{B}{2\left(A+\frac{C}{3}\right)}.
\end{equation}
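The integrals behind Eq.~(\ref{dist}) and the two expressions above are elementary; a minimal Python sketch (with arbitrary placeholder coefficients $A$, $B$, $C$) verifying them:

```python
def dGamma_dq2(A, B, C):
    """Integral of A + B*cos(t) + C*cos(t)^2 over cos(t) in [-1, 1]."""
    return 2.0 * (A + C / 3.0)

def A_FB(A, B, C):
    """Forward-backward asymmetry: (int_0^1 - int_{-1}^0) / full integral."""
    forward = A + B / 2.0 + C / 3.0    # integral over cos(t) in [0, 1]
    backward = A - B / 2.0 + C / 3.0   # integral over cos(t) in [-1, 0]
    return (forward - backward) / (forward + backward)
```

The odd $B\cos\theta_\ell$ term drops out of the full rate, while the asymmetry reduces to $A_{\rm FB} = B/[2(A + C/3)]$, in agreement with the expression above.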
Similarly, the analysis of the $B \to f_2^{\prime} \ell \ell^{\prime}$ process proceeds as for the $B\to K_2^{*}$ transition, with the form factors taken from Ref.~\cite{Yang:2010qd}.
We would also like to see whether lepton non-universality can be observed in the LFV decays. Hence, we define the ratios of branching ratios of the various LFV $b \to s \ell \ell^{\prime}$ decays as
\begin{eqnarray}
&&R_{K \ell}^{\ell \ell ^{\prime}} = \frac{{ \mathcal{B}}\left( \bar{B} \to \bar{K } \ell \ell ^{\prime} \right)}{{\mathcal{B}}\left( \bar{B} \to \bar{K } \ell \ell \right)},\\
&&R_{V\ell}^{\ell \ell ^{\prime}} = \frac{{\mathcal{B}}\left( \bar{B} \to \bar{V } \ell \ell ^{\prime} \right)}{{\mathcal{B}}\left( \bar{B} \to \bar{V } \ell \ell \right)}, (V=K^* , \phi), \\
&&R_{T\ell}^{\ell \ell ^{\prime}} = \frac{{\mathcal{B}}\left( \bar{B} \to \bar{ T} \ell \ell ^{\prime} \right)}{{\mathcal{B}}\left( \bar{B} \to \bar{T } \ell \ell \right)}, (T= K_2^*, f_2^{\prime}),
\end{eqnarray} \label{LNU}
with $\ell= e, \mu$.
\section{New Physics Analysis in the non-universal $Z^{\prime}$ model:}\label{NP}
Among the various scenarios of physics beyond the SM, extending the SM gauge group by an extra abelian $U(1)^{\prime}$ is one of the most ubiquitous possibilities; it provides a neutral massive vector (spin-1) boson $Z^{\prime}$ \cite{Langacker:2000ju}.
At tree level, this heavy $Z^{\prime}$ boson mediates flavor-changing neutral-current transitions at the $b\to s(d) \ell \ell^{\prime}$ quark level. It is therefore a natural candidate to contribute to the weak effective Hamiltonian of the $b\to s(d)$ transition, can induce appreciable deviations from the SM results, and can be confronted with collider data.
\begin{figure}[htb]
\includegraphics[scale=0.35]{Zprime.png}
\caption{Tree level exchange of $Z^{\prime}$ boson for $b\to s \ell \ell ^{\prime}$ process.}
\label{Feynmandiag}
\end{figure}
In this work, we formalise its application to the case of the $b \to s$ transition (it is straightforward to generalize to the $b \to d$ quark level transition).
Different kinds of new physics models, such as the $Z^{\prime}$, leptoquark (LQ) and FCNC-mediated $Z$ boson models, have been analysed in Refs.~\cite{Mohapatra:2021ynn, Mohapatra:2021izl, Mohapatra:2019wcm, Mohapatra:2021zdp, Mohanta:2010yj, Duraisamy:2016gsd, Sahoo:2015wya}. With the tree level exchange, the new physics contributions to the parton level $b\to s \ell \ell ^{\prime}$ transition can be described in two scenarios: $\mathcal{S}$ - ${ \rm I}:C_9^ {NP}\neq 0$ and $\mathcal{S}$ - $ { \rm II}:C_9^ {NP} = - C_{10}^ {NP}$ \cite{Crivellin:2015era,Capdevila:2017bsm}. Both scenarios are possible in the $Z^{\prime}$ model, whereas the LQ model realizes only scenario ${\rm II}$. In this analysis, we probe the exclusive $b\to s \ell \ell ^{\prime}$ decays in the presence of the non-universal $Z^{\prime}$ model. The corresponding Feynman diagram is given in Fig.~\ref{Feynmandiag}.
The new physics couplings associated with the $Z^{\prime}$ model can be read as~\cite{Crivellin:2015era}
\begin{eqnarray}
C_{9,10}^{NP} = -\frac{\pi}{\sqrt{2} M_{Z^{\prime}}^2}\frac{1}{\alpha G_F V_{tb}V_{ts}^*}\Gamma _{bs}^{L}(\Gamma _{\ell \ell ^{\prime}}^R\pm \Gamma _{\ell \ell ^{\prime}}^L) .
\end{eqnarray}
The scenario ${\rm I}$ is obtained with $\Gamma _{\ell \ell ^{\prime}}^L= \Gamma _{\ell \ell ^{\prime}}^R$, whereas scenario ${\rm II}$ corresponds to the condition $\Gamma _{\ell \ell ^{\prime}}^R=0$. For simplicity, we set $\Gamma _{bs} ^R=0$ and take the non-zero $Z^{\prime}$-$b$-$s$ coupling $\Gamma_{bs}^L$ to be real in our analysis.
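The two scenarios can be verified with a short numerical sketch, assuming the lepton couplings enter as $\Gamma_{bs}^L(\Gamma_{\ell\ell'}^R \pm \Gamma_{\ell\ell'}^L)$ for $C_9^{NP}$ and $C_{10}^{NP}$, respectively; all numerical inputs below are illustrative placeholders.

```python
import math

def wilson_np(Gbs_L, Gll_L, Gll_R, MZp,
              alpha=1.0/133.28, GF=1.1663787e-5, Vtbts=0.0398):
    """(C9NP, C10NP) from tree-level Z' exchange; MZp in GeV, couplings real."""
    pref = -math.pi / (math.sqrt(2.0) * MZp**2 * alpha * GF * Vtbts)
    return (pref * Gbs_L * (Gll_R + Gll_L),   # C9NP
            pref * Gbs_L * (Gll_R - Gll_L))   # C10NP

c9_I, c10_I = wilson_np(0.06, 0.2, 0.2, 4500.0)    # scenario I: Gll_L = Gll_R
c9_II, c10_II = wilson_np(0.06, 0.2, 0.0, 4500.0)  # scenario II: Gll_R = 0
# Scenario I gives C10NP = 0; scenario II gives C9NP = -C10NP.
```

The $1/M_{Z'}^2$ prefactor makes explicit why larger $m_{Z^{\prime}}$ values require larger couplings to reproduce the same Wilson coefficients.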
\subsection{Constraints on $Z^{\prime}$ couplings from leptonic decays}
Among the lepton flavor violating $\tau$ decays, the $\tau \to \ell \ell \ell$ ($\ell = \mu, e$) channels provide a very sensitive probe of the coupling $\Gamma _{\ell \ell ^{\prime}}$ in the $Z^{\prime}$ model. Experimentally, the upper limits on the branching ratios of the $\tau \rightarrow \mu\mu\mu$ and $\tau \rightarrow eee$ processes are $2.1\times 10^{-8}$ and $2.7\times 10^{-8}$ at $90\%$ CL, respectively~\cite{ParticleDataGroup:2022pth}. Additionally, the upper bound on the branching fraction of the $\mu \to eee$ process is $1.0\times 10^{-12}$~\cite{ParticleDataGroup:2022pth}. A significant sensitivity to physics beyond the SM could be provided by the observation of such distinct processes in collider experiments.
The $Z^{\prime}$ boson contributes to the LFV 3-body leptonic decay $\tau \rightarrow \mu\mu\mu$ at the tree level with the branching ratio which is given by~\cite{Langacker:2008yv, Crivellin:2013hpa}
\begin{eqnarray}
\mathcal{B}(\tau \rightarrow \mu\mu\mu) =\frac{m_{\tau}^5}{1536\pi^3\Gamma_{\tau}M_{Z'}^4}
[2(|\Gamma_{\mu\tau}^L \Gamma_{\mu\mu}^L|^2 + |\Gamma_{\mu\tau}^R \Gamma_{\mu\mu}^R|^2)
+ |\Gamma_{\mu\tau}^L\Gamma_{\mu\mu}^R|^2
+ |\Gamma_{\mu\tau}^R\Gamma_{\mu\mu}^L|^2],
\end{eqnarray}
where $m_\tau~(\Gamma _\tau)$ is the mass (total decay width) of the $\tau$ lepton.
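A numerical sketch of this branching ratio (with $m_\tau^5$ in the prefactor, as required dimensionally; the coupling values below are illustrative placeholders):

```python
import math

def br_tau_3mu(GL_mt, GR_mt, GL_mm, GR_mm, MZp,
               mtau=1.77686, Gamma_tau=2.267e-12):
    """B(tau -> mu mu mu) from tree-level Z' exchange (GeV units; MZp in GeV)."""
    pref = mtau**5 / (1536.0 * math.pi**3 * Gamma_tau * MZp**4)
    bracket = (2.0 * ((GL_mt * GL_mm)**2 + (GR_mt * GR_mm)**2)
               + (GL_mt * GR_mm)**2 + (GR_mt * GL_mm)**2)
    return pref * bracket
```

Since the rate scales as $(\text{couplings})^4/M_{Z'}^4$, a fixed experimental upper limit allows larger couplings as $m_{Z^{\prime}}$ increases, which is the trend visible in Table~\ref{LFVLC3}.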
\begin{table}[h]
\begin{center}
\begin{tabular}{|c | c | c| c |}
\hline
Couplings & $m_{Z^{\prime}}=4.5$ TeV & $m_{Z^{\prime}}=6.0$ TeV & $m_{Z^{\prime}}=7.0$ TeV\\
\hline
\hline
$\Gamma _{\tau \mu}(\mathcal{S} - \rm I)$ & 0.128 & 0.227 & 0.310 \\
$\Gamma _{\tau e}(\mathcal{S} - \rm I)$ & 0.145 & 0.258 & 0.351 \\
$\Gamma _{\mu e}(\mathcal{S} - \rm I)$ & $3.230 \times 10^{-4}$ & $5.742 \times 10^{-4}$ & $7.815 \times 10^{-4}$ \\
\hline
\hline
$\Gamma _{\tau \mu}(\mathcal{S} - \rm II)$ & 0.221 & 0.394 & 0.537\\
$\Gamma _{\tau e}(\mathcal{S} - \rm II)$ & 0.251 & 0.447 & 0.609 \\
$\Gamma _{\mu e}(\mathcal{S} - \rm II)$ & $4.568 \times 10^{-4}$ & $8.120 \times 10^{-4}$ & $1.105 \times 10^{-3}$ \\
\hline
\end{tabular}
\end{center}
\caption{The NP couplings for LFV leptonic decays.}
\label{LFVLC3}
\end{table}
Similarly, the branching ratio of $\tau \to eee$ decay mode can be obtained by replacing $\mu$ by $e$.
The LFV branching fraction of $\mu \to eee$ decay mode is given as~\cite{Dib:2018rpy}
\begin{eqnarray}
\mathcal{B}(\mu \to eee) = \frac{g_{V\mu e}^2g_{Ve e}^2}{m_{Z^{\prime}}^4}\frac{2}{4G_F^2},
\end{eqnarray}
where $g_V^2= |g_{\mu e}^V|^2+|g_{\mu e}^A|^2$, and $G_F$ is the Fermi coupling constant.
Using the upper limits on the above LFV leptonic decays, we obtain the values of the lepton flavor violating NP couplings given in Table~\ref{LFVLC3}, where the coupling of the $Z^{\prime}$ to $\ell \ell$ is taken to be SM-like in this analysis.
\subsection{Fit Results}
In this sub-section, we consider the binned $q^2$ SM predictions and experimental measurements of various observables, namely the angular observable $P_5'$, $\mathcal{B} (B_s \to \phi \mu \mu)$ and $\mathcal{B} (B_s \to \mu \mu)$. The form factor independent observable $P_5'$ of the $B\to K^* \mu \mu$ process is defined as
\begin{eqnarray}
P_5' = \frac{J_5}{2\sqrt{-J_2^c J_2^s}},
\end{eqnarray}
where the auxiliary functions $J_i^p$ ($i =2,5$; $p = c, s$) include the relevant $B \to K^*$ form factors and the Wilson coefficients. For the numerical calculation of $B\to K^* \ell \ell$, we employ the FFs from the LQCD method~\cite{Bouchard:2013eph}. Similarly, for the $B_s \to \phi \mu \mu$ process, induced by the FCNC $b\to s \mu \mu$ transition, we consider the FFs from the combined analysis of the LCSR and LQCD fit results~\cite{Bharucha:2015bzk}. The branching ratio of the $B_s \to \mu \mu$ leptonic decay mode has also been included. Using these three observables, we perform a naive $\chi^2$ analysis to obtain the NP coupling parameters of the non-universal $Z'$ model. The $\chi ^2$ function is defined as follows
\begin{eqnarray}
\chi^2(C_i^{\rm NP})= \sum_i \frac{\Big ({\cal O}_i^{\rm th}(C_{9,10}^{\rm NP}) -{\cal O}_i^{\rm exp} \Big )^2}{(\Delta {\cal O}_i^{\rm exp})^2+(\Delta {\cal O}_i^{\rm sm})^2},
\end{eqnarray}
where ${\cal O}_i ^ {\rm th}$ represents the theoretical expression including the NP contributions and ${\cal O}_i ^ {\rm exp}$ is the corresponding experimental value. The denominator includes the $1 \sigma$ uncertainties of the theoretical and experimental results. As the new vector boson $Z^{\prime}$ has not yet been observed in collider experiments, its mass scale is constrained in different grand unified theories, as discussed in Refs.~\cite{ATLAS:2017eiz, CMS:2018ipm, Bandyopadhyay:2018cwu}. Bandyopadhyay et al. have obtained the constraint $m_{Z^{\prime}}>4.4$ TeV using the recent Drell-Yan data of the LHC~\cite{Bandyopadhyay:2018cwu}.
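The minimization itself can be sketched with a simple one-parameter grid scan; the observables below are linearized toy models with illustrative central values and uncertainties, not the actual inputs of the fit.

```python
def chi2(C, theories, exps, d_exps, d_sms):
    """chi^2(C) = sum_i [O_i^th(C) - O_i^exp]^2 / (dO_i_exp^2 + dO_i_sm^2)."""
    return sum((th(C) - ex)**2 / (de**2 + ds**2)
               for th, ex, de, ds in zip(theories, exps, d_exps, d_sms))

# Toy linearized observables O_i^th(C) = O_i^SM + k_i * C (illustrative numbers)
theories = [lambda C: -0.40 + 0.8 * C,   # angular observable P5' in one q^2 bin
            lambda C: 7.5 + 2.0 * C,     # 10^8 x B(Bs -> phi mu mu)
            lambda C: 3.66 + 0.9 * C]    # 10^9 x B(Bs -> mu mu)
exps, d_exps, d_sms = [-0.30, 8.0, 3.45], [0.16, 1.2, 0.29], [0.09, 0.9, 0.14]

# Coarse grid scan for the best-fit NP coupling in C in [-1, 1]
grid = [i / 1000.0 - 1.0 for i in range(2001)]
best = min(grid, key=lambda C: chi2(C, theories, exps, d_exps, d_sms))
```

In the actual analysis the scan is over the $Z'$ coupling $\Gamma_{bs}^L$ for each fixed $m_{Z^{\prime}}$, with the Wilson coefficients entering through the observables.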
Using all the input parameters, the fitted values of the NP couplings are given below
\begin{eqnarray}
\Gamma_{bs}^ L|_{\mathcal{S}- \rm I} &=&0.060 ~(m_{Z^{\prime}}=4.5~ \rm TeV), 0.108 ~(m_{Z^{\prime}}=6.0~ \rm TeV), 0.147 ~(m_{Z^{\prime}}=7.0~ \rm TeV),\nonumber\\
\Gamma_{bs}^ L|_{\mathcal{S}- \rm II} &=&0.062 ~(m_{Z^{\prime}}=4.5~ \rm TeV),0.110 ~(m_{Z^{\prime}}=6.0~ \rm TeV), 0.150
~(m_{Z^{\prime}}=7.0~ \rm TeV),
\end{eqnarray}
where we have used three $m_{Z^{\prime}}$ values in this analysis.
\subsection{Input parameters}
In this sub-section, we list the input parameters used for our analysis. From Ref.~\cite{ParticleDataGroup:2020ssz}, we take all the necessary parameters such as the CKM matrix elements, the lifetimes of the $B_{(s)}$ mesons, the Fermi coupling constant, the fine structure constant, and the masses of the quarks and leptons. We employ the Wilson coefficients at the scale $\mu =m_b$ from Ref.~\cite{Ali:1999mm}.
\begin{table}[htp]
\centering
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c|c|}
\hline
Observable & & $m_{Z^{\prime}}$=4.5 TeV & $m_{Z^{\prime}}$=6.0 TeV & $m_{Z^{\prime}}$=7.0 TeV \\
\hline
\hline
\multicolumn{5}{|c|}{$B_s \to \ell \ell ^{\prime}$ ($Z'$ contribution)}\\
\hline
\hline
\multirow{2}{*}{$\mathcal{B}_ {\mu e}\times 10^{-9}$}
& $\mathcal{S} - \rm I$ & $4.041$ & $12.71$ & $23.54$ \\
\cline{2-5}
& $\mathcal{S} - \rm II$ & $16.32$ & $50.83$ & $72.48$ \\
\hline
\multirow{2}{*}{$\mathcal{B}_ {\tau \mu}\times 10^{-8}$}
& $\mathcal{S} - \rm I$ & $0.068$ & $0.240$ & $0.449$ \\
\cline{2-5}
& $\mathcal{S} - \rm II$ & $0.485$ & $1.623$ & $3.026$ \\
\hline
\multirow{2}{*}{$\mathcal{B}_ {\tau e}\times 10^{-9}$}
& $\mathcal{S} - \rm I$ & $0.196$ & $0.617$ & $1.143$ \\
\cline{2-5}
& $\mathcal{S} - \rm II$ & 0.406 & 1.282 & 1.832 \\
\hline
\hline
\end{tabular}
}
\caption{Upper limit values of the branching ratios of the $ B_s \to \ell \ell ^{\prime}$ processes in the $Z^{\prime}$ model.}
\label{tab_Btollp}
\end{table}
\section{Numerical analysis and discussions}\label{NAD}
\subsection{Analysis of $B_s \to \ell \ell^{\prime}$ process}
We estimate the numerical values of the branching ratios of the $B_s \to \ell \ell ^{\prime}$ processes, shown in Table~\ref{tab_Btollp}. One can observe that the branching fraction of the $B_s \to \mu e$ mode is highly suppressed compared to the $\tau \mu$ and $\tau e$ channels ($\mathcal{O}(10^{-9})$). We show the results for three $m_{Z^{\prime}}$ values and find that the contribution increases in the presence of the NP couplings.
\begin{table}[htp]
\centering
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c|c|}
\hline
Observable & & $m_{Z^{\prime}}$=4.5 TeV & $m_{Z^{\prime}}$=6.0 TeV & $m_{Z^{\prime}}$=7.0 TeV \\
\hline
\hline
\multicolumn{5}{|c|}{$B_s \to K \ell \ell ^{\prime}$ ($Z'$ contribution)}\\
\hline
\hline
\multirow{2}{*}{$\mathcal{B}_ {\mu e}\times 10^{-13}$}
& $\mathcal{S} - \rm I$ & $0.221$ & $0.697$ & $1.291$ \\
\cline{2-5}
& $\mathcal{S} - \rm II$ & 0.919 & 2.892 & 5.376 \\
\hline
\multirow{2}{*}{$\mathcal{B}_ {\tau \mu}\times 10^{-8}$}
& $\mathcal{S} - \rm I$ & $0.221$ & $0.693$ & $1.292$ \\
\cline{2-5}
& $\mathcal{S} - \rm II$ & $0.354$ & $1.126$ & $2.096$ \\
\hline
\multirow{2}{*}{$\mathcal{B}_ {\tau e}\times 10^{-8}$}
& $\mathcal{S} - \rm I$ & $0.292$ & $0.921$ & $1.705$ \\
\cline{2-5}
& $\mathcal{S} - \rm II$ & 0.458 & 1.453 & 2.702 \\
\hline
\hline
\end{tabular}
}
\caption{Estimated upper limit values of $ B \to K \ell \ell ^{\prime}$ processes in $Z^{\prime}$ model}
\label{tab_BtoK}
\end{table}
\begin{figure}[htb]
\includegraphics[scale=0.57]{BtoKtaumuL=R.pdf}
\quad
\includegraphics[scale=0.57]{BtoKtaumuR=0.pdf}
\quad
\includegraphics[scale=0.57]{BtoKtaueL=R.pdf}
\quad
\includegraphics[scale=0.57]{BtoKtaueR=0.pdf}
\quad
\includegraphics[scale=0.57]{BtoKmueL=R.pdf}
\quad
\includegraphics[scale=0.57]{BtoKmueR=0.pdf}
\caption{Variation of branching ratios of $B\to K \tau \mu$ (top), $B\to K \tau e$ (middle), $B \to K \mu e$ (bottom) processes in $Z^{\prime}$ model. The left (right) panel corresponds to $\mathcal{S}- {\rm I}$ (${\rm II}$).} \label{BtoK}
\end{figure}
\subsection{Analysis of $B \to K \ell \ell^{\prime}$ process}
Having determined the NP couplings in detail, we now proceed to analyse the above observables of the $B \to K \ell \ell ^{\prime}$ lepton flavor violating process mediated by the $b\to s \ell \ell^{\prime}$ transition in the $Z^{\prime}$ model.
In Fig.~\ref{BtoK}, we show the variation of the differential branching ratios of the $B \to K \tau \mu$ (top-left), $B \to K \tau e$ (middle-left) and $B \to K \mu e$ (bottom) processes with $q^2$ in the presence of the $Z ^{\prime}$ model. Here the magenta, blue, and green lines represent the contributions in the $Z^{\prime}$ model for three different values of $m_{Z^{\prime}}$. We observe that the observables have a higher contribution in the mid-$q^2$ region for the $B\to K\tau \mu$ and $B\to K\tau e$ processes, and in the low-$q^2$ region for the $B\to K\mu e$ transition. This behavior arises due to the lighter lepton masses involved in the latter mode. The NP contribution is enhanced for increasing $m_{Z^{\prime}}$ values. The predicted branching fractions are shown in Table~\ref{tab_BtoK}, which indicates that the branching fraction of the $B \to K \mu e$ channel is strongly suppressed compared to the $\tau \mu$ and $\tau e$ final states.
\begin{figure}[htb]
\includegraphics[scale=0.57]{BRBtoKstaumuL=R.pdf}
\quad
\includegraphics[scale=0.57]{BRBtoKstaumuR=0.pdf}
\quad
\includegraphics[scale=0.57]{FLBtoKstaumuL=R.pdf}
\quad
\includegraphics[scale=0.57]{FLBtoKstaumuR=0.pdf}
\quad
\includegraphics[scale=0.57]{FBBtoKstaumuL=R.pdf}
\quad
\includegraphics[scale=0.57]{FBBtoKstaumuR=0.pdf}
\caption{$B\to K^*\tau \mu$: $d\mathcal{B}/dq^2$, $F_L (q^2)$ and $A_{FB} (q^2)$ for $\mathcal{S}- {\rm I}$ (left panel) and $\mathcal{S}- {\rm II}$ (right panel).} {\label{BtoKstaumu}}
\end{figure}
For scenario $\mathcal{S} - {\rm II}$, we display the $q^2$ behavior of the branching ratios of the $B\to K \tau \mu$ and $B\to K \tau e$ processes in the top-right and middle-right panels of Fig.~\ref{BtoK}, respectively. For the observables, we use the central values of all input parameters and form factors. The colour description of the figures is the same as for $\mathcal{S} - {\rm I}$. Over the whole $q^2$ region, the contributions in the presence of the NP couplings indicate that the observables have higher values in the mid-$q^2$ region. The presence of the NP coupling increases the central values within the same order, $\mathcal{O}(10^{-8})$. Table~\ref{tab_BtoK} summarizes the estimated branching fractions in the whole kinematic region.
\begin{table}[htp]
\centering
\scalebox{0.7}{
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Observable & & $m_{Z^{\prime}}$=4.5 TeV & $m_{Z^{\prime}}$= 6.0 TeV & $m_{Z^{\prime}}$=7.0 TeV & $m_{Z^{\prime}}$=4.5 TeV & $m_{Z^{\prime}}$= 6.0 TeV & $m_{Z^{\prime}}$=7.0 TeV \\
\hline
\hline
\multicolumn{2}{|c|} {\textcolor{white}{.}} & \multicolumn{3}{|c|}{$B_s \to K^* \ell \ell ^{\prime}$ ($Z'$ contribution)} & \multicolumn{3}{|c|}{$B_s \to \phi \ell \ell ^{\prime}$ ($Z'$ contribution)}\\
\hline
\multirow{2}{*}{$\mathcal{B}_ {\mu e}\times 10^{-13}$}
& $\mathcal{S} - \rm I$ & $0.193$ & $0.608$ & $1.126$ & 0.523 & 1.647 & 3.052 \\
\cline{2-8}
& $\mathcal{S} - \rm II$ & 0.801 & 2.523 & 4.691 & 2.095 & 6.589& 12.203\\
\hline
\multirow{2}{*}{$\mathcal{B}_ {\tau \mu}\times 10^{-9}$}
& $\mathcal{S} - \rm I$ & $1.654$ & $5.178$ & $9.658$ & 4.107 & 12.856 & 23.976 \\
\cline{2-8}
& $\mathcal{S} - \rm II$ & $2.492$ & $7.912$ & $14.723$ & 6.185 & 19.638 & 36.541 \\
\hline
\multirow{2}{*}{$\mathcal{B}_ {\tau e}\times 10^{-9}$}
& $\mathcal{S} - \rm I$ & $2.058$ & $6.483$ & $12.000$ & 5.111 & 16.103 & 29.805 \\
\cline{2-8}
& $\mathcal{S} - \rm II$ & $3.226$ & $10.222$ & $19.006$ & 8.013 & 25.388 & 47.203 \\
\hline
\multirow{2}{*}{$\mathcal{F}_L^{\mu e}$}
& $\mathcal{S} - \rm I$ & $0.488$ & $0.488$ & $0.488$ & 0.552 & 0.552 & 0.552 \\
\cline{2-8}
& $\mathcal{S} - \rm II$ & 0.488 & 0.488 & 0.488 & 0.552 & 0.552 & 0.552 \\
\hline
\multirow{2}{*}{$\mathcal{F}_L^{\tau \mu}$}
& $\mathcal{S} - \rm I$ & 0.456 & 0.456 & 0.456 & 0.501 & 0.501 & 0.501 \\
\cline{2-8}
& $\mathcal{S} - \rm II$ & 0.468 & 0.468 & 0.468 & 0.514 & 0.514 & 0.514\\
\hline
\multirow{2}{*}{$\mathcal{F}_L^{\tau e}$}
& $\mathcal{S} - \rm I$ & 0.468 & 0.468 & 0.468 & 0.514 & 0.514 & 0.514 \\
\cline{2-8}
& $\mathcal{S} - \rm II$ & 0.468 & 0.468 & 0.468 & 0.514 & 0.514 & 0.514 \\
\hline
\multirow{2}{*}{$\mathcal{A}_{FB}^{\mu e}$}
& $\mathcal{S} - \rm I$ & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000\\
\cline{2-8}
& $\mathcal{S} - \rm II$ & 0.349 & 0.349 & 0.349 & 0.295 & 0.295 & 0.295\\
\hline
\multirow{2}{*}{$\mathcal{A}_{FB}^{\tau \mu}$}
& $\mathcal{S} - \rm I$ & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\
\cline{2-8}
& $\mathcal{S} - \rm II$ & 0.311 & 0.311 & 0.311 & 0.271 & 0.271 & 0.271 \\
\hline
\multirow{2}{*}{$\mathcal{A}_{FB}^{\tau e}$}
& $\mathcal{S} - \rm I$ & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\
\cline{2-8}
& $\mathcal{S} - \rm II$ & 0.311 & 0.311 & 0.311 & 0.272 & 0.272 & 0.272 \\
\hline
\hline
\end{tabular}
}
\caption{Upper limit values of $ B \to (K^*, \phi) \ell \ell ^{\prime}$ processes in $Z^{\prime}$ model}
\label{tab_BtoKsphi}
\end{table}
\subsection{Analysis of $B \to V(K^*,\phi) \ell \ell^{\prime}$ process}
In this sub-section, we provide a detailed study of $B \to V \ell \ell^{\prime}$ processes mediated by $b \to s \ell \ell^{\prime}$ quark level transition where the vector meson $V=K^*,~\phi$. We probe the NP effects on the associated observables such as differential branching ratio ($d\mathcal{B}/dq^2$), the forward-backward asymmetry ($A_{FB}$), and the longitudinal polarisation fraction ($F_L$).
In Figs.~\ref{BtoKstaumu} and \ref{BtoKstaue}, we analyse the $q^2$ variation of the above observables for the $B\to K^* \tau \mu$ and $B\to K^* \tau e$ processes, respectively. The color description of the plots is the same as before. The $q^2$-dependent branching ratios $d\mathcal{B}/dq^2$ show distinguishable contributions in the presence of the NP couplings; larger values of $m_{Z^{\prime}}$ induce larger contributions to the observable. In the middle (left, right) panels, however, the variation of the sensitive observable $F_L(q^2)$ is indistinguishable for the different NP couplings and coincides for all $m_{Z^{\prime}}$ entries. For the observable $A_{FB}(q^2)$, shown in the bottom-left panel, there is no NP contribution in scenario ${\rm I}$, whereas a definite contribution arises in scenario ${\rm II}$ (bottom-right panel). We also show the $q^2$ variation of the branching ratio of the $B \to K^* \mu e$ process in Fig.~\ref{BtoKsmue}.
\begin{figure}[htb]
\includegraphics[scale=0.57]{BRBtoKstaueL=R.pdf}
\quad
\includegraphics[scale=0.57]{BRBtoKstaueR=0.pdf}
\quad
\includegraphics[scale=0.57]{FLBtoKstaueL=R.pdf}
\quad
\includegraphics[scale=0.57]{FLBtoKstaueR=0.pdf}
\quad
\includegraphics[scale=0.57]{FBBtoKstaueL=R.pdf}
\quad
\includegraphics[scale=0.57]{FBBtoKstaueR=0.pdf}
\caption{$B\to K^*\tau e$: $d\mathcal{B}/dq^2$, $F_L (q^2)$ and $A_{FB} (q^2)$ for $\mathcal{S}- {\rm I}$ (left panel) and $\mathcal{S}- {\rm II}$ (right panel).} {\label{BtoKstaue}}
\end{figure}
In the low-$q^2$ region, the NP contribution in the presence of the $Z^{\prime}$ couplings is higher and varies remarkably with $q^2$ for the three values of $m_{Z^{\prime}}$. Over the whole kinematic region of $q^2$, the lepton polarisation asymmetry observable decreases but does not vanish. On the other hand, the observable $A_{FB}(q^2)$ vanishes over the whole $q^2$ kinematic region in scenario ${\rm I}$, whereas it receives a significant contribution in scenario ${\rm II}$. The associated plots are depicted in Fig.~\ref{BtoKsmue}.
We also present the values of the branching ratios in the whole kinematic region, as shown in Table~\ref{tab_BtoKsphi}. The branching ratios of the $B \to K^* \tau \mu$ and $B \to K^* \tau e$ processes are of the same order, i.e., $\mathcal{O} (10^{-9})$, and differ only in their central values, whereas the branching fraction of the $B \to K^* \mu e$ process is suppressed at $\mathcal{O} (10^{-13})$. The forward-backward asymmetry and the polarisation asymmetry observables differ among the above processes individually.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.57]{BRBtoKsmueL=R.pdf}
\quad
\includegraphics[scale=0.57]{BRBtoKsmueR=0.pdf}
\quad
\includegraphics[scale=0.57]{FLBtoKsmueL=R.pdf}
\quad
\includegraphics[scale=0.57]{FLBtoKsmueR=0.pdf}
\quad
\includegraphics[scale=0.57]{FBBtoKsmueL=R.pdf}
\quad
\includegraphics[scale=0.57]{FBBtoKsmueR=0.pdf}
\caption{The observables $d\mathcal{B}/dq^2$, $F_L (q^2)$ and $A_{FB} (q^2)$ of the $B\to K^*\mu e$ process in scenario - $\rm I$ (left panel) and scenario - $\rm II$ (right panel).} {\label{BtoKsmue}}
\end{figure}
Similar to the $B\to K^* \ell \ell^{\prime}$ process, we investigate another decay channel, $B_s \to \phi \ell \ell ^{\prime}$. We depict the $q^2$-dependent physical observables, namely the branching ratio, forward-backward asymmetry and polarisation asymmetry, for $B_s \to \phi \tau \mu$ and $B_s \to \phi \tau e$ in Figs.~\ref{Btophitaumu} and \ref{Btophitaue}, respectively, whereas Fig.~\ref{Btophimue} shows the same observables for the $B_s \to \phi \mu e$ decay channel. The top-left, top-middle and top-right panels show the branching ratio, polarisation asymmetry and forward-backward asymmetry for scenario - $\rm I$, respectively. Similarly, the bottom-left, bottom-middle and bottom-right panels depict the same observables in scenario - $\rm II$.
In the presence of the NP couplings of the $Z^{\prime}$ model, we obtain results similar to those of the previous channel, with differences in the variation due to the meson masses and the transition form factors involved in this analysis.
\begin{figure}[htb]
\includegraphics[scale=0.561]{BRBtophitaumuL=R.pdf}
\quad
\includegraphics[scale=0.561]{FLBtophitaumuL=R.pdf}
\quad
\includegraphics[scale=0.561]{FBBtophitaumuL=R.pdf}
\quad
\includegraphics[scale=0.561]{BRBtophitaumuR=0.pdf}
\quad
\includegraphics[scale=0.561]{FLBtophitaumuR=0.pdf}
\quad
\includegraphics[scale=0.561]{FBBtophitaumuR=0.pdf}
\caption{Variation of BR (top-left), $F_L$ (top-middle) and $A_{FB}$ (top-right) of the $B_s\to \phi\tau \mu$ process in $\mathcal{S}- {\rm I}$; the bottom panels (left, middle and right) depict the same for $\mathcal{S}- {\rm II}$.}{\label{Btophitaumu}}
\end{figure}
\begin{figure}[htb]
\includegraphics[scale=0.561]{BRBtophitaueL=R.pdf}
\quad
\includegraphics[scale=0.561]{FLBtophitaueL=R.pdf}
\quad
\includegraphics[scale=0.561]{FBBtophitaueL=R.pdf}
\quad
\includegraphics[scale=0.561]{BRBtophitaueR=0.pdf}
\quad
\includegraphics[scale=0.561]{FLBtophitaueR=0.pdf}
\quad
\includegraphics[scale=0.561]{FBBtophitaueR=0.pdf}
\caption{The $q^2$ variation of BR (top-left), $F_L$ (top-middle) and $A_{FB}$ (top-right) of the $B_s\to \phi\tau e$ process in $\mathcal{S}- {\rm I}$; the bottom panels (left, middle and right) depict the same for $\mathcal{S}- {\rm II}$.}{\label{Btophitaue}}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.561]{BRBtophimueL=R.pdf}
\quad
\includegraphics[scale=0.561]{FLBtophimueL=R.pdf}
\quad
\includegraphics[scale=0.561]{FBBtophimueL=R.pdf}
\quad
\includegraphics[scale=0.561]{BRBtophimueR=0.pdf}
\quad
\includegraphics[scale=0.561]{FLBtophimueR=0.pdf}
\quad
\includegraphics[scale=0.561]{FBBtophimueR=0.pdf}
\caption{The $q^2$ variation of BR (top-left), $F_L$ (top-middle) and $A_{FB}$ (top-right) of the $B_s\to \phi\mu e$ process in $\mathcal{S}- {\rm I}$; the bottom panels (left, middle and right) depict the same for $\mathcal{S}- {\rm II}$.}{\label{Btophimue}}
\end{figure}
\newpage
\subsection{Analysis of $B \to T(K_2^*,f_2^{\prime}) \ell \ell^{\prime}$ process}
Here, we study in detail the exclusive semileptonic lepton-flavor-violating $B \to T(K_2^*,f_2^{\prime}) \ell \ell^{\prime}$ channels in the framework of the non-universal $Z^{\prime}$ model. As for the previous processes, we analyse the scenarios $\rm I$ and $\rm II$.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.57]{BRBtoK2staumuL=R.pdf}
\quad
\includegraphics[scale=0.57]{BRBtoK2staumuR=0.pdf}
\quad
\includegraphics[scale=0.57]{FBBtoK2staumuL=R.pdf}
\quad
\includegraphics[scale=0.57]{FBBtoK2staumuR=0.pdf}
\caption{The variation of $\mathcal{B}$ and $A_{FB}$ of the $B\to K_2^* \tau \mu$ process is shown in $\mathcal{S}- {\rm I}$ (left panel) and $\mathcal{S}- {\rm II}$ (right panel).} {\label{BtoK2staumu}}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.57]{BRBtoK2staueL=R.pdf}
\quad
\includegraphics[scale=0.57]{BRBtoK2staueR=0.pdf}
\quad
\includegraphics[scale=0.57]{FBBtoK2staueL=R.pdf}
\quad
\includegraphics[scale=0.57]{FBBtoK2staueR=0.pdf}
\caption{The variation of the $\mathcal{B}$ and $ A_{FB}$ of $B\to K_2^* \tau e$ process in the $Z^{\prime}$ model: Left and right panels are for scenario - ${\rm I}$ and ${\rm II}$, respectively.} {\label{BtoK2staue}}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.57]{BRBtoK2smueL=R.pdf}
\quad
\includegraphics[scale=0.57]{BRBtoK2smueR=0.pdf}
\quad
\includegraphics[scale=0.57]{FBBtoK2smueL=R.pdf}
\quad
\includegraphics[scale=0.57]{FBBtoK2smueR=0.pdf}
\caption{The variation of differential branching ratio and forward-backward asymmetry of $B\to K_2^* \mu e$ process with respect to $q^2$. The left (right) panel indicates $\mathcal{S} - {\rm I} ({\rm II})$.}{\label{BtoK2smue}}
\end{figure}
\begin{table}[htp]
\centering
\scalebox{0.7}{
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Observable & & $m_{Z^{\prime}}$=4.5 TeV & $m_{Z^{\prime}}$=6.0 TeV & $m_{Z^{\prime}}$=7.0 TeV & $m_{Z^{\prime}}$=4.5 TeV & $m_{Z^{\prime}}$=6.0 TeV & $m_{Z^{\prime}}$=7.0 TeV \\
\hline
\hline
\multicolumn{2}{|c|} {\textcolor{white}{.}} & \multicolumn{3}{|c|}{$B \to K_2^* \ell \ell ^{\prime}$ ($Z'$ contribution)} & \multicolumn{3}{|c|}{$B \to f_2^{\prime} \ell \ell ^{\prime}$ ($Z'$ contribution)}\\
\hline
\multirow{2}{*}{$\mathcal{B}_ {\mu e}\times 10^{-12}$}
& $\mathcal{S} - \rm I$ & $0.010$ & $0.034$ & $0.063$ & 0.009 & 0.029 & 0.054 \\
\cline{2-8}
& $\mathcal{S} - \rm II$ & 0.044 & 0.141 & 0.262 & 0.037 & 0.117 & 0.218 \\
\hline
\multirow{2}{*}{$\mathcal{B}_ {\tau \mu}\times 10^{-9}$}
& $\mathcal{S} - \rm I$ & 0.420 & 1.316 & 2.455 & 0.365 & 1.144 & 2.133\\
\cline{2-8}
& $\mathcal{S} - \rm II$ & 0.630 & 2.003 & 3.727 & 0.549 & 1.743 & 3.243\\
\hline
\multirow{2}{*}{$\mathcal{B}_ {\tau e}\times 10^{-9}$}
& $\mathcal{S} - \rm I$ & $0.525$ & $1.655$ & $3.063$ & 0.457 & 1.440 & 2.666\\
\cline{2-8}
& $\mathcal{S} - \rm II$ & $0.823$ & $2.609$ & $4.851$ & 0.716 & 2.271 & 4.222 \\
\hline
\multirow{2}{*}{$\mathcal{A}_{FB}^{\mu e}$}
& $\mathcal{S} - \rm I$ & $-0.009$ & $-0.009$ & $-0.009$ & -0.010 & -0.010 & -0.010 \\
\cline{2-8}
& $\mathcal{S} - \rm II$ & -0.120 & -0.120 & -0.120 & -0.123 & -0.123 & -0.123 \\
\hline
\multirow{2}{*}{$\mathcal{A}_{FB}^{\tau \mu}$}
& $\mathcal{S} - \rm I$ & 0.231 & 0.231 & 0.231 & 0.233 & 0.233 & 0.233 \\
\cline{2-8}
& $\mathcal{S} - \rm II$ & 0.122 & 0.122 & 0.122 & 0.122 & 0.122 & 0.122\\
\hline
\multirow{2}{*}{$\mathcal{A}_{FB}^{\tau e}$}
& $\mathcal{S} - \rm I$ & 0.247 & 0.247 & 0.247 & 0.248 & 0.248 & 0.248 \\
\cline{2-8}
& $\mathcal{S} - \rm II$ & 0.128 & 0.128 & 0.128 & 0.129 & 0.129 & 0.129 \\
\hline
\hline
\end{tabular}
}
\caption{Upper-limit values of the observables for the $B \to (K_2^*, f_2^{\prime}) \ell \ell ^{\prime}$ processes in the $Z^{\prime}$ model.}
\label{tab_BtoK2sf2p}
\end{table}
In Figs.~\ref{BtoK2staumu} and \ref{BtoK2staue}, we analyse the variation of the branching ratio and forward-backward asymmetry of the $B\to K_2^* \tau \mu$ and $B\to K_2^* \tau e$ processes with respect to $q^2$, respectively. In both figures, the left panel corresponds to scenario - $\rm I$ whereas the right one corresponds to scenario - $\rm II$. The former observable, $d\mathcal{B}/dq^2$, receives distinguishable contributions for the different $m_{Z^{\prime}}$ values, with the largest values in the mid-$q^2$ regime. The latter observable has a significant contribution with no zero-crossing point in scenario - $\rm I$, while it exhibits a zero crossing at $q^2 \simeq 10.7$ $\rm GeV^2$ in scenario - $\rm II$ in the presence of the $Z^{\prime}$ model; in either scenario the contribution does not change with the $m_{Z^{\prime}}$ values. These results are shown in the bottom panels of Figs.~\ref{BtoK2staumu} and \ref{BtoK2staue}. Similarly, the analysis of the $B\to K_2^* \mu e$ process is shown in Fig.~\ref{BtoK2smue}. The branching fraction of the $\mu e$ final state starts from higher values at $q^2=0$ and then reduces to zero. In the forward-backward asymmetry, the NP contribution remains constant near $q^2 \simeq 0$ in scenario - $\rm I$, whereas scenario - $\rm II$ shows significant variation with respect to $q^2$. In both scenarios, no zero-crossing point is observed.
Similar to the $B\to K_2^* \ell \ell^{\prime}$ process, we also probe another $B\to T \ell \ell ^{\prime}$ process, with $T= f_2^{\prime}$ and $\ell , \ell ^{\prime} = e, \mu, \tau$. With the non-universal $Z^{\prime}$ NP couplings, one obtains significant contributions to the branching ratio, comparable to those of the $B \to K_2^*$ channel. We also investigate the $A_{FB}(q^2)$ observable and obtain results quite similar to those of the previous channels $B\to K_2^* \tau \mu$ and $B\to K_2^* \tau e$.
The corresponding plots are shown in Figs.~\ref{Btof2ptaumu} and \ref{Btof2ptaue} for the $B\to f_2 ^ {\prime} \tau \mu$ and $B\to f_2 ^ {\prime} \tau e$ processes, respectively. We obtain similar results, with different contributions as the masses and form factors change accordingly for the $B \to f_2 ^{\prime} \ell \ell ^{\prime}$ process. Here also we investigate the three values $m_{Z^{\prime}} = 4.5, 6.0$ and $7.0$ $\rm TeV$. Similarly, in Fig.~\ref{Btof2pmue} we study the branching ratio and the forward-backward asymmetry of the $B \to f_2 ^{\prime} \mu e$ channel and obtain similar results. The top-left and top-right panels show the branching ratio, whereas the bottom-left and bottom-right panels depict the $A_{FB}(q^2)$ observable in scenario - $\rm I$ and scenario - $\rm II$, respectively.
We also report the theoretical estimates of the given observables for both the $B \to K_2 ^* \ell \ell ^{\prime}$ and $B \to f_2 ^{\prime} \ell \ell ^{\prime}$ processes in Table~\ref{tab_BtoK2sf2p}. For the scenarios $\rm I$ and $\rm II$, the numerical values of the observables of these LFV decays, evaluated over the allowed $q^2$ region, differ in the presence of the non-universal $Z^{\prime}$ model.
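As a rough numerical cross-check of Table~\ref{tab_BtoK2sf2p}, the tabulated branching-ratio limits grow with the $Z^{\prime}$ mass approximately as $m_{Z^{\prime}}^4$. The short Python sketch below (illustrative only, using the scenario - $\rm I$ entries of $\mathcal{B}_{\tau\mu}$ for $B\to K_2^*\tau\mu$ read off the table) verifies this at the percent level; the other rows follow a similar pattern within rounding.

```python
# Illustrative consistency check: the upper limits in Table tab_BtoK2sf2p
# scale approximately as (m_Z' / 4.5 TeV)^4.
masses = [4.5, 6.0, 7.0]             # m_Z' in TeV
br_taumu_s1 = [0.420, 1.316, 2.455]  # B(B -> K2* tau mu) x 10^9, scenario I

for m, br in zip(masses, br_taumu_s1):
    scaled = br_taumu_s1[0] * (m / masses[0]) ** 4
    assert abs(scaled - br) / br < 0.02  # agreement within rounding
```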
\begin{figure}[htb]
\centering
\includegraphics[scale=0.57]{BRBtof2ptaumuL=R.pdf}
\quad
\includegraphics[scale=0.57]{BRBtof2ptaumuR=0.pdf}
\quad
\includegraphics[scale=0.57]{FBBtof2ptaumuL=R.pdf}
\quad
\includegraphics[scale=0.57]{FBBtof2ptaumuR=0.pdf}
\caption{The $q^2$ variation of $\mathcal{B}$ and $ A_{FB}$ of $B\to f_2^{\prime} \tau \mu$ channel in the scenario - $\rm I$ (left panel) and scenario - $\rm II$ (right panel).}{\label{Btof2ptaumu}}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.57]{BRBtof2ptaueL=R.pdf}
\quad
\includegraphics[scale=0.57]{BRBtof2ptaueR=0.pdf}
\quad
\includegraphics[scale=0.57]{FBBtof2ptaueL=R.pdf}
\quad
\includegraphics[scale=0.57]{FBBtof2ptaueR=0.pdf}
\caption{ The $q^2$ dependency of the branching ratio and forward-backward asymmetry of $B\to f_2^{\prime} \tau e$ process. $\mathcal{S}- {\rm I}$ and $\mathcal{S}- {\rm II}$, respectively shown in left and right panel.}{\label{Btof2ptaue}}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.57]{BRBtof2pmueL=R.pdf}
\quad
\includegraphics[scale=0.57]{BRBtof2pmueR=0.pdf}
\quad
\includegraphics[scale=0.57]{FBBtof2pmueL=R.pdf}
\quad
\includegraphics[scale=0.57]{FBBtof2pmueR=0.pdf}
\caption{Variation of the branching ratio ($\mathcal{B}$) and forward-backward asymmetry ($ A_{FB}$) of $B\to f_2^{\prime} \mu e$ process. Left panel (right panel) indicates $\mathcal{S}- {\rm I}$ ($\mathcal{S}- {\rm II}$).}{\label{Btof2pmue}}
\end{figure}
\newpage
\subsection{Lepton non-universality observables}
Analogous to the clean observables $R_K$ and $R_{K^*}$, we present the behavior of the LNU observables for the exclusive LFV decays, defined in Eq.~(\ref{LNU}). In the left panels of Fig.~\ref{LNUO}, we depict the $q^2$ variations of the LNU observables $\mathcal{R}^{\mu e}_K$, $\mathcal{R}^{\mu e}_{ K^*}$, $\mathcal{R}^{\mu e}_ \phi$, $\mathcal{R}^{\mu e}_{K_2^*}$ and $\mathcal{R}^{\mu e}_{ f_2^{\prime}}$ in scenario - $\rm I$, whereas the right panels display scenario - $\rm II$, in the bin $q^2 \in [1.0,6.0]$ $\rm GeV^2$ compatible with the LHCb measurements. For each $m_{Z^{\prime}}$ value, the LNU observables remain nearly constant in $q^2$; all of $\mathcal{R}^{\mu e}_{(K, K^*, \phi, K_2^*, f_2^{\prime})}$ are of $\mathcal{O} (10^{-6})$ in the given figure. Here the magenta, blue and green lines correspond to $m_{Z^{\prime}} = 4.5, 6.0$ and 7.0 $\rm TeV$, respectively. In the region $1.0 \leq q^2 \leq 6.0$ $\rm GeV^2$, all the discussed observables show significant contributions with an almost constant value. The other LNU observables, corresponding to the $\tau (e, \mu)$ channels, do not display such constant values in this regime, and we have therefore not considered them in our analysis. The numerical estimates of all the $\mathcal{R}$ observables are shown in Table~\ref{tab_R}.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.57]{RkmueL=R.pdf}
\quad
\includegraphics[scale=0.57]{RkmueR=0.pdf}
\quad
\includegraphics[scale=0.57]{RksmueL=R.pdf}
\quad
\includegraphics[scale=0.57]{RksmueR=0.pdf}
\quad
\includegraphics[scale=0.57]{RphimueL=R.pdf}
\quad
\includegraphics[scale=0.57]{RphimueR=0.pdf}
\quad
\includegraphics[scale=0.57]{Rk2smueL=R.pdf}
\quad
\includegraphics[scale=0.57]{Rk2smueR=0.pdf}
\quad
\includegraphics[scale=0.57]{Rf2pmueL=R.pdf}
\quad
\includegraphics[scale=0.57]{Rf2pmueR=0.pdf}
\caption{The variation of the LNU observables $\mathcal{R}^{\mu e}_{K, K^*, \phi, K_2^*, f_2^{\prime}}$ in the two scenarios: $\mathcal{S}$ - $\rm I$ (left panel) and $\mathcal{S}$ - $\rm II$ (right panel). }{\label{LNUO}}
\end{figure}
\begin{table}[htp]
\centering
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c|c|}
\hline
Observable & & $m_{Z^{\prime}}$=4.5 TeV & $m_{Z^{\prime}}$=6.0 TeV & $m_{Z^{\prime}}$=7.0 TeV \\
\hline
\hline
\multicolumn{5}{|c|}{$\mathcal{R}^{\mu e}_{K, K^*, \phi, K_2^*, f_2^{\prime}}$ ($Z'$ contribution)}\\
\hline
\hline
\multirow{2}{*}{$\mathcal{R}^{\mu e}_{K} \times 10^{6}$}
& $\mathcal{S} - \rm I$ & $0.276$ & $0.869$ & $1.611$ \\
\cline{2-5}
& $\mathcal{S} - \rm II$ & 4.520 & 14.121 & 26.305 \\
\hline
\multirow{2}{*}{$\mathcal{R}^{\mu e}_{K^*}\times 10^{6}$}
& $\mathcal{S} - \rm I$ & $0.067$ & $0.213$ & $0.395$ \\
\cline{2-5}
& $\mathcal{S} - \rm II$ & $0.271$ & $0.854$ & $1.583$ \\
\hline
\multirow{2}{*}{$\mathcal{R}^{\mu e}_{\phi}\times 10^{6}$ }
& $\mathcal{S} - \rm I$ & $0.287$ & $0.902$ & $1.671$ \\
\cline{2-5}
& $\mathcal{S} - \rm II$ & 4.601 & 14.409 & 26.905 \\
\hline
\multirow{2}{*}{$\mathcal{R}^{\mu e}_{K_2^*}\times 10^{6}$ }
& $\mathcal{S} - \rm I$ & $0.301$ & $0.948$ & $1.757$ \\
\cline{2-5}
& $\mathcal{S} - \rm II$ & 5.226 & 16.353 & 30.557 \\
\hline
\multirow{2}{*}{$\mathcal{R}^{\mu e}_{f_2^{\prime}}\times 10^{6}$ }
& $\mathcal{S} - \rm I$ & $0.298$ & $0.939$ & $1.740$ \\
\cline{2-5}
& $\mathcal{S} - \rm II$ & 5.120 & 16.022 & 29.940 \\
\hline
\hline
\end{tabular}
}
\caption{Estimated upper-limit values of the LNU observables $\mathcal{R}^{\mu e}_{K, K^*, \phi, K_2^*, f_2^{\prime}}$.}
\label{tab_R}
\end{table}
\section{Conclusion}\label{CON}
In this work, we have investigated the lepton-flavor-violating (semi)leptonic $ B_s \to \ell \ell^{\prime}$ and $B_{(s)} \to (K^{(*)}, \phi, f_2^{\prime}, K_2^*) \ell \ell ^{\prime}$ channels induced by the $b\to s \ell \ell ^{\prime}$ neutral-current transition in the presence of a non-universal $Z^{\prime}$ model. These decays are extremely rare in the SM, as they arise only at loop level and are suppressed by the tiny neutrino masses. However, extending the SM by an Abelian gauge group $U(1)^{\prime}$ induces tree-level contributions mediated by the non-universal $Z^{\prime}$ vector boson. We constrain the NP couplings from the branching fractions of $B_s \to \ell \ell$ and $B_s \to \phi \ell \ell$ and the angular observable $P_5^{\prime}$ of the $B \to K^* \ell \ell$ processes with a naive $\chi ^2$ analysis. Using these couplings, we analyse the variation of the branching fractions, forward-backward asymmetries and polarisation asymmetries of all the associated (semi)leptonic $ B_s \to \ell \ell^{\prime}$ and $B_{(s)} \to (K^{(*)}, \phi, f_2^{\prime}, K_2^*) \ell \ell ^{\prime}$ decay channels in the presence of the non-universal $Z^{\prime}$ boson, and compute the theoretical values of all the observables. To inspect the presence of lepton non-universality, we construct and analyse the observables $\mathcal{R}^{\mu e}_{K, K^*, \phi, K_2^*, f_2^{\prime}}$ in the $q^2 \in [1.0,6.0]$ $\rm GeV^2$ regime, compatible with the LHCb measurements. We find that the $q^2$ variations of the observables receive distinct contributions in the presence of the NP couplings for the three different $m_{Z^{\prime}}$ values. Additionally, the theoretically estimated values are sizeable and have definite contributions. These decay channels could be further analysed at upcoming LHCb runs and B-factories with a large number of events, which could lead to an unambiguous signal of new physics.
\acknowledgments
LN and RD would like to acknowledge DST INSPIRE fellowship programme for financial support.
Given a class $\mathcal{C}$ of graphs and integers $\Delta$ and $k$, the \emph{degree/diameter problem} aims to find the maximum order $N(\Delta,k,\mathcal{C})$ of a graph in $\mathcal{C}$ with maximum degree $\Delta$ and diameter $k$. For background on this problem the reader is referred to the survey \cite{MS05a}.
Given a surface $\Sigma$, let $\mathcal{G}(\Sigma)$ denote the class of graphs embeddable in $\Sigma$. We set $N(\Delta, k, \Sigma):=N(\Delta, k, \mathcal{G}(\Sigma))$ for simplicity.
The Moore bound \[1+\Delta+\Delta(\Delta-1)+\ldots+\Delta(\Delta-1)^{k-1}\] provides an upper bound for the order of an arbitrary graph with maximum degree $\Delta$ and diameter $k$. This bound, however, is a very rough upper bound when considering graphs on surfaces. The current best upper bound for $N(\Delta,k,\Sigma)$ was provided by \v{S}iagiov\'a and Simanjuntak \cite{SR04} who showed that, for every
surface $\Sigma$ of Euler genus $g$, every $k\ge2$ and every $\Delta\ge3$, \[N(\Delta,k,\Sigma)\le cgk\Delta^{\lfloor k/2\rfloor}.\]
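For $\Delta\ge3$ the Moore bound sums to the closed form $1+\Delta\frac{(\Delta-1)^{k}-1}{\Delta-2}$; the following Python sketch (illustrative only) checks the sum against the closed form.

```python
# Moore bound: 1 + Delta + Delta(Delta-1) + ... + Delta(Delta-1)^(k-1).

def moore_bound(delta, k):
    return 1 + sum(delta * (delta - 1) ** (i - 1) for i in range(1, k + 1))

def moore_bound_closed(delta, k):
    # geometric-series closed form, valid for delta >= 3
    return 1 + delta * ((delta - 1) ** k - 1) // (delta - 2)

for delta in range(3, 11):
    for k in range(1, 9):
        assert moore_bound(delta, k) == moore_bound_closed(delta, k)

# e.g. delta = 3, k = 2 gives 10, attained by the Petersen graph
assert moore_bound(3, 2) == 10
```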
Before continuing we clarify what we mean by a surface and its Euler genus, as different authors adopt different definitions. A {\it surface} is a compact (connected) 2-manifold (without boundary). Every surface is homeomorphic to the sphere with $h$ handles or the sphere with $c$ cross-caps \cite[Theorem~3.1.3]{MohTho01}. The sphere with $h$ handles has {\it Euler genus} $2h$, while the sphere with $c$ cross-caps has {\it Euler genus} $c$.
The bound $cgk\Delta^{\lfloor k/2\rfloor}$ still seems to be rough for graphs on surfaces of Euler genus $g$, as demonstrated in \cite{PVW12}. For the class $\mathcal{P}$ of planar graphs the paper \cite{PVW12} recently showed that, for each fixed $k$, the limit \[\lim_{\Delta\rightarrow \infty}\frac{N(\Delta,k,\mathcal{P})}{\Delta^{\lfloor k/2\rfloor}}\] exists and is an absolute constant, independent of $k$.
Knor and \v{S}ir\'a\v{n} \cite{KS97} proved that, for every surface $\Sigma$, there exists an integer $\Delta_0$ such that, for all $\Delta\ge\Delta_0$, \[N(\Delta, 2, \Sigma)=N(\Delta, 2, \mathcal{P})=\lfloor\frac{3}{2}\Delta\rfloor+1.\] This result motivated Miller and \v{S}ir\'a\v{n} to pose the following problem \cite[pp.~46]{MS05a}.
\begin{problem}[{\cite[pp.~46]{MS05a}}]
\label{prob:Miller-Siran}
Prove or disprove that, for each surface $\Sigma$ and each diameter $k\ge2$, there exists a constant $\Delta_0$ such that, for maximum degree $\Delta\ge \Delta_0$, $N(\Delta,k,\Sigma)=N(\Delta,k,\mathcal{P})$.
\end{problem}
Problem \ref{prob:Miller-Siran} was answered in the negative by Pineda-Villavicencio and Wood \cite{PVW12}, where the authors proved that, for every
surface $\Sigma$ of Euler genus $g$, every odd $k\ge3$ and every $\Delta\ge \sqrt{1+24g}+2$, \[N(\Delta,k,\Sigma)\ge\sqrt{\frac{3}{8}g}\Delta^{\lfloor k/2\rfloor}.\]
In Section \ref{sec:LargeGrapSurf} we construct graphs whose orders improve the above lower bound for $N(\Delta,k,\Sigma)$ by a factor of $4$. We obtain
\[N(\Delta,k,\Sigma)\ge\begin{cases}6\Delta^{\lfloor k/2\rfloor}& \text{if $\Sigma$ is the Klein bottle}\\ \left(\frac{7}{2}+\sqrt{6g+\frac{1}{4}}\right)\Delta^{\lfloor k/2\rfloor}& \text{otherwise.}\end{cases}\]
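The factor-of-$4$ claim can be checked numerically for the generic case: the ratio of the new coefficient $\frac{7}{2}+\sqrt{6g+\frac{1}{4}}$ to the old coefficient $\sqrt{\frac{3}{8}g}$ of \cite{PVW12} exceeds $4$ for every $g\ge1$ and tends to $\sqrt{6/(3/8)}=4$ as $g\to\infty$ (illustrative Python).

```python
import math

# Ratio of the new lower-bound coefficient to the old one from [PVW12].
def ratio(g):
    return (3.5 + math.sqrt(6 * g + 0.25)) / math.sqrt(3 * g / 8)

for g in [1, 2, 10, 100, 10**6]:
    assert ratio(g) > 4
# the ratio approaches 4 from above for large genus
assert abs(ratio(10**8) - 4) < 1e-3
```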
Sections \ref{sec:LPlanar} and \ref{sec:LGTorus} are devoted to the construction of new largest known planar and toroidal graphs for maximum degree $3\le\Delta\le 10$ and diameter $2\le k\le10$. In the appendix we provide tables cataloguing the largest known such graphs. In the case of planar graphs the existing table was updated with the new orders; for toroidal graphs no such table existed, only a table recording the largest regular toroidal graphs being available at \cite{Preen_RegToroidal}. We also updated the online table of largest known planar graphs \cite{LPP_Planar} accordingly, and created an analogous online table for toroidal graphs \cite{LPP_Toroidal}.
Our constructions extend approaches put forward in \cite{FHS98}; Section \ref{sec:MultDiag} explains the methodology.
\section{Multigraphs and diagrams}
\label{sec:MultDiag}
We start from the definition of a \emph{diagram} presented in \cite{FHS98}. A diagram $\mathcal{D}$ is a multigraph whose edges are labelled in the form $\alpha(\Delta, \beta)$ ($\alpha$ and $\beta$ positive integers); see Figure \ref{fig:PodsDiag} (a). An edge in a diagram $\mathcal{D}$ is called \emph{thin} if $\alpha=\beta=1$, otherwise it is called \emph{thick}. Similarly, a vertex in $\mathcal{D}$ is called \emph{thin} if all its incident edges are thin, otherwise it is called \emph{thick}. For thin edges, labels are omitted. The {\it unlabelled degree} of a vertex $v$ in $\mathcal{D}$ is the number of edges incident with $v$, while the (labelled) \emph{degree} of a vertex $v$ in $\mathcal{D}$ is the sum of all the $\alpha$ values on the labels of the edges incident with $v$. For instance, in Fig.~\ref{fig:PodsDiag} (a) vertex $v$ has unlabelled degree two and (labelled) degree three. The \emph{weight} of a walk in $\mathcal{D}$ is the sum of all the $\beta$ values on the labels of the edges of the walk. An edge $e$ in $\mathcal{D}$ with an endvertex of unlabelled degree one is called {\it pending}, otherwise $e$ is called {\it non-pending}. For an integer $k\ge 2$, $\mathcal{D}_{\Delta}^k$ denotes any diagram $\mathcal{D}$ with maximum degree $\Delta$ and labels $\alpha(\Delta,\beta)$ satisfying $\beta\le k$ for non-pending edges, and $\beta\le \lfloor k/2\rfloor$ for pending edges.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=1]{PodsDiag}
\caption{}
\label{fig:PodsDiag}
\end{center}
\end{figure}
The {\it depth} of a tree rooted at a vertex $v$ is the length of a longest path from $v$ to the tree leaves. For a positive integer $\gamma$, a \emph{$(\Delta,\gamma)$-tree} is a tree of depth $\gamma$ with its root and leaves having degree $1$ and all other vertices having degree $\Delta$. Given a positive integer $\beta$, we call a \emph{$(\Delta,\beta)$-pod} the planar graph obtained from two $(\Delta,\lfloor\beta/2\rfloor)$-trees, identifying their leaves if $\beta$ is even and matching their leaves if $\beta$ is odd; see Figure \ref{fig:PodsDiag} (b) for an example. The roots of the two trees used in the pod construction are the \emph{roots} of the pod; the remaining vertices are called \emph{internal}. In a pod, a path linking its roots is called a \emph{vein}. The number of internal vertices in a $(\Delta, \beta)$-pod with $\beta\ge2$ is $$\frac{\Delta(\Delta-1)^{(\beta-2)/2}-2}{\Delta-2} \quad \textrm{if $\beta$ is even, }$$and $$\frac{2(\Delta-1)^{(\beta-1)/2}-2}{\Delta-2} \quad \textrm{if $\beta$ is odd.}$$
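The two counts above can be double-checked by counting the non-root vertices of the two trees level by level, identifying the leaves once when $\beta$ is even; a small Python sketch (illustrative only):

```python
# Check the closed-form internal-vertex counts of a (Delta, beta)-pod against
# direct level-by-level counting of the two (Delta, floor(beta/2))-trees.

def tree_nonroot_vertices(delta, gamma):
    # levels 1..gamma of a (Delta, gamma)-tree: 1, delta-1, (delta-1)^2, ...
    return sum((delta - 1) ** (i - 1) for i in range(1, gamma + 1))

def pod_internal(delta, beta):
    gamma = beta // 2
    count = 2 * tree_nonroot_vertices(delta, gamma)
    if beta % 2 == 0:
        count -= (delta - 1) ** (gamma - 1)  # identified leaves counted once
    return count

def pod_internal_closed_form(delta, beta):
    if beta % 2 == 0:
        return (delta * (delta - 1) ** ((beta - 2) // 2) - 2) // (delta - 2)
    return (2 * (delta - 1) ** ((beta - 1) // 2) - 2) // (delta - 2)

for delta in range(3, 8):
    for beta in range(2, 10):
        assert pod_internal(delta, beta) == pod_internal_closed_form(delta, beta)
```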
From a diagram $\mathcal{D}_{\Delta}^{k}$ we define a compound graph $G(\mathcal{D}_{\Delta}^{k})$ as follows: a thick non-pending edge $e$ in $\mathcal{D}_{\Delta}^{k}$, labelled by $\alpha(\Delta, \beta)$, is replaced by $\alpha$ ``disjoint'' $(\Delta, \beta)$-pods. By ``disjoint'' pods we mean pods that only share their roots. The endvertices of $e$ are identified with the roots of the pods; see Figure \ref{fig:PodsDiag} (c). If instead $e$ is a thick pending edge in $\mathcal{D}_{\Delta}^{k}$, $e$ is replaced by $\alpha$ ``disjoint'' $(\Delta, \beta)$-trees. The endvertex of $e$ with unlabelled degree other than one is identified with the roots of the trees replacing $e$.
Next we establish relations between $\mathcal{D}_{\Delta}^{k}$ and $G(\mathcal{D}_{\Delta}^{k})$; most of them already appeared in \cite{FHS98}.
\begin{prop}[{\cite[Lemma 1]{FHS98}}]\label{prop:Degree} The maximum degree of $G(\mathcal{D}_{\Delta}^{k})$ is at most $\Delta$.
\end{prop}
\begin{lem}[{\cite[Lemma 2]{FHS98}}]\label{lem:lemma2} Consider a diagram $\mathcal{D}_{\Delta}^{k}$ and the graph $G(\mathcal{D}_{\Delta}^{k})$, and suppose that $e$ and $e'$ are two thick edges of $\mathcal{D}_{\Delta}^{k}$ which lie on a cycle of weight at most $2k+1$. Let $v$ and $v'$ be vertices in $G(\mathcal{D}_{\Delta}^{k})$ of pods corresponding to $e$ and $e'$ respectively. Then the distance in $G(\mathcal{D}_{\Delta}^{k})$ between $v$ and $v'$ is at most $k$.
\end{lem}
\begin{prop}\label{prop:Diameter} Consider a diagram $\mathcal{D}_{\Delta}^{k}$ and the graph $G(\mathcal{D}_{\Delta}^{k})$, and suppose the following conditions hold.
\begin{enumerate}
\item Any two thick edges of $\mathcal{D}_{\Delta}^{k}$ are contained in a closed walk of weight at most $2k+1$.
\item For any thin vertex $v$ and any thick edge $e$ of $\mathcal{D}_{\Delta}^{k}$, $v$ and $e$ lie in closed walk of weight at most $2k+1$.
\item There is a path of weight at most $k$ between any two thin vertices of $\mathcal{D}_{\Delta}^{k}$.
\end{enumerate}
Then the graph $G(\mathcal{D}_{\Delta}^{k})$ has diameter at most $k$.
\end{prop}
\begin{proof} If a thick non-pending edge $e$ of $\mathcal{D}_{\Delta}^{k}$ has label $\alpha(\Delta, \beta)$ then the distance between any two vertices in $G(\mathcal{D}_{\Delta}^{k})$ belonging to the pods which replaced $e$ is at most $k$ \cite[pp.~277]{FHS98}. Since $\beta\le \lfloor k/2\rfloor$, for a thick pending edge $e$ labelled by $\alpha(\Delta, \beta)$, the distance between any two vertices in $G(\mathcal{D}_{\Delta}^{k})$ belonging to the trees which replaced $e$ is at most $k$ as well. Condition (1) assures that any two vertices of $G(\mathcal{D}_{\Delta}^{k})$ belonging to pods replacing distinct thick edges are at distance at most $k$. Conditions (2) and (3) guarantee that the distance from a thin vertex to any vertex in a pod of $G(\mathcal{D}_{\Delta}^{k})$, and to any other thin vertex, is also at most $k$.
\end{proof}
As we will show in the proofs of Proposition \ref{prop:G(Y)-diameter} and Lemma \ref{lem:BasicLemma}, Condition (1) can often be relaxed to containment in two closed walks of weight $2k+2$, rather than in one closed walk of weight at most $2k+1$.
\section{Large planar graphs with odd diameter}
\label{sec:LPlanar}
Figure \ref{fig:DiagramY} (a) depicts the diagram $C_{\Delta}^k$ ($\Delta\ge 4$) suggested in \cite{FHS98}, which gives rise to the largest known planar graphs of maximum degree $\Delta\ge6$ and odd diameters $k\ge5$. The order of $G(C_{\Delta}^k)$ for odd $k\ge5$ is
\begin{align*}
|G(C_{\Delta}^k)|=(\lfloor\frac{9\Delta}{2}\rfloor-12)\frac{\Delta(\Delta-1)^{\frac{k-3}{2}}-2}{\Delta-2}+9.
\end{align*}
As noted in \cite{FHS98}, the diagram $C_{\Delta}^k$ has maximum degree $\Delta$ and readily satisfies the conditions of Proposition \ref{prop:Diameter}. Thus, the graph $G(C_{\Delta}^{k})$ has maximum degree $\Delta$ and diameter $k$. A minor modification of $C_{\Delta}^k$ produces a diagram $Y_{\Delta}^k$ (for odd $k\ge5$) with three additional vertices and, in the case of odd $\Delta$, with an extra pending edge; see Figures \ref{fig:DiagramY} (b) and (c). Clearly, in both cases $Y_{\Delta}^k$ has maximum degree $\Delta$.
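For concreteness, the order formula for $G(C_{\Delta}^k)$ is easy to evaluate; the Python sketch below (illustrative only) computes it via the internal-vertex count of a $(\Delta,k-1)$-pod.

```python
# |G(C_Delta^k)| for odd k >= 5 and Delta >= 4:
#   (floor(9*Delta/2) - 12) * (Delta*(Delta-1)^((k-3)/2) - 2)/(Delta-2) + 9

def order_C(delta, k):
    pod = (delta * (delta - 1) ** ((k - 3) // 2) - 2) // (delta - 2)
    return (9 * delta // 2 - 12) * pod + 9

# e.g. maximum degree 6 and diameter 5
assert order_C(6, 5) == 15 * 7 + 9 == 114
```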
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=1]{DiagramY}
\caption{Diagrams $C_\Delta^k$ and $Y_\Delta^k$.}
\label{fig:DiagramY}
\end{center}
\end{figure}
\begin{prop}\label{prop:G(Y)-diameter}
For odd $k\ge5$ the diameter of the graph $G(Y_{\Delta}^k)$ is $k$.
\end{prop}
\begin{proof}
The diagram $Y_{\Delta}^k$ does not satisfy Proposition \ref{prop:Diameter}, as the pairs of edges $(ac,bg)$, $(ac,hi)$ and $(bg,hi)$, and only those pairs, violate Condition (1). It is not difficult to verify that all other pairs of edges meet the conditions of Proposition \ref{prop:Diameter}.
The edges $ac$ and $bg$, however, are contained in the two closed walks of weight $2k+2$, namely $abgeica$ and $adgbxca$. Let $u$ be a vertex in $G(Y_{\Delta}^k)$ of a pod replacing $ac$, and $P$ the vein containing $u$. Similarly, let $u'$ be a vertex in $G(Y_{\Delta}^k)$ of a pod replacing $bg$, and $P'$ the vein containing $u'$. We observe that, since $k$ is odd, if $u$ and $u'$ are at distance $k+1$ in the closed walk $abP'geicPa$, then they cannot also be at distance $k+1$ in the closed walk $adgP'bxcPa$. This alternative to Condition (1) guarantees that the distance between any two vertices in the pods replacing $ac$ and $bg$ is at most $k$. A similar argument applies to the pairs of edges $(ac,hi)$ and $(bg,hi)$; note the symmetry in $Y_{\Delta}^k$.
\end{proof}
The number of vertices in $G(Y_{\Delta}^k)$ is
\[
|G(Y_{\Delta}^k)| =
\begin{cases}
|G(C_{\Delta}^k)|+3 & \text{if $\Delta$ is even}\\
|G(C_{\Delta}^k)|+\frac{(\Delta-1)^{\frac{k-3}{2}} -1}{\Delta-2}+3 & \text{if $\Delta$ is odd}\\
\end{cases}
\]
For odd $k\ge5$ and each $\Delta\ge6$ new largest known planar graphs arise from $G(Y_{\Delta}^k)$.
When $k\ge7$ we can do even better, by incorporating three additional pending edges to $Y_{\Delta}^k$. The resulting diagram $Z_{\Delta}^k$ is shown in Figure \ref{fig:DiagramZ}.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=1]{DiagramZ}
\caption{Diagram $Z_\Delta^k$.}
\label{fig:DiagramZ}
\end{center}
\end{figure}
\begin{prop}\label{prop:G(Z)-diameter}
For odd $k\ge7$ the diameter of the graph $G(Z_{\Delta}^k)$ is $k$.
\end{prop}
\begin{proof}
By virtue of Proposition \ref{prop:G(Y)-diameter}, we only need to verify that Condition (1) of Proposition \ref{prop:Diameter} holds for any pair of thick edges of $Z_{\Delta}^k$, in which at least one of the three additional pending edges is implicated. This fact can be verified with little effort.
\end{proof}
For the new diagram $Z_{\Delta}^k$ we have
\begin{align*}
|G(Z_{\Delta}^k)|=|G(Y_{\Delta}^k)|+3(\Delta-2)\frac{(\Delta-1)^{\lfloor\frac{k-4}{2}\rfloor}-1}{\Delta-2}.
\end{align*}
For odd $k\ge7$ and $\Delta\ge6$ new largest known planar graphs arise from $G(Z_{\Delta}^k)$.
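Combining the three order formulas displayed above, the successive gains of $Y_{\Delta}^k$ over $C_{\Delta}^k$ and of $Z_{\Delta}^k$ over $Y_{\Delta}^k$ can be made explicit numerically (illustrative Python):

```python
# Orders of G(C), G(Y) (odd k >= 5) and G(Z) (odd k >= 7) from the formulas above.

def order_C(delta, k):
    pod = (delta * (delta - 1) ** ((k - 3) // 2) - 2) // (delta - 2)
    return (9 * delta // 2 - 12) * pod + 9

def order_Y(delta, k):
    extra = 0 if delta % 2 == 0 else ((delta - 1) ** ((k - 3) // 2) - 1) // (delta - 2)
    return order_C(delta, k) + extra + 3

def order_Z(delta, k):
    tree = ((delta - 1) ** ((k - 4) // 2) - 1) // (delta - 2)
    return order_Y(delta, k) + 3 * (delta - 2) * tree

for delta in range(6, 12):
    for k in (7, 9, 11):
        assert order_Z(delta, k) > order_Y(delta, k) > order_C(delta, k)
```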
The new record orders obtained from $G(Y_{\Delta}^k)$ and $G(Z_{\Delta}^k)$ have been added to the table of largest known planar graphs \cite{LPP_Planar}, and they are also displayed in Table \ref{tab:LargePlanar} of the appendix.
\section{Large graphs embedded in the torus}
\label{sec:LGTorus}
The diagram-based approach explained in the previous section can be used to produce large graphs embeddable in an arbitrary surface.
\begin{rmk}
\label{rmk:Genus}
If a diagram $\mathcal{D}_{\Delta}^{k}$ is embeddable in a surface $\Sigma$ then the graph $G(\mathcal{D}_{\Delta}^{k})$ is also embeddable in $\Sigma$.
\end{rmk}
In this section we obtain large graphs in the torus. For our constructions we will use the diagrams $P_{\Delta}^{k}$ (for $\Delta\ge3$) and $Q_{\Delta}^{k}$ (for $\Delta\ge5$ and odd diameter $k$), depicted in Figure \ref{fig:DiagramsPQ} (a) and (b), respectively. Since the Petersen graph embeds in the torus, the diagram $P_{\Delta}^{k}$, based on the Petersen graph, also embeds in the torus. Furthermore, $P_{\Delta}^{k}$ readily satisfies Proposition \ref{prop:Diameter}. Thus, we have the following.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=1]{DiagramsPQ}
\caption{Diagrams $P_\Delta^k$ and $Q_\Delta^k$.}
\label{fig:DiagramsPQ}
\end{center}
\end{figure}
\begin{prop}\label{prop:G(P)-diameter}
For any $k\ge3$ the diameter of the graph $G(P_{\Delta}^k)$ is $k$.
\end{prop}
The order of the graph $G(P_{\Delta}^k)$ is
\begin{displaymath}
|G(P_{\Delta}^k)| =
\begin{cases}
5\big(2(\Delta-1)^{\frac{k-2}{2}}-2\big)+10 & \text{if $k$ is even}\\
5\big(\Delta(\Delta-1)^{\frac{k-3}{2}}-2\big)+10 & \text{if $k$ is odd}\\
\end{cases}
\end{displaymath}
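The case formula above is straightforward to evaluate; a minimal sketch (the function name is ours):

```python
def order_P(delta, k):
    """Order of G(P) from the case formula above (delta >= 3, k >= 3)."""
    if k % 2 == 0:
        return 5 * (2 * (delta - 1) ** ((k - 2) // 2) - 2) + 10
    return 5 * (delta * (delta - 1) ** ((k - 3) // 2) - 2) + 10
```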
An embedding of $Q_{\Delta}^{k}$ in the torus, based on an embedding of $K_7$, is presented in Fig.~\ref{fig:DiagramQ}. We use the drawing solution suggested in \cite[Section 2]{KocNeiSzy01}, where the torus is represented by the inner unshaded rectangle. This rectangle is surrounded by a larger, shaded rectangle, containing copies of the actual vertices and edges of the embedding. This drawing solution allows easy visualisation of the faces and adjacency of the embedding.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=1]{DiagramQ}
\caption{Embedding of $Q_{\Delta}^{k}$ in the torus based on an embedding of $K_7$.}
\label{fig:DiagramQ}
\end{center}
\end{figure}
Next we prove that $G(Q_{\Delta}^{k})$ has diameter at most $k$.
\begin{lem}\label{lem:BasicLemma}
Let $\mathcal{D}_{\Delta}^k$ be a diagram for odd $k\ge 3$. Let $e=xy$ and $e'=x'y'$ be two thick edges in $\mathcal{D}_{\Delta}^k$, labelled by $\alpha(\Delta,k-1)$ and $\alpha'(\Delta,k-1)$, respectively. Suppose there is a thin edge in $\mathcal{D}_{\Delta}^k$ joining $x$ and $x'$, and thin edges $f=xy$ and $f'=x'y'$ parallel to $e$ and $e'$, respectively. Then the distance in $G(\mathcal{D}_{\Delta}^k)$ between any vertex $u$ in a pod replacing $e$ and any vertex $u'$ in a pod replacing $e'$ is at most $k$.
\end{lem}
\begin{proof}
We use an argument similar to that in the proof of Proposition \ref{prop:G(Y)-diameter}. Note that, in this case too, the thick edges $e$ and $e'$ are contained in two closed walks of weight $2k+2$ (see Figure \ref{fig:BasicLemma}). Let $P$ and $P'$ be the veins in $G(\mathcal{D}_{\Delta}^k)$ containing $u$ and $u'$, respectively. Since $k$ is odd, if $u$ and $u'$ are at distance $k+1$ in the closed walk $xx'P'y'f'x'xPyfx$, then they cannot also be at distance $k+1$ in the closed walk $xx'f'y'P'x'xfyPx$.
\end{proof}
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=1]{BasicLemma}
\caption{Auxiliary figure for Lemma \ref{lem:BasicLemma}.}
\label{fig:BasicLemma}
\end{center}
\end{figure}
Lemma \ref{lem:BasicLemma} immediately yields the following.
\begin{prop}\label{prop:G(Q)-diameter}
For odd $k\ge3$ the diameter of the graph $G(Q_{\Delta}^k)$ is $k$.
\end{prop}
From $Q_{\Delta}^k$ we obtain
\begin{displaymath}
|G(Q_{\Delta}^k)| =
\begin{cases}
5(\Delta-4)\frac{2(\Delta-1)^{\frac{k-2}{2}}-2}{\Delta-2}+14 & \text{if $k$ is even}\\
5(\Delta-4)\frac{\Delta(\Delta-1)^{\frac{k-3}{2}}-2}{\Delta-2}+14 & \text{if $k$ is odd}\\
\end{cases}
\end{displaymath}
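As with $P_{\Delta}^k$, the case formula above can be evaluated directly; in this sketch (names are ours) the division by $\Delta-2$ is exact, so integer arithmetic suffices.

```python
def order_Q(delta, k):
    """Order of G(Q) from the case formula above (delta >= 5)."""
    if k % 2 == 0:
        num = 2 * (delta - 1) ** ((k - 2) // 2) - 2
    else:
        num = delta * (delta - 1) ** ((k - 3) // 2) - 2
    # Multiply before dividing: (delta - 2) divides num exactly.
    return 5 * (delta - 4) * num // (delta - 2) + 14
```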
The orders for toroidal graphs obtained from $P_{\Delta}^k$ and $Q_{\Delta}^k$ are displayed in Table \ref{tab:TorusTable}.
\section{Large graphs on surfaces}
\label{sec:LargeGrapSurf}
As mentioned in the introduction, Pineda-Villavicencio and Wood \cite{PVW12} constructed, for every
surface $\Sigma$ of Euler genus $g$, every odd diameter $k\ge3$ and every maximum degree $\Delta\ge \sqrt{1+24g}+2$, graphs with order \[\sqrt{\frac{3}{8}g}\Delta^{\lfloor k/2\rfloor}.\]
This is the current best lower bound for $N(\Delta,k,\Sigma)$. In the following we improve it by a factor of $4$, obtaining the bound
\[N(\Delta,k,\Sigma)\ge\begin{cases}6\Delta^{\lfloor k/2\rfloor}& \text{if $\Sigma$ is the Klein bottle}\\ \left(\frac{7}{2}+\sqrt{6g+\frac{1}{4}}\right)\Delta^{\lfloor k/2\rfloor}& \text{otherwise.}\end{cases}\]
Our construction modifies a complete graph embedded in the surface $\Sigma$, so we need the Map Colouring Theorem. This theorem was jointly proved by Heawood, Ringel and Youngs; see \cite[Theorems 4.4.5 and 8.3.1]{MohTho01}.
\begin{thm}[Map Colouring Theorem]\label{theo:MapColTheo}
Let $\Sigma$ be a surface with Euler genus $g$ and let $G$ be a graph embedded in $\Sigma$. Then
\[\chi(G)\le\frac{7+\sqrt{1+24g}}{2}.\]
Furthermore, with the exception of the Klein bottle where $\chi(G)\le6$, there is a complete graph $G$ embedded in $\Sigma$ realising the equality.
\end{thm}
The right-hand side of the inequality of Theorem \ref{theo:MapColTheo} is called the {\it Heawood number} of the surface $\Sigma$ and is denoted $H(\Sigma)$. Define the {\it chromatic number $\chi$ of a surface $\Sigma$} as follows:
\[\chi(\Sigma)=\begin{cases}6& \text{if $\Sigma$ is the Klein bottle}\\ H(\Sigma)& \text{otherwise.}\end{cases}\]
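The Heawood number and the chromatic number of a surface, as defined above, can be computed as follows (a sketch; the Klein bottle must be flagged explicitly, since it shares Euler genus $2$ with the torus):

```python
import math

def heawood(g):
    """Heawood number H = floor((7 + sqrt(1 + 24g)) / 2) for Euler genus g."""
    # Using the integer square root gives the exact floor value.
    return (7 + math.isqrt(1 + 24 * g)) // 2

def chi_surface(g, klein_bottle=False):
    """Chromatic number of a surface, as defined above."""
    return 6 if klein_bottle else heawood(g)
```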
The main result of this section is the following.
\begin{thm}\label{thm:Main}
For every surface $\Sigma$ of Euler genus $g$, and for every $\Delta>\lceil\frac{\chi(\Sigma)-1}{2}\rceil+1$ and every odd $k\ge 3$,
\begin{align*}
N(\Delta,k,\Sigma)\ge \chi(\Sigma)\left(\Delta-1-\Big\lceil\frac{\chi(\Sigma)-1}{2}\Big\rceil\right)\frac{\Delta(\Delta-1)^{\frac{k-3}{2}}-2}{\Delta-2}+2\chi(\Sigma).
\end{align*}
\end{thm}
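The bound of the theorem can be evaluated directly; the helper below is illustrative (names are ours). Both the ceiling and the division by $\Delta-2$ are exact in integer arithmetic.

```python
def thm_bound(delta, k, chi):
    """Lower bound of the theorem, for odd k >= 3 and delta > ceil((chi-1)/2) + 1."""
    c = chi // 2  # equals ceil((chi - 1) / 2)
    num = delta * (delta - 1) ** ((k - 3) // 2) - 2
    # (delta - 2) divides num exactly, so integer division is safe.
    return chi * (delta - 1 - c) * num // (delta - 2) + 2 * chi
```

For example, on the torus ($\chi=7$) with $\Delta=6$ and $k=5$ the bound evaluates to $112$.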
Before proving Theorem \ref{thm:Main} we recall the operation of vertex splitting. {\it Splitting a vertex $v$} consists of replacing $v$ by two adjacent vertices $v'$ and $v''$, and of replacing each edge incident with $v$ by an edge incident with either $v'$ or $v''$ leaving the other end of the edge unchanged.
\begin{proof}[Proof of Theorem \ref{thm:Main}] We construct large graphs based on a generalisation of the diagram $Q_{\Delta}^k$ in Figure \ref{fig:DiagramsPQ} (b). For a given $g$ we construct a diagram $\mathcal{Q}_{\Delta}^k$ embeddable in a surface $\Sigma$ of Euler genus $g$ such that the graph $G(\mathcal{Q}_{\Delta}^k)$ has maximum degree $\Delta$ and diameter $k$.
To obtain $\mathcal{Q}_{\Delta}^k$ we start from the complete graph $K_{\chi(\Sigma)}$ and an embedding of $K_{\chi(\Sigma)}$ in $\Sigma$. We split every vertex $v$ in $K_{\chi(\Sigma)}$ as follows. On the surface $\Sigma$ we operate inside a neighbourhood $B_\epsilon(v)$ centred at $v$, with radius $\epsilon$ small enough so that no vertex of $K_{\chi(\Sigma)}$ other than $v$ is contained in $B_\epsilon(v)$. Take any edge of $K_{\chi(\Sigma)}$ incident with $v$ and denote it by $e_1$, then denote the other edges incident with $v$ clockwise by $e_2, e_3,\ldots,e_{\chi(\Sigma)-1}$. Split a vertex $v$ and obtain adjacent vertices $v'$ and $v''$ so that the vertex $v'$ is incident with the edges $e_1,e_2,\ldots,e_{\lfloor\frac{\chi(\Sigma)-1}{2}\rfloor}$ and the vertex $v''$ incident with the edges $e_{\lfloor\frac{\chi(\Sigma)-1}{2}\rfloor+1},e_{\lfloor\frac{\chi(\Sigma)-1}{2}\rfloor+2},\ldots,e_{\chi(\Sigma)-1}$. Then a thick edge $v'v''$ labelled by $(\Delta-1-\lceil\frac{\chi(\Sigma)-1}{2}\rceil)(\Delta,k-1)$ is added; see Figure \ref{fig:DiagramsQGeneral} (b). Note that splitting each vertex $v$ of $K_{\chi(\Sigma)}$ and the subsequent addition of one parallel thick edge do not affect the embeddability in $\Sigma$ as all these operations are carried out inside the neighbourhood $B_\epsilon(v)$.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=1]{DiagramsQGeneral}
\caption{}
\label{fig:DiagramsQGeneral}
\end{center}
\end{figure}
The resulting diagram $\mathcal{Q}_{\Delta}^k$ has maximum degree $\Delta$, and so does the graph $G(\mathcal{Q}_{\Delta}^k)$. The embeddability of $G(\mathcal{Q}_{\Delta}^k)$ follows from the embeddability of $\mathcal{Q}_{\Delta}^k$. Note also that every thick edge in $\mathcal{Q}_{\Delta}^k$ has a parallel thin edge, and any two thick edges in $\mathcal{Q}_{\Delta}^k$ are joined by a thin edge. Thus, by Lemma \ref{lem:BasicLemma}, the diameter of $G(\mathcal{Q}_{\Delta}^k)$ is $k$. Finally we have
\begin{displaymath}
|G(\mathcal{Q}_{\Delta}^k)|=\chi(\Sigma)\left(\Delta-1-\Big\lceil\frac{\chi(\Sigma)-1}{2}\Big\rceil\right)\frac{\Delta(\Delta-1)^{\frac{k-3}{2}}-2}{\Delta-2}+2\chi(\Sigma).
\end{displaymath}
\end{proof}
An example of the construction put forward in Theorem \ref{thm:Main} was already depicted in Fig.~\ref{fig:DiagramsPQ} (b); see also Fig.~\ref{fig:DiagramQ} for an embedding of such a construction in the torus.
When $\chi(\Sigma)$ is even one further improvement is possible. Since the vertices in $\mathcal{Q}_{\Delta}^k$ arising from $v'$ have degree $\Delta-1$, it is possible to add an extra pending edge $v'v''$ labelled by $1(\Delta,\frac{k-3}{2})$; see Figure \ref{fig:DiagramsQGeneral} (c). This would increase the order of $G(\mathcal{Q}_{\Delta}^k)$ by another $\chi(\Sigma)\frac{(\Delta-1)^{\frac{k-3}{2}}-1}{\Delta-2}$ vertices. Thus, we have the following.
\begin{cor}\label{cor:thmMain}For every surface $\Sigma$ of Euler genus $g$ and even $\chi(\Sigma)$, and for every $\Delta>\lceil\frac{\chi(\Sigma)-1}{2}\rceil+1$ and every odd $k\ge 3$, \begin{multline*}
N(\Delta,k,\Sigma)\ge\chi(\Sigma)\left(\Delta-1-\Big\lceil\frac{\chi(\Sigma)-1}{2}\Big\rceil\right)\frac{\Delta(\Delta-1)^{\frac{k-3}{2}}-2}{\Delta-2}\\
+\chi(\Sigma)\frac{(\Delta-1)^{\frac{k-3}{2}}-1}{\Delta-2}+2\chi(\Sigma).
\end{multline*}
\end{cor}
\section{Conclusions}
Our results and those from \cite{PVW12} imply that, for a fixed odd diameter $k$, $N(\Delta,k,\Sigma)$ is asymptotically larger than $N(\Delta,k,\mathcal{P})$. For even diameter, however, we believe this is not the case; thus, we dare to conjecture the following.
\begin{conj}
\label{conj:FP-PV}
For each surface $\Sigma$ and each even diameter $k\ge2$, there exists a constant $\Delta_0$ such that, for maximum degree $\Delta\ge \Delta_0$, $N(\Delta,k,\Sigma)$ and $N(\Delta,k,\mathcal{P})$ are asymptotically equivalent; that is,
\[\lim_{\Delta\rightarrow \infty} \frac{N(\Delta,k,\Sigma)}{N(\Delta,k,\mathcal{P})}=1.\]
\end{conj}
The result of Knor and \v{S}ir\'a\v{n} \cite{KS97} for diameter 2 supports this conjecture.
For odd $k$ we believe the true asymptotic value of $N(\Delta,k,\Sigma)$ is $(c_1+c_2\sqrt{g})\Delta^{\lfloor k/2\rfloor}$, where $c_1$ and $c_2$ are absolute constants. The case of $g=0$ was proved in \cite{PVW12}.
All the graphs constructed in this paper are non-regular. We could look at large regular graphs embedded in surfaces as well. This variation of the degree/diameter problem has already attracted some interest; see, for instance, \cite{Pre10}. This direction merits further attention.
\section{Introduction}
The nature and identity of dark matter (DM) remain fundamental
open questions in contemporary astrophysics; enormous effort is
currently being directed at finding the answer.
Numerical simulations of the cosmological process of structure
formation \citep[e.g.][]{Davies1985,Springel2005,Frenk2012} have shown
that a model based on the assumption that the DM consists of
cold dark matter (CDM) particles can very successfully reproduce a number of large-scale astrophysical measurements
\citep[e.g.][]{Planck2018, Wang2016, Alam2017}.
Several plausible DM candidates behave like CDM on large scales,
but luckily, their different physical properties can make them
distinguishable on subgalactic scales.
The defining property of standard CDM is the nearly scale-invariant primordial power-spectrum
of density fluctuations, which results in an equally distinctive halo
mass function, characterized by a large
population of haloes down to masses comparable to the Earth's mass \citep{Jenkins2001, Diemand2008,
Angulo2012, Green2005, Wang2020}. Most alternative DM models predict a
suppression of the primordial power spectrum on small scales and an
associated truncation of the halo mass function at a mass, $M_{\rm cut}$. For example, in the
popular warm dark matter (WDM) model, free-streaming arising from the
thermal velocities of the particles at
early times is the cause of the suppression which occurs at a mass
scale that is roughly inversely proportional to the WDM particle mass
\citep[e.g.][]{Avila2003, Schneider2012, Lovell2012, Bose16}.
Current constraints on the warm DM model stem primarily from a
combination of the abundance of satellite galaxies in the Milky Way
\citep{Kennedy2014,Lovell2012,Lovell2016,Newton2020} and the properties of the
Lyman-$\alpha$ forest inferred from high-redshift QSO spectra
\citep{Viel2013, Baur2016,Irsic2017}. A joint analysis of these,
together with
constraints from gravitational lensing (see below),
place $M_{\rm cut}$ at
$\approx 4.3 \times 10^{7}$M$_\odot$ for a thermal WDM relic. These bounds, however, are
subject to possible systematics such as uncertainties in the galaxy
formation physics in the case of satellites \citep{Newton2020} and
assumptions on the thermal history of the intergalactic medium at
high-redshift in the case of the Lyman-$\alpha$ forest
\citep[e.g.][]{Garzilli2017,Garzilli2019}. Constraints from
independent probes, such as we discuss here, are therefore a priority.
Strong gravitational lensing has emerged as an independent way to quantify the abundance
of low-mass DM haloes and thus constrain the WDM particle mass.
This technique uses galaxy-galaxy strong gravitational lenses \citep[e.g.,][]{Bolton2005, Shu2016}
to detect low-mass haloes through the perturbations they cause to the
lens image
\citep[see also the alternative approach based on flux ratio anomalies of lensed quasars, e.g.,][]{Xu2015, Gilman2017, Gilman2019, Gilman2020, Harvey2020}.
These perturbations make it possible to detect both satellite haloes
in the main lens (subhaloes) and low-mass `central' haloes
along the line-of-sight (LOS) \citep{Li2016, Despali2018}, even if
they contain negligible baryonic mass. In fact, in the mass range of
interest, $\lesssim 10^8$M$_\odot$, haloes are too small to have made
a galaxy and so are completely dark \citep{Benitez-Llambay2020}. This is a great advantage as the
abundance and structure of isolated DM haloes is unaffected by
complications associated with baryonic processes and
is very robustly determined by cosmological simulations. Distortions
of strong lenses are therefore
an especially clean way to probe the DM particle mass.
A small number of detections have already been claimed, albeit for
subhaloes more massive than those that can test WDM models \citep{Vegetti2010, Vegetti2012, Hezaveh2016}. The most challenging aspect of lensing lies in using these detections -- and
non-detections as well -- to extract quantitative inferences about the DM model \citep[e.g.][]{Vegetti2014, Vegetti2018, Ritondale2019}.
To do so, it is necessary to formulate robust predictions. For
example, how many detections are to be expected in a
given lensing system assuming CDM, or as a function of the WDM particle mass?
Quantifying the number of detectable haloes, $N_{\rm d}$, in a
specific lens means identifying which DM haloes,
out of the cosmological population of haloes, can cause `observable' perturbations to that system.
More specifically, for a warm DM particle with cutoff mass, $M_{\rm cut}$,
\begin{equation}
N_{\rm d}(M_{\rm cut}) = \int n(x,y,z,\zeta_{\rm h}|M_{\rm cut})\cdot p(x,y,z,\zeta_{\rm h} |\theta, \textbf{n}) \, dV \, d\zeta_{\rm h},
\label{ndetectable}
\end{equation}
where $n$ is the cosmological number density of DM
haloes\footnote{Here we focus on LOS haloes. The same formalism
applies to subhaloes in the main lens, albeit with a different
density, $n$.} at sky-projected location, $(x,y)$,
redshift, $z$, and with properties, $\zeta_{\rm h}$ (such as mass, concentration, etc.), while $p$ is the probability
of actually detecting such haloes were they to be truly present in the observed system, given the properties of the lens
itself, $\theta$, and those of the data -- for instance, the noise
properties, $\bf n$. In other words, were a halo to
be truly present:
\begin{itemize}
\item{$p=0$ if its perturbations are too small to be observable,
implying that a perturbing halo mass component would not be required in the modelling to describe the data; }
\item{$p=1$ if its perturbations make a model including a perturbing mass component
statistically preferable to one that does not.}
\end{itemize}
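Schematically, once $p$ is reduced to such a binary classification, the integral defining $N_{\rm d}$ becomes a weighted sum over a discretized grid of volume cells and halo-property bins. A minimal sketch (the array layout and names are our own, not taken from any lensing code):

```python
import numpy as np

def expected_detections(n, p, dV, dzeta):
    """Discretized version of the N_d integral.

    n     : halo number density per unit volume per unit halo property,
            sampled on a (volume cell, property bin) grid.
    p     : detection probability on the same grid (0 or 1 in the binary case).
    dV    : volume of one cell.
    dzeta : width of one halo-property bin.
    """
    return float(np.sum(n * p) * dV * dzeta)
```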
The increase in Bayesian evidence between the two models (or the increase in log-likelihood) is often
used as a deciding metric, and most studies \citep[e.g.][]{Vegetti2014,Vegetti2018} have indeed reduced
$p$ to a binary classification: were the halo to be truly present,
$p=1$, if including a perturbing mass component
causes the Bayesian evidence or log-likelihood to increase beyond some given threshold.
This is usually referred to as the {\it sensitivity function} and, simply put, it identifies the region of
parameter space (comprising both physical cosmological volume and halo properties) which can be actually
probed by lensing. In contrast to the cosmological number
density of DM haloes, $n$, the sensitivity function itself
does not directly depend on the DM model\footnote{Although it does
depend on the density profile of the perturbing haloes, which, in
turn, may itself depend on the DM model.}, but it does shape
expectations for the number of detectable haloes,
$N_{\rm d}$ -- as well as expectations for any other observable
obtained through structure lensing studies.
While advanced tools to model optical strong lensing data have been
developed \citep[e.g.][]{Vegetti2009, Nightingale2021}, it remains
computationally expensive to calculate the sensitivity function and
formulate these predictions.
Systematic exploration is required to establish the range of
properties that make a perturber detectable.
A minimum list of the independent variables includes the halo mass and halo concentration,
the projected location of the halo with respect to the lensing system, and its redshift. In addition,
the sensitivity function is unique to each individual lensing system because degeneracies in the lensing
effects are such that different lensing configurations can `reabsorb'
the perturbations of identical DM haloes
with different efficiencies. In practice, mapping the entire parameter space for each lens is
often computationally prohibitive, and a number of simplifying assumptions have been used to
obtain estimates of the integral in equation~(\ref{ndetectable}).
Here, we explore the effect of these simplifications.
Among the independent variables mentioned above, halo concentration
and halo redshift are the most important.
For instance, \citet{Minor2021} has recently shown that the halo
concentration must be included as a free parameter when modelling a
perturber: if the concentration is fixed, the inferred perturber's mass may be
biased by a factor of up to 6. They also show that higher halo concentrations make perturbers more easily detectable, as the lensing effect of any mass
distribution is driven by its surface density. However, the intrinsic scatter in the concentration of
DM haloes \citep[e.g.][]{Neto2007, Ludlow2016, Wang2020} has so far
been ignored in sensitivity mapping studies; instead, the mass-concentration relation
has been collapsed onto the concentration axis entirely, forcing all haloes onto the mean value for their mass.
Additionally, the dependence of the mean mass-concentration relation on the DM model itself has also
been neglected \citep[e.g.][]{Vegetti2018,Ritondale2019}. This latter
assumption leaves cosmological halo abundances as the single measure to differentiate
between WDM models of different particle mass, despite the fact that
warmer DM models produce
haloes that are increasingly less dense than their equal-mass CDM
counterparts \citep[e.g.][]{Lovell2012, Bose16}.
As regards the perturber's redshift, this axis has often been
collapsed by adopting a one-to-one scaling relation
that recasts a halo's redshift in terms of its effective mass
\citep{Li2016,Despali2018}. This is obtained by requiring that the lensing convergence
-- i.e.\ the strength of the lensing effect -- should remain nearly
constant. This is not equivalent to performing a full non-linear
search, as done on real data, and therefore does not fully take into
account the modelling degeneracies that can occur in the real case. Lastly, we also briefly reflect on the influence of
noise properties and the role of a specific noise realization.
To explore the parameter space of a sensitivity function fully, we use mock data and models that are somewhat simpler
than those employed in state-of-the-art lens modelling studies \citep[e.g.][]{Vegetti2009, Nightingale2019, Powell2020}.
Specifically, we use models featuring parametric sources rather than non-parametric pixelized sources
\citep[e.g.][]{Warren2003, Dye2005, Birrer2015a, Nightingale2015}. This allows us to develop and apply a
fitting procedure based on a gradient descent algorithm, which is efficient enough to enable the exploration
of all of the independent dimensions of the parameter space relevant to this problem. This work compares how
previous assumptions regarding the sensitivity function affect the
power of strong lensing to discriminate between different
DM models, which we assume is not strongly dependent on the specific approach to source modelling.
In this work, we concentrate on LOS isolated perturbers, which we
model as pure NFW profiles \citep{Navarro1997}. Satellite subhaloes have different density profiles and their number
density in the main lens is affected by a variety of physical
processes. He et al. (in preparation) use the high-resolution
cosmological hydrodynamical simulations of \cite{Richings2021} to
facilitate a comparison between the lensing perturbations caused by
satellite and LOS haloes, thereby allowing an estimate of their
relative importance. In their study, they also employ different sets of lens configurations, modelling and fitting techniques. While they do not focus on the effect of halo concentration, their independent analysis finds quantitatively similar results on the redshift dependence of a perturber's detectability, which further reinforces the need to move away from the approximations used so far.
We stress that the present work is not intended as a substitute for
analyses aimed at quantifying the sensitivity function of actual sets
of observed lenses -- which should be tailored to the lens
configurations featured in the real data and should be performed using
the same modelling techniques as applied to the real data.
This paper is organised as follows.
Section~2 provides a quick overview of the standard procedure used to estimate the sensitivity function;
Section~3 describes our modelling framework and fitting procedures;
Section~4 describes our results, focusing on the dependence of halo detectability on redshift and concentration;
Section~5 uses our sensitivity maps to estimate the number of expected
detections for different warm DM models; and
Section~6 examines the consequences of our results for future substructure lensing studies.
\section{Sensitivity Mapping Overview}
In the interest of clarity, we start by outlining the procedure usually adopted to measure the sensitivity function.
Let us assume, for example, that we wish to predict the number of
detectable haloes~(equation~\ref{ndetectable}) for a specific strong
lens. One would start by modelling the lens image in order to infer: (i) a mass model for the lens galaxy; (ii) a light model for the source galaxy. Then,
\begin{enumerate}[label=\arabic*)]
\item{using these and the noise properties of the data themselves, one
{simulates a strong lens image which includes}
a perturbing DM halo with a set of properties (such as mass
and concentration), located at projected location, $(x,y)$
and redshift, $z$;}
\item{one fits these mock data in two full but distinct non-linear
searches, the first with a model that includes a perturbing halo mass component, the second without.}
\item{one compares the two model fits, by means of the
Bayesian evidence or the maximum log-likelihood. If a model
including a halo mass component provides a significantly better
fit, the original data are sensitive to a halo with those specific
properties, thereby mapping the probability of detection, $p$.}
\end{enumerate}
\noindent This procedure is repeated multiple times so as to sample the entire parameter space of perturbers' locations and properties.
We delve more deeply into the different steps of this procedure in
the next section. In particular, we shall demonstrate that, in
practice, we do not need to perform one of the fits at all.
\section{Modelling framework}
We assume we have optical (mock) data, $\textbf{d}$, for a lensing system characterized by the presence of some perturbing
LOS halo with properties, $\bf\zeta_{\rm h}$, and we wish to assess its detectability\footnote{For compactness of notation, we include the
perturber's sky coordinates, $(x,y)$, and redshift, $z$, in the halo
properties, $\zeta_{\rm h}$.}. We do so by quantifying the
log-likelihood\footnote{By `log-likelihood' we mean the natural
logarithm of the likelihood. All other instances of `log' in this
work represent the base 10 logarithm.}
difference
\begin{equation}
\Delta\mathscr{L} \left( \bf\zeta_{\rm h}\right) = \mathscr{L}_{\rm m,h}\left(\bf{\hat\theta}_{\rm m}, \bf{\hat\zeta}_{\rm h}\right) - \mathscr{L}_{\rm m}\left(\bf{\bar\theta}_{\rm m}\right),
\label{DL}
\end{equation}
where:
\begin{itemize}
\item{$\mathscr{L}_{\rm m}\left(\bf{\bar\theta}_{\rm m}\right)$ is the log-likelihood value corresponding to the best-fitting model that does not include a
perturbing halo mass component. This model is optimized over the parameters of the so called macromodel alone, $\bf{\theta}_{\rm m}$,
which include the parameters describing the source, $\bf{\theta}_{\rm
s}$, and those describing the lens, as well as any external shear, $\bf{\theta}_{\rm l}$;}
\item{$\mathscr{L}_{\rm m,h}\left(\bf{\hat\theta}_{\rm m}, \bf{\hat\zeta}_{\rm h}\right)$ is the log-likelihood value corresponding to the best-fitting model
that does include a perturbing halo mass component, which is optimized over both macromodel and halo parameters,
with best-fitting values, $\bf{\hat\theta}_{\rm m}$ and $\bf{\hat\zeta}_{\rm h}$ respectively.}
\end{itemize}
We take the detection probability, $p(\zeta_{\rm h})$, in
equation~(\ref{ndetectable}) to be a function of the log-likelihood gain, $\Delta\mathscr{L}$:
\begin{equation}
p(\zeta_{\rm h}) = p(\Delta\mathscr{L}(\zeta_{\rm h})),
\label{pDL}
\end{equation}
that is, perturbing haloes that result in higher values of the
log-likelihood gain, $\Delta\mathscr{L}$, are more easily detectable;
for the moment we defer fixing the functional link between detection probability and log-likelihood increase.
All likelihood values are obtained by comparing the pixelised flux values, $\bf d$,
with the model flux distributions, $\bf f$. We ignore the effect of noise covariance (due to effects like PSF convolution) so that we simply have
\begin{equation}
\mathscr{L} = - {1\over 2} \sum_{\rm pixels} \Big| {{{\bf d - \bf f}\over \bf n}} \Big|^2 \ ,
\label{calL}
\end{equation}
where $\bf n$ represents the noise map associated with the data and we
can neglect the normalization term in $\ln\bf n$ because
we are only interested in log-likelihood differences. We assume that the data only include flux from the source,
i.e.\ that both sky background and lens fluxes have been subtracted before performing the fits.
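For concreteness, the log-likelihood above and the gain defined earlier can be sketched as follows (function names are ours; real analyses would use a full lens-modelling pipeline such as {\tt PyAutoLens} to obtain the best-fitting model fluxes):

```python
import numpy as np

def log_likelihood(d, f, n):
    """Gaussian log-likelihood of the data d given model fluxes f and
    noise map n, up to an additive normalization constant."""
    return -0.5 * float(np.sum(((d - f) / n) ** 2))

def delta_logL(d, f_with_halo, f_without, n):
    """Log-likelihood gain between the best fits with and without a halo."""
    return log_likelihood(d, f_with_halo, n) - log_likelihood(d, f_without, n)
```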
A number of authors \citep[e.g.][]{Vegetti2018,Ritondale2019} have
argued for adopting the gain in Bayesian evidence, rather than in the log-likelihood,
as a basis for quantifying halo detectability.
We recall that the evidence is defined as the integral of the posterior over the entire parameter space.
We agree that the gain in log evidence is a more sound statistical metric for model
comparison. However, calculating the evidence is orders of magnitude more computationally expensive than identifying the best-fitting model, as it requires sampling the likelihood surface over the entire parameter space in order to integrate it. This has become one of the main reasons behind the need for making simplifications when calculating
the sensitivity function.
It is important to stress that, as long as the same criterion is consistently employed to
both (i) detect perturbers on real data and (ii) measure the sensitivity function and make predictions for the expected number of detections, an evidence based strategy and a likelihood based one are both
perfectly acceptable and will both yield correct results for the DM particle mass.
It is indeed possible that an evidence-based criterion may help to weed out false detections better in studies of real data. On the other hand, this is not strictly necessary
if the models used in sensitivity mapping have the same complexity as the real data, and therefore also share any spurious detections. In fact, a computationally more efficient criterion may facilitate a robust characterization of the properties and frequency of false positives, and therefore help to take them into account when
making predictions.
For the present purposes,
a likelihood-based criterion is beneficial in that it allows us to explore the entire parameter space systematically. Furthermore, for data of sufficiently high quality, the gain in evidence and in log-likelihood
become equivalent \citep[{as embodied by the BIC,} see e.g.][]{ESLII, ICSM}, and numerical experiments seem to indicate that
the quality of {\it Hubble Space Telescope} (HST) data is, in fact, high enough for the two approaches to often provide very
similar results (He et al.\ in preparation).
\subsection{Model families and mock data}
Our lensing systems feature:
\begin{itemize}
\item{a power-law mass profile to represent the main lens \citep{Tessore2016}, characterized by the following free parameters:
$(x_{\rm l}, y_{\rm l})$, the centre of the mass distribution;
$\epsilon_{\rm l}$, the Einstein radius; $\beta_{\rm l}$,
the slope of the mass profile; $(e_{1, l}, e_{2, l})$, the two
independent components of the profile's ellipticity;
$(\gamma_1,\gamma_2)$, the two independent components of the external shear.}
\item{a parametric source with a Sersic profile: projected
centre, $(x_{\rm s}, y_{\rm s})$; effective radius, $r_{\rm eff}$;
ellipticity, $(e_{1,s}, e_{2, s})$; Sersic index, $n_{\rm s}$; and total flux, $I_{\rm s}$.}
\end{itemize}
These two components define our macromodel: $\theta_{\rm m}$ is therefore a 15-dimensional vector.
The perturbing haloes are modelled with spherically symmetric
Navarro-Frenk-White mass profiles \citep{Navarro1997},
introducing the following additional 5 parameters: projected centre,
$(x_{\rm h}, y_{\rm h})$; redshift, $z_{\rm h}$; mass, $M_{\rm h}$;
and concentration $c_{\rm h}$. Throughout the paper we take halo
masses, $M_{\rm h}$, to be the virial mass, $M_{200}$, i.e. the mass
contained within a sphere of density 200 times the critical density.
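For reference, the virial radius corresponding to $M_{200}$ follows directly from this definition. In the sketch below the present-day critical density assumes $h\approx0.7$ and is approximate; both the value and the function name are our own.

```python
import math

RHO_CRIT = 136.0  # approximate critical density today, Msun / kpc^3, for h = 0.7

def r200_kpc(m200):
    """Radius (kpc) enclosing a mean density of 200 * rho_crit.

    Solves M200 = (4/3) * pi * r200^3 * 200 * rho_crit for r200,
    with m200 in solar masses.
    """
    return (3.0 * m200 / (800.0 * math.pi * RHO_CRIT)) ** (1.0 / 3.0)
```

A $10^8\,{\rm M}_\odot$ halo, near the sensitivity limit of interest, has $r_{200}\sim10$ kpc under these assumptions.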
We use the open source software \texttt{PyAutoLens}\footnote{{\tt PyAutoLens}
is open-source and available from \url{https://github.com/Jammy2211/PyAutoLens}} \citep{Nightingale2015, Nightingale2018, Nightingale2021} to generate all of our mock data and for all
our lensing modelling.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figA1.pdf}
\caption{Model fluxes for the two lensing configurations used in this
study, giant arcs (left), quad (right) (linear scale with arbitrary units).}
\label{macromodelsfig}
\end{figure}
Within this framework, we choose two different lensing configurations for our exploration. Both have their source
at $z_{\rm s} =1$ and lens at $z_{\rm l}=0.5$, but one is in a quad-configuration while the other features two asymmetric arcs.
Values for the ground truth macromodel parameters, $\theta_{\rm m}$, are recorded
in Table~\ref{inputmacro}; Fig.~\ref{macromodelsfig} illustrates their
geometry, displaying the corresponding model fluxes,
$\bf{f}(\theta_{\rm m})$. The exact values are of no importance here: we simply choose
sets of parameters that are qualitatively in line with what is found in lensing studies of real systems,
and we use a pair of different configurations to ensure that the trends we identify are not a peculiarity of the specific
system we happen to adopt. Throughout this work, we use pixel size and point-spread function (PSF) width typical for HST data,
fixing both quantities at $0.05''$.
\subsection{Signal-to-noise and noise realization}
\label{stonoise}
The sensitivity function of a lens scales with the quality of the
available data, quantified here by the noise map, $\bf n$, whose
amplitude scales inversely with the maximum signal-to-noise ratio of the data, $\textbf{n}\propto 1/SN_{\rm max}$.
Notice, however, that while the noise map itself is known, the actual noise realization of the observed data is not.
We are therefore interested in assessing how different noise realizations affect
the value of the log-likelihood change.
Let us assume that the data are characterized by a noise realization, $\bf r$, so that $\bf d = \langle d\rangle+r$, where $\langle\cdot\rangle$ denotes an average over noise realizations. In the case of mock data, $\langle \bf d\rangle$ is the input model flux corresponding to the ground truth model
parameters, which we indicate with $(\theta_{\rm m},\zeta_{\rm h})$: $\langle {\bf d}\rangle=\textbf{f}(\theta_{\rm m},\zeta_{\rm h})$. The log-likelihood gain is as in equation~(\ref{DL}), with terms in the same order:
\begin{equation}
\Delta\mathscr{L} \left( \bf\zeta_{\rm h}, \textbf{r}\right) = - {1\over 2} \Big| {{{\textbf{d} - \textbf{f}(\hat\theta_{\rm m}, \hat\zeta_{\rm h})}\over \bf n}} \Big|^2 + {1\over 2} \Big| {{{\textbf{d} - \textbf{f}(\bar\theta_{\rm m})}\over \bf n}} \Big|^2,
\label{expand1}
\end{equation}
where we have implicitly assumed a sum over image pixels. The fluxes, $\textbf{f}(\hat\theta_{\rm m}, \hat\zeta_{\rm h})$ and $\textbf{f}(\bar\theta_{\rm m})$, correspond to the models that best fit the noise-corrupted data, $\bf d$, with and without an extra halo, respectively.
It is convenient to consider instead the
model fluxes that provide the best fit to the noise-free data,
$\langle\bf d\rangle$, which we refer to as $\textbf{f}_{\rm h}$ and $\textbf{f}_{\rm m}$. These models do not achieve the maximum log-likelihood values we require in equation~(\ref{expand1}). For example, for the model including a halo mass component,
\begin{equation}
{1\over 2} \Big| {{{\textbf{d} - \textbf{f}(\hat\theta_{\rm m}, \hat\zeta_{\rm h})}\over \bf n}} \Big|^2 = {1\over 2} \Big| {{{\textbf{d} - \textbf{f}_{\rm h}}\over \bf n}} \Big|^2-l_{\rm h}^{\rm bf}(\textbf{r})\ ,
\label{lbfshift1}
\end{equation}
where the difference, $l_{\rm h}^{\rm bf}$, is a function of the noise realization and has a positive value.
Similarly,
\begin{equation}
{1\over 2} \Big| {{{\textbf{d} - \textbf{f}(\bar\theta_{\rm m})}\over \bf n}} \Big|^2 = {1\over 2} \Big| {{{\textbf{d} - \textbf{f}_{\rm m}}\over \bf n}} \Big|^2-l_{\rm m}^{\rm bf}(\textbf{r})\ .
\label{lbfshift2}
\end{equation}
Furthermore, in order to highlight the dependence on the noise realization, $\bf r$, we can recast the model fluxes, $\textbf{f}_{\rm h}$
and $\textbf{f}_{\rm m}$, in terms of their associated residuals, $\textbf{f}_{\rm h} = \langle \textbf{d}\rangle+\delta_{\rm h}$, and $\textbf{f}_{\rm m}= \langle \textbf{d}\rangle+\delta_{\rm m}$,
which gives:
\begin{equation}
{1\over 2} \Big| {{{\textbf{d} - \textbf{f}(\hat\theta_{\rm m}, \hat\zeta_{\rm h})}\over \bf n}} \Big|^2 = {1\over 2} \Big| {{{\textbf{r} - {\delta}_{\rm h}}\over \bf n}} \Big|^2-l_{\rm h}^{\rm bf}(\textbf{r}),
\label{lbfshift1a}
\end{equation}
and
\begin{equation}
{1\over 2} \Big| {{{\textbf{d} - \textbf{f}(\bar\theta_{\rm m})}\over \bf n}} \Big|^2 = {1\over 2} \Big| {{{\textbf{r} - {\delta}_{\rm m}}\over \bf n}} \Big|^2-l_{\rm m}^{\rm bf}(\textbf{r}).
\label{lbfshift2a}
\end{equation}
Equation~(\ref{expand1}) is therefore the difference between the
right-hand sides of the two equations above. We are interested in the mean and the standard deviation of this difference across noise realizations.
Let us first consider the two shifts, $l^{\rm bf}_{\rm h}$, and $l_{\rm m}^{\rm bf}$. These are nonzero, but they are not the leading terms of equation~(\ref{expand1}) that we seek.
We can show this by estimating their average magnitude across varying
noise realizations. This can be calculated analytically under the assumption
that the likelihood surface is Gaussian. If so, we can see that
\begin{equation}
\langle l_{\rm bf}\rangle = {k\over 2}\ln 2,
\label{lbfshift3}
\end{equation}
which is valid for both $l^{\rm bf}_{\rm h}$ and $l_{\rm m}^{\rm bf}$, and in which $k$ is the number of independent parameters in the likelihood. In our case, the model which
includes a halo mass component features 5 additional free parameters, so that
\begin{equation}
\langle l^{\rm bf}_{\rm h}-l^{\rm bf}_{\rm m}\rangle = {5\over 2}\ln 2 \approx 1.73.
\label{lbfshift4}
\end{equation}
This is considerably smaller than the log-likelihood differences we are after and we therefore ignore these terms from now on.
By expanding the chi-square
terms in Eqns.~(\ref{lbfshift1a}) and~(\ref{lbfshift2a}), we
finally obtain the leading terms we are interested in:
\begin{equation}
\Delta\mathscr{L} \left( \bf\zeta_{\rm h}, \textbf{r} \right) \approx {1\over 2} \Big(\Big|{\delta_{\rm h}\over{\bf n}} \Big|^2 - \Big|{\delta_{\rm m}\over{\bf n}} \Big|^2\Big) + {{\textbf{r}}\over{\textbf{n}}}\cdot{{\delta_{\rm h}-\delta_{\rm m}}\over{\textbf{n}}}.
\label{expandall}
\end{equation}
Here, the first term quantifies the inability of a model that does not include a perturbing halo mass component
to describe the perturbed data. This is what sensitivity mapping is after, and is independent of the noise realization.
The second term introduces scatter in the measurement of
the log-likelihood gain as a consequence of varying noise realizations, $\bf r$. The case we are interested in
is the one in which a model featuring a halo mass component provides a
satisfactory fit to the data,
$\delta_{\rm h}/\textbf{n}\ll 1$. This is also the case for mock data in which
the model used to generate the data is the same as that used to fit it: $\langle
\bf d\rangle = \bf f_{\rm h}$. In this case,
\begin{equation}
\Delta\mathscr{L} \left( \bf\zeta_{\rm h}, \textbf{r} \right) \approx {1\over 2} \Big|{\delta\over{\bf n}} \Big|^2 + {{\delta}\over{\textbf{n}}}\cdot{{\textbf{r}}\over{\textbf{n}}},
\label{expand2}
\end{equation}
where we have used $\delta\equiv\delta_{\rm m}$ for compactness.
By definition, the noise realization, $\bf r$, is a random variable with zero mean; furthermore,
by construction, the residuals,
$\delta$, are not correlated with $\bf r$. As a result, the second term in equation~(\ref{expand2}) averages to zero:
\begin{equation}
\langle\Delta\mathscr{L} \left( \bf\zeta_{\rm h} \right)\rangle \approx {1\over 2} \Big|{\delta\over{\bf n}} \Big|^2.
\label{meanDL}
\end{equation}
We can estimate the magnitude of the scatter introduced by the same term by taking the ratio, $\textbf{r}/\textbf{n}$, to be a set of independent normal random variables with unit variance, which results in a standard deviation of
\begin{equation}
{\rm std}(\Delta\mathscr{L} \left( \bf\zeta_{\rm h} \right)) \approx{} \sqrt{\Big|{\delta\over{\bf n}} \Big|^2} \sim \sqrt{2\langle\Delta\mathscr{L} \left( \bf\zeta_{\rm h} \right)\rangle}.
\label{stdDL}
\end{equation}
We test this scaling in Appendix~B, where Fig.~\ref{mean:std} shows experiments that confirm equation~(\ref{stdDL}).
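The statistics above are straightforward to check numerically. The sketch below draws many noise realizations for a fabricated residual map, $\delta$, and noise map, $\bf n$ (all numerical values are invented for illustration), and verifies that the mean and scatter of $\Delta\mathscr{L}$ follow equations~(\ref{meanDL}) and~(\ref{stdDL}):

```python
import numpy as np

rng = np.random.default_rng(0)

npix = 2000
n = np.full(npix, 0.1)                    # fabricated noise map
delta = rng.normal(0.0, 0.02, npix)       # fixed residuals f_m - <d>

mean_dl_pred = 0.5 * np.sum((delta / n) ** 2)      # eqn (meanDL)
std_dl_pred = np.sqrt(2.0 * mean_dl_pred)          # eqn (stdDL)

# Delta L ~ 0.5 |delta/n|^2 + (delta/n).(r/n), eqn (expand2),
# evaluated over many noise realizations r
dls = []
for _ in range(4000):
    r = rng.normal(0.0, n)                # r/n is unit-variance normal
    dls.append(mean_dl_pred + np.sum((delta / n) * (r / n)))
dls = np.asarray(dls)

print(dls.mean(), mean_dl_pred)   # mean converges to 0.5 |delta/n|^2
print(dls.std(), std_dl_pred)     # scatter converges to sqrt(2 <Delta L>)
```

The second term of equation~(\ref{expand2}) averages away, while its variance reproduces the quoted standard deviation.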
In conclusion, from equation~(\ref{meanDL}) we deduce that the mean log-likelihood increase scales with the square
of the maximum signal-to-noise ratio, $SN_{\rm max}$, and we note that the Bayesian
evidence will follow the same scaling.
From eqns.~(\ref{expandall}) and~(\ref{expand2}) we see that, in real data,
a scatter of the order of $\sqrt{2\Delta\mathscr{L} }$ should be expected.
In fact, the same scatter should be expected when mapping the sensitivity function using mock data that include a random noise realization. This implies that multiple noise realizations should be used and results averaged. However, the analysis
above also shows that this can be avoided by using noise-free mock data (i.e. $\textbf{r}=0$), while at the same time
using the appropriate noise map, $\bf n$, featuring the same maximum signal-to-noise as in the real data. This is
the strategy we adopt in this work.
\subsection{Fitting procedure}
Having chosen our macromodels, $\theta_{\rm m}$, we can introduce intervening LOS haloes with input parameters, ${\zeta}_{\rm h}$,
and simulate the resulting model fluxes, $\bf{f}(\theta_{\rm m}, {\zeta}_{\rm h})=\bf f$.
As outlined in Section~2, each determination of the likelihood gain, $\Delta\mathscr{L}({\zeta}_{\rm h}, \bf{r})$, requires two
non-linear searches. However, in our case, $\textbf{r}=0$, so that $(\hat\theta_{\rm m}, \hat\zeta_{\rm h})=(\theta_{\rm m}, \zeta_{\rm h})$,
or equivalently, $\mathscr{L}(\hat\theta_{\rm m}, \hat\zeta_{\rm h})=0$, by construction, and therefore,
\begin{equation}
\Delta\mathscr{L}(\zeta_{\rm h}) = \mathscr{L}(\bar\theta_{\rm m}) .
\label{DLsimple}
\end{equation}
Thus, for each set of halo parameters, $\zeta_{\rm h}$, we only require one non-linear search in order to determine the
best fitting parameters, $\bar\theta_{\rm m}$, of the model that does not include a halo mass component. This is also the fit with fewer free parameters -- and therefore both the fastest to run, and the least likely to get stuck in local minima during a likelihood optimization.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{arcsmap.pdf}
\caption{An illustration of the sky-projections of our maps of the
log-likelihood increase, $\Delta\mathscr{L}$, for our `arcs' lensing
configuration. Columns show a grid of different perturber
redshifts. Rows are for different perturber concentrations. The
perturber mass is fixed at $M_{\rm h}=10^{10}$~M$_{\rm \odot}$ in
all panels. Individual panels share the same colour scale. It is apparent that more concentrated haloes
result in larger $\Delta\mathscr{L}$ values. Also, the $\Delta\mathscr{L}$ values decrease away from the redshift of the main lens, $z_{\rm l}=0.5$, for both higher and lower perturber redshift.}
\label{arcsmap}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{quadmap.pdf}
\caption{Same as Fig.~\ref{arcsmap}, for our `quad' lensing configuration. The perturber halo has a mass of $M_{\rm h}=10^{9.5}$~M$_{\rm \odot}$.}
\label{quadmap}
\end{figure*}
\subsubsection{Gradient descent approach}
We perform these optimizations using an iterative gradient descent
algorithm. In essence, at each step, $i$, provisional
estimates of the best-fitting parameters, $\theta_{\rm m}^i$, and corresponding model fluxes, $\textbf{f}_i$, are used to
calculate the increments, $\delta\theta_{\rm m}^i$, which provide the best linear improvement of the model fluxes themselves.
That is, the increment, $\delta\theta_{\rm m}^i$, minimizes the log-likelihood,
\begin{equation}
\Delta\mathscr{L}_{i+1} ={1\over 2}\sum_{\rm pixels}\Big|{1\over{\bf n}}\Big[\textbf{d}- \Big(\textbf{f}_i+\left.{\partial \textbf{ f}\over\partial \theta_{\rm m}}\right|_{\theta_{\rm m}^i}\cdot \delta\theta_{\rm m}^i \Big)\Big]\Big|^{2},
\label{graddesceq}
\end{equation}
where $\left.{\partial \bf f / \partial \theta_{\rm m}}\right|_{\theta_{\rm m}^i}$ is the gradient of the model fluxes calculated
at $\theta_{\rm m}^i$. This minimization is easily solved as the corresponding linear least-squares problem.
The parameters at the subsequent step are therefore $\theta_{\rm m}^{i+1}=\theta_{\rm m}^i+\eta\,\delta\theta_{\rm m}^i$,
where $\eta$ is the so-called learning rate. Iterations are stopped when the corresponding likelihood value converges.
In order to avoid convergence at possible local maxima, we repeat the procedure over a set of different initialization
parameters, close to the input parameters, $\theta_{\rm m}$. In practice, we find this to be rarely necessary, possibly because
for most perturbing haloes the best-fitting parameters, $\bar\theta_{\rm m}$, are sufficiently close to the input values themselves, and the log-likelihood surface is smooth in our noise-free setting.
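The iteration of equation~(\ref{graddesceq}) amounts to a damped Gauss-Newton loop. The sketch below applies it to a toy two-parameter `flux' model with a finite-difference gradient; the exponential model, the learning rate and all numerical values are hypothetical stand-ins for the actual lensing macromodel:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)
n = np.full_like(x, 0.05)                 # noise map

def f(theta):
    # toy stand-in for the model fluxes f(theta_m)
    a, b = theta
    return a * np.exp(-b * x)

theta_true = np.array([2.0, 3.0])
d = f(theta_true)                         # noise-free mock data

def jacobian(theta, eps=1e-6):
    # central finite-difference estimate of d f / d theta
    J = np.empty((x.size, theta.size))
    for k in range(theta.size):
        dt = np.zeros_like(theta)
        dt[k] = eps
        J[:, k] = (f(theta + dt) - f(theta - dt)) / (2 * eps)
    return J

theta = np.array([1.5, 2.5])              # initialization near the input
eta = 0.5                                 # learning rate
for _ in range(60):
    res = (d - f(theta)) / n              # noise-weighted residuals
    J = jacobian(theta) / n[:, None]
    # best linear improvement: least-squares solve of eqn (graddesceq)
    step, *_ = np.linalg.lstsq(J, res, rcond=None)
    theta = theta + eta * step

print(theta)  # converges towards theta_true
```

In the noise-free setting the fixed point of this loop is the input model, which mirrors the behaviour described above for most perturbing haloes.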
Despite allowing for this redundancy, we find gradient descent to be efficient and inexpensive for models featuring
parametric sources. This is because the (noise-free) gradient maps, ${\partial \bf f /\partial \theta_{\rm m}}$, are well behaved and easily
estimated. In contrast, this is not so when non-parametric pixelized sources are used. We have tried using gradient descent
to optimize the parameters of the lens while, at each iterative step, a linear source inversion \citep[e.g.][]{Warren2003, Dye2005}
determines the source model. However, we find this approach to be unsuitable, despite the fact that in this case,
gradient descent is used on a significantly smaller parameter space (featuring 8 dimensions instead of 15).
Due to the nature of the semi-linear source inversion, the residuals, $\bf d-f_i$, contain
little information on the mass model itself, unless unrealistically high regularization
values \citep[e.g.][]{Suyu2006} are used.
\section{Mapping the sensitivity function}
For each of the two macromodels we investigate, we map the log-likelihood gain, $\Delta\mathscr{L}$, over the space of halo
parameters, $\zeta_{\rm h}=(x_{\rm h}, y_{\rm h}, z_{\rm h}, M_{\rm h}, c_{\rm
h})$, using a rectangular grid as follows.
\begin{itemize}
\item{The halo mass is varied between $8.0\leq\log M_{\rm h}/M_{\rm \odot}\leq 10.0$, at intervals of 0.5 dex;}
\item{Halo redshift is varied between $0.05\leq z_{\rm h}\leq0.95$, at intervals of $\delta z = 0.15$. }
\item{Halo concentrations deviate from the mass-concentration relation between
$\log c - \log c(M,z) \equiv \delta \log c = 4\sigma_{\log c}$ and $\delta\log c = -2\sigma_{\log c}$, in intervals of $\sigma_{\log c}$.
Here $\sigma_{\log c}$ is the lognormal scatter of the mass concentration relation, which we take to be independent of
halo mass and redshift, and fix at $\sigma_{\log c}=0.15$~dex \citep{Wang2020}.
For this exploration, we assume that $\delta \log c = 0$ means the
mass-concentration relation, $c(M,z)$, of CDM haloes,
as measured by \citet{Ludlow2016}.}
\item{Projected locations, $x_{\rm h}, y_{\rm h}$, are mapped over 50 intervals in both coordinates. We scale the total extent of our maps with redshift so as to achieve better
spatial resolution in the $(x,y)$ plane when $z_{\rm h}>z_{\rm l}$.}
\end{itemize}
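In code, the grid above amounts to the following sketch; the spacings follow the text, while the exact endpoints and the spatial extent (here fixed at its $z_{\rm h}\leq z_{\rm l}$ value) are our bookkeeping assumptions:

```python
import numpy as np
from itertools import product

sigma_logc = 0.15
log_M = np.arange(8.0, 10.25, 0.5)        # log10 M_h/Msun, 0.5 dex steps
z_h = np.arange(0.05, 0.96, 0.15)         # halo redshift, steps of 0.15
dlogc = np.arange(-2, 5) * sigma_logc     # shift from c(M, z), -2 to +4 sigma
xy = np.linspace(-2.25, 2.25, 50)         # arcsec; extent shrinks for z_h > z_l

n_points = log_M.size * z_h.size * dlogc.size * xy.size ** 2
print(n_points)                           # 5 * 7 * 7 * 50 * 50

# iterate lazily over (log M, z, dlogc, x, y) grid points:
for zeta_h in product(log_M, z_h, dlogc, xy, xy):
    break
```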
Figs~\ref{arcsmap} and~\ref{quadmap} illustrate some of our maps as sky-projections of the log-likelihood
increase, $\Delta\mathscr{L}$, for our `arcs' and `quad' configurations respectively. The first figure shows results for a perturbing halo
of mass, $M_{\rm h}=10^{10}$~M$_{\rm \odot}$, the second for $M_{\rm h}=10^{9.5}$~M$_{\rm \odot}$.
Columns correspond to different values of the perturber redshift, $z_{\rm h}$. Rows are for different halo concentrations.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{redshiftdep.pdf}
\caption{The average dependence of the log-likelihood increase, $\Delta\mathscr{L}$, on redshift (see text). The top row refers to our `quad' configuration,
the bottom one to our `arcs'. The right column is for a perturber of
mass, $M_{\rm h}=10^{10}$~M$_{\rm \odot}$, the left column for one of mass $M_{\rm h}=10^{9.5}$~M$_{\rm \odot}$. Profiles of different colours refer to different values of the perturber concentrations (in terms of the shift relative to the median concentration at the relevant redshift; see text). }
\label{redshiftdep}
\end{figure}
\subsection{Dependence on redshift}
\label{redshiftdependence0}
A common assumption in sensitivity mapping is that the perturber's redshift can be recast in terms of its effective mass.
\citet{Li2016} noticed that the mass and redshift of a perturber are highly
degenerate, and used this to introduce the idea of rescaling a perturber's
mass as a function of redshift.
\citet{Despali2018} investigated this equivalence further and provided a universal scaling between redshift and effective mass, which is obtained by requiring that
the map of deflection angles be minimally changed. In practice, at any fixed projected location,
the perturbing characteristics of a halo of mass, $M_{\rm h}$, at a redshift, $z_{\rm h}$, have been equated to
those of a halo located at the redshift of the lens and having an `effective' mass of $\log M_{\rm sh}=\log M_{\rm h}+\delta M(z_{\rm h})$.
The mass shift, $\delta M(z_{\rm h})$, is clearly zero at $z_{\rm h}=z_{\rm l}$, and is found to be monotonically increasing with
redshift, so that detecting a halo of fixed mass becomes more challenging with increasing redshift,
and is easiest for close-by perturbers.
Figs~\ref{arcsmap} and~\ref{quadmap} show that our calculations do not
support this working hypothesis of previous work. When fitting image
fluxes (rather than deflection angles), we find that haloes are harder
to detect when they are behind {\it or} in front of the main lens.
The details depend on the precise lensing configuration, mass and concentration
of the perturbing halo, as well as on the precise projected location. However, a decrease in $\Delta\mathscr{L}$ when the halo redshift deviates from the lens redshift
is a universal qualitative feature in our maps displayed by both our adopted lensing configurations
and across halo masses.
This means that the effect of foreground haloes of fixed mass is more easily `reabsorbed' by suitable
macromodels when these perturbers are at low redshifts.
In other words, degeneracies in the lens modelling make the detection of perturbers of the same mass
increasingly more difficult with decreasing redshift.
Fig.~\ref{redshiftdep} summarizes the redshift dependence of the log-likelihood
increase, showing the ratio between the log-likelihood increase for a perturber at some redshift, $\Delta\mathscr{L}(z_{\rm h},M_{\rm h},c_{\rm h})$,
divided by that at the redshift of the lens, $\Delta\mathscr{L}(z_{\rm h}=z_{\rm l},M_{\rm h},c_{\rm h})$.
This ratio is then averaged over the perturbers' projected locations, $(x,y)$. The top row refers to our `quad' configuration,
the bottom one to our `arcs'. The right column is for a perturber of
mass, $M_{\rm h}=10^{10}$~M$_{\rm \odot}$, the left column for one of
mass, $M_{\rm h}=10^{9.5}$~M$_{\rm \odot}$. Profiles of different colours refer to different values of the perturbers'
concentrations. As described, for most haloes, the log-likelihood increase {\it decreases} for redshifts that are
both higher and lower than the lens' redshift, $z_{\rm l}$. The size of this decrease depends systematically
and monotonically on halo concentration, with the less-concentrated haloes displaying sharper falloffs.
Depending on the lensing configuration, for haloes on the mass-concentration relation, $\Delta\mathscr{L}(z_{\rm h})$
decreases by a factor of between 1.15 and 1.6 between $z_{\rm h}=z_{\rm
l}$ and $z_{\rm h}=0.2$, and then
drops more sharply towards lower redshift.
For the highest halo concentrations, we see, instead, a mild apparent increase in detectability at low redshift. However, analysis of the top rows
of Figs.~\ref{arcsmap} and~\ref{quadmap} shows that this increase is due to the
fact that Fig.~\ref{redshiftdep} displays averages over projected
locations. At the highest concentrations, the projected area in which
perturbers result in `intermediate' log-likelihood values,
$50\lessapprox\Delta\mathscr{L}\lessapprox100$ (corresponding to orange hues in
Figs.~\ref{arcsmap} and~\ref{quadmap}) increases at the lowest redshifts. In the same regions, log-likelihood values are
lower at $z_{\rm h}=z_{\rm l}$, which drives the mild increase apparent in the average quantities shown in Fig.~\ref{redshiftdep}. On the other hand, even at the highest concentrations, it remains true that the peak values of the log-likelihood gain, $\Delta\mathscr{L}$, {\it decrease} with decreasing redshift.
In any case, this mild increase is limited
to extremely concentrated haloes, and, therefore, is not a representative behaviour.
The qualitative contradiction between the predictions using deflection angle maps and our results implies
that previous estimates of the number of detectable haloes obtained using the relation between mass and redshift
proposed in \citet{Despali2018} are likely to overestimate the number of low redshift haloes.
We will return to this point in Section~\ref{redshiftdependence1}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{concdep.pdf}
\caption{The average dependence of the log-likelihood increase, $\Delta\mathscr{L}$, on halo concentration (see text). The top row refers to our `quad' configuration,
the bottom one to our `arcs' configuration. The right column is for a perturber of
mass, $M_{\rm h}=10^{10}$~M$_{\rm \odot}$, the left column for one of mass,
$M_{\rm h}=10^{9.5}$~M$_{\rm \odot}$. Lines of different colours
refer to different values of the perturber
redshift. }
\label{concdep}
\end{figure}
\subsection{Dependence on concentration}
\label{concentrationdependence0}
Most previous studies have fixed the halo concentration to
the mean value for its mass, as given by the adopted mass-concentration relation. However, Figs.~\ref{arcsmap} and~\ref{quadmap} clearly show that concentration
makes a significant difference to halo detectability. From top to bottom, the values of the log-likelihood
gain decrease monotonically: the perturbations of less concentrated haloes are more easily reabsorbed
by changes of the macromodel parameters. In turn, at fixed mass, more concentrated
haloes are more easily detected.
Fig.~\ref{concdep} provides a summary of the dependence of the log-likelihood
increase, $\Delta\mathscr{L}$, on halo concentration. This shows the ratio between the log-likelihood increase,
$\Delta\mathscr{L}(z_{\rm h},M_{\rm h},c_{\rm h})$, for a perturber of any concentration, $c$, divided by that for one on the mass-concentration
relation, $\Delta\mathscr{L}(z_{\rm h},M_{\rm h},c_{\rm h}=c(M_{\rm h}, z_{\rm h}))$. We reiterate that
$\delta \log c=0$ in our mapping of the sensitivity function corresponds to the mass-concentration relation measured by \citet{Ludlow2016}. Similarly to Fig.~\ref{redshiftdep},
these ratios are then averaged over projected locations, $(x,y)$. The top row refers to our `quad' configuration,
the bottom one to our `arcs'. The right column is for a perturber
of mass, $M_{\rm h}=10^{10}$~M$_{\rm \odot}$, the left column
for one of mass, $M_{\rm h}=10^{9.5}$~M$_{\rm \odot}$. Profiles of different colours refer to different values of the perturbers'
redshift. It is clear that $\Delta\mathscr{L}$ increases monotonically with concentration in all cases.
The scalings appear qualitatively similar in all four panels, although, as for the dependence on redshift,
quantitative details are still dependent on the lensing configuration and other halo parameters.
In particular, we record a significant secondary dependence on redshift: the detectability of haloes
at the lowest redshifts is most strongly boosted by concentration. The magnitude of this boost
decreases as the redshift approaches that of the lens, where it has a minimum, and then increases again towards higher redshifts.
Notably, we find the dependence on concentration to be essentially exponential, at least when averaged
over projected locations:
%
\begin{equation}
\langle\Delta\mathscr{L}(\delta_{{\rm log}c})\rangle_{(x,y)} \sim 10^{\alpha\cdot \delta_{{\rm log}c}}\langle\Delta\mathscr{L}(\delta_{{\rm log}c}=0)\rangle_{(x,y)} \ .
\label{concexp}
\end{equation}
The top-right panel of Fig.~\ref{concdep} displays guiding lines for the exponent $\alpha=\{0.18,0.28,0.4\}$.
The log-likelihood increase grows roughly by a factor of between 1.5 and 2.5 for each additional $+1\sigma$
deviation from the mass-concentration relation. We will analyse the consequences of this for the
expected number of haloes in Section~\ref{concentrationdependence1}.
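As an illustration of equation~(\ref{concexp}), the sketch below recovers the exponent $\alpha$ from grid-averaged log-likelihood values; we take $\delta_{{\rm log}c}$ in units of $\sigma_{\log c}$, as implied by the quoted growth factor of 1.5 to 2.5 per $+1\sigma$, and all numerical inputs are synthetic:

```python
import numpy as np

dlogc = np.arange(-2.0, 5.0)                 # shift in units of sigma_logc
alpha_true, dl0 = 0.28, 40.0                 # assumed exponent and <Delta L>(0)
mean_dl = dl0 * 10.0 ** (alpha_true * dlogc)  # eqn (concexp)

# a straight-line fit in log10 space returns the exponent
alpha_fit = np.polyfit(dlogc, np.log10(mean_dl / dl0), 1)[0]
print(alpha_fit, 10.0 ** alpha_fit)  # exponent, and growth factor per +1 sigma
```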
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{NCDM.pdf}
\caption{The cumulative number of detectable haloes, $N_{\rm d}(<M)$, in a CDM universe. Lines of different colour refer to different
values of the log-likelihood threshold required for detectability.
The left panel refers to our quad configuration, the right panel
to our configuration featuring asymmetric arcs. The values displayed
are for
$SN_{\rm max}=50$.}
\label{NCDM}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=.9\textwidth]{Nofsn.pdf}
\caption{The number of detectable haloes of mass, $M_{\rm h}<10^{10}$~M$_{\rm \odot}$
(left panel), and $M_{\rm h}<10^{9.5}M_{\rm \odot}$ (right panel) in a CDM universe as
a function of the maximum $SN$ of the data. Values are normalised to the number of
detections predicted for $SN_{\rm max}=50$, which we use as a fiducial value in this work. Lines of different colour refer to different
values of the log-likelihood threshold required for detectability.}
\label{Nofsn}
\end{figure*}
\section{The population of detectable haloes}
Using our maps of the log-likelihood increase, $\Delta\mathscr{L}(\zeta_{\rm h})$, we are ready to perform the integral in
eqn.~(\ref{ndetectable}). As done in previous work, we use a sharp threshold, $\Delta\mathscr{L}_{\rm th}$, in the
log-likelihood increase to separate detectable haloes from non-detectable haloes:
\begin{equation}
p(\zeta_{\rm h}) =
\begin{cases}
1 &{\rm if}\ \Delta\mathscr{L}(\zeta_{\rm h})\geq \Delta\mathscr{L}_{\rm th}\\
0 &{\rm if}\ \Delta\mathscr{L}(\zeta_{\rm h})< \Delta\mathscr{L}_{\rm th},\\
\end{cases}
\label{pofdl}
\end{equation}
although we notice that the scatter characterized in equation~(\ref{stdDL}) would provide a
{natural} scaling for a smooth transition in the detection probability. This will be useful when preparing detailed
predictions for real data, but would not affect our conclusions here,
so in this study we retain a sharp transition for simplicity.
We parametrize the cosmological number density of DM haloes as suggested by \citet{Lovell2014}:
\begin{equation}
n(M_{\rm h},z | M_{\rm cut}) = n_{\rm CDM}(M_{\rm h},z)\Big(1+{M_{\rm cut}\over M_{\rm h}}\Big)^{-1.3},
\label{halomassf}
\end{equation}
where $n_{\rm CDM}(M_{\rm h},z)$ is the CDM halo number density, for which we adopt the form
derived by \citet{Sheth2001}. We take this distribution to be uniform over projected sky coordinates,
and assume that the distribution in concentration is lognormal, with a spread of $\sigma_{\log c}=0.15$~dex
(independent of mass, redshift and DM model). We take the median concentration to be dependent on
the DM model, and adopt the parametrization proposed by \citet{Bose16}:
\begin{align}
&c(M_{\rm h},z | M_{\rm cut}) = \nonumber \\
&c_{\rm CDM}(M_{\rm h},z)\Big[(1+z)^{0.026z-0.04}\Big(1+60{M_{\rm cut}\over M_{\rm h}}\Big)^{-0.17}\Big],
\label{mcrel}
\end{align}
in which the median concentration of CDM haloes is as recorded by \citet{Ludlow2016}.
These prescriptions allow us to calculate the integral of equation~(\ref{ndetectable}) using a Monte Carlo strategy.
We randomly sample the candidate haloes' $\zeta_{\rm h}$ according to their cosmological number density, and then check
whether they would be detectable using our maps of log-likelihood increase, which we linearly interpolate
between our rectangular grid points.
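Schematically, this Monte Carlo strategy can be sketched as follows. The suppressed mass function and the median concentration follow equations~(\ref{halomassf}) and~(\ref{mcrel}); the CDM baselines, the sampling ranges and the stand-in $\Delta\mathscr{L}$ values are fabricated placeholders rather than the maps used in this work:

```python
import numpy as np

rng = np.random.default_rng(2)

def mass_function_ratio(M, M_cut):
    # eqn (halomassf): n(M, z | M_cut) / n_CDM(M, z)
    return (1.0 + M_cut / M) ** -1.3

def c_median(M, z, M_cut, c_cdm):
    # eqn (mcrel), relative to a CDM median concentration c_cdm
    return (c_cdm * (1.0 + z) ** (0.026 * z - 0.04)
            * (1.0 + 60.0 * M_cut / M) ** -0.17)

M_cut, sigma_logc = 10.0 ** 8.0, 0.15

# sample candidate haloes and thin by the suppression of the mass function
M = 10.0 ** rng.uniform(8.0, 10.0, 100_000)
z = rng.uniform(0.05, 0.95, M.size)
keep = rng.uniform(size=M.size) < mass_function_ratio(M, M_cut)
M, z = M[keep], z[keep]

# lognormal concentration scatter about the median relation
c_cdm = 10.0                               # placeholder CDM median
c_med = c_median(M, z, M_cut, c_cdm)
c = 10.0 ** (np.log10(c_med) + rng.normal(0.0, sigma_logc, M.size))

# placeholder Delta L values and the sharp threshold of eqn (pofdl)
dL = 20.0 * (M / 1e9) * (c / c_cdm) ** 2
frac_detectable = np.mean(dL >= 20.0)
print(frac_detectable)
```

In the actual calculation the placeholder $\Delta\mathscr{L}$ values are replaced by a linear interpolation of our grid maps.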
\subsection{The effect of data quality}
Although our objective is not to provide absolute figures for the
number of detectable haloes, Fig.~\ref{NCDM} shows the cumulative distributions
of detectable LOS haloes, $N_{\rm d}$, we obtain for our two lens configurations. Both panels
are for a CDM universe and are calculated from our full maps of the
log-likelihood increase, that is including both the full redshift dependence
and the scatter in the mass concentration relation.
We have used $SN_{\rm max}=50$. We stress that these figures cannot be directly applied
to real data analysed with different techniques and featuring different
lensing configurations. For definiteness, we include all haloes with projected
coordinates in a $4.5''\times4.5''$ area at $z_{\rm h}\leq z_{\rm l}$, decreasing
to $2.1''\times2.1''$ at $z_{\rm h}=z_{\rm s}=1$, as displayed in Figs.~\ref{arcsmap} and~\ref{quadmap}.
Our two lensing configurations provide substantially different
numbers of detectable haloes: a quad configuration appears less prone
to modelling degeneracies and therefore more promising for the detection
of perturbers. The number of expected detections is also a strong function of
the imposed detection threshold.
In our quad lens configuration, thresholds of $\Delta\mathscr{L}_{\rm th}=\{$10, 20, 35, 50$\}$ yield $N_{\rm d}=\{$0.2, 0.07, 0.02, 0.01$\}$ detections with $M_{\rm h}<10^{10}$~M$_{\rm \odot}$
per lens, respectively.
The analysis of Section~\ref{stonoise} also allows us to address systematically
the dependence of the total number of detected haloes on the quality of the
data, which in this work we have characterized by the value of
$SN_{\rm max}$. Equation~(\ref{meanDL})
allows us to equate changes in the value of $SN_{\rm max}$ with
changes in the value of the log-likelihood threshold required for
detection, $\Delta\mathscr{L}_{\rm th}$:
\begin{equation}
N_{\rm d}(SN_{\rm max}, \Delta\mathscr{L}_{\rm th}) = N_{\rm d}(\alpha SN_{\rm max}, \alpha^{-2}\Delta\mathscr{L}_{\rm th})\ ,
\label{snchange}
\end{equation}
for any factor $\alpha$.
We use this equivalence to focus on how
the number of expected detections for a CDM universe would change
for higher or lower values of the signal-to-noise, i.e.\ for longer or shorter
exposure times.
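The equivalence of equation~(\ref{snchange}) follows directly from the $SN_{\rm max}^2$ scaling of equation~(\ref{meanDL}): increasing $SN_{\rm max}$ by a factor $\alpha$ multiplies every $\Delta\mathscr{L}$ value by $\alpha^2$, which is the same as dividing the detection threshold by $\alpha^2$. A minimal sketch, using a fabricated sample of $\Delta\mathscr{L}$ values:

```python
import numpy as np

rng = np.random.default_rng(3)
# fabricated Delta L values sampled over candidate haloes at SN_max = 50
dl_at_sn50 = 10.0 ** rng.uniform(-1.0, 2.5, 50_000)

th = 20.0                                  # detection threshold
alpha = 60.0 / 50.0                        # raise SN_max from 50 to 60
n_sn50 = np.sum(dl_at_sn50 >= th)
n_sn60 = np.sum(dl_at_sn50 >= th / alpha ** 2)   # eqn (snchange)
print(n_sn60 / n_sn50)                     # boost factor from the deeper data
```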
Fig.~\ref{Nofsn} shows the number of expected detections
of haloes of mass, $M_{\rm h}<10^{10}$~M$_{\rm \odot}$
(left panel) and $M_{\rm h}<10^{9.5}$~M$_{\rm \odot}$ (right panel) for varying
values of $SN_{\rm max}$, normalized by the number of detections
predicted for $SN_{\rm max}=50$. The figure displays the case of our
`quad' configuration; results for our `arcs' configuration are similar,
albeit with a more marked dependence on the $SN$ itself.
Lines of different colour refer to different
values of the log-likelihood threshold required for detectability.
As expected, the number of detectable haloes is a rapidly increasing function
of the signal-to-noise ratio. We find that, for a likelihood threshold
of $\Delta\mathscr{L}_{\rm th}=20$, an increase in signal-to-noise ratio from
50 to 60 corresponds to a doubling in the number of expected detections
$N_{\rm d}(M_{\rm h}<10^{10}$~M$_{\rm \odot})$. The same increase in data quality
makes the expected detections $N_{\rm d}(M_{\rm h}<10^{9.5}$~M$_{\rm \odot})$
grow by a factor of three. As shown in the same figure, these factors
are even larger if higher values of the log-likelihood
ratio are required for detectability. For a value of $\Delta\mathscr{L}_{\rm th}=50$,
analogous to what has been used in most previous studies, we find the corresponding
figures to be 2.5 and 5 for haloes of $M_{\rm h}<10^{10}$~M$_{\rm \odot}$ and
$M_{\rm h}<10^{9.5}$~M$_{\rm \odot}$ respectively. It should be noted that
an increase in the maximum $SN$ ratio from 50 to 60 corresponds to
an increase in the exposure time of a factor of $\approx 1.44$, {which is therefore smaller than the corresponding gain
in the number of detectable haloes.}
\begin{figure}
\centering
\includegraphics[width=.8\columnwidth]{nredshift_swapped.pdf}
\caption{Predictions for the redshift distribution of detected CDM haloes obtained when:
(i)~using the relation between halo mass and redshift proposed by \citet{Despali2018} (dashed line),
(ii)~using the full redshift dependence of the log-likelihood increase
(solid line). The top panel is for the arcs configuration, the
bottom panel for the quad configuration. Concentration effects
are not included in this comparison.}
\label{nredshift}
\end{figure}
\subsection{The effect of redshift dependence}
\label{redshiftdependence1}
We now examine the consequences of dropping the simplifying assumption
of a tight relationship between halo mass and redshift.
We isolate this effect by considering a population of CDM haloes
assumed to lie on the \citet{Ludlow2016} mass-concentration relation,
i.e.\ with reference to the previous Section, here we ignore the
scatter in halo concentration; we will return to the scatter shortly. Fig.~\ref{nredshift} shows
the redshift distribution of the population of detected haloes of
mass $M_{\rm h}<10^{10}$~M$_{\rm \odot}$ that we obtain for our two lensing configurations when:
\begin{itemize}
\item{using the mass shift proposed by \citet{Despali2018} and described in Section~\ref{redshiftdependence0},
shown by a dashed line\footnote{We use the same
mass shift, $\delta M(z)$, for all projected coordinates, $(x,y)$.};}
\item{using the full redshift dependence of our log-likelihood maps, shown by a solid line.}
\end{itemize}
As expected, the two curves match at $z_{\rm h}=z_{\rm l}$, but we
predict significantly fewer detections
for foreground haloes, a reflection of the dependence on redshift of the log-likelihood increase described in
Sect.~\ref{redshiftdependence0}. We also find that collapsing the
redshift axis leads to an overestimate of the number of
detectable haloes at $z_{\rm h}>z_{\rm l}$ as well, though by a smaller factor.
The magnitude of the global overestimate varies with the lensing
configuration. For definiteness, we use a threshold of
$\Delta\mathscr{L}_{\rm th}=20$ in Fig.~\ref{nredshift}. For the arcs configuration,
the overestimate is a factor 1.95; for the quad configuration it is
a factor of 1.63.
We stress that these figures should not be used to `correct' previous measurements of the number of detectable haloes
and are meant only as an estimate of the magnitude of the effect.
\begin{figure}
\centering
\includegraphics[width=.9\columnwidth]{cshift.pdf}
\caption{The distribution of halo concentration for the population of detectable haloes in a CDM universe.
The top panel shows a two-dimensional histogram of all detectable haloes in the plane of halo mass
and concentration shift from the median relation, in units of the lognormal spread in the
mass-concentration relation. The colour scale shows the number of
perturbers in each $M$--$c$ pixel for a detection threshold of
$\Delta\mathscr{L}_{\rm th}=35$. Lines of different colours display the mean concentration shift, $\delta\log c/\sigma_{\log c}$, as a function
of mass for different thresholds for detectability (see text).
The bottom panel shows the distribution of concentration for all
detectable haloes of $M_{\rm h}<10^{10}$~M$_{\rm \odot}$, as a function of the thresholds for detectability. For reference, the dashed line shows the
parent distribution of all cosmological haloes. }
\label{cshift}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Nshift.pdf}
\caption{The boost to the cumulative number of expected detections, $N_c(<M_{\rm h})$, resulting
from including the scatter in the mass-concentration relation for our `quad' configuration.
Lines of different colours correspond to different thresholds for detectability. }
\label{Nshift}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\columnwidth]{Ncofn.pdf}
\caption{The boost to the cumulative number of expected detections, $N_c(<M_{\rm h})$, resulting
from including the scatter in the mass-concentration relation, as a function of the
expected number of detections.}
\label{Ncofn}
\end{figure}
\subsection{The effect of the scatter in concentration}
\label{concentrationdependence1}
We now consider the effect of accounting for scatter in the mass-concentration relation.
We again focus on a population of CDM haloes, and compare the case in which all haloes
are assumed to lie exactly on the median mass-concentration relation, to the case in which a lognormal
scatter is included. For definiteness, in both cases we use the full redshift dependence of the
log-likelihood increase, $\Delta\mathscr{L}$, and, for simplicity, we restrict
attention to our quad configuration.
Results are analogous for our `arcs' lensing morphology.
Fig.~\ref{cshift} shows the distribution of detectable haloes in the space of halo mass
and shift relative to the median halo concentration, for the case in which the scatter in concentration is accounted for.
The two-dimensional histogram in the top panel is for an assumed
threshold, $\Delta\mathscr{L}_{\rm th}=35$. The vertical
dashed line shows the median mass-concentration relation, $\delta\log c=0$. It is clear that,
thanks to the dependence on concentration of the log-likelihood increase described in
Section~\ref{concentrationdependence0}, most detectable haloes are high-concentration haloes.
This is quantified in the bottom panel of the figure, which collapses the mass axis to show
the distribution of detected haloes over concentration shifts. For reference, the dashed line
shows the Gaussian distribution of all cosmological haloes. Coloured lines show the distribution of
the population of detectable haloes for different values of the
log-likelihood threshold, $\Delta\mathscr{L}_{\rm th}$.
Haloes with high concentration achieve $\Delta\mathscr{L}>\Delta\mathscr{L}_{\rm th}$ more easily, so that higher
thresholds for detectability correspond to increasingly concentrated populations of
detectable haloes. For $\Delta\mathscr{L}_{\rm th}=50$ and our quad configuration, we find
$\langle\delta\log c/\sigma_{\log c}\rangle = 1.25$ when including all
haloes of $M_{\rm h}<10^{10}$~M$_{\rm \odot}$.
The average concentration shift increases further for decreasing
perturber masses,
as shown by the coloured lines in the top panel. These represent the `mass-concentration relation
of detectable haloes'. Lower mass haloes require stronger concentration boosts to achieve
detectability, so that for $\Delta\mathscr{L}_{\rm th}=50$,
$\langle\delta\log c/\sigma_{\log c}\rangle = 2.2$ for haloes of $M_{\rm h}\approx10^{9.2}$~M$_{\rm \odot}$.
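A toy calculation (ours, not from this work) makes this selection effect quantitative: if detectability amounted to requiring $\delta\log c/\sigma_{\log c}$ above some effective cut $a$, the detectable population would follow a truncated Gaussian whose mean shift grows with $a$, mimicking the trends with halo mass and detection threshold described above.

```python
import math

def mean_shift_above_cut(a):
    # Mean of a standard normal truncated below at a (in units of
    # sigma_logc): E[x | x > a] = phi(a) / (1 - Phi(a)).  In this toy
    # model a harsher effective cut (lower halo mass or higher
    # log-likelihood threshold) selects an increasingly concentrated
    # population of detectable haloes.
    phi = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))
    return phi / (1.0 - Phi)

# mean concentration shift of the selected population for harsher cuts
shifts = [mean_shift_above_cut(a) for a in (0.0, 0.5, 1.0, 1.5)]
```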
\begin{figure*}
\centering
\includegraphics[width=.9\textwidth]{Nofmhf.pdf}
\caption{A comparison between the cumulative number of expected detections for our `quad'
configuration (threshold for detection, $\Delta\mathscr{L}_{\rm th}=30$) when including both
(i)~the scatter in the mass-concentration relation and (ii)~the
dependence of the median concentration on the DM model (left), and when
concentration effects are neglected (right). Concentration substantially
enhances the spread between
the expected detections in WDM models with different cutoff masses, $M_{\rm cut}$.}
\label{Nofmhf}
\end{figure*}
The result is that the scatter in the mass-concentration relation
boosts the number of expected detections. At any fixed halo mass, there is a fraction of
haloes with high enough concentration to become detectable once the scatter in the mass-concentration
relation is accounted for, a fraction that is lost when $c=c(M,z)$ is imposed.
This is quantified in Fig.~\ref{Nshift}, which shows the ratio between the cumulative number of
detectable haloes, $N_c(<M_{\rm h})$, predicted when accounting for the scatter in the mass-concentration
relation, to the corresponding number obtained when $c=c(M,z)$ for all haloes.
Lines of different colours refer to different values of the detection
threshold, $\Delta\mathscr{L}_{\rm th}$.
For all thresholds, the ratios are a strong function of halo mass. The
onset of the sharp rise identifies
the halo mass that is not detectable at that threshold if $c=c(M,z)$, but for which detections
are possible because of concentration effects. For $\Delta\mathscr{L}_{\rm th}=50$, even including all haloes with $M_{\rm h}<10^{10}$~M$_{\rm \odot}$,
concentration effects boost the number of detectable haloes by a factor of 2.75 for our `quad'
configuration. Since our `arcs' configuration leads to lower values of the log-likelihood
increase and fewer detections, the corresponding boost is a factor $\approx26$,
exemplifying, on the one hand, the importance of concentration effects,
and, on the other, the need for estimates tailored to the specific lensing configuration.
Fig.~\ref{Ncofn} displays the same boost factor as a function of the number of expected
detections, $N_c$ (which include the effect of
concentration). Different values of the expected number of detections correspond
to different values of the data $SN$ (or equivalently, different values of the log-likelihood threshold for detection). Once
again we see that the exact figures depend on the lens configuration;
for example,
here the arcs configuration appears more sensitive to concentration
effects than the quad.
\subsection{The effect of concentration on distinguishing DM models}
\label{concentrationdependence2}
In most previous estimates of the dependence of the number of
detectable haloes on the properties of the DM model, particularly the
mass of a WDM particle (or, equivalently, the cutoff mass in the
mass function), it was assumed that all haloes lie exactly on the
median mass-concentration relation of CDM haloes.
Not only is this a poor approximation, but it also leaves differences in
halo abundance as the only statistic with which to differentiate among
models. Our results show that concentration is a crucial
ingredient for halo detectability. Warmer WDM models make low-mass
haloes that are progressively less concentrated, as quantified by
equation~\ref{mcrel}. This makes the concentration parameter potentially helpful in
boosting the spread among the expected numbers of detections in
different DM models.
We test this in Fig.~\ref{Nofmhf}, in which we show, side by side, the cumulative number
of expected detections, $N_{\rm d}(<M_{\rm h})$, for a range of WDM models,
parameterized by their cutoff mass, $M_{\rm cut}$. The panel on the right displays results
for the case in which concentration effects are neglected whereas the
panel on the left includes
both: (i)~the scatter in the mass-concentration relation and (ii)~the dependence of the median concentration
on the DM model. For definiteness, we show results pertaining to our `quad' configuration,
using a threshold for detection of $\Delta\mathscr{L}_{\rm th}=30$.
It is clear that concentration effects significantly enhance the dependence of the
expected number of detections on the DM model. Reduced
concentration values act together with
reduced cosmological abundances to determine the expected number of detections.
Once again, while the qualitative trend is clear and the magnitude of the effect significant,
precise values are dependent on the lensing configuration and on the value of the
threshold chosen for detection. We attempt to quantify how much concentration
effects can actually sharpen substructure lensing constraints in Section~\ref{final}.
\section{Discussion and Conclusions}
We have quantified the ability of low-mass DM haloes along the
line-of-sight to perturb strong gravitational lenses, and explored how
this depends on halo properties. This is a fundamental ingredient of
sensitivity mapping, that is, the process of assessing which
perturbers, out of the cosmological population of haloes, would
actually be detectable when modelling strong lensing data. It is
impossible to quantify the number of expected detections in different
DM models in a given observational dataset without building the
sensitivity function. Therefore, sensitivity mapping is a key aspect
of placing constraints on the identity and properties of DM from the
number of perturbing haloes detected in strong lensing studies.
We have adopted a likelihood-based approach, i.e.\ we differentiate
between detectable and non-detectable haloes according to the likelihood
gain, $\Delta\mathscr{L}$, associated with including a halo mass component in the
lens modelling. Some previous studies have proposed using instead the Bayesian
evidence for this comparison. It should be stressed that
both approaches are equally valid. As long as the same criterion is
consistently applied both to measure the sensitivity function (and
therefore to make predictions for the different DM models) and to detect
perturbers in the data, both approaches will return the correct inference on the DM properties, provided the models used in sensitivity mapping
share the same complexities as the real data.
At this stage, it cannot be excluded that the sensitivity functions
derived using likelihood or Bayesian evidence may exhibit some
differences when compared side by side, possibly reflecting that the
two criteria may lead to different sensitivity to perturbers
in different regions of the parameter space. However, it seems quite
unlikely that this would happen systematically at the high levels of
significance that we are considering here and that have become the
norm in substructure lensing. At present, an evidence-based sensitivity function
has not been derived as a function of either redshift or halo
concentration, which is where our main findings lie. Therefore, we are unable
to make a direct comparison and have limited our analysis to the differences with respect to the approximate strategies that have been adopted so far.
Rather than attempting to quantify the number of expected detections for
specific observed strong lenses, we have focused on building an understanding
of the sensitivity function itself, and how this scales with some of the crucial
parameters at play. We have concentrated on the importance of image
depth, and
shown that, as the log-likelihood difference
scales with $SN^2$, high signal-to-noise data are extremely beneficial in that they allow
the detection of a larger number of perturbers, particularly of low-mass
haloes (see Fig.~\ref{Nofsn}). We have also shown that the specific noise realization introduces
scatter in the log-likelihood difference, of magnitude $\approx\sqrt{2\Delta\mathscr{L}}$
(see Sect.~\ref{stonoise}),
which suggests that a smooth link
between the probability of detection, $p$, and the log-likelihood
gain, $\Delta\mathscr{L}$, rather than a sharp threshold,
may be a better choice when comparing with real data.
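One hypothetical implementation of such a smooth link (a sketch of ours, not a prescription from this work): treat the measured gain as Gaussian-scattered around $\Delta\mathscr{L}$ with standard deviation $\sqrt{2\Delta\mathscr{L}}$, and set $p$ to the probability of exceeding the threshold.

```python
import math

def detection_probability(dL, dL_th):
    # Hypothetical smooth detection probability: the noise realization
    # scatters the measured log-likelihood gain around its expectation dL
    # with standard deviation ~ sqrt(2 * dL); p is then the Gaussian
    # probability of exceeding the threshold dL_th.
    sigma = math.sqrt(2.0 * dL)
    return 0.5 * (1.0 + math.erf((dL - dL_th) / (sigma * math.sqrt(2.0))))
```

By construction $p=1/2$ when $\Delta\mathscr{L}=\Delta\mathscr{L}_{\rm th}$, and the transition width scales with $\sqrt{2\Delta\mathscr{L}_{\rm th}}$, as suggested by the scatter measured in Sect.~\ref{stonoise}.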
We find that our two different lens configurations yield significantly different
numbers of expected halo detections, which indicates that some lensing morphologies
(a quad configuration in our case) are more valuable for strong lensing
analyses (see Fig.~\ref{NCDM}). This will be useful in selecting
lenses to target with deeper observations.
We also note that our estimates for the total number of detectable haloes are
somewhat lower than what has been suggested by similar studies for the same values of
lens and source redshifts, and for similar data quality. This may be due to
the increased flexibility of our lens modelling, including the possibility of
shifts in the lens centre \citep[see][]{Vegetti2014}, as well as the assumed power-law profile slope \citep[see][]{Li2016, Despali2018}.
With particular reference to \citet{Li2016} and \citet{Despali2018},
the fact that we perform fully non-linear searches when optimising our
macromodels certainly enables
them to reproduce better the perturbed data without the need of including
a halo mass component, hence lowering the log-likelihood gain. If anything,
this highlights the importance of using exactly the same techniques to both i)~model
real data and make perturber detections and ii)~produce estimates of the expected
detections, as any mismatch would inevitably introduce systematic biases.
We then concentrated on the role of halo redshift
and halo concentration. In previous work,
simplifications had been made to collapse these axes, in order to make
the calculation of the sensitivity function computationally feasible.
Concerning the redshift of the perturber, we have shown that, contrary to previous understanding,
it becomes increasingly challenging to detect perturbing haloes in front of the main lens
when they get closer to the observer (see Figs.~\ref{arcsmap},~\ref{quadmap} and~\ref{redshiftdep}). This implies that previous studies of the number of detectable haloes
have likely overestimated the number of foreground detections, at redshifts $z_{\rm h}<z_{\rm l}$.
The exact magnitude of this overestimation appears to depend on the specific lensing
configuration. As a reference, our experiments show this factor to be between 1.5 and 2 (see Fig.~\ref{nredshift}).
These previous estimates are not based on a calculation of the Bayesian
evidence at $z_{\rm h}\neq z_{\rm l}$. Therefore, it is not currently
known whether an evidence-based criterion for detection would indeed
yield a dependence of the perturbers’ detectability on redshift that
is analogous to the one we measure. Certainly, we find that the
strategy adopted so far (of using deflection angles as proxies)
underestimates the degree of degeneracy in the lens modelling and
therefore artificially makes the detection of foreground perturbers
easier than it actually is.
Concerning concentration, we find that detectability is a strong function of
halo concentration, such that the population of detectable haloes is, in fact, a population
of systematically high-concentration haloes (see
Fig.~\ref{cshift}). The shift in the average concentration
relative to the mass-concentration relation becomes increasingly
large for haloes of lower masses, and increases when a higher threshold
for detectability is adopted. For a threshold of $\Delta\mathscr{L}_{\rm th}=50$,
the average shift in the concentration of haloes with masses below $10^{10}$~M$_{\rm \odot}$
is about $1.25\sigma_{\log c}$, where $\sigma_{\log c}$ is the lognormal scatter in the
mass-concentration relation.
Crucially, accounting for the scatter in the
mass-concentration relation results in a boost to the number of detectable haloes.
This boost is a strong function of the lensing configuration and of the threshold for detectability (or, equivalently, of the data quality as quantified by the maximum signal-to-noise; see Figs.~\ref{Nshift} and~\ref{Ncofn}).
As reference, for a combination of a lens configuration and detection threshold
that results in a total of 0.03 detections with $M_{\rm h} < 10^{10}$~M$_{\rm \odot}$ per lens in a CDM universe --
which is roughly comparable to what was previously predicted for lenses with HST data -- this boost
amounts to a factor of $\approx 2.5$, and quickly grows to $\gtrsim10$ for the detections
expected at $M_{\rm h} \lesssim 10^{9.5}$~M$_{\rm \odot}$.
Unfortunately, without a tailored study, it is impossible to provide a precise quantification
of how the two effects above would combine to affect previous estimates of the expected number
of detections in real observed strong lenses, especially since the two effects
have opposite signs. The overestimate related to the redshift dependence is
sensitive to the lensing configuration and, certainly, to the redshifts of
both lens and source, which we have kept fixed here.
The underestimate due to the concentration dependence is
a strong function of lensing configuration and data quality. It would appear that
the correction due to concentration is larger than that due to the redshift dependence,
but further study is required to ascertain in which regime that is the case, and by
how much.
What we can already establish in the present study is how
concentration effects can facilitate differentiating WDM models
with different cutoff halo masses. We have shown that taking into
account the dependence of the median halo concentration on the DM
model increases the spread among the number of expected detections
(see Fig.~\ref{Nofmhf}). For warmer models, lower halo concentrations
conspire with lower cosmological halo abundances increasingly to
suppress the number of detectable haloes. The effect of halo
concentration had not been included in previous studies, leaving only halo
abundances to differentiate among DM models, thereby
making them harder to distinguish in strong lensing studies.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{limits.pdf}
\caption{The change in the limits on the WDM cutoff mass, $M_{\rm cut}$,
from including concentration effects, at a fixed number of expected detections for a CDM universe: $N_{\rm d, CDM}=1$.
Lines show likelihood ratios (see text)
resulting from the detection of (1, 2, 3) perturbers, respectively,
from the top to the bottom
row. Dashed lines display the inference based on predictions that ignore concentration effects. These are included in the solid lines. }
\label{limits}
\end{figure}
\subsection{Sharper DM constraints from substructure lensing}
\label{final}
In order to quantify the extent to which concentration effects can sharpen
future substructure lensing constraints,
we assume we have a set of strong lenses such that the total expected number of detections in CDM is
\begin{equation}
N_{\rm d, tot}(M_{\rm h}<10^{9.5}\,\textrm{M}_{\rm \odot}, {\rm CDM})
=1.
\label{n1}
\end{equation}
We ignore the contribution of satellite haloes, and assume that the figure above
only includes haloes along the LOS, on which we have focussed in this work.
While we are not able to tailor our analysis quantitatively to any specific set of observed strong lenses,
this figure is representative of what is achievable with current HST data \citep{Vegetti2018,Ritondale2019},
and therefore provides a useful reference point.
We use our maps of log-likelihood increase, $\Delta\mathscr{L}$, to calculate the number of expected detections in the
same set of lenses for WDM models with different cutoff masses, $M_{\rm cut}$.
We do so separately for our `quad' and `arcs' lensing configurations,
requiring that the number of lenses in the two separate sets be such
as to satisfy Equation~\ref{n1} separately\footnote{
We require a detection threshold, $\Delta\mathscr{L}_{\rm th}=30$, for our quad
configuration and $\Delta\mathscr{L}_{\rm th}=20$
for our arcs configuration, which, as we have shown, leads to systematically fewer detections
per lens.}. Furthermore, we set up predictions both for the case in which concentration effects are accounted for
and the case in which they are ignored. Then, we compare the inferences on the DM model that would result
from actually detecting $i = (1, 2, 3)$ individual haloes, in the two
different configurations. These are displayed in Fig.~\ref{limits}, which
shows the likelihood ratio,
\begin{equation}
R = {{P( i | N_{\rm d, tot}(M_{\rm cut}))}\over {P( i | N_{\rm d, tot}(M_{\rm cut}=10^6 \, {\rm M_\odot}))}},
\label{likratio}
\end{equation}
where $P(\cdot|m)$ is the Poisson probability distribution with mean, $m$, and $i$ is the number of actual
detections. Inference resulting from predictions that ignore
concentration effects are shown with dashed lines, while
the likelihood ratio obtained when accounting for concentration
effects is shown with solid lines. The vertical
lines indicate the limits on the WDM cutoff mass corresponding to a likelihood ratio of $R=0.05$.
The right and left columns correspond, respectively,
to the two lensing configurations. Details are only marginally different and the magnitude of the effect is very similar in the two cases:
at fixed number of expected haloes in a CDM universe, concentration effects make
constraints on the WDM cutoff mass significantly more stringent. The suppression in the
concentration of WDM haloes enhances the effect of lower cosmological halo abundances,
allowing constraints that are about one order of magnitude more
stringent in $M_{\rm cut}$.
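The likelihood ratio of Equation~\ref{likratio} reduces to a ratio of Poisson probabilities; a minimal numerical sketch (the WDM prediction of 0.1 expected detections is a hypothetical value, not one of our results):

```python
import math

def poisson_pmf(i, m):
    # P(i | m): probability of i detections when the expected number is m
    return math.exp(-m) * m ** i / math.factorial(i)

def likelihood_ratio(i, n_wdm, n_cdm=1.0):
    # R of Equation (likratio): the WDM prediction n_wdm is compared with
    # the CDM-like reference prediction n_cdm (equal to 1 by construction,
    # Equation (n1)); i is the number of actual detections.
    return poisson_pmf(i, n_wdm) / poisson_pmf(i, n_cdm)

r1 = likelihood_ratio(1, 0.1)
r2 = likelihood_ratio(2, 0.1)
```

Since $R(i)\propto (N_{\rm d, tot}(M_{\rm cut})/N_{\rm d, tot}^{\rm CDM})^{\,i}$, each additional detection suppresses $R$ by the same factor, which is why detecting more haloes tightens the limits in Fig.~\ref{limits}.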
\subsection{Outlook}
Our results bring renewed confidence to the field of halo detection
with strong lensing data, and strengthen the prospect that meaningful
constraints can be obtained from the analysis of current optical data.
A number of previous works have contributed to the realization that it is extremely challenging
to use current optical HST data to obtain constraints on the
cutoff of the DM halo mass function that are competitive with those
obtained from the satellites of the Milky Way or measurements of the
Lyman-$\alpha$ forest \citep[see][and references therein]{Enzi2021}.
This is because, if halo abundances alone are used to differentiate
between CDM and WDM,
in order to be able to probe a WDM model with a cutoff mass, $M_{\rm
cut}$ (the mass below which the abundance of haloes declines sharply), it is necessary to
be sensitive to perturbers of that halo mass and below.
However, evidence is mounting that detecting haloes of mass $M_{\rm h}\approx10^{8.5}$~M$_{\rm \odot}$
is extremely challenging with current lensing data, and therefore that
it would be very difficult to place competitive constraints.
Concentration effects change this picture completely. For example, the limits displayed in Fig.~\ref{limits}
stem from detections of haloes of mass $M_{\rm h}>10^{9.5}$~M$_{\rm \odot}$ --
which is realistic with current HST data -- but they can rule out values
of the cutoff mass scale,
$M_{\rm cut}\gtrsim10^8$~M$_{\rm \odot}$. This is a direct reflection of the effects of halo
concentration, which, in contrast to halo abundances, first affects
haloes of masses significantly {\it above} the
cutoff mass itself. For this reason, concentration effects allow substructure lensing studies to probe
WDM models with cutoff masses at least one order of magnitude {\it below} the lowest sensitivity mass scale.
This implies that substructure lensing is, in fact, a much more
sensitive probe of the identity of the DM
than had been previously recognized.
\section*{Software Citations}
This work used the following software packages:
\begin{itemize}
\item
\href{https://github.com/astropy/astropy}{\texttt{Astropy}}
\citep{astropy1, astropy2}
\item
\href{https://bitbucket.org/bdiemer/colossus/src/master/}{\texttt{Colossus}}
\citep{colossus}
\item
\href{https://github.com/matplotlib/matplotlib}{\texttt{matplotlib}}
\citep{matplotlib}
\item
\href{https://github.com/numpy/numpy}{\texttt{NumPy}}
\citep{numpy}
\item
\href{https://github.com/Jammy2211/PyAutoLens}{\texttt{PyAutoLens}}
\citep{Nightingale2015, Nightingale2018, Nightingale2021}
\item
\href{https://www.python.org/}{\texttt{Python}}
\citep{python}
\item
\href{https://github.com/scipy/scipy}{\texttt{Scipy}}
\citep{scipy}
\end{itemize}
\section*{Acknowledgements}
NCA is supported by an STFC/UKRI Ernest Rutherford Fellowship, Project Reference: ST/S004998/1.
JWN and RJM acknowledge funding from the UKSA through awards ST/V001582/1 and ST/T002565/1; RJM is also supported by the Royal Society. ARR is supported by the European Research Council Horizon2020 grant `EWC' (award AMD-776247-6). CSF acknowledges support by the European Research Council (ERC) through Advanced Investigator grant DMIDAS (GA 786910). RL acknowledges the support of the National Natural Science Foundation of China (Nos 11773032, 12022306).
This work used the DiRAC Data Centric system at Durham
University, operated by the Institute for Computational Cosmology
on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk).
This equipment was funded by BIS National E-infrastructure capital
grant ST/K00042X/1, STFC capital grants ST/H008519/1 and
ST/K00087X/1, STFC DiRAC Operations grant ST/K003267/1 and
Durham University. DiRAC is part of the National E-Infrastructure.
\section{Introduction}
The fractional Brownian motion (fBm) is a popular model in financial mathematics, economics and the natural sciences. As is well known, the fBm $B^H$ is the only continuous Gaussian process which is self-similar with stationary increments, depending on an index $0<H<1$. Moreover, the sample paths of a fBm with Hurst index $H$ are H\"older continuous up to order $H$.
For a real mean zero Gaussian process with stationary increments, Orey suggested the following definition of index.
\begin{dfn}[see \cite{or}, \cite{rn0}] Let $X$ be a real-valued mean zero Gaussian stochastic process with stationary increments and continuous in quadratic mean. Let $\s_X$ be the incremental variance of $X$ given by $\s_X^2(h)=\E[X(t+h)-X(t)]^2$ for $t,h\gs 0$. Define
\begin{equation}\label{orey1}
\widehat\b_*:=\inf\Big\{\b>0\dvit \lim_{h\downarrow 0}\frac{h^\b}{\s_X(h)}=0\Big\}=\limsup_{h\downarrow 0}\frac{\ln\s_X(h)}{\ln h}
\end{equation}
and
\begin{equation}\label{orey2}
\widehat\b^*:=\sup\Big\{\b>0\dvit \lim_{h\downarrow 0}\frac{h^\b}{\s_X(h)}=+\infty\Big\}=\liminf_{h\downarrow 0}\frac{\ln\s_X(h)}{\ln h}\,.
\end{equation}
If $\widehat\b_*=\widehat\b^*$ then $X$ has the Orey index $\b_X$.
\end{dfn}
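For fBm one has $\s_{B^H}(h)=h^H$, so the ratio $\ln\s_{B^H}(h)/\ln h$ appearing in (\ref{orey1}) and (\ref{orey2}) is constant in $h$, and both $\widehat\b_*$ and $\widehat\b^*$ equal $H$. A numerical illustration (ours, not part of the original text):

```python
import math

H = 0.3  # Hurst index; any value in (0, 1) behaves the same way

def sigma_fbm(h):
    # incremental standard deviation of fBm: E[(B(t+h) - B(t))^2] = h**(2H)
    return h ** H

# ln sigma(h) / ln h for a sequence of small h: constant, equal to H,
# so limsup = liminf = H and the Orey index of fBm is H
ratios = [math.log(sigma_fbm(h)) / math.log(h) for h in (1e-2, 1e-4, 1e-6)]
```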
If a Gaussian process with stationary increments has the Orey index, then almost all of its sample paths satisfy a H\"older condition of order $\g$ for each $\g\in (0,\b_X)$ (see Section 9.4 of Cram\'er and Leadbetter \cite{crle}). For fBm $B^H$ with Hurst index $0<H<1$ the Orey index is $\b_X=H$. So we have a class of Gaussian processes with stationary increments parameterized by the Orey index $\b_X$.
Recently there have been two extensions of fBm which preserve many of its properties but do not have stationary increments, except for particular parameter values. One of them is the so-called sub-fractional Brownian motion (sfBm) and the other is the bifractional Brownian motion (bifBm). Thus it is natural to extend the definition of the Orey index to Gaussian processes so that one can consider processes which may not have stationary increments and are H\"older up to the Orey index.
We shall give such an extension of the Orey index. As will be proved, the processes sfBm and bifBm satisfy this extended definition of the Orey index and are H\"older up to the Orey index. Moreover, for fBm, sfBm, and bifBm, the Orey index coincides with the self-similarity parameter. Therefore it is enough to construct an estimate of the Orey index and to consider its asymptotic behavior, instead of estimating the parameters of each of the processes under consideration.
Many authors have already considered the asymptotic behavior of the first- and second-order quadratic variations of Gaussian processes. The
conditions in these papers were expressed in terms of the covariance of a Gaussian process and depended on some parameter $\g\in(0,2)$. If a Gaussian process has the Orey index, then the conditions on the covariance function may be expressed by means of it. As will be shown below, the Orey index can be obtained for some well-known Gaussian processes.
Moreover, to consider stochastic differential equations (SDEs) driven by processes of bounded $p$-variation, we need to know when the Riemann--Stieltjes (RS) integral is defined. For Gaussian processes the Orey index helps to obtain such conditions.
The purpose of this paper is to give an extension of the definition of the Orey index to second order stochastic processes which may not have stationary increments, and to estimate the Orey index of a Gaussian process from discrete observations of its sample paths.
Norvai\v sa \cite{rn2} extended the definition of the Orey index to second order stochastic processes which may not have stationary increments. He showed that sfBm and bifBm satisfy this extended definition. In this paper we shall give a different extension of the definition of the Orey index, which will be more convenient for our purposes.
The paper is organized in the following way. Section 2 contains the definition of the Orey index for a second order stochastic process, together with conditions under which such a process has the Orey index. The Orey index is obtained for some well-known Gaussian processes which do not have stationary increments.
Section 3 contains the results on an almost sure asymptotic behavior of the second-order quadratic variations of a Gaussian process. Here we also verify obtained conditions for the same well-known Gaussian processes.
\section{Orey index for second order stochastic processes}
Let $X = \{X(t)\dvit t \in [0, T ]\}$ be a second order stochastic process with the
incremental variance function $\s_X^2$ defined on $[0, T ]^2 := [0, T ]\times[0, T ]$ with values
\[
\s_X^2(s, t) := \E[X(t)-X(s)]^2,\quad (s, t)\in[0, T ]^2.
\]
Denote by $\Psi$ a class of continuous functions $\varphi\dvit (0,T]\to [0,\infty)$ such that $\lim_{h\downarrow 0}\varphi(h)=0$ and $\lim_{h\downarrow 0}[h\cdot L^3(h)]=0$, where $L(h)=\varphi(h)/h\to\infty$, $h\downarrow 0$. For example, we can take $\varphi(h)=h\cdot\vert\ln h\vert^\a$ or $\varphi(h)=h^{1-\b}$, where $\a>0$, $0<\b<1/3$.
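To see numerically that, e.g., $\varphi(h)=h\vert\ln h\vert^\a$ belongs to $\Psi$ (an illustration of ours): here $L(h)=\vert\ln h\vert^\a\to\infty$ as $h\downarrow 0$, while both $\varphi(h)$ and $h\,L^3(h)=h\vert\ln h\vert^{3\a}$ tend to $0$, since powers of $\vert\ln h\vert$ grow more slowly than any power of $1/h$:

```python
import math

alpha = 1.0  # any alpha > 0 works

def L(h):
    # slowly varying factor L(h) = |ln h|**alpha, which diverges as h -> 0
    return abs(math.log(h)) ** alpha

hs = [1e-2, 1e-4, 1e-8, 1e-16]
phis = [h * L(h) for h in hs]        # phi(h) = h * L(h)  -> 0
hL3s = [h * L(h) ** 3 for h in hs]   # h * L(h)**3        -> 0 as well
```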
Set
\begin{align}
\g_*:=&\inf\bigg\{\g>0\dvit \lim_{h\downarrow 0}\sup_{\varphi(h)\ls s\ls T-h}\frac{h^\g}{\s_X(s,s+h)}=0\bigg\},\label{oreyeq1}\\
\widetilde\g_*:=&\inf\Big\{\g>0\dvit \lim_{h\downarrow 0}\frac{h^\g}{\s_X(0,h)}=0\Big\}\label{oreyeq2}
\end{align}
and
\begin{align}
\g^*:=&\sup\bigg\{\g>0\dvit \lim_{h\downarrow 0}\inf_{\varphi(h)\ls s\ls T-h}\frac{h^\g}{\s_X(s,s+h)}=+\infty\bigg\},\label{oreyeq3}\\
\widetilde\g^*:=&\sup\Big\{\g>0\dvit \lim_{h\downarrow 0}\frac{h^\g}{\s_X(0,h)}=+\infty\Big\}\,,\label{oreyeq4}
\end{align}
where $\varphi\in\Psi$. Note that $0\ls\widetilde\g^*\ls\widetilde\g_*\ls +\infty$ and $0\ls\g^*\ls\g_*\ls +\infty$.
We give the following extension of the Orey index.
\begin{dfn}\label{oreydef}
Let $X = \{X(t)\dvit t \in [0, T ]\}$ be a second order stochastic process with the
incremental variance function $\s_X^2$ such that $\sup_{0\ls s\ls T-h}\s_X(s,s+h)\to 0$ as $h\to 0$. If $\g_*=\widetilde\g_*=\g^*=\widetilde\g^*$ for any function $\varphi\in\Psi$, then we say that the process $X$ has the Orey index $\g_X=\g_*=\widetilde\g_*=\g^*=\widetilde\g^*$.
\end{dfn}
\begin{rem}\label{oreydef1} For a real-valued mean zero Gaussian stochastic process with stationary increments which is continuous in quadratic mean, our definition of the Orey index reduces to the classical one.
\end{rem}
Let us introduce the notation
\begin{align}\label{oreyeq}
\widehat\g_*:=&\limsup_{h\downarrow 0}\sup_{\varphi(h)\ls s\ls T-h}\frac{\ln\s_X(s,s+h)}{\ln h}\quad\mbox{and}\quad \overline\g_*:=\limsup_{h\downarrow 0}\frac{\ln\s_X(0,h)}{\ln h}\,,\\
\widehat\g^*:=&\liminf_{h\downarrow 0}\inf_{\varphi(h)\ls s\ls T-h}\frac{\ln\s_X(s,s+h)}{\ln h}\quad\mbox{and}\quad \overline\g^{\,*}:=\liminf_{h\downarrow 0}\frac{\ln\s_X(0,h)}{\ln h}\,.
\end{align}
We have $\widetilde\g_*=\overline\g_*$ and $\widetilde\g^*=\overline\g^{\,*}$; this follows from Remark \ref{oreydef1} together with (\ref{orey1}) and (\ref{orey2}). Now we compare the quantities $\widehat\g^*$ and $\widehat\g_*$ with $\g^*$ and $\g_*$, respectively, for a second order stochastic process $X$.
\begin{lem}\label{orey} Let $X = \{X(t)\dvit t \in [0, T ]\}$ be a second order stochastic process with the
incremental variance function $\s_X^2$ such that
\begin{equation}\label{orey0}
\sup_{\varphi(h)\ls s\ls T-h}\s_X(s,s+h)\longrightarrow 0\qquad\mbox{as}\ h\downarrow 0.
\end{equation}
{If $0<\widetilde\g^*\ls\widetilde\g_*< +\infty$, then} $\widehat\g^*=\g^*$, $\widehat\g_*=\g_*$.
\end{lem}
\begin{proof} The proof follows the lines of the proof of the limits of logarithmic ratios (see Annex A.4 in \cite{tr}). For completeness we give it in the Appendix.
\end{proof}
Assume that for some $\g\in(0,1)$ the second order stochastic process $X$ satisfies the following conditions:
(C1)\quad $\s_X(0,\d)=\mathcal{O}(\d^{\g})$, as $\d\downarrow 0$;
(C2)\quad there exists a constant $\kappa>0$ such that
\[
\L(\d):=\sup_{\varphi(\d)\ls t\ls T-\d} \sup_{0<h\ls \d}\bigg\vert\frac{\s_X(t,t+h)}{\kappa h^{\g}}- 1\bigg\vert\longrightarrow 0\qquad\mbox{as}\ \d\downarrow 0
\]
for every function $\varphi\in\Psi$.
For $(s,t)\in[0,T]^2$ set
\begin{equation}\label{variacija0}
c^2(s,t):=\frac{\s^2_X(s,t)}{\kap^2\vert t-s\vert^{2\g}}-1.
\end{equation}
It follows from $(C1)$ and $(C2)$ that for any $\varphi\in\Psi$
\begin{align}\label{variacija}
\sup_{0\ls s\ls T-h} \s^2_X(s,s+h)\ls& \sup_{0\ls s\ls \varphi(h)} \s^2_X(s,s+h)+\sup_{\varphi(h)\ls s\ls T-h}
\s^2_X(s,s+h)\nonumber\\
\ls& 4\sup_{0\ls \d\ls \varphi(h)+h} \s^2_X(0,\d)+\kap^2h^{2\g} \Big(\sup_{\varphi(h)\ls s\ls T-h}\vert c^2(s,s+h)\vert+1\Big)\nonumber\\
\ls&\mathcal{O}\big((\varphi(h))^{2\g}\big) +\kap^2h^{2\g}\big[\L^2(h)+2\L(h)+1\big]\longrightarrow 0\quad\mbox{as}\ h\downarrow 0.
\end{align}
Thus the process $X$ is continuous in quadratic mean, uniformly on $[0,T]$.
\begin{thm} Assume that for some constant $\g\in(0,1)$ the second order stochastic process $X$ satisfies conditions $(C1)$ and $(C2)$. Then $X$ has the Orey index $\g_X=\g$.
\end{thm}
\begin{proof} By Lemma \ref{orey} it suffices to show that $\widehat\g_*=\widehat\g^*=\g_X$ and $\overline\g_*=\overline\g^{\,*}=\g_X$.
For simplicity, we omit the index $X$ in $\g_X$. Observe first that condition $(C1)$ implies $\overline\g_*=\overline\g^{\,*}=\g$. Indeed,
\[
\frac{\ln\s_X(0,h)}{\ln h}=\g+\frac{\ln (\mathcal{O}(h^\g)/ h^\g)}{\ln h}\longrightarrow \g\qquad\mbox{as}\ h\downarrow 0.
\]
It remains to prove that $\widehat\g^*=\widehat\g_*=\g$. By conditions $(C1)$ and $(C2)$ there exists $\d_0$ such that for $\d\ls \d_0< 1$ the inequalities $\s_X(s,s+\d)\ls 1/2$ and $\L(\d)<1/2$ hold for all $0\ls s\ls T-\d_0$. We assume that these inequalities are fulfilled throughout the rest of the proof.
For $(s,t)\in[0,T]^2$ set
\[
b(s,t):=\frac{\s_X(s,t)}{\kap\vert t-s\vert^\g}-1.
\]
Assume that $-1/2<b(s_0,s_0+\d_0)\ls 0$ for some fixed $s_0\in[\varphi(\d_0),T-\d_0]$. Furthermore, it is known that $-2x\ls\ln(1-x)\ls -x$ for $0\ls x\ls 1/2$. Then by the inequality above we get
\begin{align*}
\ln\s_X(s_0,s_0+\d_0)=& \ln (\kap \d_0^\g)+\ln(1+b(s_0,s_0+\d_0))= \ln (\kap \d_0^\g)+\ln(1-(-b(s_0,s_0+\d_0)))\\
\ls& \ln (\kap \d_0^\g)+b(s_0,s_0+\d_0)\ls \ln(\kap \d_0^\g)+\L(\d_0)
\end{align*}
and
\begin{align*}
\ln\s_X(s_0,s_0+\d_0)\gs& \ln(\kap \d_0^\g)+2 b(s_0,s_0+\d_0)=\ln(\kap \d_0^\g)-2\vert b(s_0,s_0+\d_0)\vert\\
\gs&\ln(\kap \d_0^\g)-2\L(\d_0)
\end{align*}
for any $\varphi\in\Psi$.
It is known that $\vert\ln(1+x)\vert\ls x$ for $x\gs 0$. Assume that $0\ls b(s_0,s_0+\d_0)< 1/2$ for some fixed $s_0\in[\varphi(\d_0),T-\d_0]$. Then
\begin{align*}
\ln\s_X(s_0,s_0+\d_0)=&\ln(\kap \d_0^\g)+\ln(1+b(s_0,s_0+\d_0))\ls\ln(\kap \d_0^\g)+b(s_0,s_0+\d_0)\\
\ls& \ln(\kap \d_0^\g)+\L(\d_0)
\end{align*}
and
\begin{align*}
\ln\s_X(s_0,s_0+\d_0)=&\ln(\kap \d_0^\g)+\ln(1+b(s_0,s_0+\d_0))\gs \ln(\kap \d_0^\g)-2\vert b(s_0,s_0+\d_0)\vert\\
\gs& \ln(\kap \d_0^\g)-2\L(\d_0)
\end{align*}
for any $\varphi\in\Psi$. Thus for every $s\in[\varphi(\d_0),T-\d_0]$ we obtain
\[
\ln(\kap \d_0^\g)-2\L(\d_0)\ls\ln\s_X(s,s+\d_0)\ls\ln(\kap \d_0^\g)+\L(\d_0).
\]
Consequently,
\begin{align*}
\g+\frac{\ln\kap}{\ln \d_0}-\frac{\L(\d_0)}{\vert\ln \d_0\vert}\ls&\inf_{\varphi(\d_0)\ls s\ls T-\d_0} \frac{\ln\s_X(s,s+\d_0)}{\ln \d_0}\ls \sup_{\varphi(\d_0)\ls s\ls T-\d_0} \frac{\ln\s_X(s,s+\d_0)}{\ln \d_0}\\
\ls& \g+\frac{\ln\kap}{\ln \d_0}+2\,\frac{\L(\d_0)}{\vert\ln \d_0\vert}\,.
\end{align*}
Both sides of the above inequality tend to $\g$ as $\d_0\downarrow 0$. Thus
$\widehat\g_*=\widehat\g^*=\g_X$.
\end{proof}
\subsection{Subfractional Brownian motion}
We shall prove that sfBm satisfies conditions $(C1)$ and $(C2)$.
\begin{dfn}{\rm (\cite{BGT})}
A \textbf{sub-fractional Brownian motion} with index $H$, $H\in(0,1)$, is a mean zero Gaussian stochastic process $S^H=(S^H_t, t\gs 0)$ with covariance function
\[
G_H(s,t):=s^{2H} +t^{2H}-\frac{1}{2}\big[(s+t)^{2H}+\vert s-t\vert^{2H}\big].
\]
\end{dfn}
The incremental variance function of sfBm is of the following form
\begin{equation}\label{increm1}
\s_{S^H}^2(s,t)=\E\vert S^H_t-S^H_s\vert^2= \vert t-s\vert^{2H}+(s+t)^{2H}-2^{2H-1}(t^{2H}+s^{2H}).
\end{equation}
Since for any $0\ls s\ls t\ls T$ the inequalities (see \cite{BGT})
\begin{align}
&(t-s)^{2H}\ls\s_{S^H}^2(s,t)\ls (2-2^{2H-1})(t-s)^{2H}, \qquad\mbox{if}\quad 0<H<1/2,\label{subf1}\\
&(2-2^{2H-1})(t-s)^{2H}\ls\s_{S^H}^2(s,t)\ls(t-s)^{2H}, \qquad\mbox{if}\quad 1/2<H<1\label{subf2}
\end{align}
hold, condition $(C1)$ is satisfied.
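The two-sided bounds (\ref{subf1})--(\ref{subf2}) are easy to probe numerically. The following sketch (ours; the grid and tolerance are arbitrary choices) evaluates the incremental variance (\ref{increm1}) on a grid and checks both bounds:

```python
# Numerical probe (illustration only) of the sfBm bounds
#   (t-s)^{2H} <= sigma^2 <= (2 - 2^{2H-1})(t-s)^{2H}   for 0 < H < 1/2,
# and the reversed bounds for 1/2 < H < 1.
def sigma2_sfbm(s, t, H):
    return abs(t - s) ** (2 * H) + (s + t) ** (2 * H) \
        - 2 ** (2 * H - 1) * (t ** (2 * H) + s ** (2 * H))

def bounds_hold(H, T=1.0, n=50, eps=1e-12):
    c = 2 - 2 ** (2 * H - 1)
    ok = True
    for i in range(n):
        for j in range(i + 1, n + 1):
            s, t = i * T / n, j * T / n
            v, d = sigma2_sfbm(s, t, H), (t - s) ** (2 * H)
            lo, hi = (d, c * d) if H < 0.5 else (c * d, d)
            ok = ok and (lo - eps <= v <= hi + eps)
    return ok

print(bounds_hold(0.3), bounds_hold(0.7))
```

Note that at $s=0$ the variance equals $(2-2^{2H-1})t^{2H}$ exactly, so one of the two bounds is attained there.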
From incremental variance function (\ref{increm1}) we get
\[
\s_{S^H}^2(s,s+h)=h^{2H}+f_s(h),
\]
where
\[
f_s(h):=(2s+h)^{2H}-2^{2H-1}\big[s^{2H} +(s+h)^{2H}\big].
\]
Note that
\[
f_s(0)=f^\prime_s(0)=0.
\]
By the Taylor formula we obtain
\begin{align*}
f_s(h)=&f_s(0)+f^\prime_s(0)h+\int_0^h f^{\prime\prime}_s(x)(h-x)\,dx=\int_0^h f^{\prime\prime}_s(x)(h-x)\,dx \\ =&2H(2H-1)\int_0^h\big[(2s+x)^{2H-2}-2^{2H-1}(s+x)^{2H-2}\big](h-x)\,dx.
\end{align*}
From the inequality
\begin{align*}
&\big[2^{2H-1}(s+x)^{2H-2}-(2s+x)^{2H-2}\big]=\frac{1}{(s+x)^{2-2H}}\bigg[2^{2H-1}-\bigg(\frac{s+x}{2s+x}\bigg)^{2-2H}\bigg]\\
&\quad=\frac{1}{(s+x)^{2-2H}}\bigg[2^{2H-1}-\bigg(1-\frac{s}{2s+x}\bigg)^{2-2H}\bigg]
\ls \frac{1}{(s+x)^{2-2H}}\big[2^{2H-1}-2^{-1}\big],
\end{align*}
it follows that for $s>0$
\[
\vert f_s(h)\vert\ls (2^{2H}-1)\int_0^h\frac{h-x}{(s+x)^{2-2H}}\,dx\ls \frac{1}{2}\,(2^{2H}-1)s^{2H-2}h^2
\]
and
\begin{align*}
\sup_{\varphi(\d)\ls s\ls T-\d}\sup_{0<h\ls \d}\bigg\vert \frac{\s_{S^H}^2(s,s+h)}{h^{2H}}-1\bigg\vert=&\sup_{\varphi(\d)\ls s\ls T-\d}\sup_{0<h\ls \d} \frac{\vert f_s(h)\vert}{h^{2H}}\\
\ls& \sup_{\varphi(\d)\ls s\ls T-\d}\frac{2^{2H-1}\d^{2-2H}}{s^{2-2H}}\ls \frac{2^{2H-1}}{(L(\d))^{2-2H}}
\end{align*}
for every $\varphi\in\Psi$, where $L(h)=\varphi(h)/h$. So we get condition $(C2)$ with $\kappa=1$.
\begin{rem} In condition $(C2)$ the function $\varphi(\d)$ cannot be replaced by $\d$ or $0$. Indeed, let $H>1/2$. Then
\begin{align*}
&\sup_{0\ls s\ls T-\d}\sup_{0\ls h\ls \d}\vert h^{-2H}f_s(h)\vert\gs\sup_{\d\ls s\ls T-\d}\sup_{0\ls h\ls \d}\vert h^{-2H}f_s(h)\vert\\
&\quad= 2H(2H-1) \sup_{\d\ls s\ls T-\d}\sup_{0\ls h\ls \d} \int_0^h \Big[\frac{2^{2H-1}}{h^{2H}(s+x)^{2-2H}}-\frac{1}{h^{2H}(2s+x)^{2-2H}}\Big](h-x)\,dx\\
&\quad\gs
2H(2H-1) \sup_{\d\ls s\ls T-\d}\sup_{0\ls h\ls \d} \int_0^h \frac{2^{2H-1}-1}{h^{2H}(2s+x)^{2-2H}}\,(h-x)\,dx\\
&\quad\gs H(2H-1) \sup_{\d\ls s\ls T-\d}\sup_{0\ls h\ls \d}\frac{(2^{2H-1}-1)h^{2-2H}}{(2s+h)^{2-2H}}\\
&\quad= H(2H-1)(2^{2H-1}-1)\sup_{\d\ls s\ls T-\d}\frac{\d^{2-2H}}{(2s+\d)^{2-2H}}\\
&\quad= H(2H-1)(2^{2H-1}-1)3^{2H-2}.
\end{align*}
\end{rem}
\subsection{Bifractional Brownian motion}
\begin{dfn}{\rm (\cite{HV})}
A \textbf{bifractional Brownian motion} $B^{HK}=(B^{HK}_t, t\gs 0)$ with parameters $H\in(0,1)$ and $K\in(0,1]$ is a centered Gaussian process with covariance function
\[
R_{HK}(t,s)=2^{-K}\big((t^{2H} +s^{2H})^K-\vert t-s\vert^{2HK}\big),\qquad s,t\gs 0.
\]
\end{dfn}
The incremental variance function of bifBm is of the following form
\[
\s_{B^{H,K}}^2(s,t)=\E\vert B^{H,K}_t-B^{H,K}_s\vert^2= 2^{1-K}\big[\vert t-s\vert^{2HK}-(t^{2H}+s^{2H})^K\big]+t^{2HK}+s^{2HK}.
\]
Let $H\in(0,1)$ and $K\in(0,1]$. Then
\begin{equation}\label{nelyg17}
2^{-K}\vert t-s\vert^{2HK}\ls\s_{B^{H,K}}^2(s,t)\ls 2^{1-K}\vert t-s\vert^{2HK}
\end{equation}
for all $s,t\in[0,\infty)$ (see \cite{HV}). Thus condition $(C1)$ holds.
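These bounds can likewise be probed numerically; the following sketch (ours, with an arbitrary grid and tolerance) checks (\ref{nelyg17}) against the explicit incremental variance of the bifBm:

```python
# Numerical probe (illustration only) of the bifBm bounds
#   2^{-K} |t-s|^{2HK} <= sigma^2 <= 2^{1-K} |t-s|^{2HK}.
def sigma2_bifbm(s, t, H, K):
    return 2 ** (1 - K) * (abs(t - s) ** (2 * H * K)
                           - (t ** (2 * H) + s ** (2 * H)) ** K) \
        + t ** (2 * H * K) + s ** (2 * H * K)

def bounds_hold(H, K, T=1.0, n=40, eps=1e-12):
    ok = True
    for i in range(n):
        for j in range(i + 1, n + 1):
            s, t = i * T / n, j * T / n
            v, d = sigma2_bifbm(s, t, H, K), (t - s) ** (2 * H * K)
            ok = ok and (2 ** (-K) * d - eps <= v <= 2 ** (1 - K) * d + eps)
    return ok

print(bounds_hold(0.3, 0.5), bounds_hold(0.7, 0.8))
```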
Since
\[
\s_{B^{H,K}}^2(s,s+h)=2^{1-K}( h^{2HK}-f_s(h))
\]
with
\[
f_s(h):=\big[s^{2H} +(s+h)^{2H}\big]^K-2^{K-1}\big[s^{2HK}+(s+h)^{2HK}\big],
\]
then $f_s(0)=f^\prime_s(0)=0$ and by the Taylor formula we obtain
\[
\frac{\s_{B^{HK}}^2(s,s+h)}{2^{1-K}h^{2HK}}-1=-h^{-2HK}\int_0^h f^{\prime\prime}_s(x)(h-x)\,dx,
\]
where
\begin{align*}
f^{\prime\prime}_s(x)=& 4K(K-1)H^2\big[s^{2H}+(s+x)^{2H}\big]^{K-2}(s+x)^{2(2H-1)}\\ &+2HK(2H-1)\big[s^{2H}+(s+x)^{2H}\big]^{K-1}(s+x)^{2H-2}\\
&-2^K HK(2HK-1)(s+x)^{2HK-2}.
\end{align*}
Note that for $H\gs 1/2$
\[
\frac{(s+x)^{2(2H-1)}}{[s^{2H}+(s+x)^{2H}]^{2-K}}=\bigg[\frac{(s+x)^{2H}}{s^{2H}+(s+x)^{2H}}\bigg]^{2-K}\,(s+x)^{2HK-2} \ls (s+x)^{2HK-2}.
\]
Thus for $s>0$
\begin{align*}
\sup_{0\ls x\ls h}\vert f^{\prime\prime}_s(x)\vert\ls& \frac{4}{s^{2-2HK}}\,{\bf 1}_{\{H\gs 1/2\}}+\frac{4}{(2s^{2H})^{2-K}s^{2(1-2H)}}\,{\bf 1}_{\{H< 1/2\}}\\
&+\frac{2}{(2s^{2H})^{1-K}s^{2-2H}}+\frac{2}{s^{2-2HK}}
\ls \frac{8}{s^{2-2HK}}
\end{align*}
and
\[
\sup_{\varphi(\d)\ls s\ls T-\d}\sup_{0<h\ls \d}\bigg\vert \frac{\s_{B^{H,K}}^2(s,s+h)}{2^{1-K}h^{2HK}}-1\bigg\vert\ls \sup_{\varphi(\d)\ls s\ls T-\d}\frac{8 \d^{2-2HK}}{s^{2-2HK}}
\ls\frac{8}{(L(\d))^{2-2HK}}
\]
for every $\varphi\in\Psi$. So condition $(C2)$ holds with $\g=HK$.
\subsection{Ornstein-Uhlenbeck process}\label{OU}
The fractional Ornstein-Uhlenbeck (fO-U) process of the first kind is the unique solution of the following stochastic differential equation
\begin{equation}\label{O-U1}
X_t=x_0-\mu\int_0^t X_s\,ds+\t B^H_t,\qquad t\ls T,
\end{equation}
with $\mu,\t>0$, where $B^H$, $0<H<1$, is a fBm. It has the explicit solution
\[
X_t=x_0 e^{-\mu t}+\t\int_0^t e^{-\mu(t-u)}dB^H_u,
\]
where the integral exists as a Riemann-Stieltjes integral for all $t > 0$ (see, e.g., \cite{chm}).
First of all we verify condition $(C1)$. From \cite{chm} we know that
\[
\int_0^t e^{\mu u}dB^H_u=e^{\mu t}B^H_t-\mu \int_0^t e^{\mu u}B^H_u du.
\]
Thus
\[
X_t^2\ls 2x_0^2+2\t^2 \bigg(\int_0^t e^{\mu u}dB^H_u\bigg)^2\ls 2x_0^2+4\t^2 \bigg(e^{2\mu t}(B^H_t)^2+\mu^2 t \int_0^t e^{2\mu u} (B^H_u)^2 du\bigg)
\]
and
\[
\sup_{t\ls T}\E X_t^2\ls 2x_0^2+4\t^2 e^{2\mu T}T^{2H}(1+\mu^2T^2).
\]
From (\ref{O-U1}) we get
\[
\s_X^2(0,h)\ls 2\mu^2\E\bigg(\int_0^h X_t\,dt\bigg)^2+2\t^2\E(B^H_h)^2\ls 2\mu^2 h^2\sup_{t\ls h}\E X_t^2+2\t^2h^{2H}\ls C h^{2H}.
\]
This proves condition $(C1)$.
The incremental variance function of $X$ has the following form
\begin{align*}
\s_X^2(t,t+h)=&\mu^2\E\bigg(\int_t^{t+h}X_s\,ds\bigg)^2-2\mu\t\E\bigg([B^H(t+h)-B^H(t)]\int_t^{t+h}X_s\,ds\bigg)\\ &+\t^2\s_{B^H}^2(t,t+h).
\end{align*}
The Cauchy-Schwarz inequality yields
\begin{align*}
&\E\bigg([B^H(t+h)-B^H(t)]\int_t^{t+h}X_s\,ds\bigg)\ls \E^{1/2}[B^H(t+h)-B^H(t)]^2 \bigg(h\int_t^{t+h}\E X^2_s\,ds\bigg)^{1/2}\\
&\quad\ls h^{H+1}\Big(\sup_{t\ls s\ls t+h}\E X^2_s\Big)^{1/2}.
\end{align*}
Thus for every $\varphi\in\Psi$
\[
\sup_{\varphi(\d)\ls t\ls T-\d} \sup_{0<h\ls \d}\bigg\vert\frac{\s^2_X(t,t+h)}{\t^2 h^{2H}}- 1\bigg\vert\ls \t^{-2}\d^{1-H}\Big[\d^{1-H}\mu^2 \sup_{t\ls T}\E X^2_t+2\mu\t\Big(\sup_{t\ls T}\E X^2_t\Big)^{1/2}\Big] \longrightarrow 0
\]
as $\d\downarrow 0$. Condition $(C2)$ follows from the inequality
\[
\bigg\vert\frac{\s_X(t,t+h)}{\t h^{H}}- 1\bigg\vert = \bigg\vert\frac{\s^2_X(t,t+h)}{\t^2 h^{2H}}- 1\bigg\vert\Big/ \bigg\vert\frac{\s_X(t,t+h)}{\t h^{H}}+ 1\bigg\vert\ls \bigg\vert\frac{\s^2_X(t,t+h)}{\t^2 h^{2H}}- 1\bigg\vert\,.
\]
\subsection{Fractional Brownian bridge}
The fractional Brownian bridge is defined in $[0,T]$ by
\[
X_t^H=B^H_t-\frac{t^{2H}+T^{2H}-\vert t-T\vert^{2H}}{2T^{2H}}\, B^H_T,
\]
where $B^H$, $0<H<1$, is a fBm on $[0,T]$.
Now we verify condition $(C1)$. The incremental variance function of $X^H$ has the following form
\[
\s_{X^H}^2(t,t+h)=h^{2H}-\frac{1}{4T^{2H}}\, f^2_t(h)
\]
where
\[
f^2_t(h):=\big[(t+h)^{2H}-t^{2H}-\vert t+h-T\vert^{2H}+\vert t-T\vert^{2H}\big]^2.
\]
Thus
\begin{equation}\label{bridge}
\s_{X^H}^2(t,t+h)\ls h^{2H}.
\end{equation}
So condition $(C1)$ is satisfied.
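The closed form of $\s^2_{X^H}$ can be cross-checked against a direct computation from the fBm covariance $R(s,t)=\frac12\big(s^{2H}+t^{2H}-\vert t-s\vert^{2H}\big)$ and the definition of the bridge. A sketch (ours; the choice $H=0.3$, $T=1$ and the grid are arbitrary):

```python
# Cross-check (illustration only) of the closed form
#   sigma^2_{X^H}(t, t+h) = h^{2H} - f_t(h)^2 / (4 T^{2H}),
# against a direct second-moment computation from the fBm covariance,
# using X_t = B_t - g(t) B_T with g(t) = R(t, T) / T^{2H}.
H, T = 0.3, 1.0

def R(s, t):                                   # fBm covariance
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))

def g(t):
    return R(t, T) / T ** (2 * H)

def sigma2_direct(t, h):                       # E (X_{t+h} - X_t)^2
    dg = g(t + h) - g(t)
    return R(t + h, t + h) - 2 * R(t, t + h) + R(t, t) \
        - 2 * dg * (R(t + h, T) - R(t, T)) + dg * dg * T ** (2 * H)

def sigma2_formula(t, h):
    f = (t + h) ** (2 * H) - t ** (2 * H) \
        - abs(t + h - T) ** (2 * H) + abs(t - T) ** (2 * H)
    return h ** (2 * H) - f * f / (4 * T ** (2 * H))

err = max(abs(sigma2_direct(t, h) - sigma2_formula(t, h))
          for t in (0.1, 0.25, 0.5, 0.8) for h in (0.01, 0.05, 0.1))
print(err)
```

The two computations agree to machine precision, and the formula makes the bound (\ref{bridge}) evident, since the subtracted term is a square.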
Assume $H<1/2$. Since
\[
\big\vert(t+h)^{2H}-t^{2H}\big\vert\ls h^{2H}\quad\mbox{and}\quad \big\vert(T-t-h)^{2H}-(T-t)^{2H}\big\vert\ls h^{2H},
\]
for every $\varphi\in\Psi$ we get
\[
\sup_{\varphi(\d)\ls t\ls T-\d} \sup_{0<h\ls \d}\bigg\vert\frac{\s^2_{X^H}(t,t+h)}{h^{2H}}- 1\bigg\vert =\frac{1}{4T^{2H}}\sup_{\varphi(\d)\ls t\ls T-\d} \sup_{0<h\ls \d}\frac{f^2_t(h)}{h^{2H}} \ls T^{-2H}\d^{2H}.
\]
Assume $H\gs 1/2$. Then $f_t(0)=0$ and by the Taylor formula we obtain
\[
\frac{\s_{X^H}^2(t,t+h)}{h^{2H}}-1=-\frac{1}{4T^{2H}h^{2H}}\bigg(\int_0^h f^{\prime}_t(x)\,dx\bigg)^2,
\]
where
\[
f^{\prime}_t(x)= 2H\big[(t+x)^{2H-1}-(T-t-x)^{2H-1}\big].
\]
Thus for every $\varphi\in\Psi$ and $H\gs 1/2$ we get
\[
\sup_{\varphi(\d)\ls t\ls T-\d} \sup_{0<h\ls \d}\bigg\vert\frac{\s^2_{X^H}(t,t+h)}{h^{2H}}- 1\bigg\vert\ls \frac{\d^{2-2H}}{4T^{2H}}\cdot 4H^2T^{4H-2}=H^2T^{2H-2}\d^{2-2H}.
\]
\section{The convergence of the second order quadratic variation of the process $X$ along irregular partitions}
Let $\pi_n=\{0=t^n_0<t^n_1<\cdots<t^n_{N_n}=T\}$, $T>0$, be a sequence of
partitions of the interval $[0,T]$, where $(N_n)$ is an increasing sequence of natural numbers. Such a sequence of partitions is called irregular. Define
\[
m_n=\max_{1\ls k\ls N_n}\D^n_k t,\qquad p_n=\min_{1\ls k\ls N_n}\D^n_k t,\qquad \D^n_k t=t^n_k-t^n_{k-1}.
\]
In practice, observations of a process are usually available at regular discrete times. However, it may happen that some of the observations are lost, resulting in observations at irregular time intervals.
\begin{dfn} A sequence of partitions $(\pi_n)_{n\in\mathbb{N}}$ is regular if we
have $m_n=p_n=T N_n^{-1}$ for all $n\in\mathbb{N}$ or, equivalently,
$t^n_k=\frac{kT}{N_n}$ for all $n\in\mathbb{N}$ and all $k\in\{0,\ldots,N_n\}$.
\end{dfn}
\begin{dfn} The second order quadratic variation of a Gaussian process $X$
with Orey index $\g$ along the partitions $(\pi_n)_{n\in\mathbb{N}}$ is
defined by
\[
V_{\pi_n}^{(2)}(X,2)=2\sum_{k=1}^{N_n-1} \frac{\Delta^n_{k+1} t(\D^{(2)n}_{ir,k}X)^2}{
(\Delta^n_k t)^{\g+1/2}(\Delta^n_{k+1} t)^{\g+1/2}[\Delta^n_k t+\Delta^n_{k+1} t]}\,,
\]
where
\[
\D^{(2)n}_{ir,k}X=\Delta^n_k t X(t^n_{k+1})
+\Delta^n_{k+1} t X(t^n_{k-1})-(\Delta^n_k t+\Delta^n_{k+1} t)X(t^n_{k}).
\]
\end{dfn}
If the sequence $(\pi_n)_{n\in\mathbb{N}}$ is regular then one has
\[
V^{(2)}_{N_n}(X,2)=(T^{-1}N_n)^{2\g-1}\sum_{k=1}^{N_n-1}\big(\D^{(2)}_{n,k}X\big)^2,\qquad \D^{(2)}_{n,k}X=X(t^n_{k+1})-2X(t^n_{k})
+X(t^n_{k-1})\,.
\]
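As a Monte Carlo illustration of this regular-grid formula (ours, not from the paper), take standard Brownian motion, i.e. the Orey index $\g=1/2$ and $\kap=1$; then $V^{(2)}_{N_n}(X,2)$ should be close to $\kap^2(4-2^{2\g})T=2T$:

```python
import math, random

# Monte Carlo illustration (not from the paper): for standard Brownian motion
# (gamma = 1/2, kappa = 1) on a regular partition, the second order quadratic
# variation should be close to kappa^2 * (4 - 2^{2 gamma}) * T = 2T.
random.seed(0)
T, N, gamma = 1.0, 20000, 0.5
h = T / N

# sample path at the grid points t_k = k h
X = [0.0]
for _ in range(N):
    X.append(X[-1] + random.gauss(0.0, math.sqrt(h)))

V = (N / T) ** (2 * gamma - 1) * sum(
    (X[k + 1] - 2 * X[k] + X[k - 1]) ** 2 for k in range(1, N))
print(V)   # close to 2T = 2
```

Here $\E(\D^{(2)}_{n,k}X)^2=2h$ for Brownian motion, so the expected value of the sum is $2T(1-1/N)$, in agreement with Corollary \ref{cor0} below.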
To study the almost sure convergence of the second order quadratic variation of $X$ we need additional assumptions on the sequence $(\pi_n)_{n\in\mathbb{N}}$.
\begin{dfn}{\rm (see \cite{begyn1})}\label{begyn} Let $(\ell_k)_{k\gs 1}$ be a sequence of real numbers in the interval
$(0,\infty)$. We say that $(\pi_n)_{n\in\mathbb{N}}$ is a sequence of partitions
with asymptotic ratios $(\ell_k)_{k\gs 1}$ if it satisfies the following
assumptions:
1. There exists $c\gs 1$ such that $m_n\ls c p_n$ for all $n$.
2.
$
\lim_{n\to\infty}\sup_{1\ls k\ls N_n}\bigg\vert \frac{\D^n_{k-1} t}{\D^n_k t}-\ell_k\bigg\vert=0.
$
The set $\mathcal{L}=\{\ell_1;\ell_2;\ldots;\ell_k;\ldots\}$ will be called the
range of the asymptotic ratios of the sequence $(\pi_n)_{n\in\mathbb{N}}$.
\end{dfn}
It is clear that if the sequence $(\pi_n)_{n\in\mathbb{N}}$ is regular, then it is a
sequence with asymptotic ratios $\ell_k = 1$ for all $k\gs 1$.
\begin{dfn}{\rm (see \cite{begyn1})}
The function $g:(0,\infty)\to\mathbb{R}$ is invariant on $\mathcal{L}$ if for all $\ell,\hat\ell\in\mathcal{L}$, $g(\ell)=g(\hat\ell)$.
\end{dfn}
For example, let $\mathcal{L}=\{\alpha,\alpha^{-1}\}$ be a set consisting of two positive real numbers and let
\[
g(\lambda)=\frac{1+\lambda^{2\g-1}-(1+\l)^{2\g-1}}{\lambda^{\g-1/2}}\,.
\]
The function $g$ is invariant on $\mathcal{L}$.
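The claimed invariance is an algebraic identity, $g(\lambda)=g(\lambda^{-1})$, obtained by multiplying the numerator and the denominator by $\lambda^{2\g-1}$; a quick numerical confirmation (ours):

```python
# Check the identity g(lam) == g(1/lam) for
#   g(lam) = (1 + lam^{2g-1} - (1 + lam)^{2g-1}) / lam^{g - 1/2}.
def g(lam, gamma):
    return (1 + lam ** (2 * gamma - 1) - (1 + lam) ** (2 * gamma - 1)) \
        / lam ** (gamma - 0.5)

for gamma in (0.25, 0.5, 0.75):
    for alpha in (0.5, 2.0, 3.7):
        assert abs(g(alpha, gamma) - g(1 / alpha, gamma)) < 1e-12
print("g(alpha) == g(1/alpha) on the tested values")
```

For $\g=1/2$ the function is identically $1$, so the invariance is trivial in that case.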
\begin{prop}\label{prop1} Let $X=\{X(t): t\in[0,T]\}$, $T>0$, be a mean zero second order process satisfying conditions $(C1)$ and $(C2)$. Let $(\pi_n)_{n\in\mathbb{N}}$ be a sequence of partitions with asymptotic ratios $(\ell_k)_{k\gs 1}$ and range of the asymptotic ratios $\mathcal{L}$. If the function $g$ is invariant on $\mathcal{L}$ or
the sequence of functions $\ell_n(t)$ converges uniformly to $\ell(t)$ on the
interval $[0,T]$, where
\[
\ell_n(t)=\sum_{k=1}^{N_n-1}\ell_k{\bf 1}_{[t^n_k,t^n_{k+1})}(t),
\]
then
\[
\lim_{n\to\infty}\E V_{\pi_n}^{(2)}(X,2)=2\kap^2\int_0^T g(\ell(t))\,dt,
\]
where
\[
g(\lambda)=\frac{1+\lambda^{2\g-1}-(1+\l)^{2\g-1}}{\lambda^{\g-1/2}}\,.
\]
\end{prop}
\proof Rewrite the expectation of the increments of the second order irregular variation in the following way
\begin{align*}
\E(\D^{(2)n}_{ir,k}X)^2=&(\D^n_k t)^2\s^2_X(t_k^n, t_{k+1}^n)+(\D^n_{k+1} t)^2\s^2_X(t_{k-1}^n, t_k^n)\\
&+\D^n_kt\cdot \D^n_{k+1}t\big[\s^2_X(t_k^n, t_{k+1}^n)-\s^2_X(t_{k-1}^n,t_{k+1}^n)+\s^2_X(t_{k-1}^n, t_k^n)\big]\\
=&[\D^n_kt+\D^n_{k+1}t]\big[\D^n_k t\cdot\s^2_X(t_k^n,t_k^n+\D^n_{k+1} t)+\D^n_{k+1} t\cdot\s^2_X(t_{k-1}^n,t_{k-1}^n+\D^n_k t)\big]\\
&-\D^n_k t\cdot \D^n_{k+1} t\cdot\s^2_X(t_{k-1}^n,t_{k-1}^n+\D^n_k t+\D^n_{k+1} t)\\
=&I^{(1)}_k-I^{(2)}_k+I^{(3)}_k,
\end{align*}
where
\begin{align*}
I^{(1)}_k:=&[\D^n_k t+\D^n_{k+1} t]\big\{\D^n_k t\big[\s^2_X(t_k^n,t_{k+1}^n)-\kap^2( \D^n_{k+1} t)^{2\g}\big]\\
&+\D^n_{k+1} t\big[\s^2_X(t_{k-1}^n, t_k^n)-\kap^2(\D^n_k t)^{2\g}\big]\big\},\\
I^{(2)}_k:=&\D^n_k t\cdot \D^n_{k+1} t\big[\s^2_X(t_{k-1}^n,t_{k+1}^n)-\kap^2(\D^n_k t+\D^n_{k+1} t)^{2\g}\big],\\
I^{(3)}_k:=&\kap^2[\D^n_k t+\D^n_{k+1} t]\D^n_k t \cdot \D^n_{k+1} t \big\{(\D^n_{k+1} t)^{2\g-1}+(\D^n_k t)^{2\g-1} -(\D^n_k t+\D^n_{k+1} t)^{2\g-1}\big\}.
\end{align*}
Set
\[
\mu_k^n=[\D^n_k t+\D^n_{k+1} t](\D^n_{k+1} t)^{\g+1/2}(\D^n_k t)^{\g+1/2}\quad\mbox{and}\quad \ell_k^n=\frac{\D^n_k t}{\D^n_{k+1} t}\,.
\]
Then
\begin{align*}
I^{(1)}_k=& \kap^2[\D^n_k t+\D^n_{k+1} t]\D^n_k t\cdot\D^n_{k+1} t\big[(\D^n_{k+1} t)^{2\g-1} c^2(t_k^n,t_{k+1}^n)+(\D^n_k t)^{2\g-1}c^2(t_{k-1}^n,t_k^n)\big]\\
=&\kap^2\mu_k^n\big[(\ell_k^n)^{1/2-\g}c^2(t_k^n,t_{k+1}^n)+(\ell_k^n)^{\g-1/2}c^2(t_{k-1}^n,t_k^n)\big],\\
I^{(2)}_k=& \kap^2\mu_k^n(\D^n_k t)^{1/2-\g}(\D^n_{k+1} t)^{1/2-\g}(\D^n_k t+\D^n_{k+1} t)^{2\g-1}c^2(t_{k-1}^n,t_{k+1}^n)\\
=&\kap^2\mu_k^n (\ell_k^n)^{1/2-\g}(1+\ell_k^n)^{2\g-1}c^2(t_{k-1}^n,t_{k+1}^n)
\end{align*}
and
\begin{align*}
I^{(3)}_k=& \kap^2\mu_k^n\big((\ell_k^n)^{1/2-\g}+(\ell_k^n)^{\g-1/2}-(\ell_k^n)^{1/2-\g}(1+\ell_k^n)^{2\g-1}\big)\\
=&\kap^2\mu_k^n(\ell_k^n)^{1/2-\g}\big[1+(\ell_k^n)^{2\g-1}-(1+\ell_k^n)^{2\g-1}\big],
\end{align*}
where the function $c^2(s,t)$ is defined in (\ref{variacija0}). We further observe that
\begin{align}\label{lygybe}
\E V_{\pi_n}^{(2)}(X,2)=&2\sum_{k=1}^{\tau_n+1}\frac{\D^n_{k+1} t\cdot\E(\D^{(2)n}_{ir,k}X)^2}{\mu_k^n} +2\sum_{k=\tau_n+2}^{N_n-1} \frac{\D^n_{k+1} t\cdot\E(\D^{(2)n}_{ir,k}X)^2}{\mu_k^n}\nonumber\\
=&2\sum_{k=1}^{\tau_n+1}\frac{\D^n_{k+1} t\cdot\E(\D^{(2)n}_{ir,k}X)^2}{\mu_k^n} +2\kap^2\sum_{k=\tau_n+2}^{N_n-1}\D^n_{k+1} t\cdot J^{(1)}_k\nonumber\\
&+2\kap^2\sum_{k=\tau_n+2}^{N_n-1}\D^n_{k+1} t\cdot J^{(2)}_k,
\end{align}
where $\tau_n=[\varphi(m_n)N_n]$ and $[a]$ denotes the integer part of a real number $a$,
\begin{align*}
J^{(1)}_k=&(\ell_k^n)^{1/2-\g}\big[c^2(t_k^n,t_{k+1}^n)+(\ell_k^n)^{2\g-1}c^2(t_{k-1}^n,t_k^n)-(1+\ell_k^n)^{2\g-1} c^2(t_{k-1}^n,t_{k+1}^n)\big],\\
J^{(2)}_k=&(\ell_k^n)^{1/2-\g}\big[1+(\ell_k^n)^{2\g-1}-(1+\ell_k^n)^{2\g-1}\big].
\end{align*}
Now we estimate the first term of equality (\ref{lygybe}). Note that
\begin{align}
\tau_n\ls& \frac{\varphi(m_n)}{p_n}\ls c L(m_n),\qquad 2p_n^{2\g+2}\ls\mu_k^n\ls 2m_n^{2\g+2}, \label{nelyg10a}\\
&\sum_{k=1}^{\tau_n+1}\Delta t_{k+1}^n\ls \varphi(m_n)\frac{m_n}{p_n}+m_n\ls
2c\varphi(m_n).\label{nelyg11}
\end{align}
By conditions $(C1)$, $(C2)$, and inequalities (\ref{nelyg10a}), (\ref{nelyg11}) we get
\begin{align*}
&2\sum_{k=1}^{\tau_n+1}\frac{\Delta_{k+1}^n t\cdot\E(\D^{(2)n}_{ir,k}X)^2}{\mu_k^n}\\
&\quad \ls\frac{8c^3\varphi(m_n)}{p_n^{2\g}}\,\max_{1\ls k\ls \tau_n+2}\s_X^2(t^n_{k-1},t^n_k) \ls\frac{32c^3\varphi(m_n)}{p_n^{2\g}}
\sup_{1\ls k\ls \tau_n+2}\s_X^2(0,t^n_k)\\
&\quad \ls\frac{32c^3\varphi(m_n)}{p_n^{2\g}}\max_{1\ls k\ls \tau_n+2}
\mathcal{O}\big((t^n_k)^{2\g}\big)
=\frac{32c^3\varphi(m_n)}{p_n^{2\g}}\, \mathcal{O}\big(\big((c L(m_n)+2)m_n\big)^{2\g}\big)\\
&\quad\ls 32c^3\varphi(m_n)\,\mathcal{O}\big(\big(c L(m_n)+2\big)^{2\g}\big)
\end{align*}
as $m_n\downarrow 0$. From the properties of the function $\varphi$ we obtain that the right
hand side of the above inequality tends to zero as $m_n\downarrow 0$.
{Next, since $[\varphi(m_n)N_n]+1\gs \varphi(m_n)$, for the second term of equality (\ref{lygybe}) we get
\begin{align*}
&2\kap^2\sum_{k=\tau_n+2}^{N_n-1}\D^n_{k+1} t\cdot J^{(1)}_k\\
&\quad\ls 2\kap^2\max_{\tau_n+1\ls k\ls N_n-1}\big\vert c^2(t_k^n,t_{k+1}^n)\big\vert\sum_{k=\tau_n+2}^{N_n-1}\D^n_{k+1} t \big[ (\ell_k^n)^{1/2-\g}+(\ell_k^n)^{\g-1/2}\big]\\
&\qquad+2\kap^2\max_{\tau_n+2\ls k\ls N_n-1}\big\vert c^2(t_{k-1}^n,t_{k+1}^n)\big\vert\sum_{k=\tau_n+2}^{N_n-1}\D^n_{k+1} t\cdot (\ell_k^n)^{1/2-\g}(1+\ell_k^n)^{2\g-1}\\
&\quad\ls 2\kap^2T\sup_{\varphi(m_n)\ls t\ls T-m_n}\sup_{0<h\ls m_n}\big\vert c^2(t,t+h)\big\vert \max_{1\ls k\ls N_n}\big[(\ell_k^n)^{1/2-\g}+(\ell_k^n)^{\g-1/2}\big]\\
&\qquad+2\kap^2T \sup_{\varphi(m_n)\ls t\ls T-2m_n}\sup_{0<h\ls m_n}\big\vert c^2(t,t+2h)\big\vert \max_{1\ls k\ls N_n}\big[(\ell_k^n)^{1/2-\g}(1+\ell_k^n)^{2\g-1}\big]\\
&\quad\ls 2\kap^2T\big[\L^2(m_n)+2\L(m_n)\big] \max_{1\ls k\ls N_n}\big[(\ell_k^n)^{1/2-\g}+(\ell_k^n)^{\g-1/2}\big]\\
&\qquad+2\kap^2T\big[\L^2(2m_n)+2\L(2m_n)\big] \max_{1\ls k\ls N_n}\big[(\ell_k^n)^{1/2-\g}(1+\ell_k^n)^{2\g-1}\big]\\
&\quad\ls 4\kap^2Tc\big[\L^2(m_n)+2\L(m_n)\big]
+2\kap^2T(1+c)c\big[\L^2(2m_n)+2\L(2m_n)\big].
\end{align*}
Thus the second term of equality (\ref{lygybe}) tends to zero as $n\to\infty$.}
It remains to investigate the asymptotic behavior of the third term of equality (\ref{lygybe}). If the function $g$ is invariant on $\mathcal{L}$, then
\begin{align*}
2\kap^2\sum_{k=\tau_n+1}^{N_n-1}\D^n_{k+1} t\cdot J^{(2)}_k=&2\kap^2\sum_{k=\tau_n+1}^{N_n-1}\D^n_{k+1} t\,\frac{1+(\ell_k^n)^{2\g-1}-(1+\ell_k^n)^{2\g-1}}{(\ell_k^n)^{\g-1/2}}\\
=&2\kap^2 g(\ell)T - 2\kap^2 g(\ell)\sum_{k=1}^{\tau_n}\D^n_{k+1} t \longrightarrow 2\kap^2 g(\ell)T\quad\mbox{as}\ n\to\infty
\end{align*}
for all $\ell\in\mathcal{L}$ by the inequality (\ref{nelyg11}). If the sequence of functions $\ell_n(t)$ converges uniformly to $\ell(t)$ on the interval $[0,T]$, then
\begin{align*}
2\kap^2\sum_{k=\tau_n+1}^{N_n-1}\D^n_{k+1} t\cdot J^{(2)}_k=&
2\kap^2\int_0^T g(\ell_n(t))\,dt-2\kap^2\sum_{k=1}^{\tau_n}\D^n_{k+1} t\cdot J^{(2)}_k\\
&\longrightarrow 2\kap^2\int_0^T g(\ell(t))\,dt
\end{align*}
since
\[
\sum_{k=1}^{\tau_n}\D^n_{k+1} t\cdot J^{(2)}_k\ls
2(1+c)c^{1/2}\varphi(m_n)\longrightarrow
0\quad\mbox{as}\ n\to\infty.
\]
Thus
\[
\E V_{\pi_n}^{(2)}(X,2)\longrightarrow 2\kap^2\int_0^T g(\ell(t))\,dt\qquad\mbox{as}\ n\to\infty.
\]
\endproof
\begin{cor}\label{cor0} Let $(\pi_n)_{n\in\mathbb{N}}$ be a sequence of regular partitions of the interval $[0,T]$, $T>0$, and let $X=\{X(t): t\in[0,T]\}$ be a mean zero second order process satisfying conditions $(C1)$ and $(C2)$. Then
\[
\E V_{N_n}^{(2)}(X,2)\longrightarrow \kap^2(4-2^{2\g}) T\qquad \mbox{as}\ n\to\infty.
\]
\end{cor}
\proof For a regular subdivision we have $\ell_k=1$ for all $k$. Thus $g(1)=2-2^{2\g-1}$ and the statement of the corollary follows immediately from Proposition \ref{prop1}.
\endproof
Now we give a slightly more general version of Corollary \ref{cor0}.
\begin{prop}\label{prop2} Let $(\pi_n)_{n\in\mathbb{N}}$ be a sequence of regular partitions of the interval $[0,T]$, $T>0$. Assume that condition $(C1)$ is fulfilled for some constant $\g\in(0,1)$ and there exists a continuous bounded function $g_0: (0,T)\to\mathbb{R}$ such that
\begin{equation}\label{asump}
\lim_{h\to 0+}\sup_{\varphi(h)\ls t\ls T-h}\bigg\vert \frac{{\bf E}\big(X_{t+h}-2X_t+X_{t-h}\big)^2}{h^{2\g}}-g_0(t)\bigg\vert=0.
\end{equation}
Then
\[
\E V_{N_n}^{(2)}(X,2)\longrightarrow \int_0^T g_0(t)\, dt\quad \mbox{as}\ n\to\infty.
\]
\end{prop}
\proof Note that
\begin{align}\label{nelyg10}
&\bigg\vert \E V_{N_n}^{(2)}(X,2)- \int_0^T g_0(t)\, dt\bigg\vert\nonumber \\
&\quad\ls \bigg(\frac{T}{N_n}\bigg)^{1-2\g} \sum_{k=1}^{\tau_n} \E(\D^{(2)}_{n,k}X)^2 + \frac{T}{N_n}\sum_{k=\tau_n+1}^{N_n-1}\bigg\vert\frac{\E(\D^{(2)}_{n,k}X)^2}{T^{2\g} N_n^{-2\g}} -g_0\Big(\frac{kT}{N_n}\Big) \bigg\vert\nonumber\\
&\qquad
+\bigg\vert \int_0^T g_0(t)\, dt-\frac{T}{N_n}\sum_{k=\tau_n+1}^{N_n-1} g_0\Big(\frac{kT}{N_n}\Big)\bigg\vert\,,
\end{align}
{where $\tau_n=[\varphi(TN_n^{-1})N_n]$. By condition $(C1)$ we get
\begin{align*}
\max_{1\ls k\ls \tau_n+1}\s^2_X(t^n_{k-1},t^n_k)\ls& 4\sup_{1\ls k\ls \tau_n+1} \s^2_X(0,t^n_k)
=\mathcal{O}\big((TN^{-1}_n(\tau_n+1))^{2\g}\big)
=\mathcal{O}\big((\varphi(TN^{-1}_n))^{2\g}\big).
\end{align*}}
{Thus
\begin{align*}
\bigg(\frac{T}{N_n}\bigg)^{1-2\g} \sum_{k=1}^{\tau_n} \E(\D^{(2)}_{n,k}X)^2
\ls& 4 T\bigg(\frac{T}{N_n}\bigg)^{-2\g}\varphi\bigg(\frac{T}{N_n}\bigg)\max_{1\ls k\ls \tau_n+1}\s^2_X(t^n_{k-1},t^n_k)\\
=&4 T\bigg(\frac{T}{N_n}\bigg)^{-2\g}\varphi\bigg(\frac{T}{N_n}\bigg) \mathcal{O}\bigg(\bigg(\varphi\bigg(\frac{T}{N_n}\bigg)\bigg)^{2\g}\bigg)
\end{align*}
and the first term in inequality (\ref{nelyg10}) tends to zero as $n\to \infty$.}
Assumption (\ref{asump}) yields
\begin{align*}
&\max_{\tau_n+1\ls k\ls N_n-1}\bigg\vert\frac{\E(\D^{(2)}_{n,k}X)^2}{T^{2\g}N_n^{-2\g}} -g_0\Big(\frac{kT}{N_n}\Big) \bigg\vert\\
&\quad\ls \sup_{\varphi(TN^{-1}_n)\ls t\ls T-TN^{-1}_n}\bigg\vert \frac{{\bf E}\big(X_{t+TN^{-1}_n}-2X_t+X_{t-TN^{-1}_n}\big)^2}{(TN^{-1}_n)^{2\g}}-g_0(t)\bigg\vert \longrightarrow 0 \qquad \mbox{as}\ n\to \infty.
\end{align*}
The third term on the right hand side of (\ref{nelyg10}) also converges to $0$ as $n\to \infty$, as a consequence of classical results for Riemann sums and the inequality
\[
\frac{T}{N_n}\sum_{k=1}^{\tau_n}\bigg\vert g_0\Big(\frac{kT}{N_n}\Big)\bigg\vert \ls \sup_{0\ls t\ls T} \vert g_0(t)\vert\, \varphi\bigg(\frac{T}{N_n}\bigg).
\]
\endproof
\begin{thm}\label{thm2} Assume that the conditions of Proposition \ref{prop1} are satisfied and that the partitions $\pi_n$ are such that $p_n=o(\ln^{-1} n)$ as $n\to\infty$. Moreover, assume that $X$ is a Gaussian process with the Orey index $\g$ and
\begin{equation}\label{nelyg8}
\max_{1\ls k\ls N_n-1}\sum_{j=1}^{N_n-1}\vert d^{(2)n}_{jk}\vert\ls C p_n^{2+2\g},
\end{equation}
for some constant $C$ and every sequence of partitions $(\pi_n)$ of
the interval $[0,T]$, where $d^{(2)n}_{jk}=\E (\D_{ir,j}^{(2)n} X \D_{ir,k}^{(2)n} X)$, $1\ls j,k\ls N_n-1$. Then
\[
V_{\pi_n}^{(2)}(X,2)\longrightarrow 2\kap^2\int_0^T g(\ell(t))\,dt\quad\mbox{a.s.}\qquad \mbox{as}\ n\to\infty.
\]
\end{thm}
\proof The proof is standard; for completeness we give it in the Appendix. It can also be found, e.g., in \cite{begyn1}.
\endproof
\begin{cor}\label{cor} Let $(\pi_n)_{n\in\mathbb{N}}$ be a sequence of regular partitions of the interval $[0,T]$, $T>0$. Assume that $X$ is a Gaussian process satisfying conditions $(C1)$ and $(C2)$ and having the Orey index $\g$. Moreover, assume that
\begin{equation}\label{nelyg9}
\max_{1\ls k\ls N_n-1}\sum_{j=1}^{N_n-1}\vert d^{(2)n}_{jk}\vert\ls C \bigg(\frac{T}{N_n}\bigg)^{2+2\g}
\end{equation}
for some constant $C$, and every sequence of partitions $(\pi_n)$ of
the interval $[0,T]$, where $d^{(2)n}_{jk}=\E (\D_{n,j}^{(2)} X \D^{(2)}_{n,k} X)$, $1\ls j,k\ls N_n-1$. Then
\[
V_{N_n}^{(2)}(X,2)\longrightarrow \kap^2(4-2^{2\g}) T\qquad\mbox{a.s.}\quad \mbox{as}\ n\to\infty.
\]
\end{cor}
\proof For a regular partition $\pi_n$ condition (\ref{nelyg8}) reduces to (\ref{nelyg9}), and the statement follows from Theorem \ref{thm2}.
\endproof
\begin{thm}\label{thm3} Assume that the conditions of Proposition \ref{prop2} are satisfied and that inequality (\ref{nelyg9}) holds. Then
\[
V_{N_n}^{(2)}(X,2)\longrightarrow \int_0^T g_0(t)\, dt\qquad\mbox{a.s.}\quad \mbox{as}\ n\to\infty.
\]
\end{thm}
\proof The statement follows from Proposition \ref{prop2} and the arguments used in the proof of Theorem \ref{thm2}.
\endproof
\begin{rem} Let $X$ be a sfBm and let $H\neq 1/2$. Then in assumption $(\ref{asump})$ the function $\varphi(h)$ cannot be replaced by $h$ or $0$. Observe that the equality
\begin{align*}
{\bf E}\big(X_{t+h}-2X_t+X_{t-h}\big)^2
=& (4-2^{2H})h^{2H}-2^{2H-1}(t+h)^{2H}-3\cdot 2^{2H}t^{2H} \\
&-2^{2H-1}(t-h)^{2H}+2(2t+h)^{2H}+2(2t-h)^{2H}
\end{align*}
holds. Set
\[
\l_t(h):=\E\big(X_{t+h}-2X_t+X_{t-h}\big)^2- (4-2^{2H})h^{2H}
\]
and note that $\l_t(0)=\l^\prime_t(0)=\l^{\prime\prime}_t(0)=\l^{(3)}_t(0)=0$. The Taylor formula yields
\[
\l_t(h)=\int_0^h \frac{(h-x)^3}{3!}\,\l^{(4)}_t(x)\,dx, \qquad\forall\ h\ls t\ls T-h,
\]
where
\begin{align*}
\lambda_t^{(4)}(x)=& C_H\Big(2\big[(2t+x)^{2H-4}+(2t-x)^{2H-4}\big] -2^{2H-1}\big[(t+x)^{2H-4}+(t-x)^{2H-4}\big]\Big),\\
C_H=&2H(2H-1)(2H-2)(2H-3).
\end{align*}
Note that
\begin{align*}
&\sup_{0\ls t\ls T-h}\bigg\vert \frac{{\bf E}\big(X_{t+h}-2X_t+X_{t-h}\big)^2}{h^{2H}}-(4-2^{2H})\bigg\vert\\
&\quad\gs\sup_{h\ls t\ls T-h}\bigg\vert \int_0^h \frac{(h-x)^3}{3! h^{2H}}\,\l^{(4)}_t(x)\,dx\bigg\vert
\gs \bigg\vert \int_0^h \frac{(h-x)^3}{3! h^{2H}}\,\l^{(4)}_h(x)\,dx\bigg\vert\,.
\end{align*}
After the change of variable $y=\frac{h-x}{ah+bx}$ with suitable constants $a$ and $b$, we obtain the equality
\begin{align*}
h^{-2H}\int_0^h (h-x)^3 \l^{(4)}_h(x)\,dx=&2\cdot 3^{2H}C_H \int_0^{1/2} y^3(1+y)^{-2H-1}dy+2C_H \int_0^{1/2} y^3(1-y)^{-2H-1}dy\\
&+6^{2H}C_H \int_0^1 y^3(1+y)^{-2H-1}dy+2^{2H-2}H^{-1}C_H.
\end{align*}
All these integrals are finite and do not depend on $h$. Thus
\[
\lim_{h\to 0+}\sup_{h\ls t\ls T-h}\bigg\vert \frac{ \E\big(X_{t+h}-2X_t+X_{t-h}\big)^2}{h^{2H}} -(4-2^{2H})\bigg\vert>0.
\]
For this reason condition $(c)$ of Theorem 1 in B\'egyn \cite{begyn2}
is not satisfied for a sfBm $X$ {with $H\neq 1/2$.} In the case under consideration, condition $(c)$ takes the form
\[
\lim_{h\to 0+}\sup_{h\ls t\ls T-h}\bigg\vert \frac{(\delta^h_1\circ\delta^h_2 R)(t,t)}{h^{2H}} -(4-2^{2H})\bigg\vert= 0,
\]
where $R(s,t)$ is the covariance function of a sfBm and
\begin{align*}
(\delta^h_1\circ\delta^h_2 R)(t,t):=&4 R(t,t)+2R(t-h,t+h)-4R(t+h,t)\\
&-4R(t-h,t)+R(t+h,t+h)+R(t-h,t-h)\\
=&{\bf E}\big(X_{t+h}-2X_t+X_{t-h}\big)^2.
\end{align*}
On the other hand, assumption (\ref{asump}) is satisfied for sfBm.
{Indeed, from the inequality
\begin{align*}
&\sup_{\varphi(h)\ls t\ls T-h}\bigg\vert \frac{\E\big(X_{t+h}-2X_t+X_{t-h}\big)^2}{h^{2H}}-(4-2^{2H})\bigg\vert \\
&\quad\ls h^{-2H}\sup_{\varphi(h)\ls t\ls T-h}\sup_{0\ls x\ls h}\vert \l^{(4)}_t(x)\vert \int_0^h (h-x)^3\,dx\\
&\quad\ls \vert C_H\vert\cdot h^{4-2H}\sup_{\varphi(h)\ls t\ls T-h}\bigg(\frac{2}{(2t)^{4-2H}}+\frac{2}{(2t-h)^{4-2H}} +\frac{2^{2H-1}}{t^{4-2H}} +\frac{2^{2H-1}}{(t-h)^{4-2H}}\bigg) \\
&\quad\ls \vert C_H\vert\cdot h^{4-2H} \bigg(\frac{2}{(2\varphi(h))^{4-2H}}+\frac{2}{(2\varphi(h)-h)^{4-2H}} +\frac{2^{2H-1}}{\varphi(h)^{4-2H}} +\frac{2^{2H-1}}{(\varphi(h)-h)^{4-2H}}\bigg) \\
&\quad\ls \vert C_H\vert\cdot \bigg[\bigg(\frac{h}{\varphi(h)}\bigg)^{4-2H} +\frac{2}{(2L(h)-1)^{4-2H}}+2^{2H-1}\bigg(\frac{h}{\varphi(h)}\bigg)^{4-2H} +\frac{2^{2H-1}}{(L(h)-1)^{4-2H}}\bigg]
\end{align*}
we obtain the required assertion.}
\end{rem}
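As a quick numerical sanity check of the second-difference identity in the remark above (outside the proofs), the closed-form expression for ${\bf E}\big(X_{t+h}-2X_t+X_{t-h}\big)^2$ can be compared with a direct bilinear expansion of the sfBm covariance. The standard covariance $G_H(s,t)=s^{2H}+t^{2H}-\frac{1}{2}\big[(s+t)^{2H}+\vert s-t\vert^{2H}\big]$ is assumed here.

```python
# Compare the closed-form second-difference variance of sfBm with a
# direct bilinear expansion of its covariance G_H(s,t).

def sfbm_cov(s, t, H):
    """Standard sub-fractional Brownian motion covariance."""
    return s**(2*H) + t**(2*H) - 0.5*((s + t)**(2*H) + abs(s - t)**(2*H))

def second_diff_var(t, h, H):
    """E(X_{t+h} - 2 X_t + X_{t-h})^2 via bilinearity of the covariance."""
    pts = [(t + h, 1.0), (t, -2.0), (t - h, 1.0)]
    return sum(a*b*sfbm_cov(u, v, H) for u, a in pts for v, b in pts)

def closed_form(t, h, H):
    """Closed-form expression from the remark."""
    return ((4 - 2**(2*H))*h**(2*H)
            - 2**(2*H - 1)*(t + h)**(2*H)
            - 3*2**(2*H)*t**(2*H)
            - 2**(2*H - 1)*(t - h)**(2*H)
            + 2*(2*t + h)**(2*H)
            + 2*(2*t - h)**(2*H))

for H in (0.3, 0.7, 0.9):
    for t, h in ((0.5, 0.1), (1.0, 0.25)):
        assert abs(second_diff_var(t, h, H) - closed_form(t, h, H)) < 1e-12
```

The two expressions agree up to rounding error for every tested $(t,h,H)$.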
\subsection{Bifractional Brownian motion}
We shall prove that the conditions of Theorem \ref{thm2} are satisfied for bifBm. The bifBm satisfies conditions $(C1)$ and $(C2)$, so it suffices to verify inequality (\ref{nelyg8}).
Following the outline of the proof of Theorem 4 of B\'egyn \cite{begyn1}, we divide the study of the asymptotic properties of the $d^{(2)n}_{jk}$ into three steps, according to the value of $k-j$.
If $j=k$ then (\ref{nelyg17}) yields
\begin{align}\label{subf4}
d_{kk}^{(2)n}
\ls& 2\big[(\D^n_k t)^2\E(\D^n_{k+1} B^{HK})^2+(\D^n_{k+1} t)^2\E(\D^n_k B^{HK})^2\big]\nonumber\\
\ls& 2^{2-K}\big[(\D^n_k t)^2\vert t_{k+1}-t_k\vert^{2HK}+(\D_{n,k+1} t)^2\vert t_k-t_{k-1}\vert^{2HK}\big]\nonumber\\
\ls& 2^{3-K} m_n^{2+2HK}.
\end{align}
By using the Cauchy-Schwarz inequality we get
\begin{align}\label{subf5}
\big\vert d_{jk}^{(2)n}\big\vert\ls \E^{1/2}\big\vert(\D^{(2)n}_{ir,j} B^{HK})\big\vert^2\cdot \E^{1/2}\big\vert(\D^{(2)n}_{ir,k}B^{HK})\big\vert^2
\ls 2^{3-K} m_n^{2+2HK}
\end{align}
for $1\ls k-j\ls 2$ and
\begin{align}
d^{(2)n}_{j1}\ls& 2^{3-K} m_n^{2+2HK}\quad\mbox{for}\ 1\ls j\ls N_n-1,\\
d^{(2)n}_{1k}\ls& 2^{3-K} m_n^{2+2HK}\quad\mbox{for}\ 1\ls k\ls N_n-1.
\end{align}
Now consider the case $\vert j-k\vert\gs 3$. By symmetry of $d^{(2)n}_{jk}$ one can take $j-k\gs 3$.
Note that for $j\neq 1$ and $k\neq 1$ the equality
\begin{align*}
d^{(2)n}_{jk}= \int_{t^n_j}^{t^n_{j+1}}du \int^{t^n_j}_{t^n_{j-1}}dv \int_v^u dw \int_{t^n_k}^{t^n_{k+1}}dx\int^{t^n_k}_{t^n_{k-1}}dy \int_y^x \frac{\partial^4 R_{HK}}{\partial s^2\partial t^2}(w,z)\,dz
\end{align*}
holds. The fourth order mixed partial derivative of the covariance function $R_{HK}(s,t)$ is of the following form
\begin{align*}
\frac{\partial^4 R_{HK}}{\partial s^2\partial t^2}(s,t)
=&-\frac{2HK(2HK-1)(2HK-2)(2HK-3)}{2^K\vert s-t\vert^{2(2-KH)}}\nonumber\\
&+\frac{ K(K-1)(K-2)(K-3)(2H)^4}{2^K}\,(st)^{4H-2}\big(s^{2H}+t^{2H}\big)^{K-4}\nonumber\\
&+\frac{ K(K-1)(2H)^2(2H-1)}{2^K}\,\big[(K-2)(2H)+(2H-1)\big](st)^{2H-2}\big(s^{2H}+t^{2H}\big)^{K-2}
\end{align*}
for each $s,t>0$ such that $s\neq t$. Since $2s^Ht^H\ls s^{2H}+t^{2H}$ and $K-2<0$, $K-4<0$ it follows that
\begin{align*}
(st)^{2H-2}\big(s^{2H}+t^{2H}\big)^{K-2}\ls& 2^{K-2}(st)^{KH-2}\\
(st)^{4H-2}\big(s^{2H}+t^{2H}\big)^{K-4}\ls& 2^{K-4}(st)^{KH-2}.
\end{align*}
Thus
\[
\bigg\vert\frac{\partial^4 R_{HK}}{\partial s^2\partial t^2}(s,t)\bigg\vert\ls\frac{C_1}{\vert s-t\vert^{2(2-KH)}} +\frac{C_2}{(st)^{2-KH}}
\]
and
\begin{align}\label{nelyg12}
\vert d^{(2)n}_{jk}\vert
\ls&\int_{t^n_j}^{t^n_{j+1}}du \int^{t^n_j}_{t^n_{j-1}}dv \int_v^u dw \int_{t^n_k}^{t^n_{k+1}}dx\int^{t^n_k}_{t^n_{k-1}}dy \int_y^x \frac{C_1}{\vert w-z\vert^{2(2-KH)}}\, dz\nonumber\\
&+\int_{t^n_j}^{t^n_{j+1}}du \int^{t^n_j}_{t^n_{j-1}}dv \int_v^u dw \int_{t^n_k}^{t^n_{k+1}}dx\int^{t^n_k}_{t^n_{k-1}}dy \int_y^x \frac{C_2}{(wz)^{2-KH}}\, dz\nonumber\\
=:&I^{n,1}_{jk}+I^{n,2}_{jk},
\end{align}
where the constants $C_1$ and $C_2$ depend on $H$ and $K$. The inequality
\[
\vert w-z\vert\gs t^n_{j-1}-t^n_{k+1}=\sum_{i=k+2}^{j-1}\D_{n,i} t\gs (j-k-2)p_n
\]
on the integration set implies
\begin{equation}\label{nelyg13}
I^{n,1}_{jk}\ls\frac{4C_1 m_n^6}{(j-k-2)^{2(2-HK)}p_n^{2(2-HK)}}\ls \frac{4C_1 c^6 p_n^{2+2HK}}{(j-k-2)^{2(2-HK)}}\,,
\end{equation}
where $c$ is the constant defined in Definition \ref{begyn}. {Moreover,}
\begin{equation}\label{nelyg14}
\sum_{j-k\gs 3}^{n-1}\frac{1}{(j-k-2)^{2(2-HK)}}\ls \sum_{j=1}^\infty \frac{1}{j^{2(2-KH)}} <\infty.
\end{equation}
Now we estimate $I^{n,2}_{jk}$. {By modifying the computations above we similarly find that}
\begin{align}\label{nelyg15}
I^{n,2}_{jk}\ls& \frac{4C_2 m_n^6}{(t_{j-1}t_{k-1})^{2-KH}}=\frac{4C_2 m_n^6}{(t_{k-1}\sum_{i=k}^{j-1}\D_i t +t_{k-1}^2)^{2-KH}}\nonumber\\
\ls&\frac{4C_2 m_n^6}{p_n^{2-KH}((t_{j-1}-t_{k-1}) +t_{k-1})^{2-KH}}
\ls \frac{4C_2 c^6 p_n^{4+KH}}{(t_{j-1}-t_{k-1})^{2-KH}}\nonumber\\
\ls& 4C_2 c^6 \,\frac{p_n^{2+2KH}}{(j-k)^{2-KH}}\,.
\end{align}
Note that
\begin{equation}\label{nelyg16}
\sum_{j-k\gs 3}^{N_n-1}\frac{1}{(j-k)^{2-KH}}\ls \sum_{j=1}^\infty \frac{1}{j^{2-KH}} <\infty.
\end{equation}
The inequality (\ref{nelyg8}) follows from inequalities (\ref{nelyg12})-(\ref{nelyg16}).
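As an informal numerical check of the fourth-order mixed derivative of $R_{HK}$ used above (with the factor $(2HK-1)$ in the first term), the analytic expression can be compared with a central finite-difference approximation. The bifBm covariance $R_{HK}(s,t)=2^{-K}\big((s^{2H}+t^{2H})^K-\vert s-t\vert^{2HK}\big)$ is assumed.

```python
# Finite-difference check of the fourth-order mixed derivative of the
# bifBm covariance R_{HK}(s,t) = 2^{-K}((s^{2H}+t^{2H})^K - |s-t|^{2HK}).

def R_HK(s, t, H, K):
    return ((s**(2*H) + t**(2*H))**K - abs(s - t)**(2*H*K)) / 2**K

def d4_analytic(s, t, H, K):
    """Analytic d^4 R_{HK} / ds^2 dt^2, with (2HK-1) in the first term."""
    term1 = -(2*H*K*(2*H*K - 1)*(2*H*K - 2)*(2*H*K - 3)
              / 2**K) * abs(s - t)**(2*H*K - 4)
    term2 = (K*(K - 1)*(K - 2)*(K - 3)*(2*H)**4 / 2**K
             * (s*t)**(4*H - 2) * (s**(2*H) + t**(2*H))**(K - 4))
    term3 = (K*(K - 1)*(2*H)**2*(2*H - 1) / 2**K
             * ((K - 2)*2*H + (2*H - 1))
             * (s*t)**(2*H - 2) * (s**(2*H) + t**(2*H))**(K - 2))
    return term1 + term2 + term3

def d4_fd(s, t, H, K, h=5e-3):
    """Second central difference in s composed with one in t."""
    w = [1.0, -2.0, 1.0]
    acc = sum(ci*cj*R_HK(s + (i - 1)*h, t + (j - 1)*h, H, K)
              for i, ci in enumerate(w) for j, cj in enumerate(w))
    return acc / h**4

for H, K in ((0.6, 0.8), (0.3, 0.5)):
    for s, t in ((1.0, 0.4), (0.8, 1.5)):
        ex = d4_analytic(s, t, H, K)
        assert abs(d4_fd(s, t, H, K) - ex) < 1e-3 + 1e-2*abs(ex)
```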
\subsection{Subfractional Brownian motion}
We recall that conditions $(C1)$ and $(C2)$ are satisfied for sfBm, so the statement of Theorem \ref{thm2} holds provided inequality (\ref{nelyg8}) is satisfied. To prove this, we apply arguments similar to those used for bifBm.
If $j=k$ or $1\ls k-j\ls 2$ then (\ref{subf1}) and (\ref{subf2}) yield
\[
d_{jk}^{(2)n}\ls 8 m_n^{2+2H}.
\]
The same inequality holds for $d_{j1}^{(2)n}$, $1\ls j\ls N_n-1$ and $d_{1k}^{(2)n}$, $1\ls k\ls N_n-1$.
The fourth order mixed partial derivative of the covariance function $G_H(s,t)$ is of the following form
\[
\frac{\partial^4 G_H}{\partial s^2\partial t^2}(s,t)=-H(2H-1)(2H-2)(2H-3)\bigg[\frac{1}{\vert s-t\vert^{2(2-H)}}+\frac{1}{(s+t)^{2(2-H)}} \bigg]
\]
for each $s,t>0$ such that $s\neq t$. Note that $(s+t)^{2(2-H)}\gs \vert s-t\vert^{2(2-H)}$ if $s\neq t$.
Thus
\[
\bigg\vert \frac{\partial^4 G_H}{\partial s^2\partial t^2}(s,t)\bigg\vert\ls \frac{2H\vert(2H-1)(2H-2)(2H-3)\vert}{\vert s-t\vert^{2(2-H)}}
\]
and
\[
\vert d^{(2)n}_{jk}\vert\ls\frac{4C_H m_n^6}{(j-k-2)^{2(2-H)}p_n^{2(2-H)}}\ls \frac{4C_H c^6 p_n^{2+2H}}{(j-k-2)^{2(2-H)}}
\]
for $j-k\gs 3$, $2\ls k\ls N_n-1$, where $C_H=2H\vert(2H-1)(2H-2)(2H-3)\vert$ and $c$ is the constant defined in Definition \ref{begyn}. Thus
\begin{align}\label{subf3}
\max_{2\ls k\ls N_n-1}\sum_{j-k\gs 3}d^{(2)n}_{jk}\ls& 4C_H c^6 p_n^{2+2H}\max_{2\ls k\ls N_n-1}\sum_{j-k\gs 3}\frac{1}{(j-k-2)^{2(2-H)}} \nonumber\\
\ls& 4C_H c^6 p_n^{2+2H}\sum_{j=1}^\infty\frac{1}{j^{2(2-H)}}\ls C p_n^{2+2H}
\end{align}
for some constant $C$ and inequality (\ref{nelyg8}) holds.
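The closed form of $\partial^4 G_H/\partial s^2\partial t^2$ displayed above admits the same kind of finite-difference check; the sfBm covariance $G_H(s,t)=s^{2H}+t^{2H}-\frac12\big[(s+t)^{2H}+\vert s-t\vert^{2H}\big]$ is assumed.

```python
# Finite-difference check of the fourth-order mixed derivative of the
# sfBm covariance G_H(s,t) = s^{2H} + t^{2H} - ((s+t)^{2H} + |s-t|^{2H})/2.

def G_H(s, t, H):
    return s**(2*H) + t**(2*H) - 0.5*((s + t)**(2*H) + abs(s - t)**(2*H))

def d4_analytic(s, t, H):
    c = H*(2*H - 1)*(2*H - 2)*(2*H - 3)
    return -c*(abs(s - t)**(2*H - 4) + (s + t)**(2*H - 4))

def d4_fd(s, t, H, h=5e-3):
    # second central difference in s composed with one in t
    w = [1.0, -2.0, 1.0]
    acc = sum(ci*cj*G_H(s + (i - 1)*h, t + (j - 1)*h, H)
              for i, ci in enumerate(w) for j, cj in enumerate(w))
    return acc / h**4

for H in (0.3, 0.7, 0.9):
    for s, t in ((1.0, 0.4), (0.8, 1.5)):
        ex = d4_analytic(s, t, H)
        assert abs(d4_fd(s, t, H) - ex) < 1e-3 + 1e-2*abs(ex)
```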
\subsection{Ornstein-Uhlenbeck process}
First we show the following lemma.
\begin{lem}\label{lem}
Let $X$ be the solution of equation (\ref{O-U1}). Then
\[
\big\vert V^{(2)}_{\pi_n}(X,2)-\t^2 V^{(2)}_{\pi_n}(B^H,2)\big\vert=O(p_n^{1-\eps})
\]
for every $\eps>0$.
\end{lem}
\begin{proof} It is evident that
\[
\D^{(2)n}_{ir,k} X=-\mu\bigg(\D^n_k t\int_{t^n_k}^{t^n_{k+1}} X_s\,ds-\D^n_{k+1} t\int^{t^n_k}_{t^n_{k-1}} X_s\,ds\bigg)+\t \D^{(2)n}_{ir,k} B^H.
\]
After simple calculations we get the estimate
\begin{align*}
\sup_{t^n_k\ls s\ls t^n_{k+1}} \vert X_s-X_k\vert\ls& \mu(\D^n_{k+1} t)\sup_{t\ls T}\vert X_t\vert +\t\sup_{t^n_k\ls s\ls t^n_{k+1}} ( B^H_s-B^H_k)\\
\ls& \mu\, m_n \sup_{t\ls T}\vert X_t\vert+\t L^{H,H-\eps}_T m_n^{H-\eps},
\end{align*}
where $L^{H,H-\eps}_T$ is defined in subsection \ref{OU}. Thus
\begin{align*}
&\bigg(\D^n_k t\int_{t^n_k}^{t^n_{k+1}}( X_s-X_k)\,ds-\D^n_{k+1} t\int^{t^n_k}_{t^n_{k-1}} ( X_s-X_k)\,ds \bigg)^2\\
&\quad\ls 2m_n^3 \int_{t^n_k}^{t^n_{k+1}} ( X_s-X_k)^2\,ds+2m_n^3 \int^{t^n_k}_{t^n_{k-1}} ( X_s-X_k)^2\,ds\\
&\quad\ls 2m_n^4\Big(\sup_{t^n_k\ls s\ls t^n_{k+1}} ( X_s-X_k)^2+\sup_{t^n_{k-1}\ls s\ls t^n_k} ( X_k-X_s)^2\Big)\\
&\quad\ls 8m_n^{4+2H-2\eps}\Big(\mu^2 m_n^{2-2H+2\eps} \sup_{t\ls T} X_t^2+\t^2(L^{H,H-\eps}_T)^2 \Big)
\end{align*}
and
\begin{align*}
&\bigg\vert\bigg(\D^n_k t\int_{t^n_k}^{t^n_{k+1}} X_s\,ds-\D^n_{k+1} t\int^{t^n_k}_{t^n_{k-1}} X_s\,ds\bigg) \D^{(2)n}_{ir,k} B^H\bigg\vert\\
&\quad=\bigg\vert\bigg(\D^n_k t\int_{t^n_k}^{t^n_{k+1}}( X_s-X_k)\,ds-\D_{n,k+1} t\int^{t^n_k}_{t^n_{k-1}} ( X_s-X_k)\,ds \bigg) \D^{(2)n}_{ir,k} B^H\bigg\vert\\
&\quad \ls 2 m_n^{2+H-\eps}\Big(\mu\, m_n^{1-H+\eps}\sup_{t\ls T}\vert X_t\vert+\t L^{H,H-\eps}_T\Big)\cdot 2 m_n L^{H,H-\eps}_T m_n^{H-\eps}\\
&\quad=4 m_n^{3+2H-2\eps}\Big(\mu\, m_n^{1-H+\eps}\sup_{t\ls T}\vert X_t\vert+\t L^{H,H-\eps}_T\Big)\cdot L^{H,H-\eps}_T.
\end{align*}
Thus for every $\eps>0$
\begin{align*}
\big\vert V^{(2)}_{\pi_n}(X,2)-\t^2 V^{(2)}_{\pi_n}(B^H,2)\big\vert \ls& 8c^{2+2H} m_n^{2-2\eps}\Big(\mu^2 m_n^{2-2H+2\eps} \sup_{t\ls T} X_t^2+2\t^2(L^{H,H-\eps}_T)^2 \Big)T\\
&+ 4c^{2+2H} m_n^{1-2\eps}\Big(\mu m_n^{1-H+\eps}\sup_{t\ls T}\vert X_t\vert+\t L^{H,H-\eps}_T\Big)\cdot L^{H,H-\eps}_T T\\
=&O(m_n^{1-2\eps})
\end{align*}
since
\[
\frac{1}{\mu^n_k}\ls\frac{1}{2p_n^{2H+2}}\,.
\]
This implies the statement of the lemma.
\end{proof}
As in the previous cases it is enough to verify condition (\ref{nelyg8}) of Theorem \ref{thm2} for the fBm $B^H$. The following inequality
\[
\bigg\vert\frac{\partial^4 F_H}{\partial s^2\partial t^2}(s,t)\bigg\vert\ls \frac{
H\vert (2H-1)(2H-2)(2H-3)\vert}{\vert s-t\vert^{2(2-H)}}
\]
holds for the covariance function $F_H(s,t)$ of $B^H$. Applying arguments similar to those for sfBm, we obtain
\[
\max_{1\ls k\ls N_n-1}\sum_{j=1}^{N_n-1}\vert d^{(2)n}_{jk}\vert\ls C p_n^{2+2H}.
\]
From Lemma \ref{lem} and inequality above we get the statement of Theorem \ref{thm2}.
\subsection{Fractional Brownian bridge}
The fractional Brownian bridge is defined on $[0,T]$ by
\[
X_t^H=B^H_t-\frac{t^{2H}+T^{2H}-\vert t-T\vert^{2H}}{2T^{2H}}\, B^H_T,
\]
where $B^H$ is a fBm on $[0,T]$. Denote $g(t,T)=t^{2H}+T^{2H}-\vert t-T\vert^{2H}$. Then
\begin{align*}
\E X_t^H X_s^H=&\E B_t^H B_s^H-(2T^{2H})^{-1}g(t,T)\E B_T^H B_s^H-(2T^{2H})^{-1}g(s,T)\E B_T^H B_t^H\\
&+(2T^{2H})^{-2}g(s,T)g(t,T)\E (B_T^H)^2\\
=&\E B_t^H B_s^H-(2T^{2H})^{-1}g(s,T)g(t,T)+(4T^{2H})^{-1}g(s,T)g(t,T)\\
=&\E B_t^H B_s^H-(4T^{2H})^{-1}g(s,T)g(t,T).
\end{align*}
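The covariance identity just derived can be confirmed numerically by expanding $\E X_t^H X_s^H$ bilinearly from the fBm covariance; a small sketch:

```python
# Check that the fractional Brownian bridge covariance
#   E X_t X_s = E B_t B_s - g(s,T) g(t,T) / (4 T^{2H})
# matches a direct bilinear computation from the fBm covariance.

def fbm_cov(s, t, H):
    return 0.5*(s**(2*H) + t**(2*H) - abs(s - t)**(2*H))

def g(t, T, H):
    return t**(2*H) + T**(2*H) - abs(t - T)**(2*H)

def bridge_cov_direct(s, t, T, H):
    # X_t = B_t - g(t,T) B_T / (2 T^{2H}); expand the product term by term
    a = g(t, T, H) / (2*T**(2*H))
    b = g(s, T, H) / (2*T**(2*H))
    return (fbm_cov(s, t, H) - a*fbm_cov(s, T, H)
            - b*fbm_cov(t, T, H) + a*b*fbm_cov(T, T, H))

def bridge_cov_formula(s, t, T, H):
    return fbm_cov(s, t, H) - g(s, T, H)*g(t, T, H)/(4*T**(2*H))

T = 1.0
for H in (0.3, 0.6, 0.9):
    for s, t in ((0.2, 0.7), (0.5, 0.5), (0.9, 0.4)):
        assert abs(bridge_cov_direct(s, t, T, H)
                   - bridge_cov_formula(s, t, T, H)) < 1e-12
```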
If $j=k$ or $1\ls k-j\ls 2$ then application of inequality (\ref{bridge}) yields
\[
d_{jk}^{(2)n}\ls 4 m_n^{2+2H}.
\]
The same inequality holds for $d_{j1}^{(2)n}$, $1\ls j\ls N_n-1$ and $d_{1k}^{(2)n}$, $1\ls k\ls N_n-1$.
Further
\begin{align*}
\bigg\vert\frac{\partial^4 g(s,T)g(t,T)}{\partial s^2\partial t^2}\bigg\vert=& 4H^2(2H-1)^2\big\vert\big(s^{2H-2}-(T-s)^{2H-2}\big) \big(t^{2H-2}-(T-t)^{2H-2}\big)\big\vert\\
\ls& 4H^2(2H-1)^2\big[(st)^{2H-2}+[t(T-s)]^{2H-2}+[s(T-t)]^{2H-2}\\
&+[(T-t)(T-s)]^{2H-2}\big].
\end{align*}
We estimate $d_{jk}^{(2)n}$ similarly to the bifBm case. Assume that $\vert j-k\vert\gs 3$. By symmetry of $d^{(2)n}_{jk}$ one can take $j-k\gs 3$. We first note that
\begin{align*}
I_1:=&\int_{t^n_j}^{t^n_{j+1}}du \int^{t^n_j}_{t^n_{j-1}}dv \int_v^u dw \int_{t^n_k}^{t^n_{k+1}}dx\int^{t^n_k}_{t^n_{k-1}}dy \int_y^x \frac{1}{(wz)^{2-2H}}\, dz\\
\ls& \frac{4 m_n^6}{(t^n_{j-1}t^n_{k-1})^{2-2H}}\ls 4 c^6 \,\frac{p_n^{2+4H}}{(j-k)^{2-2H}}\,.
\end{align*}
Assume $k\neq N_n-1$ and $j\neq 1$ or $j\neq N_n-1$ and $k\neq 1$. Then
\begin{align*}
I_2:=&\int_{t^n_j}^{t^n_{j+1}}du \int^{t^n_j}_{t^n_{j-1}}dv \int_v^u dw \int_{t^n_k}^{t^n_{k+1}}dx\int^{t^n_k}_{t^n_{k-1}}dy \int_y^x \frac{1}{[w(T-z)]^{2-2H}}\, dz\\
\ls& \frac{4 m_n^6}{[t^n_{j-1}(t^n_{j-1}-t^n_{k+1})]^{2-2H}}\ls \frac{4 m_n^6}{p_n^{2-2H}(t^n_{j-1}-t^n_{k+1})^{2-2H}} \ls 4 c^6 \,\frac{p_n^{2+4H}}{(j-k-2)^{2-2H}}\,,\\
I_3:=&\int_{t^n_j}^{t^n_{j+1}}du \int^{t^n_j}_{t^n_{j-1}}dv \int_v^u dw \int_{t^n_k}^{t^n_{k+1}}dx\int^{t^n_k}_{t^n_{k-1}}dy \int_y^x \frac{1}{[z(T-w)]^{2-2H}}\, dz\\
\ls& \frac{4 m_n^6}{[t^n_{k-1}(T-t^n_{j+1})]^{2-2H}}\ls \frac{4 m_n^6}{p_n^{2-2H}(T-t^n_{j+1})^{2-2H}}\ls 4 c^6 \,\frac{p_n^{2+4H}}{(N_n-j-1)^{2-2H}},\\
I_4:=&\int_{t^n_j}^{t^n_{j+1}}du \int^{t^n_j}_{t^n_{j-1}}dv \int_v^u dw \int_{t^n_k}^{t^n_{k+1}}dx\int^{t^n_k}_{t^n_{k-1}}dy \int_y^x \frac{1}{[(T-w)(T-z)]^{2-2H}}\, dz\\
\ls& \frac{4 m_n^6}{p_n^{2-2H}(t^n_{j+1}-t^n_{k+1})^{2-2H}} \ls 4 c^6 \,\frac{p_n^{2+4H}}{(j-k)^{2-2H}}\,.
\end{align*}
Thus we obtain
\[
\max_{1\ls k\ls N_n-1}\sum_{j=1}^{N_n-1}\vert d^{(2)n}_{jk}\vert\ls C p_n^{2+2H}
\]
and get the statement of Theorem \ref{thm2}.
\section{On the estimation of the Orey index for irregular partitions}
Let $(\pi_n)_{n\gs 1}$ be a sequence of partitions of $[0,T]$ such that $0=t^n_0<t^n_1<\cdots<t^n_{m(n)}=T$ for all $n\gs 1$. Assume that we have two sequences of partitions $(\pi_{i(n)})_{n\gs 1}$ and $(\pi_{j(n)})_{n\gs 1}$ of $[0,T]$ such that $\pi_{i(n)}\subset \pi_{j(n)}\subseteq \pi_n$, $i(n)< j(n)\ls m(n)$, for all $n\in\mathbb{N}$, where $\pi_{i(n)}=\{0=t^n_0<t^n_{i(1)}<t^n_{i(2)}<\cdots<t^n_{i(n)}=T\}$ and $\pi_{j(n)}=\{0=t^n_0<t^n_{j(1)}<t^n_{j(2)}<\cdots<t^n_{j(n)}=T\}$. {Set
\[
\D^n_{i(k)} t=t^n_{i(k)}-t^n_{i(k-1)},\qquad m_{i(n)}=\max_{1\ls k\ls i(n)}\D^n_{i(k)} t,\qquad p_{i(n)}=\min_{1\ls k\ls i(n)}\D^n_{i(k)} t.
\]
Moreover, assume that $p_{j(n)}\neq m_{i(n)}$ and $m_{i(n)}\ls c p_{i(n)}$, for all $i(n)$, $n\gs 1$, $c\gs 1$. Note that $p_{j(n)}\ls p_{i(n)}$. }
Let $X$ be a Gaussian process with Orey index $\g\in(0,1)$. Set
\[
V_{\pi_{i(n)}}^{(2)}(X,2)=2\sum_{k=1}^{i(n)-1} \frac{\D^n_{i(k+1)} t(\D^{(2)n}_{ir,i(k)}X)^2}{
(\D^n_{i(k)} t)^{\g+1/2}(\D^n_{i(k+1)} t)^{\g+1/2}[\D^n_{i(k)} t+\D^n_{i(k+1)} t]}\,,
\]
where
\[
\D^{(2)n}_{ir,i(k)}X=\D^n_{i(k)} t\cdot X(t^n_{i(k+1)})
+\D^n_{i(k+1)} t\cdot X(t^n_{i(k-1)})-(\D^n_{i(k)} t+\D^n_{i(k+1)} t)X(t^n_{i(k)}).
\]
Denote
\[
V_{i(n)}^{(2)}(X,2)=\sum_{k=1}^{i(n)-1} (\D^{(2)n}_{ir,i(k)}X)^2\quad\mbox{and}\quad
\mu_k^n=(\D^n_{i(k)} t)^{\g+1/2}(\D^n_{i(k+1)} t)^{\g+1/2}[\D^n_{i(k)} t+\D^n_{i(k+1)} t].
\]
Define
\[
\widehat \g_n=-\frac{1}{2}+\frac{1}{2\ln(p_{j(n)}/m_{i(n)})}\, \ln\frac{V_{{j(n)}}^{(2)}(X,2)}{V_{{i(n)}}^{(2)}(X,2)}\,.
\]
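Before stating the theorem, the following deterministic illustration (not part of the proofs) may be helpful: for a fBm with Hurst index $H$ on regular nested partitions with $n$ and $2n$ subintervals, stationarity of increments gives $\E(\D^{(2)n}_{ir,k}X)^2=(4-2^{2H})h^{2+2H}$ for mesh $h$, and plugging the exact expected values of $V^{(2)}$ into $\widehat\g_n$ recovers $\g=H$ as $n\to\infty$.

```python
# Illustration: for fBm (Orey index gamma = H) on regular nested
# partitions with n and 2n subintervals of [0, T], plugging the exact
# expected values E V^{(2)} into the estimator recovers H as n grows.
# Uses E(Delta^{(2)}_k X)^2 = (4 - 2^{2H}) h^{2+2H} for mesh h.
from math import log

def expected_V(N, T, H):
    """Exact expectation of V^{(2)} over a regular partition with N steps."""
    h = T / N
    return (N - 1) * (4 - 2**(2*H)) * h**(2 + 2*H)

def gamma_hat(n, T, H):
    Vi = expected_V(n, T, H)        # coarse partition, n subintervals
    Vj = expected_V(2*n, T, H)      # fine partition, 2n subintervals
    p_j, m_i = T/(2*n), T/n         # min mesh of fine, max mesh of coarse
    return -0.5 + log(Vj/Vi) / (2*log(p_j/m_i))

for H in (0.3, 0.5, 0.7):
    assert abs(gamma_hat(10**4, 1.0, H) - H) < 1e-3
```

This only illustrates consistency of the estimator on expected values; the almost sure statement is the content of the theorem below.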
\begin{thm} {Assume that the conditions of Proposition $\ref{prop1}$ are satisfied for two sequences of partitions $(\pi_{i(n)})_{n\gs 1}$ and $(\pi_{j(n)})_{n\gs 1}$ of $[0,T]$ with the properties mentioned above. Then
\begin{equation}\label{riba}
V_{\pi_{k(n)}}^{(2)}(X,2)\longrightarrow 2\kap^2\int_0^T h(\ell(t))\,dt\quad\mbox{a.s.}\qquad \mbox{as}\ n\to\infty
\end{equation}
for $k(n)=i(n)$ or $k(n)=j(n)$.}
If sequences of partitions $\{\pi_{i(n)}\}$ and $\{\pi_{j(n)}\}$, $i(n)< j(n)$, are regular or such that $p_{j(n)}/p_{i(n)}\to 0$ as $n\to\infty$, then
\[
\widehat\g_n\arol{{a.s.}}\g.
\]
\end{thm}
\textbf{Proof}. Proposition $\ref{prop1}$ yields the limit (\ref{riba}). It is evident that
\[
\frac{1}{2m_n^{2\g+1}}\ls\frac{\Delta t^n_{i(k)}}{\mu^n_k}\ls\frac{1}{2p_n^{2\g+1}}
\]
and
\begin{equation}\label{nelyg3}
\bigg(\frac{p_{i(n)}}{m_{j(n)}}\bigg)^{2\g+1}\frac{V_{{j(n)}}^{(2)}(X,2)}{V_{{i(n)}}^{(2)}(X,2)}\ls \frac{V_{\pi_{j(n)}}^{(2)}(X,2)}{V_{\pi_{i(n)}}^{(2)}(X,2)} \ls \bigg(\frac{m_{i(n)}}{p_{j(n)}}\bigg)^{2\g+1}\frac{V_{{j(n)}}^{(2)}(X,2)}{V_{{i(n)}}^{(2)}(X,2)}\,.
\end{equation}
Further
\begin{align*}
\widehat \g_n=&-\frac{1}{2}+\frac{1}{2\ln(p_{j(n)}/m_{i(n)})}\bigg((2\g+1)\ln(p_{j(n)}/m_{i(n)})+\ln\frac{m_{i(n)}^{2\g+1} V_{{j(n)}}^{(2)}(X,2)}{p_{j(n)}^{2\g+1}V_{{i(n)}}^{(2)}(X,2)}\bigg)\\
=&\g+\frac{1}{2\ln(p_{j(n)}/m_{i(n)})}\,\ln\frac{m_{i(n)}^{2\g+1} V_{{j(n)}}^{(2)}(X,2)}{p_{j(n)}^{2\g+1}V_{{i(n)}}^{(2)}(X,2)}\\
=&\g+\frac{1}{2\ln(p_{j(n)}/m_{i(n)})}\,\ln\frac{V_{\pi_{j(n)}}^{(2)}(X,2)}{V_{\pi_{i(n)}}^{(2)}(X,2)}\\
&+\frac{1}{2\ln(p_{j(n)}/m_{i(n)})}\,\ln\bigg(\frac{m_{i(n)}^{2\g+1} V_{{j(n)}}^{(2)}(X,2)}{p_{j(n)}^{2\g+1}V_{{i(n)}}^{(2)}(X,2)}\Big / \frac{V_{\pi_{j(n)}}^{(2)}(X,2)}{V_{\pi_{i(n)}}^{(2)}(X,2)}\bigg)\\
\ls&\g+\frac{1}{2\ln(p_{j(n)}/m_{i(n)})}\,\ln\frac{V_{\pi_{j(n)}}^{(2)}(X,2)}{V_{\pi_{i(n)}}^{(2)}(X,2)}
\end{align*}
since $\ln(p_{j(n)}/m_{i(n)})\ls0$ and
\[
\frac{1}{2\ln(p_{j(n)}/m_{i(n)})}\,\ln\bigg(\frac{m_{i(n)}^{2\g+1} V_{{j(n)}}^{(2)}(X,2)}{p_{j(n)}^{2\g+1}V_{{i(n)}}^{(2)}(X,2)}\Big / \frac{V_{\pi_{j(n)}}^{(2)}(X,2)}{V_{\pi_{i(n)}}^{(2)}(X,2)}\bigg)\ls 0.
\]
In the same way we get
\begin{align}\label{nelyg_n}
\widehat {\g}_n=& -\frac{1}{2}+\frac{1}{2\ln(p_{j(n)}/m_{i(n)})}\bigg((2\g+1)\ln(m_{j(n)}/p_{i(n)})+\ln\frac{p_{i(n)}^{2\g+1} V_{{j(n)}}^{(2)}(X,2)}{m_{j(n)}^{2\g+1}V_{{i(n)}}^{(2)}(X,2)}\bigg)\nonumber\\
=&-\frac{1}{2}+\bigg(\g+\frac{1}{2}\bigg)\frac{\ln(m_{j(n)}/p_{i(n)})}{\ln(p_{j(n)}/m_{i(n)})} +\frac{1}{2\ln(p_{j(n)}/m_{i(n)})}\,\ln\frac{p_{i(n)}^{2\g+1} V_{{j(n)}}^{(2)}(X,2)}{m_{j(n)}^{2\g+1}V_{{i(n)}}^{(2)}(X,2)}\nonumber\\
=&\g+\bigg(\g+\frac{1}{2}\bigg)\frac{\ln(m_{j(n)}/p_{i(n)})-\ln(p_{j(n)}/m_{i(n)})}{\ln(p_{j(n)}/m_{i(n)})} +\frac{1}{2\ln(p_{j(n)}/m_{i(n)})}\,\ln\frac{V_{\pi_{j(n)}}^{(2)}(X,2)}{V_{\pi_{i(n)}}^{(2)}(X,2)}\nonumber\\
&+\frac{1}{2\ln(p_{j(n)}/m_{i(n)})}\,\ln\bigg(\frac{p_{i(n)}^{2\g+1} V_{{j(n)}}^{(2)}(X,2)}{m_{j(n)}^{2\g+1}V_{{i(n)}}^{(2)}(X,2)}\Big / \frac{V_{\pi_{j(n)}}^{(2)}(X,2)}{V_{\pi_{i(n)}}^{(2)}(X,2)}\bigg)\nonumber\\
\gs&\g+\bigg(\g+\frac{1}{2}\bigg)\frac{\ln(m_{j(n)}/p_{i(n)})-\ln(p_{j(n)}/m_{i(n)})}{\ln(p_{j(n)}/m_{i(n)})} +\frac{1}{2\ln(p_{j(n)}/m_{i(n)})}\,\ln\frac{V_{\pi_{j(n)}}^{(2)}(X,2)}{V_{\pi_{i(n)}}^{(2)}(X,2)}\,,
\end{align}
since
\[
\frac{1}{2\ln(p_{j(n)}/m_{i(n)})}\,\ln\bigg(\frac{p_{i(n)}^{2\g+1} V_{{j(n)}}^{(2)}(X,2)}{m_{j(n)}^{2\g+1}V_{{i(n)}}^{(2)}(X,2)}\Big / \frac{V_{\pi_{j(n)}}^{(2)}(X,2)}{V_{\pi_{i(n)}}^{(2)}(X,2)}\bigg)\gs 0
\]
and
\[
\bigg(\g+\frac{1}{2}\bigg)\frac{\ln(m_{j(n)}/p_{i(n)})-\ln(p_{j(n)}/m_{i(n)})}{\ln(p_{j(n)}/m_{i(n)})}\ls 0.
\]
If sequences of partitions $\{\pi_{i(n)}\}$ and $\{\pi_{j(n)}\}$, $i(n)< j(n)$, are regular then the second term in the inequality (\ref{nelyg_n}) is equal to $0$ and
\[
\vert\widehat {\g}_n-\g \vert \ls\frac{1}{2\ln (m_{i(n)}/p_{j(n)})}\,\bigg\vert\ln\frac{V_{\pi_{j(n)}}^{(2)}(X,2)}{V_{\pi_{i(n)}}^{(2)}(X,2)}\bigg\vert.
\]
Under the conditions of the theorem, the statement holds in the regular case of partitions. In the irregular case we obtain the inequalities
\[
\bigg\vert\widehat {\g}_n-\g -\frac{1}{2\ln (p_{j(n)}/m_{i(n)})}\,\ln\frac{V_{\pi_{j(n)}}^{(2)}(X,2)}{V_{\pi_{i(n)}}^{(2)}(X,2)}\bigg\vert\ls \bigg(\g+\frac{1}{2}\bigg)\frac{\ln(m_{j(n)}/p_{j(n)})+\ln(m_{i(n)}/p_{i(n)})}{\ln(m_{i(n)}/p_{j(n)})}
\]
and
\begin{align*}
\vert\widehat {\g}_n-\g \vert \ls& \bigg\vert\widehat {\g}_n-\g -\frac{1}{2\ln (p_{j(n)}/m_{i(n)})}\,\ln\frac{V_{\pi_{j(n)}}^{(2)}(X,2)}{V_{\pi_{i(n)}}^{(2)}(X,2)}
+\frac{1}{2\ln (p_{j(n)}/m_{i(n)})}\, \ln\frac{V_{\pi_{j(n)}}^{(2)}(X,2)}{V_{\pi_{i(n)}}^{(2)}(X,2)}\bigg\vert\\
\ls&\frac{3}{2}\,\frac{\ln(m_{j(n)}/p_{j(n)})+\ln(m_{i(n)}/p_{i(n)})}{\ln(m_{i(n)}/p_{j(n)})}
+\frac{1}{2\ln (m_{i(n)}/p_{j(n)})}\,\bigg\vert\ln\frac{V_{\pi_{j(n)}}^{(2)}(X,2)}{V_{\pi_{i(n)}}^{(2)}(X,2)}\bigg\vert.
\end{align*}
In the irregular case of partitions $\{\pi_{i(n)}\}$ and $\{\pi_{j(n)}\}$, $i(n)< j(n)$, the second term in the above inequality tends to $0$ since $\ln (p_{i(n)}/p_{j(n)})\to\infty$ as $n\to\infty$. Thus the statement of the theorem holds.
\section{Appendix}
\subsection{Proof of Lemma \ref{orey}}
Assume, without loss of generality, that $0<h<1$. {We first prove that $\widehat\g_*\ls\g_*$, where
\[
\widehat\g_*:=\limsup_{h\downarrow 0}\sup_{\varphi(h)\ls s\ls T-h}\frac{\ln\s_X(s,s+h)}{\ln h}\,, \quad\g_*:=\inf\bigg\{\g>0\dvit \lim_{h\downarrow 0}\sup_{\varphi(h)\ls s\ls T-h}\frac{h^\g}{\s_X(s,s+h)}=0\bigg\}.
\]
} Let $\g>\g_*$. It suffices to show that $\g\gs \widehat\g_*$. By definition of the greatest lower bound, there exists a real number $\a$ such that $\g>\a>\g_*$, and
\[
\sup_{\varphi(h)\ls s\ls T-h}\frac{h^\a}{\s_X(s,s+h)} \longrightarrow 0\quad\mbox{as}\ h\downarrow 0.
\]
But
\begin{equation}\label{orey00}
\sup_{\varphi(h)\ls s\ls T-h}\frac{h^\g}{\s_X(s,s+h)}=h^{\g-\a}\sup_{\varphi(h)\ls s\ls T-h}\frac{h^{\a}}{\s_X(s,s+h)} \longrightarrow 0\quad\mbox{as}\ h\downarrow 0
\end{equation}
as the product of two functions tending to $0$. Under the statement
\begin{equation}\label{orey3}
\sup_{\varphi(h)\ls s\ls T-h}\s_X(s,s+h)\longrightarrow 0\qquad\mbox{as}\ h\downarrow 0
\end{equation}
and relation (\ref{orey00}) there exists an $h_0$ such that for all $h\ls h_0<1$
\[
\sup_{\varphi(h)\ls s\ls T-h}\frac{h^\g}{\s_X(s,s+h)}=\frac{h^\g}{\inf_{\varphi(h)\ls s\ls T-h}\s_X(s,s+h)}<1 \quad\mbox{and}\quad \sup_{0\ls s\ls T-h}\s_X(s,s+h)<1.
\]
Moreover,
\[
h^\g<\inf_{\varphi(h)\ls s\ls T-h}\s_X(s,s+h)
\]
for all $h\ls h_0<1$. So
\[
\ln h^\g<\ln \Big(\inf_{\varphi(h)\ls s\ls T-h}\s_X(s,s+h)\Big)\ls\ln \Big(\sup_{\varphi(h)\ls s\ls T-h}\s_X(s,s+h)\Big)
\]
and
\begin{align*}
\g>&\frac{\ln \big(\sup_{\varphi(h)\ls s\ls T-h} \s_X(s,s+h)\big)}{\ln h}=\sup_{\varphi(h)\ls s\ls T-h} \frac{\ln \s_X(s,s+h)}{\ln h}\\
\gs&\limsup_{h\downarrow 0}\sup_{\varphi(h)\ls s\ls T-h}\frac{\ln\s_X(s,s+h)}{\ln h}= \widehat\g_*\,.
\end{align*}
Thus $\widehat\g_*\ls \g_*$.
Next we prove that $\widehat\g_*\gs\g_*$. Let $\g>\a>\widehat\g_*$. It suffices to show that $\g\gs \g_*$. Under the condition $\a>\widehat\g_*$ and statement (\ref{orey3}) there exists $h_0$ such that for $h\ls h_0<1$
\[
\inf_{\varphi(h)\ls s\ls T-h}\frac{\ln \s_X(s,s+h)}{\ln h}<\a, \qquad \sup_{0\ls s\ls T-h}\s_X(s,s+h)<1.
\]
This implies the inequality
\[
\ln \Big(\inf_{\varphi(h)\ls s\ls T-h} \s_X(s,s+h)\Big)> \ln h^\a
\]
and
\[
\inf_{\varphi(h)\ls s\ls T-h} \s_X(s,s+h)>h^\a.
\]
Thus
\[
\sup_{\varphi(h)\ls s\ls T-h}\frac{h^\a}{\s_X(s,s+h)}<1.
\]
So
\[
\sup_{\varphi(h)\ls s\ls T-h}\frac{h^\g}{\s_X(s,s+h)}< h^{\g-\a}\longrightarrow 0\quad\mbox{as}\ h\to 0.
\]
Therefore $\g\gs\g_*$.
Now we prove that $\widehat\g^*=\g^*$, where
\[
\widehat\g^*:=\liminf_{h\downarrow 0}\inf_{\varphi(h)\ls s\ls T-h}\frac{\ln\s_X(s,s+h)}{\ln h},\quad \g^*:=\sup\bigg\{\g>0\dvit \lim_{h\downarrow 0}\inf_{\varphi(h)\ls s\ls T-h}\frac{h^\g}{\s_X(s,s+h)}=+\infty\bigg\}.
\]
We first prove $\widehat\g^*\gs\g^*$. By definition of the least upper bound, there exists a real number $\g$ such that $\g^*>\g$, and
\begin{equation}\label{relation}
\lim_{h\downarrow 0}\inf_{\varphi(h)\ls s\ls T-h}\frac{h^\g}{\s_X(s,s+h)}=+\infty.
\end{equation}
It suffices to show that $\widehat\g^*\gs\g$. Under the condition $\g^*>\g$ and statements (\ref{orey3})-(\ref{relation}) there exists $h_0$ such that for $h\ls h_0<1$
\[
\inf_{\varphi(h)\ls s\ls T-h}\frac{h^\g}{\s_X(s,s+h)}>1, \qquad \sup_{0\ls s\ls T-h}\s_X(s,s+h)<1.
\]
Moreover,
\[
h^\g>\sup_{\varphi(h)\ls s\ls T-h}\s_X(s,s+h)\gs\inf_{\varphi(h)\ls s\ls T-h}\s_X(s,s+h)
\]
and
\[
\g\ln h>\ln \inf_{\varphi(h)\ls s\ls T-h}\s_X(s,s+h),\qquad \inf_{\varphi(h)\ls s\ls T-h}\frac{\ln \s_X(s,s+h)}{\ln h}>\g.
\]
So $\widehat\g^*\gs\g$.
{We show that $\g^*\gs\widehat\g^*$.} Assume that $\widehat\g^*>\a>\g$. It suffices to show that $\g^*>\g$. Under the condition $\widehat\g^*>\a$ and statement (\ref{orey3}) there exists $h_0$ such that for $h\ls h_0<1$
\[
\inf_{\varphi(h)\ls s\ls T-h}\frac{\ln\s_X(s,s+h)}{\ln h}>\a, \qquad \sup_{0\ls s\ls T-h}\s_X(s,s+h)<1.
\]
Moreover,
\[
\sup_{\varphi(h)\ls s\ls T-h}\frac{\ln\s_X(s,s+h)}{\ln h}>\a
\]
and
\[
\ln \Big(\sup_{\varphi(h)\ls s\ls T-h}\s_X(s,s+h)\Big)<\ln h^\a.
\]
Thus
\[
\sup_{\varphi(h)\ls s\ls T-h}\s_X(s,s+h)< h^\a\quad\mbox{and}\quad \inf_{\varphi(h)\ls s\ls T-h}\frac{h^\a}{\s_X(s,s+h)}>1.
\]
Then
\[
\inf_{\varphi(h)\ls s\ls T-h}\frac{h^\g}{\s_X(s,s+h)}>h^{\g-\a}\to \infty
\]
and $\g^*>\g$.
\subsection{Proof of Theorem \ref{thm2}}
Note that $V_{\pi_n}^{(2)}\big(X,2\big)$ is the square of the Euclidean norm of an $(N_n-1)$-dimensional Gaussian vector whose components are
\[
\sqrt{\frac{2\D^n_k t}{\mu^n_k}}\,\Delta^{(2)n}_{ir,k} X, \qquad 1\ls k\ls N_n-1.
\]
Denote by $(\lambda_{1,n}, \ldots, \lambda_{N_n-1,n})$ the eigenvalues of the corresponding covariance matrix, and let $\lambda^{*}_{n}$ stand for the maximal eigenvalue. There exists an $(N_n-1)$-dimensional Gaussian vector $Y_n$, with independent $\mathcal{N}(0,1)$ components, such that
\[
V_{\pi_n}^{(2)}\big(X,2\big)=\sum_{j=1}^{N_n-1}\lambda_{j,n}\big(Y^{(j)}_n\big)^2, \qquad \E V_{\pi_n}^{(2)}\big(X,2\big)=\sum_{j=1}^{N_n-1}\l_{j,n}.
\]
Since $\E V_{\pi_n}^{(2)}\big(X,2\big)$ is a convergent sequence (Proposition \ref{prop1}), the sums $\sum_{j=1}^{N_n-1}\l_{j,n}$ are bounded. Moreover, one has
\[
\sum_{j=1}^{N_n-1}\l^2_{j,n}\ls \l_n^*\sum_{j=1}^{N_n-1}\l_{j,n}.
\]
From a standard result of linear algebra and (\ref{nelyg8}) we may further conclude that
\begin{align*}
\l_n^*\ls& 2\max_{1\ls k\ls N_n-1}\sum_{j=1}^{N_n-1}\sqrt{\frac{\D_{n,j} t\D_{n,k}
t}{\mu^n_j\mu^n_k}}\,\big\vert \E\big(\D^{(2)n}_{ir,j} X\D^{(2)n}_{ir,k} X\big) \big\vert\\
\ls&2\,\frac{1}{p_n^{2\g+1}}\,\max_{1\ls k\ls N_n-1}\sum_{j=1}^{N_n-1} \vert
d_{jk}^{(2)n}\vert\ls C p_n.
\end{align*}
The Hanson--Wright inequality (see \cite{HW}, \cite{begyn1}) yields that for $\eps>0$
\begin{equation}\label{nelyg5}
\pr\big(\big\vert V_{\pi_n}^{(2)}\big(X,2\big)-\E V_{\pi_n}^{(2)}\big(X,2\big)\big\vert\gs \eps\big)
\ls 2\exp\bigg({-}\min\bigg[\frac{C_1\eps}{\l_n^*}\,,\frac{C_2\eps^2}{\sum_{j=1}^{N_n-1}\l^2_{j,n}}\bigg]\bigg),
\end{equation}
where $C_1$, $C_2$ are nonnegative constants. Therefore, the inequality (\ref{nelyg5}) becomes
\[
\pr\big(\big\vert V_{\pi_n}^{(2)}\big(X,2\big)-\E V_{\pi_n}^{(2)}\big(X,2\big)\big\vert\gs \eps\big)
\ls 2\exp\bigg({-}\frac{K\eps^2}{\l_n^*}\bigg),\quad \forall\ 0<\eps\ls 1,
\]
where $K$ is a positive constant. Set
\[
\eps_n^2=\frac{2C}{K}\,p_n\ln n.
\]
Then
\[
\pr\big(\big\vert V_{\pi_n}^{(2)}\big(X,2\big)-\E V_{\pi_n}^{(2)}\big(X,2\big)\big\vert\gs \eps_n\big)
\ls 2\exp\bigg({-}2\ln n\bigg)=\frac{2}{n^2}\,.
\]
Since $\lim_{n\to\infty}\eps_n=0$ and
\[
\sum_{n=1}^\infty \pr\big(\vert V_{\pi_n}^{(2)}(X,2)-\E V_{\pi_n}^{(2)}(X,2)\vert\gs\eps_n\big)<\infty,
\]
the Borel--Cantelli lemma gives the statement of the theorem.
Let $\mathcal{A}$ denote the class of analytic functions of the form $f(z)=z+\sum_{k=2}^{\infty}a_kz^k$ in the open unit disk ${\Delta}:=\{z: |z|<1\}$.
Let $f(z)=w$ and $\Gamma_w$ be the image of $\Gamma_z$ (the circle $C_r: z=re^{i\theta}$) under the function $f$ in $\mathcal{A}$. The curve $\Gamma_w$ is said to be starlike with respect to $w_0=0$ if $\arg(w-w_0)$ is a non-decreasing function of $\theta$, that is,
\begin{equation*}\label{arg-def}
\frac{d}{d\theta} \arg(w-w_0)\geq0, \quad \theta\in [0,2\pi],
\end{equation*}
which is equivalent to
\begin{equation}\label{charcter}
\frac{d}{d\theta} \arg(w-w_0)=\Re\left(\frac{zf'(z)}{f(z)}\right)\geq0.
\end{equation}
If the inequality \eqref{charcter} holds for each circle $|z|=r<1$, then it characterizes a special class $\mathcal{S}^{*}$, the class of starlike functions in ${\Delta}$. It is obvious from \eqref{arg-def} that for each $0<r<1$, the curve $\Gamma_w$ is not allowed to have a loop, which ensures the univalence of the function. But if for some $0\neq z\in \Delta$, $\Re(zf'(z)/f(z))<0$, then $f$ is not starlike with respect to $0$; equivalently, the image curve $\Gamma_w: f(|z|=r)$ is definitely not starlike, but it may or may not be univalent. From \eqref{charcter}, we also see the importance of the Carath\'{e}odory functions by writing \eqref{charcter} in terms of subordination as:
\begin{equation}\label{star-subord}
\frac{zf'(z)}{f(z)}\prec \frac{1+z}{1-z} \quad (z\in\Delta),
\end{equation}
where the symbol $\prec$ stands for the usual subordination.
In 1992, Ma and Minda~\cite{minda94} generalized \eqref{star-subord} by unifying all the subclasses of starlike functions as follows:
\begin{equation}\label{mindaclass}
\mathcal{S}^*(\Psi):= \biggl\{f\in \mathcal{A} : \frac{zf'(z)}{f(z)} \prec \Psi(z) \biggl\},
\end{equation}
where $\Psi$ has positive real part, $\Psi({\Delta})$ is symmetric about the real axis, $\Psi'(0)>0$ and $\Psi(0)=1$. For some special classes, see \cite{Goel-2020,Kanas-2000,PSharma-2019} and the references therein.
In view of the above, let us now consider an analytic univalent function $\psi$ in ${\Delta}$ such that $\psi(0)=0$ and $\psi({\Delta})$ is starlike with respect to $0$, and introduce the following class of analytic functions:
\begin{equation}\label{gen-ma-min}
\mathcal{F}(\psi):= \left\{f\in \mathcal{A}: \frac{zf'(z)}{f(z)}-1 \prec \psi(z),\; \psi(0)=0 \right\}.
\end{equation}
Note that when $1+\psi(z)\not \prec (1+z)/(1-z)$, the functions in the class $\mathcal{F}(\psi)$ may not be univalent in ${\Delta}$, which also implies $\mathcal{F}(\psi)\not\subseteq \mathcal{S}^{*}$. Thus, in the case when the function $1+\psi=:\Psi$ has positive real part and $\Psi({\Delta})$ is symmetric about the real axis with $\Psi'(0)>0$, the class $\mathcal{F}(\psi)$ reduces to $\mathcal{S}^{*}(\Psi)$.
The functions in the class defined in \eqref{mindaclass} are univalent, which helps considerably in studying their geometric properties, for example, growth and distortion theorems. However, it may not be as easy to study the analogous results in the class $\mathcal{F}(\psi)$.
In this direction, recently, Kargar et al.~\cite{kargar-2019} considered the following class, the first of its kind:
\begin{equation}\label{boothlem}
\mathcal{BS}(\alpha):= \biggl\{f\in \mathcal{A} : \frac{zf'(z)}{f(z)}-1 \prec \frac{z}{1-\alpha z^2},\; \alpha\in [0,1) \biggl\},
\end{equation}
where $z/(1-\alpha z^2)=:\psi(z)$ (the Booth lemniscate function~\cite{piejko-2013,piejko-2015}) is an analytic univalent function, symmetric with respect to the real and imaginary axes. Note that the function $1+z/(1-\alpha z^2)$ assumes negative values for $\alpha\neq0$; therefore functions in this class may not be univalent. For $f$ belonging to $\mathcal{BS}(\alpha)$, using the vertical strip domain $\{w\in \mathbb{C}: \mu <\Re{w}<\nu, \;\text{where}\; \mu<1<\nu \},$ Kargar et al.~\cite{kargar-2019} proved that $|f(z)/z|$ is bounded and obtained the coefficient estimates when $0\leq \alpha \leq 3-2\sqrt{2}$, along with the Fekete-Szeg\"{o} inequality for the associated $k$-th root transformation. In 2018, Najmadi et al.~\cite{NNEbadian-2018} obtained the bounds for the quantities $\Re{f(z)}$, $|f(z)|$ and $|f'(z)|$ when $0\leq \alpha\leq 3-2\sqrt{2}$. Recently, Kargar et al.~\cite{kar-Eba-2019} obtained the best dominant of the subordination $f(z)/z \prec F(z)$ for the range $0<\alpha\leq 3-2\sqrt{2}$ using the convolution technique, where $F(z)=\left(\frac{1+z\sqrt{\alpha}}{1-z\sqrt{\alpha}}\right)^{\frac{1}{2\sqrt{\alpha}}}.$ Cho et al.~\cite{cho-kumar-2018} dealt with first-order differential subordination implications and also solved various sharp radius problems pertaining to the class $\mathcal{BS}(\alpha)$.
In 2019, Masih et al. \cite{Masih-2019} considered the following class with $\beta\in [0,1/2]$:
\begin{equation}\label{cissoidclass}
\mathcal{S}_{cs}(\beta):= \biggl\{f\in \mathcal{A} : \left(\frac{zf'(z)}{f(z)}-1\right) \prec \frac{z}{(1-z)(1+\beta z)},\; \beta\in [0,1) \biggl\}.
\end{equation}
They proved the growth theorem and also obtained the sharp estimates for the logarithmic coefficients but only for the range $\beta\in [0,1/2]$. Note that for $\beta\in[0,1/2]$, $\mathcal{S}_{cs}(\beta)$ is a Ma-Minda subclass, but for the other range, functions in this class may not be univalent.
In this paper, we establish the sharp growth theorem for the class $\mathcal{F}(\psi)$ under certain geometric conditions on $\psi$ and obtain a covering theorem. Some examples, including newly defined classes, are also discussed. As an application, we obtain growth theorems for the complete ranges of $\alpha$ and $\beta$ for the functions in the classes $\mathcal{BS}(\alpha)$ and $\mathcal{S}_{cs}(\beta)$, respectively, which improve the earlier known bounds. Finally, the sharp Bohr radii for the classes $S(\mathcal{BS}(\alpha))$ and $\mathcal{BS}(\alpha)$ are obtained. For some classes, we study the geometric behavior of the function $f(z)/z$, which arises frequently when working with the class $\mathcal{S}^{*}(\Psi)$ and plays an important role, for example, in obtaining the bounds for $\Re(f(z)/z)$ and $\arg(f(z)/z)$. The geometric properties and coefficient estimates for the class $\mathcal{F}(\psi)$ are still open.
\section{Main Results}
Let $\mathcal{F}(\psi)$ be the class as defined in \eqref{gen-ma-min}. Now we begin with the following:
\begin{theorem}[Growth Theorem]\label{gen-thm1}
Suppose that $\max_{|z|=r}\Re\psi(z)=\psi(r)$ and $\min_{|z|=r}\Re\psi(z)=\psi(-r)$. Then $f\in \mathcal{F}(\psi)$ satisfies the sharp inequalities:
\begin{equation}\label{maingththm-eq}
r \exp\left(\int_{0}^{r}\frac{\psi(-t)}{t}dt\right) \leq |f(z)| \leq
r \exp\left(\int_{0}^{r}\frac{\psi(t)}{t}dt\right), \quad (|z|=r).
\end{equation}
\end{theorem}
\begin{proof}
Let $f\in \mathcal{F}(\psi)$. For $z=re^{i\theta}$, we have
\begin{equation}\label{realbound}
\psi(-r) \leq \Re{\psi(re^{i\theta})} \leq \psi(r).
\end{equation}
Let $\Phi(z)=\psi(\omega(z))$, where $\omega$ is a Schwarz function. Then from \eqref{mindaclass}, we have
\begin{equation*}
\log \frac{f(z)}{z}=\int_{0}^{z} \frac{\Phi(\zeta)}{\zeta}d\zeta.
\end{equation*}
Now by taking $\zeta=te^{i\beta}$ so that $d\zeta=e^{i\beta}dt$, where $\beta$ is fixed but arbitrary and $z=re^{i\beta}$, we have
\begin{equation}\label{log-func}
\log \frac{f(z)}{z}=\int_{0}^{r}\frac{\Phi(te^{i\beta})}{t}dt.
\end{equation}
Since $\Phi=\psi\circ\omega$ with $|\omega(z)|\leq |z|$, the maximum principle shows that $\Phi$ also satisfies the inequality \eqref{realbound}. Therefore, without loss of generality, we may replace $\Phi$ by $\psi$ and $\beta$ by $\theta$ in \eqref{log-func}. Then, by equating real parts on either side of \eqref{log-func}, we have
\begin{equation}\label{main-eq}
\log\left|\frac{f(z)}{z}\right| =\int_{0}^{r} \frac{\Re{\Phi(te^{i\theta})}}{t}dt
\end{equation}
and now using the inequalities \eqref{realbound} in \eqref{main-eq}, we obtain
\begin{equation*}
\int_{0}^{r} \frac{\psi(-t)}{t}dt \leq \log \left|\frac{f(z)}{z}\right| \leq \int_{0}^{r} \frac{\psi(t)}{t}dt,
\end{equation*}
and \eqref{maingththm-eq} follows. The result is sharp for the function
\begin{equation}\label{f0}
f_0(z)=z \exp\int_{0}^{z}\frac{\psi(t)}{t}dt.
\end{equation}
This completes the proof.
\end{proof}
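As an illustrative numerical sanity check of Theorem~\ref{gen-thm1} (not part of the proof), one may take $\psi(z)=\eta z$ together with the class member $f(z)=z\exp(\eta z^2/2)$, which corresponds to the Schwarz function $\omega(z)=z^2$; the values of $\eta$ and $r$ below are arbitrary choices.

```python
import cmath
import math

# Illustrative sanity check of the growth theorem (not part of the paper):
# psi(z) = eta*z, and f(z) = z*exp(eta*z^2/2) lies in F(psi) via the Schwarz
# function omega(z) = z^2, since int_0^z psi(omega(t))/t dt = eta*z^2/2.
eta, r = 0.7, 0.9                      # arbitrary sample values
lower = r * math.exp(-eta * r)         # r*exp(int_0^r psi(-t)/t dt)
upper = r * math.exp(eta * r)          # r*exp(int_0^r psi(t)/t dt)

mods = []
for k in range(720):                   # sample |f| on the circle |z| = r
    z = r * cmath.exp(1j * 2 * math.pi * k / 720)
    mods.append(abs(z * cmath.exp(eta * z**2 / 2)))
```

Both bounds hold with room to spare here, since $\omega(z)=z^2$ is not a rotation.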
\begin{remark}
In the above theorem, we chose $\max_{|z|=r}\Re\psi(z)=\psi(r)$ and $\min_{|z|=r}\Re\psi(z)=\psi(-r)$ for computational convenience. However, these conditions may change according to the choice of $\psi$, in which case they may be replaced appropriately.
\end{remark}
\begin{remark}
If $1+\psi$ is a Carath\'{e}odory univalent function, then Theorem~\ref{gen-thm1} reduces to the result~\cite[Corollary~1, p.~161]{minda94}. Moreover, letting $r$ tend to $1$ in Theorem~\ref{gen-thm1}, we obtain the covering theorem (Koebe-radius) for the class $\mathcal{F}(\psi)$.
\end{remark}
\begin{corollary}[Covering Theorem]
If $f\in \mathcal{F}(\psi)$ and $f_0$ as defined in \eqref{f0}, then either $f$ is a rotation of $f_0$ or
$$ \{w\in{\Delta} : |w|\leq-{f}_0(-1) \} \subset f({\Delta}),$$
where $-{f}_0(-1)=\lim_{r\rightarrow 1}(-f_0(-r)).$
\end{corollary}
Let $L(f,r)$ denote the length of the boundary curve $f(|z|=r)$. Note that for $z=re^{i\theta}$, we have $L(f,r):=\int_{0}^{2\pi}|zf'(z)|d\theta$. Now we obtain the following result:
\begin{corollary}
Assume that $\max_{|z|=r}|\psi(z)|=\psi(r)$ and also $\psi$ satisfies the conditions of Theorem~\ref{gen-thm1}. Let $M(r)=\exp\left(\int_{0}^{r}\frac{\psi(t)}{t}dt\right)$. If $f\in \mathcal{F}(\psi)$, then for $|z|=r$, we have
\begin{equation*}
\Re\frac{f(z)}{z} \leq M(r), \;\;
|f'(z)| \leq (1+\psi(r)) M(r)\;\;
\text{and}\;\;
L(f,r) \leq 2\pi r (1+\psi(r)) M(r) .
\end{equation*}
\end{corollary}
Let $$\psi(z)= \left\{
\begin{array}{ll}
\beta z/(1+\alpha z), & \hbox{$\beta>0$, $0< \alpha <1$ ;} \\
\eta z, & \hbox{$\eta>0$.}
\end{array}
\right.$$
Then the above two choices of $\psi$ are clearly convex univalent, and $\psi({\Delta})$ is symmetric about the real axis since $\overline{\psi(z)}=\psi(\bar{z})$. It is further evident that $ 1+\psi(z)\not\prec (1+z)/(1-z)$, except for the second choice of $\psi$ when $0<\eta\leq1$.
We now obtain the following sharp result from Theorem~\ref{gen-thm1}:
\begin{example}
Let $f\in \mathcal{F}({\beta z}/{(1+\alpha z)})$, where $\beta>0$ and $0< \alpha <1$ and $|z|=r$. Then
$$r(1-\alpha r)^{\frac{\beta}{\alpha}} \leq |f(z)|\leq r(1+\alpha r)^{\frac{\beta}{\alpha}},$$ which implies:
$$\left\{w : |w|\leq (1-\alpha)^{\frac{\beta}{\alpha}} \right\} \subset f({\Delta}),\;\;
|f'(z)|\leq \left(1+\frac{\beta r}{1+\alpha r}\right)(1+\alpha r)^{\frac{\beta}{\alpha}}\;\; \text{and}\;\;
\Re\frac{f(z)}{z} \leq (1+\alpha r)^{\frac{\beta}{\alpha}}.$$
\end{example}
\begin{example}
Let $f\in \mathcal{F}(\eta z)$, where $\eta>0$ and $|z|=r$. Then
$$r \exp(-\eta r ) \leq |f(z)|\leq r \exp(\eta r),$$ which implies:
$$\left\{w : |w|\leq \exp({-\eta}) \right \} \subset f({\Delta}),\;\;
|f'(z)|\leq (1+\eta r) \exp(\eta r)\;\; \text{and}\;\;
\Re\frac{f(z)}{z}\leq \exp(\eta r).$$
\end{example}
From the above examples, it is clear that $f\in \mathcal{F}(\psi)$ if and only if
\begin{equation*}
\frac{zf'(z)}{f(z)}\in
\left\{
\begin{array}
{lr}
\Omega_1, & \text{when}\; \psi(z)={\beta z}/{(1+\alpha z)}; \\
\Omega_2, &\text{when}\; \psi(z)=\eta z,
\end{array}
\right.
\end{equation*}
where $\Omega_1=\{w\in \mathbb{C}: |w-1|< |\beta-\alpha(w-1)|\}$ and $\Omega_2=\{w\in \mathbb{C}: |w-1|< \eta\}$, respectively for $z\in{\Delta}$.
\section{Some Applications and Further results}\label{sec-1}
\subsection{On Booth-Lemniscate}
Let $\mathcal{BS}(\alpha)$ be the class as defined in \eqref{boothlem}.
\begin{theorem}\label{grth}
Let $0< \alpha<1$ and $f\in \mathcal{BS}(\alpha)$. Then for $|z|=r$,
\begin{equation}\label{grth-thm}
-\hat{f}(-r)\leq|f(z)|\leq \hat{f}(r),
\end{equation}
where
\begin{equation}\label{hat}
\hat{f}(z)=z\left(\frac{1+z\sqrt{\alpha}}{1-z\sqrt{\alpha}}\right)^{\frac{1}{2\sqrt{\alpha}}}.
\end{equation}
The result is sharp.
\end{theorem}
\begin{proof}
Let $\psi(z):= {z}/{(1-\alpha z^2)}$ and $f\in \mathcal{BS}(\alpha):=\mathcal{F}(\psi)$. For $z=re^{i\theta}$, we have
\begin{equation*}
- \frac{r}{1-\alpha r^2} \leq \Re{\psi(re^{i\theta})} \leq \frac{r}{1-\alpha r^2}
\end{equation*}
and
\begin{equation*}
-\int_{0}^{r} \frac{1}{1-\alpha t^2}dt \leq \log \left|\frac{f(z)}{z}\right| \leq \int_{0}^{r} \frac{1}{1-\alpha t^2}dt,
\end{equation*}
where
$$\int_{0}^{r} \frac{1}{1-\alpha t^2}dt=\frac{1}{2\sqrt{\alpha}} \log{\frac{1+\sqrt{\alpha}r}{1-\sqrt{\alpha}r}}.$$
Hence, the result follows from Theorem~\ref{gen-thm1}.
\end{proof}
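The antiderivative used in the proof above can be checked numerically; the following sketch (illustrative only, with arbitrary sample values of $\alpha$ and $r$) compares a composite Simpson approximation of the integral with the stated closed form.

```python
import math

# Illustrative check of the antiderivative used in the proof:
# int_0^r dt/(1 - alpha*t^2) = (1/(2*sqrt(alpha))) * log((1+sqrt(alpha)*r)/(1-sqrt(alpha)*r)).
alpha, r = 0.15, 0.8                   # arbitrary sample values
n = 2000                               # composite Simpson's rule, n even
h = r / n
s = 1.0 + 1.0 / (1 - alpha * r * r)    # endpoint terms (integrand at 0 is 1)
for k in range(1, n):
    t = k * h
    s += (4 if k % 2 else 2) / (1 - alpha * t * t)
numeric = s * h / 3
sa = math.sqrt(alpha)
closed_form = math.log((1 + sa * r) / (1 - sa * r)) / (2 * sa)
```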
\begin{remark}
Theorem~\ref{grth} improves the upper bound of $\Re{f(z)}$ and bounds of $|f(z)|$, obtained in \cite[Theorem~2, p.~116]{NNEbadian-2018} and \cite[Theorem~3, p.~116]{NNEbadian-2018} respectively.
\end{remark}
We now extend \cite[Theorem~2.6, p.~1238]{kar-Eba-2019} for the complete range of $\alpha$ using Theorem~\ref{grth}:
\begin{corollary}
Let $f\in\mathcal{BS}(\alpha)$, $\alpha\in(0,1)$ and $|z|=r$, then
$$
\Re\frac{f(z)}{z} \leq \left(\frac{1+r\sqrt{\alpha}}{1-r\sqrt{\alpha}}\right)^{\frac{1}{2\sqrt{\alpha}}}
\;\text{and} \quad
|f'(z)|\leq \left(1+\frac{r}{1-\alpha r^2}\right) \left(\frac{1+r\sqrt{\alpha}}{1-r\sqrt{\alpha}}\right)^{\frac{1}{2\sqrt{\alpha}}}.
$$
The result is sharp for the function $\hat{f}$ given in \eqref{hat}.
\end{corollary}
\begin{corollary}
Let $\alpha\in (0,1)$ be fixed. Then $f\in \mathcal{BS}(\alpha)$ satisfies the inequality
\begin{equation*}
L(f,r) \leq 2\pi r\left(1+\frac{r}{1-\alpha r^2}\right) \left(\frac{1+r\sqrt{\alpha}}{1-r\sqrt{\alpha}}\right)^{\frac{1}{2\sqrt{\alpha}}},\quad (|z|=r).
\end{equation*}
\end{corollary}
\begin{corollary}[Koebe-radius]\label{r*}
Let $0< \alpha<1$ and $\hat{f}$ as given in \eqref{hat}. If $f\in \mathcal{BS}(\alpha)$, then either $f$ is a rotation of $\hat{f}$ or
$$ \{w\in \mathbb{C} : |w|\leq-\hat{f}(-1) \} \subset f({\Delta}).$$
\end{corollary}
\begin{proof}
The proof follows by letting $r$ tend to $1$ in the inequality $-\hat{f}(-r)\leq|f(z)|$, given in \eqref{grth-thm}.
\end{proof}
\begin{theorem}
Let $\alpha\in (0,2-\sqrt{3}]$ be fixed. Then $f\in \mathcal{BS}(\alpha)$ satisfies the sharp inequality
\begin{equation*}
\left|\arg\frac{f(z)}{z}\right| \leq \max_{|z|=r}\; \arg\left(\frac{1+z\sqrt{\alpha}}{1-z\sqrt{\alpha}}\right)^{\frac{1}{2\sqrt{\alpha}}}.
\end{equation*}
\end{theorem}
\begin{proof}
From \cite[Theorem~2.5, p.~1238]{kar-Eba-2019}, we have $f(z)/z \prec \hat{f}(z)/z$ for $0<\alpha\leq 2-\sqrt{3}$, where $\hat{f}$ is defined in \eqref{hat}. Since the function $\hat{f}(z)/z$ is convex univalent and symmetric with respect to the real axis in ${\Delta}$, its minimum real part is
$$\left(\frac{1-\sqrt{\alpha}}{1+\sqrt{\alpha}}\right)^{\frac{1}{2\sqrt{\alpha}}}>0.$$
Thus $\hat{f}(z)/z$ is a Carath\'{e}odory function and the result follows.
\end{proof}
For our next result, we need the following definition and a related class:
\begin{definition}
Let $f(z)=\sum_{k=0}^{\infty}a_kz^k$ and $g(z)=\sum_{k=0}^{\infty}b_kz^k$ be analytic in ${\Delta}$ and let $f({\Delta})=\Omega$. Consider the class of analytic functions $S(f):=\{g : g\prec f\}$ or, equivalently, $S(\Omega):=\{g : g(z)\in \Omega\}$. Then the class $S(f)$ is said to satisfy the Bohr-phenomenon if there exists a constant $r_0\in (0,1]$ such that the inequality
$\sum_{k=1}^{\infty}|b_k|r^k \leq d(f(0),\partial\Omega)$
holds for all $|z|=r\leq r_0$, where $d(f(0),\partial\Omega)$ denotes the Euclidean distance between $f(0)$ and the boundary of $\Omega=f({\Delta})$.
The largest such $r_0$ for which the inequality holds, is called the Bohr-radius.
\end{definition}
See the articles \cite{jain2019,bhowmik2018} and the references therein for more. Let us now introduce the following class:
\begin{equation*}\label{bohrclass}
S(\mathcal{BS}(\alpha)):= \biggl\{g : g\prec f, \; g(z)=\sum_{k=1}^{\infty}b_k z^k \;\text{and}\; f\in \mathcal{BS}(\alpha) \biggl \}.
\end{equation*}
\begin{theorem}[Booth-Bohr-radius]
The class $S(\mathcal{BS}(\alpha))$ satisfies Bohr-phenomenon in $|z|\leq r(\alpha)$, where $r(\alpha)$ is the unique positive root of the equation
\begin{equation}\label{boothbohr-eq}
r\left(\frac{1+r\sqrt{\alpha}}{1-r\sqrt{\alpha}}\right)^{\frac{1}{2\sqrt{\alpha}}}- \left(\frac{1-\sqrt{\alpha}}{1+\sqrt{\alpha}}\right)^{\frac{1}{2\sqrt{\alpha}}}=0,
\end{equation}
whenever $0<\alpha \leq 3-2\sqrt{2}$. The result is sharp for the function $\hat{f}$ given in \eqref{hat}.
\end{theorem}
\begin{proof}
Since $g\in S(\mathcal{BS}(\alpha))$, we have $g\prec f$ for a fixed $f\in \mathcal{BS}(\alpha)$. From Corollary~\ref{r*}, we obtain the Koebe-radius $r_{*}=-\hat{f}(-1)$ such that $ r_{*}\leq d(0,\partial\Omega)=|f(z)|$ for $|z|=1$. Also using \cite[Theorem~2.5, p.~1238]{kar-Eba-2019}, we have
\begin{equation}\label{f-f0}
\frac{f(z)}{z}\prec \frac{\hat{f}(z)}{z}.
\end{equation}
Recall the result \cite[Lemma~1, p.1090]{bhowmik2018}, which reads as:
let $f$ and $g$ be analytic in ${\Delta}$ with $g\prec f,$ where
$f(z)=\sum_{n=0}^{\infty}a_n z^n$ and $ g(z)= \sum_{k=0}^{\infty}b_k z^k.$
Then
$\sum_{k=0}^{\infty}|b_k|r^k \leq \sum_{n=0}^{\infty}|a_n|r^n$ for $ |z|=r\leq1/3.$
Now using the result for $g\prec f$ and \eqref{f-f0}, we have
\begin{equation*}
\sum_{k=1}^{\infty}|b_k|r^k \leq r+\sum_{n=2}^{\infty}|a_n|r^n \leq \hat{f}(r)\quad\text{for}\; |z|=r\leq1/3.
\end{equation*}
Finally, to establish the inequality
$\sum_{k=1}^{\infty}|b_k|r^k \leq d(f(0),\partial\Omega),$
it is enough to show $\hat{f}(r) \leq r_{*}.$
But this holds whenever $r\leq r(\alpha)$, where $r(\alpha)$ is the least positive root of the equation $\hat{f}(r)=r_{*}.$ Now let $T(r):=\hat{f}(r)-r_{*}$, then
\begin{equation*}
T'(r)=\left(\frac{1+r\sqrt{\alpha}}{1-r\sqrt{\alpha}}\right)^{\frac{1}{2\sqrt{\alpha}}}+ r\left(\frac{1+r\sqrt{\alpha}}{1-r\sqrt{\alpha}}\right)^{\frac{1}{2\sqrt{\alpha}}-1}\frac{1}{(1-r\sqrt{\alpha})^2}.
\end{equation*}
Since $(1+r\sqrt{\alpha})/(1-r\sqrt{\alpha})>0$, we have $T'(r)>0$, and so $T$ is an increasing function of $r$. Also $T(0)<0$ and $T(1)>0$. Thus the existence and uniqueness of the root $r(\alpha)$ are ensured by the intermediate value theorem for continuous functions. By a computation, it can easily be seen that $r(\alpha)< 1/3$, and
hence the result.
\end{proof}
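The root $r(\alpha)$ of \eqref{boothbohr-eq} is easily located numerically. The sketch below (illustrative, not part of the proof) uses bisection at the sample value $\alpha=3-2\sqrt{2}$ and confirms that the root lies below $1/3$, as used above.

```python
import math

# Illustrative bisection solution of the Bohr-radius equation at the sample
# value alpha = 3 - 2*sqrt(2); T(r) = fhat(r) - r_star is increasing with
# T(0) < 0 < T(1), so the root is unique.
alpha = 3 - 2 * math.sqrt(2)
sa = math.sqrt(alpha)
ex = 1 / (2 * sa)
fhat = lambda r: r * ((1 + sa * r) / (1 - sa * r)) ** ex   # the function in Eq. (hat)
r_star = ((1 - sa) / (1 + sa)) ** ex                       # Koebe radius -fhat(-1)
T = lambda r: fhat(r) - r_star

a, b = 0.0, 1.0 - 1e-12
for _ in range(200):
    m = (a + b) / 2
    if T(m) < 0:
        a = m
    else:
        b = m
root = (a + b) / 2
```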
\begin{corollary}
Let $0<\alpha \leq 3-2\sqrt{2}$. The Bohr-radius for the class $\mathcal{BS}(\alpha)$ is $r(\alpha)$, where $r(\alpha)$ is the unique positive root of the Eq.~\eqref{boothbohr-eq}.
\end{corollary}
\subsection{On Cissoid of Diocles}
Let us consider
\begin{equation*}
S_{\beta}(z)= \frac{z}{(1-z)(1+\beta z)}=\frac{1}{1+\beta}\left(\frac{1}{1-z}-\frac{1}{1+\beta z}\right)= \sum_{n=1}^{\infty}\frac{1-(-\beta)^n}{1+\beta}z^n,
\end{equation*}
where $\beta\in [0,1)$. Clearly, it is analytic, symmetric about the real-axis and
maps the unit disk ${\Delta}$ onto the domain bounded by {\it Cissoid of Diocles}:
\begin{equation*}
CS(\beta):=\left\{ w=u+iv\in \mathbb{C}: \left(u-\frac{1}{2(\beta-1)}\right)(u^2+v^2)+ \frac{2\beta}{(1+\beta)^2(\beta-1)}v^2=0 \right\}.
\end{equation*}
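The series expansion of $S_{\beta}$ above can be verified numerically; the following snippet (illustrative only, with arbitrary sample values) compares the closed form of $S_{\beta}(z)$ with a truncation of the series.

```python
# Illustrative check of the series expansion of S_beta at an arbitrary point:
# z/((1-z)(1+beta*z)) = sum_{n>=1} (1-(-beta)^n)/(1+beta) * z^n.
beta = 0.6                                   # arbitrary sample value
z = 0.35 + 0.2j                              # arbitrary point in the unit disk
direct = z / ((1 - z) * (1 + beta * z))
series = sum((1 - (-beta) ** n) / (1 + beta) * z ** n for n in range(1, 200))
err = abs(direct - series)
```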
Let us now consider the class $\mathcal{S}_{cs}(\beta)$ as defined in \eqref{cissoidclass}. Masih et al. \cite{Masih-2019} considered this class with $\beta\in [0,1/2]$, since $\Re(1+{z}/{((1-z)(1+\beta z))})\geq (2\beta-1)/(2(\beta-1))\geq0$. Clearly, $\mathcal{S}_{cs}(\beta) = \mathcal{F}(S_{\beta}(z))$ for $\beta\in [0,1)$, and we have the following result:
\begin{theorem}\label{cissoidgth}
Let $f\in \mathcal{S}_{cs}(\beta)$ and $\beta\in [0,1)$. Then
\begin{equation*}
-\tilde{f}(-r) \leq |f(z)| \leq \tilde{f}(r),
\end{equation*}
where
\begin{equation}\label{tilde-f}
\tilde{f}(z)=z\left(\frac{1+\beta z}{1-z}\right)^{\frac{1}{1+\beta}}.
\end{equation}
\end{theorem}
\begin{proof}
Let $\psi(z):={z}/{((1-z)(1+\beta z))}$ and $f\in \mathcal{S}_{cs}(\beta):=\mathcal{F}(\psi)$. Following the proof of \cite[Theorem~3.1, p.~5]{Masih-2019}, it is easy to see that for $z=re^{i\theta}$, where $\theta\in[0,2\pi]$, we have
\begin{equation*}
\min_{|z|=r}\Re\psi(z)= \frac{-r+(\beta-1)r^2+\beta r^3}{(1+r)^2(1-\beta r)^2}=\psi(-r)
\end{equation*}
and
\begin{align*}
\max_{|z|=r}\Re\psi(z) &= \frac{-r^2+\beta r^2 -\beta r^3 \cos{\theta}+r \cos{\theta}}{(1+r^2-2r \cos{\theta})(1+{\beta}^2 r^2+2\beta r \cos{\theta})}\bigg|_{\theta=0}=\psi(r)\quad (r<1),
\end{align*}
whereas for $r=1$ the limit $\theta\rightarrow 0$ of the same expression gives $(\beta-1)/(2(1+\beta)^2)=\max_{|z|=1}\Re\psi(z)$.
Thus, we have $\psi(-r)\leq\Re\psi(z)\leq \psi(r)$ for $r<1$ and $1/(2(\beta-1))=\psi(-1)\leq \Re\psi(z)\leq (\beta-1)/(2(\beta+1)^2)$ for $r=1$.
Also note that
\begin{equation*}
\tilde{f}(z)=z\exp\int_{0}^{z}\frac{\psi(t)}{t}dt=z\left(\frac{1+\beta z}{1-z}\right)^{\frac{1}{1+\beta}}.
\end{equation*}
Now the result follows from Theorem~\ref{gen-thm1}.
\end{proof}
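The integral evaluation behind $\tilde{f}$ can be checked numerically, as in the following illustrative sketch (the sample values of $\beta$ and $r$ are arbitrary):

```python
import math

# Illustrative check that int_0^r psi(t)/t dt with psi(t)/t = 1/((1-t)(1+beta*t))
# equals (1/(1+beta)) * log((1+beta*r)/(1-r)), i.e. log(ftilde(r)/r).
beta, r = 0.8, 0.7                     # arbitrary sample values
integrand = lambda t: 1.0 / ((1 - t) * (1 + beta * t))
n = 2000                               # composite Simpson's rule, n even
h = r / n
s = integrand(0.0) + integrand(r)
for k in range(1, n):
    s += (4 if k % 2 else 2) * integrand(k * h)
numeric = s * h / 3
closed_form = math.log((1 + beta * r) / (1 - r)) / (1 + beta)
```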
\begin{remark}\label{3.2}
Let $\tilde{F}(z)={\tilde{f}(z)}/{z}$ and $|z|=1$, where $\tilde{f}$ is as defined in Theorem~\ref{cissoidgth}. A calculation shows that
\begin{equation*}
1+\frac{z\tilde{F}''(z)}{\tilde{F}'(z)}= 1+ \frac{-\beta z}{(1+\beta z)(1-z)}+\frac{2z}{1-z},
\end{equation*}
which, since $\Re((1+z)/(1-z))\geq0$, implies that
\begin{equation*}
\Re\left(1+\frac{z\tilde{F}''(z)}{\tilde{F}'(z)}\right) \geq \beta \Re\left(\frac{-z}{(1+\beta z)(1-z)}\right).
\end{equation*}
Since $$\Re\left(\frac{-z}{(1+\beta z)(1-z)}\right)=\frac{1-\beta}{2(1+{\beta}^2+2\beta \cos{\theta})}=:g(\theta)$$
and a simple calculation shows that $g$ attains its minimum at $\theta=0$, we have
$$ \Re\left(1+\frac{z\tilde{F}''(z)}{\tilde{F}'(z)}\right)\geq \frac{\beta(1-\beta)}{2(1+\beta)^2}\geq0.$$
Hence $\tilde{F}$ is convex univalent in ${\Delta}$.
\end{remark}
Observe that the function $ S_{\beta}(z)$ is not convex when $\beta\neq 0$, and the result $f(z)/z\prec \tilde{F}(z)$, similar to Theorem~\ref{fk-z}, is still open for $f\in \mathcal{S}_{cs}(\beta)$.
By letting $r$ tend to $1$ in the above Theorem~\ref{cissoidgth}, we obtain:
\begin{corollary}[Koebe-radius]\label{cissoidkoebe}
Let $\tilde{f}$ be as given in \eqref{tilde-f}. If $f\in \mathcal{S}_{cs}(\beta)$, then either $f$ is a rotation of $\tilde{f}$ or
$$ \left\{w\in \mathbb{C} : |w|\leq-\tilde{f}(-1)=\left(\frac{1-\beta}{2}\right)^{1/(1+\beta)} \right\} \subset f({\Delta}).$$
\end{corollary}
\begin{remark}
We improved the result \cite[Corollary~4.3.1, p.~8]{Masih-2019} in Theorem~\ref{cissoidgth} and Corollary~\ref{cissoidkoebe} by extending the range of $\beta$.
\end{remark}
\subsection{Modified Koebe function:}
The Koebe function $k(z)=z/(1-z)^2$ has a pole at $z=1$ and maps the unit disk onto the domain $\mathbb{C}-(-\infty, 1/4]$, which is a slit domain. We now introduce the modified Koebe function:
\begin{equation}\label{modkoebe}
K(z):= \frac{z}{(1+\eta z)^2}, \quad 0\leq \eta <1,
\end{equation}
which is bounded in ${\Delta}$ and symmetric about the real-axis. It is interesting to observe the geometry of the domain $K({\Delta})$, which assumes different shapes for different choices of $\eta$, such as a convex, a bean-shaped or a cardioid-shaped domain. In particular, as $\eta$ tends to $1$, one of the rotations of the image domain $K(\Delta)$ converges to $k({\Delta})$, thereby justifying the name of $K(z)$. Since $k(z)= (u^2(z)-1)/4$, where $u(z)=(1+z)/(1-z)$, in a similar fashion we can write
\begin{equation*}
K(z)= \frac{1}{4\eta}(1-v^2(z)),
\end{equation*}
where
$ v(z)={(1-\eta z)}/{(1+\eta z)}$ and $\eta\neq0.$
\begin{lemma}\label{modkoebeconvex}
The function $K(z)$ as defined in \eqref{modkoebe} is convex for $0\leq \eta\leq 2-\sqrt{3}$.
\end{lemma}
\begin{proof}
Let $K(z)=z/ (1+\eta z)^2$. When $\eta=0$, $K(z)$ is the identity function and hence is convex. So let us consider $0<\eta<1$. By a computation, we obtain that
\begin{equation*}
1+\frac{zK''(z)}{K'(z)}= \frac{1-4\eta z+ \eta^2 z^2}{(1-\eta z)(1+\eta z)}.
\end{equation*}
Putting $z=e^{i \theta}$, we have
\begin{equation}\label{realexpress}
\Re\left(1+\frac{zK''(z)}{K'(z)}\right)=\frac{1-\eta^4-4 \eta(1-\eta^2)\cos{\theta}}{(1+\eta^2)^2-(2\eta \cos{\theta})^2}.
\end{equation}
Since $(1+\eta^2)^2-(2\eta \cos{\theta})^2>0$ for all $\theta$ and each fixed $\eta$, we only need to consider the numerator in \eqref{realexpress}. The numerator
\begin{equation*}
N(\theta):= 1-\eta^4-4 \eta(1-\eta^2)\cos{\theta}
\end{equation*}
is increasing in $0\leq \theta \leq \pi$ (note that $N(\theta)=N(-\theta)$), so its minimum is $N(0)=(1-\eta^2)(1+\eta^2-4\eta)$, which satisfies $N(0)\geq 0$ when $0<\eta\leq 2-\sqrt{3}$, while $N(0)<0$ when $\eta>2-\sqrt{3}$. Hence, by the definition of convexity, the result follows.
\end{proof}
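The threshold $\eta=2-\sqrt{3}$ in Lemma~\ref{modkoebeconvex} can also be observed numerically from the exact rational expression for $1+zK''(z)/K'(z)$; the sketch below (illustrative only) evaluates its real part on $|z|=1$ at the threshold and at a slightly larger sample value of $\eta$.

```python
import cmath
import math

# Illustrative evaluation of Re(1 + z*K''(z)/K'(z)) on |z| = 1 using the exact
# rational expression (1 - 4*eta*z + eta^2*z^2)/((1 - eta*z)(1 + eta*z)).
def min_re(eta, m=2000):
    vals = []
    for k in range(m):
        z = cmath.exp(1j * 2 * math.pi * k / m)
        g = (1 - 4 * eta * z + eta**2 * z**2) / ((1 - eta * z) * (1 + eta * z))
        vals.append(g.real)
    return min(vals)

at_threshold = min_re(2 - math.sqrt(3))   # boundary case: minimum ~ 0
beyond = min_re(0.30)                     # 0.30 > 2 - sqrt(3), so negative values occur
```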
Now let us consider the function
\begin{equation*}
\psi(z):= \frac{\gamma z}{(1+ \eta z)^2 }= \gamma K(z), \quad \text{where} \quad \gamma>0,
\end{equation*}
and introduce a related class defined as follows:
\begin{equation}
\mathcal{S}_{\gamma}(\eta):= \biggl\{f\in \mathcal{A} : \left(\frac{zf'(z)}{f(z)}-1\right) \prec \frac{\gamma z}{(1+\eta z)^2},\; \eta\in [0,1),\; \gamma>0 \biggl\}.
\end{equation}
Note that if $\gamma$ and $\eta$ satisfy the condition $(1-\eta)^2\geq\gamma$, then the class $\mathcal{S}_{\gamma}(\eta)$ reduces to a Ma-Minda subclass of univalent starlike functions.
Also, letting $\eta=1/4$ and $\gamma=25(\sqrt{2}-1)/16$, we see that $\mathcal{S}^{*}(\sqrt{1+z}) \subset \mathcal{S}_{\gamma}(\eta)$.
\begin{theorem}\label{modkobeGrth}
Let $f\in \mathcal{S}_{\gamma}(\eta)$ and $\eta\in [0,2-\sqrt{3}]$. Then
\begin{equation*}
-\kappa(-r) \leq |f(z)| \leq \kappa(r),
\end{equation*}
where
\begin{equation*}\label{kappa-f}
{\kappa}(z):=z \exp\left(\frac{\gamma z}{(1+\eta z)^2}\right).
\end{equation*}
\end{theorem}
\begin{proof}
Since $\psi(z)=\gamma K(z)$, using Lemma~\ref{modkoebeconvex}, we see that for $|z|=r$, $\psi(-r)\leq \Re \psi(z)\leq \psi(r).$
Also, we have $\kappa(z)=z\exp\int_{0}^{z}({\psi(t)}/{t})dt$. Hence, the result follows from Theorem~\ref{gen-thm1}.
\end{proof}
Using Lemma~\ref{modkoebeconvex}, we also obtain that $\Re \psi(z)\geq \psi(-r)$ for all $\eta\in [0,1)$ which implies $-\kappa(-r) \leq |f(z)|$. So we have the following results:
\begin{corollary}[Radius of starlikeness]
Let $f\in \mathcal{S}_{\gamma}(\eta)$, $\gamma>0$ and $\eta\in [0,1)$. Then $f$ is starlike (univalent) of order $\alpha\in [0,1)$ inside the disk $|z|<r_0$, where $r_0$ is the smallest positive root of the equation
$$(1-\alpha)\eta^2 r^2-(2(1-\alpha)\eta+\gamma)r+(1-\alpha)=0.$$
\end{corollary}
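The quadratic defining $r_0$ can be checked numerically: by construction, its smallest positive root satisfies $1-\gamma r_0/(1-\eta r_0)^2=\alpha$. The following sketch (illustrative only; the sample parameters are arbitrary, with $a$ and $g$ standing for $\alpha$ and $\gamma$) verifies this.

```python
import math

# Illustrative check of the radius of starlikeness (a = alpha, g = gamma are
# arbitrary sample values): the smallest positive root r0 of
# (1-a)*eta^2*r^2 - (2*(1-a)*eta + g)*r + (1-a) = 0
# satisfies 1 - g*r0/(1 - eta*r0)^2 = a, i.e. 1 + psi(-r0) = a.
a, eta, g = 0.5, 0.4, 0.8
A = (1 - a) * eta**2
B = -(2 * (1 - a) * eta + g)
C = 1 - a
r0 = (-B - math.sqrt(B * B - 4 * A * C)) / (2 * A)   # smallest positive root
order_at_r0 = 1 - g * r0 / (1 - eta * r0) ** 2
```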
\begin{corollary}[Koebe-radius]
Let $f\in \mathcal{S}_{\gamma}(\eta)$ and $\eta\in [0,1)$. Then either $f$ is a rotation of $\kappa$ or
\begin{equation*}
\left\{w\in \mathbb{C} : |w|\leq-\kappa(-1)= \exp\left(\frac{-\gamma }{(1-\eta)^2}\right) \right\} \subset f({\Delta}).
\end{equation*}
\end{corollary}
\begin{remark}\label{Fk-convex}
Let $F_{\kappa}(z):= \kappa(z)/z= \exp(\gamma z/(1+\eta z)^2)$. We see that for $\eta=0$ and $\gamma\leq1$, $F_{\kappa}$ is clearly convex. So consider $0<\eta<1$. After some calculations, we obtain that
\begin{equation*}
G(z):=1+ \frac{zF''_{\kappa}(z)}{F'_{\kappa}(z)} =\frac{\eta^4 z^4 +(2\eta^3+\gamma\eta^2)z^3-(6\eta^2+2\eta\gamma)z^2+ (\gamma-2\eta)z+1}{(1+\eta z)^3(1-\eta z)}.
\end{equation*}
Now for $z=e^{i\theta}$, the denominator of the real part of $G$ is $(1+\eta^2-2\eta \cos{\theta})(1+\eta^2+2\eta \cos{\theta})^3>0$, since $(1-\eta)^2>0$, and therefore it suffices to consider the numerator. After a rigorous computation, we find that the numerator of the real part of $G$ is non-negative if and only if
$0<\gamma<1$ and $0<\eta\leq \eta_0$, where $\eta_0$ (depending on $\gamma$) is the smallest positive root of the equation
\begin{equation}\label{eta0-def}
(1-\gamma)+(3\gamma-10) \eta^2+12 \eta^3+(8-3\gamma)\eta^4-16\eta^5+(2+\gamma)\eta^6+4\eta^7-\eta^8=0.
\end{equation}
Hence, $F_{\kappa}$ is convex for $0<\gamma<1$ and $0<\eta\leq \eta_0$.
\end{remark}
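For a given $\gamma$, the number $\eta_0$ can be computed numerically from \eqref{eta0-def}; the sketch below (illustrative only, at the arbitrary sample value $\gamma=0.5$) locates the smallest positive root by a scan followed by bisection.

```python
# Illustrative numerical solution of the eta_0 equation at the arbitrary sample
# value gamma = 0.5: scan for the first sign change, then refine by bisection.
g = 0.5

def p(e):
    return ((1 - g) + (3 * g - 10) * e**2 + 12 * e**3 + (8 - 3 * g) * e**4
            - 16 * e**5 + (2 + g) * e**6 + 4 * e**7 - e**8)

a, step = 0.0, 1e-3
while p(a + step) > 0:                 # p(0) = 1 - g > 0; scan to the first sign change
    a += step
b = a + step
for _ in range(100):                   # bisection keeps p(a) > 0 >= p(b)
    m = (a + b) / 2
    if p(m) > 0:
        a = m
    else:
        b = m
eta0 = (a + b) / 2
```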
For our next result, we need to recall the following result of Ruscheweyh and Stankiewicz~\cite{RusStan-1985}:
\begin{lemma}[\cite{RusStan-1985}]\label{convo-result}
Let the analytic functions $F$ and $G$ be convex univalent in ${\Delta}$. If $f\prec F$ and $g \prec G$, then $$f*g \prec F*G \quad (z\in{\Delta}).$$
\end{lemma}
\begin{theorem}\label{fk-z}
Let $\eta\in [0,2-\sqrt{3}]$. If $f$ belongs to the class $\mathcal{S}_{\gamma}(\eta) $, then
$$\frac{f(z)}{z} \prec F_{\kappa}(z),\quad (z\in {\Delta})$$
where $F_{\kappa}(z)=\kappa(z)/z$ is the best dominant and $\kappa$ as defined in Theorem~\ref{modkobeGrth}.
\end{theorem}
\begin{proof}
Let $f\in \mathcal{S}_{\gamma}(\eta)$, then
\begin{equation}\label{phi-psi}
\phi(z):= \frac{zf'(z)}{f(z)}-1 \prec \psi(z).
\end{equation}
It is well-known that the function
\begin{equation*}
g(z)=\log \left(\frac{1}{1-z}\right) =\sum_{n=1}^{\infty}\frac{z^n}{n} \in \mathcal{C},
\end{equation*}
where $\mathcal{C}$ is the usual class of normalized convex(univalent) function and thus for $f\in \mathcal{A}$, we get
\begin{equation}\label{phi-gExp1}
\phi(z)* g(z)= \int_{0}^{z}\frac{\phi(t)}{t}dt \quad \text{and}\quad \psi(z)* g(z)= \int_{0}^{z}\frac{\psi(t)}{t}dt.
\end{equation}
From Lemma~\ref{modkoebeconvex}, we see that $\psi$ is convex for $\eta\in [0,2-\sqrt{3}]$. Thus applying Lemma~\ref{convo-result} in \eqref{phi-psi}, we get
\begin{equation}\label{phi-gExp2}
\phi(z)* g(z) \prec \psi(z)*g(z).
\end{equation}
Now from \eqref{phi-gExp1} and \eqref{phi-gExp2}, we obtain
\begin{equation*}
\int_{0}^{z}\frac{\phi(t)}{t}dt \prec \int_{0}^{z}\frac{\psi(t)}{t}dt,
\end{equation*}
which implies that
\begin{equation*}
\frac{f(z)}{z}:= \exp\int_{0}^{z}\frac{\phi(t)}{t}dt \prec \exp\int_{0}^{z}\frac{\psi(t)}{t}dt=: \frac{\kappa(z)}{z}.
\end{equation*}
This completes the proof.
\end{proof}
\begin{corollary}
Let $0<\gamma<1$ and $0<\eta\leq \min\{2-\sqrt{3},\eta_0\}$, where $\eta_0$ is the least positive root of the equation \eqref{eta0-def} and also let $0<\gamma\leq \pi/2$ when $\eta=0$. If $f\in \mathcal{S}_{\gamma}(\eta)$, then $f$ satisfies the sharp inequality
$$\left|\arg\frac{f(z)}{z}\right|\leq \max_{|z|=r}\; \arg\exp\left(\frac{\gamma z}{(1+\eta z)^2}\right).$$
\end{corollary}
\begin{proof}
Let $F_{\kappa}(z):=\kappa(z)/z=\exp(\gamma z/(1+\eta z)^2)$, which is symmetric about the real axis. From Theorem~\ref{fk-z}, we have $f(z)/z \prec F_{\kappa}(z)$ for $0\leq \eta\leq 2-\sqrt{3}$. For $\eta=0$, $\Re F_{\kappa}(z)>0$ if and only if $\gamma\leq \pi/2$, and the result is obvious in that case. Now from Remark~\ref{Fk-convex}, we see that if $0<\gamma<1$ and $0<\eta\leq \min\{2-\sqrt{3},\eta_0\}$, where $\eta_0$ is the least positive root of the equation \eqref{eta0-def}, then $F_{\kappa}$ is convex, which implies
$$\Re F_{\kappa}(z)\geq \exp\left(\frac{-\gamma}{(1-\eta)^2}\right)>0,$$
so $F_{\kappa}$ is also a Carath\'{e}odory function in this case. Hence the result follows.
\end{proof}
Now using Theorem~\ref{modkobeGrth}, Remark~\ref{Fk-convex} and Theorem~\ref{fk-z}, we obtain the following result:
\begin{theorem}
Let $f\in \mathcal{S}_{\gamma}(\eta)$, then
$$\Re\left(\frac{f(z)}{z}\right) \leq \exp\left(\frac{\gamma r}{(1+\eta r)^2}\right) \quad \text{for}\quad \eta\in[0,1)$$
and
$$\min_{|z|=r} \exp\left(\frac{\gamma z}{(1+\eta z)^2}\right) \leq \Re\left(\frac{f(z)}{z}\right) \quad \text{for}\quad \eta\in[0,2-\sqrt{3}]. $$
In particular, if $0<\gamma<1$ and $0<\eta\leq \min\{2-\sqrt{3},\eta_0\}$, where $\eta_0$ is the least positive root of the equation \eqref{eta0-def}, then
$$ \exp\left(\frac{-\gamma r }{(1-\eta r)^2}\right) \leq \Re\left(\frac{f(z)}{z}\right).$$
The result is sharp.
\end{theorem}
We conclude this paper by introducing the following three new subclasses of $\mathcal{F}(\psi)$:
\begin{equation*}
\mathcal{T}:= \biggl\{f\in \mathcal{A} : \frac{zf'(z)}{f(z)}-1 \prec \log(1-z) \biggl\},
\end{equation*}
which means $zf'(z)/f(z) \in \{w\in \mathbb{C}: |\exp(w-1)-1|<1 \},$
\begin{equation*}
\mathcal{S}_{p}:= \biggl\{f\in \mathcal{A} : \frac{zf'(z)}{f(z)} \prec 1- \left(\log\frac{1+\sqrt{z}}{1-\sqrt{z}}\right)^2 \biggl\},
\end{equation*}
or equivalently $zf'(z)/f(z)\in \{w \in \mathbb{C}:|1-w|<\Re((1-w)+{\pi}^2) \}$, a parabola with opening in left half plane
and
\begin{equation*}
\mathcal{L}(\beta):= \biggl\{f\in \mathcal{A} : \frac{zf'(z)}{f(z)}-1 \prec \frac{z}{\cos(\beta z)},\; \beta\in [0,1] \biggl\}.
\end{equation*}
The above new classes are still open to study; see also Figure~\ref{f1}. Note that for the classes $\mathcal{T}$ and $\mathcal{L}(\beta)$, the function $f_0$ defined in \eqref{f0} takes the respective particular forms
\begin{equation*}
f_{\mathcal{T}}(z):=z \exp(-Li_{2}(z)),
\end{equation*}
where $$-\int_{0}^{z}\frac{\log(1-t)}{t}dt= \sum_{n=1}^{\infty}\frac{z^n}{n^2}=:Li_{2}(z)$$ is known as the dilogarithm function, and
\begin{equation*}
f_{\mathcal{L}}(z):= z \exp \int_{0}^{z}\frac{1}{\cos{\beta t}}dt= z(\sec{\beta z}+\tan{\beta z})^{1/\beta},\quad \beta\neq0.
\end{equation*}
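The closed form of $f_{\mathcal{L}}$ rests on the antiderivative $\int_{0}^{r}\sec(\beta t)\,dt=\tfrac{1}{\beta}\log(\sec\beta r+\tan\beta r)$, which the following illustrative sketch (arbitrary sample values) checks via Simpson's rule.

```python
import math

# Illustrative check of the antiderivative behind f_L (beta != 0):
# int_0^r sec(beta*t) dt = (1/beta) * log(sec(beta*r) + tan(beta*r)).
beta, r = 0.9, 0.8                     # arbitrary sample values (beta*r < pi/2)
n = 2000                               # composite Simpson's rule, n even
h = r / n
s = 1.0 + 1.0 / math.cos(beta * r)     # endpoint terms (sec(0) = 1)
for k in range(1, n):
    s += (4 if k % 2 else 2) / math.cos(beta * k * h)
numeric = s * h / 3
closed_form = math.log(1 / math.cos(beta * r) + math.tan(beta * r)) / beta
```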
\begin{figure}[h]
\begin{tabular}{c}
\includegraphics[scale=0.4]{CosLog.pdf}
\end{tabular}
\caption{Boundary curves of the functions $z/{\cos{z}}$ and $\log(1-z)$}\label{f1}
\end{figure}
\section*{Conclusion}
It is interesting to observe that even in the class $\mathcal{F}(\psi)$, functions may not be univalent. But with conditions on the bounds for the real part of $\psi$, a result similar to that of Ma-Minda~\cite{minda94} holds, which is quite important for obtaining the Koebe domain. From Remark~\ref{3.2} and Remark~\ref{Fk-convex}, we also note that the function $f_0(z)/z$, where $f_0$ is as defined in \eqref{f0}, behaves quite differently in the particular classes.
\section*{Conflict of interest}
The authors declare that they have no conflict of interest.
\label{sec:Intro}
One of the key observing modes of the Near Infrared and Slitless Spectrograph (NIRISS, Doyon et al., in prep) onboard the James Webb Space Telescope (JWST) is the Single Object Slitless Spectroscopy (SOSS) mode (Albert et al., in prep). It enables time-series spectroscopy in the 0.6--2.8\,\micron\ range for bright targets, which is of particular use for exoplanet transit spectroscopy. Indeed, simulations have demonstrated SOSS as a key mode to use with JWST on the brightest exoplanet targets \citep{greene.2016,batalha.2017,louie.2018,schlawin.2018} and it has been selected by multiple Cycle 1 programs, including the Early Release Science program DD-ERS 1366 \citep{1366jwst, bean.2018}, as well as many Guaranteed Time Observations, e.g., GTO 1201 \citep{1201jwst} and the General Observer Programs 1935 \citep{1935jwst}, 2062 \citep{2062jwst}, 2113 \citep{2113jwst}, 2589 \citep{2589jwst}, 2594 \citep{2594jwst} and 2722 \citep{2722jwst}. SOSS uses the GR700XD cross-dispersion grating prism (Doyon et al., in prep) in the pupil wheel of NIRISS to produce a series of three spectral traces: order 1 (0.83--2.8\,\micron), order 2 (0.6--1.4\,\micron) and order 3 (0.6--0.95\,\micron). In practice, order 3 does not warrant much consideration due to its faint signal and the fact that it does not increase the wavelength domain. A slight (22 pixel wide) defocus along the spatial axis is purposely included to enable observations of bright targets without saturating the detector pixels. Mechanical constraints in the thickness of the GR700XD element at the design phase prevented the first and second orders from being fully separated, resulting in an overlap by about half the trace width towards the red wavelength ends of the traces (see Figure~\ref{fig:sossmode}).
As a result, established methods for spectrum extraction cannot be applied directly to the regions affected by contamination. Typically, at high signal-to-noise, a box-extraction method \citep{deboer.1981} is preferred, which is performed by simply summing over the spatial axis all pixels located within a fixed-width aperture. This technique has been utilized in many space-based relative spectral measurements of exoplanets to date, for example: transits, eclipses or phase curves \citep[e.g.,][]{deming.2013, wakeford.2013, sing.2015, evans.2017} as well as in many ground-based observations \citep[e.g.,][]{jordan.2013, diamond.2018}. The main advantages are the fact that a box extraction is easy to implement and that it is less prone to modelling errors.
On the other hand, at lower signal-to-noise, an optimal extraction \citep{horne.1986, robertson.1986, marsh.1989} is often better. Indeed, by weighting the pixels according to their relative contribution to the signal, a better precision can be reached. This requires the determination of a spatial profile, which is a delicate task that can introduce biases in the resulting spectra \citep{horne.1986, jordan.2013}. Nevertheless, it is still used in the exoplanet community to perform spectrophotometric measurements from space \citep[e.g.,][]{kreidberg.2014, stevenson.2019} or from the ground \citep[e.g.,][]{berta.2011, stevenson.2014}, with comparable results. However, these two methods have no mechanism to distinguish between contributions from overlapping traces from different sources or diffraction orders.
Yet, the challenge of extracting spectra from blended sources is not unprecedented. It was needed notably in the context of long-slit spectroscopy for several science applications, such as the observation of galactic nuclei \citep{lucy.2003} or crowded star fields \citep{hynes.2002}. In fact, a task as common as a simple sky subtraction is by itself a type of decontamination. Hence, various techniques have been proposed over the past two decades \citep[e.g.,][]{hynes.2002, khmil.2002, lucy.2003, bolton.2010}. However, due to the particularity of the NIRISS SOSS mode and the precision it requires, it was necessary to develop a dedicated algorithm.
In this article, we present ATOCA; an algorithm designed to properly decontaminate and extract overlapping orders. Though the methods that make up the ATOCA algorithm can be applied generally to the problem of extracting overlapping spectral orders, we focus here on the NIRISS/SOSS mode of JWST. Proper extraction of SOSS observations was our primary motivation for creating ATOCA, and the algorithm has been made part of the official JWST pipeline\footnote{\url{https://github.com/spacetelescope/jwst}} -- the data management system (DMS) -- as part of the stage 2 spectral extraction step.
The article is divided as follows: Section \ref{sec:the_problem} presents an estimate of the level of contamination that is expected with the NIRISS/SOSS mode. Then, the algorithm and its implementation are presented in Sections \ref{sec:algorithm} and \ref{sec:implementation}, followed by Section \ref{sec:validation}, where we evaluate the performance of the decontamination.
\begin{figure*}[tbph]
\centering
\includegraphics[width=\linewidth]{Figures/CV3.pdf}
\caption{High signal-to-noise stack of a SOSS mode observation on a tungsten lamp obtained at cryogenic vacuum testing (CV3) of the telescope and instruments. Order 1 is the most apparent feature, extending from 0.83\,\micron\ on the right to 2.8\,\micron\ on the left portion of the image. The second order can be seen starting at 0.6\,\micron\ at the top right part of the image and extends out to 1.4\,\micron. At approximately 1.1\,\micron\ the second order is significantly blended with the first order. The faint third order can be observed above the second order. The overlap between spectral orders 1 and 2 on the left side of the image complicates the spectrum extraction and motivates this paper. Since order 2 covers shorter wavelengths than order 1, this problem should be even more striking in actual astrophysical targets, which are warmer ($T\ge3000$\,K) than the tungsten filament used in the laboratory (1500\,K). The vertical lines were added to mark the wavelengths, from right to left, at 0.9, 1.2, 1.6, 2.0, 2.4, 2.8\,$\mu$m for order 1 (in black) and at 0.6, 0.8, 1.0, 1.2, 1.4\,$\mu$m for order 2 (in red).}
\label{fig:sossmode}
\end{figure*}
\section{The SOSS Trace Overlap Problem} \label{sec:the_problem}
\begin{figure*}
\centering
\begin{subfigure}
\centering
\includegraphics[scale=0.67]{Figures/contamination_factor_order1_box-extract_width_25.pdf} %
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[scale=0.67]{Figures/contamination_factor_order2_box-extract_width_25.pdf} %
\end{subfigure}
\caption{Contamination factors (see equation \ref{eq:transit_contamination}) for a range of stellar effective temperatures (color-coded). These values hold for a standard box extraction, using a 25-pixel aperture. The wavelength domain is not entirely shown here; the first order (left panel) is virtually uncontaminated below 1.6\,\micron\ whereas the second order (right panel) contamination levels increase exponentially at longer wavelengths ($>1.1\,\micron$).}%
\label{fig:contamination_factors}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}
\centering
\includegraphics[scale=0.67]{Figures/transit_contamination_order1_box-extract_width_25.pdf}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[scale=0.67]{Figures/transit_contamination_order2_box-extract_width_25.pdf}
\end{subfigure}
\caption{Expected contamination levels during transit for a diverse sample of exoplanets. Each target is shifted vertically by 2 ppm for the first order (left panel) and by 50 ppm for the second order (right panel). A horizontal dotted line is drawn for each model to mark its zero point. The color of the lines refers to the effective temperature of the host star, which influences the level of contamination. As in Figure \ref{fig:contamination_factors}, these estimates were made assuming a 25-pixel-wide box extraction and the wavelength domain was limited for the same reasons. }%
\label{fig:transit_contamination}
\end{figure*}
The optics of the SOSS mode were designed with strong mechanical constraints, one of which, the total thickness of the GR700XD element, prevented the cross-dispersing prism from being sufficiently inclined to cleanly separate spectral orders 1 and 2 (see Albert et al. in prep). As a result, the red end of order 2 ($\lambda \geq 1.1\mu$m) partially overlaps with that of order 1 ($\lambda \geq 2.2\mu$m) (see Figure \ref{fig:sossmode}). This cross contamination of the signals is a major issue during spectrum extraction and will bias results using the simple aperture-based methods discussed in Section \ref{sec:Intro}.
The amount of contamination can be characterized by measuring the contaminating signal present in the extraction region, $F_\mathrm{contam}$, allowing for the definition of a contamination factor, $c_{order}$, via the following ratio:
\begin{equation} \label{eq:contamination_factor}
c_{order} = \frac{F_{\mathrm{contam}}}{F_{order}} \,.
\end{equation}
Here, $F_{order}$ is the flux that would be extracted from the targeted order if there was no contamination. For example, for the first order, $F_\mathrm{contam}$ is given by the signal from the second order that is overlapping with the region of interest. Based on laboratory measurements of the blaze function, detector efficiency, and end-to-end throughput of the telescope and NIRISS instrument, it is possible to simulate this contamination factor for any given extraction method.
Figure \ref{fig:contamination_factors} shows the contamination factors, $c$, for a standard box extraction using a 25-pixel aperture. The simulations were made using ATOCA and are described in Section \ref{sec:simulation}. The aperture width was determined by minimizing the dispersion of the white light-curve over the contaminated band-pass (2.0--2.8\,\micron). To get a realistic estimation of the dispersion, we had to consider the expected jitter on the telescope pointing, $\sim$5\,mas (see Albert et al., in prep.), which has the effect of increasing the width of the required aperture. The incident fluxes at different stellar effective temperatures were modeled using PHOENIX HiRes synthetic spectra \citep{husser.2013}. Note also that the contamination factor is presented in \% to distinguish it from the transit depth (which is typically in ppm), both being relative quantities.
For the first diffraction order, most of the affected wavelength range exhibits levels of contamination below 0.1\,\%, until 2.6\,\micron, after which $c_1$ increases exponentially up to 2\,\%. The stellar effective temperature also has a significant effect, since the second order covers shorter wavelengths ($0.6\leq\lambda\leq1.4\,\mu$m) than the first order ($0.83\leq\lambda\leq2.8\,\mu$m), hence a star with a stronger relative flux contribution at short wavelengths will be more affected. The second order, on the other hand, suffers more drastically from contamination, reaching levels above 100\%. Fortunately, the wavelength domain affected ($\lambda\geq0.85\,\mu$m) is shared with order 1 (and is within the region where order 1 does not suffer from contamination), so very little information is lost. The wavelengths towards the blue end of the spectrum ($\lambda\leq0.85\,\mu$m) and complementary to order 1 correspond to the part of the detector where the orders' spatial positions deviate from one another, creating a drop in the contamination levels.
The above discussion holds for any absolute flux measurement, but what is the impact for the intended application of the SOSS mode --- exoplanet time-series --- whose measurements are relative?
The fluxes measured in order 1 will be a combination of the true flux in that order, $F_1$, contaminated by some flux from order 2, $F_\mathrm{contam}$. Assuming a simplified top-hat model for an exoplanet transit (no limb darkening, instantaneous ingress and egress, non-grazing) this flux can be calculated, for cases in and out of transit, via:
\begin{equation}
F_{out} = F_1 + F_\mathrm{contam}
\end{equation}
\begin{equation}
F_{in} = (1-d-\delta_1(\lambda_1)) F_1 + (1-d-\delta_2(\lambda_2)) F_\mathrm{contam}
\end{equation}
where $F_{out}$ and $F_{in}$ are respectively the mean flux outside of transit and during transit measured by an extraction around the first order's trace, $d$ is the transit depth due to the opaque planet (i.e., without considering an atmosphere)
and $\delta_1(\lambda_1)$ and $\delta_2(\lambda_2)$ are the wavelength-dependent transit depths due to the planet's atmosphere for orders 1 and 2, respectively. $\lambda_1$ and $\lambda_2$ are the wavelength solutions at each order, which are both a function of the column position, $x$, such that $\lambda_1(x)$ and $\lambda_2(x)$.
The transit depth measured on a contaminated trace is, by definition:
\begin{equation}
D = 1 - F_{in}/F_{out}.
\end{equation}
Recalling the order contamination factor from equation \ref{eq:contamination_factor}, then the transit depth can be written:
\begin{equation} \label{eq:transit_contamination}
D = d + \delta_1(\lambda_1) + \frac{c_1}{1+c_1} \left( \delta_2(\lambda_2) - \delta_1(\lambda_1) \right).
\end{equation}
In the case where there is no chromatic variation in the atmospheric signal (i.e., a flat transmission spectrum), $\delta_2(\lambda_2) = \delta_1(\lambda_1)$ so $D = d + \delta_1(\lambda_1)$. Therefore, contamination has no bearing on the retrieved transit depth. In other words, the second order contaminating signal changes by exactly the same relative amount during transit as the first order signal.
In the case where the atmospheric signal is different at the two overlapping wavelengths, then the difference
\begin{equation}
\Delta(x) = \delta_2(\lambda_2(x)) - \delta_1(\lambda_1(x))
\end{equation}
modulated by $c_1/(1 + c_1)$ will affect the transit depth (i.e., the last term in equation \ref{eq:transit_contamination}). To distinguish it from the contamination factor $c$, we will use the name ``transit contamination'' to refer to this last term of equation \ref{eq:transit_contamination}. Generally, $\Delta$ will be of the same order of magnitude as $\delta_1$ and $\delta_2$, so a good estimate can be drawn simply from the contamination factor. Moreover, for order 1, $c$ will be small, hence $c/(c+1) \approx c$. So, concerning the first order's relative measurement, the spurious signal can be approximated as less than 1\,\% (i.e., $c_1$) of the chromatic contribution of the transit signal, $\delta_1$. For example, take a hypothetical transmission spectrum with a spectral feature for the first order of $\delta_1(2.7\micron) = 300$\,ppm above the mean transit depth $d$, and assume a spectral feature from the second order at the corresponding columns (see Figure \ref{fig:sossmode}) of $\delta_2(1.35\micron) = -200$\,ppm, i.e., $200$\,ppm below the mean transit depth. This would result in a difference of $\Delta(x)=500$\,ppm and a resulting contamination signal of around 1.5\,ppm, considering a contamination factor of $c_1\approx0.3$\,\% at this wavelength (see Figure \ref{fig:contamination_factors}).
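As a sanity check, equation \ref{eq:transit_contamination} can be verified numerically against the top-hat transit model defined above; the depths and contamination factor in this short sketch are arbitrary illustrative values:

```python
import numpy as np

def measured_depth(d, delta1, delta2, c1):
    """Transit depth from the top-hat model (fluxes in and out of transit)."""
    F1 = 1.0                      # true order-1 flux (arbitrary units)
    Fc = c1 * F1                  # contaminating order-2 flux
    F_out = F1 + Fc
    F_in = (1 - d - delta1) * F1 + (1 - d - delta2) * Fc
    return 1.0 - F_in / F_out

def closed_form(d, delta1, delta2, c1):
    """D = d + delta1 + c1/(1+c1) * (delta2 - delta1)."""
    return d + delta1 + c1 / (1 + c1) * (delta2 - delta1)

# Arbitrary illustrative values: 300 ppm and -200 ppm features, 0.3% contamination
d, delta1, delta2, c1 = 0.01, 300e-6, -200e-6, 0.003
D = measured_depth(d, delta1, delta2, c1)
bias_ppm = (D - (d + delta1)) * 1e6       # "transit contamination", in ppm
```

The closed form and the direct flux-ratio computation agree to machine precision, and the induced bias (here of order 1\,ppm) scales with both $\Delta$ and $c_1$.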
Nevertheless, to fully grasp the importance of this effect, we computed the resulting contamination signal in transmission for a variety of exoplanets, most of them being part of the NIRISS Exploration of the Atmospheric diversity of Transiting exoplanets (NEAT) GTO program \citep{1201jwst}. The results are shown in Figure \ref{fig:transit_contamination}. The transit models for each planet were produced using the SCARLET atmosphere framework \citep{benneke_atmospheric_2012, benneke_how_2013, benneke.2015} assuming, for simplicity, cloud-free atmospheres with solar elemental abundances and chemical equilibrium. These assumptions generally lead to stronger signals, hence upper limits on the estimates. For the first order, the transit contamination is constrained below 8\,ppm for all targets except WASP-107\,b, for which it reaches almost $-12$\,ppm. This corresponds to $\approx$1\% of the planet's atmospheric signal, as expected. For example, the transmission spectrum of WASP-107\,b presents spectroscopic variations around 2500\,ppm (see Figure \ref{fig:hot_jup_tr}), which would lead to an expected transit contamination signal of 25\,ppm, not far from the 12\,ppm value found here. On the other hand, the second order is much more affected, with levels around 100\,ppm or more in the longer wavelength range shown in Figure \ref{fig:transit_contamination}. This contamination comes from the wings of the first order's spatial profile. The longest wavelengths are not presented since the second order becomes almost completely diluted into the first order (see Figure \ref{fig:sossmode}). In contrast, below 0.8\,\micron, where the second order contributes unique wavelength coverage, the transit contamination seems to vanish. However, this drastic drop in contamination is attributable to the simulations, as discussed in Section \ref{sec:results}.
Whilst the systematic error on the transit signal may seem small, it must not be taken lightly and should be prevented using the ATOCA extraction method presented in the next section. We also want to emphasize that this is a systematic error, and not a randomly distributed source of noise such as shot noise. These estimates could also be worsened by any other relative signal that depends on wavelength, such as limb darkening and stellar contamination from unocculted regions \citep[e.g.][]{rackham.2018, genest.2022}. Moreover, the examples presented here assume that the trace shape is perfectly stable within a whole time-series and that the trace position varies within expectations, i.e., following a random normal distribution with a dispersion of 5\,mas. In the context of real observations, these assumptions may not hold due to the finite pointing precision of the Fine Guidance Sensor (FGS) and possible variations in the point spread function (breathing effects, wavefront variations, etc.). Therefore, to obtain the most stable transmission spectrum, one may need to increase the width of the aperture (along the spatial axis) used for the box extraction \citep[e.g.,][]{diamond.2018, mikal-evans.2021} in order to minimize the variation due to the signal moving in and out of the aperture, hence increasing the contribution of the contaminating order. Furthermore, even in the context of standard extractions, ATOCA will help to calibrate and extract the one-dimensional spectra. The contamination would also need to be properly characterized by identifying the contribution of each order to ensure the reliability of any results. Finally, extraction using ATOCA will ensure that science applications needing absolute flux calibration can be realized with SOSS.
\section{Description of the Extraction Method} \label{sec:algorithm}
\begin{figure*}
\centering
\includegraphics[scale=0.8]{Figures/ATOCA_inputs.pdf}
\caption{\label{fig:2}
Spectrum and associated spectral profiles for a 2300\,K star, along with contributions from the first (blue) and second (orange) orders.
The top panel shows the spectral throughputs (dashed lines) and the flux density of a 2300\,K PHOENIX spectrum downgraded to each order's resolution, as well as at a higher resolution representative of the underlying flux (in green). The bottom left panel displays the spatial profiles along a given column where the overlap occurs. At the bottom right, an example of the convolution kernels centered at $1 \, \mu \rm m$ is shown. The green curve is the underlying flux (same as top panel). }
\end{figure*}
The core idea behind ATOCA is to determine the underlying flux by fitting each order directly and simultaneously on the detector image, pixel by pixel. To do so, we first need to establish a linear model of each individual pixel with the flux as the independent variable. More precisely, we need to discretize the flux by evaluating it on a wavelength grid. Each of the nodes (or elements) of the resulting flux array is an independent variable. Then, by minimizing the $\chi^2$ with respect to each of these nodes, we are able to express the flux as the solution of a linear system, which ultimately enables us to extract it explicitly. Hence, no forward modelling of the flux is needed for an extraction. However, to be accurate, the method requires thorough knowledge of the detector's properties. The formalism of the algorithm is described below.
For the following equations, each pixel used in the fit is labeled by the index $i$, and the total number of relevant pixels is given by $N_i$. There is no need to account for the two-dimensional nature of the detector with additional indices. Each diffraction order also needs to be identified, here using the index $n$. Now, to determine the flux falling on each pixel, we need, for each order: 1) the wavelength solution, 2) the spectral throughput\footnote{Throughput is defined here as an end-to-end wavelength-dependent transmission (detected flux divided by the flux impinging the telescope).}, and 3) the spatial throughput. From the wavelength solution, we can define the central wavelength of a pixel $i$ at order $n$ as $\lambda_{ni}$. It will also be useful to define $\lambda^+_{ni}$ and $\lambda^-_{ni}$, the wavelengths at the pixel borders in the spectral direction, and $\Delta \lambda_{ni}=\lambda^+_{ni} - \lambda^-_{ni}$, the pixel's spectral coverage. The spectral and spatial throughputs are given respectively by the function $T_n(\lambda)$ and the constant $P_{ni}$. Note that the former depends on the wavelength but not explicitly on the pixel, whereas the opposite is true for the latter. Finally, let the spectral flux density of the target incident upon the spectrograph be $f(\lambda)$.
It is also important to consider that the flux density is seen by each order at a different resolution, so for an order $n$, we need an additional input: 4) the spectral resolution kernel $\kappa_n(\tilde{\lambda}, \lambda)$. It relates the convolved flux $\tilde{f}_n$ to the incident flux through the equation
\begin{equation} \label{eq:kernel}
\tilde{f}_n(\tilde{\lambda}) = \int_{0}^{\infty} \kappa_n(\tilde{\lambda}, \lambda) f(\lambda) d\lambda \, .
\end{equation}
Figure \ref{fig:2} presents some visualizations of the aforementioned quantities. The difference between the two orders after convolution with the resolution kernels becomes apparent in the overlapping wavelength range, where the first order (blue curve) is not superimposed perfectly on the second order (orange curve).
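Numerically, equation \ref{eq:kernel} becomes a matrix product once the kernel is evaluated on the wavelength grid. The real SOSS kernels are derived from the instrument model; the sketch below assumes, purely for illustration, a Gaussian kernel and a toy resolving power:

```python
import numpy as np

# Wavelength grid for the underlying flux f (microns); uniform for simplicity
lam = np.linspace(0.6, 2.8, 2000)

def gaussian_kernel_matrix(lam, resolving_power):
    """Build kappa so that f_conv = kappa @ f applies eq. (kernel) numerically.
    Row j is a Gaussian of FWHM lam[j]/R centered on lam[j], normalized to
    unit sum so a featureless spectrum is preserved."""
    sigma = lam[:, None] / resolving_power / 2.355        # FWHM -> sigma
    kappa = np.exp(-0.5 * ((lam[None, :] - lam[:, None]) / sigma) ** 2)
    kappa /= kappa.sum(axis=1, keepdims=True)             # row-normalize
    return kappa

kappa = gaussian_kernel_matrix(lam, resolving_power=50)   # toy R, not SOSS's
f = 1.0 + 0.5 * np.sin(40 * lam)                          # toy underlying flux
f_conv = kappa @ f                                        # convolved spectrum
```

Each row of $\kappa$ sums to unity, so a flat spectrum is left unchanged while spectral features are smoothed to the order's resolution.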
\subsection{The model}
With all this in place, we are now able to define a model of the detector. The number of photo-electrons detected by pixel $i$ can be represented, up to a multiplicative constant, as
\begin{equation}\label{eq:pix_model}
\begin{aligned}
M_{i} & = \sum_n \int_{\lambda_{ni}^-}^{\lambda_{ni}^+} P_{ni}T_n(\lambda)\tilde{f}_n(\lambda)\lambda d\lambda \\
& = \sum_n \int_{\lambda_{ni}^-}^{\lambda_{ni}^+} a_{ni}(\lambda)\tilde{f}_n(\lambda)d\lambda,
\end{aligned}
\end{equation}
where $a_{ni}(\lambda)$ accounts for all the coefficients that are not the flux. Note that the summation is made over all orders $n$ that contribute to the signal measured by a given pixel $i$. In the case of the NIRISS/SOSS mode, the index $n$ covers only the first and second orders; the third order is not considered since it does not cover the same pixels. To translate this model into a numerical form, we can define a grid onto which $f$ is projected, labeled by the index $k$, so that $f(\lambda_k) = f_k$ and $\Delta \lambda_k = \lambda_{k+1} - \lambda_k$. The length of the discretized grid is then given by $N_k$, so that $1 \leq k \leq N_k$. Similarly, $N_{\tilde{k}}$ is the length of the convolved flux $\tilde{f}$, so that $1 \leq \tilde{k} \leq N_{\tilde{k}}$. We also need a numerical form of the integral. There are multiple ways to do this, but for ATOCA, we use the trapezoidal method on a specified grid, as illustrated in Figure \ref{fig:trpz}. The details of this method are given in Section \ref{sec:trpz}. Independently of the chosen integration technique, the numerical form of the integral will take the form
\begin{equation}\label{eq:pix_model_num}
M_i = \sum_n \sum_{\tilde{k}} w_{in\tilde{k}} a_{in\tilde{k}}\tilde{f}_{n\tilde{k}},
\end{equation}
with $w_{in\tilde{k}}$ given by the integration method. To link the diffraction orders, we want to write these equations according to the underlying flux $f(\lambda)$, following equation \ref{eq:kernel}. In the numerical form,
\begin{equation}
\tilde{f}_{n\tilde{k}} = \sum_k \kappa_{n\tilde{k}k}f_{k}
\end{equation}
with $\kappa_{n\tilde{k}k}$ being the coefficients of the convolution kernel at order $n$. Both the numerical integration method and the convolution kernel are encoded in these coefficients.
Finally, we have that
\begin{equation}
M_i = \sum_n \sum_{\tilde{k}} w_{in\tilde{k}} \, a_{in\tilde{k}}
\sum_k \kappa_{n\tilde{k}k}f_k \, .
\end{equation}
This equation can be written in a more intuitive matrix form as
\begin{equation}
\bigg(M\bigg)_{N_i}
= \sum_n \bigg(w_n a_n\bigg)_{N_i \times N_{\tilde{k}}}
\bigg(\kappa_n\bigg)_{N_{\tilde{k}} \times N_k}
\bigg(f\bigg)_{N_k} \, .
\end{equation}
To simplify the notation again, we can put all the coefficients for each order in single matrices $\textbf{b}_n$ with dimensions $N_i \times N_k$,
\begin{equation} \label{eq:bn}
\bigg(M\bigg)_{N_i}
= \sum_n \bigg(b_n\bigg)_{N_i \times N_k}
\bigg(f\bigg)_{N_k} \, ,
\end{equation}
and add them together in one matrix $\textbf{B}$ to have a final model of each valid pixel given by
\begin{equation}\label{eq:B}
\boxed{
\textbf{M}_{N_i}
= \textbf{B}_{N_i \times N_k} \textbf{f}_{N_k}
} \, .
\end{equation}
This result is one of the main utilities of ATOCA, which is a linear model of the full NIRISS/SOSS detector. One could use it to generate quick simulations, given a model of the incident flux.
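To make the construction of equations \ref{eq:bn} and \ref{eq:B} concrete, the following toy sketch assembles two overlapping ``order'' matrices on a one-dimensional detector and evaluates the linear model; the weights, offsets, and throughputs are arbitrary illustrative values, not the actual SOSS calibration:

```python
import numpy as np

N_i, N_k = 40, 120                          # pixels, flux nodes
k_grid = np.arange(N_k)
f = np.exp(-0.5 * ((k_grid - 60) / 25.0) ** 2)   # toy underlying flux

def order_matrix(N_i, N_k, shift, throughput):
    """Toy b_n (eq. bn): each pixel integrates three adjacent flux nodes
    with trapezoid-like weights, offset by `shift` nodes for this order."""
    b = np.zeros((N_i, N_k))
    for i in range(N_i):
        k0 = min(3 * i + shift, N_k - 3)
        b[i, k0:k0 + 3] += throughput * np.array([0.25, 0.5, 0.25])
    return b

b1 = order_matrix(N_i, N_k, shift=0, throughput=1.0)    # "order 1"
b2 = order_matrix(N_i, N_k, shift=15, throughput=0.3)   # overlapping "order 2"
B = b1 + b2                                             # eq. (B): B = sum_n b_n
M = B @ f                                               # linear detector model
```

Because the model is linear in $\textbf{f}$, doubling the input flux doubles the modeled counts, which is the property exploited when solving for $\textbf{f}$.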
\subsection{Solving for \textbf{f}}
Now that we have a model of the intensity at each relevant pixel, we can link the individual measured intensities $D_i$ to the flux nodes $f_k$ by fitting the pixel model directly to the detector using a $\chi^2$ minimization. The $\chi^2$ is defined as
\begin{equation}\label{eq:chi2}
\chi^2 = \left\| \frac{\textbf{D} - \textbf{M}}{\mathbf{\sigma}} \right\|^2 \, ,
\end{equation}
where \textbf{D} is the array of measured intensities for each pixel $i$ and $\mathbf{\sigma}$ the array of their uncertainties. The best solution can then be found by imposing
\begin{equation}
\frac{d \chi^2}{d f_k} = 0 \, .
\end{equation}
The detailed calculations found in Appendix \ref{ssec:chi2} lead to the following system of equations:
\begin{equation}\label{eq:solution}
\left(\frac{\textbf{B}}{\mathbf{\sigma}}\right)^T_{\rm N_k \times N_i} \left(\frac{\textbf{D}}{\mathbf{\sigma}}\right)_{\rm N_i}
= \left(\frac{\textbf{B}}{\mathbf{\sigma}}\right)_{\rm N_k \times N_i}^T
\left(\frac{\textbf{B}}{\mathbf{\sigma}}\right)_{\rm N_i \times N_k}
\mathbf{f}_{\rm N_k} \, ,
\end{equation}
where
\begin{equation}
\left(\frac{\textbf{B}}{\mathbf{\sigma}}\right)_{\rm N_i \times N_k} = \mathrm{diag} \left(\frac{1}{\mathbf{\sigma}}\right)_{\rm N_i \times N_i} \textbf{B}_{\rm N_i \times N_k} \, .
\end{equation}
This system can now be solved for $\mathbf{f}$. A comparison with the optimal extraction method \citep{horne.1986} is presented in Appendix \ref{sec:comparison}.
To precisely estimate the integral representing each pixel, an oversampled numerical grid is required (see Figure \ref{fig:trpz} and paragraph \textit{Grid sampling} of Section \ref{sec:considerations}). Hence, for a given pixel, the solution of equation \ref{eq:solution} would be highly degenerate. In many situations, the system will still be invertible since many pixels can cover slightly different wavelength ranges, but the solutions will then be extremely unstable. This is an ill-conditioned system, where a slight change in the observation vector $\mathbf{D}$ could cause large differences in the solution $\mathbf{f}$. To circumvent this problem, the system needs to be regularized.
\subsection{Regularization} \label{sec:regularization}
There are multiple ways to perform regularization. The one chosen here is Tikhonov regularization \citep{tikhonov.1963}. It is also referred to as Phillips--Twomey regularization \citep{phillips.1962} and ridge regression \citep{horel.1962}. This technique has been used in astrophysics in a variety of similar situations requiring the inversion of an integral equation \citep[e.g.,][]{kunasz.1973,thompson.1990}. In fact, it is part of some advanced spectral extraction methods for fiber-fed spectrographs that were identified as being among the most effective \citep{min.2020}. It is also used in the context of spline interpolation of noisy data \citep{green1993, hastie.1990}. The main idea is to add a regularization term to the linear $\chi^2$ (equation \ref{eq:chi2}), yielding the following equation:
\begin{equation}\label{eq:tikhonnov}
\chi^2_{\rm Reg} = \chi^2 + \alpha \left\| \Gamma \textbf{f} \right\|^2 \, .
\end{equation}
Here, $\alpha$ is a Lagrange multiplier and $\Gamma$ is a linear operator that adds a ``cost" depending on the nature of the solution. Generally, it is used to favor smoother solutions, hence reducing the level of overfitting. We can obtain the new solution by minimizing the system in the same fashion as before, differentiating with respect to $f_k$, with the following result:
\begin{equation}\label{eq:tikho_solution}
\left(\frac{\textbf{B}}{\mathbf{\sigma}}\right)^T \left(\frac{\textbf{D}}{\mathbf{\sigma}}\right)
= \left[ \left(\frac{\textbf{B}}{\mathbf{\sigma}}\right)^T
\left(\frac{\textbf{B}}{\mathbf{\sigma}}\right)
+ \alpha \, \Gamma^T \Gamma \right]
\mathbf{f}_{\rm N_k} \, .
\end{equation}
In this case, the Tikhonov matrix $\Gamma$ was chosen to be the first derivative operator as in \cite{lamost_tikhonov.2015} or \cite{piskunov_optimal_2021}.
The choice of $\alpha$ can be made in many different ways. Usually, the general idea is to find a good balance between the regularization term and the $\chi^2$, which can be done using the L-curve criterion \citep[e.g.,][]{hansen1992} or generalized cross-validation techniques, GCV \citep{golub.1979, wahba.1977}. The L-curve technique has the advantage of being robust, especially against correlated noise, compared to the GCV \citep{hansen1993}. However, it tends to over-smooth the solution, which is not optimal. As for the GCV, it is too computationally intensive in the context of large-scale problems. However, in the present situation, the objective is not the direct product of the regularization, i.e., the underlying flux, but rather the modeling of the pixels after re-integration (this will be discussed in Section \ref{sec:considerations}, paragraph \textit{Proper output}, and Section \ref{sec:implementation}). Thus, it is not necessary to find the most physically accurate solution. In fact, what is needed is the smoothest solution that, once re-projected on the detector, fits the observations within the uncertainties. Consequently, we defined a custom criterion to determine $\alpha$, with the objective of keeping the sensitivity of the solution to the regularization factor below the expected noise. Mathematically, this amounts to defining a threshold on the derivative of the $\chi^2$ with respect to $\log \alpha$. The same convergence criterion was used by \cite{khmil.2002} in a similar context.
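As a self-contained toy illustration (not the DMS implementation), the regularized system can be assembled and solved with a first-difference $\Gamma$; the simple $\chi^2$-threshold rule for $\alpha$ used below is a simplified stand-in for the derivative criterion described above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ill-conditioned system: more flux nodes than pixels, as in ATOCA
N_i, N_k = 60, 150
B = np.zeros((N_i, N_k))
for i in range(N_i):                        # each pixel averages 4 flux nodes
    k0 = int(i * (N_k - 4) / (N_i - 1))
    B[i, k0:k0 + 4] = 0.25
f_true = 1.0 + np.sin(np.linspace(0, 6, N_k))
sigma = 0.01
D = B @ f_true + rng.normal(0.0, sigma, N_i)

Bw, Dw = B / sigma, D / sigma               # weighted system (B/sigma, D/sigma)
Gamma = np.diff(np.eye(N_k), axis=0)        # first-derivative (difference) operator

def solve(alpha):
    """Solve the Tikhonov normal equations for a given alpha."""
    lhs = Bw.T @ Bw + alpha * Gamma.T @ Gamma
    return np.linalg.solve(lhs, Bw.T @ Dw)

# Scan log(alpha) and keep the strongest regularization whose chi^2 is still
# within the expected noise level of the best (least regularized) fit
alphas = np.logspace(-4, 4, 30)
chi2 = np.array([np.sum((Dw - Bw @ solve(a)) ** 2) for a in alphas])
alpha_best = alphas[chi2 <= chi2.min() + N_i].max()
f_hat = solve(alpha_best)
```

The selected $\alpha$ is thus the strongest regularization that still reproduces the data within roughly the expected noise, mirroring the idea of the smoothest acceptable solution.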
\subsection{Other considerations} \label{sec:considerations}
\paragraph{Grid sampling} One important aspect of the technique is the choice of the wavelength grid. Since the simulated pixels are the results of numerical integrations, they are subject to computational errors. One way to define the grid would be to use a grid representative of the native pixel sampling and to oversample each interval of this grid by a certain factor. However, this would create unnecessarily large systems and increase the computation time. Moreover, for the regularization method to be well-behaved, it is better to have errors of the same order for each node. Indeed, as highlighted by \cite{puetter_digital_2005}, the dynamical range needs to be constrained, otherwise some regions will be overfitted and others underfitted. Given all this, we opted for an irregular grid designed to make the magnitude of the integration error between subsequent nodes more uniform. To estimate the integration error on each node, we compared the result of a trapezoidal integration (as done in ATOCA) with a more precise Romberg integration. The intervals with an estimated error higher than a specified tolerance were iteratively oversampled by a factor of two until the tolerance was satisfied. This method only requires an estimate of the function to be integrated, which can be supplied by the user or estimated directly from the data. An example of an uneven grid is shown in Figure \ref{fig:trpz}, represented by the vertical dotted lines.
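The refinement loop can be sketched as follows; for simplicity, this sketch uses a twice-finer trapezoidal estimate as the per-interval error check in place of the Romberg comparison used in ATOCA, and the integrand is an arbitrary narrow feature on a flat continuum:

```python
import numpy as np

def trapz_interval(g, a, b, n):
    """Composite trapezoidal integral of g over [a, b] with n subintervals."""
    x = np.linspace(a, b, n + 1)
    y = g(x)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def adaptive_grid(g, a, b, n_start=20, tol=1e-7, max_iter=12):
    """Halve any interval whose 1-panel trapezoid estimate disagrees with a
    2-panel one by more than tol; repeat until all intervals pass."""
    grid = list(np.linspace(a, b, n_start + 1))
    for _ in range(max_iter):
        new_grid, refined = [grid[0]], False
        for lo, hi in zip(grid[:-1], grid[1:]):
            if abs(trapz_interval(g, lo, hi, 2) - trapz_interval(g, lo, hi, 1)) > tol:
                new_grid.append(0.5 * (lo + hi))   # split this interval
                refined = True
            new_grid.append(hi)
        grid = new_grid
        if not refined:
            break
    return np.array(grid)

# A flat spectrum with one narrow feature: the grid refines only near 1.2
g = lambda x: 1.0 + np.exp(-0.5 * ((x - 1.2) / 0.005) ** 2)
grid = adaptive_grid(g, 0.8, 2.8)
```

The returned grid keeps the coarse initial spacing where the integrand is smooth and is repeatedly halved around the narrow feature, which is the behavior illustrated by the dotted lines of Figure \ref{fig:trpz}.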
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/integration_adapt_grid.pdf}
\caption{Example of the trapezoidal integration on a specific grid. The shaded grey region represents the area under the curve for a trapezoidal integration over the wavelength coverage of a given pixel delimited by the vertical dashed lines. The vertical dotted lines indicate the irregular grid used to do the integration. The flux at each order's resolution is given by the solid lines.
}
\label{fig:trpz}
\end{figure}
\paragraph{Proper output} As mentioned before, the underlying flux $\mathbf{f}$ is not the end product of an extraction. Indeed, since it has a resolution higher than both orders on the detector, $\mathbf{f}$ will be degenerate. In fact, solving for $\mathbf{f}$ is a deconvolution, which is subject to instabilities or artifacts \citep{bolton.2010}. Thus, one will have to reconvolve the result to get rid of these effects. In this case, the underlying flux has to be integrated on bins representing a pixel grid. This can be done by invoking equation \ref{eq:B} and reconstructing the detector, which can be used to assess the quality of the detector modeling by examining the residuals. It could also be used to rebuild each order independently using the $\textbf{b}_n$ matrices from equation \ref{eq:bn}. Furthermore, it is possible to get a one-dimensional spectrum by re-integrating on a grid representative of the pixel sampling. In fact, this is equivalent to reconstructing a single row of the detector. In this context, without any dimension in the cross-dispersion axis, the spatial profile would not be relevant anymore and its value should be set to unity.
\paragraph{Wavelength distortions} In a variety of situations, the wavelength solution will not be constant along the axis perpendicular to the dispersion. Generally, this can be due to differential refraction in the Earth's atmosphere and imperfect spectrograph optics \citep{horne.1986}. This effect is seen in many spectrographs \citep[e.g.][]{bolton.2010, piskunov_optimal_2021}. It can also be caused by observing techniques like the spatial-scan mode of the \textit{Hubble Space Telescope}'s Wide Field Camera 3 \citep{deming.2013}. In the case of the NIRISS SOSS mode, a tilt is present due to the grism configuration (Albert et al., in prep.). Standard extraction procedures will either neglect this distortion \citep{sing.2015} or resample the detector image by interpolating along the dispersion axis \citep[e.g.,][]{kreidberg.2014}. ATOCA has the advantage of implicitly accounting for this distortion by treating each pixel individually, and using the full 2D wavelength solution.
\section{Implementation} \label{sec:implementation}
\begin{figure*}%
\centering
\includegraphics[scale=0.6, trim={3cm 2cm 3cm 2cm},clip]{Figures/Decontamination_steps.pdf}
\caption{\label{fig:steps}
Decontamination steps for the first and second spectral orders. From top to bottom: A) processed image of the detector with only signal from the targeted object, B) selection of relevant pixels, C) reconstructed image of the contaminating orders after a fit using ATOCA, D) decontaminated image of the first order after subtraction of the contaminating order; the data is now ready for a standard box extraction around first order (delimited by the green lines). Panels E and F are the same steps as C and D, but the extraction of the second order. The color scale is logarithmic, with the lowest intensity in black and the highest in yellow. The pixels that are not considered in the analysis are in white.}
\end{figure*}
ATOCA has been implemented in \texttt{python 3.8} as an option for the spectral extraction of NIRISS/SOSS observations in the JWST Data Management System (DMS). It is part of the \texttt{Extract1dStep}\footnote{\url{ https://jwst-pipeline.readthedocs.io/en/latest/jwst/extract_1d/arguments.html}} of the \texttt{spec2pipeline} calibration step. A description of the inputs and an example Jupyter Notebook can be found at \url{https://github.com/AntoineDarveau/atoca_demo}.
However, the end product is not the spectrum extracted with this method, as described in the \textit{Proper output} paragraph of Section \ref{sec:considerations}. Indeed, one caveat regarding the product of an extraction with ATOCA is its dependency on the accuracy of the model. Just like optimal extraction, the method needs a relative precision on the spatial profile much better than that of the corresponding data in order to perform an unbiased spectral extraction \citep{horne.1986}. Moreover, inconsistencies between the spectral orders in the wavelength solution, the throughput or the resolution kernels could result in an intermediate solution between the two orders that will not satisfy the expected accuracy. However, in the case of contaminated spectra, it is possible to circumvent this problem by reconstructing the trace of each individual order with the $b_n$ matrices of equation \ref{eq:bn}, which are then used to decontaminate the detector image; the process is shown in Figure \ref{fig:steps}. This allows a more classical extraction, like a box extraction, to be performed afterwards on each decontaminated trace, which has the advantage of reducing the dependency on the accuracy of the model. For instance, as shown in Figure \ref{fig:steps}, the modelling will only affect contaminated columns of the first order, leaving the rest untouched. This technique was preferred in the context of the NIRISS/SOSS mode, although the ATOCA spectra are also available as a byproduct of the detector fit. Another output of ATOCA is the model of each order (1st and 2nd) used for the decontamination. This can be used to assess the quality of the fit by simply looking at the residuals, and would also be required to quantify the level of contamination (see Section \ref{sec:the_problem}). Furthermore, this model of the detector can be used to assign values to bad pixels located inside the aperture without the need for a separate outlier correction routine.
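The decontamination-then-box-extraction scheme can be illustrated schematically as follows. This is a minimal sketch with hypothetical arrays: in practice, \texttt{model\_other\_order} would be the contaminating-order image rebuilt from the ATOCA detector fit (the $b_n$ matrices applied to the best-fit flux), and the aperture mask would follow the trace.

```python
import numpy as np

def decontaminate_and_box_extract(data, model_other_order, aperture_mask):
    """Subtract the modeled contaminating order, then box-extract.

    Hypothetical sketch: `model_other_order` stands in for the image of the
    contaminating order reconstructed from the ATOCA fit, and
    `aperture_mask` selects the pixels of the trace being extracted.
    """
    cleaned = data - model_other_order  # analogous to panels D/F of the figure
    # Simple box extraction: sum over the spatial axis inside the aperture.
    return np.sum(np.where(aperture_mask, cleaned, 0.0), axis=0)

# Toy example: a 2-row "detector" where the second row is pure contamination.
data = np.array([[5.0, 6.0], [1.0, 1.0]])
contamination = np.array([[0.0, 0.0], [1.0, 1.0]])
mask = np.ones_like(data, dtype=bool)
spectrum = decontaminate_and_box_extract(data, contamination, mask)
```

Because only the contaminating-order model is subtracted, uncontaminated columns pass through the box extraction completely unchanged.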
As shown in Figure \ref{fig:steps}, only the well-behaved pixels that will be extracted are considered in the application of ATOCA. This reduces the computational time and ensures that the detector fit is not biased by superfluous regions of the detector. However, even with pixel selection, the matrix $B$ in equation \ref{eq:lin_sys}, of size $N_{\rm pixel} \times N_{k}$, remains considerable. For example, in the realistic scenario of a 40-pixel aperture and a reasonable oversampling of the wavelength grid (tolerance of $10^{-3}$ per pixel), the matrix will contain around $150,000 \times 6000 = 9\times 10^8$ elements. To overcome this problem, we took advantage of the fact that most elements of the matrix are null. Indeed, each order (or source) modeled by the matrix $B$ is represented as a diagonal block that can be shifted with respect to the main diagonal. This enabled us to use the \texttt{scipy} \citep{scipy.2020} sparse matrices and drastically reduce the computational time and the amount of memory needed.
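The sparse structure described above can be sketched with a toy system. The dimensions, bandwidths, and values below are illustrative only; the real system is $\sim 150{,}000 \times 6000$ with blocks set by the convolution kernels.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# Toy stand-in for the N_pixel x N_k system.
rng = np.random.default_rng(0)
n_pix, n_k = 200, 50

# Each order contributes a banded block: a pixel only sees a few neighboring
# wavelength bins, so most entries of B are zero.
block = sparse.diags(
    [rng.uniform(2.0, 3.0, n_k),       # main diagonal
     rng.uniform(0.0, 0.5, n_k - 1),   # first upper diagonal
     rng.uniform(0.0, 0.5, n_k - 2)],  # second upper diagonal
    offsets=[0, 1, 2])
# Stack a second, sparser block to mimic the pixels of a contaminating order.
B = sparse.vstack(
    [block,
     sparse.random(n_pix - n_k, n_k, density=0.02, random_state=0)]).tocsr()

f_true = rng.uniform(1.0, 2.0, n_k)
pixels = B @ f_true
# Solve the normal equations B^T B f = B^T d, keeping everything sparse.
f_hat = spsolve((B.T @ B).tocsc(), B.T @ pixels)
```

Keeping $B$ in a compressed sparse format means memory scales with the number of nonzero entries (a few per row) rather than with $N_{\rm pixel} \times N_k$.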
The implementation required the addition of new reference files to the Calibration Reference Data System (CRDS)\footnote{\url{https://jwst-crds.stsci.edu/}} used by the DMS, since ATOCA needs, for each order, the two-dimensional wavelength solutions, the spectral resolution kernels, the spatial profiles, and the spectral throughputs. The one-dimensional wavelength solutions and the throughputs are already a product of the standard calibrations, so the two-dimensional wavelength maps can be created simply by applying the tilt to the existing solution. The convolution kernels were determined from monochromatic point spread functions generated with \texttt{WebbPSF}. The resulting images were rectified to remove the tilt and summed over the spatial axis to keep only the spectral dependency. The most demanding input is still the determination of the spatial profile, but techniques typically used for optimal extractions could be applied directly to most of the spectral ranges, except for the overlapping parts. An algorithm to estimate the spatial profiles of both orders within the contaminated region is currently in development (Radica et al., in prep), and will be made available to complement ATOCA before the release of Cycle 1 data.
As mentioned in Section \ref{sec:considerations}, ATOCA needs an estimate of the underlying flux $f_k$ to generate an oversampled grid. In the context of the NIRISS/SOSS mode, this is done by extracting the underlying flux $f_k$ for each order separately, with only the most contaminated pixels masked. These rough extractions are done over a grid at native pixel sampling, which ensures the stability of the solution and removes the need for any regularization. Precision is lost in this process and the contamination is not treated correctly, but the result is sufficient as an estimate. The latter is also used to provide a first guess of the regularization factor, which is refined afterwards as described in Section \ref{sec:regularization}. This step can take some time depending on the precision needed for the oversampled grid and the number of relevant pixels. Fortunately, it only needs to be performed once for a given time series, since the underlying spectrum will not vary enough to justify a different level of regularization.
In realistic observations, the position of the trace will change slightly between visits due to the angle of the pupil wheel and variations in the target acquisition, which will alter the wavelength solution as well as the spatial profile. These changes effectively take the form of a rotation and a spatial shift of the reference files, and are implemented as input parameters (rotation and translation) captured by the keyword \texttt{soss\_transform = [x, y, theta]}. These parameters can be either specified by the user or determined within \texttt{Extract1dStep} by fitting the measured trace centroid.
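A correction of this kind could be applied to a two-dimensional reference map along the following lines. This is a sketch using \texttt{scipy.ndimage}; the actual DMS implementation may differ in interpolation order, rotation center, and the order of operations.

```python
import numpy as np
from scipy import ndimage

def transform_reference(ref_map, dx, dy, theta_deg):
    """Apply a soss_transform-like [x, y, theta] correction to a 2D
    reference map (e.g., spatial profile or wavelength map).

    Hypothetical sketch: a rotation about the image center followed by a
    sub-pixel shift, both with linear interpolation.
    """
    rotated = ndimage.rotate(ref_map, theta_deg, reshape=False, order=1)
    return ndimage.shift(rotated, (dy, dx), order=1)

# Toy check: a flat horizontal trace moved down by two rows.
profile = np.zeros((32, 32))
profile[16, :] = 1.0
moved = transform_reference(profile, dx=0.0, dy=2.0, theta_deg=0.0)
```

Applying the transform to the reference files, rather than to the data, keeps the measured pixels and their uncertainties untouched.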
\section{Validation} \label{sec:validation}
\subsection{Simulations} \label{sec:simulation}
Two types of simulations were used in the context of this work: simulations produced with ATOCA itself (ATOCA simulations), and those produced by the instrument development team (IDT simulations). The first type takes advantage of ATOCA's capability to directly model the detector image using equation \ref{eq:lin_sys}, given an input spectrum. Hence it can be used to generate simplistic simulations to validate the internal consistency of the method. To make sure that numerical precision was not an issue, the wavelength grid used for the simulations was oversampled to keep the integration error over each pixel below $\sim$1\,ppm. We only included photon and background noise, so neither bad pixels, 1/f noise, nor cosmic ray hits were taken into account. The reference files were based on the best current knowledge of the NIRISS/SOSS mode, as described in Section \ref{sec:implementation}. We used the throughput and the one-dimensional wavelength solution estimated or measured in the lab by the NIRISS instrument development team. The spatial profiles were determined from the IDT simulations.
The IDT simulations (see Albert et al., in prep.) are made by distributing an incident flux directly on an oversampled image using a trace as wide as a single oversampled pixel. This signal is then convolved in two dimensions with a grid of monochromatic kernels from \texttt{WebbPSF} and re-sampled at the native pixel resolution. These simulations were used to test the versatility and robustness of ATOCA on more realistic data. They also include the 1/f noise, the effects of flat-fielding and bad pixels. In all situations, the incident stellar fluxes are taken from high-resolution PHOENIX synthetic spectra \citep{husser.2013} and the transit models from SCARLET \citep{benneke.2015}.
\subsection{Results} \label{sec:results}
\begin{figure*}
\centering
\begin{subfigure}
\centering
\includegraphics[scale=0.6,trim={0.5cm 2cm 5.7cm 2cm},clip]{Figures/box_extraction_comparison_order1_teff2300_snr10000.pdf}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[scale=0.6,trim={0.5cm 2cm 5.7cm 2cm},clip]{Figures/box_extraction_comparison_order2_teff2300_snr10000.pdf}
\end{subfigure}
\caption{Decontamination on a single image. The extracted spectra are shown in the top panel. The four other panels show the extraction residuals before and after application of ATOCA, at two different scales. The dashed curves correspond to the expected 1-$\sigma$ uncertainties (in absolute value for panels 2 and 3).
For the last panel, the 1-$\sigma$ thresholds are marked by a dotted line. The simulation is the equivalent of a deep stack with a signal-to-noise ratio of 10,000 for a 2300\,K star.}%
\label{fig:consistency2300}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}
\centering
\includegraphics[scale=0.70]{Figures/Sensitivity_plot_ord_1.pdf}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[scale=0.70]{Figures/Sensitivity_plot_ord_2.pdf}
\end{subfigure}
\caption{Stability of the decontaminated extraction. The top panel displays the expected signal-to-noise ratio (opaque blue) and the corresponding precision (opaque purple) of the time-series observation of BD+601753. The standard deviation along the 876 box-extracted spectra is used to determine the measured signal-to-noise and precision (light blue and light purple). The sensitivity shown in the bottom panel is given by the ratio of the measured and expected scatter. An 81-pixel median filter is shown as the red curve.}
\label{fig:stability9400}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}
\centering
\includegraphics[scale=0.8]{Figures/WASP-107_nbins256_star4500_decont_transit_spec_ord1.pdf}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[scale=0.8]{Figures/WASP-107_nbins256_star4500_decont_transit_spec_ord2.pdf}
\end{subfigure}
\caption{Decontaminated transmission spectrum for a target similar to WASP-107\,b. The three panels present the transmission spectrum as well as the residuals, in ppm and scaled to the uncertainties $\sigma$. In each panel, the results are shown at native sampling in blue and with 8-pixel bins in purple. The dashed black lines indicate the expected value or the $1\sigma$ uncertainties. The estimate of the contamination from equation \ref{eq:transit_contamination} is plotted in red. In the first panel, the input transmission spectrum is also shown in orange. The vertical gray dashed line delimits the wavelength range from the second order that is complementary to (or shared with) the first order.}
\label{fig:hot_jup_tr}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}
\centering
\includegraphics[scale=0.8]{Figures/WASP-107_nbins256_star4500_contaminated_transit_spec_ord1.pdf}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[scale=0.8]{Figures/WASP-107_nbins256_star4500_contaminated_transit_spec_ord2.pdf}
\end{subfigure}
\caption{Transmission spectrum for a target similar to WASP-107\,b, without decontamination. Same description as in Figure \ref{fig:hot_jup_tr}. }
\label{fig:hot_jup_tr_cont}
\end{figure*}
\begin{figure*}[tbph]
\centering
\includegraphics[trim={0cm 0cm 2cm 0cm} , clip, width=\linewidth]{Figures/WASP-52_residual_sigma.pdf}
\caption{Comparison between the ATOCA modeling and the detector image of a single exposure. The top panel presents a close-up of the region of the detector where the overlap occurs, corresponding to the lower left corner of Figure \ref{fig:sossmode}. The residual between the model extracted by ATOCA and the simulated detector image is shown in the bottom panel. The color scale is in units of $\sigma$, the pixel uncertainty. The pixels that were not used for the fit (e.g., bad pixels or background) are in white. }
\label{fig:residual}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}
\centering
\includegraphics[scale=0.68]{Figures/Sensitivity_WASP-52_order_1.pdf}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[scale=0.68]{Figures/Sensitivity_WASP-52_order_2.pdf}
\end{subfigure}
\caption{Stability of the decontaminated extraction. The top panel displays the expected signal-to-noise ratio (blue) and the corresponding precision (purple) of the time-series observation of WASP-52\,b. The sensitivity shown in the bottom panel is given by the ratio of the measured scatter before and after decontamination.}
\label{fig:stabilityWASP52}
\end{figure*}
For the first series of validations, the simulations were made with ATOCA itself to assess the performance of the decontamination and the internal consistency of the method. For each decontamination, the estimated relative tolerance of the wavelength grid was required to be below $10^{-3}$, a level of precision that implies a reasonable grid length, and hence moderate computational time. We also do not expect the precision of a single pixel to exceed this level, which corresponds to a SNR of 1000.
A first test was done on a single exposure by comparing each extraction to an equivalent simulation free of contamination. Different maximum pixel signal-to-noise ratios were tested, ranging from 200 to 10,000. This verifies our ability to decontaminate observations with the most extreme levels of precision planned for the NIRISS/SOSS mode. An example of the results is shown in Figure \ref{fig:consistency2300} for a star with an effective temperature of 2300\,K and for a SNR of 10,000. The decontamination proved to be very effective at removing the contamination from the second order. In the case presented in Figure \ref{fig:consistency2300}, ATOCA was able to go from a contamination of $\sim10000$\,ppm, equivalent to a hundred times the expected uncertainties, to virtually no contamination. The residuals fall within the expected uncertainties ($<$100 ppm) and seem free of any correlated noise. The same conclusions hold for all stellar temperatures. It is also interesting to note the clear cut around 0.85\,\micron\ in the contamination levels (before decontamination). This is an artefact of the simulations since the monochromatic kernels used for the two-dimensional convolution only cover 128 native pixels. Thus, there is a threshold at 69 rows around the center of the trace where the wings of the spatial profile are not modeled. In the context of real observations, the contamination levels should extend below 0.85\,\micron, while continuing to decrease.
The second and third tests were inspired by the commissioning programs \cite{1091jwst} and \cite{1541jwst}. The former is a flat time-series observation comprising 876 integrations on the standard A1V star BD+601753. This quantifies the stability of frame-by-frame decontamination for representative SNRs (177 at the maximum pixel). The results are presented in Figure \ref{fig:stability9400}. The flat sensitivity spectrum shows that the frame-by-frame decontamination is stable. The combined spectrum reaches a precision of less than 100 ppm at best and between 150 and 400 ppm in the regions subject to contamination. The measured standard deviation along the time series is in good agreement with the expected uncertainties in the entire spectral range, as evidenced by the sensitivity curve. Note that these results only account for photon noise; in realistic observations, other sources of noise like 1/f, jitter or other detector effects might become dominant at certain wavelengths.
The latter program consists of another time-series observation of an expected featureless transit, evaluating the precision of relative measurements. Our simulation was based on the primary target of this program, the massive hot-Jupiter HAT-P-14\,b, which was simplified to a step-transit (no limb darkening, instantaneous ingress and egress) with a flat transmission spectrum. We also neglected the effect of non-linearity in the ramps and forced a signal-to-noise ratio of 400 per pixel at maximum. This is above the capability of a single SOSS-mode integration, but it allows us to push the decontamination to higher levels. The same framework was also applied to a transit spectrum of an exoplanet similar to WASP-107\,b in a fourth verification (same transmission spectrum and same star, but different magnitude). In this case, however, the signal-to-noise was artificially increased to 1000 for the same reason as mentioned above. Both validations led to the same conclusions, so the results of the flat transit are not shown here. The WASP-107\,b-like transmission spectrum is presented in Figure \ref{fig:hot_jup_tr}. It confirms that the procedure can reach the expected precision and accuracy on relative measurements. Even with a required precision on ATOCA's wavelength grid of $10^{-3}$ per pixel for the integration-by-integration decontamination, the combination of all extractions reaches an accuracy of less than 100 ppm in the order 1 contaminated region with, again, no evidence of systematic bias. The performance is even more obvious when it comes to the second order, where the contamination reaches levels of 1000 ppm. For comparison, Figure \ref{fig:hot_jup_tr_cont} presents the results of the WASP-107\,b time series without any decontamination. We can see that the polluting signal follows the expected curve (red) taken from equation \ref{eq:transit_contamination}.
It also confirms that the levels of contamination for the first order are small compared to the actual signal.
Based on these four tests, some remarks can be made. First, it is interesting to note that they involve qualitatively different high-resolution PHOENIX synthetic spectra \citep{husser.2013}, at $T_{\rm eff} = 9400$\,K (BD+601753), $T_{\rm eff} = 6700$\,K (HAT-P-14), $T_{\rm eff} = 4500$\,K (WASP-107) and $T_{\rm eff} = 2300$\,K; the hottest spectrum shows well-defined absorption features on a smooth continuum, while the coldest one contains features with noise-like behaviour. This assesses the robustness of ATOCA with respect to the nature of the underlying spectrum $f_k$.
Second, the tests were initially run assuming that the spatial profiles, the wavelength solutions, and the throughputs were exactly known, resulting in errors consistent with the expected noise limits for each application of ATOCA. To confirm the robustness of the decontamination, the same tests were then repeated with the reference files slightly offset from their nominal values by applying a rotation and a spatial shift (see end of Section \ref{sec:implementation}). We found no evidence that this affected the extraction for reasonable values, i.e., within the expected precision of the reference files. More precisely, we tested shifts up to 0.5\,pixel along the dispersion direction, shifts up to 0.1\,pixel along the spatial axis, and rotations up to $0.01^{\circ}$.
Third, based on the apparent agreement between the expected and measured transit contamination seen in Figure \ref{fig:hot_jup_tr_cont}, it would seem that the correction for contamination could be made after a standard extraction, directly on the one-dimensional spectra using equation \ref{eq:transit_contamination}. However, these examples used an idealized transit, with a perfectly stable stellar spectrum and without considering any limb darkening. Further analysis should be done before making any conclusion on this possible alternative to correct for order contamination.
ATOCA was also applied to the realistic time-series simulations from the IDT to test the robustness of the algorithm. We present here an example based on WASP-52. In this case, the tests were designed to assess the quality of the integration-by-integration decontamination as well as its stability. Only the stellar spectrum was included, since adding a transit would only add complexity to the interpretation of the results without bringing additional information. The time series comprises a total of 103 integrations with a signal-to-noise ratio per pixel reaching up to $\sim 250$.
Contrary to the more simplistic simulations, we did not have access to each individual order, which is more representative of the context of real observations. Therefore, we used the residuals of the full detector model (combined orders) for individual integrations as an indicator of the quality of the decontamination. The logic is that, if the model correctly represents the overlapping region as well as the pixels covering the wavelength domain shared by both orders, then we can be confident that the overall model is accurate. Figure \ref{fig:residual} presents the residuals for a single integration, given by the equation
\begin{equation}
\mathrm{residual} = \frac{\mathrm{observation} - \mathrm{model}}{\mathrm{uncertainty}} \, .
\end{equation}
In this situation, since the input spectrum is perfectly stable, the uncertainties could be determined empirically using the standard deviation of each pixel throughout the full time series. The result is consistent with Gaussian noise, with no evidence of correlated features.
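The empirical uncertainty estimate and the residual map above can be sketched as follows. The dimensions and values are toy stand-ins, and `truth` plays the role of the ATOCA detector model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_int, ny, nx = 103, 8, 16             # toy time series of detector images
truth = rng.uniform(50.0, 100.0, (ny, nx))
cube = truth + rng.normal(0.0, 5.0, (n_int, ny, nx))

# Empirical per-pixel uncertainty: scatter over the (stable) time series.
sigma = np.std(cube, axis=0)

# Residual map in units of sigma, as in the equation above; `truth` stands
# in for the fitted detector model of a single integration.
residual = (cube[0] - truth) / sigma
```

For a well-behaved fit, the residual map should be consistent with a unit-variance Gaussian, with no spatial structure in the overlap region.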
Concerning the stability of the decontamination, we performed a comparison similar to that of Figure \ref{fig:stability9400}. However, this time, the spectrum extracted with the standard technique could not be used directly, to avoid a possible contribution from the bad pixel modeling. Instead, we used the standard deviation of the pixels along all integrations, as done above to estimate the uncertainties, and then summed them in quadrature with the weights specific to the extraction method. In this manner, the bad pixels are not included in the summation. This was done for the time series before and after decontamination. The results are presented in Figure \ref{fig:stabilityWASP52}. The sensitivity increases slightly at longer wavelengths where the contamination is at its peak, but it remains below 5\% for practically all of the domain of both spectral orders. This means that the precision, which is around 1000\,ppm in the current example as shown in the top panel, would differ by only 50\,ppm.
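The quadrature summation of the per-pixel scatter can be sketched as follows, here with the unit weights of a box extraction and a zero weight for a bad pixel. All dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
# (integrations, rows, columns): a toy, perfectly stable time series.
cube = rng.normal(100.0, 3.0, size=(103, 12, 5))
good = np.ones((12, 5), dtype=bool)
good[4, 2] = False                      # one bad pixel, excluded from the sum

# Per-pixel scatter along the time series, then per-column quadrature sum
# with the extraction weights (box weights: 1 inside the aperture, 0 for
# bad pixels).
pix_std = np.std(cube, axis=0)
weights = good.astype(float)
col_noise = np.sqrt(np.sum((weights * pix_std) ** 2, axis=0))
```

Computing this quantity for the time series before and after decontamination, and taking their ratio, gives the sensitivity curve of the bottom panel of the figure.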
Based on these two tests, we can conclude that ATOCA can model the detector image within the uncertainties and at a low cost in terms of noise. This shows again that the algorithm is robust to inexact reference files.
On a different note, it is important to mention that the quality of the modeling is strongly influenced by the intertwining of both orders in their shared wavelength domain. This effect is accentuated in regions where the signal from both orders is strong, in which case a poor representation of the relation between the two can lead to over- and under-estimation. It can also be compensated with lower regularization factors, i.e., over-fitting. This was seen in many situations where the reference files were biased on purpose, as well as in the realistic simulations. This effect can be overcome by a proper estimation of the reference files. Thankfully, the regions that require higher precision are practically free of contamination, so the calibrations of the spatial profiles, the wavelength solutions and the throughputs are relatively straightforward. Conversely, in the overlapping region, the throughput of the second order drops considerably, leaving the solution of the underlying flux dominated by the corresponding wavelengths of the first order. This means that the model of order 2 in the overlapping region is more permissive.
\section{Future improvements}
\paragraph{Hyper-parameters} The choice of regularization parameters could benefit from further improvements. Indeed, the current criterion could lead to unstable solutions, which would be very effective at modeling each valid pixel of the detector, but mediocre when it comes to accurately estimating pixels that are not included in the fit. The latter objective could be achieved using other criteria. For example, non-exhaustive cross-validation techniques (e.g., k-fold, Monte-Carlo) would be a judicious choice, since their primary objective is to simulate a set of data points that are voluntarily excluded from the fit.
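A Monte-Carlo cross-validation of the Tikhonov factor could look like the following sketch. This is not the criterion currently used by ATOCA; the dense matrices, identity regularization operator, and candidate factors are all illustrative.

```python
import numpy as np

def mc_cross_validate(A, d, factors, n_splits=20, holdout=0.2, seed=0):
    """Pick the Tikhonov factor that best predicts held-out pixels.

    Hypothetical sketch: repeatedly fit on a random subset of pixels and
    score each candidate factor on the excluded ones, mimicking the
    estimation of pixels left out of the fit (e.g., bad pixels).
    """
    rng = np.random.default_rng(seed)
    n = len(d)
    scores = np.zeros(len(factors))
    L = np.eye(A.shape[1])  # identity operator for simplicity
    for _ in range(n_splits):
        test = rng.random(n) < holdout
        At, dt = A[~test], d[~test]
        for i, lam in enumerate(factors):
            f = np.linalg.solve(At.T @ At + lam * (L.T @ L), At.T @ dt)
            scores[i] += np.sum((A[test] @ f - d[test]) ** 2)
    return factors[np.argmin(scores)]

rng = np.random.default_rng(2)
A = rng.normal(size=(120, 10))
f_true = rng.normal(size=10)
d = A @ f_true + rng.normal(0.0, 0.1, 120)
best = mc_cross_validate(A, d, factors=np.array([1e-6, 1e-2, 1e2]))
```

Because the score is computed on pixels excluded from each fit, strongly over-regularized factors are penalized for their bias while over-fitted ones are penalized for their poor prediction.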
\paragraph{Choice of Tikhonov matrix} The injection-recovery tests on our simulations pointed towards comparable performances for the first and second derivative operators. The former was preferred due to its slightly lower complexity. However, the latter could turn out to be a more appropriate choice. Indeed, the second derivative is used in spline interpolation on noisy data to smooth out the solution, which is not far from the problem we are facing here. It would also have a more physical justification, given that the finite resolution of observations enforces a smooth solution with only small variations of the second derivative.
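The two candidate operators can be written as sparse finite-difference matrices. This is a sketch of the standard constructions; scaling by the grid spacing is omitted for clarity.

```python
import numpy as np
from scipy import sparse

def derivative_operator(n, order):
    """First- or second-difference Tikhonov matrix for an n-point grid."""
    if order == 1:
        # Rows of (-1, 1): penalizes changes between neighboring bins.
        return sparse.diags([-np.ones(n - 1), np.ones(n - 1)],
                            offsets=[0, 1], shape=(n - 1, n))
    # Rows of (1, -2, 1): penalizes curvature of the solution.
    return sparse.diags([np.ones(n - 2), -2.0 * np.ones(n - 2), np.ones(n - 2)],
                        offsets=[0, 1, 2], shape=(n - 2, n))

# Both operators annihilate a constant; only the second also kills a
# linear ramp, which is why it enforces smoothness rather than flatness.
x = np.arange(6, dtype=float)
D1 = derivative_operator(6, 1)
D2 = derivative_operator(6, 2)
```

The null space is the practical difference between the two: the first-derivative penalty pulls the solution towards a constant, while the second-derivative penalty only pulls it towards a straight line, leaving smooth spectral slopes unpenalized.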
\paragraph{Background fitting} One forthcoming challenge with the NIRISS/SOSS mode is the background subtraction. While the spatial spread of the trace profile is very effective at improving the precision of the measurements, it also greatly reduces the number of pixels available to measure and remove the background contribution. This problem is even more concerning in the SUBSTRIP96 observing mode, where the two traces cover the entire range of rows for some columns. ATOCA could circumvent this problem by directly including the background in the fit, adding the parameters needed to model the background at each column to the solution vector.
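Appending per-column background parameters to the solution vector could work as in the following toy illustration, where each column is fit for a flux amplitude and a flat background level. The names and dimensions are hypothetical, and the real ATOCA fit would solve for all columns and both orders jointly.

```python
import numpy as np

# Toy detector: per-column flux times a known spatial profile, plus a flat
# per-column background level.
rng = np.random.default_rng(3)
n_rows, n_cols = 10, 4
profile = rng.uniform(0.1, 1.0, (n_rows, n_cols))
flux = np.array([5.0, 6.0, 7.0, 8.0])
bkg = np.array([0.5, 0.4, 0.6, 0.3])
image = profile * flux + bkg

# Per-column design matrix [profile_column, ones]: the second column is the
# extra background parameter appended to the solution vector.
recovered = []
for j in range(n_cols):
    A = np.column_stack([profile[:, j], np.ones(n_rows)])
    sol, *_ = np.linalg.lstsq(A, image[:, j], rcond=None)
    recovered.append(sol)
recovered = np.array(recovered)   # rows of (flux_j, bkg_j)
```

Because the background column (all ones) is not degenerate with a peaked spatial profile, both quantities can be recovered from the same pixels, without reserving background-only regions.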
\section{Conclusion}
In this work, we presented an alternative spectral extraction method to solve the overlap problem pertaining to the NIRISS/SOSS mode. We first characterized the extent of the contamination for the first and second diffraction orders. It was found that for absolute measurements, the levels were kept below 0.1\% for most of the wavelength domain of the first order, except for wavelengths greater than 2.6~\micron\ where they reach 1\%. For the second order, the effect is much more important, but mainly concerns the wavelength range already covered by order 1. For relative measurements, for which the SOSS mode is specifically designed, the same levels of contamination are expected, but only on the chromatic differences of the signal (e.g., the difference in transit depth). This means that the systematic error due to the overlap should not be the dominant source of noise in the first diffraction order, although one should always assess its importance.
Nevertheless, it is still important to provide a way to disentangle each order's contribution, to at least quantify the contamination, but also to allow a proper extraction for any absolute measurements or scenarios where the relative contamination becomes non-negligible. Consequently, we developed ATOCA, an algorithm that enables the modeling and extraction of overlapping orders (or sources), and decontaminates the detector image. We showed that, given reasonable estimates of the spatial profiles, the wavelength solutions, and the spectral throughputs, ATOCA was able to decontaminate the data up to the required precision. We also characterized the robustness of the decontamination by introducing errors in the reference files and by applying it to realistic simulations. A first version of the algorithm is available in the JWST official pipeline. A development version is also available on GitHub\footnote{\url{https://github.com/AntoineDarveau/jwst}}.
ATOCA is a promising technique to disentangle the contributions of overlapping spectral traces. Its framework might be transferable to other contexts, like field decontamination or multi-object slitless spectroscopy. ATOCA might also provide a powerful alternative to manage distorted wavelength solutions. All this potential has yet to be vetted with real observations, which should come soon with the upcoming commissioning of NIRISS/SOSS.
\paragraph{Acknowledgments}
The name of the algorithm, ATOCA, is a word used in North American French to designate the cranberry fruit. It was borrowed from Native American languages, possibly from the Algonquin spoken by nations living in the area of present-day Wisconsin \citep{canada}, or from the Wandat word \textit{atokha} \citep{dhfq}. In this regard, we want to acknowledge the pivotal contribution of the First Nations to the North American French culture. We want to thank Anne Boucher for the design of the ATOCA logo. This project was undertaken with the financial support of the Canadian Space Agency (CSA-ASC) and the \textit{Fonds de Recherche du Qu\'ebec en Nature et Technologies} (FRQNT). We would also like to thank the Space Telescope Science Institute (STScI) for their trust and help during the implementation process. A.D.B., C.P. and S.P. want to thank the Technologies for Exo-Planetary Science (TEPS) CREATE program, without which this research would not be possible. The authors also acknowledge the financial and social support of the Institute for Research on Exoplanets (iREx) and the University of Montreal.
MR would like to acknowledge funding from FRQNT, as well as the National Sciences and Engineering Research Council of Canada (NSERC).
C.P. acknowledges financial support by the NSERC Vanier Scholarship.
D.J. is supported by NRC Canada and by an NSERC Discovery Grant.
Support for J.D.T. was provided by NASA through the NASA Hubble Fellowship grant \# HST-HF2-51495.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
\software{\texttt{WebbPSF} \citep{webbpsf}, JWST Data Management System \url{https://jwst-pipeline.readthedocs.io/en/latest/index.html}, \texttt{scipy} \citep{scipy.2020}, \texttt{ipython} \citep{ipython.2007}, \texttt{matplotlib} \citep{matplotlib.2007}, \texttt{numpy} \citep{numpy2020}.}
\bibliographystyle{apj}